New Upgraded ChatGPT Model: GPT‑4o. How To Use It to Learn Maths and Flirt?

Published by: Markus Ivakha

14 May 2024, 08:13PM

In Brief

GPT-4o is now free for all ChatGPT users, with enhanced speed and engagement.

This AI model integrates text, images, and, soon, audio in a single platform.

It boasts quicker response times and advanced conversational abilities.

Errors and limitations still persist, needing further refinement for reliability.

GPT-4o offers expanded API access, promising greater innovation in development.

OpenAI has just released the latest version of the technology behind its AI chatbot, ChatGPT. This new version, called GPT-4o, is already available to everyone using ChatGPT, even if they don’t subscribe.

GPT-4o is faster than previous versions and has been designed to converse in a more relaxed and engaging way, sometimes even adding a flirtatious touch to its replies. The new model can also handle images, translate between languages, and recognize emotions from facial expressions. These upgrades enhance ChatGPT's abilities across text, speech, and vision, along with its multilingual performance and speed. Plus, it remembers past interactions, which helps it keep track of ongoing conversations.

Another improvement is how smoothly it chats. You can ask it a question and get an immediate response without any noticeable delay. GPT-4o, as an advanced language model, significantly enhances the conversational experience, making it more humanlike and versatile for various applications, from drafting legal documents to creating engaging content for social media.

OpenAI debuts GPT-4o 'omni' language model now powering ChatGPT

GPT-4o (where “o” stands for “omni”) marks a significant advancement towards more natural interactions between humans and computers. It’s designed to handle a mix of text, audio, and images both as input and output, showcasing the versatility of AI systems in contexts ranging from journalism and financial markets to wider debates over societal impact and risk. Impressively, it can respond to audio queries in as little as 232 milliseconds, roughly matching human response times in conversation. OpenAI plans to initially roll out GPT-4o's new audio capabilities to a small group of trusted partners, enhancing interaction and performance across multiple modalities.

In terms of performance, GPT-4o matches the speed and efficiency of the previous GPT-4 Turbo model for English and coding tasks. It also shows considerable improvements in handling non-English texts and is both faster and 50% cheaper to use via its API.
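For developers, the practical entry point is the chat completions API. The snippet below is a minimal sketch of a text-only GPT-4o request using OpenAI's official Python client; it assumes the openai package is installed and an OPENAI_API_KEY environment variable is set, and the maths-tutoring prompt is purely illustrative.

```python
# Minimal sketch: a text-only GPT-4o request via OpenAI's Python client.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the tutoring prompt is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a patient maths tutor."},
        {"role": "user", "content": "Walk me through solving 3x + 11 = 14, step by step."},
    ],
)

print(response.choices[0].message.content)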

Additionally, GPT-4o understands both visual and auditory information better than earlier models, which pays off particularly in professional and technical tasks.

Before GPT-4o, using ChatGPT in Voice Mode involved a somewhat slow process with delays of 2.8 seconds with GPT-3.5 and even longer with GPT-4. This older method used a three-part model system where one model converted speech to text, a second processed the text, and a third converted the response back into audio. This process often lost nuances like tone, background noise, and emotional expression.
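To see why that pipeline was slow and lossy, here is a rough sketch of the old three-stage approach stitched together from separate OpenAI endpoints; the model names, file names, and the write_to_file helper are illustrative and may differ between SDK versions.

```python
# Rough sketch of the pre-GPT-4o Voice Mode pipeline: three separate models,
# each hop adding latency and discarding tone and other audio nuance.
# Assumes the `openai` package and OPENAI_API_KEY; names are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1) Speech -> text (transcription model)
with open("question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2) Text -> text reply (language model)
reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": transcript.text}],
)

# 3) Text reply -> speech (text-to-speech model)
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=reply.choices[0].message.content,
)
speech.write_to_file("answer.mp3")
```

Each stage only sees text passed from the previous one, which is exactly where tone, background noise, and emotional expression get lost.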

Now, with GPT-4o, there’s just one model handling everything (text, vision, and audio), which means it processes all forms of input and output directly. This integration allows for a much richer and more responsive interaction. Although this is OpenAI’s first attempt at such a comprehensive model, it’s already showing a lot of promise in bridging the gap between human and computer communication.

What are the glitches in ChatGPT's newest model?

During a recent demonstration of GPT-4o’s voice capabilities, it impressively guided users on how to solve a simple equation written on paper, instead of just giving the answer. It also translated between Italian and English, analyzed computer code, and interpreted emotions from a selfie of a smiling man. ChatGPT users have found diverse applications for the model, from solving complex equations to translating languages across the globe. GPT-4o, using a warm American female voice, even greeted users by asking about their day and playfully responded to a compliment with:

Stop it, you’re making me blush!

GPT-4o 'omni'

However, it wasn’t flawless. At one point, it mistakenly identified the smiling man as a wooden surface and began solving an equation that hadn’t been presented yet. These errors show that there are still some kinks to be worked out to avoid the glitches that can make chatbots unreliable or even unsafe. The potential for automating tasks like data entry raises concerns about the impact on employment, highlighting the need to consider how technologies like GPT-4o could affect the workforce.

But these demonstrations also highlight OpenAI’s vision for the future. GPT-4o seems to be stepping towards becoming a next-generation AI digital assistant, much like an enhanced Siri or Google Assistant, but with the ability to remember past interactions and handle more than just voice or text commands. Its capability to write code brings forth ethical considerations, such as the potential misuse in creating malware or impersonating individuals by mimicking their writing style.

Model safety and limitations

The introduction of GPT-4o by OpenAI represents a significant advancement in the realm of artificial intelligence, blending enhanced capabilities with rigorous safety measures. As we delve into the intricate fabric of this AI model, it’s important to understand both its innovations and the conscientious approach taken to mitigate potential risks across its functionalities.

Firstly, GPT-4o is designed with built-in safety protocols, a cornerstone in the evolution of AI technologies. The model employs a method of filtering training data, ensuring that the foundation upon which it learns is as clean and unbiased as possible. Additionally, post-training refinements are applied to fine-tune behavior and responses, establishing a safer interactive environment. This approach not only demonstrates a proactive stance on safety but also highlights the ongoing commitment to ethical AI development. Reward models play a crucial role in training GPT-4o to give safe and accurate responses: using reinforcement learning from human feedback (RLHF), human rankings of candidate answers train a reward model that steers GPT-4o towards the best responses.

The evaluation of GPT-4o using the Preparedness Framework is particularly noteworthy. This framework assesses various risk categories including cybersecurity, chemical, biological, radiological, and nuclear (CBRN) threats, persuasive capabilities, and model autonomy. Remarkably, GPT-4o has been rated to pose no more than a Medium risk in any of these categories. This assessment was rigorously conducted through both automated and human evaluations, employing tests on pre- and post-safety-mitigation versions to scrutinize the model’s capabilities comprehensively. GPT-4o offers tailored advice, demonstrating its ability to provide personalized guidance and support to users in various contexts.

In an effort to further fortify the model against unforeseen vulnerabilities, GPT-4o was subjected to extensive external red teaming. This involved collaboration with over 70 experts from diverse fields such as social psychology, bias and fairness, and misinformation. Such a robust red-teaming process is crucial as it helps uncover potential risks introduced or exacerbated by new modalities, paving the way for targeted safety interventions.

The decision to initially release only text and image inputs, with text outputs, underscores a cautious yet progressive approach towards launching new modalities. The gradual rollout of audio functionalities—beginning with a selection of preset voices—reflects a thoughtful integration of safety and usability. This phased deployment allows for continual improvement of the technical infrastructure and safety mechanisms before fully launching all modalities.

GPT-4o’s comprehensive safety and risk management strategies set a new standard in the field of AI. By continuously adapting and responding to new risks, OpenAI ensures that GPT-4o not only advances technological frontiers but does so with the utmost responsibility. This conscientious approach underlines the critical importance of safety in the rapidly evolving landscape of artificial intelligence. As we look forward to the full realization of GPT-4o’s capabilities, the ongoing commitment to safety and innovation remains paramount.

Model availability: voice mode and other accessibility features

Starting today, GPT-4o’s rollout begins with its integration into ChatGPT, showcasing its advanced text and image processing capabilities. What’s particularly exciting is the democratization of this technology: GPT-4o is available to users on the free tier as well as to Plus subscribers, who get message limits up to five times higher than free users. ChatGPT Enterprise remains part of OpenAI's premium offerings and integrates OpenAI's latest image generation model, DALL-E 3, letting ChatGPT write prompts for DALL-E based on its conversation with the user. This strategic move not only broadens access but also encourages widespread adoption and integration of AI into daily digital interactions.

For those more technically inclined, the news that GPT-4o will also be accessible via API is particularly thrilling. Developers are now able to harness a text and vision model that is not only twice as fast as the previous iteration, GPT-4 Turbo, but also half the price, and capable of handling higher traffic—characteristics that promise to spur innovation and creativity in the development community.
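Image input works through the same chat endpoint by mixing text and image parts in a single message. The sketch below is a hedged example of a GPT-4o vision request with the official Python client; the image URL is a placeholder, and the prompt mirrors the equation-reading demo described above.

```python
# Minimal sketch: a GPT-4o vision request mixing text and an image in one turn.
# Assumes the `openai` package and OPENAI_API_KEY; the URL is a placeholder
# for any publicly accessible image.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What equation is written on this page, and how would you solve it?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/handwritten-equation.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

Because text and vision share one model, no separate captioning or OCR step is needed before the question is asked.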

Looking forward, the phased introduction of new modalities such as audio and video capabilities to a select group of trusted partners hints at the potential for even more dynamic and multifaceted applications. This cautious yet forward-thinking approach to deployment reflects a keen awareness of the complexities involved in integrating such sophisticated technologies into diverse environments.

The iterative rollout, starting with extended red team access, underscores a commitment to meticulous testing and refinement. This method ensures that each feature not only meets high standards of performance but is also robust against potential vulnerabilities before it reaches a wider audience.

Download ChatGPT today to experience the power of instant answers and tailored advice. With GPT-4o now on the free tier and its API at half the price of GPT-4 Turbo, the technology is not only more accessible but also designed to provide immediate, personalized support.




In essence, GPT-4o is not just a testament to the technical advancements in AI but a beacon for the future of practical, user-centric applications of deep learning. Its introduction marks a significant milestone in making powerful AI tools more accessible and efficient, thereby paving the way for innovative uses that we are only beginning to imagine.
