OpenAI Reveals Multimodal GPT-4o To Take On Google's Gemini AI

OpenAI has announced a new model called GPT-4o to power ChatGPT. Unlike the advancements introduced by previous models like GPT-4, this one brings a massive boost to the chatbot's multimodal capabilities, allowing it to work with text, visuals, audio, or a combination of them all. Think of it as an AI tool with eyes and ears that can make sense of the world around you, much like Google Lens, but supercharged with a generative AI chatbot on your phone.

The company claims GPT-4o can respond to audio queries in roughly 0.2 seconds. For example, it can facilitate two-way bilingual conversations by translating one language into another on the fly, without needing a prompt at the end of each person's turn. Notably, OpenAI says it has cut the API cost in half for developers and has also dramatically reduced the number of tokens needed to process each request, which makes responses faster.
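
For developers, calling the new model should look much like calling earlier GPT models. The sketch below assumes the current OpenAI Python SDK and its chat-completions interface; the "gpt-4o" model name matches the announcement, while the prompt text and image URL are placeholders for illustration only.

```python
from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

# A single request mixing text and an image, the two input types
# available at launch. The image URL here is just a placeholder.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this photo?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

# Print the model's text reply.
print(response.choices[0].message.content)
```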

GPT-4o positions ChatGPT as a convenient all-in-one alternative to tools like Google Gemini, which is also multimodal. Notably, ChatGPT with GPT-4o has a critical advantage here: Gemini's on-device Nano model requires a certain hardware baseline, while ChatGPT has no such requirement because it follows an entirely cloud-based workflow and can run on any modern phone. Moreover, from what we've seen of ChatGPT's new vision capabilities and how intelligently it makes sense of the world as seen through the camera, dedicated AI hardware like the Rabbit R1 looks obsolete from both a value and a capability perspective.

What can ChatGPT vision accomplish?

In the demo videos released by OpenAI, GPT-4o can be seen identifying real-world objects and interpreting them in another language, teaching mathematics in split-screen mode based on a problem shown in another app, identifying people and their surroundings in the camera frame, and even cracking terrible dad jokes. Unfortunately, all these fancy multimodal capabilities will take some time to land on every enthusiast's phone. In the first phase, which begins with a public rollout starting today, GPT-4o will arrive with only its upgraded text and image capabilities. "With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network," OpenAI says in an official blog post.

In the weeks to come, the company will be extensively testing the audio and vision capabilities, but even once they are released, there will be certain limitations in the early days. For example, audio outputs will be limited to a selection of preset voices. The most interesting part of today's announcement, however, is that GPT-4o will be available to all users without any subscription caveat. Users with a ChatGPT Plus subscription will get a 5x higher message limit for conversations powered by the new model and will also get priority access to the audio and vision capabilities in the coming weeks.
