What Is GPT-4 Turbo, And Is It Better Than GPT-4?
ChatGPT, the chatbot implementation of OpenAI's GPT-3.5 and GPT-4 language models, has taken the world by storm in less than a year. Its immediate ubiquity has kept it in the headlines consistently: it has drawn copyright infringement lawsuits over the data the model was trained on and defamation lawsuits for fabricating answers to requests for factual information, and it has acted disturbingly human at times, among other things. Confusion over what it is and how it works has led alternative chatbots built on the GPT models, like Microsoft's Bing Chat, to add citations to make clear that they are serving verifiable information where ChatGPT is not.
All of these very public issues have informed the development of ChatGPT, and in November, OpenAI announced a major update, GPT-4 Turbo, at its inaugural developer conference. Not only does the update tweak the underlying model and its chatbot interface, but it also includes legal protections for enterprise users. Let's take a look at what's new.
What is ChatGPT-4 Turbo?
GPT-4 Turbo is the newest version of the language model at the heart of ChatGPT. Though it's been upgraded technologically in various ways, the most important difference may be that OpenAI will cover the costs of any copyright infringement litigation brought against enterprise customers.
"OpenAI is committed to protecting our customers with built-in copyright safeguards in our systems," reads the official announcement. "Today, we're going one step further and introducing Copyright Shield—we will now step in and defend our customers, and pay the costs incurred, if you face legal claims around copyright infringement. This applies to generally available features of ChatGPT Enterprise and our developer platform."
In its announcement, OpenAI described the GPT-4 Turbo model as "more capable and ha[ving] knowledge of world events up to April 2023." It also touts a 128k context window, meaning that the newer model "can fit the equivalent of more than 300 pages of text in a single prompt." The performance of the new model, meanwhile, has been optimized so that OpenAI can charge less per token, with the announcement describing "a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4."
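OpenAI doesn't show the math behind the 300-page figure, but a rough back-of-envelope check makes it plausible. The conversion factors below are common rules of thumb, not numbers from OpenAI:

```python
# Rough sanity check on the "300 pages" claim (heuristics, not OpenAI's figures):
# ~0.75 English words per token and ~300 words per printed page are common rules of thumb.
context_tokens = 128_000
approx_words = context_tokens * 0.75      # ~96,000 words
approx_pages = approx_words / 300         # ~320 pages
print(f"~{approx_words:,.0f} words, or roughly {approx_pages:.0f} pages")
```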
How does ChatGPT-4 Turbo work?
As of this writing, ChatGPT-4 Turbo is only available to paying developers, but it will make its way to the general public soon.
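For the developers who do have access, GPT-4 Turbo is reached through the same Chat Completions API as earlier models. Here's a minimal sketch using the official OpenAI Python library; "gpt-4-1106-preview" is the preview identifier OpenAI listed at launch and may change once the model is generally available:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Ask the GPT-4 Turbo preview model a question via the Chat Completions API.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview identifier at launch
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In two sentences, what changed in GPT-4 Turbo?"},
    ],
)

print(response.choices[0].message.content)
```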
In its announcement, OpenAI touted how much more context GPT-4 Turbo can draw on, both from chatbot queries and from the data it was trained on. Not only is the training data, as mentioned earlier, current through April 2023, but the model now has a 128k context window, which means a query can "fit the equivalent of more than 300 pages of text in a single prompt."
This better contextual understanding can also be seen in the removal of the dropdown menu that dictated which other AI tools you wanted to use with the chatbot, like DALL-E 3 for image generation. Starting with ChatGPT-4 Turbo, the chatbot will understand from your query that you're seeking an image, for example, and call on DALL-E 3 as a result. The same goes for using "Browse with Bing" if you make it clear that you want ChatGPT to connect to the internet. The new model also does a better job of staying on task when given precise instructions, such as a request to stick to a specific output format. (The example given in OpenAI's announcement is "always respond in XML.")
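As a sketch of what that kind of instruction following looks like from a developer's seat, the formatting rule can be passed as a system message. The prompt below is illustrative, and the model identifier is the launch-time preview name rather than anything guaranteed to persist:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A strict formatting instruction in the system message, echoing the
# "always respond in XML" example from OpenAI's announcement.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # assumed preview identifier; adjust to the current model name
    messages=[
        {"role": "system", "content": "Always respond in XML."},
        {"role": "user", "content": "List three things a 128k context window is useful for."},
    ],
)

print(response.choices[0].message.content)  # GPT-4 Turbo should keep the reply in XML
```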
ChatGPT-4 Turbo benefits
ChatGPT-4 Turbo is more efficient and, thus, less expensive for developers to run on a per-token basis than ChatGPT-4 was. In numerical terms, the rate of one cent per 1,000 input tokens is one-third of the previous cost, while the rate of three cents per 1,000 output tokens is half of what the cost was in ChatGPT-4.
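To put those rates in concrete terms, the sketch below compares the cost of the same hypothetical request at the old and new prices. The request sizes are made up for illustration, and OpenAI's current pricing page is the authority on the actual rates:

```python
# Per-1,000-token prices (USD) as described above: GPT-4 Turbo vs. GPT-4.
GPT4_INPUT, GPT4_OUTPUT = 0.03, 0.06
TURBO_INPUT, TURBO_OUTPUT = 0.01, 0.03

def request_cost(input_tokens, output_tokens, input_rate, output_rate):
    """Dollar cost of one API request at the given per-1,000-token rates."""
    return (input_tokens / 1000) * input_rate + (output_tokens / 1000) * output_rate

# Hypothetical request: a 10,000-token prompt that yields a 1,000-token answer.
print("GPT-4:       $", round(request_cost(10_000, 1_000, GPT4_INPUT, GPT4_OUTPUT), 2))   # $0.36
print("GPT-4 Turbo: $", round(request_cost(10_000, 1_000, TURBO_INPUT, TURBO_OUTPUT), 2)) # $0.13
```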
ChatGPT-4 Turbo also, as noted earlier, has gotten a significant boost when it comes to understanding context. Being able to accept the equivalent of over 300 pages of text in a single prompt makes it stronger than ever at summarizing documents. It's also a better "listener," better at following precise instructions, like sticking to a requested format, and at knowing when to call on other AI tools without your having to pre-select them from a dropdown menu. And even if you don't connect it to Bing so it can access the contemporary internet, its training data has at least been updated through April 2023.
ChatGPT-4 Turbo negatives
ChatGPT-4 Turbo still has many of the issues that have been inherent to ChatGPT since its launch in November 2022. The biggest one is that it can't be treated as an encyclopedic resource: it's a language model, and its goal is not to find accurate information. This is probably best exemplified by the case of Steven Schwartz, the lawyer who enlisted ChatGPT's help in legal research thinking it was an AI-assisted search engine. The way Schwartz explained what happened in a June 8, 2023 filing probably best captures the confusion he and others have faced when using ChatGPT.
"Mr. Schwartz, a personal injury and workers compensation lawyer who does not often practice in federal court, found himself researching a bankruptcy issue under the Montreal Convention 1999," he wrote. "He also found that his firm's Fastcase subscription no longer worked for federal searches. With no Westlaw or LexisNexis subscription, he turned to ChatGPT, which he understood to be a highly-touted research tool that utilizes artificial intelligence (AI). He did not understand it was not a search engine, but a generative language processing tool primarily designed to generate human-like text response based on the user's text input and the patterns it recognized in data and information used during its development or 'training,' with little regard for whether those responses are factual."