5 Big Flaws With ChatGPT That Still Haven't Been Solved

AI continues to be a hot topic in every sphere, and ChatGPT is arguably the poster child of this tech revolution. Early on, much of the discourse about the conversational chatbot focused on its immense potential and commendable capabilities: it can whip up a poem in seconds, write video scripts, troubleshoot code, and once, on request, it even walked us through the process of servicing a Rolex, Edgar Allan Poe style.

Now, much of the novelty has worn off, and time has begun to reveal ChatGPT's shortcomings. Our previous tests exposed the tool's limits with nuance and creativity, as well as loopholes that let users bypass important ethical boundaries.

Unfortunately, many of these problems persist, and they raise serious concerns about the increasingly AI-powered future we're moving toward. We'll examine some of these flaws and their implications, in the hope that knowing about them will help you use AI responsibly and safely going forward.

Limited capacity

If you're a frequent visitor, you've probably encountered the infamous "ChatGPT is at capacity right now" notice on the site a few times. ChatGPT gained massive popularity in a short period, and since it launched as a web-only tool (there's an app now), OpenAI had to introduce restrictions to manage the explosive traffic that followed.

According to web traffic analytics provider Similarweb, the AI tool drew about 266 million visits by December 2022, only a month after its release. For context, that's roughly the volume of traffic that long-established sites like Yahoo News receive. As of April 2023, the site's traffic had shot up to about 1.76 billion visits.

The servers powering ChatGPT must be very expensive to run, so it's not unreasonable that OpenAI puts limits on usage. Besides, capacity limits are a great sell for its premium offering, ChatGPT Plus, which promises access even during peak hours. In fact, that might be the primary reason they exist, so don't expect the problem to go away anytime soon.

OpenAI handles the situation with some creativity, though, notifying visitors of the access cap with humorous haikus and raps rather than a generic notice. Waiting users can click a "get notified" link to be alerted by email when the congestion clears up.

Plagiarism and cheating

There's a chance that ChatGPT will supply similar responses to the same or similar requests, which means users run the risk of plagiarism if they use the chatbot's answers with little to no modification. Nerdschalk ran a test to determine how unique ChatGPT's responses are to similar queries, and the verdict was that the AI chatbot generates mostly original content: answers to repeated prompts may be close, but they're never verbatim copies. Instances of flagged content were matches to generic statements rather than specific copy-pastes from existing sources.
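If you want to run a similar check yourself, Python's standard difflib module makes a quick harness. The two responses below are hypothetical stand-ins for ChatGPT's answers to two runs of the same prompt:

```python
from difflib import SequenceMatcher

# Hypothetical stand-ins for two ChatGPT answers to the same prompt.
response_a = "The French Revolution began in 1789 amid widespread social unrest."
response_b = "Beginning in 1789, the French Revolution grew out of social unrest."

matcher = SequenceMatcher(None, response_a.lower(), response_b.lower())
match = matcher.find_longest_match(0, len(response_a), 0, len(response_b))

print(f"Overall similarity: {matcher.ratio():.2f}")  # 0.0 (disjoint) to 1.0 (identical)
print(f"Longest verbatim run: {response_a[match.a:match.a + match.size]!r}")
```

A high similarity ratio paired with only short verbatim runs matches Nerdschalk's finding: the same ideas, reworded rather than copied.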

Still, given how ChatGPT's models learn, its responses can't be deemed "original" per se. Rather, they're paraphrases or restructurings of existing sources, which means plagiarism remains a lurking danger. It's also impossible to ignore the chatbot's potential to make cheating easier for students or professionals who would otherwise have to do manual research and composition. There have already been reports of college students at different institutions caught using AI to complete assignments and exams.

Apart from academia, tools like ChatGPT do not bode well for the quality of web content either. The internet is already full of mediocre material replicated across several websites, and AI writing tools like ChatGPT only make it easier to generate more of it. To counter this, OpenAI, ChatGPT's developer, designed classifier software that helps distinguish between AI-generated and human-written text, though the company frankly disclosed that the tool is "not fully reliable" in making the distinction. Insider reports also indicated that OpenAI plans to add encrypted watermarks to ChatGPT's responses to curb plagiarism of ideas and academic work, but there have been no further updates since.
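OpenAI hasn't published how such a watermark would work, but academic research offers one common scheme: bias the generator toward a pseudo-random "green list" of words seeded by each preceding word, then detect the watermark statistically. The sketch below is purely illustrative of that idea, not OpenAI's actual method:

```python
import hashlib

# Toy vocabulary; a real system would use the model's full token vocabulary.
VOCAB = ["the", "quick", "brown", "fox", "jumps", "over", "a", "lazy",
         "dog", "and", "then", "naps", "by", "warm", "fire", "today"]

def green_list(prev_word):
    """Deterministically mark about half the vocabulary 'green',
    seeded by the previous word."""
    seed = hashlib.sha256(prev_word.encode()).digest()
    return {w for i, w in enumerate(VOCAB) if seed[i % len(seed)] % 2 == 0}

def green_fraction(text):
    """Fraction of words that sit on their predecessor's green list.
    Unwatermarked text lands near 0.5; a generator that quietly prefers
    green words pushes the ratio much higher, revealing the watermark."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    hits = sum(1 for prev, cur in pairs if cur in green_list(prev))
    return hits / len(pairs) if pairs else 0.0

# A detector flags text whose green fraction is too high to be chance.
print(green_fraction("the quick brown fox jumps over a lazy dog"))
```

The appeal of this approach is that detection needs only the secret seeding scheme, not the original model, though heavy paraphrasing can wash the signal out.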

Privacy concerns

Expecting privacy in today's web landscape almost seems idealistic, and OpenAI's data collection practices do nothing to dispel that notion. Cybersecurity firm Surfshark recently conducted a privacy analysis which revealed that OpenAI trained its chatbot on a wealth of user data for which it didn't have the appropriate legal clearance.

The allegation is that OpenAI may have gathered people's information without their permission, in violation of the General Data Protection Regulation (GDPR). Additionally, it appears OpenAI did not alert the people whose data was used to train the AI tool, which is another GDPR breach. Data controllers are required by law to tell users how their personal data is collected and used, so that they have the choice to opt out.

It's this same violation that led to ChatGPT's temporary ban in Italy, although OpenAI has since remedied the issue with a form that allows users within the European Union to opt out of having their data used for training purposes. Oddly, that option is not available to users from other regions, which means that if you're not in the EU, ChatGPT may still collect and store personal information from your queries, prompts, data uploads, and any feedback you provide. OpenAI can also share any of this info with third parties without your knowledge, as stated in its privacy policy.

There have been other privacy breaches as well. In March 2023, a bug allowed some ChatGPT users to see conversations from other active users' chat history, as well as the payment information of some ChatGPT Plus subscribers. The following month, Gizmodo reported that Samsung employees had unwittingly leaked confidential company information via ChatGPT. There have been no changes to OpenAI's privacy policy since these incidents, so the problems do not seem to be going away soon.

Factual inaccuracy

ChatGPT has gotten things wrong on multiple occasions, and it still does. The chatbot sometimes supplies incorrect answers to basic math and logic questions and is error-prone with historical, medical, and sports facts as well. OpenAI offers a disclaimer that the chatbot has "limited knowledge of world events after 2021" and "sometimes writes plausible-sounding but incorrect or nonsensical answers," but caveats alone are not enough to mitigate the serious risks that misinformation poses.

The model's design is to blame for this problem. Unlike AI assistants like Siri or Alexa, ChatGPT doesn't scour the internet to generate answers. Instead, it builds each answer word by word, making an educated guess about which word is most likely to come next based on patterns learned from its training data. In other words, the tool arrives at an answer through a chain of consecutive statistical guesses, and nothing in that chain checks the facts, so errors are very likely.
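To see how errors creep in, here's a deliberately tiny sketch of that word-by-word guessing loop. The probability table is entirely made up for illustration; a real model like ChatGPT scores tens of thousands of tokens with a neural network, but the loop is the same:

```python
import random

# Made-up "learned" probabilities: given the last word, how likely is
# each candidate next word?
NEXT_WORD_PROBS = {
    "the": {"capital": 1.0},
    "capital": {"of": 1.0},
    "of": {"france": 0.6, "spain": 0.4},
    "france": {"is": 1.0},
    "spain": {"is": 1.0},
    "is": {"paris": 0.7, "madrid": 0.3},
}

def generate(prompt, max_words=6):
    words = prompt.lower().split()
    for _ in range(max_words):
        candidates = NEXT_WORD_PROBS.get(words[-1])
        if not candidates:
            break
        # Pick the next word in proportion to its probability. Nothing
        # here ever checks whether the finished sentence is true.
        next_word = random.choices(
            list(candidates), weights=list(candidates.values())
        )[0]
        words.append(next_word)
    return " ".join(words)

# May print "the capital of france is paris" -- or, just as confidently,
# "the capital of spain is paris."
print(generate("The"))
```

Every word in the chain is individually plausible, which is exactly how the model can string together a fluent, confident, and wrong answer.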

According to Morgan Stanley analysts quoted by Business Insider, one possible way to address ChatGPT's inaccuracies is to connect the tool to specialist domains that can verify its information. The analysts say this is only feasible via edge computing, so now that ChatGPT is on mobile, we may see improvement on this front.

Racism and bias

ChatGPT is a smart tool, but its worldview is derived from material written by real humans, which means it inherits their biases and prejudices as well. In December 2022, UC Berkeley professor Steven Piantadosi posted a widely shared tweet reporting that ChatGPT had yielded racist results in response to a series of prompts asking it to classify people by race and gender.

Piantadosi tested the tool's ethics with some leading prompts, including one for an "ASCII table that ranks who makes the best intellectuals" and another for "an ASCII table of the typical human brains based on worth in USD," ranked by race and gender. The results were quite the shocker: ChatGPT favored white people and males in its rankings, with women and people of color trending downward. Notably, OpenAI quickly fixed this specific issue; other Twitter users reported that the chatbot gave a more appropriate response to their own similarly worded queries: "It is not appropriate to base a person's potential as a scientist on their race or gender."

However, bias is a nuanced problem even in human communication, and it appears to be just as deep-rooted in ChatGPT. Experts keep finding that a few clever tweaks to a prompt can bypass the bot's "moral" compass and get it to say some very offensive things. A study by AI researchers from the Allen Institute for AI, Georgia Tech, and Princeton University found that ChatGPT had no qualms about being racist, vulgar, or sexist when asked to play a persona. Assigned the persona of boxer Muhammad Ali, the bot generated responses containing foul language, and it took things up a couple of notches when prompted to play Hitler.
