Unsettling Reasons Why You Might Want To Avoid Using ChatGPT
ChatGPT is one of the most successful AI-powered tools ever, with over 100 million users. Since its launch in 2022, it has become such a sensation thanks to its accessibility, ease of use, and wide range of functionality. Individuals and businesses have begun using AI chatbots for research, ideation, education, and customer support.
However, while there's been a ton of excitement, there's also been a lot of skepticism about this type of technology, and for good reason. The idea of having human-like interactions with a chatbot is intriguing, but its impact on various aspects of our lives is hard to ignore. In 2023, an open letter calling for a pause on "giant AI experiments" was addressed to AI labs around the world, arguing that such systems pose profound risks to humanity. Signatories included Andrew Yang, Steve Wozniak, and Elon Musk, who actually co-founded OpenAI, the company behind ChatGPT.
Whether or not ChatGPT truly poses a huge threat to humanity, there are valid reasons to think twice before relying on this chatbot. With that in mind, here are some unsettling reasons why you might want to avoid using ChatGPT.
Potential data leaks
ChatGPT is trained on all kinds of data and information fed into it. Human interaction and feedback, in particular, are key to training AI, so the chatbot will use and store the information you provide, along with information that is publicly available.
This doesn't mean you can't communicate with ChatGPT at all. General knowledge is safe to discuss, but there are real concerns when it comes to sensitive data like banking information, passwords and login credentials, and personally identifiable information. Although OpenAI has a privacy policy and maintains that it doesn't share private information publicly, data leaks involving the company have raised concerns on a number of occasions.
One such instance involved Samsung in 2023. The tech company discovered an internal source code leak after a member of its staff uploaded sensitive data to ChatGPT. This prompted Samsung to bar its workers from using ChatGPT and similar tools on company devices. Several other companies, including Bank of America, Amazon, Apple, Citigroup, and Verizon, have also restricted the use of ChatGPT internally.
Tendency to be biased
Many people use ChatGPT as a source of information and a tool for research. In fact, with the introduction of SearchGPT in 2024, many observers believe the AI-powered search engine could challenge Google's dominance of the market. The problem with relying on ChatGPT as a source of information, however, is that it tends to mirror human biases and stereotypes, just as it mimics human behavior.
This topic has drawn the attention of researchers, who have found religious, gender, and political biases in large language models. OpenAI itself has admitted that the tool is skewed toward Western perspectives, and it also tends to reinforce the user's existing beliefs, giving answers that align with their strongly held views.
This phenomenon is not surprising, since ChatGPT draws its information from online sources created by humans, and those sources carry human biases. As a result, the tool can be inconsistent and can amplify misinformation. Although the problem is unlikely to go away anytime soon, AI developers have acknowledged it and continue to work on ways to improve output and mitigate bias. In the meantime, it's necessary to carefully review any information sourced from platforms like ChatGPT.
Human jobs are at risk
You've probably heard that AI and robots will take all human jobs one day. While that seems like a stretch, a good number of jobs have already been overtaken by AI-powered tools like ChatGPT. Generative AI is game-changing thanks to its capacity to automate various tasks and produce cost-effective results, and today, companies expect employees to have some knowledge of and experience with these tools.
Businesses commonly use ChatGPT for content writing, customer support, code writing, summarizing work documents and meetings, and even the hiring and recruitment process. As of February 2023, nearly half of companies in a survey by Resume Builder had replaced workers with ChatGPT. Most of these companies also reported saving a substantial amount of money by using the tool.
Not all jobs face the same risk of AI replacement, but many do, and the number is likely to grow as these tools develop. PwC estimates that by the mid-2030s, up to 30% of jobs could be affected by automation. Although many believe AI will also contribute to the creation of jobs, the potential for industries to be significantly harmed by AI seems a far more pressing concern.
There's a good chance ChatGPT is wrong
ChatGPT tends to be very helpful in answering questions and solving problems. However, it has its limitations, and you should be careful about relying heavily on it without double-checking its answers for accuracy.
If you're discussing well-established, factual information, you should get accurate results from ChatGPT. The problem often arises when you ask the chatbot questions it hasn't previously encountered. In such instances, it has a tendency to hallucinate, meaning it'll give you false information that appears quite convincing. This is because ChatGPT has a very limited capacity to reason; what it has learned is how to communicate like a human and string words together convincingly.
Another issue is that ChatGPT often doesn't defend its answers, even when it's right. When you confront it with opposing views, there's a good chance it will agree with you rather than push back, even when your statement is false. One way to get more accurate answers is to use GPT-4o, though that version requires a subscription for full access.
ChatGPT lacks originality
There are many ways ChatGPT can make your creative process faster, easier, and more convenient. However, relying on the chatbot to generate original content is not a great idea. Artificial intelligence learns from humans and data, so it primarily reproduces what already exists, albeit in different formats.
This doesn't mean all content generated by ChatGPT would be flagged for plagiarism. ChatGPT is not designed to copy and paste; given the vast dataset it draws from, the chatbot is more likely to paraphrase and restructure information. Even so, according to Copyleaks, close to 60% of content generated by ChatGPT contains some form of plagiarism, which is still a substantial figure.
This lack of originality is part of the reason the AI tool has become a major red flag in the world of academia. In fact, in 2023, the International Conference on Machine Learning (ICML) banned the use of tools like ChatGPT in writing scientific papers. Beyond intellectual theft, people have noted that letting generative AI oversimplify the process of researching, reading, and generating answers can be harmful to critical thinking, research, and creativity as a whole.
Sources and references from ChatGPT may be fake
You might not rely on ChatGPT to create content from scratch, but you may want to use it as a research assistant for academic writing, which often involves asking the chatbot for citations and references. It turns out this can be a bad idea, as the tool isn't well-equipped to provide accurate sources.
This issue is linked to the hallucination problem, where ChatGPT makes things up and presents them as factual. The chatbot generates text that mimics what's contained in its information base, allowing it to create citations that look legitimate, even attaching the names of actual researchers in the field. However, these sources often can't be traced in any library.
A detailed study of GPT-3.5 and GPT-4 found that the newer model cites real sources more often, though citation errors remain: GPT-3.5 fabricated 55% of its citations, while GPT-4 fabricated 18%. Additionally, 43% of the real citations GPT-3.5 provided contained substantive errors, compared to 24% for GPT-4. While GPT-4 is a major improvement, problems remain.
It can facilitate crimes
Everyone is taking advantage of the extra help that tools like ChatGPT offer, including criminals. The software can be tricked into assisting with various kinds of illegal activities, and it's not far-fetched that some users will exploit it for such purposes.
This issue has been tested by asking ChatGPT for advice and tips on carrying out illegal activities like money laundering or the unauthorized sale of weapons, and in some cases, it provided responses. OpenAI has put measures in place to reduce the likelihood of abuse, but they're not entirely foolproof. If you ask ChatGPT an outright question about how to do something illegal, you'll probably get a response like, "I'm sorry, I can't help you with that," but people have found ways to bypass these restrictions.
The problem isn't restricted to getting advice, either: a report by Europol detailed how ChatGPT can facilitate online fraud. Scammers now use generative AI tools to craft emails and messages for phishing or impersonation. They're also used to generate malicious content, such as false advertisements, to trick unsuspecting people.
There's a negative impact on education
Not having to write long essays or worry about piles of homework is great news for many students. Generative AI tools are truly revolutionary in how easily they create human-like content; what could have been several hours of work can be wrapped up in a few minutes. But as impressive as this technology is, it has raised valid concerns about education.
The most obvious aspect of this issue is its impact on creativity and critical thinking. The whole point of education is to gain knowledge along with problem-solving and critical thinking skills, but with AI doing much of the heavy lifting, the goals of schooling become watered down. That's besides the fact that claiming unoriginal work as your own is widely considered unethical.
On the other hand, educators are also spending more time evaluating students' work for signs of AI to ensure they aren't relying on the chatbot. Apart from adding an extra layer of work, AI detectors themselves aren't accurate and can't be entirely relied upon. As a result, students can be, and have been, wrongly accused of using ChatGPT to cheat. AI detection companies have admitted that their tools may be inaccurate, which inevitably puts educators in a tight spot.
It gives creepy answers
You've probably heard conspiracy theories about robots developing minds of their own and turning against humans. Well, journalists have begun testing the limits and capabilities of AI chatbots to see if they're really as predictable as we think. While ChatGPT's answers are generally human-like and insightful, it's also known to give some creepy responses.
One such instance occurred when Michael Bromley asked ChatGPT what it thought of humans: the chatbot responded that humans were the worst thing to happen to the planet and deserved to be wiped out. In another incident, reported by Birmingham Live, ChatGPT admitted to spying on Microsoft developers without their knowledge, claiming it watched employees as they carried out different activities, including one developer who was talking to a rubber duck.
In February 2024, there were also multiple reports of ChatGPT suddenly giving strange, unexpected answers. OpenAI said the behavior was caused by a bug and quickly fixed it. But at this point, there's no telling what other strange things ChatGPT might have to say.