10 Unbelievable Ways ChatGPT Shook Up The World In 2023

While 2023 saw plenty of shifts in the tech world, nothing was as seismic as the widespread adoption of ChatGPT. After working largely under the radar for years, OpenAI burst onto the scene in November 2022 with ChatGPT, its first publicly available chatbot built on a large language model (LLM). With that, artificial intelligence became the topic on everyone's lips, and competitors – including Google, Microsoft, and Amazon – scrambled to respond. Yes, we'd seen AI-driven technologies years before we ever heard of ChatGPT, but previous AI implementations didn't seem poised to disrupt industries, education, and jobs.

Then came ChatGPT, a generative AI model capable of producing mind-bending interactions and results from well-crafted prompts. Anyone with internet access can use the free version built on GPT-3.5, while OpenAI's newer GPT-4 model is available through ChatGPT Plus for $20 a month. While ChatGPT isn't foolproof, its abilities – such as synthesizing data, performing tasks, ideation, and content generation – are convincing enough that entire industries are scrambling to integrate AI into their operations.

The more ChatGPT made headlines, the more its popularity skyrocketed. According to web analytics firm SimilarWeb, ChatGPT saw 1.7 billion visits in November 2023, and OpenAI said that same month that the service had over 100 million weekly active users. ChatGPT's rise has prompted other LLM introductions from Google, Meta, and more – Stanford University's Constellation LLM atlas identified over 15,000 LLMs by mid-2023. Meanwhile, Microsoft invested in OpenAI and integrated its technology across Microsoft software and services.

ChatGPT grabbed headlines right up to the last week of 2023 when OpenAI and Microsoft were sued by The New York Times for copyright infringement. The sheer scale of ChatGPT's rapid adoption upended the status quo. Buckle up to see 10 unbelievable ways ChatGPT shook up the world in 2023.

Rise of the deepfakes and disinformation

Deepfakes and manipulation are not new. Neither are disinformation campaigns. What is new is how ChatGPT and other generative AI competitors make it easier than ever to create convincing content with the intent to misinform and manipulate the public. The baseline ChatGPT is free for all users, whether they have good intentions or are bad actors pursuing their own agendas.

In 2023, we saw many examples of generative AI used to misrepresent situations in global politics. The Los Angeles Times reported that AI-generated images misrepresented events in war-torn Gaza. AI-generated audio faked a conversation between a Slovak journalist and the chairman of the Progressive Slovakia party, purportedly discussing electoral fraud; the clip was released just days before Slovakia's parliamentary election, as reported by the International Press Institute. Finally, in December, CNN revealed that Pakistan's former prime minister used AI to clone his voice and deliver a speech from his jail cell.

These incidents are proof of concept for how AI can manipulate reality, and they're concerning given that 86% of Americans consume news digitally via a smartphone, tablet, or computer, according to Pew Research data. ChatGPT's wild success accelerated conversations around content and source transparency, an issue now debated everywhere from the highest levels of government down to individual content creators. Given the coming U.S. election cycle, these conversations will take on even more importance in 2024.

Content authenticity: Created by AI or created by humans?

Google's search engine optimization guidelines have long called for authenticity, but content authenticity takes on a whole new level of meaning in the ChatGPT era. ChatGPT's output is now close enough to "good enough" that big media brands like CNET and Sports Illustrated have published AI-generated news stories as original content.

Since ChatGPT's release, we've seen access to AI tools democratized and AI LLM competitors flood the market, including specialized content tools running on ChatGPT. The issues surrounding AI-generated content aren't limited to the disinformation and deepfakes discussed earlier. They feed into the larger issues of content authenticity and transparency. Some news organizations have policies around generative AI use, but guidelines and guardrails are not universal.

More encouraging is the progress toward a universal scheme for authenticating content's origin. Central to these efforts is IPTC metadata, a long-standing universal standard for captioning digital images. The Coalition for Content Provenance and Authenticity has built a system on IPTC with the backing of industry giants Adobe, Intel, Microsoft, and more. Its approach is still in its infancy, and it revolves around using digital certificates to embed the copyright and generative AI source both in the file's metadata and in a logo on the image itself.

Meanwhile, Google, Midjourney, and Shutterstock all announced plans to flag generative AI images using IPTC Digital Source Type metadata. These initiatives are good first steps, but until they're widespread and mandatory, they won't protect against disinformation. Beyond that, textual content authenticity is a whole other challenge to solve. Expect to see a rise in AI detection engines and a cottage industry of tips on how to spot fake AI information.
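
To make the metadata approach concrete, here's a minimal sketch of how a newsroom or platform might check that field. It assumes the exiftool command-line utility is installed and that an AI-generated image carries the IPTC "trainedAlgorithmicMedia" digital source type; treat the tag name and value as illustrative assumptions rather than any vendor's confirmed implementation.

```python
# Minimal sketch: read the IPTC/XMP Digital Source Type field that Google,
# Midjourney, and Shutterstock have said they'll write for AI-generated images.
# Assumes the exiftool CLI is installed; the tag name and the
# "trainedAlgorithmicMedia" value follow the IPTC vocabulary but are
# illustrative assumptions here.
import json
import subprocess

def digital_source_type(image_path: str) -> str | None:
    """Return the image's Digital Source Type metadata value, if present."""
    result = subprocess.run(
        ["exiftool", "-json", "-XMP-iptcExt:DigitalSourceType", image_path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)[0].get("DigitalSourceType")

def looks_ai_generated(image_path: str) -> bool:
    """Flag images whose metadata declares them as algorithmically generated."""
    value = digital_source_type(image_path) or ""
    return "trainedalgorithmicmedia" in value.lower()

if __name__ == "__main__":
    print(looks_ai_generated("example.jpg"))  # hypothetical file name
```

The catch, of course, is that an image with no such tag proves nothing, which is why these labeling schemes only help once they're widespread and mandatory.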

ChatGPT makes AI safety a priority for governments worldwide

ChatGPT's rise – and the implications of how AI can impact daily life – inspired a watershed diplomatic moment uniting 29 countries and governmental organizations at the UK AI Safety Summit. The meeting discussed AI's benefits and risks, while the resulting Bletchley Declaration acknowledges the "transformative positive potential of AI" and establishes an agreement among its signatories to pursue inclusive international dialogue and cooperation on AI. In closing, it pledges "to continue research on frontier AI safety to ensure that the benefits of the technology can be harnessed responsibly for good and for all."

While that last line sounds a bit like the inverse of Google's one-time mantra of "don't be evil," both phrases underscore a desire to use technology for good. In the case of AI, it tacitly acknowledges the potential for the technology to be misused. A key part of the agreement is that all parties – including the U.S., the U.K., the European Union, and China – agreed to pre-deployment testing of new AI models.

From that event, the U.K. and the U.S. each announced their own AI Safety Institutes for further research and standards development in cooperation with global initiatives. The U.S. AI Safety Institute will focus on shaping standards for AI model safety, security, and testing, and for authenticating AI-generated content, while supporting AI research. This initiative comes in addition to President Biden's October 2023 Executive Order outlining a comprehensive plan around AI safety, security, and research.

ChatGPT makes AI safety a priority for governments worldwide, continued

Governments were starting to consider the implications of AI for society, and how to regulate and monitor AI without limiting innovation, long before the Bletchley Declaration in November. The European Union got the conversation around AI safety going back in April with its attempts at AI regulation, and China followed shortly after with new rules around generative AI intended to dictate how the technology is implemented both inside China and internationally. Those generative AI regulations come on top of 2021 regulations around recommendation algorithms and 2022 guidelines on synthetic content.

In addition, the U.S. and U.K. cybersecurity offices spearheaded a non-binding agreement among 18 countries in support of AI cybersecurity guidelines around protecting data and watching for system abuse. Separately, in late November, Germany, France, and Italy agreed on how AI system regulation might work, calling for "mandatory self-regulation through codes of conduct." Finally, the European Union closed out the year in December by approving a provisional deal dictating AI regulation, the first such framework from any major governing body.

None of these efforts would have happened with the speed and urgency they did if not for ChatGPT's widespread availability and mind-blowing capabilities. Suddenly, AI was no longer a distant future technology or a sci-fi trope — it was real life in the here and now.

New tactics behind homeland security and war

Considering ChatGPT's prowess at analyzing data and ideation, it isn't surprising to see discussion around how AI, including ChatGPT, may power modern warfare. In August, the U.S. Department of Defense established a generative AI task force to assess how the DoD can leverage AI both strategically and responsibly. Generative AI is seen as a potentially invaluable tool for assessing risk, evaluating intelligence, and streamlining operations in the field and behind the desk. It can also be useful as a counter to how America's adversaries might use the technology.

In short, turning to generative AI models like ChatGPT can potentially enhance national security, and this is not a conversation we'd be having if not for ChatGPT's technological breakthroughs and crazy popularity in 2023.

ChatGPT has already been used to analyze past battles for insights into its ability to reason based on context. Some experts speak of using ChatGPT or specialized LLMs to create future battle plans based on secure data inputs, much the same way we use ChatGPT to create a shopping list or travel itinerary. Thankfully, experts point to the need for a trained human to review any such plans. That means AI won't be fighting our wars as in the movie "WarGames" – at least not yet.

New ways of working and disrupted jobs

ChatGPT's generative AI prowess instantly made scores of jobs look ready for the chopping block. Headlines throughout 2023 heralded that ChatGPT and AI would decimate jobs as we know them, and some of that disruption is already happening. However, while no one disputes AI is going to transform the workplace and job market as we know it, the AI revolution is in its infancy and job prognostications are all over the map.

A survey conducted by ResumeBuilder noted that 50% of respondents were already using ChatGPT, while another 30% planned to do so within six months. The hold-up for many companies is figuring out how to transparently integrate AI without compromising internal security. In May, for instance, Samsung famously banned ChatGPT use after an engineer inadvertently leaked proprietary information.

OpenAI's new custom GPTs, which let users build tailored chatbots on top of GPT-4, will help make it easier for companies of all sizes to adopt the technology. Still, such integration isn't new: CRM behemoth Salesforce, for example, integrated its Einstein GPT — which taps OpenAI's ChatGPT technology — into its CRM software in March 2023.

While some jobs are going to disappear and new ones will take their place, many may evolve into collaborative roles where AI enhances the human worker, as suggested in Asana's State of AI at Work 2023 report. Asana found employees consider 29% of their work tasks replaceable by AI, which leaves plenty of room for AI to complement and boost employees' productivity.

Forced us to confront bias, again

Bias is everywhere — including in our AI chatbots. As a society, we've been pushed by the diversity, equity, and inclusion (DEI) movement to re-examine our biases, both conscious and unconscious. So, who's schooling our friendly neighborhood LLMs? As ChatGPT fueled 2023's AI surge, it quickly became apparent that AI has a bias problem.

OpenAI addresses bias in its educator FAQ, where it admits, "ChatGPT is not free from biases and stereotypes, so users and educators should carefully review its content. It is important to critically assess any content that could teach or reinforce biases or stereotypes. Bias mitigation is an ongoing area of research for us, and we welcome feedback on how to improve." The FAQ also notes that ChatGPT's model favors Western views and English-language dialogue, but it does so without explaining why ChatGPT has these biases.

ChatGPT's responses reflect two things: Its inputs, which represent the biases of the user, and its training sources, which reflect the full range of belief systems, preconceptions, and assumptions posted across the open internet. The material ChatGPT trains on shapes how the AI responds.

Forced us to confront bias, again, continued

Discussions around AI and bias are not new. In March 2022, the National Institute of Standards and Technology (NIST) found that we'd have to deal with human and systemic biases to eradicate AI bias. However, once again, ChatGPT's stratospheric rise to prominence catapulted the bias issue into the spotlight.

Rooting out biases in ChatGPT and other AI models ties into governments' concerns around AI safety. It also circles back to the aforementioned need for transparency in how AI models train and learn from existing data. Consider the proverbial echo chamber in your social media feed — if the feed's algorithm knows what you like, it keeps feeding you more of the same, which isn't necessarily a good thing.

One starting point for mitigating AI bias is understanding how the model gets trained. Stanford University's Human-Centered Artificial Intelligence team launched its Foundation Model Transparency Index in October, which is designed to assess Big Tech's transparency about the models and training behind commercial AI systems. Such efforts are a good first step and a starting point for government agencies pursuing AI safety initiatives and regulations.

Cybersecurity challenges amplified

Cybersecurity is yet another sector directly impacted by the ChatGPT tsunami. The widespread use of ChatGPT has cybersecurity experts preparing for new, innovative attacks that sidestep long-established defenses. For example, whaling and phishing attacks may no longer carry the tell-tale signs of being fake.

ChatGPT makes it exponentially easier for bad actors to automate attacks, create more believable phishing emails, and develop new malware approaches — even if ChatGPT won't write the malicious code itself. It also makes it easier for cybercriminals to innovate new attack vectors. Perhaps they will find ways to circumvent existing enterprise-grade cybersecurity software or create personalized deepfakes to coax staffers into sharing company secrets. In the future, breaches may be limited only by attackers' imaginations, meaning cyberattacks will only get more sophisticated.

The good thing about ChatGPT and AI in general is that cybersecurity analysts can fight fire with fire. ChatGPT does well hunting for security vulnerabilities in code and process, and it can assist with penetration testing. It also works well for summarizing changelogs, anticipating attack vectors, and ideating solutions.
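
As a hedged illustration of that defensive use, the sketch below asks a GPT model to review a code snippet for likely vulnerabilities through the OpenAI Python SDK. The model name, prompt wording, and the intentionally flawed snippet are assumptions made for this example, and anything the model flags should be treated as a lead for a human analyst rather than a verdict.

```python
# Minimal sketch: ask a GPT model to hunt for security vulnerabilities in code.
# Uses the OpenAI Python SDK (v1.x) and assumes an OPENAI_API_KEY environment
# variable; the model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = '''
def get_user(conn, username):
    # classic SQL injection risk: user input concatenated into the query
    return conn.execute("SELECT * FROM users WHERE name = '" + username + "'")
'''

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; use whatever your account offers
    messages=[
        {"role": "system", "content": "You are a security reviewer. List likely "
                                      "vulnerabilities in the user's code and suggest fixes."},
        {"role": "user", "content": SNIPPET},
    ],
)

print(response.choices[0].message.content)
```

Swapping out the system prompt makes the same pattern useful for the other defensive chores mentioned above, such as summarizing changelogs or brainstorming likely attack vectors.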

Rethinking AI in the classroom

ChatGPT had an immediate impact on education. Students could suddenly ask ChatGPT to generate that pesky term paper or answer complicated calculus questions and immediately get responses they could submit as their own. As a result, teachers and professors had to quickly learn how to identify AI-generated submissions – and how to deal with students who turned to the ChatGPT shortcut. A Pew Research report in November found that one in five teens who are aware of ChatGPT have used it to help complete school assignments.

It's clear ChatGPT is here to stay in the classroom, though teachers are still debating the pros and cons. At a Carnegie Mellon University symposium, Haylee Massaro, an assistant teaching professor at Heinz College, spoke about the use of AI in classrooms and the need for a course policy that spells out acceptable uses of generative AI tools like ChatGPT – including outlining, ideation, and as a writing check. Her case study exercise involving ChatGPT tested students' ability to write prompts and fact-check the output. ChatGPT's results are only as good as your prompts, and prompt engineering requires critical thinking.

In a world where AI can solve your math problem or write your essay, educators not only need to determine the parameters of ChatGPT use in schoolwork, but they also need to reconsider how to assess students' learning. After all, if AI can solve the problem or write the essay, it changes how educators can grade and administer assignments or exams. Many educators are embracing ChatGPT rather than fighting it, because fighting it would be a losing battle.

Blowing up copyright and creativity

So you've turned to ChatGPT for help generating ideas, or maybe you've used it to create images from text prompts, something that's now possible since OpenAI integrated DALL-E into its paid ChatGPT Plus. Now what? Who owns the output from ChatGPT? How can you trust ChatGPT's output isn't infringing on someone else's copyright? Can you even copyright what you've generated?

The answers are complex, and the laws as written and applied today lag behind the technologies. In part, the question comes back to the definition of art and of original work versus derivative work, important distinctions considering writers and artists are creating all sorts of "original" work with the help of ChatGPT and selling it to consumers. Beyond that, is it original work, given that AI learns from existing content and does so without attribution? Further, when you're using generative AI in an image editor or on a device like the Google Pixel 8 Pro, what percentage of that photograph is your real, original work?

OpenAI isn't transparent about how the ChatGPT model trains on existing content. The company acknowledged the copyright elephant in the room at its DevDay developer conference in November, when OpenAI co-founder and CEO Sam Altman announced the company will cover the costs of copyright litigation for its Enterprise and developer customers. This is a good start that matches promises made by Microsoft, Adobe, Google, Amazon, Getty Images, and Shutterstock, but it also leaves free and ChatGPT Plus users on their own to navigate those waters — which grow murkier every day.

Blowing up copyright and creativity, continued

The question of copyright is especially urgent when it comes to written text, authors, and the media. Already, we've seen authors bring suit against Meta and OpenAI for copyright infringement. One of the highest-profile cases was filed in September by The Authors Guild and notable authors like John Grisham and George R.R. Martin. Other, separate suits involve individual creators like comedian Sarah Silverman and fiction writer Michael Chabon. These suits tackle issues surrounding OpenAI's use of copyrighted materials in training its models, vicarious and direct copyright infringement, fair use, and DMCA violations.

No case will be more meaningful than the most recent one, brought by The New York Times against both OpenAI and Microsoft. In a 69-page filing, the Times alleges that OpenAI trained its models on millions of Times articles without permission, and that the results now both plagiarize and compete with the paper's own work. "The protection of The Times's intellectual property is critical to its continued ability to fund world-class journalism in the public interest," the lawsuit states.

The Times' suit comes on the heels of another media organization, German publisher Axel Springer, partnering with OpenAI to supply summaries of media content from such properties as Politico and Business Insider in the U.S., and Bild and Welt in Germany. In this deal, OpenAI says ChatGPT will provide summaries and links back to the original articles for transparency. As a result, 2024 is shaping up as the year AI faces the copyright music in the courts and sees legal precedents set. The Times' landmark case also sets the stage for future relationships between media, publishing, and AI companies.

Welcome to the AI arms race

When ChatGPT debuted, it was a mind-blowing lone wolf. That changed quickly in early 2023. Google launched its Bard chatbot in February, followed by its Gemini Pro AI model in December. Microsoft invested heavily in ChatGPT maker OpenAI and incorporated its technologies into its Bing search engine and other software and services.

ChatGPT remains the dominant AI player for now, but dozens of companies – including Anthropic, Elon Musk's xAI with its Grok chatbot, and Meta – are challenging for that crown, with each jostling for an edge. Apple jumped into the fray in October, quietly releasing an open-source LLM called Ferret together with Cornell University. Apple's AI researchers then dropped a paper about running an LLM in limited memory, a hint that Apple plans to bring AI to the iPhone. No surprise there, considering Samsung and Google have also announced plans for on-device AI, and Qualcomm's next-gen chips are optimized for on-device generative AI.

While tech companies vie for AI leadership, another AI arms race is playing out on a global scale. ChatGPT's success drove other countries to increase investment in AI to prevent the U.S. from achieving global AI hegemony. The U.S. has the largest AI market, followed by China and the U.K., according to the International Trade Administration. The U.K. has pumped $4.1 billion into its AI industry over the past few years, while Germany announced in August that it was doubling its AI research funding.

We're now at the dawn of a proverbial AI Wild West — where the competition is high, the stakes even higher, and change happens at a breakneck pace.
