The History Of AI: How Machine Learning's Evolution Is Reshaping Everything Around Us
For a long time, artificial intelligence was a futuristic concept. But the future is finally here, and AI is all anyone can talk about. In fact, a Pew Research Center study found that 28% of Americans say they use AI daily or a few times a week. However, it wasn't always like this. A decade ago, artificial intelligence was more science fiction than reality to the average person.
Today, you can solve problems in seconds with AI chatbots and even create images from scratch with programs like Midjourney. And tech giants like Google and Meta, among others, keep coming up with new ways to use artificial intelligence. We are now effectively in the "Age of AI."
But how did artificial intelligence go from a movie concept to our reality? In this article, we journey into the history of AI and how machine learning's evolution is reshaping everything around us.
Alan Turing asks whether machines can think
Artificial intelligence, like everything around us, began as an idea. And the person behind this idea is none other than Alan Turing. Turing was the first person on record to ask the question, "Can machines think?"
In his landmark 1950 paper, "Computing Machinery and Intelligence," Turing suggested that if a machine could carry on a human-like conversation, it might be considered intelligent. He proposed a benchmark he called "the imitation game," now known as the Turing Test. To pass, a computer would have to behave indistinguishably from a human in natural language conversation. The test framed machine intelligence in terms of human behavior: rather than trying to define thought, study how humans converse and build machines that can do the same. Once this framing took hold, language moved front and center in AI research, where it remains today.
Building on this idea were programs like the Logic Theorist and ELIZA, which showed the first signs of machine reasoning and natural language processing. The Logic Theorist, created by Allen Newell and Herbert Simon in 1956, could prove mathematical theorems. ELIZA, a natural language processing program created by Joseph Weizenbaum in 1966, could simulate human-like conversation. These were the first dominoes to fall after Turing's question set the chain in motion.
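ELIZA's trick was pattern matching: it scanned the user's input for keywords and reflected fragments of it back inside canned templates. A minimal sketch of that idea in Python (the rules below are hypothetical, written in the spirit of, not copied from, Weizenbaum's DOCTOR script):

```python
import re

# Hypothetical ELIZA-style rules: match a pattern, then reflect
# part of the user's own words back in a templated reply.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(respond("I need a vacation"))  # Why do you need a vacation?
```

The illusion of understanding comes entirely from reflecting the user's words back; there is no model of meaning anywhere in the loop, which is exactly what made ELIZA's apparent conversational skill so striking at the time.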
Dartmouth Conference jumpstarts AI
Alan Turing can't take all the credit. Although he lit the fire, the 1956 Dartmouth Conference fanned the flames. The event, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, established AI as an academic discipline; it was there that McCarthy coined the term "artificial intelligence." Shannon had earlier built Theseus, a maze-solving robotic mouse and one of the first examples of machine learning, which could move forward or rotate 90 degrees.
Even though the conference didn't yield immediate technological breakthroughs, its ideas inspired decades of research and remain important to AI today. Discussions at Dartmouth laid the foundation for early approaches such as symbolic logic and rule-based programs, to which the Logic Theorist and ELIZA owe their existence. The conference's ideas dominated the field for decades, but Dartmouth wasn't the only institution that played a vital role in AI's development.
Stanford's robotics legacy
Speaking of iconic universities, we can't go on without paying homage to Stanford University's immense contributions to machine learning and development. The Stanford Cart and Shakey the Robot are perfect examples of this.
The Stanford Cart was a small, wheeled robot originally built as a remotely controlled vehicle in the 1960s. By the 1970s, researchers had transformed it into a robot that could navigate its environment independently, equipped with cameras and an early computer. It was slow, sometimes taking hours to cross a room, but it worked, and its purpose was less about function than proof of theory: evidence that machines could perceive their surroundings, interpret them, and make decisions. That milestone sits on the direct path to today's far more sophisticated self-driving cars.
Shakey the Robot was another AI and robotics marvel. Around the same time the Stanford Cart was being developed, the Stanford Research Institute produced one of its earliest and most iconic creations. Shakey combined cameras, sensors, and a planning system called STRIPS (Stanford Research Institute Problem Solver) to perceive its environment and plan actions.
STRIPS remains a foundation of automated planning techniques and a critical element of artificial intelligence today. Unlike the Cart, Shakey could revise its plan by reasoning about its situation, making it the first robot to combine perception, planning, and execution, all hallmarks of human problem-solving. This is the sort of technology that autonomous systems such as Amazon's Proteus warehouse robots use today. AI's prospects had never looked more promising. Then the momentum stalled.
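The core of STRIPS can be sketched compactly: each operator lists preconditions, facts it adds, and facts it deletes, and a planner searches for a sequence of operators that reaches the goal. Below is a toy forward-search planner over a hypothetical, Shakey-inspired two-action domain; the real system was considerably more elaborate:

```python
from collections import deque

# A toy STRIPS-style domain. Each operator is a triple:
# (preconditions, facts added, facts deleted). The two actions
# below are hypothetical, Shakey-inspired examples.
OPERATORS = {
    "go_to_box": ({"at_door"}, {"at_box"}, {"at_door"}),
    "push_box":  ({"at_box"}, {"box_at_goal"}, set()),
}

def plan(state: frozenset, goal: set) -> list:
    """Breadth-first search over states; returns a list of action names."""
    frontier = deque([(state, [])])
    seen = {state}
    while frontier:
        current, actions = frontier.popleft()
        if goal <= current:          # goal facts all hold: done
            return actions
        for name, (pre, add, delete) in OPERATORS.items():
            if pre <= current:       # operator is applicable
                nxt = frozenset((current - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, actions + [name]))
    return []  # no plan found

print(plan(frozenset({"at_door"}), {"box_at_goal"}))
# ['go_to_box', 'push_box']
```

Breadth-first search keeps the example simple and finds the shortest plan in this toy domain; STRIPS itself relied on means-ends analysis to keep the search tractable.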
The AI winters
AI advanced rapidly after Turing's paper, and the early signs were promising, but that changed during the AI winters, periods when progress slowed, funding dried up, and interest in AI went cold. The first AI winter ran roughly from 1974 to 1980, when researchers struggled to turn theory into reality.
This was largely due to the technological constraints of the time: slow computers, insufficient data storage, and unreliable sensors all held AI back. While some researchers were aware of these limitations, the broader narrative and public perception leaned toward exaggerated expectations, fueled by claims that artificial intelligence would match human intelligence within years and that autonomous systems would soon be widespread. Early successes such as Shakey the Robot and the Logic Theorist never scaled into the anticipated breakthroughs, and when the ambitious claims went unrealized, investors grew doubtful.
The second AI winter occurred between the late 1980s and mid-1990s, and this one was tied to the rise and fall of expert systems: software designed to mimic human decision-making in specialized fields like medicine and finance. Though these systems worked well in narrow settings, they were expensive, fragile, and struggled with complex situations. Just as in the first winter, their shortcomings led businesses to question their value, and confidence in AI eroded again.
IBM's Deep Blue beats chess champion
Modern machine learning uses pattern recognition to analyze data and make predictions, but IBM's Deep Blue, the first computer to defeat a reigning world chess champion, won mainly through sheer computational force paired with a finely tuned, handcrafted decision-making algorithm. It could consider 200 million chess positions per second; to pull that off, Deep Blue used 32 processors that gave it 11.38 billion floating-point operations per second (flops) of processing speed.
The match against Garry Kasparov wasn't Deep Blue's first attempt against the Russian chess grandmaster. In 1996, it lost to him in a six-game series. However, IBM's team enhanced the system, using insights from that match to improve its algorithms and overall performance. The 1997 rematch saw Deep Blue win the series, marking the first time a machine defeated a world chess champion under tournament conditions and becoming a defining moment in AI history.
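The brute-force idea behind chess engines of Deep Blue's era is game-tree search: score positions at the leaves, then back values up the tree, assuming each side always picks its best move. Here is a bare minimax sketch on a hypothetical two-ply tree; Deep Blue itself ran a massively parallel, heavily optimized version of this kind of search with a handcrafted evaluation function:

```python
# Minimal minimax sketch of game-tree search. Leaves are plain
# integers standing in for a position's evaluation; inner nodes
# are lists of child positions.

def minimax(node, maximizing: bool) -> int:
    if isinstance(node, int):
        return node  # leaf: the evaluation of this position
    scores = [minimax(child, not maximizing) for child in node]
    # The side to move picks its best option; the opponent
    # is assumed to do the same on the next ply.
    return max(scores) if maximizing else min(scores)

# A tiny hypothetical game tree, two plies deep.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # 3
```

The maximizing player avoids the branches where the opponent's best reply is worst for them; Deep Blue's edge came from evaluating hundreds of millions of such leaves every second, far deeper than any human can calculate.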
The birth of OpenAI's GPT
While Deep Blue signaled to the world that artificial intelligence was more than fiction, deep learning is what turned AI into a household tool. OpenAI's GPT redefined what machines could do with natural language, using deep learning to generate strikingly human-like text.
In 2016, Google DeepMind brought crucial innovations to artificial speech with WaveNet. Then, in 2018, OpenAI launched the first GPT model, training it on vast amounts of internet text. It relied on an architecture called the Transformer, introduced by Google researchers in 2017, which uses a mechanism called attention to weigh the relationships between words in a sequence.
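At the heart of the Transformer is attention: each position scores every other position, converts the scores into weights, and takes a weighted mix of the values. A deliberately tiny, one-dimensional sketch of that mechanism (real models use large learned matrices and many attention heads):

```python
import math

def softmax(xs):
    # Turn raw scores into weights that sum to 1.
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled-down attention: 1-D toy embeddings instead of vectors."""
    out = []
    for q in queries:
        scores = [q * k for k in keys]   # dot products (1-D case)
        weights = softmax(scores)        # how much to look at each position
        out.append(sum(w * v for w, v in zip(weights, values)))
    return out

result = attention([1.0], [1.0, 0.0], [10.0, 20.0])
print(result)
```

Because the query matches the first key more strongly, the output leans toward the first value; stacking many such layers over learned vector embeddings is what lets a Transformer decide which earlier words matter for predicting the next one.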
GPT-based chatbots respond to prompts with remarkable fluency thanks to their ability to spot patterns across enormous amounts of text. AI suddenly felt closer than ever to human conversation, entered the mainstream, and began powering all sorts of technologies. But an even bigger boom was coming.
Artificial intelligence booms
Some might have thought GPT and customer-support chatbots would be artificial intelligence's resting point, but the possibilities keep expanding. We now have self-driving cars, voice-activated assistants, and AI-powered medical diagnostics; machine learning is all around us.
Tesla's Autopilot, for example, uses vision-based machine learning models to identify lane markings, traffic lights, and potential hazards. Machine learning models can also analyze medical scans such as X-rays, MRIs, and CT scans, detecting irregularities and risk factors that a human doctor might miss. In 2018, for instance, the world of medicine was rocked by news of a deep learning system that could flag Alzheimer's roughly six years earlier than conventional diagnosis.
In particular, 2023 was the year of AI. Students began not only to use but to rely on AI tools for research and tutoring. Businesses employed AI to automate recurring tasks and track time, cutting costs and boosting efficiency. Customer-support teams adopted virtual assistants and online chatbots to provide 24/7 service.
Creatives like artists, designers, and writers also use AI to shave hours off their projects. Social media wasn't left out either: Instagram, Snapchat, and X have since built Meta AI, My AI, and Grok, respectively, into their interfaces.
Artificial intelligence gets new regulations
The AI boom was a double-edged sword. Despite its incredible strides, AI's rapid rise brought challenges that can't be overlooked. Some models, for instance, were trained on unbalanced datasets that can entrench inequality or bias. There are also privacy concerns, since AI applications rely on personal data to operate effectively.
Some jobs are at risk as AI takes over tasks once handled by humans, often with better accuracy and lower cost. In response, many governments and regional and international bodies have created, or are creating, regulations to govern the ethical and fair use of AI. Regulators are pushing for transparency, requiring companies to explain how their models work and why they make specific decisions.
For instance, the European Union created the AI Act. This legal framework ensures that AI systems meet safety and accountability standards before they even launch. The Act addresses transparency by requiring companies to disclose when users interact with AI. This is particularly important for fields like hiring or healthcare as they directly impact people's lives.
Safety provisions require the testing and risk assessment of AI systems to weed out biased or harmful behavior, and high-risk AI must meet strict standards for accuracy and human oversight. The Act, which entered into force in August 2024 as the world's first comprehensive AI regulation, also obliges companies to invest more heavily in compliance. It certainly won't be the last.
The future of artificial intelligence
The future of artificial intelligence still sounds like a science fiction novel. AI is here to stay, but there's much left to do to bring our ideas to life. One exciting prospect is combining AI with other emerging technologies; there are whispers of nuclear-powered AI, for instance, and quantum computing could yield more powerful AI that helps design new drugs, model climates, and solve complex problems. Similar advances in AI-powered robotics could revolutionize industries from product manufacturing to elder care.
You can also expect AI to bridge language barriers, create custom educational experiences, and empower small businesses with creative tools. The possibilities are truly endless; you can even imagine a future where AI robots join society, filling ranks in police forces and other jobs that put human lives at risk. While the future seems bright, job displacement and over-reliance on AI are just some of the problems the world will have to face as AI reshapes our lives.