Google Taps Co-Founders Larry Page And Sergey Brin In AI Push To Combat ChatGPT

Google has had to call back two of its founders after a rival's AI project threatened to knock it off its search engine perch. OpenAI was founded several years ago, with Elon Musk counted among the company's early backers. Its projects have been hitting the headlines for a few reasons lately. DALL-E, which can create art based on prompts from users, helped spark a debate over the validity of AI art and its effect on human artists. The company's latest project, ChatGPT, has seemingly threatened Google's search supremacy in a similar way.

ChatGPT has a large number of functions. It can write you a song, put together basic code, and solve math problems. It can also break down complex topics into easy-to-digest chunks, which may be what is worrying Google at the moment. While search engines are designed to harvest information and nudge you toward ads, many people use them simply to find answers to questions. In fact, "Google it" has become a synonym for "look it up" in recent years. Despite privacy concerns and other issues, Google remains the most popular search engine in the world.

According to HubSpot, Google accounts for around 80% of the search market. Its nearest rival, Bing, which is often the butt of search engine jokes, handles only around 15% of searches. If ChatGPT can find and relay information even a little better, Google's massive market share may take a dent. So, as you can imagine, the company is pulling out all the stops to stay on top.

Google founders called back after three-year absence

According to The New York Times, the success of OpenAI's chatbot has caused Google to issue a "Code Red." In an attempt to secure the tech giant's position, Alphabet CEO Sundar Pichai has asked two of the company's founders to return and give their take on Google's AI strategies. 

In December, Larry Page and Sergey Brin, who left their full-time roles at the company in 2019, held several meetings with high-ranking staff at Google. Until then, the pair had largely limited their involvement in the company to discussions of "moonshot projects," ambitious, high-risk ideas pitched by a wide range of inventors that could pay off in a big way if they work out.

Their recent involvement was a lot more serious. The two former executives reviewed the company's artificial intelligence strategy and, if the sources are correct, outlined plans to build more AI features into Google's search functions.

Google's future use of AI may not be limited to expanding its search functions. Advanced artificial intelligence could also theoretically help fill the gap left by the company's recent layoffs. On January 20, Alphabet, Google's parent company, announced wide-scale layoffs. Around 6% of the company's staff, or 12,000 people, were shown the door as part of wider restructuring plans.

It is unknown how big a role AI will play in the company's future, but Alphabet isn't starting from scratch. It's already had a bot that made the news last year because it was a little too convincing.

Google has already had issues with convincing AI

The quest to pass the Turing test may still be ongoing, but Google has already produced an AI that was convincing enough to fool one of its own engineers. Last year, Blake Lemoine claimed that the company's LaMDA AI had gained "sentience, personhood, and a soul." He even went as far as hiring a lawyer in an attempt to protect the AI's rights.

Lemoine was initially placed on administrative leave, then ultimately fired by the tech giant for his AI-based antics. The AI itself, as convincing as it seems to be, still has no legal rights; it is code that belongs to Google, not a sentient being capable of feeling emotions.

Google's LaMDA AI is essentially a highly advanced chatbot. Unlike primitive chatbots, which pick set responses from a pre-written list based on dialogue triggers, LaMDA is capable of "learning" as it goes, so user input shapes and improves the answers the bot gives. In theory, this should make the answers more accurate, cover a wider range of topics, and sound more "human."
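As a rough illustration of that distinction, here is a minimal Python sketch; it is not based on Google's code, and the model object, its generate method, and the feedback scores are hypothetical stand-ins. A primitive chatbot simply maps trigger words to canned replies, while a learning-style bot ranks generated replies using feedback gathered from users over time.

```python
# Purely illustrative sketch -- nothing here reflects how LaMDA actually works.

# A "primitive" chatbot: fixed dialogue triggers mapped to pre-written responses.
CANNED_RESPONSES = {
    "hello": "Hi there! How can I help?",
    "hours": "We're open 9 to 5, Monday through Friday.",
}

def rule_based_reply(message: str) -> str:
    """Return a canned reply when the message contains a known trigger word."""
    for trigger, response in CANNED_RESPONSES.items():
        if trigger in message.lower():
            return response
    return "Sorry, I didn't understand that."

# A learning-style chatbot (hypothetical): generate candidate replies with a
# trained language model, then rank them using feedback from past users.
def learned_reply(message: str, model, feedback_scores: dict) -> str:
    """Pick the candidate reply that past user feedback has rated highest."""
    candidates = model.generate(message, num_candidates=3)  # hypothetical API
    return max(candidates, key=lambda reply: feedback_scores.get(reply, 0.0))

print(rule_based_reply("Hello, what are your hours?"))  # -> "Hi there! How can I help?"
```

The rule-based version can only answer what its authors anticipated, whereas the learning-style version, in principle, improves as more people talk to it, which is the behavior the article describes.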

This style of AI can also go pretty wrong, as Microsoft found out a few years ago when users taught its Tay chatbot to post offensive messages within a day of its launch. LaMDA can also take on several personas, such as a paper airplane or the Earth itself, and speak from the perspective of that form. Like most mainstream AIs, Google's project is still being developed. But given Google's resources, the huge leaps we've seen in recent years, and the wide range of applications AI is being used for, we could be in for a pretty exciting AI arms race in the near future.
