Bard As A Life Coach Is Google's Worst AI Idea So Far

Google wants to turn generative AI into some sort of personal coach for humans, one that can give users life advice. Citing internal documents, The New York Times reports that Google is working on AI that is capable of performing "at least 21 different types of personal and professional tasks, including tools to give users life advice, ideas, planning instructions and tutoring tips." The company has reportedly hired experts with doctorate degrees across multiple disciplines to refine its ambitious idea.


The supposed life coach AI will be able to process complex, multi-sentence inputs and provide situation-specific recommendations accordingly. It would help users acquire new skills and create personalized workout and budget plans. Google's Bard chatbot can already do some of this, but it appears Google wants to give it an even more personal touch.

The company has already started pitching an AI tool called "Genesis" to journalists, though it hasn't been released publicly yet. It's unclear whether Google's life advice AI will arrive as a standalone product or be integrated into Bard. It's also possible these skills will make their way to Google Assistant.

Google is reportedly exploring generative AI features to "supercharge" Google Assistant. A virtual assistant would be the reasonable bet for life coaching, since it lives locally on a device and has system-level access to information, including data collected from fitness apps. A chatbot, on the other hand, is tethered to the internet.


A long trail of AI errors

Google won't be the first to rope AI into some kind of human advisory role. However, Google's deployment of an AI life coach, even for meal plans, could backfire.

Earlier in 2023, the National Eating Disorders Association had to shut down its AI chatbot, Tessa, after it started giving harmful advice. The Center for Countering Digital Hate (CCDH), a U.K.-based nonprofit, found that AI is dangerously adept at doling out health misinformation. History hasn't been kind to AI morality attempts, either.


The Allen Institute for AI launched its own AI oracle, called Delphi, in October 2021. Tasked with helping humans work through moral and ethical dilemmas, it quickly went viral on social media. It didn't take long for Delphi to falter spectacularly: It was extremely easy to manipulate into endorsing racist and homophobic statements, making genocidal suggestions, and even telling users that eating babies is OK.

A paper published in AI and Ethics makes a strong case for deploying AI as a moral expert, but its concluding lines also warn that "the consequences of getting it wrong are far more serious." There is almost always an "if" involved.

Regarding the deployment of AI as a means of moral enhancement, another paper, published in the journal Studies in Logic, Grammar and Rhetoric, notes that AI is acceptable as a moral advisor "if it can play a normative role and change how people behave."
