Why Some May Not Trust Using Gemini In Their Google Workspace Account

As it competes against other companies in the AI race, Google is pushing its Gemini AI into every corner of its digital empire. The now-ubiquitous chatbot is usurping the old Google Assistant on Android smartphones and smart home devices. Meanwhile, for those with a Gemini Advanced subscription, or with a Business or Enterprise account, it has taken up a secretary's station in Gmail, and AI has been injected into other Google Workspace tools, from Docs to Slides. It's hard to even open some Google products anymore without being bombarded by pop-ups hyping all the new features Gemini has introduced.

However, some users have sworn these features off, and some even wish they could get rid of them entirely. While it might seem counterintuitive to reject added functionality in the apps you use, generative AI raises unique concerns for a significant portion of the public. Data privacy and intellectual property rights could take a backseat as Google and other companies slurp up training data, marking a significant erosion of user control over personal information.

There are also concerns about Google's increasingly anticompetitive business practices, which most recently culminated in an antitrust suit in which the search giant was declared a monopoly. Finally, AI output itself isn't always trustworthy, and it has caused a range of problems for professionals who entrusted their work to it. So, let's break down the reasons why some users may not trust Gemini in Google Workspace apps, and assess how legitimate those concerns are.

AI can be a massive privacy risk

The first reason it's hard for some to trust Gemini is one of motive. Since tech companies are competing against one another to build the most advanced AI models, the search for training data has become a digital gold rush. With the public internet already consumed, fresh data is an increasingly scarce resource, and AIs trained on their own output tend to experience a systemic meltdown known as model collapse. That means the one major wellspring left largely untapped is user data.

Every day, humans generate huge quantities of data that would be valuable for AI training, in the form of text messages, emails, medical records, legal filings, phone calls, voice messages, and more. However, AI companies can't simply train on private data, so they are increasingly incentivized to goad users into forking it over.

The desire AI companies have for users' personal data is what's known as a perverse incentive. It would be undeniably bad for OpenAI, Google, or Apple to have unfettered, unencrypted access to users' private communications, let alone their work, legal, or medical documents. The user relationship with Google used to be a mutually beneficial arrangement: Google didn't go snooping through the stuff you stored on its servers, and in exchange, you paid for cloud storage and let it supply you with email and other services. Now, though, Google might have an incentive to break that trust, which was already running thin.

For people working in professional environments where accuracy is paramount, the propensity of Gemini and other LLMs to generate misinformation renders them useless at best and dangerous at worst. It is for this reason that SlashGear and many other publications have adopted editorial policies that strictly forbid AI-generated copy.

Is Google actually looking at your Workspace files?

Is there really reason to worry about Google sifting through your private data? It's hard to say. Google appears to spell its intentions out in plain text on its Privacy Policy page, declaring, "We also collect the content you create, upload, or receive from others when using our services. This includes things like email you write and receive, photos and videos you save, docs and spreadsheets you create, and comments you make on YouTube videos." 

Cut and dried, right? Not really, because then there's this support page, which claims your data in Workspace apps is processed only to offer "services like spam filtering, virus detection, malware protection and the ability to search for files within your individual account." Furthermore, Google told Business Insider that its AI is only trained on "publicly available" Docs, which it defines as those with a link that has been set to public and shared somewhere web crawlers can see it. That was about a year ago, though, so things may have changed.

According to research from VPN provider Surfshark, Gemini collects the most user information of any mainstream AI chatbot, squirreling away 22 of the 35 data types the analysis looked at. That includes precise location data, personal contact information such as addresses, phone contacts, and even browsing history. If those findings are accurate, Google is clearly taking as much data as it can get away with. The real question is whether you trust Google with your data, and trust is a two-way street, one that Google has filled with far too many potholes for some users to ever cruise down smoothly again.
