5 Things You Should Never Tell ChatGPT

ChatGPT responds to about 2.5 billion prompts each day, with the US accounting for 330 million of these. Unlike a search engine, which returns a list of websites that may or may not contain the answer to our query, an AI response reads more like a reply from a friend. People are using AI tools like ChatGPT in some very weird ways, but caution is essential when sharing information.

Chatbots like ChatGPT are not one-way tools that fire out responses based on a static database of information. They are continually evolving systems that learn from the data they're fed, and that information doesn't exist in a vacuum. While systems like ChatGPT are designed with safeguards in place, there are significant and warranted concerns about how effective those safeguards really are.

For instance, in January 2025, Meta AI fixed a bug that had allowed users to access private prompts from other users. With ChatGPT specifically, earlier versions were susceptible to prompt injection attacks that allowed attackers to intercept personal data. Another exposure came from the unfortunate tendency of Google (and other search engines) to index shared ChatGPT chats and make them publicly available in search results.

This means that the basic rules of digital hygiene that we apply to other aspects of our online presence should equally apply to ChatGPT. Indeed, given the controversy surrounding the technology's security and its relative immaturity, it could be argued that even more prudence is required when dealing with AI chatbots. Bearing this in mind, let's look at five things you should never share with ChatGPT.

Personally Identifiable Information

Perhaps the most obvious starting point is the sharing (or preferably not) of personally identifiable information (PII). As an example, the Cyber Security Intelligence website recently published an article based on research by Safety Detectives, a group of cybersecurity experts. The research looked at 1,000 publicly available ChatGPT conversations, and the findings were eye-opening: users frequently shared details like full names and addresses, ID numbers, phone numbers, email addresses, and usernames and passwords. The latter is especially relevant given the rise of agentic AI browsers like Atlas, OpenAI's ChatGPT-based AI-powered browser.

There is no doubt that ChatGPT is genuinely helpful for tasks such as resumes and cover letters. However, it does the job perfectly well without unnecessarily including personal details. Placeholders work just as well, as long as you remember to swap the real details back in afterward to avoid that critical letter going out from John Doe, Nowhere Street, Middletown. Ultimately, this one simple step keeps sensitive data like names, addresses, and ID numbers, all of which can be misused, from falling into the wrong hands.

Another option is to opt out of letting ChatGPT use your chats for model training. This can be done from within the ChatGPT settings, and full instructions can be found on the OpenAI website. Importantly, this doesn't mean that it's suddenly okay to share your PII, but it does reduce the chances of any inadvertently shared information becoming publicly available. In short, sharing your PII with ChatGPT is not something you ever need to do, and it should be avoided in all circumstances.

Financial details

Another common way people use ChatGPT is as a personal finance adviser. This could be something as simple as creating a monthly budget or as complex as working out an entire retirement strategy. First, as OpenAI freely admits, "ChatGPT can make mistakes. Check important info." That is why many experts advise against relying on such tools without verifying critical information with a financial professional. That being said, ChatGPT can be helpful with financial matters, but there is never a need to share any personal financial details with it.

While budgeting and financial planning are common use cases, there are other times when users might be tempted to enter financial details: understanding a bank statement, reviewing a loan offer, or using the aforementioned agentic AI browsers. In each of these situations, real details are rarely necessary, and placeholder information can be entered instead. In cases where users upload financial statements, redacting PII first also works, as in the sketch below.
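For readers comfortable with a little scripting, here is a minimal illustrative sketch of that kind of redaction in Python. The patterns, labels, and sample statement line are hypothetical examples, and simple regular expressions like these are only a rough first pass, not a substitute for reading through a document yourself before sharing it.

    import re

    # Rough, illustrative patterns for common PII; not exhaustive and easy to fool.
    # "account" runs before "phone" so long digit runs are masked first.
    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "account": re.compile(r"\b\d{8,17}\b"),  # long digit runs, e.g. account numbers
        "phone": re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace likely PII with labeled placeholders before pasting text into a chatbot."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label.upper()}]", text)
        return text

    # Hypothetical example line from a statement export.
    statement = "Payment from jane.doe@example.com, acct 123456789012, call (555) 123-4567."
    print(redact(statement))
    # Payment from [EMAIL], acct [ACCOUNT], call [PHONE].

Even then, treat the output as a starting point: anything the patterns miss still ends up in the chat.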

Sensitive information includes bank account numbers, credit card details, investment account credentials, and tax records. Chatbots don't operate within the secure frameworks designed to protect financial transactions. Essentially, this means that once entered, that information exists outside the safeguards normally applied to financial data. In the worst cases, this could lead to sensitive financial data falling into the hands of 'bad actors' who could then use it to carry out financial fraud, identity theft, ransomware attacks, phishing, or all of the above. 

Medical details

We won't go too deeply into the argument of whether people should use ChatGPT for medical advice; suffice it to say that we can refer you to our previous point: "ChatGPT can make mistakes." The point here is not whether you should use it for medical advice, but whether you should share your medical details with it, especially with the often-mentioned PII in tow. That distinction matters because a growing number of people are turning to AI chatbots like ChatGPT for health-related information. According to a recent poll, around one in six adults use AI chatbots at least once a month for health information; the number rises to one in four for younger adults.

Again, the risk arises when general discussions start to include specifics. Information such as diagnoses, test results, medical history, and mental health issues can quickly become sensitive, especially when combined with identifying information. As with financial information, the problem is heightened because such data, once entered, sits outwith health data protection frameworks, meaning that when it's 'in the wild', users have little visibility or control over how it's handled.

The fact is that people can feel more comfortable sharing personal information with ChatGPT than they would with a plain old Google search. The conversational interaction feels more human than an impersonal list of search results, and this can lead to a greater willingness to divulge personal medical information.

ChatGPT can be useful for understanding medical concepts in broad terms, but it shouldn't be treated as offering the confidentiality of a doctor's surgery.

Work-related materials

Aside from personal data, there's another category of information that doesn't belong in ChatGPT, at least in unfiltered form: confidential or proprietary work-related material. This includes anything linked to an employer, a client, or any ongoing project that isn't cleared for public exposure. While it can be tempting to use ChatGPT to summarize documents, rewrite emails, or check and edit reports, doing so can introduce unnecessary risks to the integrity of what is often protected data.

As an example, let's go back to the medical scenario, but this time from the professional viewpoint. A busy doctor might be tempted to share a draft patient summary, clinical notes, or a referral letter with ChatGPT to help tighten language or simplify complex subject matter. While the intent is efficiency, sharing such details potentially puts the material into the public domain, or at least moves it outside the security procedures designed to protect it. Creative works and intellectual property also fall into the 'do not share' category.

In short, a running theme sums up the problem: never share anything with AI chatbots like ChatGPT that you wouldn't be comfortable placing on a public-facing platform or handing to third parties outside of an acceptably secure and regulated system. Used with a little due diligence, chatbots can be incredibly useful tools. ChatGPT is full of features you ignore but probably shouldn't, but staying secure is one thing you should never overlook.

Anything illegal

Finally, sharing anything illegal with ChatGPT is best avoided: not only is OpenAI committed to disclosing user data in response to valid U.S. legal processes, but it will also comply with international requests.

Laws change all the time, both at home and abroad, and ordinary behaviors can become criminalized in a short space of time, so it's best to be circumspect about what you reveal to companies, since that information can later be handed over to law enforcement.

OpenAI does have safeguards in place that should prevent ChatGPT from being used for illegal or unethical purposes, and testing these boundaries may get you flagged. That includes asking it how to commit crimes or fraud, or trying to influence people into taking potentially harmful actions. It also has specialized "pipelines" that specifically filter out chats where the system detects that a user is planning to harm others.

While the latter is perhaps the most serious and relatively easy to flag, other illegal misuses of ChatGPT show that its safeguards aren't bulletproof. The use of the platform to write malicious code and to automate social engineering schemes is well documented, which only highlights the importance of a caution-first approach when using ChatGPT.
