Saying These Simple Words To ChatGPT Is Costing OpenAI Millions Of Dollars

Growing up, we were all taught to be polite, but when you're one of the world's foremost AI companies, pleasantries can be expensive. According to OpenAI CEO Sam Altman, his firm is losing tens of millions of dollars thanks to well-mannered users. It's no secret that generative AI is an expensive business, but what appears to be less well known is that, at least so far, it's an unprofitable one. Generative AI models, with the notable exception of China's DeepSeek, have cost companies like OpenAI and Google billions. OpenAI lost about $5 billion in 2024, despite ending the year with 15.5 million paid subscribers. The company nonetheless closed out a $40 billion funding round at the end of March 2025, more than double the largest sum any company had previously raised in a single round.


AI is an expensive business for several reasons. First, a massive amount of power is required to run the data centers where models are trained and run. One Washington Post estimate found that generating just 100 words uses enough electricity to keep 14 LED bulbs lit for an hour, along with over 519 milliliters of water. That's more than an average water bottle. Factor in the cost of supercomputers from companies like NVIDIA, as well as the cost of top-tier AI talent, and you've got a business that churns through cash at an alarming pace. And then it turns out that being polite to an AI tacks on millions more in additional costs. Here's what you need to know.

Please and thank you are expensive courtesies, Sam Altman claims

In a reply on X (formerly Twitter) to someone who wondered how much money OpenAI has spent on electricity as a result of people saying "please" and "thank you" to ChatGPT, Sam Altman spilled the details: "tens of millions of dollars well spent–you never know," he posted.


To explain why niceties cost OpenAI money, there's a bit of basic (and heavily watered down) computer science to unpack. Words you type into ChatGPT and other AI chatbots are broken down into units called tokens, and each token takes computing resources to process and respond to. As noted above, generating as few as 100 words uses quite a bit of power and water, and therefore money. When you're polite to an AI bot, you're likely to get some extra text back acknowledging your courtesy, in addition to the usual output. So, on top of the cost of processing pleasantries like "please" and "thank you," there's the extra cost attached to the chatbot saying, "You're welcome."
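To make that concrete, here is a minimal sketch using OpenAI's open-source tiktoken library, which converts text into the same kind of token IDs the models are billed against. The choice of the "cl100k_base" encoding (used by GPT-4-era models) and the example prompts are assumptions for illustration only; real ChatGPT requests also carry hidden system and formatting tokens that aren't counted here.

import tiktoken

# Tokenizer assumed for illustration; GPT-4-era models use this encoding
enc = tiktoken.get_encoding("cl100k_base")

blunt = "Summarize this article."
polite = "Please summarize this article. Thank you!"

# encode() turns each prompt into a list of integer token IDs;
# every token consumes compute (and therefore electricity) to process
print(len(enc.encode(blunt)), "tokens without pleasantries")
print(len(enc.encode(polite)), "tokens with please and thank you")

The polite prompt encodes to a handful of extra tokens. Each one costs a tiny fraction of a cent to process, but multiplied across the enormous volume of prompts ChatGPT handles every day, plus the "You're welcome" replies they invite, those fractions add up to the kind of bill Altman is describing.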

Being nice is rarely a bad choice, even when talking to a non-sentient AI. However, OpenAI is so expensive to run that Altman has claimed the company loses money even on its top-tier, $200-a-month ChatGPT Pro subscriptions. Although the company seems able to hoover up investment capital like Cookie Monster at a Keebler factory, it's hard to imagine it has any desire to spend more than necessary. So why did Altman call the extra resources needed to process a kind word money well spent?


Should you be polite to AI?

Altman's assertion that the money spent processing politeness is "well spent" because "you never know" seems to imply that humanity had better mind its Ps and Qs around AI or else suffer its wrath in the future. Altman has employed similar rhetoric many times in recent years, which some view as a marketing strategy. What better way to convince investors your product will eventually make them trillions of dollars than to claim it will one day become a supreme AI intelligence? If you believe that's where AI is headed, you're likely to conclude that an ownership stake in that intelligence is worth every penny.


But does that mean you shouldn't be polite to AI? Not necessarily. Research on the influence of prompt politeness on LLM performance from Waseda University in Japan suggests that talking nicely to a large language model can improve its output, with polite prompts tending to yield more accurate responses. It's not as if ChatGPT has feelings, but there are plausible explanations for this phenomenon.

AI sounds so humanlike because it is very good at understanding context. Since it is trained on vast datasets of text scraped from across the web, it has seen the full range of conversations, both polite and impolite. Think about the times you've come across an online argument that devolved into insults and attacks: chances are there wasn't a lot of useful information in that exchange. Contrast this with a polite exchange of ideas where both parties shared their perspectives in good faith, likely leading to better outcomes. ChatGPT has innumerable examples of both types of interactions in its training data, and when you use polite language, it may be more likely to draw upon the latter, leading to better answers.
