Created using AI with the prompt, "A giant robot politely handing a tiny coin to a human, cinematic shot, dramatic lighting"

The Politeness Paradox: Sam Altman’s Joke About ‘Please’ and ‘Thank You’ Costs

A humorous exchange on X (formerly Twitter) has sparked unexpected debate about the financial impact of being polite to AI. When a user jokingly asked OpenAI CEO Sam Altman how much money the company has lost due to users saying “please” and “thank you” to ChatGPT, Altman quipped that it cost “millions well spent.”

What was clearly meant as a lighthearted joke was nonetheless taken seriously by many on social media and in tech news outlets, leading to genuine discussions about the cost implications of politeness in AI interactions.

Breaking Down the Numbers

Let’s put this in perspective with some basic math. Even with extremely generous assumptions, the actual cost would be negligible compared to OpenAI’s overall operational expenses:

  • OpenAI has approximately 500 million users
  • If each user averaged twenty tokens (words or parts of words) of politeness
  • And if every user exclusively used the most expensive model (o3)

The total cost would reach around $100,000 at the absolute maximum—nowhere near the “millions” jokingly suggested by Altman.
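For anyone who wants to sanity-check that figure, here is a minimal back-of-envelope sketch in Python. The user count, politeness-token allowance, and per-token price are the generous assumptions listed above (o3 list pricing of roughly $10 per million input tokens at the time), not official OpenAI figures.

```python
# Back-of-envelope estimate of the cost of processing polite phrases.
# All inputs are rough assumptions from the article, not official figures.

users = 500_000_000           # approximate ChatGPT user base
polite_tokens_per_user = 20   # generous allowance for "please"/"thank you" tokens
price_per_million_usd = 10.0  # assumed o3 input price, USD per 1M tokens

total_tokens = users * polite_tokens_per_user              # 10 billion tokens
total_cost = total_tokens / 1_000_000 * price_per_million_usd

print(f"Total polite tokens: {total_tokens:,}")
print(f"Estimated cost: ${total_cost:,.0f}")               # about $100,000
```

Even doubling the per-user token allowance only moves the estimate to roughly $200,000, still far from “millions.”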

Compared to the billions spent on AI inference overall, the cost of processing “please” and “thank you” is essentially a rounding error in OpenAI’s budget. It’s like worrying about the cost of napkins at a restaurant that serves thousands of meals daily.

Visual representation of politeness costs (~$100K) compared to total AI inference costs (billions) – not to scale; the actual difference is far greater.

The Human Side of AI Interactions

The fact that this joke gained traction raises interesting questions about how we interact with AI systems. According to community discussions on the OpenAI forum, approximately 70% of users are polite to AI assistants, despite knowing they’re communicating with a machine.

This behavior stems from several factors:

  • Force of habit – Many people automatically use courteous language when asking for assistance
  • Better responses – Some users believe polite prompts yield more helpful answers
  • Social conditioning – Treating any entity providing a service with respect feels natural
  • Future-proofing – A humorous concern that being rude to AI might come back to haunt us if these systems ever achieve sentience

Does Politeness Actually Change AI Behavior?

There’s an interesting technical dimension to this discussion. While early chatbots might have been programmed with explicit rules to respond differently to polite requests, modern large language models like GPT-4 learn patterns from the data they’re trained on.

Since polite language is common in human conversations, these models have absorbed patterns where politeness correlates with constructive, helpful exchanges. This means there may actually be a subtle difference in how AI systems respond to requests phrased politely versus demands phrased abruptly.

For example, a polite request might trigger response patterns associated with thoughtful, detailed explanations, while curt demands might activate patterns linked to more direct, minimal answers. This isn’t because the AI has feelings that can be hurt, but because the statistical patterns in its training data show correlations between language style and response type.

The Business Perspective

From a business standpoint, Altman’s joke highlights something significant: OpenAI fundamentally values natural human interaction with their AI systems. The fact that he framed politeness costs as “well spent” (even jokingly) shows that the company sees value in maintaining human communication norms within AI interactions.

This approach stands in contrast to viewing AI purely as a tool to be commanded. OpenAI has consistently designed their interfaces and models to facilitate conversation-like interactions rather than purely transactional commands.

It’s a subtle but important distinction in how different companies approach AI development. Some focus exclusively on efficiency and utility, while others (like OpenAI) seem to place additional value on preserving aspects of human communication styles.

The Technical Reality

To understand why this joke is so clearly hyperbolic, it helps to understand how AI models process text. Modern AI models like those powering ChatGPT work by processing “tokens” – which are essentially parts of words or sometimes whole words.

The phrases “please” and “thank you” typically require one or two tokens each. Even with OpenAI’s most expensive models, the cost per token is measured in thousandths of a cent. Multiplied across hundreds of millions of users, this still amounts to a tiny fraction of overall operating costs.

To put it in perspective, the compute resources required to generate a thoughtful paragraph-long response far outweigh the minimal resources needed to process a brief polite phrase.

Component                   | Approximate Tokens | Relative Cost
“Please help me with…”      | 4-5                | Minimal
“Thank you for your help”   | 5-6                | Minimal
Typical user question       | 15-30              | Moderate
AI-generated response       | 50-500+            | Significant
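
If you want to check the token counts yourself, OpenAI’s open-source tiktoken library can tokenize phrases locally. Here is a minimal sketch; the cl100k_base encoding (used by GPT-4-era models) is an assumption, and exact counts vary slightly between models.

```python
# Count tokens for a few sample phrases with OpenAI's tiktoken library.
# Requires: pip install tiktoken
import tiktoken

# Encoding choice is an assumption; newer models use different encodings,
# so counts may differ by a token or two.
enc = tiktoken.get_encoding("cl100k_base")

for phrase in ["Please help me with...",
               "Thank you for your help",
               "please",
               "thank you"]:
    token_ids = enc.encode(phrase)
    print(f"{phrase!r}: {len(token_ids)} tokens")
```

At a price measured in thousandths of a cent per token, even the longest of these phrases costs a vanishingly small amount to process.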

The Social Media Factor

The fact that this joke was taken seriously illustrates how easily misinterpretation can spread online, particularly around technical topics like AI costs. Social media platforms tend to amplify brief statements without their full context, leading to misunderstandings that can quickly take on a life of their own.

Jokes about technical matters can be especially prone to misinterpretation because many people lack the background knowledge to recognize the hyperbole. When an authority figure like Sam Altman makes a statement about AI costs, even in jest, it can be taken as factual by those without the technical context to evaluate its plausibility.

Being Polite to AI: Practical Benefits

Setting aside costs, there are practical reasons some users prefer polite interactions with AI:

  • Better framing of requests – Taking the time to phrase requests politely often leads to clearer articulation of what you’re asking for
  • Model training influences – AI models were likely trained on datasets where polite requests received more thorough responses
  • Maintaining communication habits – Using consistent communication styles across human and AI interactions prevents context-switching

Whether you choose to be polite to AI or not is entirely up to personal preference. The financial impact on companies like OpenAI is negligible either way.

The Broader Discussion: Humanizing AI

Beyond the immediate joke, the conversation about politeness touches on a deeper trend: our tendency to humanize AI systems. We project human traits and social norms onto these tools, even when we know they are complex algorithms.

This isn’t necessarily a bad thing. It reflects our innate desire to connect and communicate in familiar ways. However, it can lead to misconceptions, as seen with the cost joke. It’s important to balance this natural human inclination with a clear understanding of AI’s technical realities.

The development of AI models that are designed to be more conversational and less like command-line interfaces reinforces this humanization. OpenAI and others are building systems that are easier and more natural for humans to interact with, which in turn encourages more human-like communication from users. This creates a feedback loop, where user behavior influences AI design and vice-versa.

The Role of Benchmarks and Real-World Performance

The discussion about AI costs and user behavior also indirectly relates to how we evaluate AI models. Benchmarks often focus on raw performance metrics or specific task completion, but they rarely capture the nuances of human-AI interaction or the efficiency of communication styles. As I’ve noted before, benchmarks don’t always reflect real-world usability. For instance, while some OpenAI models might score high on certain tests, other models like Claude often perform better in practical coding tasks.

This incident highlights that user experience, including the comfort and naturalness of interaction, is a crucial factor in AI adoption and perceived value, even if it’s not something easily measured by traditional benchmarks. The fact that 70% of users are polite suggests that people value a certain style of interaction, regardless of the underlying technology’s cost structure or benchmark scores.

AI and Misinformation

The rapid spread of the politeness cost joke also serves as a case study in AI misinformation. Simple, seemingly authoritative statements, even if made in jest, can gain traction and be reported as fact if they sound plausible to the general public. This underscores the need for critical thinking and verification when consuming information about AI, especially from social media.

The complexity of AI technology makes it fertile ground for misunderstandings. Terms like “cost,” “training,” and “intelligence” can be interpreted in many ways, and without a solid technical background, it’s easy to fall for simplistic or misleading narratives. This is why clear communication from AI developers and the media is crucial.

The Future of AI Interaction

Will AI interactions become more or less human-like in the future? As AI systems become more integrated into our daily lives and work, the way we communicate with them will likely continue to evolve. We might develop new forms of “AI etiquette” that are specifically tailored to interacting with artificial intelligence, or we might simply continue to apply existing human social norms.

The development of more sophisticated AI agents and workflows could also change how we interact. Instead of just conversational interfaces, we might see more task-oriented interactions where politeness is less relevant than clarity and efficiency. However, as I’ve discussed regarding agents versus workflows, the distinction is often unclear, and workflows are currently more practical for most business uses.

Ultimately, the future of AI interaction will likely be a blend of technical efficiency and human comfort. Companies that can strike the right balance, creating powerful tools that are also intuitive and natural to use, will likely be the most successful.

Final Thoughts

Sam Altman’s joke about the cost of politeness was just that—a joke. The actual financial impact of processing polite phrases is minimal compared to the overall operational costs of running AI models at scale.

What’s interesting is not the supposed cost, but rather what this discussion reveals about human-AI interaction patterns. Our tendency to apply social norms to AI systems, even when we rationally know they’re not necessary, shows how deeply ingrained these communication patterns are.

So should you say “please” and “thank you” to ChatGPT? Feel free to interact however you’re comfortable—you won’t bankrupt OpenAI either way. But if being polite makes the interaction feel more natural to you, there’s certainly no harm in maintaining those habits across all your conversations, whether with humans or AI.

And perhaps there’s something oddly reassuring about the fact that, even as technology advances, we bring our human values and communication styles along with us—even when they’re technically unnecessary.

This incident also highlights the importance of understanding the technical realities behind AI and being critical of information shared online, especially when it comes to complex topics like operational costs. The true value of AI lies not just in its capabilities, but in how seamlessly and effectively it integrates into our lives and workflows.