We’ve got intelligence on tap. You turn the handle, AI flows out. Pay X dollars, get Y amount of thinking power. It’s that simple, and it’s completely insane when you step back and think about it. Large Language Models are starting to look like public utilities in ways that would make electricity companies jealous.
Think about it: APIs deliver LLM capabilities anywhere, just like electrical sockets provide power universally. Once a model is trained, each additional inference costs almost nothing — similar to how producing more electricity is cheap once a power plant is running. Text input and output work as a consistent interface, like standardized voltage in power grids. The infrastructure requirements are massive too: training LLMs needs huge data centers and computational clusters, comparable to building power plants.
But here’s where the analogy gets interesting and breaks down at the same time. With utilities, improvements usually mean more for the same price, or the same for less. With AI, when we reach a new level of intelligence, it opens up completely new capabilities. Not just better, faster, or cheaper — entirely new possibilities.
The Utility Infrastructure: APIs as Universal Power Outlets
The API economy around LLMs mirrors the electrical grid more than people realize. Just as you can plug any device into a wall socket and expect consistent power, you can hit an API endpoint and get consistent intelligence. The standardization is remarkable — text goes in, text comes out, regardless of whether you’re using GPT-4, Claude, or Gemini.
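That "text in, text out" uniformity is easy to demonstrate. The sketch below reduces every provider to the same shape, a function from prompt string to completion string; the backends here are stubs standing in for real API clients (a real one would POST to the provider's endpoint with an API key), so the names and wiring are illustrative assumptions, not any vendor's actual SDK.

```python
from typing import Callable

# The universal socket: a completion is just text in, text out.
Completion = Callable[[str], str]

def make_fake_backend(name: str) -> Completion:
    """Stub standing in for a real API client (hypothetical; a real
    backend would call the provider's HTTP endpoint)."""
    def complete(prompt: str) -> str:
        return f"[{name}] response to: {prompt}"
    return complete

# Swapping providers changes nothing downstream.
providers = {
    "gpt": make_fake_backend("gpt"),
    "claude": make_fake_backend("claude"),
    "gemini": make_fake_backend("gemini"),
}

def ask(provider: str, prompt: str) -> str:
    return providers[provider](prompt)

print(ask("claude", "Summarize this paragraph."))
```

The application code in `ask` never knows or cares which model is behind the socket, which is exactly the property that makes the electrical-grid comparison work.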
This universal access changes everything. A startup in Ohio can tap into the same computational intelligence as a tech giant in Silicon Valley. The barrier to entry for AI-powered applications has collapsed to almost nothing. You don’t need to train your own models, maintain inference hardware, or hire PhD researchers. You just need an API key and some creativity.
The cost structure mirrors utilities perfectly. Training a frontier LLM costs hundreds of millions of dollars — think power plant construction. But once it’s running, serving individual requests costs fractions of a penny. This creates the same economic dynamics as electricity: high fixed costs, low marginal costs, economies of scale.
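The high-fixed-cost, low-marginal-cost dynamic is easy to see in a back-of-the-envelope calculation. The figures below are assumptions for illustration, not any provider's actual numbers: the point is only that average cost per request collapses toward the marginal cost as volume grows, which is classic utility economics.

```python
# Illustrative figures (assumed, not actual provider costs):
FIXED_COST = 300_000_000   # one frontier training run, in dollars
MARGINAL_COST = 0.002      # serving one inference request, in dollars

def average_cost(requests_served: int) -> float:
    """Average cost per request: amortized fixed cost plus marginal cost."""
    return FIXED_COST / requests_served + MARGINAL_COST

# As volume grows, the training bill amortizes away and the
# average cost converges on the near-zero marginal cost.
for n in (1_000_000, 100_000_000, 10_000_000_000):
    print(f"{n:>14,} requests -> ${average_cost(n):.4f} each")
```

At a million requests the training run dominates; at ten billion, the power-plant construction cost has almost vanished from the per-request price.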
Where the Analogy Gets Really Interesting
Public utilities face regulation for good reason. When something becomes foundational infrastructure, society demands reliability, fairness, and oversight. We’re already seeing early signs of this with LLMs. Governments are asking hard questions about bias, safety, and monopolistic behavior.
The EU’s AI Act, ongoing discussions in Congress, and various state-level initiatives all point toward treating AI models more like regulated utilities. The question isn’t if this will happen, but when and how extensively. Will there be reliability standards? Fairness audits? Universal access requirements?
The innovation layer is particularly fascinating. Just as the real magic happens in the appliances built on top of the electrical grid — not in the grid itself — the most exciting AI applications happen on top of LLMs. The model providers are becoming infrastructure companies, while the value creation shifts to the application layer.
This creates an interesting dynamic. OpenAI, Anthropic, and Google are essentially competing to become the AWS or Microsoft Azure of intelligence. They want to provide the foundational layer that everything else builds on. The real profits might come from being that infrastructure layer, not from building flashy consumer applications.
But AI Isn’t Just Better Electricity
Here’s where the utility analogy completely breaks down, and why I think we’re in for something much more transformative than just “electricity but for thinking.”
When electrical utilities improve, you get more power for the same money, or the same power for less money. Better efficiency, lower costs, maybe cleaner generation. But fundamentally, you’re still getting electricity that powers the same types of devices in the same ways.
AI improvements create entirely new capabilities. GPT-3 couldn’t reliably do basic math. GPT-4 can analyze spreadsheets, write programs, and reason through complex problems. That’s not “better electricity” — that’s like discovering electricity can now think and create.
Each major model release doesn’t just improve existing use cases; it enables completely new ones. Before GPT-4, having AI write and execute code was barely feasible. Now agentic coding tools can scaffold entire applications from natural language descriptions.
This is why the “intelligence on tap” concept is both accurate and wildly understated. Yes, you can buy thinking power like you buy electricity. But unlike electricity, this thinking power is getting qualitatively smarter over time, not just cheaper or more abundant.
The Economics Are Getting Ridiculous
The cost effectiveness of current LLMs is borderline absurd. For the price of hiring a minimum wage worker for an hour, you can get thousands of high-quality text generations from a frontier model. That’s not an incremental improvement; that’s a cost reduction of several orders of magnitude for many types of cognitive work.
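You can sanity-check that claim with two lines of arithmetic. The prices below are assumptions (the US federal minimum wage, and a rough per-generation API cost for a response of a few hundred tokens); actual per-token pricing varies by model and changes often.

```python
# Assumed figures for a back-of-the-envelope comparison:
WAGE_PER_HOUR = 7.25          # dollars, US federal minimum wage
PRICE_PER_GENERATION = 0.002  # dollars, rough cost of one API response

generations_per_wage_hour = WAGE_PER_HOUR / PRICE_PER_GENERATION
print(f"One wage-hour buys ~{generations_per_wage_hour:,.0f} generations")
```

Even if the assumed per-generation price is off by an order of magnitude, the result still lands in the hundreds-to-thousands range, which is the "ridiculous" part.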
I’ve been testing various automation workflows, and the numbers are staggering. Tasks that would take hours of human time now cost literal pennies. The economic implications are massive, and we’re only scratching the surface.
Tools like Google’s Gemini CLI offer genuinely useful AI capabilities for free. That’s like having free electricity that’s also intelligent. The competitive pressure is driving costs down so fast that high-quality AI assistance is becoming essentially free for many use cases.
Innovation Happening at the Appliance Layer
The real excitement isn’t in the models themselves — it’s in what gets built on top of them. Just like the electrical grid enabled everything from light bulbs to computers to electric vehicles, LLM APIs are enabling an explosion of intelligent applications.
We’re seeing AI agents that can translate intent to action in real-time, systems that rival clinicians on some diagnostic benchmarks, and tools that generate high-quality video content from simple prompts. None of these applications required training new models — they’re all built on the same foundational intelligence infrastructure.
This is the power of treating intelligence as a utility. Once you have reliable, cheap access to reasoning capabilities, you can focus on the application logic instead of the underlying AI research. It’s like how web developers don’t need to understand TCP/IP to build amazing websites.
The Regulation Question
As LLMs become more foundational, regulatory oversight becomes inevitable. The question is what form it will take. Will we see safety standards for model training? Requirements for bias testing and mitigation? Universal access mandates?
The comparison to utility regulation is apt here. Electric companies can’t just cut off power to certain neighborhoods or charge discriminatory rates. They’re held to reliability standards and fairness requirements. Similar thinking is already emerging around AI systems.
The challenge is that AI is moving much faster than traditional utilities. By the time regulations are written and passed, the technology landscape has shifted dramatically. Regulators are essentially trying to govern a moving target that’s accelerating.
What This Means for Everyone
We’re living through the early days of intelligence becoming a commodity. That’s a sentence that would have sounded like science fiction just five years ago, but it’s our reality now.
For businesses, this means fundamental questions about competitive advantage. If everyone has access to the same intelligent capabilities, where does differentiation come from? The answer seems to be in application, implementation, and the human insight that guides AI systems.
For individuals, it means the tools for cognitive enhancement are becoming universally accessible. You don’t need a computer science degree or a huge budget to build intelligent applications. You just need good ideas and the ability to prompt and direct AI systems effectively.
For society, it means we’re about to see changes in how work gets done that are comparable to the industrial revolution. Not everything will be automated, but the nature of knowledge work is shifting rapidly.
The Next Ten Years Are Going to Be Wild
Intelligence on tap has never existed before in human history. We’re the first generation to experience it, and honestly, the implications are still sinking in for most people.
The cost effectiveness means AI capabilities will be embedded everywhere. Your toaster probably won’t need GPT-5, but your car, your phone, your work computer, and half the websites you visit will have some form of AI intelligence built in.
The capability improvements mean entirely new industries will emerge. Things that are impossible today will become routine. We’re probably going to see changes in the next decade that are more dramatic than the previous three decades of computing advances combined.
The infrastructure is being built right now. The APIs are live, the costs are dropping, and developers around the world are building applications that would have been considered magic just a few years ago.
We have intelligence on tap, it’s super cheap, and it keeps getting smarter. That’s not an incremental improvement to existing technology; it’s a fundamentally new capability for human civilization. The fact that you can literally buy thinking power with a credit card is still completely wild to me.
The world is about to change in ways we can’t fully predict, and it’s happening fast. But unlike many technological shifts, this one comes with the tools to help us adapt. After all, we have intelligence on tap now.