Data centers are buying massive ship-derived and industrial engines to generate their own power on-site. This is not a fringe workaround. It is a mainstream infrastructure response to a real problem: grid interconnection queues in the US can stretch five to ten years, and AI compute demand is not waiting around for that.
The clearest confirmed example right now is a Wärtsilä order for an Ohio data center. Wärtsilä will supply 282 MW of capacity using 15 of its 18V50SG engines running on natural gas. These are large, flexible reciprocating engines originally developed for marine and industrial power plant applications. Delivery starts in late 2026 and runs into 2027. For scale, Wärtsilä has delivered roughly 6,000 MW across all of its previous US projects combined, and this single data center order now accounts for a sizable slice of that total on its own. That is not a footnote. That is a meaningful signal about where the market is heading.
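A quick sanity check on those order figures. This is just the division implied by the numbers above; the per-engine rating is inferred, not a vendor specification:

```python
# Implied per-engine output of the Wärtsilä order: 282 MW across 15 units.
# This is back-of-envelope arithmetic from the figures in the text,
# not a quoted rating for the 18V50SG.
total_capacity_mw = 282
engine_count = 15

per_engine_mw = total_capacity_mw / engine_count
print(f"Implied per-engine output: {per_engine_mw:.1f} MW")  # 18.8 MW
```

That works out to roughly 18.8 MW per engine, which is the scale of a large medium-speed plant engine rather than anything resembling a typical backup genset.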
On the other side of the hardware picture, Rolls-Royce mtu Series 4000 gas gensets are being used for continuous base load and emergency power in data centers, including SpaceDC’s Jakarta campus. These units reach full load in 120 seconds, with 60 Hz variants dropping to 45 seconds starting in 2026. They are rated for service lives of up to 84,000 hours and, as of summer 2025, are compatible with sustainable fuels including HVO, biogas, biomethane, and 100% hydrogen. The fuel flexibility matters because it gives operators a path toward lower-carbon operation without swapping the underlying hardware when the energy mix shifts.
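To make the 84,000-hour figure concrete, here is what it translates to in calendar time. The duty-cycle scenarios are my own illustrative assumptions, not vendor numbers:

```python
# Convert the quoted 84,000-hour service life into calendar years under
# two illustrative duty cycles. The 50% duty-cycle case is an assumption
# for illustration, not a manufacturer figure.
HOURS_PER_YEAR = 24 * 365  # 8,760

service_life_hours = 84_000

continuous_years = service_life_hours / HOURS_PER_YEAR
print(f"Continuous base load: ~{continuous_years:.1f} years")   # ~9.6 years

half_duty_years = service_life_hours / (HOURS_PER_YEAR * 0.5)
print(f"50% duty cycle:      ~{half_duty_years:.1f} years")     # ~19.2 years
```

In other words, run continuously, these engines are roughly a decade of base-load hardware before a major overhaul, which is a reasonable match for a data center refresh cycle.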
The broader pattern is gigawatt-scale growth in fast-start reciprocating engine plants worldwide, driven almost entirely by data center demand. The key advantage these engines have over combined-cycle gas turbines is startup time. A combined-cycle plant can take 30 to 60 minutes to ramp up. Reciprocating engines and aeroderivative turbines start in seconds to minutes. For AI workloads that need reliable, responsive power, that distinction drives real hardware decisions.
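The startup-time gap can be put in rough energy terms. The sketch below assumes a hypothetical facility at the 282 MW scale of the Wärtsilä example and treats each plant as delivering nothing until the end of its startup window, which oversimplifies real ramp curves but shows the order-of-magnitude difference in what must be bridged by batteries or the grid:

```python
# Rough illustration of the startup-time gap described above.
# Assumes a hypothetical 282 MW facility and treats each plant's ramp as a
# step at the end of its startup window -- a simplification, not a model
# of real ramp behavior.
facility_load_mw = 282

startup_minutes = {
    "combined-cycle turbine (45 min, midpoint of 30-60)": 45,
    "reciprocating engine (2 min, illustrative)": 2,
}

for plant, minutes in startup_minutes.items():
    gap_mwh = facility_load_mw * minutes / 60  # energy shortfall during startup
    print(f"{plant}: ~{gap_mwh:.0f} MWh must be bridged")
```

Bridging a couple hundred megawatt-hours is a very different engineering problem than bridging single digits, which is the practical reason fast-start hardware wins these procurement decisions.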
Aeroderivative turbines, which are engines derived from aircraft or warship propulsion systems, are also part of this trend. They cap out around 48 MW per unit but offer the same fast-start advantage and can be deployed in modular configurations. Mobile gas generators on semitrucks are being used as well, particularly for bridging power while permanent solutions come online. The common thread across all of these options is that they can be procured, shipped, and brought online on a timeline that matches how AI infrastructure is actually being built, which is fast.
The underlying driver is grid delay. Multi-year interconnection queues mean that a data center trying to add capacity today through the traditional route is looking at timelines that do not match the pace of AI infrastructure buildout. On-site generation lets operators bypass that entirely. You own the power plant. You are not waiting in a queue. The grid becomes a backup or a supplement rather than the primary source, and that is a meaningful structural shift in how large-scale compute facilities think about power procurement.
There is also a long-term angle that is easy to miss in coverage of this trend. Both Wärtsilä and Rolls-Royce are positioning these engines as forward-compatible with lower-carbon fuels. The mtu Series 4000 already runs on 100% hydrogen. Wärtsilä’s natural gas engines can be adapted as fuel supply changes. This gives operators a hardware path that does not require a full replacement cycle when the energy mix eventually shifts. The capital investment made today is not necessarily stranded when carbon targets tighten.
By 2030, industry projections suggest 27% of data center facilities will be fully powered by on-site generation. That number reflects a real shift in procurement strategy, not just a temporary workaround while grid capacity catches up. The grid is increasingly being treated as optional infrastructure rather than a given.
The AI compute buildout is pulling in hardware from industries that most people would never associate with data centers. Ship engines, jet turbine derivatives, and industrial gas gensets are now essential infrastructure for running large-scale AI workloads. The supply chain for AI is much more physical than the software framing usually suggests. You can read more about that pattern in my earlier post on ChatGPT’s water usage, which covers similar physical resource demands that tend to get undercovered relative to the software story. And the broader economic context of AI infrastructure investment is worth tracking alongside posts like Anthropic hitting $19B ARR, because the capital flowing into AI models and the capital flowing into the power plants to run them are two sides of the same build-out.

