
Hypertec Shrinks the Data Center with Immersion-Born Servers

July 30, 2025

I recently sat down with Solidigm’s Jeniece Wnorowski and Mohan Potheri, principal solutions architect at Hypertec, to unpack how immersion cooling is reshaping data-center economics for AI and high-performance computing (HPC). During our discussion, it became clear that the biggest constraint on AI progress isn’t silicon — it’s keeping that silicon cool. Hypertec, founded in 1984 and now shipping over 100,000 servers a year to customers in more than 80 countries, has spent four decades learning how to squeeze more compute into less space without breaking the power budget, an experience that set the stage for our conversation.  

Mohan painted a sobering picture of an industry straining under the weight of its own momentum. AI, HPC, and edge-computing workloads have pushed power and cooling demand to record highs just as sustainability-focused goals demand lower energy footprints. Operators face a conflicting mandate: deploy clusters faster than ever, but do so with tighter efficiency targets and, in many sites, within real-estate footprints that can’t grow any further. Space-constrained facilities must find ways to condense more compute while still meeting aggressive thermal budgets, all without blowing out capital or operating expenses. These pressures, he said, turn traditional air-cooled data centers into bottlenecks the moment racks tip into multi-kilowatt territory.

Hypertec’s answer is to start with liquid rather than retrofit for it. The company’s single-phase “immersion-born” servers live permanently in dielectric fluid, eliminating fans and chillers and cutting cooling power by roughly 50% while driving site-level power usage effectiveness (PUE) down to about 1.03.
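To put that PUE figure in rough perspective: PUE is the ratio of total facility power to IT power, and a conventional air-cooled site often lands somewhere around 1.5 (an assumed industry baseline, not a number from our conversation). A quick back-of-the-envelope comparison looks like this:

$$
\text{PUE} = \frac{\text{total facility power}}{\text{IT power}}, \qquad
\frac{\text{overhead at PUE}~1.5}{\text{overhead at PUE}~1.03} = \frac{0.5}{0.03} \approx 17
$$

In other words, for every megawatt of IT load, a site at PUE 1.5 spends roughly 500 kW on cooling and other overhead, while a site at 1.03 spends only about 30 kW.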

Because every component is designed for submersion from day one, the servers avoid material-compatibility problems that plague air-cooled hardware dipped into tanks after the fact, and they let central processing units (CPUs) and graphics processing units (GPUs) sustain 90-95% of peak clocks instead of throttling under heat. A 10-megawatt deployment that would normally sprawl across 100,000 square feet collapses into roughly a tenth of that footprint, and Hypertec’s field data shows hardware lasting up to 60% longer thanks to the vibration-free, contaminant-free bath.
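Taking those footprint figures at face value, the implied power density works out roughly as follows (a back-of-the-envelope calculation, not a figure Hypertec quoted directly):

$$
\frac{10~\text{MW}}{100{,}000~\text{ft}^2} = 100~\text{W/ft}^2
\qquad \text{vs.} \qquad
\frac{10~\text{MW}}{\sim 10{,}000~\text{ft}^2} \approx 1{,}000~\text{W/ft}^2
$$

That order-of-magnitude jump in watts per square foot is exactly the load air cooling struggles to carry, and it is what lets the same 10 MW fit into a tenth of the floor space.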

Tanks arrive pre-assembled, can be set up in under 10 minutes, and fill with fluid in less than half an hour, giving operators a shortcut from loading dock to AI production. Add immersion-ready storage nodes that put as much as two petabytes beside the compute they feed, plus 800 gigabit-per-second (Gbps) networking, and Hypertec delivers a dense, sustainable, and rapidly deployable platform that sidesteps the very constraints throttling its air-cooled peers.

Before we wrapped, Mohan shifted the spotlight to storage—the quiet partner that can still slow an otherwise cutting-edge system. He explained that if data can’t reach the processors quickly, even the fastest GPUs and CPUs end up waiting. To avoid that pinch point, Hypertec extends its immersion approach to storage as well, placing dense drive enclosures in the same fluid bath and on the same high-throughput fabric as the compute nodes. By treating cooling, compute, and data as one integrated stack, the company keeps every component working in sync and lays a cleaner path to future scale.

The TechArena Take

Together, these solutions make a compelling argument: immersion isn’t a niche experiment but a practical response to AI’s insatiable appetite for watts, racks, and real estate. Hypertec’s immersion-born solutions show how vendors can rethink server design to meet that challenge head-on—reducing energy, shrinking footprints, extending equipment life, and freeing budgets to buy more compute instead of more chillers.

Listen to the full conversation here to learn how immersion cooling is quickly moving from “interesting” to inevitable.
