I was in Las Vegas for the 2025 Schneider Electric Innovation Conference, and the standout event for me wasn’t on the main stage. It was an after-hours tour of a data center that showed what our AI future actually looks like in steel, concrete, and piping.
Officially, the tour was a chance to see Schneider solutions in practice. In reality, it felt much bigger than any one vendor. Walking the site, you see how fast data centers are evolving from traditional server rooms into industrial-scale AI factories engineered for high-density, always-on compute.
Most of us grew up with air-cooled data centers: raised floors, cold and hot aisles, and a lot of fan horsepower. That model doesn’t work when a single GPU rack can draw over 100 kW. Air alone can’t carry that load.
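To put rough numbers on that claim, here is a back-of-envelope sketch, with assumed temperature rises rather than anything measured on the tour, of the air flow versus water flow it takes to move 100 kW of heat using Q = ṁ · cp · ΔT:

```python
# Back-of-envelope sketch (assumed temperature rises, not measured data):
# the flow needed to remove 100 kW of heat with air versus with water,
# using Q = m_dot * c_p * delta_T.

HEAT_LOAD_KW = 100.0

# Air: c_p ~1.006 kJ/(kg*K), density ~1.2 kg/m^3, assume a 12 K rise across the rack.
air_mass_flow = HEAT_LOAD_KW / (1.006 * 12.0)      # kg/s
air_volume_flow_m3_s = air_mass_flow / 1.2          # m^3/s
air_cfm = air_volume_flow_m3_s * 2118.88            # cubic feet per minute

# Water: c_p ~4.186 kJ/(kg*K), assume a 10 K rise across the cold plates.
water_mass_flow = HEAT_LOAD_KW / (4.186 * 10.0)     # kg/s
water_lpm = water_mass_flow * 60.0                   # ~1 kg per liter, liters per minute

print(f"Air:   {air_mass_flow:.1f} kg/s (~{air_cfm:,.0f} CFM through one rack)")
print(f"Water: {water_mass_flow:.1f} kg/s (~{water_lpm:.0f} L/min of coolant)")
```

Roughly 15,000 CFM through a single rack versus about 140 liters per minute of water: that gap, under these assumed conditions, is why the industry is moving to liquid.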
Standing in front of an open, liquid-cooled rack full of GPUs brought that reality to life. Coolant manifolds and distribution lines move heat directly off the chips. It feels less like IT and more like a process plant. Once you move to liquid-to-chip and hybrid cooling, you’re no longer just “conditioning air”; you’re managing thermal energy with industrial-grade precision.
That’s where my world—HVAC, controls, and building automation—steps into the foreground. Behind the clean IT white space sits a mechanical and controls backbone: chillers, heat exchangers, pumps, piping, air handlers, and a sophisticated controls layer. The building automation system must understand power and thermal loads at the rack level, orchestrate air and liquid cooling, and continuously trade off efficiency, redundancy, and risk.
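As a thought experiment, and emphatically not any vendor's actual control sequence, even something as simple as staging plant capacity against rolled-up rack loads shows that trade-off: run enough units for the load, plus a spare for risk, and you have already chosen redundancy over raw efficiency. The unit capacity, rack loads, and redundancy policy below are illustrative assumptions.

```python
# Minimal sketch, not any vendor's sequence: rolling rack-level loads up to the
# plant and staging heat-rejection units with N+1 redundancy. Unit capacity,
# rack loads, and the redundancy policy are illustrative assumptions.

import math

def units_to_run(rack_loads_kw: list[float],
                 unit_capacity_kw: float = 500.0,
                 redundant_units: int = 1) -> int:
    """Stage enough units for the aggregate load, plus spare capacity for risk."""
    total_load_kw = sum(rack_loads_kw)
    needed = math.ceil(total_load_kw / unit_capacity_kw)
    return needed + redundant_units

if __name__ == "__main__":
    # Illustrative mix of high-density GPU racks and conventional racks.
    racks_kw = [120.0, 118.5, 96.0, 45.0, 12.5]
    print(f"Plant load: {sum(racks_kw):.1f} kW -> run {units_to_run(racks_kw)} units")
```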
None of that works without interoperability.
In a modern data center, you have data center infrastructure management (DCIM), building automation (BAS), and electrical power monitoring (EPMS) systems, plus microgrid controllers and IT orchestration platforms, all touching the same problems from different angles. Power, cooling, and compute need a shared view of reality. That means standard data models, open protocols where possible, well-defined APIs, and clear ownership of control sequences. If any one system is a black box, it becomes the weak link—technically and operationally.
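A "shared view of reality" doesn't have to be exotic. Here is a minimal sketch of what a vendor-neutral, per-rack record might look like if DCIM, BAS, and EPMS all agreed to read and write it; the field names and units are my own assumptions, not an existing standard schema.

```python
# Minimal sketch of a shared, vendor-neutral data model for one rack, assuming
# a plain JSON exchange between DCIM, BAS, and EPMS. Field names and units
# are illustrative, not an existing standard schema.

import json
from dataclasses import dataclass, asdict

@dataclass
class RackState:
    rack_id: str
    power_kw: float               # from EPMS / branch-circuit metering
    inlet_air_temp_c: float       # from BAS sensors in the white space
    coolant_supply_temp_c: float  # from the liquid-cooling plant
    coolant_return_temp_c: float
    cooling_mode: str             # "air", "liquid", or "hybrid"

def publish(state: RackState) -> str:
    """Serialize the shared view so any platform can consume it."""
    return json.dumps(asdict(state), indent=2)

if __name__ == "__main__":
    state = RackState(
        rack_id="GPU-A01",
        power_kw=118.4,
        inlet_air_temp_c=24.5,
        coolant_supply_temp_c=30.0,
        coolant_return_temp_c=40.2,
        cooling_mode="hybrid",
    )
    print(publish(state))
```

The point isn't the specific fields; it's that every platform reads and writes the same record instead of keeping its own private version of the rack.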
You could feel that on the tour: power systems exposing data the BAS can act on; thermal information flowing both ways between plant and white space; energy and resilience strategies coordinated across multiple platforms. Interoperability isn’t a “nice to have”—it’s the only way to operate at this level of density and risk.
For readers of AutomatedBuildings.com, the translation to commercial buildings should be obvious.
We may not be running 100 kW racks in most offices. Still, we are rapidly increasing complexity: electrification of heating, more sophisticated ventilation strategies, onsite renewables and storage, EV charging, and growing expectations for grid interaction. Those are the commercial-building equivalents of high-density compute.
If each of those systems comes with its own closed management stack, we repeat the mistakes data centers are working hard to avoid. If, instead, we design around interoperable controls, shared data models, and open-standards-based integration, we position ourselves to coordinate comfort, energy, carbon, and resilience as one problem rather than five separate projects.
That was my main takeaway leaving the facility. AI factories are pushing the envelope faster than typical commercial buildings. Still, the fundamentals remain the same: measure, connect, coordinate, and continuously optimize across physical and digital layers—and make interoperability a design requirement, not an afterthought. Walking out past rows of piping, cabling, and ductwork, it struck me that AI doesn’t live in “the cloud.” It lives in very real buildings, with very real mechanical and electrical systems, designed and operated by very real people. Whether you’re running an AI factory campus or a portfolio of commercial buildings, the path to a robust and sustainable future runs straight through interoperability.