How six AI attributes are changing the rules of data center design
- By Victor Avelar
- 12 Nov 2025
- 4 min read
A conversation with Victor Avelar, Chief Research Analyst, Schneider Electric Data Center Research & Strategy
Artificial intelligence (AI) workloads are forcing data center operators to rethink fundamentals. Today’s AI-ready data centers are facing rack densities exceeding 100 kW, power procurement timelines stretching across years, and an essential shift from air to liquid cooling.
Schneider Electric's Data Center Research and Strategy (DCRS) group has released the third edition of its seminal White Paper 110: "How 6 AI Attributes Change Data Center Design." This update expands the framework from four to six defining attributes, offering new guidance on:
- High-density power and cooling system design
- Grid-level energy procurement strategies
- Ecosystem collaboration for AI infrastructure deployment
We spoke with Victor Avelar, the paper's lead author and Chief Research Analyst at DCRS, about what's changed, what matters most, and where data center leaders should focus to future-proof their AI infrastructure.
Question 1
The new edition expands from four to six AI trends and attributes that underlie power, cooling, and rack design considerations. What prompted these changes?
- We really did make some big changes to this paper, all driven by how fast AI technology is evolving. Earlier revisions included forecasts for data-center power consumption to help readers grasp the scale of growth. But at this point, everyone understands how massive that growth is. So we removed those projections and instead added six key takeaways at the top for readers in a hurry, plus next steps at the end.
- The six attributes reflect both a deeper understanding of AI workloads and the pace of hardware change. For example, we now have a much better picture of the worst-case power profile for AI training — insight that's critical for sizing power systems to prevent overloads. Things are moving fast; who knows, we may have seven attributes this time next year!

Question 2
How has your understanding of AI workloads evolved between version 2 and version 3?
- We're seeing newer inference workloads known as "long-thinking" models, which require significantly more compute power. They use different modeling techniques but tend to push rack densities even higher. So while training has always been intensive, inference is now catching up in ways that change both power and cooling expectations.

Question 3
Peak power and synchronous loads are new attributes in this edition. How do they change power system design?
- When all the GPUs are synchronized during training, the power profile for the entire data center resembles a series of peaks all occurring at the same time. Traditional data centers were designed for more random, diversified loads, so they could size systems based on averages. That no longer works.
- Now we have to design for the peak, not the mean; the sketch below shows the difference. It costs a bit more upfront, but given the price of AI hardware, it's a small premium for reliability and peace of mind.
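To make the peak-versus-mean point concrete, here is a minimal simulation sketch. It is not from the white paper, and the rack count, power figures, and duty cycle are all hypothetical; it simply contrasts a diversified load, where racks cycle out of phase, with a synchronized training load, where every rack peaks at once:

```python
import numpy as np

rng = np.random.default_rng(0)

N_RACKS = 50                    # hypothetical training cluster size
IDLE_KW, PEAK_KW = 40.0, 120.0  # assumed per-rack idle and peak draw
PERIOD, DUTY = 100, 0.6         # timesteps per training iteration; fraction at peak
T = 10_000                      # total timesteps simulated

def rack_profile(phase: int) -> np.ndarray:
    """Square-wave draw: PEAK_KW while the GPUs compute, IDLE_KW in between."""
    t = (np.arange(T) + phase) % PERIOD
    return np.where(t < DUTY * PERIOD, PEAK_KW, IDLE_KW)

# Traditional, diversified load: each rack cycles at a random phase.
diversified = sum(rack_profile(rng.integers(PERIOD)) for _ in range(N_RACKS))

# Synchronized AI training: every rack steps in lockstep.
synchronized = sum(rack_profile(0) for _ in range(N_RACKS))

print(f"average load (same in both cases): {diversified.mean():7.0f} kW")
print(f"facility peak, diversified:        {diversified.max():7.0f} kW")
print(f"facility peak, synchronized:       {synchronized.max():7.0f} kW")
```

Both clusters draw the same average power, but the synchronized one periodically hits the sum of every rack's peak, and that sum, not the average, is what the power system has to be sized for.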

Question 4
Why is liquid cooling now treated as a central design requirement rather than an option?
- One of the things we've learned from GPU roadmaps over the last two years is that chip power densities keep climbing while servers get denser and heavier, which lets you pack more chips into a single rack. You can't cool 100-kW racks with air anymore; the rough numbers below show why. Liquid cooling is the only practical solution for maintaining standard rack heights and ensuring performance.
- That means racks must also be physically stronger to handle the extra weight, often over 1,800 kilograms. So, cooling strategy and mechanical design are now more closely coupled.
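As a rough illustration of why air runs out of headroom at these densities, here is a back-of-envelope sketch using the sensible-heat relation Q = ṁ·c_p·ΔT. The 100-kW rack load and the coolant temperature rises are assumed for illustration, not taken from the paper:

```python
# Heat removal: Q = m_dot * c_p * delta_T (all figures assumed for illustration).
AIR_CP, AIR_RHO = 1005.0, 1.2        # J/(kg*K), kg/m^3 for air at room conditions
WATER_CP, WATER_RHO = 4186.0, 997.0  # J/(kg*K), kg/m^3 for water

HEAT_W = 100_000.0             # a 100-kW rack
DT_AIR, DT_WATER = 15.0, 10.0  # assumed coolant temperature rise, K

air_m3_s = HEAT_W / (AIR_CP * DT_AIR) / AIR_RHO  # volumetric airflow required
water_l_s = HEAT_W / (WATER_CP * DT_WATER) / WATER_RHO * 1000

print(f"air:   {air_m3_s:5.2f} m^3/s (~{air_m3_s * 2118.88:,.0f} CFM) through one rack")
print(f"water: {water_l_s:5.2f} L/s to carry away the same 100 kW")
```

Pushing roughly 11,000 to 12,000 CFM through a single rack is far beyond what standard rack geometry and fan power allow, while the equivalent water flow is a modest couple of liters per second, which is why liquid cooling wins at these densities.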

Question 5
Energy procurement wasn't emphasized in previous editions. Why does it now warrant headline attention?
- Because the AI boom is overwhelming the grid in certain regions. Developers are struggling to secure hundreds of megawatts, and that's causing project delays measured in years. We wanted to give executives practical guidance — including how to work with utilities, explore on-site generation, and plan procurement earlier.
- It's a growing strategic challenge, not just an engineering one. Our team has been writing about this topic in other publications, and this paper brings those ideas together and links to those resources.

Question 6
For operators planning AI deployments, where should they focus first to avoid delays and cost overruns?
- Start with your partner ecosystem. Engage early with vendors, suppliers, and integrators who've done this before. They've been on job sites, they understand their systems best, and they've seen what can go wrong in the field.
- Work with partners that provide digital tools — like DCIM, EPMS, or digital twins — and have global supply chain strength. That collaboration helps you de-risk projects, reduce shipping delays, and manage complexity. In short, success depends less on any one technology and more on how well you orchestrate the ecosystem around it.

AI is reshaping every aspect of data center design, from utility negotiations to rack specifications. For detailed analysis of all six attributes and actionable design recommendations, download the complete white paper: "How 6 AI Attributes Change Data Center Design."
