48VDC and the Quest for Improved Compute Performance
By Wendy Torell | Fri, 15 Feb 2019
Last year I wrote a blog post about open standards organizations like the Open Compute Project (OCP) and Open19, and the power architectures emerging from these groups – specifically, the consolidation of server power supplies into rack-level power supplies. I also introduced a TradeOff Tool calculator we created to demonstrate the efficiency impact of different server power architectures. The tool shows a small efficiency improvement in going from internal server PSUs to consolidated 12VDC rack-level PSUs, and another incremental improvement in going from 12VDC rack-level PSUs to 48VDC rack-level PSUs.
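For illustration, here's a minimal sketch of the kind of calculation behind such a comparison: end-to-end efficiency is the product of the series conversion stages between the utility feed and the chip. The per-stage numbers below are hypothetical placeholders, not the TradeOff Tool's actual data.

```python
from math import prod

# Hypothetical per-stage conversion efficiencies for each architecture.
# These illustrative figures are NOT the TradeOff Tool's actual data.
architectures = {
    "internal server PSUs":  [0.960, 0.940],  # per-server AC-DC PSU, then 12V regulation
    "12VDC rack-level PSUs": [0.965, 0.940],  # consolidated rack PSU, then 12V regulation
    "48VDC rack-level PSUs": [0.970, 0.950],  # consolidated rack PSU, then 48V-to-chip stage
}

for name, stages in architectures.items():
    # End-to-end efficiency is the product of the series stages.
    print(f"{name}: {prod(stages):.1%}")
```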
I want to address why we’re seeing this 48VDC trend.
Why 48VDC?
Simply put, it stems from the desire for increased compute performance. Take a look at the figure below from Google that highlights the last 40 years of chip performance (blue data points).
Source: Open Compute Summit presentation by Google (original data up to 2010 by M. Horowitz et al.; 2010 to 2015 by K. Rupp)
In this chart, you can also see the different CPU design attributes that improve compute performance:
- Frequency / clock speed – the green data points
- Transistor count (density) – the orange data points
- Number of cores – the black data points
Historically, CPU package power was capped at roughly 150W (the red data points), primarily because there was no need to exceed it to meet compute performance requirements, and because it was a practical limit for air-cooling a 1U server.
While new processor designs continue to increase frequency, transistor count, and core count, those gains are no longer keeping up with the performance increases certain users demand. Because of this, people are now willing to accept a higher CPU package power in order to get the performance they're after. In other words, we can expect those red data points on the chart to trend up again. Technologies like AI, ML, and big data analytics are driving more and more companies to consider GPUs, and along with that, they're beginning to accept alternative form factors, ranging from chips with giant heat sinks to liquid cooling.
VRMs Integrated Right on the Chip Package
Our Schneider Electric Data Center Science Center just published White Paper 232, Efficiency Analysis of Consolidated vs. Conventional Server Power Architectures, which describes the server power architectures and steps through the efficiency analysis findings. In the paper, we also provide details on why we're seeing this 48VDC trend and how VRM technology changes are enabling it. With VRMs separate from the chip package, power delivery is constrained by the number of pins that can physically fit on the board. But now we're seeing VRMs integrated right on the chip package, which significantly reduces the pin count required. Because each pin carries a fixed current and power scales linearly with voltage (P = V × I), a given pin count at 48VDC can deliver 600W where 12VDC would have delivered 150W.
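As a sanity check on that arithmetic, here's a minimal sketch, assuming a hypothetical 12.5A total current through the power pins (chosen to reproduce the 150W/600W figures above; real per-pin current ratings vary by package):

```python
def deliverable_power(voltage_vdc: float, total_current_a: float) -> float:
    """Power (W) = voltage (V) x current (A); the current is fixed by the pin count."""
    return voltage_vdc * total_current_a

# Assumed total current the power pins can carry (hypothetical figure).
TOTAL_CURRENT_A = 12.5

print(deliverable_power(12, TOTAL_CURRENT_A))  # 150.0 W at 12VDC
print(deliverable_power(48, TOTAL_CURRENT_A))  # 600.0 W at 48VDC
```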
12VDC vs 48VDC in the Near Future
The reality is that once the VRM is on the package, you can feed it the highest touch-safe voltage possible (e.g., up to 60VDC). An ecosystem of off-the-shelf 48VDC components already exists, which will likely steer future designs to that voltage next. Still, I think it's safe to say that 12VDC will remain the majority of deployments in traditional ITE for the coming years, given the cost effectiveness and large supply base of 12VDC VRMs. 48VDC distribution at the rack level will gain traction once costs decline and the supply chain becomes more robust.
Check out White Paper 232 for insights into our efficiency analysis of power architectures.
Plus, join the conversation. Is your organization looking at consolidated server power architectures, and if so, 12VDC or 48VDC?