Today, Intel and others are exploring the ramifications of a “disaggregated” server: the RAM on a shelf, the storage on a shelf, the compute on a shelf, all interconnected by a multi-way optical switch that provides connectivity between the shelves as well as off-rack communications. The rack becomes the computer, with commodity parts providing low-cost redundancy and hot-swap capability.
What are the main components of a desktop PC? The hardware consists of a power supply, a motherboard with a CPU, permanent storage (hard drive, optical drive, etc.) to hold the operating system, main memory (usually DRAM), a graphics controller that is often integrated onto the motherboard, a network interface, a sound processor/amplifier, and a chassis.
Now take a look at a “(hyper) converged server.” Let’s see. There is a server-oriented CPU, some permanent storage (hard drive, SSD), a network interface, a power supply, some RAM, a graphics controller, and a chassis. Looks pretty similar overall to the desktop of yore! Even the form factor is similar. It seems that we have come full circle from where we were in the 1990s. The only significant differences from a desktop PC are the scale (more memory, faster drives) and the software stack running on the platform that gives the “hyperconverged server” its native support for virtualization and containers. Presumably then, “hyperconverged” is a comparison to the “disaggregated” server. What “hyperconverged” really has going for it is the ability to put additional units into a rack or datacenter that can be discovered and integrated to operate together in seamless expandability. We definitely couldn’t do that in the 1990s. Every system was a standalone piece of discrete hardware (client-server architecture for those with long memories).
A rack of (hyper) converged appliances is going to draw a fair amount of power and require a significant number of outlets. A typical 2U appliance has dual power supplies, each of which requires a C13 outlet. A 42U rack therefore holds 21 appliances with 42 power supplies. In a redundant power configuration, that means two PDUs of at least 21 outlets each. With a typical 1200 W maximum per power supply, that is a potential load of 25.2 kW on either strip. For most datacenters, that means either going with a 208V 3-phase 60A PDU (and running the risk of tripping the branch circuit protection) or moving up to 415V circuits of 32A or 60A, requiring some very high-capability PDUs.
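The arithmetic above can be sketched as a back-of-the-envelope power budget. This is a hedged illustration, not a sizing tool: the 80% continuous-load derating factor and the power-factor-free 3-phase formula are assumptions of the sketch, not figures from the text.

```python
import math

# Figures from the scenario in the text: a 42U rack of 2U appliances,
# each with dual 1200 W supplies on redundant A/B feeds.
RACK_UNITS = 42
APPLIANCE_U = 2
PSU_PER_APPLIANCE = 2
PSU_MAX_WATTS = 1200

appliances = RACK_UNITS // APPLIANCE_U               # 21 appliances
psus_total = appliances * PSU_PER_APPLIANCE          # 42 power supplies
outlets_per_strip = appliances                       # 21 C13 outlets on each A/B strip
load_per_strip_kw = appliances * PSU_MAX_WATTS / 1000  # worst case: 25.2 kW per strip

def three_phase_capacity_kw(volts_line_to_line, amps, derate=0.8):
    """Usable 3-phase capacity in kW. The 0.8 derating for continuous
    loads is an assumption borrowed from common NEC practice."""
    return math.sqrt(3) * volts_line_to_line * amps * derate / 1000

# Compare the worst-case strip load against candidate PDU circuits.
for volts, amps in [(208, 60), (415, 32), (415, 60)]:
    cap = three_phase_capacity_kw(volts, amps)
    verdict = "OK" if cap >= load_per_strip_kw else "risks tripping protection"
    print(f"{volts}V 3-phase {amps}A: {cap:5.1f} kW usable -> {verdict}")
```

Run as written, the 208V 60A circuit comes out around 17 kW usable, short of the 25.2 kW worst case, which is why the text flags the risk of tripping branch circuit protection on that option.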
Anyone deploying such a rack of converged products is not going to have any tolerance for downtime. Too many people and processes rely on the performance of the hardware in that rack. This means the PDUs powering this gear must be rock solid and capable of communicating whether running on mains or backup power. Check out the new Pro2 architecture from Server Technology. It too is converged, and it delivers!
Of all the issues facing today's data center manager, heat and power distribution top the list; available uptime and growth constraints round out the top three.