Your hyperscale data center operates on a lean budget. You want to install your hardware and be done with it until you decommission it for the next efficiency-driven replacement cycle. But real-world hardware does fail, and when it does, you want your suppliers to be both knowledgeable and responsive. They need to be able to troubleshoot remotely or on site, and get you answers and replacement product quickly so that your application can be restored.
If you believe many of today’s publications, sensor-laden driverless cars look set to become a part of everyday life over the next decade. The processing power needed to handle the flood of data for driving, along with vehicle-to-vehicle, vehicle-to-highway, and vehicle-to-dispatch/management communications, is likely to be huge. Edge computing, which puts compute infrastructure close to the point of use (beside or over the highway, for example), will likely be called for, along with deploying 5G wireless communications to transport the data.
Mainframes and Moore’s law led to personal computers. Client-server applications became possible with the first local area networks. Cellular radio systems and Wi-Fi, along with Moore’s law (again) and improved battery technology, have made laptops, tablets, cell phones, and augmented reality headsets key drivers of internet activity today. Tomorrow’s applications will be more widespread, and possibly less visible. Think smart cities, where the lamp posts and the sidewalks work together to guide you to your destination so you don’t have to watch your progress on a map application on your phone. The solar-powered talking trash bin on the corner can call a driverless Lyft for you. Need to make a phone call? Put your hand on the glass of the bus stop shelter and you can have a video call for a few micro-cents.
Your centralized hyperscale data center is up and running in a stable fashion. Now the software team has come up with applications so bandwidth-intensive that you are going to have to do extensive pre-processing in every major locale to reduce network traffic and latency. Sounds like some form of edge computing is needed, whether that is edge, mobile edge, or even fog computing. And wherever distributed/edge computing is called for, intelligent remote power management is a prerequisite.
Utility power should just be there. Always on. Never failing. Today’s hyperscale data center designs frequently count on the electric utility to supply them with a stable source of clean renewable energy. Alternatively, some use locally generated power with the utility as a backup. Combining robust software stacks that incorporate either virtual machine or container technologies in a rack with in-rack UPS solutions ensures a high degree of uptime and failover capability. Your hyperscale data center can’t tolerate a PDU as the weakest link in the power distribution chain. You need the best PDU solution in the business. You need Server Technology.
While both “cloud” and “cloud-first” are the new go-to IT solutions for many companies, there remain many situations where completely outsourcing your hardware infrastructure is not practical. In that circumstance, colocation of your IT should make the short list for consideration, whether it is driven by the need for expansion, proximity, or interconnect. Colocation offers the advantages of highly efficient buildings, support for multiple locations, and access to some of the best interconnections available in the industry.
Today, Intel and others are exploring the ramifications of a “disaggregated” server: putting the RAM on a shelf, the storage on a shelf, and the compute on a shelf, then interconnecting them with a multi-way optical switch that provides connectivity between the shelves as well as off-rack communications. The rack becomes a computer, with all of the commodity parts providing low-cost redundancy and hot-swap capability.
What are the main components of a desktop PC? The hardware consists of a power supply, a motherboard with a CPU, a permanent memory drive (hard drive, optical drive, etc.) to store the operating system, main memory (usually DRAM), a graphics controller that is often integrated onto the motherboard, a network interface, a sound processor/amplifier, and a chassis.
Now take a look at a “(hyper)converged server.” Let’s see. There is a server-oriented CPU, some permanent storage (hard drive, SSD), a network interface, a power supply, some RAM, a graphics controller, and a chassis. Looks pretty similar overall to the desktop of yore! Even the form factor is similar. It seems we have come full circle from where we were in the 1990s. The only significant differences from a desktop PC are the scale (more memory, faster drives) and the software stack running on the platform that gives the “hyperconverged server” its native support for virtualization and containers. Presumably, then, “hyperconverged” is a comparison to the “disaggregated” server. What “hyperconverged” really has going for it is the ability to put additional units into a rack or data center that can be discovered and integrated to operate together in seamless expandability. We definitely couldn’t do that in the 1990s. Every system was a standalone piece of discrete hardware (client-server architecture, for those with long memories).
A rack of (hyper)converged appliances is going to draw a fair amount of power and require a significant number of outlets. A typical 2U appliance has dual power supplies, each of which requires a C13 outlet. So for a 42U rack, that is 21 appliances with 42 power supplies. In a redundant power configuration, that calls for two PDUs of at least 21 outlets each. With a typical 1200W maximum per power supply, that is a potential load of 25.2 kW on either strip. For most data centers, that means either going with a 208V 3-phase 60A PDU (and running the risk of tripping the branch circuit protection) or moving up to 415V circuits of 32A or 60A, requiring some very high-capability PDUs.
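The arithmetic above can be sketched as a quick power-budget check. This is an illustrative back-of-the-envelope script, not a sizing tool: it assumes dual-corded A/B feeds (so one strip must carry the full load if the other feed fails), the 1200 W per-supply worst case from the text, and the common 80% continuous-load derating applied to 3-phase circuit capacity. Real sizing depends on measured load, power factor, and local electrical code.

```python
import math

# Assumptions (illustrative, from the worked example in the text):
RACK_U = 42
APPLIANCE_U = 2          # 2U appliances
PSU_MAX_W = 1200.0       # worst-case draw per power supply

appliances = RACK_U // APPLIANCE_U          # 21 appliances in the rack
outlets_per_pdu = appliances                # one C13 per appliance per strip
worst_case_w = outlets_per_pdu * PSU_MAX_W  # full load lands on one strip
                                            # if the other feed fails

def three_phase_capacity_w(volts_ll, amps, derate=0.8):
    """Usable power of a 3-phase circuit: sqrt(3) * V(line-line) * A,
    derated to 80% for continuous load (common code practice)."""
    return math.sqrt(3) * volts_ll * amps * derate

for volts, amps in [(208, 60), (415, 32), (415, 60)]:
    cap = three_phase_capacity_w(volts, amps)
    verdict = "OK" if cap >= worst_case_w else "over budget"
    print(f"{volts}V / {amps}A: {cap / 1000:.1f} kW usable -> {verdict}")
```

Run against the 25.2 kW worst case, the smaller circuits come up short, which is exactly why the nameplate maximum rarely drives the final design; measured draw per appliance is usually well below the supply rating.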
Someone implementing such a rack of converged products is not going to have any tolerance for downtime. Too many people and processes rely on the performance of the hardware in that rack. This means the PDUs powering this gear must be rock solid and capable of communicating whether running on mains or backup power. Check out the new Pro2 architecture from Server Technology. It too is converged, and it delivers!