ARM Servers: Hype vs. Reality

As the ARM server market began to emerge in press and PowerPoint, it was not hard to separate the hype from the reality: it was a lot of hype. Spread by well-meaning advocates trying to change the world and give Intel a run for their money, these myths created unrealistic expectations about whether ARM chips are worthy of server applications, when they will ship, and how hard they will be to use. I applaud early leaders like APM and AMD for their efforts on 64-bit products. While they have tried to balance their excitement against the uncertainty of semiconductor development schedules, there are nonetheless a few myths that need clearing up. Here are six common ones: [Read more...]

Anandtech Reviews the Calxeda ECX-1000: “Calxeda’s ECX-1000 server node is revolutionary technology”

I’d like to point everyone over to a great review of the Calxeda-powered Boston Viridis box by Anandtech that just went live, here. First of all, big thanks to Johan De Gelas over at Anandtech and Wannes De Smet at SizingServers for doing a top-notch job pulling together an in-depth review of our gear, and to the team at Boston Limited for taking care of the hardware. Since we launched the ECX-1000, we’ve been beating the streets to get real results and metrics into customers’ hands and show that the technology delivers as promised. With quotes like “Calxeda really did it”, “nothing short of remarkable” and “revolutionary technology”, we’re all excited to see these results posted on a site like Anandtech.

[Read more...]

What’s a nice core like ARM® doing in a place like this?

IEEE held their annual fest for uber-techies at SuperComputing ’12 this week in Salt Lake City. With over 8000 attendees flocking to the snowy site in spite of the economy and impending fiscal cliff, this event has become a mecca for anyone seeking the next great technology in computing hardware for serious work. In the old days, it was all about (Tera)Flops and Fortran. These days it is about Big Data, hardware acceleration, interconnect fabrics, storage, and green computing. Wandering around the massive exhibit hall, one could see name badges from companies like eBay, Amazon, Peer One Hosting, and DreamWorks, right alongside the traditional attendees from leading universities, National Labs, and the Departments of Defense and Energy.

So, what’s a little core like ARM doing in a place like this? It’s all about the data. “Data Intensive Computing” in HPC is pronounced “Big Data” in the enterprise. And the two communities have another thing in common: both are seeking more energy-efficient solutions to large computational challenges. So naturally, they are turning to ARM with great hopes for the future.

[Read more...]

What is an SoC? Hint: the “S” stands for Server.

The acronym “SoC” generally refers to “System on a Chip”. But with SoCs entering the server space, it is also taking on a new meaning: “Server on a Chip”. An SoC is a large-scale integration of processor cores, memory controllers, on-chip and off-chip memories, peripheral controllers, accelerators, and custom IP (intellectual property) for specific applications and uses. As Moore’s law continues, chip process geometries shrink, allowing more transistors to fit in the same area of silicon. Traditionally, server processors have used this new real estate to add more cores. But for certain applications, there are better uses for that real estate than simply adding more cores.

Increasing integration in an SoC brings a number of benefits including:

  • Higher performance – significantly faster and wider internal busses compared to those found in a multi-chip or multi-board solution.
  • Lower power – a wider range of power optimization techniques can be employed in an SoC, including power gating, scaling bus speeds with utilization, dynamic voltage and frequency scaling (DVFS) of processor cores and peripherals, multiple power domains, and a number of others (see the DVFS sketch after this list). Additionally, keeping peripherals on-chip avoids power-hungry PHYs, the analog drivers needed to push signals between chips and across boards.
  • Higher density – fewer components to buy, consume power, and fail.
  • Deeper integration – peripheral controllers and fabric interconnect technologies can be coupled directly to the on-chip interconnect, enabling a number of advantages that cannot normally be achieved when every transaction must cross a standard bridge like PCIe.
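
To make one of those power techniques concrete: on a Linux system, DVFS is exposed to software through the kernel’s cpufreq subsystem. The minimal sketch below is an illustration, not Calxeda-specific code; it assumes a Linux kernel with cpufreq enabled and the standard sysfs attribute layout, and it simply reads cpu0’s current frequency, active governor, and available frequency steps. The actual voltage and frequency transitions are carried out by the SoC’s power-management hardware.

```c
/*
 * Minimal DVFS sketch: read cpu0's frequency state through the Linux
 * cpufreq sysfs interface. Illustration only; assumes a Linux kernel
 * with cpufreq enabled and the standard sysfs attribute layout.
 */
#include <stdio.h>
#include <string.h>

#define CPUFREQ "/sys/devices/system/cpu/cpu0/cpufreq/"

/* Read one sysfs attribute into buf; returns 0 on success. */
static int read_attr(const char *name, char *buf, size_t len)
{
    char path[256];
    snprintf(path, sizeof(path), CPUFREQ "%s", name);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    if (!fgets(buf, (int)len, f)) {
        fclose(f);
        return -1;
    }
    fclose(f);
    buf[strcspn(buf, "\n")] = '\0';   /* strip trailing newline */
    return 0;
}

int main(void)
{
    char buf[256];

    /* The frequency the core is running at right now, in kHz. */
    if (read_attr("scaling_cur_freq", buf, sizeof(buf)) == 0)
        printf("cpu0 current frequency: %s kHz\n", buf);

    /* The governor is the kernel policy that drives DVFS decisions. */
    if (read_attr("scaling_governor", buf, sizeof(buf)) == 0)
        printf("cpu0 governor: %s\n", buf);

    /* The discrete frequency steps the hardware can move between. */
    if (read_attr("scaling_available_frequencies", buf, sizeof(buf)) == 0)
        printf("available steps: %s\n", buf);

    return 0;
}
```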

Let’s stop and consider the components we will typically find in a standard rack-optimized volume server (the sketch after this list shows how each appears to software as its own discrete device):

  • One or two processor chips, often with integrated memory controllers.
  • One or two chips for processor chipsets providing a range of functions like Southbridge peripherals and PCIe.
  • A PCIe connected Ethernet NIC, either chip or PCIe board. In today’s volume servers, this is typically one or two 1 Gb Ethernet interfaces.
  • A PCIe connected SATA controller, either chip or PCIe board.
  • Controller chip for an SD card and/or USB.
  • An optional, extra-cost BMC (baseboard management controller) providing out-of-band system management.
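
To see how many discrete parts that adds up to in practice, here is a small sketch that walks the Linux sysfs PCI tree and prints every PCI/PCIe function the firmware enumerated; on a conventional volume server, each item in the list above shows up as its own entry. This is an illustration, not vendor tooling, and it assumes a Linux host with sysfs mounted at /sys.

```c
/*
 * Sketch: enumerate the discrete PCI/PCIe functions a server exposes
 * by walking the Linux sysfs PCI tree. Illustration only; assumes a
 * Linux host with sysfs mounted at /sys.
 */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *base = "/sys/bus/pci/devices";
    DIR *dir = opendir(base);
    if (!dir) {
        perror("opendir");
        return 1;
    }

    struct dirent *de;
    int count = 0;
    while ((de = readdir(dir)) != NULL) {
        if (de->d_name[0] == '.')
            continue;           /* skip "." and ".." */

        /* Each entry is a bus address like 0000:00:1f.2; its PCI
         * class code lives in a small attribute file. */
        char path[512], cls[16] = "?";
        snprintf(path, sizeof(path), "%s/%s/class", base, de->d_name);
        FILE *f = fopen(path, "r");
        if (f) {
            if (fgets(cls, sizeof(cls), f))
                cls[strcspn(cls, "\n")] = '\0';
            fclose(f);
        }
        printf("%-12s class %s\n", de->d_name, cls);
        count++;
    }
    closedir(dir);
    printf("%d PCI functions enumerated\n", count);
    return 0;
}
```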

So, now with the availability of a purpose-built ARM® server SoC, how does this change? Everything in the laundry list above gets integrated onto a single, low-power die. For example, let’s take a look at the Calxeda EnergyCore ECX-1000 series of SoCs. In each chip, we find:

  • A quad-core Cortex-A9 CPU, configured for server workloads.
  • The largest L2 cache that you’ll find on an ARM server: 4 MB with ECC.
  • A server-class memory subsystem: a wide, high-performance 72-bit DDR3/DDR3L memory controller (64 data bits plus 8 bits of ECC).
  • Integrated peripheral controllers with direct DMA interfaces to the internal SoC busses, avoiding PCIe overhead. Standard server peripheral controllers, like multiple SATA lanes, multiple Ethernet controllers (both 1 Gb and 10 Gb), and even an SD/eMMC controller for local boot or scratchpad storage, are all integrated on-chip.
  • If your server needs to connect to devices that are not integrated, there are four dual-mode PCIe controllers, supporting both root-complex and target modes, in both x4 and x8 configurations.
  • Instead of an optional (and expensive) BMC, management is built into every chip: a sophisticated server management system offering both in-band and out-of-band IPMI/DCMI system management interfaces along with dynamic power and fabric management (the IPMI sketch after the block diagram below illustrates this).
  • A deeply integrated, power and performance-optimized fabric interconnect, which we’ll talk about in a future blog entry.
  • And all of this is designed with performance-, power-, and cost-optimized servers in mind, delivering industry-leading performance/Watt and performance/Watt/$.
[Figure: Calxeda EnergyCore ECX-1000 block diagram]
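
Because that on-chip management engine speaks standard IPMI, ordinary management tooling can talk to it. As an illustration, the sketch below shells out to the widely used ipmitool CLI to query chassis power state and the sensor repository out-of-band. The interface, address, and credentials are placeholders you would replace with your own, and the sketch assumes ipmitool is installed on the querying host.

```c
/*
 * Sketch: query a server's management controller over standard IPMI by
 * shelling out to the common `ipmitool` CLI. Illustration only; the
 * address, username, and password below are placeholders, and ipmitool
 * must be installed on the querying host.
 */
#include <stdio.h>

/* Run a command and echo its output. */
static void run(const char *cmd)
{
    printf("$ %s\n", cmd);
    FILE *p = popen(cmd, "r");
    if (!p) {
        perror("popen");
        return;
    }
    char line[256];
    while (fgets(line, sizeof(line), p))
        fputs(line, stdout);
    pclose(p);
}

int main(void)
{
    /* Out-of-band power query over the LAN interface (RMCP+). */
    run("ipmitool -I lanplus -H 10.0.0.42 -U admin -P secret "
        "chassis power status");

    /* Dump the sensor data repository: temperatures, voltages, power. */
    run("ipmitool -I lanplus -H 10.0.0.42 -U admin -P secret sdr list");

    return 0;
}
```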

With all the typical server components integrated onto a single chip, you can build a server by “just adding power and DRAM”. And even that is made easy for our customers with a card-level reference design of four EnergyCore SoCs, power regulators, DRAM, and fabric interconnect.

For the last several years, SoCs have been used in embedded systems and mobile devices for the same reasons and benefits discussed above. The server industry is now applying those same lessons to its own domain. No matter what the design looks like, a better-integrated, power-optimized Server-on-a-Chip is needed for the scale-out, cluster demands of our Internet generation.
