Inktank and Calxeda Partner to Transform Ceph Storage Solutions

Today, Calxeda announced a partnership with Inktank under which we will jointly optimize and promote Ceph-based solutions in the market. It’s easy to see why Ceph has been gaining so much traction lately: it has been selected by Ubuntu as an official package within their distribution, and it is compatible with OpenStack cloud deployments. What may not be as obvious, however, is why and how Calxeda enables “microserver” designs that are a perfect fit for distributed applications like Ceph.

As you might have seen from last week’s announcement at Computex in Taipei, two of the three systems that debuted are targeting the storage server market, with a few additional designs that can’t yet be disclosed. More and more system vendors and customers are starting to recognize the synergy in new “scale-out hardware” built for the emerging trend of distributed storage software. But why?

Calxeda believes it can provide better hardware architectures for “scale-out” distributed applications. Here are some of the key benefits.

  • We can enable systems that deliver more performance-per-dollar.
    We do so by building a tightly integrated SoC that combines a CPU, network (via our fabric), and disk controllers. No more, no less. Depending on the application you deploy, performance per dollar can be 3-4X what you can achieve with expensive x86-based systems today.
  • We can enable systems that consume about 10-15% less power overall.
    While this doesn’t sound like much (most of the system’s power is consumed by the hard drives), this incremental energy saving helps in situations where customers are power-constrained (very common in Europe and Asia) and can only fill 70-80% of a server rack. By reducing every chassis’s power draw by 10-15%, we let data center operators squeeze another 1-2 systems into the same rack, thereby increasing the total storage density of the rack. Bottom line: reclaiming unused space in the data center (a rough back-of-envelope calculation follows this list).
  • We can enable some of the world’s best-in-class storage densities.
    Storage should be all about the disks. And yet, “commodity” x86 systems today have HALF of the entire chassis taken up by huge motherboards, large over-powered CPUs, and a plethora of other chips, cards, and connectors, when all you really want is to add more disks to the rack. A tightly integrated chip like ours lets us fit an entire server into the footprint of a 3.5″ disk, leaving more room for hard drives in the chassis (and ultimately in the rack).
  • We can economically provide architectures with a better CPU-to-disk ratio (e.g. 4 cores with 4 disks).
    For some applications, having a more balanced core-to-disk ratio is critical. With x86-based systems this is cost-prohibitive (especially when you want to add things like 10Gb network interfaces). For us it’s achievable and quite reasonable: all the features you want and need are already integrated into a single chip.
  • We enable smaller failure domains for distributed storage applications.
    This is indirectly tied to the item above. We have heard from many customers that they want systems with fewer drives, but they are forced to add more drives per CPU to amortize the cost of a very expensive Intel processor (and 10Gb network controller). Unfortunately, that means any time a stick of memory fails or Linux crashes, you lose a whole bunch of hard drives along with that CPU. Even worse, that failure event then places additional strain on your network (and the other server nodes in the cluster) as the cluster replicates and re-balances data. This is not only an infrastructure pain but an operations pain. Again, this is a by-product of being able to change the economics and make “smaller failure domains” a reality in hardware as well (the second sketch after this list gives a feel for the scale involved).
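
To put a rough number on the rack-density argument from the power bullet above, here is a minimal back-of-envelope sketch. The rack power budget, chassis wattage, and 12% savings figure are assumptions chosen purely for illustration, not measured Calxeda or Inktank numbers.

```python
# Back-of-envelope: how many storage chassis fit under a fixed rack power budget?
# Every figure below is an illustrative assumption, not a vendor specification.

RACK_POWER_BUDGET_W = 8000   # assumed per-rack power cap in a constrained facility
BASELINE_CHASSIS_W = 550     # assumed draw of a disk-heavy x86 storage chassis
SAVINGS = 0.12               # assumed 12% per-chassis reduction (mid-point of 10-15%)

baseline_fit = RACK_POWER_BUDGET_W // BASELINE_CHASSIS_W
reduced_chassis_w = BASELINE_CHASSIS_W * (1 - SAVINGS)
reduced_fit = int(RACK_POWER_BUDGET_W // reduced_chassis_w)

print(f"Chassis per rack at {BASELINE_CHASSIS_W} W each: {baseline_fit}")
print(f"Chassis per rack at {reduced_chassis_w:.0f} W each: {reduced_fit}")
print(f"Extra chassis gained under the same power cap: {reduced_fit - baseline_fit}")
```

With these assumed figures the rack gains two chassis under the same power cap, in line with the 1-2 extra systems mentioned above.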

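The failure-domain point can be sketched the same way: the fewer drives that sit behind a single CPU, memory bank, and Linux image, the less data the rest of the cluster has to re-replicate when that node fails. The drive capacity, fill level, and drives-per-node values below are illustrative assumptions only.

```python
# Back-of-envelope: data the cluster must re-replicate when one storage node fails.
# Drive capacity, fill level, and drives-per-node are illustrative assumptions.

DRIVE_TB = 4.0   # assumed drive capacity
FILL = 0.7       # assumed average fill level

def rereplication_tb(drives_per_node: int) -> float:
    """Data the surviving nodes must copy to restore redundancy after one node dies."""
    return drives_per_node * DRIVE_TB * FILL

for drives in (36, 12, 4):
    print(f"{drives:>2} drives per node -> ~{rereplication_tb(drives):.0f} TB to re-replicate")
```

Under these assumptions, losing a 36-drive node pushes roughly 100 TB of re-replication traffic onto the surviving nodes, while losing a 4-drive node pushes only about a tenth of that, which is exactly the operational pain the last bullet describes.
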
Similar to how software-defined storage is emerging as an unconventional way of solving the age-old storage problem, a shift is occurring in the industry around new scale-out hardware architectures. When you combine low-energy ARM-based CPUs with plenty of storage bandwidth and a fabric capable of switching multiple 10Gb ports, you suddenly have a disruptive approach to designing systems: systems that seem destined for applications like Ceph.

For additional information, please also visit: http://www.inktank.com/calxeda

Comments

  1. We can confirm that CephFS (the POSIX-compliant filesystem part of the Ceph stack) runs well on a Calxeda EnergyCore platform, as we showed in the results we presented at the recent ISC 2013 conference. Look for the poster titled “Mapping biomedical HPC workloads to low power SoC environments” that was presented there.
