There have been lots of new Cisco Data Center products released in the past 6-12 months that, due to moving country, I haven't had the chance to talk about!
Hopefully the upcoming blogposts will change some of that. I have been doing a lot of UC blogposts recently, so here's hopefully a catchup for you on some of the things happening in the Data Center world!
First of all, let's talk about the UCS M-Series of servers. When first reading about the M-Series, your initial reaction may be one of confusion; it certainly was for me. Read on for more information...
The UCS M-Series server is kind of like a chassis-based rack server: you have a 2RU unit that can take up to 8 "compute cartridges." These cartridges consist of CPU and memory only; they don't have any HDDs or adapters! Each compute cartridge actually gives you TWO servers, so this 2RU unit provides 16 servers (when fully populated).
The supported cartridge count is 2, 4, 6 or 8. You can't have an odd number for some reason.
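To put that density in perspective, here's a quick back-of-the-envelope sketch (my own hypothetical comparison, assuming a standard 42RU rack and one server per traditional 1RU rack-mount unit; these are not Cisco's official numbers):

```python
# Back-of-the-envelope density comparison for a 42RU rack.
# Assumptions: a fully populated M-Series chassis is 2RU and
# holds 8 cartridges x 2 servers = 16 servers.

RACK_UNITS = 42

# UCS M-Series: 2RU chassis, 8 cartridges, 2 servers per cartridge
m_series_servers_per_chassis = 8 * 2
m_series_servers_per_rack = (RACK_UNITS // 2) * m_series_servers_per_chassis

# Traditional rack servers: one server per 1RU
rack_servers_per_rack = RACK_UNITS // 1

print(f"M-Series servers per rack: {m_series_servers_per_rack}")  # 336
print(f"1RU rack servers per rack: {rack_servers_per_rack}")      # 42
```

Even ignoring blade chassis, that's roughly an 8x jump in server count for the same rack space, which is the whole point of the product.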
A closer look at the specs of these servers reveals an interesting constraint:
The Intel CPUs available to you are not exactly going to set the world on fire; the E3 only has 4 cores, for example. This is by design: these servers are meant to use the minimum amount of power and cooling. The memory tops out at 32 GB (I believe 64 GB is coming).
Here's the kicker that made me realise the purpose of these servers: they're not intended to run VMware!
These servers are intended for custom applications where you need lots of easily accessible compute resources and where the application itself provides its own failover capabilities.
Imagine a world without VMware for a second, and imagine how incredibly powerful Cisco UCS Service Profiles would be in it: the statelessness of servers wouldn't just make it easier to replace ESX hosts after a failure, it would be an absolute game changer.
For those companies with bespoke, custom applications that already have their own failover and scaling methods, the UCS M-Series provides the hardware piece of the puzzle, allowing them to easily provision new servers and replace failed servers.
To quote Todd Brannon of Cisco: “We just see the increasing use of distributed computing, which is very different from heavy enterprise workloads, where you put many applications in virtual machines on a server node. This is about one application spanning dozens, hundreds, or thousands of nodes.”
For these distributed computing workloads, all that cooling and power costs money and takes up valuable space. The M-Series allows you to reach a density you simply couldn't reach even with a blade chassis!
Hopefully I have done a decent job of explaining the real-world application for the Cisco M-Series servers. It's all about high density for bespoke, single applications that span many servers (think online gaming, e-commerce, webhosting, etc.)
To make this possible, a few technologies were developed.
The first problem to be solved: all we want in the compute cartridges is CPU and RAM, with no local disk and no adapters.
Instead of boot-from-SAN, each M-Series chassis has a collection of local disks shared amongst the compute cartridges. This is done in a similar fashion to the virtualization of adapters we see in VM-FEX, by doing some nifty things with the PCIe bus and the Virtual Interface Card:
I am not sure why Cisco did not consider boot-from-SAN to solve this problem. Perhaps they don't want the applications to rely on boot-from-SAN? Can I even configure boot-from-SAN on the M-Series? I intend to find the answers to these questions and will let you guys know ASAP!
In the back of the chassis you will find slots for 4 local HDDs that are shared amongst the cartridges (more detail on exactly how you configure that will be given in a later blogpost).
The table above gives you an idea of some of the options available.
From a network-out perspective, the chassis has 2 x 40 Gig QSFP+ connectors, providing tons of bandwidth out to your fabric interconnects; obviously you would cable one port to FI-A and the other port to FI-B.
What's that I hear you say? "But Pete! These are uplinked to an FI, and the 6248 series is currently 10 Gig SFP+ compatible only!"
No problem here: simply use a QSFP 40 Gig to 4 x 10 Gig SFP+ breakout cable:
When you go into UCS Manager for the M-Series, you will even see that each fabric interconnect shows 4 links (8 in total), even though there are only 2 physical interfaces:
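The link maths works out neatly; here's a quick sketch of it (my own illustration, using the port counts described above):

```python
# Each M-Series chassis has 2 x 40G QSFP+ ports (one cabled to
# FI-A, one to FI-B). Each breakout cable splits one 40G QSFP+
# port into 4 x 10G SFP+ links.
qsfp_ports = 2
breakout_links_per_qsfp = 4

links_per_fabric = 1 * breakout_links_per_qsfp          # 4 links shown per FI
total_links = qsfp_ports * breakout_links_per_qsfp      # 8 logical links total
total_bandwidth_gbps = total_links * 10                 # 80 Gbps per chassis

print(f"Links per fabric interconnect: {links_per_fabric}")  # 4
print(f"Total logical links: {total_links}")                 # 8
print(f"Total uplink bandwidth: {total_bandwidth_gbps} Gbps")
```

So the 8 links UCS Manager reports are simply the 4 breakout lanes from each of the 2 physical QSFP+ ports.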
Let's take a look at the rear of one of these chassis so you can see all the uplinks:
You can see the slots for the disks (4 per chassis), the management interfaces, console access, and of course the power supplies and the 2 x 40 Gig QSFP+ uplinks.
Here's a look at the front of the chassis:
Finally, a logical diagram provides an overview of these connections:
I hope this gives you a deeper understanding and appreciation of the real-world problem the M-Series is trying to solve!