Data Centre Build and Migration Part 2: Practical Issues in the DC
This is by no means exhaustive, but it is a small collection of ideas and pointers about the practical elements of Data Centre design that can sometimes fall through the cracks and cause real problems.
MRC - Maximum rack capacity
This is just how much kit you can fit into a rack, and it breaks down into the following maximums:
Physical height restriction:
Quite obvious: the taller the rack, the more you can install. Watch out, though; specifying the tallest rack isn't always the easiest option, and the other parameters below need consideration. Don't forget that the average reach of an engineer is somewhere under seven feet, so if you specify the tallest rack and put your top-of-rack switches at the top, the engineers will struggle, and patching, fault diagnosis and rectification will all take longer. Don't forget also that even when you mount the cabling patch panels on one side of the rack, you will still need cable management on the hardware side!
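The reach problem above is easy to sanity-check with a few lines of arithmetic. This is a minimal sketch; the rack height, plinth offset and reach figures are all illustrative assumptions, not measurements from any real rack.

```python
# Rough sketch: how many rack units (U) of a tall rack sit within an
# engineer's working reach. All figures below are assumed for illustration.

U_MM = 44.45                 # height of one rack unit in mm (1U = 1.75 in)
RACK_U = 48                  # a tall 48U rack
BASE_OFFSET_MM = 100         # assumed plinth height of the first U above the floor
REACH_MM = 7 * 304.8         # "somewhere under seven feet", converted to mm

# Whole rack units whose top edge falls within reach
reachable_u = int((REACH_MM - BASE_OFFSET_MM) // U_MM)
print(f"Of {RACK_U}U, roughly {min(reachable_u, RACK_U)}U are within reach")
```

Even with generous assumptions, the top few U of a tall rack end up above comfortable working height, which is exactly why top-of-rack switches at the very top slow engineers down.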
Maximum power draw:
Each rack has a specific maximum amount of electrical power it can provide. It is not wise simply to add up the power requirements defined for day-one operation if some of the hardware has the potential to expand via additional PSUs for resiliency or card expansion. (I always try to paste a notice of the day-one power draw figure on each rack; at least then the BAU team have a fighting chance.)
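The point about expansion headroom can be sketched as a simple per-rack budget check. The device names, wattages and rack feed capacity below are purely illustrative assumptions:

```python
# Minimal per-rack power budget sketch (hypothetical figures).
# Day-one draw alone is misleading if hardware can later gain extra
# PSUs or cards, so track a "fully expanded" figure alongside it.

RACK_MAX_WATTS = 7_000  # assumed capacity of the rack's power feed

# (device, day-one watts, fully-expanded watts) -- illustrative values
devices = [
    ("core-switch", 450, 900),      # second PSU plus extra line cards
    ("server-01", 550, 750),
    ("server-02", 550, 750),
    ("storage-array", 1_200, 1_800),
]

day_one = sum(d[1] for d in devices)
expanded = sum(d[2] for d in devices)

print(f"Day-one draw:        {day_one} W")
print(f"Fully expanded draw: {expanded} W of {RACK_MAX_WATTS} W available")
if expanded > RACK_MAX_WATTS:
    print("WARNING: rack would exceed its power feed once fully expanded")
```

The "fully expanded" column is the figure worth pasting on the rack alongside the day-one number.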
Maximum cooling:
This used to be a thing, like MP3 players, but like vinyl records it's making quite a comeback. As DC design has matured over the years, the idea of a specific cooling ceiling per rack has somehow fallen out of use. However, there is only a specific amount of chilled air that can be pushed through any rack; that's simple enough to understand, so even an oversimplified "chilled air per row divided by the number of racks in the row" calculation gives you an idea of peak cooling capacity. Measure this against the heat output of the hardware in BTU (British Thermal Units) and you can quickly see whether you have a major problem that needs early redesign.
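That oversimplified rule of thumb can be written out directly. The row cooling capacity and rack IT load below are illustrative assumptions; the only fixed number is the standard conversion of roughly 3.412 BTU/h of heat per watt of IT load:

```python
# Peak-cooling sanity check from the rule of thumb above:
# cooling available per rack = row cooling / racks in the row,
# compared against the hardware's heat output. Figures are assumed.

BTU_PER_WATT_HOUR = 3.412          # 1 W of IT load ~= 3.412 BTU/h of heat

row_cooling_btu_h = 120_000        # assumed chilled-air capacity for the row
racks_in_row = 10
rack_it_load_watts = 4_000         # assumed IT load in one rack

cooling_per_rack = row_cooling_btu_h / racks_in_row
heat_output = rack_it_load_watts * BTU_PER_WATT_HOUR

print(f"Cooling share per rack: {cooling_per_rack:.0f} BTU/h")
print(f"Rack heat output:       {heat_output:.0f} BTU/h")
if heat_output > cooling_per_rack:
    print("Problem: this rack produces more heat than its cooling share")
```

In this made-up example a 4 kW rack already overruns its share of the row's cooling, which is precisely the kind of result that should trigger an early redesign rather than a late surprise.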
Cabling
Cabling really is the poor relation in all this. It’s become so mundane it receives little (if any) attention. However, it can cause massive problems and huge delays to any DC build and migration. It is also worth remembering that it’s cable which connects the DC to the outside world. It is important that cabling is not only right sized but also provides the correct media, connector and presentation, consider:
- CAT6 4UTP Copper cabling
- CAT5e 4UTP Copper Cabling
- Multimode Optical Fibre (OM1,2,3 or 4)
- Micron size of core and cladding, i.e. 62.5/125 micron, 50/125 micron etc.
- Single Mode Optical Fibre (OS1 or 2)
- Micron Size of core and cladding i.e. 8/125 etc
- Optical connectors are numerous (MTRJ, LC, ST, SC)
A final word regarding optical fibre, and a really important issue: whether you opt for cross-connect cabling or not. This effectively twists the cores in a duplex (2-core) cable so that transmit at one end becomes receive at the other. It is worth remembering that each time you cross the cables along the path (patch lead to cross-row to in-row to in-rack) you effectively undo the cross, with every even-numbered crossing cancelling the one before it. Not all hardware is auto-sensing, and you may end up needing a mix of crossed and straight-through cables in the same rack if you do not get this right.
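The parity logic above can be sketched in a few lines. The segment names mirror the path just described, and the helper function is purely illustrative:

```python
# Sketch of crossover parity: each crossed segment in the end-to-end
# path flips Tx/Rx, so an even number of crossed segments leaves the
# link straight-through overall and an odd number leaves it crossed.

def link_is_crossed(segments):
    """Return True if the end-to-end fibre path swaps Tx and Rx."""
    crossed_count = sum(1 for _name, crossed in segments if crossed)
    return crossed_count % 2 == 1  # odd number of crossovers = crossed link

# Illustrative path: (segment name, is this segment a crossover cable?)
path = [
    ("patch lead", True),
    ("cross-row", True),   # this second crossover undoes the first
    ("in-row", False),
    ("in-rack", True),
]

print("End-to-end link crossed?", link_is_crossed(path))
```

Counting the crossovers along each planned path before ordering cable is far cheaper than discovering, at patching time, that half the links in a rack need the opposite cable type.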
Cabling breaks into a few neat little packages:
In Rack
Cabling inside the rack distributing from patch panels or switches etc to other hardware. Generally patching.
In Row
As the name suggests this is cabling which distributes incoming and outgoing signals along the row of racks. Worthy of note here is the consolidation point, a single rack to which all the in row cabling terminates allowing patching to any other row and to the final incoming consolidation rack or meet-me point.
Cross Row
This is best thought of as the motorway which connects each row together, just as cities are connected to each other by motorways. You need high capacity here, and it needs to allow for any of the cable types to connect across the rows.
In Suite
This is how you get from your DC to the telco racks in the meet-me points of presence. As you will (or should) know the incoming cable media and connector types, you can terminate these into your incoming consolidation rack and from there jump off via patching to any point in your DC.
External
The cabling which connects your DC to the outside world and through which migration and operation will be enacted.
There are some really obvious but also critically important design steps here which need to be addressed to ensure that your build project is as de-risked as possible.
In an existing DC campus there will be telcos with existing capacity and tails (cable cores) available. Using them can save a huge slice of project time, as the generally quoted lead time for this type of cabling (from the local exchange to the DC) can be up to 100 working days. Don't go simply for price! A marginally lower price is always welcome, except in this instance: a quote from any telco is purely an estimate of cost and time, and they cannot be held to the price or the duration they state. Research which telcos are already on site and where their termination points are (are they in your meet-me point, or will they have campus cabling to undertake?), and who has presence and capacity on the local exchanges (you will need two of these for resiliency). Use the telco with the nearest presence to your DC suite and with capacity to spare. It will save you time, money and a huge number of sleepless nights.
Air Flow
Chilled air in a DC is forced through the hardware from one side to the other to keep it cool. You would think that all IT manufacturers would design their equipment in one of two ways.
Either:
- A globally recognised direction (front to back, let's say)
- Able to accept any direction of chilled air flow.
Unfortunately that is not the case. For whatever reason manufacturers produce equipment which has:
- Front to Back
- Back to Front
- Left to Right
- Right to Left
- Right to Back
- Left to Back
- Sides to Back
- Bottom to Top
- Top to Bottom (I have never encountered this one but have been informed by an old DC lag)
There is a good profit to be made from designing air-redirecting hardware, which generally bolts to the rack and the unit. It does a good job; however, it is expensive to purchase and fit (especially if you hadn't foreseen the need) and can strip your contingency budget very quickly. The delay in having this metalwork manufactured can also cause hold-ups right when you wanted to make some major progress.
The best form of protection here is, of course, prevention. A set of guiding principles at the outset of design can save huge amounts of disruption, delay and cost. It can also help your DC function better and more cheaply.
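One such guiding principle can be encoded as a simple procurement check: flag any candidate device whose airflow does not match the aisle design, so the air-redirect kit (or a substitute product) is costed before ordering rather than after. The device names and directions below are illustrative assumptions:

```python
# Sketch of an airflow guiding principle: flag any candidate device
# whose airflow direction conflicts with a front-to-back hot/cold-aisle
# design. Device list and directions are purely illustrative.

AISLE_DESIGN = "front-to-back"  # cold aisle at the front, hot at the rear

candidates = {
    "router-a": "front-to-back",
    "switch-b": "side-to-back",
    "server-c": "front-to-back",
    "switch-d": "back-to-front",
}

needs_redirect = [name for name, flow in candidates.items()
                  if flow != AISLE_DESIGN]

print("Devices needing air-redirect hardware (or substitution):")
for name in needs_redirect:
    print(f"  {name}: {candidates[name]}")
```

Running a check like this at design time turns an unforeseen metalwork order into a line item agreed up front.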
Logistics
A recent DC build I managed was at a state-of-the-art facility. The client was ordering all the hardware via the company I worked for, so you'd assume that all would go well. The truth, though, was that before I arrived on the scene no one had thought about:
- How much kit would turn up in one delivery
- Who would accept and sign for it
- Where it would be stored until required
- Who would check the contents for completeness
- Where any equipment could be unpacked, built, staged, checked and loaded
- Where the packaging would be stored and how it would be disposed of
- How returns would be managed
All of this had to be managed while trucks were rolling: meeting rooms were turned into secure storage, refuse skips were ordered and a working lab was built. Some thought up front about the management of logistics would have made things much easier.
Of course there is a lot more to consider, but hopefully I've shed some light on the sorts of issues that need consideration alongside the actual DC design work to ensure the build runs as smoothly as possible.
Ken Jacobson has worked in the IT industry for nearly thirty years and has designed, built and migrated Data Centres for some of the leading businesses, organisations and government bodies. To date he has not been involved with a single failed DC migration.