How To Move a Data Center in 35 days – Week 2

With such a tight deadline, we met with the owner of our current colocation facility to discuss contingency plans in case we could not complete the move as quickly as planned. Remember from our previous post: there was a lot of work to complete at our headquarters facility before we would be ready for this move. Thankfully, we were able to reach an agreement that set out monthly power-consumption reductions in exchange for extending our deadline to mid-May. This agreement became a crucial part of our contingency plan should we need it later.

Weekend Move: First Migrations


As part of our migration plan, we knew we would relocate systems from our colocation facility to our corporate facility in a series of moves. The first weekend was scheduled with the goal of helping our teams identify any gaps in our plans before the larger moves scheduled for the following weeks.

This first weekend targeted an initial 35 servers for migration. Personnel identified in our earlier planning sessions, along with stakeholders for the systems being relocated, were notified before any possible interruption of service. Planning really pays off here, as we were working on a very tight timeline.

The 35 servers were a mix of iSCSI storage nodes, VMware ESX servers and some legacy systems. We started promptly at 5 am by uncabling systems and readying the racks for roll-out.

The first hiccup: our colocation provider removed power from the wrong racks. This knocked some production systems temporarily offline and required our attention to ensure services were restored before we could pull the plug on the systems we actually planned to move. The colocation partner restored power and then cut power to the correct racks, but the mix-up delayed our load time. Finally, at 11 am on Saturday, we had all of the systems loaded on the truck and were headed on our 30-minute ride north.

Once we arrived in Andover, we unloaded our racks and moved them into our existing data center. The racks were rolled in, powered up and connected to the network by 5 pm. The first order of business was to get an accurate measurement of our power consumption. When we powered up the network chassis and systems, the current draw was much lower than we expected. This gave us some wiggle room in our earlier estimates, which was unexpected good news!


Part of the closing activities for day one was to add /32 host routes to our routers so that the IP addresses of the migrated systems remained reachable. This is a common networking technique, and it is quite handy for allowing servers in the same address space to sit in two different physical locations.
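As a sketch, host routes of this kind might look like the following on a Cisco-style router; the addresses and next hop here are made up for illustration, not taken from our actual network:

```
! /32 host routes for two migrated servers, pointing traffic for just
! those addresses at the corporate data center (addresses hypothetical)
ip route 192.0.2.21 255.255.255.255 198.51.100.1
ip route 192.0.2.22 255.255.255.255 198.51.100.1
```

Because a /32 is the most specific match possible, these routes win over the broader subnet route, which is what lets the rest of the subnet stay at the colocation facility while the migrated hosts answer from the new site.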


The team was split into groups, with team one focusing on the preliminary cabinet layout for the new data center and team two wrapping up final tasks from the previous day's move. A key partner, our purchasing department, had been able to expedite eight of the 22 cabinets we had on order, which helped with our planned migration out of the cabinets we had brought north the day before.

While those teams worked, our network team captured temperature readings from the servers at the colocation facility and compared them to previous readings. We wanted to determine whether we would see a negative impact in our corporate data center before our additional A/C systems became available. Thankfully, it was still winter during our move, so we weren't faced with systems taxed by late-summer heat.
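The comparison itself is simple: take a baseline reading per server before the move, take a current reading, and flag anything that warmed up by more than some margin. A minimal sketch in Python, with server names, readings and the threshold all invented for illustration:

```python
# Hypothetical sketch: flag servers whose temperature rose noticeably
# versus a baseline captured before the move. All values are made up.

BASELINE_C = {"esx01": 24.0, "esx02": 23.5, "iscsi01": 26.0}
CURRENT_C = {"esx01": 25.0, "esx02": 27.5, "iscsi01": 26.5}

THRESHOLD_C = 3.0  # flag servers that warmed by more than this many degrees


def flag_hot_servers(baseline, current, threshold):
    """Return the servers whose reading rose more than `threshold` deg C."""
    return sorted(
        name
        for name, temp in current.items()
        if temp - baseline.get(name, temp) > threshold
    )


print(flag_hot_servers(BASELINE_C, CURRENT_C, THRESHOLD_C))  # → ['esx02']
```

In practice the readings would come from each server's management interface rather than hard-coded values, but the delta-against-baseline check is the part that tells you whether the room is keeping up.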

By Monday, we had executed a partial move with no impact on our key stakeholders. These results were presented to senior management along with the remainder of our move plan, and we received executive sign-off for the next two weekend moves.

Mark Townsend

About Mark Townsend

Mark Townsend's career has spanned the past two decades in computer networking, during which he has contributed to several patents and pending patents in information security. He has established himself as an expert related to networking and security in enterprise networks, with a focus on educational environments. Mark is a contributing member to several information security industry standards associations, most notably the Trusted Computing Group (TCG). Townsend's work in the TCG Trusted Network Connect (TNC) working group includes co-authoring the Clientless Endpoint Support Profile. Townsend is currently developing virtualization solutions and driving interoperability testing within the industry. Prior to his current position, he has served in a variety of roles including service and support, marketing, sales management and business development. He is considered an industry expert and often lectures at universities and industry events, including RSA and Interop. Mark is also leveraging his background and serving his community as Chairman of the local school board, a progressive school district consistently ranked in the top school districts of New Hampshire, with the district high school ranked as a "Best High School" by US News & World Report.
