Concept of Server Colocation

Introduction:

Server Colocation is a concept for those who own their server hardware and want complete control over its configuration. Colocation specialists provide connectivity to the user’s server through a fast internet connection, usually in a secure datacenter, with round-the-clock support.

There are several aspects to look for while selecting a Colocation provider. It is important to select a provider that either has their own data center or has a presence in one of the leading “data hotels.” First-class datacenters are located in major commercial cities such as New York, London, and Frankfurt. These datacenters are located in big cities because they benefit from the convergence of high-capacity network connectivity that occurs in a major commercial centre.

It is important to note that not all internet connections are equal. It is essential to check whether one’s Colocation specialist is “multihomed,” i.e. using the BGP protocol with at least two connections to Tier One providers. Tier One providers are very large ISPs or telcos that operate their own fibre links and networks without having to pass traffic (or transit) over another ISP’s network. An example of a Tier One provider would be Level3 Communications.

In addition to Tier One connections, the Colocation specialist should have peering arrangements at major exchange points such as LINX. Peering cuts out the middleman, which not only improves resilience but also reduces latency; people will be able to access the websites and content hosted on one’s collocated server much faster if one’s colo host is well peered.

A web host or Colocation ISP will often negotiate peering arrangements with other ISPs with whom they exchange large volumes of traffic. This not only provides them with more resilience, but also helps reduce their own transit costs with the Tier One providers, enabling them to offer more competitive data transfer prices.

Choice of Hardware

Hosting a server in a major city such as London or New York is expensive, so one’s hardware needs to be “rack optimized.” Colocation is usually priced “per U.” A “U” is 1.75 inches or 4.44 centimeters, which is why rack servers, unlike desktop machines, are long and flat. Data centers also employ different types of rack cabinets, which are normally 42U in height and can house up to 42 1U servers with some space for switches and cables. Quite often some space is left unused, usually to allow efficient air distribution in the cabinet.
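
As a purely illustrative example (the rate is hypothetical, not a quote from any provider), a 1U server colocated at £60 per U per month would cost £720 a year, whereas the same machine in a 4U case would cost £2,880 a year, which is why rack-optimized hardware usually pays for itself quickly.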

One should always talk to the colo specialist about the physical hardware required, including the case and rack mounting. This is important because the Colocation specialist knows which brands work best in their racks.

After finalizing the hardware, the next important thing to consider is how to ship the server to the remote Colocation data centre. Delivering a server hundreds of miles away is quick, as flights go everywhere, but once the machine is out of one’s hands it is no longer easy to fix, so one must work through all the technical checks below before sending it out.

Check that the operating system reboots unattended

All collocated dedicated servers run without a keyboard or monitor. It is therefore important to ensure that the server gets past the BIOS screen and boots the desired kernel without requiring any keys to be pressed. It is often possible to set the “Halt On” option in the BIOS of one’s dedicated server to “No Errors.”

If one is running Linux, one needs to ensure that the correct kernel is booted without any manual intervention. This is determined by the configuration in /etc/grub.conf if the GRUB boot loader is used, or /etc/lilo.conf if the LILO boot loader is used. The user must remember to run “/sbin/lilo -v” after making any changes to the LILO configuration and check that it reports no errors. This avoids trouble later.
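
As a minimal sketch (the kernel version, disc names and exact file location are illustrative assumptions and vary between distributions), a GRUB configuration that boots the intended kernel unattended might look like this:

    # /etc/grub.conf -- boot the first entry automatically after a short timeout
    default=0
    timeout=5
    title Production kernel
            root (hd0,0)
            kernel /vmlinuz-2.6.9 ro root=/dev/sda1
            initrd /initrd-2.6.9.img

With LILO, the equivalent is to point “default=” at the correct image label in /etc/lilo.conf, run “/sbin/lilo -v” and confirm that it reports no errors.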

In addition to the above, one must also check that the kernel “works” properly with the hardware.

Check that the server auto powers on

Many Colocation facilities provide an auto power cycler controlled from a web interface, but this is of little use if the machine does not power itself back on when power is restored. Server BIOS’s usually offer “OFF,” “LAST STATE” or “ALWAYS ON” as the state after a power loss; the user must ensure this is set to “ALWAYS ON.” It is easy and cheap to hack cheaper ATX motherboards to be “ALWAYS ON,” but in reality it is much better to choose a more expensive motherboard that supports this properly.

Network configuration

One must make sure that the network addresses, DNS servers and gateway are properly configured before the dedicated server is delivered. All of this information is provided in advance by the Colocation provider. In addition, one must ensure there is a way to get back into the server remotely by having the SSHD daemon running.
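
As an illustrative sketch only (the addresses are made up, and the file location shown is the Red Hat-style convention; other distributions use different files), a static configuration built from the details supplied by the provider might look like this:

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- static details from the colo provider
    DEVICE=eth0
    BOOTPROTO=static
    IPADDR=192.0.2.10
    NETMASK=255.255.255.0
    GATEWAY=192.0.2.1
    ONBOOT=yes

The provider’s DNS servers go into /etc/resolv.conf as “nameserver” lines.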

OpenSSH ships with all the main Linux distributions. It is worth configuring SSH to accept Protocol 2 only, to disable root logins and to turn off X11 forwarding, as none of these are required on a production server.
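
A minimal sketch of the corresponding directives in /etc/ssh/sshd_config (the SSH daemon must be restarted after editing the file):

    # /etc/ssh/sshd_config -- hardening suggested above
    Protocol 2
    PermitRootLogin no
    X11Forwarding no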

Many servers have multiple Ethernet connectors. If one is not using the second interface, it is helpful to either mask it off with tape or label the correct Ethernet device. One should also set up a serial console.

Sometimes a Colocation provider will have a serial terminal server on site. This is basically a server in its own right, but with a large number of serial ports, which enables one to connect to one’s server if the network connection to it has failed for any reason.
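
As a sketch under assumptions (the port, speed and terminal type must match what the provider’s terminal server expects), enabling a serial console means pointing kernel console output at the serial port and running a login prompt on it:

    # kernel line in the boot loader -- duplicate console output to ttyS0 at 9600 baud
    kernel /vmlinuz-2.6.9 ro root=/dev/sda1 console=tty0 console=ttyS0,9600n8

    # /etc/inittab -- provide a login prompt on the serial port
    S0:2345:respawn:/sbin/agetty -L ttyS0 9600 vt100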

Cooling arrangements

Modern processors mean servers run hot, even in a fully air-conditioned environment. To the user this looks like a cooling problem, but in reality it is one of rack density. This is another reason to choose a rack-optimized server, as its design will have taken air cooling in a collocated environment into consideration. One should also become familiar with the chipset architecture of the motherboard and consider setting up “sensors,” so that the temperature of the CPU and motherboard can be monitored and graphed using MRTG.
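
As a hedged sketch (package and module names vary between distributions, and sensors-detect asks interactive questions the first time it is run), the lm_sensors tools behind the “sensors” command can be set up and read like this:

    # detect and load the appropriate sensor kernel modules (run once, interactively)
    sensors-detect

    # print current CPU and motherboard temperatures, fan speeds and voltages
    sensors

The numeric output can then be fed to MRTG to keep a running graph of CPU and motherboard temperature.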

Additional things

In addition to the above, one may wish to consider disc or hard drive redundancy. Although the MTTF (Mean Time To Failure) quoted by hard drive manufacturers is impressive, hard drive failure, especially IDE failure, happens depressingly often. With the arrival of S-ATA drives there is really no excuse, especially if one is on a budget: cheap S-ATA drives with Linux software RAID are worth considering. Modern Linux distributions now come with tools to administer and monitor software RAID arrays.
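
As an illustrative sketch only (the device names and the RAID 1 layout are assumptions made for the example), a mirror across two cheap S-ATA discs can be built and monitored with the standard mdadm tool:

    # create a RAID 1 mirror from two discs
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # check the health of the array
    cat /proc/mdstat
    mdadm --detail /dev/md0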

Finally, it is a real help to the data centre technicians if all the main indicator lights are working: the power light, hard drive activity light and network light can be very useful. Last but not least, the server should be correctly labeled at the front with its hostname and IP address so that it can be identified immediately.

About the author