Communications and data access are integral for business operations. Websites, email and messaging services, and servers have to be up and protected for business communications to function. Data centers that house mission-critical infrastructure – the most vital parts of the company network, like servers and databases – must sustain the power, climate-control and connectivity that network infrastructure demands.
The colocation facility should have redundant systems with sufficient capacity at every point of operation, from climate-control systems to network equipment. Having multiple units to handle capacity is not the same as redundancy. All systems should be in an N+1 configuration: if there are two units, each should run at less than 50% capacity so that if one fails, the other can handle the full load; if there are three units, each should run at no more than 66% capacity.
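The N+1 sizing rule above generalizes: with n units, the survivors must absorb a failed unit's share, so each unit may carry at most (n - 1)/n of its rated capacity. A minimal sketch of that calculation (a hypothetical helper, not part of any facility-management tool):

```python
def max_safe_utilization(units: int) -> float:
    """With N+1 redundancy, the remaining units must absorb the load of
    one failed unit, so each may run at no more than (n - 1) / n of its
    rated capacity."""
    if units < 2:
        raise ValueError("N+1 redundancy requires at least two units")
    return (units - 1) / units

# Two units: each should stay under 50% load.
print(f"{max_safe_utilization(2):.0%}")  # 50%
# Three units: each should stay under roughly two-thirds load.
print(f"{max_safe_utilization(3):.0%}")  # 67%
```

The same rule applies whether the units are chillers, CRAC units, or network devices.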
Several factors determine the quality of a colocation center’s Internet connectivity:
Colocation data centers provide Internet access through tier-1 carriers, tier-2 carriers, or carrier-neutral access. Tier-1 service provides direct access to one of the major Internet backbone networks, such as AT&T's. Tier-2 providers route traffic among multiple tier-1 providers, so reliability and speed depend on how effectively traffic is routed. Carrier-neutral access allows service from any carrier but requires customers to configure their own routing and maintain their own connections to Internet backbones. Tier-2 access is preferable: routing is configured by the colocation facility, and because more backbones are available, uptime and network performance improve.
The way Internet traffic is routed, in both the hardware and the routing logic, has a significant effect on connectivity. Effective routing covers three areas:
A hardware-redundant, dynamically adjusting routing system creates a self-healing network that offsets backbone problems, traffic and load, and hardware failures so that service doesn’t suffer.
The design of the facility itself has an impact on the performance of systems housed in the colocation center. Server rooms should control airflow between rows: hot exhaust air from the servers is expelled into one row, while cool air is drawn in from the other. These designated hot and cold rows circulate the air to keep servers from overheating.
Appropriately sized, redundant climate-control units supply cold air and control humidity. Servers have strict climate requirements: about 72 degrees and 45% relative humidity. Colos must have both chillers for facility-wide climate control and computer room air conditioning (CRAC) units for the server rooms. Cooling capacity is calculated by dividing total tonnage by square footage. For example, two 50-ton chillers in a 4,000-square-foot facility give a chiller capacity of 100 / 4,000 = 0.025 tons per square foot. Chillers and CRAC units should each provide roughly 0.030 tons per square foot or more.
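The cooling-density calculation above is simple enough to sketch directly. The figures below mirror the example in the text; the function name and threshold parameter are illustrative, not an industry API:

```python
def cooling_density(unit_tons: float, unit_count: int, square_feet: float) -> float:
    """Return cooling capacity in tons per square foot: total tonnage
    divided by the facility's floor area."""
    return (unit_tons * unit_count) / square_feet

# Two 50-ton chillers in a 4,000-square-foot facility.
density = cooling_density(unit_tons=50, unit_count=2, square_feet=4000)
print(f"{density:.3f} tons per square foot")  # 0.025 tons per square foot
```

Run the same calculation separately for the chillers and the CRAC units, since each system must meet the capacity target on its own.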
If the primary power source fails, generators and UPSs are vital to keep the network online. The UPSs run the servers while power switches from utility power to the generators, so there must be generators onsite for immediate backup power. The power system should have the following features:
Making a Decision
Keeping servers offsite can be a good decision logistically, but only if the data center provides a reliable, secure network environment. Look for potential pitfalls, areas where the service may not offer enough reliability or performance:
ACC's San Diego colocation services utilize a state-of-the-art San Diego data center. http://www.colocation.ccccom.com