Data center switches are pivotal in enabling seamless communication between devices within the digital infrastructure. As operations expand, selecting the appropriate switches becomes crucial. With the ever-growing need for scalability, performance, and uptime, choosing switches wisely helps maximize efficiency.
This article provides critical pointers to navigate the selection process. It details factors to examine, like port speeds, density, and management capabilities. Incorporating these tips empowers planning that aligns perfectly with evolving needs.
From small starting setups to large-scale nationwide deployments, following expert guidance results in future-proofed foundations ready for whatever lies ahead.
1. Examine Port Requirements
The port configuration is a primary characteristic, determining how many endpoints can connect directly to the switch.
Carefully evaluate your current port needs along with reasonable short-term and long-term growth projections. Some data center switches ship with fixed port counts of 48 or 96 ports, for example, while modular rack-mount designs with line card expansion slots let you add further port modules later, if and when demand increases.
Consider factors like typical server density within your racks, which ranges from roughly four servers per rack unit in dense blade configurations down to a single server occupying a larger 2U or 4U standalone chassis. Determine whether connections from high volumes of end-user devices in separate areas of a large campus, or dozens of wireless access points, should terminate directly on the core switch, or whether aggregation switches installed in local wiring closets can help limit port usage.
Also document the required port types, such as SFP+ versus faster OSFP optics and multi-mode versus single-mode fiber, as well as connector variants like LC or MPO.
It's essential to account for spare capacity when determining the proper port count. For example, allowing at least 10-20% headroom for unanticipated connections in the next year avoids requiring an early and expensive equipment upgrade.
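As a quick sanity check on sizing, a short calculation along the lines of the Python sketch below can help; the rack, NIC, and uplink counts are illustrative assumptions rather than recommendations.

```python
# Rough top-of-rack port-count estimate. The rack, NIC, and uplink figures
# below are hypothetical examples, not recommendations.
import math

def required_ports(servers_per_rack: int, nics_per_server: int,
                   uplinks: int, headroom: float = 0.2) -> int:
    """Ports a top-of-rack switch should provide, including growth headroom
    expressed as a fraction (0.2 = 20%)."""
    base = servers_per_rack * nics_per_server + uplinks
    return math.ceil(base * (1 + headroom))

if __name__ == "__main__":
    # 20 dual-homed 1U servers plus 4 uplinks, with 20% headroom
    print(required_ports(servers_per_rack=20, nics_per_server=2, uplinks=4))
    # prints 53, so a 48-port switch is already tight for this rack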
Consider installing redundant switches and following cabling best practices, such as a home-run cabling design. For vast campus environments, evaluate specific building and floor use cases to decide whether a single switch with thousands of ports is justified or whether aggregation should limit the size of individual switches.
2. Evaluate Switching Speeds
Applications in today's powerful data centers have intensive bandwidth and throughput requirements far surpassing what was standard just a few years ago. For example, advanced distributed storage arrays dynamically push and retrieve multi-terabyte data objects between servers and clients.
Researchers also rely on these centers to crunch massive datasets for complex computational workloads, such as genomic sequencing or oil and gas exploration. Additionally, emerging technologies like artificial intelligence and machine learning training involve rapidly transmitting petabytes of training models and database information between GPU-accelerated servers.
Carefully consider your most intensive current and credibly planned workloads, such as database clusters with 100GbE or EDR InfiniBand connections pushing 50GB/sec between servers, to accurately target a switching throughput and overall bisection bandwidth able to handle the most data-heavy situations that may occur.
Switch throughput is typically quoted as the aggregate bandwidth supported across all ports, often expressed in terabits per second of switching capacity. For example, a top-of-rack switch with 96 ports of 100GbE or faster and a correspondingly sized switching fabric would prove invaluable for applications pushing multiple petabytes daily. However, simpler virtualized server workloads may find 10GbE switching more than adequate if they stay well under those limits.
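A related figure worth checking when sizing a top-of-rack switch is the oversubscription ratio between server-facing ports and uplinks; the sketch below shows the arithmetic with hypothetical port counts and speeds.

```python
# Back-of-envelope oversubscription check for a top-of-rack switch.
# Port counts and speeds are illustrative assumptions only.

def oversubscription_ratio(downlink_ports: int, downlink_gbps: float,
                           uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of total downlink capacity to total uplink capacity; higher
    ratios mean more contention for east-west-heavy workloads."""
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

if __name__ == "__main__":
    # 48 x 25GbE server-facing ports fed by 6 x 100GbE uplinks
    ratio = oversubscription_ratio(48, 25, 6, 100)
    print(f"{ratio:.1f}:1 oversubscription")  # prints 2.0:1
```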
It's also wise to benchmark representative workload situations in a testing environment before committing to a purchase to validate your assumptions around sufficient switching headroom and avoid future saturation issues. Factor the industry's historical trend of exponentially intensifying data and bandwidth requirements over time into your predictions.
A design focused on many years of future-proofed operation avoids expensive early equipment replacement.
3. Emphasize Power Efficiency
Power consumption sits high on data center operators' strategic roadmaps, as the electricity needed to power and cool massive infrastructures can easily exceed hardware costs over the equipment's lifetime if not optimized. Power supply and cooling system capacities also constrain which switching options can maintain proper thermal conditions for reliability.
Evaluate vendors' individual switch power usage data sheets listing typical power draw under a range of utilization levels from idle to maximum throughput, noting any dynamic scaling abilities.
For example, the latest programmable switches can achieve 50% better efficiency by adjusting the speed of components during off-peak periods. Average power consumption, measured in watts, and peak usage both affect utility costs and critical facility infrastructure investments like UPS systems and generators.
Power usage effectiveness (PUE) is also essential for environmentally conscious organizations aiming below the industry average of around 1.7. Newer modular switch designs benefit hugely from power innovations like platinum-rated and hot-swappable power supplies, which are standard in top vendors' offerings. Ensure support for the latest Power over Ethernet standards such as 802.3bt, whose Type 3 delivers up to 60W and Type 4 up to 90W per port, for concurrent power and data delivery to endpoints like Wi-Fi 6 access points. Plan holistically to optimize efficiency over the entire expected life cycle within the context of your operational goals.
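To put the wattage and PUE figures in perspective, a back-of-envelope cost estimate like the sketch below can be useful; the power draw, PUE, and electricity rate are example assumptions only.

```python
# Back-of-envelope annual electricity cost for one switch, including
# facility overhead via PUE. Wattage, PUE, and tariff are example values.

HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(avg_watts: float, pue: float,
                       price_per_kwh: float) -> float:
    """Yearly electricity cost, counting cooling and power-distribution
    overhead through the PUE multiplier."""
    kwh = avg_watts / 1000 * HOURS_PER_YEAR * pue
    return kwh * price_per_kwh

if __name__ == "__main__":
    # A switch averaging 350 W in a facility with PUE 1.6 at $0.12/kWh
    print(f"${annual_energy_cost(350, 1.6, 0.12):,.0f} per year")  # about $589
```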
4. Consider Management Features
While basic switches suffice for plug-and-play functionality, sophisticated centralized management brings robust optimization, controls, and visibility for networking teams.
Determine your technical operations groups' staffing capabilities and preferred level of network oversight, whether simple command-line access or higher-level graphical interfaces are required.
Leading enterprise-class switches enable virtualized automation through VLAN segmentation for security domains, micro-segmentation, or logical overlay networks on top of a single physical switching infrastructure.
Quality-of-service capabilities help assign priority or guaranteed throughput to applications and services critical to business functions or compliance requirements. Telemetry exports help with rapid troubleshooting and bottleneck identification through system- and port-level metrics that monitoring tools such as Prometheus can scrape and analyze. Remote access tools streamline the maintenance of multiple distributed core switches deployed in separate data centers or geographic regions.
Right-size these advanced software-defined networking abilities for your operational needs, factoring technical teams' expertise and comfort with programmable infrastructure versus out-of-box management.
Consider advanced access support beyond simple HTTPS/SSH with options for rich graphical interfaces, REST APIs, or event-driven architectures.
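As an illustration of programmatic management, the sketch below polls a hypothetical REST endpoint for port utilization; the URL, JSON fields, and token are placeholders rather than any specific vendor's API, so consult your vendor's API reference for the real schema.

```python
# Minimal sketch of polling a switch's management REST API for port
# utilization. The URL path, JSON field names, and token are hypothetical
# placeholders; real endpoints vary by vendor.
import requests

SWITCH_URL = "https://switch1.example.net"   # placeholder address
API_TOKEN = "replace-with-api-token"         # placeholder credential

def port_utilization() -> dict[str, float]:
    """Map each interface name to its inbound utilization percentage,
    assuming the device returns a JSON list of interface counters."""
    resp = requests.get(
        f"{SWITCH_URL}/api/v1/interfaces",    # hypothetical endpoint
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=5,
    )
    resp.raise_for_status()
    stats = {}
    for iface in resp.json().get("interfaces", []):   # hypothetical schema
        capacity_bps = iface["speed_gbps"] * 1e9
        stats[iface["name"]] = 100 * iface["rx_bps"] / capacity_bps
    return stats

if __name__ == "__main__":
    for name, pct in sorted(port_utilization().items()):
        print(f"{name}: {pct:.1f}% inbound utilization")
```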
5. Investigate Support Lifecycles
Enterprise-class data center switches handling mission-critical operations must be supported consistently for 10+ years to achieve a reasonable total cost of ownership. However, unpatched vulnerabilities in older products or sudden discontinuation of service leave environments exposed during long stretches of otherwise useful product life.
Thoroughly vet vendors' technical support commitments and their staged end-of-life milestones beyond the active development period. Leading brands designate selected core products as "long-term support," qualifying them for at least ten years of proactive patch releases and documentation updates, either under warranty or through maintenance agreements. Compare policies objectively; statements like "best effort support until parts depletion" lack concrete end dates and don't foster investment protection planning.
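A lightweight way to make those policies actionable is to track published end-of-support dates against your refresh plan, as in the sketch below; every model name and date shown is fictional.

```python
# Simple end-of-support check: compare vendor-published end-of-support
# dates against your planned refresh horizon. All dates are fictional.
from datetime import date

PLANNED_REFRESH = date(2032, 1, 1)   # assumed hardware refresh target

end_of_support = {                   # fictional example models and dates
    "Vendor A core switch": date(2034, 6, 30),
    "Vendor B core switch": date(2029, 12, 31),
}

for model, eos in end_of_support.items():
    verdict = "covers" if eos >= PLANNED_REFRESH else "FALLS SHORT of"
    print(f"{model}: support ends {eos}, which {verdict} the refresh target")
```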
Conclusion
Weighing these factors in depth helps align switch investments with the unique demands of mission-critical applications, optimized facilities, security requirements, and cost-effective long-range governance of a sophisticated data center.
By researching the options thoroughly, the ideal switches emerge to automate core functions and deliver years of reliable, low-touch operation.