
Higher Speed Cabling

Data Center Infrastructure Beyond 10 Gigabit Ethernet

In July of 2006, YouTube reported 100 million video views per day. In May of 2010 that number climbed to two billion views per day. In February of 2010, Twitter reported 50 million Tweets per day. As of June of 2008, iTunes had reached 5 billion song downloads, which increased to 10 billion by February of 2010. Facebook has over 500 million active users, with people spending 700 billion minutes per month on the social networking site. Over one million websites have integrated with the Facebook platform. These applications and others like them are creating a new internet and a new way to do business. At the heart of these applications are data centers with a continuously increasing need for high-speed bandwidth and storage.

Similarly, the need for increased bandwidth is at the heart of every corporate data center. Five short years ago, no one could have predicted the impact of carbon taxes, increased compliance and reporting requirements, or the sheer volume of information stored and moved through corporate networks. Virtualization is a driving force in higher speed networking as multiple servers and storage devices share single or dual network connections. Enterprises are doing business with a wider variety of software and hardware platforms and are increasingly incorporating collaboration, video and other advanced applications. IT is no longer a necessary evil, but rather a competitive advantage.

Whether delivering the next "killer application" or merely managing the increasing demand for instant data, data centers are increasing speeds, reevaluating existing applications and looking for greener processing in the form of consolidation and virtualization projects. In order to achieve the required higher data rates, new standards and transmission media types are becoming available. Each should be evaluated based on architecture, design and technical advantages, as well as end-to-end cost and performance considerations.

The ratification of 40/100GbE by IEEE® has increased the number of high speed transmission options. Likewise, Top of Rack (ToR) switches with short reach twinax copper or fiber assemblies provide a relatively new set of options. Designs for these systems, however, are significantly different from industry standards-based structured cabling, which uses two-strand fiber (10GBASE-SR/SX) and/or 4-pair twisted-pair copper (10GBASE-T and 1000BASE-T) systems. This paper will provide an overview and compare various options for achieving higher speeds in today's emerging data center.

From 10GbE to 40/100GbE

Beginning with 40/100GbE, the IEEE 802.3ba standard developed by the higher speed task force was ratified on June 17, 2010. The original project authorization request (PAR) was approved based on the following parameters, with subsequent changes noted in the asterisked footnotes:

  • 1m over a backplane (40GBASE-KR4)
  • 10m over copper cable (40GBASE-CR4/100GBASE-CR10), reduced to 7m*
  • 100m on OM3 (40GBASE-SR4/100GBASE-SR10)
  • 150m on OM4 (40GBASE-SR4/100GBASE-SR10)**
  • 10km on SMF (40GBASE-LR4/100GBASE-LR4)
  • 40km over SMF (100GBASE-ER4)

*There were some amendments to the original PAR during the task force's development of the standard. The twinax CR4/CR10 copper length was reduced to 7m.

**A new fiber specification, OM4, was developed while the standard was under development. The increased bandwidth of OM4 allowed a new supported distance of 150m based on TIA-492AAAD and ISO/IEC 11801:2002 Amendment 2. As the 40/100GbE standards are new, OM4 is included for the extended distance. For 10GBASE-SR/SX applications, standards-based IEEE transmission distance extensions will require an amendment to the 802.3ae standard.

Fiber Transmissions at Higher Speeds

When moving to 40/100GbE, the most important difference in backbone and horizontal multimode applications is the number of fiber strands. 40GBASE-SR4 uses 4 strands to transmit and 4 to receive, for a total of 8 strands. 100GBASE-SR10 uses 10 strands to transmit and 10 to receive, for a total of 20 strands. SMF remains a 2-strand application, and although the fiber is less expensive, SMF optics and electronics can be 10x more expensive. In data centers and backbones, it may be possible to run 8 or 20 individual strands of fiber. However, those strands may take disparate paths from one end to the other, and this can cause delay skew (known as bit skew), resulting in bit errors. For this reason, the 40/100GbE standards are written around fiber optic trunk assemblies that utilize an MPO or MTP® multi-fiber array connector. In these assemblies, all strands are the same length. Also referred to as "parallel optics," this construction minimizes bit/delay skew, allowing the receive modules to receive each fiber's information at virtually the same time.
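To make the strand arithmetic concrete, the following minimal Python sketch (illustrative only; the interface names and lane counts are taken from the discussion above) computes the fiber count each interface consumes, since every lane requires one transmit and one receive strand.

```python
# Illustrative sketch: fiber strand counts for the Ethernet interfaces above.
# Each lane needs one transmit and one receive strand, so strands = lanes * 2.

INTERFACE_LANES = {
    "10GBASE-SR": 1,     # serial: 1 lane  -> 2 strands (duplex LC/SC)
    "40GBASE-SR4": 4,    # parallel: 4 lanes -> 8 strands over an MPO/MTP trunk
    "100GBASE-SR10": 10, # parallel: 10 lanes -> 20 strands over MPO/MTP trunk(s)
}

def strands_required(interface: str) -> int:
    """Return the total fiber strands (Tx + Rx) an interface consumes."""
    return INTERFACE_LANES[interface] * 2

for name in INTERFACE_LANES:
    print(f"{name}: {strands_required(name)} strands")
```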

MPO (Multi-fiber Push-On) and MTP (Mechanical Transfer Push-on) connectors are available in both 12- and 24-strand termination configurations used at the end of a trunk assembly. The MTP design is an improved version of the MPO. The patented MTP connector is a ruggedized version with elliptical-shaped, stainless steel alignment pin tips to improve insertion guidance and reduce guide hole wear. The MTP connector also provides ferrule float to improve mechanical performance by maintaining physical contact while under an applied load. MPO/MTP trunks also support 10GBASE-SR/SX applications, although only two fiber strands are used. In this case trunks are connected to cassettes and/or hydra assemblies, which break out the multiple fibers into two-strand connections (typically LC or SC).

The second difference in high-speed fiber configurations is polarity. For 2-strand applications such as 10GbE transmission, managing polarity is as simple as reversing the strands somewhere over the channel. This is true if the channel is constructed of individual strands or is part of a trunk assembly. In trunk assemblies, which have historically been 12-strand, there are three suggested polarity methods in the standards (as shown in the following table).

Method   Trunk                        Cassette                        Jumpers
A        Same at both ends            Same at both ends               Polarity reversed at one end
B        Same at both ends            Polarity reversed in cassette   Same at both ends
C        Polarity reversed in trunk   Same at both ends               Same at both ends

As shown above, managing polarity for 2-strand applications is relatively easy. When migrating from 2-strand to multi-strand parallel optics, it is important to note which polarity method was selected to ensure that the correct assemblies are purchased for higher speeds. All polarity methods can be converted from 2-strand to 12-strand applications.
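As a conceptual check of the rule behind the table above, the sketch below models a duplex channel as a chain of elements that are either straight or polarity-reversed; each of the three suggested methods places exactly one reversal somewhere in the chain so that the transmit strand at one end lands on the receive strand at the other. This is a deliberately simplified illustration, not the formal standards definitions of Methods A, B and C.

```python
# Simplified duplex-polarity model: each channel element either preserves the
# two fiber positions ("straight") or swaps them ("reversed"). End to end, the
# transmit fiber must arrive at the receive fiber, which requires an odd
# number of reversals in the channel (exactly one in the methods tabled above).

def channel_polarity_correct(elements):
    """elements: list of 'straight'/'reversed' for jumper, cassette, trunk, etc."""
    reversals = sum(1 for e in elements if e == "reversed")
    return reversals % 2 == 1  # Tx must cross over to Rx an odd number of times

# Channels modeled per the table above (jumper, cassette, trunk, cassette, jumper):
method_a = ["reversed", "straight", "straight", "straight", "straight"]  # flip in one jumper
method_b = ["straight", "reversed", "straight", "straight", "straight"]  # flip in a cassette
method_c = ["straight", "straight", "reversed", "straight", "straight"]  # flip in the trunk

for name, chain in [("A", method_a), ("B", method_b), ("C", method_c)]:
    print(f"Method {name}: polarity correct -> {channel_polarity_correct(chain)}")
```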

It is important to note that these polarity methods are suggested in the standards, not mandated. However, the standards do mandate that a polarity method be established and maintained throughout all fiber channels, mapping the transmit strand at one end to the receive strand at the other. This does not change for higher fiber count transmissions, except that more strands are involved. To better visualize the transmission for multi-strand applications, consider the following configurations:

40GBASE-SR4 uses 8 strands of a 12-strand MPO/MTP trunk (4 to transmit and 4 to receive). The middle 4 strands in the MPO/MTP connector remain dark. The interface on equipment will accept an MPO/MTP array connector rather than a traditional LC.

100GbE has three approved methods for transmission: one 24-strand trunk, or two 12-strand trunks arranged either "over and under" or "side by side." The transmission uses 10 strands to transmit and 10 to receive, leaving the outer unused strands dark. It is also possible to connect two 12-strand trunks via a "Y" assembly that converts two 12-strand trunk assemblies into one 24-strand assembly. Polarity must also be considered, regardless of the method chosen and supported by the electronics.
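To illustrate the lane-to-position mapping just described, the sketch below lays out a 12-fiber MPO/MTP interface for 40GBASE-SR4: four transmit strands, four dark strands in the middle, and four receive strands. The specific position numbering (Tx on 1-4, Rx on 9-12) is the commonly used convention and is shown here for illustration only.

```python
# Illustrative 12-position MPO/MTP layout for 40GBASE-SR4 as described above:
# 4 strands transmit, the middle 4 remain dark (unused), and 4 strands receive.
# Assigning positions 1-4 to Tx and 9-12 to Rx reflects the common convention.

def mpo12_roles_40gbase_sr4():
    roles = {}
    for position in range(1, 13):
        if position <= 4:
            roles[position] = "Tx"
        elif position <= 8:
            roles[position] = "dark"   # unused middle strands
        else:
            roles[position] = "Rx"
    return roles

for pos, role in mpo12_roles_40gbase_sr4().items():
    print(f"fiber {pos:2d}: {role}")
```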

Higher Speed InfiniBand (IB)

Ethernet is not the only application that uses a parallel, multi-lane transmission scheme. InfiniBand™ is an industry-standard specification that defines a protocol used in High Performance Computing (HPC) clusters to interconnect servers, switches, storage and embedded systems. According to a November 6, 2010 announcement by the InfiniBand Trade Association, "InfiniBand represents more than 43 percent of all systems on the Top500 supercomputers. InfiniBand connects the majority of the Top100 with 61 percent, the Top200 with 58 percent, and the Top300 with 51 percent." InfiniBand is a low latency, high quality of service, fabric architecture that leverages switched, point-to-point channels with data transfers today up to 120 Gb/s through copper and optical fiber connections. Interoperability testing is performed by the InfiniBand Trade Association, which maintains a list of hundreds of products on its Integrators' List.

The direct attached architecture allows a Host Channel Adapter (HCA) to communicate directly with a Target Channel Adapter (TCA), in essence extending the bus of a server or storage device and creating low latency throughput. Although InfiniBand has traditionally operated over CX4 (SFF-8470) copper and fiber assemblies for SDR, DDR and QDR data rates, it utilizes a multi-lane scheme similar to 40/100GbE when connected via a QSFP connector. InfiniBand is currently supported by a variety of interfaces that operate in lanes, with one lane equal to two strands (one to transmit and one to receive). MPO/MTP trunk assemblies can be connected to native InfiniBand connectors, or the strands can be connected via hybrid assemblies.

The InfiniBand specifications currently define three data rates. Single Data Rate (SDR) operates at 2.5Gbit/s per lane, Double Data Rate (DDR) operates at 5Gbit/s per lane, while Quad Data Rate (QDR) operates at 10Gbit/s per lane. Due to encoding, the actual throughput is less than the stated data rates. For example, in interfaces that use 8b/10b encoding, 8 bits of data are carried in every 10 bits transmitted; the remaining two bits are used for encoding, which is significantly less than the number of bits required for Ethernet overhead. Two new InfiniBand specifications are also expected to be released in 2011, namely Fourteen Data Rate (FDR) and Enhanced Data Rate (EDR).
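The encoding overhead mentioned above can be shown with a quick calculation. The sketch below applies the 8b/10b factor to the per-lane signaling rates and scales the result by 4X and 12X lane widths; the lane widths are shown purely for illustration and are not taken from this paper.

```python
# InfiniBand throughput sketch: 8b/10b encoding carries 8 data bits in every
# 10 transmitted bits, so effective data rate = signaling rate * 0.8.

SIGNALING_GBPS_PER_LANE = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}
ENCODING_EFFICIENCY = 8 / 10   # 8b/10b

def effective_gbps(rate: str, lanes: int = 1) -> float:
    """Effective data throughput for a given rate and lane width."""
    return SIGNALING_GBPS_PER_LANE[rate] * ENCODING_EFFICIENCY * lanes

for rate in SIGNALING_GBPS_PER_LANE:
    print(f"{rate}: {effective_gbps(rate):.1f} Gb/s per lane, "
          f"{effective_gbps(rate, lanes=4):.1f} Gb/s at 4X, "
          f"{effective_gbps(rate, lanes=12):.1f} Gb/s at 12X")
```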

Higher Speed Fibre Channel (FC)

Fibre Channel is another protocol that has advanced to higher speeds and has become a standard for storage area networks (SAN) in enterprise data centers. Fibre Channel also has copper interfaces. Fibre Channel and the newer Fibre Channel over Ethernet (FCoE), which allows Fibre Channel to use Ethernet as its transport, are gaining dominance in storage. The standard, developed by the International Committee for Information Technology Standards (INCITS), encompasses storage, processing, transfer, display, management, organization and retrieval of information. Technical Committee T11 is the committee within INCITS that has been responsible for Fibre Channel interfaces since the 1970s. FCoE was formally approved on June 3, 2009.

Traditionally, servers used separate adapters to handle storage and networking traffic: Host Bus Adapters (HBAs) carry storage traffic and Network Interface Cards (NICs) handle LAN traffic. Hybrid adapters are available for FCoE with one port that acts as a Fibre Channel HBA for legacy Fibre Channel connections and one port for Ethernet. With FCoE, the interface's Ethernet port is the same as traditional Ethernet copper and fiber. Newer Converged Network Adapters (CNAs) have a single port that can handle both storage and networking traffic. Using server CNAs reduces the number of server adapters needed, which in turn reduces the number of I/O cables and the number of switch ports used. This configuration reduces hardware resources, simplifies server I/O configurations, lowers power consumption and lowers total cost of ownership.
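To put a number on the consolidation argument, the sketch below compares adapter, cable and switch-port counts for a row of servers using separate HBAs and NICs versus converged CNAs. The server count and per-server port counts are hypothetical placeholders, not figures from this paper.

```python
# Hypothetical consolidation sketch: count server ports, cables and switch ports
# for separate storage (HBA) + LAN (NIC) connectivity versus converged CNAs.
# All counts below are illustrative assumptions.

def io_counts(servers, storage_ports_per_server, lan_ports_per_server, converged):
    if converged:
        # A redundant pair of CNA ports carries both storage and LAN traffic.
        ports = max(storage_ports_per_server, lan_ports_per_server)
    else:
        ports = storage_ports_per_server + lan_ports_per_server
    total = servers * ports
    return {"server ports": total, "cables": total, "switch ports": total}

servers = 40  # hypothetical row of servers
print("Separate HBA + NIC:", io_counts(servers, 2, 2, converged=False))
print("Converged CNA:     ", io_counts(servers, 2, 2, converged=True))
```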

Topologies for Fibre Channel include point-to-point, where two devices are connected with one cable assembly, with the following supported distances (a simple lookup sketch follows the table):

Connection Speed and Distance by Cable Type

Type            Speed    Distance
OM2             1Gb/s    500m / 1,640'
OM3             1Gb/s    500m / 1,640'
OM2             2Gb/s    300m / 984'
OM3             2Gb/s    500m / 1,640'
OM2             4Gb/s    150m / 492'
OM3             4Gb/s    270m / 886'
OM2             8Gb/s    50m / 164'
OM3             8Gb/s    150m / 492'
Twinax copper   8Gb/s    15m max
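Where a quick check against the table above is needed, a simple lookup such as the Python sketch below can flag whether a planned link length fits within the supported distance for a given media type and Fibre Channel speed. The distance values are copied from the table above; the function name is illustrative.

```python
# Supported Fibre Channel point-to-point distances (metres), copied from the
# table above; keyed by (media type, speed in Gb/s).
FC_MAX_DISTANCE_M = {
    ("OM2", 1): 500, ("OM3", 1): 500,
    ("OM2", 2): 300, ("OM3", 2): 500,
    ("OM2", 4): 150, ("OM3", 4): 270,
    ("OM2", 8): 50,  ("OM3", 8): 150,
    ("Twinax copper", 8): 15,
}

def link_supported(media: str, speed_gbps: int, length_m: float) -> bool:
    """Return True if the planned link length is within the tabled distance."""
    return length_m <= FC_MAX_DISTANCE_M[(media, speed_gbps)]

print(link_supported("OM3", 8, 120))   # True: 120 m is within the 150 m limit
print(link_supported("OM2", 8, 120))   # False: OM2 supports only 50 m at 8 Gb/s
```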

The second Fibre Channel topology, Arbitrated Loop (FC-AL), supports up to 127 devices. More commonly, however, Fibre Channel is deployed in a switched fabric, the third Fibre Channel topology. Fibre Channel switched fabrics (FC-SW) are gaining in popularity as virtualization of storage increases. A switched fabric consists of a Fibre Channel switch deployed at the ToR or End of Row (EoR) or, for larger installations, a SAN Director, which is a chassis-based centralized switch.

Fibre Channel speeds have increased to 8 Gb/s, and an even faster 16Gb/s technology is expected to be available in 2011. The recently completed 16GFC (16 Gigabit Fibre Channel) specification is backwards compatible and will autonegotiate down to 4/8GFC speeds when attached to legacy equipment.

Choosing copper or fiber Fibre Channel connectivity typically depends on the required speed, supported length and cost. Native 16GFC operates over 100m of OM3 fiber and 125m of OM4. Longer distances can be supported by SMF. SAN directors create dense environments. To address this density, hydra assemblies can be used to break out the 12 strands of an MPO/MTP connector into 6 duplex LC ports. Eight-port adapters for MPO/MTP to MPO/MTP trunks allow a user to fit 48 duplex LC ports in the same footprint as a 12-port or 24-port fiber cassette.

Fibre Channel also supports direct attach SFP+ copper based twinax cable assemblies that offer a low power, cost effective option for short reach applications. These high speed interconnect cables will typically be used in ToR applications.

High Speed Interconnect (HSI) Copper Assemblies for 10Gb/s, 40Gb/s, 100Gb/s and Fibre Channel Applications

The first 10GbE-capable copper interface was developed for the 10GBASE-CX4 (IEEE 802.3ak) application, which was published in 2004. This interface conforms to the X2, XENPAK and XPAK MSAs (MSAs are multi-source agreements between competing manufacturers for standardized form factors of an interface). The physical requirements for this shielded four-lane copper connector are standardized under SFF-8470. As a passive assembly, SFF-8470/CX4 cables have a reach of 15m. This assembly supports 10GbE, InfiniBand, Fibre Channel and FCoE. Other serial transmissions are also supported for I/O functions such as SATA (Serial Advanced Technology Attachment), SAS (Serial Attached SCSI) and RapidIO. These assemblies use twinax cable, constructed of two inner conductors with an overall foil covered by a braid shield. Twinax cable is commonly used in short reach, high speed interconnects; it is factory terminated via a precision soldering process and ordered in specific lengths. Twinax cables typically range from 30 AWG conductors (nominal cable OD of 6.1mm or 0.24 inches) for shorter reach to 24 AWG (nominal cable OD of 11.0mm or 0.44 inches) for longer reach.

Due to their low latency, these cables are popular in supercomputing clusters, High Performance Computing and storage. As part of the 802.3ba 40GbE/100GbE standard, the multi-lane 40GBASE-CR4 and 100GBASE-CR10 interfaces were defined. This standard specifies the use of 4- and 10-lane twinax assemblies to achieve 40 and 100GbE speeds for distances up to 7m.

SFP+ 10Gb/s Cable Assemblies

SFP+ 10Gb/s assemblies are relatively new to the market and are available in both copper and fiber formats. The SFP+ interconnect assemblies support Ethernet, InfiniBand and Fibre Channel protocols. Copper-based twinax cable assemblies offer short reach options for ToR or short Middle of Row (MoR) applications, whereas the SFP+ fiber modules address medium to long reach applications. SFP and SFP+ standardization is specified by an MSA between competing manufacturers. Electrical and mechanical specifications for 10Gb/s SFP+ optical modules, 10Gb/s SFP+ Cu (copper cables) and hosts are defined in the SFF-8431 and SFF-8083 (Electrical), SFF-8432 (Mechanical) and SFF-8472 (EEPROM) specifications developed by the SFF Committee.

10Gb/s SFP+ copper cables are increasingly popular in short reach, direct attach 10Gb/s applications. Like SFF-8470/CX4 assemblies, these cable assemblies utilize twinax cable from 30AWG to 24AWG and are available in specific lengths, so the same pathway considerations should be taken into account in the design. 10Gb/s SFP+ passive copper assemblies have a reach up to 7m for Ethernet applications and longer for other protocols. The actual supported distance varies by manufacturer, conductor size and application.

Active SFP+ 10Gb/s cable assemblies utilize chips to amplify the signal and may be used to provide operation over longer or thinner cables than is achievable with direct attached passive copper assemblies. The limited reach of SFP+ cables makes them suitable for use in ToR or adjacent-rack switching, HPC and computing clusters for servers and storage. While these switch ports currently consume less power than 10GBASE-T copper ports, port oversubscription must be carefully monitored to ensure that per-port power savings are not offset by increased electronics costs (more switches and power supplies) and their ensuing maintenance costs, which represent recurring expenses. The overall cost of the switch port, cable assembly and server NIC should be carefully examined. Likewise, overall power, including the higher power server NIC, should be evaluated. These assemblies allow for rapid interconnection where traditional structured cabling is not available. With the advent of the SFP+ small form factor connector, CX4 assemblies are now typically used only for InfiniBand copper. SFP+ connectors have become more popular in ToR switching while the market waits for more power-efficient 10GBASE-T switches. For further details on comparing ToR to "Any to All" structured cabling, see the whitepaper titled, "Data Center Cabling Decisions: Top of Rack vs Structured Cabling Systems."
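The cost and power trade-off described above can be framed as a simple per-server-connection comparison; the sketch below totals switch port, cable/assembly and server NIC figures for each approach. Every price and wattage shown is a hypothetical placeholder for illustration, not the MSRP data referenced later in this paper; substitute current quotes before drawing conclusions.

```python
# Hypothetical per-connection comparison of a ToR SFP+ direct-attach approach
# versus 10GBASE-T over structured cabling. All numbers are placeholder
# assumptions chosen only to demonstrate the calculation.

def cost_and_power(switch_port_cost, link_cost, nic_cost,
                   switch_port_watts, nic_watts):
    """Sum capital cost and power for one server connection."""
    return {
        "cost per connection": switch_port_cost + link_cost + nic_cost,
        "power per connection (W)": switch_port_watts + nic_watts,
    }

sfp_plus_dac = cost_and_power(switch_port_cost=400, link_cost=60,  nic_cost=500,
                              switch_port_watts=1.0, nic_watts=5.0)
tengbase_t   = cost_and_power(switch_port_cost=350, link_cost=150, nic_cost=300,
                              switch_port_watts=4.0, nic_watts=6.0)

print("SFP+ direct attach:", sfp_plus_dac)
print("10GBASE-T:         ", tengbase_t)
```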

As these assemblies are typically confined to the same cabinet or adjacent cabinets, SFP+ copper assemblies can help reduce pathway congestion in data centers that do not have sufficient pathway space. Major electronics manufacturers have built capabilities into their switches to utilize both fiber and copper SFP+ adapters, allowing either media to be accepted into the electronics slot. In most data centers, one will see a combination of the various technologies (ToR and structured cabling systems), providing the best of both worlds with technology utilization based on application-specific needs. It is important to note that although the form factor is the same for direct-attached copper and fiber, interoperability from one vendor to the next may vary due to encryption methods employed by the major electronic equipment manufacturers. Varied encryption schemes may require the use of proprietary cables, which are typically far more expensive than their off-the-shelf counterparts.

QSFP+ (Quad Small Form-factor Pluggable Plus)

A further advancement in interconnect cables is QSFP+ (Quad Small Form-factor Pluggable Plus) cable assemblies. Supporting similar applications to SFP+, these four-lane high speed interconnects were designed for high density applications up to 10m at 10Gb/s transmission speeds per lane. One assembly can replace up to four standard SFP+ connections, providing greater density and reduced system cost. The 802.3ba 40GbE standard recognizes the use of QSFP+ connectors (also known as Style 1, SFF-8436 Rev 3.4). For Style 1, multi-lane 40GBASE-CR4 specifies the use of QSFP+ 4-lane cable assemblies to achieve this speed for distances up to 7m. Electrical and mechanical specifications for 40Gb/s QSFP+ active optical cables, copper cable assemblies and hosts are defined in the SFF-8431 (Electrical), SFF-8436 (Mechanical) and SFF-8472 (EEPROM) specifications developed by the SFF Committee. For 100GbE connections, there is an increase in the number of lanes, and the connector is specified in SFF-8642 Rev 2.4.

High Speed Interconnect Active Optical Cable Assemblies (AOC)

Due to distance limitations for point-to-point interconnect assemblies, port oversubscription can be an issue with ToR switches. Power and cooling are the limiting factors that control the number of devices supported in a cabinet. In order to increase switch port utilization within a row or across the data center, the Active Optical Cable (AOC) assembly has been developed. Siemon's Moray™ cable assembly utilizes a standard SFF-compliant QSFP+ connector, and each connector incorporates integrated opto-electronics with four single-mode fiber optic transceivers, each operating at data rates up to 10 Gb/s per lane for 40Gb/s per cable assembly. The Moray AOC supports a reach up to 4,000 meters (approximately 13,120 feet), and the cables are available in standard lengths up to 300m and custom lengths up to 4,000m. The assemblies support InfiniBand, Ethernet, Fibre Channel and other applications.

Design Considerations

An evaluation of each technology should be made based upon interoperability, application benefits, future scalability and maintenance, and overall transmission cost per port. This analysis should include the assemblies, switch port and network interface cost for the server or storage device. While these solutions are low latency and can decrease horizontal copper cabling costs, in most cases they are used in application-specific areas of data centers due to the higher cost of the electronics and the resulting higher ongoing maintenance costs. Due to the higher costs of these assemblies and active ports, most data centers will continue to cable for 10GBASE-T. Intel® is expected to incorporate 10GBASE-T chips natively on motherboards in early 2011, which will decrease power consumption considerably. Energy Efficient Ethernet (IEEE 802.3az) will further lower power on 10GBASE-T ports in the near future by placing ports in a "sleep" (low power) mode while inactive, resulting in lower net power consumed per port. Consider the following table, which compares the various interconnect and cabling technologies:

The chart above is based on Cisco® MSRP, Intel® MSRP and installed costs of cable channels. Pricing is valid as of 11/30/10 and is subject to change. Module costs are for transceivers only and do not include switch chassis costs where applicable.

When designing a new data center, backbone connections should use MPO/MTP fiber trunks, allowing migration to 40/100GbE without having to run new channels. Traditional copper channels should be installed for monitoring, centralized KVM, management, etc. Copper channels should also be installed to allow for 10GBASE-T transmission. Both TIA 942-2, soon to be combined with TIA 942 and published as TIA 942-A, and ISO 24764 recommend a minimum of category 6A/class EA cabling. Interconnect assemblies are expected to remain within a single cabinet or within a row, depending on the distance limitation of the application. By using an "Any-to-All" cabling design that can support 10GBASE-T, oversubscription and costly moves, adds and changes can be avoided. Likewise, the cost of additional server NICs can be avoided when servers have native 10GBASE-T ports on the motherboard. It is also important to note that SFP, SFP+ and CX4 transceiver modules typically carry a 90-day warranty, as opposed to the one-year warranty for ports incorporated in a switch and the 20-year warranty common for cabling systems.

By taking advantage of the various copper and fiber high speed assemblies and category cabling options that exist and can co-exist, one can design a robust, scalable, flexible and efficient data center that meets current needs and is ready for the future. Siemon offers the broadest range of copper and fiber high speed interconnect assemblies, category cabling and cable management solutions. For more information on Siemon Interconnect Solutions (SIS) and high speed assemblies please visit www.siemon.com/sis. For more information on Siemon category and fiber structured cabling and cable management solutions including the VersaPOD Data Center Cabinet with Zero-U patching and cable management and Data Center Design Support, please visit www.siemon.com/us/versapod/.

Additional Resources

Podcast Audio - Overview of the TIA 942 Data Center Standard
Siemon's Carrie Higbie presents an overview of the TIA 942 Data Center standard released in April 2005. The standard is designed for anyone building out a data center infrastructure. (22:39 mins - 17 Aug 2005)
Download MP3

Rev. A 2/11

