Resolve Signal Integrity Issues in Cloud Computing Platforms
While the cloud promises to reduce enterprise network infrastructure and operating costs, the remote execution of applications makes factors such as latency, performance, and reliability critical considerations in the design and deployment of cloud computing platforms. A key factor determining server performance, cloud or not, is maintaining a bit error rate (BER) on the order of 1 × 10⁻¹² for the overall system.
Given that a single bit error can necessitate the resending of an entire packet, real-time data performance drops sharply as the BER increases.
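This effect is easy to quantify with a rough model. The sketch below assumes 1500-byte packets and a simple scheme in which any single bit error forces a full packet retransmission; neither assumption comes from the article, and real protocols (with windowing and error correction) behave differently, but the shape of the degradation is the same.

```python
# Rough model of useful throughput ("goodput") vs. BER, assuming any
# single bit error forces a full packet resend (hypothetical scheme).

def packet_success_prob(ber: float, packet_bits: int) -> float:
    """Probability that a packet arrives with zero bit errors."""
    return (1.0 - ber) ** packet_bits

def effective_goodput(link_gbps: float, ber: float,
                      packet_bytes: int = 1500) -> float:
    """Expected useful throughput after retransmissions, in Gbps.

    With independent bit errors, the number of transmissions per packet
    is geometric with mean 1/p, so goodput scales by p.
    """
    p = packet_success_prob(ber, packet_bytes * 8)
    return link_gbps * p

for ber in (1e-12, 1e-9, 1e-6, 1e-4):
    print(f"BER {ber:.0e}: {effective_goodput(8.0, ber):.3f} Gbps")
```

At a BER of 10⁻¹² the goodput of an 8 Gbps link is essentially unaffected, while by 10⁻⁴ roughly two thirds of the link capacity is consumed by retransmissions — which is why the 10⁻¹² system target matters.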
Data travels over numerous high-speed interfaces as it passes through the cloud, and poor signal integrity over any of these interfaces is a leading cause of BER degradation. Thus, as data rates continue to increase, ensuring proper signal integrity through the signal channel becomes critical. However, the long trace distances inherent in data center equipment make maintaining signal integrity challenging.
For example, consider a typical server chipset integrating a PCI Express Gen 3 controller with a maximum channel loss specification of 20 dB. At Gen 3's signaling rate of 8 Gbps, the maximum trace length, given FR4 PCB losses, typically equates to approximately 18", and less once connectors and vias are accounted for. Server motherboards have a deep form factor, however, and include many other sources of attenuation, including multiple connectors and vias that reduce signal integrity and the length over which traces can be driven reliably (see Figure 1). It is also not uncommon for server manufacturers to include mid-plane and daughtercard connectors in the channel path to support different product line options. When such a connector is not populated, a jumper is inserted in its place, and the resulting signal path, with all of its sources of loss, is likely to push beyond the specified limits of the interface.
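The loss budget above can be sketched as simple arithmetic. The per-inch FR4 loss and per-element connector and via losses below are rough, assumed figures for illustration (only the 20 dB Gen 3 budget comes from the article); actual values depend heavily on board material, frequency, and launch design.

```python
# Back-of-envelope PCIe Gen 3 channel loss budget, showing how connectors
# and vias eat into the reachable trace length. All per-element loss
# figures are assumptions for illustration, not measured values.

CHANNEL_BUDGET_DB = 20.0    # PCIe Gen 3 max channel loss (from the article)
FR4_LOSS_DB_PER_IN = 1.0    # assumed FR4 trace loss near the 4 GHz Nyquist
CONNECTOR_LOSS_DB = 1.5     # assumed loss per connector
VIA_LOSS_DB = 0.5           # assumed loss per via

def max_trace_inches(n_connectors: int, n_vias: int) -> float:
    """Trace length remaining after fixed losses are subtracted."""
    remaining = (CHANNEL_BUDGET_DB
                 - n_connectors * CONNECTOR_LOSS_DB
                 - n_vias * VIA_LOSS_DB)
    return max(remaining, 0.0) / FR4_LOSS_DB_PER_IN

print(max_trace_inches(0, 0))   # bare point-to-point trace
print(max_trace_inches(2, 4))   # e.g. a midplane/jumper path with vias
```

Even with these optimistic numbers, a path through two connectors and a handful of vias gives up a quarter of the reachable distance, which is why deep server boards with jumpered mid-plane connectors can push past the 20 dB limit.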