Tim Miller, Vice President, StarGen, Inc.

For the past several years, communication equipment OEMs have grappled with the forces of low-cost, time-to-market solutions and high-value, high-return investments. Those decisions have pitted standards-based, off-the-shelf chip design against state-of-the-art internal development.

With off-the-shelf building blocks, OEMs can build telecommunications equipment without designing and producing all their own basic system components, and can instead focus their scarce engineering resources on the pieces of the system design where they add significant value and differentiation.

Unfortunately, bus-based architectures, typically PCI, H.110 and CompactPCI, are simply running out of gas when asked to meet the needs of next-generation communication equipment. To meet growing demands for scalability, form-factor options, cost-effective high-availability features, and multiple traffic classes, communications equipment manufacturers are looking toward an open switch fabric architecture rather than the traditional bus-based architectures of CompactPCI and H.110.

Today's Options

PCI is the most popular interconnect for the control plane of mid-range and low-end communication equipment such as DSLAMs, voice-over-network gateways, wireless base stations and multiservice access platforms, most notably because of its low cost, reasonable performance, ease of design, wide component availability, multi-vendor interoperability, and software compatibility.

Figure 1: Three bridges fully interconnected with 2.5 Gbps full-duplex links to connect racks of existing chassis in a central office or POP.

But because it is bus-based, PCI can be unreliable: one problem device connected to the bus can bring down the entire system. Its roughly 4 Gbps of bandwidth (at 64 bits and 66 MHz) must also be shared among all connected devices; while one device is using the bus, all other devices must wait. Bandwidth to a single device, latency, and reliability can all be problems.
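The arithmetic behind that shared-bus ceiling is simple enough to sketch; the per-device figure below assumes equal sharing across an illustrative six-device segment, which is a best case:

```python
# Peak PCI throughput: a 64-bit datapath clocked at 66 MHz, shared by
# every device on the bus segment.
bus_width_bits = 64
clock_hz = 66_000_000

peak_bps = bus_width_bits * clock_hz        # ~4.2 Gbps, the "4 Gbps" figure
devices = 6                                 # typical segments hold 5 to 8 devices
per_device_bps = peak_bps / devices         # best case: equal sharing, no arbitration cost

print(f"peak: {peak_bps / 1e9:.2f} Gbps")
print(f"per device (equal share): {per_device_bps / 1e9:.2f} Gbps")
```

In practice arbitration overhead and bursty traffic make the effective per-device share worse than this even split suggests.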

PCI also limits the total number of devices that can share a single bus, typically 5 to 8 depending on the speed of the bus. In addition, PCI has a physical trace-length limit of about one meter. As a result, it can only support single-chassis solutions.

H.110 has similar limitations in the telecommunications world.

Next Steps: Switch Fabrics
What is needed is a new open technology, an open switch fabric, that meets next-generation communication equipment requirements while bringing all the cost and ease-of-design benefits associated with existing standards.

Switch fabrics have the scalability and reliability absent in most bus-based interconnects because of their point-to-point architectures. Each end-point is connected to every other end-point through one or a series of switches. End-points can be considered 'bridges' to existing standard buses or components. Unlike bus-based environments where only one device has access to the bus at a time, switch fabrics allow many devices to transmit and receive simultaneously. By building a complex mesh with a series of end-points and switches, many different topologies — with ever-scalable bandwidth — can be supported.

To ensure reliability, a switch-based design allows flexible routing capabilities so that multiple routes exist between the same two end-points. If one route fails, traffic can be redirected onto the alternative route. This additional reliability not only ensures no single point of failure, but also allows devices to be added or removed without impacting the overall system.
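The failover idea can be sketched in a few lines. This is an illustrative model only; the route and link names are hypothetical and the logic is not the StarFabric routing protocol:

```python
# Illustrative failover: each endpoint pair has multiple routes through the
# fabric, and traffic shifts to an alternate path when a link on the
# primary route fails.

def pick_route(routes, failed_links):
    """Return the first route whose links are all healthy, else None."""
    for route in routes:
        if not any(link in failed_links for link in route):
            return route
    return None

# Two routes from endpoint A to endpoint B, each a tuple of switch-port links.
routes_a_to_b = [
    ("sw1-p0", "sw1-p3"),   # primary: one hop through switch 1
    ("sw2-p1", "sw2-p4"),   # alternate: one hop through switch 2
]

print(pick_route(routes_a_to_b, failed_links=set()))       # primary route
print(pick_route(routes_a_to_b, failed_links={"sw1-p3"}))  # alternate route
```

Because route selection is per-path rather than per-bus, removing or adding an endpoint changes only the route table, not the health of the rest of the system.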

The StarFabric technology developed by StarGen embraces all the benefits of switch fabric architectures, but adds the ability for equipment manufacturers to quickly and easily migrate from existing open platform architectures. For example, it provides 100% backward compatibility with PCI by supporting existing device drivers, BIOSes and operating systems.

PICMG (PCI Industrial Computer Manufacturers Group) has created a subcommittee (PICMG 2.17) to develop system level specifications for implementing StarFabric technology in the CompactPCI environment.

StarFabric's first implementations will include a high throughput silicon switch providing 30Gbps switching capacity with six ports. Bridge chips provided by StarGen and partners will provide access from existing standard interconnects to the advanced functionality of the switch fabric. These devices offer manufacturers a new option for building high-speed, scalable and highly reliable systems. The number of components required in a system is design-dependent, with a minimum requirement of one set. This means that system manufacturers can build small-scale systems up to very large-scale systems with a common architecture, scaling to thousands of end points in a single terabit per second system.

Figure 2: A multi-segment system, in which multiple PCI and H.110 segments are included on a single backplane, increasing overall bandwidth and call capacity.

StarGen's switch architecture is a multi-queued (simultaneous input and output queuing) non-blocking switch fabric. The switch fabric uses point-to-point LVDS connections with clocks embedded in the data stream. Clock and data recovery are performed on each differential pair at the receiver side of the point-to-point connections. This eliminates the need for the tight skew control that burdens technologies carrying a clock separate from the data. Because of this, the technology is ideally suited for chip-to-chip, through-backplane and chassis-to-chassis interconnects up to 5 meters using standard PCB construction, existing 2mm connectors and CAT5 twisted-pair cable. Distances greater than 5 meters can be accomplished with higher-bandwidth CAT7 cable or by using external components to convert to fiber.

The fundamental physical-layer interconnect for the initial components is a 622 Mbps differential pair with a 400 mV swing. Each StarGen port consists of four of these pairs in each direction, for an aggregate full-duplex bandwidth of 5 Gbps. For initial components, two of these ports can be bundled to provide a 10 Gbps "fat pipe" between endpoints.
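The port figures follow directly from the pair rate; a quick check of the arithmetic, counting both directions of the full-duplex link toward the aggregate as the article does:

```python
# StarFabric link arithmetic: four 622 Mbps differential pairs per direction.
pair_rate_mbps = 622
pairs_per_direction = 4

per_direction_mbps = pair_rate_mbps * pairs_per_direction  # 2488 Mbps each way
aggregate_mbps = per_direction_mbps * 2                    # both directions: ~5 Gbps
fat_pipe_mbps = aggregate_mbps * 2                         # two bundled ports: ~10 Gbps

print(per_direction_mbps, aggregate_mbps, fat_pipe_mbps)
```

A six-port switch at 5 Gbps per port yields the 30 Gbps switching capacity cited earlier.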

StarGen's switch fabric architecture is designed to support seven traffic classes, including asynchronous classes, isochronous classes, multicast, and high-priority. Asynchronous traffic is traditional data-oriented traffic, with large, bursty bandwidth requirements but no real-time delivery requirements. Control and signaling traffic are typically asynchronous.

Isochronous traffic, including voice and video, requires deterministic real-time delivery. Through these traffic classes, StarGen's technology provides the Quality of Service (QoS) needed for communication applications with converged voice, video, and data requirements. StarGen also allows the unification of traditionally separate interconnects for control traffic and data payload traffic, simplifying designs and reducing costs. The initial StarFabric implementations support the isochronous class for real-time traffic with a bandwidth reservation protocol that reserves bandwidth through the fabric. The same mechanisms can be applied to multicast traffic as well. If the reserved bandwidth is not used, other traffic classes can use it.
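The reservation idea amounts to admission control along a route. The sketch below is hypothetical, not the StarFabric protocol: a reservation is admitted only if every link on the path has spare capacity, and anything unreserved stays available to asynchronous traffic.

```python
# Illustrative bandwidth reservation for an isochronous class.
# Link capacities and the rollback scheme are assumptions for this sketch.

class Link:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reserved = 0

    def try_reserve(self, mbps):
        """Admit the reservation only if it fits within capacity."""
        if self.reserved + mbps > self.capacity:
            return False
        self.reserved += mbps
        return True

def reserve_path(links, mbps):
    """Reserve mbps on every link of a route, rolling back on failure."""
    taken = []
    for link in links:
        if link.try_reserve(mbps):
            taken.append(link)
        else:
            for t in taken:          # undo partial reservations
                t.reserved -= mbps
            return False
    return True

path = [Link(2488), Link(2488)]      # two 2488 Mbps links on the route
print(reserve_path(path, 2000))      # True: fits on both links
print(reserve_path(path, 1000))      # False: would exceed link capacity
```

Reserved but idle bandwidth would then be lent to asynchronous classes at the scheduler, matching the article's note that unused reservations are not wasted.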

Figure 3: A point-to-point system which supports 19 slots, each with full PCI bandwidth.

Several leading system manufacturers have embraced the StarFabric architecture, including Agere (formerly Lucent Microelectronics), which has announced its design for a bridge chip that bridges the H.110 TDM bus to two StarFabric ports.

In addition, Bustronic, a leading backplane manufacturer, is using StarFabric to develop next generation backplane technology.

Discussions and planning are underway with other partners for additional bridges to buses including ATM, Utopia, network processor buses, Gigabit Ethernet and DSPs.

Tim Miller is StarGen's Vice President of Marketing. Miller has 14 years of experience as a computer system and semiconductor marketing manager. He was marketing director for Digital Equipment Corp.'s high-performance microprocessor business. Miller holds an undergraduate engineering degree from Cornell University, a master's degree in computer science from the University of Pennsylvania, and an MBA from the Wharton School.