The Need for Deterministic Timing Performance in Packet-Based Wireless Backhaul Networks and the Case for Time Synchronization Using IEEE® 1588-2008 + Synchronous Ethernet in a Time-Setting Mode of Operation
By P. Stephan Bedrosian, Distinguished Engineer, LSI Corporation
We use clocks every day to synchronize ourselves with people or processes, but the necessary accuracy of the clock depends on the application. Anyone wanting to catch a train needs a clock accurate to within a minute. In competitive sports, a hundredth of a second can be decisive. Processes on an automated assembly line need synchronization in the microsecond range.
Many communications systems also rely on timing in order to operate correctly. Generally, an implicit timing system exists to support communication between two entities. This timing can be in the form of how information is exchanged or formatted. For example, a communications system can rely on the distribution of regular trigger events that indicate the beginning of a unit of time to every user, which then triggers the appropriate actions. This method could be implemented by a time-division multiplexed (TDM) system that defines units of time, called frames, to establish a communications framing structure. Each communicating user is assigned a dedicated channel in the frame with a constant data rate and guaranteed delivery.
As consumers continue to use their smart phones and connected devices to watch videos, browse the Internet, and interact with social media applications, wireless networks are increasingly pushing the bandwidth envelope. This exploding demand for mobile data usage is forcing operators to replace existing TDM systems consisting of T1 or E1 facilities with high-capacity packet-based Ethernet links to get the job done.
Ethernet networks offer transmission speeds of up to 100 Gb per second, enabling them to meet the fast-growing needs of mobile backhaul for cellular data, audio and video distribution, and Smart Grid systems. Unlike TDM systems, though, Ethernet uses a packet-based method of communication in which users transmit information in discrete “data bundles” called packets. And unlike the TDM framing concept, where bandwidth is continuously reserved regardless of information availability, packets are sent only when information is available. Not only does the packet method save valuable transmission bandwidth, it also allows more users to communicate over the same packet-based communications network more efficiently than an equivalent TDM network. But while moving to Ethernet eliminates costly T1 and E1 facilities and delivers the speed and capacity to carry higher volumes of data to the cellular network, it creates a big challenge.
One of the key requirements of the cellular network is frequency synchronization (aka syntonization). Just as any radio receiver must be tuned to the carrier frequency of the desired station to receive information, a cellular wireless system must tune its radio carrier frequency, except to an accuracy of 50 parts per billion (PPB) or better. This frequency accuracy is needed to ensure that the transmit (Tx) and receive (Rx) communications channels operate in their assigned spectrum and maintain the specified guard band (as shown in Figure 1). Historically, highly synchronized TDM-based T1 or E1 facilities carried cellular data as well as frequency synchronization of 0.01 PPB to cellular base stations. The transmit and receive paths of each base station were tuned to different carrier frequencies, so that information was received on one frequency and transmitted on another. This method, called frequency division duplex (FDD), supports simultaneous, or full-duplex, exchange of information in the cellular network.
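To put these accuracy figures in concrete terms, the worst-case frequency error is simply the carrier frequency multiplied by the fractional accuracy. The short sketch below illustrates the arithmetic, assuming a hypothetical 2.6 GHz carrier purely for illustration:

```python
def max_freq_error_hz(carrier_hz, accuracy_ppb):
    """Worst-case carrier frequency error in Hz for a given accuracy in
    parts per billion (1 PPB = 1e-9 of the nominal frequency)."""
    return carrier_hz * accuracy_ppb * 1e-9

# A hypothetical 2.6 GHz carrier held to 50 PPB may drift by at most 130 Hz;
# with the 0.01 PPB delivered over TDM facilities, by at most 0.026 Hz.
air_limit = max_freq_error_hz(2.6e9, 50)
tdm_limit = max_freq_error_hz(2.6e9, 0.01)
```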
To conserve radio frequency spectrum, newer mobile backhaul systems use a time division duplex (TDD) approach to transmit and receive information over the same spectrum. To accomplish this, the transmit and receive intervals between base stations must be aligned to an accuracy of 2.5 microseconds or better while also meeting the frequency accuracy requirements of the FDD system. These synchronization requirements are needed to ensure that the transmit (Tx) and receive (Rx) communications channels operate in the assigned spectrum and maintain the specified guard time (as shown in Figure 2).
The IEEE® 1588-2008 precision time protocol (PTP) standard has been proposed as a method for both synchronizing and syntonizing FDD and TDD wireless base stations over packet-based Ethernet networks. The PTP protocol, also known as IEEE 1588v2, uses an adaptive method of transporting a traceable timescale by encoding a series of timestamp-bearing (SYNC) packets exchanged between ordinary clocks (OCs). An OC with a master port (aka master clock) transmits these SYNC packets to an OC with a slave port (aka slave clock). Upon receipt of each SYNC packet, the slave OC uses the encoded timestamp to adjust, or synchronize, the local clock process that maintains its local timescale. The OC is time-locked when its local timescale matches the timescale of the received SYNC packets. PTP defines a grandmaster clock as the synchronizing timescale source for maintaining a common timing domain.
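The SYNC flow described above is typically paired with a delay request/response exchange, giving the slave four timestamps from which to estimate its offset from the master. A minimal sketch of that standard computation, assuming a symmetric network path (the timestamp values are made up for illustration):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Compute the slave's clock offset and the mean path delay from the
    four PTP timestamps, assuming equal delay in both directions.
    t1: SYNC departure (master time)     t2: SYNC arrival (slave time)
    t3: DELAY_REQ departure (slave time) t4: DELAY_REQ arrival (master time)"""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, mean_path_delay

# Example: slave runs 1.5 ms ahead of the master; 100 us one-way delay.
offset, delay = ptp_offset_and_delay(0.0, 0.0016, 0.0020, 0.0006)
```

If the forward and reverse delays differ, half the asymmetry appears directly as a time error in the recovered offset, which is why delay asymmetry matters so much for time (as opposed to frequency) transfer.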
It is important to note that although IEEE 1588-2008 is a very well-documented protocol, performance aspects relating to its use, and its ability to perform deterministically in real-world packet networks, are not part of the standard. Because of the nature of best-effort packet networks, packet delay variation (PDV) is a significant factor limiting the performance of packet timing systems.
Changes in network delay symmetry, common in heavily loaded packet networks, can cause time offsets that are problematic for end-user services requiring the distribution of absolute time. In addition, because there are no metrics and masks to limit PDV at packet interfaces, it is not possible to specify or enforce packet-delay behavior that is favorable to adaptive clock recovery (ACR) systems. Another factor common to IEEE 1588-2008 systems is the long convergence time required by packet-based systems to achieve frequency and time lock. This is typically due to the use of statistical filters, which rely on the receipt and processing of a significant number of SYNC packets to remove the effects of PDV from the recovered clock.
The statistical filtering process typically involves measuring SYNC packet arrival times to determine the lowest propagation delay between the master and slave clocks (shown in Figure 1). Due to the PDV experienced by these packets, a significant number of packets must be received in order to determine the lowest packet delay. The lowest delay value, commonly called the “delay floor,” is used to establish the phase of the recovered timing signal at the slave clock. Generally, delay floors remain stable for SYNC packets that traverse the same network path. However, as SYNC packets are re-routed due to network congestion or equipment faults, the value of the delay floor will also change. Therefore, the process of statistically analyzing the arrival times of SYNC packets to determine the value of the delay floor is continually performed by the slave clock.
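The delay-floor tracking described above can be approximated with a sliding-window minimum filter. The sketch below is a hypothetical illustration of the idea, not LSI's implementation; note how a re-route's new, higher floor is adopted once the old path's samples age out of the window:

```python
from collections import deque

class DelayFloorEstimator:
    """Track the delay floor as the minimum SYNC-packet delay observed
    over a sliding window of recent samples."""
    def __init__(self, window=128):
        self.samples = deque(maxlen=window)  # old samples age out automatically

    def update(self, measured_delay):
        """Record one SYNC packet's measured delay; return the current floor."""
        self.samples.append(measured_delay)
        return min(self.samples)

est = DelayFloorEstimator(window=4)
delays = [150e-6, 120e-6, 100e-6, 140e-6,   # original path: floor is 100 us
          320e-6, 310e-6, 305e-6, 300e-6]   # after re-route: floor is 300 us
floors = [est.update(d) for d in delays]
```

A real slave clock would filter over far more samples (hence the long convergence times mentioned above), trading tracking speed against robustness to PDV spikes.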
While these challenges can impact the ability to deliver accurate timing over packet-based networks, LSI has solutions supporting several approaches for implementing IEEE 1588-2008, whose features and advantages we’ll discuss in this article.
LSI provides products with built-in timing solutions that can solve this timing issue. The LSI® Axxia® Communication Processor family helps service providers deploy flexible and low-cost Ethernet backhaul solutions by leveraging Microsemi’s ZL3034x timing-over-packet products, the only commercially available single-chip family of devices to offer both packet timing recovery using IEEE 1588-2008 and synchronous Ethernet for physical layer timing.
How can IEEE 1588v2 timing performance be improved?
The approach of using both physical-layer (synchronous Ethernet) and packet-layer (IEEE 1588-2008) technologies together has been proposed as a way to achieve both deterministic timing performance and fast convergence times. In this scenario, the clock recovered from the physical layer is used as a syntonization reference by the packet-based clock process. Timing information recovered from the packet layer is used to synchronize the local clock timescale, which counts at the rate established by the syntonization reference. This method requires the participation of both timing methods (synchronous Ethernet and IEEE 1588-2008) and that all Ethernet nodes in the timing chain support this timing mode. These requirements lead to increased CAPEX for the deployment of applications that require time or phase services.
LSI has proposed the Time-Setting mode of operation as an alternative method of using both synchronous Ethernet and IEEE 1588v2 to support the deterministic transport of time and phase information across packet networks. Using an innovative method called “selective synchronization,” a packet-based timing recovery system achieves a synchronized state by using both physical-layer and packet-based methods, but maintains that state using only timing recovered from the physical layer. This method requires source traceability between the PTP grandmaster and the physical-layer (PL) timing source (e.g., synchronous Ethernet).
The primary advantage of this approach is that it enables the timing recovery system to deterministically select when to synchronize (using IEEE 1588-2008 and synchronous Ethernet) and when to maintain synchronization (using only synchronous Ethernet). The ability to deterministically select when a PTP slave node is synchronized is very different from the way IEEE 1588-2008 is commonly used today. By allowing a timing recovery system to select when it will synchronize using PTP, it can choose periods when network conditions are favorable to the transport of packet-based timing protocols. Likewise, when network conditions are unfavorable, the timing recovery system can simply maintain the accuracy of the timescale using the stable syntonized clock carried by synchronous Ethernet.
During periods of low PDV and stable delay floors, a PTP slave clock can converge much more quickly and provide a time-stable clock. In addition, low background traffic greatly increases the accuracy of the delay request/response mechanism for measuring one-way delay between a grandmaster and a slave clock. Thus, if the synchronization process can occur during low-PDV conditions, PTP can accurately transport time over common packet networks without the need for on-path support. This advantage can lead to a reduction in the CAPEX for the deployment of PTP time and phase systems. Though transparent or boundary clocks can be used with the Time-Setting mode, their use may not be mandatory in all cases.
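The decision logic behind selective synchronization can be sketched as gating PTP time transfer on the observed spread of recent SYNC delays. The function and threshold below are hypothetical illustrations of the idea, not the mode's actual implementation:

```python
def select_sync_mode(recent_delays, pdv_threshold):
    """Choose the timing mode: synchronize via PTP when the observed PDV
    (spread of recent SYNC-packet delays above the floor) is low; otherwise
    maintain the timescale with the syntonized synchronous Ethernet clock."""
    pdv = max(recent_delays) - min(recent_delays)
    return "PTP_SYNC" if pdv <= pdv_threshold else "SYNCE_HOLDOVER"

# Quiet network: delays cluster near the floor, so PTP time transfer is trusted.
quiet = select_sync_mode([101e-6, 104e-6, 102e-6], pdv_threshold=10e-6)
# Congested network: wide delay spread, so hold over on synchronous Ethernet.
busy = select_sync_mode([105e-6, 880e-6, 240e-6], pdv_threshold=10e-6)
```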
There are many advantages to maintaining the PTP timescale using synchronous Ethernet for extended periods of time. One significant benefit of keeping the PTP application layer orthogonal to the physical layer is the ability to tolerate cyber attacks. By relying on the secure aspects of physical-layer frequency distribution, the Time-Setting mode can maintain timescale synchronization at slave clocks during periods when packet traffic may be compromised.
Another feature of the selective synchronization mode is flexibility in the choice and use of the physical-layer syntonizing protocol. Since the only requirement is that the syntonizing reference be source-traceable to the packet-based timescale, a number of deployment options exist, including traditional methods used to syntonize TDM networks. The actual performance and suitability of each deployment scenario depends on the delay characteristics of the packet transport network, PTP transport awareness, and the ability of the physical-layer syntonizing network to meet the timing requirements of the end-user application.
A fundamental requirement for enabling communications in any cellular network is the delivery of traceable timing with deterministic performance at each base station. FDD base stations require syntonization accuracy at the RF interface to within 50 PPB, and TDD base stations require synchronization accuracy to within 2.5 microseconds. With the migration from circuit-switched TDM networks to packet-based best-effort networks, new methods and technologies must be integrated into wireless backhaul networks to support these underlying timing requirements. The selected timing method or technology must not only match the implicit timing requirements of its TDM predecessors, but may also be required to deliver the deterministic time and phase performance needed by newer TDD wireless technology, based on synchronous Ethernet, IEEE 1588-2008, or a combination of both.
Driven by the need to economize the wireless spectrum, more efficient cellular transmission methods are becoming popular. As cellular networks are upgraded with TDD technology, packet networks will evolve to accommodate the stringent requirements of time and phase synchronization delivery. The LSI selective synchronization approach using the Time-Setting mode of operation combines the advantages of synchronous Ethernet and IEEE 1588-2008. By deterministically selecting when a wireless base station will be synchronized, many of the packet delay and asymmetry issues affecting the performance of packet-based timing systems can be avoided. In addition, transport over legacy packet equipment and increased cyber security are further reasons the selective synchronization approach should be considered for the transport of time and phase synchronization.
In addition to meeting the basic timing requirements, the practical implementation, deployment, and operation of these systems must be seamless and compatible with existing and future networking equipment. Therefore, solutions that support a variety of integrated packet timing technologies are the most flexible and will ultimately provide the best performance. Due to the growth and evolution of packet-based networks, there is no single best technology or method for deploying timing services.
August 22, 2012