
Wireless Basestation Design Challenges Using High-Speed, 16-bit ADCs

Mon, 02/08/2010 - 7:30am

By Josh Carnes, National Semiconductor Corporation

Figure 1. Block diagram of an IF sampling subsystem within a communications receiver.
Cutting-edge 16-bit, high-speed analog-to-digital converters (ADCs) offer the very high dynamic range and low distortion required to meet today's most demanding wireless communications standards. As communication receivers trend toward greater flexibility, multi-standard/multi-carrier radios must digitize wider bandwidths; the reduced power in individual frequency channels and the increased probability of in-band blocker signals then demand higher sensitivity. For this reason, ADC noise and distortion are critical.

This article discusses the key performance-limiting challenges involved in integrating an ADC into a basestation application, with a focus on driving and clocking the converter. Solutions to these challenges are demonstrated with a new, high intermediate frequency (IF) subsystem design incorporating the ADC16DV160 dual 16-bit 160 MSPS ADC, LMH6517 digitally-controlled variable gain amplifier (DVGA) and LMK04031B precision clock conditioner.

As shown in Figure 1, a high-sensitivity IF sampling subsystem for a basestation application is typically composed of a high-speed ADC, a precision clocking solution, and a DVGA, whose gain is controlled by an automatic gain-control (AGC) loop. The DVGA acts as both a buffer/driver interface to the ADC and a gain block that reduces the impact of the ADC noise when the input signal is small. The clocking solution provides a low-noise sampling clock for data conversion into the digital domain.
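The noise benefit of placing gain ahead of the converter can be seen with a simple input-referred cascade calculation. The Python sketch below uses placeholder noise densities rather than datasheet values for the LMH6517 or ADC16DV160; it only illustrates how uncorrelated sources combine and how DVGA gain suppresses the ADC's noise contribution.

```python
import math

# Input-referred noise of a gain stage driving an ADC. The noise
# densities are placeholder assumptions, not LMH6517/ADC16DV160 data.
V_N_DVGA = 10e-9   # DVGA input-referred noise density, V/sqrt(Hz) (assumed)
V_N_ADC  = 30e-9   # ADC input-referred noise density, V/sqrt(Hz) (assumed)

for gain_db in (0, 6, 12, 18):
    g = 10 ** (gain_db / 20)                    # voltage gain, V/V
    # Uncorrelated sources add in root-sum-square; the ADC noise is
    # divided by the gain when referred back to the subsystem input.
    total = math.sqrt(V_N_DVGA**2 + (V_N_ADC / g) ** 2)
    print(f"gain = {gain_db:2d} dB -> input-referred noise = {total * 1e9:4.1f} nV/rtHz")
```

As the gain increases, the input-referred total converges toward the DVGA's own noise, which is why the AGC loop raises the gain when the input signal is small.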
Signal Path Challenges
Cascading a DVGA and ADC presents many challenges that must be addressed to maximize performance. These challenges include:

1. Minimizing distortion introduced by the DVGA
2. Maximizing signal integrity through the DVGA-to-ADC interface
3. Minimizing switching noise at the input of the pipelined ADC
4. Minimizing the noise contribution of the DVGA
5. Utilizing the full input dynamic range of the ADC

Figure 2. Bandpass filter interface between the DVGA and ADC.
The first three challenges relate to the distortion performance of the subsystem and limit the spurious-free dynamic range (SFDR) of the signal path. The harmonic distortion of the DVGA, the signal-dependent charge kickback from the ADC input switches, and impedance mismatch and signal reflections at the interface can all produce spurious content in the spectrum that aliases into the frequency band of interest.

Challenges four and five concern the subsystem's signal-to-noise ratio (SNR). Excessive noise from the DVGA degrades the subsystem's noise floor, and failing to use the converter's full input range directly sacrifices SNR, a loss that can equivalently be viewed as wasted power. All five challenges are interrelated through a number of tradeoffs.
Optimizing the Signal Path
Many of these challenges are addressed by selecting a high performance DVGA and then compensating for the DVGA non-idealities by inserting an impedance-matched, differential, high-order bandpass filter between the DVGA and ADC. The filter suppresses the DVGA's harmonic distortion, limits the bandwidth of the DVGA noise and minimizes the impedance-related signal integrity issues at the ADC interface.

In practice, impedance-matched high-order filters unfortunately have high insertion loss and are very susceptible to component mismatch and PCB parasitics. The relationship between filter order and loss poses a key tradeoff in the design of the DVGA-to-ADC interface: increasing the output signal swing of the DVGA to compensate for passband loss degrades the DVGA's harmonic distortion and third-order output intercept point (OIP3) as the signal approaches the DVGA power rails. Additionally, the resonant nature of bandpass filters does not effectively suppress the signal-dependent, glitch-like charge kickback from the input switches of a typical pipeline ADC, which is most significant for large-amplitude signals. With proper selection of the filter architecture and careful balancing of these tradeoffs, excellent noise and distortion performance can be achieved simultaneously.
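The swing-versus-linearity tradeoff can be quantified with the classic two-tone relationship IM3 [dBc] = 2·(OIP3 − Pout), valid while the amplifier is only weakly nonlinear. The sketch below uses an assumed OIP3 and per-tone level, not LMH6517 specifications; note that a real DVGA's OIP3 itself degrades as the swing nears the rails, so these figures are optimistic.

```python
# Two-tone IM3 margin versus the output level needed to absorb filter
# insertion loss: IM3 [dBc] = 2 * (OIP3 - Pout). OIP3 and the baseline
# per-tone level are assumed round numbers, not LMH6517 specifications.
OIP3_DBM = 40.0          # assumed third-order output intercept point, dBm
P_TONE_DBM = 4.0         # assumed per-tone level with a lossless filter, dBm

for loss_db in (0.0, 2.0, 5.0):
    p_out = P_TONE_DBM + loss_db          # extra swing to overcome the loss
    im3_dbc = 2.0 * (OIP3_DBM - p_out)    # distance of IM3 below each tone
    print(f"insertion loss = {loss_db:3.1f} dB -> IM3 = -{im3_dbc:.0f} dBc")
```

Every decibel of insertion loss recovered at the DVGA output costs two decibels of IM3 margin, which is why minimizing passband loss matters as much as stopband attenuation.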

Figure 3. Low jitter clock solution.
One such filter interface solution is demonstrated on the new SP16160CH1RB subsystem design board in the form of an asymmetric, T-matched bandpass filter. The filter, shown in Figure 2, offers fourth-order high-frequency attenuation to achieve 40 dB of second-harmonic (H2) attenuation with less than 0.5 dB of passband ripple for common IF bands. The LC T-match provides an impedance transformation that results in little passband attenuation while maintaining an impedance match between the source resistors at the DVGA output (necessary to maintain DVGA stability) and the load resistors (necessary to provide a low-impedance input common-mode reference for the ADC).

This architecture is very insensitive to PCB parasitics and realizable in practice because it requires only shunt capacitive components and mostly series inductances. Charge kickback from the ADC can be mitigated with an empirical selection of capacitance in the filter's LC tank that is distributed into both differential and common-mode orientations. In this design, the passband attenuation is improved from 5 dB to nearly 0 dB by reducing the value of the source resistors. This attenuation improvement sacrifices a perfect impedance match but allows the DVGA to reach the ADC's full-scale reference with a smaller output amplitude; the resulting improved third-order intermodulation distortion performance is well worth the associated impedance mismatch.
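The exact component values of the SP16160CH1RB's asymmetric T-match are not listed here, so the sketch below instead analyzes a generic third-order Chebyshev bandpass ladder using the same ABCD (transmission) matrix method one could apply to the T-match. The element values are illustrative placeholders chosen to center the passband near a 190 MHz IF in a 50-ohm system.

```python
import numpy as np

# ABCD-cascade check of an LC bandpass ladder between resistive
# terminations. The SP16160CH1RB's T-match values are not published
# here, so this uses a generic 3rd-order 0.5 dB Chebyshev bandpass
# (f0 ~ 190 MHz, BW ~ 40 MHz, 50-ohm system) as a stand-in.
RS, RL = 50.0, 50.0            # source / load resistance, ohms (assumed)

def series(z):                 # ABCD matrix of a series impedance z
    return np.array([[1.0, z], [0.0, 1.0]], dtype=complex)

def shunt(y):                  # ABCD matrix of a shunt admittance y
    return np.array([[1.0, 0.0], [y, 1.0]], dtype=complex)

def gain_db(f):
    s = 2j * np.pi * f
    tank   = shunt(s * 127e-12 + 1.0 / (s * 5.52e-9))    # parallel LC to ground
    branch = series(s * 218e-9 + 1.0 / (s * 3.22e-12))   # series LC branch
    A, B, C, D = (tank @ branch @ tank).ravel()
    s21 = 2.0 * np.sqrt(RS * RL) / (A * RL + B + C * RS * RL + D * RS)
    return 20.0 * np.log10(abs(s21))

for f in (170e6, 190e6, 210e6, 380e6):   # band edges, center, 2nd harmonic
    print(f"{f / 1e6:5.0f} MHz: {gain_db(f):7.1f} dB")
```

Sweeping the response at the IF and at its second harmonic is exactly the check used to confirm the H2 attenuation and passband ripple targets described above.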
Clock Path Challenges
For large input signals, the quality of the ADC input clock plays a pivotal role in limiting the system's achievable SNR. Jitter on the clock edge corrupts the periodic sampling instant of the ADC and adds noise to the signal itself. Equation 1 gives the maximum achievable SNR for an ADC due to jitter, where fin is the input signal frequency, tJ is the RMS jitter, and A is the input signal amplitude in units of dB relative to full scale (dBFS), such that small amplitudes have large negative values.

SNRjitter = -20·log10(2π · fin · tJ) - A   [dBFS]   (Equation 1)
The equation illustrates three important points:

1. Jitter reduces the SNR more at higher input frequencies
2. The SNR-limiting effect of jitter is worse for larger signals
3. The SNR can be improved by decreasing the total jitter

These observations are critical for basestation receiver applications due to the high IFs, typically ranging from 100 to 250 MHz, used in IF-sampling receivers. Although the power in the frequency channel of interest can be quite small, the ADC in the receive path must also digitize large blocking signals and therefore requires very high sensitivity (high SNR and SFDR). As shown in Equation 1, the high input frequencies and large blocking signals in these applications exacerbate the effects of clock jitter. For example, achieving an SNR of 72 dBFS for a -1 dBFS single tone input signal at 190 MHz requires the RMS jitter to remain below 236 fs. Achieving this quality of clocking performance is not trivial.
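This example can be checked numerically. The sketch below implements Equation 1 and its inversion for the required RMS jitter; it reproduces the 236 fs figure quoted above.

```python
import math

# Equation 1 and its inversion for the RMS jitter needed to hit a
# target SNR; reproduces the 72 dBFS / 190 MHz / -1 dBFS example.
def snr_jitter_dbfs(f_in, t_j, a_dbfs):
    """Maximum jitter-limited SNR in dBFS (Equation 1)."""
    return -20.0 * math.log10(2.0 * math.pi * f_in * t_j) - a_dbfs

def required_jitter(f_in, snr_dbfs, a_dbfs):
    """Solve Equation 1 for the RMS jitter in seconds."""
    return 10.0 ** (-(snr_dbfs + a_dbfs) / 20.0) / (2.0 * math.pi * f_in)

t_j = required_jitter(190e6, 72.0, -1.0)
print(f"required RMS jitter: {t_j * 1e15:.0f} fs")               # ~236 fs
print(f"check: SNR = {snr_jitter_dbfs(190e6, t_j, -1.0):.1f} dBFS")
```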
Optimizing the Clock Path
To reduce the total jitter on the clock, one must understand the clock noise's spectral content and target specific spectral regions of the phase noise for reduction. "Close-in" phase noise is the skirt-shaped noise with a bandwidth that typically extends out 20 MHz from the clock's fundamental tone and is heavily influenced by the loop characteristics of the clocking circuit that generates the clock, namely the PLL. "Broadband" phase noise has a flat spectral signature with a bandwidth that extends out indefinitely and is often dominated by clock buffer noise.

The SP16160CH1RB subsystem board addresses these two regions of phase noise separately. Low close-in phase noise is achieved using the LMK04031B precision clock conditioner in conjunction with a Crystek reference crystal oscillator and VCXO. The cascaded PLL architecture of the LMK04031B provides two stages of frequency-targeted jitter cleaning: the first stage reduces the reference clock noise using a very low PLL loop bandwidth, while the second stage uses an internal, low-noise VCO and a high-speed phase/frequency detector to further reduce the upper band of close-in noise. The LMK04031B clocking solution also multiplies the 61.44 MHz reference clock to generate the 153.6 MHz clock for the ADC. The close-in RMS jitter of the generated CMOS clock is less than 200 fs integrated out to 20 MHz from the carrier.
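For reference, RMS jitter is obtained from a single-sideband phase-noise profile L(f) by integrating the noise power over the offset range of interest: sigma_t = sqrt(2·∫10^(L(f)/10) df) / (2π·fclk). The sketch below applies this to an invented, representative profile; the L(f) points are assumptions, not LMK04031B measurements.

```python
import numpy as np

# RMS jitter from a single-sideband phase-noise profile L(f):
#   sigma_t = sqrt(2 * integral of 10**(L(f)/10) df) / (2 * pi * f_clk)
# The profile points below are invented and representative only, not
# LMK04031B measurements.
F_CLK = 153.6e6                                            # clock frequency, Hz
offset = np.array([1e3,  10e3, 100e3, 1e6,  10e6, 20e6])   # offset from carrier, Hz
l_dbc  = np.array([-120, -130, -140,  -150, -157, -160])   # L(f), dBc/Hz (assumed)

s_phi = 10.0 ** (l_dbc / 10.0)                             # linear SSB noise, 1/Hz
# Trapezoidal integration over the 1 kHz - 20 MHz offset range.
area = np.sum(0.5 * (s_phi[1:] + s_phi[:-1]) * np.diff(offset))
sigma_t = np.sqrt(2.0 * area) / (2.0 * np.pi * F_CLK)
print(f"integrated RMS jitter: {sigma_t * 1e15:.0f} fs")
```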

The clock's broadband noise is troublesome because of its wideband nature. For the ADC to receive a clock with a very sharp sampling edge, the clock signal bandwidth must be very wide, so a large bandwidth of noise couples onto the signal and aliases back into the first Nyquist zone, reducing the system SNR. Simply band-limiting the clock input or the clock signal itself shrinks the noise bandwidth but carries a significant disadvantage: the reduced slope of the sampling edge makes the circuit more susceptible to amplitude-modulation-to-phase-modulation (AM-to-PM) noise conversion, which can make the noise even worse.

Using a surface acoustic wave (SAW) filter and CMOS buffer, the SP16160CH1RB demonstrates the effective broadband noise-reducing solution shown in Figure 3. The clock from the LMK04031B is narrowly filtered by a Vectron SAW to purify the clock's spectral content and reduce the broadband noise. The Fairchild NC7WV125 CMOS buffer then sharpens the edge rate without adding a large amount of noise. Filtering and re-buffering the clock from the LMK04031B replaces the broadband noise of the LMK04031B with that of the CMOS buffer, reducing the broadband noise density from -162 dBc/Hz to -168 dBc/Hz. The overall 2.5 dB SNR improvement compared to an unfiltered, unbuffered approach can be demonstrated on the SP16160CH1RB subsystem board.
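The effect of the lower noise floor can be estimated by integrating a flat phase-noise density over the clock path's noise bandwidth. In the sketch below, the two floor densities come from the text, but the 500 MHz integration bandwidth is an assumption. A 6 dB lower floor halves the broadband jitter; the overall SNR gain (2.5 dB here) is smaller because close-in jitter and the ADC's own noise still contribute.

```python
import math

# Jitter contribution of a flat broadband phase-noise floor, comparing
# the -162 and -168 dBc/Hz densities quoted above. The 500 MHz noise
# bandwidth is an assumed stand-in for the clock path's bandwidth.
F_CLK = 153.6e6
NBW_HZ = 500e6                 # assumed effective noise bandwidth, Hz

for floor_dbc_hz in (-162.0, -168.0):
    # Double-sideband noise power in rad^2 integrated over the bandwidth.
    phase_var = 2.0 * 10.0 ** (floor_dbc_hz / 10.0) * NBW_HZ
    sigma_t = math.sqrt(phase_var) / (2.0 * math.pi * F_CLK)
    print(f"{floor_dbc_hz:6.1f} dBc/Hz -> broadband jitter = {sigma_t * 1e15:3.0f} fs")
```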
Performance and Summary
The SP16160CH1RB subsystem design uses an input bandwidth of 20 MHz centered at an IF of 192 MHz and a sampling rate of 153.6 MSPS. By addressing the challenges of interfacing to high-speed data converters in basestation applications, the subsystem design achieves a typical Nyquist-band SNR of 71 dBFS and an SFDR greater than 82 dBFS for a -1 dBFS tone. Third-order intermodulation products that fall in-band during a two-tone test are less than -91 dBFS for a composite signal with a 1 MHz tone spacing and a combined -4 dBFS peak-to-peak amplitude.

In basestation applications, the sensitivity of the channel is more important than performance over the entire Nyquist band, especially in the presence of large blocking signals. In the presence of a -4 dBFS blocking signal offset 800 kHz from the GSM-type channel, the SNR in the 200 kHz channel is 94 dBFS and the SFDR is greater than 90 dBFS. In the absence of the blocker, the SNR is greater than 99 dBFS.
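These channel figures relate to the Nyquist-band numbers through processing gain: filtering the digitized band down to a 200 kHz channel improves SNR by roughly 10·log10((fs/2)/200 kHz) if the noise were perfectly white. The sketch below computes this idealized figure; real noise shaping and blocker effects account for the difference from the measured 94 to 99 dBFS results.

```python
import math

# Idealized processing gain from digitally filtering the Nyquist band
# down to a 200 kHz channel, assuming a white noise floor (real noise
# is not perfectly white, so this only approximates the measured data).
FS = 153.6e6                   # sampling rate, Hz
CHANNEL_BW = 200e3             # GSM-type channel bandwidth, Hz

gain_db = 10.0 * math.log10((FS / 2.0) / CHANNEL_BW)
print(f"processing gain: {gain_db:.1f} dB")                       # ~25.8 dB
print(f"71 dBFS Nyquist SNR -> ~{71.0 + gain_db:.0f} dBFS in-channel")
```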

Driving and clocking 16-bit ADCs in wireless basestation applications are critical functions that can make or break a performance specification. The DVGA and clock circuits that perform these functions must be carefully chosen along with appropriate interfaces to maximize the system's dynamic range. The SP16160CH1RB subsystem design demonstrates a highly-linear, low-noise DVGA driver solution and a low-jitter clocking solution for operation with a 16-bit ADC in a multi-carrier, IF-sampling subsystem.

Josh Carnes is an applications engineer with National Semiconductor's High-Speed Signal Path Group, based in Ft. Collins, CO.
