A receiver's position accuracy is a fundamental challenge of any satellite navigation system. As a result, GPS receiver designers must perform a combination of measurements to ensure that a device will perform as expected.
David A. Hall, National Instruments

Geocaching involves discovering various "caches" whose coordinates can be found on the Web and whose contents are located all over the world. Several weeks ago, I joined hundreds of thousands of other GPS enthusiasts by trying my luck at geocaching for the first time. Armed with the latitude and longitude of a nearby cache, my friends and I began our hunt for the hidden treasure. As the token RF engineer and manager of a tool for GPS receiver testing, I naturally assumed the role of "navigator." Surely, my experience with GPS receiver testing should count for something. Thus, I carefully observed our coordinates and barked out directions to the team.

Figure 1. Block diagram of GPS generation system.

It took us about 15 minutes to arrive at the supposed cache location. According to my GPS device, the hidden treasure was situated between two large buildings, and quite close to the base of one of them. Unfortunately, after an hour of searching, we concluded that the cache was nowhere to be found. We had thoroughly combed all areas within five meters of the reported location – but to no avail.

After my apparent failure at geocaching, I could not help but wonder what went wrong. I have long known that a receiver's sensitivity affects its ability to track satellites that are partially blocked by buildings and other structures. Was it possible that our search radius was not large enough? With at least half of the sky blocked from my receiver's view, it was likely that my receiver could see only six to seven satellites. Fundamentally, the receiver's horizontal dilution of precision (HDOP), a figure of merit for receiver accuracy, was perhaps in the 4 to 10 range. If my assumption about HDOP was true, the receiver's position error could be anywhere from 5 to 25 meters.
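The back-of-the-envelope estimate above follows a common rule of thumb: horizontal position error is roughly HDOP multiplied by the user-equivalent range error (UERE). The UERE value below is an assumption for illustration, not a figure from this article; real receivers see UERE vary with atmospheric and multipath conditions.

```python
def horizontal_error_estimate(hdop, uere_m=2.5):
    """Rule-of-thumb horizontal error: HDOP x UERE.

    uere_m is an assumed user-equivalent range error in meters;
    a couple of meters is typical for a consumer receiver.
    """
    return hdop * uere_m

for hdop in (1.0, 4.0, 10.0):
    print(f"HDOP {hdop:4.1f} -> roughly {horizontal_error_estimate(hdop):.0f} m of error")
```

With this assumed UERE, an HDOP in the 4 to 10 range maps to errors on the order of ten to a few tens of meters, consistent with the search-radius concern described above.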

GPS position accuracy is a fundamental challenge of satellite navigation systems. This concern, along with satellite availability, requires GPS receiver designers to perform a combination of measurements, such as sensitivity and field testing, to ensure that a device will behave as expected. For GPS receivers, many environmental characteristics, such as multipath fading and ionospheric/tropospheric variations, can affect pseudorange (the measured distance to a satellite) and, therefore, receiver accuracy. In addition, a receiver's sensitivity can have a substantial effect on accuracy. Because sensitivity determines which satellites a receiver sees and which ones it does not, a highly sensitive receiver is better able to track satellites that have been partially blocked from view. The following sections explain how GPS receiver sensitivity is measured; provide insight into techniques, such as drive testing and record and playback, that test behaviors a simple sensitivity measurement cannot capture; and discuss how receiver figures of merit, such as HDOP, correlate with position accuracy.
Measuring Sensitivity
Sensitivity, one of the most basic measurements for any RF receiver, is used to characterize how a receiver will function in low signal strength conditions. For GPS receivers, designers usually specify two different types of sensitivity: acquisition sensitivity and tracking sensitivity.

Fundamentally, sensitivity is the lowest power level at which a receiver produces the minimum signal-to-noise ratio necessary to achieve a required bit error rate (BER). Because signal-to-noise ratio and BER are tightly correlated parameters, sensitivity is often determined by simply measuring the receiver's reported carrier-to-noise density (C/N0) for a single satellite at a known RF power level. In general, acquiring a 3D position fix requires that the GPS receiver chipset see four satellites above the minimum C/N0 requirement – usually in the 28 to 30 dB-Hz range. The precise minimum C/N0 depends on the baseband processor, and is often a tradeoff against time to first fix (TTFF). Thus, acquisition sensitivity is defined as the lowest power level at which a receiver can obtain a position fix.

Figure 2. Position deviation for two receivers over a 30-minute period of time.

By contrast, the tracking sensitivity is defined as the minimum power level at which the receiver can still track satellites that have already been identified. A common scenario is for a receiver to obtain a position fix on four or more satellites, and then have one satellite fade in signal strength as it is blocked from view. Even though the satellite power might dip below the 28 dB-Hz requirement for position fix, a receiver will still be able to track the satellite for some time.

For any RF receiver, sensitivity is highly correlated with the overall noise figure of the receiver. Mathematically, you can relate sensitivity to the noise figure with Equation 1.

As Equation 1 illustrates, you can estimate the sensitivity of a GPS receiver simply by measuring the noise figure of the receiver. For example, if a given chipset is able to achieve position fix at a C/N0 of 30 dB-Hz, and the RF front end has a noise figure of 4 dB, then the receiver will have an acquisition sensitivity of -140 dBm (-174 + 30 + 4). In addition, improving the sensitivity from -140 dBm to -142 dBm requires either a different chipset, or a 2 dB improvement in the receiver's noise figure.
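The relationship in Equation 1 can be sketched in a few lines of code. This is a direct restatement of the article's arithmetic, using the -174 dBm/Hz thermal noise floor; the function name is my own.

```python
THERMAL_NOISE_DBM_HZ = -174.0  # thermal noise density (kT) at roughly 290 K

def acquisition_sensitivity_dbm(min_cn0_dbhz, noise_figure_db):
    """Estimate acquisition sensitivity from the chipset's minimum C/N0
    requirement (dB-Hz) and the RF front end's noise figure (dB)."""
    return THERMAL_NOISE_DBM_HZ + min_cn0_dbhz + noise_figure_db

print(acquisition_sensitivity_dbm(30.0, 4.0))  # -140.0, matching the example above
```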

In theory, you can measure acquisition sensitivity with either a single-satellite or multiple-satellite test stimulus. In addition, though four satellites above the minimum power are required to achieve a position fix, sensitivity is more commonly measured in a production test environment with a single-satellite test stimulus. In this case, an RF vector signal generator is used to generate a single C/A coded signal at an extremely low power level. Figure 1 shows a common system setup.

As Figure 1 illustrates, a fixed attenuator (or "pad") is often used at the output of the RF signal generator to ensure that the test instrument does not contribute additional noise. Depending on the architecture of the receiver, a "DC block" may also be required. Because many GPS receivers provide a DC bias signal on their input port, a DC block ensures that the receiver's DC bias does not damage the test instrument.

With the RF vector signal generator simulating a single GPS satellite, simply measure the reported C/N0 from the receiver. In a production test, it is actually quite common to test a receiver with an RF power that is slightly higher than its sensitivity. Because RF power and C/N0 are linearly correlated, the receiver's sensitivity can be measured at a range of power levels. For example, suppose the receiver requires a minimum C/N0 of 30 dB-Hz and has a noise figure of 2 dB. In this case, the receiver's sensitivity would be -142 dBm (-174 + 30 + 2). However, the same receiver would also produce a C/N0 of 36 dB-Hz at an RF power level of -136 dBm (-174 + 36 + 2). Thus, as long as the RF front end of the receiver does not saturate (where RF power and C/N0 are no longer linearly correlated), sensitivity can be tested at a variety of RF power levels.
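The elevated-power technique amounts to subtracting the measured C/N0 margin from the test power. A minimal sketch of that inference, using the figures from the example above (function name my own):

```python
def inferred_sensitivity_dbm(p_test_dbm, cn0_measured_dbhz, cn0_min_dbhz):
    """Infer sensitivity from a measurement at elevated RF power, assuming
    RF power and C/N0 remain linearly related (front end not saturated)."""
    return p_test_dbm - (cn0_measured_dbhz - cn0_min_dbhz)

# A -136 dBm stimulus that yields 36 dB-Hz, with a 30 dB-Hz minimum,
# implies a sensitivity of -142 dBm.
print(inferred_sensitivity_dbm(-136.0, 36.0, 30.0))  # -142.0
```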
Validating Performance in Difficult Environmental Conditions
Sensitivity is an important measurement because it determines which satellites a GPS device will track and which ones it will not. In fact, an inherent challenge of the GPS navigation system is that a direct line of sight to some satellites is often obstructed – yielding RF signals that are often significantly attenuated. For this reason, engineers typically use one of three techniques in the design validation process to ensure that a receiver will perform as expected in difficult environmental scenarios. These techniques include (1) multipath simulation, (2) drive testing, and (3) record and playback testing.

Equation 1. Sensitivity (dBm) = -174 dBm/Hz + C/N0 (dB-Hz) + NF (dB). Sensitivity as a function of C/N0 and noise figure.
When using a multipath simulator, you can use variable signal path delays to emulate multipath conditions, which are common in urban environments. When performing a drive test, an engineer physically takes the receiver to a location that is known to introduce either small-scale or large-scale signal fading. RF record and playback testing is a newer technique that you can use to supplement these more traditional approaches. In this scenario, you connect an RF vector signal analyzer to a high-gain antenna and configure it for a continuous IQ acquisition at the GPS L1 frequency of 1.57542 GHz. Once the signal is recorded, you use an RF vector signal generator to replay the recorded GPS signal into the receiver. The record and playback approach is not only more cost effective than drive testing, but it also yields more repeatable results. In each case, the receiver observes a difficult test scenario, and you can compare the reported coordinates and HDOP with the expected values.
Experiments on Receiver Accuracy
To better illustrate why techniques such as drive testing and RF record and playback testing are important, I conducted a basic experiment that correlates HDOP with position repeatability. In this experiment, I placed two receivers in different locations and observed their reported positions for 30 minutes. The first receiver, which I placed on top of a parking garage, benefited from a complete 360° view of the sky. The second receiver, which I placed next to a large building and under an awning, had only a partial view of the sky. Throughout the 30-minute period, I monitored the position simply by reading from each receiver's serial port and parsing the NMEA-0183 sentences with a program that I wrote in NI LabVIEW.
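The parsing step described above can be sketched in a few lines. The article's program was written in LabVIEW; the Python below is an illustrative equivalent that pulls the satellite count and HDOP out of a $GPGGA sentence. The sample sentence is a widely circulated NMEA example, not data from this experiment, and checksum validation is omitted for brevity.

```python
def parse_gga(sentence):
    """Extract the satellite count and HDOP from a $GPGGA sentence.

    GGA field layout (comma-separated): field 7 is the number of
    satellites used, field 8 is HDOP. Checksum checking is omitted.
    """
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        return None  # not a GGA sentence
    return {"num_satellites": int(fields[7]), "hdop": float(fields[8])}

# Illustrative sample sentence, not a reading from the experiment
gga = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(parse_gga(gga))  # {'num_satellites': 8, 'hdop': 0.9}
```

In practice, a logger would read such sentences from the receiver's serial port in a loop and accumulate the HDOP values for later statistics.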

As I expected, the receiver with the "open sky" view performed significantly better than the receiver with the "blocked" view. In fact, the "open sky" receiver reported an average of 8.9 satellites in view, as opposed to 7.0 for the receiver under the awning. Both the number of satellites and their orientation affected the reported dilution of precision. With a clear view of the sky, most GPS receivers will report an HDOP of 0.8 to 1.2. Thus, as expected, my "open sky" receiver reported a mean HDOP of 0.89 with a standard deviation of 0.08. However, the receiver with the partial view of the sky fundamentally tracked fewer satellites, and at lower power levels. In fact, over the 30-minute window, my "blocked view" receiver reported a mean HDOP of 6.04 with a standard deviation of 7.73 and a peak of 74.5.

While it is difficult to precisely correlate HDOP with horizontal position error, you can observe the effect of poor HDOP on receiver performance by plotting the receiver's reported position over a period of time. Even though both receivers were stationary, each reported a range of positions that moved over time. In Figure 2, notice that the receiver in the "open sky" environment reported a deviation of less than two meters in either direction from the mean position. The "blocked view" receiver, on the other hand, reported up to 50 meters of deviation from the mean position. Figure 2 shows the complete position deviation results for both locations.
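To plot deviation in meters, each latitude/longitude report must be converted to a north/east offset from the mean position. A minimal sketch of that conversion, using a flat-earth (equirectangular) approximation that is accurate over the short distances involved here; the function name and Earth-radius constant are my own choices, not from the article.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius

def deviation_m(lat_deg, lon_deg, lat0_deg, lon0_deg):
    """North/east deviation in meters of a reported position from a mean
    position (lat0, lon0), via a flat-earth approximation: longitude
    degrees shrink by cos(latitude) relative to latitude degrees."""
    north = math.radians(lat_deg - lat0_deg) * EARTH_RADIUS_M
    east = (math.radians(lon_deg - lon0_deg)
            * EARTH_RADIUS_M * math.cos(math.radians(lat0_deg)))
    return north, east

# A 0.00001 deg latitude offset is roughly 1.1 m of northward deviation
n, e = deviation_m(30.00001, -97.0, 30.0, -97.0)
print(round(n, 2), round(e, 2))
```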

While my experiment does not provide any indication of the absolute accuracy of the receiver in either location, the wide range of reported positions at the second location suggests that the reported coordinate accuracy was quite poor. Overall, it is safe to say that the higher HDOP values resulting from obstruction of the sky are a good predictor of degraded position accuracy.
I may never know just how accurate my receiver's position was on the day that I tried my hand at geocaching. In fact, it is likely that my team's failure to locate the "hidden treasure" was the result of poor search techniques and not the result of poor satellite conditions. Either way, the adventure provides a colorful insight into the importance of GPS receiver accuracy. Moreover, my experiences as a technology user help to explain the motivation for measurements such as sensitivity and accuracy indicators such as HDOP.

David Hall is a product marketing manager at National Instruments and is responsible for driving the growth of RF and wireless communications hardware and software. His particular subjects of expertise include digital signal processing and digital communications systems. Hall can be contacted at 512-683-5661.