The IC choices architects make today will determine how they will be able to weather the coming storm of 4G wireless networks.

By Dave Nicklin and Manuel Uhm, Xilinx
Over the next five years, the wireless industry will confront a perfect storm of technical and economic challenges as it attempts to bring the benefits of fourth generation (4G) wireless technology to users around the world.
4G network operators want to bring users new, higher-speed, data-intensive services and applications that help raise ARPU (average revenue per user). Before they can do so, however, they need to put in place a wireless network infrastructure, preferably one built on SDR (software-defined radio) or multi-standard, multi-mode equipment. Such equipment lets operators provide new services and upgrade the network via software downloads, allowing the mobile device manufacturers to capture additional revenue streams even after the network has been deployed.
At the turbulent center of this network storm is the growing cost and complexity of the ICs controlling the wireless equipment.
Indeed, the economics of the semiconductor industry severely impact the move to 4G wireless networks. The industry's economic models are changing rapidly because development costs rise almost exponentially at each new silicon process node. Under these new cost models, only very large markets, such as consumer products, can provide a sufficient ROI on the development of an ASIC or ASSP. It is expected that developing ASSPs specifically to meet the needs of wireless infrastructure will no longer be economically viable at 32 nm and below. Additionally, consolidation among wireless OEMs — 91% of the market is served by just five manufacturers — means the risks of ASSP development have increased to the point that if an ASSP fails to be adopted by two or three of the main players, the company producing it is unlikely ever to recover its development costs.
Let’s examine in more detail the specific IC issues surrounding two key areas within the base station system architecture: the radio card and baseband processing.
Issues in Radio Design
The design of a modern radio is increasingly challenging as operators look to reduce operational costs while demanding greater flexibility to meet changing network standards. At present, every region in the world has different frequency usage and regulations, so radio equipment must be able to support many different frequency bands of operation. Even within a single standard, the same frequency is unlikely to be available worldwide, which is one reason mobile device manufacturers developed multi-band GSM handsets. OEMs, however, are constantly looking for new ways to improve operational efficiency and reduce development costs. So in addition to building single-standard equipment that can operate in different bands around the world, they are increasingly developing infrastructure equipment that operators can easily reprogram to support a variety of different standards. This is quite an undertaking, given that each air interface standard has a different spectral mask, a different carrier bandwidth, or even a different number of carriers, dependent on its geographical deployment and how much spectrum the operator possesses in that region.
Network providers can also significantly reduce overall operational costs by improving the energy efficiency of a base station's radio amplifiers. It is widely known that the power amplifiers in a traditional base station consume upwards of 60% of total site power. This is largely because designers of legacy base stations could not create systems that efficiently deliver the transmit power required for higher-order modulation signals. Yet such signals will be the norm, not the exception, in emerging wireless standards: higher-order modulation schemes increase the spectral efficiency (bits/s/Hz) of the network, but create significant problems for transmission efficiency.
In order to improve the overall performance of the power amplifier, system architects are looking to increase levels of signal processing in the digital domain via algorithms, such as crest factor reduction (CFR) and digital pre-distortion (DPD).
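As a rough illustration of what CFR does, the following Python sketch hard-clips the peaks of a toy eight-tone multicarrier signal and measures the drop in peak-to-average power ratio (PAPR). The signal, the 1.5x-RMS clipping threshold, and the hard-clipping approach itself are simplifications of our own for illustration; production CFR uses more sophisticated peak-cancellation techniques that keep the clipping noise within the spectral mask.

```python
import math

# Toy multicarrier downlink signal: a sum of 8 tones with fixed arbitrary
# phases, standing in for an OFDM-like waveform with a high peak-to-average
# power ratio. (Illustrative only; not a real air-interface waveform.)
TONES = 8
PHASES = [0.7 * k * k for k in range(TONES)]
samples = [
    sum(math.cos(2 * math.pi * (k + 1) * n / 2048 + PHASES[k])
        for k in range(TONES))
    for n in range(2048)
]

def papr_db(signal):
    # Peak-to-average power ratio in dB
    mean_p = sum(s * s for s in signal) / len(signal)
    peak_p = max(s * s for s in signal)
    return 10 * math.log10(peak_p / mean_p)

# Hard-clip peaks at 1.5x the RMS amplitude. With its dynamic range reduced,
# the signal can be driven closer to the amplifier's maximum output.
rms = math.sqrt(sum(s * s for s in samples) / len(samples))
ceiling = 1.5 * rms
clipped = [max(-ceiling, min(ceiling, s)) for s in samples]
```

Clipping trades a small amount of in-band distortion for a lower PAPR; DPD then compensates for the amplifier's own nonlinearity so it can run in that more efficient region.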
While CFR reduces the dynamic range of the transmitted signal (allowing the power amplifier to operate closer to its maximum), DPD linearizes the power amplifier, yielding significantly more RF power for the same DC input power. A typical LDMOS Class AB power amplifier without CFR and DPD usually achieves efficiency in the 8% to 12% range; with both algorithms enabled, network providers can raise this efficiency to as much as 35% to 40%. For a typical mid-sized network of 10K base stations, the resulting savings amount to an impressive $18M+ per annum (pa) and a reduction in carbon emissions of 31,000 tonnes.
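A back-of-envelope calculation shows how savings of that order can arise. Every per-site figure below is an assumption of our own for illustration, not data from the article: three sector power amplifiers per base station, 40 W average RF output each, a $0.23/kWh electricity tariff, and a grid intensity of 0.4 kg CO2 per kWh.

```python
# Assumed per-site parameters (illustrative, not from the article)
PAS_PER_STATION = 3        # sector power amplifiers per base station
RF_WATTS_PER_PA = 40.0     # average RF output per amplifier
STATIONS = 10_000          # "typical mid-sized network" from the article
HOURS_PER_YEAR = 8760
TARIFF_USD_PER_KWH = 0.23  # assumed electricity price
CO2_KG_PER_KWH = 0.4       # assumed grid carbon intensity

def dc_power_w(efficiency):
    # DC input power needed per station to produce the target RF output
    return PAS_PER_STATION * RF_WATTS_PER_PA / efficiency

# Moving from ~10% efficiency (no CFR/DPD) to ~37% (with CFR/DPD)
saved_w_per_station = dc_power_w(0.10) - dc_power_w(0.37)
saved_kwh_pa = saved_w_per_station * STATIONS * HOURS_PER_YEAR / 1000
usd_saved_pa = saved_kwh_pa * TARIFF_USD_PER_KWH
co2_tonnes_pa = saved_kwh_pa * CO2_KG_PER_KWH / 1000
```

Under these assumptions the network saves roughly 77 GWh a year, on the order of $17-18M and about 31,000 tonnes of CO2 per annum, which lands in the same ballpark as the figures quoted above.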
The development of multi-modal radio technology goes a long way toward addressing many of the radio design issues outlined above. Network architects can leverage new analog technology, in the form of direct-conversion architectures, to develop a single radio capable of handling any air interface within a given frequency range. The key benefits of multi-modal radios throughout the RAN ecosystem are shown in the table.
A key enabling technology for multi-modal radios is the platform FPGA. As well as providing high levels of flexibility via its fully programmable fabric, its embedded DSP blocks and high-speed serial transceivers enable OEMs to develop a programmable digital radio that can cover virtually any air interface standard with lower power dissipation and reduced costs in comparison to less adaptable designs built around multiple ASSPs and ASICs.
Issues in Baseband Design
High-performance baseband functions, such as turbo-decoder forward error correction (FEC) blocks, are very challenging to design and implement. They require high levels of processing power and, at the system level, must deliver it with the lowest possible latency. To relieve the burden placed on standard DSPs, some manufacturers developed wireless market-specific products. However, this approach brings its own challenges. There have been occasions when the embedded functions inside these specialized DSPs did not work as required. In addition, standards bodies often delay or change standards, and these DSPs could not support those changes. Consequently, designers are demanding ever greater levels of flexibility and are looking to other technologies for a solution.
Research work, such as that expressed in Pollack's Rule — which states that single-core performance increases only as the square root of complexity (i.e., to get 2x the performance, a processor needs roughly 4x more transistors) — has led some silicon manufacturers to conclude that multi-core devices are the only way forward to meet the demands of future baseband processing requirements, such as 3GPP LTE.
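Pollack's Rule fits in one line of code; the helper name below is our own, and the rule itself is an empirical observation about single-core designs, not a physical law.

```python
import math

def pollack_speedup(transistor_ratio):
    # Pollack's Rule: single-core performance grows roughly as the square
    # root of the increase in complexity (transistor count)
    return math.sqrt(transistor_ratio)

# Quadrupling the transistor budget of one big core yields only ~2x
# performance, whereas spending the same budget on four small cores can
# offer up to 4x throughput on sufficiently parallel workloads.
big_core_gain = pollack_speedup(4)   # -> 2.0
```

This diminishing return on monolithic cores is precisely what drives the multi-core argument for baseband processing.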
However, multi-core devices, both those built from small arrays of complex DSPs and those formed by larger arrays of smaller general purpose processors, bring with them a huge number of new technological challenges. While they promise greater efficiency and performance by leveraging parallelism, they add significantly more control and process complexity because of the need to schedule access to limited shared resources, such as hierarchical memory caches.
Software engineers accustomed to solving problems sequentially find it much more challenging to target these architectures than hardware designers, who have been exploiting the efficiency of parallel implementations in FPGA hardware from the very beginning. It is also much harder for programmers to write generic, portable code for a multi-core system, as the architectures differ so greatly from one another. Programmers must take the target architecture into account when writing the code, so large sections become bespoke to a specific device, making it much harder to adapt the design to newer architectures or to implement it on devices from a different manufacturer.
To avoid this, architects are looking to FPGAs to perform a much higher proportion of the physical layer implementation. FPGAs have more than enough capacity and performance to do so, and their design flow is well established and proven for parallel design development. FPGA design tools are also evolving toward higher levels of abstraction, enabling design groups to use DSP algorithmic design tools, such as MATLAB, to program both the hardware and the software in FPGAs. The FPGA's more homogeneous and granular architecture makes it easier to adapt the design throughout the product's lifetime.
In conclusion, the IC choices architects make today will determine how well they weather the coming storm of 4G wireless networks. To negotiate the turbulence, more flexible products will be a necessity, while those based on today's prevailing architectures may prove too inflexible and fail. Designers would be well advised to evaluate the full potential offered by the FPGA's mix of high performance, low power, and flexibility.
David Nicklin is senior manager of wireless product marketing at Xilinx, Inc. Manuel Uhm is director of wireless communications at Xilinx, Inc., and chair of the markets committee for the SDR (Software Defined Radio) Forum.