Shared Active Infrastructure and the Virtualization of Wireless Networks
Wed, 05/27/2009 - 7:31am
A virtualized network provides each operator with full independent control of their own virtual Base Transceiver Subsystem (vBTS), and connectivity to their own Base Station Controller (BSC) and core network.
Steve Muir, Vanu®, Inc.
Figure 1: Basic structure of a Virtualized System.
Passive infrastructure sharing, where operators share basic site components such as towers, shelters and electrical power supplies, is already commonplace in the wireless industry. In the last few years various operators have investigated shared active infrastructure: the sharing of active electronic components, e.g., base stations and backhaul transmission equipment. Regulators have relaxed rules on ownership of equipment to enable such sharing, recognizing that stringent build-out requirements in rural areas can only be satisfied if operators are able to leverage shared active infrastructure to reduce network costs.
Today's traditional shared deployments have necessitated that participating operators agree upon the same technology, roadmap, and features. The result is significant loss of competitive differentiation among the operators, discouraging operators from embracing the technology and leading to fewer service offerings for customers. A new solution to this challenge is virtualization of the radio access network (RAN), rather than traditional sharing.
Introduction to Virtualization
Figure 2: Virtualized radio access network supporting three operators.
Figure 1 shows the basic structure of a virtualized system. The key observation is that the role of the hypervisor in a virtualized system is analogous to the role of the operating system (OS) in a standard computer system. The hypervisor manages various hardware interfaces and provides essential services (protection, translation, multiplexing, resource management, etc.) to a number of clients. The essential difference between the hypervisor and the OS is that the OS's clients are application programs, while the hypervisor's clients are virtual machines. Each virtual machine runs its own OS, the guest OS, which in turn manages a set of standard applications.
Another difference is the interface structure: whereas the OS provides application programming interfaces (APIs) to its applications, the hypervisor provides each virtual machine with a set of virtual devices: network interface, disk, graphics adapter, etc. Some hypervisors support the use of a standard, unmodified OS as the guest OS, with virtual devices emulating standard physical devices. Others provide a guest OS environment replicated from the hypervisor's host OS, and incur much lower overhead in doing so. The hypervisor may also provide a hypercall API that allows the OS running in a virtual machine to more efficiently invoke certain services of the hypervisor; such a facility is called para-virtualization.
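The multiplexing and protection roles described above can be sketched in miniature. The following toy model (not a real hypervisor; all class and method names are illustrative) shows the core idea: one physical device is multiplexed among several virtual machines, each of which sees only its own virtual device.

```python
# Toy model of hypervisor device multiplexing (illustrative only):
# each guest VM receives its own virtual NIC, and the hypervisor
# enforces that traffic is visible only to the VM it is addressed to.

class VirtualNIC:
    """Virtual network interface handed to one guest VM."""
    def __init__(self, vm_name):
        self.vm_name = vm_name
        self.inbox = []

class Hypervisor:
    def __init__(self):
        self.vnics = {}

    def create_vm(self, name):
        # Each VM gets its own isolated virtual device.
        vnic = VirtualNIC(name)
        self.vnics[name] = vnic
        return vnic

    def deliver(self, frame, dest_vm):
        # Protection: a frame reaches only the VM it is addressed to,
        # even though all VMs share the same physical interface.
        self.vnics[dest_vm].inbox.append(frame)

hv = Hypervisor()
a = hv.create_vm("guest-a")
b = hv.create_vm("guest-b")
hv.deliver("frame-for-a", "guest-a")
assert a.inbox == ["frame-for-a"] and b.inbox == []
```

A real hypervisor performs the same multiplexing in hardware-assisted fashion, but the isolation property is the same: no guest can observe another guest's traffic.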
Virtualization is used extensively in enterprise computing environments to support the use of virtual machines for various purposes. Data center hosting providers use virtualization to create a large number of hosted systems on a smaller number of physical machines, while retaining the ability to manage each hosted system as if it were a physical machine, i.e., with separate disk quotas, network access and traffic management policies, and independent upgrades of OS and applications. Enterprises may use virtualization to separate logical services (web, email, firewall, etc.) without requiring a separate physical server for each one, which provides greater security than running all services within a single OS environment. Finally, virtualization is used by engineers to boot many different guest OSes on a single hardware platform for development purposes, e.g., testing of new OS-level software such as device drivers, or investigation of OS security issues.
Applying Virtualization to Shared Radio Access Networks
Operators who wish to share active infrastructure, particularly the base station subsystem (BSS), do so in order to reduce the cost of deploying and operating certain parts of the network. The challenge they face is retaining independent management and configuration control while still being able to apply the software and technology upgrades that differentiate them from their competitors.
Traditional hardware radio approaches to shared infrastructure require that operators share a single traditional base station, thus eliminating the ability of each operator to provide independent feature sets or levels of technology. Furthermore, traditional base stations were not designed to be shared, and cannot provide fully independent management and configuration.
A virtualized RAN leverages software radio technology, which implements the complete base station subsystem (BSS) in software, rather than the traditional hardware-based approach. The key element of the BSS that must be shared is the base transceiver subsystem (BTS), the radio system that is located at each cell site and supports radio communications with mobile terminals. A software radio BTS implements all radio functionality, from physical layer through MAC layer and network layer, in software that runs on a standard operating system, e.g., Linux, on commodity off-the-shelf (COTS) processing platforms. This allows a software radio system to take advantage of investment in new technologies and open systems, rather than being limited to a particular radio vendor's own proprietary technologies. In turn, this breaks the vertical integration business model that operators have been forced into in the past and instead creates an ecosystem of horizontally-focused component suppliers that can be integrated into a more flexible and cost-effective solution.
A software radio BTS is much more readily virtualized than a hardware radio, since the BTS is just a software application. It is possible to construct a virtualized base station by using standard virtualization technology to create a virtual machine (VM) per operator, and running an independent BTS application for each operator within that VM. This ensures that each operator has complete control over their BTS, while guaranteeing that one operator's traffic, signaling and configuration data are isolated from other operators.
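The per-operator isolation described above can be sketched as follows. This is a hypothetical illustration, not Vanu's actual implementation: the class names, configuration keys, and values are invented for the example, and each VM is modeled simply as an object holding a private configuration copy.

```python
# Illustrative sketch of one vBTS per operator (names and parameters
# are hypothetical): each operator's configuration and traffic live in
# an isolated per-operator instance, standing in for a per-operator VM.

class VirtualBTS:
    def __init__(self, operator, config):
        self.operator = operator
        self.config = dict(config)   # private copy: no shared state
        self.traffic = []            # per-operator traffic queue

class VirtualizedBaseStation:
    """Stands in for the hypervisor hosting one VM per operator."""
    def __init__(self):
        self.vms = {}

    def add_operator(self, operator, config):
        self.vms[operator] = VirtualBTS(operator, config)

    def reconfigure(self, operator, key, value):
        # One operator's change never touches another operator's vBTS.
        self.vms[operator].config[key] = value

site = VirtualizedBaseStation()
site.add_operator("op-a", {"arfcn": 51, "power_dbm": 40})
site.add_operator("op-b", {"arfcn": 62, "power_dbm": 43})
site.reconfigure("op-a", "power_dbm", 37)
assert site.vms["op-b"].config["power_dbm"] == 43  # op-b unaffected
```

The point of the sketch is the invariant in the final assertion: reconfiguring one operator's vBTS cannot alter another's, which is exactly the guarantee virtualization provides at the base station.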
Figure 2 shows the architecture for a virtualized GSM network supporting three different operators. In this scenario one of the operators deploys the network and acts as an anchor tenant, using excess capacity to provide a managed service to the other two operators in order to reduce the cost of operating the network. Although superficially similar to a traditional roaming network arrangement, the virtualized network provides each operator with full independent control of their own virtual BTS (vBTS), and connectivity to their own BSC and core network.
An alternative model for use of virtualized RAN technology is the neutral host model. In this case the sites are owned and operated by a company that is not itself a mobile network operator, eliminating the conflict-of-interest concerns that arise when the anchor tenant is a competitor of the other tenant operators. Existing tower companies are natural candidates for neutral host management, and this approach fits well with the current trend toward managed network services charged on a per-traffic basis.
Conclusion
Shared active infrastructure is an essential technology in mobile network operators' ongoing efforts to reduce the cost of providing service, particularly in sparsely populated areas where revenues are too low to make service economically attractive. Vanu, Inc.'s MultiRAN product combines software radio with virtualization technology to give mobile network operators the cost savings of shared active infrastructure while allowing them to retain independent management control and technology evolution.
Steve Muir is chief technology officer at Vanu, Inc., www.vanu.com, 617-864-1711.
Gigabit Serial Links Boost Wireless System Performance
The most difficult problem for designers of high-performance, software radio systems is simply moving data within the system because of data throughput limitations. Driving this dilemma are processors with higher clock rates and wider buses, data converter products with higher sampling rates, more complex digital communication standards with increased bandwidths, disk storage devices with faster I/O rates, FPGAs and DSPs offering incredible computational rates, and system connections and network links operating at higher speeds.
Traditional system architectures relying on buses and parallel connections between system boards and mezzanines fall far short of delivering the required peak rates, and suffer even worse if they must be shared and arbitrated. New strategies for solving these problems exploit gigabit serial links and switched fabric standards to create significantly more powerful architectures ideally suited for embedded software radio systems.
Software radio systems continually benefit from technology developed for consumer electronics, personal computers, IT infrastructure, and telecom systems. These very competitive markets place high value on price, features, and performance delivered to the customer and care little about details of hardware "under the hood."
To fuel these markets, silicon vendors have developed higher density processors, memories, peripheral interfaces, and multi-media interfaces. The most effective way to connect these functions has shifted strongly towards gigabit serial links, so these new devices come fully equipped with native gigabit serial interfaces.
Abundant evidence of this transition can be found in mass market PCs, which now use PCI Express for motherboard traffic and expansion cards, and serial ATA disk drives for mass storage. These serial interfaces require far fewer signal traces than the traditional parallel PCI buses they replace. This reduces the density of printed circuit boards and results in smaller diameter cables with more compact connectors. At the same time, data rates through these new serial links are faster than their parallel predecessors.
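As a rough sanity check on that last claim, the widely published line rates can be compared directly. The sketch below uses the classic 32-bit/33 MHz PCI figure and the first-generation PCIe per-lane rate with its 8b/10b encoding overhead.

```python
# Back-of-envelope throughput comparison using published figures:
# classic 32-bit/33 MHz parallel PCI vs. one PCIe 1.0 serial lane.

# 32-bit bus at 33 MHz: 4 bytes per transfer, shared by all devices.
pci_bus_mb_s = 32 / 8 * 33                      # = 132 MB/s, shared

# PCIe 1.0: 2.5 GT/s per lane, 8b/10b encoding (80% efficient),
# full duplex, so this rate is available in each direction per lane.
pcie_lane_mb_s = 2.5e9 * (8 / 10) / 8 / 1e6     # = 250 MB/s per direction

print(f"PCI (shared bus):        {pci_bus_mb_s:.0f} MB/s")
print(f"PCIe x1 (per direction): {pcie_lane_mb_s:.0f} MB/s")
```

Even a single lane outruns the entire shared PCI bus, and a x4 link quadruples that again while using far fewer signal traces.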
In order to take advantage of the wealth of high-volume, low-cost devices for mass-market electronics, and to reap the same benefits of easier connectivity, even the most powerful high-end software radio RISC and DSP processors from Freescale and Texas Instruments are now sporting gigabit serial interfaces.
Software for software radio applications is evolving to take advantage of these new links through high-level drivers that help manage the data transfers. By eliminating system bottlenecks, gigabit serial links can boost performance and open up new applications previously unattainable with earlier technology.
SDR Comes of Age: Technology Meets Economics
Wireless system designers face a daunting mix of technical and business challenges. Every new system they develop must perform much faster than the last system they developed while being flexible enough to accommodate new as well as legacy standards and wireless services. Simultaneously, the new system must have a lower BOM than the last system they developed and it must reduce their customer's total cost of ownership. How is it possible to achieve all this?
Software Defined Radio (SDR) is a technology that allows a single base station to receive and send a wide range of protocols and even support multiple wireless services from multiple wireless carriers. The concept of SDR has been around for well over a decade, but the industry initially considered its hardware too expensive to implement. However, recent advances in reconfigurable, reprogrammable processing in FPGAs have made SDR commercially viable. Today SDR is commonplace in infrastructure from leaders such as ALU, Huawei and NSN. It is even beginning to make inroads into smartphones.
At the same time that operators are demanding much more flexible networks, the semiconductor industry is going through a major disruption in which the cost of manufacturing a custom chip is skyrocketing. As a result, fewer markets can justify the costs of designing an ASIC or ASSP from scratch and the risk of redesigning one if something goes wrong. However, today FPGAs, implemented in the latest process technologies, boast advanced functionality and performance, allowing designers to quickly develop and bring to market their SDR designs and, in turn, lower customer Bill of Materials and total cost of ownership. SDR fueled by FPGA-based reconfigurable and reprogrammable processors is well on its way to becoming a de facto standard in wireless infrastructure.
Multi-Core Processor Architectures Help Solve Design Challenges
While the ultimate SDR goal is to digitize the radio signal at the antenna level and handle all processing digitally in software, we know this is still years away, at least for commercial products.
However, an intermediate SDR architecture is now widely adopted in new designs of cellular basestations (BTS), driven by the latest requirements of cellular operators for indoor and outdoor BTS of various sizes. This approach uses a common hardware platform to support all cellular standards and is remotely software-upgradable to address future standards evolution. OEMs also want a scalable solution that can address all types of BTS, from the large outdoor macrocell to the tiny indoor femtocell, in order to minimize software redesign.
This architecture requires a high-performance, programmable baseband processor, for which the challenges are cost/area, power consumption, performance, flexibility, scalability, and ease of programming. The power consumption versus performance tradeoff has become the key design benchmark for these new-generation BTS basebands, but to date no solution has addressed all of these criteria together; instead, tradeoffs have severely compromised one or several of them.
Multi-core processor architectures have now proven to offer the best tradeoff among these criteria because they enable coarse-grain parallelism at the task level. Such parallelism solves the scalability issue and allows the same software architecture, with a varying number of cores, to serve all types of BTS.
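Coarse-grain task parallelism of this kind can be sketched briefly. In the hypothetical example below, per-channel baseband work (the function body is a stand-in, not real signal processing) is distributed over a worker pool whose size models the core count, so the same code serves a few-core femtocell and a many-core macrocell.

```python
# Sketch of coarse-grain task-level parallelism (illustrative only):
# each channel is an independent task, and the same software scales
# simply by changing the number of workers (modeling the core count).
from concurrent.futures import ThreadPoolExecutor

def process_channel(samples):
    # Stand-in for per-channel baseband work (filtering, demod, ...).
    return sum(samples)

def run_baseband(channels, n_cores):
    # Identical code for femtocell (few cores) and macrocell (many);
    # map() preserves per-channel ordering regardless of worker count.
    with ThreadPoolExecutor(max_workers=n_cores) as pool:
        return list(pool.map(process_channel, channels))

channels = [[i, i + 1, i + 2] for i in range(8)]
# Results are independent of how many cores we parallelize across.
assert run_baseband(channels, n_cores=2) == run_baseband(channels, n_cores=8)
```

The design point is that parallelism lives at the task boundary (one channel per task), so scaling the platform changes only `n_cores`, not the software architecture.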
Since full software implementations of HSPA+ and LTE modems are not feasible today, flexibility is achieved with very high performance DSP cores coupled with dedicated coprocessors, which are user programmable engines.
At the DSP core level, process shrinks alone cannot deliver the required power reduction and performance increase. Therefore, new design techniques, such as clock-less and data-flow methodologies, have been developed to reduce power and area by a factor of 3 compared to traditional DSP architectures.
We believe baseband processors with cores in the mid range (10 to 30) are the only way to solve the design challenges discussed above in the short to medium term. We predict that OEMs will increasingly adopt such processors for their 3G and 4G next generation basestations.
Accurately Assess the Radio Spectrum Environment
Wireless design engineers can develop low-cost, highly robust multi-channel RF front-ends for cognitive radios while improving overall performance, by incorporating Dynamic Spectrum Access (DSA) capabilities and Shared Spectrum Company’s (SSC) software solutions. DSA technology enables RF devices to continually, autonomously and accurately assess the radio spectrum environment, which allows them to automatically and swiftly adjust frequencies or other operating parameters in response to changing capacity or interference conditions. DSA-enabled devices do this without interfering with legacy radio systems, and in accordance with user-defined policies.
If a DSA-enabled radio senses interference, it will quickly move off the operating frequency and rapidly “rendezvous” with the DSA network on a new, clear channel. In this respect, a DSA-enabled wideband wireless device can simply find the best set of frequencies and operating parameters to avoid interference and close the link. Traditional, higher-cost transceivers, on the other hand, may attempt to transmit over the interfering source, cause additional harmful interference, or lose the link altogether. Because the RF front-end in a DSA-enabled device requires less power, battery requirements are more forgiving, reducing the overall size and cost of the final product. RF designers can also develop RF front-end solutions without worrying about the performance impact of unanticipated out-of-band or in-band interference, or about propagation difficulties that increase deployment and operational costs. Thus, DSA can improve performance while simplifying the design and deployment of mobile networks.
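The sense-and-rendezvous behavior can be sketched as a simple selection loop. This is a hypothetical illustration only, not SSC's actual DSA engine or API: the channel list, sensing function, and rendezvous policy are all invented for the example.

```python
# Hypothetical sketch of DSA "sense and move" behavior (not SSC's
# actual software): on detecting interference, the radio leaves its
# operating frequency and rendezvouses on the first clear channel.

CHANNELS_MHZ = [462.0, 462.5, 463.0, 463.5]  # illustrative channel plan

def sense_interference(channel, occupied):
    # Stand-in for the environmental sensing and detection subsystem.
    return channel in occupied

def rendezvous(current, occupied):
    """Stay put if the current channel is clear; otherwise move to
    the first clear channel in the plan."""
    if not sense_interference(current, occupied):
        return current
    for ch in CHANNELS_MHZ:
        if not sense_interference(ch, occupied):
            return ch
    raise RuntimeError("no clear channel available")

assert rendezvous(462.0, occupied={462.0, 462.5}) == 463.0  # moved off
assert rendezvous(463.5, occupied={462.0}) == 463.5         # stayed put
```

A real DSA engine would additionally apply user-defined policies (e.g., bands reserved for legacy systems) when choosing the new channel, which is the role of the policy module described below.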
SSC’s DSA software architecture consists of four principal components: the DSA engine; the environmental sensing and detection subsystem; the policy module; and the radio interface. This modular approach enables the rapid development of APIs and software wrappers to incorporate DSA features into a wide variety of RF devices for any application. SSC is licensing its DSA software and providing expert support to OEMs, third-party developers and systems integrators.