
As we enter the 100 Gbps era, producers of network management and security applications will need to address numerous analysis challenges in order to stay ahead of the data growth curve. Staying faster than the future will require next-generation solutions that not only scale with connection speeds but also deliver data reliably, make it understandable and accelerate application performance.

Data delivered in the right way creates insights that enable actions. Being able to understand all the data within networks ensures that apps run quickly, videos stream smoothly and end-user data is secure. Yet, as the volume and complexity of data increase, processing it all becomes increasingly difficult.

A Daunting Onslaught of Data

Ericsson projects that mobile phone users will quadruple their monthly data consumption by 2016, with video being the largest contributor to this growth. In fact, video is expected to account for more than 50 percent of all mobile traffic by 2019.

This rapid growth in data traffic will necessitate an increase in connectivity speeds. High-bandwidth applications such as video on demand will continue to drive adoption of 40 Gbps and 100 Gbps connections.

Best Practices for the 100G Era

Network equipment manufacturers must find a way to reliably increase performance at connections up to 100 Gbps while reducing risk and time-to-market. They must also effectively manage and secure networks while still handling a varied portfolio of 1, 10, 40 or even 100 Gbps products. Analysis will have to be performed at the same level across speeds ranging from 1 Mbps to 100 Gbps. The following best practices help ensure that the network of today can move successfully into the 100G era:

Guarantee Data Delivery

High-speed solutions must be able to capture network traffic at full line rate, with almost no CPU load on the host server, for all frame sizes. Full line-rate packet capture with zero packet loss, frame buffering and optimal configuration of host buffer sizes removes the bottlenecks that can cause packet loss. It also reliably delivers the analysis data that network management and security solutions demand.

Frame buffering is a feature that can absorb data bursts, ensuring that no data is lost. It can also remove application limitations, making it a critical feature for high-speed network analysis.
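
How these buffers are sized and configured is vendor-specific, but the idea can be sketched with the generic libpcap capture API. In the sketch below, the interface name "eth0" and the 256 MB host buffer are placeholder assumptions, not recommendations:

    /* Sketch: configuring a capture handle with a large host buffer so
     * bursts are absorbed instead of dropped. Uses the generic libpcap
     * API; "eth0" and the 256 MB buffer size are placeholders. */
    #include <pcap/pcap.h>
    #include <stdio.h>

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *h = pcap_create("eth0", errbuf);
        if (!h) { fprintf(stderr, "%s\n", errbuf); return 1; }

        pcap_set_snaplen(h, 65535);                 /* capture full frames */
        pcap_set_promisc(h, 1);
        pcap_set_buffer_size(h, 256 * 1024 * 1024); /* large host buffer rides out bursts */
        pcap_set_timeout(h, 100);                   /* batch packets per wakeup (ms) */

        if (pcap_activate(h) < 0) { fprintf(stderr, "%s\n", pcap_geterr(h)); return 1; }
        /* ... capture loop (pcap_loop/pcap_next_ex) would follow here ... */
        pcap_close(h);
        return 0;
    }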

Understanding Frame Processing

Next-generation network analysis requires understanding and insight. Frame classification can provide details on the type of network protocols being used. For efficient network traffic monitoring, it is important to be able to recognize as many protocols as possible, as well as extract information from layer 2-4 network traffic. Header information for the various protocols transported over Ethernet must be made available for analysis. This includes encapsulation and tunneling protocols.
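
In software, a rough sketch of that layer 2-4 classification might look like the following; a real classifier, typically running in adapter hardware, recognizes far more protocols and hands the result to the application as frame metadata:

    /* Sketch: layer 2-4 classification of a raw Ethernet frame in software.
     * A real classifier recognizes many more protocols (VLAN stacks, MPLS,
     * IPv6, tunnels, ...) and typically runs in adapter hardware. */
    #include <stdint.h>
    #include <stddef.h>

    const char *classify(const uint8_t *f, size_t len)
    {
        if (len < 14) return "runt";
        size_t off = 12;                              /* EtherType field */
        uint16_t etype = (uint16_t)(f[off] << 8 | f[off + 1]);
        if (etype == 0x8100 && len >= 18) {           /* single 802.1Q VLAN tag */
            off += 4;
            etype = (uint16_t)(f[off] << 8 | f[off + 1]);
        }
        off += 2;                                     /* start of the L3 header */
        if (etype != 0x0800 || len < off + 20) return "non-IPv4";
        if ((f[off] & 0x0F) * 4 < 20) return "bad IPv4 header";
        switch (f[off + 9]) {                         /* IPv4 protocol field */
        case 6:   return "IPv4/TCP";
        case 17:  return "IPv4/UDP";
        case 132: return "IPv4/SCTP";
        default:  return "IPv4/other";
        }
    }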

Time Precision

Assuring the quality of time-sensitive services and transactions requires high-precision timestamping. In 100 Gbps networks, nanosecond precision is essential for reliable analysis: at 10 Gbps, a minimum-size Ethernet frame can be received or transmitted every 67 nanoseconds, and at 100 Gbps that interval shrinks to 6.7 nanoseconds.
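
The 67-nanosecond figure comes from the smallest unit the link can carry: a 64-byte minimum frame plus an 8-byte preamble and a 12-byte inter-frame gap, or 672 bits on the wire. A quick back-of-envelope check:

    /* Back-of-envelope: minimum frame spacing on the wire.
     * 64-byte minimum frame + 8-byte preamble + 12-byte inter-frame gap. */
    #include <stdio.h>

    int main(void)
    {
        const double bits_on_wire = (64 + 8 + 12) * 8;                    /* 672 bits */
        printf("10G:  %.1f ns per frame\n", bits_on_wire / 10e9 * 1e9);   /* ~67.2 */
        printf("100G: %.2f ns per frame\n", bits_on_wire / 100e9 * 1e9);  /* ~6.72 */
        return 0;
    }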

Flow Identification

Network applications must be able to examine flows of frames that are transmitted between specific devices (identified by their IP addresses) or even between applications on specific devices (identified, for example, by the protocol and the UDP/TCP/SCTP port numbers used by the application).

In high-speed networks up to 100 Gbps, it is important to identify and analyze flows of data to gain an overview of what is happening across the network and then control the amount of bandwidth that services are using. Flow identification also enables intelligent flow distribution, where frames belonging to the same flow are distributed to one of up to 32 CPU cores for massively parallel processing.
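
A minimal sketch of the idea, assuming a plain IPv4 5-tuple key and a simple software hash; a real adapter computes a symmetric hash in hardware and delivers the core assignment as frame metadata:

    /* Sketch: map each flow to one of up to 32 worker cores so that all
     * frames of a flow are processed by the same core. */
    #include <stdint.h>
    #include <string.h>

    struct flow_key {                 /* 5-tuple identifying a flow */
        uint32_t src_ip, dst_ip;      /* IPv4 addresses */
        uint16_t src_port, dst_port;  /* TCP/UDP/SCTP ports */
        uint8_t  proto;               /* IP protocol number */
    };

    static uint32_t fnv1a(const uint8_t *p, size_t n)
    {
        uint32_t h = 2166136261u;
        while (n--) { h ^= *p++; h *= 16777619u; }
        return h;
    }

    /* XORing source and destination makes both directions of a
     * conversation hash to the same core (num_cores is 1-32). */
    unsigned flow_to_core(const struct flow_key *k, unsigned num_cores)
    {
        uint8_t buf[7];
        uint32_t ip   = k->src_ip ^ k->dst_ip;
        uint16_t port = k->src_port ^ k->dst_port;
        memcpy(buf, &ip, sizeof ip);
        memcpy(buf + 4, &port, sizeof port);
        buf[6] = k->proto;
        return fnv1a(buf, sizeof buf) % num_cores;
    }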

Packet Capture Acceleration

High-speed solutions must provide guaranteed delivery of real-time data, together with information that allows quick and easy analysis. What will distinguish these solutions is the ability to accelerate the performance of analysis applications. One of the main challenges in analyzing real-time data in high-speed networks is the sheer volume of data; reducing that volume ensures that applications are not overwhelmed and only process the frames that actually need to be analyzed, which often accelerates analysis performance. This can be accomplished through features such as frame and flow filtering, deduplication and slicing.
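
As a toy illustration of two of these reductions, the sketch below slices each frame down to its headers and drops frames whose hash has just been seen. The 128-byte slice length and the small hash table are arbitrary assumptions, and production deduplication compares far more carefully than this:

    /* Sketch: reduce the data handed to the analysis application by
     * slicing frames to their headers and dropping immediate duplicates.
     * Returns the number of bytes to pass on, or 0 to drop the frame. */
    #include <stdint.h>
    #include <stddef.h>

    #define SLICE_LEN   128     /* keep enough for L2-L4 headers (assumption) */
    #define DEDUP_SLOTS 4096    /* tiny recent-frame table (assumption) */

    static uint32_t recent[DEDUP_SLOTS];

    static uint32_t frame_hash(const uint8_t *f, size_t len)
    {
        uint32_t h = 2166136261u;
        for (size_t i = 0; i < len; i++) { h ^= f[i]; h *= 16777619u; }
        return h;
    }

    size_t reduce(const uint8_t *frame, size_t len)
    {
        uint32_t h = frame_hash(frame, len);
        uint32_t *slot = &recent[h % DEDUP_SLOTS];
        if (*slot == h) return 0;                 /* seen a moment ago: drop duplicate */
        *slot = h;
        return len < SLICE_LEN ? len : SLICE_LEN; /* slice payload away */
    }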

Processing Acceleration

100 Gbps solutions must provide acceleration features that enable appliance vendors to maximize the performance of their analysis applications. These features must off-load data processing that is normally performed by the analysis application. Some examples of off-loading features are:

  • Intelligent multi-CPU distribution.
  • Cache pre-fetch optimization.
  • Coloring.
  • Filtering.
  • Checksum verification.

These free up CPU cycles, allowing more analysis to be performed faster.
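
To make the off-loading concrete, the function below verifies an IPv4 header checksum in software, which is exactly the kind of per-frame work an accelerator can take over; with the off-load enabled, the application would instead read a pass/fail flag (a color) from the frame descriptor:

    /* Sketch: IPv4 header checksum verification done in software. The
     * caller must supply a complete IPv4 header; an accelerator would do
     * this per frame and deliver the result as a flag in the descriptor. */
    #include <stdint.h>
    #include <stddef.h>

    int ipv4_checksum_ok(const uint8_t *ip_hdr)
    {
        size_t len = (ip_hdr[0] & 0x0F) * 4;        /* IHL: header length in bytes */
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i += 2)
            sum += (uint32_t)(ip_hdr[i] << 8 | ip_hdr[i + 1]);
        while (sum >> 16)
            sum = (sum & 0xFFFF) + (sum >> 16);     /* fold the carries */
        return (uint16_t)~sum == 0;                 /* a valid header sums to 0xFFFF */
    }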

Tunneling Support

Tunnels have long been used to transport information reliably and securely across networks that are often outside the control of the sender. Tunneling poses challenges for analysis because the data to be analyzed is encapsulated in the tunnel payload and must first be extracted before analysis can be performed. This is an extra and costly data processing step. By off-loading the recognition of tunnels and the extraction of information from them, high-speed solutions can deliver a significant acceleration of performance for analysis applications.
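
A minimal sketch of that extraction step for the simplest case, IPv4-in-IPv4 (IP protocol number 4); GTP decapsulation follows the same pattern but must additionally skip the outer UDP and GTP headers:

    /* Sketch: find the inner packet of an IPv4-in-IPv4 tunnel.
     * Returns a pointer to the inner IPv4 packet, or NULL if the outer
     * packet is not an IP-in-IP tunnel. */
    #include <stdint.h>
    #include <stddef.h>

    const uint8_t *inner_ipv4(const uint8_t *outer, size_t len, size_t *inner_len)
    {
        if (len < 20) return NULL;
        size_t ohl = (outer[0] & 0x0F) * 4;         /* outer header length */
        if (ohl < 20 || outer[9] != 4 ||            /* protocol 4 = IPv4 encapsulation */
            len < ohl + 20)
            return NULL;
        *inner_len = len - ohl;
        return outer + ohl;                         /* inner packet starts here */
    }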

Faster Than The Future Today

As we enter the 100 Gbps era, network equipment manufacturers will need to explore solutions that can help them stay one step ahead of the data growth curve brought on by the explosive growth in mobile data traffic, cloud computing, mobility and big data analysis.

Key considerations for accelerating the network to 100G:

  • Reliable hardware platforms for the development of 100 Gbps analysis products. A 100 Gbps accelerator, for example, can intelligently manage the data that is presented for analysis, providing extensive features for managing the type and amount of data. Slicing and filtering of frames and flows, even within GTP and IP-in-IP tunnels, significantly reduces the amount of data. Look for deduplication features that can be extended in analysis software to ensure that only the right data is being examined.
  • Software suites that provide data-sharing capabilities should also be considered to enable multiple applications running on the same server to analyze the same data. When combined with intelligent multi-CPU distribution, this allows the right data to be presented to the right analysis application, thus sharing the load. Intelligent features for flow identification, filtering and distribution to up to 32 CPU cores accelerate application performance with extremely low CPU load.
  • PCI-SIG compliant products that will fit into any commercial off-the-shelf server will allow organizations to focus their development efforts on the application, not the hardware.
  • A common Application Programming Interface (API) that allows applications to be developed once and used with a broad range of accelerators. This allows combinations of different accelerators with different port speeds to be installed in the same server.

Conclusion

New technologies are setting the stage to enable organizations to manage the ever-increasing data loads without compromise. By scaling with increasing connectivity speeds, as well as accelerating network management and security applications, enterprises can stay faster than the future.
