In many cases it is the equivalent of 'having your cake and eating it too.'
This is governed by the mechanism of network delivery: software or hardware.
In software, the cost of networking is relatively low and favors extremely rapid change.
It is important to remember, though, that software networking is constrained by the software architecture, as well as by the queues and threads that must be processed in concert with the operating system, hypervisor, application and so on.
All of these contend for CPU time, and executing a software instruction takes a relative amount of time determined by the CPU architecture.
In hardware, the cost of networking is high and favors rapid packet exchange over the ability to modify the networking function.
I'm being very generous in this statement: the sole purpose of hardware is to move the packet from one spot to another as rapidly as possible.
Because the majority of the work is done in silicon, the only means to modify the network is to fall back to software (which undermines the purpose and value of the hardware) or to replace the silicon (which can take months to years and costs a lot).
Figure 1. The price vs. performance curve
Host bridges and OVS, for example, are eminently capable of meeting the bandwidth and latency requirements of an application within the confines of a hypervisor. They can be remarkably efficient, at least with respect to the application's requirements. The moment traffic exits the hypervisor or OS, however, things become considerably more complex, particularly at high virtualization ratios.
Network chipset vendors, chipset developers and network infrastructure vendors have maintained the continuing escalation in performance by designing capability into silicon.
All the while, arguably, continuing to put downward pressure on the cost per bit transferred.
Virtualization vendors, on the other hand, have rapidly introduced network functions to support their use cases.
At issue is the performance penalty for networking in x86 and where that penalty affects network execution.
In general, there is a performance penalty in the neighborhood of 20-25x for executing Layer 3 routing in generic x86 instructions versus silicon.
For L2 and L3 (plus encapsulation) networking in x86 instructions versus silicon, the penalty is higher, in the neighborhood of 60-100x.
This adds latency we would prefer not to have, especially with workload bandwidth shifting heavily in the East-West direction.
Worse, it consumes a portion of the host's CPU and memory that could otherwise be used to support more workloads. The consumption is so unwieldy, bursty and application-dependent that it becomes difficult to calculate the impact except in extremely narrow timeslices.
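To make the scale of those multipliers concrete, here is a purely illustrative sketch: the ~1 microsecond per-packet forwarding cost in silicon is an assumed placeholder rather than a measured figure, and the factors are simply the midpoints of the ranges above.

```c
/* Purely illustrative: scales an assumed per-packet cost in silicon (~1 us,
 * hypothetical) by the midpoints of the penalty ranges cited in the text. */
#include <stdio.h>

int main(void)
{
    const double silicon_us = 1.0;           /* assumed per-packet cost in silicon (hypothetical) */
    const double l3_penalty = 22.5;          /* midpoint of the 20-25x range */
    const double l2_l3_encap_penalty = 80.0; /* midpoint of the 60-100x range */

    printf("L3 routing in x86:            ~%.1f us/packet\n", silicon_us * l3_penalty);
    printf("L2/L3 + encapsulation in x86: ~%.1f us/packet\n", silicon_us * l2_l3_encap_penalty);
    return 0;
}
```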
Enter virtio/SR-IOV/DPDK
The theory is: take the network instructions that can be optimized and send them to the 'thing' that optimizes them.
Examples include libvirt/virtio, which evolve the paravirtualization of the network interface through driver optimizations that can occur at the rate of change of software.
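As a minimal sketch of what that looks like in practice, the fragment below uses the libvirt C API to hot-plug a virtio network interface into a running guest. The connection URI, the domain name "guest1" and the "default" network are assumptions for illustration only.

```c
/* A minimal sketch, assuming a local QEMU/KVM host and an existing running
 * domain named "guest1" (hypothetical). The <model type='virtio'/> element asks
 * for the paravirtualized NIC rather than an emulated device. */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    const char *nic_xml =
        "<interface type='network'>"
        "  <source network='default'/>"
        "  <model type='virtio'/>"
        "</interface>";

    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (conn == NULL) {
        fprintf(stderr, "failed to connect to the hypervisor\n");
        return 1;
    }

    virDomainPtr dom = virDomainLookupByName(conn, "guest1");
    if (dom == NULL) {
        fprintf(stderr, "domain not found\n");
        virConnectClose(conn);
        return 1;
    }

    /* Hot-plug the virtio NIC into the running guest. */
    if (virDomainAttachDevice(dom, nic_xml) < 0)
        fprintf(stderr, "attach failed\n");

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}
```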
SR-IOV increases performance by taking a more direct route from the OS or hypervisor, via an abstraction layer, to the bus that supports the network interface. This provides a means to offload instructions directly to the network interface for more optimized execution.
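A rough sketch of the host-side plumbing: on Linux, virtual functions (VFs) are typically created by writing a count to the physical function's sriov_numvfs attribute in sysfs, after which each VF can be handed to a VM as a near-direct PCIe device. The interface name "eth0" and the VF count here are assumptions.

```c
/* A minimal sketch, assuming an SR-IOV capable NIC exposed as "eth0" (hypothetical)
 * and a kernel exposing the sriov_numvfs sysfs attribute. Writing a count asks the
 * PF driver to instantiate that many virtual functions on the PCIe bus. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/net/eth0/device/sriov_numvfs";

    FILE *f = fopen(path, "w");
    if (f == NULL) {
        perror("open sriov_numvfs");
        return 1;
    }

    /* Request 4 virtual functions; each can then be passed through to a guest. */
    if (fprintf(f, "4\n") < 0)
        perror("write sriov_numvfs");

    fclose(f);
    return 0;
}
```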
DPDK creates a direct-to-hardware abstraction layer that may be called from the OS or hypervisor, similarly offloading instructions for optimized execution in hardware.
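The sketch below, loosely modeled on DPDK's basic forwarding ("skeleton") example, shows the shape of a poll-mode loop: the EAL takes over the NIC, and packets are received and retransmitted in user space without ever entering the kernel network stack. The port number, queue sizes and pool sizes are illustrative assumptions.

```c
/* A minimal sketch of a DPDK poll-mode loop: receive bursts on port 0 and echo
 * them back out, entirely in user space via the poll-mode driver. */
#include <stdio.h>
#include <stdint.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define NUM_MBUFS  8191
#define MBUF_CACHE 250
#define BURST_SIZE 32

int main(int argc, char **argv)
{
    /* Initialize the Environment Abstraction Layer: cores, hugepages, PCI devices. */
    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }

    struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS,
            MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL) {
        fprintf(stderr, "mbuf pool creation failed\n");
        return 1;
    }

    /* One RX and one TX queue on port 0 with a default (zeroed) configuration. */
    uint16_t port = 0;
    struct rte_eth_conf port_conf = {0};
    if (rte_eth_dev_configure(port, 1, 1, &port_conf) != 0 ||
        rte_eth_rx_queue_setup(port, 0, 1024, rte_eth_dev_socket_id(port), NULL, pool) != 0 ||
        rte_eth_tx_queue_setup(port, 0, 1024, rte_eth_dev_socket_id(port), NULL) != 0 ||
        rte_eth_dev_start(port) != 0) {
        fprintf(stderr, "port setup failed\n");
        return 1;
    }

    /* Busy-poll the NIC: no interrupts, no context switches, no kernel stack. */
    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        if (nb_rx == 0)
            continue;

        uint16_t nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]); /* drop whatever the TX queue could not take */
    }
}
```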
What makes these particularly useful, from a networking perspective, is that elements or functions normally executed in the OS, hypervisor, switch, router, firewall, encryptor, encoder, decoder, etc., may now be moved into a physical interface for silicon-based execution.
The cost of moving functions to the physical interface can be relatively small compared to putting them into a switch or router. The volumes and rates of change of CPUs, chipsets and network interface cards have historically been higher, making introduction faster.
Further, vendors of these cards and chipsets have practical reasons to support hardware offloads that favor their products over those of other vendors (or at the very least to remain competitive).
This means that network functions are moving closer to the hypervisor.
As the traditional network device vendors of switches, routers, load balancers, VPNs, etc., move to create Virtual Network Functions (VNFs) of their traditional business (in the form of virtual machines and containers), the abstractions to faster hardware execution will become ever more important.
This all, to avoid the Networking Penalty Box.
Figure 2. The Network Penalty Box