Low Latency Workloads Technologies
Applies To: Windows Server 2012
This section provides overviews of the following technologies, which were designed for or improved in Windows Server® 2012 to address low latency computing scenarios.
Latency is delay: the length of time that elapses between two specific events, such as the time between the transmission and the reception of a network message exchanged by two computers over a network path. Latency has a variety of possible causes, including electrical propagation delays, processing delays, and queuing effects.
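To make the definition concrete, the following sketch measures round-trip network latency by timing a small TCP echo exchange. It is illustrative only and is not part of the Windows Server documentation; the host name, port, and sample count are assumptions you would replace with values for an echo service you control.

import socket
import time

# Hypothetical echo endpoint (assumption); replace with a host and port you control.
HOST = "echo.example.local"
PORT = 7  # conventional TCP echo service port

def measure_round_trip(payload: bytes = b"ping", samples: int = 100) -> float:
    """Send a small message to a TCP echo service and time the round trip.
    Returns the average round-trip latency in microseconds."""
    total = 0.0
    with socket.create_connection((HOST, PORT)) as sock:
        # Disable Nagle's algorithm so small messages are sent immediately.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(samples):
            start = time.perf_counter()
            sock.sendall(payload)
            sock.recv(len(payload))  # wait for the echoed reply (small payload, single read)
            total += time.perf_counter() - start
    return (total / samples) * 1_000_000

if __name__ == "__main__":
    print(f"Average round-trip latency: {measure_round_trip():.1f} microseconds")

The measured value includes every source of delay on the path (propagation, processing on both hosts, and queuing), which is why averaging over many samples gives a more representative figure than a single measurement.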
A variety of processing workloads require that the time spent on inter-machine communication be reduced to a minimum. These include distributed computing algorithms whose convergence time is bounded by network latency, such as distributed consensus and agreement protocols, Message Passing Interface (MPI) workloads, and distributed caching. Stock trading and other financial market workloads also require that the latency incurred by network communication be reduced as much as possible.
Low latency computing environments typically contain applications that require very fast inter-process communication (IPC) and inter-computer communication, a high degree of predictability in latency and transaction response times, and the ability to handle very high message rates. The following sections describe technologies that you can use to improve performance in low latency computing scenarios.