14 October 2016, 21:57   #37
mark_k
Registered User
Join Date: Aug 2004
Posts: 2,480
Extracts from the MSDN IAudioClient::Initialize page, with emphasis added:
Quote:
To achieve the minimum stream latency between the client application and audio endpoint device, the client thread should run at the same period as the audio engine thread. The period of the engine thread is fixed and cannot be controlled by the client. Making the client's period smaller than the engine's period unnecessarily increases the client thread's load on the processor without improving latency or decreasing the buffer size. To determine the period of the engine thread, the client can call the IAudioClient::GetDevicePeriod method. [...]

A client has the option of requesting a buffer size that is larger than what is strictly necessary to make timing glitches rare or nonexistent. *Increasing the buffer size does not necessarily increase the stream latency.* For a rendering stream, the latency through the buffer is determined solely by the separation between the client's write pointer and the engine's read pointer.
[...]
For an exclusive-mode stream that uses event-driven buffering, the caller must specify nonzero values for hnsPeriodicity and hnsBufferDuration, and the values of these two parameters must be equal. The Initialize method allocates two buffers for the stream. Each buffer is equal in duration to the value of the hnsBufferDuration parameter. Following the Initialize call for a rendering stream, the caller should fill the first of the two buffers before starting the stream. [...] While the stream is running, the system alternately sends one buffer or the other to the client—this form of double buffering is referred to as "ping-ponging". Each time the client receives a buffer from the system (which the system indicates by signaling the event), the client must process the entire buffer. For example, if the client requests a packet size from the IAudioRenderClient::GetBuffer method that does not match the buffer size, the method fails. Calls to the IAudioClient::GetCurrentPadding method are unnecessary because the packet size must always equal the buffer size. In contrast to the buffering modes discussed previously, *the latency for an event-driven, exclusive-mode stream depends directly on the buffer size*.
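To make the shared-mode case concrete, here's a rough sketch of the sequence the docs describe: query the engine period with GetDevicePeriod, open a shared-mode stream with a generous buffer, then use GetCurrentPadding in the render loop so the write pointer stays only a couple of engine periods ahead of the read pointer. Error handling and COM setup are omitted and I haven't tested it, so treat it as illustrative only.
Code:
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

void SharedModeSketch()
{
    IMMDeviceEnumerator *enumr = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumr);

    IMMDevice *device = nullptr;
    enumr->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    IAudioClient *client = nullptr;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void**)&client);

    // hnsDefault is the shared-mode engine period; hnsMinimum is the
    // smallest period the device supports for exclusive mode.
    REFERENCE_TIME hnsDefault = 0, hnsMinimum = 0;
    client->GetDevicePeriod(&hnsDefault, &hnsMinimum);

    WAVEFORMATEX *mix = nullptr;
    client->GetMixFormat(&mix);

    // Deliberately large buffer (500 ms, in 100-ns units); per the quoted
    // text this alone adds no latency. Periodicity must be 0 in shared mode.
    client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0, 5000000, 0, mix, nullptr);

    IAudioRenderClient *render = nullptr;
    client->GetService(__uuidof(IAudioRenderClient), (void**)&render);

    client->Start();

    // Queue at most ~2 engine periods of audio; the queued amount
    // ("padding") is the write-to-read pointer separation, i.e. the
    // actual latency, regardless of the total buffer size.
    const UINT32 target = (UINT32)(2 * hnsDefault * mix->nSamplesPerSec / 10000000);

    for (;;) {
        UINT32 padding = 0;
        client->GetCurrentPadding(&padding);
        if (padding < target) {
            BYTE *data = nullptr;
            UINT32 n = target - padding;
            render->GetBuffer(n, &data);
            // ...generate n frames of audio into data...
            render->ReleaseBuffer(n, 0);
        }
        Sleep((DWORD)(hnsDefault / 10000)); // wake about once per engine period
    }
}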
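For comparison, here's the event-driven exclusive-mode arrangement from the last quoted paragraph, where latency does scale with the buffer size. Again untested; real code would also need to negotiate a format the device accepts in exclusive mode (IsFormatSupported) and handle AUDCLNT_E_BUFFER_SIZE_NOT_ALIGNED from Initialize.
Code:
// Assumes 'client' was activated as above and 'fmt' is an
// exclusive-mode-capable format; error handling omitted.
void ExclusiveEventDrivenSketch(IAudioClient *client, WAVEFORMATEX *fmt)
{
    REFERENCE_TIME hnsDefault = 0, hnsMinimum = 0;
    client->GetDevicePeriod(&hnsDefault, &hnsMinimum);

    // Periodicity and buffer duration must be nonzero and equal;
    // the engine then allocates two buffers of this size ("ping-ponging").
    client->Initialize(AUDCLNT_SHAREMODE_EXCLUSIVE,
                       AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
                       hnsMinimum, hnsMinimum, fmt, nullptr);

    HANDLE ev = CreateEventW(nullptr, FALSE, FALSE, nullptr);
    client->SetEventHandle(ev);

    IAudioRenderClient *render = nullptr;
    client->GetService(__uuidof(IAudioRenderClient), (void**)&render);

    UINT32 frames = 0;
    client->GetBufferSize(&frames);

    BYTE *data = nullptr;

    // Fill the first buffer before starting the stream, as the docs require.
    render->GetBuffer(frames, &data);
    // ...generate 'frames' frames of audio into data...
    render->ReleaseBuffer(frames, 0);
    client->Start();

    for (;;) {
        WaitForSingleObject(ev, INFINITE);
        // Must always take the whole buffer: a partial GetBuffer fails,
        // and GetCurrentPadding is unnecessary in this mode.
        render->GetBuffer(frames, &data);
        // ...generate 'frames' frames of audio into data...
        render->ReleaseBuffer(frames, 0);
    }
}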
Does the first emphasised sentence above mean that, for WASAPI shared-mode audio, the sound buffer size setting in WinUAE doesn't necessarily affect latency?