Overview of Audio Data Flow
Typically, a DirectMusic application obtains sounds from one or more of the following sources:
- MIDI files
- WAV files
- Segment files authored in DirectMusic Producer or a similar application
- Component files authored in an application such as DirectMusic Producer and turned into a complete composition at run time by the DirectMusic composer object
Note: Any of these data sources can be stored in the application as a resource rather than in a separate file.
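Whatever the source, loading follows the same pattern: the loader object reads the file (or resource) and returns a segment object. The following is a minimal sketch, assuming COM has already been initialized; the function name and file path are placeholders.

```cpp
// Minimal sketch: load a file (WAV, MIDI, or authored segment) into a
// segment object by using the DirectMusic loader. Error handling is
// reduced to simple checks.
#include <windows.h>
#include <dmusici.h>

IDirectMusicSegment8* LoadSegment(const WCHAR* wszPath)
{
    IDirectMusicLoader8*  pLoader  = NULL;
    IDirectMusicSegment8* pSegment = NULL;

    if (FAILED(CoCreateInstance(CLSID_DirectMusicLoader, NULL, CLSCTX_INPROC,
                                IID_IDirectMusicLoader8, (void**)&pLoader)))
        return NULL;

    // The loader parses the file and returns it wrapped in a segment object.
    pLoader->LoadObjectFromFile(CLSID_DirectMusicSegment, IID_IDirectMusicSegment8,
                                const_cast<WCHAR*>(wszPath), (void**)&pSegment);
    pLoader->Release();
    return pSegment;   // NULL if loading failed
}
```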
Data from these sources is encapsulated in segment objects. Each segment object represents data from a single source. At any moment in a performance, one primary segment and any number of secondary segments can be playing. Segments based on different kinds of sources can play together; for example, a secondary segment based on a WAV file can be played along with a primary segment based on an authored segment file.
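For instance, after the performance has been initialized (typically with IDirectMusicPerformance8::InitAudio) and the segments have been loaded and downloaded, both kinds of segments can be started with PlaySegmentEx. This is a minimal sketch; pPerformance, pMusicSegment, and pWaveSegment are placeholder names for objects assumed to exist already.

```cpp
// Minimal sketch: one primary segment plus one secondary segment playing
// on the performance's default audiopath.
pPerformance->PlaySegmentEx(
    pMusicSegment,          // primary segment (for example, authored content)
    NULL, NULL,             // no segment name, no transition segment
    DMUS_SEGF_DEFAULT,      // start on the segment's default boundary
    0,                      // start time: as soon as possible
    NULL,                   // no segment state returned
    NULL,                   // nothing to stop first
    NULL);                  // play on the default audiopath

pPerformance->PlaySegmentEx(
    pWaveSegment,           // secondary segment (for example, a WAV file)
    NULL, NULL,
    DMUS_SEGF_SECONDARY,    // play alongside the primary segment
    0, NULL, NULL, NULL);
```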
A segment comprises one or more tracks, each containing timed data of a particular kind; for example, notes or tempo changes. Most tracks generate time-stamped messages when the segment is played by the performance. Other kinds of tracks supply data only when queried by the performance.
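Tempo is one example of data that is supplied on request rather than as a stream of messages: the tempo track answers queries made through GetParam, which an application can also call directly on the segment. A minimal sketch, assuming pSegment is a loaded segment that contains a tempo track:

```cpp
// Minimal sketch: query the tempo in effect at the start of the segment.
// GUID_TempoParam identifies the kind of data requested; the tempo track answers.
DMUS_TEMPO_PARAM tempo;
MUSIC_TIME mtNext = 0;   // receives the time of the next tempo change, if any

HRESULT hr = pSegment->GetParam(
    GUID_TempoParam,     // which kind of data is being requested
    0xFFFFFFFF,          // search all track groups
    0,                   // first track in the group that can supply it
    0,                   // time within the segment (MUSIC_TIME units)
    &mtNext,
    &tempo);

if (SUCCEEDED(hr))
{
    // tempo.dblTempo now holds the tempo in beats per minute.
}
```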
The performance first dispatches the messages to any application-defined tools. A tool can modify a message and pass it on, delete it, or send a new message. Tools are arranged in linear sets called toolgraphs. A message might pass through any or all of the following toolgraphs, in the order given:
- Segment toolgraph. Processes messages from a single segment.
- Audiopath toolgraph. Processes messages on a single audiopath.
- Performance toolgraph. Processes all messages in the performance.
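For example, an application-defined tool can be placed in a performance toolgraph so that it sees every message. The following is a minimal sketch; CEchoTool is a hypothetical application class that implements IDirectMusicTool, and pPerformance is assumed to be an initialized performance.

```cpp
// Minimal sketch: create a toolgraph, insert a custom tool, and attach the
// graph to the performance so that all messages pass through the tool.
IDirectMusicGraph* pGraph = NULL;

if (SUCCEEDED(CoCreateInstance(CLSID_DirectMusicGraph, NULL, CLSCTX_INPROC,
                               IID_IDirectMusicGraph, (void**)&pGraph)))
{
    CEchoTool* pTool = new CEchoTool();   // hypothetical IDirectMusicTool implementation

    // NULL/0 means the tool processes messages on all performance channels;
    // the final 0 places it at the start of the graph.
    pGraph->InsertTool(pTool, NULL, 0, 0);
    pTool->Release();   // the graph now holds its own reference

    // Make this the performance toolgraph.
    pPerformance->SetGraph(pGraph);
    pGraph->Release();
}
```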
Finally, the messages are delivered to the output tool, which converts the data to MIDI format before passing it to the synthesizer. Channel-specific MIDI messages are directed to the appropriate channel group on the synthesizer. The synthesizer creates waveforms and streams them to a device called a sink, which manages the distribution of data through buses to DirectSound buffers.
There are three kinds of DirectSound buffers:
- Sink-in buffers are DirectSound secondary buffers into which the sink streams data. These buffers enable the application to control pan, volume, 3-D location, and other properties. They can also pass their data through effects modules to add effects such as reverberation and echo. The resulting waveform is passed either directly to the primary buffer or to one or more mix-in buffers.
- Mix-in buffers receive data from other buffers, apply effects, and mix the resulting waveforms. These buffers can be used to apply global effects. An effect achieved by directing data to a mix-in buffer is called a send. Mix-in buffers can be created only by using audiopath configurations authored in DirectMusic Producer.
- The primary buffer performs the final mixing on all data and passes it to the rendering device.
Note: Applications are not responsible for managing secondary buffers that are part of a DirectMusic performance. Although an application can obtain a buffer object for the purpose of adding effects and changing properties, it cannot lock the buffer, write to it, start it, or stop it by using the IDirectSoundBuffer8 interface.
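For example, an application can retrieve the 3-D interface of a sink-in buffer on an audiopath and position the sound, without ever writing to or starting the buffer itself. A minimal sketch, assuming pPerformance has already been initialized with InitAudio:

```cpp
// Minimal sketch: create a standard 3-D audiopath and position its buffer.
// The application changes buffer properties but never locks, writes to,
// starts, or stops the buffer; DirectMusic manages that.
IDirectMusicAudioPath* pPath  = NULL;
IDirectSound3DBuffer*  p3DBuf = NULL;

if (SUCCEEDED(pPerformance->CreateStandardAudioPath(
        DMUS_APATH_DYNAMIC_3D,   // audiopath type with a 3-D sink-in buffer
        64,                      // number of performance channels
        TRUE,                    // activate immediately
        &pPath)))
{
    // Reach into the path and get the 3-D interface of its buffer.
    if (SUCCEEDED(pPath->GetObjectInPath(
            DMUS_PCHANNEL_ALL,   // any performance channel
            DMUS_PATH_BUFFER,    // stage: the DirectSound buffer
            0,                   // first buffer in that stage
            GUID_NULL, 0,        // any object class, first matching index
            IID_IDirectSound3DBuffer,
            (void**)&p3DBuf)))
    {
        p3DBuf->SetPosition(2.0f, 0.0f, 0.0f, DS3D_IMMEDIATE);
        p3DBuf->Release();
    }
    // Segments can now be played on pPath by passing it to PlaySegmentEx.
}
```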
The following diagram is a simplified view of the flow of data from files to the speakers. A single segment is shown, though multiple segments can play at the same time. The segment gets its data from only one of the four possible sources shown: a WAV file, a MIDI file, a segment file authored in DirectMusic Producer, or component files combined by the composer object. In all cases, data can come from a resource rather than a file.
For a closer look at the flow of messages through the performance, see Using DirectMusic Messages.
For information on how to implement the process shown in the illustration, see Loading Audio Data and Playing Sounds.