OpenAL Tutorials
Lesson 5: Sources Sharing Buffers
Author: Jesse Maurais
Adapted for Java by: Athomas Goldberg
This is a translation of the OpenAL Lesson 5: Sources Sharing Buffers tutorial from DevMaster.net to JOAL.
At this point in the OpenAL series I will show one method of sharing your buffers among many sources. This is a very logical and natural step, and it is so easy that some of you may have already done it yourself. If you have, you can skip this tutorial entirely and move on. But for those keeners who want to read all of the info I've got to give, you may find this interesting. Plus, we will be using the Alc layer directly, so we can apply some of the knowledge gained in lesson 4. On top of that, we will create a program you might actually use!
Well, here we go. I've decided to only go over bits of the code that are significant, since most of the code has been repeated so far in the series. Check out the full source code in the download.
static ALC alc;
static AL al;

// These index the buffers.
public static final int THUNDER     = 0;
public static final int WATERDROP   = 1;
public static final int STREAM      = 2;
public static final int RAIN        = 3;
public static final int CHIMES      = 4;
public static final int OCEAN       = 5;
public static final int NUM_BUFFERS = 6;

// Buffers hold sound data.
static int[] buffers = new int[NUM_BUFFERS];

// A vector list of sources for multiple emissions.
static Vector sources = new Vector();
First I've defined a few constants (the Java equivalent of the macros in the original C++ tutorial) that we can use to index the buffer array. We will be using several wav files, so we need quite a few buffers. Instead of an array for storing the sources, we will use a Vector, because it allows us to have a dynamic number of sources: we can keep adding sources to the scene until OpenAL runs out of them. This is also the first tutorial in which we treat sources as a resource that can run out. And yes, they will run out; they are finite.
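To make the central idea concrete before we go on: a buffer stores the sample data exactly once, and any number of sources can be attached to it. Here is a minimal sketch of that idea (not part of the tutorial's source), assuming al is initialized and the buffers have already been filled by the loading code shown later:

// Two sources bound to the same buffer; OpenAL never copies the sample data.
int[] twoSources = new int[2];
al.alGenSources(2, twoSources);

al.alSourcei(twoSources[0], AL.AL_BUFFER, buffers[THUNDER]);
al.alSourcei(twoSources[1], AL.AL_BUFFER, buffers[THUNDER]);

// Each source still plays independently, with its own position, gain, and pitch.
al.alSourcePlay(twoSources[0]);
al.alSourcePlay(twoSources[1]);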
static int initOpenAL() {
    ALC.Device device;
    ALC.Context context;
    String deviceSpecifier;
    String deviceName = "DirectSound3D";

    // Get handle to device.
    device = alc.alcOpenDevice(deviceName);

    // Get the device specifier.
    deviceSpecifier = alc.alcGetString(device, ALC.ALC_DEVICE_SPECIFIER);
    System.out.println("Using device " + deviceSpecifier);

    // Create audio context.
    context = alc.alcCreateContext(device, null);

    // Set active context.
    alc.alcMakeContextCurrent(context);

    // Check for an error.
    if (alc.alcGetError() != ALC.ALC_NO_ERROR)
        return AL.AL_FALSE;

    return AL.AL_TRUE;
}
This is some sample code built on what we learned in the last tutorial. We get a handle to the device "DirectSound3D" and then obtain a rendering context for our application. The context is made current, and the function checks that everything went smoothly before returning success.
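If you would rather not hard-code "DirectSound3D" (which ties the program to Windows), the ALC specification allows a null device specifier to request the implementation's default device. Whether your JOAL build accepts null here is an assumption you should verify; a sketch:

// Open the default device by passing a null specifier (per the ALC spec).
// Assumption: this JOAL binding tolerates null and returns null on failure.
ALC.Device defaultDevice = alc.alcOpenDevice(null);
if (defaultDevice == null) {
    System.err.println("Could not open the default audio device.");
}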
static void exitOpenAL() {
    ALC.Context curContext;
    ALC.Device curDevice;

    // Get the current context.
    curContext = alc.alcGetCurrentContext();

    // Get the device used by that context.
    curDevice = alc.alcGetContextsDevice(curContext);

    // Reset the current context to NULL.
    alc.alcMakeContextCurrent(null);

    // Release the context and the device.
    alc.alcDestroyContext(curContext);
    alc.alcCloseDevice(curDevice);
}
This does the opposite of the previous function. It retrieves the context and device our application was using and releases them. It also sets the current context to null (the default), which suspends the processing of any data sent to OpenAL. It is important to reset the current context to null, or else you will have an invalid context trying to process data, and the results of that are unpredictable.
If you are writing a multi-context application you may need a more advanced way of dealing with initialization and shutdown. I would recommend keeping all devices and contexts in global variables and closing them individually, rather than retrieving the current context.
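For example, a multi-context teardown might look something like the following sketch. The field names and the two-context split are purely illustrative assumptions, not part of this tutorial's source:

// Keep the device and every context in fields and tear them down explicitly.
static ALC.Device sharedDevice;
static ALC.Context musicContext;
static ALC.Context effectsContext;

static void shutdownContexts() {
    alc.alcMakeContextCurrent(null);        // detach before destroying
    alc.alcDestroyContext(musicContext);    // release each context we created
    alc.alcDestroyContext(effectsContext);
    alc.alcCloseDevice(sharedDevice);       // close the device last
}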
static int loadALData() {
    // Variables to load into.
    int[] format = new int[1];
    int[] size = new int[1];
    ByteBuffer[] data = new ByteBuffer[1];
    int[] freq = new int[1];
    int[] loop = new int[1];

    // Load wav data into buffers.
    al.alGenBuffers(NUM_BUFFERS, buffers);
    if (al.alGetError() != AL.AL_NO_ERROR)
        return AL.AL_FALSE;

    ALut.alutLoadWAVFile("wavdata/thunder.wav", format, data, size, freq, loop);
    al.alBufferData(buffers[THUNDER], format[0], data[0], size[0], freq[0]);
    ALut.alutUnloadWAV(format[0], data[0], size[0], freq[0]);

    ALut.alutLoadWAVFile("wavdata/waterdrop.wav", format, data, size, freq, loop);
    al.alBufferData(buffers[WATERDROP], format[0], data[0], size[0], freq[0]);
    ALut.alutUnloadWAV(format[0], data[0], size[0], freq[0]);

    ALut.alutLoadWAVFile("wavdata/stream.wav", format, data, size, freq, loop);
    al.alBufferData(buffers[STREAM], format[0], data[0], size[0], freq[0]);
    ALut.alutUnloadWAV(format[0], data[0], size[0], freq[0]);

    ALut.alutLoadWAVFile("wavdata/rain.wav", format, data, size, freq, loop);
    al.alBufferData(buffers[RAIN], format[0], data[0], size[0], freq[0]);
    ALut.alutUnloadWAV(format[0], data[0], size[0], freq[0]);

    ALut.alutLoadWAVFile("wavdata/ocean.wav", format, data, size, freq, loop);
    al.alBufferData(buffers[OCEAN], format[0], data[0], size[0], freq[0]);
    ALut.alutUnloadWAV(format[0], data[0], size[0], freq[0]);

    ALut.alutLoadWAVFile("wavdata/chimes.wav", format, data, size, freq, loop);
    al.alBufferData(buffers[CHIMES], format[0], data[0], size[0], freq[0]);
    ALut.alutUnloadWAV(format[0], data[0], size[0], freq[0]);

    // Do another error check and return.
    if (al.alGetError() != AL.AL_NO_ERROR)
        return AL.AL_FALSE;

    return AL.AL_TRUE;
}
We've totally removed the source generation from this function. That's because from now on we will be initializing the sources separately.
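The six nearly identical load blocks above could also be folded into a small helper. A sketch under the assumption that the same ALut and AL calls shown above are available; the helper name loadIntoBuffer is made up:

// Loads one wav file into the buffer at the given index.
static void loadIntoBuffer(String path, int bufferIndex) {
    int[] format = new int[1];
    int[] size = new int[1];
    ByteBuffer[] data = new ByteBuffer[1];
    int[] freq = new int[1];
    int[] loop = new int[1];

    ALut.alutLoadWAVFile(path, format, data, size, freq, loop);
    al.alBufferData(buffers[bufferIndex], format[0], data[0], size[0], freq[0]);
    ALut.alutUnloadWAV(format[0], data[0], size[0], freq[0]);
}

// Example: loadIntoBuffer("wavdata/thunder.wav", THUNDER);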
static void addSource(int type) {
    int[] source = new int[1];

    al.alGenSources(1, source);
    if (al.alGetError() != AL.AL_NO_ERROR) {
        System.err.println("Error generating audio source.");
        System.exit(1);
    }

    al.alSourcei (source[0], AL.AL_BUFFER,   buffers[type]);
    al.alSourcef (source[0], AL.AL_PITCH,    1.0f);
    al.alSourcef (source[0], AL.AL_GAIN,     1.0f);
    al.alSourcefv(source[0], AL.AL_POSITION, sourcePos);
    al.alSourcefv(source[0], AL.AL_VELOCITY, sourceVel);
    al.alSourcei (source[0], AL.AL_LOOPING,  AL.AL_TRUE);

    al.alSourcePlay(source[0]);

    sources.add(new Integer(source[0]));
}
Here's the function that generates the sources for us. It creates a single source for any one of the buffers we loaded in the previous function, given the buffer index 'type', which is one of the constants we defined at the start of this tutorial. We do an error check to make sure we have a source to play (like I said, they are finite). If a source cannot be allocated, the program will exit.
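If exiting the whole program feels heavy-handed, one alternative (a sketch, not part of the original code; the name tryAddSource is invented) is to refuse the new source and keep the scene running:

// Like addSource, but reports failure instead of terminating the program.
static boolean tryAddSource(int type) {
    int[] source = new int[1];

    al.alGenSources(1, source);
    if (al.alGetError() != AL.AL_NO_ERROR) {
        System.err.println("Out of sources; ignoring request.");
        return false;
    }

    al.alSourcei(source[0], AL.AL_BUFFER,  buffers[type]);
    al.alSourcei(source[0], AL.AL_LOOPING, AL.AL_TRUE);
    al.alSourcePlay(source[0]);

    sources.add(new Integer(source[0]));
    return true;
}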
static void killALData() {
    Iterator iter = sources.iterator();
    while (iter.hasNext()) {
        al.alDeleteSources(1, new int[] { ((Integer)iter.next()).intValue() });
    }
    sources.clear();
    al.alDeleteBuffers(NUM_BUFFERS, buffers);
    exitOpenAL();
}
This function has been modified a bit to accommodate the Vector. We have to delete each source in the list individually and then clear the list which will effectively destroy it.
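As a defensive variant (a sketch, not from the original), you could stop each source before deleting it, since the sources in this program are looping and therefore still playing at shutdown:

// Stop each looping source before releasing it.
Iterator iter = sources.iterator();
while (iter.hasNext()) {
    int id = ((Integer)iter.next()).intValue();
    al.alSourceStop(id);                        // stop playback first
    al.alDeleteSources(1, new int[] { id });    // then release the source
}
sources.clear();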
char[] c = new char[1];
while (c[0] != 'q') {
    try {
        BufferedReader buf = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Press a key and hit ENTER: \n"
            + "\t'w' for Water Drop\n"
            + "\t't' for Thunder\n"
            + "\t's' for Stream\n"
            + "\t'r' for Rain\n"
            + "\t'o' for Ocean\n"
            + "\t'c' for Chimes\n"
            + "\n'q' to Quit\n");
        buf.read(c);
        switch (c[0]) {
            case 'w': addSource(WATERDROP); break;
            case 't': addSource(THUNDER);   break;
            case 's': addSource(STREAM);    break;
            case 'r': addSource(RAIN);      break;
            case 'o': addSource(OCEAN);     break;
            case 'c': addSource(CHIMES);    break;
        }
    } catch (IOException e) {
        System.exit(1);
    }
}
Here is the program's inner loop, taken straight out of our main. Basically it waits for keyboard input, and on certain key presses it creates a new source of a certain type and adds it to the audio scene. Essentially what we have created here is something like one of those nature tapes that people listen to for relaxation. Ours is a little better since it allows the user to customize which sounds they want in the background. Pretty neat, eh? I've been listening to mine while I code. It's a Zen experience (I'm listening to it right now).
The program could be expanded to use more wav files, with the added feature of placing the sources at arbitrary positions around the scene. You could even allow sources to play at a given frequency rather than having them loop. However, this would require GUI routines that go beyond the scope of this tutorial. A full-featured "Weathering Engine" would be a nifty program to make though. ;)
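As a starting point for the "arbitrary positions" idea, here is a hedged sketch; the method name addSourceAt and the example coordinates are invented for illustration:

// Like addSource, but places the new source at an explicit position.
static void addSourceAt(int type, float x, float y, float z) {
    int[] source = new int[1];

    al.alGenSources(1, source);
    if (al.alGetError() != AL.AL_NO_ERROR) return;

    al.alSourcei (source[0], AL.AL_BUFFER,   buffers[type]);
    al.alSourcefv(source[0], AL.AL_POSITION, new float[] { x, y, z });
    al.alSourcei (source[0], AL.AL_LOOPING,  AL.AL_TRUE);
    al.alSourcePlay(source[0]);

    sources.add(new Integer(source[0]));
}

// Example: addSourceAt(OCEAN, -5.0f, 0.0f, 2.0f); places the ocean off to the listener's left.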