
LSL Wiki : Sound



Second Life uses the FMOD audio library to handle sound.

When sound files are uploaded to SL, they are encoded from the WAV PCM format to the Ogg Vorbis format. Even though sounds are streamed, the client will not start playback until it has completely downloaded the sound file, because it decodes each file back into WAV form before playing it. The maximum length of a sound file is 10 seconds. Sounds are always sampled at 44.1 kHz, 16-bit, mono; stereo files have one channel dropped (merged into the other) on upload. The bitrate of the encoded sample is selected when uploading and can be 32, 64, 96, or 128 kbps. Each sound is played at a volume specified per call, from 0.0 (silent) to 1.0 (full).
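As a minimal sketch of playing an uploaded sound, the script below triggers a sound from the prim's inventory at full volume when touched. The sound name "beep" is an assumption; substitute the name of a sound actually in the prim's inventory.

```lsl
// Minimal sketch: play an inventory sound on touch.
// "beep" is a placeholder name for a sound in this prim's inventory.
default
{
    touch_start(integer total_number)
    {
        // Second argument is volume: 0.0 (silent) to 1.0 (full).
        llPlaySound("beep", 1.0);
    }
}
```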

Sound Functions:

Deprecated Sound Functions:
(Provided for historical comparisons only -- do not use these!)

Q: How do I play an MP3?
A1: You can stream audio on your land using llSetParcelMusicURL. You can read more about the limitations of this at the llSetParcelMusicURL page.
A2: If you want to upload a song to play in SL without streaming it directly, you can, but you may run into some problems. First of all, you're limited to 10 seconds per clip. Presumably the Lindens' reasoning is that it's easier to avoid copyright disputes than to attempt to police them, though there may be a technical limit as well (beyond "we don't want you filling up our servers"). Note: when songs or other audio files are cut into 10-second clips, most scripts lack the ability to link them seamlessly.
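The streaming approach from A1 can be sketched as follows, assuming the script's owner also owns the parcel. The URL is a placeholder; substitute your own stream.

```lsl
// Minimal sketch: set this parcel's streaming music URL.
// Only works if the object's owner owns the land the object is on.
// The URL below is a placeholder, not a real stream.
default
{
    state_entry()
    {
        llSetParcelMusicURL("http://example.com:8000/stream.mp3");
    }
}
```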

Q: Why not just play them as OGGs?
A: Because most sounds are played more than once, and it's CPU-intensive to decode them each time; caching the intermediate WAV stream saves CPU time. Most sound cards only accept uncompressed PCM (WAV) streams, so the audio has to be decoded before playback anyway.

There are various 'jukebox' scripts available that are designed to play a sliced-up song's segments in order. To create the sound inventory items needed, break up the MP3 into 10-second clips (GoldWave is a useful utility for this--it can do it automatically via its "cue points" tool), then upload them. Unfortunately, uploading songs like this can be costly. Expect to spend hundreds of L$ per song. -EepQuirk
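A jukebox script of the kind described above might look roughly like this. It assumes the clips have been uploaded as sounds in the prim's inventory, each 10 seconds or shorter, and named so that inventory order matches playback order; seamless transitions are not guaranteed, as noted earlier.

```lsl
// Minimal jukebox sketch: play every sound in this prim's inventory in order.
// Assumes clips are at most 10 seconds long and alphabetically ordered by name.
integer gClip;   // index of the clip currently playing
integer gTotal;  // number of sound clips in inventory

default
{
    touch_start(integer total_number)
    {
        gClip = 0;
        gTotal = llGetInventoryNumber(INVENTORY_SOUND);
        if (gTotal == 0) return;
        llSetTimerEvent(10.0);  // advance to the next clip every 10 seconds
        llPlaySound(llGetInventoryName(INVENTORY_SOUND, gClip), 1.0);
    }

    timer()
    {
        ++gClip;
        if (gClip >= gTotal)
        {
            llSetTimerEvent(0.0);  // finished the last clip; stop the timer
            return;
        }
        llPlaySound(llGetInventoryName(INVENTORY_SOUND, gClip), 1.0);
    }
}
```

Calling llPreloadSound on the next clip while the current one plays can reduce the gap between segments, since the client must fully download a sound before it will start playback.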

Do note that the audio format is Ogg Vorbis, not just Ogg: Ogg is the container format, and Vorbis is the audio codec it contains. -SignpostMarvMartin

Functions | Sound
Comments
Are the files actually encoded using the really bad-sounding CBR-mode of the Ogg Vorbis libraries? Or are they using the VBR quality-levels that are documented to approximate 32, 64, 96, and 128kbps? Asking as I'd like to minimize the size of each section I upload by 'listen testing' it at all encoding levels off-line before uploading them and actually spending the L$.
-- WolfWings (2005-11-01 13:46:23)
"Imagine that 3D rendering engines had never been invented and all games still worked like Myst by presenting a series of flat photographs. That is the state of sound in current games technology in 2006...
Native digital synthesis is the future for multimedia and games. Instead of recording a bunch of sounds like photographs the synthesist creates code to be executed at runtime. These are the sounds, expressed as procedures or formulas which will be run by the client hardware. Synthesis is a direct analogy to the 3D rendering engine used for most modern games, only difference is that it's a sound rendering process. "

from the introduction to "Practical synthetic sound design", written for games developers.

Sound synthesis of course will never completely replace recorded audio files, but it can go a long way toward enhancing the atmosphere and interactivity of a virtual reality. After playing around with "puredata", an open source sound synthesis engine/blackboard, I was captivated by the idea of automated sound generation based on information that is already present in any decent physics system. Blind people can navigate a complex environment using nothing but the echoes of their own footsteps. How much of our awareness of our surroundings comes through our ears? I think puredata or something like it would integrate very nicely with LSL events as a low network-bandwidth sound layer, and by procedurally generating reverb and other acoustic properties of an environment (such as footsteps) it could add a whole new "dimension" to SL.

-fenn wakawaka
-- FennWakawaka (2006-06-22 03:38:25)
The link Fenn is referring to is:
-- ChristopherOmega (2006-06-22 16:46:03)
Oops. I stumbled across an example of another VR sound rendering project today. Let's see if I can get it right this time: and
This pdf describes how the open source uni-verse project went about translating from 3d object data to digital signal processing parameters in order to "render" the acoustics of a virtual room, in a manner very similar to ray tracing.

I think that in addition to simple room acoustics, a system using standard engineering modal analysis algorithms would be needed to generate the "inherent" sounds of objects - the clank of metal hitting stone, the thump of footsteps on a wooden bridge. However more complex acoustic interactions such as musical instruments, babbling brooks, etc. would still need to be modeled by hand.
-- FennWakawaka (2006-06-25 07:57:44)
Was the old page recovered, or did I just have a cached copy? It displays properly for me now.
-- XerdarOh (2007-05-10 02:48:17)
Folks, don't delete the page content; restore the latest intact version via the "Edit this page" link below. Select the latest (topmost) intact version from the list and then click the "Re-edit as New" button, and then the "Store" button. Hopefully Catherine will get a handle on this problem soon.
-- TalarusLuan (2007-05-17 15:33:09)

Breaking italics.
-- ChristopherOmega (2007-06-07 18:04:21)