So you have updated your Mac to the newest macOS Mojave. A lot of new functions are now available in macOS Mojave, and you will enjoy the MacBook UI and supportive features. On the other side, you have to deal with a few bugs that will act as roadblocks in your journey. When my colleague updated his Mac to macOS Mojave, he found that sound was not working. After some trial and error, we managed to fix the sound not working on Mac, and the trick worked.
So I would like to share some useful tips on fixing audio that won't work in macOS Mojave.
We are happy to help you: if your problem is not covered in this article, submit this form.
- This solution also fixes the following problems:
- Mac volume buttons not working
- No output devices found on Mac
- Internal speakers not working on Mac, or no audio when playing online video in browsers like Safari or Google Chrome
- External speakers not working on Mac, or an annoying popping noise when playing audio or video
Fix Audio-Sound not working on Mac
- Whatever media player you are using on Mac, check that the volume controls are not turned down; if the volume is low, turn it up. If your Mac's volume is locked on mute, see the solution below.
- Play a different audio file, DVD, or CD on your Mac to check whether the problem is with a particular file.
If you’re listening to music on Mac’s inbuilt speakers then,
- Remove External speaker or headphones.
Try this:
Step #1: Click on “Apple Menu”.
Step #2: Open “System Preferences” and click “Sound”.
Step #3: Select the Output devices as “Internal Speakers”.
Step #4: Also check that the "Output volume" slider is turned up (toward the right side).
Step #5: Besides, make sure that "Mute" is not checked.
Run a command in Terminal to fix the sound issue:
- Open the Terminal app on your Mac.
- Type "sudo killall coreaudiod" and press Enter. This restarts the Core Audio daemon, which often brings the sound back.
If you’re listening to audio through external speakers then,
- Plug the external speakers properly into the audio port on your Mac or display. Also check the power supply, if necessary. Make sure the external speaker is turned on, and try adjusting the volume on the speaker itself.
- Let’s check,
Step #1: Click on “Apple” icon and open “System Preferences”.
Step #2: Click “Sound” and then click on “Output”.
If your Mac has only a single audio port, click on "Use audio port for", select "Sound Output", and then select the external speakers.
- If the headphones or external speakers are connected to a USB port, then:
Step #1: Open “Apple” menu and then click on “System Preferences”.
Step #2: Click “Sound” and choose “Output”.
- In addition, check that the external USB speakers are chosen:
Step #1: Go to “System Information”.
Step #2: In the “Hardware” section, select “USB”.
Also check that the connected speakers are in the list; if they are not, unplug and re-plug the speakers. If the external speakers still do not respond, consult the speaker's manual.
If you are using an external display's HDMI port with sound, try this:
- Unplug the speaker or headphone.
- Also, check all the cables of the Display are perfectly connected to the Mac.
- Try this,
Step #1: Tap “Apple” menu and open “System Preferences”.
Step #2: Open “Sound” and click “Output”.
Step #3: Select “Display Audio” from the “Output” device list.
If you are using a digital receiver, then:
- A digital audio port is not available on every Mac, but if your Mac has one:
Step #1: Click on “Apple” menu and select “System Preferences”.
Step #2: Click “Sound” and select “Output”.
Step #3: Again, select the “Digital Output”.
- Verify that the Mac is properly connected to the digital-ready receiver through an optical digital cable. Also check that the digital receiver is set to the Digital Input option.
- Adjust the volume on the receiver itself, because when a digital receiver is connected, the Mac's controls can't be used to adjust the volume.
Extra Ideas:
If the internal speaker is not working, try external speakers as an output device.
You can also re-install macOS Mojave using a bootable USB installer.
You can also downgrade from macOS Mojave to macOS High Sierra.
Jaysukh Patel is the founder of howtoisolve. He is also a professional developer and tech lover, focused mainly on iPhone, iPad, iPod Touch and iOS.
Contact On: [email protected] [OR] [email protected]
This article is based on my presentation given at the Game Developers Conference in Moscow in 2003. The author is a professional developer of 3D-sound engines for modern games.
3D Sound vs. Surround Sound
Sound is not as important as graphics in game development. Game developers spend more time on new features and effects for 3D graphics, and it's difficult to persuade people to spend time and money on high-quality sound in games. At the same time, most users would rather get a new 3D accelerator than a new sound card.
However, the situation is changing - both users and developers are now paying more attention to sound, and modern projects dedicate up to 40% of the budget, time and manpower to sound.
Audio chip makers and 3D sound developers have done their best to convince users and application developers that good 3D sound is an integral part of a modern computer.
Sound was stereo before, then it turned into 3D, and now we have multi-channel solutions: 4-channel, 5.1, and 7.1.
Let's have a closer look at 3D sound and its similarities to and differences from multichannel solutions.
The concept of 3D sound means that sound sources are located in the 3D space around the listener. Each sound source represents an object in a virtual game world that is able to produce sound.
Here is a typical view in a 3D shooter, using Vivisector: Beast Inside (from Action Forms) as an example. There is the Listener and there are Sound Sources. Some of the sources are stereo (such as background music; in this particular game, wind and jungle sounds are the main ambient sounds), 8 sources are produced by monsters, 1 source is dedicated to the player - shots, steps - and there are 3 ambient sounds (in this case, sounds of insects, birds etc.).
3D sound is used for deeper immersion into the game's virtual world by making what's going on in the scene more realistic. Various technologies are used to emulate or exaggerate sound behavior in the real world: for example, reverberation, reflected sounds, occlusions, obstructions, distance modeling (how far a sound source is from the listener) and many other effects.
3D Sound technologies: positioning
Everyone perceives sound differently (it depends on ear shape, age and psychological state), so there can't be a single opinion about the sound quality of a particular sound card or the effectiveness of a given 3D technology. Sound reproduction depends a lot on the sound card and speakers, as well as on the audio engine of a given game.
Let's see how the 3D sound effect is created. We'll start with 2D panning. This technology was used as early as ID Software's DOOM. Every mono sound source is played as stereo, and its position can be altered with the left- or right-channel's volume level. Such a system has no vertical positioning, but it's possible to change the sound a little (for example, by filtering high frequencies) when it comes from behind the listener, because in that case he hears it slightly muffled.
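The volume-based panning described above can be sketched in a few lines. Note this uses a constant-power pan curve, a common refinement; DOOM-era engines used simpler linear volume panning, so treat this as an illustrative sketch rather than the historical algorithm.

```python
import math

def pan_gains(pan):
    """Constant-power 2D panning.
    pan: -1.0 (full left) .. +1.0 (full right).
    Returns (left_gain, right_gain)."""
    angle = (pan + 1.0) * math.pi / 4.0   # map pan to 0..pi/2
    return math.cos(angle), math.sin(angle)

# A source directly ahead plays equally on both channels,
# at about 0.707 (-3 dB) each, so perceived loudness stays
# constant as the source sweeps from left to right:
left, right = pan_gains(0.0)
```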
Now for the hardware realization. A sound card can emulate the position of a sound source on two speakers or headphones with an HRTF (Head Related Transfer Function). Filtering and other transformations emulate human auditory sensation.
HRTF (Head Related Transfer Function) is a transfer function which models sound perception with two ears in order to determine the positions of sources in space. Our head and body are actually obstacles modifying the sound, and our ears, hidden from the sound source, perceive altered sound signals; the signals then proceed to the brain to be decoded in order to determine the right position of the sound source in space.
On the left you can see three HRTFs (sound source position: azimuth 135 degrees and 36 degrees) for three different persons, for the left and right ears respectively. All of them are based on certain laws. In most cases they are recorded using special methods, with special stereo mics inserted into the ears of a human or a model (KEMAR). Sensaura, in particular, utilizes synthetic HRTFs based on the same laws you can see on the slide (for example, the peak at 2500 Hz and the drop at 5000 Hz for a given point in space). Some other companies use averaged HRTFs.
The system is actually composed of two FIR (Finite Impulse Response) filters, and the HRTF is their transfer function. Since the HRTFs are discrete, and it's too costly to store megabytes of HRTF data, the source's real position is calculated with HRTF interpolation.
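The FIR filtering and interpolation can be sketched as follows. This is a minimal illustration, not a real HRTF renderer: blending impulse responses sample-by-sample is the simplest interpolation scheme, and actual engines use more sophisticated methods.

```python
def fir_filter(signal, coeffs):
    """Apply an FIR filter by direct convolution.
    coeffs plays the role of one ear's HRTF impulse response."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * signal[n - k]
        out.append(acc)
    return out

def interpolate_hrtf(h_a, h_b, t):
    """Linearly blend two measured HRTF impulse responses;
    t in 0..1 moves from position A to position B."""
    return [(1.0 - t) * a + t * b for a, b in zip(h_a, h_b)]
```

For a source at, say, azimuth 37.5 degrees, the engine would blend the stored responses measured at 30 and 45 degrees and then run the source signal through the resulting filter for each ear.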
Downsides of HRTF
1. Sound can be badly distorted!
2. Operation can be pretty slow.
3. If sound sources are immovable, their positions can't be determined precisely, because the brain needs them moving (movement of the source, or subconscious micro-movements of the listener's head), which helps to determine a sound source's position in geometrical space.
It's typical of people to turn their heads towards unexpected sounds. When the head turns, the brain gets additional information defining the sound's position in space. If the sound source does not generate the special frequencies forming the difference between the front and rear HRTF functions, the brain ignores such a sound; instead, it uses data from memory and compares it with information about the location of known sound sources in the hemisphere.
4. Headphones give the best results. Headphones make it simpler to solve the problem of delivering one signal to one ear and another signal to the other ear. However, some people do not like headphones, even light wireless models.
Besides, the fact that a sound source seems to be much closer when the player has headphones on should also be accounted for.
Acoustic systems make it possible to avoid some problems of headphones, but other troubles pop up. First, it's not clear how to use speakers for binaural listening, i.e. when one part of the signal goes to one ear and the other part to the other ear after the HRTF transformation. When we connect speakers instead of headphones, the right ear catches the sound meant for the left one as well, and vice versa. One of the ways out is crosstalk cancellation (CC).
In so-called sweet spots a listener can hear all 3D effects perfectly, while in other areas the sound will be distorted. The necessity to choose the right position, i.e. the sweet spot, brings in new problems. The wider the sweet spot, the better; that is why developers keep looking for new ways to expand sweet spots.
In a multi-speaker system (4.1, 5.1) the sound is distributed among speakers located around the listener's head. The sound coming from one or another speaker is positioned so that the listener can locate it.
In principle, usual panning is enough: there are several streams (depending on the number of speakers) which play simultaneously on all speakers but at different volume levels - hence the effect. For example, Dolby Digital utilizes 6 and 8 streams in the 5.1 and 7.1 configurations respectively.
The Sensaura MultiDrive and Creative CMSS (Creative Multispeaker Surround Sound) technologies reproduce sound using HRTF functions with 4 or more speakers (every sound area uses its own crosstalk cancellation algorithm).
Each pair of speakers forms the front and rear hemispheres. Since the sound areas are based on the HRTF functions, each sweet spot allows for good perception of sources located on each side of the listener and sources located on the front/rear axis. As the covering angle is pretty wide, the sweet spot is large enough.
Without crosstalk cancellation (CC), positioning of sound sources is impossible. Since HRTFs are used for 4 speakers in the MultiDrive technology, it's necessary to apply CC algorithms to all 4 speakers, which requires serious computational power.
Usage of HRTF on the rear speakers makes it necessary to position the rear speakers accurately relative to the front ones. Front speakers are usually placed near the monitor, and a subwoofer can be put somewhere on the floor in a corner. As for rear speakers, people place them wherever they find it convenient; not everyone wants, or has the space, to put them behind.
Also remember that HRTF and CC calculations for 4 speakers take much power. Aureal, for instance, uses panning algorithms for the rear speakers, because restrictions on the positioning of rear speakers are then not so strict.
NVIDIA uses Dolby Digital 5.1 for 3D sound. Once positioned, the whole sound stream is encoded into the AC-3 format and transferred to an external decoder (for example, a home theater) in digital form.
Min/Max Distance, Air Effects, Macro FX
One of the main features of a sound engine is distance effects. The farther the sound source, the quieter it is. One of the simplest models lowers the volume level with distance: the sound designer assigns a certain minimum distance, beyond which the sound starts fading out. While the source is within this distance, it can only change its position; once it crosses the border, it loses half of its strength (-6 dB) each time the distance doubles. It will keep getting quieter until it reaches the Maximum distance, where it's too far to be heard. When this distance is reached, the sound can keep dying out until it comes to zero volume, but it's better to turn such sounds off to free the resources. The farther the maximum distance, the longer the sound will be heard.
In most cases the volume level follows a logarithmic dependence, so the designer can discern loud and quiet sounds. Sound sources can have different Min and Max Distances: for example, a mosquito can't be heard even at a distance of 50 cm, while the sound of an airplane dies out only at a distance of several kilometers.
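The min/max distance model above can be sketched like this. It follows the standard DirectSound3D-style inverse-distance rolloff (full volume inside the minimum distance, -6 dB per doubling beyond it); the function name and parameters are illustrative.

```python
import math

def distance_gain(distance, min_dist, max_dist):
    """Amplitude gain for a source at the given distance:
    1.0 inside min_dist, 0.0 past max_dist, and an
    inverse-distance falloff (-6 dB per doubling) between."""
    if distance <= min_dist:
        return 1.0
    if distance >= max_dist:
        return 0.0
    return min_dist / distance

# Gain in decibels at twice the minimum distance (min 1 m):
db = 20.0 * math.log10(distance_gain(2.0, 1.0, 50.0))  # about -6 dB
```

A designer would give the mosquito a tiny max distance and the airplane a huge one, and the same formula covers both.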
A3D EAX HF Rolloff
The A3D API extends the DirectSound3D distance model by modeling high-frequency attenuation - as in the real world, where high frequencies are absorbed by the atmosphere according to a logarithmic law, approximately 0.05 dB per meter (for the chosen frequency: 5000 Hz by default). But in foggy weather the air is denser, and high frequencies fade out quicker. EAX3 grants lower-level features for modeling atmospheric effects: here two reference frequencies are assigned - for low and high frequencies - and their effect depends on the environment parameters.
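Converting that per-meter figure into an amplitude factor is a one-liner; this sketch assumes the constant 0.05 dB/m rate quoted above for the reference frequency.

```python
def air_absorption_gain(distance_m, db_per_meter=0.05):
    """Amplitude factor for atmospheric high-frequency
    absorption: roughly 0.05 dB lost per meter at the
    reference frequency (5 kHz by default in A3D)."""
    loss_db = db_per_meter * distance_m
    return 10.0 ** (-loss_db / 20.0)

# A 5 kHz component of a source 100 m away loses ~5 dB,
# i.e. its amplitude drops to roughly 56% of the original:
gain = air_absorption_gain(100.0)
```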
MacroFX
Most HRTF measurements are carried out in the far field, which makes the calculations simpler. But if a sound source is located within 1 meter (in the near field), the HRTF doesn't work adequately. The MacroFX technology was developed to reproduce sounds coming from sources in the near field. The MacroFX algorithms apply sound effects in the near field, and the sound source seems to be located very close to the listener, as if it's moving from the speakers towards the listener and even penetrating into his/her ears. This effect is based on accurate modeling of sound-wave propagation around the listener's head from all positions in space, and on transformation of the data with an efficient algorithm.
This algorithm is integrated into the Sensaura engine and managed through DirectSound3D, i.e. it is transparent for application developers, who can create a good deal of new effects.
For example, in flight simulators the listener, as a pilot, can hear the conversation of air traffic controllers as if he had headphones on.
Doppler, Volumetric Sound Sources (ZOOM FX), Multiple Listeners
The Doppler effect is observed when the wavelength changes as the source approaches or moves away. When the sound source is nearing, the wavelength shortens; when it's moving away, it grows, in accordance with the corresponding formula. Racing and flight simulators benefit most of all from the Doppler effect. In shooters it can be used for rockets, lasers or plasma, i.e. any objects that move very fast.
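The formula in question is the classical Doppler shift for sound; a minimal sketch for motion along the line between source and listener:

```python
def doppler_frequency(f_source, v_source, v_listener, c=343.0):
    """Observed frequency under the classical Doppler effect.
    Positive speeds mean the source/listener is approaching;
    c is the speed of sound in air, ~343 m/s."""
    return f_source * (c + v_listener) / (c - v_source)

# A 440 Hz siren approaching at 30 m/s is heard pitched up:
heard = doppler_frequency(440.0, 30.0, 0.0)   # roughly 482 Hz
```

A game engine would feed in the relative velocity of each fast-moving object per frame and resample or pitch-shift its sound accordingly.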
Volumetric Sound Sources
Modern systems for the reproduction of positioned 3D sound utilize HRTF functions forming virtual sound sources, but these synthetic virtual sources are point sources. In real life, sound mostly comes from large sources, or composite ones which can consist of several individual sound generators. Large and composite sound sources allow for more realistic effects in comparison with point sources.
A point source can be successfully applied to large but distant objects, for example a moving train. But in real life, when the train is approaching the listener, it's no longer a point source. The DS3D model, however, treats it as a point source anyway, and the picture gets less realistic (i.e. it sounds like a small train nearby rather than a huge one).
Aureal was the first to apply volumetric sound, in its A3D API 3.0. Sensaura with its ZoomFX was next. The ZoomFX technology solves this problem and defines a large object as a collection of several sound sources (in the case of a train, the composite source can consist of the noise of wheels, engine, couplings of carriages etc.).
3D Sound Technology: wavetracing vs reverbs
In 1997-1998 every chip maker decided upon the technologies they considered to be promising. Aureal, the then leader, staked on maximum realism in games with Wavetracing. Creative decided that it's better to use precalculated reverberations and developed EAX. Creative had bought Ensoniq/EMU in 1997, the developer and manufacturer of professional studio effect-processors - that is why they had the reverb technology at that time. When Sensaura appeared on the market, they used EAX as a base, named their version EnvironmentFX and started on other technologies: MultiDrive, ZoomFX and MacroFX. NVIDIA was the last to come onto the scene (as developer of components for the MS X-Box) - they implemented unique real-time Dolby Digital 5.1 encoding for 3D sound positioning.
Wavetracing
To create the effect of full immersion into the game, it's necessary to calculate the acoustic environment and its interaction with sound sources. As the sound propagates, the waves interact with the environment. The sound waves can reach the listener in different ways:
- direct path
- 1st order reflections
- 2nd order or late reflections
- occlusions.
Aureal's Wavetracing algorithms analyze the geometry describing the 3D space to determine the paths of wave propagation in real time, after the waves are reflected by and passed through passive acoustic objects in the 3D environment.
The geometry engine in the A3D interface is a unique mechanism for modeling sounds reflected by and passed through obstacles. It processes data at the level of geometrical primitives: lines, triangles and quadrangles (audio geometry).
An audio polygon has its own location, size, shape and material properties. Its shape and location are connected with the sound sources and listener, and influence how each separate sound is reflected by, passes through, or goes around the polygon. The material properties can range from transparent to sound to entirely absorbing or reflecting.
The database of the graphics geometry (which is displayed on the monitor) can be passed through a converter which turns all graphics polygons into audio polygons while a game level is loaded. Global values can be set for the parameters of reflecting or occluding objects. Besides, it's possible to process the graphics geometry database in advance, by running the polygon conversion algorithm, storing the audio geometry database in a separate file, and loading this file while a game level is loaded.
As a result, the sound becomes much more realistic: a combination of 3D sound, the acoustics of the rooms and environment, and accurate delivery of audio signals to the listener. The environment modeling realized by Aureal has no equal, even compared to the latest EAX versions from Creative.
However, the number of hardware streams assigned for calculating reflections by the Wavetracing technology is limited. That is why full realism is still a long way off. For example, the processing power won't be sufficient for late reflections, not to mention the graphics processing. Besides, the Wavetracing technology is not quick; the realization requires huge expense. That is why you shouldn't disregard the prerendered sound textures of the EAX technology. 3D graphics doesn't use real-time rendering based on ray tracing yet, either.
Occlusions
The EAX technology and its reverberation model will be described a little later. For now, let me dwell upon the occlusion effects. In principle, occlusion can be done by turning down the volume, but a more realistic effect can be achieved with a low-pass filter.
In most cases one type of occlusion is enough: the sound source is located behind a solid obstacle. The direct path is muffled, and the filtering degree depends on the geometrical parameters (thickness) and the material the wall is made of. Since the sound source and listener have no direct contact, the echo from the source is muffled according to the same principle.
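The low-pass muffling can be sketched with the simplest possible filter; real engines use better-designed filters whose cutoff is derived from the wall's thickness and material, so treat the mapping here as purely illustrative.

```python
def one_pole_lowpass(samples, alpha):
    """One-pole low-pass filter: y[n] = y[n-1] + alpha*(x[n] - y[n-1]).
    alpha in (0, 1]; a smaller alpha muffles high frequencies
    more heavily (e.g. a thicker or denser wall)."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# A sharp transient is smeared out, losing its high-frequency edge:
muffled = one_pole_lowpass([1.0, 1.0, 1.0, 1.0], 0.5)
```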
For maximum realism, the API developers at Creative use one more concept, Obstruction, which means that the direct path is muffled - there is no direct contact with the listener - but the source and the listener are in the same room, and late reflections reach the listener in their original form.
One more type is Exclusion. The source and the listener are in different rooms but have direct contact: the direct path reaches the listener, but the reflected sound can't entirely pass through the opening and gets distorted (depending on the thickness, shape and properties of the material).
Anyway, no matter how the effects are realized (with Aureal A3D, Creative Labs EAX or manually in your own audio engine), it's necessary to trace geometry (all of it, or only the sound part) to find out whether there is direct contact with the sound source. This is a very heavy blow to performance. That is why in most cases it's better to build simplified geometry for sound (especially for shooters, 3D RPGs and other similar games that bring in as much realism as possible). Fortunately, such geometry is almost always calculated anyway to detect collisions - precisely so that the whole geometry around the player doesn't have to be traced. That is why we can use the same geometry and make it a bit more detailed for the sound.
Environments morphing
One more solution from Creative Labs was launched in 2001 together with EAX3: an algorithm for the gradual transformation of the reverb parameters of one environment into another. The picture demonstrates two practical realizations.
- The first is Position transition: the reverb parameters change gradually depending on the player's position between two environments with absolutely different parameters (in this case, open space and an enclosed space with metallic walls). The closer the player is to the open space, the stronger the reverb parameters for the open space are, and vice versa.
- The second type is Threshold transition: the parameters change automatically when the player crosses a certain border between the rooms.
Environment morphing is one of the most important functions concerning reverberation. It's a breeze now to create new presets for reverb parameters. Even if the gradual transition is not used, you can use this function to form a certain average environment (for example, from Outdoor and Stone Corridor) by setting the morphing factor equal to 0.5, which gives something midway between the two sounds.
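The morphing itself amounts to a per-parameter blend between two presets. The preset names and parameter values below are hypothetical, just to illustrate the 0.5 factor mentioned above; real EAX presets have many more parameters.

```python
def morph_environments(env_a, env_b, t):
    """Blend two reverb presets parameter-by-parameter;
    t = 0.0 gives env_a, t = 1.0 gives env_b."""
    return {k: (1.0 - t) * env_a[k] + t * env_b[k] for k in env_a}

# Hypothetical presets with made-up values:
outdoor = {"decay_time_s": 1.5, "reverb_level_db": -10.0}
stone_corridor = {"decay_time_s": 2.7, "reverb_level_db": -2.0}

# Halfway between the two environments:
halfway = morph_environments(outdoor, stone_corridor, 0.5)
```

For Position transition, t would be driven by the player's position between the two zones instead of being fixed.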
Before environment morphing was developed, this effect was created the following way in games (e.g. Carnivores 2) where the parameters couldn't be changed gradually (there were only fixed presets in EAX1 and EAX2). An intermediate environment was chosen from the 25 available presets (mostly by ear). For example, there is a Cave preset and a gradual transition into the Mountains preset is required; after listening and adjusting some of the parameters, Stone Corridor is chosen as something intermediate. Now you can avoid this thanks to environment morphing.
Interfaces and API
Now let's talk about choosing an API for programming the audio engine. The choice is actually not great: Windows Multimedia, DirectSound, OpenAL, Aureal A3D.
Unfortunately, the drivers for Aureal A3D are still buggy, and it works poorly on the modern Windows 2000 and XP.
The Windows Multimedia system is the basic sound reproduction system inherited from the earlier Windows 3.1. It's rarely used for games because of its large latency, due to its large buffer. However, WinMM is used with some semiprofessional cards which have specially optimized WDM drivers.
OpenAL is Loki Entertainment's cross-platform API, an analog of OpenGL for sound. It's promoted by Creative Labs as an alternative to DirectSound. The idea is good but the realization is poor. Besides, Loki Entertainment has recently gone bankrupt. We hope that a new alternative API for sound will soon appear, because OpenAL is a real nightmare for a programmer. However, NVIDIA has recently launched a hardware driver for OpenAL on its nForce chipsets, which was a real surprise.
DirectSound and DirectSound3D are the most practical APIs at the moment; they have no equals. DirectSound is a bit pretentious, but after all, it reproduces sound, and nothing more is needed.
Besides these hardware APIs (the APIs that have hardware drivers instead of emulating reproduction via DirectSound or WinMM), there are so-called wrappers (program interfaces which use existing software/hardware interfaces to build their own interface). As a rule, every game has its own wrapper interface.
There are a lot of such API-wrappers (they have no real hardware support):Miles Sound System, RenderWare Audio, GameCoda, FMOD, Galaxy, BASS, SEAL.
Miles Sound System is the most famous one - 2700 games use exactly this wrapper. They licensed Intel's RSX technology and now offer it as an alternative software 3D sound. There are a lot of features available, but it's not without weak points: it covers only Win32 and Mac, and has a very high license price.
Galaxy Audio was developed for Unreal and is now used for all Unreal-engine based games. But Unreal 2 was built on OpenAL, which is why Galaxy can be considered dead now.
GameCoda and RenderWare Audio, from Sensaura and Renderware respectively, are of almost equal scope. Both support PC, PS2, GameCube and XBOX and many various features, but the license price is still too high.
Finally, FMOD. It arrived recently, but it takes one of the leading positions thanks to the wide choice of features and technologies supported by the API.
Firelight Multimedia (Melbourne, Australia) was founded in 1995 and had only one person on the staff - Brett Peterson. It paid much attention to the audio tracker formats MOD, XM, IT and S3M, and the first version of the API (then still for DOS) was released at that time. It was a free API for noncommercial use, for demos and other interactive applications.
Since tracker-format playback requires very fast software mixing code, FMOD takes the leading position among these APIs: its mixing code is the fastest.
When the first cards with hardware 3D sound came onto the scene, they brought the new problem of combining hardware and software 3D sound, and thereby fostered support for new technologies: Aureal A3D, Creative Labs EAX 1 and EAX 2, and a software scene-geometry manager for A3D compatibility.
In 2000 the company was renamed Firelight Technologies and got 4 persons on the staff. They primarily ported the code to the PS2 and XBOX consoles. Firelight Technologies was almost the first to offer audio middleware for consoles. Now the company aims to embrace as many platforms as possible and to develop a single interface (maximally compatible, emulating some functions in software if they are not available on a certain platform).
As of 2003, this is the only API supporting 7 platforms - PC (Win32), PlayStation2, XBOX, GameCube, Linux, Mac OS (OSX) and PDA (WinCE) - plus the Multiple Listeners technology, 12 compilers, any programming language from Visual Basic up to Assembler, the fastest software mixer code, and a fast MP3 decoder (unfortunately, an additional license from Thomson Multimedia is needed to use MP3 in commercial applications, but there are alternatives - OGG or WMA). But that's not all FMOD has. At http://www.fmod.org/ you can look through the full list of supported features and license prices. It's no secret that they are currently developing an FMOD hardware driver for several Windows platforms and mobile phones (PDA). At present, FMOD holds a firm position on the market.
"Nice to see that a lot of the things I wanted our engineers to develop into A3D actually got done by an outside company," says David Gasior, former Technology Evangelist at Aureal. I trust David Gasior and agree with what he says about FMOD. This is not advertising, but the real state of the current audio API market.
EAX (Environmental Audio Extensions)
EAX (Environmental Audio Extensions) from Creative Labs is not an API and not a library, but a set of extensions for the DirectSound3D API.
It's simple for programmers to implement EAX in a game, but adjustment of the parameters takes much more time.
We will deal separately with the Listener and Sound Sources. The EAX system discerns parameters adjusted separately for the Listener and for Sound Sources; we will call them Listener Parameters and Sound Source Parameters respectively.
In 1997-1998 Creative launched EAX v1. This was a primitive set of 26 presets, with 3 parameters for more accurate adjustment of the Listener Parameters and 1 parameter for the Sources. EAX 1 was soon followed by EAX 2 (14 parameters for the Listener and 13 for the sources, including occlusions), which was actually taken as a standard for games. The Interactive Audio Special Interest Group (http://www.iasig.org/) created the IASIG Level 2 standard, which was almost entirely based on EAX 2 but still had some disadvantages. The matter is that every company that implements IASIG Level 2 - Microsoft DirectSound 8, Sensaura EnvironmentFX, Aureal A3D - makes its own add-ons, and can rearrange or rename the parameters. This makes adjustment and porting a bit inconvenient. EAX 2 is described in detail in the SDK at http://developer.creative.com/.
EAX Advanced HD
In 2001 Creative announced the Audigy sound card and new EAX functions named EAX Advanced HD. They cover 25 (!) parameters for accurate adjustment of the Listener and 18 parameters for the sources, including two new occlusion effects.
So, in EAX Advanced HD (let's call it simply EAX3) the Listener parameters are divided into two groups: high-level and low-level.
Listener Parameters
The high-level parameters include Environment, 26 presets backward compatible with EAX 1 and 2. The second parameter is the Environment Size, from 1 to 100 meters. The third group is flags for automated calculation of low-level parameters depending on the environment size; it covers automatic latency calculation, decay time of late reflections, etc. You can learn the details in the SDK and documentation; here we will just look at the approach.
The low-level parameters are divided into several subgroups depending on their actions.
The first one covers volume level parameters. There are volume level parameters for all sounds (Room - master volume), 1st order (Reverb), 2nd order (Reflections), and the Room Roll-off Factor.
The second subgroup includes the time parameters: Reverb and Reflection Delay, and Reflections Decay Time (in seconds).
The third group consists of the sound tone parameters. These can help the player sense what the walls are made of, what the air density in the environment is, etc. Every material reflects and absorbs certain frequencies, and these parameters emulate such absorption and reflection. They are reference frequencies (LF - Low Frequency and HF - High Frequency) within which changes can be made. For example, metallic walls reflect more frequencies than wooden ones, so the HF level will be lower for them than for an emulation of wood. A workshop, say, has the following parameters: 362 Hz LF and 3762 Hz HF; a wooden room has the LF at 99 Hz and the HF at 4900 Hz. Finally, there are parameters controlling the effect of the Room LF and HF frequencies (in dB). This subgroup also contains the Decay factors for LF and HF, and the Air Absorption HF factor.
The fourth subgroup controls the granularity of reflected sound, though I'd prefer the concept of density. Here we have the Environment Diffusion (0..1), Echo Depth, and Echo Time - how many times the original sound repeats while fading away. For example, Echo Time is 250 ms by default, which means that the echo repeats 4 times per second. So, this group controls density and, therefore, realism.
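The 250 ms example works out as follows (a trivial sketch of the arithmetic, not an EAX call):

```c
/* Sketch of the arithmetic above: Echo Time is the interval between
 * repeats, so the repeat rate is simply its reciprocal.
 * 0.25 s between repeats -> 4 repeats per second. */
double echo_repeats_per_second(double echo_time_seconds) {
    return 1.0 / echo_time_seconds;
}
```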
One of the most interesting subgroups is the fifth one, with the panning effects. They allow for much higher dynamics of the reflected sound. If you know how the rooms are arranged, you can make the program calculate and define the direction most reflections come from, for example, taking into account how the nearby walls are positioned. Or imagine that the player is in open space, i.e. there is no echo, but he can hear the sound reflected from a cave opening located not far away. The program can pan the reflected sound as if it comes from the cave opening with the Reverb Pan (1st order reflections) and Reflections Pan (2nd order reflections) parameters. This is a unit vector with coordinates (x, y, z) changing within 0 - 1. All the values are equal to 0 by default, which means that reflections can come from all directions. In the case of 1 the reflections come from only one direction, and the maximum value is limited to 0.7 for all coordinates.
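A hypothetical helper (the names below are illustrative, not from the EAX SDK) that keeps a Reverb or Reflections pan vector inside the 0..0.7 per-coordinate range described above might look like this:

```c
/* Hypothetical helper (names are illustrative, not from the EAX SDK):
 * clamp a Reverb/Reflections pan vector so that every coordinate stays
 * within the 0..0.7 range described above. A zero vector means
 * "reflections may come from all directions". */
typedef struct { double x, y, z; } PanVector;

double pan_clamp_coord(double v) {
    if (v < 0.0) return 0.0;
    if (v > 0.7) return 0.7;
    return v;
}

PanVector pan_clamp(PanVector p) {
    p.x = pan_clamp_coord(p.x);
    p.y = pan_clamp_coord(p.y);
    p.z = pan_clamp_coord(p.z);
    return p;
}
```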
The last subgroup is the Pitch Modulation effects.
These effects are not typical of real environments. They are created for emotional impact, for example, to make you feel dizzy, intoxicated, etc. Here we have Modulation Depth (0..1) and Modulation Time (0.4..4 seconds).
Sound Source Parameters
Like the Listener parameters, the Source ones are divided into several subgroups.
The first one is volume level control as well. It includes the Direct path volume and Direct HF (HF absorption volume level), as well as the volume levels for 1st order reflections (Room) and HF absorption for reflected sound (Room HF).
The second group covers the 3D properties of the sound sources: Roll-off Factor, Room Roll-off Factor, Air Absorption Factor, and flags for automated calculation of some parameters.
The third group is the occlusion effects. The occlusion types were studied in the first part, and now I want to draw your attention to some peculiarities. These effects are simply filters applied to the direct path sound and/or to the reflections.
So, Occlusions (the sound goes around an insurmountable obstacle that divides the listener and the source) include 4 parameters: Occlusion, which is actually the effect volume level (-10,000 to 0 dB), Occlusion LF Ratio (0..1), Occlusion Room Ratio and Occlusion Direct Ratio (the last two control the degree of sound filtering for the direct path and reflections).
Obstructions (both the listener and the source are in the same room; the direct path is blocked, but the reflected sound reaches the listener) have 2 parameters: Obstruction (effect volume level, -10,000..0 dB) and Obstruction LF Ratio.
Finally, Exclusions (the listener and the source are in different rooms, but there is an opening in the wall; the direct sound reaches the listener completely, while the reflected one only partially) have two parameters as well, like Obstructions.
EAX4 (EAX Advanced HD version 4)
In March 2003 Creative announced EAX Advanced HD v4, scheduled to become available at the end of April or beginning of May. Unfortunately, Creative does not permit a detailed technical description of EAX4. EAX3 differs from EAX4 only conceptually.
So, the EAX Advanced HD version 4 has the following new elements:
- Studio quality effects
- Multiple effect slots
- Multiple Environments and Zoned effects
Studio quality effects
EAX4 presents 11 studio-quality effects. You can use any of the effects listed below for 2D and 3D sound sources.
- AGC Compressor - automatic leveling of the sound source volume
- Auto-Wah - auto version of the Wah pedal
- Chorus - makes one instrument sound as several instruments
- Distortion - emulates 'overdriving', guitar amplifier
- Echo - brings in motion and extends the audio space for the source
- Equalizer - 4-band equalizer
- Flanger - tunneling or whooshing effect; modulation of the input signal
- Frequency Shifter for the input signal
- Vocal Morpher - applies special effects to the input signal for vocals (two 4-band formant filters for creating a Vocoder effect, with preinstalled modulation signals)
- Pitch Shifter - shifts the pitch while keeping the harmonics and timing the same
- Ring Modulator - multiplies the input signal by another (modulating) signal in the time domain
- Environment Reverb - EAX's basic component.
At http://www.harmony-central.com/Effects/effects-explained.html you can get more details on how some of these effects work.
These effects will give free rein to your imagination. For example, the Flanger effect can be applied to a machine gun to create an effect of overheating or faster shooting in real time, without changing the audio file, or you can emulate a radio-transmitter effect with Distortion and Equalizer. You can come up with many ideas, but there is one problem - it's supported only by the Audigy / Audigy 2. However, some of these effects are built into DirectSound 8 or can be emulated in software.
Multiple effect slots
Another feature announced is multiple slots for effects, into which you can load several of the effects mentioned. For example, you can hear a sound in several environments simultaneously, or add the Environment Reverb effect to the Distortion and Equalizer of the transmitter to create the illusion of a transmitter in a room with echo.
Multiple Environments
Imagine the following scenario: 3 sound sources surround the listener. Source 1 is heard in an environment with the Occlusions effect. Source 2 has Obstructions and Exclusions applied. The third one is coupled with Exclusions. All the parameters can be set with four EAX3 function calls - one for the listener and 3 for the Occlusions, Obstructions and Exclusions effects (additional parameters can be set together with the main ones).
In the case of EAX4, each source and the listener have their own environments. The sound from each source spreads both in its own environment and in the listener's one. Occlusions, Obstructions and Exclusions are applied both to the sources' environments and to the listener's. Thus we get the result of the sources interacting in their own environments plus their interaction in the listener's one.
With these settings the EAX4 functions must be called a lot: it's necessary to define the environments for the sources and the listener, and effects for each source. But the sound becomes more realistic - the sources sound as if they are located in different rooms, while in EAX3 they were simply in a different room than the listener.
Zoned effects
The concept of zones is very similar to Room or Environment. Creative Labs recommends dividing the visual geometry into several zones, each with its own properties and its own reverb preset. If the game knows where the listener is and what sources there are, the respective parameters can be set automatically. Space should be divided into zones in the level editor, where the audio designer can set identifiers for the zones and reverb parameters.
EAX Advanced HD 4 ensures more gradual transitions between the zones. Like in EAX3, the listener crossing the border between zones triggers the morphing operation. But now it can be applied to parameters in the slots, i.e. they don't need to be loaded every time.
The slide demonstrates 3 zones (3 environments). Zone 1 is a cylindrical room, Zone 2 is a small low room, and Zone 3 is a long corridor. Each will have its own reverb effect. We load 3 packets of parameters into the slots and morph between them when a gradual transition is needed (see Environment Morphing).
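Environment Morphing can be pictured as plain interpolation between two parameter packets as the listener crosses a border. An illustrative sketch with made-up field names (not the EAX4 API):

```c
/* Illustrative sketch (field names are made up, not the EAX4 API):
 * environment morphing viewed as linear interpolation between two
 * reverb parameter packets as the listener crosses a zone border.
 * t = 0 is entirely zone A, t = 1 is entirely zone B. */
typedef struct {
    double decay_time; /* seconds */
    double room_level; /* master volume */
    double diffusion;  /* 0..1 */
} ReverbParams;

ReverbParams morph(ReverbParams a, ReverbParams b, double t) {
    ReverbParams r;
    r.decay_time = a.decay_time + t * (b.decay_time - a.decay_time);
    r.room_level = a.room_level + t * (b.room_level - a.room_level);
    r.diffusion  = a.diffusion  + t * (b.diffusion  - a.diffusion);
    return r;
}
```

Halfway across the border the listener hears an environment exactly between the two zones' presets; stepping t per frame gives the gradual transition described above.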
The Reverb / Reflections panning and Occlusions effects are especially interesting. You can load several different Environment Reverbs, one for each Zone, into the slots and then morph one into another for entirely realistic sounding.
The sound source is located in Zone 3 and the listener is in Zone 2. The Zones are connected by a doorway (Exclusion effect - the direct path is clear, and the reflections are muffled). So, the direct path and muffled exclusions get from Zone 3 into Zone 2, reflect, and reach the listener. Also, there is Zone 1 behind the listener. There are no sound sources in it at the moment (if there were, the direct and reflected paths would reach the listener and interact with Zone 2). But we can adjust the panning vectors so that the reflections from Zone 2 get into Zone 3, interact there, and come back to the listener. The sound scene would sound very realistic.
That's what we should expect in the near future. The realization will certainly be much more difficult than the theory. The main problem will be to define where the sources are, correctly load the parameters of the nearest zones, and trace each source to determine Occlusions, Obstructions or Exclusions. Of course, it's not necessary to use all the effects EAX4 offers. It's possible that your project needs only realistic Environment Morphing and the Occlusion effect.