Construct(ing) Adaptive Music

License

This tutorial is licensed under CC BY 4.0. Please refer to the license text if you wish to reuse, share or remix the content contained within this tutorial.

Published on 19 Dec, 2018. Last updated 4 Feb, 2020

Adaptive music is a technique (or rather a family of techniques) that can be used to vary a game soundtrack over time. Starting in the early 2000s, Microsoft’s DirectMusic Producer (part of the DirectX API) gave composers a spreadsheet-like interface for organizing the components of a composition and allowing in-game events to determine which voices or parts are heard at what time. Composer Guy Whitmore wrote extensively on this topic when it was first introduced. He defined the technique, saying "Adaptive audio is a term used to describe audio and music that reacts appropriately to—and even anticipates—gameplay." The full article is on GamaSutra.com.

Musical reactions or adaptations provide several benefits to a game soundtrack. Perhaps the most immediately noticeable to players is the absence of uniform repetition and musical stasis. There are few things worse in games than a homogeneous soundtrack, and we have all played games that had to be silenced after a period of time. But enough said; this isn’t the place to beat up on games with unimaginative music or lazy implementation. The essential point is this: if you as a designer believe that music is vital to the overall experience of your game, it’s important that the music you use both supports and enhances that experience. Players launch a game and play. They restart, take breaks, try again at a later time, and so on. And though the game program hasn’t changed from one play session to the next, the choices a player makes will vary. Add to that a game designer’s use of randomization or probabilities for generating levels and spawning enemies, and players must engage differently with every restart. Different kinds of play (trial and error, new strategies) lead to a unique experience, demanding similarly unique music that adapts to and complements the new situations a player will face. And when the sound of a game fully supports the visual (and, where appropriate, narrative) experience, players sink into a whole other level of immersion.

For musicians and composers, the creative process changes when you approach your soundtrack in a more adaptive fashion. You still start by thinking about the usual big questions such as the overall tone, genre, and instrumentation appropriate for the game. However, once these ideas are settled and a musical direction is determined, you must consider how the music will adapt. Currently there are two main options: Horizontal Re-sequencing and Vertical Remixing. The horizontal option is called re-sequencing because the music is divided into short segments (think verse, bridge, chorus, and so on in songwriting terms) that are then set to play and adapt to the game as a variable sequence. The vertical option has the individual tracks of a composition playing simultaneously while you, much as you would in a DAW (Digital Audio Workstation such as Logic, Pro Tools, or Reaper), automate levels to control which parts are audible, which are not, and at what point these relationships change to best support events within the game. And don’t let the fact that only two techniques are mentioned here bother you: in practice, the two can be combined in a variety of ways to expand the potential for musical variety. For a more thorough overview of these techniques, see Michael Sweet’s article on Designing Music Now.
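Before opening the example projects, it may help to see the distinction expressed as data. This is only a conceptual sketch in TypeScript, not anything Construct-specific, and the file names are made up:

```typescript
// Horizontal re-sequencing: the game chooses WHICH short segment plays next.
interface HorizontalSegment {
  file: string;          // e.g. "verse.webm" -- hypothetical file name
  canFollow: string[];   // segments allowed to come after this one
}

// Vertical remixing: every layer plays at once; the game controls HOW LOUD
// each simultaneous layer is at any given moment.
interface VerticalLayer {
  file: string;          // e.g. "drums.webm" -- hypothetical file name
  volumeDb: number;      // from -90 dB (inaudible) up to around -3 dB (audible)
}
```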

It’s pretty natural to be thinking right now, "Sure, FMOD, Wwise, Fabric, and Elias can do this whole adaptive music thing, but Construct…?" The truth is, Construct does allow you to create both horizontal and vertical adaptive music systems. These are not likely to be as robust as the solutions you can build with the middleware listed above, but they are just as effective and (fortunately for us) easy to implement. The linked example files are horizontal-reseq.c3p and vertical-remixing.c3p. Rather than go through the implementation of these techniques step by step, I think it’s easier to examine the completed Construct projects and look at the Events and Actions used to create these two different Adaptive Music systems.

Vertical Remixing

First, download this example file (vertical-remixing.c3p) and preview the layout in the Construct Editor. To play, use arrow keys to fly the mothership around and press space to beam up aliens that have been stranded in the city. As you do this you’ll hear the music build in layers as more aliens are rescued. Next, look through the Event Sheet in the Construct Editor. The most important thing to understand about this technique is that when the layout begins, ALL of the music tracks are set to play, but at an inaudibly low level. The global variable MusicMin (set to -90 dB) ensures that none of these tracks will be heard. In addition, all of these tracks are tagged so that volume levels can be set independently.
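If it helps to see the same idea outside of Construct’s event system, here is a minimal sketch using the browser’s standard Web Audio API in TypeScript. This is not how the .c3p example is built (that uses the Audio object’s Play action with a tag per track), and the stem file names, helper names, and small start offset are all assumptions for illustration. The point it mirrors is that every layer starts playing together, addressable by tag, at the inaudible MusicMin level:

```typescript
const ctx = new AudioContext();
const MUSIC_MIN_DB = -90; // effectively silent
const dbToGain = (db: number) => Math.pow(10, db / 20);

// One gain node per layer so each "tag" can be addressed independently.
const layers = new Map<string, GainNode>();

async function loadBuffer(url: string): Promise<AudioBuffer> {
  const data = await (await fetch(url)).arrayBuffer();
  return ctx.decodeAudioData(data);
}

async function startAllLayers(files: Record<string, string>): Promise<void> {
  const entries = Object.entries(files);
  const buffers = await Promise.all(entries.map(([, url]) => loadBuffer(url)));
  const startAt = ctx.currentTime + 0.1; // start every stem at the same moment
  entries.forEach(([tag], i) => {
    const source = ctx.createBufferSource();
    source.buffer = buffers[i];
    source.loop = true;                       // every stem loops from the start
    const gain = ctx.createGain();
    gain.gain.value = dbToGain(MUSIC_MIN_DB); // playing, but inaudible
    source.connect(gain).connect(ctx.destination);
    source.start(startAt);
    layers.set(tag, gain);
  });
}

// Hypothetical stem files, standing in for the demo's drums/bass/synths/lead tracks.
startAllLayers({
  drums: "drums.webm",
  bass: "bass.webm",
  synths: "synths.webm",
  lead: "lead.webm",
});
```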

Believe it or not, MOST of the work is done at this point. As players rescue their friends, the UpdateMission function increments the AliensRescued variable. As this value changes, a series of Events in the Music State Machine section controls the overall mix.

As the number of alien rescues increases, some tracks are silenced (volume set to MusicMin, or -90 dB) while others are made audible, with their volume raised to MusicLevel, or -3 dB. Note that all of this is done via an Audio Set Volume Action, and each of these Actions uses the tag associated with an individual music track. Important: don’t forget to tag your music or this technique will not work.
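Continuing the hedged Web Audio sketch above (same layers, ctx, dbToGain, and MUSIC_MIN_DB), the "Music State Machine" boils down to mapping a rescue count to a set of target levels per tag. The thresholds and layer names below are illustrative, not the exact values used in the example project:

```typescript
const MUSIC_LEVEL_DB = -3;

function setLayerVolume(tag: string, db: number): void {
  const gain = layers.get(tag);
  if (!gain) return;
  // Short ramp from the current value to avoid clicks as layers fade in or out.
  gain.gain.setValueAtTime(gain.gain.value, ctx.currentTime);
  gain.gain.linearRampToValueAtTime(dbToGain(db), ctx.currentTime + 0.05);
}

// Which layers are audible at each rescue count (illustrative thresholds).
function updateMusicState(aliensRescued: number): void {
  const audible: string[] =
    aliensRescued >= 6 ? ["drums", "bass", "synths", "lead"] :
    aliensRescued >= 4 ? ["drums", "bass", "synths"] :
    aliensRescued >= 2 ? ["drums", "bass"] :
                         ["drums"];
  for (const tag of layers.keys()) {
    setLayerVolume(tag, audible.includes(tag) ? MUSIC_LEVEL_DB : MUSIC_MIN_DB);
  }
}

// Call this from the rescue logic, i.e. the equivalent of the UpdateMission function:
updateMusicState(3); // drums and bass become audible, synths and lead stay silent
```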

The final important note has nothing to do with Construct, but with your audio production software. The image above shows the Reason session I used to make the demo music for this lesson. The track is 16 bars long at a tempo of 180 bpm and has four main voices: drums, bass, synths, and lead. The top-to-bottom arrangement of tracks in the DAW is what gives this technique the name vertical, while modulating levels to combine these elements in different ways is remixing in the truest sense of the term.
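A quick sanity check on loop length (assuming a 4/4 time signature, which isn’t stated in the text): 16 bars × 4 beats per bar = 64 beats, and at 180 bpm each beat lasts 60 ÷ 180 ≈ 0.33 seconds, so every stem loops roughly every 21.3 seconds. Keeping every stem exactly this length helps the layers stay aligned, since they all start playing together when the layout begins.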

Though adaptive music is heard in a variety of permutations, it is standard practice to compose this music as if it were a conventional, linear song or score. As a composer you must hear all of the parts together. You have to be able to confirm that everything is accurate in terms of rhythm, harmony, and mix before you can export/bounce your music and bring it into Construct. It’s common for game composers to work with a file like this, but then solo or mute individual tracks to get an idea of how their music will sound as individual parts are added to or subtracted from an in-game adaptive score. When you are satisfied with the entire mix and have enough combinations to suit your game, solo the individual tracks and bounce or export them to WAV files (aka stems) that you can import into Construct.

Horizontal Re-Sequencing

The horizontal approach has (if you’ve read Michael Sweet’s article) the greatest number of variations. In its most basic form, the technique creates an endless series of audio files played back-to-back. If you imagine what that would look like in a DAW, the name horizontal re-sequence makes a lot more sense. Open horizontal-reseq.c3p and take a look at the three Event Sheets. Each uses the same basic technique in a different way.

Horiz Music SEQ is the most fundamental and plays a preset sequence of three music files.

Horiz Music RAN plays the same three files but in a randomized sequence. You never know which will come next and it’s possible that one will repeat itself.

Horiz Music POS uses the position of an object in the game world to determine which of the three music segments plays.

The core of the Horizontal technique is Audio: On Ended. This Event is called the moment a tagged audio file is done playing. As such, On Ended acts as a trigger to cue the next tagged audio file. When that file is done On Ended cues the next, and so on and so forth. The sequence can continue indefinitely as long as the audio file tags are specified correctly.
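For readers who like to see the logic as plain code, here is a minimal TypeScript/Web Audio sketch of the same chaining idea. This is not Construct’s API, and the segment file names are hypothetical; the standard onended callback plays the role of the On Ended trigger, cueing the next segment in a fixed order (the Horiz Music SEQ behaviour):

```typescript
const ctx = new AudioContext();

async function load(url: string): Promise<AudioBuffer> {
  return ctx.decodeAudioData(await (await fetch(url)).arrayBuffer());
}

// Hypothetical segment files standing in for the three music files in the project.
const segments = await Promise.all(["segA.webm", "segB.webm", "segC.webm"].map(load));

function playSegment(index: number): void {
  const source = ctx.createBufferSource();
  source.buffer = segments[index];
  source.connect(ctx.destination);
  // Equivalent of Audio: On Ended -- when this segment finishes, cue the next one.
  source.onended = () => playSegment((index + 1) % segments.length);
  source.start();
}

playSegment(0); // start the endless preset sequence
```

Note that onended fires on the main thread, so a small gap between segments is possible in this sketch; it is only meant to illustrate the chaining logic, not gapless playback.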

This is a good time to re-emphasize the importance and flexibility of audio tags. In Horiz Music RAN I use the choose() System Expression to randomly select one of the three available sound files. By definition, you cannot predict which sound choose() will pick, but because the same tag is applied regardless of which sound is cued, the On Ended Event will always be able to keep the sequence going.
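Building on the previous sketch (same ctx and segments), a choose()-style helper makes the Horiz Music RAN behaviour easy to see: the selection is random, but the chain itself never changes, which is the code equivalent of reusing a single tag:

```typescript
// A small helper imitating Construct's choose() System Expression.
function choose<T>(...options: T[]): T {
  return options[Math.floor(Math.random() * options.length)];
}

function playRandomSegment(): void {
  const source = ctx.createBufferSource();
  source.buffer = choose(...segments);        // any of the three files may play
  source.connect(ctx.destination);
  source.onended = () => playRandomSegment(); // the chain (the "tag") stays constant
  source.start();
}

playRandomSegment();
```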

The final example, Horiz Music POS, simply builds on the techniques used in the previous two examples. Everything you need to know is in the comments, but I will highlight a few key points. The most noticeable difference is the use of an Array. This data structure is an efficient means of storing a list of information; in this case, the audio files. When the Player (just a red dot in this example) is over a particular surface in the game world, the PlayNext Global Variable updates and is used as an index with the Array.At() Expression to determine the next sound to be played in the sequence of On Ended Events. Again, the generic tag ("CurrentLoop") is applied so that no matter which sound plays, it is wrapped in an identifiable tag that can be monitored when On Ended triggers.

In this example there are four surfaces (stone, grass, dirt, water) but only three audio files. This was an intentional choice to demonstrate how to build silence, or an on/off state, into the system: there is no sound when the red dot is over the water texture. It’s a big change in terms of what you hear, but minimal from a technical perspective. This cue works by playing a silent, one-second WAV file whenever the dot is over water. Though it is never heard, the short WAV file loops and allows the system to maintain its On Ended cycle, ready to cue an audible WAV when the player moves to a new surface.
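Sticking with the hedged Web Audio sketch from the horizontal examples (the same ctx and load() helper), here is one way the Horiz Music POS idea could look as plain code, including the silent "water" entry. The file names and surface indices are hypothetical; in Construct itself this is all done with the Array object, the PlayNext variable, the silent WAV, and the "CurrentLoop" tag as described above:

```typescript
// Index 3 (water) is a generated one-second buffer of silence: never heard,
// but it keeps the ended-driven cycle running, just like the silent WAV file.
const silence = ctx.createBuffer(1, ctx.sampleRate, ctx.sampleRate);
const surfaceSegments: AudioBuffer[] = [
  ...(await Promise.all(["stone.webm", "grass.webm", "dirt.webm"].map(load))),
  silence,
];

let playNext = 0; // the equivalent of the PlayNext Global Variable

function playCurrentLoop(): void {
  const source = ctx.createBufferSource();
  source.buffer = surfaceSegments[playNext]; // Array.At(PlayNext) in Construct terms
  source.connect(ctx.destination);
  source.onended = () => playCurrentLoop();  // one chain, one "CurrentLoop" tag
  source.start();
}

// Called by the overlap/collision logic when the player moves onto a new surface
// (0 = stone, 1 = grass, 2 = dirt, 3 = water/silence):
function setSurface(surfaceIndex: number): void {
  playNext = surfaceIndex; // takes effect when the current segment ends
}

playCurrentLoop();
```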

Conclusion

These techniques were developed to help my students in the Game Design program at Indiana University, Bloomington, where I co-teach a course called "Game Art and Sound." Students work with Construct to develop their final projects: A short game or "3-5 Minute Experience" that showcases the skills and techniques they have developed throughout the semester. We can look at Adaptive Music schematic diagrams or play examples of existing music, but to truly understand how these techniques work, it is far more helpful for students to create and implement their own music in support of an original game or other kind of interactive experience. I hope that you, Construct Developer, find these examples useful in one way or another!

On a general, final note that looks ahead to a more robust implementation of these ideas, there is more to say about developing music for a Vertical Remixing Adaptive Music system. Game composers nearly always write a full arrangement of music so that when everything is audible in the DAW, they hear what a player should hear when the game is at its point of greatest intensity (tension, mystery, etc.). The various components (aka stems) of an adaptive arrangement can then be finalized by working backwards: mute tracks, drop levels, and swap voices in order to achieve a "least intense" (tense, mysterious, etc.) sounding state. Once these two extremes have been defined you can experiment with various combinations of the remaining material to determine the makeup of the in-between states.
