My suggestions would be quite easy to implement, as the Web Audio API already offers all of them:
1) The ability to mix/submix.
With Web Audio one can do amazing things using effects and filters. But when you have a set of sounds that should act in a shared context, it is very painful to edit every single sound. Imagine the steps of a player, the 'swish' sound when he swings his sword and the shot sound of his second weapon, a crossbow. All of them should sound within a certain context, for example a cave he's exploring, or a forest where he gets ambushed.
Currently we would need to create the very same effects for all of the sounds and change all of them separately depending on the player's location. But with a submix I could create a new audio tag (e.g. "player sounds") and route all of the player's sounds there, while only needing to change the effects applied to "player sounds". This gets even more interesting when taking the enemies and their sounds into account. A submix "enemy sounds" could be created, plus an "ambience" mix that both "enemy sounds" and "player sounds" are routed to. Now I'd only need to apply/change effects on "ambience" instead of hundreds of single audio tags (which also reduces CPU load).
Web Audio already supports this. Every node input accepts connections from multiple outputs, so this kind of routing is built in. There is also JavaScript example code in the API's documentation.
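Just to illustrate what I mean (a minimal sketch, not C2 code; the lowpass "cave" filter and the buffer variable are only assumptions): a GainNode acts as a submix bus, every single sound connects to its bus, and the effect is created once for the bus instead of once per sound.

    var ctx = new AudioContext();
    var playerSounds = ctx.createGain();        // "player sounds" submix bus
    var enemySounds = ctx.createGain();         // "enemy sounds" submix bus
    var ambience = ctx.createGain();            // "ambience" mix both buses feed into
    var caveFilter = ctx.createBiquadFilter();  // one effect for everything, e.g. a muffled cave sound
    caveFilter.type = 'lowpass';

    playerSounds.connect(ambience);
    enemySounds.connect(ambience);
    ambience.connect(caveFilter);
    caveFilter.connect(ctx.destination);

    // each single sound only connects to its bus instead of getting its own effect chain
    function playPlayerSound(buffer) {
        var src = ctx.createBufferSource();
        src.buffer = buffer;
        src.connect(playerSounds);
        src.start(0);
    }

Changing the location then means changing caveFilter once, not touching any of the individual sounds.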
2) Splitting/merging channels
Currently an audio file is treated as a whole. With the analyser node we can access peak and RMS information, but only as a sum of all channels. It would help a lot if individual channels could be accessed, for example to distinguish the peaks of the left and the right channel of a stereo file. After analysing, the channels would be merged back into one audio stream.
Web Audio already offers this with the ChannelSplitterNode and the ChannelMergerNode.
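A rough sketch of how such a graph could be wired (only an assumption of the setup, with 'source' standing for any stereo node): each channel gets its own AnalyserNode and the two channels are merged back together afterwards.

    var splitter = ctx.createChannelSplitter(2);
    var merger = ctx.createChannelMerger(2);
    var analyserL = ctx.createAnalyser();       // peaks/RMS of the left channel only
    var analyserR = ctx.createAnalyser();       // peaks/RMS of the right channel only

    source.connect(splitter);
    splitter.connect(analyserL, 0);             // splitter output 0 = left channel
    splitter.connect(analyserR, 1);             // splitter output 1 = right channel
    analyserL.connect(merger, 0, 0);            // back into merger input 0
    analyserR.connect(merger, 0, 1);            // back into merger input 1
    merger.connect(ctx.destination);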
3) Extending the looping feature
By adding a loop start and end we could get some very nice additions to the current implementation, for example a file with a nice intro before the main theme. The loop start would then be set to right after the intro, resulting in the intro playing once followed by an indefinitely looping main theme.
Web Audio already offers this via the loopStart and loopEnd attributes of the AudioBufferSourceNode. This should of course only be offered for sounds, as only those are buffered in memory, while music is streamed.
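As a sketch (introLength and themeLength in seconds are made-up values, musicBuffer is an already decoded AudioBuffer):

    var src = ctx.createBufferSource();
    src.buffer = musicBuffer;
    src.loop = true;
    src.loopStart = introLength;               // jump back to just after the intro...
    src.loopEnd = introLength + themeLength;   // ...once the main theme has finished
    src.connect(ctx.destination);
    src.start(0);                              // plays the intro once, then loops the theme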
4) Recording feature
I may be the only one wishing for this, but I'll add it to this wish list nevertheless: an option to record the final output to disk. This could be realized with the ScriptProcessorNode and its AudioProcessingEvent.
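Something along these lines, just as a sketch ('masterNode' stands for whatever the final mix node would be, and writing the collected samples out as a WAV file would still be a separate step):

    var recorder = ctx.createScriptProcessor(4096, 2, 2);
    var recordedChunks = [];

    recorder.onaudioprocess = function (e) {
        // copy each channel's samples out of the AudioProcessingEvent
        var left = new Float32Array(e.inputBuffer.getChannelData(0));
        var right = new Float32Array(e.inputBuffer.getChannelData(1));
        recordedChunks.push([left, right]);
        // pass the audio through unchanged
        e.outputBuffer.getChannelData(0).set(left);
        e.outputBuffer.getChannelData(1).set(right);
    };

    masterNode.connect(recorder);
    recorder.connect(ctx.destination);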
Ashley
Let me point out again that all of these features are already there. They would "only" need to be made accessible in C2, so the workload would be quite low compared to features that would have to be programmed from scratch.