Example project:
drive.google.com/open
So, my simple project goal was to make an app/project that could record your voice using your microphone, and play back the recording in "slow motion" (let’s say at 50% or 25% speed; a lower pitch, for you audio people).
In the attached example capx, I succeeded in doing that using the Video object (thanks to the examples provided on the Construct 3 start page); however, there is no capability to play the Video object at half speed. There is also no capability (at least given my computer-science-idiot status) to "live" import the webm file into the Sounds folder while the project is running, so that the Audio object's "playback rate" action could be used on it. To my knowledge there is no FileChooser-style way to live-import sounds into the Sounds/Music folders.
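In case it helps frame the question, here is a minimal sketch of the recording half in plain browser JavaScript (i.e. outside Construct's event system, so this is my assumption about how the pieces fit together, not a Construct 3 action):

```js
// Sketch: capture the microphone into a webm/Opus blob with MediaRecorder.
// Plain Web API code, not Construct 3 events/actions.
async function recordMic(seconds) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: "audio/webm;codecs=opus" });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  const done = new Promise((resolve) => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: "audio/webm" }));
  });

  recorder.start();
  setTimeout(() => recorder.stop(), seconds * 1000);
  return done; // resolves with the recorded blob
}
```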
I have tried several things to play the audio (via the Video object) in slow motion, all of which have failed (a workaround sketch follows the list):
- Set the recording object to a low frame rate, e.g. 12 frames per second (this only affected sprite objects, not the audio).
- Set the system timescale to 0.5 or 0.25.
- Set the Video object timescale to 0.5 or 0.25.
- Tried setting the playback rate of the audio tag "mic" to 0.5 or 0.25.
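For what it's worth, the underlying HTML media element in current browsers does expose exactly the control the Video object seems to be missing: playbackRate, plus preservesPitch to decide whether the pitch drops with the speed. A sketch, assuming you have the recorded blob from the step above (I don't know of an official way to reach inside the Video object, so this plays the blob through its own element):

```js
// Sketch: play a recorded blob at half speed with a lower pitch,
// using a plain media element rather than the Video object.
function playSlow(blob, rate = 0.5) {
  const el = new Audio(URL.createObjectURL(blob));
  el.playbackRate = rate;      // 0.5 = half speed, 0.25 = quarter speed
  el.preservesPitch = false;   // let the pitch drop along with the speed
  el.play();
}
```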
Now I know that Construct 3 has worked hard to provide the most compatible encoders (Opus webm), and I am eternally grateful, and I don't expect a solution to encoders that take hundreds of people-hours or more to fine-tune, but I was just curious what computer-science/audio magic is (or isn't) happening that makes it hard to play web audio slowly or to import sounds into a running project.
My goal with any post replies here is just to learn a little bit more about the science to satisfy my OCD... which I also know is exactly the opposite of how you treat OCD :).
My theory on why the audio in the webm video file is not also slowed when I intentionally record at a lower frame rate is that the audio is treated separately from the graphical frames. Frames are straightforward: recording fewer of them is an easy way to make a cheap, choppy slow-motion video, but the audio track keeps its own timeline.
My theory is that when audio files (webm, .ogg, etc.) are put in the Construct 3 Sounds or Music folder, the audio file is compressed, and then once project preview is pushed, the audio files are "in" the Construct 3 engine, allowing access to advanced audio features such as playback rate, audio effects, etc.
When you try to play sound live (e.g. from a URL), none of those C3 engine special features are available, because the audio has not been compressed and "presented", or converted into a form that the Construct 3 engine can augment audio-wise (e.g. playback rate / pitch change, audio effects). For a layman like me, that’s what I understand as "encoding", and why sounds can’t be (easily) live-imported into a running project.
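If that theory is roughly right, then the "decoding" step is what the Web Audio API's decodeAudioData does: once the compressed webm/Opus data is decoded into a raw AudioBuffer, rate changes and effect nodes become available, which is (as I understand it) what happens to files imported into the Sounds folder. A sketch of that path, again plain Web Audio rather than Construct's own actions:

```js
// Sketch: decode a recorded webm blob into an AudioBuffer, then play it
// through Web Audio with a reduced playback rate (lower pitch).
async function playDecodedSlow(blob, rate = 0.25) {
  const ctx = new AudioContext();
  const arrayBuffer = await blob.arrayBuffer();
  const audioBuffer = await ctx.decodeAudioData(arrayBuffer);

  const source = ctx.createBufferSource();
  source.buffer = audioBuffer;
  source.playbackRate.value = rate;   // 0.25 = quarter speed, two octaves lower
  source.connect(ctx.destination);    // effect nodes could be inserted here
  source.start();
}
```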
Thanks in advance for your replies/comments/tutelage.