iOS speech recognition issue

  • This is a pretty specific issue.

    It'll be useful to people trying to use speech recognition in an iOS app.

    iOS Safari doesn't support the Web Speech API's speech recognition, so to use speech recognition I've installed a Cordova plugin: github.com/pbakondy/cordova-plugin-speechrecognition

    This works ok. iOS Safari is able to request microphone access & speech recognition permissions and start transcribing speech into text.

    When I start recognising speech, Construct's audio is turned down. The problem is that it doesn't turn back up after I stop recognising speech.

    The only way I've found to get the audio back is to suspend the app by pressing the home button and then switching back to it. When you resume the app, something weird happens: the audio turns back up, but any audio that was triggered while it was down and didn't play will all play at the same time on resume.

    I know this is using a third party Cordova plugin so it's not directly related to Construct but does anyone have any information on this? Any tips or hints? Maybe I can use the Web Audio API to turn Construct's audio back on after speech recognition has finished?
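A minimal sketch of that Web Audio idea, assuming `runtime.objects.Audio.audioContext` is the context Construct exposes (the name used later in this thread) and that the fix is simply resuming it once recognition ends; the helper name is hypothetical:

```javascript
// Hypothetical helper: resume Construct's AudioContext once speech
// recognition has finished. iOS suspends/interrupts the context while
// the microphone session is active; resume() asks it to run again.
function restoreConstructAudio(audioContext) {
  if (audioContext.state !== "running") {
    return audioContext.resume(); // resume() returns a Promise
  }
  return Promise.resolve();
}

// Intended usage inside a Construct script (sketch, not verified):
// window.plugins.speechRecognition.stopListening(
//   () => restoreConstructAudio(runtime.objects.Audio.audioContext),
//   (err) => console.error(err)
// );
```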


  • I thought I'd solved it by running:

    runtime.objects.Audio.audioContext.resume();

    when I closed speech recognition.

    More weird stuff happens. The sound comes back at a much lower amplitude.
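The lower amplitude looks like iOS "ducking" the audio session for the microphone and never fully un-ducking it. One thing worth trying (an assumption, not a confirmed fix) is to route playback through your own master GainNode and snap its gain back to full after recognition ends:

```javascript
// Hypothetical workaround: after recognition ends, resume the context
// and force a master GainNode back to full volume in case iOS left the
// session ducked. `ctx` is assumed to be Construct's AudioContext and
// `gainNode` a GainNode you inserted into the playback graph yourself.
function undoDucking(ctx, gainNode) {
  if (ctx.state !== "running") {
    ctx.resume();
  }
  // Cancel any gain ramps that may still be scheduled, then snap to 1.
  gainNode.gain.cancelScheduledValues(ctx.currentTime);
  gainNode.gain.setValueAtTime(1.0, ctx.currentTime);
}
```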

  • Really sorry I can't help you with this. I'm actually hoping you can help me! How did you get the plugin to work with Xcode/Android Studio? I feel like I'm close to connecting all the pieces, but I'm missing something. Probably because I don't know what I'm doing and had never heard of Cordova before starting to export apps to Xcode and Android Studio.

    I hate to be that guy, but would it be possible for you to point me to a video or tutorial of this or briefly explain how you managed to get speech recognition to work with app exports?

  • No worries!

    In order to use Cordova plugins not included in Construct I followed a tutorial here: https://www.construct.net/en/tutorials/building-mobile-apps-locally-21

    Speech recognition in Construct works perfectly everywhere apart from iOS Safari, so it won't work in iOS apps. This has nothing to do with Construct: Apple doesn't support the Web Speech API's speech recognition in iOS Safari. To get around this I use a Cordova plugin: https://github.com/pbakondy/cordova-plugin-speechrecognition
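For reference, calling the plugin from script looks roughly like this. The method names (`hasPermission`, `requestPermission`, `startListening`) and the options are my reading of the plugin's README, so verify them against the repo; the `recognise` wrapper and the injected `speech` parameter are my own additions so the flow can be tested outside Cordova (in a real app, `speech` would be `window.plugins.speechRecognition`):

```javascript
// Sketch of driving the speech recognition plugin. `speech` is the
// plugin object, injected so the flow is testable outside Cordova.
function recognise(speech, onResult, onError) {
  speech.hasPermission(
    (granted) => {
      if (!granted) {
        // Ask for mic/recognition permission, then start listening.
        speech.requestPermission(() => start(), onError);
      } else {
        start();
      }
    },
    onError
  );

  function start() {
    // onResult receives an array of candidate transcriptions.
    speech.startListening(onResult, onError, {
      language: "en-US",
      showPartial: false,
    });
  }
}
```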

    Now when I want to export a project, I export it as a Cordova project. Then in Terminal I run:

    cordova plugin add cordova-plugin-speechrecognition

    cordova platform add ios

    This will generate a workspace you can open in Xcode.

    Hope that's of some use. The tutorial link above really helped me out.
