First, let me say: I think it's awesome that you're doing this type of work as your profession, and I really respect you for it.
If I'm understanding this correctly, you're using Construct to handle your graphical processes and a separate device/program to read it out.
Instead of having Construct be responsible for handing the data to the screen reader, how about you just take a screenshot of the local machine?
Hear me out on this: developing a simple Python program that uses something like Google's Tesseract OCR (an open-source engine for recognizing text in images) to analyze screenshots taken every 5 seconds or so would be easy. It would work for your application, and it could be reused in other projects down the road. However, the program could easily get confused and push a load of useless items into your speech queue, which would make things confusing for a visually impaired user. With enough development, though, I think it could be very helpful for the sort of work you're doing.
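To make the idea concrete, here's a rough sketch of that screenshot loop, assuming the Tesseract binary plus the pytesseract and Pillow packages are installed (the function names and the confidence threshold are just illustrative choices). Filtering on OCR confidence is one simple way to keep the "useless items" out of the speech queue:

```python
import time


def filter_confident_words(data, min_conf=60):
    """Keep only OCR words above a confidence threshold.

    `data` has the shape returned by pytesseract.image_to_data with
    output_type=Output.DICT: parallel lists under 'text' and 'conf'
    (conf may be a string or int depending on the version).
    """
    words = []
    for text, conf in zip(data["text"], data["conf"]):
        if text.strip() and float(conf) >= min_conf:
            words.append(text)
    return words


def read_screen_loop(interval=5):
    # Imports kept inside so the filter above is usable/testable
    # even on a machine without Tesseract installed.
    from PIL import ImageGrab
    import pytesseract
    from pytesseract import Output

    while True:
        shot = ImageGrab.grab()  # screenshot of the local machine
        data = pytesseract.image_to_data(shot, output_type=Output.DICT)
        words = filter_confident_words(data)
        if words:
            print(" ".join(words))  # hand this string to the speech queue
        time.sleep(interval)
```

The 5-second interval and the 60% confidence cutoff are knobs you'd want to tune; too low a cutoff and the speech queue fills with OCR noise, too high and real on-screen text gets dropped.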
It won't be the fully self-contained Construct solution you were looking for, but at least it will work, should (probably) help with ADA compliance, and is highly expandable.