I'll follow down the rabbit hole...
construct.net/en/make-games/manuals/addon-sdk/reference/graphics-interfaces/iwebglrenderer
What if there were a way to override all of these kinds of calls, including what the C3 renderer does internally?
e.g. if C3 called Quad3D2(tlx, tly, tlz, trx, try_, trz, brx, bry, brz, blx, bly, blz, texQuad), the call could be redirected to a wrapper around a three.js call that draws the equivalent quad, something like the sketch below.
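Purely a sketch of what I mean, assuming the renderer object can even be monkey-patched like this; `c3Renderer`, `threeScene`, and `material` are made-up names for illustration, not real SDK globals:

```js
import * as THREE from "three";

function redirectQuad3D2(c3Renderer, threeScene, material) {
  const original = c3Renderer.Quad3D2.bind(c3Renderer);
  c3Renderer.Quad3D2 = (tlx, tly, tlz, trx, try_, trz,
                        brx, bry, brz, blx, bly, blz, texQuad) => {
    // Build the quad as two triangles from the four corners C3 passed in.
    const positions = new Float32Array([
      tlx, tly, tlz,  blx, bly, blz,  trx, try_, trz,  // triangle 1
      trx, try_, trz, blx, bly, blz,  brx, bry, brz,   // triangle 2
    ]);
    const geometry = new THREE.BufferGeometry();
    geometry.setAttribute("position", new THREE.BufferAttribute(positions, 3));
    // texQuad's UVs are ignored here; a real version would map them
    // into a "uv" attribute on the geometry.
    threeScene.add(new THREE.Mesh(geometry, material));
    // Optionally forward to the original so C3's own drawing still happens.
    original(tlx, tly, tlz, trx, try_, trz, brx, bry, brz, blx, bly, blz, texQuad);
  };
}
```

Rebuilding a geometry per quad like this is part of why it would be slow; a real attempt would batch.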
This would be slower than what three.js could do natively, but all of the normal C3 behavior might still be available.
Do this in both the editor and the runtime, essentially replacing C3's renderer while keeping all the rest of C3's functionality.
A 3D model translated into C3 would probably still just be represented by its bounding box.
Or is this the _worst_ of both worlds?
------------
On a different topic, textures: I'm imagining three.js or babylon.js, etc. will live in a separate WebGL context, so we'll either need to ignore C3 textures and work only with three.js textures, or read them back out of one context and upload them to the GPU again for the other renderer.
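A minimal sketch of the read-back-and-re-upload path, assuming we can get at the C3 side's `gl` context and a framebuffer holding the texture; none of these names come from the real SDK:

```js
import * as THREE from "three";

function transferTexture(gl, framebuffer, width, height) {
  // Read the pixels out of the source context (a CPU round trip,
  // which is exactly the cost being worried about above).
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  const pixels = new Uint8Array(width * height * 4);
  gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

  // Upload again in the three.js context. Row order may need flipping,
  // since readPixels returns rows bottom-up.
  const texture = new THREE.DataTexture(pixels, width, height, THREE.RGBAFormat);
  texture.needsUpdate = true;
  return texture;
}
```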
As Skymen mentions, a big challenge does seem to be getting broader access to the C3 editor, so that a 3D editor for building levels at edit time becomes possible. Will there be a large increase in the editor SDK API surface for us to work with?
For placeholders in the editor, perhaps we could generate textures on the fly for things like C3 cubes: 2D snapshots of each model rendered from every side, fully lit. At least that way there would be _some_ representation of the models in the editor besides a bunch of black and white dice. For this it would then be useful to pass texture data in the other direction too, along the lines of the sketch below.
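Roughly what I have in mind for one snapshot, assuming the model fits in about a unit box; how the resulting data URL would actually get handed to C3 as an editor texture is the open question:

```js
import * as THREE from "three";

function snapshotFromSide(model, direction, size = 256) {
  // preserveDrawingBuffer lets us call toDataURL right after rendering.
  const renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });
  renderer.setSize(size, size);

  const scene = new THREE.Scene();
  scene.add(model);
  scene.add(new THREE.AmbientLight(0xffffff, 1)); // "fully lit"

  // Orthographic camera placed along the requested axis, looking at origin.
  const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0.1, 10);
  camera.position.copy(direction.clone().normalize().multiplyScalar(3));
  camera.lookAt(0, 0, 0);

  renderer.render(scene, camera);
  return renderer.domElement.toDataURL("image/png");
}

// One snapshot per face, e.g.:
// const sides = [new THREE.Vector3(1, 0, 0), new THREE.Vector3(0, 1, 0)];
// const snapshots = sides.map(dir => snapshotFromSide(model, dir));
```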
With the vanilla C3 renderer providing placeholders in the editor, we might end up with something like the below; I'm trying to envision how useful it might be.