[quote:2pwyj2mf]In native engines you have much more control over resolution, so we can render the effects on a smaller screen area, then use one clean up-scale render-to-texture to make that "HD" with a simple one-pass shader.
OK, but that's a missing feature rather than an inherent limitation of WebGL. We could do something like that, because WebGL lets us do everything a native app can do with OpenGL ES 2 (and soon ES 3); nothing in WebGL prevents it.
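For instance, here's a minimal sketch of that exact technique with plain WebGL calls. The helper names and the half-resolution choice are my own illustration, not C2's renderer: render into a smaller texture through a framebuffer, then draw that texture to the full-size canvas in one pass.

[code]
// Minimal sketch of low-res rendering + one-pass upscale in WebGL.
// drawScene/drawFullscreenQuad are assumed helpers standing in for the
// engine's normal renderer and a simple pass-through shader.
declare function drawScene(gl: WebGLRenderingContext): void;
declare function drawFullscreenQuad(gl: WebGLRenderingContext): void;

interface LowResTarget { tex: WebGLTexture; fbo: WebGLFramebuffer; w: number; h: number; }

function createLowResTarget(gl: WebGLRenderingContext, w: number, h: number): LowResTarget {
    const tex = gl.createTexture()!;
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, w, h, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR); // smooth upscale
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

    const fbo = gl.createFramebuffer()!;
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    return { tex, fbo, w, h };
}

function renderFrame(gl: WebGLRenderingContext, target: LowResTarget): void {
    // 1. Render the scene and its effects at the reduced resolution.
    gl.bindFramebuffer(gl.FRAMEBUFFER, target.fbo);
    gl.viewport(0, 0, target.w, target.h);
    drawScene(gl);

    // 2. One clean up-scale pass: draw the low-res texture to the full canvas.
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
    gl.bindTexture(gl.TEXTURE_2D, target.tex);
    drawFullscreenQuad(gl);
}
[/code]

Usage would be something like createLowResTarget(gl, canvas.width / 2, canvas.height / 2), then calling renderFrame every tick; the LINEAR filtering on the texture is what gives the clean upscale. All of that is standard OpenGL ES 2 functionality, fully exposed by WebGL.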
[quote:2pwyj2mf]The problem here is that it's really more than just Construct 2 being so frame-dependent (despite use of dT like crazy)
You can turn that framerate dependence off if you want, or adjust the dt cap to limit frame skipping, by changing the minimum framerate setting: dt is clamped to at most 1 / minimum framerate, so below that rate the game slows down instead of taking huge time steps. But if you stop using dt entirely, you have to deal with the game running at a different speed depending on the display refresh rate, e.g. going twice as fast on a 120Hz gaming monitor, which I've always thought is a worse result.
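To make that concrete, here's a hedged sketch of a dt-capped game loop. The names and the 30fps figure are illustrative, not the actual C2 runtime:

[code]
// Illustrative delta-time loop with a dt cap (MIN_FRAMERATE etc. are my own
// names for this sketch; this is not Construct 2's actual runtime code).
const MIN_FRAMERATE = 30;          // below this rate, slow down instead of skipping
const MAX_DT = 1 / MIN_FRAMERATE;  // largest time step one tick may advance

function update(dt: number): void { /* move objects by speed * dt */ }
function render(): void { /* issue draw calls */ }

let lastTime = performance.now();

function tick(now: number): void {
    const rawDt = (now - lastTime) / 1000;  // seconds since the previous frame
    lastTime = now;

    // Clamp dt: a long frame (GC pause, tab switch, slow device) advances the
    // simulation by at most MAX_DT, so the game slows down rather than objects
    // jumping far enough in one step to tunnel through collisions. Raise or
    // remove the cap and the game stays time-accurate but can skip a lot at once.
    const dt = Math.min(rawDt, MAX_DT);

    update(dt);
    render();
    requestAnimationFrame(tick);
}

requestAnimationFrame(tick);
[/code]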
[quote:2pwyj2mf]Support doesn't mean it's fully working. A laptop from 2006 might have a version of Chrome that "supports" WebGL, but it'll run DirectX 9 faster every time.
Do not assume native DirectX or OpenGL are problem-free either. I've done a lot of native coding with both DirectX 9 (for Construct Classic) and OpenGL (for the C2 editor), and both are a minefield of horrible driver bugs, crashes and poor performance. The situation is bad enough that it can ruin major game releases. The GPU driver situation is awful, and it affects the whole industry, not just WebGL, HTML5 or Construct 2. I know people get annoyed when I point the finger at other companies, like those responsible for the terrible GPU drivers, but it really is that bad, and everyone in this industry is dealing with it.
[quote:2pwyj2mf]Unity has the funds and staff to do extreme optimization on both native and WebGL, and here are their results:
https://blogs.unity3d.com/2014/10/07/be ... -in-webgl/
Despite the title, that blog appears mainly to be testing CPU performance with Unity's WebGL exporter, which means it's essentially benchmarking asm.js and WebAssembly against native code. Their own analysis says: "When you are mostly GPU-bound, you can expect WebGL to perform very similar to native code."
[quote:2pwyj2mf]Even then, other games, both 2D and 3D, made in other engines perform significantly better on Intel embedded GPUs than games made in C2. Whether that's due to usage of WebGL or not is debatable (I say probably not)
Maybe you're actually testing CPU-bound games, where GPU performance doesn't matter; that would explain why you see differing performance even on a system with a super-powerful GPU. CPU performance is another topic entirely. My point is that since there is a 1:1 mapping between WebGL calls and OpenGL calls, GPU performance should be identical to a native app making the same calls; there's no reason for it not to be. Maybe different engines do some special optimisations, but there's nothing stopping us doing that in WebGL as well, since the same features are there. So it's not WebGL's fault, or some fundamental limitation of HTML5.
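To make the 1:1 mapping concrete, here's a hedged fragment (shader and buffer setup omitted for brevity; the names are illustrative). The comment on each line shows the OpenGL ES 2 call the browser issues for it:

[code]
// Each WebGL call corresponds directly to an OpenGL ES 2 call; the browser
// adds validation, but the GPU ends up executing the same commands either way.
const canvas = document.createElement("canvas");
const gl = canvas.getContext("webgl")!;

const tex = gl.createTexture();
const vertexCount = 6; // e.g. one sprite quad as two triangles

gl.bindTexture(gl.TEXTURE_2D, tex);            // glBindTexture(GL_TEXTURE_2D, ...)
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);  // glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);   // glDrawArrays(GL_TRIANGLES, 0, 6)
[/code]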
If you have a CPU-bound game then, as I always offer, send it to me and I'll profile it and see if the C2 engine can be improved.
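In the meantime, a quick way to tell whether a game is CPU-bound at all is to time the per-frame JavaScript work. This is a rough sketch of my own, not the built-in profiler:

[code]
// Rough CPU-bound check: time the JavaScript work per frame. Note this
// measures the CPU cost of game logic and of *issuing* draw calls, not
// GPU execution time. update/render are assumed per-frame functions.
declare function update(dt: number): void;
declare function render(): void;

let worstMs = 0;

function measuredTick(): void {
    const start = performance.now();
    update(1 / 60);
    render();
    const frameMs = performance.now() - start;
    worstMs = Math.max(worstMs, frameMs);
    // If frameMs regularly approaches the 16.7ms budget of a 60Hz display,
    // the game is CPU-bound and a faster GPU won't help.
    console.log(`frame: ${frameMs.toFixed(2)}ms, worst: ${worstMs.toFixed(2)}ms`);
    requestAnimationFrame(measuredTick);
}

requestAnimationFrame(measuredTick);
[/code]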