In a sense you do have to worry about performance at least a little bit, because there's always gonna be that one guy with a forgotten relic of a computer expecting to play the game. Which is fair enough, considering we're talking about 2D games here. They should run on any toaster you throw them at.
Regardless, I do think the engine performs really well in most cases. The things I'm aware of that cause bad performance are:
- Any kind of loop running every tick, especially if it manipulates instances.
- The physics plugin.
- Shaders/effects, which also tend to be unusually(?) expensive, especially on mobile.
Am I missing something?
EDIT:
I wanted to add: Construct being tightly tied to vsync can also cause performance issues, at least in theory. Let's assume my game performs 50,000 collision detections per second. That is... as long as it's running on a 60 Hz screen. Newer phones often have 120 Hz screens, so now it's 100,000 collision detections per second. On a fancy 240 Hz gaming monitor, it's 200,000 collision detections per second. On top of that, don't forget this also means collision detections are inherently framerate dependent, just to add a little extra annoyance.
I'd really like to see these kinds of calculations untied from the monitor refresh rate, at least as an optional feature. I'd like to be able to run the game logic at a fixed rate and have the visuals interpolate on top of it. That would stop collision checks from ballooning out of control for no actual reason, and it would also guarantee framerate independence for collision checks. And yes, it would slow down the game if the fps dip too low. That's a tradeoff I'm willing to make, considering this approach is good enough for a multi-billion dollar company: youtu.be/dDxMv33QlFs
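To be concrete about what I mean, here's a rough sketch of a generic fixed-timestep loop with render interpolation (plain TypeScript, nothing Construct-specific; updateLogic and renderInterpolated are just placeholder names I made up):

```typescript
// Rough sketch of a generic fixed-timestep loop with render interpolation.
// This is NOT Construct's actual engine code; updateLogic and
// renderInterpolated are placeholder names.

const LOGIC_DT = 1 / 60;          // game logic always steps at 60 Hz
let accumulator = 0;
let lastTime = performance.now();

function updateLogic(dt: number): void {
  // collision checks, raycasts, movement etc. would run here at a fixed rate
}

function renderInterpolated(alpha: number): void {
  // draw sprites blended between their previous and current logic positions
}

function frame(now: number): void {
  // Real time since the last frame, whatever the monitor's refresh rate is.
  accumulator += (now - lastTime) / 1000;
  lastTime = now;

  // Step the logic in fixed chunks. A 240 Hz screen just renders more frames
  // per logic step; it doesn't run more logic steps per second.
  while (accumulator >= LOGIC_DT) {
    updateLogic(LOGIC_DT);
    accumulator -= LOGIC_DT;
  }

  // How far we are into the next logic step (0..1), used for blending.
  renderInterpolated(accumulator / LOGIC_DT);
  requestAnimationFrame(frame);
}

requestAnimationFrame(frame);
```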
There's a reason it's done that way. I've run into many, many inconsistencies because of the vsync tie. So many times I've made something that works nicely, then remembered I have a high refresh rate screen and swapped to 60 Hz. Boom, everything's broken. No amount of deltatime is gonna save me here, because how am I supposed to deltatime the rate of raycasts being performed if it's tied to vsync? So I could do something like "every 0.0167 seconds -> do raycast" to have it run at "60 fps". Does that work? Honestly, I don't even know, probably not. What about the Platform behavior? That's also tied to vsync, but I don't have the ability to execute it only every 0.0167 seconds. How can I tie all of this together without massive spaghetti? I probably can't.
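For what it's worth, the accumulator version of that "every 0.0167 seconds" idea would look roughly like this, assuming some per-frame hook that hands me the frame's delta time (onTick and doRaycast are made-up placeholder names, not a real Construct API), and it still can't reach into built-in behaviors like Platform:

```typescript
// Sketch: throttle expensive checks to a fixed rate even though the tick
// itself fires once per rendered frame. onTick and doRaycast are
// hypothetical placeholders.

const RAYCAST_DT = 1 / 60;   // target: ~60 raycast passes per second
let raycastAccumulator = 0;

function doRaycast(): void {
  // the expensive raycast / collision work
}

// Assume something calls this once per frame with that frame's delta time
// in seconds (~0.0083 on a 120 Hz screen, ~0.0167 on 60 Hz).
function onTick(dt: number): void {
  raycastAccumulator += dt;
  // Catch up in whole steps, so 60 Hz and 240 Hz screens both end up doing
  // roughly 60 raycast passes per second instead of one pass per frame.
  while (raycastAccumulator >= RAYCAST_DT) {
    doRaycast();
    raycastAccumulator -= RAYCAST_DT;
  }
}
```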
But I also don't expect this to change, because changing it is probably a massive task. What I do know is that the whole interpolate-on-top approach works, because I sort of do exactly that in my current project. Technically speaking, it runs at ~13 fps: I get position updates for my sprites, which I then simply tween to their new positions. Visually it runs at a normal framerate; everything in the background just happens at 13 fps. And it will run at 13 fps in any case.
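Roughly, the tween-on-top part boils down to lerping between the last two logic positions. A sketch of the idea with made-up names (not my actual project code):

```typescript
// Sketch of tweening a sprite between low-rate logic updates. All names are
// hypothetical; this is the idea, not my project's events/code.

const LOGIC_RATE = 13;              // logic updates per second
const LOGIC_DT = 1 / LOGIC_RATE;

interface Vec2 { x: number; y: number; }

let prevPos: Vec2 = { x: 0, y: 0 }; // position from the previous logic update
let currPos: Vec2 = { x: 0, y: 0 }; // position from the latest logic update
let timeSinceUpdate = 0;            // seconds since the latest logic update

// Called whenever the 13 fps logic produces a new position for the sprite.
function onLogicUpdate(newPos: Vec2): void {
  prevPos = currPos;
  currPos = newPos;
  timeSinceUpdate = 0;
}

// Called every rendered frame; returns where to actually draw the sprite.
function renderPosition(dt: number): Vec2 {
  timeSinceUpdate += dt;
  const alpha = Math.min(timeSinceUpdate / LOGIC_DT, 1); // 0..1 between updates
  return {
    x: prevPos.x + (currPos.x - prevPos.x) * alpha,
    y: prevPos.y + (currPos.y - prevPos.y) * alpha,
  };
}
```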