Progress in Framerate-Independent Platforming

  • A while back, I stumbled upon a thread where some game developers noticed pixel-perfect platformers had different jump heights depending on the framerate.

    This is something we had confirmed as an issue a while back as well, thanks for the amazing work tracking down the issue and providing a solution to us fellow C2 users!

    It's pretty funny that you not only did the in-depth debug/investigations, but even created the solution/bug fix lol!


  • Okay, I guess we should fix this, but it's a breaking change and would throw loads of existing games ever-so-slightly off. So I think we need a project setting like "enable better acceleration precision", which defaults to on for new projects but off for old projects, or something like that.

  • "Okay, I guess we should fix this, but it's a breaking change and would throw loads of existing games ever-so-slightly off. So I think we need a project setting like 'enable better acceleration precision', which defaults to on for new projects but off for old projects, or something like that."

    Sounds like a good idea for keeping the old projects working.

    Also, amazing work everyone who contributed to this solution. It's great to have such a community with users who spend their time discovering/analysing/fixing such problems. Thank you.

  • Sounds like a great solution ASHLEY. And well done calebbennetts for identifying the cause of this problem!

  • cooperation is a beautiful thing to behold.

  • There's a great resource on motion integrators here. A lot of the jump height drift can be improved by using time-corrected Verlet or an RK4 integrator for behaviors (see the first link), instead of Euler-based solutions. However, I would like to point out that even those don't solve the problem completely.

    The thing is, although all these fancy integrators support different framerates, they are not designed to handle dt fluctuating at every frame. Their formulas expect a constant dt throughout the simulation. In practice C2 uses a variable timestep, which introduces small fluctuations in dt at every frame; those in turn induce small errors that accumulate and can grow large over time.
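
    For reference, the time-corrected Verlet step mentioned above looks roughly like this (a sketch with names of my own, not any behavior's actual code):

    // Time-corrected Verlet: scale the implied velocity by the ratio of
    // the current and previous timesteps, so a varying dt distorts the
    // motion less than plain Verlet or Euler would.
    function tcvStep(x: number, xPrev: number, accel: number,
                     dt: number, dtPrev: number): number {
      // x_next = x + (x - xPrev) * (dt / dtPrev) + accel * dt^2
      return x + (x - xPrev) * (dt / dtPrev) + accel * dt * dt;
    }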

    To fully solve this and ensure the jump height is always the same, it's necessary to use a deterministic approach, i.e. a fixed timestep for the simulation. That's the solution most engines use nowadays, in combination with interpolation or extrapolation on the rendering side to handle variable framerates. There are some great resources about this in the links below; the second one has a great example of how small errors turn big when using variable timesteps.

    http://gafferongames.com/game-physics/fix-your-timestep/

    http://www.koonsolo.com/news/dewitters-gameloop/
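
    The core of the fixed-timestep idea from the first link, as a minimal sketch (simulate and render are hypothetical placeholders, assuming a 60 Hz logic rate):

    // Fixed-timestep loop with an accumulator (a sketch of the approach
    // in the first link; simulate/render are hypothetical placeholders).
    const STEP = 1 / 60;                     // fixed logic timestep, seconds
    let accumulator = 0;

    declare function simulate(dt: number): void;  // steps the game logic
    declare function render(alpha: number): void; // draws, blending states

    function onFrame(frameDt: number): void {
      accumulator += frameDt;
      while (accumulator >= STEP) {          // zero or more whole logic steps
        simulate(STEP);                      // always the same dt: deterministic
        accumulator -= STEP;
      }
      render(accumulator / STEP);            // leftover fraction, for interpolation
    }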

    Now it's probably not practical to implement this last solution in C2, but I would love to see it for C3. Besides solving the varying jump height problem, it also eliminates missed collisions at low framerates (AKA tunneling); it removes the randomness of each play session, which helps enormously with replay functions or games where the physics must behave exactly the same every time; it allows setting a lower simulation rate (like 30 fps) to free more CPU budget (especially good for mobile); and it makes things easier for beginners, since it's no longer mandatory to add dt in events that deal with time. It can also be backwards compatible by automatically setting the value of dt to compensate for the movement rate of objects, in case the new fixed timestep is different from 60.

  • "Now it's probably not practical to implement this last solution in C2, but I would love to see it for C3. Besides solving the varying jump height problem, it also eliminates missed collisions at low framerates (AKA tunneling); it removes the randomness of each play session, which helps enormously with replay functions or games where the physics must behave exactly the same every time; it allows setting a lower simulation rate (like 30 fps) to free more CPU budget (especially good for mobile); and it makes things easier for beginners, since it's no longer mandatory to add dt in events that deal with time. It can also be backwards compatible by setting the value of dt to compensate for the movement rate of objects, in case the new fixed timestep is different from 60."

    Very big "YES PLEASE!" from here on that. Those first two issues you highlighted are ones that quite a few of our customers on Steam have faced (and often left negative feedback/reviews mentioning them), but they don't realize it's due to their hardware, thinking "It's just a 2D game!"

    I'd rather they have a visual frame drop than a logic frame skip: one says "This game might be using my resources poorly, or my PC is too old", while the other makes them think "Geez, this game is buggy!" when it's collision/platformer behaviour code I didn't even write.

  • Animmaniac: Woah … My mind is blown. That WOULD be good for C3.

    Aphrodite: I agree, we need to test at lower framerates. I just meant I’m not sure how to force it low enough to test it.

    Here’s the spreadsheet I had up earlier, with newly-corrected numbers comparing the actual physics, the built-in behavior from C2, and my Platform Minus. It shows two important things:

    1. IF we could assume a constant framerate, Platform Minus would give less variation in jump height than the built-in behavior as the framerate is set lower and lower. This indicates, but doesn’t absolutely prove, that Platform Minus would improve consistency between machines.

    2. In both Platform Minus and the current behavior, velocity seems to lag behind position by one tick. This might mean position should technically be calculated before velocity (so it uses the previous tick's velocity values), but I'm not 100% certain, and I also couldn't say whether you would get enough extra accuracy to justify the change (see the sketch below).
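
    To illustrate the update-order question in point 2, a sketch with made-up names (not the behavior's real code):

    // Two orders for the same Euler step. Position-first uses the
    // previous tick's velocity (explicit Euler); velocity-first uses
    // the freshly updated velocity (semi-implicit Euler).
    function positionFirst(y: number, v: number, g: number, dt: number) {
      const y1 = y + v * dt;    // previous tick's velocity
      const v1 = v + g * dt;
      return { y: y1, v: v1 };
    }
    function velocityFirst(y: number, v: number, g: number, dt: number) {
      const v1 = v + g * dt;
      const y1 = y + v1 * dt;   // this tick's velocity
      return { y: y1, v: v1 };
    }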

    Thanks for your support, everyone else who’s commented so far. Thanks for your help with the math, Aphrodite and Colludium, and especially, thanks, Ashley, for considering my idea.

    Also, would anybody mind if I saved a copy of this forum topic as it stands now? It might come in handy as project experience on a resume. Collaborating to solve a technical problem, providing data to support my claims…

  • calebbennetts

    "Also, would anybody mind if I saved a copy of this forum topic as it stands now? It might come in handy as project experience on a resume. Collaborating to solve a technical problem, providing data to support my claims…"

    I don't have an issue with that. All my posts are meant to help, unsure about the others.

  • Same here

  • Yup

  • calebbennetts - sure.

    Animmaniac - I really am not very persuaded by other time stepping algorithms. Sure, someone wrote a blog post saying "this is the best way to do it!" but that doesn't mean I agree. The way I see it is you either end up stepping more than the rendered framerate, which means you're burning CPU simulating stuff that isn't displayed, or you step less than the rendered framerate, which means there's nothing new to draw at the next rendered frame.

    Also people are very sensitive to jank/juddering/uneven movement. If the logic is running at a different rate to the rendering, the motion becomes uneven. For example logic at 45 Hz with rendering at 60 Hz means every rendered frame shows alternating step distances, e.g. 10px, 15px, 10px, 15px... given how sensitive people are to this (particularly when scrolling), I think the only solution would be to go for an integer multiple/divisor of the framerate to guarantee every rendered step moves the same distance. E.g. logic at 2x, 3x, or 1/2, 1/3.

    This means then that if you want faster logic than the display rate, it will have to go at least twice as fast, to 120Hz. This also means if your current CPU usage is 60%, you don't have enough overhead to do that, and your game will start lagging. It also will use more battery, make phones hotter, and significantly lower your headroom to do interesting stuff with the CPU.

    If you want slower logic than the display rate, it'd probably have to be 30 Hz. What I don't really get about this is if you only step the game at 30 Hz, there is nothing new to draw in between logic frames. I think some devs propose a "lite" step in between which just advances motion without doing anything else, but this worsens tunnelling problems (new scenario: object actually seen colliding, but no collision registered!), and could still use a bunch of CPU e.g. if it's a large physics game. Also, to reliably schedule at half-display-rate we need browser support.

    Finally, the really obvious problem with fixed timesteps is systems which have different display rates. VR is making rates like 90 Hz more common, with plans to go even higher. Gaming displays that run at 120Hz have already existed for a while. Variable-rate gaming displays like G-sync are now available too. Maybe Apple or Samsung will decide their next big differentiator is to run their next phone at 120Hz to make everything even smoother. It's naive to think everyone runs 60 Hz and always will, and if you only run logic at an integer multiple/divisor, there is no solution - you can't choose a logic rate that covers all these display rates with smooth motion. And don't forget on weak systems fixed-step games go into slo-mo instead of stepping further to compensate.

    So IMO the existing approach of variable steps has a lot of benefits:

    • with accurate v-sync, motion is perfectly smooth
    • all display rates are accommodated equally (which is also good future-proofing)
    • CPU usage is matched to the framerate and is no higher than necessary
    • slowdowns on weak systems are compensated for

    The downsides include tunnelling with large steps, but this can be mitigated in the main cases: e.g. the bullet behavior in Classic had a special stepping feature specifically to solve that problem, which I always intended to port to C2 (but never quite got round to it), or there could be a stepping behavior to handle it. Then this CPU-intensive stepping only applies to things that really need it, rather than the whole game globally.
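
    For context, that kind of stepping usually looks something like this sketch (hypothetical names, not the actual Classic bullet code):

    // Sub-step a fast object so one large dt can't skip a thin obstacle
    // (a sketch; collidesAt is a hypothetical collision query).
    declare function collidesAt(x: number, y: number): boolean;

    function steppedMove(x: number, y: number, vx: number, vy: number,
                         dt: number, maxStepPx: number) {
      const dist = Math.hypot(vx * dt, vy * dt);
      const steps = Math.max(1, Math.ceil(dist / maxStepPx));
      for (let i = 0; i < steps; i++) {
        x += (vx * dt) / steps;
        y += (vy * dt) / steps;
        if (collidesAt(x, y)) break;        // stop at the first contact
      }
      return { x, y };
    }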

    The other downside is non-deterministic gameplay. To some extent we can improve this with better maths, as this thread shows. However I've always thought of this like floating point rounding errors: the calculations are never going to be exactly correct, so you always build in a tolerance instead of expecting exact answers. For example even with fixed steps, you can never assume your platform behavior will land at Y = 100, because floating point errors (literally occurring in the CPU circuitry) mean it will probably land at something more like Y = 99.9999999978. Given you need this kind of tolerant approach for that, I always thought this was wise to apply to motion as well, doubly so for the errors in time step. So instead of expecting the player to make a jump within 0.1px, make sure it works with say a 2px threshold (assuming we fix the math), and so on. There are ways to work around other problems as well, for example a record/replay feature could simply record the sequence of actual measured delta-times, and replay them for the purposes of making a deterministic replay.
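
    The record/replay idea at the end there could be as simple as this sketch (names are mine):

    // Record each tick's measured dt during play, then feed the same
    // sequence back (with the same inputs) for a deterministic replay.
    const recordedDts: number[] = [];

    declare function step(dt: number): void; // the game's normal update

    function tickLive(measuredDt: number): void {
      recordedDts.push(measuredDt);
      step(measuredDt);
    }

    function tickReplay(frame: number): void {
      step(recordedDts[frame]);             // same dts => same motion
    }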

    So my focus has always been on variable timestep, then adding features to help work around tunnelling and non-determinism when they pose a problem. So my approach for C3 is to add some of those mentioned features. The way I see it, changing how the stepping itself works has worse downsides.

  • I think that the simplest accurate method, one that Sir Isaac would approve of, is to use s = ut + 0.5*a*t^2.

    So, if V0 and Y0 are values from the previous tick, and g is the pixel gravity:

    Y1 = Y0 + V0*dt + 0.5*g*dt^2

    V1 = V0 + dt*g

    (i.e. the exact method from the spreadsheet linked above). Any other method will introduce different errors...

    A raycast is required to prevent tunneling and other high-speed / low-fps problems.
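
    In code, that exact step might look like this (a sketch following the formulas above):

    // Exact constant-acceleration step, s = u*t + 0.5*a*t^2, applied
    // to vertical motion under pixel gravity g.
    function gravityStep(y0: number, v0: number, g: number, dt: number) {
      const y1 = y0 + v0 * dt + 0.5 * g * dt * dt; // position from old velocity
      const v1 = v0 + g * dt;                      // then update velocity
      return { y: y1, v: v1 };
    }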

  • The 'jump' is pretty well dt-corrected. It's the 'gravity' that is not.

  • "Also people are very sensitive to jank/juddering/uneven movement. If the logic is running at a different rate to the rendering, the motion becomes uneven. For example logic at 45 Hz with rendering at 60 Hz means every rendered frame shows alternating step distances, e.g. 10px, 15px, 10px, 15px... given how sensitive people are to this (particularly when scrolling), I think the only solution would be to go for an integer multiple/divisor of the framerate to guarantee every rendered step moves the same distance. E.g. logic at 2x, 3x, or 1/2, 1/3."

    "...and then if you only run logic at an integer multiple/divisor, there is no solution - you can't choose a logic rate that covers all these display rates with smooth motion."

    I think you may be missing or misinterpreting the part about interpolation/extrapolation. The idea is that the logic should run at a constant rate, while the rendering is a separate process that interpolates or extrapolates the logic steps to match whatever framerate the display uses. The render and the logic don't even need to be in sync; they could be different threads running in parallel on different CPU cores.

    To use your example, if interpolation was used the result would be a smooth movement with no jank whatsoever:

    *The gray balls represent the logic steps which are the real positions of the simulated object, while the red balls are the interpolated positions used just for rendering (they have no influence on the simulation).

    With this method it's possible to render at any display rate without affecting the simulation:

    As you can see, with 120fps you just need to sample the logic steps more often while the simulation remains unaffected. Everything would behave the same regardless of framerate, just like when testing. There would be no unexpected behavior like tunneling on untested framerates, since the simulation is always the same. And like variable timestep, CPU usage is matched to the framerate and can even be smaller, since it doesn't need to step the logic at every rendered frame. This means it can use less battery, make phones cooler, and significantly increase the headroom to do interesting stuff with the CPU.
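
    A minimal sketch of the interpolated sampling described above, assuming the renderer keeps the two most recent logic states (names are mine):

    // Blend the two most recent logic states for rendering only;
    // alpha in [0, 1] is the fraction of a logic step elapsed since
    // the last one (State is a made-up type).
    interface State { x: number; y: number; }

    function lerpState(prev: State, curr: State, alpha: number): State {
      return {
        x: prev.x + (curr.x - prev.x) * alpha,
        y: prev.y + (curr.y - prev.y) * alpha,
      };
    }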

    "What I don't really get about this is if you only step the game at 30 Hz, there is nothing new to draw in between logic frames."

    "...I think some devs propose a 'lite' step in between which just advances motion without doing anything else, but this worsens tunnelling problems (new scenario: object actually seen colliding, but no collision registered!), and could still use a bunch of CPU e.g. if it's a large physics game."

    This is where the debate about interpolation vs extrapolation comes in. With interpolation you get a very smooth result, but it comes at the cost of rendering with up to 1 logic step of delay. Since you need to know the next logic step to do a proper interpolation with the previous step, you delay the render until the next step is available. This adds a bit of latency between input and feedback. For a logic step of 30fps the latency is about 33ms, for 45fps 22ms, for 60fps 16ms (it actually looks like this on a timeline). But it should be fine for most games, since according to Wikipedia the average input latency for games is 133ms, or 67ms for quick action games. If a game needs a faster input response, one can just increase the rate of the logic step.

    The other option is extrapolation, which is basically using past steps to try to guess the future. It doesn't have the downside of latency, but it adds a bit of imprecision to the perceived movement. For instance, a fast-moving object may seem to overlap a solid for a fraction of a second before moving to its true just-touching position. Or an object that changes direction constantly may drift a bit from its real position in the simulation. However, this has no influence on tunneling, since it only affects the display position, not the game simulation. What collides in preview should also collide at any framerate. It may cause the perception of some objects colliding but not registering, though (when passing close on a curved motion).
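
    Extrapolation replaces that blend with a projection from the latest state (again just a sketch):

    // Project the latest logic state forward by the real time elapsed
    // since that step, using its velocity; may overshoot briefly at
    // direction changes, as described above.
    function extrapolate(x: number, y: number, vx: number, vy: number,
                         timeSinceStep: number) {
      return { x: x + vx * timeSinceStep, y: y + vy * timeSinceStep };
    }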

    A third option is combining the two to get an average of the benefits and compromises of both. For example, you could delay only half a logic step, using extrapolation for the first half (while the next logic step is not yet available) and interpolation for the second half (after the new logic step is available). This would result in half the latency with a bit of drifting. It's even possible to turn this into a slider that goes from full interpolation to full extrapolation with proportional steps in between.

    The thing is, depending on the game, one solution may be preferred over the other. In my view interpolation is superior, but having an option to switch between the two is feasible.

    "And don't forget on weak systems fixed-step games go into slo-mo instead of stepping further to compensate."

    That's true, but speaking as a developer I'd rather the game go into slo-mo on systems that are too weak than have objects going through walls or jumping different heights. Although if my game were failing to run at a decent speed on my target devices, I would probably adjust the logic step rate to accommodate those weaker systems and avoid any slo-mo.

    If desired, it's also possible to still use an integer dt in those cases and step the logic only once: if the logic needs to step 2 times in a render frame, you set dt to 2 and step once. The result would be much more stable than a dt that varies every frame by complex fractions.

    However, I would still prefer the game to step the logic twice instead of using dt, simply because it makes things more deterministic and easier to deal with. After all, most games are limited by rendering and not by logic, so stepping the logic more than once in a slow frame should not add too much overhead.
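
    The two options might look like this (sketch; STEP and stepLogic are hypothetical):

    // Two ways to catch up when a render frame spans two logic steps.
    const STEP = 1 / 60;
    declare function stepLogic(dt: number): void;

    function catchUpWithDt(pendingSteps: number): void {
      stepLogic(pendingSteps * STEP);       // one step with an integer multiple
    }
    function catchUpDeterministic(pendingSteps: number): void {
      for (let i = 0; i < pendingSteps; i++) stepLogic(STEP); // repeat fixed step
    }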

    "For example even with fixed steps, you can never assume your platform behavior will land at Y = 100, because floating point errors (literally occurring in the CPU circuitry) mean it will probably land at something more like Y = 99.9999999978."

    The main advantage of a fixed step is that the behavior will always land at the same height regardless of framerate, given the same input conditions, while with a variable timestep you can't ensure that. It's less about the math being 100% correct and more about the movements being predictable. The same maximum jump height you get when previewing at 60fps, you will get at any framerate. And with new displays increasing their refresh rates, there's a chance the drift (when using a variable timestep) will tend to get worse, as exemplified in the link I posted earlier.

    So I still find this may be a better solution than the current variable timestep. It adds a bit of overhead due to the interpolation, but more than compensates by not having to execute the logic at every rendered frame and by being able to run the renderer in parallel. And as a bonus it solves other problems like unpredictable tunneling, nondeterminism, and the difficulty beginners have dealing with dt.
