Ashley's Forum Posts

  • I just released a Greenworks plugin update for v0.82.0 using Greenworks v0.15.0, which supports a universal binary for macOS.

  • Once you export with nwjs and disallow chrome dev tools, can a player force chrome dev tools back on?

    Yep: just edit package.json and delete --disable-devtools, and dev tools works again.

    You can't really disable dev tools. If someone wants them they can get them. Disabling dev tools is not really a security measure, it basically just disables the built-in shortcuts. Still, even if you have dev tools, it's not necessarily easy to make any useful changes, especially if you advanced minify the export.
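    For reference, this is a sketch of roughly what the NW.js manifest looks like. The exact fields and flags in a Construct export may differ; `chromium-args` is the standard NW.js manifest field, and `--in-process-gpu` is shown only as an illustrative extra flag:

```json
{
  "name": "my-game",
  "main": "index.html",
  "chromium-args": "--disable-devtools --in-process-gpu"
}
```

    Removing `--disable-devtools` from the `chromium-args` string is the edit described above that re-enables dev tools.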

  • look at the global variable timetocomplete to get the duration the test took.

    Your project does not display this variable anywhere. Are you looking at it in the debugger? Debuggers have performance overhead, so your test is then largely measuring the debugger overhead. Normal preview is faster.

    for (let i = 0; i <= 1000000; i++)
    	runtime.globalVars.NumberA =
    		Math.sqrt(runtime.random(100) + runtime.random(100));


    Code like this is still an unfair test: an optimizer may be able to identify that the same value is overwritten 1000000 times, so it can skip the first 999999 calls; then, since the 'i' variable is unreferenced, it is OK to replace the loop with just one assignment. I don't know whether it actually does that optimization, but in theory it can, so you shouldn't write a test like that. A better approach is usually to at least sum a number every iteration and then display the result (so the sum is not dead-code eliminated).
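    To illustrate, here is a minimal sketch of that kind of test in plain JavaScript (using `Math.random` and `Date.now` so it runs standalone; a real Construct test would use the runtime's equivalents). Accumulating into a sum and then using the result stops the optimizer from deleting the work:

```javascript
// Benchmark sketch: accumulate into a sum so the JIT cannot
// dead-code-eliminate the work, then consume the result.
function runBenchmark(iterations) {
	let sum = 0;
	const start = Date.now();
	for (let i = 0; i < iterations; i++) {
		// The work under test; reading Math.random() each iteration
		// stops the optimizer from hoisting a constant result.
		sum += Math.sqrt(Math.random() * 100 + Math.random() * 100);
	}
	const ms = Date.now() - start;
	// Displaying (or otherwise using) the sum keeps it live.
	console.log(`sum=${sum.toFixed(0)} in ${ms}ms`);
	return sum;
}

const result = runBenchmark(1000000);
```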

    I profiled the empty JS block case. An empty loop does nothing, so adding anything to the loop - even an empty async function call - means you're measuring nothing vs. something, so the numbers will look big. I don't think that by itself means the result is meaningful. Still, it seems to spend most of its time in GC; perhaps the fact it's an async function call means it has some GC overhead, and so you end up measuring that.

    I don't think that's a sensible thing to do if performance is important though. Crossing interfaces always has a performance cost. For example in some C# engines, calling C++ code is expensive. So if every time you call a math function in C# it has to jump in to C++ code, and you benchmark it, it'll be slow as it's all overhead for jumping between interfaces. The same thing is happening here. So high-performance engines tend to do all their math stuff in C# so there is no overhead and only call in to C++ for more expensive things (which then have all their own C++ math functions, so there's no need to jump back to C# for that).

    The obvious thing to do is if you need maximum performance, write the loop itself in JavaScript, and avoid mixing interfaces (in this case event blocks and JS). So why not just do that?

  • The issue is this: (JS blocks have a surprising overhead, functions unsurprisingly have overhead, and must be inlined when used frequently)

    The overhead to call a JS block in the event sheet is: "call an async function". So this does not really make technical sense. It should be a minimal overhead. Hence my assumption the benchmark was wrong (especially since you talked about doing things which would qualify as an unfair test, as a good JIT would replace or delete the code you are running).

    If you have a concern about performance, it is 10x more helpful to share an actual project file demonstrating a realistic situation, than just talking about it.

  • Picking is a concept that only applies to event sheets. There is no notion of picking in JavaScript code. So that code operates on all instances of Character and has no involvement in picking whatsoever.

    However from a code block in an event sheet, you can use pickedInstances() to iterate just the instances picked in the currently running event block. You could then also pass these to a function in a script file. But that doesn't mean JavaScript understands picking - from the perspective of JavaScript it just means you're passing an iterator (or array) with some specific subset of content.
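    A plain-JavaScript sketch of that last point (the `Character` type and `pickedInstances()` are Construct concepts; here they are stood in by ordinary objects and a generator, since from JavaScript's perspective picking is just "some iterable subset"):

```javascript
// Stand-in for all instances of an object type.
const allCharacters = [
	{ id: 1, health: 100 },
	{ id: 2, health: 40 },
	{ id: 3, health: 75 }
];

// Stand-in for what an event block's picking might produce:
// simply an iterable over some subset of the instances.
function* pickedSubset(predicate) {
	for (const inst of allCharacters)
		if (predicate(inst)) yield inst;
}

// A script-file function: it has no notion of picking; it just
// iterates over whatever it is handed.
function healAll(instances, amount) {
	for (const inst of instances)
		inst.health += amount;
}

// Heal only the "picked" instances (health below 80).
healAll(pickedSubset(c => c.health < 80), 10);
```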

    Whatever the case, doing the above 1000 times with only 1 Character in the scene adds 10% cpu load, which makes it feel like I'm doing something terribly wrong with that.

    Ignore such measurements, for the reasons I described here.

  • Read Optimisation: don't waste your time. Questions like this are generally pointless. Just make your project ignoring performance. If it is not fast enough by the end, then you can spend time optimising it, but there's a good chance it will be fast enough anyway. JavaScript is extraordinarily fast and you can probably just trust it to be fast enough.

    Performance testing is hard, and your test is an unfair one:

    If I run Normalize(1,1) in a js block on an event "repeat" 1000 times, the cpu is running at around 22%, nothing else is going on.

    JavaScript JIT optimisers are extremely sophisticated these days. If you run Normalize(1,1) repeatedly, the optimiser may well notice "well, that calculates to (0.70710678118, 0.70710678118), and so I'll just replace the call with the answer". Then depending on your benchmark it might figure out that you don't actually use the answer, and so completely delete the code. So now your performance benchmark is doing nothing, and is just the overhead of the "repeat".
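    For reference, `Normalize(1,1)` is a pure computation with a fixed answer, which is exactly why a JIT is free to fold it away. A hypothetical definition (the real function in the project may differ) and its constant result:

```javascript
// Hypothetical Normalize: scales a 2D vector to unit length.
function Normalize(x, y) {
	const len = Math.hypot(x, y);
	return [x / len, y / len];
}

const [nx, ny] = Normalize(1, 1);
// Both components equal 1/sqrt(2) ≈ 0.70710678... Since the inputs
// are constants, an optimizer may substitute this answer directly
// and, if nothing reads it, delete the call entirely.
console.log(nx, ny);
```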

    Add to that the fact that timer-based CPU and GPU measurements are subject to hardware power management, which makes them misleading at low readings, and it can be hard to tell if something is actually faster or not. Tests like this are only meaningful if you run them at full capacity and use an FPS reading to eliminate the effect of power management.

    Your other tests may or may not actually calculate a normalization depending on how the code is run. In other words tests like these are usually meaningless and will just confuse and mislead you, unless you have good understanding of the JIT and take care to ensure it actually runs the code you intend.

  • I don't think Greenworks plugin includes an Apple Silicon build of the native component yet.

  • "Load image from URL" just creates an individual texture for the loaded image, and then replaces anything using that animation frame with a reference to the new texture. So yes, that particular feature works as if spritesheeting is disabled: the runtime has no system for dynamically creating spritesheets, that's only done by the editor at the moment. I guess it's theoretically possible the runtime could do dynamic spritesheeting, but the editor spritesheeting system is extremely complicated, and so I would be very reluctant to have to bring that all in to the runtime.

    I looked in to some of the old WebGL 1 code, and it's actually more subtle than I originally remembered: WebGL 1 can load non-power-of-two (NPOT) textures, but only if they're not tiled; then there are still some more restrictions that apply to NPOT textures, like being unable to generate mipmaps. If you want a NPOT texture for Tiled Background, Construct stretches it up to a power-of-two size, which slightly degrades the quality; if you downscale a NPOT texture with WebGL 1 and mipmaps enabled, it will have poorer quality as WebGL 1 can't actually generate mipmaps in that case. These limitations tend to manifest as a degradation in display quality on WebGL 1 systems, but that's not as bad as failing entirely, so I guess I mis-remembered that. One of the benefits of spritesheeting is since all the spritesheets are a power-of-two size, it side-steps all the NPOT restrictions that WebGL 1 has, so improves compatibility and display quality.
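    The WebGL 1 rules described above can be summarised as a small decision sketch (the function and its return shape are invented for illustration; the restrictions themselves are per the WebGL 1.0 specification):

```javascript
// WebGL 1 NPOT rules: non-power-of-two textures are allowed, but
// cannot use REPEAT wrapping (tiling) and cannot have mipmaps
// generated for them.
function isPowerOfTwo(n) {
	return n > 0 && (n & (n - 1)) === 0;
}

function webgl1TextureSupport(width, height, { tiled, mipmaps }) {
	const pot = isPowerOfTwo(width) && isPowerOfTwo(height);
	if (pot)
		return { ok: true, note: "no restrictions" };
	if (tiled)
		return { ok: false, note: "NPOT textures cannot tile; stretch to a power-of-two size first" };
	if (mipmaps)
		return { ok: true, note: "usable, but mipmaps cannot be generated, so downscaling quality suffers" };
	return { ok: true, note: "usable with clamped wrapping and no mipmaps" };
}
```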

    A good example of "how hard can it be to load a texture?" - pretty complicated in the end when you take in to account old and more limited hardware. All the other stuff about potential performance impact applies though, which will vary depending on the game (I suspect it could range from "no significant impact" to "totally trashes game performance").

    So I think this approach might be worth thinking about:

    Perhaps there's other ways to solve this - for example if Construct could export metadata about spritesheets (i.e. the location and size of every image), then in theory some external tool could take a folder of individual image files and paste them over spritesheets, and that could be robust against future changes to the project. It would keep the benefits of spritesheets and only need a minor change to Construct, exporting some extra info it already has.

  • my suggestion was to have a bool in the export options to enable or disable based on the preferences of the developer.

    My point still applies. If the developer chooses to turn it off, then players of that game who aren't interested in modding still get a potentially big performance hit, which seems a shame for them.

    I tested moving it to (100, 100) and it worked the same as the browser. Maybe it's an issue with your calculation. It's hard to help unless you file an issue following all the guidelines, as it's usually impossible to help without that information.

  • If you run in to any problems with Construct please file an issue following all the guidelines, as usually it is impossible to help without all the requested information.

    I don't think it's a simple thing to just turn off spritesheets. Spritesheets are always power-of-two sized, and while WebGL 2+ supports non-power-of-two texture sizes, WebGL 1 does not. There are still enough WebGL 1 systems out there that if it just exported a bunch of non-power-of-two individual images, your game would be broken on a small percentage of systems - small, but enough that you'll get a lot of "game doesn't work" reports.

    Then there are other issues: larger games that lose the benefits of spritesheets could become significantly larger to download, significantly slower to load, and take a runtime performance hit too. It could be a massive deoptimisation for your game that slows things down even for players who don't want to mod anything and just play the stock game, which I think is an unfortunate trade-off.

    Perhaps some of that can be mitigated in various ways, but it's complicated. Modding is a very large area if you want to go further to adding new kinds of objects, changing game logic, adding new kinds of level themes, etc. as well. I fear this may be one of these areas people say "just add A!" then "just add B!" then "just add C!" and then on for another 20 things over months/years.

    Perhaps there's other ways to solve this - for example if Construct could export metadata about spritesheets (i.e. the location and size of every image), then in theory some external tool could take a folder of individual image files and paste them over spritesheets, and that could be robust against future changes to the project. It would keep the benefits of spritesheets and only need a minor change to Construct, exporting some extra info it already has.
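    A rough sketch of how that metadata and an external tool's lookup might work. The metadata format and field names here are entirely invented for illustration; Construct exports no such file today:

```javascript
// Hypothetical exported metadata: for each spritesheet, the name
// and rectangle of every image placed on it.
const sheetMetadata = {
	"sheet0.png": [
		{ name: "player-idle-0", x: 0,  y: 0, w: 64, h: 64 },
		{ name: "player-idle-1", x: 64, y: 0, w: 64, h: 64 }
	]
};

// Given a replacement image's name, find which sheet and rectangle
// it should be pasted over.
function findPlacement(metadata, imageName) {
	for (const [sheet, frames] of Object.entries(metadata)) {
		const frame = frames.find(f => f.name === imageName);
		if (frame) return { sheet, ...frame };
	}
	return null;
}

const placement = findPlacement(sheetMetadata, "player-idle-1");
// → { sheet: "sheet0.png", name: "player-idle-1", x: 64, y: 0, w: 64, h: 64 }
```

    An external modding tool could then composite each replacement image into the given rectangle of the given sheet, keeping the export's spritesheets intact.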

  • You can load sprite images from a URL which might work better. That bypasses the whole spritesheeting system.

  • Update: this thread previously had preliminary documentation for writing WebGPU shaders. As WebGPU support is now more mature, we've moved the documentation to the Addon SDK manual section on WebGPU shaders. This thread is no longer sticky; please refer to the Addon SDK manual for the latest documentation.

  • Why not use an array instead of function parameters?
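    A minimal sketch of the suggestion (the function names are made up for illustration): a single array parameter scales to any number of values without changing the function's signature.

```javascript
// Many positional parameters: the signature must change every time
// another value is needed.
function totalDamageParams(a, b, c, d) {
	return a + b + c + d;
}

// Array version: one parameter, any count of values.
function totalDamage(values) {
	return values.reduce((sum, v) => sum + v, 0);
}

console.log(totalDamage([5, 10, 15, 20])); // → 50
```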