Ashley's Recent Forum Activity

  • Try disabling any browser extensions you have installed. They can interfere with and break Construct.

    Otherwise try pressing F12 and check the browser console for a more detailed error message.

  • Shadow Caster will draw over anything below it in Z order. If something still appears over the shadow, make sure it's below the Shadow Caster in Z order.

  • If you need maximum performance above all else, then choose pure JS. You can still have the performance-sensitive parts in pure JS and the general-purpose stuff in event sheets or mixed event blocks/JS.

  • I just released a Greenworks plugin update for v0.82.0 using Greenworks v0.15.0, which supports a universal binary for macOS.

  • Once you export with nwjs and disallow chrome dev tools, can a player force chrome dev tools back on?

    Yep: just edit package.json and delete --disable-devtools, and dev tools work again.

    You can't really disable dev tools: if someone wants them, they can get them. Disabling dev tools is not really a security measure; it basically just disables the built-in shortcuts. Still, even if someone has dev tools open, it's not necessarily easy for them to make any useful changes, especially if you use advanced minify on the export.
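
    To illustrate how trivial removing the flag is, here's a sketch of a Node script that strips it. It assumes the flag lives in the "chromium-args" field of the exported package.json, which is where NW.js reads Chromium-style flags from.

    const fs = require("fs");

    // Read the exported package.json, remove --disable-devtools, write it back.
    const pkg = JSON.parse(fs.readFileSync("package.json", "utf8"));
    pkg["chromium-args"] = (pkg["chromium-args"] || "")
    	.split(" ")
    	.filter(arg => arg !== "--disable-devtools")
    	.join(" ");
    fs.writeFileSync("package.json", JSON.stringify(pkg, null, 2));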

  • look at the global variable timetocomplete to get the duration the test took.

    Your project does not display this variable. Are you looking in the debugger? Debuggers have performance overhead, so your test is then largely measuring the debugger's overhead; normal preview is faster.

    // The result is overwritten every iteration and 'i' is never read, so an
    // optimizer could legitimately throw most of this work away.
    for (let i = 0; i < 1000000; i++)
    	runtime.globalVars.NumberA =
    		Math.sqrt(runtime.random(100) + runtime.random(100));

    Code like this is still an unfair test: an optimizer may be able to identify that the same value is overwritten 1000000 times, so it can skip the first 999999 calls; then, since the 'i' variable is unreferenced, it is OK to replace the loop with just one assignment. I don't know whether the optimizer actually does that, but in theory it can, so you shouldn't write a test like that. Usually it's better to at least sum a number every iteration and then display the result (so the sum is not dead-code eliminated) - see the sketch at the end of this post.

    I profiled the empty JS block case. Given that an empty loop does nothing, adding anything into the loop - even an empty async function call - means you're measuring nothing vs. something, so the relative numbers will look big; I don't think that by itself makes the result meaningful. Still, it seems to spend most of its time in GC; perhaps the fact it's an async function call means it has some GC overhead, and so you end up measuring that.

    I don't think that's a sensible thing to do if performance is important, though. Crossing interfaces always has a performance cost. For example, in some C# engines, calling C++ code is expensive. So if every time you call a math function in C# it has to jump into C++ code, and you benchmark that, it'll be slow, because it's all overhead for jumping between interfaces. The same thing is happening here. High-performance engines therefore tend to do all their math in C# so there is no overhead, and only call into C++ for more expensive things (which then have their own C++ math functions, so there's no need to jump back to C# for that).

    The obvious thing to do, if you need maximum performance, is to write the loop itself in JavaScript and avoid mixing interfaces (in this case event blocks and JS). So why not just do that?
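
    To make that concrete, the "sum and display" approach could look something like the sketch below as a pure JS loop. It's only a sketch: it assumes it runs somewhere a Construct runtime object is in scope (such as an event-sheet JS block), and the names are illustrative.

    // Every iteration contributes to a sum, and the sum is displayed at the end,
    // so the work can't simply be dead-code eliminated.
    let sum = 0;
    const start = performance.now();
    for (let i = 0; i < 1000000; i++)
    	sum += Math.sqrt(runtime.random(100) + runtime.random(100));
    const elapsed = performance.now() - start;
    console.log(`sum: ${sum}, time: ${elapsed.toFixed(1)} ms`);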

  • The issue is this: (JS blocks have a surprising overhead, functions unsurprisingly have overhead, and must be inlined when used frequently)

    The overhead of calling a JS block in the event sheet is: "call an async function". So that claim does not really make technical sense - it should be minimal overhead. Hence my assumption that the benchmark was wrong (especially since you talked about doing things which would qualify as an unfair test, as a good JIT could replace or delete the code you are running).

    If you have a concern about performance, it is 10x more helpful to share an actual project file demonstrating a realistic situation than just talking about it.
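
    For what it's worth, the cost of "call an async function" can be looked at in isolation with a rough micro-benchmark like the sketch below. It's subject to all the same caveats about micro-benchmarks, the function names are made up, and it is not Construct's internal code; note the async loop also pays for an await per iteration, so it overstates the pure call cost.

    function syncWork(x) { return Math.sqrt(x); }
    async function asyncWork(x) { return Math.sqrt(x); }   // allocates a Promise on every call

    async function main() {
    	// Sum results and log the sum so the JIT can't dead-code eliminate the work.
    	let sum = 0;
    	let start = performance.now();
    	for (let i = 0; i < 1000000; i++)
    		sum += syncWork(i);
    	console.log("sync: ", (performance.now() - start).toFixed(1), "ms", sum);

    	sum = 0;
    	start = performance.now();
    	for (let i = 0; i < 1000000; i++)
    		sum += await asyncWork(i);                      // Promise allocation plus await scheduling
    	console.log("async:", (performance.now() - start).toFixed(1), "ms", sum);
    }

    main();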

  • Picking is a concept that only applies to event sheets. There is no notion of picking in JavaScript code. So that code operates on all instances of Character and has no involvement in picking whatsoever.

    However, from a code block in an event sheet, you can use pickedInstances() to iterate just the instances picked in the currently running event block. You could then also pass these to a function in a script file - see the sketch at the end of this post. That doesn't mean JavaScript understands picking, though: from the perspective of JavaScript you're just passing an iterator (or array) containing some specific subset of instances.

    Whatever the case, doing the above 1000 times with only 1 Character in the scene adds 10% cpu load, which makes it feel like I'm doing something terribly wrong with that.

    Ignore such measurements, for the reasons I described here.
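
    Here's the kind of sketch I mean. The Character object name comes from your question; everything else (the function name, the "Health" instance variable, attaching the function to globalThis so the event block can reach it) is just illustrative.

    // In an event-sheet JS block: iterate only the instances picked by this event's conditions.
    const picked = [...runtime.objects.Character.pickedInstances()];
    for (const inst of picked)
    	inst.y -= 10;                         // example: only the picked instances are affected
    globalThis.applyDamage(picked, 5);        // hand the same instances to a script-file function

    // In a script file: plain JavaScript with no notion of picking - it just
    // receives whatever array it was given.
    globalThis.applyDamage = function (instances, amount) {
    	for (const inst of instances)
    		inst.instVars.Health -= amount;   // assumes an instance variable named "Health"
    };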

  • Read Optimisation: don't waste your time. Questions like this are generally pointless. Just make your project ignoring performance. If it is not fast enough by the end, then you can spend time optimising it, but there's a good chance it will be fast enough anyway. JavaScript is extraordinarily fast and you can probably just trust it to be fast enough.

    Performance testing is hard, and your test is an unfair one:

    If I run Normalize(1,1) in a js block on an event "repeat" 1000 times, the cpu is running at around 22%, nothing else is going on.

    JavaScript JIT optimisers are extremely sophisticated these days. If you run Normalize(1,1) repeatedly, the optimiser may well notice "well, that calculates to (0.70710678118, 0.70710678118), and so I'll just replace the call with the answer". Then depending on your benchmark it might figure out that you don't actually use the answer, and so completely delete the code. So now your performance benchmark is doing nothing, and is just the overhead of the "repeat".

    Add to that the fact that timer-based CPU and GPU measurements are subject to hardware power management, which makes them misleading at low readings, and it can be hard to tell whether something is actually faster or not. Tests like this are only meaningful if you run them at full capacity and use an FPS reading to eliminate the effect of power management.

    Your other tests may or may not actually calculate a normalization, depending on how the code is run. In other words, tests like these are usually meaningless and will just confuse and mislead you, unless you have a good understanding of the JIT and take care to ensure it actually runs the code you intend.
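
    As a concrete illustration of the difference (Normalize here is just a stand-in implementation, not necessarily what your function does):

    // Stand-in: normalize a 2D vector.
    function Normalize(x, y) {
    	const len = Math.hypot(x, y);
    	return [x / len, y / len];
    }

    // Collapsible benchmark: constant inputs, result never used. The JIT is free to
    // fold Normalize(1, 1) to a constant, or delete the whole loop as dead code.
    for (let i = 0; i < 1000; i++)
    	Normalize(1, 1);

    // Harder to collapse: varying inputs, and the results are accumulated and displayed.
    let sum = 0;
    for (let i = 0; i < 1000; i++) {
    	const [nx, ny] = Normalize(i + 1, 1);
    	sum += nx + ny;
    }
    console.log(sum);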

  • I don't think Greenworks plugin includes an Apple Silicon build of the native component yet.

  • "Load image from URL" just creates an individual texture for the loaded image, and then replaces anything using that animation frame with a reference to the new texture. So yes, that particular feature works as if spritesheeting is disabled: the runtime has no system for dynamically creating spritesheets, that's only done by the editor at the moment. I guess it's theoretically possible the runtime could do dynamic spritesheeting, but the editor spritesheeting system is extremely complicated, and so I would be very reluctant to have to bring that all in to the runtime.

    I looked into some of the old WebGL 1 code, and it's actually more subtle than I originally remembered: WebGL 1 can load non-power-of-two (NPOT) textures, but only if they're not tiled; then there are still some more restrictions that apply to NPOT textures, like being unable to generate mipmaps. If you want a NPOT texture for Tiled Background, Construct stretches it up to a power-of-two size, which slightly degrades the quality; if you downscale a NPOT texture with WebGL 1 and mipmaps enabled, it will have poorer quality, as WebGL 1 can't actually generate mipmaps in that case. These limitations tend to manifest as a degradation in display quality on WebGL 1 systems, but that's not as bad as failing entirely, so I guess I misremembered that. One of the benefits of spritesheeting is that since all the spritesheets are a power-of-two size, it side-steps all the NPOT restrictions WebGL 1 has, which improves compatibility and display quality.
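
    For reference, this is roughly what those WebGL 1 NPOT rules look like when handling a texture by hand (plain WebGL, not Construct's renderer code):

    // WebGL 1: non-power-of-two textures can't tile (REPEAT wrapping) or have mipmaps,
    // so they must use CLAMP_TO_EDGE wrapping and a non-mipmap filter.
    function isPowerOfTwo(n) {
    	return n > 0 && (n & (n - 1)) === 0;
    }

    function setTextureParams(gl, image) {
    	if (isPowerOfTwo(image.width) && isPowerOfTwo(image.height)) {
    		gl.generateMipmap(gl.TEXTURE_2D);                    // fine for POT textures
    	} else {
    		gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    		gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    		gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    	}
    }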

    A good example of "how hard can it be to load a texture?" - it ends up pretty complicated once you take into account old and more limited hardware. All the other stuff about potential performance impact still applies though, and will vary depending on the game (I suspect it could range from "no significant impact" to "totally trashes game performance").

    So I think this approach might be worth thinking about:

    Perhaps there are other ways to solve this - for example, if Construct could export metadata about spritesheets (i.e. the location and size of every image), then in theory some external tool could take a folder of individual image files and paste them over the spritesheets, and that could be robust against future changes to the project. It would keep the benefits of spritesheets and only need a minor change to Construct: exporting some extra info it already has.
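
    A sketch of what such an external tool could look like. The metadata format is hypothetical (Construct doesn't export it today), and the file names and the node-canvas dependency are just assumptions for illustration.

    // Reads a hypothetical sheets.json like:
    //   [{ "sheet": "sheet0.png", "image": "player-idle-0.png", "x": 0, "y": 0, "w": 64, "h": 64 }, ...]
    // and pastes each individual image over its slot in the corresponding spritesheet.
    const fs = require("fs");
    const { createCanvas, loadImage } = require("canvas");

    async function patchSheets(metadataPath, imagesDir, sheetsDir) {
    	const entries = JSON.parse(fs.readFileSync(metadataPath, "utf8"));
    	const bySheet = new Map();
    	for (const e of entries) {
    		if (!bySheet.has(e.sheet)) bySheet.set(e.sheet, []);
    		bySheet.get(e.sheet).push(e);
    	}
    	for (const [sheet, list] of bySheet) {
    		const sheetImg = await loadImage(`${sheetsDir}/${sheet}`);
    		const canvas = createCanvas(sheetImg.width, sheetImg.height);
    		const ctx = canvas.getContext("2d");
    		ctx.drawImage(sheetImg, 0, 0);
    		for (const e of list) {
    			const replacement = await loadImage(`${imagesDir}/${e.image}`);
    			ctx.drawImage(replacement, e.x, e.y, e.w, e.h);   // paste the image over its slot
    		}
    		fs.writeFileSync(`${sheetsDir}/${sheet}`, canvas.toBuffer("image/png"));
    	}
    }

    patchSheets("sheets.json", "images", "export");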
