Is C2/C3 good for large 2D desktop games?

  • It's the graphics drivers, which are native technology. As I repeatedly have to say to people who don't seem to believe this, graphics drivers are widely accepted to be terrible and can even completely ruin game launches. Here are two links to back this up:

    Batman: Arkham Knight had such poor PC performance it virtually ruined the launch - it ran fine on console, where the drivers are good quality and predictable

    "How to delay your indie game" specifically calls out struggling with GPU drivers: "Despite claiming to, their drivers just don’t properly implement the OpenGL specifications... Then a new OSX update will hit and everything will change again, yet at the same time I will need to still cater for the previous releases and older machines... it’s unbearable... Would I support Mac again? No."

    I always laugh at this argument. "Graphics card drivers are bad!" So you decided to use possibly the most unsupported and experimental platform (other than Vulkan, which already seems to perform better), i.e. WebGL?

    That still runs on graphics card drivers! You're still facing the same issue, just reframing it as "not my fault now!" when it's Chrome or Node-Webkit/NodeJS that has blacklisted the driver instead.

    The simple bottom line is:

    1.) Most desktop graphics hardware today, integrated and dedicated alike, is optimized primarily for DirectX (at least up to 9.0c) and has varying degrees of OpenGL support.

    2.) The same graphics card issues that occur in a native engine will still occur in HTML5 + WebGL, because you're still using the GPU to render graphics (unless the driver is blacklisted and you fall back to software rendering, I guess?).

    3.) Aside from a few neat tricks like asm.js for compiling native C++ to JavaScript, JavaScript will always be slower than mostly-native code by some amount. Sometimes by a very large amount, as we have noticed with collision and physics.

    3b(onus).) If you're someday going to use asm.js, why not also export actual C++ code that could be used in another game engine, one that supports many more platforms and actually allows people to make viable, sellable commercial desktop and console games? It could be charged per export and still make plenty of cash for Scirra!

    To the people saying "You just event bad!": if the engine is designed that way, and I optimize my events as best I can while keeping the functionality I need, and it's still slow, yet my fairly average C# in Unity runs blazingly fast and with better compatibility on Steam than our previous Construct 2 title, is that not proof enough? It wasn't even a GPU limitation, because we added MORE effects to our game!

    It's telling that the last argument tends to come from people who don't have titles selling commercially on Steam, who don't see the issues that occur, and who don't have to hear "Give me your source code and/or report it to Chrome or NodeJS!" every time they hit an issue in the engine they purchased to make commercial games.

    Personally, I'd rather pay $100/year for Scirra to make a commercially sold, large, pixel-perfect retro 2D platformer in their own engine and then release and troubleshoot it on Steam than further invest in HTML5 with C3. And it sucks, because I really like the way Construct does game making (it still has an editor I love very much, which is what keeps me fairly interested in Scirra's future developments). I just need C2/C3's exports to play a lot better, and preferably sooner than "the future", when PCs drop in price to the point where everyone runs an Intel i7 CPU and an NVIDIA GTX 960+ GPU and the problems magically "disappear".

    Triforce - The arguments made above reflect my experience, and the experience I have seen from other prominent Construct 2 games on Steam. You can listen to the people who haven't made large games if you want, but we have all come to roughly the same conclusion: C2 is great for hobbyists, educators, and personal or small games. But when you need to go larger, make some prototypes and test a LOT before deciding to stick with it.

  • Also, if it is fillrate-bound, how is that combated? I assume that's what it is, since if I shrink the window to the base resolution (640 x 360) I get a constant 60 fps, but that's a bit unrealistic to play at, and even using low settings it doesn't change anything.

    If using a lower display resolution makes it run faster, you've basically proven it's GPU fillrate, exactly as I suspected. I can take a look at the .capx if you email it to me, but based on what you've said I'm fairly sure that's it.

    Force-own-texture layers (which include any layer with an effect, a blend mode, or an opacity other than 100%) add a lot to fill rate, as do any large sprites or tiled backgrounds. You have to reduce the number of those you use. There's also a quick way to confirm fillrate is the bottleneck, sketched below.
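    To make the "lower resolution proves fillrate" test concrete, here is a minimal sketch in plain browser JavaScript (not Construct's actual API; the canvas id is hypothetical): render into a smaller backing buffer and let the browser scale it up. If the frame rate recovers as the scale drops, rendering is fillrate-bound on the GPU.

    ```js
    // Minimal sketch, assuming a fullscreen <canvas id="game"> element.
    const canvas = document.getElementById('game');

    function setRenderScale(scale) {
      // Backing store: fewer pixels for the GPU to shade each frame.
      canvas.width = Math.floor(window.innerWidth * scale);
      canvas.height = Math.floor(window.innerHeight * scale);
      // CSS size: still fills the window; the browser upscales the result.
      canvas.style.width = window.innerWidth + 'px';
      canvas.style.height = window.innerHeight + 'px';
    }

    setRenderScale(0.5); // half resolution = roughly a quarter of the fill cost
    ```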

    > I always laugh at this argument. "Graphics card drivers are bad!"

    Ask any low-level graphics programmer how they would characterise drivers. Watch them bang their head against the nearest desk/wall while wailing. This is an industry-recognised issue, and I backed it up with references in my earlier post. What makes you think this is not actually the case? Do you think those references I provided are wrong? If so, why?

    > so you decided to use possibly the most unsupported and experimental platform

    This is ridiculous. WebGL is a robust, reliable and widely-deployed technology, which actually sanitises OpenGL support to a large extent by closing off gotchas, defining previously undefined behavior, and working around driver bugs where feasible. To illustrate the point: on my high-end dev machine, if I go to chrome://gpu, Chrome lists all the driver workarounds it has applied, which are the following:

    > Some drivers are unable to reset the D3D device in the GPU process sandbox
    > Applied Workarounds: exit_on_context_lost
    > TexSubImage is faster for full uploads on ANGLE
    > Applied Workarounds: texsubimage_faster_than_teximage
    > Clear uniforms before first program use on all platforms: 124764, 349137
    > Applied Workarounds: clear_uniforms_before_first_program_use
    > Always rewrite vec/mat constructors to be consistent: 398694
    > Applied Workarounds: scalarize_vec_and_mat_constructor_args
    > ANGLE crash on glReadPixels from incomplete cube map texture: 518889
    > Applied Workarounds: force_cube_complete
    > Framebuffer discarding can hurt performance on non-tilers: 570897
    > Applied Workarounds: disable_discard_framebuffer
    > Limited enabling of Chromium GL_INTEL_framebuffer_CMAA: 535198
    > Applied Workarounds: disable_framebuffer_cmaa
    > Zero-copy NV12 video displays incorrect colors on NVIDIA drivers: 635319
    > Applied Workarounds: disable_dxgi_zero_copy_video
    > Disable KHR_blend_equation_advanced until cc shaders are updated: 661715
    > Decode and Encode before generateMipmap for srgb format textures on Windows: 634519
    > Applied Workarounds: decode_encode_srgb_for_generatemipmap
    > Native GpuMemoryBuffers have been disabled, either via about:flags or command line.
    > Disabled Features: native_gpu_memory_buffers

    That's a list of bugs - and performance issues! - that you would run into and need to work around yourself if you didn't use WebGL, on just one modern, up-to-date system. Think about extending that across the entire ecosystem of different OSs and hardware. So actually WebGL is the more robust graphics technology: browser vendors have already done a huge amount of workaround effort to maximise reliability and performance.

    Given that, on what basis do you believe this is an experimental technology?
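    (For what it's worth, any page can also see which GPU and driver it landed on: WebGL exposes the unmasked vendor/renderer strings through an extension, which is how web apps can detect driver-specific quirks. A minimal sketch:)

    ```js
    // Minimal sketch: query the unmasked GPU vendor/renderer strings exposed
    // by the WEBGL_debug_renderer_info extension (where the browser allows it).
    const gl = document.createElement('canvas').getContext('webgl');
    const info = gl && gl.getExtension('WEBGL_debug_renderer_info');
    if (info) {
      console.log('vendor:', gl.getParameter(info.UNMASKED_VENDOR_WEBGL));
      console.log('renderer:', gl.getParameter(info.UNMASKED_RENDERER_WEBGL));
    }
    ```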

    > That still runs on graphics card drivers!

    I'm not sure what your point is, because literally every other GPU-accelerated technology also depends on drivers.

    > Personally, I'd rather pay $100/year for Scirra to make a commercially sold, large, pixel-perfect retro 2D platformer in their own engine and then release and troubleshoot it on Steam

    We have effectively already done this with the Construct 2 editor itself, which has in the past faced severe graphics driver problems (e.g. crashing on startup). To this day there is still the odd crash-on-startup issue due to graphics drivers. It is incredibly frustrating and difficult, and it costs us sales. This is literally one of our motivations for using web technology: we've done the native side and seen how excruciating it can be. WebGL is better! I'm not sure what you expect us to do about it, either - do you think I can just phone up AMD, tell them to fix a wide range of bugs, and then retroactively install the fix on millions of PCs worldwide? We could change technologies, but that could easily make things worse, as I've shown above. What are you expecting us to do in response to these issues?

  • - Honestly, I would LOVE for it to be my bad programming, but if I disable all of my code the fps doesn't really change much. Unless, that is, disabling is different from actually deleting it.

    If you still have performance issues after disabling all of your code, then I assume you have a lot of (probably high-res?) sprites with motion behaviors? I once saw a project with many high-res grass objects combining sine behaviors, effects, and 60-frame animations. That was killing the project, as both the behaviors and the 60 fps animations take a lot of CPU.

    There are really a lot of things that can affect performance. My advice is to disable/remove features and elements one by one and check where the issue is (a simple FPS probe to help with that is sketched below). Then, once you've tracked it down, think about how you can optimize it.

    C2 is surely not a performance beast, which is why it's difficult to make a big project, and why it's so important to build it carefully.
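    (For the one-by-one elimination above, here is a minimal sketch of an FPS probe in plain browser JavaScript - hypothetical, not a Construct feature - that logs the frame rate once a second so you can compare runs as you disable things:)

    ```js
    // Minimal sketch: count requestAnimationFrame callbacks and log
    // frames-per-second once a second.
    let frames = 0;
    let last = performance.now();

    function tick(now) {
      frames++;
      if (now - last >= 1000) {
        console.log('fps:', frames);
        frames = 0;
        last = now;
      }
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);
    ```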

  • I've been following this topic for a while, and from what I understand and have read, a WebGL application on Windows can beat a native desktop OSX application, because the OSX OpenGL driver sucks.

    On Windows, a native desktop application can easily have 10x more draw call throughput than a WebGL app running on the same machine - BUT not because of slow JS performance; it's because of WebGL overhead.

    So what people do to combat this is reduce the number of draw calls they make to WebGL.

    > Next I tested drawing with ANGLE_instanced_arrays: object positions are computed on the CPU, written to a (double-buffered) dynamic vertex buffer, and then rendered with a single draw call. In Chrome on Windows with NVIDIA I can get 450k instances before the performance drops below 60fps (so 450k particle position updates per frame in JS, and no sweat!). Performance in a native app isn't better here; my suspicion is that the vertex buffer update is the limiter (500k instances means 8MByte of dynamic vertex data shuffled to the GPU each frame). On my OSX MBP I can go up to about 180k instances (again very likely vertex throughput limited). However, in this case the way the dynamic vertex buffer works is also important: it looks like vertex buffer orphaning is useless in WebGL (see discussion here: https://groups.google.com/forum/#!topic ... MNXSNRAg8M), so I switched to double-buffering.

    I don't know how well C2/C3 handles this (a sketch of the technique follows below), but it might just be the case that it's the draw call overhead, and that's why many people experience it as slow?

    Edit: I'm pretty certain the only performance issue with C2/C3 is overhead, nothing else...

    The C3 example "Quad issue performance test" should probably be able to draw double the number of sprites without breaking a sweat if draw calls were reduced.
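    (To make the technique from the quote above concrete, here is a rough, illustrative sketch of double-buffered instanced drawing in WebGL 1 with the ANGLE_instanced_arrays extension. This is not C2/C3's actual renderer; `gl` is assumed to be a WebGL context with a shader program and base quad geometry already bound, and `instanceLoc` is a hypothetical per-instance attribute location.)

    ```js
    // Two buffers are alternated each frame so the driver never stalls on a
    // buffer the GPU may still be reading; everything then goes out in ONE
    // instanced draw call instead of one call per object.
    const ext = gl.getExtension('ANGLE_instanced_arrays');
    const buffers = [gl.createBuffer(), gl.createBuffer()];
    let frame = 0;

    function drawInstances(instanceData, instanceCount) {
      const buf = buffers[frame++ % 2]; // double buffering: swap every frame
      gl.bindBuffer(gl.ARRAY_BUFFER, buf);
      gl.bufferData(gl.ARRAY_BUFFER, instanceData, gl.DYNAMIC_DRAW);
      gl.enableVertexAttribArray(instanceLoc);
      gl.vertexAttribPointer(instanceLoc, 2, gl.FLOAT, false, 0, 0);
      ext.vertexAttribDivisorANGLE(instanceLoc, 1); // advance once per instance
      // 6 vertices (one quad) per instance, instanceCount instances, one call.
      ext.drawArraysInstancedANGLE(gl.TRIANGLES, 0, 6, instanceCount);
    }
    ```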

  • Ashley - Thanks. I've emailed you a dropbox link for my capx file. You're likely right that it's a fillrate issue; however, my layout size is 4500 x 2000 (resolution is 640 x 360). I'm using 4 layers that each have a tilemap on them (3 are the same file, 1 is a collision tilemap), and I do have 1 layer set to force own texture with maybe 20-30 sprites that have destination-out on it. I'm also using maybe 10-20 sprites with the additive blend mode.

    If those are truly what's causing the problem, to ME that literally means that using tilemaps or any blend modes in general should be prohibited for anything that isn't a Flappy Bird or Mario-type game.

    Ironically enough, when I deleted all of my destination-out "light" sprites, it didn't actually help anything.

    - Unfortunately, no. It's pixel art, with almost nothing having any behaviors or effects. This particular layout has 1 enemy with no events attached, and I've disabled the groups for all other enemies. I'm not using super heavy fx, or any large objects. Most objects are either tilemaps or static sprites.

  • Ashley

    Is C2/C3 taking advantage of double buffering to reduce draw call overhead? Check my previous post; the only issue for my own game on mobile is draw calls, so maybe there's something to it?

    The only reason WebGL might perform slower than native is overhead, which should therefore be reduced to a minimum.

  • I think all modern OSs double buffer everything on-screen. Certainly all modern graphics applications do.

    > I think all modern OSs double buffer everything on-screen. Certainly all modern graphics applications do.

    I was under the impression that was something you have to set up manually?

    http://stackoverflow.com/questions/29565026/how-to-implement-vbo-double-buffering-in-webgl

    Anyway, from what I've read, the main issue with WebGL is draw call overhead, so reducing that should improve performance a lot.

    > Fewer, larger draw operations will improve performance. If you have 1000 sprites to paint, try to do it as a single drawArrays() or drawElements() call. You can draw degenerate (flat) triangles if you need to draw discontinuous objects as a single drawArrays() call.

    https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/WebGL_best_practices

    I'm not doubting the power of WebGL versus native; I just feel it's not really being used optimally in C2/C3. (A sketch of what that batching advice looks like is below.)
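    (To illustrate the MDN advice quoted above in practice, here is a hypothetical sprite batcher - illustrative only, not how C2/C3 actually batches. It packs many quads into one vertex array and submits them with a single drawArrays call. `gl`, `posLoc` and `batchBuffer` are assumed to be a WebGL context with a program bound, an attribute location, and a pre-created buffer.)

    ```js
    // Hypothetical batcher: N quads -> one buffer upload -> ONE draw call,
    // instead of N separate draw calls.
    function drawQuadBatch(gl, posLoc, batchBuffer, quads) {
      const verts = new Float32Array(quads.length * 12); // 6 vertices x (x, y)
      quads.forEach((q, i) => {
        const { x, y, w, h } = q;
        verts.set([x, y,      x + w, y,  x, y + h,      // triangle 1
                   x, y + h,  x + w, y,  x + w, y + h], // triangle 2
                  i * 12);
      });
      gl.bindBuffer(gl.ARRAY_BUFFER, batchBuffer);
      gl.bufferData(gl.ARRAY_BUFFER, verts, gl.DYNAMIC_DRAW);
      gl.enableVertexAttribArray(posLoc);
      gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);
      gl.drawArrays(gl.TRIANGLES, 0, quads.length * 6); // one call for all quads
    }
    ```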

  • Please share the .capx with Ashley - I get second-hand frustration in these threads because users will complain and then never share the final cause of their issues; I'd love to see some closure.

    At the very least that will start giving us some real-world comparisons, which will help people develop a game from the start with any inherent issues taken into account.

    > Honestly, I would LOVE for it to be my bad programming, but if I disable all of my code the fps doesn't really change much. Unless, that is, disabling is different from actually deleting it.

    Not only "bad coding", but also poor budgeting of graphics/effects/particles/objects/collisions/force-texture layers, or anything that has a behavior, etc.

    As I understand it, behaviours are less resource-intensive than coding your own. (At least the built-in ones.)

  • Ashley

    I don't know if any of this makes sense to you (I'm not a coder), but I just tried the WebGL inspector in Firefox. This looks like a lot of draw calls for one frame, when good practice (from what I've read) is to bundle them and draw all sprites at once. Is there any way to optimize this further?

    And Gecko seems to be using most of the CPU time.

    As I've said, I'm not doubting WebGL's performance. I'm just suspecting we could get more bang for the buck if C2/C3 were optimized to keep draw calls to a minimum, since overhead is the major issue with WebGL - a known fact. That's why it's considered good practice to draw many things at once; optimal is a single draw per frame.

    Any thoughts on that? Is there anything that can be done to reduce the number of draw calls, do you think?

  • jobel - true, but I've tested all of that (deleted most of my objects, turned off all effects, AND disabled code and collision checks) to no avail.

    Deleted most of your objects? Of the objects you left, how big are your sprites? How many do you have? What behaviors do they use? What are they doing on screen? You say your CPU is not taxed, so your fill rate has to be doing something taxing. How big is your layout?

  • > Even making a small Flappy Bird game in Unity for an Android Jellybean phone from 2012 is far more stable than making it in C2.

    While I don't tout HTML5 performance as the greatest thing ever, I find this particular statement extremely hard to believe.

  • Draw calls are another matter really, and happen on the CPU side. It's probably best to split that topic off to a new thread. We have OpenGL ES 3 equivalent capabilities with WebGL 2, though, so if at any point draw calls prove to be a bottleneck, it's something we can potentially optimise in exactly the same way a native app would adjust its draw calls to be more efficient. Most 3D APIs, WebGL included, are specifically designed to allow as much drawing as possible with the fewest draw calls, to eliminate as much of the CPU overhead as possible.
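    (For the curious, with a WebGL 2 context instancing is built in rather than an extension - a minimal sketch, where `canvas`, `instanceLoc` and `spriteCount` are hypothetical and the program and buffers are assumed to be set up already:)

    ```js
    // Minimal sketch: WebGL 2 (OpenGL ES 3 level) instancing, no extension
    // needed - many sprites submitted to the GPU in a single draw call.
    const gl = canvas.getContext('webgl2');
    // ... bind program, vertex buffer and per-instance attributes as usual ...
    gl.vertexAttribDivisor(instanceLoc, 1); // advance attribute once per instance
    gl.drawArraysInstanced(gl.TRIANGLES, 0, 6, spriteCount); // one call, many sprites
    ```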

  • https://docs.google.com/presentation/d/12AGAUmElB0oOBgbEEBfhABkIMCL3CUX7kdAPLuwZ964/edit#slide=id.i0

    I found this presentation document; it's a good read on WebGL, and I'm just trying to understand a little more about how it works. I have no doubt in my mind that it can match native performance if used the right way. The only thing I'm not sure of is whether C2/C3 is getting the most out of it, since a lot of people still seem to be complaining about it.

    Maybe both sides are right? Ashley claims close-to-native performance (which is probably true in the optimal case), but users are experiencing something else with their projects because they're not optimized?

    What do I know? Just speculating...

  • So... To sum it up after five pages of discussion, there are two camps of thought:

    1) The Construct developer(s) maintain that it is due to drivers and user errors (for example, badly scripted events).

    2) A number of game developers who have actually made and released larger 2D desktop games in C2 on Steam, who speak from experience when they say performance is not up to par with 'native' engines, and a number of whom have switched to other game engines for their next games.

    Question to Ashley: does Scirra use its own engine to develop larger games? If not, perhaps it would be a good idea to work on at least one larger project to see for yourselves whether any issues exist.
