TiAm's Forum Posts

  • By the way, just to counter all that downer talk: this has become a really awesome document. I haven't looked at it in a few weeks, so most of this is completely new (to me).

    In particular, I'd never heard of Duff's Device loops. Why, exactly, are they faster? Would there be any way to hard-mod a bit of the engine code to give them a test run?

    For anyone that wants to run the tests:

    http://jsperf.com/duffs-device
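
    Partial answer to my own question, as far as I understand it: a Duff's Device loop is basically an unrolled loop, so the loop counter/condition only gets checked once per block of iterations instead of once per element, and less of the time per element goes to branch and counter overhead. A toy sketch of the idea (my own version, not the actual jsperf code; the array and function names are made up):

    ```javascript
    // Plain loop: one condition check and one counter update per element.
    function sumPlain(arr) {
      var sum = 0;
      for (var i = 0; i < arr.length; i++) sum += arr[i];
      return sum;
    }

    // Duff's-Device-style loop: handle the remainder first, then work through
    // the rest in blocks of 8, so the condition is only tested once per 8 elements.
    function sumDuff(arr) {
      var sum = 0, i = 0, n = arr.length;
      var leftover = n % 8;
      while (leftover--) sum += arr[i++];
      var blocks = (n / 8) | 0;
      while (blocks--) {
        sum += arr[i++]; sum += arr[i++]; sum += arr[i++]; sum += arr[i++];
        sum += arr[i++]; sum += arr[i++]; sum += arr[i++]; sum += arr[i++];
      }
      return sum;
    }
    ```

    Whether it actually wins probably depends on the JS engine; a modern JIT may unroll simple loops on its own, which would explain why the jsperf numbers vary so much between browsers.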

  • It looks like we have a troll

    http://i.imgur.com/BAsp1pv.png

    Would it sound jaded if I said I was surprised that it took this long... :lol: :roll:

    jayderyu:

    You can roll back, right?

    > If there are customizable windows, and associated actions. Then the architecture can already support the request down the road. Either by Scirra or some one else.

    >

    > Extensions are great and all, but I want the main software to function well from the get-go. I'm already going through the nightmare of custom extensions in other software I'm using, and I don't want C2 to be this way.

    This +1000.

    I know the whole point of this thread is to articulate how C3 should be more open to tinkering and third-party extensions, but megatronx makes a really good point: we need basic functionality that works. I just don't want to see C3 typified by this response from Ashley (and/or other devs, if there ever are other C2 devs):

    "Yeah, that's a good idea, and it's on the todo list, but you know, you can just add it yourself with the sdk."

    Also, while I can think of all sorts of awesomeness that could come from custom IDE elements, such a feature must be designed so that extensions can be rolled into a capx on export (hopefully that could be extended to plugins/behaviors/effects too).

    Otherwise, we are going to be looking at an expansion of the dependency-hell that already affects many C2 projects due to third party plugins/behaviors/effects.

    Anyway, don't mind me, just being the devil's advocate. Now, where's another parade I can piss on... :twisted:

  • > > I think it's starting to look like:

    > > (C2 engine + real game + iGPU) != (60fps 1080p)

    > Not exactly, that only applies to Intel iGPUs.

    > I have an AMD A10 HTPC rig for playing movies & lighter games on the TV.

    > AMD iGPUs in their APUs, like the A6/A8/A10 series, have no issues and accelerate WebGL perfectly & efficiently.

    Yeah, actually, that's a good point. I've really never read anything bad about AMD's apu solutions; in fact, they seem pretty awesome for what they are.

    Of course, AMD has the advantage of all the tech and expertise they picked up when they acquired ATI; Intel is a relative newcomer to GPUs, and it hasn't exactly been smooth sailing (anybody remember Larrabee?).

    What about giving airscape/gpu-z a go on the A10 rig you're talking about?

  • Really, I've had a lot of trouble with display glitches in various programs due to intel graphics. It's a shame they are such a large chunk of the market.

    I've gotta stop putting off getting a real gpu...

    /digression

    We might get a better idea if someone with low-end dedicated graphics (Nvidia or AMD/ATI) could run the airscape demo and post their gpu-z results. If a low-end card is not only pushing 1080p 60fps, but doing so with plenty of GPU headroom to spare, we can pretty much chalk this up to bad drivers on Intel's part.

  • > > Yeah, bunnymark isn't a real rendering performance test, it's probably CPU/memory bottlenecked.

    > > I don't know why people are talking about multicore and CPU limitations. You do realize the problem is PURELY on the GPU side, right? Unless it's harder for the CPU to keep up with 3000 objects in renderperftest than it is for the GPU to draw it...

    > Do the bench with your setup and monitor GPU usage with GPU-Z; let's see the GPU load %. Bet you it's nowhere near your CPU utilization.

    > I've just monitored GPU usage in the Airscape demo on Steam, 1080p (auto resolution and standard). ~15-20% of my GPU is loaded while playing at 60 fps (with random stutters). It's definitely not a GPU bottleneck.

    Facepalm. Totally forgot about gpu-z.

    Okay, so, on my system (3570k, HD4000) I can confirm that the gpu is getting loaded down. In the dropbox airscape demo, gpu is at 90% or more at 60fps, auto setting, window size approx. 800p.

    Trying a random project of mine (a space shooter with a couple of webGL effects and a few layers), I upped the rez to 1080p and looked at what I got. With default settings, smooth performance was a no-go, with fps running similar to airscape in the latest chrome stable, i.e., from 30 to 40fps. Stripping away almost all the fx and a few window-dressing layers got me 60fps, but with little room to spare according to gpu-z.

    I think it's starting to look like:

    (C2 engine + real game + iGPU) != (60fps 1080p)

  • Elliott

    Bunnymark was brought up before. Actually, my modified devilmark doesn't do too badly (at 30fps: 27k for devil vs ~65k for bunny), but that's mainly because bunnymark spawns very simple objects that are similar to our particles and don't have the overhead of a proper sprite object. Devilmark actually maxes my cpu, not my gpu, which is likely the overhead of the event engine and the sprite object, compounded by the ludicrous number of objects in either case.
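
    Very loosely sketched, this is what I mean by that overhead (not C2's actual internals, and the helper names are invented):

    ```javascript
    // Particle-style update, roughly what bunnymark exercises: a flat list of
    // simple objects and one tight loop with no per-object logic.
    function updateParticles(particles, dt) {
      for (var i = 0; i < particles.length; i++) {
        var p = particles[i];
        p.x += p.vx * dt;
        p.y += p.vy * dt;
      }
    }

    // Sprite-style update, roughly what devilmark pays for: every instance also
    // gets bounding-box updates, behavior ticks, and event/collision checks.
    // Multiply that by tens of thousands of objects and the CPU, not the GPU,
    // becomes the wall.
    function updateSprites(sprites, dt) {
      for (var i = 0; i < sprites.length; i++) {
        var s = sprites[i];
        s.x += s.vx * dt;
        s.y += s.vy * dt;
        s.updateBoundingBox();     // hypothetical per-instance bookkeeping
        s.runBehaviors(dt);        // hypothetical behavior ticks
        runEventConditionsFor(s);  // hypothetical event-sheet / collision checks
      }
    }
    ```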

  • > > Ah, that's a good point about multicore...doesn't your processor have 6 cores? If you are still sub-60fps at low rez, then maybe airscape is cpu bottlenecking on your system. Really doesn't seem like the demo should be doing that though...

    > Even though my processor has 6 cores, it doesn't make any difference for *most* html5 applications, as they can only utilise 1 core.

    > There are Web Workers that could add multicore support, or the new JXcore (I would love to see JXcore used for a C2 game).

    > And yes, that's exactly what I am trying to say: it may be a CPU problem. And I am quite sure that a 3.8GHz core should be able to run it if optimised.

    > Based on my theory, that would explain most of this, such as why the 3D test runs so much better (because not much CPU power is required there).

    > One thing it doesn't explain, though, is the fact that Canvas2D is faster than WebGL.

    That was my point: your single-thread performance isn't as good as a modern Intel processor's because you have more cores. I totally agree that most any C2 game that is properly optimized should run fine on a system like yours...but, as of yet, airscape's code is a black box, so it's impossible to know if it's being dragged down by un-optimized code. (There's a rough sketch of the Web Workers idea just after this post.)
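
    On the Web Workers point above: here's roughly what offloading heavy per-tick work to another core looks like. This is a bare sketch, not anything C2 actually does; the worker file name and the game-side functions are made up.

    ```javascript
    // main.js -- the main thread keeps rendering; heavy work goes to a worker.
    var worker = new Worker('sim-worker.js'); // 'sim-worker.js' is a made-up file

    worker.onmessage = function (e) {
      // Results come back asynchronously, so a frame is never blocked waiting.
      applySimulationResult(e.data); // hypothetical game-side function
    };

    function onTick(positions) {
      // Hand the raw data to the other core instead of crunching it here.
      worker.postMessage(positions);
    }

    // sim-worker.js -- runs on a separate thread, so it can use another core.
    onmessage = function (e) {
      var positions = e.data;
      // ...expensive per-object work here, e.g. broad-phase collision...
      postMessage(positions);
    };
    ```

    The catch is that a worker can't touch the DOM or draw to the page's canvas, and every postMessage copies its data (unless you hand over a transferable like an ArrayBuffer), so it only pays off for chunky, self-contained work.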

  • > SgtConti

    >

    >

    > >

    > > SgtConti's results are particularly bizarre...I wouldn't exactly call that a 'mid-range' system, unless we are throwing $5000 alienware's into the mix. No C2 game should be struggling on a computer like that.

    > >

    >

    > Since it's (nowadays) only about a $700-800 rig, I would call it mid-range, as you can get a real high-end rig for about $1100 (or more) ^^

    > And well, without multicore support it still is easily possible for an unoptimised or really big game to struggle on such a rig.

    >

    Ah, that's a good point about multicore...doesn't your processor have 6 cores? If you are still sub-60fps at low rez, then maybe airscape is cpu bottlenecking on your system. Really doesn't seem like the demo should be doing that though...

  • scaffa

    What is the rez on that notebook? If it's 1080p, those results look similar to mine, which is interesting given silverforce's comments above. Though, in my case, the mid-40s aren't playable in chrome because of massive stuttering.

    Really, everybody posting a bench should report their screen rez. Since this might be a fillrate issue, resolution could make a big difference. A lot of laptops are still 1366x768, but some are 1080p, and some are crazy high rez, like 1440p or more. Desktops can be practically anything.
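
    Back-of-the-envelope, just to show why rez matters (assuming a single full-screen layer of overdraw per frame): 1366x768 is about 1.05 million pixels, while 1920x1080 is about 2.07 million, so a 1080p screen needs roughly twice the fill per frame. At 60fps that works out to ~63 Mpixels/s versus ~124 Mpixels/s for every full-screen layer drawn, and a real scene with backgrounds, effects, and overlapping sprites multiplies that several times over.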

  • eli0s

    Wow, those trees are awesome...great job!

    3570k, HD4000 (iGPU)

    I get about 30% cpu, and about 40fps fullscreen at 1080p. 60fps if I shrink the window about a third.

  • sqiddster

    It may involve some work, but the best approach here would be to strip down a capx as much as possible and post it. Otherwise, we're all just taking shots in the dark. Can't you obfuscate/distort/watermark your resources somehow so that they wouldn't be re-usable?

    SgtConti's results are particularly bizarre...I wouldn't exactly call that a 'mid-range' system, unless we are throwing $5000 Alienwares into the mix. No C2 game should be struggling on a computer like that.

    Really feel for you on this, and I hope you can figure it out somehow...

  • imaffett

    Thanks for the reply.

    Gave app preview a try, but my framerates tanked...went from ~40fps with cwalk 7/10 to single digit framerates with app preview.

    Crosswalk Player is definitely the way to go, since app preview still requires me to export every time I want to test a change, which is a much slower workflow.

  • sqiddster

    I think I have a hunch about what is going on here. But first, my test results:

    I see similar results in renderperf: 2.5x as many sprites in canvas2d as in webgl. In the other two tests, I get:

    PixelFillrate: slightly more sprites onscreen with webgl, about a 25% increase. However, with canvas2d, my cpu gets maxed, whereas webgl only uses about 6%.

    DevilMark: Here WebGL pulls way ahead of Canvas2d because the CPU is also having to handle quite a bit of logic moving all the devils around and testing their collisions. About a 3-4X difference.

    Okay, here's what I think is happening: Canvas2D is pulling ahead of WebGL because, in systems with integrated graphics, the performance discrepancy between the CPU (top-of-the-line) and the GPU (bottom-of-the-barrel) is so great that software-based rendering wins out in situations where there is little or no logic overhead. This is why canvas2d falls behind on a test like DevilMark. (There's a stripped-down fillrate probe after this post if anyone wants to test that directly.)

    What are the actual specs of the systems you are testing on?
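
    For anyone who wants to poke at the fillrate idea without a capx, here's a stripped-down probe (my own sketch, not the linked tests): draw N full-screen rects per frame with canvas2d and watch the fps fall as N rises. Doing the same with textured quads in WebGL would give the comparison the PixelFillrate test is making.

    ```javascript
    // Bare canvas2d overdraw test: raise 'layers' until the frame rate drops.
    var canvas = document.createElement('canvas');
    canvas.width = 1920;   // try your native rez here
    canvas.height = 1080;
    document.body.appendChild(canvas);
    var ctx = canvas.getContext('2d');

    var layers = 20;       // full-screen layers of overdraw per frame
    var last = performance.now();

    function frame(now) {
      var fps = 1000 / (now - last);
      last = now;
      for (var i = 0; i < layers; i++) {
        ctx.fillStyle = 'rgb(' + ((i * 12) % 255) + ',64,128)';
        ctx.fillRect(0, 0, canvas.width, canvas.height);
      }
      ctx.fillStyle = '#fff';
      ctx.fillText(fps.toFixed(1) + ' fps @ ' + layers + ' layers', 10, 20);
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);
    ```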

  • sqiddster

    I would be wary of those benchmarks...while I see the same as you (and I have Intel graphics, HD4000), I'm wondering about that text object that is being modulated...it looks like it might not be a spritefont. Also, I'm not seeing the same when trying some similar tests. Give these a go:

    https://www.dropbox.com/s/8blp1b2nrnnvt ... .capx?dl=0

    https://www.dropbox.com/s/b0qkyjxzd55k1 ... .capx?dl=0