Halfgeek's Recent Forum Activity

  • Lots of progress on the faction AI.

    Also made a ship damage effect system: smoke, fires & sparks.

    Time to work on the merchant trading & combat AI!

  • It's not a C2 issue; it's clearly an Intel iGPU problem with WebGL.

    TiAm: That demo doesn't seem to work for me. It tells me that it's running at 60fps when it's actually running at like 2fps. Also, again, it's hard to say if it's GPU or CPU bottlenecked.

    This? I got 60 fps (really smooth) with 60,000 fishies.

    http://people.mozilla.org/~jmuizelaar/f ... -fast.html

    [Attachment: Fish.jpg]

  • What about giving Airscape/GPU-Z a go on the A10 rig you're talking about?

    It's my wife's rig and it's occupied for her work atm.

    I did test it in the past running my own performance test, and it handled it fine, which was why I was so surprised when my Intel HD4000 struggled so much in a simple game.

  • I think it's starting to look like:

    (C2 engine + real game + iGPU) != (60fps 1080p)

    Not exactly; it only applies to Intel iGPUs.

    I have an AMD A10 HTPC rig for playing movies & lighter games on the TV.

    The AMD iGPUs in their APUs, like the A6/A8/A10 series, have no issues and accelerate WebGL perfectly & efficiently.

    This smells like an Intel problem: their iGPUs are not optimized to handle WebGL properly, leading to major load in otherwise simple 2D games. Especially since it's faster in canvas2D mode, where their raw CPU power muscles through (a rough sketch of the kind of renderer fallback involved is below).

    I noticed this behavior early on. In the first Star Nomad I deliberately kept everything as simple as possible and only used one WebGL effect, on a rare sprite. Every time it spawned, my performance would tank HARD on my notebook with its Intel HD4000. It was fine on Android devices and butter smooth on iOS.
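
    To make that concrete, here's a minimal, hypothetical sketch of the kind of WebGL-vs-canvas2D fallback decision involved; it's not C2's actual code, and pickRenderMode is just an illustrative name:

    [code]
    // Hypothetical sketch: probe for a usable WebGL context and fall back
    // to canvas2D. A real engine would also blacklist bad drivers or run a
    // timed draw test, since some Intel iGPUs expose WebGL but perform
    // badly on it.
    function pickRenderMode(canvas: HTMLCanvasElement): "webgl" | "canvas2d" {
      const gl = canvas.getContext("webgl");
      if (gl instanceof WebGLRenderingContext) {
        return "webgl";   // GPU-accelerated path
      }
      return "canvas2d";  // CPU-heavy fallback path
    }
    [/code]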

  • Yeah, bunnymark isn't a real rendering performance test, it's probably CPU/memory bottlenecked.

    I don't know why people are talking about multicore and CPU limitations. You do realize the problem is PURELY on the GPU side, right? Unless it's harder for the CPU to keep up with 3000 objects in renderperftest than it is for the GPU to draw them...

    Do the bench with your setup and monitor GPU usage with GPU-Z; let's see the GPU load %. Bet you it's nowhere near your CPU utilization.

    I've just monitored GPU usage in the Airscape demo on Steam at 1080p (auto resolution and standard). ~15-20% of my GPU is loaded while playing at 60 fps (with random stutters). It's definitely not a GPU bottleneck.

    I just ran your stripped-down test via Dropbox, and my GPU isn't even loaded enough (~1-3%!!) to rise above idle and run at higher clocks. It's sitting at the idle 300 MHz clock instead of the loaded 880 MHz (which the Steam demo did trigger). A simple fps probe to pair with these GPU-Z readings is sketched below.

    Edit: Radeon 7950 at stock clocks with an i5-3570K.
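
    For anyone repeating this, here's a small, hypothetical fps probe (standard browser APIs only) you could run alongside GPU-Z: if fps drops while GPU load % stays low and the clocks sit at idle, the bottleneck is on the CPU side:

    [code]
    // Hypothetical sketch: count frames per second with requestAnimationFrame
    // and log once a second. Compare the logged fps against GPU-Z's GPU load
    // and clock readings to tell CPU-bound from GPU-bound.
    let frames = 0;
    let last = performance.now();

    function tick(now: number): void {
      frames++;
      if (now - last >= 1000) {
        console.log(`fps: ${frames}`); // read GPU-Z at the same moment
        frames = 0;
        last = now;
      }
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);
    [/code]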

  • I do tend to believe it's a C2 problem, or at least a 2D problem, however. I just tried out this benchmark, which renders 150,000 cubes; on my integrated chip it runs at 40fps easily.

    3D engines can take advantage of GPU acceleration much better. That shift happened around the time hardware transform & lighting was incorporated into the GPU; before that, 3D games were also limited by the CPU having to set up the scene.

    2D tends to be CPU-limited: since the fill-rate of modern GPUs is insanely high, the GPU is usually not the bottleneck.

    I'm wary of this limitation & of C2 logic being single-threaded. Already in my early dev of SN2, I'm seeing 50-60% CPU usage (the GPU is barely doing any work at all!), all on one thread of my quad-core CPU. A sketch of the kind of worker offloading C2 lacks is below.

    After SN2, I'll move to a 3D game engine for future games.
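
    To illustrate what's missing, here's a minimal, hypothetical sketch of offloading heavy game logic (say, faction AI) to a Web Worker so the main render thread stays free. C2 only does this for pathfinding; "ai-worker.js", applyAiResults, shipStates and heavyFactionAi are all illustrative names, not real C2 or SN2 code:

    [code]
    // main.ts: ship the heavy AI work to a worker thread.
    // Stub data and handler so the sketch is self-contained.
    const shipStates = [{ id: 1, x: 0, y: 0 }];
    function applyAiResults(results: unknown): void {
      console.log("AI results:", results); // a real game updates state here
    }

    const worker = new Worker("ai-worker.js"); // hypothetical file name
    worker.onmessage = (e: MessageEvent) => applyAiResults(e.data);
    worker.postMessage({ ships: shipStates }); // runs off the main thread

    // ai-worker.js: the worker side, on its own thread:
    // self.onmessage = (e) => {
    //   const results = heavyFactionAi(e.data.ships); // heavy work here
    //   self.postMessage(results);
    // };
    [/code]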

  • > http://www.goodboydigital.com/pixijs/bunnymark/ maybe this one?

    I was able to keep 30+ with 50K bunnies - I can't even begin to imagine doing a similar project in C2.

    Can anyone explain why Pixi is so fast?

    Perhaps it's GPU accelerated, i.e. GPU compute, and multi-threaded, which isn't something C2 is (besides pathfinding). A hedged sketch of the bunnymark setup is below.

    It's not good to be in 2015 with a game engine that is single-threaded. It definitely limits creativity, since the available hardware isn't being fully utilized. Quad-cores are quite common these days.
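
    For what it's worth, here's a hedged sketch of how a bunnymark-style test sets Pixi up. It assumes the Pixi v3-era global API (exact names may differ) and the bunnymark's "bunny.png" texture; the ParticleContainer batches thousands of sprites into very few WebGL draw calls, which likely accounts for much of the speed:

    [code]
    // Hedged sketch, not the actual bunnymark source. Assumes Pixi is
    // loaded globally via a script tag.
    declare const PIXI: any;

    const renderer = PIXI.autoDetectRenderer(800, 600); // WebGL if available
    document.body.appendChild(renderer.view);

    const stage = new PIXI.Container();
    const bunnies = new PIXI.ParticleContainer(50000);  // batched container
    stage.addChild(bunnies);

    for (let i = 0; i < 50000; i++) {
      const bunny = PIXI.Sprite.fromImage("bunny.png");
      bunny.x = Math.random() * 800;
      bunny.y = Math.random() * 600;
      bunnies.addChild(bunny);
    }

    function animate(): void {
      renderer.render(stage); // one render pass, few draw calls
      requestAnimationFrame(animate);
    }
    animate();
    [/code]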

  • Correct me if I am wrong, but you can't just compare a range of products like the 6300M with a single product like the Mobility 5430 in such a case.

    It would be like comparing the GTX 900 series with the GTX 770.

    So it would be useful to know which of the 6300M models has been used.

    Since a 6370M is better than a 6330M, both of which you would compare (and set equal) to the 5430 in your case.

    I'm sorry, but this had to be said.

    The 6300 and 5400 series of Mobility Radeon (entry level, lowest performance) all have 80 shaders; the difference is clock speed, but performance scaling on those older parts isn't linear. At best, you have a 20% performance gap. Even the fastest 6300-series part is slower than the newer iGPUs from Intel (when they work).

    On desktop, entry-level dGPUs typically have 500+ shaders, so they are several times faster, and indeed much faster than Intel iGPUs.

  • Thanks for the info! Did not know that was possible, glad to hear that.

    I just tested the game out on my laptop, which is a bit slower than my workstation. The game does not run at 60 fps on the auto-res option, but the fps is around 46. It's still playable, though, and I don't notice any jank. Lowering the res gets the fps to 60 again.

    This is on an Intel i7 Q740 with an ATI Mobility Radeon 6300, which is far from a good video card.

    If I have the time I might be able to test it tonight on a MacBook Air, which has an Intel HD4000 video card.

    This is definitely an Intel driver problem, because the Mobility 6300 is a renamed Radeon HD 5430, which itself is barely faster than the iGPU in Sandy Bridge!

    http://www.notebookcheck.net/ATI-Mobili ... 702.0.html

    A newer iGPU found in Ivy Bridge (HD4000) or Haswell (HD4400+) is actually a LOT faster (33%/70% faster, respectively) than that Radeon.

  • Nice job on the parallax forest!

    I get a constant 60 fps with no stutter on Chrome. ~18% CPU usage.

    i5-3570K

    Radeon 7950
