Halfgeek's Forum Posts

  • ~800 col checks per tick isn't that high. It's definitely due to all the other AI and events.

    And yes, there are limitations. One needs to be very careful to optimize. If the game is too large in scope, it's best to use another engine with multi-threading support.

  • I, for one, am very disappointed with this. We are in 2015, and only single-thread CPU usage?

    Whenever I see the community saying that C2 can make big games, I guess they mean big games with ugly graphics and Goombas for AI.

    My game also has complex AI and extremely good graphics, but now, after learning about the CPU thing, I'm very worried. I assumed the runtime would make full use of the CPU... not so little.

    About NW.js, well, I'm going to try it, but I doubt it's going to work :/

    You will just have to optimize, and optimize some more. Good luck with it.

  • TiAm I noticed there seems to be a cliff of sorts: more units than that, and it quickly eats up CPU.

    I'll have to do an optimization pass a few times; currently I'm very generous with it, as all the calculations are done per tick. I'll see what I can move to every 0.1 seconds or more instead.

    It's a challenge though, getting efficient & effective AI for mass RTS-style combat. I was hoping newer Chromium versions might improve this a bit, but I saw no difference in performance between Node 10.5 and now.
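
    In C2 terms, moving work off the per-tick path usually means an "Every X seconds" condition. In plain JavaScript the same throttle might be sketched like this (the interval and function names are illustrative, not from the actual project):

```javascript
// Sketch: run the expensive AI pass every 0.1 s instead of every tick,
// while cheap per-tick work (movement) still runs each frame.
// The interval and callbacks are illustrative, not real game code.
function makeThrottledUpdater(interval, fullAIPass) {
  let accumulator = 0;                // seconds since the last AI pass
  return function step(dt, units) {
    accumulator += dt;
    let ranAI = false;
    if (accumulator >= interval) {
      accumulator -= interval;        // keep the remainder so the cadence stays stable
      fullAIPass(units);              // targeting, priorities, shield facing, etc.
      ranAI = true;
    }
    return ranAI;                     // cheap movement integration would follow here
  };
}
```

    At 60 fps (dt ≈ 1/60 s) the heavy pass then runs about 10 times a second instead of 60, cutting its CPU share roughly six-fold at the cost of slightly staler decisions.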

  • BF4 is very well threaded; it will use up to 8 threads, fully utilizing most CPUs.

    C2 is single threaded. If you see 100% CPU usage on that thread, only 1/4 of a quad-core CPU is actually being used.

  • I know this is a bit OTT, but in that stress test of yours, what is chewing up most of your resources? Cols? A.I.? Draw calls?

    Draw calls = ~5%.

    Using GPU-Z, my GPU usage is ~20%. Radeon 7950 (pretty mid-range these days).

    Col isn't much; it peaks around 700/tick, well below stress-test col levels.

    The bulk of the CPU use is for AI for target selection, priorities, movement for each ship (range, optimal firing solution, left/right shield facing against attackers, when to use missiles etc), drones & seeking weapons.

    I have to limit fleet sizes in encounters because the engine is single threaded. My CPU is a desktop i5-3570K; if a battle uses more than 50% of that one thread, a lot of people on notebooks would lag, as their CPUs are lower clocked.

    This definitely limits C2 to smaller-scope games with fewer active AI and/or simpler combat.

    If I remove drones (~50 drones), that same fleet battle peaks at 50% single thread load and ~35-40% average, well within acceptable range for it to run well on a range of setups. The problem is it loses the sense of epic-ness without hordes of drones. Can't have carriers without drones either. :/
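
    One hedge against that per-tick cost, short of cutting the drones entirely, is to time-slice the AI so only a handful of ships "think" each tick. A rough sketch; the budget, ship list, and thinkFn are invented for illustration:

```javascript
// Sketch: spread heavy per-ship AI over several ticks (round-robin),
// so 50 drones don't all re-plan on the same frame. Illustrative only.
function makeSlicedUpdater(perFrameBudget, thinkFn) {
  let cursor = 0;                     // index of the next ship to update
  return function step(ships) {
    const n = Math.min(perFrameBudget, ships.length);
    for (let i = 0; i < n; i++) {
      thinkFn(ships[cursor % ships.length]);  // expensive AI for one ship
      cursor++;
    }
  };
}
```

    With 50 drones and a budget of 10 per tick, each drone re-thinks every 5 ticks, about 12 times a second at 60 fps, which is usually plenty for targeting decisions while flattening the per-frame spike.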

  • No stutters or jank, Chromium 41 is good. But I get no difference in CPU usage in my stress test.

    This is a pure logic bottleneck, not GPU or fill-rate; my GPU is only ~20% loaded. Draw calls are ~5% of the CPU usage; the rest is all used by the AI & combat simulation. Peak 97% CPU usage. It sucks to be single-threaded in 2015.

  • Lots of progress on the faction AI.

    Also made a ship damaged effect system, smoke, fires & sparks.

    Time to work on the merchant trading & combat AI!

  • It's not a C2 issue; it's clearly an Intel iGPU problem with WebGL.

  • TiAm That demo doesn't seem to work for me. It tells me that it's running at 60fps when it's actually running at like 2fps. Also, again, it's hard to say if it's GPU or CPU bottlenecked.

    This? I got 60 fps (really smooth) with 60,000 fishies.

    http://people.mozilla.org/~jmuizelaar/f ... -fast.html

    [Attachment: Fish.jpg]

  • What about giving airscape/gpu-z a go on the A10 rig you're talking about?

    It's my wife's rig and occupied for her work atm.

    I did test it in the past, running my own performance test, and it handled it fine, which was why I was so surprised when my Intel HD4000 struggled so much in a simple game.

  • I think it's starting to look like:

    (C2 engine + real game + iGPU) != (60fps 1080p)

    Not exactly; it only applies to Intel iGPUs.

    I have an AMD A10 HTPC rig for playing movies & lighter games on the TV.

    The AMD iGPUs in their APUs, like the A6/A8/A10 series, have no issues and are able to accelerate WebGL perfectly & efficiently.

  • This smells like an Intel problem: their iGPUs are not optimized to handle WebGL properly, leading to heavy load in otherwise simple 2D games. Especially since it's faster in canvas2d mode, where their raw CPU power can muscle through.

    I noticed this behavior early on. In the first Star Nomad I deliberately kept everything as simple as possible and only used one WebGL effect, on a rare sprite. Every time it spawned, my performance would tank HARD on my notebook with Intel HD4000. It was fine on Android devices and butter smooth on iOS.

  • Yeah, bunnymark isn't a real rendering performance test, it's probably CPU/memory bottlenecked.

    I don't know why people are talking about multicore and CPU limitations. You do realize the problem is PURELY on the GPU side, right? Unless it's harder for the CPU to keep up with 3000 objects in renderperftest than it is for the GPU to draw it...

    Do the bench with your setup and monitor GPU usage with GPU-Z; let's see the GPU load %. Bet you it's nowhere near your CPU utilization.

    I've just monitored GPU usage in the Airscape demo on Steam, at 1080p (auto resolution and standard). ~15-20% of my GPU is loaded while playing at 60 fps (with random stutters). It's definitely not a GPU bottleneck.

    I just ran your stripped-down test via Dropbox, and my GPU isn't even loaded enough (~1-3%!!) to clock up above idle. It's running at its idle 300 MHz clock instead of the loaded 880 MHz (which the Steam demo did reach).

    Edit: Radeon 7950 stock clocks with i5-3570K.
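
    One way to confirm that kind of logic bottleneck from inside the game itself, independent of GPU-Z, is to time the update step against the 16.7 ms frame budget. A rough sketch; timeLogicStep and updateFn are invented names, not a C2 API:

```javascript
// Sketch: measure how much of a 60 fps frame budget one logic tick consumes.
// If frameShare approaches 1.0 while the GPU sits near idle, the bottleneck
// is the logic (AI + combat simulation), not rendering.
const perf = (typeof performance !== "undefined")
  ? performance
  : require("perf_hooks").performance;    // Node fallback for testing

function timeLogicStep(updateFn) {
  const t0 = perf.now();
  updateFn();                             // AI + combat simulation for one tick
  const ms = perf.now() - t0;
  return { ms, frameShare: ms / (1000 / 60) };  // fraction of a 16.7 ms frame
}
```

    Logging the worst frameShare over a battle gives a concrete number to optimize against, rather than eyeballing Task Manager.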

  • I do tend to believe it's a C2 problem, or at least a 2D problem however. I just tried out this benchmark, which renders 150,000 cubes. On my integrated chip it runs at 40fps easily.

    3D engines can take advantage of GPU acceleration much better. That changed around the time hardware transform & lighting was incorporated into the GPU; before that, 3D games were also limited by the CPU setting up the scene.

    2D games tend to be CPU limited; since the fill-rate of modern GPUs is insanely high, the GPU is usually not the bottleneck.

    I'm wary of this limitation & of C2 logic being single threaded. Already in my early dev of SN2, I'm seeing 50-60% CPU usage (the GPU is barely doing any work at all!), all on one thread of my quad-core CPU.

    After SN2, I'll move to a 3D game engine for future games.

  • > http://www.goodboydigital.com/pixijs/bunnymark/ maybe this one?

    I was able to keep 30+ with 50K bunnies - I can't even begin to imagine doing a similar project in C2.

    Can anyone explain why Pixi is so fast?

    Perhaps it's GPU accelerated, i.e. GPU compute. It may also be multi-threaded, which isn't something C2 is (besides pathfinding).

    It's not good to be in 2015 with a game engine that is single threaded. It definitely limits creativity, since the available hardware isn't being fully utilized. Quad-cores are quite common these days.
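
    For what it's worth, browsers do expose one escape hatch for CPU-bound batch work: Web Workers. C2 doesn't expose them (beyond pathfinding), but a custom engine could ship a serializable AI pass to one. A hypothetical sketch; resolveTargets, the message shape, and ai-worker.js are all invented for illustration:

```javascript
// Sketch: offload a batch AI pass to a Web Worker so it doesn't block
// the render thread. Pure, serializable work ports cleanly to a worker.

// Pick the nearest enemy for each ship (returns an index per ship, -1 if none).
function resolveTargets(ships, enemies) {
  return ships.map(s => {
    let best = -1, bestD = Infinity;
    enemies.forEach((e, i) => {
      const d = (e.x - s.x) ** 2 + (e.y - s.y) ** 2;  // squared distance is enough
      if (d < bestD) { bestD = d; best = i; }
    });
    return best;
  });
}

// Browser/NW.js side: ship the batch to a worker, or fall back inline.
function offloadToWorker(ships, enemies, onDone) {
  if (typeof Worker === "undefined") {     // no workers available: run synchronously
    onDone(resolveTargets(ships, enemies));
    return;
  }
  const w = new Worker("ai-worker.js");    // the worker would call resolveTargets
  w.onmessage = e => onDone(e.data);
  w.postMessage({ ships, enemies });
}
```

    The catch is serialization: postMessage copies the data (structured clone), so this only pays off when the AI pass costs more than shuttling the state across, which is why per-tick movement stays on the main thread.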