jayderyu's Forum Posts

  • newt is suggesting using a WebGL shader. If you use a shader to change the colour, you can avoid the cost of keeping extra colour-changed versions of your sprite.
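    As a rough illustration of the idea (a minimal sketch, not C2's actual effect API; the uniform and varying names are made up), a tint fragment shader multiplies the sprite texture by a colour, so one set of frames can be recoloured on the GPU instead of shipping recoloured copies:

    // Generic WebGL fragment shader source held in a JS string.
    var tintFragmentShader = [
        "precision mediump float;",
        "varying vec2 vTexCoord;",      // illustrative names, not C2's
        "uniform sampler2D uTexture;",
        "uniform vec3 uTint;",
        "void main() {",
        "    vec4 c = texture2D(uTexture, vTexCoord);",
        "    gl_FragColor = vec4(c.rgb * uTint, c.a);",   // tint RGB, keep alpha
        "}"
    ].join("\n");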

    Can I ask: what is the resolution of your character? I can imagine 16x16-based character sprites running at 200+ frames, but I can't imagine anything much larger using 200+ frames. I can barely think of any games that need so many animations, and the few that I do know tend to be very limited graphically, such as fighting games, which are just 2 characters and a background of set pieces.

    So, do you have a sample of what needs 200+ frames?

  • If I may suggest: use a Dictionary for active data, and save its JSON to local storage at reasonable intervals. Then you only need to load data from local storage at the beginning of the game, and you still have access to the standard data-checking features.
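    A minimal sketch of the pattern in plain JavaScript (the key name and save interval are just examples):

    // Keep live data in an in-memory object (the "dictionary"), read it back once
    // at startup, and persist it as JSON at a reasonable interval.
    var gameData = JSON.parse(localStorage.getItem("savegame") || "{}");

    function setValue(key, value) {
        gameData[key] = value;            // active data stays in memory
    }

    setInterval(function () {
        localStorage.setItem("savegame", JSON.stringify(gameData));
    }, 10000);                            // flush to local storage every 10 seconds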

  • As everyone said, C2 is built to run at 60fps. dt should, in theory, cope with dropped frames. However, that does not seem to be the case.

    I linked to an article about fixing the timestep and running the game logic at 30fps. The theory runs on the idea that the engine loops as fast as possible and tracks the dt difference, but the logic only updates at 30fps. So even if the game dips below 60fps, the logic is already at 30fps anyway, so you will never see the dip. However, as always, it's not 60fps, so for some there is no value in the design choice.
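    In plain JavaScript the structure looks roughly like this (updateLogic and render are placeholder functions; this is the general fixed-timestep idea, not C2's code):

    var LOGIC_STEP = 1 / 30;               // logic runs at a fixed 30fps
    var accumulator = 0;
    var last = performance.now();

    function frame(now) {
        accumulator += (now - last) / 1000;
        last = now;

        while (accumulator >= LOGIC_STEP) {
            updateLogic(LOGIC_STEP);       // advance game state in fixed steps
            accumulator -= LOGIC_STEP;
        }

        render();                          // draw as often as the display allows
        requestAnimationFrame(frame);
    }

    requestAnimationFrame(frame);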

  • I realized today that there is nothing wrong with the way the new storage is.

    Just use a Dictionary for your data. When you want to save long term, use the new LocalStorage. When loading, use a splash screen, and once it's loaded from local storage into the Dictionary you're all good. Honestly, this is how I do saving/loading in Java/C++/Unity anyway.

  • I agree with Rex. When I suggested a model to deal with the change for Android, and switching over to memory management similar to AJAX triggers, I suggested a cache. I still suggest a cache.

    Since Session storage is removed, as it does seem redundant, why not offer cache storage as part of LocalStorage? With cache storage there is always a running in-memory copy that is like a dictionary and behaves just like WebStorage. The cache is loaded at the beginning of the app and is regularly written to the storage location. So you get the benefit of the immediate old style while avoiding the workarounds the DB style requires. This also means you don't need "Then" as a follow-up.

    Also, add a save frequency for how often the cache should save. A sample would be "Data Change, Layout End, Once a Minute, Never". Then also have an option to "save cache".

    So now you get the best of both worlds: high-memory data should not be in the cache, whereas small-memory data would be.
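    A rough sketch of what such a cache layer could look like (the names and policy strings are hypothetical, mirroring the suggestion above):

    // In-memory cache that behaves like the old WebStorage, backed by local storage.
    function CachedStorage(key, policy) {
        this.key = key;
        this.policy = policy;                                        // "DataChange", "LayoutEnd", "Interval", "Never"
        this.data = JSON.parse(localStorage.getItem(key) || "{}");   // loaded once at app start

        if (policy === "Interval")
            setInterval(this.flush.bind(this), 60000);               // e.g. once a minute
    }

    CachedStorage.prototype.get = function (k) {
        return this.data[k];                       // immediate, no "Then" follow-up needed
    };

    CachedStorage.prototype.set = function (k, v) {
        this.data[k] = v;
        if (this.policy === "DataChange")
            this.flush();
    };

    CachedStorage.prototype.flush = function () {  // also call on "save cache" or layout end
        localStorage.setItem(this.key, JSON.stringify(this.data));
    };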

  • http://en.wikipedia.org/wiki/Duff's_device

    https://jsperf.com/duffs-device

    The current loop is

    foreach Object, then foreach behaviour.

    And it's not the best way. As previously addressed, the CPU optimization is similar in concept to a GPU optimization: handling the same code in scope, in chunks, allows for better optimization and performance. Running a ForEach(Object) then ForEach(Behaviour) puts the object at the top: the loop starts with an object, then goes to its first behaviour, but since the following behaviours are never the same, the CPU never optimizes as well as it would by handling all of the same behaviours at once. As for ignoring Duff's device: I would do so in non-game programming, but game engine programming is all about finding optimal performance. Games are intensive and there is no reason not to take advantage of any and all coding tricks.

    Unless you're relying on IRQs, all listeners are still based on a loop: an event happens, then the system loops through the listeners, firing off the event. In this case, for everything, which as you point out would be chaotic, and a chaotic loop results in little to no optimization of the runtime.

    As for parallel loops, we might as well call it threading. In this case that would be done with a Web Worker. This is doable, but it comes down to managing effective data transfer between the different threads with no conflicts. I would love to see threading implemented in a game loop, but since memory needs to be transferred, I'm not entirely sure the memory moving would save time. It could do so in large memory batches.
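    A minimal sketch of handing a batch of per-instance data to a Web Worker (the worker script and message format are hypothetical). Passing the ArrayBuffer in the transfer list moves it instead of copying it, which is the "memory moving" cost in question:

    var worker = new Worker("behaviour-worker.js");      // hypothetical worker script

    var positions = new Float32Array(1024 * 2);          // x,y pairs for 1024 instances
    worker.postMessage({ cmd: "tick", positions: positions.buffer }, [positions.buffer]);
    // After the transfer, positions.buffer is detached on this side until the
    // worker posts it back.

    worker.onmessage = function (e) {
        positions = new Float32Array(e.data.positions);  // take the updated buffer back
    };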

    On a note, I forgot to write that there are also pre-tick and post-tick loops, which again could take advantage of scope caching in a Duff's device.

    I like the idea of clicking on a function, which then takes you to the definition. That's good :)

  • I wrote this as an update to the document draft. However I think this could improve game performance overall going forward.

    https://docs.google.com/document/d/1pNR ... ZGe8/edit#

    It's better formatted on Google Docs.

    Engine Runtime Code execution

    Having spent some more time in the main game loop, and on ways to take advantage of memory, cache optimization for Duff's Device (below) has led to a different suggestion. The current design of the C2 main game loop for objects is:

    Current Loop

    loop through all object types
        loop through instances of the object type
            do object pre-tick
            do update
            loop through behaviours
                execute behaviour update
            do object post-tick

    This is the actual code. Essentially the same loop is run twice: once for tick and once for post-tick.

    for (i = 0, leni = this.types_by_index.length; i < leni; i++)
    {
        type = this.types_by_index[i];

        if (type.is_family || (!type.behaviors.length && !type.families.length))
            continue;       // type doesn't have any behaviors

        for (j = 0, lenj = type.instances.length; j < lenj; j++)
        {
            inst = type.instances[j];

            for (k = 0, lenk = inst.behavior_insts.length; k < lenk; k++)
            {
                inst.behavior_insts[k].tick();      // first pass: every behaviour's tick()
            }
        }
    }

    for (i = 0, leni = this.types_by_index.length; i < leni; i++)
    {
        type = this.types_by_index[i];

        if (type.is_family || (!type.behaviors.length && !type.families.length))
            continue;       // type doesn't have any behaviors

        for (j = 0, lenj = type.instances.length; j < lenj; j++)
        {
            inst = type.instances[j];

            for (k = 0, lenk = inst.behavior_insts.length; k < lenk; k++)
            {
                binst = inst.behavior_insts[k];

                if (binst.posttick)
                    binst.posttick();               // second pass: optional post-tick
            }
        }
    }

    In the above, the loop jumps into multiple nested functions. Because the loop goes down and then back up, the system is constantly clearing the cache for a new behaviour, then coming back to the same behaviour for the next instance. There is little opportunity for the JIT to truly optimize the CPU cache for either function variables or object function code (as related to Duff's device). As an analogy with the GPU/WebGL: part of efficient rendering is to package sprites together on a texture. That way there is less texture swapping, which reduces memory transfer to the GPU cache, fewer draw calls and overall much better performance, which C2 already works towards. However, the CPU also benefits significantly from caching code and memory variables (as with the Duff's device below).

    I propose a loop where the CPU can take advantage of code/behaviour batching, which can improve JIT optimization at runtime. It is less deep than the above, and also offers the awesome feature of priority behaviour control. This of course still works off the basic idea of a world object where everything is a behaviour.

    Proposed loop

    loop through the different behaviours
        loop through all of the same behaviour on all objects
            execute behaviour update

    How does this work instead?

    This loop does not loop through objects.

    This loop does not go through the behaviours in an object.

    This has the behaviour at the top, which then manipulates its world object.

    There is a BehaviourList. This list contains a BehaviourInstanceArray for each behaviour, and each array contains a reference to each and every instance of that behaviour.

    Abstract sample below:

    BList[
        Sprite[]
        Collision[]
        Platform[]
        Solid[]
        Pin[]
    ]

    When an object is created, the code adds its behaviour instances to the matching arrays. When the object is destroyed, the behaviour instances are removed from the arrays. This also means that the CPU can cache the update function and run through them all in one go, which offers the Duff's device benefit below. However, the system also offers another benefit.

    Sortable behaviour execution. The above sample shows Sprite at the top and Pin at the bottom, but this list can be sorted, offering control over what is executed first to last.
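    A rough sketch of that bookkeeping in JavaScript (the names and the priority field are mine, not C2's):

    // One flat instance array per behaviour type, kept sorted by priority.
    var behaviourLists = [];                             // the "BList" above

    function registerBehaviourType(name, priority) {
        var list = { name: name, priority: priority, instances: [] };
        behaviourLists.push(list);
        behaviourLists.sort(function (a, b) { return a.priority - b.priority; });
        return list;
    }

    // Called when a world object is created / destroyed.
    function addInstance(list, binst) {
        list.instances.push(binst);
    }

    function removeInstance(list, binst) {
        var idx = list.instances.indexOf(binst);
        if (idx !== -1)
            list.instances.splice(idx, 1);
    }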

    So the loop is

    for (var i = 0, length = blist.length; i < length; i++)
    {
        var bInstList = blist[i];           // every instance of one behaviour type
        var iterations = bInstList.length;
        var idx = 0;

        var n = iterations % 8;             // handle the remainder first
        while (n--) {
            bInstList[idx++].tick();
        }

        n = (iterations * 0.125) ^ 0;       // == floor(iterations / 8): unrolled blocks of 8
        while (n--) {
            bInstList[idx++].tick();
            bInstList[idx++].tick();
            bInstList[idx++].tick();
            bInstList[idx++].tick();
            bInstList[idx++].tick();
            bInstList[idx++].tick();
            bInstList[idx++].tick();
            bInstList[idx++].tick();
        }
    }

    So here is the game loop. It takes advantage of CPU caching, is far simpler in design, and allows for behaviour execution order. So instead of object top-down, it's behaviour top-down. There are probably other optimization techniques that can be applied.

  • Don't use per-animation-frame colliders. Each collider on a different frame will trigger the collision again.

  • Just wanted to chime in on a couple of these to put them in perspective; and not so much to refute them.

    Like others said, a simple Crappy Fird clone takes 30MB and uses 20~40% CPU and still has some frame drops... Angry Bots (default demo of Unity) takes 38MB, uses around the same % of CPU, no frame drops.

    Unity has about a 10MB overhead at the core, which is about 7MB less than Crosswalk. Unfortunately, there does seem to be some unwanted overhead from the browser aspects. I keep saying it, but I truly believe Scirra should take Chrome and customize a version of it, removing the DOM as a priority.

    Eli0s's Fancy Benchmark runs at 6~9 fps CPU 25~40% in Chrome, 8~9fps CPU 70~90% in FireFox on my Samsung Galaxy S4 while Epic Citadel runs at constant 60fps on default and 30~55fps at Ultra High Quality (rendered at 1080p), and Angry Bots mentioned above...

    So yeah, "HTML5 has close to native performance" / sarcasm

    The Fancy Benchmark is nice, but to refer to my comment above: Scirra should just use a stripped version of Chrome; that would solve a lot of the overhead issues.

    Epic Citadel is gorgeous. However please note that 99% of Epic Citadel is

    • static environments,
    • quadtree collision detection,
    • binary spatial tree drawing determination.

    in comparison

    • C2's poor-performance demos are batches of moving objects. Check out Goo Create for a proper 3D static-environment comparison.
    • C2 uses cell collision detection. However, don't let the name fool you: it's still brute-force collision detection within the cells.
    • C2 probably uses the cells to determine drawing, or does it by brute force.

    This all comes down NOT to HTML5 or the browser, but to a poorly optimized critical performance pipeline in the C2 engine, which I have been requesting overhauls of for 3 years now. It's why I requested a focus on a core malleable World Object with dynamically attached behaviours rather than C2's rigid plugin system. That would allow more flexibility to overhaul the choking points in C2.

    Maybe the CPU part is close to native, but the graphics part is not even close. I've stressed my device to test Epic Citadel and Angry Bots at a lower framerate, and guess what, at 15~20fps they still look more fluid, with no stutter, than C2 at 45~50fps

    I have a Unity game on mobile whose start runs at 15 to 20fps on 2-year-old devices. Unfortunately, that's unoptimized due to time constraints. The game is very Flappy Birds-like: 1 character moving to a tap, though we have 3 more animations on the screen.

  • I helped one of the forum members take his game, which ran at 5fps and would crash on mobile, to 60fps and memory efficient on the same mobile device under Crosswalk. Use the info however you want.

    Eh, I'll expand.

    Scirra promotes the idea that there is no need to optimize and no need to micro-optimize. This, however, is untrue. Not only should you micro-optimize, you should also be wary of C2's own plugins. Some of C2's plugins will in fact hit your performance hard in ways you wouldn't expect. However, if you apply good game design to your efforts, micro-optimize, use plugins carefully, and replace plugins in some cases, you can get decent-running games on mobile.

    1. Sort your images into types and layers.

    You need to take advantage of C2's quirky texture packing to maximize the WebGL renderer. You won't notice this being an issue with small games; you will on bigger ones.

    • All UI images are in one sprite.
    • All platform images are in one sprite.

    so on etc.

    2. Each Sprite object is its own texture pack.

    Right from the get-go this relates to 1. Each Sprite object will have its own images packed into its own sprite sheet, so if you repeat images in 2 different Sprites, those images will be repeated in more than one sprite sheet. So doing number 1 is mandatory.

    3. Behaviours carry performance weight.

    Don't attach plugins willy-nilly like there's no tomorrow. A simple plugin such as Pin has a performance cost that curves up as you add more. It's strange to think that Pin, which just works out an XY value, hits performance, but it does. A few objects with Pin are fine; a few hundred objects with Pin and you will start to notice.

    4. Some behaviours are just slow.

    I will use this one as an example. I wanted a rotating background in my game, so I made a 2048 by 2048 image. The game ran at 60fps with the background not rotating. As soon as I added the Rotate behaviour, performance dropped to 8fps. Whoaawhh.

    I switched to rotation in the EventSheet and that jumped performance to 30fps.

    I then rotated in the EventSheet every other tick (i.e. updating the image rotation at 30fps) and the game ran at a buttery smooth 60fps. And there was no visual difference.
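    In plain JavaScript terms, the every-other-tick idea is just this (the names are placeholders, not C2's API):

    var rotateThisTick = false;

    function onTick(dt) {                                 // called once per 60fps tick
        rotateThisTick = !rotateThisTick;
        if (rotateThisTick)
            background.angle += rotationSpeed * dt * 2;   // apply two ticks' worth, so speed is unchanged
    }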

    Lesson: don't use the Rotate behaviour. I don't know why; I looked at the code and nothing in it seems like it would hit performance. But eh.

    Oh, the game was running on a 6-year-old Tegra 2 tablet, so we are talking pretty old by comparison these days.

    I had a link to a C2 game where the developer was showing a video of his main menu, which was an orbital space system. Clicking on an area would scroll the rotating systems to another set of rotating systems. The planets, moons and stars acted as the button interface. This ran at 60fps on a 1-core chip with 512MB, was buttery smooth, and supported 3-level-deep parallax layering.

    However, I will be brutally honest. Because Scirra doesn't place much value on micro-optimization, far more responsibility is put on C2 developers, and new C2 developers just can't cope with the rigid performance demands. Most just come in and start making a game, use C2 in the most intuitive manner possible because it's so easy, and then find that doing so causes extremely bad performance. The few I have encouraged towards rigid discipline, and who listened, tend not to have many performance issues.

  • ROFL

    I'm with newt. I feel C2 would benefit from everything being a Behaviour, with each Behaviour being individually attachable to any WorldObject.

    Then use some kind of CreateCopy( UID ) as the method for creating new ones.

  • Tokinsom is talking about object 0. When you use Spawn Object or Create Object, C2 creates the object based on a template version of that object. However, as I understand the creation process (this could be off), the first instance defined in the first layout containing that object becomes the template object.

    The first solution is that anything you call Create on should only exist on some form of Assets layout. However, there is a problem.

    Let's give an example.

    1. I have an Assets layout where I create the first-ever version of the object. We call this Object[0].

    2. However, I end up using Object[1] in the main menu, and I end up doing some customizations.

    3. I create Object[.....] during gameplay.

    From a development point of view, Object[0] is my template object that sets the standard for create-and-forget. However, there is a problem: because Object[1] is custom and is the first one created across the used layouts, my game layouts will use Object[1] even though I want Object[0] as my default.

    Tokinsom is suggesting that there be a toggle field in the object's properties where an instance can just be set as the default, so that the confusion over Create precedence is not an issue.

    My suggestion is to copy Unity's Prefab system, where objects are popped into a dedicated Prefab folder. This way prefabs are always the default. Another suggestion would be to create a Prefab layout instead, so that any objects on that layout take precedence for creation settings.

  • Honestly. I see similar threads in the Unity forums.

    Platform X is missing B

    Platform Y is too slow

    Platform Z bleah

    I just participate in these ones

  • This would require a larger team. This is one of those matters where the benefits have to outweigh the negatives. For a 1-developer team, it's not going to happen. However, if Scirra were a 20-developer team and the cost-benefit worked out for developing an abstraction layer and then a per-platform player, then Scirra would revisit the idea. At this time, though, it's not practical for such a small team.

    Want to see this happen? Release some massive games that bring in millions of dollars. That would bring in the larger, dedicated companies. With such a flow of developers and income, we would see the possibility of a player. Until then, it's not happening.
