ErekT's Forum Posts

  • Wow! Over two hundred thousand in my case :P Thanks, that worked nicely.

  • Task manager says:

    Processes running -> 2 nw.exe, no game.exe.

    Disk usage is hogged by nw.exe, reading at about 250 KB/s. svchost.exe also uses quite a bit: 130 KB/s.

    No antivirus software installed, just Comodo Firewall. But the firewall always informs me if it decides to block something it doesn't trust, so I don't think that's it.

  • Link to .capx file (required!):

    dl.dropboxusercontent.com/u/70562654/Prototype3.6_340x236_bugreport.capx

    Steps to reproduce:

    1. Run preview with node-webkit

    2.

    3.

    Observed result:

    The hard disk loads like mad but the preview never executes. To stop it I need to kill the nw.exe processes in Task Manager, which also takes a very long time to complete. This only happens the first time I try to run a preview; on subsequent runs the preview takes only a couple of seconds to get going. Closing and re-opening C2 makes no difference: if I've already run a Node-webkit preview once after a fresh boot, it will always run without problems. It also doesn't happen with an exported Node-webkit build.

    Expected result:

    Browsers affected:

    Chrome: no

    Firefox: ?

    Internet Explorer: ?

    Operating system & service pack:

    Windows 7 64-bit. Service pack 1

    Construct 2 version:

    r152    

    This started to happen around r142-r143 I think? Definitely sometime between r139 and r146. It doesn't seem to happen with new capx'es.

  • With the new 2D features in Unity, it will be impossible to beat it for functionality, but for getting things done quickly, C2 is pretty cool.

    Could be. But Unity's price tag is brutal. And the free version is pretty crippled/useless imo.

  • Sure it's not a problem with your Ubuntu installation/hardware setup? I've never known Scirra to fling falsehoods around to make a quick buck before :P

    As for Node-Webkit deployment on Windows, Windows 7 and XP have always run them problem-free here.

  • Thanks a heap for these pointers! :) It looks like my fumblings with webgl were pretty redundant. It's just as well! :P

    Point sampling: for WebGL this seems pretty trivial; just change the flag from linear sampling to point sampling for the render texture (a minimal sketch follows at the end of this post). I have no idea about canvas2d (yet), though. I've mostly wanted this to prevent undue slowdowns in WebGL rendering, and like you said, the added overhead of scaling another canvas might nullify any performance gains with canvas2d. I'll give it a try though; it should help with the sub-pixel rendering issue at least.

    In the meantime, thanks again :)

    EDIT: Seems that in order to account for applied shader effects I'll need to look into renderEffectchain() as well. Will do so when I have more time.
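
    For reference, here's a minimal sketch of what the linear-vs-point switch boils down to in plain WebGL. This is not C2's glwrap.js API; the function name and arguments are just for illustration.

        // Minimal sketch (plain WebGL, illustrative names): point sampling is
        // just a different filter parameter on the render texture.
        function setRenderTextureSampling(gl, texture, usePointSampling) {
            var filter = usePointSampling ? gl.NEAREST : gl.LINEAR;
            gl.bindTexture(gl.TEXTURE_2D, texture);
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, filter);
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, filter);
            gl.bindTexture(gl.TEXTURE_2D, null);
        }

    With NEAREST filtering the upscaled texture keeps hard pixel edges instead of being smoothed out, which is the whole point for low-res pixel art.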

  • Webgl is turning out to be a real horror show. How do humans work with this? Ahh.. :P

    Anyway, I'm trying to start simple and create a 4-vertex quad to display on screen at the bottom of the drawgl() function. I've managed to create a buffer for the vertices in InitState and bind it to the array buffer. So far so good, I guess. When I try to draw the thing, though, I get an error message about shaderProgram not being defined. This shaderProgram seems to be specific to C2, so I can't dig up any solution on the internet. How do I define it? The DrawGL() snippet looks like this:

        gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexPositionBuffer);
        gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, squareVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
        // mat4.translate(matmv, [3.0, 0.0, 0.0]);
        gl.drawArrays(gl.TRIANGLE_STRIP, 0, squareVertexPositionBuffer.numItems);

    EDIT: Ah, nevermind. Figured it out.

    EDIT2: Right, this is what I have in DrawGL right now. It runs fine but I see no quad (for comparison, a standalone sketch follows at the end of this post):

        var shaderProgram = gl.createProgram();
        gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexPositionBuffer);
        mat4.translate(gl.matMV, [3.0, 0.0, 0.0]);
        gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, squareVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
        gl.uniformMatrix4fv(shaderProgram.locMatP, false, gl.matP);
        gl.uniformMatrix4fv(shaderProgram.locMatMV, false, gl.matMV);
        gl.drawArrays(gl.TRIANGLE_STRIP, 0, squareVertexPositionBuffer.numItems);

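    For reference, here is a standalone sketch of a minimal quad draw in plain WebGL, independent of C2's renderer (all names are illustrative). The key point: a program fresh from gl.createProgram() has no shaders attached or linked, so attribute and uniform lookups on it yield nothing and no quad can appear until vertex and fragment shaders are compiled, attached, and linked.

        // Minimal sketch, plain WebGL (illustrative, not C2's internals).
        var vsSource =
            "attribute vec2 aPosition;" +
            "void main() { gl_Position = vec4(aPosition, 0.0, 1.0); }";
        var fsSource =
            "precision mediump float;" +
            "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }";

        function compile(gl, type, source) {
            var shader = gl.createShader(type);
            gl.shaderSource(shader, source);
            gl.compileShader(shader);                 // (error checking omitted)
            return shader;
        }

        var program = gl.createProgram();
        gl.attachShader(program, compile(gl, gl.VERTEX_SHADER, vsSource));
        gl.attachShader(program, compile(gl, gl.FRAGMENT_SHADER, fsSource));
        gl.linkProgram(program);
        gl.useProgram(program);

        // 4 vertices in TRIANGLE_STRIP order, covering the centre of clip space.
        var verts = new Float32Array([-0.5, -0.5,  0.5, -0.5,  -0.5, 0.5,  0.5, 0.5]);
        var buffer = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
        gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STATIC_DRAW);

        var aPosition = gl.getAttribLocation(program, "aPosition");
        gl.enableVertexAttribArray(aPosition);
        gl.vertexAttribPointer(aPosition, 2, gl.FLOAT, false, 0, 0);
        gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);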

  • Thanks, that's a big help :)

    Okay, I'm deep in the guts of glwrap.js, tinkering about to figure out how it all works. My hope is that I'll understand it well enough to change the rendering pipeline around a bit so that all rendering takes place before the viewport gets scaled up (a generic sketch of that idea is at the end of this post). Which is probably way over my head, but what the heck, I'll try anyway. The main thing I'm puzzled about is the sequential order of it all. Is there even a fixed order to the rendering pipeline? The closest things I found were SwitchProgram and InitState, which only get run once. But that tells me little about at what point in the render loop methods like SetSize and Scale get called. I'm also not sure what kind of numbers get passed to indicate width and height for the viewport and whatnot. I assumed at first that these were the same as either the window width and height or the system resolution width and height, but when I tried to change them around I got weird scaling results and graphical artifacts.

    Just so you know, I don't plan to release a modified glwrap.js or anything. If I get it to work I'll try to put it into a separate plugin. For now though, I'm just trying to understand.
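
    This isn't glwrap.js-specific, but for reference, here's the general low-res-render-then-upscale technique sketched in plain WebGL under my own assumptions: render the whole scene into a small offscreen framebuffer, then draw that texture once, scaled up to the canvas with point sampling. All names and sizes are illustrative.

        // Sketch (plain WebGL, illustrative names): draw everything at low res,
        // then upscale the result once with point sampling.
        var lowResW = 320, lowResH = 180;

        // 1. A small colour texture to render into.
        var sceneTex = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_2D, sceneTex);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, lowResW, lowResH, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

        var fbo = gl.createFramebuffer();
        gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
        gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, sceneTex, 0);

        // 2. Each frame: render the whole scene into the low-res target...
        gl.viewport(0, 0, lowResW, lowResH);
        // drawScene();                       // all normal rendering goes here

        // 3. ...then switch back to the canvas and draw one textured quad
        //    covering the screen, sampling sceneTex with NEAREST filtering.
        gl.bindFramebuffer(gl.FRAMEBUFFER, null);
        gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
        gl.bindTexture(gl.TEXTURE_2D, sceneTex);
        // drawFullscreenQuad();              // scales the low-res image up

    The upscale then costs one extra draw per frame instead of per-object filtering.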

  • Do you have a code snippet to show? Hard to offer suggestions without it. Except that you need to restrict clipping to the current light sprite obviously.

    Actually I meant hundreds of man-hours on my part :P If I go all out developing something in C2 in the hopes of getting this feature down the road, only to discover a year later that upscaled low-res renders (or something equally effective) will never happen for whatever reason, then I'll be very unhappy. But hey, if it's too much work for Scirra to implement then sure, I understand; they have limited resources. I'd just like to know what's what so I can plan better.

  • Ashley

    I hate to be pushy, but it would be a big help if we could get some kind of answer. Are there any plans to tackle this? Is it possible with html5 and node-webkit? Construct 2 has a great workflow and set of features, and it's faster to develop with than any other gamedev software I know of. But much as I love using it, I'd rather not sink hundreds of potentially wasted man-hours into it based on a thin hope that maybe this crippling problem gets solved sometime in the future.

    Hmm, that's weird. The inconsistency could be due to sprite placement on the screen at different times (whether interpolation kicks in or not). If it happens to just some sprites, I thought it could be an almost invisible stray pixel somewhere that gets amplified from being drawn multiple times, like when several layers of blurred images get drawn on top of each other. But since you tried cutting away different portions, that's probably not the case.

    But sprites are packed onto sprite atlases, so if other sprites don't have an empty 1-2 pixel border around them they could still mess with the drawing of those that do.

  • Does it happen to all sprites or just some? What kind of effect is it?

  • 640x480 is pretty high for a pixel art game. If you plan to go that high wouldn't it be better to just use regular 2D painting techniques?

    Here's an interesting article on pixel art if you haven't read it:

    gamasutra.com/blogs/AdamSaltsman/20090724/2571/Pixel_Art_Freelance_Best_Practices__Guidelines.php

    And an excerpt:

    Unfortunately, a facet of this problem extends even to well-planned, well-intentioned projects. It's a quadratic problem; that is, doubling the resolution of your project actually quadruples the size of the art assets. After all, you can fit four 64x64 images into a 128x128 image. In many 2D mediums that just means using a bigger brush, no big deal, but in pixel art the ramifications are pretty serious. When you are painting your lines and details and anti-aliasing pixel by precious pixel you have 4 times as many elements to manipulate with tender, obsessive care.