fisholith's Forum Posts

  • This would scale up a 100 x 100 area of your game to fit the 400 x 400 window, meaning every game-world "pixel" would actually be a 4x4 block of screen pixels.
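    As a quick sanity check of the numbers (a throwaway sketch, not anything from C2 itself):

    ```javascript
    // Integer zoom factor needed to fit a game-world area into the window,
    // using the 100 x 100 world and 400 x 400 window from the example above.
    function zoomFactor(worldSize, windowSize) {
      return windowSize / worldSize;
    }

    const factor = zoomFactor(100, 400); // 4
    // So every game-world pixel is drawn as a 4x4 block of screen pixels.
    ```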

    The problems...

    Now, just zooming in will make everything bigger, but if we stop here, you'll also start getting blurry pixels, pixels that don't snap to the pixel grid, and even diagonal pixels if anything rotates.

    So, there are a few extra things you'll need to do...

    Setting up for pixel art retro games

    1: In the Project panel, in the tree view, click the root folder for your project.

    (Now in the Properties panel, you should see all the global properties for your project.)

    2: In the "Project settings" section, set "Pixel Rounding" to "On".

    3: In the "Configuration settings" section, set "Sampling" to "Point".

    That should fix the blur and pixel grid snapping, but not the object rotation or scaling issues. (I'll come back to this in a sec...)

    Point Sampling

    "Point Sampling" means that pixels will stay sharp when you zoom in on them, instead of fading smoothly into each other.

    Pixel Rounding

    "Pixel Rounding" means that objects are displayed as if they are always at integer coordinates. This means that an object with an X position of 99.8 will be displayed as if it were at X position 100. This keeps objects from drifting off the pixel grid. This is really nice if you want to keep everything snapped to the pixel grid, but there's one catch: you can still misalign an object's pixels from the world's pixel grid by rotating or scaling the object.
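    The rounding rule can be sketched like this (just an illustration of the behavior described above, not C2's actual renderer code):

    ```javascript
    // Round a display position to the nearest integer pixel coordinate,
    // mimicking what "Pixel Rounding" does at draw time.
    function roundToPixel(x, y) {
      return { x: Math.round(x), y: Math.round(y) };
    }

    const drawn = roundToPixel(99.8, 20.3);
    // drawn.x === 100, drawn.y === 20, matching the 99.8 -> 100 example above
    ```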

    Pixelation post-processing

    I've run into this rotation and scaling issue in a few of the retro-style games I've made.

    (In fact, my member avatar is from one of them. At the moment, anyway.)

    I solve it by pixelating the game environment by the same value as my zoom factor. That is, if my zoom/scaling factor is 3, then I pixelate the game into a mesh of 3x3 pixel blocks. This ensures that anything that rotates off the grid is still converted into a single on-grid pixel with the correct pixel scale.
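    The block-snapping idea looks roughly like this (a hypothetical helper just to illustrate the math; the real version is a WebGL effect):

    ```javascript
    // Snap a coordinate down to the top-left corner of its
    // zoomFactor x zoomFactor block, so every block samples one color
    // and renders as a single uniform "fat pixel".
    function snapToBlock(x, y, zoomFactor) {
      return {
        x: Math.floor(x / zoomFactor) * zoomFactor,
        y: Math.floor(y / zoomFactor) * zoomFactor,
      };
    }

    // With a zoom factor of 3, x values 6, 7, and 8 all land on block corner 6.
    const p = snapToBlock(7, 11, 3); // { x: 6, y: 9 }
    ```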

    You can pixelate the entire game by using WebGL effects.

    Hope that helps.

  • Thanks for the replies.

    DuckfaceNinja, could you explain the multiple array filters you mentioned? I'm not sure what you mean, but it sounds interesting.

  • So far, what I think I understand is:

    It looks like "Global" objects exist across all layouts, with all their state information being globally consistent and available all the time, independent of any single layout. (e.g. Even their position in a layout will be preserved across layouts, as well as their layer number (not name).)

    It looks like objects with the "Persist" behavior, have their state information tied to their home layout, however that state info is saved on exiting the layout, and restored on reentering the layout.

    Is that kind of what's going on?

    Also, what happens if an object is "Global" and has the "Persist" behavior? Does that cause conflicts? Would there even be any point, or are Persist characteristics essentially subsumed by Global characteristics?

    Other interesting stuff about globals I've come across:

    Global or not, object instances placed in a layout don't seem to get spawned until that layout is run, so a global object instance placed in a layout will not exist until that layout is first executed.

    A global object instance, once created, and then destroyed, will not be recreated if you return to the layout that first spawned it. So, I guess the layout knows that it already spawned that global object instance and won't do it again until the game is restarted?

    This seems to be the case even if, in the editor, you place a global object instance in "Layout_A", then run "Layout_A" to spawn the placed global, then destroy it in "Layout_B", and then return to "Layout_A". The global object instance will stay dead.

    Is there any other interesting stuff worth knowing about Global or Persist?

  • Hey all

    I was hoping to get some feedback on an idea.

    Sorry it's kind of long.

    What I'm trying to build:

    I am interested in making a custom particle system that uses sprite objects as the particles.

    Reason being, I'd like to have access to rotation, animation, collisions, and possibly other complex behavior on a particle-by-particle basis.

    I'm looking to build this in events, rather than making a new plugin.

    Details:

    Ultimately what I want is a single emitter object ("ParticleEmitter"), such that I could make an instance of it for every particle effect I need. I would also have several different sprite objects each serving as a particle type, (e.g. "ParticleSnow", "ParticleFire", "ParticleRain", etc).

    The emitter object would have an instance variable that stores the name string of the particle object it should spawn.

    And this is where things get tricky. As far as I know, in C2, there's no generalized way to use that name string to spawn an object. Every single object type must have its instances explicitly spawned with event code dedicated to that object in particular.

    "Create by name" workaround:

    So, I have some thoughts on how I might work around this.

    For each Particle object (snow, fire, etc), I can use the Function object to create a "constructor" function of sorts. The name of the function will be the name of the object to be spawned, (maybe with a prefix like "spawn_").

    With this method, I still have to create one constructor function for each particle object, but now I get to store the name of the item to be spawned as a string in the emitter. The other benefit is that if I eventually have 30 different types of particles, I don't need an event with 30 switches that runs once for every emitter in the game, every game tick. The function object allows me to "point" only to the code I want to execute, and only for the particles I need to spawn.

    Initializing the new particle:

    I will still need a way of initializing the particle object I just spawned, so that I can populate its instance variables with information from the emitter (e.g. lifetime, speed, opacity, etc.).

    I could do this initialization within the constructor function event, but I would need to know which emitter is the particle's parent, and as far as I know I can't preserve picked objects through a function call. So, I was thinking that, inside the event that calls the constructor function, I could pass the parent emitter's UID as a function parameter. Then, once I arrive inside the on-function event, I can use that function parameter to pick the parent emitter by UID. From there I can do the particle's initialization, with the correct emitter having been picked.
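    As a rough analogy in plain JavaScript (C2's Function object is event-based, so this only sketches the dispatch-plus-UID pattern; names like spawn_ParticleSnow are made up):

    ```javascript
    // A registry of "constructor" functions, one per particle type,
    // keyed by a prefixed name string.
    const constructors = {
      // Each constructor takes the parent emitter's UID, so the new
      // particle can be initialized from that emitter's settings.
      spawn_ParticleSnow: (emitterUID) => ({ type: "ParticleSnow", emitterUID }),
      spawn_ParticleFire: (emitterUID) => ({ type: "ParticleFire", emitterUID }),
    };

    // The emitter stores only a name string; spawning is a lookup,
    // so no 30-way switch has to run for every emitter every tick.
    function spawnByName(particleName, emitterUID) {
      return constructors["spawn_" + particleName](emitterUID);
    }

    const particle = spawnByName("ParticleSnow", 42);
    // particle.type === "ParticleSnow", particle.emitterUID === 42
    ```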

    It would be great if I could put all the particle objects in a family and then use generalized initialization code to set up the particles using family references, but I'm not sure if I can make that work. I don't yet know enough about how object picking interacts with families and functions.

    Does this make sense?

    This might take a while to build, so I wanted to double check with the forum to see if there is maybe a simpler solution I'm overlooking.

    If you made it this far, thanks for taking the time to read all that.

  • Wow, very helpful replies.

    Thanks Ashley, "Effects are supported" sounds like exactly what I'm looking for.

    Likewise SHG, thanks for making a demo cap. That's awesome.

  • Is there a way to detect WebGL availability?

    I'm working on a game that requires WebGL.

    I would like to be able to detect a lack of WebGL support, and display a "This game requires WebGL" notice.

    --- Edit:

    I just thought of a workaround.

    I can make my "WebGL Required" notice cover the game's title screen, and then place a WebGL shader on it that makes it transparent. If WebGL is working, then you won't see the notice. If WebGL is not working then the notice will show up.
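    For what it's worth, outside of C2's event system, browsers let you test this directly: asking a canvas for a "webgl" context returns null when WebGL is unavailable. A sketch (the "experimental-webgl" fallback name is used by some older browsers):

    ```javascript
    // Returns true if a WebGL context can be created on the given canvas.
    // Written against a canvas-like object so it's easy to test with stubs;
    // in a real page you'd pass document.createElement("canvas").
    function supportsWebGL(canvas) {
      try {
        return !!(canvas.getContext("webgl") ||
                  canvas.getContext("experimental-webgl"));
      } catch (e) {
        return false;
      }
    }
    ```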

    I'm still interested to know if there is an event condition that can check for WebGL support, if anyone knows of one.

    Thanks.

  • Awesome, thanks. :)

    Do you know off hand if "viewOrigin" refers to the layout pixel coordinates of the center of the viewable area, or the top-left corner of the viewable area? (Also, if it's the top-left corner, what would it refer to if you rotated the layout (e.g. by 45 degrees)? Maybe the non-rotated bounding box?)

    Just curious.

    Thanks again, for taking the time to reply to all my questions, Ashley. :)

  • I may be thinking about this wrong, but I think the shader I linked to shows that the coordinate space flipping and scaling does not result in everything coming out as if it were never flipped. That is, at least on my computer and version of Construct 2, the end result appears to be different depending on what the shader is applied to and how it is applied.

    My specs are:

    Construct 2 version: r173 64-bit.

    Previewed in: Firefox & Node-Webkit (same behavior in both)

    OS: Win7 x64.

    RAM: 16 GB

    GFX: Geforce GTX 780 Ti

    Brief disclaimer: I always feel a bit uneasy about describing a potential problem, because I don't want to come off as complaining. My feeling is that Construct 2 is awesome, and thanks to the effort of the devs, grows steadily more awesome with each release. So, I'm only bringing up the coordinate space issue because if it really is a problem that isn't just on my computer, it may eventually be one more place C2 has room to expand its aforementioned awesomeness.

    Again, I could be thinking about this coord space thing wrong, but it doesn't look to me like there is any way to reliably address a desired location on an arbitrary object (layer, sprite, etc) in the way you could in Construct Classic.

    The issue I'm running into at the moment is that I'm trying to recreate a pretty simple elevation gradient effect I created for Construct Classic, but presently in Construct 2 the coordinate system doesn't necessarily face a consistent direction, or consistently refer to the bounding box of the object being processed.

    A number of my Construct Classic shaders that I'm trying to port to Construct 2 won't work without a coordinate system that behaves consistently. What I mean by consistent, is that the origin (0,0) is always in the same corner of any object and the (1,1) point is always in the opposite corner, whether you're dealing with sprites or layers, and whether the effect is the first in the chain, or later in the chain.

    At the moment, if I, say, get the object's color at (0,0), I could be referring to the bottom-left corner of a layer, the top-left corner of a sprite (only if it's the first effect in the chain), or a point that may not be anywhere near the sprite's bounding box, if the effect is second or farther on in the chain. And depending on the location of the sprite within the layout, (0,0) could also potentially refer to any point inside the sprite's bounding box, since the coordinate space gets entirely decoupled from the sprite after the first effect.

    By contrast, in Construct Classic, the effective (0,0) point was always the top-left corner of any object you were dealing with, and (1,1) was always the bottom-right, whether you're dealing with sprites or layers, and regardless of how many effects were in the chain.

    As a workaround, I could try building in extra effect parameters that users could use to compensate for the various possible coordinate spaces, but they would have to be things like "Enter the dimensions of your sprite only if this is the 2nd effect in the chain", "Enter the dimensions of the viewable screen area", "Check this box if this effect is being applied to a sprite", "Check this box if this is not the first effect in the chain" and "Check this box if this effect is being applied to a layer". This would require multiple layers of branching code to be placed inside the shader. But it could theoretically work. Though you might also have to pass the location of the sprite within the viewable screen area to the effect in order to resolve some of the 2nd-effect-in-the-chain issues.

    Another possibility would be to make multiple slightly different versions of an effect to handle each case. I could make a Layer version, a Sprite-1st-effect version, and a Sprite-2nd-or-later-effect version.

    Again, I don't want to sound like I'm complaining or criticizing here. And like I said, this may just be on my computer. I haven't had time to test it out on another machine, but the effect I linked to in the first post should show what I'm talking about if it's not restricted to my computer.

    It's very possible I'm doing something wrong here, but it looks like the Y component of vTex is sometimes positive upwards and sometimes positive downwards, with the origin sometimes at the bottom-left, and other times at the top-left, depending on what you're applying the effect to, and how you're applying it.

    For a sprite the vTex coord system seems to have its origin at the top-left of the sprite, with the (1,1) point at the bottom-right of the sprite, making Y-positive-down, which is what I would expect. However, this seems to only be true for the first effect placed on the sprite. All subsequent effects applied to the sprite seem to have a flipped and scaled coordinate system with the origin at the bottom-left of the screen (the screen, not the sprite) and the (1,1) point at the top right of the screen, with Y-positive-up, instead of down like it is for the first effect.

    To try to better understand what was going on, I created an effect that simply places the X and Y values of vTex in the R and G channels respectively. Thus, red increases with positive X, and green increases with positive Y.

    gl_FragColor = vec4( vTex.x , vTex.y , 0.0 , 1.0 );

    You can download the effect here to try it for yourself:

    https://dl.dropboxusercontent.com/u/38240401/Public%20Distribution/Construct2/Effects/fi_effect_ShowUV.zip
    
    When I apply this effect to sprites, I get a black corner in the top-left of the sprite, with green increasing downward, and red increasing rightward, just like you'd expect, both colors maxing out at the bottom-right corner of the sprite. When I apply the effect a second time to the same sprite, I get a "window" onto a coord system where there is a black corner at the bottom-left of the screen (the screen, not the sprite), and green increases upwards until it maxes out at the top edge of the screen. Likewise, red maxes out at the right edge of the screen.
    
    When I apply this effect to layers, I get a black corner in the bottom-left, with green increasing upwards, and red increasing rightward, both colors maxing out at the top-right corner of the screen.
    
    This inconsistency in the coordinate systems seems to occur both in the editor and at runtime.
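    If the flip really is what's happening, converting between the two conventions is just a mirror of the Y coordinate. A sketch of the relationship (a hypothetical helper, not part of C2):

    ```javascript
    // Convert a normalized texture coordinate between the Y-down convention
    // (origin at top-left) and the Y-up convention (origin at bottom-left).
    // The mapping is its own inverse: y' = 1 - y.
    function flipY(uv) {
      return { x: uv.x, y: 1 - uv.y };
    }

    const topLeft = flipY({ x: 0, y: 0 }); // { x: 0, y: 1 }
    // i.e. the Y-down origin corresponds to the bottom-left in the Y-up
    // system, and applying flipY twice returns the original coordinate.
    ```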
    
    Again, I could be doing something wrong here, so I figured I'd run it by the forum. Any advice is welcome. If anyone wants to try it out and report back with what they find, you can download the Show UV effect above.
  • Is there a list of the specialty variables that Construct 2 makes accessible in GLSL?

    e.g. From looking through scripts I found "pixelHeight".

    In Construct Classic, I recall the following variables being available:

    boxWidth

    boxHeight

    boxLeft

    boxRight

    boxTop

    boxBottom

    pixelWidth

    pixelHeight

    hotspotX

    hotspotY

  • Sounds good. Thanks again. :)

  • Just want to double check the method for enabling dev mode.

    The SDK page says,

    "and add the key devmode and set it to 1 (DWORD value)."

    Does this mean that, inside the "html5" key (folder), I need to create a new key (subfolder) named "devmode", put an unnamed DWORD inside it, and set it to 1?

    Or does it mean that, inside the "html5" key, I need to create a DWORD named "devmode", and set it to 1?

  • Good point. Additionally, adding and removing effects is probably much less of an issue than previewing changes anyway, because in general if I'm working on a new effect, I only need to restart C2 once for it to show up. After that it's just a matter of testing incremental changes.

    The dev mode sounds like it's pretty much exactly what I'm looking for.

    Thanks :D

  • Other than exiting and restarting Construct 2, is there a way to force the list of effects to reload?

    The only reason I ask is that it would be a bit easier to test incremental changes to effect code if I could avoid regularly restarting Construct.

    On the plus side, what I've noticed so far is that in some cases I can see changes to effect code when running the compiled version of a project, which is nice, but it's a limited workaround, as it doesn't really help if the change was something like a new parameter.

    At the moment, if I create a new effect, I save my project, close Construct 2, restart it, and reopen the project, to see the new effect show up in the effects list.

    Similarly, if I make a change to an existing effect, the change will show up when I compile and run the project, but it won't show up in Construct 2's layout editor, even if I remove and reapply the effect.

    Anyway, I hope I don't sound like I'm complaining or anything. I just want to make sure I'm not missing something, or overlooking a better workaround.

  • Wow, awesome. Thanks. :D

    A few more questions just occurred to me, actually.

    5: If you place an effect on a sprite, can you sample parts of a foreground sprite that are off-screen?

    Just checking because I played around with a blur, and it looks like it ignores pixels that are outside the visible screen area. More specifically, it looks like sample coords that refer to off-screen locations are clamped to the closest equivalent on-screen location.

    6: Can the origin of the vTex coords ever be outside the visible screen area?

    i.e. If a foreground sprite's top-left corner moves off-screen, does the origin of the vTex coords move off screen with it, or are the vTex coords squeezed to fit inside the edges of the visible screen area?

    In Construct Classic, I vaguely recall the HLSL coords acting a little different as a sprite started moving off-screen.

    7: Is there any place I should look to find out more about the details of the Construct 2's GLSL related stuff, or is the forum the best place to ask about this kind of thing?
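    The clamping behavior described in question 5 would correspond to pinning each sample coordinate into the visible [0, 1] range, i.e. something like (an illustration, not C2's actual sampler code):

    ```javascript
    // Clamp a normalized sample coordinate component into the visible
    // [0, 1] range, so off-screen samples collapse onto the nearest edge.
    function clampCoord(v) {
      return Math.min(1, Math.max(0, v));
    }

    // clampCoord(-0.3) -> 0, clampCoord(1.7) -> 1, clampCoord(0.5) -> 0.5
    ```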

    Thanks again. :)