Ruskul's Forum Posts

  • ...if you wanted to write a behavior that runs events using conditions / actions / expressions, you're doing things wrong, you can do that in construct itself.

    The reason I would do this is because a) it's still faster than the overhead of running C2 events, and b) I get sick of recreating events in project after project. I condense most of my events into behaviors and plugins because they're easier to reuse later than events are.

    On the other hand, I am curious. I must have missed this post last year, otherwise I would have responded sooner lol. But how would one access the methods of a behavior on another object from within the object whose behavior is running? Clearly that is probably better than accessing the ACEs, but idk... half the time the ACEs are already configured in such a way that a behavior script isn't very usable outside of those...

    You would need the instance of the object being accessed, but I'm unclear as to how one actually calls the method on the behavior attached to that object?

  • Use Google Drive or Dropbox always works

    Thanks, I saw the reply in my inbox and was like, "did I really ask that question?" >.< One would think that after trolling the forums for so long, and downloading from Dropbox in the process, I would have figured it out.

    It's okay if you want to edit your question to include something along the lines of: "Derp, dumb head"

    They may say there are no stupid questions, but it was stupid to have to ask lol

  • ... This is a classic case of something which looks easy to a user but in reality involves a great deal of architecting...

    Well... that figures... I had a hunch I was talking about something I had no clue about. lol

    Thanks for the detailed reply. If it could make it into C3 at some point, that would be really cool.

    R0J0hound - well, I've always skipped the parts of the tutorial that had nothing to do with the frag shader... *sigh*. Given your answer on a separate forum, I think I'm getting it.

    R0J0hound - so basically one would have to have a plugin that only does an effect then? (i.e. for each shader you wanted to make, you would have to create a plugin for that effect?) This is probably way over my head. I think I understand? This is probably going to take more work than I originally thought at the beginning. I'm trying to decide if it's worth it.

    I think I like the world of physics engines... you just have to know math... but this architecture stuff frustrates me for some reason.

  • Hello,

    I know there have been a number of people who have asked whether the following are possible...

    retro palette cycling?

    palette swaps?

    dirty lens / camera + dynamic bloom?

    ...And the answer is no, not in the way it would be best to do so, at any rate. I have been working on some shaders that would enable retro palette effects, and multiple-texture shaders like dirty lens plus bloom. I have a few other ideas. These can be implemented in Unity, for example, without a problem, but not in Construct, because...

    ...There is just one technical problem: in a Construct 2 effects file you can only reference two textures, the one being drawn and the one being drawn to. That's it: front and back.

    A huge part of making cool filters and effects (especially non-process-intensive ones, which is key for Construct 2 performance) is referencing other premade textures. Old-school film effects, complex film grain, and other camera effects can be done using no additional textures, but they have to compute the noise and patterns they create. Some effects (like palette cycling and image-based filters) just can't be done without additional textures, at least not without some form of compromise that usually hurts performance. Some can't be done, period.

    The solution is simple. Stupid simple as a matter of fact.

    All that is needed is for the effects portion of a plugin (where you add effects to a plugin or layer) to include another field where you can load an image (the ability to add more than one would be nice, but not needed, as you could combine multiple into one) and attach it specifically to the effect. Within the effect file you could then reference the image as...

    uniform lowp sampler2D samplerCustom1;

    uniform lowp sampler2D samplerCustom2;

    ...in addition to samplerFront and samplerBack.

    This means you could make super awesome palette-swapping filters, dynamic scratched-lens filters with HDR bloom, professional color LUTs with dynamic blending between different lighting schemes, and much, much more, all at a reasonable performance cost.
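
    For example, a palette-swap effect could then look something like this. This is just a sketch: samplerCustom1 and paletteOffset are placeholder names that would only exist if the editor actually exposed the extra texture slot as proposed.

    varying mediump vec2 vTex;
    uniform lowp sampler2D samplerFront;
    uniform lowp sampler2D samplerCustom1; // hypothetical: a 256x1 palette strip
    uniform mediump float paletteOffset;   // hypothetical: shifts the palette for cycling
    void main(void)
    {
        lowp vec4 front = texture2D(samplerFront, vTex);
        // Treat the source red channel as an index into the palette strip.
        mediump float index = fract(front.r + paletteOffset);
        lowp vec4 swapped = texture2D(samplerCustom1, vec2(index, 0.5));
        gl_FragColor = vec4(swapped.rgb, front.a);
    }

    Animate paletteOffset over time and you get retro palette cycling almost for free.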

    As I said, this should be easy to add, but it needs to be officially supported. Ashley likes to give priority to things that are a.) actually needed and b.) desired by the community. I think I have given a good case for why it is needed, and I believe it should take little time to implement (easy to solve with high value).

    What about it, Ashley? I would love to be able to develop some of these shaders, but I need to be able to reference at least one additional texture in an effects file (samplerCustom). Just a few lines of code here and there and presto! It would be totally worth it. o.o

    On a side note, it would be possible, though highly complicated, impractical, and downright silly, to include the needed texture packed in with the sprite being affected; you could then use vTex to correctly look at the actual sprite vs. the texture packaged with it. The problem is that animated sprites would become massive, as each frame would have to include the texture (some of which would likely be at least 256x256). Then each effect would have stringent use-case restrictions, which sucks for modularity... I think you get the point.
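
    To be concrete about the packed-texture idea: assuming, purely for illustration, that the left half of every frame held the actual sprite and the right half held a lookup strip, the effect would have to do something like this.

    varying mediump vec2 vTex;
    uniform lowp sampler2D samplerFront;
    void main(void)
    {
        // Remap vTex so 0..1 only covers the sprite half of the frame.
        mediump vec2 spriteTex = vec2(vTex.x * 0.5, vTex.y);
        lowp vec4 front = texture2D(samplerFront, spriteTex);
        // Use the sprite's red channel as an index into the packed lookup half.
        mediump vec2 lutTex = vec2(0.5 + front.r * 0.5, 0.5);
        lowp vec4 swapped = texture2D(samplerFront, lutTex);
        gl_FragColor = vec4(swapped.rgb, front.a);
    }

    Every animation frame would need the lookup baked into it, which is exactly why it's silly.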

    R0J0hound - So I checked out your creature plugin. Now, as I understand it, it's not much different from the sprite plugin in a way, right (as far as drawing and textures go)... but I'm confused, because I thought you can't change the editor, which would be required for attaching multiple images, right?

    I mean, is it possible to actually pass another texture to the shaders in C2 (like so that I could have samplerBack, samplerFront, samplerCustom1, samplerCustom2)? Does this make sense?

    And you would have to somehow be able to attach that image in the editor eh?

    Okay, so as I mentioned, I know nothing of shaders lol, but it looks like we are basically doing the same thing, mathematically speaking... but in mine I flip y at the end for the two texture coordinates that you then mix together. Also, in mine I apply the FX to a sprite that is the lookup table, which then has to be scaled to fit the screen exactly. Anything you don't want affected has to go above the layer containing the table. I added a few other properties, like columns and rows for the table so you can use multiple sizes, and a version that has no color mixing at the end for crispy pixels in a retro game, but other than that...

    Again, I checked the math and I think it is basically the same (pretty sure) except for what I noted. As I said, I'm bad with shaders so the nuances escape me, but maybe you know?
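
    Roughly, the lookup I'm describing looks like this (a sketch, not the actual effect; lutColumns and lutRows are just illustrative parameter names, and samplerFront here is the sprite whose texture is the lookup table):

    varying mediump vec2 vTex;
    uniform lowp sampler2D samplerFront; // the lookup-table sprite
    uniform lowp sampler2D samplerBack;  // the layout rendered behind it
    uniform mediump vec2 destStart;
    uniform mediump vec2 destEnd;
    uniform mediump float lutColumns;    // e.g. 8.0
    uniform mediump float lutRows;       // e.g. 8.0
    void main(void)
    {
        // Colour of the screen pixel behind the LUT sprite.
        lowp vec4 scene = texture2D(samplerBack, mix(destStart, destEnd, vTex));
        // Pick a slice of the LUT grid from the blue channel (nearest slice, no mixing).
        mediump float slices = lutColumns * lutRows;
        mediump float slice = floor(scene.b * (slices - 1.0));
        mediump float col = mod(slice, lutColumns);
        mediump float row = floor(slice / lutColumns);
        // Red and green index within that slice.
        // (For simplicity this ignores the usual half-texel clamping at cell edges.)
        mediump vec2 uv = vec2((col + scene.r) / lutColumns, (row + scene.g) / lutRows);
        gl_FragColor = vec4(texture2D(samplerFront, uv).rgb, scene.a);
    }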

    Hey, thanks for throwing this up on the forums. I got a chance to download it and test it. I'm having the same problem as others who have mentioned it.

    As I mentioned in another post, I have at this point created my own effect that does this. I'll have a look at the two and see if I can spot the differences, which may give you a clue why it doesn't work on some machines.

    chrisbrobs - blast! I forgot about this conversation. I had to drop tinkering with the graphics card to do other things.

    I just spent the last day and a half figuring out how to do this through trial and error (mostly because I still know nothing of shaders). I'm like, "Hey, this should work, the math checks out!" but nope. >.< Well then, thanks for the effort!

  • Colludium - I never thought about that but it is a good point.

  • Hello,

    I am curious: I am blundering around writing shaders, and currently I do silly things without knowing why.

    Sometimes I have to flip a texture coordinate: texpos.y = 1.0 - texpos.y;

    I haven't figured out why I need to sometimes; all I know is that the effect is upside down and this fixes it. I thought (0,0) is the top left and (1,1) is the bottom right as far as texture coordinates go?

  • Hello,

    For anyone who understands shaders, hopefully you can help easily.

    Consider the following two lines:

    lowp vec4 front = texture2D(samplerFront, vTex);

    lowp vec4 back = texture2D(samplerBack, mix(destStart, destEnd, vTex));

    I understand the first line and what it does. I understand what the second line does, but not why (why am I mixing here?). Consequently, I don't understand the output given the following (comments show what I know/think):

    lowp vec4 colorlookup = texture2D(samplerFront, vTex); // gets the pixel from the front, and should be the same as front

    colorlookup.b = 0.0; //sets blue to 0

    colorlookup.r = vTex.x; //sets the r component to be the same as the texture coordinate in use

    colorlookup.g = vTex.y; //ditto for green

    gl_FragColor = colorlookup;

    Now, this should simply color an object like a slice of an RGB cube, with red increasing to the right and green increasing down y. The top left corner is black, the bottom right is yellow, and no blue is present.

    If I delete the line: lowp vec4 back = texture2D(samplerBack, mix(destStart, destEnd, vTex));

    everything works, but if I include it, vTex seems like it gets changed, and thus it seems to be off when calculating red and green. For example, if the object fills the entire screen it will be rendered correctly, but if it is larger or smaller than the screen it is colored according to the top left pixel of the screen (not the object) being black and the bottom right being yellow. Does this make sense? Does the mix() change or alter it?

    I checked other effects that blend with the background and I don't see anything. You may ask why I read samplerFront twice, and that's a good question, but I want to know why it's working the way it does.
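
    For reference, as far as I can tell mix() on its own is just a component-wise linear interpolation, so that line should be equivalent to the following (a sketch of the arithmetic, not taken from any effect):

    // mix(a, b, t) = a * (1.0 - t) + b * t, per component, so:
    mediump vec2 backTex = destStart + (destEnd - destStart) * vTex;
    lowp vec4 back = texture2D(samplerBack, backTex);
    // i.e. vTex (0..1 across the object) remapped into the rectangle the object
    // occupies in the background texture; vTex itself is not written to.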

    chrisbrobs, hey, thanks for your help! Very nice of you to offer to help with converting. I do have a question though... (I think I'll post it elsewhere instead of hijacking this thread.)

    edit*

    here is the forum post:

  • I think what I am getting at is that automation is super helpful.

  • Colludium I had a friend who created an external program to unzip Construct 2 save files and then parse out certain information. When making a game, I could do things like put z_Z in text fields, variable names, etc., and then pass it off to my friend, who would automatically inject the desired info. It was crazy, because you could make a game have 70 different languages in about 2 seconds, for example.

    I thought I was a good programmer, but this friend made me feel woefully incompetent.