QuaziGNRLnose's Forum Posts

  • purplemonkey

    The shader system still has some issues that need to be fixed. The idea was to make things like AO simple, but it was poorly designed for that purpose and needs heavy modification.

  • purplemonkey

    Very cool! For the trees you can try creating compound colliders by using a thin cylinder collider for the trunk and a sphere for the leafy ball. In debug mode you can visualize the position of any extra colliders you add and tweak them by trial and error. This all has to be done with actions, but you can automate it in the On Created trigger.

    Prominent

    You shouldn't need to mess around with manual rendering. If you give your scene background 0 alpha and layer viewports you can layer renders; however, semi-transparency won't cross between layers because of the three.js renderer design. I will add more manual rendering features when I get a chance, since you seem to be interested and it won't take much work.

    All transparent objects SHOULD render last, as this is the way transparency is generally handled in forward renderers. Opaque objects render in any order and sort using the z-buffer, and at the end transparent objects/sprites are sorted by camera view-plane distance and rendered back to front. This should minimize artifacts unless objects intersect, which is a problem with no easy solution that is present even in modern renderers. If sprites are rendering last, after the back-to-front pass for transparent objects, that's a three.js bug, but I don't recall this being the case.

    http://stackoverflow.com/questions/8763 ... r-in-webgl

    An image would help me diagnose the issue you're having. Usually, playing with the material's depth settings and preventing sprites/semi-transparent objects from writing to the z-buffer works well, especially for smoke or special effects like explosions, which use blending. Transparency is just an annoying problem overall. Sometimes the best solution is not to use a sprite at all, but instead a piece of flat geometry cut out in the shape of the silhouette, so the object can be opaque or mostly opaque; that minimizes artifacts because there are no edge cases that lead to really bad transparency occlusion problems.
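
    For reference, this is roughly what those depth settings mean at the three.js level. A minimal sketch in plain three.js (not Q3D's own actions); the texture name is just a placeholder:

    ```ts
    import * as THREE from "three";

    // A smoke/explosion style sprite: blend it, but don't let it write depth,
    // so it never incorrectly occludes other semi-transparent objects behind it.
    const smokeTexture = new THREE.TextureLoader().load("smoke.png"); // placeholder asset
    const smokeMaterial = new THREE.SpriteMaterial({
      map: smokeTexture,
      transparent: true,   // drawn in the sorted back-to-front transparent pass
      depthTest: true,     // still occluded by opaque geometry in front of it
      depthWrite: false,   // but doesn't write to the z-buffer itself
    });
    const smoke = new THREE.Sprite(smokeMaterial);
    ```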

  • Also need a way to choose which scene to put the objects in.

    There doesn't appear to be a way to clear the renderer manually. Also, when rendering a scene, it appears to clear it as well, which makes it impossible to render two scenes?..

    What do you mean by "clear the renderer manually"? What do you need that for?

    You choose the scene with the "pick scene" action in Q3DMaster; any objects created by actions afterwards go into the new scene. Also, if you use "change parent to scene", objects will be moved to the new scene, so you can code a simple function to move objects over. I didn't, and won't, include a scene chooser in the plugin's editor properties, mainly because the feature is only intended for limited cases where multiple viewports are needed and is difficult to work with due to the abstraction; there would also be no easy way to initialize a new scene nicely in the editor. It's not a lot of work to make a function that moves the necessary objects over at layout start if you put all the models etc. in a family.

    I don't fully understand what you want out of the renderer. Rendering doesn't clear the scenes (you must mean something else to do with the clearing of buffers), and I have no idea what you mean by clearing the renderer manually. Are you referring to the settings three.js exposes when rendering? Q3D doesn't give access at that level, but you can layer multiple renders by layering Q3D viewports.
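
    (For what it's worth, at the three.js level layering renders boils down to something like the sketch below. This is plain three.js, not anything Q3D exposes, and the scene/camera names are made up.)

    ```ts
    import * as THREE from "three";

    const renderer = new THREE.WebGLRenderer({ alpha: true });
    renderer.autoClear = false; // we decide when the buffers get cleared

    function renderFrame(sceneA: THREE.Scene, sceneB: THREE.Scene, camera: THREE.Camera) {
      renderer.clear();                // clear color + depth once per frame
      renderer.render(sceneA, camera); // first "layer"
      renderer.clearDepth();           // keep the colors, reset the depth buffer
      renderer.render(sceneB, camera); // second "layer" drawn on top
    }
    ```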

    As for the transparency: transparency in forward rendering is a difficult issue in general, and you might simply be running into depth-sorting problems with no easy fix due to the design of a forward renderer. You can use the "depth settings" on an object's material to control how it writes to and reads from the depth buffer during rendering.

    I'll look into the issues you've brought up when I have the time.

  • matriax

    Yeah, don't worry about that error; it's caused by the fact that the example uses features that were removed and replaced by the Q3D Light object, which makes things better/easier. I just forgot to remove it.

    The tiny tank example uses a very early version of the plugin, so there'd be no point releasing it now; a lot of the things it does are much simpler now, so it'd just be confusing (and it would fail to open with newer versions for the same reason as that example).

    I'm going to compile some new examples, probably some time in July.

  • RadioMars

    In general, morph animation is "faster" since it's all loaded onto the GPU; however, on weaker hardware it's difficult for me to say. Skeletal animation without blending is of course possible and will look right (whereas morph animation without blending tends to look "shaky" unless you use many frames, which is memory inefficient), but I can't really say whether the performance will be "better", since it depends heavily on the number of bones, model complexity, and the hardware itself. If you're already taxing the CPU, having many skeletal animations will likely perform poorly, whereas if you have few bones it'll probably be a better choice than morph targets.

    Your best bet is to make a relevant benchmark to see how the performance compares for a given number of skeletons vs. a comparable set of objects with morph animation. Simple skeletons could possibly outperform morph targets if the GPU is weak on the target platform.

    I'm sorry I can't give a better answer! It's really not possible to give a simple one, since the two techniques have different strengths and weaknesses.

  • miketolsa

    Take a look at the raycaster example for an idea of how the picking can be done. After you have a surface and point, you can find the nearest grid position by using this trick (which works in 2D as well):

    X = round(xposition/cellXsize)*cellXsize

    Y = round(yposition/cellYsize)*cellYsize

    Z = round(zposition/cellZsize)*cellZsize

    This finds the X/Y/Z of the "closest" grid cell to a specified X/Y/Z position in non-grid coordinates.
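
    Written out as a small function (the name is just illustrative), it's simply:

    ```ts
    // Snap an arbitrary 3D position to the centre of the nearest grid cell.
    function snapToGrid(
      x: number, y: number, z: number,
      cellX: number, cellY: number, cellZ: number
    ) {
      return {
        x: Math.round(x / cellX) * cellX,
        y: Math.round(y / cellY) * cellY,
        z: Math.round(z / cellZ) * cellZ,
      };
    }

    // e.g. snapToGrid(13, 7, 26, 10, 10, 10) -> { x: 10, y: 10, z: 30 }
    ```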

  • > Ralph

    > I won't be working on Q3D for a while (possibly for a few months) since I have other projects to complete for the time being.

    >

    Gosh! Does that mean the minor fixes we were talking about will not happen for a few months!?

    Not a few months, but I really don't have the time this month (June) due to personal matters.

  • Ralph

    Seems the performance is better than I expected; I'll see about adding official support for something eventually. The QFX shader system is incomplete, so there won't be any examples for a while. I won't be working on Q3D for a while (possibly for a few months) since I have other projects to complete for the time being.

  • tunepunk

    If possible it'd be better to use "in front"/"behind": they're less bug-prone, it may fix your issue, and "inside" is too slow for mobile anyway (it needs to do some expensive operations). The error is possibly caused by something weird with the inside mode, and I can't easily reproduce or test it, so it will be difficult to fix. Also, "zooming" breaks "inside" because of the way it works.

  • Ralph

    I'm not planning any large feature updates for the time being. Sadly, passing a texture from a C2 canvas is not something that can be done with the current WebGL spec, since Q3D and C2 use different contexts. One way you could try to achieve fog of war, however, is to use the Q3D "Behind" mode or "Inside" (inside has some issues due to browser adoption). If you place the 3D canvas below the C2 canvas, you can then build the fog of war as something 2D inside the Construct 2 layer and overlay it on top of the 3D. Of course, this means you'd need to fix the camera or apply some transformations to get the C2 scrolling/graphics aligned to the 3D top-down view, but this is how I'd do it since it'd be easiest, and fog of war is usually 2D anyway. You could probably do this pretty well with sprites/tilemap/canvas/blur depending on the look you want.
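
    (If you do need to line the 2D layer up with the 3D view, the underlying math is just a world-to-screen projection. Here's a hedged sketch of that in plain three.js, outside of Q3D/C2:)

    ```ts
    import * as THREE from "three";

    // Project a 3D world position to 2D pixel coordinates, e.g. to keep a 2D
    // fog-of-war overlay aligned with a top-down 3D camera.
    function worldToScreen(
      position: THREE.Vector3, camera: THREE.Camera,
      width: number, height: number
    ) {
      const ndc = position.clone().project(camera); // normalized device coords, -1..1
      return {
        x: (ndc.x + 1) / 2 * width,   // NDC x -> pixel x
        y: (1 - ndc.y) / 2 * height,  // NDC y -> pixel y (flipped)
      };
    }
    ```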

  • tunepunk

    The main optimization that gives better performance is keeping the resolution low. Mobiles have very high-resolution screens but lack the hardware to do 3D rendering efficiently at native resolution, because of the fill rate required among other things, so when "resolution mode" is set to "auto" in Q3D Master's properties, a large HD resolution is chosen on a mobile device and the phone can't keep up. You should set resolution mode to "size" instead, and then you can set the resolution to smaller specified values. You can also set the resolution with the "set resolution" action in Q3D Master. Of course, mobiles are also a lot slower in general, so you can only expect smooth performance for simpler scenes (that's why the mobile morph demo has only one animated character).
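
    (In plain three.js terms, the "size" mode amounts to rendering internally at a lower resolution and letting the canvas stretch to fill the screen; a rough sketch, with the 0.5 factor purely as an example:)

    ```ts
    import * as THREE from "three";

    const renderer = new THREE.WebGLRenderer();
    const scale = 0.5; // example downscale factor for weaker mobile GPUs

    // Render at half resolution; the third argument (false) leaves the CSS size alone.
    renderer.setSize(window.innerWidth * scale, window.innerHeight * scale, false);

    // Stretch the low-resolution canvas to fill the viewport.
    renderer.domElement.style.width = "100%";
    renderer.domElement.style.height = "100%";
    ```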

  • terransage

    To reduce the JSON file size, you can remove the whitespace; this can shrink the file considerably.
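
    (A quick way to do that, sketched in Node; "model.json" is just a placeholder filename:)

    ```ts
    import { readFileSync, writeFileSync } from "fs";

    // Re-serialize the model JSON without indentation/whitespace.
    const model = JSON.parse(readFileSync("model.json", "utf8"));
    writeFileSync("model.min.json", JSON.stringify(model));
    ```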

    Morph animations use a lot of memory, but their performance is better since the GPU animates them very efficiently through shaders. Because they use a lot of memory, however, they tend to work better for models with fewer vertices. There is no simple way to attach something like a carried weapon through hierarchies using morph targets, because they don't have any bones (which is in fact why they are faster; if they had bones they'd be much slower). Some old games that use morph targets work around this by saving extra information about the position/orientation of a hand for every frame of every animation and placing the object there, which is basically like image points in C2. You could attempt something like this, but it would have to be coded in events in C2 as a custom solution.
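
    (A rough sketch of the data such a custom solution would store; every name and value here is made up, and in C2 you'd express this with events/arrays rather than code:)

    ```ts
    // One attachment point (position + facing) per frame of each animation.
    type AttachFrame = { x: number; y: number; z: number; angle: number };

    const handAttachPoints: Record<string, AttachFrame[]> = {
      walk: [
        { x: 0.3, y: 1.1, z: 0.0, angle: 0 },
        { x: 0.32, y: 1.08, z: 0.05, angle: 5 },
        // ...one entry per morph frame
      ],
    };

    // Look up where to place the carried weapon for the current animation frame.
    function weaponTransform(animation: string, frame: number): AttachFrame {
      const frames = handAttachPoints[animation];
      return frames[Math.min(frame, frames.length - 1)];
    }
    ```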

    The frame naming doesn't matter whether it starts at 0000 or not; Q3D just reads the frames sequentially in the order they're placed in the file and trims off any numeric characters to find the "animation name". I tried my best to make it simple, but the current exporters for three.js are still in development.

  • Physics is very complicated/slow. To get good performance, Q3D only supports spheres/boxes/cylinders. To get more complex shapes, you can combine multiple colliders by using overlapping physics objects. Q3D physics also supports compound shapes on a single object, but it is advanced and technical to use and must be set up in the event sheet; the capsule is an example of this. Try playing with the shape actions in the physics behaviour. I wish it were as simple as using the geometry itself as a collider, but no games/engines do this and the performance would be horrible.

    If you want the objects to stay constrained to the table, you could try putting a frictionless, immovable, invisible box above the table that prevents this. Try playing with friction and elasticity and the various other settings too. Mixing the behaviours is a terrible way to fix the problem, as they will interfere with each other and be very glitchy.

  • Play with those settings; you can adjust the texture wrapping mode to get a variety of edge/texture-repeat behaviours. Your questions are too unclear for me to really help you.
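
    (At the three.js level, the wrapping mode looks like this; a minimal sketch with a placeholder texture:)

    ```ts
    import * as THREE from "three";

    const texture = new THREE.TextureLoader().load("tiles.png"); // placeholder asset

    // Repeat the texture instead of clamping it at the edges...
    texture.wrapS = THREE.RepeatWrapping;
    texture.wrapT = THREE.RepeatWrapping;
    // ...and tile it twice in each direction.
    texture.repeat.set(2, 2);
    ```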

    "what is??!!" doesn't give me any information about what you want to know.

  • I have no idea what you mean. Use "apply force in direction"; "angles" make little sense in 3D, so you provide a 3D direction vector instead.
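
    (If you're used to thinking in angles, you can build the direction vector from a yaw/pitch pair; a small hedged sketch, assuming Z-up and radians:)

    ```ts
    // Convert yaw (rotation in the X/Y plane) and pitch (elevation) to a unit
    // direction vector suitable for a "force in direction" style action.
    function directionFromAngles(yaw: number, pitch: number) {
      return {
        x: Math.cos(pitch) * Math.cos(yaw),
        y: Math.cos(pitch) * Math.sin(yaw),
        z: Math.sin(pitch),
      };
    }

    // e.g. directionFromAngles(0, 0) -> { x: 1, y: 0, z: 0 }
    ```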