R0J0hound's Forum Posts

  • There is a wrap behavior that kind of does that. But it’s imprecise.

    You can also do the wrapping with events and make it a bit better. Here, for example, the screen is 640 pixels wide. This is the most basic version, and you’ll see the objects jump from one side to the other.

    Object: x<0
    — object set x to self.x+640
    Object: x>640
    — object set x to self.x-640

    You can solve that by making the position jump further off screen by some margin. A good rule of thumb would be a margin at least as wide as the widest object you’re wrapping around.

    Object: x<-margin
    — object set x to self.x+640+margin*2
    Object: x>640+margin
    — object set x to self.x-640-margin*2

    Now that assumes the scroll position is at the left of the layout. You could make the positions relative to the screen if you really need to.

    For doing it with a tiled background you’d just need to set the tiled background’s width to imageWidth+screenWidth, and the events would basically be:

    tiledbg: x<-imagewidth
    — tiledbg set x to self.x+imagewidth
    tiledbg: x>0
    — tiledbg set x to self.x-imagewidth

    Now if you change the x by more than the imagewidth, it will take more frames for it to catch up. Adding a While condition above the x compare conditions is a simple fix. There is probably a nice math way to correct it without a loop too.
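
    That “nice math way” is essentially a modulo. Here’s a minimal TypeScript sketch of the idea, using the 640-pixel width from above and a hypothetical margin of 50, so even a jump of several widths settles in a single step:

    const width = 640;
    const margin = 50; // assumed: at least as wide as the widest wrapped object

    // Wrap any x into the range [-margin, width + margin).
    function wrapX(x: number): number {
      const span = width + margin * 2;                  // total wrappable span
      const shifted = x + margin;                       // shift so the range starts at 0
      return ((shifted % span) + span) % span - margin; // true modulo, handles negatives
    }

    console.log(wrapX(-60)); // 680, same result as the event above (x + 640 + margin*2)
    console.log(wrapX(700)); // -40, same result as the event above (x - 640 - margin*2)

    The same idea should translate to a single set x expression in Construct, though it’s worth checking how its % operator treats negative numbers first.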

  • Well I still don’t fully understand how you want it to work. I’m guessing you have a box per spawner, and you can select a box. Then clicking a button will create/recreate the boxes that aren’t selected.

    Global number pickedBox=-1
    
    On box clicked
    — set pickedBox to box.uid
    
    On spawn_button clicked
    [negated] box: pick by uid pickedBox
    — box: destroy
    
    On spawn_button clicked
    [negated] spawner: overlaps box
    — spawner: spawn box

    If you set it up in the layout so each spawner has an overlapping box to begin with then the last two events could be merged into

    on spawn_button clicked
    [negated] box: pick by uid pickedBox
    spawner: overlaps box
    — box: destroy
    — spawner: spawn box

    But then I wonder if we need to even recreate the boxes at all.

    Again, that’s why I don’t think I fully understand what exactly you want it to do.
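
    Just to show the intent of those events in plain code, here’s a rough TypeScript sketch with made-up Box/Spawner types standing in for Construct’s picking (Construct assigns UIDs itself; the uid counter here is only a stand-in):

    interface Box { uid: number; x: number; y: number; }
    interface Spawner { x: number; y: number; }

    let pickedBox = -1; // same idea as the global pickedBox variable above

    function onSpawnButtonClicked(boxes: Box[], spawners: Spawner[]): Box[] {
      // Keep only the selected box ("[negated] pick by uid pickedBox -> destroy").
      const kept = boxes.filter(b => b.uid === pickedBox);

      // Spawn a box on each spawner with no box on it
      // ("[negated] spawner overlaps box -> spawn box").
      let nextUid = Math.max(0, ...kept.map(b => b.uid)) + 1;
      for (const s of spawners) {
        const occupied = kept.some(b => b.x === s.x && b.y === s.y); // crude overlap stand-in
        if (!occupied) kept.push({ uid: nextUid++, x: s.x, y: s.y });
      }
      return kept;
    }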

  • Probably load the image to an animation frame of a sprite, paste the sprite to drawing canvases the size of the frames, and finally use the drawing canvas save image feature. After that you could load those images as frames.

    An alternate idea is to just load the image into the sprite with a 2x2 mesh distort and just change the UVs of the corners to select a sub image.

    You could also use a tiledBackground by setting its size and the image offset to select a sub image. But loading images at runtime with that is per instance instead of shared between instances, which may or may not be an issue.
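
    The underlying idea of the drawing canvas route is the same as slicing an image with a plain 2d canvas in the browser. Here’s a sketch using only standard browser APIs, with a hypothetical frame size and nothing Construct-specific:

    // Draw each frame-sized sub-rectangle of the source image onto a canvas
    // and export it as a data URL that can later be loaded as a frame.
    async function sliceFrames(url: string, frameW: number, frameH: number): Promise<string[]> {
      const img = new Image();
      img.src = url;
      await img.decode(); // wait for the image to load

      const frames: string[] = [];
      const canvas = document.createElement("canvas");
      canvas.width = frameW;
      canvas.height = frameH;
      const ctx = canvas.getContext("2d")!;

      for (let y = 0; y + frameH <= img.naturalHeight; y += frameH) {
        for (let x = 0; x + frameW <= img.naturalWidth; x += frameW) {
          ctx.clearRect(0, 0, frameW, frameH);
          // Copy one frame-sized sub-rectangle of the big image.
          ctx.drawImage(img, x, y, frameW, frameH, 0, 0, frameW, frameH);
          frames.push(canvas.toDataURL("image/png"));
        }
      }
      return frames;
    }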

  • I feel it could be made simpler but I don’t understand what the desired behavior is from reading your posts.

    When the spawn_button is clicked you want to destroy any boxes that aren’t selected and create boxes on any spawners not overlapping a box? Sounds simple enough, but looking at your events it seems like you have more complex behavior in mind?

  • One way is to just pick a random tile and check to see if it’s not red and is next to a red tile.

    For example with a 100x200 tilemap you’d get a random tile with:

    set x to int(random(100))

    set y to int(random(200))

    Then the rest is just comparing the tile at a location. It would need to run multiple times to select a tile to fill.

    Now a more direct way could be to loop over all the tiles, store all the valid locations in an array, and randomly choose one of those.

    If you’re clever you could make it so you’d have to loop over everything only once and add and remove from the array as you go. You could try looking up flood fill algorithms to get some ideas too.
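
    Here’s a sketch of that more direct approach in TypeScript, with isRed standing in for comparing the tile at a location: collect every non-red tile that touches a red tile, then pick one at random.

    const W = 100, H = 200; // tilemap size from the example

    function pickFillTile(isRed: (x: number, y: number) => boolean): [number, number] | null {
      const valid: [number, number][] = [];
      for (let y = 0; y < H; y++) {
        for (let x = 0; x < W; x++) {
          if (isRed(x, y)) continue; // must not already be red
          const nextToRed =
            (x > 0 && isRed(x - 1, y)) || (x < W - 1 && isRed(x + 1, y)) ||
            (y > 0 && isRed(x, y - 1)) || (y < H - 1 && isRed(x, y + 1));
          if (nextToRed) valid.push([x, y]);
        }
      }
      if (valid.length === 0) return null; // nothing left to fill
      return valid[Math.floor(Math.random() * valid.length)];
    }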

    You said the set position action; did you mean the look at action?

    The reason you can’t set the up vector directly is that it’s related to the forward and right vectors. They all have to be perpendicular to each other.

    What the look at action does is set the forward vector directly, then calculate the right vector from the forward and up vectors, and finally calculate a new up vector from forward and right so everything is perpendicular.

    The lookat expressions I haven’t used. They seem to relate to the rotate action.

    If it jitters then I’d say post your events. All I use from the 3d camera object is the look at action, which updates the up, forward and right expressions immediately.

    So I guess my untested question is: do the look at expressions get updated immediately too, or is it delayed till things render? Genuinely curious. I know the text.textwidth expression only updates after the text is drawn, so maybe something similar is happening here.
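
    For reference, here’s the construction described above written out as plain vector math in TypeScript. This is the generic textbook version, not Construct’s internals, and the cross product order/handedness is a convention that may differ from theirs:

    type Vec3 = [number, number, number];

    const cross = (a: Vec3, b: Vec3): Vec3 => [
      a[1] * b[2] - a[2] * b[1],
      a[2] * b[0] - a[0] * b[2],
      a[0] * b[1] - a[1] * b[0],
    ];
    const normalize = (v: Vec3): Vec3 => {
      const len = Math.hypot(v[0], v[1], v[2]);
      return [v[0] / len, v[1] / len, v[2] / len];
    };

    function lookAt(eye: Vec3, target: Vec3, upHint: Vec3 = [0, 0, 1]) {
      // Forward is set directly from the eye and target positions.
      const forward = normalize([target[0] - eye[0], target[1] - eye[1], target[2] - eye[2]]);
      // Right comes from forward and the up hint...
      const right = normalize(cross(forward, upHint));
      // ...and the final up is recomputed so all three are perpendicular.
      const up = cross(right, forward);
      return { forward, right, up };
    }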

  • You can use either. It’s mostly a matter of personal preference.

    You can check out the “user media” object. It can get a list of the connected audio input devices. You can tell if a microphone was plugged in or unplugged based on whether the list changed.

    For headphones you’d probably have to do that from js to get the output devices too.

    developer.mozilla.org/en-US/docs/Web/API/MediaDevices/enumerateDevices
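
    A sketch of the js route from the MDN page above, using only standard APIs. Note that full device labels usually only show up after the page has been granted microphone permission:

    async function listAudioDevices() {
      const devices = await navigator.mediaDevices.enumerateDevices();
      const inputs = devices.filter(d => d.kind === "audioinput");   // microphones
      const outputs = devices.filter(d => d.kind === "audiooutput"); // headphones/speakers
      return { inputs, outputs };
    }

    // Fires when a device is plugged in or unplugged; compare the new list
    // with the previous one to tell what changed.
    navigator.mediaDevices.addEventListener("devicechange", async () => {
      const { inputs, outputs } = await listAudioDevices();
      console.log("audio inputs:", inputs.length, "audio outputs:", outputs.length);
    });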

  • It’s simple enough to do any kind of math from the event sheet.

    You can access the up vector with the 3dCamera.upX/Y/Z expressions. And the only way you can modify the up vector is with the look at action.

    Can you post a screenshot of your events that set the camera? There’s apparently an official example for an fps camera that might be helpful?

  • Mario games handled it by checking the vertical velocity of Mario when hitting the enemy.

    Mario: collides with enemy
    Mario: y velocity > 0
    — kill enemy

    Mario: collides with enemy
    Mario: y velocity <= 0
    — kill Mario

  • The 3d that construct provides is pretty simple. Basically just some 3d objects you can place, and the ability to rotate the 3d camera. Construct’s collision system, and most behaviors are 2d so the 3d is mostly cosmetic.

    You can set most objects’ x, y and zelevation, but you are limited to rotations around the z-axis. And the included 3dshape plugin has a limited set of different meshes you can use.

    The mesh distort feature of sprites can be used to make other shapes and do rotations on other axes with some math. The only caveat with meshes is that mesh points can only have zelevations greater than 0. It’s just something you have to work around.

    Motion, collisions and raycasts in 3d are mostly done manually with events or js. Basically done from some existing algorithm or some math.

    At its lowest level you have the 3dcamera with the look at action, distort meshes and math. A lot is possible with this but it’s mostly used to do things that aren’t exactly simple.

    For the math you could look up any 3d math tutorial which probably covers some vector math, rotations, spherical coordinates and some matrix math. But in practice you probably don’t need to use all of it.

    Often users utilize the third party 3d object plugin to do things more easily. It loads meshes in the gltf file format with animations. It lets you rotate objects in any way and includes behaviors for 3d physics and raycasting. It’s not free, though it seems to push what you can do with 3d in construct.

    In construct’s editor you cannot change the camera’s angle in 3d, so it’s not ideal for designing levels in many cases. Some have experimented with the workflow of designing the level in Blender instead.

    At this time there is an issue with amd rx cards where objects with transparency render wrong. The official response is that it’s a driver bug and amd needs to fix their driver, so you may have to deal with broken rendering for some users. Scirra doesn’t have hardware to reproduce the issue, and from what I can tell users with the issue haven’t found a way to effectively report it to amd to maybe get it corrected. On the plus side, the author of that third party plugin has been trying to find a patch to work around the issue.

    Anyways, Myst was pre-rendered as I recall: videos transitioning as you moved around, and maybe a cube view box to look around the room.

    You could go all 3d though; it’s mostly just a matter of getting the mesh in there. To interact with stuff you could use a low tech but effective technique for the raycasts: have a 2d layer you can get the mouse from, then use a system expression to convert the xyz of the object on the 3d layer to the 2d layer and compare the distance or something.

    The final bit would be the camera transitions. It should mostly be motion along a path. Same as in 2d but with z too.
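
    A sketch of that in TypeScript, since it really is just the 2d version with a z component. This is generic interpolation math, not a built-in Construct feature:

    type P3 = { x: number; y: number; z: number };

    function lerpPoint(a: P3, b: P3, t: number): P3 {
      return {
        x: a.x + (b.x - a.x) * t,
        y: a.y + (b.y - a.y) * t,
        z: a.z + (b.z - a.z) * t,
      };
    }

    // t runs from 0 to 1 over the whole path; each segment gets an equal share.
    function pointOnPath(path: P3[], t: number): P3 {
      const segs = path.length - 1;
      const scaled = Math.min(Math.max(t, 0), 1) * segs;
      const i = Math.min(Math.floor(scaled), segs - 1);
      return lerpPoint(path[i], path[i + 1], scaled - i);
    }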

    Overall 3d is doable in construct but it’s mostly doing stuff yourself or using workarounds.

  • You want to detect a collision without a collision? Any reason you can’t enable the collision?

    On disk or on a server it’s considered a file, such as a png.

    Once the file is loaded and decoded into an array of pixels in memory it’s an image.

    To render the image it needs to be copied over to video memory which is a texture.

    In webgl for example you’d first create an html image object from the png then a webgl texture object from the image.

    If you delve into webgl you can get pixels back into an image. Basically render the texture to a framebuffer and call readPixels to get an array of the pixels. Then create an html canvas of the same size, get an ImageData from its 2d context and copy the array of pixels into that. Finally canvas.toDataURL will give a data url (aka a base64 version of a png) that you can load into an image or download. That’s basically how taking a screenshot works.
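
    A sketch of that readback in TypeScript using standard webgl and canvas calls, assuming gl has just rendered what you want into the currently bound framebuffer:

    function framebufferToDataURL(gl: WebGLRenderingContext, width: number, height: number): string {
      // 1. Read the pixels back from the GPU as RGBA bytes.
      const pixels = new Uint8Array(width * height * 4);
      gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

      // 2. Copy them into a 2d canvas of the same size.
      const canvas = document.createElement("canvas");
      canvas.width = width;
      canvas.height = height;
      const ctx = canvas.getContext("2d")!;
      const imageData = ctx.createImageData(width, height);
      imageData.data.set(pixels);        // note: webgl's origin is bottom-left,
      ctx.putImageData(imageData, 0, 0); // so the result may need a vertical flip

      // 3. Export as a data url (a base64-encoded png) to load or download.
      return canvas.toDataURL("image/png");
    }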

  • The sdk manual says nothing about getting a texture’s image.

    Take all my suggestions with a grain of salt. I can only go by what I find in the manual. When actually making a plugin you have the other tool of using console.log with whatever object you’re curious about, and then you can browse that object from the browser’s debug console. That would let you confirm the things the manual lists as well as show anything not mentioned.

  • At edit time and at runtime there are sdk functions where you specify how the object is drawn in your plugin.