Sigmag's Recent Forum Activity

  • > Good old fashioned brain rendered depth blur in games that don't actually have depth blur.

    I hadn't actually considered that. Surprising that it works! I thought we'd see the "everything-in-focus" effect, like we have on flat screens.

    It's interesting: this is one of the defining factors of immersion. Even standing in the same room as a user in the OR, with a monitor showing what they are seeing, it's in no way the same experience. Things like dust specks that are just "filler" on a flat monitor become complete atmosphere in the OR.

    My favorite in this category is watching people play 'Dreadhalls', which is a randomly generated maze. To us watching on the screen it's a scary game; to them in the OR, it's a haunted house.

    > This already happened a year ago, and it's pretty spot on for making you feel like you are in a theater. The program is called VR Cinema. And if I see C2 making its way to the OR any time soon, it'll be in a format like this: 2D projected into a 3D environment.

    I don't know if you got what I was implying. I meant using VR as a productivity tool, such as having 10 Excel windows open at once without getting lost, since you have "infinite" screen space. The limit would obviously be the device's resolution - I hear text is tiring (due to the low resolution) even on the OR DK2.

    They have desktop workspaces like this, and it definitely will help productivity. However, like you said, text and low resolution are a bad mix, since text is very often among the smallest elements on screen. Almost all text in the OR DK1 that isn't giant or cartoon block lettering is unreadable, and I'm sure the DK2 isn't quite there yet either.

    I am looking forward to virtual workspaces all the same; I just think they'll have a limited start.

    > the main thing keeping people from wearing it for 8 hours is going to be motion sickness. Unless you are immune to motion sickness, you will feel sick within 10 minutes of using the OR for the first time. It gets easier as you use it more regularly. I can use it for about 4 hours a day.

    But that's because you're in a game, with all sorts of action-result mismatches, right? I imagine that if you had it on in a "sitting-at-a-desk simulator" it wouldn't cause any problems at all.

    The scenario I'm thinking of would be binaural sound simulation (i.e. rain or the ocean), some sort of vista projected around you (a mansion, a skyscraper's top floor, a yacht deck, this thing), and flat windows from desktop programs projected in a globe around your head. You could look at and interact with them as if you were surrounded by monitors. Of course, later programs would be designed to be less flat.
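
    To make that "globe of windows" idea concrete, here's a rough TypeScript sketch of how such a layout could be computed. Everything in it (the Panel shape, layoutPanelsOnSphere, the angles) is invented for illustration; it isn't taken from any real VR desktop program.

    ```typescript
    // Hypothetical layout helper: distribute N flat window panels on a sphere
    // around the viewer's head so each one can be looked at directly.
    // All names here are made up for this sketch.

    interface Panel {
      yaw: number;      // horizontal angle in radians (0 = straight ahead)
      pitch: number;    // vertical angle in radians (0 = eye level)
      distance: number; // metres from the head
    }

    function layoutPanelsOnSphere(count: number, radius = 2): Panel[] {
      const panels: Panel[] = [];
      const rows = Math.ceil(Math.sqrt(count)); // rough grid over the sphere
      const cols = Math.ceil(count / rows);
      for (let i = 0; i < count; i++) {
        const row = Math.floor(i / cols);
        const col = i % cols;
        panels.push({
          // spread panels across ~180 degrees of yaw and ~90 degrees of pitch
          yaw: (col / (cols - 1 || 1) - 0.5) * Math.PI,
          pitch: (row / (rows - 1 || 1) - 0.5) * (Math.PI / 2),
          distance: radius,
        });
      }
      return panels;
    }

    // Example: ten "Excel windows" arranged around the user.
    const workspace = layoutPanelsOnSphere(10);
    console.log(workspace.length, "panels placed");
    ```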

    The main issue is that there are so many variables in making virtual reality read as actual reality to the brain that there is a lot to account for. As it stands, the best rule of thumb is that the more you move your head (and move around the virtual space), the quicker the motion sickness sets in, regardless of what you are doing.

    I guess it's kind of like being on a boat: you feel pretty much the same as you do on land, but every now and again you get seasick because the motion somehow doesn't line up with what your brain expects. Like sea sickness, some people will never experience OR motion sickness. I had a friend try a demo where he was strapped into a space shuttle spinning out of control; it made a few of us watching the flat monitor sick, but not the guy in the OR, so it's a bit odd.

    However, some developers have come a long way in this regard. A game I played recently called 'Windlands' has you swinging around like Spider-Man at skyscraper heights, and it didn't make me sick at all; I played it for an hour and a half. Meanwhile, another, misconfigured game where I wasn't even moving made me need to take the OR off immediately. So I think once they standardize the eye profiling, they will also improve the motion sickness issue substantially.

    > social interaction is definitely going to be one of the biggest uses for VR

    Feedback and control schemes are much too primitive for that. We'd need some sort of tactile simulator capable of transmitting touch from the character to the player, and some system that can convert movement intent into in-game movement without moving your body; otherwise, the dissonance that comes from indirectly controlling a character through an input device will still break the immersion (as well as cause motion sickness).

    I'm not saying it's impossible or unwanted - the porn industry alone would pay fortunes to see this through - but without direct neural stimulation or spinal bridges (both still in the realm of sci-fi), I don't see this happening satisfactorily.

    I think we will see some try to break into that eventually, but social experiences don't have to be based solely on reality or use all the senses. People will come to interact with one another in ways similar to how online gaming has brought people together. In fact, it's surprising how naturally a gamer adapts to a hybrid control scheme, mixing movements used in real life with movements used in console or PC gaming, to immerse themselves in the Rift. Sight, sound, and head-tracking feedback are typically enough, along with the familiarity of a gamepad, to tie the immersion together.

    One program called 'JanusVR' is a multiplayer virtual web browser type of thing. People basically walk around with an Xbox controller or the mouse and keyboard; others can see where you are standing and where you are looking based on a simple polygon model, and I think there's chat. It's just a different way you and a friend could browse the internet together: entering theaters to watch YouTube videos, or walking down halls with comments from that site's users scattered on the walls. It's not inherently a better or worse way to do it, just a different way to experience it, full realism or otherwise.

  • Wanted to add my input, as I'm one of the first 5,000 developers to get hands-on with the OR; I've had the DK1 for 16 months now and have had time to explore the tech.

    TL;DR: In my opinion, fixed-vantage-point cameras have no real place in VR. The best VR you are likely to get with Construct 2, if anything at all, will be a 3rd-party tool that projects your 2D game export into a 3D environment, like sitting in a movie theater watching a flat screen. There are programs that do this with movies and with your desktop, so I don't see doing it with a game being too far off.

    Some background on the OR and why you have to actually put one on your head to understand it:

      The biggest thing that needs to be taken into consideration is that the OR is an entirely new beast. For anyone who has tried one, you know what I mean. Beyond having head tracking capabilities, it is also the most immersive/realistic stereoscopic vision technology on the market. The biggest difference here is that all the other 3D tech, such as IMAX 3D, 3D TVs, or the 3DS, renders both images (one for the right eye, one for the left) on top of one another. That means your glasses and brain have to separate the two images out of the stream, using polarized glasses or shutter glasses, or by putting your head at exactly the right spot while playing the 3DS. On top of all that, all current 3D tech has a fixed vantage point, so you always see through the eyes of the camera, rather than the camera being where your eyes are looking.

      Unlike traditional 3D glasses, the OR gives each eye its own image without ever putting them on top of each other, so there is no ghosting or shine-through such as you can run into with shutter-glass 3D tech. You can also turn and tilt your head, which you can't do with polarized glasses. Unlike the 3DS, where you have to be at just the right angle to see the 3D effect, the OR is fixed to your head, so you always have the correct angle. Your eyes process the 3D just like real life, so if you have your face against a chain-link fence in a game and focus beyond it, the fence becomes blurry - and if you focus your eyes on the fence, the area beyond the fence becomes blurry (hiding in bushes in VR is pretty awesome). Good old fashioned brain rendered depth blur in games that don't actually have depth blur.

      Even without head tracking, the 3D is much more advanced than any 3D you've used before, and it can and will trick your brain into thinking you are there. Standing at the top of a tall staircase, or at the edge of a cliff, and looking down will make your stomach sink or give you vertigo (to an extent).

    Why C2 isn't a great VR fit:

    • It would have to render in stereo (one image for each eye), which C2 does not support. You would need a z-axis for depth in order for C2 to render two different images, one per eye; a rough sketch of what that per-eye rendering involves follows this list. That means the entire C2 engine would need to be reworked into at least a 2.5D workspace to simulate actual depth.
    • Rendering 2 separate cameras in 2.5D/3D is resource intensive, and people already complain about performance with 2D C2.
    • Even with a z-axis for depth on objects, the best you'll really be able to do is project your 2D game onto a plane or inverted sphere in front of you. It would be in your face and take up all your vision, sure. But it's not really a true VR experience, since if you turn around, all you'll see is black empty space. See TheWyrm's post for what to expect:
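
    To illustrate those first two points, here's a minimal TypeScript sketch of what "one image per eye" means for a 2.5D engine: every sprite needs a depth value so it can be shifted horizontally by a different amount for each eye, and the whole scene has to be drawn twice per frame. None of this is real Construct 2 API; the names and the simple depth-to-parallax formula are invented for the example.

    ```typescript
    // Illustration only: deriving two per-eye images from flat sprites plus a
    // depth (z) value. Not Construct 2 code; all names are made up.

    interface Sprite2_5D {
      x: number; // screen-space position in pixels
      y: number;
      z: number; // depth; 0 = at the screen plane, larger = farther away
    }

    // Horizontal parallax: the farther an object is, the more its position
    // differs between the two eyes (a crude approximation, not a real
    // perspective projection).
    function eyeSpaceX(sprite: Sprite2_5D, eye: "left" | "right", eyeSeparationPx = 30): number {
      const sign = eye === "left" ? -1 : +1;
      const parallax = (sprite.z / (sprite.z + 100)) * (eyeSeparationPx / 2);
      return sprite.x + sign * parallax;
    }

    // Stand-in for the real blit; a real renderer would draw into the left or
    // right half of the headset's framebuffer here.
    function drawSprite(x: number, y: number, viewport: "left" | "right"): void {
      console.log(`${viewport} eye: sprite at (${x.toFixed(1)}, ${y.toFixed(1)})`);
    }

    // The whole scene is drawn twice, once per eye - the extra rendering cost
    // mentioned in the list above.
    function renderStereo(scene: Sprite2_5D[]): void {
      for (const eye of ["left", "right"] as const) {
        for (const s of scene) {
          drawSprite(eyeSpaceX(s, eye), s.y, eye);
        }
      }
    }

    renderStereo([{ x: 100, y: 50, z: 0 }, { x: 200, y: 80, z: 500 }]);
    ```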

    Also this is someone getting their 3DS working on an Oculus Rift => http://www.engadget.com/2014/09/15/oculus-rift-3ds/

    It's cool enough, but it feels really gimmicky compared to a true 3D environment. To be honest, it's the difference between watching your game on an IMAX 3D screen versus looking down at your own hands in the theater and seeing that kind of 3D depth. The 2.5D approach just feels very shallow compared to what a 3D environment can accomplish in VR. It looks good, takes up your vision, and is definitely worth playing, but you lose almost all of the immersion that VR has to offer, which is its biggest appeal.

    I mean, given the choice, you would rather watch a movie where bullets fly past your head and flames actually engulf you than sit through the cliche "spear poked toward the camera" thing they do in every 3D movie. And you don't have to keep facing forward; look in whatever direction you want. It does wonders for immersion.

    It's hard to explain, but you can feel the warmth of fireplaces, the sun in your eyes, the drop in your stomach when you go on a roller coaster, etc.

    Rambling:

    > Someone will probably make a way so you can view any 2D applications/games/movies as if you were viewing it on a massive TV/imax. (Although you'd probably want the oculus to be higher resolution first.)

    I have no doubt in my mind that this will happen. I use 3 monitors at work and at home, and with an Oculus Rift I could have something ridiculous like 80 borderless monitors. In fact, I think the entire notion of flat desktops might end with the Rift - the desktop of the future might look more like a "headsphere", and that's with current control schemes (keyboard and mouse).

    The rift will revolutionize book reading, movie watching and working.

    This already happened a year ago, and it's pretty spot on for making you feel like you are in a theater. The program is called VR Cinema. And if I see C2 making its way to the OR any time soon, it'll be in a format like this: 2D projected into a 3D environment. As far as that's concerned, the best you get is playing your game on a giant theater screen. I know that sounds cool, and it is cool, but you haven't seen 3D VR yet.

    I think it will always have to be in a 3D environment of some sort, because of head tracking. If you just display the game flat on the screen and don't account for head tracking, then any time you move your head you feel like barfing. Imagine moving your head in real life and your vision not moving; your brain hates it.

    They're supposedly lighter than some models of headphones, and people wear those for long sessions. I have no trouble imagining them being used by power users in office jobs.

    It's comfortable enough; it's basically like wearing big headphones. Eventually you start noticing that it's cramping your head and feels weird, but typically, once you are in an immersive environment, you don't even think about it anymore.

    Also, the main thing keeping people from wearing it for 8 hours is going to be motion sickness. Unless you are immune to motion sickness, you will feel sick within 10 minutes of using the OR for the first time. It gets easier as you use it more regularly. I can use it for about 4 hours a day.

    TheWyrm, I see your point: the face-to-face interaction a family or a group of friends is used to is unlikely to be replaced anytime soon. I don't foresee you putting on one of those for watching a romcom with your girlfriend, for instance.

    It won't replace intimate interaction - it may augment it, however. That said, social interaction is definitely going to be one of the biggest uses for VR; think of the episode of Futurama where everyone goes on the internet.

    For now it's still not universal and a lot of people still complain of motion sickness. This is probably the worst problem they have to fix before fancy "wireless" 2D AR interfaces.

    The motion sickness is the biggest drawback to the tech taking off. I also suspect we won't see a consumer version until they fix the inherent IPD (inter-pupillary distance) issues, which are getting standardized pretty quickly.

    Essentially, the IPD and lenses must be profiled for each user: everyone's eyes are a different distance apart, everyone is a different height, and everyone has a different focus range (near-sighted, far-sighted, etc.). In the early days, when you loaded up a game, there were no IPD settings, so your eyes didn't match up with the virtual person you were supposed to be, and that triggered motion sickness since you felt like you were looking through someone else's eyes.
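
    As a rough illustration of what an IPD profile feeds into (the field names below are invented for this sketch, not taken from the Oculus SDK): each eye's virtual camera is offset sideways by half the user's inter-pupillary distance, so an unprofiled default means the virtual eyes sit in the wrong places.

    ```typescript
    // Hypothetical per-user profile; shows why IPD matters for the cameras.
    interface UserProfile {
      ipdMm: number;      // inter-pupillary distance in millimetres (~55-75 is typical)
      eyeHeightM: number; // standing eye height in metres
    }

    // Each eye's camera sits half the IPD to either side of the head centre.
    function eyeOffsetsMetres(profile: UserProfile): { left: number; right: number } {
      const halfIpdM = profile.ipdMm / 1000 / 2;
      return { left: -halfIpdM, right: +halfIpdM };
    }

    // A mismatch between these offsets and the user's real eyes is one of the
    // mismatches blamed above for motion sickness.
    console.log(eyeOffsetsMetres({ ipdMm: 64, eyeHeightM: 1.7 }));
    ```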

    After you get past the IPD and focus issues, you also need a PC powerful enough to hold a solid 60+ fps, because real life is effectively a very high frame rate and your brain is used to that.

    whalan84: The motion sickness is more than just lag because of an old computer.

    Certain FPS with "head bobbing" will make you nauseous right away. Some others won't.

    This is the thing people fail to realize about VR too: it's immersive to the point that most things that would make you sick in real life are going to make you sick in VR. Spinning around in a tube rolling down a hill? You're gonna get dizzy and barf in VR, and that's not a shortcoming of the tech; that's just being human, like you said.

    If you could run around at 20 mph in real life with your head bouncing up and down, spinning about really fast like in Quake, you'd probably barf too.

    As well as turning floor plans into a 3D space to visualise yet-to-be-built homes. It is definitely opening up many possibilities for new concepts outside of gaming.

    I've loaded up several architecture/house demos; one called 'Red Frame' will blow your mind with how detailed and realistic a virtual home/environment can look.

    Gaming is going to be awesome, but I really see it taking off in so many directions, especially therapy.

    Have social anxiety? Load up areas with a specific number of people, and increase it as you become more comfortable.

    Have a fear of heights? Start on a smaller platform and work your way up to the top of a skyscraper.

    The first thing I did when I put on the rift was stare at dust specks floating around me and let the sun shine in my eyes for 30 minutes, things that you never look twice at on a monitor in 2D.

    If VR doesn't take off, it will be ONLY because not enough people actually put one on their head.

  • Rebooting seems like a very inconvenient solution. Don't you have a laptop with Win7 and Intel HD Graphics 3000 that you could try it on?

    Inconvenient or no, rebooting is the standard after many installations. It allows the computer to reinitialize the OS from a clean start with the new files/services running as intended. It's not always necessary to reboot to access/run the newly installed files, but it's good practice.

    I don't really have anything of value to add, just wanted to point out Ashley's reasoning.

    As an IT guy, this is the troubleshooting process I would try: clear out the files and .dlls you have in place (make backups of those individual .dll files) and uninstall DX to get rid of residual files. Then reboot, reinstall with dxwebsetup.exe (might as well try elevated/admin privileges on the install), and reboot after the install finishes.

    If that doesn't make the game work, drop the individual .dlls back into the root project folder. If it works after that, it sounds like the .dlls are only being accessed properly from the root project folder and not from the system folder?

    I dunno.

    PS: I think I've had problems with dxwebsetup.exe being a pos in the past, getting stuck/not completing installation successfully. I hate web setups.

  • Well, you haven't given us much to work with, but I would first try disabling the "Use high-DPI display" project setting in C2 and see if that makes both operate the same.

    Kind of a long shot, but I remember it causing differences of this nature between iPad and iPhone when working with parallaxed layers and the positions of objects on those layers. It seems like scaling the layers could have a similar effect on positioning.

    Again, not sure it'll even do anything given you are testing in chrome, but can't hurt to see.

  • Aphrodite Man! I really wish I'd known about that setting a year ago; how did it escape me this long...

    Thanks for the tip! Fixing to overhaul my system to work with unbounded scrolling for ease of use.

  • Weleavefossils No problem! Feel free to ask other questions, I'll try to answer as best I can.

    I've also updated the title and OP with newer information, and once I can knock out the dynamic scaling blog post, it'll be added to the OP as well.

    Sorry again guys for all the delays on the write up, one of our publishers is demanding a really weird change to one of our games that is going to take a week or two to fix.

    By the time I finish fixing that issue, however, the dynamic scaling technique will have further expanded functionality.

  • Weleavefossils Glad you enjoyed the post! As for scaling down, if your resized image is jagged then there's a good chance you are using point sampling. When you downscale with linear sampling, there typically isn't any noticeable difference from the original - it's just smaller. You can check which one you are set to in the project properties, which you can get to by clicking your project name in the "Projects" box on the top right of the C2 screen. Once there, you should see this box on the left side of the screen where you can set your sampling type:
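
    For a sense of what that setting controls: point sampling picks the single nearest source pixel (which is what produces jagged downscales), while linear sampling blends neighbouring pixels. The WebGL snippet below is purely an illustration of those two filter modes, not Construct 2's actual renderer code.

    ```typescript
    // Plain WebGL illustration of "point" vs "linear" sampling on a texture.
    const canvas = document.createElement("canvas");
    const gl = canvas.getContext("webgl");
    if (gl) {
      const texture = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, texture);

      // "Point" sampling: nearest-neighbour filtering - crisp for pixel art,
      // jagged when an image is drawn smaller than its source size.
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

      // "Linear" sampling: blends neighbouring texels, so downscales stay smooth.
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    }
    ```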

    If you refer back to the concepts from the blog post, I talked about starting with an "original, high resolution image". Depending on the resolution of your game, you'll want to load in that asset, or at least one larger than what you need, so you can test what image size you will actually need.

    From there, you can resize the image inside of Construct, or you can scale it. The difference here can be thought of as permanent versus temporary.

    When you resize an image from 1500x1000 to 150x100, that image is now 150x100 - if you resize OR scale it back up to 1500x1000, the image will look terrible. Here are a couple of images showing how you resize an image in C2 after double-clicking your sprite - this is the only method you can use to resize your image inside of C2; everything else is used to scale the image:

    When you scale an image from 1500x1000, that image can be scaled to any size, but it still keeps the color information from the 1500x1000 asset; even if you scale it to 150x100 in your game, you can scale it right back up to 1500x1000 with no problem. You can scale the image with lines in the code, by setting the numbers manually in the object properties box (when you select your sprite), or by dragging the corners of the sprite.

    So, RAM and performance issues aside, for best visual fidelity you should resize your image to the largest size you think you will show that image at in the game, depending on how the image is scaled (larger or smaller than 100% of its original size).

    example: If you have a 150x100 asset, and scale it to 300x200, it will be very blurry. But if you have a 300x200 asset and scale it to 150x100, it will look fine.
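
    In browser terms (outside of C2), the resize-vs-scale distinction looks roughly like the canvas sketch below. This is only an illustration of the concept, not anything Construct 2 does internally.

    ```typescript
    // "Scale" keeps the original pixels; "resize" re-samples them into a
    // smaller buffer and throws the extra detail away for good.

    function scaleDraw(ctx: CanvasRenderingContext2D, img: HTMLImageElement): void {
      // Scale: the 1500x1000 source pixels still exist; we just draw them at
      // 150x100. Drawing the same source at 1500x1000 later still looks fine.
      ctx.drawImage(img, 0, 0, 150, 100);
    }

    function resizeImage(img: HTMLImageElement): HTMLCanvasElement {
      // Resize: bake the image down into a 150x100 buffer. Scaling this result
      // back up to 1500x1000 will look terrible, because the detail is gone.
      const small = document.createElement("canvas");
      small.width = 150;
      small.height = 100;
      small.getContext("2d")!.drawImage(img, 0, 0, 150, 100);
      return small;
    }
    ```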

    However, having larger images in your game will cause it to use more RAM and impact performance more heavily, so you should try to find a balance. I find that you can typically scale an image up to 120% of its actual size without much noticeable difference (it won't be too blurry).

    Hope this answered your question!

    Also, you shouldn't use an image anywhere near 1500x1000 in your actual game; try to stay at 512x512 or lower for the most part! I was just using some random assets for the example.

  • Double post, but just wanting to let you guys know I finished the first part of the dynamic scaling post, although it only covers image scaling concepts. My internet has been dead most of the week, so I haven't been able to write the actual post about the dynamic scaling technique yet - sorry for the delay.

    All the same, check out this post in the meantime.

  • Updated! v2.1 is live: it uses half the assets, has single layout switching, and should be slightly more accurate in the readouts. I'm planning to make the information display larger in portrait mode, but haven't gotten to it yet. There are also code optimizations and tweaks to graphics and input so that everything works a little better all around.

    It's at the same URL so just shift+f5 if you have a cached copy.

    Again, if you guys have any suggestions, I'd love to add on - so just call it like you see it!

  • Heeey, good timing - I'm actually working on this write-up as we speak. I'm having trouble articulating it all, but it's coming along and will be posted later this week. The information I post will be a revised, reiterated, and expanded version of the logic you are implementing (I think you are using an old ResUtil .capx), so you'll want to check it out even with your current logic to see if there's any further functionality you'd want.

    The ResUtil capx has some nifty math for the individual HUD objects, but the upcoming post will detail how to build a framework around those concepts, so you can build your game within a scaling environment where placing objects is streamlined (such as in the Dragonfly Zero scaling demo above). That math should be pretty straightforward as well.

    That said, to answer your question - pretty much. Here's landscape and portrait HUD sizing logic side by side for reference:
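
    The gist of that sizing logic, as a rough TypeScript sketch (the numbers and names are invented for illustration; they are not the actual ResUtil expressions):

    ```typescript
    // Rough sketch of orientation-aware HUD sizing: scale HUD elements off the
    // shorter screen axis so they stay usable in both orientations.

    interface HudLayout {
      buttonSize: number; // in layout pixels
      margin: number;
    }

    function hudLayout(viewportWidth: number, viewportHeight: number): HudLayout {
      const landscape = viewportWidth >= viewportHeight;
      const shortSide = Math.min(viewportWidth, viewportHeight);
      return {
        // slightly larger HUD in portrait, since the screen is narrower
        buttonSize: shortSide * (landscape ? 0.12 : 0.15),
        margin: shortSide * 0.02,
      };
    }

    // Example: 1280x720 landscape vs 720x1280 portrait.
    console.log(hudLayout(1280, 720), hudLayout(720, 1280));
    ```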

  • Thanks all for the feedback! Really happy with the response

    A0Nasser Just FYI, I haven't forgotten to write up how to do the dynamic scaling! I will be tackling that in the blog this next week now that Kitten Kitchen is wrapped up.

    If you don't remember, here's what I'm referencing

  • I'm glad you guys like it! Simple but engaging is exactly what we were going for

    I think I'll take you up on your offer to post it on the C2 reddit, the more feedback the better!
