Pode's Forum Posts

  • Ashley : that plugin is just there as a stopgap until your release. Having them directly in the framework, as you're planning to do, is the best solution, because there will be less bitmap array swapping (so much more responsiveness in the end).

    By the way, what kind of fallback are you planning for the 2D mode ? (Because I don't think we'll have JS VMs fast enough within the next year to crawl the whole bitmap matrix in realtime...)

  • Update : new build !

    <img src="http://dl.dropbox.com/u/1412774/GLFXC2Demo2/demo.png" border="0" />

    dl.dropbox.com/u/1412774/GLFXC2Demo2/index.html

  • Here's a quick (useless) demo of a plugin I'm developing :

    <img src="http://dl.dropbox.com/u/1412774/GLFXC2Demo/demo.png" border="0">

    http://dl.dropbox.com/u/1412774/GLFXC2Demo/index.html

    The effects are realtime, using GLFX.js as a backend (so WebGL is needed)...

  • To make it work reliably, you need to use a trigger on the "onload" event.
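
    In plain Javascript terms, the pattern looks like this (readTextFile is an illustrative helper, not the plugin's actual API) : only touch the result once "onload" has fired.

    ```javascript
    // Read a user-selected file as text, acting only once "onload" fires.
    // Touching reader.result before that point is the classic mistake.
    function readTextFile(file, onDone, onError) {
      var reader = new FileReader();
      reader.onload = function (e) {
        onDone(e.target.result); // safe: the read has completed here
      };
      reader.onerror = function () {
        onError(reader.error);
      };
      reader.readAsText(file);
    }
    ```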

  • In fact, AJAX and FileReader can access a file directly on Chrome, if you set the flag that allows it. But you obviously can't ask that of your users (it's a big security risk).

  • Arima : it's not possible with the current state of browser support, because it breaks the sandbox model.

    The most I can do is allow styling of the button, to blend it into your webapp's GUI.

    I'll search for a hack around it, but I'm pretty sure there isn't one, sorry.

  • Yes, it's possible. I'm going to update my SVG plugin this week; you can do realtime blur with it (those on the IRC channel have already seen it <img src="smileys/smiley2.gif" border="0" align="middle" />).
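
    For reference, SVG blur boils down to a feGaussianBlur filter; here's a sketch that just builds the markup as a string (the filter id and values are arbitrary examples, not what the plugin generates) :

    ```javascript
    // Build an SVG fragment whose content is blurred by feGaussianBlur.
    // Animating stdDeviation is what makes the blur "realtime".
    function blurredSvg(stdDeviation, body) {
      return [
        '<svg xmlns="http://www.w3.org/2000/svg">',
        '  <filter id="blur"><feGaussianBlur stdDeviation="' + stdDeviation + '"/></filter>',
        '  <g filter="url(#blur)">' + body + '</g>',
        '</svg>'
      ].join('\n');
    }
    ```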

  • Update v1.1 <font color="RED">19/10/2012</font> : bugfix + image loading !

    Here's a new version of the FileReader plugin !

    <img src="https://dl.dropbox.com/u/1412774/FileReaderDemo2/demo1.png" border="0">

    I fixed a bug, and added the possibility to load an image file :

    <img src="https://dl.dropbox.com/u/1412774/FileReaderDemo2/demo2.png" border="0">

    Useful, for example, in Awesomium, where you can't drag and drop from the desktop to add an image to your game (thanks Joannesalfa for the use case <img src="smileys/smiley2.gif" border="0" align="middle">).

    The demo : https://dl.dropbox.com/u/1412774/FileReaderDemo2/index.html

    The capx :

    https://dl.dropbox.com/u/1412774/FileReaderDemo2/FileReaderDemo2.capx

    The plugin : https://dl.dropbox.com/u/1412774/FileReaderDemo2/pode_filereader.1.1r.zip

    _____________________________________________

    In conjunction with the FileSaver plugin, here's a FileReader plugin !

    It should work on relatively recent web browsers (unlike the FileSaver plugin, there's no easy fallback for older browsers).

    <img src="http://dl.dropbox.com/u/1412774/FileReaderDemo/demo.png" border="0">

    For the moment, only text files are loaded...

    The demo : http://dl.dropbox.com/u/1412774/FileReaderDemo/index.html

    The capx :

    http://dl.dropbox.com/u/1412774/FileReaderDemo/FileReaderDemo.capx

    The plugin : http://dl.dropbox.com/u/1412774/FileReaderDemo/pode_filereader.0.1.zip

    (And yes, I know there's a typo in my test file <img src="smileys/smiley2.gif" border="0" align="middle">)

  • At least they did one thing right : the User Agent string is specific, so you can serve pages customised for the Vita. For everything else, forget it. To get the 3-hour battery life, they throttled everything.

    The build is an old Webkit one.

    No <video>, no <audio>, no web workers, no inline SVG, no WebGL, no WebSocket, no FileReader, no Web Storage, no Geolocation... That was two months ago. An update was released recently, but I have no information on that one.

    http://www.roshi.tv/2011/12/psvitano.html

    Anyway, think about this : Sony, like every other console maker, loses money on the hardware and makes it up on software sales. The more time you spend on the web playing free games, the more money they lose. Apple is the opposite : the more time you spend online on non-AppStore webapps/webgames, the less financial burden on them for running and curating the AppStore...

    Sony has an incentive to push you off the web; Apple has an interest in pushing you out of the AppStore.

  • Zerei : I worked on users' reaction times on computers during my PhD. No Javascript timer can give you better than 4 ms precision; it's in the W3C specification. Only the first Chrome builds used a 1 ms timer (which made the CPU run hot like crazy). Keep in mind that on any OS apart from QNX and RTLinux, when you set a timer resolution, it's a best-effort promise from the OS. For example, on Windows versions before Win7, you won't get anything below 10 ms in real-world usage.

    For your reaction loop : frame display = 16 ms (at 60 fps), plus eye processing, arm movement and finger click on the mouse/keyboard ≈ 150-200 ms, plus the USB port polling at 125 Hz = 8 ms, plus OS HID stack processing ≈ 1-2 ms (depending on what the CPU is doing at that moment, because of the IRQ interruption mechanism), plus the next frame updated onscreen = 16 ms...
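
    Summing those figures gives the overall budget (the stage names are just labels for the numbers above) :

    ```javascript
    // Rough end-to-end reaction-loop budget, [best, worst] in milliseconds.
    var stages = {
      frameDisplay: [16, 16],    // one frame at 60 fps
      humanReaction: [150, 200], // eye processing + arm movement + click
      usbPolling: [8, 8],        // 125 Hz USB polling interval
      hidStack: [1, 2],          // OS HID processing, IRQ-dependent
      nextFrame: [16, 16]        // result shown on the following frame
    };

    function total(stages, i) {
      return Object.keys(stages).reduce(function (sum, k) {
        return sum + stages[k][i];
      }, 0);
    }

    var best = total(stages, 0);  // 191 ms
    var worst = total(stages, 1); // 242 ms
    ```

    So even a "perfect" reaction sits around 200 ms end to end, far above any 4 ms timer concern.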

  • lxm7 : I've worked with iOS, iPhones and iPads for a client, with C2.

    For optimal performance : don't use physics, and don't use a viewport/texture or anything graphical over 1024x1024 (the viewport is flattened into an OpenGL texture by the system, so you don't want to force the GPU to create another texture).

    If you create an app with PhoneGap, or ask the user to create a bookmark on the home screen, remember that the Nitro Javascript engine isn't activated for those use cases, so it's going to be slower.

    On mobile : only one sound can play at a time, and there's preload lag for that sound, so you may need to use an audiosprite.
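
    The audiosprite idea in a nutshell : preload a single file and seek into named windows of it. A minimal sketch (the sprite map and names are made-up examples) :

    ```javascript
    // One preloaded audio file, with named time windows into it.
    var spriteMap = {
      laser:   { start: 0.0, duration: 0.4 },
      explode: { start: 0.5, duration: 1.2 }
    };

    // "audio" is any object with a currentTime property and a play()
    // method (an HTMLAudioElement in the browser). Returns the clip
    // duration so the caller knows when to pause.
    function playSprite(audio, map, name) {
      var clip = map[name];
      audio.currentTime = clip.start; // seek inside the single file
      audio.play();
      return clip.duration;
    }
    ```

    Since the file is already decoded and playing, seeking avoids the per-sound preload lag.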

    That's all I can think of for the moment...

  • Zetar : but I'm fun ! (At least, ask those on the IRC channel; I always have something new to discover regarding babies, with my daughter <img src="smileys/smiley2.gif" border="0" align="middle" />)

    Yes, I've worked with one of those devices, but not a Tobii specifically.

    About the glint and identification : they aren't the same thing.

    The glint, used in eyetracking, comes from an infrared LED shone through the cornea, bouncing off the retina, and coming back out through the cornea, leaving 4 "dots" (because of the two refractions at the cornea). That light blob pattern is then tracked by the eyetracker.

    It only works if the head isn't moving, obviously. If the head is free to move, you also need to know the head's displacement relative to the camera (a plane of displacement, in fact, parallel to the camera, plus three axes of rotation for the head on the neck).

    It's still possible to track that : you need a wide-angle camera working in visible light (a regular webcam, in fact <img src="smileys/smiley1.gif" border="0" align="middle" />) to track the head movements. You can use models like Active Appearance Models, or Dementhon's POSIT (since the head is a rigid solid as far as this problem is concerned), to follow the head.

    When I worked on that 5 years ago in my lab, using OpenCV (Matlab is slow as a turtle for that), it wasn't possible to do it in realtime.

    I remember the setup : a USB webcam (the ToUcam II, at that time the only one able to do 60 fps), and a second one with the IR filter removed.

    In the end, we could only do the tracking on recorded videos, because no computer could do in realtime what we wanted : with two webcams streaming at 60 fps, you need to run the AAM on one image, blob tracking on the other, and mouse pointer interpolation. Remember that the head isn't moving all the time, and the eye, even when moving, wasn't covering the whole image, so we needed to build a relationship between a displacement on a 320x240 or 640x480 image of the eye and the whole screen image, at 1650 or something - each 1-pixel estimation error from the webcam cost us a x3 error on the cursor position ! And since the eye is jittering all the time, because of the saccades, you can imagine the performance was terrible !

    Furthermore, the USB2 protocol is maxed out by 640x480 images at 60 fps - remember that you have the protocol overhead on top of that, and the DMA was only just good enough to handle two cams in parallel...
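
    The x3 error amplification is just the ratio between the on-screen range and the on-camera range of the eye. A toy illustration (the 550 px camera range is a hypothetical figure chosen so the ratio comes out to 3) :

    ```javascript
    // Map an eye displacement measured in camera pixels to screen pixels.
    // cameraRangePx: how many camera pixels the eye sweeps edge to edge.
    // screenRangePx: the screen width that sweep must cover.
    function gazeScale(cameraRangePx, screenRangePx) {
      return screenRangePx / cameraRangePx;
    }

    var scale = gazeScale(550, 1650); // 3, with these hypothetical figures
    var cursorError = 1 * scale;      // a 1 px camera error -> 3 px on screen
    ```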

    With a Tobii, all that is done on-chip, so it can be done fast. But there are two problems, in the end : first, we don't really know what the health effects are of shining IR light 8 hours a day or more onto the retina of a geek (my wife is an optometrist, and she did a literature review on that). Second, there are 3 interactions with the mouse pointer : move, designate, interact. You can do the move part with the eyetracker, and you can designate by leaving the cursor on the target, but how do you interact (i.e. send a command to the computer) ? You need another channel (and no, the eyebrow isn't going to make it, even by frowning, or flapping the eyelid. Too cumbersome. At the end of the day, you'd just die with two heavy, muscular eyelids <img src="smileys/smiley17.gif" border="0" align="middle" />).

    To do efficient iris recognition, you need to take a high-res picture of the iris and warp the image from polar coordinates to cartesian (to make comparison against a database possible). That's why it only works with really good cameras (CSI only works on TV, you know. And by the way, who in Hollywood thinks you can walk onto a crime scene in a suit, or that a blonde medical examiner can search a body for evidence with her hair loose - DNA everywhere, anyone ? <img src="smileys/smiley4.gif" border="0" align="middle" />)
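
    The coordinate warp is just sampling the annular iris along circles and laying each circle out as a row. A sketch of that normalisation step (all parameters are illustrative) :

    ```javascript
    // Convert one polar sample (radius, angle) around a centre to (x, y).
    function polarToCartesian(cx, cy, r, theta) {
      return { x: cx + r * Math.cos(theta), y: cy + r * Math.sin(theta) };
    }

    // Sample one ring of the iris into a row of pixel positions: the
    // rows for all radii together form the rectangular "unrolled" iris,
    // which can then be compared against a database entry.
    function unwrapRing(cx, cy, r, samples) {
      var row = [];
      for (var i = 0; i < samples; i++) {
        row.push(polarToCartesian(cx, cy, r, (2 * Math.PI * i) / samples));
      }
      return row;
    }
    ```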

  • To add to what Ashley is saying, there's a solution while still using Javascript for the plugins.

    The core framework can be C/C++ (and let's be honest, C/C++ can be compiled and run everywhere). The core framework can also embed the V8 JS engine. V8 is the fastest of the three vendors' engines. It doesn't even use a full JIT; it compiles to native code on the first pass (if I remember the numbers right, it's something like 1 MB of code in 200 ms).

    Since every successful game framework uses a scripting language, let it be JS !

    And as for the "restrictions" on iOS, there's no problem there. The author of ImpactJS is already doing it. If you build JavascriptCore from the WebKit sources (WebKit's own JS engine, plus various bindings), you can embed it in your iOS app and still get onto the AppStore.

    In fact, Apple doesn't want any other engine. You can grab the sources of WebKit or of JavascriptCore and do your own build (but only those, so no Gecko/Firefox on iOS, and no Rhino or SquirrelFish Extreme...)

  • Just for information, the Tobii records the glint of an IR light on the retina. So no arousal information from pupil dilation.

    Another thing : you can't infer what people are doing just from their mouse trail patterns (because in the end, that's all it is). You need to correlate that with what was onscreen at the moment they were looking.

    And no computer available at a reasonable price for the public can record a fullscreen video (let's say starting from 1440x900) while generating the graphics at the same time, without stuttering (think about full-frame acquisition, storage in RAM, realtime compression and disk access time, all while generating graphics...)

    And, just for the record, I've worked with those devices, so I'm not saying they aren't useful : for disabled people, with hemiplegia/tetraplegia, it's a wonderful tool. It's the difference between being locked in your body and at least interacting with a computer to say and do things to the world...

  • You don't need a particular plugin, since the eye tracker moves your mouse cursor.