It works for me, but you need to use the 1.2.1.0 beta version of the addon (listed above) and the faceapiexample02.c3p example (also listed above). Try it in preview. A photo may be possible, but you would need to figure out how to get a URI that points to your photo and replace 'UserMedia.SnapshotURL' with that URI.
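If it helps, here's a rough sketch of one way to get such a URI using plain browser APIs (not part of the addon): let the user pick a photo and create an object URL for it. How you feed that URI to the detector in place of UserMedia.SnapshotURL depends on your events/scripts, so treat that part as an assumption.

```
// Sketch: turn a user-selected photo into a URI the detector could load.
// URL.createObjectURL is a standard browser API; the hand-off to the addon
// (replacing UserMedia.SnapshotURL) is up to your own event sheet / script.
const input = document.createElement("input");
input.type = "file";
input.accept = "image/*";
input.onchange = () => {
  const file = input.files?.[0];
  if (!file) return;
  const photoUrl = URL.createObjectURL(file); // e.g. "blob:https://..."
  console.log("Use this URI in place of UserMedia.SnapshotURL:", photoUrl);
};
input.click();
```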
Added another example with face points on video. There will be lag between the video and the face points due to TensorFlow's processing time.
Try this for the dots (you can do something similar for the box); it scales the location based on the ratio between the display Sprite and the video input:
-> System: Create object Dot on layer 0 at (JSON.Get("._x")*Sprite.Width/UserMedia.VideoWidth+Sprite.X, JSON.Get("._y")*Sprite.Height/UserMedia.VideoHeight+Sprite.Y)
Another fun thing to do (but it will _lag_) is to display dots on the live video:
-> System: Create object Dot on layer 0 at (JSON.Get("._x")*UserMedia.Width/UserMedia.VideoWidth+UserMedia.X, JSON.Get("._y")*UserMedia.Height/UserMedia.VideoHeight+UserMedia.Y)
If something still does not look right, play with the ratio/offset: *Sprite.Width/UserMedia.VideoWidth+Sprite.X (I have sometimes seen mobile devices report 2X the expected resolution.)
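For reference, here is the same mapping written as a small script-side function, a sketch only; the names (videoWidth, target, scale) are illustrative and not addon expressions. The scale parameter is the knob you would tweak for cases like the 2X mobile resolution mentioned above.

```
// Map a point from video-input pixel coordinates to the layout coordinates of
// the object used for display (a Sprite, or the UserMedia object itself).
interface Target { x: number; y: number; width: number; height: number; }

function videoToLayout(
  px: number, py: number,            // point in video pixels (e.g. the JSON "._x"/"._y" values)
  videoWidth: number, videoHeight: number,
  target: Target,
  scale = 1                          // tweak (e.g. 2) if a device reports 2X the expected resolution
): { x: number; y: number } {
  return {
    x: (px / scale) * (target.width / videoWidth) + target.x,
    y: (py / scale) * (target.height / videoHeight) + target.y,
  };
}
```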
If you could get a DragonBones JS runtime to render to another canvas in C3, ElementQuad could display that canvas. ElementQuad is very simple: it does not do any rendering itself; it integrates rendering from other sources into the C3 render/layer/blend system, and some common aspects can be controlled by events.
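I haven't tried it, but the rough shape would be something like this sketch: render into an off-DOM canvas each frame with whatever runtime you like, then point ElementQuad at that canvas. The actual hand-off to ElementQuad depends on the addon's actions/script interface, so it is only a placeholder comment here.

```
// Sketch: a separate canvas that another renderer (e.g. a DragonBones JS
// runtime) draws into each frame. ElementQuad would then composite this
// canvas into the C3 layer/blend system; that hand-off step is not shown.
const dbCanvas = document.createElement("canvas");
dbCanvas.width = 512;
dbCanvas.height = 512;
const ctx = dbCanvas.getContext("2d")!;

function renderFrame() {
  ctx.clearRect(0, 0, dbCanvas.width, dbCanvas.height);
  // ...the DragonBones runtime's own update/draw call would go here...
  requestAnimationFrame(renderFrame);
}
requestAnimationFrame(renderFrame);

// Placeholder: hand dbCanvas to ElementQuad via whatever action or script
// call the addon provides for setting its source element.
```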
Can you point me to the AWS link, so I can try it from my browser and see whether I get errors?
To answer your other questions: no, it cannot track hand movements (the model is not trained to recognize a hand.) You can check the distance between the eyes (in pixels) to get an idea of how close or far the face is from the camera.
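As a sketch of the eye-distance idea (the point format, the landmark values, and the calibration number below are all assumptions, not the addon's API):

```
// Rough proximity estimate from the pixel distance between the two eyes.
// A larger distance means the face fills more of the frame, i.e. is closer.
interface Point { x: number; y: number; }

function eyeDistance(leftEye: Point, rightEye: Point): number {
  return Math.hypot(rightEye.x - leftEye.x, rightEye.y - leftEye.y);
}

// Example: compare against a value captured once at a known distance.
const calibrated = 90;  // px between eyes at a "normal" distance (made-up value)
const ratio = eyeDistance({ x: 300, y: 240 }, { x: 396, y: 242 }) / calibrated;
console.log(ratio > 1 ? "closer than the calibration distance" : "farther away");
```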
This algorithm is pretty GPU compute-intensive, so phones may not have enough GPU power for it to work well (I have tried it on an iPhone XR and it works OK, but I mainly use this on notebook and desktop PCs for the game I am making.)
Hmm, I thought I removed all the cases. Regardless, here's the outline effect to get you going:
construct.net/en/make-games/addons/265/outline
Do you have any console logs, so we can see the error? Is this when you build an APK? Does it work if you try remote preview?
Download the example again; I removed the need for the outline effect.
Good stuff - it would be great if you kept updating this on your road to high-level SDK/JS mastery.
You can create one from a photo with a depth map (recent iPhones with portrait mode include a depth map), or you can create one by hand; here's an example: youtube.com/watch
Thanks! Do you mean off-screen? I think if it's not rendered, the GLSL front sampler does not have access to it.
Added an updated version, 1.2.1.0, which works in preview; see the example for the changes required (the weights are now included in the project files.)
C3 changed one of the APIs used to load project files; I think this may be why it's not working. I will update the plugin for this.
This is great! Could you add a 'make visible' action to toggle whether it is visible (i.e. transparent or not)?
Thanks for the comments. The model in this example does not run at real-time speed on most systems; it takes a lot of compute. I am testing a new version which may run faster. You can save the captured points and compare past points with current points to do the comparison.
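One way to do that comparison, as a sketch (the point format and the idea of averaging displacements are my assumptions, not the addon's API):

```
// Keep the previous frame's points and measure how far, on average, each
// point moved. A large average displacement between frames suggests movement.
interface Point { x: number; y: number; }

let previousPoints: Point[] | null = null;

function averageMovement(currentPoints: Point[]): number {
  if (!previousPoints || previousPoints.length !== currentPoints.length) {
    previousPoints = currentPoints;
    return 0;
  }
  let total = 0;
  for (let i = 0; i < currentPoints.length; i++) {
    total += Math.hypot(
      currentPoints[i].x - previousPoints[i].x,
      currentPoints[i].y - previousPoints[i].y
    );
  }
  previousPoints = currentPoints;
  return total / currentPoints.length;  // average pixels moved per point
}
```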