I hope it's useful. Others have also compiled newer versions of the greenworks lib in case you want to use a later version of nw.js or Electron; see: greenworks-prebuilds.armaldio.xyz and for Electron see: electronforconstruct.armaldio.xyz
No, I don't think 'copy' will do split screen, but I've seen a lot of discussion about split screen in the forums, and it seems like folks have some good ideas; I would search there. Good luck!
You are welcome! If you do something with it, it would be great to see it; perhaps drop a link here in the comments if you can.
Thanks, that's exactly what I was using it for too!
How about some more nice RTFM updates? Like how to do callAction?
It works for me, but you need to use the 1.2.1.0 beta version of the addon (listed above) and the faceapiexample02.c3p example (listed above). Try it in preview. A photo may be possible, but you would need to figure out how to get a URI that points to your photo and replace 'UserMedia.SnapshotURL' with your URI.
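If it helps, here's a minimal browser-JS sketch of one way to get such a URI for a user-chosen photo (the input id is just an assumption for illustration; the object URL it produces can stand in for 'UserMedia.SnapshotURL'):

// Sketch: turn a user-chosen photo into a URI (assumes an
// <input type="file" id="photoInput"> exists on the page).
const input = document.getElementById("photoInput");
input.addEventListener("change", () => {
  const file = input.files[0];
  if (!file) return;
  const photoURI = URL.createObjectURL(file); // a valid URI for the photo
  console.log("Photo URI:", photoURI);
  // Call URL.revokeObjectURL(photoURI) when you are done with it.
});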
Added another example with face points on video. There will be lag between the video and the face points due to TensorFlow's processing time.
Try this for the dots (you can do something similar for the box); it scales the location based on the ratio of the display Sprite to the video input:
-> System: Create object Dot on layer 0 at (JSON.Get("._x")*Sprite.Width/UserMedia.VideoWidth+Sprite.X, JSON.Get("._y")*Sprite.Height/UserMedia.VideoHeight+Sprite.Y)
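For reference, here is the same mapping written as plain JS (the names are placeholders for the values the event expressions read, not part of the addon):

// Map a face-api point (px, py), in video pixels, onto a display sprite.
// sprite is { x, y, width, height } in layout coordinates.
function mapToSprite(px, py, videoWidth, videoHeight, sprite) {
  return {
    x: px * sprite.width / videoWidth + sprite.x,
    y: py * sprite.height / videoHeight + sprite.y
  };
}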
Another fun thing to do (but it will _lag_) is to display dots on the live video:
-> System: Create object Dot on layer 0 at (JSON.Get("._x")*UserMedia.Width/UserMedia.VideoWidth+UserMedia.X, JSON.Get("._y")*UserMedia.Height/UserMedia.VideoHeight+UserMedia.Y)
If something still doesn't look right, play with the ratio/offset: *Sprite.Width/UserMedia.VideoWidth+Sprite.X (I have sometimes seen mobile report 2x the expected resolution).
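My guess is the 2x case is the device pixel ratio; a quick way to check from the browser console (standard web API, not part of the addon):

// If the video resolution is 2x what you expect, this is the usual suspect;
// fold it into the ratio above if needed.
console.log("devicePixelRatio:", window.devicePixelRatio);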
If you could get a DragonBones JS runtime to render to another canvas in C3, ElementQuad could display that canvas. ElementQuad is very simple: it does no rendering itself; it integrates rendering from other sources into the C3 render/layer/blend system, and some common aspects can be controlled by events.
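I haven't hooked up DragonBones myself, but the canvas side would look roughly like this sketch (startDragonBones is a placeholder for whatever the DragonBones JS runtime's actual setup call is; how ElementQuad finds the canvas depends on that addon's settings):

// Sketch: give an external runtime its own canvas to render into,
// so ElementQuad can composite it into the C3 layer system.
const canvas = document.createElement("canvas");
canvas.width = 512;
canvas.height = 512;
// startDragonBones(canvas); // placeholder: point the runtime at this canvas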
Can you point me to the AWS link, so I can try it from my browser and check for errors?
To answer your other questions: no, it cannot track hand movements (the model is not trained to recognize a hand). You can check the distance between the eyes (in pixels) to get an idea of how close or far the face is from the camera.
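For example, the eye-to-eye distance is just the Euclidean distance between the two landmark points (the argument names are placeholders for wherever the eye landmarks sit in the face-api output):

// Rough proximity check: a larger eye distance means the face is closer.
function eyeDistance(leftEye, rightEye) {
  const dx = rightEye.x - leftEye.x;
  const dy = rightEye.y - leftEye.y;
  return Math.hypot(dx, dy); // distance in video pixels
}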
This algorithm is pretty GPU compute-intensive, so phones may not have enough GPU power for it to work well (though I have tried it on an iPhone XR and it works OK; I mainly use this on notebook and desktop PCs for the game I am making).
Hmm, I thought I had removed all those cases. Regardless, here's the outline effect to get you going:
construct.net/en/make-games/addons/265/outline
Do you have any console logs, so we can see the error? Is this when you build an APK? Does it work if you try remote preview?
Download the example again; I removed the need for the outline effect.
Good stuff - it would be great if you kept updating this on your road to high-level SDK/JS mastery.
You can create one from a photo with a depth map (recent iPhones with portrait mode include a depth map). You can also create one by hand; here's an example: youtube.com/watch