SOLVED
Through some miracle, dark magic and the process of elimination, I seem to have managed to get this working. Now I can take a picture with the camera and the image is uploaded to the SentiSight API for image recognition; a result is then returned to my device with the recognition data.
For anyone who might need something similar in the future, here is how the successful call looks...
The event sheet should read:
// do SentiSight API request
+ UserMedia: On snapshot ready
-> AJAX: Request UserMedia.SnapshotURL (tag "got_snapshot_data")
+ AJAX: On "got_snapshot_data" completed
-> Functions: Call Do_Sensight_Request (image: UserMedia.SnapshotURL)
* On function 'Do_Sensight_Request'
* Parameter 'image' (String)
-> Run JavaScript:
     file_in = await runtime.assets.fetchBlob(localVars.image); // THIS LINE WAS THE KEY TO THE FIX
     predict();
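For anyone wondering why that line was the key: UserMedia.SnapshotURL is only a URL string, and the SentiSight endpoint needs the raw image bytes, so the URL has to be resolved to a Blob before the FileReader in the script below can read it. As a rough, untested sketch, the same thing could probably be done with the standard fetch API (inside Construct, the runtime.assets.fetchBlob helper is the safer choice):

     file_in = await (await fetch(localVars.image)).blob(); // resolve the snapshot URL to a Blob
     predict();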
And then the predict function and POST request in the JS script file look like this:
// file_in is assigned from the event sheet; token holds the SentiSight API key.
var file_in;
var token;
const baseApiURL = "https://platform.sentisight.ai/api/";

function predict() {
    token = "XXXXX";                   // your API token
    const projectId = "XXXX";          // your SentiSight project ID
    const modelName = "my_model_name"; // the trained model to run
    const file = file_in;              // the snapshot Blob set by the event sheet

    // Read the Blob into an ArrayBuffer, then POST it to the predict endpoint.
    var fr = new FileReader();
    fr.onload = function() {
        const results = JSON.parse(apiPostRequest('predict/' + projectId + '/' + modelName, fr.result));
        console.log(results);
        alert(results);
    };
    fr.readAsArrayBuffer(file);
}

// Sends a POST request to the SentiSight API and returns the raw response text.
function apiPostRequest(request, body) {
    var xmlHttp = new XMLHttpRequest();
    xmlHttp.open("POST", baseApiURL + request, false); // false = synchronous request
    xmlHttp.setRequestHeader('Content-Type', 'application/octet-stream');
    xmlHttp.setRequestHeader('X-Auth-token', token);
    xmlHttp.send(body);
    console.log(xmlHttp.responseText);
    alert("Resp: " + xmlHttp.responseText);
    return xmlHttp.responseText;
}
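Side note: the third argument of xmlHttp.open being false makes the request synchronous, which browsers now discourage on the main thread. A non-authoritative sketch of the same call using fetch and async/await instead (same endpoint, headers and Blob body as above; predictWithFetch is just an illustrative name, and it assumes token has been set as in predict()):

    async function predictWithFetch(blob) {
        const projectId = "XXXX";          // same placeholders as in predict()
        const modelName = "my_model_name";
        const response = await fetch(baseApiURL + "predict/" + projectId + "/" + modelName, {
            method: "POST",
            headers: {
                "Content-Type": "application/octet-stream",
                "X-Auth-token": token
            },
            body: blob                     // a Blob can be sent directly, no FileReader needed
        });
        const results = await response.json();
        console.log(results);
        return results;
    }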
Grimmy
I have been trying to do this for ages! I still can't seem to get it to work. Can you share a capx?