
How to make a very simple neural-network-based ML web app using Teachable Machine


Google just released Teachable Machine v2.0, which aims to make machine learning as easy as possible for people without a machine learning background to use in their projects. In this tutorial, I'll introduce Teachable Machine and show how to build projects with it in JavaScript. I'll make a system that estimates the probability that you're dabbing, using the most interesting of the project types on offer: PoseNet for pose estimation.

Step 1:

Head over to Teachable Machine and select the project you want to make:


I selected the pose project because I want to know whether the user's pose resembles a dab.

Step 2:

After selecting the project you want to make, you'll be taken to the model training page. Here you can add classes, which are the categories your model will choose between based on the input data. In this case, I want two classes: dab and idle. The dab class will get a higher probability when the person is dabbing, and idle will get a higher probability when the person is not doing anything.

Training should be fairly straightforward: after defining the classes, you give each one training data using your camera or microphone.


Step 3:

After training (which took about 5 minutes for me with 150 samples per class), you can preview your model to see how well it performs. If it doesn't perform well, try giving it more training data, or more varied training data, so it has more examples to learn from.

Once you're happy with the resulting model, press Export Model and then Upload. A link will appear below, along with an HTML code snippet. At this point, most of the work has been done for you: for a basic app, you simply copy-paste the snippet into the HTML editor and maybe add a stylesheet (I used water.css dark, but do whatever you like).
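For reference, a minimal host page could look like the sketch below. The water.css dark CDN path is written from memory, so double-check it against the water.css docs; the snippet from the export panel goes where the comment indicates.

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Dab detector</title>
    <!-- water.css dark theme (verify this path in the water.css docs) -->
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/water.css@2/out/dark.css">
  </head>
  <body>
    <!-- paste the code snippet from the Teachable Machine export panel here -->
  </body>
</html>
```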

Here's the code snippet that should have been generated (for the pose project only), in case you can't find it (replace the model URL with your own):

```html
<div>Teachable machine app</div>
<button type='button' onclick='init()'>Start</button>
<div><canvas id='canvas'></canvas></div>
<div id='label-container'></div>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.3.1/dist/tf.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@teachablemachine/pose@0.8/dist/teachablemachine-pose.min.js"></script>
<script type="text/javascript">
    // More API functions here:

    // the link to your model provided by Teachable Machine export panel
    const URL = '';
    let model, webcam, ctx, labelContainer, maxPredictions;

    async function init() {
        const modelURL = URL + 'model.json';
        const metadataURL = URL + 'metadata.json';

        // load the model and metadata
        // Refer to tmImage.loadFromFiles() in the API to support files from a file picker
        // Note: the pose library adds 'tmPose' object to your window (window.tmPose)
        model = await tmPose.load(modelURL, metadataURL);
        maxPredictions = model.getTotalClasses();

        // Convenience function to setup a webcam
        const size = 200;
        const flip = true; // whether to flip the webcam
        webcam = new tmPose.Webcam(size, size, flip); // width, height, flip
        await webcam.setup(); // request access to the webcam
        await webcam.play();
        window.requestAnimationFrame(loop);

        // append/get elements to the DOM
        const canvas = document.getElementById('canvas');
        canvas.width = size;
        canvas.height = size;
        ctx = canvas.getContext('2d');
        labelContainer = document.getElementById('label-container');
        for (let i = 0; i < maxPredictions; i++) { // and class labels
            labelContainer.appendChild(document.createElement('div'));
        }
    }

    async function loop(timestamp) {
        webcam.update(); // update the webcam frame
        await predict();
        window.requestAnimationFrame(loop);
    }

    async function predict() {
        // Prediction #1: run input through posenet
        // estimatePose can take in an image, video or canvas html element
        const { pose, posenetOutput } = await model.estimatePose(webcam.canvas);
        // Prediction #2: run input through teachable machine classification model
        const prediction = await model.predict(posenetOutput);

        for (let i = 0; i < maxPredictions; i++) {
            const classPrediction =
                prediction[i].className + ': ' + prediction[i].probability.toFixed(2);
            labelContainer.childNodes[i].innerHTML = classPrediction;
        }

        // finally draw the poses
        drawPose(pose);
    }

    function drawPose(pose) {
        if (webcam.canvas) {
            ctx.drawImage(webcam.canvas, 0, 0);
            // draw the keypoints and skeleton
            if (pose) {
                const minPartConfidence = 0.5;
                tmPose.drawKeypoints(pose.keypoints, minPartConfidence, ctx);
                tmPose.drawSkeleton(pose.keypoints, minPartConfidence, ctx);
            }
        }
    }
</script>
```

You can also get fairly creative with the output array of predictions and use them as inputs into your own scripts.
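As a sketch of what that can look like: each element of the prediction array has a `className` and a `probability` (as in the snippet above). The threshold value and the names `probabilityOf`, `onPrediction`, and `onDab` below are my own, not part of the Teachable Machine API.

```javascript
// Threshold above which we treat the pose as a confident "dab" (my own choice).
const DAB_THRESHOLD = 0.9;

// Prediction objects from model.predict() look like:
//   { className: 'dab', probability: 0.97 }
// Return the probability for one class name, or 0 if it isn't present.
function probabilityOf(predictions, className) {
  const match = predictions.find((p) => p.className === className);
  return match ? match.probability : 0;
}

// Call this with the prediction array inside predict() to react to a dab.
function onPrediction(predictions) {
  if (probabilityOf(predictions, 'dab') > DAB_THRESHOLD) {
    onDab(); // your own handler: play a sound, change the page, etc.
  }
}
```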

Just note that the app you made might not always be fast, since pose estimation and classification run on every animation frame.
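One simple way to ease that, sketched below under my own assumptions (the `PREDICT_EVERY_N_FRAMES` knob and `shouldPredict` helper are not part of the library), is to keep updating the webcam preview every frame but run the expensive `predict()` step only on every Nth frame. `webcam` and `predict` here are the ones defined in the export snippet above.

```javascript
// Run the classifier on every 3rd animation frame (my own choice of knob).
const PREDICT_EVERY_N_FRAMES = 3;
let frameCount = 0;

// Returns true on every Nth call, false otherwise.
function shouldPredict() {
  frameCount += 1;
  return frameCount % PREDICT_EVERY_N_FRAMES === 0;
}

// Drop-in replacement for the loop() in the export snippet.
async function loop() {
  webcam.update(); // refresh the camera frame every tick, so the preview stays smooth
  if (shouldPredict()) {
    await predict(); // run pose estimation + classification less often
  }
  window.requestAnimationFrame(loop);
}
```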
Overall though, I think Teachable Machine is a great idea for making machine learning accessible.

3 years ago




doesn't seem to do anything

3 years ago

Well this is really cool to read! Great work!

3 years ago