Real-time facial expression analysis
Loading the AI models
This application uses face-api.js, a JavaScript library built on top of TensorFlow.js. Everything runs in your browser; no server-side processing is required.
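A minimal sanity check, assuming face-api.js has been included via a script tag or a bundler (the check itself is illustrative, not part of the app): once loaded, everything the app uses hangs off the global faceapi namespace.

if (typeof faceapi === 'undefined') {
  console.error('face-api.js failed to load - check the script include.');
} else {
  // The networks used below are exposed under faceapi.nets
  console.log('Available networks:', Object.keys(faceapi.nets));
}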
We use the TinyFaceDetector model - a lightweight CNN (Convolutional Neural Network) optimized for real-time detection.
The 68-point landmark model identifies key facial features used for emotion analysis.
The FaceExpressionNet analyzes landmark positions and facial features to classify emotions.
async function loadModels() {
  const MODEL_URL = './models';
  await Promise.all([
    faceapi.nets.tinyFaceDetector.loadFromUri(MODEL_URL),
    faceapi.nets.faceLandmark68Net.loadFromUri(MODEL_URL),
    faceapi.nets.faceExpressionNet.loadFromUri(MODEL_URL)
  ]);
}
Three neural network models are loaded in parallel: face detector (~190KB), landmark detector (~350KB), and expression classifier (~330KB). Total: under 1MB.
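A hedged sketch of how loadModels() might be invoked at startup; the #status element is an assumption used here only to surface progress and failures, not part of the original code.

const statusEl = document.querySelector('#status'); // hypothetical status element

loadModels()
  .then(() => {
    statusEl.textContent = 'Models loaded';
  })
  .catch(err => {
    // Common cause: the ./models directory is missing or served from the wrong path
    statusEl.textContent = 'Failed to load models';
    console.error(err);
  });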
async function startCamera() {
  stream = await navigator.mediaDevices.getUserMedia({
    video: {
      width: { ideal: 640 },
      height: { ideal: 480 },
      facingMode: 'user' // Front camera
    }
  });
  video.srcObject = stream;
}
The WebRTC API requests camera access. We specify ideal dimensions and front-facing camera for selfie mode. The stream is assigned to a <video> element.
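If access is refused or no camera exists, getUserMedia rejects with a DOMException. A sketch of handling the common cases; the wrapper function and messages are illustrative.

async function startCameraSafe() {
  try {
    await startCamera();
  } catch (err) {
    if (err.name === 'NotAllowedError') {
      console.warn('Camera permission was denied by the user.');
    } else if (err.name === 'NotFoundError') {
      console.warn('No camera device was found.');
    } else {
      console.error('Camera error:', err);
    }
  }
}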
async function detectFaces() {
  const options = new faceapi.TinyFaceDetectorOptions({
    inputSize: 320,      // Processing resolution
    scoreThreshold: 0.5  // Minimum confidence
  });

  const detections = await faceapi
    .detectAllFaces(video, options)
    .withFaceLandmarks()
    .withFaceExpressions();

  // Process results...
  requestAnimationFrame(detectFaces); // Loop
}
Each frame is processed through a pipeline: detect faces, find landmarks, classify expressions. requestAnimationFrame creates a smooth loop synced to display refresh rate.
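Before drawing, detection coordinates usually need to be rescaled from the model's input size to the overlay canvas. A sketch using the face-api.js helpers matchDimensions and resizeResults, assuming the canvas should track the video's intrinsic size:

const displaySize = { width: video.videoWidth, height: video.videoHeight };
faceapi.matchDimensions(canvas, displaySize);                   // size the overlay canvas

const resized = faceapi.resizeResults(detections, displaySize); // rescale boxes and landmarks
drawDetections(resized);
if (resized.length > 0) {
  updateEmotions(resized[0].expressions);                       // use the first detected face
}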
function drawDetections(detections) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);

  detections.forEach(detection => {
    const box = detection.detection.box;

    // Draw rectangle around face
    ctx.strokeStyle = '#00ff88';
    ctx.lineWidth = 3;
    ctx.strokeRect(box.x, box.y, box.width, box.height);

    // Draw corner accents
    // ... (stylized corners in cyan)
  });
}
A transparent <canvas> overlays the video. Each detected face gets a bounding box drawn using the Canvas 2D API. The canvas is mirrored to match the video.
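Mirroring can be done on the drawing context so the boxes line up with a selfie-style video. A sketch, assuming the <video> element itself is flipped with CSS transform: scaleX(-1):

ctx.save();
ctx.translate(canvas.width, 0);
ctx.scale(-1, 1);          // flip horizontally to match the mirrored video
// ... strokeRect calls for each detection ...
ctx.restore();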
function updateEmotions(expressions) {
  let maxEmotion = { name: '', value: 0 };

  for (const [emotion, value] of Object.entries(expressions)) {
    const percentage = Math.round(value * 100);

    // Look up this emotion's card in the UI (the selector is illustrative; adapt it to your markup)
    const card = document.querySelector(`[data-emotion="${emotion}"]`);

    // Update progress bar width
    card.querySelector('.emotion-fill').style.width = `${percentage}%`;

    // Track highest emotion
    if (value > maxEmotion.value) {
      maxEmotion = { name: emotion, value };
    }
  }

  // Highlight dominant emotion
  const dominantCard = document.querySelector(`[data-emotion="${maxEmotion.name}"]`);
  dominantCard.classList.add('dominant');
}
The expressions object contains a probability (0.0-1.0) for each of the seven emotions: neutral, happy, sad, angry, fearful, disgusted, and surprised. We convert these values to percentages, update the CSS widths of the progress bars, and highlight the dominant emotion.
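For illustration, a typical expressions object looks like the following (the values here are invented); the dominant emotion can also be picked out by sorting the entries.

const expressions = {
  neutral: 0.02, happy: 0.91, sad: 0.01, angry: 0.01,
  fearful: 0.01, disgusted: 0.01, surprised: 0.03
};

const [dominant] = Object.entries(expressions).sort((a, b) => b[1] - a[1]);
console.log(dominant); // ['happy', 0.91]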
All processing happens locally in your browser using WebGL acceleration. No data is sent to any server.
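One way to confirm this locally: face-api.js bundles TensorFlow.js and re-exports it as faceapi.tf, so the active backend can be inspected from the console (an optional check, not part of the app's code).

console.log('TensorFlow.js backend:', faceapi.tf.getBackend()); // usually 'webgl'; falls back to 'cpu'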
tiny_face_detector (190 KB) - locates faces in the frame
face_landmark_68 (350 KB) - maps 68 facial feature points
face_expression (330 KB) - classifies 7 emotions
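The ./models folder referenced by loadFromUri typically contains a weights manifest plus one or more weight shards per network. The file names below follow the face-api.js weights distribution; your copy may be sharded differently.

models/
  tiny_face_detector_model-weights_manifest.json
  tiny_face_detector_model-shard1
  face_landmark_68_model-weights_manifest.json
  face_landmark_68_model-shard1
  face_expression_model-weights_manifest.json
  face_expression_model-shard1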