Face Emotion Detection

Real-time facial expression analysis

Detected Emotions

Live probability bars for the seven classes: 😊 Happy, 😢 Sad, 😠 Angry, 😨 Fearful, 🤢 Disgusted, 😲 Surprised, 😐 Neutral.

Emotion History

Session stats panel: recording time, number of data points collected, and the average dominant emotion.

Emotion Heatmap

Per-emotion intensity timeline (Happy, Sad, Angry, Fearful, Disgusted, Surprised, Neutral) on a 0-100% scale.

How It Works

🧠

Technology Stack

This application uses face-api.js, a JavaScript library built on top of TensorFlow.js. It runs entirely in your browser; no server-side processing is required. A sanity-check sketch follows the list below.

  • TensorFlow.js - Google's ML framework for JavaScript
  • face-api.js - High-level face detection API
  • WebRTC - Browser API for camera access
  • Canvas API - For drawing detection overlays
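
A quick sanity check for each layer of the stack (a sketch; assumes face-api.min.js was included via a <script> tag beforehand):

// Each check maps to one item in the list above
if (typeof faceapi === 'undefined') {
    throw new Error('face-api.js (and its bundled TensorFlow.js) failed to load');
}
if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
    throw new Error('WebRTC camera API is unavailable in this browser');
}
if (!document.createElement('canvas').getContext('2d')) {
    throw new Error('Canvas 2D API is unavailable');
}
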
👁

Face Detection

We use the TinyFaceDetector model, a lightweight convolutional neural network (CNN) optimized for real-time detection; a usage sketch follows the list below.

  • Input: Video frames at 320x240 resolution
  • Process: Sliding window with feature extraction
  • Output: Bounding box coordinates + confidence score
  • Threshold: 50% confidence minimum
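
A single-frame sketch of the detector's output (run inside an async function; detectSingleFace returns the highest-scoring face, or undefined):

const options = new faceapi.TinyFaceDetectorOptions({
    inputSize: 320,
    scoreThreshold: 0.5
});
const detection = await faceapi.detectSingleFace(video, options);
if (detection) {
    const { x, y, width, height } = detection.box;  // bounding box in source pixels
    console.log(`Face at (${x | 0}, ${y | 0}), ${width | 0}x${height | 0}, score ${detection.score.toFixed(2)}`);
}
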
📌

Facial Landmarks

The 68-point landmark model identifies key facial features used for emotion analysis; see the accessor sketch after this list.

  • Points 1-17: Jawline contour
  • Points 18-27: Eyebrows
  • Points 28-36: Nose bridge and tip
  • Points 37-48: Eyes
  • Points 49-68: Mouth and lips
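
face-api.js exposes named accessors for these point ranges; a sketch (inside an async function, reusing the detector options from above):

const result = await faceapi
    .detectSingleFace(video, options)
    .withFaceLandmarks();

if (result) {
    const lm = result.landmarks;        // FaceLandmarks68: 68 {x, y} points
    console.log(lm.getJawOutline());    // points 1-17
    console.log(lm.getLeftEyeBrow());   // part of points 18-27
    console.log(lm.getNose());          // points 28-36
    console.log(lm.getLeftEye());       // part of points 37-48
    console.log(lm.getMouth());         // points 49-68
}
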
🎭

Emotion Recognition

The FaceExpressionNet analyzes landmark positions and facial features to classify emotions; a ranking sketch follows the list below.

  • Model: Trained on FER2013 dataset (35,000+ images)
  • Classes: 7 basic emotions
  • Output: Probability distribution (0-100%)
  • Speed: ~30 FPS on modern devices
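
Each per-face result carries an expressions object whose built-in asSortedArray() ranks the seven classes (sketch, inside the detection callback):

// detection is one element of the detectAllFaces(...).withFaceExpressions() result
const ranked = detection.expressions.asSortedArray();
// e.g. [{ expression: 'happy', probability: 0.93 }, ...]
const top = ranked[0];
console.log(`Dominant: ${top.expression} (${Math.round(top.probability * 100)}%)`);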

Code Walkthrough

1. Loading AI Models (app.js)
async function loadModels() {
    const MODEL_URL = './models';

    await Promise.all([
        faceapi.nets.tinyFaceDetector.loadFromUri(MODEL_URL),
        faceapi.nets.faceLandmark68Net.loadFromUri(MODEL_URL),
        faceapi.nets.faceExpressionNet.loadFromUri(MODEL_URL)
    ]);
}

Three neural network models are loaded in parallel: the face detector (~190 KB), the landmark detector (~350 KB), and the expression classifier (~330 KB). Total: under 1 MB.
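
A usage sketch that gates the UI on model readiness (the 'start-btn' element id is hypothetical; adapt to your markup):

loadModels()
    .then(() => { document.getElementById('start-btn').disabled = false; })
    .catch(err => console.error('Model loading failed; check the ./models path:', err));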

2. Accessing the Webcam (app.js)
async function startCamera() {
    stream = await navigator.mediaDevices.getUserMedia({
        video: {
            width: { ideal: 640 },
            height: { ideal: 480 },
            facingMode: 'user'  // Front camera
        }
    });
    video.srcObject = stream;
}

The WebRTC API requests camera access. We specify ideal dimensions and the front-facing camera for selfie mode. The stream is assigned to a <video> element.
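
getUserMedia rejects with named DOMExceptions; a minimal sketch of handling the common cases (inside an async function; showMessage is a stand-in for your own UI feedback):

try {
    await startCamera();
} catch (err) {
    if (err.name === 'NotAllowedError') {
        showMessage('Camera permission was denied');
    } else if (err.name === 'NotFoundError') {
        showMessage('No camera device was found');
    } else {
        throw err;  // unexpected failure
    }
}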

3. Detection Loop (app.js)
async function detectFaces() {
    const options = new faceapi.TinyFaceDetectorOptions({
        inputSize: 320,      // Processing resolution
        scoreThreshold: 0.5  // Minimum confidence
    });

    const detections = await faceapi
        .detectAllFaces(video, options)
        .withFaceLandmarks()
        .withFaceExpressions();

    // Process results...
    requestAnimationFrame(detectFaces);  // Loop
}

Each frame is processed through a pipeline: detect faces, find landmarks, classify expressions. requestAnimationFrame schedules the next pass, keeping the loop synced to the display's refresh rate (inference time permitting).
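
Detections come back in the source frame's coordinate space; if the video is displayed at a different size, face-api.js ships helpers to rescale results before drawing (a sketch):

// Keep the overlay canvas in sync with the displayed video size,
// then rescale boxes and landmarks into that space before drawing.
const displaySize = { width: video.clientWidth, height: video.clientHeight };
faceapi.matchDimensions(canvas, displaySize);
const resized = faceapi.resizeResults(detections, displaySize);
drawDetections(resized);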

4. Drawing Face Boxes (app.js)
function drawDetections(detections) {
    ctx.clearRect(0, 0, canvas.width, canvas.height);

    detections.forEach(detection => {
        const box = detection.detection.box;

        // Draw rectangle around face
        ctx.strokeStyle = '#00ff88';
        ctx.lineWidth = 3;
        ctx.strokeRect(box.x, box.y, box.width, box.height);

        // Draw corner accents
        // ... (stylized corners in cyan)
    });
}

A transparent <canvas> overlays the video. Each detected face gets a bounding box drawn using the Canvas 2D API. The canvas is mirrored to match the video.
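
One way to mirror the overlay (assuming the video itself is mirrored with CSS) is to flip the canvas horizontally before drawing; drawMirrored is a hypothetical wrapper around the function above:

function drawMirrored(detections) {
    ctx.save();
    ctx.translate(canvas.width, 0);  // move the origin to the right edge
    ctx.scale(-1, 1);                // flip the x axis to match the mirrored video
    drawDetections(detections);      // existing drawing code, now mirrored
    ctx.restore();                   // undo the transform for other drawing
}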

5. Updating Emotion Display (app.js)
function updateEmotions(expressions) {
    let maxEmotion = { name: '', value: 0 };

    for (const [emotion, value] of Object.entries(expressions)) {
        const percentage = Math.round(value * 100);

        // Look up this emotion's card (data-emotion attribute assumed in the markup)
        const card = document.querySelector(`.emotion-card[data-emotion="${emotion}"]`);

        // Update progress bar width
        card.querySelector('.emotion-fill').style.width = `${percentage}%`;

        // Track highest emotion
        if (value > maxEmotion.value) {
            maxEmotion = { name: emotion, value };
        }
    }

    // Highlight dominant emotion (clearing any previous highlight)
    document.querySelectorAll('.emotion-card.dominant')
        .forEach(el => el.classList.remove('dominant'));
    const dominantCard = document.querySelector(
        `.emotion-card[data-emotion="${maxEmotion.name}"]`);
    dominantCard.classList.add('dominant');
}

The expressions object contains probabilities for all 7 emotions (0.0-1.0). We convert them to percentages, update the CSS widths of the progress bars, and highlight the dominant emotion. The card lookups above assume each emotion card carries a data-emotion attribute; adapt the selectors to your markup.

System Architecture

📷 Webcam → 🎬 Video Frame → 🧠 TinyFaceDetector → 📌 Landmark68 → 📈 ExpressionNet → 🎭 Emotions

All processing happens locally in your browser using WebGL acceleration. No data is sent to any server.
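
You can confirm the acceleration path at runtime; face-api.js re-exports its TensorFlow.js instance as faceapi.tf:

// "webgl" means inference runs on the GPU; "cpu" is the fallback
console.log('TF.js backend:', faceapi.tf.getBackend());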

Pre-trained Models

Model                 Size      Purpose
tiny_face_detector    190 KB    Locates faces in the frame
face_landmark_68      350 KB    Maps 68 facial feature points
face_expression       330 KB    Classifies 7 emotions