
What Is FACS? A Guide to the Facial Action Coding System

Paul Ekman's Facial Action Coding System has been the gold standard for measuring facial behavior for more than 40 years. Here's what it is, how it works, and why it matters for behavioral intelligence.

2026-04-07 · 8 min read
44 Action Units mapped by FACS

The Origin of FACS

In the 1970s, psychologists Paul Ekman and Wallace V. Friesen set out to create an objective, anatomically based system for describing every visually distinguishable facial movement. The result — the Facial Action Coding System — has since become the most widely used and validated framework for measuring facial behavior in behavioral science, clinical psychology, and now AI-driven analytics.

FACS decomposes facial expressions into individual Action Units (AUs), each corresponding to the contraction or relaxation of one or more facial muscles. There are 44 AUs in total, and any facial expression — from a subtle micro-expression of contempt to a full Duchenne smile — can be described as a combination of these units.

How Action Units Work

Each Action Unit is numbered and named for the facial action it produces, and mapped to the underlying muscles. AU1 (Inner Brow Raiser) involves the frontalis muscle, pars medialis. AU6 (Cheek Raiser) involves the orbicularis oculi, pars orbitalis. AU12 (Lip Corner Puller) involves the zygomaticus major, the muscle chiefly responsible for smiling.
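This numbering scheme lends itself to a simple lookup table. A minimal sketch in Python, covering only the three AUs discussed above (the full system defines 44; the `describe_au` helper is illustrative, not part of any FACS tooling):

```python
# Minimal lookup table for a few FACS Action Units.
# Maps AU number -> (action name, primary muscle involved).
ACTION_UNITS = {
    1:  ("Inner Brow Raiser", "frontalis, pars medialis"),
    6:  ("Cheek Raiser", "orbicularis oculi, pars orbitalis"),
    12: ("Lip Corner Puller", "zygomaticus major"),
}

def describe_au(number: int) -> str:
    """Return a human-readable description of an Action Unit."""
    name, muscle = ACTION_UNITS[number]
    return f"AU{number} ({name}): {muscle}"

print(describe_au(12))  # AU12 (Lip Corner Puller): zygomaticus major
```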

The key insight of FACS is that it separates observation from interpretation. A FACS coder does not label an expression as "happy" or "angry" — they record which Action Units are active, at what intensity (A through E), and in what combination. Interpretation comes later, based on decades of validated research linking AU combinations to emotional states, cognitive load, and social intentions.
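A coded record, then, is just a set of (AU, intensity) observations, with interpretation applied as a separate step. A hypothetical sketch of that two-stage separation, using the well-known AU6 + AU12 combination that characterizes a Duchenne smile (the data structures here are illustrative, not a standard format):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AUEvent:
    """One observed Action Unit with a FACS intensity grade A-E."""
    au: int
    intensity: str  # "A" (trace) through "E" (maximum)

# Stage 1: observation only -- record which AUs are active, no emotion labels.
observed = {AUEvent(au=6, intensity="C"), AUEvent(au=12, intensity="D")}

# Stage 2: interpretation, applied afterwards to the coded record.
def is_duchenne_smile(events: set) -> bool:
    """AU6 + AU12 together is the classic Duchenne-smile signature."""
    active = {e.au for e in events}
    return {6, 12} <= active

print(is_duchenne_smile(observed))  # True
```

Keeping the two stages in separate functions mirrors the methodological point: the same observation record can be fed to different interpretation rules without re-coding the video.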

This separation is what makes FACS uniquely valuable for objective behavioral measurement. Unlike self-report surveys or subjective observation, FACS-based analysis produces the same result regardless of who performs the coding — human or machine.

FACS in the Age of AI

Traditional FACS coding is labor-intensive — a trained human coder takes approximately 100 minutes to code a single minute of video. This made large-scale FACS analysis impractical until the advent of computer vision.

Modern AI systems like Google's MediaPipe FaceMesh can track 468 facial landmarks in real time, computing Action Unit proxies from geometric relationships between those points. What once took a trained coder hours now takes a machine 90 seconds.
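As a rough illustration of what such a geometric proxy can look like: treat landmarks as 2D points and measure how far the lip corners are pulled apart, normalized by a scale-stable reference distance. This is a simplified sketch, not MediaPipe's actual computation, and the example coordinates are synthetic:

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def au12_proxy(left_corner, right_corner, left_eye_outer, right_eye_outer):
    """Rough AU12 (Lip Corner Puller) proxy: mouth width normalized
    by the inter-ocular distance, which stays stable as the face
    moves toward or away from the camera."""
    mouth_width = dist(left_corner, right_corner)
    eye_span = dist(left_eye_outer, right_eye_outer)
    return mouth_width / eye_span

# Neutral vs. smiling mouth corners against the same eye landmarks:
eyes = ((0.0, 0.0), (10.0, 0.0))
neutral = au12_proxy((3.0, 8.0), (7.0, 8.0), *eyes)   # 4 / 10 = 0.4
smiling = au12_proxy((2.0, 8.0), (8.0, 8.0), *eyes)   # 6 / 10 = 0.6
print(smiling > neutral)  # True
```

A production system would track many such ratios per frame and calibrate them against each subject's neutral baseline; the point here is only that AU proxies reduce to geometry over landmark coordinates.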

GRW Project uses this approach to democratize FACS-based behavioral intelligence. By implementing Action Unit detection at scale via MediaPipe FaceMesh, the platform delivers behavioral signals that were previously accessible only to academic researchers and elite sports organizations with dedicated coding teams.

Why FACS Matters for Performance

FACS is not about "reading emotions" — it's about measuring behavioral signals that correlate with performance-relevant states. Composure under pressure, authentic engagement, cognitive clarity, and decision readiness all produce measurable facial signatures.

For coaches, this means objective data on how athletes respond to high-pressure moments. For HR directors, it means evidence-based leadership assessments beyond gut instinct. For healthcare leaders, it means detecting burnout signals before they become crises.

The 40+ years of peer-reviewed validation behind FACS make it uniquely suited for high-stakes environments where subjective assessment carries real consequences.

The Future of Behavioral Intelligence

FACS was designed for human coders in laboratory settings. Its application to real-world video at scale represents a paradigm shift — from expensive, slow, academic measurement to fast, accessible, actionable intelligence.

As AI models improve and landmark detection becomes more precise, the resolution of FACS-based behavioral intelligence will only increase. The organizations that adopt these tools earliest will build the deepest behavioral datasets and the strongest competitive advantage.