Why Trust This

The output is only as good as the methodology behind it.

This page exists because we believe any tool that makes claims about human behaviour owes its users a complete explanation of how those claims are derived. Here is ours.

01

The Science Predates Us By Decades

Paul Ekman spent more than 20 years developing the Facial Action Coding System — a framework for mapping every measurable movement of the human face to a numbered Action Unit. The FBI uses it. Clinical psychologists use it. Researchers studying deception, pain, and emotional regulation use it. It is not a startup's theory. It is peer-reviewed, replicated, and published.

We did not invent FACS. We implemented it — at scale, in a browser, on any video you hand us. The leap was not scientific. It was engineering.

Research basis

FACS was developed by Paul Ekman & Wallace Friesen and first published in 1978. It has been widely adopted in clinical, security, and performance psychology since the 1990s.

02

The Pipeline Is Open to Inspection

GRW Project uses Google's MediaPipe FaceMesh — an open-source, production-grade model that maps 468 3D landmarks across the face at up to 30fps. The same model running in your browser is the same model documented in Google's published research. Nothing proprietary between the video and the landmark data.

From those landmarks, we compute Eye Aspect Ratio (Soukupová & Čech, 2016), head pose via solvePnP, and FACS Action Unit proxies derived from published geometric relationships between landmark positions. Every formula has a citation. Every output has a source.
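The Eye Aspect Ratio can be stated compactly. Below is a minimal sketch of the published formula, assuming six (x, y) eye-contour landmarks in the ordering used by Soukupová & Čech; the specific MediaPipe landmark indices that feed it are an implementation detail not shown here.

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR per Soukupová & Čech (2016): p1/p4 are the eye corners,
    p2/p3 points on the upper lid, p5/p6 points on the lower lid,
    each an (x, y) coordinate. The ratio falls toward zero as the
    eye closes, so a blink appears as a brief dip in the series."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))
```

Because the formula is a ratio of distances, it is largely invariant to face scale and in-plane rotation, which is why a fixed blink threshold works across subjects.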

Research basis

MediaPipe FaceMesh: Kartynnik et al. (2019), Real-time Facial Surface Geometry from Monocular Video on Mobile GPUs.
EAR: Soukupová & Čech (2016), Real-Time Eye Blink Detection using Facial Landmarks.

03

Your Video Is Never Retained

This is not a privacy policy. It is an architecture decision. For standard analysis (up to 20 people), the entire computer vision pipeline — TensorFlow.js, MediaPipe FaceMesh, landmark extraction, Action Unit computation, metric aggregation — runs in your browser. The video file is loaded into browser memory and processed by local compute. It does not touch a server.

For large group analysis (20+ people or multi-camera), video is temporarily transferred to GPU infrastructure for processing and immediately deleted afterward. Only geometric facial landmark coordinates (468 x,y,z points per face) are returned — no pixel data is retained. We never retain the video itself regardless of tier. We could not re-identify a subject from what reaches us even if we wanted to.

Research basis

Technical verification: open your browser network inspector while an analysis runs. You will see no upload traffic for your video file. The analysis data sent to the API is JSON — numerical aggregates only.

04

Confidence Is Shown, Not Hidden

Every finding in a GRW Project report carries a confidence level: High, Medium, Low, or Abstain. These are not decorative. They reflect the underlying data quality — how many frames were analysed, how consistently the face was detected, how stable the relevant signals were across the session.

"Abstain" means the system did not have enough reliable data to generate a finding. It chose to say nothing rather than say something unreliable. This is the correct behaviour. A finding that emerges from 12 low-quality frames carries less weight than one drawn from 400 stable ones — and the report tells you which is which.

Research basis

The quality gate runs before any output is generated. Minimum viable thresholds: ≥10 seconds of footage, ≥50% frames with face detected, quality score ≥0.3. Below threshold = analysis abstained.
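The thresholds above translate directly into a gate check. Here is a sketch with illustrative function and field names, not the actual GRW Project code:

```python
def quality_gate(duration_s, frames_total, frames_with_face, quality_score):
    """Apply the minimum viable thresholds before any output is
    generated: >= 10 s of footage, face detected in >= 50% of frames,
    and a quality score >= 0.3. Returns (passed, reasons); an empty
    reasons list means the analysis may proceed."""
    reasons = []
    if duration_s < 10:
        reasons.append("footage shorter than 10 s")
    if frames_total == 0 or frames_with_face / frames_total < 0.5:
        reasons.append("face detected in fewer than 50% of frames")
    if quality_score < 0.3:
        reasons.append("quality score below 0.3")
    return (not reasons, reasons)
```

If any reason is present, the correct behaviour is to abstain rather than emit an unreliable finding.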

05

3-Layer Transparency — Raw to Insight

Every paid report includes the 3-layer signal view: raw measurements (what the model detected), patterns (how those measurements changed over time), and insights (the behavioural interpretations we drew). You can inspect each layer independently.

This is not a feature. It is an accountability structure. If a finding reads as strange, you can trace it backwards. Insight → pattern → raw number. If the raw number looks wrong, the finding should be discarded. That decision is yours to make — and we give you the tools to make it.

Research basis

Layer 1 (Raw): direct MediaPipe outputs — AU intensities, EAR values, engagement percentages, frame counts. Layer 2 (Patterns): rolling aggregates, spike detection, trend direction. Layer 3 (Insights): probabilistic interpretations with confidence and evidence chains.
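The three layers can be pictured as nested records linked by evidence references, so that any insight can be walked back to a raw number. The field names below are purely illustrative and not the real report schema:

```python
# Hypothetical shape of a 3-layer finding. Every insight lists the
# pattern evidence it rests on; every pattern lists its raw sources.
finding = {
    "insight": {
        "text": "Elevated cognitive load in final third of session",
        "confidence": "Medium",
        "evidence": ["pattern:blink_var_spike"],
    },
    "patterns": {
        "blink_var_spike": {
            "kind": "spike",
            "window_s": [1200, 1500],
            "source_metrics": ["raw:ear_series"],
        },
    },
    "raw": {
        "ear_series": {"frames": 400, "mean": 0.27, "variance": 0.0041},
    },
}

def trace(report, evidence_ref):
    """Follow one evidence link down a layer:
    'pattern:x' -> report['patterns'][x], 'raw:y' -> report['raw'][y]."""
    layer, key = evidence_ref.split(":", 1)
    return report["patterns" if layer == "pattern" else "raw"][key]
```

Tracing `insight -> pattern -> raw` is then just repeated lookups along the evidence chain.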

06

Probabilistic Language Is Not Weakness

Behavioural signals are correlates, not determinants. Elevated AU4 brow furrow frequency is associated with cognitive effort and stress — it does not prove stress. High blink rate variance is a cognitive load proxy — it is not a diagnosis. We write our findings in probabilistic language because that is what the evidence supports.

A tool that speaks in absolutes about human behaviour is not more powerful. It is less honest. The practitioners who get the most out of this product are the ones who use the data as a starting point for observation and conversation, not a verdict.

Research basis

Every finding that uses probabilistic language is flagged with a ⚠ indicator and a plain-language explanation of what the signal measures and what it does not claim to prove.

07

Every Analysis Makes the Next One Better

Every analysis that runs on GRW Project contributes anonymised aggregate data to a growing intelligence corpus — benchmark distributions, temporal pattern libraries, context-score correlations, and prediction accuracy metrics. No individual analysis is identifiable. No video is retained. Only numerical aggregates and statistical distributions.

This means the more analyses that run, the more accurate your benchmark positions become, the more patterns the system recognises, and the better the predictive models get. Your report doesn't just tell you where you are — it tells you where you stand against an ever-growing, ever-more-precise reference population segmented by industry, role, and context.

Research basis

Six aggregate intelligence systems run in parallel: (1) Benchmark percentile distributions by segment, (2) Temporal signal pattern library, (3) Prediction model feedback loops, (4) Cross-session cohort intelligence (opt-in), (5) Context-score correlation database, (6) Abstain rate intelligence for continuous quality improvement. All anonymised. All privacy-preserving. All improving with every analysis.

We built for compliance.
Not around it.

Processing behavioural and biometric data carries legal responsibility in most jurisdictions. We designed the platform's architecture to satisfy the most stringent requirements first, so that everything downstream is easier.

🍁
Designed for PIPEDA

Personal Information Protection and Electronic Documents Act — Canada's federal private sector privacy law. Our architecture is designed to support compliance with PIPEDA's consent and data minimisation principles.

🇺🇸
BIPA-Aware

Illinois Biometric Information Privacy Act. We collect explicit consent before any biometric analysis runs and honour all user deletion requests. Users are responsible for compliance with their own jurisdictional requirements.

🇪🇺
Designed for GDPR Article 9

Biometric data for the purpose of uniquely identifying a natural person is special category data under GDPR. Our zero-retention architecture is designed to support compliance by limiting server-level biometric processing.

Privacy-preserving data that gets smarter over time.

Every GRW Project analysis contributes to six anonymised intelligence systems. No individual is identifiable. No video is retained. Only aggregate statistics that make every future report more valuable.

System 01
Benchmark Percentiles

Running score distributions by segment (professional sport, amateur, executive, general). Know exactly where you rank — not against a static table, but against a live, growing population.

4 segments · 13 scores · updated with every analysis
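A percentile against a live distribution is a simple rank computation. A sketch using the mid-rank convention for ties; the segment scores below are made-up numbers:

```python
from bisect import bisect_left, bisect_right

def percentile_rank(segment_scores, value):
    """Percentile rank of `value` within one segment's score
    distribution, counting ties at half weight (mid-rank)."""
    scores = sorted(segment_scores)
    lo = bisect_left(scores, value)
    hi = bisect_right(scores, value)
    return 100.0 * (lo + (hi - lo) / 2.0) / len(scores)
```

As new analyses append scores to the distribution, the same raw value's rank can shift, which is what "updated with every analysis" means in practice.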
System 02
Temporal Pattern Library

Abstract signal shapes — composure recovery arcs, stress spike profiles, flow state entries — catalogued and matched. Patterns are numerical curves, not identities.

5 pattern types · normalised to 20 time steps
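Normalising to 20 time steps means resampling each session's curve onto a fixed grid so that sessions of different lengths become comparable shapes. A minimal linear-interpolation sketch; the actual resampling method is an assumption, not documented in the source:

```python
def resample(curve, steps=20):
    """Linearly resample a signal to a fixed number of time steps.
    Two sessions of different durations then yield curves of the
    same length, which makes shape matching possible."""
    n = len(curve)
    if n == 1:
        return [curve[0]] * steps
    out = []
    for i in range(steps):
        t = i * (n - 1) / (steps - 1)   # position in the original curve
        j = int(t)
        frac = t - j
        if j + 1 < n:
            out.append(curve[j] * (1 - frac) + curve[j + 1] * frac)
        else:
            out.append(curve[j])
    return out
```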
System 03
Prediction Feedback Loops

When you run a follow-up analysis, the system compares past predictions against observed reality. Prediction error residuals tune the model for your segment.

30/60/90-day horizons · accuracy improves with every data point
System 04
Cohort Intelligence

Opt-in longitudinal tracking with one-way hashed identity. Cross-session trends reveal patterns invisible in single analyses — like declining synchrony predicting turnover.

Opt-in only · hash not reversible · opt out anytime
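A one-way pseudonym can be sketched in a few lines. The `salt` parameter here is an illustrative addition (salting is standard practice); the source states only that a non-reversible SHA-256 hash is used:

```python
import hashlib

def cohort_pseudonym(user_identifier: str, salt: str) -> str:
    """One-way identity for opt-in cohort tracking. SHA-256 is a
    cryptographic hash: the 64-hex-character digest cannot be
    reversed to recover the original identifier."""
    data = (salt + user_identifier).encode("utf-8")
    return hashlib.sha256(data).hexdigest()
```

The same identifier always maps to the same pseudonym, which is what allows cross-session linkage without storing the identity itself.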
System 05
Context Correlations

How does context affect scores? Training vs game-day, sport vs corporate, leadership vs engagement focus. Effect sizes computed with Cohen's d, significance tracked.

Factor pairs × 13 scores · effect sizes + significance
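Cohen's d is the standardised difference between two group means, computed with a pooled standard deviation. A sketch over two hypothetical score samples (e.g. training vs game-day):

```python
import math

def cohens_d(a, b):
    """Cohen's d with pooled standard deviation: how many pooled
    standard deviations separate the means of samples a and b."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled
```

By convention, d around 0.2 is read as a small effect, 0.5 medium, and 0.8 large.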
System 06
Abstain Intelligence

We track what causes quality failures — lighting, blur, occlusion, duration — and use that to improve guidance and reduce abstain rates. Silence is better than noise.

Current abstain rate: 8% · top cause: lighting
Privacy guarantee

All six systems operate on anonymised aggregate data only. No individual analysis is identifiable from the aggregate tables. No video is retained. No biometric data is stored beyond the retention window. Cohort tracking uses a one-way SHA-256 hash that cannot be reversed to a user identity. You can opt out of longitudinal tracking at any time with no effect on your analysis quality.

What GRW Project does not do.

Any tool honest about its capabilities also needs to be honest about its limits.

Not in scope
It does not diagnose

GRW Project identifies behavioural signals and patterns. It does not diagnose psychological conditions, learning disabilities, mental health disorders, or neurological states. It is a coaching tool, not a clinical instrument.

Not in scope
It does not make decisions

The output is data to inform a human decision — not to replace one. Talent selection, team composition, and performance interventions require human judgment. The report is a starting point for that conversation.

Not in scope
It is not lie detection

FACS can identify facial movement. It cannot determine intent, truthfulness, or deception with reliability. Anyone claiming facial analysis is a reliable lie detector is selling something the science does not support.

Not in scope
It needs a usable face

The pipeline requires the subject to be facing the camera with adequate lighting, occupying at least 20% of the frame. Occluded faces, harsh backlight, or very low resolution produce degraded or abstained results.

If you made it this far, you're the right kind of user.

The people who scrutinise the methodology before they trust the output are the people who get the most out of it. Run a free analysis. Check the signal layers. Trace a finding back to its raw measurement. Then decide.