Dextro
ML Engineer / Core Architect · 2016–17 · Acquired by Axon Enterprise (NASDAQ: AXON)
The Problem
Law enforcement agencies had accumulated an overwhelming volume of body-worn camera footage — 5.4 petabytes and growing — with no scalable mechanism to review it. Manual review was infeasible. The challenge was building AI systems that could reliably detect critical events (use-of-force incidents, weapons, foot chases) in chaotic, real-world footage where lighting, motion, and context vary wildly.
The System
Dextro's platform combined visual, audio, and motion signals into a unified model that enabled apps, devices, and robots to "see" and understand the world in real time. I was the core architect of the evaluation and active learning pipeline.
data: 5.4 PB body-worn camera footage
pipeline: custom evaluation + active learning with web interface
metrics: IoU and mAP across stratified datasets
apps: Flask-based Turnkey and Nalanda architectures for client deployment
velocity: model validation cycles reduced from 2 weeks → 2 days (~85% reduction)
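The IoU metric in the list above can be illustrated with a minimal implementation. This is a generic sketch, not Dextro's production code; the `(x1, y1, x2, y2)` corner-coordinate box format is an assumption. mAP builds on this by thresholding IoU to decide true positives, then averaging precision across recall levels and classes.

```python
def iou(box_a, box_b):
    """Intersection-over-Union for two axis-aligned boxes.

    Boxes use the (assumed) (x1, y1, x2, y2) corner format.
    Returns a value in [0, 1]: 1.0 for identical boxes,
    0.0 for disjoint ones.
    """
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Stratifying these scores across dataset slices (lighting, motion, scene type) is what makes the metric meaningful for chaotic body-camera footage rather than a single aggregate number.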
The key technical challenge was the "domain gap": models trained on clean datasets failed on chaotic body-camera footage. I designed a bespoke active learning pipeline that surfaced "hard negatives" through a web-based visualization tool and fed them back into training. This closed the gap and delivered the high-recall performance essential for LAPD deployment.
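The selection step of such an active learning loop can be sketched as uncertainty ranking: score each unlabeled frame, then send the frames the model is least sure about to the labeling UI. A minimal sketch, where `score_fn` and `budget` are hypothetical stand-ins for the production detector and the per-cycle labeling capacity:

```python
def select_hard_examples(frames, score_fn, budget=100):
    """Pick the most informative unlabeled frames for human review.

    Hypothetical sketch of an active learning selection step:
    `score_fn` maps a frame to a detection confidence in [0, 1].
    Frames whose score is closest to the 0.5 decision boundary are
    the ones the model is most uncertain about, so labeling them
    yields the largest training signal per annotation.
    """
    # Rank by distance from the decision boundary, ascending.
    ranked = sorted(frames, key=lambda f: abs(score_fn(f) - 0.5))
    return ranked[:budget]
```

Example usage with toy scores:

```python
scores = {"frame_a": 0.9, "frame_b": 0.52, "frame_c": 0.1}
select_hard_examples(list(scores), scores.get, budget=1)
# frame_b sits nearest the boundary, so it is reviewed first.
```

The selected frames are labeled, folded back into the training set, and the model is retrained, which is the loop that shortened validation cycles from weeks to days.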
The Outcome
In February 2017, Axon Enterprise acquired Dextro to form the core of its AI division. The technology became the backbone of Axon AI, deployed with major agencies including the LAPD.
The team included researchers from UPenn GRASP Lab, Yale, IIT Delhi, and Microsoft. Dextro had raised $1.7M from Two Sigma Ventures and RRE Ventures.
This was a direct bridge from cutting-edge deep learning research to national-scale deployment — the kind of work where getting the model wrong has real consequences.