The yard, scanned in real time as you scroll.
Same scroll-paints-frames trick. Different shape. Two cinematic drone clips, frame-stamped with the kind of telemetry an AI yard OS would surface. Your mouse wheel is the playhead.
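For the curious, the whole playhead mechanic fits in a few lines. A minimal sketch, assuming 302 pre-rendered frames served as frame-000.jpg through frame-301.jpg and a full-viewport canvas with id "player"; the names and file layout are illustrative, not the production build:

```ts
const FRAME_COUNT = 302;
const canvas = document.getElementById("player") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

// Kick off loading every JPEG up front; the browser caches them.
const frames: HTMLImageElement[] = Array.from({ length: FRAME_COUNT }, (_, i) => {
  const img = new Image();
  img.src = `/frames/frame-${String(i).padStart(3, "0")}.jpg`;
  return img;
});

function render(): void {
  // Scroll progress (0..1) is the playhead position.
  const scrollable = document.documentElement.scrollHeight - window.innerHeight;
  const progress = Math.min(1, Math.max(0, window.scrollY / scrollable));
  const index = Math.min(FRAME_COUNT - 1, Math.floor(progress * FRAME_COUNT));
  const frame = frames[index];
  if (frame.complete) ctx.drawImage(frame, 0, 0, canvas.width, canvas.height);
}

window.addEventListener("scroll", () => requestAnimationFrame(render));
frames[0].addEventListener("load", render); // paint the first frame on load
```

Routing the draw through requestAnimationFrame keeps scrubbing smooth even when the wheel fires faster than the display refreshes.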
Every truck. Every minute.
The same hyperlapse you'd see from a yard manager's office — except the system labels every box, lane, and unit before it stops moving.
One feed, zero radios.
Gate check-ins, dock assignments, dwell times — all surfaced from the same image stream a security camera was already capturing.
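To make that concrete: dwell time is just arithmetic over timestamped detections from that one stream. A hedged sketch; the event shape and names here (TrailerEvent, gate-in, gate-out) are hypothetical, not the product's actual schema:

```ts
interface TrailerEvent {
  trailerId: string;
  kind: "gate-in" | "dock-assign" | "gate-out";
  at: Date;
}

// Dwell = gate-in to gate-out, read off the same feed that logs check-ins.
function dwellMinutes(events: TrailerEvent[], trailerId: string): number | null {
  const mine = events.filter((e) => e.trailerId === trailerId);
  const inAt = mine.find((e) => e.kind === "gate-in")?.at;
  const outAt = mine.find((e) => e.kind === "gate-out")?.at;
  if (!inAt || !outAt) return null; // still in the yard, or never seen
  return (outAt.getTime() - inAt.getTime()) / 60_000;
}
```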
Trucks sort themselves.
Routing decisions get made before drivers reach for the radio. Spotters get dispatched to the right trailer, on the right lane, at the right minute.
ROI before the quarter ends.
Yards see double-digit dwell-time reductions in week two. The CFO sees it in the next P&L. Nobody had to install new hardware.
Then the truck leaves the yard.
The handoff between yard and highway is where most logistics software stops caring. We treat it as a continuation of the same feed — same model, same labels, just a moving camera and a different background.
Every mile, live.
The same engine that scans yards scans highways. Speed, temperature, geofence violations — all from a 1080p feed and one model call per frame.
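The geofence check, at least, is textbook. A sketch assuming each frame's model call yields a lat/lng fix: insideFence is the standard ray-casting point-in-polygon test, and the corridor rule is illustrative rather than the product's actual logic:

```ts
type Point = { lat: number; lng: number };

// Ray casting: count how many polygon edges a ray from the point crosses.
// An odd count means the point is inside the fence.
function insideFence(p: Point, fence: Point[]): boolean {
  let inside = false;
  for (let i = 0, j = fence.length - 1; i < fence.length; j = i++) {
    const a = fence[i], b = fence[j];
    const crosses =
      (a.lng > p.lng) !== (b.lng > p.lng) &&
      p.lat < ((b.lat - a.lat) * (p.lng - a.lng)) / (b.lng - a.lng) + a.lat;
    if (crosses) inside = !inside;
  }
  return inside;
}

// A violation is any fix that falls outside the permitted corridor.
const violation = (fix: Point, corridor: Point[]) => !insideFence(fix, corridor);
```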
Docks know before drivers.
ETA predictions reach 95% accuracy by the halfway point of any route, so dock-side teams stage receivers before the truck arrives, every time.
This whole thing is 302 JPEGs in a bucket.
Same trick as the AirPods scene above: pre-rendered frames painted to a canvas as you scroll. Two source clips · 302 frames · a single canvas component. The copy and the labels do the storytelling; the engine is plain HTML.
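How one canvas serves two clips is the only wrinkle worth showing. A hedged sketch: the 170/132 split of the 302 frames and the file paths are assumptions, not the real build.

```ts
interface Clip {
  prefix: string; // assumed bucket path
  count: number;  // frames in this clip
}

// Assumed split of the 302 frames across the yard and highway clips.
const clips: Clip[] = [
  { prefix: "/frames/yard/", count: 170 },
  { prefix: "/frames/highway/", count: 132 },
];

// Map global scroll progress (0..1) across both clips to one frame URL,
// so a single canvas component plays them back to back.
function frameUrl(progress: number): string {
  const total = clips.reduce((sum, c) => sum + c.count, 0); // 302
  let index = Math.min(total - 1, Math.max(0, Math.floor(progress * total)));
  for (const clip of clips) {
    if (index < clip.count) {
      return `${clip.prefix}${String(index).padStart(3, "0")}.jpg`;
    }
    index -= clip.count;
  }
  throw new Error("unreachable: index exceeds total frame count");
}
```

Keeping the clip boundary inside one lookup function means the scroll handler never needs to know there are two videos at all.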