The future of machine vision is multi-sensory

Murray Cox is principal engineer at Speedshield Technologies, where he connects cutting-edge research with real-world industrial mobile equipment and applications. He is passionate about pushing the boundaries of AI-driven vision and spatial sensing to revolutionise workplace safety and operational efficiency across industries.
When it comes to safety in warehouses and worksites, what could be more important than visibility? Cameras mounted on forklifts and other industrial vehicles have become the new standard, giving operators and managers a clearer line of sight into busy, often dust-filled environments.
But as anyone who has spent time on a warehouse floor knows, seeing isn’t always the same as understanding.
A camera can record what’s in front of a forklift, but it can’t always interpret what that means for the machine, its operator, or the people moving around it.
That’s why the next wave of “machine vision” isn’t about sharper images or higher resolution. It’s about equipping machines with the intelligence to see the way we do – with context and situational awareness.
By combining cameras with radar, LIDAR, temperature sensors, and accelerometers, the humble forklift graduates from nuts-and-bolts machinery to an intelligent workplace companion that helps operators work safely and confidently.
In many ways, this multi-sensory approach mirrors how humans navigate the world: sight is important, but we also rely on hearing, touch, and spatial awareness to make safe decisions.
The rise of multi-sensory safety systems
Multi-sensory systems are already changing how vehicles interact with their surroundings. Radar and LIDAR, for instance, can detect objects and people even in poor visibility – conditions where cameras might struggle due to low light, glare, or dust.
Temperature sensors can provide early warning of overheating components or nearby fire risks, while accelerometers can sense changes in speed, tilt, or impact that may indicate instability. Together, these inputs create a web of awareness that exceeds anything a single lens can capture.
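To make the fusion idea concrete, here is a minimal Python sketch of how such a layer might combine channels. Every field name, threshold, and rule is illustrative rather than drawn from any particular product:

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """One synchronised snapshot of readings; all fields are illustrative."""
    camera_sees_person: bool  # detection from an onboard vision model
    radar_range_m: float      # nearest radar return, metres
    lidar_range_m: float      # nearest LIDAR return, metres
    motor_temp_c: float       # drive-motor temperature, Celsius
    tilt_deg: float           # chassis tilt from the accelerometer

def assess_hazards(frame: SensorFrame) -> list[str]:
    """Fuse independent sensor channels into a single list of alerts."""
    alerts = []
    # Cross-check: trust a camera detection more when radar or LIDAR
    # agrees something solid is close - a shadow returns no range echo.
    obstacle_near = min(frame.radar_range_m, frame.lidar_range_m) < 3.0
    if frame.camera_sees_person and obstacle_near:
        alerts.append("pedestrian in path - slow down")
    if frame.motor_temp_c > 90.0:
        alerts.append("drive motor overheating")
    if abs(frame.tilt_deg) > 8.0:
        alerts.append("chassis tilt - possible instability")
    return alerts
```

The key move is the cross-check: agreement between independent channels is what turns separate readings into that web of awareness.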
For operators, this means an extra layer of protection in high-risk situations. A forklift navigating a crowded aisle can now distinguish between a pedestrian stepping out from behind a pallet and a shadow cast by overhead lighting.
On a slippery surface, sensors can recognise traction loss and adjust behaviour before the operator even reacts. Cross-checking sensors against one another also reduces the likelihood of false alarms – a major pain point in the industry that causes endless distractions and sows mistrust in safety systems.
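As a rough illustration of the traction idea, a system might compare the acceleration the drive commanded with what the accelerometer actually measured – a persistent shortfall suggests the wheels are slipping. The function below is a hypothetical sketch, and the threshold is a placeholder:

```python
def traction_lost(commanded_accel: float, measured_accel: float,
                  slip_threshold: float = 0.35) -> bool:
    """Flag probable wheel slip when measured acceleration falls well
    short of what the drive commanded. All numbers are placeholders."""
    if abs(commanded_accel) < 0.1:  # ignore near-idle noise
        return False
    shortfall = 1.0 - (measured_accel / commanded_accel)
    return shortfall > slip_threshold
```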
AI and edge processing: The brains behind the sensors
Collecting data from multiple sensors is only half the challenge; making sense of it in real time is where the real breakthroughs lie. In a busy warehouse, milliseconds matter.
A forklift approaching a blind intersection can’t afford a delay while raw data is sent to the cloud and back for analysis.
That’s why the next generation of safety systems is leaning heavily on edge computing – processing information directly on the machine itself.
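A hypothetical sketch of what that looks like in practice: the whole read-infer-alert cycle runs on the vehicle itself, with a latency budget enforced locally. Here read_sensors, run_model, and raise_alert stand in for the machine's real I/O and a locally deployed model; none of them are real APIs:

```python
import time

def edge_loop(read_sensors, run_model, raise_alert, budget_ms: float = 50.0):
    """Run the whole perception cycle on the vehicle: read, infer, alert.

    The three callables stand in for the machine's own sensor bus, a
    locally deployed model, and its operator-warning hardware - no
    network round trip anywhere in the loop.
    """
    while True:
        start = time.perf_counter()
        frame = read_sensors()      # local sensor bus, not the cloud
        risk = run_model(frame)     # inference on the onboard processor
        if risk > 0.8:
            raise_alert(risk)       # warning raised within the same cycle
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if elapsed_ms > budget_ms:
            # At a blind intersection, every millisecond of overrun
            # erodes the operator's margin for reaction.
            print(f"cycle overran budget: {elapsed_ms:.1f} ms")
```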
And artificial intelligence is the layer that brings this capability to life.
Models trained on millions of hours of operational footage and sensor data can help machines recognise complex patterns, from spotting signs of unsafe operator behaviour to predicting the likelihood of a collision.
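At its simplest, predicting the likelihood of a collision means mapping situational features to a probability. The toy logistic model below illustrates the shape of that computation – the feature names and weights are invented for the example, where a real system would learn them from the kind of training data described above:

```python
import math

# Illustrative only: a real system would learn these weights from
# operational footage and sensor logs, not hand-pick them.
WEIGHTS = {"closing_speed": 1.8, "distance": -1.2, "pedestrian_density": 0.9}
BIAS = -0.5

def collision_risk(features: dict[str, float]) -> float:
    """Map situational features to a 0-1 collision likelihood with a
    logistic function, the simplest form of this kind of scoring."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Fast approach, short distance, busy aisle -> elevated risk (~0.86)
print(collision_risk({"closing_speed": 2.0, "distance": 1.5,
                      "pedestrian_density": 0.6}))
```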