Dynamic Safety
Insights · 17 March 2026

Safety AI is not surveillance: a practical GDPR guide for industrial vision

By Dynamic Safety team

Any mention of cameras in the workplace triggers an immediate, and legitimate, question about surveillance and privacy. Active safety systems that use vision AI need a clear answer. Here is how we think about data minimisation, lawful basis, and employee trust when deploying SAiFI on site.

GDPR · Privacy · Compliance · SAiFI

The question every project gets asked

Any mention of cameras in an industrial workplace triggers an immediate, and entirely legitimate, question about surveillance and privacy. It comes from safety managers, works councils, union reps, and HR teams. The instinct to ask it is healthy. Workplaces that take privacy seriously are workplaces that can introduce technology responsibly.

The short answer is that a vision-based active safety system is a different category of product from CCTV or employee monitoring. The longer answer is how you design the system so that this is true in practice, not just in the marketing.

This article is not legal advice. It describes how we think about compliance when we deploy, and the questions to push on when you are specifying a system. Your data protection officer and legal team should sign off on the specific lawful basis, privacy notices, and retention decisions for your site.

The distinction that actually matters

Surveillance is about individuals: who did what, when. Active safety is about classes and states: was anyone in the pedestrian zone when the vehicle approached, and did the system respond. That framing is not just rhetorical. It drives every real decision about what data a system needs, collects, processes, and keeps.

A system that only needs class-level detections does not need to retain imagery. A system that only needs to know whether a zone was breached does not need to identify who breached it. Every design choice downstream of that is about upholding the distinction in practice.

Data minimisation by design, not by promise

Under UK GDPR, data minimisation is a principle, not a checkbox. That means a well-designed active safety system does more than promise to use less data; it processes and discards data as part of how it works:

  • Frames are processed in real time, in memory, and discarded. There is no rolling archive of continuous footage.
  • Detections are stored as events: time, zone, class. Not as imagery of individuals.
  • Image capture for training is an explicit, opt-in, consented process that is never silently enabled.
  • No face recognition, no biometric matching, no employee identification, as a deliberate architectural choice.
  • Retention is configurable, defaulted short, and visible to the operator rather than buried.
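
The event-logging approach above can be sketched in code. The following is a minimal, hypothetical illustration of class-level detection logging; the field names, detector interface, and record schema are our invention for this article, not SAiFI's actual implementation:

```python
from dataclasses import dataclass, asdict
import time

# Hypothetical event record: the only data persisted per detection.
# Note there is no field that could hold imagery or an identity.
@dataclass(frozen=True)
class ZoneEvent:
    timestamp: float      # when the zone breach was detected
    zone_id: str          # which configured zone was entered
    object_class: str     # e.g. "person", "forklift" -- never an individual

def process_frame(frame, detector, event_log):
    """Run detection on a frame held only in memory, append class-level
    events, and let the frame go out of scope. Nothing is archived."""
    for detection in detector(frame):
        if detection["in_zone"]:
            event_log.append(ZoneEvent(
                timestamp=time.time(),
                zone_id=detection["zone"],
                object_class=detection["class"],
            ))
    # No return of the frame and no write to disk: the raw imagery
    # is discarded as soon as this function exits.

# Minimal usage with a stub standing in for the vision model.
events: list[ZoneEvent] = []
stub = lambda frame: [{"in_zone": True, "zone": "dock-3", "class": "person"}]
process_frame(object(), stub, events)
print(asdict(events[0]))  # time, zone, class -- no pixels, no name
```

The design point is structural: because the persisted record type simply has no field for imagery or identity, retention of either cannot be switched on later by accident.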

Lawful basis in practice

For safety-critical monitoring, legitimate interests is usually the most appropriate lawful basis, supported by a documented legitimate interests assessment. That assessment needs to show three things:

  • A genuine purpose: protecting people from a real, identified risk on site.
  • Necessity: the system is a proportionate way to achieve that purpose.
  • Balance: the privacy impact has been considered and minimised where possible.

Data minimisation by design strengthens all three limbs of that assessment. So does being transparent with staff, through privacy notices and consultation, about what the system does and does not do.

What to tell your people

Trust on site depends on honesty. When we deploy, we encourage a clear staff-facing summary:

  • What the system is for, in plain language: preventing incidents in specific zones.
  • What it does not do: identify individuals, store continuous footage, or monitor productivity.
  • What happens when a detection occurs: what action the system takes, and what is logged.
  • How long the event records are kept, and who has access to them.
  • A named point of contact for questions, concerns, or subject access requests.

A workforce that understands the system is a workforce that trusts it. That is not a compliance nice-to-have; it is how the technology becomes part of how the site works.

Questions to ask a vendor

If you are specifying a vision-based safety system and privacy compliance matters to you, push on these:

  • Does the system retain continuous footage, and if so for how long and why?
  • Does the system perform any form of biometric matching or identification?
  • What data leaves site, and can the system operate with data staying on site?
  • What is the default retention for event records, and can it be configured down?
  • How is training data collected, and can that be disabled entirely?

A clear, direct answer to each of those is the baseline for a privacy-respecting system. Vague answers, or answers that require reading between the lines, are a signal to look harder.
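
Those questions map naturally onto a deployment configuration surface. As a hedged sketch, the settings below are ours and purely illustrative, not any vendor's actual options; the point is that each answer should correspond to something inspectable and tunable:

```python
from dataclasses import dataclass

# Hypothetical privacy settings for a vision safety deployment.
# Field names are illustrative, invented for this article.
@dataclass
class PrivacyConfig:
    retain_continuous_footage: bool = False   # no rolling archive
    biometric_matching: bool = False          # no identification, ever
    data_leaves_site: bool = False            # processing stays on-premises
    event_retention_days: int = 30            # short default, configurable down
    training_capture_enabled: bool = False    # explicit opt-in only

cfg = PrivacyConfig()
# Defaults should already be the privacy-respecting answer to each
# vendor question, with retention the only value a site tunes.
cfg.event_retention_days = 14
```

If a vendor cannot point to where these choices live in configuration, that is the "reading between the lines" signal mentioned above.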

Where we land on this

SAiFI is built as an active safety system, not a surveillance tool, and we design for that distinction at every layer. If you would like to see how it works in practice, or review the privacy positioning in detail with your team, we would be glad to arrange a demonstration and a conversation.

