With 1 in 5 retail crimes involving verbal or physical abuse, intimidation, threats, or the use of weapons, we are no longer talking about just a shoplifting problem. It's a violence and safety problem.
At Auror, we often hear from retail leaders that their top priority is making sure staff go home safe at the end of each shift. That’s why it is critical that retailers have the right tools to consistently record crime and understand the full scale and trends in offending, and then to take the next step of using recognition technology, such as facial recognition, to prevent crime from happening at the same rate.
It's not enough to just have the technology, though.
Responsible deployment, use only where necessary, human oversight, and transparency are non-negotiable guardrails. That's why we developed our facial recognition technology integration product, Auror Subject Recognition (ASR), to meet safety needs while enforcing strict guardrails that govern the responsible use of facial recognition technology and address privacy concerns. Those guardrails go hand in hand with clear definitions of who the technology focuses on and what role humans play at every step.
This is what responsible facial recognition looks like in practice and why it’s important for creating safer stores and supporting public safety.
Human review and decision making at every key step
The most common misconception about facial recognition is that the technology makes the decisions. That is far from the truth: human oversight is central to the use of facial recognition technology.
Auror VP of Trust and Safety Nick McDonnell made this clear on a recent Retail Risk Podcast episode when describing how ASR works.
“It is still a human-controlled system. You have human intervention at the start because you determine what to put into the system. You have human intervention where you determine who goes on your list. And facial recognition is a tool, it’s giving you a suggestion."
ASR allows retailers to identify individuals who have previously displayed violent, seriously harmful, or criminal behavior in their stores:
- When one of those individuals enters a store, staff receive an alert with full context about the previous offending within their store network.
- From there, a trained person reviews and verifies the alert in line with the retailer’s own policies.
- The frontline team can then decide how they would like to address the individual. This might involve a greeting at the door, a visible presence, or simply heightened awareness.
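The steps above can be sketched as a minimal human-in-the-loop flow. Everything here is a hypothetical illustration, not Auror's actual API: the `Alert` record, the `Response` options, and the incident-count threshold are all assumptions made for the sketch. The one property it is meant to demonstrate is that a machine suggestion alone never triggers an action; a trained reviewer must verify first.

```python
from dataclasses import dataclass, field
from enum import Enum

class Response(Enum):
    # Illustrative response options drawn from the list above.
    GREET_AT_DOOR = "greet_at_door"
    VISIBLE_PRESENCE = "visible_presence"
    HEIGHTENED_AWARENESS = "heightened_awareness"
    NO_ACTION = "no_action"

@dataclass
class Alert:
    poi_id: str                       # person-of-interest record created by a human
    prior_incidents: list = field(default_factory=list)  # context from the retailer's own reporting
    verified_by_human: bool = False   # false until a trained person reviews the match

def verify(alert: Alert, reviewer_confirms: bool) -> Alert:
    # A trained person confirms (or rejects) the suggested match,
    # in line with the retailer's own policies.
    alert.verified_by_human = reviewer_confirms
    return alert

def choose_response(alert: Alert) -> Response:
    # No action is ever taken on an unverified, machine-only suggestion.
    if not alert.verified_by_human:
        return Response.NO_ACTION
    # The frontline team decides; the incident-count threshold here is
    # purely illustrative, not a real policy.
    if len(alert.prior_incidents) >= 3:
        return Response.GREET_AT_DOOR
    return Response.HEIGHTENED_AWARENESS
```

The key design choice is that verification is a separate, explicit step: the matching technology only ever produces a suggestion, and the decision path is gated on a human reviewer's confirmation.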
Responsible use requires that artificial intelligence does not make autonomous life-altering decisions and that a trained person reviews all matches.
Meaningful human review and verification will remain central to the use of ASR, now and into the future. Increasingly, proposed regulations include requirements for facial recognition technology matches to be reviewed and validated by a human, rather than relying on artificial intelligence alone.
Trust through transparency
Transparency is an equally important part of accountability. Public concern about facial recognition technology is understandable, given historical applications that left murkiness around where biometric data is stored and how it's used, along with issues of discrimination.
These issues don't disappear when ignored. They are addressed by designing systems that make transparency a condition of deployment.
For ASR, that starts with the basics. Store customers should know facial recognition technology is in use, typically through clear in-store signage. This is a baseline requirement that supports both public safety and community trust and it’s what makes the technology sustainable in the long term.
Transparency alone is not enough, though. Trust also depends on confidence in the technology, its accuracy, and that it works in the same way for everyone, across all demographic groups.
The accuracy of facial recognition technology can also vary significantly based on the quality of images used. Poorly lit environments, off-angles, and low-resolution cameras can all affect performance. This is why image quality is a non-negotiable input requirement for responsible deployment.
ASR is designed to work with high-quality inputs and with human review at every important step. Because every match in ASR is reviewed by a trained person before any action is taken, the technology serves as a prompt for informed judgement by a real person, not a replacement for it.
Auror works exclusively with facial recognition technology providers whose systems are independently verified by organizations such as the National Institute of Standards and Technology (NIST). World-leading facial recognition technology now delivers true match rates of over 99.8% in challenging real-world environments. Reaching and maintaining that standard requires continuous testing and updating across demographic groups, not just a one-time benchmark.
Reduce human bias through responsible use
Traditional security relies on human instinct, which inherently comes with blind spots. One of the most common concerns about facial recognition technology is that it could entrench or amplify those biases, rather than reduce them. Earlier generations of the technology are known to have had higher error rates for individuals from certain demographic groups. This is a legitimate concern that has driven Auror's investment in ensuring ASR is more accurate and more responsible.
McDonnell explains that through intelligence about past offending, supported by technology and artificial intelligence, “your treatment in a store is going to be based on your previous behavior in that store, as opposed to how you look (or what you’re wearing).”
ASR uses artificial intelligence to match facial images against a list of known high-harm offenders whose previous behavior in a retailer’s store network has already been recorded by a person. The technology is not analyzing an individual’s identity based on demographic characteristics or demographic groups, nor is it making inferences from facial features beyond the match itself. The system has no interest in who someone is, only in whether they’ve caused serious harm before.
Auror’s platform is structured to enforce this from the ground up. Fields like race, ethnicity, and sexual orientation can’t be captured in the system because those attributes are irrelevant to behavior in the store. Biometric data is never entered or stored within the retailer’s information on Auror, regardless of a match. All processing of that data takes place within the integrated third-party software.
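One way to enforce that kind of restriction "from the ground up" is schema validation with an allow-list, so prohibited attributes cannot be recorded even by accident. The sketch below is a hypothetical illustration of the idea, not Auror's implementation: the field names and record shape are assumptions made for the example.

```python
# Hypothetical allow-list: only behavioral incident fields are accepted.
# Demographic attributes and biometric data simply have no place in the schema,
# so any record carrying them is rejected outright.
ALLOWED_FIELDS = {"incident_id", "store_id", "timestamp", "behavior", "outcome"}

def validate_record(record: dict) -> dict:
    """Reject any record containing fields outside the behavioral allow-list."""
    unknown = set(record) - ALLOWED_FIELDS
    if unknown:
        raise ValueError(f"Rejected fields: {sorted(unknown)}")
    return record
```

An allow-list is stronger than a block-list here: instead of enumerating every attribute that must be excluded, the schema only admits fields that are demonstrably relevant to behavior, so anything else is excluded by default.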
The result is a dataset and technology without bias baked in, which matters enormously when that dataset is the foundation for the facial recognition component.
Data governance as the foundation of facial recognition technology
For retailers considering the use of facial recognition, McDonnell points to data governance as a key to doing this in a compliant and ethical way:
“Not all systems are created equal. We’ve seen bad players in the past do bad things with this type of technology. A huge thing is around getting it right and ensuring that a retailer’s reputation is protected because people know they’re doing it in the right way.”
“Doing it in the right way, with all the right guardrails… I just consider hygiene factors for that type of tech. We’ve never seen it built that way anywhere else globally.”
A critical part of that governance is data retention, or rather, the absence of it. On top of biometric data never being entered or stored on Auror, the only data that enters the system is information that retailers have already captured about known people of interest (POI) through their own incident reporting.
This distinction matters because ASR is not a surveillance tool; it is a targeted, responsible retail crime prevention tool.
Early rollouts of ASR for some retailers show that more than 95% of verified alerts result in no further incident. This real-world performance shows that facial recognition technology gave frontline teams the right information at the right time to respond appropriately.
Ultimately, data governance isn’t just a compliance exercise. It’s what gives frontline retail teams the confidence in facial recognition alerts, ensuring teams have exactly the right information in the moment to make safe decisions and to go home safely.
Facial recognition technology and the road to safer stores
The safety of retail workers and the public is too important not to draw on facial recognition technology, as long as it’s done with transparent governance and clear guardrails.
Given where the retail safety landscape is heading, from the Bunnings ruling in Australia to facial recognition technology rollouts across the UK, New Zealand, and the US, a strong and safeguarded approach towards responsible facial recognition will set the standard.
Looking to the future, it is essential that ongoing ethical considerations and responsible deployment continue to guide the evolution of facial recognition technology. Built in this way, facial recognition technology is one of the most meaningful tools the industry has right now to ensure frontline workers can go home safe after their shift.
It should also demonstrate that public safety and privacy concerns can, and must, be addressed together.