I recently came across a press release from a company called ZeroEyes, which makes AI-powered gun detection technology. The company is touting its system's installation at a school. While technology can be a great way to augment situational awareness, we need to be careful not to get lulled into a false sense of security by such systems. Knowing of a threat and actually doing something about it are two very different things.
Let’s take a look at the technology first before we discuss its limitations.
ZeroEyes starts out with a pretty solid idea. Basically, the system is designed to spot weapons that would otherwise go unnoticed by human eyes (or by cameras nobody is actively watching). To train its neural networks to detect weapons in the feeds of existing school security cameras, the company has used hundreds of thousands of proprietary images and videos.
When a weapon is spotted, the footage and any information the neural net could determine is sent off for human verification.
Every potential detection is verified around the clock by former members of the U.S. military and law enforcement working in the in-house ZeroEyes Operations Center (ZOC). The objective is to provide accurate intelligence on gun-related incidents, including a physical description of the suspect, details such as their attire and the weapon used, and the real-time location of the incident.
If a non-lethal gun, such as an airsoft or BB gun, is detected, law enforcement is informed, which can help keep officers from overreacting. The goal is then to send this information and a photo to security and law enforcement officials within seconds, allowing a more rapid response than would otherwise be possible.
The system has already been deployed across a range of industries in over 30 states, including K-12 school districts, hospitals, military bases, commercial property groups, shopping malls, casinos, places of worship, manufacturing plants, and campuses of Fortune 500 companies.
What’s Great About This
Before I get to the important limitation of the system, I want to be fair and first discuss what I like about it.
First off, they’re not doing anything to target lawful concealed carry or harass people who are minding their own business. The system appears to be designed to quickly notify security and law enforcement of anyone doing something sketchy like openly carrying a rifle into a school. We’ve seen other companies that focus on disarming everyone who goes into a space to try to create “gun-free” zones. That’s not the goal of this technology (although it could be used to further that dumb idea).
Instead, they’re focusing on increasing the situational awareness of existing personnel. While every school could theoretically hire dozens of people to constantly watch doors and fence lines, that’s just not economically feasible, and sustained vigilance tasks of that sort are known to dull human attention.
Doing something to monitor the whole space helps increase the productivity of existing personnel, who ultimately are responsible for the response. That could include more than just security guards and police officers. Examples include giving teachers (armed or not) a chance to barricade doors, deploy security measures, or move kids to a bullet-resistant shelter.
The Easiest Mistake To Make When Relying On This Technology
There are tons of “awareness” campaigns out there (See something, say something!). While making sure people are aware of a problem can be helpful, one of the biggest problems with such campaigns is that they often don’t inspire any action. Everyone in the world can be aware of the potential for something bad happening, but knowing about it and doing something about it are two different things.
We only have to look at Uvalde to see how this kind of technology can fail. The police on the scene in Uvalde were very much aware of what was going on, but they stood around in hallways and looked at their cell phones while kids continued to get shot.
Now, more recently, they’re blubbering for the TV news cameras about how unsafe it was, trying to blame their response — or lack thereof — on the shooter’s AR-15 rifle, despite having multiple rifles of equivalent firepower on hand, not to mention body armor and ballistic shields.
The biggest mistake a school or other entity could make when installing and relying on AI to protect it is to assume that the technology alone will make a space safe. An alert about a guy with a gun is worthless if there isn’t a plan, and willing personnel, in place to act when it happens.
If the whole “plan” is to wait for the police, the technology could end up being worse than useless.