Can artificial intelligence help stop mass shootings?

February 2, 2023

(NEW YORK) — A string of six mass shootings in California over less than two weeks, which left 30 dead and 19 injured, has reignited calls that the U.S. address gun violence.

President Joe Biden earlier this month pushed for a nationwide ban on assault rifles, while Republicans, who oppose such a measure, have remained largely silent in the aftermath of the attacks. In response to other mass shootings, Republicans have called for improved mental health services.

The congressional gridlock and apparent ineffectiveness of California’s robust state gun laws have left people searching for alternatives. One relatively new potential solution, the use of artificial intelligence-enhanced security, has drawn interest for its promise of apprehending shooters before a shot is fired.

The AI security industry touts cameras that identify suspects loitering outside of a school with weapons, high-tech metal detectors that spot hidden guns, and predictive algorithms that analyze information to flag a potential mass shooter.

Company officials behind the development of the AI-boosted security cameras say the technology corrects for fallible security officers, who they said often struggle to monitor multiple video feeds and suss out emerging threats. Instead, company officials say, AI reliably identifies assailants as they ready for an attack, saving security officials precious minutes or seconds and possibly saving lives.

“This is the best-kept secret,” Sam Alaimo, co-founder of an AI security company called ZeroEyes, told ABC News. “If there’s an assault rifle outside of a school, people want to know more information. If one life is saved, that’s a victory.”

Critics, however, question the effectiveness of the products, saying companies have failed to provide independently verified data about accuracy. Even if AI works effectively, they add, the technology raises major concerns over privacy infringement and potential discrimination.

“If you’re going to trade your privacy and freedom for security, the first question you need to ask is: Are you getting a good deal?” Jay Stanley, senior policy analyst at the ACLU Speech, Privacy, and Technology Project, told ABC News.

The market for AI security

As schools, retailers and offices consider adopting AI security, the industry is poised for growth. The market for products that detect concealed weapons is expected to nearly double from $630 million in 2022 to $1.2 billion by 2031, according to research firm Future Market Insights.

The optimism owes in part to the increased prevalence of security cameras, allowing AI companies to sell software that enhances the systems already in use at many buildings.

As of the 2017-18 school year, 83% of public schools said they use security cameras, the National Center for Education Statistics found. The figure marked a significant uptick from the 1999-2000 school year, when just 19% of schools were equipped with security cameras, the organization’s survey said.

“We work with an existing surveillance system,” Kris Greiner, vice president of sales at AI security company Scylla, told ABC News. “We just give it a brain.”

Companies working on AI security to prevent shootings

Scylla, an Austin, Texas-based company launched in 2017, offers AI that aids security cameras in not only detecting concealed weapons but also suspicious activity, such as efforts to circumvent security or start a fight, Greiner said.

When the fully automated system identifies a weapon or suspicious actor, it notifies officials at a school or business, he said, noting that mass shooters often draw their gun prior to entering a facility. The system can also be set to immediately deny access and lock doors, he said.
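What Greiner describes amounts to a simple detect-and-respond pipeline: classify what the camera sees, alert staff on a confident weapon detection, and optionally trigger a lockdown. The sketch below is illustrative only, assuming a hypothetical detector and invented alerting hooks rather than Scylla's actual software or API:

```python
# Illustrative only: a hypothetical detect-and-respond loop, not
# Scylla's actual code or API. All names here are invented.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "gun" or a suspicious-activity class like "fight"
    confidence: float  # detector score between 0 and 1
    camera_id: str

ALERT_THRESHOLD = 0.9  # assumed cutoff; a real deployment would tune this per site

def notify_officials(det: Detection) -> None:
    print(f"ALERT: {det.label} on camera {det.camera_id} ({det.confidence:.0%})")

def deny_access(camera_id: str) -> None:
    print(f"LOCKDOWN: doors near camera {camera_id} locked")

def handle_detections(detections: list[Detection], lockdown_enabled: bool) -> None:
    """On a confident weapon detection, alert staff and optionally lock doors."""
    for det in detections:
        if det.label == "gun" and det.confidence >= ALERT_THRESHOLD:
            notify_officials(det)
            if lockdown_enabled:
                deny_access(det.camera_id)

# Demo: a weapon spotted at an entrance before the shooter enters the building.
handle_detections([Detection("gun", 0.97, "entrance-cam-1")], lockdown_enabled=True)
```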

“At a time when every second counts, it’s quite possible that it would have a heavy impact,” Greiner said.

The company, which has performed about 300 installations across 33 countries, helps client institutions overcome the common shortcomings of human security officers, he added.

“Imagine a human sitting in a command center watching a video wall, the human can only watch four to five cameras for four to five minutes before he starts missing things,” Greiner said. “There’s no limit to what an AI can watch.”

Another AI security company, ZeroEyes, offers similar AI-enhanced video monitoring but with a narrower purpose: gun detection.

The company, launched by former Navy SEALs in 2018, entered the business after one of its founders realized that security cameras provided evidence to convict mass shooters after the fact but did little to prevent violence in the first place, Alaimo said.

“In the majority of cases, the shooter has a gun exposed before squeezing the trigger,” Alaimo said. “We wanted to get an image of that gun and alert first responders with it.”

As with Scylla’s product, the ZeroEyes AI tracks live video feeds and sends an alert when it detects a gun. However, the alert at ZeroEyes goes to an internal control room, where company employees determine whether the situation poses a real threat.

“We have a human in the loop to make sure the client never gets a false positive,” Alaimo said, adding that the full process takes as little as three seconds from alert to verification to communication with a client.
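The workflow Alaimo describes is a classic human-in-the-loop design: the model proposes, a person decides. Here is a minimal sketch of that flow, with invented function names and a simulated reviewer standing in for ZeroEyes' control room:

```python
# Illustrative only: a human-in-the-loop alert flow in the spirit of
# what Alaimo describes. Function names and queue design are invented.

import queue
import time

pending_alerts: "queue.Queue[dict]" = queue.Queue()

def on_detection(camera_id: str, snapshot: str) -> None:
    """Model side: queue the detection for review instead of alerting the client."""
    pending_alerts.put({"camera": camera_id, "snapshot": snapshot, "time": time.time()})

def send_to_client(alert: dict) -> None:
    print(f"Verified gun alert from camera {alert['camera']}")

def review_queue(is_real_threat) -> None:
    """Control-room side: only human-confirmed detections reach the client."""
    while not pending_alerts.empty():
        alert = pending_alerts.get()
        if is_real_threat(alert):   # a person, not the model, makes this call
            send_to_client(alert)
        # dismissed detections go no further, so the client sees no false positives

# Demo: one detection, confirmed by a (simulated) reviewer.
on_detection("lobby-cam-2", "frame_001.jpg")
review_queue(lambda alert: True)
```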

Accuracy in AI security

AI-enhanced security sounds like a potentially life-saving breakthrough in theory, but the accuracy of the products remains uncertain, said Stanley, of the ACLU.

“If it isn’t effective, there’s no need to get into a conversation about privacy and security,” he said. “The conversation should be over.”

Greiner, of Scylla, said the company’s AI is 99.9% accurate — “with a whole lot of 9s” — in identifying weapons, such as guns. But he did not say how accurate the system is in identifying suspicious activity, and said the company has not undergone independent verification of the system’s accuracy.

“For us to search for a third party to do it — we haven’t done that yet,” Greiner said, adding that the company allows customers to test the product before purchasing it.

Alaimo, of ZeroEyes, said the company's employee verification step eliminates false positives before alerts reach clients. But he declined to say how often the AI system presents employees with false positives or whether employees make mistakes in assessing the alerts.
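Those undisclosed numbers matter because the volume of raw detections, not a headline accuracy figure, determines how hard the reviewers' job is. A back-of-the-envelope illustration with entirely invented numbers (neither company has published per-frame error rates or footage volumes):

```python
# Back-of-the-envelope arithmetic with entirely invented numbers; neither
# company has published per-frame error rates or footage volumes.

cameras = 100
analyzed_frames_per_second = 1    # assume one analyzed frame per camera per second
seconds_per_day = 24 * 60 * 60
per_frame_false_positive_rate = 0.001  # "99.9% accurate" read as a per-frame error rate

frames_per_day = cameras * analyzed_frames_per_second * seconds_per_day
false_alerts_per_day = frames_per_day * per_frame_false_positive_rate

print(f"{frames_per_day:,} frames/day -> {false_alerts_per_day:,.0f} flagged for review")
# Output: 8,640,000 frames/day -> 8,640 flagged for review
```

Under those assumed figures, reviewers would face thousands of candidate alerts a day; whether the real number is closer to ten or ten thousand is exactly the kind of data critics say should be public.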

“Transparency is key, because if communities are going to try to make hopefully democratic decisions about whether or not they want these technologies in these public places, they need to know whether it’s worth it,” said Stanley.

Other concerns about AI

Setting aside the efficacy of the systems, critics have raised concerns over privacy infringement as well as potential discrimination enabled by AI.

To start, more than 30 states allow people to openly carry handguns, meaning lawful gun carriers could themselves become targets of AI-enhanced security.

“Carrying a gun is lawful in most places in this country now,” Barry Friedman, a law professor at New York University who studies AI ethics, told ABC News. “It’s very hard to know what you’re going to search for in a way that doesn’t impinge on people’s rights.”

At ZeroEyes, the AI sends out a “non-lethal alert” to clients in cases where an individual is lawfully holding a gun, making the client aware of the weapon but stopping short of an emergency response, Alaimo said.
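That two-tier response can be thought of as simple severity branching. A hypothetical sketch follows, in which the lawful-carry judgment is assumed to come from the human reviewers rather than the AI itself; aside from the term "non-lethal alert," none of the names below are ZeroEyes':

```python
# Hypothetical severity branching; "non-lethal alert" is ZeroEyes' term,
# but everything else here is invented. The lawful-carry judgment is
# assumed to come from the human reviewers, not the AI.

from enum import Enum
from typing import Optional

class AlertLevel(Enum):
    NON_LETHAL = "advisory"   # lawful carry: inform the client, no emergency response
    EMERGENCY = "dispatch"    # apparent threat: escalate to first responders

def classify_alert(gun_visible: bool, lawful_carry: bool) -> Optional[AlertLevel]:
    if not gun_visible:
        return None
    return AlertLevel.NON_LETHAL if lawful_carry else AlertLevel.EMERGENCY

print(classify_alert(gun_visible=True, lawful_carry=True))   # AlertLevel.NON_LETHAL
print(classify_alert(gun_visible=True, lawful_carry=False))  # AlertLevel.EMERGENCY
```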

Raising a further privacy concern, Stanley noted that the vast majority of surveillance footage recorded today is never watched by security officials unless a possible crime has taken place. With AI, however, an algorithm scans every minute of available footage and, in some cases, watches for activity deemed suspicious or unusual.

“That’s pretty spooky,” Stanley said.

In light of racial discrimination found in assessments made by facial recognition systems, Stanley cautioned that AI could suffer from the same issue. The problem risks replicating racial inequity in the wider criminal justice system, Friedman added.

“The cost of using these tools when we’re not ready to use them is that people’s lives will be shredded,” Friedman said. “People will be identified as targets for law enforcement when they should not be.”

For their part, Greiner and Alaimo said their AI systems do not assess the race of individuals displayed in security feeds.

“We don’t identify individuals by race, gender, ethnicity,” Greiner said. “We literally identify people as a human who is holding a gun.”

Alaimo said the U.S. could face unnecessary tragedy if it forgoes AI solutions, especially since other fixes operate on a longer time horizon.

“We can and should keep talking about mental health. We can and should keep arguing gun laws,” Alaimo said. “My concern is today — not a year from now, not 10 years from now, when we might have answers to those more difficult questions.”

“What we’re doing is a solution right now,” he said.

Copyright © 2023, ABC Audio. All rights reserved.

