An autonomous police robot patrolling the streets of Huntington Park, California, is scanning license plates, logging IP addresses, and using facial recognition technology that alerts cops to the presence of “blacklisted” passersby. Details of the contraption’s capabilities were confirmed through a Freedom of Information Act request by MuckRock, a nonprofit FOIA clearinghouse in Boston.

The Knightscope K5, which weighs 400 pounds and stands 5 feet 2 inches tall (181 kg, 157 cm), is called an “autonomous data machine” by its Silicon Valley-based manufacturer. It’s “constantly on the prowl for suspicious activity and is a visual deterrent to would-be troublemakers and criminals,” according to marketing materials the company provided to the local police department. The robot, which the Knightscope brochure says “is never late for work and takes no vacations,” uses onboard sensors and cameras to capture photos of pedestrians’ faces, which, along with the unique identifiers of individual smartphones and license plates, can be compared against what Knightscope dubs “a ‘black’ or ‘unwanted’ list.”

“Once on the list, if the K5 detects an unwanted violator the company sends an alert signal,” Knightscope tells prospective buyers.

However, facial recognition technology is far from perfect and comes with unintended racial and gender biases baked directly into the computer code. The local officials tasked with putting technologies like facial recognition to use have in the past vastly overestimated their effectiveness. The FBI’s facial recognition systems are reportedly inaccurate in roughly 15% of cases and misidentify black people more often than white people. And Amazon’s facial recognition software misidentified 28 members of Congress as criminals.

This sort of technology is just the beginning, a prospect that alarms privacy advocates. While Huntington Park’s disclosures make no mention of it, some cities deploy systems that augment facial recognition and license plate reader data with “predictive policing” software, something the ACLU finds deeply troubling from a civil liberties perspective.

Manufacturers claim AI-powered predictive policing systems can forecast potential trouble spots, allowing police departments to allocate resources more efficiently. This, of course, doesn’t mean the technology can pinpoint crimes that will happen at specific times and in specific places. Yet, as Quartz recently reported, the city of Lancaster, California, is currently working with IBM on a predictive policing initiative that officials believe can do something it definitely cannot.

“With machine learning, with automation, there’s a 99% success, so that robot is—will be—99% accurate in telling us what is going to happen next, which is really interesting,” a Lancaster official told Mayor R. Rex Parris during a March 26 city council meeting.

“Well, why aren’t we mounting .50-calibers [out there]?” Parris replied, referring to a powerful rifle used by military snipers, before explaining that he was just “being facetious.”

Huntington Park’s K5 has been in use since June, and its three-year contract is up for review after 12 months. Costs were projected at $6,000 per month, but the city’s police department is currently paying $8,000 a month. Invoices reviewed by MuckRock include a $1,066 reimbursement issued to Huntington Park by Knightscope for four days of downtime in June.

Source: qz.com
