On Monday, we discussed the controversial decision by the San Francisco Board of Supervisors to allow the SFPD to use robots capable of employing deadly force. Such usage would only be allowed in extreme circumstances where an active threat needed to be neutralized and human officers could not do so safely. That seemed to make sense, and police departments in many parts of the country have already employed this type of technology. But after a couple of bad headlines popped up about the policy, the board almost immediately reversed course last night and put the proposal on hold. So if you live in San Francisco and find yourself in the vicinity of an active shooter situation, you won’t be seeing any bomb-toting robots rolling to the rescue any time soon. (Associated Press)
San Francisco supervisors voted Tuesday to put the brakes on a controversial policy that would have let police use robots for deadly force, reversing course just days after their approval of the plan generated fierce pushback and warnings about the militarization and automation of policing.
The Board of Supervisors voted unanimously to explicitly ban the use of robots in such a fashion for now. But they sent the issue back to a committee for further discussion and could vote in the future to let police use robots in a lethal manner in limited cases.
Is that really all it takes for the San Fran supervisors to completely back down? The original vote wasn’t unanimous, but the move to allow the robots did pass by a significant 8-3 margin. Last night’s vote, however, was unanimous in the other direction. Apparently, all of the research and deliberations that led them to their original conclusion went out the window when a couple of loud progressive voices ran to the media complaining about the decision.
Of course, that’s probably just a typical mindset among San Francisco’s liberal elites. I will point you once again to the 2016 Texas incident I referenced on Monday. The Dallas PD was trying to neutralize a sniper who had already killed five police officers and could potentially have begun taking out civilians. He was holed up in a parking garage with no avenue to safely approach him and he had refused to speak with officers seeking to negotiate his surrender. So they sent in a robot with an explosive device and blew him up.
Has the Board considered what would happen if the San Francisco Police were to find themselves in the same situation? With the robot off the table, they would need to rush the suspect in person. Someone with a high-powered rifle can do a lot of damage to the police even if they’re wearing body armor. Does that really sound like a better solution?
It’s also worth keeping in mind that none of the robots currently in use by police departments have any sort of artificial intelligence installed. They aren’t making any decisions on their own, so they won’t suddenly start blowing up tourists. Each one is controlled remotely by an officer who monitors the video and audio the robot records, and any weapons or explosives are likewise triggered by that operator. If someone is inappropriately killed or wounded, it will be the fault of the operator, not the robot.
While I remain quite concerned over possible problems arising from artificial intelligence, I’m all in favor of the types of robots being discussed here. They are simply tools (albeit lethal ones) that keep police officers safe, or at least safer, when dealing with the most dangerous and deadly sorts of criminals.