
Discuss: AI-Powered Robot Police

A thread covering the latest news on trends, groundbreaking technologies, and digital innovations reshaping the tech landscape.
I work with law enforcement. Therefore, I keep up with emerging topics in law enforcement and investigations.

Artificial intelligence-powered robotic police technology is being developed to make law enforcement safer for the human side of police work.

Imagine a robot cop using machine learning to decide whether you're enough of a threat to the public to justify deadly force against you. Imagine it being wrong, because AI isn't always right.

Any thoughts about this?

At some point, when is it time to say NO to using AI? When is it time to let humans keep doing human jobs that require human decision-making?
 
The greatest fear with AI-driven robot police is the lack of accountability and the misplaced trust in technological precision. No matter the gains in safety, we cannot allow AI to make life-or-death decisions given its inaccuracies.

Here's a scary scenario:

Right now, when human law enforcement officers are forced to use deadly force, they may or may not be found justified. They may be branded a killer and jailed, or have their life ruined.

If a robot kills, no human has to take the blame. We'll just decommission that robot and improve the next ones.

Scary thought but could it become a reality?
 
The potential for mistakes, bias, and lack of transparency is alarming, and don't get me started on the importance of considering the emotions and psyche of anyone required to interact with a robot cop. Human emotion and judgment are crucial when it comes to policing.
For sure.

I think we will see this experiment in the war theater first. After that, I think it will become popular to move it into emergency services, even if at first the AI is not allowed to use force.

But I do look at it this way.

It would be seen as safer to send in a robot to neutralize a threat than SWAT operators who might get hurt in the process.

AI will start to kill our jobs for sure; hopefully it won't start killing us as well.
 
Imagine a robot cop using machine learning to decide whether you're enough of a threat to the public to justify deadly force against you. Imagine it being wrong, because AI isn't always right.

Any thoughts about this?
I've seen Terminator. No, just, no.

A robot's actions cannot be held against it, and the LLM used to assess the threat would just be wiped, with nobody held accountable for the poor decision it made.

For the use of deadly force or apprehension, leave it to the people.

It would be seen as safer to send in a robot to neutralize a threat than SWAT operators who might get hurt in the process.
Who determines the threat, though? If it's a person who can be held accountable for sending in an AI robot, sure. Then we can put that supervisor on blast if the AI made the wrong call, or even if the supervisor made the wrong call by assuming someone was a threat when they weren't.

SWAT operators who might get hurt in the process.
SWAT operators are very good at what they do. Remember, they're not always there to eliminate a threat.

There are domestic violence (DV) calls and hostage situations as well. You would then need to send in "friendly" bots, which people might not trust, to help them evacuate the area before the terminator bot comes in. And sometimes that can't be done.

Human judgment and teamwork to accomplish the mission would be best, in my opinion. They signed up for SWAT knowing the outcomes they might face, but they go in and do their job anyway, not just because it's a job, but because they like serving their communities.

AI could surely help 9-1-1 operators, though, at least in the first stage: collecting the necessary information that can slow down a response. Callers could then talk to a live person and give updates as needed for the operator to pass along to the responders, be it EMS or the police.
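The two-stage flow described above can be sketched out in code. This is purely a hypothetical illustration, not any real dispatch system: the field names, the routing rule, and the handoff packet are all invented for the example.

```python
# Hypothetical sketch of an AI intake layer for 9-1-1 calls: collect the
# basics, flag anything still missing, then hand off to a live operator.
# All field names and the routing rule below are illustrative assumptions.

REQUIRED_FIELDS = ("location", "emergency_type", "people_involved")

def intake(answers: dict) -> dict:
    """Build a handoff packet for the human operator.

    `answers` maps field names to what the caller has said so far; any
    missing field is flagged so the operator knows what to ask next.
    """
    missing = [f for f in REQUIRED_FIELDS if not answers.get(f)]
    # Route medical and fire calls toward EMS/fire dispatch, the rest to police.
    service = "EMS/fire" if answers.get("emergency_type") in ("medical", "fire") else "police"
    return {
        "summary": {f: answers.get(f, "UNKNOWN") for f in REQUIRED_FIELDS},
        "missing": missing,
        "route_to": service,
        "needs_human": True,  # a live operator always takes over from here
    }

packet = intake({"location": "5th & Main", "emergency_type": "fire"})
print(packet["route_to"], packet["missing"])
```

The point of the sketch is that the AI never closes the loop on its own: every packet is marked `needs_human`, matching the idea that the machine only gathers and forwards information.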
 
Who determines the threat, though? If it's a person who can be held accountable for sending in an AI robot, sure. Then we can put that supervisor on blast if the AI made the wrong call, or even if the supervisor made the wrong call by assuming someone was a threat when they weren't.
Remember Christopher Dorner, the ex-LAPD cop whose rampage sparked a massive manhunt? Or the 2016 Dallas sniper, whom police killed with a bomb-disposal robot? Not AI-powered, but imagine what they could do now.

Probably use drone swarms.

 
Recently, an angry mob attacked police and private property in my city during a violent protest. One individual died when the mob burned down a building. Police couldn't go inside to save the person; he was dead before the firefighters arrived. In a situation like this, robot police could be of great use.
 
AI-powered robotic police may have some limited role in national security. Where the orders are to engage a target, for instance, it is safer to send in a robot than a human, for the goal is clear. However, where a nuanced decision needs to be made by weighing various pieces of evidence and arguments, it's much wiser to employ human police.
 
AI can lie. That's a big problem.
This was my first thought. We all know AI has its flaws, and some of them can be pretty serious, but when it comes to policing, lying could be a huge issue. Depending on how the robot is programmed, it could lead to all kinds of problems unless done properly.

I do hope that robots don't end up policing; I would hate to see how things would end.
 
This was my first thought. We all know AI has its flaws, and some of them can be pretty serious, but when it comes to policing, lying could be a huge issue. Depending on how the robot is programmed, it could lead to all kinds of problems unless done properly.

I do hope that robots don't end up policing; I would hate to see how things would end.
Ahh, come on. Real police lie anyway. :D
 
Ahh, come on. Real police lie anyway. :D
You're not wrong there. Some of them are genuine, but you do get those who lie either to save themselves or to get someone into trouble who otherwise wouldn't have been in any, had they not lied.

I feel that with robot police, though, lying could be a lot worse, especially since they are programmed by humans.
 
One of the best 'AI' tools (though probably mostly human-operated) I've seen recently was a robot on an arm attached to a police cruiser. It extends up to the pulled-over car's window and issues the warning. I'm sure the interaction with the robot can help the actual officer if human intervention becomes necessary. But if it's just a speeding infraction or a warning, it could surely speed the process up: it can issue a ticket on the spot or let the driver off with a warning, then retract to the cruiser when the stop is over.

It could integrate some AI to sense the actions and behavior of the person in the car and prompt human intervention, for example by showing on the officer's display that the driver "appears intoxicated" based on their movements, or is being "aggressive", either of which would require the robot to retract and the officer to step in directly.

It was a pretty cool demonstration of something that could save time and lives, since officers standing in that position are sometimes struck by incoming traffic.
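The escalation rule the posts above describe, where behavior flags pull the robot back and hand the stop to the officer, can be sketched like this. The flag names and the rule itself are assumptions for illustration, not how any real traffic-stop robot works.

```python
# Hypothetical sketch: the robot handles routine stops on its own, but any
# escalation flag ("appears intoxicated", "aggressive") makes it retract
# and alert the officer. Flag names here are invented for the example.

ESCALATION_FLAGS = {"appears_intoxicated", "aggressive"}

def next_action(flags: set) -> str:
    """Decide the robot's next move from the behavior flags it detected."""
    if flags & ESCALATION_FLAGS:
        # Any escalation flag means the human officer takes over the stop.
        return "retract_and_alert_officer"
    # No flags: the robot finishes the routine stop itself.
    return "issue_ticket_or_warning"

print(next_action({"aggressive"}))
print(next_action(set()))
```

The design choice worth noting is that the AI only ever decides between "handle the paperwork" and "call a human"; it never escalates force on its own, which is exactly the boundary the thread keeps coming back to.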
 