
Decision-making: Can we trust AI?

A thread covering the latest news on trends, groundbreaking technologies, and digital innovations reshaping the tech landscape.
Many businesses and individuals are increasingly relying on Artificial Intelligence (AI) and Machine Learning (ML) in decision making. However, there are concerns about bias and fairness. Can we truly trust AI systems to make objective decisions? AI systems are only as good as the data they are trained on, and if that data is biased, the results will be too. Let us discuss the implications of AI-driven decision-making and explore potential solutions to mitigate the associated bias.
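To make the "biased data in, biased decisions out" point concrete, here is a minimal toy sketch. All of the data and names (the loan-approval scenario, the groups, the income bands) are made up for illustration; the "model" is a deliberately naive majority-vote lookup, not any real ML system.

```python
from collections import defaultdict

# Hypothetical toy training data: (group, income_band, approved).
# The historical labels are skewed: group "B" applicants were approved
# far less often than group "A" applicants at the same income band.
train = [
    ("A", "high", 1), ("A", "high", 1), ("A", "low", 1), ("A", "low", 0),
    ("B", "high", 0), ("B", "high", 1), ("B", "low", 0), ("B", "low", 0),
]

# A deliberately naive "model": predict the majority label seen for
# each (group, income_band) pair. It faithfully reproduces the bias
# in the data, because the data is all it has to learn from.
counts = defaultdict(lambda: [0, 0])
for group, income, label in train:
    counts[(group, income)][label] += 1

def predict(group, income):
    deny, approve = counts[(group, income)]
    return 1 if approve > deny else 0

# Same income band, different group -> different decision.
print(predict("A", "high"))  # 1 (approved)
print(predict("B", "high"))  # 0 (denied)
```

Nothing in the model is "malicious"; it simply learned the pattern in the historical labels, which is exactly the concern with real systems trained on real historical data.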
 
When we make decisions, we use emotions. Decisions should come from analytical thinking, but sadly we are guided by our emotions, which is why many of our decisions go wrong. AI, on the other hand, works purely from facts, so I believe AI will come up with better decisions than we humans do.
 
Have you heard about AI self-driving cars? They're getting so much attention, but being the kind of person I am, I would never trust an AI to drive me and not kill me. When it comes to decision-making, AI is the last thing I would allow to do it for me.
That's a valid concern. Self-driving cars can be dangerous, and it's natural for people to doubt whether AI can make critical choices. As it stands, AI has made a lot of progress, but it is not perfect and can make mistakes. That is why human supervision is still required at critical moments.
 
I would never trust an AI to drive me and not kill me
That poses the question too: What if it could prevent you from being killed by making a split-second decision to avoid a collision you couldn't even see coming because it was in your blind spot? By calculating your speed and direction, along with the speed and direction of the other vehicle, it could veer slightly so your car wasn't touched at all.

On the other hand, I often worry about the decision-making of that, too.

What if it could prevent you from being killed by something coming at you in a blind spot, but to do so, it had to steer into a baby carriage? Would the car's AI want to protect you over a mother and child?
 
That poses the question too: What if it could prevent you from being killed by making a split-second decision to avoid a collision you couldn't even see coming because it was in your blind spot? By calculating your speed and direction, along with the speed and direction of the other vehicle, it could veer slightly so your car wasn't touched at all.

On the other hand, I often worry about the decision-making of that, too.

What if it could prevent you from being killed by something coming at you in a blind spot, but to do so, it had to steer into a baby carriage? Would the car's AI want to protect you over a mother and child?
A self-driving car has to decide which way to go and live with whatever outcome that produces. This demonstrates the problem of trying to teach an AI system to make moral decisions, especially in extreme conditions. Any mistake can cost a human life. I can't see myself in a self-driving car.
 
A self-driving car has to decide which way to go and live with whatever outcome that produces. This demonstrates the problem of trying to teach an AI system to make moral decisions, especially in extreme conditions. Any mistake can cost a human life. I can't see myself in a self-driving car.
Self-driving cars can be dangerous. It's like allowing the AI to make decisions in matters of life and death. That is a threat to human beings, and that is why anyone would be concerned about relying on artificial intelligence in such circumstances.
 
That poses the question too: What if it could prevent you from being killed by making a split-second decision to avoid a collision you couldn't even see coming because it was in your blind spot? By calculating your speed and direction, along with the speed and direction of the other vehicle, it could veer slightly so your car wasn't touched at all.

On the other hand, I often worry about the decision-making of that, too.

What if it could prevent you from being killed by something coming at you in a blind spot, but to do so, it had to steer into a baby carriage? Would the car's AI want to protect you over a mother and child?

There are so many potential probabilities and variables that might be involved, which is why I can't leave AI to make that decision for me. Can AI be held accountable for a decision that didn't go well? I don't think so. I'd rather decide for myself until further notice.
 
Have you heard about AI self-driving cars? They're getting so much attention, but being the kind of person I am, I would never trust an AI to drive me and not kill me. When it comes to decision-making, AI is the last thing I would allow to do it for me.
I get your point. Should something go wrong with the program during self-driving, you will have nobody to blame but yourself for entrusting your life to a machine.
 
When we make decisions, we use emotions. Decisions should come from analytical thinking, but sadly we are guided by our emotions, which is why many of our decisions go wrong. AI, on the other hand, works purely from facts, so I believe AI will come up with better decisions than we humans do.
There are certain decisions that have to be made with empathy, and that's the reason we can't hand decision-making over to AI. Even when the facts are glaring, emotions matter in decisions. If you use AI to make a decision, it might tell you to dump your parents because the facts support it. But you have to consider it from the emotional side too.
 
There are certain decisions that have to be made with empathy, and that's the reason we can't hand decision-making over to AI. Even when the facts are glaring, emotions matter in decisions. If you use AI to make a decision, it might tell you to dump your parents because the facts support it. But you have to consider it from the emotional side too.
Now tell me, who would be foolish enough to ask ChatGPT whether they should dump their parents or adopt them? When I said I'd trust AI for decisions, I was only talking about things that aren't personal, unlike "should I marry that person or not." If you have a company's financial records, their sales, and their product lines, and you feed that data to ChatGPT and ask whether you should invest, I believe you'd get a better decision from AI than from yourself.
 