Man vs Computer
Self-driving cars will be programmed with algorithms that help the car drive safely and stay on the road, but when buying a self-driving car, should you be given the choice to decide what decisions your car makes? It depends on what kind of decisions we're talking about. Decisions like where the car will take you and which roads it should use belong to the driver. Ethical choices, however, are much tougher and should not be made by the driver. The car is built to keep itself and its occupants safe. You should not be able to choose whether to hit a dog or swerve off and hit a pole; those choices should be made by the algorithms, and if you do not believe that is fair, then you should not buy a self-driving car.
Everyone drives differently; no two drivers are the same, and it would be very difficult for manufacturers to come up with enough different algorithms to satisfy each person's moral choices. Iyad Rahwan, a computer scientist at the Massachusetts Institute of Technology, says, "People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots, and what we show here with data is that there are no universal rules" (Maxmen). It is not possible for companies to build algorithms that match everyone's moral choices. There are so many different choices the car would have to make: will it limit damage to the car, will it limit damage to the occupants, or will it limit damage to other things around the car, like animals and pedestrians, even at the cost of harming the occupants and the car? These choices are just too complex, and the buyer of the car should not be given the option to choose what the car does in these ethical situations.
With the building and manufacturing of self-driving cars, the decisions should be made by the manufacturers and the specialized teams they have in place to make the car as safe as possible. These teams include some of the smartest people in the world, and they know what they are talking about. In an article posted on Towards Data Science, Andy Lau states, "The intent of the inventors is to create a better society for drivers and the planet. In addition, self-driving cars have proven to be significantly safer than having an actual driver; this has been shown by numerous studies and data collected from them. In the long run, autonomous cars will increase efficiency and productivity for people around the world. For more people to feel at ease with self-driving cars, companies, and self-driving car owners should understand they are responsible for the safety of all stakeholders. Risk management techniques can be used to quantify probabilistic risk in a way that is transparent and flexible. To create ethical vehicles, developers should continue to learn from past experiences in risk management and morally challenging situations." Why should we let the buyer of a self-driving car decide what the car should do in different ethical situations, when scientists and many years of research go into building these cars to make the roads safe and prevent crashes? We should trust the manufacturers' choices: they would not make a car that does not value people, and they would not make a car that is unsafe and won't protect the consumer. They will put the right algorithms together to allow for safer roads and a more efficient world. Owners should leave the decision-making about what the car does to the people who make the cars, and if they do not like that arrangement, they can drive the car themselves and make the choices on their own.
Waymo is another big competitor in the self-driving car world, and its team has put the first self-driving car on the road. The team at Waymo has designed the car to be fully autonomous and is training it to drive like a human; they are not giving the buyer the choice of what the car should do. Waymo is working every day to make the car able to share the road with human drivers, fixing small things that will allow it to drive smoothly and freely. Waymo is the leading manufacturer of self-driving cars; an article from The Verge said, "Waymo already has a huge lead over its competitors in the field of autonomous driving. It has driven the most miles — 6 million on public roads, and 5 billion in simulation — and has collected vast stores of valuable data in the process" (Hawkins).
Another major issue with letting the owner choose what the car does is legal liability. If the owner tells the car what to do in a situation, does that make the owner responsible rather than the car, since the car is doing what the human said? If the owner does not tell the car what to do, then only the manufacturers could be at fault for legal issues that arise with the car.
While you would love to know what your car will do in any situation, and you wish you could have a say in what it does, that option right now just doesn't seem to be on the table. It is much safer for the people who have studied most of their lives and put hours of work into these algorithms to be the ones who decide what the car should do. They know what's best for their cars and what's best for their consumers.
Hawkins, Andrew J. “Inside Waymo’s Strategy to Grow the Best Brains for Self-Driving Cars.” The Verge, The Verge, 9 May 2018, http://www.theverge.com/2018/5/9/17307156/google-waymo-driverless-cars-deep-learning-neural-net-interview.
Maxmen, Amy. “Self-Driving Car Dilemmas Reveal That Moral Choices Are Not Universal.” Nature News, Nature Publishing Group, 24 Oct. 2018, http://www.nature.com/articles/d41586-018-07135-0.
Lau, Andy. "The Ethics of Self-Driving Cars." Medium, Towards Data Science, 13 Aug. 2020, towardsdatascience.com/the-ethics-of-self-driving-cars-efaaaaf9e320.
Paragraph 1. I admire how categorical your thesis is. You couldn't be more blunt than to declare that "if you do not believe that is fair then you should not buy a self-driving car." I'm not sure you'll be able to sustain that point of view in the real world, but I like the clarity of your position.
Paragraph 2. Which version of algorithms do you believe is impossible (or commercially non-viable)? You claim it would not be possible to "come up with so many DIFFERENT algorithms to satisfy" personal moral choices. But your expert claims "there are no universal rules." Does that not mean that your expert says there will be different algorithms for different situations, different cars maybe, possibly different drivers? What else could "no universal rules" mean? You're very right that there will be thousands of possible complex calculations to make. Are you suggesting that every carmaker will program their cars to follow the exact same set of responses to those myriad choices? And if they did, wouldn't that be a set of universal rules? I think most readers will be as confused as I am.
Paragraph 3. What does “make the car as safe as possible” mean, Sonny? Do they protect the driver, the car, the “other” car, the “other” driver, the pedestrian, personal and public property? If they can’t protect everything, what are the parameters for “choosing”? Andy Lau might mean that self-driving cars are better about anticipating and therefore AVOIDING collisions that human drivers would not avoid, but he doesn’t say so. YOU SHOULD if that’s what you mean. And you should back it up. Otherwise, we’re all just guessing, and we won’t guess what you want us to.
As for the rest of the paragraph, I’m going to suggest again that not every manufacturer will develop the same algorithms. They may not intend at first to create COMPETITIVE algorithms, but as evidence mounts from the inevitable accidents they fail to avoid, consumers will start to choose the style they prefer. Toyota swerves to avoid the dog, sacrificing the passenger’s side, but Mercedes clips the dog in order to spare the vehicle. Choices will be made on the basis of whatever the consumer thinks is more important.
To my mind, that’s the rebuttal argument you should be refuting. You are not obligated to agree with me, but you should consider the option.
Paragraph 4. I’m confused why “The team at Waymo has designed the car to be fully autonomous and are training the car to drive like a human.” Humans get into accidents. The cars are supposed to avoid them.
Paragraph 5. You’re certainly right to raise this issue (in fact, you could easily write 10,000 words on this aspect of the problem alone), but raising it doesn’t resolve it. You haven’t actually claimed that manufacturers WILL be held liable for accidents in which their cars are involved, but the issue will certainly be a muddy one. By the way, can drivers override the car’s program to take command of the wheel, brakes, accelerator if they feel they’re being placed in danger? That will confuse things even further.
Paragraph 6. I would certainly decline to buy such a car considering the way you describe it.
I hope you’re comfortable with this sort of feedback, Sonny. You didn’t specify what you wanted to hear (always dangerous!), so I offered what I think you need most.
Thank you for the feedback. I am taking all of this information to see how I can improve my work. Thank you.
I always appreciate a response to feedback, Sonny. Thank you for that. (It’s how you rise to the top of my priority list.)