As technology continues to boom, software engineers face increasingly pressing ethical dilemmas. Ideas that were once science fiction are quickly becoming science fact, presenting ever more challenging ethical problems to solve. Autonomous cars may soon become a reality, and so too will the ethical conundrums they bring with them.
This particular problem is especially interesting to me, in contrast with the other readings, because it's an ethical question that has yet to be fully answered. How should an autonomous vehicle respond in the event of an impending accident? Should it attempt to protect the occupants of the vehicle? Should it attempt to save as many people as possible, even if that doesn't include those inside?
Most agree that the artificial intelligence should strive to preserve as many lives as possible, even at the expense of the occupants' lives, but only when they themselves aren't the ones inside. We all like the idea of the AI in these cars protecting pedestrians, but do we still like it when that protection may come at the cost of our own lives? Would we even purchase a car that might make a deliberate decision to kill us?
When autonomous cars inevitably hit the market and take over our streets, these questions will no doubt arise again, and we will once again ponder whether the decisions these cars make are the right ones. The best course of action for now is to keep asking these questions in the hope of one day reaching a conclusion that consumers and the public can accept.