Ethical AI Greed for Consumers
A while back, the ethics of autonomous vehicles were debated through thought experiments like the Trolley Problem. Of course, in 2014, self-driving cars were not as commonplace as they are now - so common, in fact, that Tesla drivers have been caught sleeping behind the wheel. Situations like this are forcing what were once philosophy exercises to the forefront of discussion. At the moment, one big unanswered question is who should provide the insurance for an AI-controlled vehicle (pdf) - the uninvolved owner, or the manufacturer that "trained" it? Furthermore, the aforementioned Trolley Problem is also relevant to what decision the AI should make - should it kill the owner, or should it kill somebody else? Interestingly, though perhaps obviously, people believe the machine should choose to kill the fewest people ... unless that means themselves, in which case consumers don't want to buy a product that will judge their lives as less valuable.