When talking about the ethics of automation, I always think of the trolley problem: should one person be sacrificed to save a larger number, or not? Designers and programmers of autonomous driving systems face the same question: if a collision is unavoidable, there must be a mechanism that tells the car what to do. But there is no objectively right or wrong answer here, and different people would choose differently. So whose ethics should be applied?
MIT built an interesting online experiment platform called the "Moral Machine." On this website, users are shown 13 unavoidable accident scenarios and must decide whether the autonomous car should swerve or stay on course. The results, published in 2018, revealed three strong preferences: sparing humans over animals, sparing more lives over fewer, and sparing young lives over old.
However, the only official ethics guidelines for autonomous vehicles, put forward by the German Ethics Commission on Automated and Connected Driving, propose that any discriminatory decision based on personal characteristics (such as age) should be prohibited. Does that sound less cruel?
Flowers Drawn by AI
I'd also like to touch on the field of AI art. Through machine learning, AI can now create artworks in various styles, much as humans do. Some people argue that AI paintings are meaningless and should not count as art. But is it possible that we simply cannot understand them? The question then becomes, once again, how to define the boundary between humans and machines.