Moral Machine gauges public perspective on autonomous vehicles
A new study from a US university asks the public to judge the decision-making programmed into artificial intelligence.
The Massachusetts Institute of Technology (MIT) is surveying the public on which decisions autonomous cars should make in unavoidably fatal situations.
MIT’s ‘Moral Machine’ presents the public with numerous scenarios in which an autonomous vehicle must decide whom to kill. Respondents are given two choices, and in each, lives must be lost – there is no non-fatal option. To make each scenario and its victims clear, a written explanation accompanies the graphic demonstration.