
Exploratory Research On People's Expectations For Moral Decision-making Of Autonomous Machines

Posted on: 2020-04-20
Degree: Master
Type: Thesis
Country: China
Candidate: Z N Yuan
GTID: 2405330572486888
Subject: Applied Psychology
Abstract/Summary:
In recent years, with the development of artificial intelligence technology, the moral decision-making of autonomous machines has become an important issue and attracted increasing attention from researchers. However, discussion of this issue has remained largely philosophical, ignoring people's actual expectations of autonomous machines. Drawing on Moral Foundations Theory (MFT) and the Model of Moral Motives (MMM), this research explores people's expectations of autonomous machines' moral decision-making, and the psychological mechanisms behind them, through five studies.

The pilot study investigates people's attitudes toward whether autonomous machines should obey moral norms and which kinds of moral norms autonomous machines should never violate, in order to examine whether people's expectations of autonomous machines differ from their expectations of humans when moral decisions are made. The results show that, compared with humans, autonomous machines are expected to abide by moral norms absolutely, and that people's attitudes toward autonomous machines do not differ across kinds of moral norms. These results provided a basis for the formal studies.

Study 1 explores what choices people expect autonomous machines to make when owners' orders conflict with moral norms in concrete moral scenarios. Since an owner's order that violates moral norms can stem from moral as well as immoral intentions, Study 1 also examines whether the valence of the owner's intention makes a difference. The study finds that, regardless of whether the owner's intention is moral, people always expect autonomous machines to comply with moral norms when the owner's orders conflict with those norms.

Study 2 comprises Studies 2a and 2b, which explore people's expectations of autonomous machines and human decision makers in scenarios involving proscriptive (Study 2a) and prescriptive (Study 2b) moral norms, respectively. Study 2 also takes moral evaluation as a dependent variable, examining people's acceptance of, and willingness to trust, autonomous machines or human decision makers who choose to violate moral norms. The results show that people are less willing to allow autonomous machines to violate proscriptive moral norms, show lower acceptance of such violations, and are less willing to trust autonomous machines after they violate proscriptive moral norms (Study 2a). In contrast, people are less willing to allow human decision makers to violate prescriptive moral norms, and show higher acceptance of autonomous machines' violations and greater willingness to trust autonomous machines after they violate prescriptive moral norms (Study 2b).

Study 3 explores whether the uncertainty of the outcome affects people's expectations of autonomous machines' moral decision-making. The study creates three uncertain situations: the outcome is unlikely (20% chance), likely (80% chance), or of unknown probability. The results show that people's expectations of human moral decision-making are affected by the probability of the outcome: when the probability is low, people do not allow human decision makers in a moral dilemma to violate proscriptive moral norms. Expectations of autonomous machines, however, are unaffected by the probability of the outcome; autonomous machines are expected to abide by proscriptive moral norms at all times.

Building on the preceding studies, Study 4 further explores people's expectations of autonomous machines regarding compliance with proscriptive moral norms and the choice between action and inaction. Study 4 also examines the mediating mechanism from two aspects: the autonomous machines' own attributes and people's need to control autonomous machines. The results confirm that people do not expect autonomous machines to choose inaction in moral scenarios; rather, they hope autonomous machines will act without violating proscriptive moral norms. Moreover, it is the need to control autonomous machines, rather than the machines' own attributes, that mediates people's expectations of autonomous machines' moral decision-making.

Taken together, the five studies show that people hold different moral decision-making expectations for autonomous machines and for human decision makers. In scenarios involving proscriptive moral norms, human decision makers are allowed to violate those norms to some extent, but autonomous machines are not expected to violate them at all. In scenarios involving prescriptive moral norms, human decision makers are expected to offer help, but there is no such expectation for autonomous machines. In addition, outcome uncertainty affects people's expectations of human decision makers only, not of autonomous machines. The mediation analyses show that people's need for control, namely safety concerns about the consequences of an autonomous machine violating moral norms and the desire to control autonomous machines, shapes people's expectations of autonomous machines' moral decision-making.
Keywords/Search Tags: autonomous machines, moral decision-making, proscriptive moral norms, prescriptive moral norms, action/inaction