The finding has important implications for managing the moral dilemmas that autonomous cars might face on the road.
The study showed that human moral behaviour can be well described by algorithms that could be used by machines as well.
Until now it has been assumed that moral decisions were strongly context-dependent and, therefore, could not be modelled or described algorithmically. But the study found it to be quite the opposite.
"Human behaviour in dilemma situations can be modelled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object," said lead author Leon Sutfeld, from the University of Osnabruck in Germany.
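The value-of-life model the researchers describe can be sketched in a few lines: each class of object is assigned a scalar "value of life", and in a dilemma the trajectory that destroys the least total value is chosen. The object classes, weights, and function names below are illustrative assumptions, not the study's fitted parameters.

```python
# Hypothetical sketch of a value-of-life-based decision model, as described
# in the quote above. The weights are illustrative, not the study's data.

VALUE_OF_LIFE = {
    "adult": 1.0,
    "child": 1.2,
    "dog": 0.4,
    "trash_can": 0.01,
}

def trajectory_cost(obstacles):
    """Sum the value-of-life weights of everything a trajectory would hit."""
    return sum(VALUE_OF_LIFE[o] for o in obstacles)

def choose_lane(lanes):
    """Pick the lane whose obstacles carry the least total value of life.

    `lanes` maps a lane name to the list of obstacles in that lane.
    """
    return min(lanes, key=lambda lane: trajectory_cost(lanes[lane]))

# Example dilemma: swerve left toward a dog, or stay right toward an adult.
decision = choose_lane({"left": ["dog"], "right": ["adult"]})
```

The study's point is that a model this simple, fitted per participant, captured human choices in the VR scenarios surprisingly well.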
For the study, published in Frontiers in Behavioral Neuroscience, the team used immersive virtual reality to analyse human behaviour in simulated road traffic scenarios.
The findings have major implications for the debate over how self-driving cars and other machines should behave in unavoidable accident situations.
Since it now seems possible that machines can be programmed to make human-like moral decisions, it is crucial that society engage in an urgent and serious debate, the researchers said.
"We need to ask whether autonomous systems should adopt moral judgements. If yes, should they imitate moral behaviour by imitating human decisions? Should they behave according to ethical theories, and if so, which ones? And critically, if things go wrong, who or what is at fault?" explained Gordon Pipa, Professor at the University of Osnabruck.