The CSR Newsletters are a freely available resource generated as a dynamic complement to the textbook, Strategic Corporate Social Responsibility: Sustainable Value Creation.

To sign up to receive the CSR Newsletters regularly during the fall and spring academic semesters, e-mail author David Chandler at david.chandler@ucdenver.edu.

Monday, March 9, 2015

Strategic CSR - Robots

The article in the URL below makes an interesting point:
 
"A more automated world might, in a strange way, be a more humane one."
 
That is the article's concluding sentence and, along the way, the article asks some interesting questions about the increasing role of robots in our society and the ethical quandaries they present:
 
"Who, for example, is liable if a driverless car crashes? This is unclear, even though four US states have given the legal go-ahead for testing on public roads. … And what if a driverless car, in order to avoid a potentially fatal collision, has to mount the pavement? Should it be installed with ethics software so that, given the choice between mowing down an adult or a child, it opts for the adult?"
 
The article also discusses the rapidly increasing role of robots in war, which raises more immediate questions about life-and-death decisions. This is increasingly important because, for all the things robots can do as well as (if not better than) humans, "there are still capacities, such as moral reasoning, that elude them." For example:
 
"Scientists at Bristol Robotics Laboratory showed last year that a robot trained to save a person (in reality another robot) from falling down a hole was perfectly able to save one but struggled when faced with two people heading holewards. It sometimes dithered so long that it saved neither. Surely a robot programmed to save one life is better than a perplexed robot that can save none?"
 
The challenge, of course, is not only whether we can instill moral values in robots, but also, if we can, whose moral values they should be. The article raises this question but does not provide an answer. Instead, its main point is to alert us to the fact that progress is occurring whether we are ready or not and that, in the author's eyes, it would be better to be involved in this debate, shaping it, rather than to allow Silicon Valley to plough ahead unrestrained:
 
"So artificial morality seems a natural, if controversial, next step. In fact, five universities, including Tufts and Yale, are researching whether robots can be taught right from wrong. But this is happening in a regulatory vacuum. Ryan Calo, law professor at Washington university, has proposed a Federal Robotics Commission to oversee developments."
 
If we can get the morality of robots right, the article suggests, the potential benefits seem limitless. Ultimately, such progress should:
 
"… challenge our assumptions about the superiority of human agency. Google Chauffeur might not instinctively avoid a pedestrian but it will not fall asleep at the wheel. A robot soldier, equipped with a moral code but devoid of emotion, will never pull the trigger in fear, anger or panic."
 
Hence, the conclusion that:
 
"A more automated world might, in a strange way, be a more humane one."
 
Take care
David
 
David Chandler & Bill Werther
 
Instructor Teaching and Student Study Site: http://www.sagepub.com/chandler3e/
Strategic CSR Simulation: http://www.strategiccsrsim.com/
The library of CSR Newsletters is archived at: http://strategiccsr-sage.blogspot.com/
 
 
When a moral machine is better than a flawed human being
By Anjana Ahuja
January 31, 2015
Financial Times
Late Edition – Final
p. 9