
This is what happens when you crowdsource morality!

#3 | What would happen if we crowdsourced morality? In other words, instead of a Bible, a Quran, or a Torah guiding us, we make life-and-death decisions based solely on the collective wisdom of the public. Let’s take this a bit further, and plug the results of those whims into machines. What would happen then? Or rather, what’s the worst that could happen? Tune in to find out!


Special thanks to our sponsor:

Evolve Summit
EVOLVE! Summit: The Greatest Recruitment and Sourcing Conference in CEE! Join Jim Stroud and a host of recruiting and sourcing experts from around the world, November 13-14, 2018. Click here for more information.

About the podcast:

The Jim Stroud Podcast explores the future of life itself by examining emerging technology, the changing world of work, cultural trends, and everything in between.

About the host:

Over the past decade, Jim Stroud has built an expertise in sourcing and recruiting strategy, public speaking, lead generation, video production, podcasting, online research, competitive intelligence, online community management and training. He has consulted for such companies as Microsoft, Google, MCI, Siemens, Bernard Hodes Group and a host of startup companies. During his tenure with Randstad Sourceright, he alleviated the recruitment headaches of their clients worldwide as their Global Head of Sourcing and Recruiting Strategy. His resume and career highlights can be viewed on his website at www.JimStroud.com.

Subscribe now!

PODCAST TRANSCRIPT

Hi. I’m Jim Stroud and this is my podcast.

{music}

Quick question for you. Okay, maybe two. What would happen if we crowdsourced morality? In other words, instead of a Bible, a Quran, or a Torah to guide us, we make life-and-death decisions based solely on the collective wisdom of the public. Let’s take it a bit further, and plug the results of those whims into machines. What would happen then? Or rather, what’s the worst that could happen? Hah! I’ll let you know after this important message.

{Promo message for EVOLVE! Summit in the Czech Republic.}

Consider this… It’s a lovely day out, and you decide to go for a walk along the trolley tracks that crisscross your town. As you walk, you hear a trolley behind you, so you step away from the tracks. But as the trolley gets closer, you hear the sounds of panic: the five people on board are shouting for help. The trolley’s brakes have gone out, and it’s gathering speed. You happen to be standing next to a side track that veers into a sandpit, potentially providing safety for the trolley’s five passengers. All you have to do is pull a hand lever to switch the tracks, and you’ll save the five people. (Hoo-ray!) But there’s a catch. On this offshoot of track leading to the sandpit stands a man who is totally unaware of the trolley’s problem and the action you’re considering. There’s no time to warn him. So by pulling the lever and guiding the trolley to safety, you’ll save the five passengers, but you’ll kill that one man. What do you do?

This scenario is called “The Trolley Problem.” It is a moral paradox first posed by Philippa Foot in her 1967 paper, “The Problem of Abortion and the Doctrine of the Double Effect,” and later expanded by Judith Jarvis Thomson. Far from solving the dilemma, the trolley problem launched a wave of further investigation into the philosophical quandary it raises. And it’s still being debated today.

Fast forward from that 1967 imaginary moral paradox to a real 2018 moral paradox, where you are riding alone inside a self-driving robot car. Suddenly, three pedestrians leap into a crosswalk in front of the robot car you are riding in. The robot car must instantly decide between running the pedestrians down, thus saving your life, or crashing into a concrete barrier, which would kill you but save the lives of the three pedestrians. So, what should the robot car do?

Since 2016, scientists have posed this scenario to folks around the world through the QUOTE “Moral Machine,” END QUOTE an online platform hosted by the Massachusetts Institute of Technology that gauges how humans respond to ethical decisions made by artificial intelligence. After collecting 40 million decisions from respondents in 233 countries and territories, the researchers found that overall, participants favored sparing the lives of the many over the few, humans over animals, and the young over the old. But if you look a little deeper into the geography of those who responded, you notice that where you live has a lot to do with your life-and-death decisions. For example…

…people in the U.S. and the U.K. favored sacrificing the one life in order to save more lives, whereas in Taiwan and Japan the opposite view was the trend.
…people in China were more inclined than people in France to spare the elderly over the young; respondents in France tended toward the opposite choice.
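To make those cross-country comparisons concrete, here is a minimal illustrative sketch of how per-country preferences might be tallied from individual responses. This is not the Moral Machine’s actual code, and the sample data and country codes are invented for the example:

```python
from collections import defaultdict

# Each response records where the respondent lives and which party they
# chose to spare in a forced-choice dilemma. Invented sample data.
responses = [
    {"country": "US", "spared": "many"},
    {"country": "US", "spared": "many"},
    {"country": "US", "spared": "one"},
    {"country": "JP", "spared": "one"},
    {"country": "JP", "spared": "one"},
    {"country": "JP", "spared": "many"},
]

def preference_by_country(responses):
    """Return, per country, the fraction of respondents who spared the many."""
    counts = defaultdict(lambda: [0, 0])  # country -> [spared_many, total]
    for r in responses:
        counts[r["country"]][1] += 1
        if r["spared"] == "many":
            counts[r["country"]][0] += 1
    return {c: spared / total for c, (spared, total) in counts.items()}

print(preference_by_country(responses))
```

With the invented sample above, two of three U.S. respondents spare the many while two of three Japanese respondents spare the one, mirroring the kind of regional split the researchers reported.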

I think all of this is fascinating, especially in light of Waymo. Waymo is a self-driving car company that grew out of Google. Waymo is about to make a major technological leap in California, where its vehicles will hit the roads without a human being on hand to take control in emergencies. As of October 30, 2018, the California Department of Motor Vehicles cleared Waymo’s driverless cars to cruise through California at speeds up to 65 mph. I hope, for the sake of California pedestrians, that Google has solved the trolley problem.

If you like what you just heard, hate what you just heard or don’t know what you just heard, I want to know about it. You can contact me via my website www.JimStroud.com or you can message me on LinkedIn, Twitter… I’m everywhere, everywhere, everywhere. Oh, oh, if you want to support my Starbucks habit by dropping a little somethin’-somethin’ in the virtual tip jar I will not be mad at that, at all. There is a donation link in the podcast description. Thank you in advance.
