Robots will rule us all. I feel that’s already been established by more sci-fi writers than can be credited in one podcast.
You may have recently seen the headline-grabbing story of an Uber self-driving car killing a pedestrian, a story that re-awakened the humanity in all of us.
What if… (insert dystopian self-driving-car fiasco here) AI (artificial intelligence) were deciding the fate of human lives with alarming regularity? In my recent discussions I found a common theme: these infinite ethical conundrums often come back to questions of responsibility. Who do you blame if AI kills one of our humans? And, assuming the AI is forced into a decision that means choosing one life over another, how does it make that choice? Heavy stuff, folks. Fasten your seatbelts. Or don't.
Doing my best to simplify my position, I'll start by defining for the curious (hey, that's why you listen to podcasts, isn't it?) how self-driving cars work. Then we'll start talking philosophy: how we're trying to tackle those tough questions, or whether we're just being too darn human.
Read the notes at
Follow us on...