In Episode 64 of Hidden Forces, Demetri Kofinas speaks with mathematician and public educator Hannah Fry. Dr. Fry’s mathematical expertise has led to the development of several documentaries on the BBC, where she also hosts her own long-running Radio 4 program, The Curious Cases of Rutherford and Fry. Already a two-time author, Hannah is out with her third and latest book, Hello World: Being Human in the Age of Algorithms.

Since the turn of the twentieth century, algorithms have assumed the power previously associated with pontiffs or the divine right of kings. In an instance of late 20th-century lore, the great chess champion Garry Kasparov, reflecting upon his historic loss to IBM’s Deep Blue, described the algorithm that defeated him in less than twenty moves as having ‘suddenly played like a God for one moment’. Kasparov’s experience – that of having been unnerved by the intelligence and obstinate posture of an otherwise lifeless machine – has not remained confined to the narrow dimensions of his chessboard. In the 20 years since his loss, increasingly intelligent algorithms seem to be overtaking our world and making humanity obsolete in the process.

But in the age of the algorithm, there are those, like Hannah Fry, who believe that our place has never been more important. She believes that we should stop seeing machines as objective masters and instead start treating algorithms as we would any other source of power: questioning their decisions, scrutinizing their motives, and holding them accountable for their mistakes.

As computer algorithms increasingly control and decide our future, ‘Hello World’ is a reminder of a moment of dialogue between human and machine – of an instant where the boundary between controller and controlled is virtually imperceptible. It marks the start of a partnership: a shared journey of possibilities, where one cannot exist without the other. In the age of the algorithm, that’s a sentiment worth bearing in mind.

Producer & Host: Demetri Kofinas

Editor & Engineer: Stylianos Nicolaou

Join the conversation on Facebook, Instagram, and Twitter at @hiddenforcespod



00:00:00 Today's episode of Hidden Forces is made possible by listeners like you. For more information about this week's episode, or for easy access to related programming, visit our website at hiddenforces.io and subscribe to our free email list. If you listen to the show on your Apple
00:00:17 Podcasts app, remember you can give us a review. Each review helps more people find the show and join our amazing community. And with that, please enjoy this week's episode. Since the turn of the twentieth century, algorithms have assumed the power previously associated with pontiffs or the divine
00:00:41 right of kings. In an instance of late twentieth-century lore, the great chess champion Garry Kasparov, reflecting upon his historic loss to IBM's Deep Blue, described the algorithm that defeated him in less than twenty moves as having suddenly played like a God for one moment. Kasparov's experience, that
00:01:06 of having been unnerved by the intelligence and obstinate posture of an otherwise lifeless machine, has not remained confined to the narrow dimensions of his chessboard. In the twenty years since his loss, increasingly intelligent algorithms seem to be overtaking our world and making humanity obsolete in the process. But in
00:01:30 the age of the algorithm, there are those who believe that humanity's place has never been more important; that we should stop seeing machines as objective masters and start treating them as we would any other source of power: questioning their decisions, scrutinizing their motives, and holding them accountable for
00:01:53 their mistakes. This week on Hidden Forces: Hannah Fry. Power, complacency, and humanity in the age of algorithms. Hannah Fry, welcome to Hidden Forces. Thank you for having me. It's so wonderful having you in the United States. Yeah, it's nice to be here. Bit rainy for me today. It's
00:02:28 rainier than it is in the UK. Yeah, there it drizzles; there is just gray sky that reaches down and touches the ground, whereas here you actually get wet. This is not normal, though. It's also UN Week, and the traffic's insane. When did you get here? On Sunday. Sunday, and
00:02:43 you leave on Friday? I'm confused about which day is which at the moment; I think today is Tuesday. Well, it's wonderful having you here. I told you I read your book, and for our video viewers, this is the US copy; this is what the US
00:02:57 copy looks like. And I have a picture from the rundown of what the UK copy looks like. I couldn't decide which one I liked more, but I think I was going with the American version
00:03:09 over the UK version. Yeah, I mean, it's close, but I think I slightly prefer the British one; it was actually my friend who made the British cover, so I'm emotionally invested in it. I like it. Like I told you, we both agree it's
00:03:22 the nerdier, geekier artwork. So the name of the book is Hello World; the subtitle is Being Human in the Age of Algorithms. I told you one of the things I really like about the book is that it takes an even-keeled perspective. Oftentimes you find that
00:03:36 people take one side or the other, I think because it's hard to do otherwise, and also because it sells better, particularly if you take the doom-and-gloom perspective. But I like this book because it does actually walk that line, and I think part of that's captured in
00:03:51 your title. So tell us why you chose the title Hello World. Yeah, I think you're right. I think that there is a lot of nuance in these discussions, nuance on these big questions, that I think it's really important to capture. And the reason why I
00:04:03 chose the title Hello World, there's a couple of reasons, really. The first is that anybody who's ever learned to program is going to be familiar already with this phrase. It's almost like a rite of passage: if you learn how to code, the very first thing
00:04:16 that you do is you program your machine to flash up the words "Hello, world" onto the screen. And the reason for that is a tradition that dates back to about the nineteen seventies, when a chap called Brian Kernighan wrote it in his C programming book.
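The rite of passage described here can be shown in one short block. Kernighan's original examples were written in C, but the tradition carries into every modern language; here is a minimal Python version:

```python
# The traditional first program: make the machine greet the world.
greeting = "Hello, world"
print(greeting)
```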
00:04:31 He said that he did that because he had seen this cartoon where a little chick was hatching out of an egg, and it was chirping "Hello world" as it was born, and it was just something that stuck in his mind. But the thing that I really like about it,
00:04:44 the reason why I chose that for the title of this book, is that he was never totally clear about who the chick was in the analogy: whether it was the human who was learning to program for the first time, learning how to create algorithms for the first time, or
00:04:58 whether it was the machine itself that was kind of being awoken. And I just like the idea that there's this moment where you can't tell which is which, a moment of dialogue, really, between human and machine, where you're both kind of amateurs and it's like a
00:05:12 shared journey of possibilities that you're both embarking on together. And I just thought that was exactly the right sentiment for where we're at now in time, in the age of algorithms. I agree; that was what struck me about it as well, that you used the word dialogue,
00:05:28 yeah, because I think that oftentimes the conversation is about what technology is doing to us. I'm going to mention a previous guest whom I've mentioned before on the show, Tim O'Reilly, for the exact same reason: he has this great analogy of the symbiotic organism, and
00:05:44 I think that's helpful. Do you find that you see the world as one where we are increasingly interacting with machines and computers and algorithms, as opposed to these siloed communities, where one is being built here and it's eventually going to take over the other? I'd like to, but I
00:05:59 think that's the world that we want to live in, or certainly the future that I'm hoping for: one where there is this symbiotic relationship, this partnership. But I do think, actually, that more and more we're not seeing that. I think that we're seeing a conversation
00:06:13 about human versus machine. You know, machines are coming to take all of our jobs; machines are the ones with the authority, the ones with the power. And I think that a lot of what this book is about is kind of just taking that version of the
00:06:25 future and just questioning whether that's really what we want to end up with. How do you think that narrative came to be? Where do you think that comes from? Is it just a general displaced anxiety about change, or is it very specific to technology? It's
00:06:42 a really tough question. I think in part it's because of the way that we are deeply flawed as humans. So I sort of think that we've got this really strange relationship with machines. On the one hand, we are really trusting of technology: there are stories of
00:06:59 people quite literally driving their cars off the edge of a cliff because that's what the satnav told them to do, right? But there are other examples: you know, how many times when you go on Skyscanner do you bother to double-check the websites
00:07:11 of all of the airlines to make sure that you're being given the cheapest deal? You know, I just send it and I trust it. I've heard you should do it, though; I've heard there are all sorts of tricks to getting a better airline
00:07:24 ticket. But no, I don't do it. I mean, we're probably too busy to do it. Well, it's true, but I also think that, you know, across the board we're seeing lots of occasions where people are quite happy to just take a bit of
00:07:35 a cognitive shortcut, right? Algorithms and computers give you an easy way out; sometimes they give you an easy source of authority. But then, on the other hand, we also have this real habit of just dismissing technology completely if it is ever shown to make any mistakes. So, you
00:07:51 know, you've got people shouting at Siri for being stupid, or just throwing away algorithms that have got any kind of flaws in them. And I think that that conversation about human versus machine that's so prevalent in the way we talk about them really stems
00:08:07 from the way that we think of machines as either totally perfect, to be wholeheartedly and completely trusted, or complete junk that we can never use at all, when in reality we should be looking at this as something slightly more in the middle. One of the things
00:08:21 that came up while you were talking is this relationship of master and slave, and I think it's a term in computer science as well. But also I think that we perhaps see computers either as our slaves or, if they're not, then they might be our masters. And that makes me
00:08:36 think of the story of Kasparov and Deep Blue. You start the book, I think in the first or second chapter, with the story of this famous chess player Kasparov and his famous match against IBM's Deep Blue, which he lost. I'm going to give it away
00:08:50 for those who don't know. Tell us about that story. And I should say also, the book is full of this; one of the things I liked about the book is all of these anecdotes, and it personalizes things, and through those stories you're able to kind of connect
00:09:04 with the particular issues, whatever it is that we're talking about. Tell us that story, and why you chose to put it in the book. Yeah, it was a tough call, that one, actually, because I know that the story of Kasparov and Deep Blue is one that's been retold thousands
00:09:18 and thousands of times. I mean, there's no chess match in history that's been pored over more than that one. But I have a slightly different take on what happened during that match than the normal stories tend to cover, because I think for me it
00:09:32 wasn't so much a story about how remarkable the achievement of IBM's Deep Blue had been, despite the fact that it was, technically, a remarkable achievement. I think for me this was the story about how Kasparov let his human flaws lose him the
00:09:50 match. So, Kasparov: you have to understand, this man is unbelievably good, like unbelievably good. He was so intimidating. You know, I spoke to several chess grandmasters while I was researching this, and they described him as being like a tornado when he entered the
00:10:04 room, like everyone was kind of pinned to the sides, watching this great man walk through. And he had these tricks where he would totally psych out his opponents before they had even, you know, really sat down at the board. One of the things that he
00:10:16 used to do is he would take off his watch as he started to play, and he would put it on the table next to him, and kind of toy with his opponents for a bit, let them think that they were having some kind of game.
00:10:26 And then, when he was bored of toying with them, he would pick up his watch and he would return it to his wrist, and that was the signal that everyone in the room recognized, which is that Kasparov was done with you. You know, he was finished toying with
00:10:37 you: I'm bored, I'm bored, you need to resign this game now, because otherwise I'm just going to take you down; either way, defeat is coming. So scary. But when it's against a machine, you just can't do that, right? You can't use any of those kinds of tactics. And
00:10:51 yet the machine could use those tactics on Kasparov. So one of the lesser-known things about this machine was that it was programmed to be able to play chess, of course, but it was also programmed to try and psych Kasparov out. So it was programmed so that when
00:11:04 it came up with an answer, sometimes it would just sit on that answer for a little while and just count down the time, just letting the clock tick away on the screen. And the idea behind that is that it wanted Kasparov to start
00:11:19 thinking about what was going on inside the machine, to start second-guessing the machine. So Kasparov got caught in this thing of: well, I must have pulled it into this position where it's struggling with this calculation; I've done something that it's finding really tricky. When in
00:11:33 reality, the machine was just sitting back and, you know, letting the timer tick down. And Kasparov, in his own book, has written about his emotional state during that process. Because, you know, ultimately, I think that widely in the chess community they regard Kasparov as still
00:11:48 better than the machine at that moment in time; Deep Blue was not a better chess player than Kasparov. But it was Kasparov's attitude towards the machine that was the thing that lost him the match. And I think that's something that we see across the board, well
00:12:02 outside of the world of chess: it's the way that we react to technology, and the way we allow technology to control us, that really is where the questions lie. And that brings up the fear component: the fact that there's this relationship of fear, oftentimes, with computers and machines, and
00:12:20 the case of Kasparov was a classic example. Yeah, sometimes it's fear, sometimes it's faith; we're just not good at having these kinds of clear, logical, rational relationships with machines. So, the book is called Being Human in the Age of Algorithms, and
00:12:32 you focus in on algorithms, and I think most people don't actually know what an algorithm is, so I think it would be helpful for you to define that. Tell us what that is, and maybe kind of break down the different
00:12:44 types of algorithms, as you do in the book. Yeah, completely. I think it's completely understandable why people don't totally know what the word algorithm means, because it doesn't really mean very much, to be honest. It's just this really broad
00:12:58 umbrella term that doesn't convey very much meaning. But very, very simply, all an algorithm is is a series of instructions, right? It's something that takes you from some kind of input, via some logical steps, through to some kind of output. So, in theory, a cake recipe
00:13:16 counts as an algorithm: you input the ingredients, you output the cake at the end, and your logical steps are the recipe itself. But the way that people tend to use the word is for the instructions that you give to a computer.
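That input / logical steps / output shape can be sketched as a toy function; the ingredient list and step names below are illustrative, not from the book:

```python
def cake_recipe(ingredients):
    """A recipe as an algorithm: input -> logical steps -> output."""
    # Step 1: combine the ingredients into a batter.
    batter = " + ".join(sorted(ingredients))
    # Step 2: "bake" the batter; the return value is the algorithm's output.
    return f"cake made of {batter}"

result = cake_recipe(["flour", "eggs", "sugar"])
```

The point is structural: whether the steps describe baking or ranking web pages, anything with this input/steps/output shape counts as an algorithm.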
00:13:30 There are lots of examples of algorithms that we're really familiar with. Google Search, for example, is an algorithm, or Facebook's news feed, or Amazon's recommendation engine; these are all algorithms that we're just interacting with all the time. But we're now seeing algorithms in all kinds of different places. So, you know, they're
00:13:45 in our cars, they're in our courtrooms, in hospitals, in our schools. They're kind of behind the scenes, making these little decisions about who gets to do what and about how we're living our lives. You break down four different categories for how these are used. We have,
00:14:03 I think, prioritization; is that what's used in Google, in search engines, things like that? What are those four, could you tell us? Oh, crikey, you're testing me now. I've got them for you here: prioritization, classification, association, and filtering. Yeah. I think the reason I bring those
00:14:18 up is because I wonder, for people, if it would be helpful to think about some of the places where they encounter these different algorithms, and how they differ. And I also think at some point I'd like to talk about the divide between deterministic algorithms and these machine-learning
00:14:33 algorithms, right, which use inductive reasoning. Yeah, absolutely. So, these different categories: there's not an official classification of algorithms, so I think there are a lot of computer scientists who would argue with the way that I've chosen to group these. But, you know, there isn't
00:14:48 an official list, so you've got to draw the line somewhere. But I think they do a good job of kind of explaining the different types of things that algorithms can do. So, for example, prioritization algorithms: all these do is give you a list in order of
00:15:02 priority, right? Google Search, that's exactly what it's doing: you put in a search term, and it prioritizes the entire Internet for you and spits it back at you in an ordered list.
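A prioritization algorithm in miniature: score every item and hand back an ordered list, the way a search engine ranks pages. The relevance scores below are invented stand-ins for whatever measure a real engine would use:

```python
def prioritize(items, score):
    # The whole job of a prioritization algorithm: best-scoring items first.
    return sorted(items, key=score, reverse=True)

# Hypothetical relevance scores for three pages.
relevance = {"page_a": 0.2, "page_b": 0.9, "page_c": 0.5}
ranking = prioritize(relevance, score=relevance.get)
```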
00:15:17 But you could also have a prioritization algorithm used by the police to tell them where in the city they should target their forces on a particular evening, if they want to crack down on the most crime. Right. Well, you had a great chapter on that, by the way. What were those called? Distance decay,
00:15:29 buffer zones, things like that. Interesting. I properly enjoyed it, and I wasn't aware of that. Anyway, I interrupted you; please continue, and we'll get to that when we deal with crime and algorithms. Yeah. So, you also have things like classification algorithms. OK, so classification algorithms would
00:15:45 be like: if you are female, in your early thirties, and you've been with your partner for a long time (on Facebook, they can tell that), then they would classify you as someone who's probably going to get engaged or probably going to want a baby, right, and serve you up
00:15:59 some adverts for diamond rings or pregnancy tests, right? That is the kind of thing that classification algorithms do. It could also be classifying people in courtrooms as to whether they're at high risk of going on to offend or not in future. So these are kind of the very, very simple basics,
00:16:14 really, of the types of task that algorithms can do. So let's just take classification, for example. You can see with that one right there how beneficial it can be, but at the same time the dangers of it, because it's stereotyping on steroids, right? You
00:16:27 can get kind of nuts with stereotyping and segmenting people, and of course we've seen that in so many areas. One obvious case is on Facebook, where people live in these echo chambers of very particular types of news because they've been stereotyped as a hard-core right-wing Republican. Well, good luck
00:16:42 changing that, because you're getting dished increasingly more radical content, or vice versa on the left. So, I mean, the power of algorithms is strong, and you make that point also with this great example of Robert Moses, the great New York... landscape architect? I don't know what
00:16:57 the official title would be. Urban planner? His impact was enormous. You give the example of the racist bridges; give that example, because I think it will be helpful for the audience to understand the omnipotence of these algorithms. Yeah.
00:17:16 So this is something that I think people in the UK just haven't heard of at all, but if you're American you're a lot more aware of the work that Robert Moses did. Robert Moses was an urban planner, a town architect, let's say, in
00:17:28 around the nineteen thirties; urban planner is probably right. In the nineteen thirties he built a state park on Long Island, Jones Beach it's called, a beautiful state park, and he was
00:17:46 basically a massive racist. He was incredibly racist, and he didn't want anyone who wasn't white to frequent his new, beautiful state park. So what he did to prevent that from happening, what he did to prevent other types of people from using his
00:18:02 park, was that on the highway that approached this beach he deliberately built bridges that hung very, very low over the traffic. Extremely low; I think sometimes they only leave about nine feet of clearance from the tarmac below. And the idea behind that, of course, was
00:18:18 that white and wealthy Americans would be visiting the beach in their private cars and could easily slip underneath these low-hung bridges, but people who were travelling in buses, which they were more likely to be doing if they were from the poorer black neighborhoods, wouldn't
00:18:32 be able to pass under these bridges, and therefore wouldn't be able to go to the park. Now, the point that I was really making in using that example (because, you know, there were no real algorithms involved there) is this idea that actually you can
00:18:44 have objects that aren't attached to humans, these inanimate objects, and they end up having a kind of clandestine control over people. They end up being oppressive, almost, just because of the way that they're designed: sometimes when they're deliberately designed to oppress people, like in the case of
00:19:02 those bridges, but sometimes because people are just a little bit thoughtless when they omit to include people in the way that they're designing things. And also because algorithms don't come out of thin air, right? We create them, right, and we imbue them with our
00:19:16 biases. And some of those biases, to your point in the book, are necessary, because if you don't have those biases then the algorithm doesn't work. And yet having those biases can create a feedback loop, where you have cases like these decision trees in parole boards, in criminal hearings,
00:19:33 right, where judges decide whether to let someone out early or not based on a test which is heavily influenced by the neighborhood in which they live and, inadvertently, their skin color. Give that example; that's a great example. What are they called, those recidivism algorithms? Yeah, exactly.
00:19:52 Yeah, these things have been around for a really long time, which I was not aware of. I'll tell you something that was very educational for me: I wasn't aware of the fact that people use these types of decision trees to make these decisions. And I actually want to mention
00:20:04 one more thing that I thought about with that, and it feeds into the medicine aspect as well. I think there's something attractive to people in professions like the law and medicine, where liability is very high for making bad decisions. I think they welcome the opportunity to have an
00:20:21 algorithm where they can offload responsibility to a computer, even though that may not be the ideal outcome for society, or for the people that they're either seeing as patients or paroling. Yeah, I totally agree with you. I think, when faced with a tricky decision, I mean, I don't
00:20:35 know if you ever do this, but I just don't want to make a decision; can someone else just be in charge, right? I feel like that all the time. I feel like that all the time, and I will also say, increasingly, to the prior point I was making,
00:20:46 I also worry about being blamed for things. Exactly; you know what I mean, it's an easy way to hand over responsibility. I entirely agree with you. I think especially when you're talking about judges, for example, where there are really, really difficult decisions that they have to
00:21:02 make about people's freedom, right, robbing people of their freedom, I think that when you have an algorithm there telling you an answer, it's very, very hard not to over-trust it. It's very hard not to just take the kind of cognitive shortcut, if you like, of that very
00:21:19 easy source of authority, because you can rely on it later to say: well, you know, the algorithm told me so. Hugely important in medicine. If you're a radiologist and you miss, let's say, somebody's cancer, and they sue you, I mean, wouldn't you be better off saying: I was
00:21:31 just following the computer's algorithm? Yeah, you would. You definitely would. But I think medicine is actually the one area where they really have understood these pitfalls, these big problems on the horizon with having algorithms telling you what to do, or telling
00:21:48 you their opinion. The way that they've done it in medicine is that they have much more of a partnership, right? Because, ultimately, there are things that humans are really bad at: we're really sloppy, we get really tired really easily, we make lots of mistakes, we miss little tumors, that kind
00:22:05 of thing. We're really bad at that stuff. Algorithms, on the flip side, don't have any of those problems: they never get tired, they're very consistent, they're very precise. But they don't understand context, and they don't understand nuance, and so they make little mistakes, right? So the idea in medicine
00:22:21 is that they create this partnership between human and machine. So rather than having the machine make the diagnosis and then have the human come in and check whether the machine is correct or not, the machine is designed instead to just flag areas that it thinks
00:22:35 are suspicious, and then the human comes along afterwards and actually does the diagnosis. So they look through all the areas that the machine has flagged as suspicious and say: yes, no; malignant, benign; malignant, but it's really hard to say. But by doing that, you
00:22:50 avoid this problem of: the machine says it's eighteen months in jail, what do you want to do? Right. Instead you're saying: the machine thinks that these areas are worthy of your attention; what do you do? It's a different way to frame the problem.
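That flag-then-review division of labour can be sketched as a two-stage pipeline; the threshold and the per-region scores below are invented for illustration:

```python
def flag_suspicious(regions, threshold=0.7):
    """Machine step: tireless and consistent, flags anything above a threshold."""
    return [name for name, score in regions.items() if score >= threshold]

# Hypothetical per-region suspicion scores from an imaging model.
scan = {"region_1": 0.95, "region_2": 0.30, "region_3": 0.80}

# Human step: the diagnosis is then made only on the flagged regions,
# where context and judgment matter most.
flagged = flag_suspicious(scan)
```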
00:23:05 It's also coming at it from saying: what are machines really great at, and what are people still good at, and how do we work together in order to get a better outcome? And I think you'd agree that's a more productive path to the future. Where people, again, to bring up the concerns about
00:23:18 the fear aspect, people have a fear of being replaced by machines, right? I mean, we saw that during the Industrial Revolution, and we're seeing it in terms of automation, and people are worried that it's going to take their jobs, right? Yeah, and understandably so. I think that
00:23:34 their workplace actually is going to change on the basis of a lot of this stuff. And I think one of the things about the Industrial Revolution, at least, is that while it was a very big change, it was a big change
00:23:46 over a number of years, whereas this stuff is happening really quickly. But I also think that I don't want to go into a hospital where an algorithm is in charge. I want that human to be part of the process. I don't think we're anywhere near the stage where you
00:24:00 can hand this stuff over to machines without any kind of human input; we're a long way away from that. How much do you think that is... because that's a common response, not just yours; a lot of people feel that. In fact, I think I saw
00:24:12 a video of you, maybe it was a TED talk, or maybe it was an interview that I heard, but you talked about how, when someone goes in front of a judge, would they rather have a cold, you know, piece of steel cranking out decisions, or do they
00:24:27 want a warm beating heart, you know, even if they're worse, someone who could look you in the eye? And that brings us back to the Kasparov thing. So, actually, you know, it's funny, because I've asked a few different audiences this question now, and
00:24:41 most people say that if they were the one in the dock, they would rather it was a human who was making the decision about their future, that it was a human making that prediction. But I think part of the reason for that is that, when humans make mistakes,
00:24:57 people imagine that those mistakes will go in their favor. If you explain to people that actually the variation in the type of decisions that a human judge will make is massive, people always imagine they're going to be one of the lucky ones, right? And,
00:25:12 ultimately, I kind of think, actually, you know, I'm not against the idea of algorithms in the courtroom. I just think they have to be designed in the right way, because algorithms can make better, more accurate, and fairer assessments of people. And I think that's the difference: whether you're
00:25:27 thinking about yourself in the courtroom, or whether you're thinking about the whole country's courtrooms all together. Because I think when you're thinking about the bigger system, you really want stuff to be as fair as it possibly can be, and as unbiased as it possibly can be. And I
00:25:41 think that actually algorithms probably do have a role to play in that. I have another theory about that; it just came to me. I think that there is some sense of appeal, some room for negotiation. You tell me something: you said you're a recent mother,
00:25:53 your daughter is about a year old. You're not yet at that stage, but we've all met children that love negotiating, right? So we learn very early that if we don't like a result, well, you know, let's talk about it. You know, where
00:26:08 can I meet you? Where's the wiggle room? And there's no wiggle room with a computer. You're right. You know what I mean? I think there's some sense in that, even though it's illogical, it's irrational. There isn't any; once a judge hands down his or
00:26:20 her decision, that's it. You know, you don't like it, too bad. There's appeal in a court of law, but could the same thing happen with a computer? I want to switch a little bit and talk about something that I think the public has a fascination with, and I
00:26:32 want to ask you first why you think that is: autonomous driving, right? Because there are so many fascinating technologies in the pipeline, and some of them, I think, could even capture the imagination of the public, like 3D printing, right? But for some reason autonomous
00:26:46 vehicles have just become the talk of the town. It is such a big topic in the media; people want to know more about it, and they want to understand it. And it's happened so quickly; it came out of nowhere. You brought up the great story of DARPA's race in the Mojave
00:27:01 Desert in two thousand four, with those cars, I mean, the cars were, like, you know, falling over one after the other. It was a joke; I think the best car went seven miles before flipping over. And yet here we are, fourteen years later, and
00:27:16 cars are driving themselves, you know, getting on and off highways. How did this happen, first of all? And that's kind of a way of getting into a conversation about these machine-learning algorithms and how computers learn using neural nets. But also, I'm curious: why do you think
00:27:32 people have such a fascination with autonomous cars? Well, I think we've always had a fascination with them; we have had for a very long time, anyway. Surprisingly, autonomous cars are not a new fantasy. This is something that actually belongs to the era of jetpacks and tinfoil hats.
00:27:47 The Jetsons!

The Jetsons, exactly. Yes.

Did the computers speak with a British accent?

No, I don't think they did — but you might have had butlers.

00:28:05 This is a slight interjection, a funny moment, but I'll share it. We had a guest on, Henry Timms — I think it was Henry Timms, of Henry Timms and Jeremy Heimans — who had this story. He had worked for a very rich woman in England who apparently hired only retired butlers from the Queen's household, and he has fascinating stories about what it's like to actually be in service. One great example: he was on a private plane and had forgotten to buy a gift for his wife. Literally the moment he realized it — before he'd said anything — the butler came over and said, 'Would this do, sir? I took the liberty.' He had already bought the gift, knowing the man had forgotten, without being told; he'd seen it in his face. Remarkable. But I interrupted you — we were talking about the Jetsons, and about people always having this vision of a future with autonomous cars. You bring up the World's Fair in the book, I think. Tell that story and we'll keep going.

00:28:48 Yeah — I mean, this is back in
00:29:01 the nineteen fifties, when people were thinking really seriously about autonomous cars. There have been plenty of attempts to build one. There's one attempt in the UK that I find particularly amusing: they took a car — it was a Citroën — and built it so that it would follow a length of copper piping laid into the motorway. They dug up one of the freeways, laid this copper piping, and proved that it could work, but then the project just ran out of steam. So still today, under one of the motorways in Britain, there's this piping that was supposed to be the 1960s autonomous vehicle.

00:29:30 How far does it run?

Oh, it's rubbish — it's three miles or something.

That's a really short trip. Devastating.

00:29:47 I mean, I think it's understandable that we have this dreamy idea of stepping out of your house, sitting in a car, telling it where you want to go, and just chilling in the back while you're driven. It's the promise of a butler for everyone, a chauffeur for everyone — real luxury for every individual. Of course it's a dream; it makes perfect sense that it is. And now that, after DARPA, this is much more of a reality, it's understandable that people are getting excited about it.

00:30:14 It's also been in popular culture — like KITT, you know, from Knight Rider; everyone wants their own car like that. And there was an old horror movie — I forget the name, but it was Christine, I think — where the car actually tried to kill everyone; it would just drive around trying to kill people.

Some similarities to modern driverless cars, then.

Exactly, yes. But I think autonomous cars are cool for a number of reasons. One is because the
00:30:42 decision making is so complicated. The trolley problem is something you bring up in the book, and you're dealing with incomplete information and with really nuanced decisions. There's another great quote from the book — I think I have it here. Here it is, I'm going to read it: 'You know what they're going to say. They're going to mug them right off. They're going to stop, and you're just going to nip round.' This is a response from a survey, and it's very, very English — very amusing. I can't do the accent, so I won't even try: 'mug him right off,' and then 'you're just going to nip round.' Let's use that as a segue, because I do want to talk about the challenges of making autonomous driving real, and how that's a subset of a larger conversation about bringing machine learning algorithms into the world.

00:31:24 Yeah. That exact quote you mentioned was an answer to a question about how members of the public imagined the world might look if we had autonomous cars on our roads. And the thing is, if you're creating a driverless car, the number one rule — the number one thing that car has to be able to do — is to avoid a collision wherever possible. So what that means is this: if you step out in front of a driverless car in a situation where it doesn't need to run you over, it's going to have to stop for you. And suddenly, if you have a series of cars that you know will reliably stop if you bully them, that kind of changes the rules of the road. You could mug them off, basically.

00:32:08 And 'mug them off' means what?

It basically means bully them.

Okay, got it. And 'nip round' means that you can overtake.

Yeah. You're basically in full control. The car sees a pedestrian and the roles are reversed: you don't have to be afraid, because you know how this thing thinks, and it has to protect you at all costs.

Exactly, exactly. And
00:32:39 suddenly you have people who traditionally didn't have very much power on the roads — cyclists, for example — who suddenly become much more powerful.

You haven't spent enough time in New York; cyclists already claim a lot of power here — they're really aggressive. But you also bring up the trolley problem, which deals with some of this. I love this problem, because once you begin to think about it, it shows you the enormity of the calculations required, and of the values involved. What is it called in machine learning and AI when you program values into a machine — an ethical utility function?

00:33:08 Yeah, a utility function — something like that.

Right. Which is — I mean, say there's one driver in the car, a fifty-year-old man, and there's a child in the street. If the car swerves to avoid the child, the computer calculates a seventy-five percent chance the driver dies; but if it hits the child, the child will die with one hundred percent certainty. Do you hit the child, or do you swerve?
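The dilemma just posed reduces to a toy expected-value comparison. This is purely an illustrative sketch: the probabilities come from the scenario above, but the per-life "value" weights are invented assumptions, and nothing here is claimed to be how any real car is programmed.

```python
# Toy sketch of the dilemma above as an expected-loss comparison.
# The probabilities come from the scenario; the per-life "values" are
# arbitrary assumptions -- the uncomfortable knobs the conversation is about.

P_DRIVER_DIES_IF_SWERVE = 0.75   # swerve to avoid the child
P_CHILD_DIES_IF_STRAIGHT = 1.00  # carry straight on

value_of_driver = 1.0            # assumed weights; equal by default
value_of_child = 1.0

def expected_loss(swerve: bool) -> float:
    """Expected 'loss' of each choice under the assumed weights."""
    if swerve:
        return P_DRIVER_DIES_IF_SWERVE * value_of_driver
    return P_CHILD_DIES_IF_STRAIGHT * value_of_child

print(expected_loss(swerve=True))   # 0.75
print(expected_loss(swerve=False))  # 1.0
```

With equal weights the arithmetic says swerve; raise `value_of_driver` above about 1.33 and the same arithmetic reverses — which is exactly why "who sets the values" is the real question.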
00:33:37 Completely. The thing about the trolley problem — which is what you're describing, simply that ethical dilemma of what to do in such a situation — is that it was originally devised to demonstrate that there are some situations in which there isn't a right answer, where it's genuinely hard to know what the morally correct thing to do is. And the reason so many people talk about the trolley problem in connection with driverless cars is that you can now imagine a situation in which, whether you want to or not, whether there's a right answer or not, you actually have to decide what you're going to do.

00:34:09 Now, if you speak to people who are experts in the field — the people who actually build these driverless cars — they tend to be quite dismissive of the trolley problem. They say it's such a rare, unusual occurrence that it won't ever happen, so we don't really need to worry about it, and you're certainly not going to program it in.

00:34:21 What do they mean, that it's a rare problem — the idea of having to choose whom to kill? Why do they say that's rare? It seems like it would be a fairly common, everyday occurrence.

00:34:33 I mean, I have actually changed my mind on this, because I used to agree with them. I remember having a conversation with Professor Paul Newman from Oxford, who runs Oxbotica, a driverless car programme in the UK. He said to me, 'Has it ever happened to you? Have you ever had to choose whom to kill while driving?' Well, no, I haven't. 'Do you know anyone who's had to choose whom to kill?' No, I suppose I don't. 'Have you ever even heard of anyone who's had to choose whom to kill?' Well, no. 'There you go, then — we're getting sidetracked by this problem when there are much more interesting questions to ask about driverless cars.' And I kind of bought into that argument. But then the trolley problem happened to my husband.
00:35:23 The trolley problem happened to your husband? Wow. Can you tell the story?

He was driving down the road, and coming around the corner was someone trying to escape from the police. They cut into the wrong lane and were driving directly towards him, and there was traffic in the other lane as well. So he essentially had the choice: stay where he was and have a head-on collision with the person fleeing the police; swerve to the right, head-on into the traffic in the other lane; or swerve to the left, where there was a cyclist he was going to hit if he swerved. And he had our daughter in the car. So he decided, essentially, to save himself, and went for the cyclist. But thankfully — thankfully — the cyclist saw the situation unfolding and went up onto the pavement to get out of the way, knowing my husband was about to mount the kerb. So I've come to think that maybe this isn't as unusual as some people imagine.

00:36:05 Wow. Thank you for sharing that
00:36:17 story. It highlights a few things I thought about when I was reading the book. We don't know how we will react until we're in that moment.

Of course — of course. But however you react, you can justify it afterwards. Imagine that something really bad had happened in that situation — someone did end up getting hurt. You take my husband up into the dock in the courtroom afterwards and ask him to explain his decision, and he can say: it was a snap decision; I just had to act in the moment. You can't fully hold a person accountable for what they do in high-pressure situations like that. Whereas if an algorithm is driving the car, you can go through the code line by line and see exactly what it was that gave rise to the decision it made. And that changes the rules slightly.

00:36:58 But doesn't the decision it makes also depend on the values we assign to the participants in the scenario?

Only if you decide to program it that way — only if you decide to address the trolley problem head-on and say: how old is this person, how old is that person, what's the chance this person survives. You don't necessarily have to do that.

00:37:27 But wouldn't it anyway? To your point: you said an autonomous car's number one job is not to hit anyone. The way a lot of people mistakenly think about how these cars will be programmed is that their number one objective will be not to endanger the driver or the passengers. But we haven't come to a determination about how that's going to play out, because you could imagine a scenario — say every car is autonomous, or fifty percent of them, whatever it is — where the sum total of everyone involved in driving or riding in these cars would be better off with an algorithm that functions a different way, one that prioritizes life differently than simply protecting the person in the car. It's straightforward if it's just the people in the car; but if it's not that way, how do you make those decisions?

Well — I know, I mean, it's a
00:38:13 really difficult problem, and no one has an easy answer. But there was a survey done — in fact, going back to that comment you mentioned earlier, I think it was the same study — which asked people: if you were making the rules for driverless cars, how would you want the rules to go? And basically everyone agrees that the car should save the most people possible. That's how it should work. But then if you ask people, 'Would you buy a car that would save the most people possible, if that sometimes meant you yourself would be killed?' — well, no, of course not. Of course they wouldn't. So I think it ultimately comes back — and we see this in the courtroom as well — to the difference between your own personal incentives and motivations and what you want for everybody. What's right for the individual and what's right for the group are not always the same thing.

00:38:54 I'm stuck on this question of values, because I've thought about it before. Have you read Nick Bostrom's book, or any of his work
00:39:05 on superintelligence?

Yeah, yeah.

During the time that I was reading that book, I came across — I don't remember where — a historical account of how the insurance industry was born and how actuarial tables were developed. As populations were moving into the city — in London, I think it was — the need for life insurance and things like that developed, and there needed to be some way to put a value on people. That was the first time anyone had really done that, and I think this is a further evolution of it: trying to assign values to human lives, to people and circumstances.

00:39:35 Another thing that you talk about in the book — we've mentioned it, I believe, only in episode two, with Jim Rickards, though perhaps in one or two other episodes as well — is Bayes' theorem. It's something I've wanted to do a full episode on, and I'd like you to at least take this opportunity to talk about it a little: why do you discuss it in the book, and what is its relevance to machine learning and to algorithms?

00:40:03 Yeah. I
mean, it's just one of the most powerful equations that has ever existed, really. And at its heart it's a really simple idea. I love a frivolous example, so in the book I use quite a silly one to illustrate it. The idea is that you update your knowledge based on the information that you have. You throw away the notion that something is one hundred percent absolutely true or false, and you start talking instead about your certainty — your belief, almost — in something.

00:40:35 So let's imagine that we're in a restaurant somewhere, and you say to me, 'I think that's Lady Gaga over there, sitting at the other table.' Before I turn around to have a look, I will already have some idea of how much I believe your hypothesis that it's Lady Gaga. Maybe I'll take into account where we are in the world, how posh the restaurant is, how likely I imagine it is that Lady Gaga would be in this restaurant at all. But then, as I look at the woman you're pointing at, there are different things that I take into account: whether she's got bodyguards with her, whether she's got blond hair — all these different things. With every new piece of information that comes in, I update my belief accordingly, and it can go up or down. Then perhaps, if I notice that she's wearing a meat dress or something — which is not something you tend to find on people who aren't Lady Gaga — that might be enough to tip me over my threshold, so that I'm happy to conclude that it is Lady Gaga. And that's essentially the idea — the very basics of the idea — behind Bayes' theorem: it's a systematic way to update your belief in something. It stops you having to be absolutely certain about things and lets you deal instead in degrees of belief and uncertainty.
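The Lady Gaga story can be written as a few lines of sequential Bayesian updating. This is a sketch with invented numbers: the prior and the likelihoods below are assumptions chosen purely to show the mechanics of Bayes' rule, not figures from the book.

```python
# Sequential Bayesian updating, restaurant-celebrity style.
# All probabilities are invented for illustration.

def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

# Prior belief that the woman at the other table is Lady Gaga.
belief = 0.01

# Evidence as (description, P(evidence | it's her), P(evidence | it's not her)).
evidence = [
    ("posh restaurant",      0.90, 0.30),
    ("has bodyguards",       0.80, 0.05),
    ("wearing a meat dress", 0.50, 0.0001),
]

for description, p_if_her, p_if_not in evidence:
    belief = update(belief, p_if_her, p_if_not)
    print(f"after {description}: belief = {belief:.3f}")
```

Each observation nudges the belief up or down; the meat dress, being wildly unlikely on anyone else, does most of the work and pushes the belief past any reasonable threshold.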
00:41:44 And in the case of driverless cars, this stuff is really, really important. Think about the blue dot on your GPS when your phone is telling you where you are. What that blue dot is signifying is that there's some uncertainty in your position. When you're walking around with a mobile phone, it doesn't particularly matter whether you're here or three metres to the left. But when you're in a driverless car — in any car, full stop — those three metres could be the difference between being in your lane and being in a lane where you're driving straight into oncoming traffic. So you need to be able to deal with messy data that has errors in it, with real uncertainties. And that's why you need your car to have, essentially, a belief about where it is, not just a measurement of where it is.

00:42:30 It's the difference — another way to talk about it is the difference — between deductive and inductive reasoning.
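One common way to make "a belief about where it is" concrete is to carry a position estimate together with its uncertainty and fuse each new sensor fix into both — the measurement-update step of a one-dimensional Kalman filter. This is a generic textbook sketch with made-up numbers, not an algorithm from the book.

```python
# Fusing a noisy position fix into a belief (mean + variance), in one dimension.

def fuse(mean: float, var: float, z: float, z_var: float) -> tuple[float, float]:
    """Combine a prior belief N(mean, var) with a measurement N(z, z_var)."""
    gain = var / (var + z_var)        # how much to trust the new fix
    new_mean = mean + gain * (z - mean)
    new_var = (1.0 - gain) * var      # uncertainty always shrinks
    return new_mean, new_var

# The car believes it is at 0.0 m laterally, give or take 3 m (variance 9).
mean, var = 0.0, 9.0
# A GPS fix reads 2.0 m with roughly 1 m of noise (variance 1).
mean, var = fuse(mean, var, 2.0, 1.0)
print(mean, var)   # the belief moves toward the fix and tightens
```

Three metres of slack is exactly the lane-width problem from the conversation: after the update the spread shrinks, and repeated fixes shrink it further.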
00:42:46 Instead of saying, 'there's certainty — all I have to do is solve this equation and I'll have a clear answer,' it's dealing with complexity: a world so complex that you can't come to a definitive solution, and you have to do your best. And the other thing you mentioned there is this updating, this learning. These machine learning algorithms are learning, and they're learning in a way that produces answers better than anything we've seen before — and yet, because of how they're learning, we often don't know exactly how they arrived at the answers they did.

00:43:14 Right, often that's the case. The big analogy I like to use for the difference between traditional programming and this machine learning and artificial intelligence — the newer stuff being implemented much more now — is training a dog to sit. When you train a dog, you don't write out a list of instructions for it. You don't say, 'Okay, to sit, I'd like you to move this muscle, then that muscle, then lower your tail,' and so on. All you do is clearly communicate the objective: maybe you push down its backside and say the word 'sit,' and then you repeat that process over and over, rewarding it whenever it gets something right and ignoring it whenever it does something wrong. If you repeat that enough times, the dog works out what you wanted it to do. It seems like a miracle whenever it happens — especially when my dog manages an actual trick. But all the steps in between, the way the dog decides to achieve the objective — it makes all of those decisions itself; it works out the process itself. And that's really what's happening with machine learning algorithms: you clearly communicate to the computer what your objective is, you reward it when it gets it right, and you let it work out all the steps in between itself.
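The dog-training analogy maps directly onto the simplest kind of reinforcement learning: specify only the objective through rewards, and let the learner discover the behaviour itself. Here is a deliberately tiny, self-contained sketch — the actions and reward scheme are invented for illustration, not taken from the book.

```python
import random

# "Training the dog": the learner is never told how to do the task,
# only rewarded when the outcome is right; it works out the action itself.

random.seed(0)  # deterministic demo

ACTIONS = ["sit", "roll", "bark"]
TARGET = "sit"                       # the trainer's goal, never stated to the learner

value = {a: 0.0 for a in ACTIONS}    # learned estimate of each action's payoff
counts = {a: 0 for a in ACTIONS}

for _ in range(200):
    # Mostly repeat what has worked best so far; occasionally try something new.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: value[a])
    reward = 1.0 if action == TARGET else 0.0          # treat, or no treat
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]  # running mean

best = max(ACTIONS, key=lambda a: value[a])
print(best)   # the learner converges on "sit"
```

All the "steps in between" — which muscle to move, in the dog's case — never appear in the code; only the objective (the reward) does.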
00:44:37 Now that we're on autonomous vehicles, there's one thing I don't want to forget: I want to circle back and discuss this notion of complacency. As these computers and these algorithms become better and better at doing things, we get even worse at doing them ourselves.

Yeah.

00:44:54 I lived in Italy out of college — I was based in Florence — and it's a good example. It was a brand new city, and at first I didn't know where the hell I was. But within a few weeks or a month, I could be in neighbourhoods I'd previously had no sense of, and in my head I now had a map: I knew where I was and could get home at any point. With GPS it's a different story — leave me somewhere now and I will get lost. And another example, because it's a great story: I learned to drive on a stick shift, in cars with no bells and whistles — certainly no parking alarms or anything like that. Then, after not driving a stick for a few years, I rented a car in Europe, and I realized I had gotten used to parking with a camera that shows a CGI graphic of how the car fits the space. I hadn't noticed how much my senses had deteriorated. It's scary. What do we do about this? These machines do increasingly better jobs than we can, and we're offloading these menial tasks to them so that we can become more efficient at other things — that's the theory, supposedly. But
00:46:04 we're losing the ability to do things that, in certain cases — in a crisis — we will need to be able to do, and we can't.

Yeah, you're exactly right. And I think the case of autonomous cars is a really important illustration of this point, because ultimately, the way driverless cars are set up at the moment, the car is in charge — the car is doing all the driving — but you are there to monitor it and to step in in case of emergency. And that sounds absolutely fine, except — except for one thing. Imagine we get to a stage where a driverless car makes a mistake maybe once every ten thousand miles. You are going to lose your skills, and yet the moment you have to step in and take over is the worst, most difficult moment of all — the moment when you need to be absolutely on it, when you need your driving skill at its sharpest. That's incredibly difficult to do.

00:46:46 But even aside from the loss of skill, there's also the issue of handover — of tirelessly paying attention, looking out for something going wrong. The really tragic example of this is the Uber crash, where the human monitor, as they called the person —

The driver, effectively.

00:47:18 — was looking down at their lap at the moment a person walked out in front of the car, and the car failed to brake. It hit and killed her. It's really awful: a completely innocent bystander, someone who was just crossing the road. But ultimately, we've known about this problem for quite a long time — back since the 1980s, really, when people were beginning to automate nuclear power plants. There were all these essays that came out around that time saying: hang on a second — if you put the majority of the grunt work in the hands of the machine, and you just expect a human to sit there, stare at a monitor, wait for something to go wrong, and then jump in and be extremely skilled at exactly that moment, that is a recipe for disaster. And I do think we have to worry a little about this with modern driverless cars, while we're still at the stage where people have to step in in an emergency.

00:47:58 Well, there's a really
00:48:12 fantastic book on near nuclear catastrophes — I don't know if you're familiar with it; I forget the name of the author — that lists a number of them. One of them involves Petrov, whom you discuss in the book. He worked at — what was it, the central command for missile detection?

Missile detection, yes.

00:48:27 And he saw what appeared to be a missile launched from the United States, inbound to Russia, and he had to make a decision about whether to set a counterstrike in motion.

00:48:43 Right. His job was simply to monitor this automatic system, and if any missiles were detected, to pick up the phone and immediately inform his superiors, who would then launch the counterstrike — and nuclear war would engulf the planet. And this happened. In fact, as we're recording this, it was almost exactly thirty-five years ago.

Really?

Tomorrow, I think, is the anniversary. He was monitoring the system in the middle of the night when the alarm went off: missiles detected. His orders were very, very clear — immediately make that phone call. But something gave him pause. He was concerned about the accuracy of the system, because for one thing it said it had detected only five missiles, and he thought: hang on a second — if America is launching nuclear war, why would they send only five?

00:49:28 Inductive reasoning. Why wouldn't they launch everything they had?

Exactly — it upset his assumptions completely. So he froze in his chair, unsure what to do, because he knew that if he made that call, there would be no one further up the chain to stop what came next: making that call meant nuclear war. But at the same time, every second he waited ate into Russia's chance to launch a counterstrike — if the alarm was real, waiting spelled the end for the USSR. He had no way of knowing for sure, one way or the other. In the end, thank goodness for all of us, he decided to assume the machine had made a mistake, and he held off. Twenty-three minutes later, I think, when the ground detectors failed to detect any missiles landing, he knew he had been right. It turned out that it was sunlight bouncing off clouds that the algorithm had mistaken for missiles.

00:50:23 So, thank goodness — he literally averted nuclear war by trusting his own instincts over those of an algorithm. That's one of my concerns — not specifically the case of nuclear war, but the issue of complacency. It's high on my list because I've experienced it in my own life.
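Petrov's hesitation can be framed in exactly the Bayesian terms discussed earlier in the episode: when the prior probability of a genuine first strike is tiny, even a fairly reliable detector leaves a lone alarm far more likely to be a false positive. Every number below is an invented assumption, used only to show the shape of the reasoning.

```python
# Base-rate sketch of the Petrov decision. Numbers are illustrative only.

prior_attack = 1e-5        # assumed prior chance of a real first strike that night
p_alarm_if_attack = 0.99   # detector fires if missiles really are inbound
p_false_alarm = 0.001      # detector fires anyway (e.g. sunlight on clouds)

posterior = (p_alarm_if_attack * prior_attack) / (
    p_alarm_if_attack * prior_attack + p_false_alarm * (1.0 - prior_attack)
)
print(f"P(real attack | alarm) = {posterior:.4f}")   # around 0.01
```

The detail that only five missiles showed up acts, in effect, as further evidence against the "real attack" hypothesis, pushing the posterior lower still.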
00:50:36 I can see that it's not a frivolous concern — it does happen; my skills have deteriorated in all sorts of areas. And we have these hair-trigger systems, like our nuclear systems — which I believe still have human intermediaries; we're not at the Skynet moment yet. But I think that's the bigger concern. The popular media has made the worry about machines taking over, when the more realistic scenario is that a mistake happens.

00:51:07 Yeah, of course — mistakes do happen, and to be honest, I think mistakes are completely inevitable. We can worry about minimizing them, and that's an important thing to do, but we also have to be realistic that you can't ever eliminate all mistakes. Algorithm or not, in the computer world or otherwise, I really struggle to think of any system in the world that is perfectly fair, perfectly unbiased, and never, ever makes mistakes. I just don't think that happens. So the big argument I try to make in this book is: well, maybe we're thinking about this in entirely the wrong way. Maybe we should just accept that these machines are going to have flaws and are going to make mistakes, and we should design them to wear their uncertainty proudly — and design them so that we can appeal against them when they do make mistakes.

00:51:51 That's the importance of having the code available for people to see what's going on — of these not being black boxes. Because if you have these algorithms that have so much control over our lives, but
that we can't actually scrutinize, something has to give. You have the great example of the disabled residents — that was in Idaho, wasn't it? What do you make of that? And then this could bring us into a conversation about reform, regulation, and the law. Are things being done better in Europe, with GDPR? What's the equivalent in the United States? Talk us through a little of this — how to navigate it.

00:52:18 Yeah. So the example you're talking about, in Idaho, is a story about sixteen disabled residents, and it's pretty terrible news, actually. The Department of Health and Welfare in Idaho decided to introduce a new budget tool that would work out how much state support each of these people was entitled to. These were people with very severe disabilities who qualified for residential care but had chosen to be cared for in their own homes instead, so the money they were receiving was really important — for their independence, and for their care in general. They each went into the State Department, sat down, and were talked through the system, and the budget tool came up with a figure for how much money they were entitled to. And it was just really strange, because some people ended up with more money than they'd had in previous years — so it certainly wasn't a political decision to slash funding — while other people ended up with way less than they'd had: tens of thousands of dollars below what they needed. They tried to appeal; they tried to ask why on earth these decisions had been made; but no one in the State Department would help them.

00:53:30 So they ended up filing a class-action lawsuit and getting the budget tool turned over for scrutiny. And when it was — and this is what I find quite extraordinary — this algorithm that held so much power over them, that so many people in the State Department had put their faith in, wasn't some clever AI, some beautifully crafted mathematical model. It was just an Excel spreadsheet. And not a very well-crafted Excel spreadsheet, either: there were errors in the formulas, the data it was built on was bad, and there were loads of bugs in the thing. And yet, just because it was wrapped up as this fancy, shiny machine, it was given all of this power and authority.

00:54:01 I just think we've been living, really, in the Wild West. You can
00:54:16 collect data on whomever you want, about whatever you want. You can create an algorithm that makes any decision you want, impacting anyone you want, and there's no one to stop you from doing it. We used to be in a world where you could put any coloured liquid in a glass bottle, sell it as medicine, and make a fortune —

Snake oil.

00:54:29 Right. But we stopped allowing that, for one thing because it's morally repugnant, but also because it just harms people. And at the moment we haven't got that backstop — someone like the FDA, checking that the things having this much impact on people's lives offer benefits that outweigh the harms they impose. I really think that's what we need: some kind of regulatory body that approves algorithms and says, yes, this one is good enough to be used on the public.

00:55:02 A tall order. Right now the market incentivizes and rewards people to work in the private sector, and the private sector generates the profits and then lobbies the government to prevent the regulations.

00:55:19 Yeah, it's true. It's kind of the best and worst of capitalism all in one. On the one hand, you have it driving innovation forward, creating these incredible, remarkable new technologies at speeds that are just lightning compared to anything state-funded groups could manage. But at the same time, without regulation and control, there's no real way to be sure that society is actually seeing the positive benefits of those technologies.

00:55:36 It's interesting — you bring up the Industrial
00:55:51Revolution right Because that was one place where we saw similar dynamics one particular part of the economy the industrial parts to steal the oil the railroads They generated so much profit It had so much power that I think there's something similar happening today with respect to technology companies
00:56:11They've amassed so much power in terms of data Facebook has And it's not just Facebook with Facebook's kind of the poster child have had at it with their data and there has been an exchange But it's not clear what the value of that exchange is right and what
00:56:27people would be willing to give up all of this privacy for what they're getting in return right which is to be able to send a few messages that our friends have been other cases some really powerful That's great things You bring up twenty three and me and the
00:56:39potential And I mentioned to you before we started we had Eric shot on the program from the Eiken Institute and the challenge that he has and others have which is getting the data You need to do the analytics to actually be able Tio save people to make leaps
00:56:53and bounds in in cancer research in other areas There's so much potential that comes from being able to harness the power of these algorithms But at the same time there are risks and there are dangers and how to navigate That is not an easy task No it's not
00:57:09an easy task It's not an easy task It all But I kind of think that actually we can't be as individuals We can't be complacent about this stuff I think we have to recognize the trades that we are making I think we have to be slightly more switched
00:57:22on about what is going on around us Because we kind of just let it all happen You know data has been the new oil right on DH We've been living in the Wild West I know there's mixing up my oil and gold analogies but no it's a good
00:57:34well though both that Well I guess they weren't on the West but there was a lot of oil exploration Okay right Texas was a big part of that And the Midwest maybe There we go Okay We're living in the Midwest You're excused You're coming in for the transatlantic
00:57:47Hannah it was wonderful having you on I want you to stick around because I want to ask you some other questions for arm or nerdy viewers that we're going to make available through our Patri on But I appreciate you coming on the program Well thank you very much
00:57:58for having me Thank you And that was my episode with Hannah Fry I want to thank Hanna for being on my program Today's episode of Hidden Forces was ported at Edge Studio in New York City For more information about today's episode or if you want easy access to
00:58:16related programming visit our website at it enforces dot io and subscribe to our free email list If you're a regular listener the show take a moment to review us on apple podcasts Each review helps more people find the show and join our amazing community Today's episode was produced
00:58:37 by me and edited by Stylianos Nicolaou. For more episodes, you can check out our website at hiddenforces.io. Join the conversation on Facebook, Twitter, and Instagram at @hiddenforcespod, or send me an email at dk@hiddenforces.io.
00:58:57 As always, thanks for listening. We'll see you next week.

Transcribed by algorithms.