A book club for developers.
BookBytes is a book club podcast for developers. Each episode, the hosts discuss part of a book they've been reading, and they also chat with authors about their books. The books are about development, design, ethics, history, and soft skills. Sometimes there are tangents (also known as footnotes).
Adam Garrett-Harris
Jason Staten
Megan Duclos
8/10/2020
Hello and welcome to BookBytes, a book club podcast for developers. Today we’re talking about “You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place” by Janelle Shane. I’m Adam Garrett-Harris.
I’m Jason Staten.
And I’m Megan Duclos.
Megan is new to the podcast and will hopefully be a regular co-host. We work together. Can you tell us a little bit about yourself?
Oh, sure. Where to start? I’m a software engineer at Pluralsight, just like Adam. We actually work on the same team. I’ve been coding for a couple of years now, I’m still pretty new to it, but I really like books, a lot. So this is-
Yeah.
Right up my alley.
That’s why I wanted you on the show.
Yeah.
Welcome.
Cool.
And I’m super excited to have you, yeah.
Thanks for having me!
So, the author, Janelle Shane, she writes about artificial intelligence on her blog, AI Weirdness, and how AI sometimes gets really weird and hilarious, or sometimes really unsettling, when it gets things wrong. And she’s been featured in the New York Times, in the Atlantic, and all sorts of places; and now she’s written this book, kind of based off of that blog. So let’s get into the book. What did y’all think overall?
Overall I really liked it, I found it really interesting. Something I thought of tonight as I was finishing the book is it surprised me how AI is everywhere but then also on the other side, it surprised me how limited it is; because I really didn’t know much about AI before reading the book, but a lot of things like that surprised me and I found it really interesting.
Yeah, I definitely think she takes some of the mysterious parts of AI and, kind of, pulls off the covers a bit to see what’s actually going on. And it’s not done in a way that is dismissive or diminishing of it, but rather in a way of, yeah, like, this promise was made that AI can accomplish this thing, but in fact, that has some human assistance in it or it only works in this super narrow case.
Yeah, that’s kind of what the introduction talks about is, like, “Hey, is AI soon going to be everywhere?” Well, on one hand, it already is. It’s online, determining ads, suggesting videos, detecting social media bots, being social media bots. It’s, like, screening candidates’ résumés, and approving loans, and it’s a little bit in self-driving cars, and even in some, like, not-so-self-driving cars, and it’s in smartphones.
But then, also, no on the other hand. It’s not flawless. It’s way overhyped. It can’t do everything we think it can for lots of different reasons.
Yeah, and it’s also, like, not very good at some of the things that you just said it does. Like-
(laughs) Right.
(laughs) Like, the-
It probably shouldn’t be used for some of those things.
Yeah, yeah.
So I like, at the beginning, she has five principles of AI weirdness. Like, the danger of AI is not that it’s too smart but that it’s not smart enough. The second one is that AI has the approximate brainpower of a worm. So… (laughs) Yeah. And AI doesn’t really understand the problem you want it to solve, but AI will do exactly what you tell it to do, or at least try its very best. And it will always take the path of least resistance.
Yeah, it made me think of water, where water always takes the simplest path that it can, like the most downward path that it can, until it eventually-
Right.
Runs into something that it has to go around.
Yeah, there’s an episode of The Simpsons where Homer goes off and runs away, he leaves home, and Marge finds him and he’s like, “How’d you find me?”
And she’s like, “I just left the house and started going downhill.”
(laughing)
Sounds very AI-like.
Yeah.
Yeah.
If the criteria is to, like, make as much distance as possible, like, I mean, certainly downhill is what you’re going to take.
Yeah, I like the examples of trying to teach robots to walk, so the goal was “Make it to this point.” You start at Point A and end up at Point B, and what it would usually do is grow really tall and then just fall over.
(laughs)
Yeah, that was pretty funny. Like, “I got there, I did it!” (laughs)
Yep.
Yeah. So what did you think about the part where she kind of defines rules-based programming compared to machine learning?
Yeah, I mean, rules-based programming involves listing out every single step, whereas machine learning, kind of, just figures out the rules for itself by trial and error.
I thought it was helpful that she compared it to rules-based programming because that is what I’m most familiar with, and so seeing how they were different from each other helped me understand better what it is.
I think for me, as a developer that is very much rules-oriented, like, I mean, imperative type programming, it’s almost a little bit uncomfortable knowing that, like, you’re creating this thing that’s not exactly right but instead, like, it’s getting to some probability of being correct.
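To make the contrast concrete, here’s a minimal sketch in Python (a toy example of ours, not code from the book): the rules-based version is steps a human wrote out, while the machine-learning version only gets labeled examples and tunes a number by trial and error.

```python
# Rules-based: a human writes out every single step.
def is_spam_rules(subject):
    return "FREE" in subject or subject.count("!") > 3

# Machine learning: supply labeled examples only, and let the program
# adjust its own internal number by trial and error.
examples = [
    ("FREE money!!!!", True), ("Meeting at 3pm", False),
    ("You won a FREE prize", True), ("Lunch tomorrow?", False),
]

weight = 0.0                              # the one "rule" it may tune
for _ in range(100):                      # rounds of trial and error
    for subject, label in examples:
        feature = subject.count("!") + ("FREE" in subject)
        guess = weight * feature > 0.5    # current best guess
        if guess != label:                # nudge the weight when wrong
            weight += 0.1 if label else -0.1

print(weight)  # the learned "rule" is just a number, not readable steps
```

The discomfort Jason describes falls out of that last line: what the learner ends up with is a tuned number and some probability of being right, not steps you can read.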
Right. Well, I mean, in some of the problems you give AI there is no one right answer.
Mm-hmm (affirmative).
Like, how can you make an algorithm to generate cat names and how would you write unit tests for that? What is the correct possible answer?
(laughs)
Like, the definitive list of possible correct answers. There isn’t such a thing.
Yeah, and that’s kind of the whole point of getting an AI to do that for you, is that you don’t have to think of that list.
Yeah. It said it’s more like teaching a child than it is programming a computer. You just let the AI figure it out and it comes up with its own rules and sometimes its rules are bad. It talks about, like, different ways to detect a bad rule. Not that you can really… not that you can really see what the rules are. It’s really hard to look into an AI and see what it’s thinking.
Yeah, that was one of the things that I did like that she called out. Where Google had its DeepDream project where it, kind of, made really funky artwork, and part of that was going and picking at the nodes within the neural network that it created and amplifying how important they were to see, like, what does this thing actually represent, in order to say, “Oh, well actually we need to not have this, or decrease its importance in the realm.”
Hmm.
Do we want to talk about the title of the book and where it came from? (laughs)
Oh, yes! Yes, so I originally thought the title was the idea that, the AI, it looks at things and it’s like, “I like it!” Or, “I don’t like it.” But you want to explain?
Yeah, yeah. So she’s talking about how, let me see here, yeah, she’s talking about teaching it like it’s an impressionable child. She kind of knows that the AI is going to start with a blank slate, and she starts training the AI to produce pickup lines.
(laughs)
And that was one of the pickup lines that it came up with (laughs). And a lot of them were really weird! Like, some of the other ones were, “You must be a tringle ‘cause you’re the only thing here.” And-
A triangle?
No! Tringle, there’s no “a” in that word.
Oh, okay.
(laughs)
(laughs) Which makes it even funnier? I don’t know.
You’re so human, Adam. Overlooking that. (laughs)
(laughs)
Yeah, and these were, like, the best ones that she’d curated.
Yeah.
And the funniest.
There were others that were a lot worse.
That was a common theme within the book, I felt, as well. It was that human intervention, or like, working alongside the AI is definitely a critical component of being able to pick out, like, what are some of the top ones from this generated output that it came to?
Yeah. Yeah and she also talks about things in this book that she is calling AI and things she’s not calling AI. One thing she is calling AI is machine learning algorithms and that’s typically what she’s talking about in this book, I think. And then there’s deep learning, neural networks, recurrent neural networks - whatever that is - Markov chains, random forests, genetic algorithms, lots of other stuff, predictive text.
But things that aren’t AI are stuff from science fiction, rules-based programming, humans in robot costumes.
(laughs)
Or, you know, humans hired to pretend to be AIs, which actually happens a lot.
I believe that’s the history of where the name of Amazon’s Mechanical Turk came from: historically, there was a person who claimed to have invented a machine that could play chess better than any human could, and in fact it was just a box that had a human in it who played chess really well.
(laughs)
(laughs)
I’m going to go and find the Wikipedia article for that one.
Okay, and that’s where Amazon Turk came from?
Mechanical Turk.
Which is… Mechanical Turk… she mentions it in this book, actually. It’s humans that do a bunch of really simple tasks that you can hire out… it’s almost like humans as a service, and she uses it to, maybe, gather data, or I don’t know, things like that, things where you’d want humans. But then one of the problems with it is that the humans sometimes, ‘cause they’re not paid very well and you want to do it as fast as possible to get paid, they’ll use bots to do the task, which kind of defeats the purpose.
So you have to do the Turing Test against them. The Turing Test being-
Yeah!
The test proposed by Alan Turing, where if something can fool one third of humans into thinking that it, itself, is a human, then it passes the test. But Janelle actually-
That's a pretty low bar to pass. (laughs)
One third of humans?
Yeah, it’s like the oh, I can’t remember the name for it, the test in movies, I’ll remember it. You can delete this part. I can’t… it’s called… I’ll look it up.
I think she mentions the Turing Test being in movies. Like, I can’t even think of it now. Ex Machina? But yeah, anyway.
Uh, that’s not what I’m thinking of.
Hmm.
Oh, the Bechdel test! That’s what I’m thinking of. It’s like a low bar to pass. The Bechdel-Wallace test is like a measure of representation of women in fiction. So, like movies and books and stuff.
Oh.
Where basically the requirements for that are that it has to have at least two women who talk to each other about something other than a man and those two women must be named characters.
(laughs)
Which you would be surprised how many movies don’t pass that, but I digress. Way off topic.
Wow. I was trying to think if even Pride and Prejudice passes that test because they talk about men a lot even though there’s a lot of women characters.
(laughs) Yeah.
Yeah, with Mechanical Turk they have to, in order to ensure humans are doing it, give them some other random tasks just to make sure they’re paying attention.
Yeah, I’ve actually done some Mechanical Turking, I guess, as they call it, in the past.
Okay!
Just to try it out. And you are offered up, like, an activity that you do and maybe it is looking at an image and clicking on all of the cows in it. Or I saw one-
(laughs)
That I thought was actually a pretty awesome idea. It was for races: when there’s, like, a big race that has hundreds of people in it... going and picking out which pictures belong to which racer is a challenging job, and while machine learning can definitely do a lot, there’s also a need for, like, human training on some of that, as well. Or like, training those models in order to do it. So, a lot of times, that can actually wind up going through a set of Mechanical Turk workers to say, “Oh, yes, this is this bib number and this is this bib number.”
Hmm. Okay, so chapter two talks about how AI is everywhere, but where is it exactly? And one of the weirdest examples was that it runs a cockroach farm. I don’t remember exactly why they had a cockroach farm.
I don’t… yeah, I don’t remember the reason for it, but-
That’s, like, a recurring theme throughout this book; she uses that as an example. (laughs)
Yeah, is an algorithm to go and optimally raise a set of cockroaches.
Okay, so they would actually grind up the cockroaches for some Chinese medicine.
Oh, okay.
But it said, actually, it’s a good job for an AI because, for one thing, it has a really quick feedback loop, because cockroaches don’t live for very long before they reproduce again. What else?
And given that you give it only, like, a narrow set of controls, too. Like, I mean, that’s the important thing. She mentions that, I mean, if it were able to go and, like, crank the heat up in one room so much that it would wind up killing the whole room in order to, I don’t know, give success to another room or something.
Like, that could also be a way for the AI to succeed, but having guardrails in place in order to stop that is probably something you need to remember, too.
Yeah.
And it moves on, as well, to get into self-driving cars because that is one of the, kind of, big, hyped AI cases that exist in today’s modern world, as well.
Right, and I thought we were really, really close to getting self-driving AI without having to have a human sit in the driver’s seat.
But we’re really not that close. (laughs)
After this, I’m not hopeful to see that in my lifetime.
Yeah, that’s kind of disappointing, but oh well. (laughs)
But I do expect to see a high level of automation. I mean, we’re already seeing lane assist and smart cruise control, and it can go for long stretches on boring highway roads without needing any assistance. If there’s anything unusual, you can take over, but it did say one problem is that humans are not good at taking over quickly when they’re used to not paying attention, and I can imagine that being really boring, just sitting there waiting.
Yeah, it can be, it can be bad enough taking a long road trip with just cruise control on sometimes.
Yeah
Where you think, “Where did that last half hour go?”
(laughs) Yeah!
Yep.
And she does also mention that, I mean, some of the options to get us more automation wind up looking like existing public transit options, where you have, say, like, a caravan approach where one car is actually driving with a human in it and then the rest of them are, like, tailing it, kind of in lockstep.
Hmm.
Or having, like, specific paths that are designed for AI-type driving. Kind of like, a-
Yeah. Or maybe tunnels.
Yeah. Or like a tunnel but at that point, we also have means of transit that go through tunnels called subways.
Yeah.
Or designated paths that are rails that things like trains can’t steer off of. So...
Right. Yeah, I mean, it talked about there’s a lot of ways that you could trick a self-driving car. You could just put up a stop sign or, like, paint a tunnel on a wall.
(laughs)
And then, like, how is it going to know to recognize emus?
When it’s never seen an emu in its training set.
Yeah.
Yeah.
Or what if the zombie apocalypse happens and it doesn’t know that it’s okay to run over the zombies, they’re not actually pedestrians.
(laughing)
There’s just no way to train them on that. Like, the world does change and a more serious example is someone was like, “Hey, let’s make an AI that recognizes cars. Oh wait, there already is one. Oh, wait, no. It’s trained on data from, like, the 1980s so it doesn’t recognize any modern cars.” I thought it-
Yeah.
Was going to say, “It was trained on normal cars and then the Cybertruck came out and it can’t recognize that.”
Well… (laughs) That… that wouldn’t really, I don’t know, that wouldn’t be as common of an issue.
Yeah.
Anyway, yeah.
But, I mean, I barely recognize it as a car.
(laughs) I… I think you’re right.
So let’s move on to how they learn.
I liked this chapter.
Chapter three had a lot of-
And the magic sandwich hole!
(laughs) You want to describe that?
Sure! So she’s trying to illustrate how machine learning works and she says, “Hypothetically, let’s say we have this… we’ve discovered a magic hole in the ground that produces random sandwiches every few seconds.” Which, that’s very hypothetical, but…
So the problem with that is that the sandwiches are very, very random. So ingredients could include jam, ice cubes, old socks, literally anything could be on the sandwich. So we would have to sit and sort through all of the bad sandwiches to find any good sandwiches which is really tedious work, so she’s talking about hypothetically training an AI to do that work for us.
And I thought it was a really great way to illustrate how it works and what an AI would do with that information. You tell it a cheese and chicken sandwich is good but, like, if you add mud to that sandwich it’s definitely a no. But then she goes through-
Yeah, but it’s like, how does it know? How does it know if it’s the mud that’s bad or if it’s the chicken that’s bad?
Yeah. Yeah! Or if it’s just specifically the combination of mud and chicken. Like, maybe mud is good with something else.
Yeah.
Um…
Mud and peanut butter.
(laughs)
(laughs) Mud and peanut butter. But she goes through, like, all these different ways that an AI could get really confused and think that, like, eggshells and peanut butter is a good sandwich but, like, peanut butter and marshmallow is bad. She gave the fluffernutter example.
I’m going to have to try fluffernutter. Eh, I don’t know, it might be too much marshmallow.
The fluffernutter? I don’t really like marshmallows so I can pass on that one.
You could always swap for bananas.
But I-
Oh, yeah.
I do banana all the time.
Yeah, I think she talks about, like, one problem could be that there are so many bad sandwiches compared to the few that are actually good that it just takes a shortcut and always assumes it’s bad.
Yeah! (laughs) Yeah, I was just about to say that. It’s like, “Okay, you don’t like any sandwiches. We just won’t approve of any of the sandwiches.” Which is just illustrating how an AI will take the path of least resistance where it’s just like, “I’m just not even going to try anymore because 99% of the sandwiches I think are good, you say are bad. So I give up, basically.” (laughs)
Yeah.
“I’ll still be 99% accurate if I say all the sandwiches are bad!”
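That shortcut is easy to demonstrate in a couple of lines (toy numbers of ours, not the book’s): on imbalanced data, “reject everything” scores high accuracy while being useless.

```python
labels = ["bad"] * 99 + ["good"]        # 99% of the sandwiches are bad
predictions = ["bad"] * len(labels)     # the lazy shortcut: reject everything

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(f"{accuracy:.0%}")  # 99% -- and it never finds a single good sandwich
```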
I think this section does a good job of showing, kind of, the diagram that an AI produces, where it has all of these inputs, and it’s just a bunch of numbers and it does some sort of calculation on them, and there may be, like, several layers of that happening before it comes to the final output.
Yeah, and those layers are necessary to deal with combinations. I mean, like, if you have a single layer of nodes to handle input, then you get simple attribution of peanut butter: good, mud: bad. And that’s kind of the extent of calculation you have; whereas the second layer can go and handle things like the combinations. Or that mud is a dealbreaker and it always makes everything fail. Like, if you have mud on any sandwich, like, do not pass.
Hmm.
And, yeah, I don’t remember what they call it… it’s the hidden layer, I believe?
I don’t know.
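A hand-wired sketch of that layering idea (the weights here are made up for illustration, not from the book): one hidden node fires only on the peanut-butter-plus-marshmallow combination, and another treats mud as a dealbreaker, which no single layer of per-ingredient scores could express.

```python
def relu(x):                      # common activation: cut negatives to zero
    return max(0.0, x)

def sandwich_score(peanut_butter, marshmallow, mud):
    # Hidden node 1: fires only on the fluffernutter *combination*.
    fluffernutter = relu(peanut_butter + marshmallow - 1.0)
    # Hidden node 2: treats mud as a dealbreaker, however good the rest is.
    mud_alarm = relu(10.0 * mud)
    # Output layer combines the hidden nodes into one score.
    return fluffernutter - mud_alarm

print(sandwich_score(1, 1, 0))    #  1.0 -> good
print(sandwich_score(1, 0, 0))    #  0.0 -> meh
print(sandwich_score(1, 1, 1))    # -9.0 -> mud ruins everything
```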
I do like, as well, that it goes on to describe some of the other algorithms, talking about what they are and, like, what role they can wind up playing. So, like, Markov chains. Megan, you had mentioned predictive text on a phone; like, Markov chains are a good candidate for that, in that they are really lightweight to go and create and don’t need a lot of processing power or storage, and so for typing on your phone, just being able to say, like, “Here’s a good possibility of the next three words.” But overall, what they generate is not super high quality, or they can get stuck in a loop, like, “...under the sea, under the sea, under the sea…”
Yeah.
Yeah. And because they have really short memories, like, they’ll usually only have a couple of words in memory, the last three words that they suggested or the last three words that were typed in, so only having that much context doesn’t give them the full story.
Yeah, as opposed to recurrent neural networks that look back hundreds of words or longer. So they would be able to get out of the “...under the sea, under the sea…” loop that Jason was talking about. So I think she had a Markov chain here that was trained on Disney song data?
Mm-hmm (affirmative).
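A word-level Markov chain really is only a few lines; this sketch (with a toy corpus standing in for her Disney-lyrics training data) shows both how lightweight it is and how easily it loops.

```python
import random
from collections import defaultdict

corpus = ("under the sea under the sea darling it's better "
          "down where it's wetter under the sea").split()

# Transition table: word -> every word that ever followed it.
table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

word = "under"
output = [word]
for _ in range(12):
    word = random.choice(table[word])   # only one word of context
    output.append(word)
print(" ".join(output))   # tends toward "under the sea under the sea..."
```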
Yeah, so a good example of this is just on your phone and you start typing in a text message and then just hit the center word suggestion and I’ve seen people do this where you just tap the center word and see what comes out. And that can be kind of funny. And it learns based on what you’ve typed into your phone in the past, so everyone’s will be different even if you start with the same words.
Which is why you can get haunted by a typo.
It’s kind of fun.
Or sometimes keyboards will store a number that you put in one time and always offer you that number.
A number? (laughs)
Yeah.
Another example is a random forest algorithm, which, to me, kind of just looks like a flow chart, like a decision tree.
I mean, that’s actually what she refers to it as, a decision tree. It’s a bunch of, kind of, shallow decision trees all put together to kind of come to a conclusion.
Yeah.
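If scikit-learn happens to be available, the voting-trees idea looks roughly like this (toy sandwich features of ours, not anything from the book):

```python
from sklearn.ensemble import RandomForestClassifier

# Features: [has_peanut_butter, has_marshmallow, has_mud]
X = [[1, 1, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1], [0, 0, 1], [1, 0, 1]]
y = [1, 1, 0, 0, 0, 0]   # 1 = good sandwich, 0 = bad

forest = RandomForestClassifier(n_estimators=25, max_depth=2, random_state=0)
forest.fit(X, y)

print(forest.predict([[1, 1, 0]]))   # the 25 shallow trees vote; majority wins
print(len(forest.estimators_))       # 25 separate little decision trees
```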
And evolutionary algorithms, those are ones that I feel like I’ve seen examples of online, where they have, like, a generated course... So they have, like, a car that’s trying to go across a course that was created that has maybe hills and valleys in it, and sometimes it has to jump a gap or something, and the algorithm is able to go and try, like, different wheel sizes for the car or different weight distributions for it in order to find an ideal one, and the ideal one is the one that makes it the furthest on the map.
I’m going to have to go and search for that because it’s kind of a cool way of seeing, like, the progression of the algorithm where it starts off pretty terrible because it’s just randomly guessing at what would be a car to use and based on how it succeeds, the algorithm says, “Okay, this should move onto the next generation and some attributes from this thing should be taken.” Or, “This one failed completely and it should die and not contribute its genes to the next pool.”
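That loop (score, cull, mutate, repeat) fits in a few lines. This sketch evolves a single “wheel size” against a made-up fitness function standing in for distance traveled on the course:

```python
import random

def fitness(wheel_size):             # stand-in for "distance the car traveled"
    return -(wheel_size - 0.7) ** 2  # secretly best at 0.7, but the algorithm
                                     # never sees this formula, only the scores

population = [random.random() for _ in range(20)]   # generation 0: random cars

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                      # the rest die off
    population = [s + random.gauss(0, 0.05)         # children: mutated copies
                  for s in survivors for _ in range(4)]

print(max(population, key=fitness))   # ends up near the optimum, 0.7
```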
Right, yeah. An example in the book was that you’ve got a hallway that splits into two different hallways and the algorithm has to design a robot that will make people go down the right hallway, not the left hallway.
I loved this one.
And they can change the arm size and the foot size, and originally, like, they might just fall over because they made one leg too long, and then at some point, one falls over and it slightly blocks the left side, and so-
And then at another point it starts killing humans.
(laughing)
No, I think at one point before it starts killing them it just starts, like, yelling annoying things and so the humans just walk around it to the other side.
(laughs) Yeah.
And then eventually, let’s see, if it starts killing humans, it wins, right? No humans went down the left hallway.
Uh, no ‘cause then… Yeah, so-
So you have to change the goal.
It does win, yeah. So it does win and then they go in and say, “Okay, you can’t kill humans, now.” (laughs)
It changed the goal where “Humans go down the right side” instead of the goal being “No humans go down the left side.”
Yeah.
And then eventually, it just makes a robot so big that it’s, like, basically a wall.
And the picture for it is… awesome. It says, “Yes! We have evolved! A door.”
(laughing)
And it finally covers generative adversarial networks, which I thought was pretty awesome to learn about. I feel like I’ve seen the GAN abbreviation a handful of times and didn’t know what it was.
Yeah.
Or like, what it stood for? And so kind of hearing the description of it helped me out a lot. So basically, you have two machine learning algorithms working, I mean, against each other: one doing generation, and the other one attempting to detect, like, was it generated or not?
So it’s a generator and a discriminator.
Yes, and it is commonly used for, like, generation of images. So there is the website of, like, ThisPersonDoesNotExist.com or something like that where they-
Hmm, yeah.
You can go there and see all sorts of faces that are generated that are not real people and they are quite convincing as long as you look at the faces. If you look at the edges though, I mean, they can get a little bit scary. Or a lot of times people are missing ears and stuff, or they’re mismatched.
I’m looking right now, they are creepy good. I mean, not creepy, they’re just good.
Yeah, like, you could certainly use one in a smaller place, too, and be fooled. Like, think of a Twitter avatar or something like that. You have, like-
Oh, yeah.
An unlimited pool to generate fake avatars for Twitter.
Yeah, and this was only just introduced in 2014 as a technique. And what I thought was really interesting about this is that you have to give the generator, like, some sort of image to turn into the thing that it’s trying to create. So you can’t just say, “Make a picture of a horse.” You give it a picture of random noise and then it turns that picture into a horse.
And so that kind of made me think that it sounds like a pure function. Like, given this exact image of white noise, or random noise, it will produce the same image every time. Did you get the same feeling?
Um, I guess, I don’t know, I didn’t think too hard specifically on that front. But, yeah.
Hmm.
I’ll have to reread that specific part.
Yeah, it’s got an image at the bottom of page 103 where it has a picture of, like, just some dots, and then it kind of moves those dots around into a horse.
Okay, yeah. I see that depiction that you’re talking about.
And so what’s really interesting about this is that it’s much better to have a discriminator AI than it is to have a human, because at the beginning, the generator and the discriminator are both equally bad at their jobs.
(laughs) So the generator is terrible at generating pictures of horses, the discriminator is terrible at telling whether or not it’s a horse. It can’t tell the difference between a real horse and a bunch of garbage.
And that’s good because a human would just be like, “No. No. No. No. No. No. No.” But this discriminator is really bad. So yeah, it’s going to say “yes” sometimes and then the generator can kind of work that and it can get better and better.
Yeah, and she says that it, in a way, is using the generator and discriminator to perform a Turing Test in which it’s both the judge and the contestant. So then, like, over time, by the time the training is over, it’s generating horses that would fool a human judge, as well. So, like, they both get to get better together.
Yeah.
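For reference, the generator-versus-discriminator loop has a fairly standard shape. Here’s a skeletal one-dimensional sketch in PyTorch (assuming torch is installed), where the “real” data are just numbers near 5 rather than horse photos:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))  # generator
D = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1),
                  nn.Sigmoid())                                  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real = torch.randn(32, 1) * 0.1 + 5.0   # "real" samples cluster near 5
    fake = G(torch.randn(32, 1))            # generator turns noise into fakes

    # Discriminator turn: call real things real, generated things fake.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # Generator turn: fool the discriminator into saying "real".
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(5, 1)).detach())  # should print numbers near 5
```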
And while it is impressive in what it does, there are also many ways that the AI can go terribly wrong, as well; and Janelle gets into that in the, kind of, next few chapters.
Yeah, yeah. I like that chapter four gives a lot of reasons why the AI may not be good, or may not be good at a certain problem. Like, if the problem is too broad. AI is really good at very narrow tasks, which is another reason why self-driving cars may not be a good problem; it’s very broad.
Yeah, and it’s, like, constantly changing.
Yeah, also if you don't have enough data. It needs a lot of data to train on.
And also bad data. So even if you can get lots of data, if you give it data that is of poor quality, and, I mean, sometimes that can be hard to distinguish even as a human, then that can be a case for failure, as well. She gave the example of teaching an AI to determine skin conditions, where-
(laughs)
It turned out that all of their training data that showed a picture of a tumor, also had a ruler in it. Therefore, the AI learned that if it sees a ruler, then it’s a tumor. So it was a ruler detector.
It’s so much easier to detect the ruler.
Yeah.
Yeah, it's a ruler detector.
(laughs) Which kind of reminds me… I don’t know if either of you have watched Silicon Valley?
No.
No? But there is an episode where some of them create, one of them creates some kind of machine-learning thing…
Hot dog, not hot dog?
To tell if there’s a picture of a hot dog, yeah, hot dog or not a hot dog. (laughs) And like there’s a miscommunication where, like, a bunch of these other engineers thought that he had created an algorithm that could just, like, recognize anything in a picture; but it was literally just determining whether the picture had a hot dog or not. And yeah, then there were the other story lines that I won’t get into right now, but yeah, that’s what that reminds me of.
Ah, yeah, and there’s also time-wasting data. So I love the example of some researchers who made an AI that generates images of cats and then they noticed, like, these blocky, text-like markings on the images.
(laughs)
And it turns out a lot of the cat images they’d gotten from the internet had meme text on them at the top and the bottom. So it was trying to not only generate the cats, but also figure out how to put text on them. And it shows some examples. It… it looks like words, but it’s illegible.
That is one thing I’ve heard from some data scientists that I’ve talked to: that, like, grooming the data is such a critical task and, like, such a major portion of that position before actually putting it through the algorithm, because, I mean, it is-
Hmm.
Still, like the classic “garbage in, garbage out” that you can wind up with.
Mm-hmm (affirmative). And then there’s over- and under-represented data, kind of like what we mentioned with the sandwiches. If there’s too many bad sandwiches and not enough examples of good sandwiches, it would just take a shortcut and say, “All sandwiches are bad.”
And then there’s a common thing in AI where AIs will often see giraffes everywhere, and I love this, because people are more likely to photograph a giraffe than a plain landscape. So way more images of giraffes exist than is representative of the real world.
And then other examples, like female scientists being under-represented on Wikipedia. It gives the example of how Donna Strickland didn’t get a Wikipedia entry until she won the Nobel Prize in Physics. (laughs)
That is one of the things, too. It’s just a general bias that winds up coming through in the algorithms, or I guess not in the algorithm but, like, in the calculated result, simply from the data that’s fed to it.
Even if we, as humans, attempt to circumvent that bias by saying, “Well, we’re not putting gender or race into the system,” there are ways that the algorithm, I mean, finds patterns that are, like, “Oh, well, this person lives in this specific area or has this specific name,” or something like that, as a proxy way of determining that same thing.
Hmm.
Another thing in this chapter that I found really interesting was a problem with AI called unintentional memorization, where she gives an example from 2017: researchers from Google Brain showed that a standard machine learning language translation algorithm could memorize short sequences of numbers, like credit card numbers or social security numbers, even if they appeared, like, just four times in a data set of 100,000 examples.
Hmm.
So they would, like, somehow the AI would just memorize it and just spit out a social security number or a bank number or something like that.
Yeah, and then if you can trick that AI into spitting that back out then it’s leaked information, sensitive. It’s really bad.
Yeah, it can be a huge security vulnerability that can cause a lot of problems.
In chapter five, it talks about overfitting, and overfitting is when an AI is trained for a very particular set of circumstances, but not for the variety of situations that it might actually encounter where you want it to work.
So an example that’s not actually AI, but with training animals (and she said there are a lot of similarities between training AIs and training animals): the Soviets tried training dogs to carry bombs and run underneath German tanks to blow up their tanks, but there were several problems with that.
The Soviets’ tanks were not moving during the training, ‘cause they wanted to save money on fuel, and so then the dogs would get scared around moving tanks. And also, the German tanks smelled different; they ran on gasoline instead of diesel. So often the dogs would end up running back towards, or underneath, the Soviet tanks, which is really bad. (laughs) And also really sad that they tried to use dogs for that.
Yeah. And I would say that, because AI does have its shortcomings like these, that’s where the human intervention is necessary, and a lot of times why the AI products that exist are ones that fall back to a human when they can’t recognize or can’t handle a scenario.
So like, in a self-driving car, I mean, like, if you’re driving a Tesla, it always expects a human to be there to take control, and it will start chiming at you and yelling at you if it’s starting to have a problem, and making sure that your hand is on the steering wheel every so often, to make sure, I mean, you’re doing your human role. Or like, a chatbot-
Yeah.
It will go back to a human if you start asking it things that are kind of nonsensical and the AI can’t distinguish what you’re trying to say there.
Yeah, with chatbots, it said later on that one problem with that is it inflates people’s expectations of what AI can do. If they don’t know whether they’re talking to a human or an AI, they may think AIs are really good.
Secondly, they may be mean to humans when they don’t mean to be because they think they’re talking to a bot, and then thirdly, they may reveal sensitive information because they think they’re just talking to a computer.
It’s one of those things that made me think of, I think it was a presentation Google made last year where they showed off using an AI to go and book a haircut, I believe it was, where-
I think it was making or booking a reservation at a restaurant. Oh, and a haircut! Yeah.
And while, like, I feel like that could work in a narrow situation, I could also see it falling short with, I don’t know, kind of a basic question that maybe it’s not trained for. Like, I don’t know, “Do you have a gluten allergy?” Or something...
Yeah. I was surprised with that example, not only by how lifelike those AI voices sounded, but they gave examples where there were unexpected questions and unexpected answers, because with the haircut, they asked, like, “What kind of haircut?” or something, and it actually responded to that.
And then with booking the restaurant, the person had a thick accent and had a hard time understanding what the person was asking.
Mm-hmm (affirmative).
Not the person, but what the AI was asking, and then it had to repeat itself, and it turns out they don’t take reservations, so it was able to handle that, as well.
Yeah. I mean, it definitely is an impressive thing. Like, I mean, and it would be awesome for some particular cases.
I think a major problem with this is going to be that small businesses might get inundated with calls from AIs which could be super annoying. They might get more phone calls than normal. Basically they would be penalized if they don’t have online ordering or some sort of automated way to do it without AI.
So, what needs to happen is these small businesses also need to get AI on their end, answering the phone first.
Yeah.
And then as soon as that happens-
Right.
Like, there’s, like, a high-frequency sound that denotes, “Oh, this is actually an AI that this AI is talking to.” And then it just sounds like a modem.
(laughs)
Yeah.
That sounds great. Then no one will ever have to work customer service again.
Wait, wasn’t there some sort of example in this book about AI? Oh, with voice recognition AIs, you could trick them with some sort of white noise that humans would just think is white noise, but then the AI thinks the person is saying something completely different?
Yes! Yeah, I remember that.
It’s basically the same idea as putting, like, a little square of white noise on a photograph, and then the AI thinks the photo is something else.
That was-
Oh, that was in chapter eight.
Yeah.
It’s an adversarial attack. That’s known as a rainbow of static. And another way of tricking image recognition software is you take one image and then you start slowly overlaying it, one pixel at a time, with another image, and eventually the AI thinks that it’s seeing something else, something a human wouldn’t see. You just don’t know which pixels the AI is going to think are important.
Yeah.
Mm-hmm (affirmative).
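The overlay trick can be sketched in a few lines. The classifier here is a hypothetical stand-in (any real image model could be dropped in); the point is just that the label can flip after surprisingly few pixels, long before a human would notice anything.

```python
import random

def classify(image):               # hypothetical stand-in for an image model
    return "horse" if sum(image) < 50 else "giraffe"

original = [0] * 100               # a flattened 10x10 "image"
other    = [9] * 100               # the image we secretly blend in

image = list(original)
base = classify(original)
pixels = list(range(len(image)))
random.shuffle(pixels)             # we don't know which pixels matter most

for count, p in enumerate(pixels, 1):
    image[p] = other[p]            # overlay one more pixel
    if classify(image) != base:
        print(f"label flipped after only {count} pixels")
        break
```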
Although, thankfully, it said there are now ways of an AI pointing out which pixels it used to make its determination. So it can kind of highlight those, and it’s not always what you would expect. I think, like, with pictures of a certain kind of fish, most of the pictures were of a human holding the fish. So the way it was recognizing the fish was human fingers on a green background.
(laughs)
(laughs)
(laughs)
I guess one other note on humans and AI coming together, too, is creativity. Like, creating art. And that is something that I was impressed with by a service called Generative.fm, and that is a website that you can use similar to Spotify, or like other music websites, where you can pick the type of music that you want and then it will play that music, and it will do it locally.
So, like, the sound itself is generated on your machine, locally. So it’s not streamed from somewhere or prerecorded; instead, just on the fly, it just keeps playing. As long as you choose to listen, it will keep generating more and more music.
Hmm. That’s cool.
They have a lot of ambient type stuff. So if you just need something in the background to listen to while coding, it’s a good choice there.
Yeah, I see some... a lot of electronic, there’s some-
Mm-hmm (affirmative).
Piano, saxophone, guitar… that’s cool. Another thing I thought was interesting that we haven’t talked about yet was the idea of using multiple AIs to do one task, because AIs are best when they focus on one task, and if you try to teach them something else, they’ll forget the first thing.
So one example was an AI to play Doom, and there’s actually three different AIs. So one is the vision, so it’s like, “I have detected various things!”
Oh, yeah!
And then one of them is the memory and it says, “I predict the fireball will continue to get bigger.”
And then one is actually the controller and it’s like, “Aah! Dodge left!”
Yeah, it was pretty cool.
So the future of AI might be more like that. Like a swarm of many different AIs working together, each one focusing on a very small sub task.
Did either of you have a favorite chapter?
I have a favorite funny part.
Okay.
I like in chapter one she’s having it make knock-knock jokes.
Yeah. (laughs) Those were my favorite.
And it kind of just goes from the beginning of how bad it is at first. Like, at first it’s like, “K space K space K space space space k k k k k,” because, probably, knock-knock jokes have a lot of “k”s in them.
(laughs)
And then it’s just kind of gibberish and it’s doing some new lines and some question marks here and there and then eventually it’s like, “Whock. Whock, Whock, Whock. Whock. Whock. Whock.”
(laughs)
And then eventually it’s like, “Knock-knock. Who’s there? Iane. Aartar who? Aaane who? Aan who? Anac who? Iobe who? Irata who?”
(laughs)
And then finally it kind of starts figuring out the formula. And I like, “Knock-knock. Who’s there? A cow with no lips. A cow with no lips who? A cow with no lips says, ‘ooo ooo ooooooo’.”
(laughs)
Yeah.
And then it thinks that’s like, the best knock-knock joke ever.
And then it thinks, it thinks the cow with no lips is, like, the best joke ever.
(laughs)
So it keeps doing that one.
That was actually my note. That was my favorite part. I listened to the audiobook on my initial reading of it, and Xe Sands does it in, like, a robotic voice to do all of the AI pieces in it.
(laughs)
And it is… it is really good. Like, I enjoyed listening to that part of it, ‘cause just, “K. Kk. K. K-O. K.” It just, yeah.
Wow. Does the audiobook read all of the dialog in the pictures?
So there are a few places where it says, “Check out the PDF to go and see this thing.” So it doesn’t necessarily get all of them. So you do miss out on that, you miss out on some-
Okay.
Really nice illustrations. So the book itself is also worth going through but I do think the audiobook did a nice depiction of the AI and giving it a voice.
Yeah, cool. Yeah, I didn’t think this would make a very good audiobook, but overall it was okay? It was good?
Yeah.
Now I’m wishing I had listened to it, ‘cause I find it so hard to find the time to sit down and read a book physically that it was hard for me to, like, finish it in time, but I would have finished it weeks ago if I had just listened to it. (laughs)
Yeah, I don’t know how you'd take notes though, when you’re doing an audiobook.
Oh, I don’t.
Right.
Sometimes I’ll stop and pause and open up my notes app and throw something into there. Like, if something really stands out to me.
Mm.
I mostly just hope I remember.
All right. Any final thoughts?
So I thought it was a great spread of, like, what the AI world looks like right now, and it kind of takes away the smoke and mirrors and shows more of what’s going on with AI, but it does it at a level that, I feel like, I would definitely be apt to recommend to somebody who is less technical, that’s not a programmer perhaps, and they could still get a ton of value out of this. So I really enjoyed it on that aspect.
Yeah, I really enjoyed it. It’s a very fun read, and it has a lot of technical things in there if you want to dig into it, but it doesn’t get too technical. For instance, I like how it has, kind of like, vocabulary words in bold, but it doesn’t give a technical definition; it just kind of gives you a feel for what the word means with a lot of examples. So I really liked that.
And it can be just a quick, fun read, and it can kind of calm your fears about AI, help you understand AI and where to expect it, and if you want to dig into it more, then I think this is a great jumping-off place.
Yeah, I agree, too. I think it’s a really good, like, intro to AI and machine learning. I didn’t really know much about it before reading it and I learned a lot, but it wasn’t too, like overwhelmingly technical. It was really funny.
And I think it’s funny, Adam, that you mentioned, like, it will help calm your fears, because, this is kind of silly, but at a really young age I watched some movie about a robot that, like, turned evil or something, and it scared me so much that I have this weird, irrational fear of robots taking over the world, which, (laughs) you’re supposed to laugh at that. That’s not, like… (laughs)
This book does talk about unfortunate murder bots.
Yes! So it’s, like, within the realm of possibility!
Yeah.
(laughs) But, like, this, kind of, helped me realize that, like, AI as it is right now is just not smart enough for that, yet.
Yeah.
Although a lot of them just, kind of, end up resorting to like that one in the hallway. So I don’t know, maybe my fear is not irrational and it’s fine.
Yeah. Well, I mean, it says, and I think this is a key idea, “For the foreseeable future, the danger will not be that the AI is too smart but that it’s not smart enough.”
Yeah.
And another really great, great quote from the end is, “There’s every reason to be optimistic about AI and every reason to be cautious. It all depends on how we use it.”
That’s great.
Also, I cannot stop looking at ThisPersonDoesNotExist.com.
(laughs)
(laughs)
If you do get tired of it, going to Janelle’s blog, AIWeirdness.com is also… it’s some good reads.
It’s a good laugh.
The valentine hearts? And yeah.
Yeah, and I love how she… yes. She takes the things that the AI generates and actually puts them on… she’s generating the valentine hearts and putting them actually on pictures of the valentine heart candy. Or taking the recipes and putting them on recipe cards.
Yeah, it just makes it even funnier.
Yes. All right. Well, thank you so much for listening. You can find me on Twitter @AGarrHarr, Jason, where can people find you online?
Sometimes I’m on Twitter @StatenJason.
And Megan, what about you?
Uh, I don’t really tweet that much but I am on Instagram.
And you can follow the show on Twitter @BookBytesFM and you can find the show notes and transcript for the episode as always at Orbit.fm.
See ya.
Bye.
Bye.