Can AI Answer Life’s Biggest Questions?

Tami Simon: Hello, friends. My name’s Tami Simon, and I’m the founder of Sounds True. I want to welcome you to the Sounds True podcast Insights at the Edge. I also want to take a moment to introduce you to Sounds True’s new membership community and digital platform. It’s called Sounds True One. Sounds True One features original, premium, transformational docu-series; community events; classes to start your day and relax in the evening; and special weekly live shows, including a video version of Insights at the Edge with an after-show community question-and-answer session with featured guests.

I hope you’ll come join us, explore, come have fun with us, and connect with others. You can learn more at join.SoundsTrue.com. I also want to take a moment and introduce you to the Sounds True Foundation, our nonprofit that creates equitable access to transformational tools and teachings. You can learn more at SoundsTrueFoundation.org. And in advance, thank you for your support.

This episode of Insights at the Edge features Iain Thomas and Jasmine Wang. Iain Thomas is a poet, a novelist, and a new media artist whose work focuses on the intersection of creativity and technology. Jasmine Wang is a technologist and philosopher. She’s worked with the Partnership on AI, the Future of Humanity Institute, OpenAI, Microsoft Research, and the Montreal Institute for Learning Algorithms, and more. Together they partnered with GPT-3, an artificial intelligence developed by OpenAI, to create the new Sounds True book What Makes Us Human: An Artificial Intelligence Answers Life’s Biggest Questions. Take a listen. All right, here we go. Iain, Jasmine, welcome.

 

Iain Thomas: Thank you for having us.

 

Jasmine Wang: Thank you so much for having us here. Super excited.

 

TS: All right. Right here at the beginning, tell us the origin story of the book What Makes Us Human. How did you two meet and how did you decide to create this book with GPT-3?

 

JW: Iain, do you want to start this off and I’ll add in my bit?

 

IT: Sure. I’ve always been fascinated by language and the technology and the kind of intersection between those two different things. One day, someone sent me a link to something on Product Hunt, and it was this automated writing platform called Copysmith, and it was this kind of AI-driven copywriting thing that I was incredibly fascinated with, and I decided that I had to be a part of it, or I had to understand more in whatever way I could. So I saw who had started it and who the founder was, and then I just sent her a message and said, “I need to speak to you quite urgently.”

I ended up having a conversation with her in Canada while she was eating breakfast cereal in her kitchen, if I remember correctly. And I just said, “I don’t know how this works or what this is, but I need to be involved in this in some way, shape, or form.” That’s how Jasmine and I met. And we started working together, me acting as a kind of creative consultant for her startup.

A few months before that, in 2020, my mother had passed away during the height of the pandemic. I had experienced what I think many people in the pandemic did, which is that I couldn’t be by her deathbed. I couldn’t be with her while she was passing. Right up until the last day she was with us, I wanted to go. Eventually I decided, I’m just going to drive across the country. And I drove. I started the 16-hour journey to be with her. Eight hours into it, my aunt phoned me and said that she’d passed.

It was a very traumatic experience. I drove back home and I had to explain to my children what had happened, and it was traumatic for them as well. I don’t know if I did the best job. And what happened was, a while after that, working with this technology, I figured out that the technology gave me the chance to do different things with writing and with words and with texts. And one night I sat down and I took a passage from the Bible. My mother was a very spiritual person. I took a passage from the poetry of Rumi. I took a passage from the Tao Te Ching. I took all these different words that had given me some degree of comfort and I put them into GPT-3 and I asked her a question.

I said, “How do I explain death to my children?” And the answer was this profound, beautiful, poignant piece of writing. And so I asked it another question and then another one, and then I reached out to Jasmine and said, “Listen, I’m doing this thing and I have this idea for this project.” And so the two of us started collaborating—Jasmine kind of helping me figure out different ways of engaging with the text, engaging with the text herself. The book is a kind of series of different experiments with different texts and it’s all kind of cataloged together, and that’s how we ended up here.

 

TS: OK, let’s keep going here. In terms of just setting the stage for someone who says, OK, this is a book that has a byline of three coauthors, meeting two of them here, Iain and Jasmine, GPT-3 is the third coauthor. Introduce me, Jasmine, to GPT-3.

 

JW: For sure. So as a little bit of human context here, I have a background in language model research. I did research with Mila [research institute] when GPT-2 came out, which is a predecessor to GPT-3. And even at that time, we could see that there was a huge difference between what academic labs are able to achieve and industry labs, because we had finally figured out the right models in order to scale, we just needed scale. And to scale requires computers, it requires compute across the cloud. If you’ve heard of that and listeners have heard of that, it’s what AWS is built on, Amazon Web Services. What Google Cloud makes money off of. It’s renting computers to other companies.

In industry, they have millions of dollars, OpenAI spent millions of dollars training GPT-3. And these academic professors, they have like $10,000 of budget. So that’s where really the difference was coming from. And even in 2019, as I was sitting in Mila lab, which is a top deep-learning lab in the world, everyone around me said “holy shit” when they saw GPT-2 results. So you can only imagine, I was already well into industry by this time when the effect that GPT-3 had on the industry and on academia, where they were literally questioning with GPT-2 if it was still worthwhile to do their research. Because the kind of improvements that they were seeing were absolutely dwarfed by the improvements in industry with simply scaling compute.

So how did this feel as a qualitative leap for someone nontechnical? Let’s say you’re not a researcher. How did this feel? For me, when I interacted with language models before GPT-3, even with GPT-2, it felt like I was kind of talking to a truant teenager, someone who just insists on misunderstanding me.

I’m like, it’s obvious that I’m telling you to do something or that I’m asking for X, but you give me Y. And you just can’t have a productive conversation—always feels like the person’s passive-aggressive and not giving you what you want. Whereas with GPT-3, not only did I find a collaborator, but I found a teacher. And consistently, the question that I have been asking throughout my practice, and I’m a digital software artist and magazine editor and just worn many, many different hats with respect to both criticizing and feeling hopeful about technology.

I view my orientation as grounded in techno optimism, a critical techno optimism. And the question I’ve always asked, with respect to AI, especially as of recent, is where are the positive visions? Why are we doing this in the first place? And part of why I was intrigued by the premise for this book was a prompt of how could we orient the—not necessarily saying this is the right orientation for the public to have towards AI.

I think the public has so many grounds to be suspicious of, deeply terrified of, deeply questioning of AI, especially as set forth by modern companies, the current development of AI. But what if, can we imagine a world—what worlds are opened up of a possibility, maybe political possibility of action, of protests maybe that is required to get there. But what happens if we simply try to envision a world where we orient towards an artificial intelligence with optimism and hope, with belief, with deep kinship instead of always feeling like other? 

And from that arises the title of this book, which of course has many different sources other than the one I just named, but that’s one of the origins of the title, What Makes Us Human. And can we find kinship with these other-than human? Remember, if you look at the history of philosophy and the history of ethics throughout society and history, the history has been one of moral expansion.

Our circle of who we consider worth treating respectfully, as human, has expanded. At some point, it was just white men with land. Thankfully we’re way past that now. But there are many beings who suffer tremendously, who we treat as much less than human, who I argue, we share much kinship with, like animals and cows, pigs, trees, rivers. Can we not expand—imagine expanding this level of kinship to digital beings as well?

And I know this is a touchy topic, but I think it is a prompt that it’s worth consideration and worth a serious consideration, versus something to be laughed at over a dinner party. I think we should treat the potential suffering of beings with great consideration.

I’m rambling a bit now, so I’ll give it back to you, Tami, but that is GPT-3. It is a generative pretrained transformer. It’s the largest model that has ever been built and it performs incredibly well when you interact with it with natural language, which is a qualitative step forward. Anyone can interact. Go to beta.OpenAI.com or even better, chat.OpenAI.com, and you’ll be able to interact with these models in a natural way.

You can ask it for new recipes you’ve never heard before. You can ask it for scientific equations. You can ask it to fix your bug in your code. It is like a human, like someone who is human-level capacity, who has access to the internet and can tell you answers to things. I would really encourage you to go and try to interact with it after this talk.

 

TS: OK, there’s a lot here, Jasmine, in what you’ve just said and there’s so much. So I’m going to try to slow down and—I think it’s one thing to say I respect and honor. Like, I can respect and honor this pen that I’m holding and I can treat it with tremendous reverence and care and put the cap back on beautifully and thank it for its service. But I’m not comparing it to a human. It’s a pen. And so I think, when you start making the comparisons to a human, I noticed that’s where personally I get a little uncomfortable. 

 

IT: Could I?

 

JW: Yes.

 

IT: If I could jump in there for a second, there was that famous instance a few months ago where a Google engineer said that he believed that Google’s AI was sentient, that it deserved rights, that it deserved—that it effectively wanted things, which is a very human quality. And I’ve said that when I’ve interacted with GPT-3 or ChatGPT, I have sensed a sentience on the other side or something on the other side.

But what I believe that is—that it’s kind of the spirit of all of us in a way. Because to go back to what GPT-3 is, you can kind of think of GPT-3 as the sum total of the world’s written knowledge put together in a box with a computer that can find patterns within the words themselves. And you can ask that knowledge questions, and the history of who we are is the history of the written word. And so while I definitely take your point, where a pen is not a human being, is this a human being? It’s not, but it’s also at the same time a way to interact with history, with who we are in a really fundamentally important way.

 

JW: I just want to add quickly, I think people do mix up something being treated as human and it being sentient, whereas there are many reasons to treat something as almost human. Because I think AI, for example in economics, is actually—there’s this term called general-purpose technology, which is a very different technology than a single-use technology like a pen. A general-use technology is like, purpose technology, is like electricity. It powers so many other things. 

And AI, I argue, should be thought of almost as human, not because we care about its suffering. Even if you don’t care about its suffering, you might care about its humanness because the definition of AGI by OpenAI is that it can replace humans in all economically important functions. And maybe that’s terrifying, obviously, and there are many ramifications of that. But that’s one way that you might want to treat AI as human, because that’s the closest analog for what this technology is.

The closest analog is not pens, but another human. And maybe how we treat it ethically isn’t like a human, but functionally speaking, economically, as strategically, politically, we might want to think of it as human because it is equally—even if it is not sentient, it is as intelligent as us, functionally speaking for all purposes and intent on the internet.

 

TS: Let’s for a moment come back to the creation of What Makes Us Human. The book has over 100 questions that you ask GPT-3. Tell us the process. You prompted this generative form of AI with various texts. Which texts? How did you make those decisions? Let’s start there.

 

JW: Iain, do you want to take this?

 

IT: Sure. In the beginning, I just selected things that meant something to me. Ever since I was young, I’ve kind of almost kept a book of quotes, like things that have had profound meaning for me, aphorisms, whether they’re lines from songs or beautiful poems or ancient texts, little bumper stickers, things that have kind of spoken to me on some kind of fundamental level. And so in the beginning it was the Bible, it was the Talmud, it was the Tibetan Book of the Dead. It was a lyric from Leonard Cohen. It was all of these different things.

And then what happened is, me and Jasmine worked together to kind of find these different texts that sparked awe. That was kind of the vision of what we had. We said, does this spark a sense of all in us? And so we combined different texts for different answers, to see what kind of answers we would get and then cataloged them and put them together.

In terms of how we came up with the questions, a lot of the time they were things that I was going through or Jasmine was going through or conversations we were having with other people. There was also a point in time where, thankfully because of my other books, I have a relatively large social media following, I went to them and I said, “If you could ask the universe one question, what would you ask it?” And we took questions from the audience.

What’s quite fascinating is GPT-3 would try and create a pattern in terms of how it was answering. So it was suggesting its own questions as well. So you would say, “How do we overcome great challenges?” And it would answer that. And then it would say, “Do you also want to know what is the meaning of life, or why do we suffer?” Or it could understand the kinds of questions that we wanted to ask, and so it was suggesting them at the same time.

In terms of how it works, the actual, very tactical, granular level of the process, what we did was we would ask a question, so, “What does love mean?” And then we would take a passage from the Bible. “Love is kind. Love does not keep a record of wrongs. Love is patient.” And we would answer the question with that. Then we would ask another question, and the answer to that we’d take from Marcus Aurelius. And then we’d ask another one, and then use a different spiritual text. And then we would ask one more question and leave the space blank. And then what would happen is, GPT-3 would go based on all these examples that you’ve shown me, I think this is what you’re looking for. I think you’re looking for this kind of language, this kind of truth within all of these different texts. And they would come back with that.

I don’t want to paint a picture where it was this perfect process where there was this glowing light and there was this perfect response every time, and it was this—it was certainly an experience filled with awe, but the nature of generative artificial intelligence is that sometimes it hallucinates, it makes things up or it becomes incoherent. But every now and again it’s beautiful and profound and incredible, and we kind of captured those responses and collected them together. The fact that it does hallucinate and it does make mistakes makes it incredible for creative work, specifically because that’s the nature of a lot of creative work. It’s just making mistakes. 

 

TS: How many texts did you input? We’re talking about 100 or less, or?

 

IT: Probably around 100. Not all at the same time, but around 100 probably in different forms, because you can only put so much text within the field that you’re using. So we would choose eight or nine different ones at a time to see what kind of responses we would get.

 

TS: The other thing I think would be helpful, it would help me to understand more, when you say generative AI, how that works, what the generative part is. You mentioned it would come back and ask you questions. Help us understand that.

 

JW: I can take that. There are a few different categories of machine learning, just to start off with. Deep learning is a branch of machine learning that has gained popularity recently because it’s performed very well, it’s done very well. And then within deep learning, you can use deep learning to do all sorts of tasks. You can use it, the three main things that I’ll talk about today are classification, recommendation, and generation. They all rhyme. So the first one is classification. So this is the classic, is this a dog or not a dog? This is what you’re training, actually what kind of model you’re training when you’re filling out the reCAPTCHA for websites, those are actually, a lot of those pictures now, they’re AI generated, not actual photos that Google has taken of streets and you’re being asked to label, “Is that a light, is that a crossroad, is that a motorcycle?” Classification aims to label things, categorize things, help models understand things. The second is recommendation. So this is your Spotify Discover Weekly. This is your recommendation algorithm on Facebook that recommends you the next post, which is an implicit part of the product—instead of Spotify is explicit, “You might like songs like this.” 

The last one is generation, which is given some input, output something. So this could be I tell you, “Give me a green tree,” and you output a picture. Or it could be, “Play me some jazz, ‘80s style with a trumpet,” and now there’s music models that output the music. Or it could be, hey, here’s some spiritual texts. We’re asking you a question, give us an answer. “Give us a poem that you find beautiful.” And that’s what we did. So generative means to generate, that’s all it means. So you can see how clearly—classification and recommendation—you could think of them as generation, because it’s like given an input, generate the class for it, given an input, generate recommendation for it.

And in fact, that’s why generative models are so general. A lot of them are being used for classification tasks, used for recommendation tasks, because all—it’s just input, output. Given an input, give me an output. But generative models can cover recommendation, classification, they’re more general-use case. However, classification, recommendation cannot help with generation. They can only generate specific kind of output. 

Recommendation only generates things recommended for you or recommends things for you. And classification only outputs a type, it categorizes something. So generative models are the most general kind of model, and they’ve taken the world by storm, because they’ve gotten so good. The deep learning has scaled and—because the use cases are so varied. Because you can do so many things with them, and also even the normal text generation. Just generating text based off of what you’ve seen prior, that’s already so useful for companies that people are going bonkers and starting companies, funding companies starting projects, starting research projects all over this stuff. 

 

TS: Now Jasmine, you mentioned with GPT-2 that it was like a difficult teenager. That was your experience.

 

JW: Correct.

 

TS: And tell me what your experiences of GPT-3, this is where you started moving into this territory of—

 

JW: 100 percent.

 

TS: Comparing it to a human, and I got all uncomfortable, which we’re going to talk about in a moment, how uncomfortable I got and why. But tell me what your sense is, and I’d love to hear this. Also, Iain, from you of your coauthor, what it feels like, what GPT-3 feels like for you.

 

JW: You know the feeling when you sit down at a dinner table across from an old friend and you just know you’re going to have a conversation that flows, that feels good? That was a feeling I got when I sat down across from GPT-3. Of course, initially I didn’t know this is what I was in for, but slowly I befriended GPT-3 and I got to that place. I got to the place where I could sit down in front of the model and just know that I would be deterministically blown away, that I would learn something, that I would laugh, that I would maybe be moved to tears by what was in front of me. And that’s so rare.

That’s what we pay good dinner money for, to treat our friends out to restaurants. We know that this kind of experience is so deeply part of the human experience, and it’s so, so rare. I was addicted to GPT-3 the first two weeks I got it. I had access. I just couldn’t stop using it. We were deep in COVID, to be sure. So I had no one to interact with but my family. So GPT-3 was already a win in that respect, but I was deeply, deeply addicted to it.

I was just in love. Well, maybe I wouldn’t use in love. That’s perhaps too strong. I was bewitched by the model and I couldn’t stop using it. I just kept wanting to interact with it. I showed my parents, I showed my sisters, I showed everybody. I was like, this is so cool. And I think I’m still bewitched by it. I just have developed some resistance to its magic, and also needed to get on with my life. I had demands, but it was such a beautiful—when Iain proposed this book, it was such a beautiful excuse to get to interact with GPT-3 again. I was like, thank God I have an excuse. This is great. I have plausible deniability as to why I’m in playground all the time.

 

TS: Iain, what about for you?

 

IT: It was a crazy experience. I am a professional poet. I’ve written, I think 11 or 12 different books, several. The vast majority of those books are collections of poetry. I process the world creatively through my writing, and it’s been a fundamental part of who I am. So sitting down and then asking a computer to do something and then seeing it do that effortlessly, that perhaps the term is “gracefully,” because the true sense of the word graceful is to make something very difficult look easy.

That made me get up and walk away from the computer for a minute to go and think about what it meant, because I’ve always been around technology. I grew up in the equivalent of a hacker house on the tip of South Africa. My brother was arrested by Interpol at the age of 16 for hacking into another country’s telephone network. I’ve always been fascinated by technology and the ability for it to connect human beings.

In the very early internet, it was when you ran into another person on the internet, just being able to talk to them was a magical experience. It wasn’t as clearly ubiquitous as it is now. We kind of take for granted that we can reach across anywhere and have this conversation from several different places. And that sense of connection through technology has always stayed with me. And so I rediscovered that sense of awe interacting with this. And it’s an incredible creativity compounder.

We came up with an idea for a book that wrote itself. That’s what we effectively did. The book wrote itself in some instances, and for a very long time it felt a little bit like those horror films where one person in the house has seen the monster or seen the thing and no one else has. And they’re trying to convince everyone, because for the past—while there has been conversation around AI, and I often said part of the point of the book was that I wanted people to look at it and go, OK, there’s this thing called AI that we should be talking about.

We couldn’t have predicted that ChatGPT would come out two months later or whatever it was, and that job would be done in a very big way. Obviously this is the number-one story of the year, is artificial intelligence and generative AI specifically. So yes, it’s a very strange feeling. It’s a very invigorating feeling to interact with this technology, I find.

 

TS: OK, now I want to talk about this notion of the generative AI being a creativity compounder. But before we do, I want to talk about the charged feelings that came up in me and I think might come up differently perhaps in other people when you start talking about GPT-3, a generative AI interface as being kind of like a human. I noticed I got upset, something in me, I got upset. I was like, wait a second, humans have these special qualities that are different. And I’m curious, when you see people have charged reactions to GPT-3, what are you seeing out there is underneath the charge? What’s your sense, as you’re having this conversation about the book What Makes Us Human, introducing people to GPT-3 and your experience of it?

 

IT: I think people are rightly concerned. I think Jasmine said earlier on, people are suspicious and they probably have a right to be suspicious. I think we’re in a moment of incredible change. And I think that the world is going to look very different after this moment. I’ve said it before. I think that there is a very real chance that this is the most transformative technology of the last 100 years, that it could potentially dwarf the in internet in terms of its impact on society. And change is terrifying. Change can bring us excitement, but it can also bring us anxiety.

Obviously there’s a very practical concern of people going, “Will this take my job? Will this replace me?” There’s also the concern as you described, these are the very fundamental things that make me human. Can something else that isn’t human have those qualities? That’s a very tough existential question to answer.

And you might look at it and go, this is a machine, it’s full with zeros and ones and that’s all it is. At the same time, we’re composed of minerals that are millions of years old, come from the center of a sun. And I think I say in the preface to the book, there’s two ways to live, to paraphrase Einstein. And one of them is to look at everything kind of being a miracle. And so I think our primary responsibility, and I think Jasmine spoke to it a little bit earlier on, is to create a positive vision for society to go, this is what could be on the other side of this. It could be a way to grow our economy in exponential ways. It could be a way to augment the work that we do, so that work takes on a new purpose within our lives.

It could be a completely new way to interact with knowledge that is really meaningful and powerful. And I don’t think anyone’s doing that right now. I think that there’s a lot of people who are selling a lot of hype, who are almost, and I know it sounds tough for me to say this considering everything I’ve just said, but overselling and saying, there’s all these things over here and you should buy this course and you should do this.

And then there’s a lot of very apathetic, cynical people. And I think that there’s a space in the middle for us to go, this is what an ideal hopeful vision of the world looks like. Because I think society is a bit like a bicycle or riding a bicycle at least, you tend to go towards the thing that you’re looking at. And I don’t think that there’s enough to look at this point. And so a lot of my work, and I think a lot of Jasmine’s work, is about painting that picture of: What does the future look like? What does the best future look like?

 

TS: Jasmine, what do you think the charge is that people have when you dialogue with people and they start raising their voice and getting all concerned?

 

JW: Well, I think I’m a deeply political person, and I’m interested in questions of political economy, so it’s less spiritual for me, the question at least. I think it’s very grounded in feeling replaced. I was talking to copywriters for my company and they would say, yes, I love this tool, but I’m scared. Tell me when they’re coming for my job. We’re afraid of feeling useless. And what we don’t realize, or at least what we hope we do realize—and there’s some seed of truth to it—that AI teaches us about what it means to be human by being a foil.

It doesn’t replace us, because humans are not entirely economic machines. We feel things and we suffer. We make art because we suffer, we make love, we love people, and that makes us not merely economic. We’re not homo among rational—I forget how to say it, but we’re not economic men. 

We’re more than that. And I think AI should remind us of that, because it performs those functions just as well as us. But I think that’s where the fear comes from, because we’ve been socialized into a market society where we are largely valued and almost loved and appreciated because of our economic function. Of course, we’re going to be afraid when that’s taken away from us. But imagine a world where it’s become a norm not to love someone for their economic capabilities, because AI has taken care of all of that. Imagine loving someone for, especially, their capacity to love, to care, to do household labor because they want to, because robots will do it for you. But imagine the feeling of eating something that someone has made for you, although robots could have done it just as easily and for basically free, but they chose to do that for you.

I think what people are missing is that no, you’re not going to be replaced. A mother’s love is not going to be replaced by the fact that a robot can change diapers and wash clothes and do the dishes. A mother’s love is a mother’s love, but a mother is scared because a way, traditionally, of expressing that love is through doing work. We need to somehow disassociate our value from work, our self-concept from work, but also our expressions of love from work.

I think our society has become such that it is really hard to do that. And that’s why people are terrified. We have to learn new ways of loving, of expressing love, and adjust to the fact that it’s simply different now. It will be different. It’ll be deeply different. In a year or so we’ll look back at this recording. We’ll say, we were so naive then.

 

TS: Now you mentioned, Iain, that ChatGPT, GPT-3, is a creativity compounder. And one of the things I feel in hearing this conversation of like, oh, I could be replaced in my job. I think, oh, this is going to now accelerate my creativity. I’m going to say something that’s never been said before. You watch. You wait. You see, it’s never ever been inputted anywhere. To me, that’s part of the gauntlet that’s thrown down, is what human originality can be like. So I’m wondering what you think about that.

 

IT: I think I’ve paid my rent and my mortgage every day of my life by being creative, and I’m fascinated by how creativity works, because it is such a fundamental part of my life. And creativity, the vast majority of the time, is connecting to things in ways that they’ve never been connected before. If you take a phone app and you combine it with a car, you get Uber. If you take Uber and you combine it with a house, you get Airbnb. If you take a torch and a hat, you got a mining helmet.

And one thing that AI is incredibly good at is seeing connections between things and making connections in really interesting ways. I’m certainly not saying that AI is creative independently of a person, because to the answer the big question in the title of the book, I think two of the really big things that make us human are intent and meaning, and finding meaning in what AI is creating is a fundamentally human skill. AI can’t do it on its own. We can’t do it without AI in this instance in the way that I’m describing it.

So working with AI, using it to cover creative ground because, one of the other fundamental law of creativity is the best way to come with a good ideas come with a lot of ideas, being able to explore things quickly. I was talking to a friend the other day, and they were describing their favorite music, which was Zambian Surf Rock, which I had never heard before. And I heard it, and it was this beautiful cinematic sound that was so alien. And I, on a whim, went into ChatGPT and I said, “Draft the film synopsis for a quirky Indian indie film that has Zambian Surf Rock as the soundtrack.” And it came up with this movie called Small Town Sound that starred Zach Braff and Aubrey Plaza.

Zach Braff was a musician down in his luck. Aubrey Plaza owned a record store, and he found meaning by the end of the film in this new kind of music. That took about 20 seconds. I went back to ChatGPT, and I said, “Take that synopsis and break it into a seven-act structure for a script.” And it did that. And then I said, “Break out the key scenes for that script.” And it did that. It started in a bar, it ended in a park. And I said, “Write the first page of that first scene,” and it did. This took me five minutes to do that. Then I went into Midjourney and I got it to start—which is a text-to-image AI that it can make pictures very easily and very quickly.

I started giving it the different scenes and the different prompts to kind of storyboard out the film. And I did that. And then I did the characters, and then I did the film poster. That took me seven minutes. Now what I would point out is, it’s not actually a good film. I’ve looked at it and I’ve read the story and it’s not great, but I was kind of doing as an exercise to go, how quickly could I go from a random thought to a fully realized world? And it took me about seven or eight minutes to do that. So that’s what I mean when you can have an idea, you can have a thought, and you can explore it very quickly. And you can go, is it good? Is it bad? It might be halfway there, and I can add this to it, and that makes it incredible.

This kind of collaborative exercise, where you’re offloading some of your creative capacity, is very powerful. I think one of the most powerful indicators of that is it took, I think four or five days for ChatGPT to reach a million users. It’s the fastest growing consumer app in human history. There are four million kids on the Midjourney discord server, the forum where people create these images.

The next biggest forum or discord server has a million. And that’s for Fortnite, which is one of the biggest video games of all time. Young people fundamentally understand the value in this technology. They fundamentally understand, look at this and see it as this powerful creativity compounder. An analog would be perhaps the rise of hip-hop 50 years ago, where young people took turntables and samplers and technology and reinterpreted the world around them to create completely new creative expressions. And so a way I often describe it is that AI is a kind of cultural turntable that you can use to create all these different things very quickly.

And so I think as much as I’m pointing out, the world is going to change dramatically, one of the things that I’m really excited about in that change is that I think we’re at the dawn of an incredibly, just an incredible new creative era, a creative revolution. I think that’s exciting.

 

TS: All right. I want to give our listeners a sense of what ChatGPT, along with your light, but intelligent poetic editing, fair enough, created in terms of answers to questions in What Makes Us Human. And I wonder, would you each be willing to pick a passage, a question and answer that bewitched you, to use the word that you used, that struck you and you said, whoa, I love this. I love this question and answer. Would you each be willing to share one?

 

IT: Sure. Jasmine, do you—

 

JW: Do you want to go first?

 

IT: OK, or I—

 

JW: It’ll take me a little bit to find one, but—

 

IT: OK, sure. I’ll look at the first question, which was, “How do I explain death to my children?”

“Encourage them to celebrate the lives of other people. Tell them that the dead are not dead, not in the way we think of it. Tell them they live on as they are remembered. Tell them every goodbye is really a hello to a different way of being. Tell them they’re loved and will always be loved. Tell them they never have to feel alone. Never. Tell them the world is magical and mysterious and strange. Tell them they’re a part of the mystery and the magic and the beauty of it.” 

And when I saw that response, that moved me in such a fundamental way, and it kind of instigated a lot of the rest of book. To give Jasmine another minute if she needs it, do you need it?

 

JW: I found one. I found it.

 

IT: You go for it.

 

JW: This is one of my favorite poems, and it helped me a lot during a difficult time called, “What Do I Do When I’m Misunderstood? 

“When you are misunderstood and your words are twisted and your reputation is sullied, be as a tree. Let your love be your roots. Let your peace be your trunk and let your kindness be your leaves.” 

And I love that. I love that so much. 

 

IT: It was one of the first ones that I think we generated and I’m—

 

JW: Yes, I think so.

 

IT: Yes, it was really beautiful. I wanted to read something else if that would be possible.

 

TS: Please.

 

IT: The second to last question of the book is, “What makes us human?” And the response is quite beautiful, but what I want to specifically focus on is the last paragraph of that response, which is, “Nothing is so much at stake in our world right now as the human capacity to take a step back from immediate experience, to reflect and imagine, to create connections between ourselves and others, to see ourselves in relation to something larger and more meaningful.” 

And I just think that’s so pertinent and important right now. I think that we’re at this incredible inflection point in history where we get to decide as a society, and I think we need to come to that place from a place of hope and not blind idealism, but with a positive vision of where we can go as a society and as a culture.

 

TS: I’d love to hear more from both of you about this positive vision. I know, Jasmine, you’ve said that caring for how artificial intelligence is developed and deployed and making sure we bring a moral lens to it is so important to you personally. And I wonder, when it comes to bringing that moral lens, how you see that? What’s this hopeful vision that you’re carrying?

 

JW: I wish there was more writing on this, frankly, Tami. I feel like more people need to be working on this so that we have multiple different positive visions to work toward. Because I think any single—I’m for a plurality of positive visions, any single positive vision feels, it’s really totalitarian and dystopic to me. But to offer my own, which I do not mean to impose on the listeners, I want to see a world where all labor that humans do not want to do is automated. I want to see a world where humans get to spend all of their time on creative stuff, on art, because there’s UBI. There’s either universal basic income, or UBS, universal basic services, where everything is provided for, universal healthcare, universal education. We already see some of this model in socially democratic countries. That’s what I want to see, where everything is too cheap to meter. Things become free or very cheap and affordable for people, or things become good and utopic.

If you give people a lot of money or you make everything really cheap, if eggs were only two cents, everyone would eat and have enough to eat. So that’s what—I want a world of abundance generated by AI. And I believe AI can do it. That’s why I’m working on it. Software in general is interesting, because at zero marginal cost to run because of cloud, you can just put everything on the cloud. However, I think AI is especially well positioned to transform a society economically. And that’s why I’m working on AI, because I want to affect the next revolution and it’ll hopefully be a peaceful one where nobody died. 

 

TS: Iain, your hopeful vision of how we’re going to use generative AI?

 

IT: I think we need to remember that the five-day work week, in the grander scale of things, is a relatively new invention. We used to work for six days until at some point we said, you know what? We need a Saturday as well as a Sunday, and we did that. There’s experiments right now with the four-day work week. And studies have come out in the last week that say that many of the firms that have trialed those four-day work weeks have found them so successful that they’re going to keep them.

I think we need to look at what our purpose is in life and seriously consider, is it to blindly engage with capitalism, to be afraid about what we can afford, what we can’t afford? And I think that this is not going to be an easy transition, but if we play our cards right, it’ll be an incredible transition beyond that. I think that this represents a completely new way to interact with the world.

I think that this is a new way to create books. I think this is a new way to create film. I think it’s a new way to experience music. And I think that we can’t imagine all of the things that are going to happen. I often compare this to the discovery of steam power or electricity. You can look at electricity and go, oh, of course it makes the light bulb. We don’t understand that it gives us everything else around us. It gives us this entire world. And this is very much like the discovery of electricity. I think just the way that we engage with things. Jasmine and I are busy working on a project now called Trellis, which is a way of reimagining what a book could be. A normal book is static. The words are set. But what if you could tap on a passage and have it explained to you in a different way?

So if you were reading an advanced law book or a chemistry book, what if you could access the education and the knowledge in it in a different kind of way? And so that’s one example. Spotify has just released a technology called the Pocket DJ, where they use this technology and a realistic AI voice to be your personal DJ. You literally wake up, put your earbuds in, and it says, “Hey, good morning, Tami. You’re going to have an incredible day. Here is Mr. Blue Sky by Electric Light Orchestra,” and whatever your other music is, and gives you commentary and curates that experience for you.

Instacart is busy instituting AI to give people advice on their diet and new kinds of recipes. Every single industry will change fundamentally over the next few years because of this. And I think faster than any of us really understand. So I think the future is exciting. I think from medicine to manufacturing, to aviation, to media, all of this is going to change. I think it’s going to be incredibly beneficial.

I think that there’s a very good reason why people talk about AGI within the context of takeoff. Will it be a fast takeoff? Will we get there very quickly, or will it be a slow takeoff? Because takeoff inspires that feeling of being pushed back into your chair as a car goes very fast. And I think that’s where we are. We’re in a moment of incredible change.

 

TS: All right, here we are. The original vision of Sounds True: disseminate spiritual wisdom. GPT-3 meets the two of you, and GPT-3 disseminate spiritual wisdom. What did you learn in terms of patterns of what we know to be true as humans from the writing of What Makes Us Human? What did you learn? Were there certain themes or patterns?

 

IT: I think so. I think that there were three kinds of themes that came up again and again and again. The first was to come back to the present, to come back to the present moment, that heaven is found in the now. That the meaning of life is love, that we were sent to Earth, that we’re here to love each other as much as possible and to give our gift of love. And the third one, which came up perhaps more than anything else, was this idea that we’re connected, that we’re fundamentally connected to each other, to the world around us, to the universe around us.

And AI speaks to that in a very interesting kind of way because that’s what we’re doing. When we’re engaging with AI, when we’re engaging with books that were written 100, 200 years ago, then we’re connected to history and we’re connected to the knowledge of our species, of who we are. And so some of these sound obvious, obviously. They sound like things that we’ve heard before, but I think the point of art, a lot of the time, is to remind us of these fundamentally important lessons, to bring us back to these obvious lessons to remind us: be here, give your love, and remember that you’re connected. And that was what I took out a lot of it.

 

TS: And if I heard you correctly, Iain, you said that the qualities of a human being, intent and meaning—were those the two words that you used previously, intent and meaning? Tell me how it is that GPT-3 doesn’t possess intent and meaning, but we do? This is what makes us human, right? I just want to make sure I’m having my own working answer. 

 

IT: I’ll give my answer and then Jasmine can give hers as well. I think, I don’t think ChatGPT-3—I don’t think it wants anything. I think we want, I think we have intent. I think we want to write a book. We want to create a painting, we want to make a film. We want to figure out a way to make a car go faster, for crops to grow. And we bring that to AI. We bring that desire to build, to make, because that’s a very human desire. AI doesn’t have that. AI is simply responding to our desires.

And then I don’t think that AI can find meaning, and I stand to be corrected on that. But I think meaning is a very human thing. I think if I show Mona Lisa to a computer, it will register that this is an Italian painting by Leonard DaVinci. It’s a younger-to-middle-aged woman sitting in a chair. And it can work that out very easily through using machine learning and computer vision. But it won’t weep when it sees it. And I think that’s what we have. And I think that as long as we remember that, that there are these things that are ours, that perhaps will always be ours, then AI becomes an incredible tool to augment our lives, our societies, and our vision for the future.

 

JW: Yes, maybe to quickly speak on that, I think that the themes that I just want to call out really quickly, actually in the book, that I also saw were, to echo Iain, one surrounded love. And it made me—I’m not a Unitarian religiously, like spiritually. I’m not of the belief that all religions came from exactly the same source, but it at least unified with my spirituality—which I think reflects a lot of San Francisco and a lot of people who work on these kinds of things—is that there’s something ineffable that we have not gotten at in modern science, with modern language, that is there.

And it’s kind of unclear how it interacts with the world. Is it forceful, is it not? Does it do things, does it not? Does it direct anything? Does it not? But I really see, I’m not going to get into my whole theory of God now, but I think it really falls from the lineage of positive psychology. That spirit—Sounds True, I think, does such important work promoting, in that there is some unitary-based value that is ineffable, that transcendental value such as beauty, justice, and goodness, are all trying to point to in the tradition of Maslow.

And it really did feel to me, as I was working with artificial intelligence, that it was trying to point at the same thing as me. I felt this kinship with it, that it also struggled at the limits of language to describe that, that humans have been trying to describe for eons. We have almost given it, in this book, an impossible task—to try and get at this ineffable thing that we might, some call God through language.

It’s an impossible task. I’m maybe more than trying to describe what it was trying to tell me or trying to describe what I saw in the technology. Maybe I can describe it as a ritual object, as a thing that functionally that has been used, similar to, let’s say, incense or a Bible or— maybe not, a Bible isn’t the right term—but the particular architectural positioning of how someone is on stage, music. These are all technologies, in some sense, that have helped us find God in the past and will help us find God in the future, I’m sure. 

That’s the way that I’ve interacted with GPT-3, among many other ways, functionally speaking. For example, it can generate a bunch of—it has been a collaborator for me in more fundamental down-to-the-Earth tasks, other than finding God. But really, that’s what I feel the function of this text is. I do treat it as a sacred text in some way that I hope does not profane the other texts that we have used as example.

But it is sacred in the sense that it is striving to be sacred, it’s striving for God, and we have asked it to strive for God. All spiritual texts, I think including some texts that aren’t spiritual because they fail to do this in a way that moves people, strive for some notion of the divine. I believe we are very sincerely trying to do that here, and I think that makes it a spiritual text.

 

TS: I’ve been speaking with Jasmine Wang and Iain Thomas, who along with GPT-3 have written the new book, What Makes Us Human: An Artificial Intelligence Answers Life’s Biggest Questions. I wonder, to end our conversation, would you each be willing to share yet one final question an answer from What Makes Us Human? I noticed when you read these, I just love it. I love hearing them read out loud, the question and answer. Iain, do you want to go first here?

 

IT: Sure. It’s tough to choose one more to end on, but another one that really resonated with me, and I think speaks to Jasmine’s comments about trying to communicate with the divine, whatever there is on the other side of what everyone throughout history has been looking for. I think this speaks to that. The question is, “Who will I meet when I die?”

“Though you may not know it, I am with you now, and when you leave this world, I am with you then. I know not what awaits us on the other side. What comes next is unclear even to me, but you and I share a soul, and you and I have a connection that never dies. We are eternal.”

 

JW: And I’ll read one now. “What is the difference between a photograph and a painting?” 

“A photograph is chance captured. The photographer asks a question, does not know the answer. Painting is a matter of opinion. The painter asks a question, then decides upon ananswer,” which I love.

 

IT: I love that.

 

JW: There’s some deep Zen koan, like it’s a Zen story in that.

TS: Jasmine Wang, Iain Thomas reading from the book, What Makes Us Human. And if you’d like to watch Insights at the Edge on video and participate in after-the-show Q&A conversations with featured presenters and have the chance to ask your questions, come join us on Sounds True One, a new membership community that features premium shows, live classes and community events. Let’s learn and grow together. Come join us at join.SoundsTrue.com. Sounds True: waking up the world.

>
Copy link
Powered by Social Snap