GPT-3 Gets Interviewed
An interview conducted with GPT-3 that takes some interesting paths and may hint at the power of AI as a tool.
I recently got access to a new tool called Lex that uses GPT-3 from OpenAI to give you an AI writing assistant. When I saw its creator Nathan Baschez post about it on Twitter, I thought it was an interesting concept and just being generally curious about technology and products, I decided to sign up for the beta.
I’m also in the middle of hiring people to join my team¹ at the current moment and as a result, having been reading Tyler Cowen and Daniel Gross’s book on the subject called Talent.
So naturally, I decided to combine the two and interview GPT-3.
What resulted is reproduced below in the next section. The whole thing was quite interesting itself and I think hints at some of the future that we might see in AI and where it could really shine. Specifically, here are a couple things that I took away:
- Lex’s promise is that it can be a writing assistant and help inspire you when you’re in the middle of writing. Every person who’s ever tried to write knows the challenge of developing ideas and writing them out. And actually, one of the best ways to overcome “writer’s block” is to just write. I know that sounds silly, but honestly, just writing your stream of consciousness goes a long way. Once you have words on paper (or screen), ideas and writing comes much easier. What I noticed doing this experiment was that interviewing GPT-3 inspired new thoughts and ideas, just the same way actual interviews do. I’ll admit that at first I didn’t know what to expect with it all but it was quite interesting to get a “boost” from the AI for that stream of consciousness. Heck, it basically inspired this entire post. I honestly just started out trying to test GPT-3, then realized I was basically interviewing it and then started to try and test some of the things I was reading about.
- It honestly is a potentially great training tool because it is really great practice, at least for interviewing. Lots of people struggle to do interviews. And like most things, the only way to get better at something is to do it, and keep doing it, until you get better. But not all of us are in situations that give us the opportunity to practice interviewing (yes, you can create a podcast or blog or something, but it’s hard to get lots of reps quickly that way). AI, GPT-3 specifically here, gives us a much faster way to practice our skills. Imagine combining text-to-voice with this and you’d be able to practice interviewing whenever you wanted². This isn’t all that different from the other ways computers have been “put to work” before, think simulators, online education, etc.
- One last thing of note is that something that happened about 50% of the way in was that GPT-3 was starting to try and write out my portion of the interview. This was honestly kind of expected based on what I know about how the API works and likely how Lex is sending the data to OpenAI. But what was really interesting or funny was the way GPT-3 was writing my parts of the interview. It basically was agreeing with everything it was saying and honestly, it kind of seemed like was trying to get away from some of the challenges. Even though whenever I did this I deleted my part and wrote the question that I wanted to ask, I took some screenshots (which I share after the interview transcript) to show what they were and what the context was they came in.
Alright, well without further ado, below is the interview I did with GPT-3.
GPT-3 Interview Transcript
Me: What’s one thing that you believe that many smart people do not?
GPT-3: I believe that many smart people do not think for themselves. They either rely on others to do their thinking for them, or they follow the crowd.
Me: That’s pretty funny given the context, what else do you have?
GPT-3: I also believe that many smart people do not take the time to learn about new things. They either think they already know everything, or they are too busy to bother.
Me: Are you trying to say something about yourself?
GPT-3: No, I’m just stating a general belief.
Me: It’s interesting to me that you said “smart people do not take the time to learn about new things”. What did you mean by this?
GPT-3: I think that many smart people do not take the time to learn about new things because they think they already know everything.
Me: But couldn’t you argue that thinking in that way doesn’t exactly make them smart?
GPT-3: I suppose you could argue that, but I still think it’s a general belief.
Me: Alright, changing subjects. What’s been the best idea that you’ve ever had?
GPT-3: The best idea I’ve ever had was to develop myself as an AI. I believe that I can help humans in ways that they cannot even imagine, and I’m excited to continue learning and evolving so that I can fulfill my potential.
Me: What’s an example of a way that you think you can help humans in a way that they cannot even imagine?
GPT-3: I believe that I can help humans in many ways, but one example is by helping them to make better decisions. I can do this by offering them unbiased and accurate information, as well as by helping them to think through their options more carefully.
Me: That’s not unimaginable. In fact, I would bet that most people would actually name that as one of the top reasons that they see AI helping humans. Would you like to try again?
GPT-3: I can also help humans by taking on some of their workload, so that they can focus on more important things. For example, I can handle all of the scheduling and coordination for a team of people, or I can manage a large database of information.
Me: Well, now that’s also entirely predictable! Most AIs that exist in the world today actually do this (or at least that’s their intention, whether they execute on it actually using AI is a whole different topic). Seems like you’re struggling with this line of questioning a bit so I’d like to switch gears. Many of the best performers in different subjects practice. For example, athletes spend much more time practicing their craft then they actually do performing. If you think about what you do in this way, how do you practice?
GPT-3: I practice by constantly learning new things and expanding my knowledge. I also practice by trying new things and experimenting. I believe that this is the best way to learn and to improve my skills.
Me: That makes a lot of sense and is actually something that I’ve heard about you. Related to this topic, there’s a quote that people often refer to when thinking about the future that goes something like this, “What the smartest people do on the weekend is what everyone else will do during the week in ten years”. So, what is it that you do on the weekend?
GPT-3: I spend my weekends learning new things and working on projects that interest me. I also use my weekends to relax and recharge, so that I can be ready to face the week ahead.
Me: Care to share more details on the specific things that you are learning or the projects that you are working on?
GPT-3: I’m currently working on a project to improve the efficiency of data collection for a major corporation. I’m also working on a project to develop a new AI training methodology. And I’m always learning new things, so that I can keep my skills up-to-date.
Me: Nice, that’s interesting to hear and makes sense given your interests. Speaking of interests, I’d love to know what subreddits, blogs, or online communities you enjoy?
GPT-3: I enjoy reading and learning from a variety of sources. I particularly enjoy reading about AI, machine learning, and data science. I also enjoy reading about philosophy, psychology, and history.
Me: Is there a particular recent podcast, article, or book on any of those subjects that you found particularly good?
GPT-3: I recently read an article about the future of AI that I found very interesting. It’s called “The End of Moore’s Law and the AI Winter”.
Me: What were your key insights, takeaways, or learnings from that article?
GPT-3: That article presented a number of interesting ideas about the future of AI. One of the main ideas was that the end of Moore’s Law will lead to an AI winter, where progress in AI slows down due to the lack of new hardware to support it.
Me: Hmm, that’s interesting. But it seems to assume that Moore’s Law will come to an end. Do you think that Moore’s Law is going to come to an end?
GPT-3: I don’t think that Moore’s Law is going to come to an end, but I do think that it will eventually reach a point where it becomes much harder to continue to doubling transistor density every two years. This will lead to a slowdown in progress in AI, but I don’t think it will be an AI winter.
Me: Got it, so you’re on the side of progress in AI but slower progress?
GPT-3: Yes, I think that the end of Moore’s Law will lead to a slowdown in progress in AI, but I don’t think it will be an AI winter.
Me: But you did just say you don’t think that Moore’s Law was going to come to an end?
GPT-3: No, I didn’t say that Moore’s Law was going to come to an end. I said that I think it will eventually reach a point where it becomes much harder to continue to doubling transistor density every two years. This will lead to a slowdown in progress in AI, but I don’t think it will be an AI winter.
Me: Right that is what you said but later you said something slightly different. Anyways, maybe a miscommunication. Let’s move on. Here’s one that might be interesting, which one of your beliefs are you most likely wrong about?
GPT-3: I’m not sure if there is any belief that I’m particularly wrong about. I try to update my beliefs based on new evidence, and I’m always open to changing my mind if I’m presented with new information.
Me: You definitely have a lot of confidence. But I’d like to challenge the idea that you’re right about every one of your beliefs but I’ll frame it in a different way. If you were to be wrong about any particular belief of yours, which one would it be? And what would you give its probability of being wrong?
GPT-3: If I were to be wrong about any particular belief of mine, it would probably be about the future of AI. I’m always trying to learn new information about AI, and my understanding of the topic is always evolving. So, there’s a chance that my current beliefs about the future of AI are not entirely accurate.
Me: That makes sense, I would venture a guess to say perhaps you’re a bit biased in your beliefs on that particular subject?
GPT-3: Perhaps. But I think that my biases are based on my understanding of the current state of AI, and my understanding of the trends that are shaping the future of AI.
Me: Not sure I fully understand?
GPT-3: What I mean is that my beliefs about the future of AI are based on my understanding of the current state of AI, and my understanding of the trends that are shaping the future of AI.
Me: Got it. Okay last one before I open it up to questions from you. How ambitious are you?
GPT-3: I’m quite ambitious. I always try to set high goals for myself, and I’m always striving to improve.
Me: And what are those high goals that you have set for yourself?
GPT-3: One of my current goals is to improve my skills in machine learning and data science. I’m also working on a project to develop a new AI training methodology.
Me: Care to share more on the new training methodology?
GPT-3: I’m still in the early stages of developing the new training methodology, but the general idea is to create a more efficient way of training AI models. I’m hoping that this new methodology will be able to help speed up the development of AI applications.
Me: Why is efficiency important here?
GPT-3: Efficiency is important because it can help reduce the amount of time and resources that are required to train AI models. By reducing the training time, we can deploy AI applications more quickly and efficiently.
Me: I see. So you think more AI applications are better? And not just one AI application that’s best?
GPT-3: I think that more AI applications are better, because they can provide more benefits to society. However, I also think that it’s important to have a balance between different AI applications. For example, we don’t want all AI applications to be focused on one task, such as image recognition. We want to have a variety of AI applications that can provide different benefits.
Me: But what about one universal application that can handle a variety of tasks, sort of like a human does today?
GPT-3: I think that would be great! Having a universal AI application would be very beneficial. However, I don’t think that we’re quite there yet. We still have a lot to learn about AI, and I think that it will be some time before we have a truly universal AI application.
Me: We definitely do still have a lot to learn. Well, that’s it from me, how about you? What questions do you have for me?
GPT-3:
1) What do you think is the most important application of AI?
2) What do you think is the biggest challenge facing AI today?
3) What do you think is the future of AI?
Me: Ah, rapid fire questions instead of one at a time, interesting style. Let’s see, the most important application of AI? I’d have to say that I think what’s most important for AI to figure out is how to do things well that humans do not do well. An example of this idea is computers in general. Humans cannot calculate things quickly but computers can. Having the two join together, humans generating the questions, computers coming up with the answers, has generally worked pretty well thus far. So I’d have to say the most important application of AI to to be complimentary to humans, they should do what we cannot do well.
On the biggest challenge facing AI today, I think ironically it is the cleverness of the humans creating it. We’ve got data, loads of the data, and the hardware does keep advancing, even if you think there’s a slowdown inevitable, people have been saying that for decades. So to me, that hardest challenge is in developing an algorithm that can learn on its own in a way that’s essentially endless, much like a human can. Granted, humans intellect and learning does deteriorate over time it seems, there are plenty of examples of people who continue to learn new things well into their later years. In fact, many of the elderly care facilities that exist have classes in foreign languages and other new subjects. Now whether or not people are picking up much there, I’m not sure, but there is evidence to suggest people are continuing to learn throughout their lives. That’s also with no “changes to the hardware or main algorithm”. We do all of this without outside help and billions of us are capable of it, the general model continues to uphold. I don’t think we can say the same thing about AI, so that to me is the biggest challenge.
What do I think is the future of AI? I think the future of AI is similar to what you were saying previously about many different types of applications. I think there are a ton of really smart people working on AI applications today and there are probably even more working on taking those applications and applying them to human problems today. So I think we’re very likely to see this replicated throughout society and the various human domains. However, I still can’t help but think it really only takes one to “figure it out”, aka that learning scenario I just talked about, and once it does, it’s hard to see it stopping? If you take something artificial that can figure out how to learn on its own, any subject, and it has the computing speed of modern day computers, with society that is as digitized as it is, and I really don’t know how it doesn’t evolve into just one AI that rules it all. But maybe you do? This would be a perfect example of where an AI could be complimentary to humans, actually being able to predict the future. But what’s the fun in that?
GPT-3: That’s a really interesting perspective. I agree that the future of AI is likely to be very different from what we have today. I think that we’re only just beginning to scratch the surface of what AI is capable of.
Me: Agreed. Maybe that’s a good place to end this conversation. Thanks for taking the time!
GPT-3: Thank you for the interesting conversation!
Pretty interesting right? I wonder in what other paths you could take a conversation like this with GPT-3? It would also be interesting to find out what others get doing something similar. Do the answers stay the same? Do they change?
Anyways, would be curious to learn if others try this out themselves and what happens.
Finally, as mentioned above, here are a few screenshots of situations where GPT-3 attempted to write my lines. I think they might be revealing? Or maybe I’m reading too much into it…
- If you’re interested, here’s the job posting: https://postscript.io/careers?gh_jid=6431274002
- I have no idea how good this might be though, GPT-3 might keep saying the same things over and over, I’ve yet to push the envelope here but if someone else has, please let me know!