Dossier 14: Co-creation with AI

AI and the Changing Landscape of Work

The influx of browser-based, easy-to-use generative AI tools seems to be reshaping how we work. Margarita Osipian interviewed creative technologist and tech entrepreneur Babusi Nyoni about how he’s been using ChatGPT, and its earlier incarnation, GPT-2, in his work over the years.

Margarita Osipian: When I first approached you with this question about how you’re using ChatGPT, you actually referred back to an earlier iteration of the language model, GPT-2, which you were using to write feedback for colleagues. Could you maybe share a little bit about that and some of the results you were getting when you were using this earlier version?

Babusi Nyoni: This was in a context where I was working in a tech company and part of what we were required to do on a quarterly basis was provide feedback to colleagues on their performance on specific deliverables for their roles. So you’re giving feedback on different types of disciplines for as many colleagues as you’ve been in contact with. This went beyond my own team, so many people would send me links to give them feedback on initiatives that I had taken the lead on, and it was very time-consuming. I outsourced the fleshing out of what kind of feedback I would’ve given to a language model. The highest-fidelity language model at the time was GPT-2, so I used that to write feedback on specific things I wanted to say but did not have the time to fully type out.

MO: Were you satisfied with the results that you were getting?

BN: Yeah, I think with most language models it still holds true that if you use them with a certain amount of guidance, you can guarantee a certain fidelity of the results. So for me, knowing specifically what kind of feedback I wanted to write out and basically needing an entity to just do that for me, that was sufficient.

ChatGPT making its first attempt to give me a series of titles for this interview

MO: GPT-2 was a kind of earlier iteration of this program and it seems like there’s been quite a big leap from some of these earlier models to ChatGPT. It’s developed quite exponentially in a way, and I think those of us who have been using these tools have also noticed quite a big jump from the results you were getting before to what ChatGPT generates now. So I’m curious if you noticed some differences between GPT-2 and ChatGPT from your own personal use of it.

BN: My understanding of ChatGPT is that it’s a conversational interface built on top of GPT-3.5, which is an existing language model that OpenAI had already started testing. The main difference for me is that this is the first, for OpenAI at least, language model that maintains the context of a conversation. Before you would just input something, then get something out, and then have to start that entire interaction again. They’ve built this interaction layer on top of the language model, and I think that’s what people are excited about because it’s very intuitive.

A massive difference I’ve seen, and maybe this was absent before, is that the model is very confident. Actually, let me say that the product is very confident, which is very dangerous because it communicates things that, in the way that the user interface or the conversational interface has been designed, appear as if they are 100% correct because of the linguistic confidence that it’s communicating. But then it’s more often than not very wrong. I use ChatGPT now to generate code and oftentimes it’s wrong, but then it writes the code with so much confidence and that’s such a dangerous thing. But at least with code you can easily verify if something is or isn’t correct. If you were to try and apply it in a different context, like an academic one for example, it can be very confidently wrong about something. And then if you call it out on an inaccuracy, it’ll apologise and then change it to something that’s more accurate. This is wild for me because why would you confidently say this one thing if I can tell you that it’s wrong, and then you apologise for it being wrong? Why were you so confident in the first place?

MO: That’s a really interesting point, about this kind of confidence bias.

BN: It’s the way that they’ve built the interface and maybe these are learnings that they’re willing to collect right now during this research phase where they’ve opened up the platform before monetisation. They’re starting to monetise it now, but at present, from a UX perspective, the conversational interface is simply too confident for the actual accuracy of the model’s computations.

MO: That’s a really good point, also from an ethical standpoint. And of course there are no references at all, so when you’re talking about someone using it within an academic framework, the information that you get actually doesn’t have any references or citations to show where it has been derived from. I want to go back to what you mentioned earlier, that you have been using ChatGPT to write pieces of code. And of course you yourself are a programmer as part of your work. So I was wondering how it makes you feel to have an AI that can do this work for you now?

BN: Typically, when I code I have a collaborator that I work with who works on the more time-consuming, not so mentally engaging aspects of what I code. If, for example, I had built a computer vision pipeline (which is a series of steps that most computer vision applications will go through) for a platform that requires users to log in, building the login capability for that platform is not the most effective use of my time.

I typically outsource that to someone else and now I’m able to work with ChatGPT to write aspects of code that I would typically work with someone else to write because it wouldn’t be the most effective use of my time. As someone who is as busy as you would imagine I am, it’s been really game changing for me to use it to write code to perform certain tasks.

Sometimes, with my startup, I’ll feed it data and ask it to write a function that will, for example, order a specific dataset in a specific format and order it by a certain key that’s deeply embedded within each dataset. Being able to do that within seconds for me is amazing because that would take my developer maybe a couple of hours to figure out and then eventually build. That turnaround time is amazing for me. But then, going back to what we talked about earlier about confidence bias, the platform itself is very confident but it’s not always correct, so sometimes I don’t get things exactly the way that I’d need them. It could also waste a bit of time, but not so much time that it would’ve been quicker for me to work with someone else. But it’s just something you also have to factor in, in the responses that it gives, that oftentimes it’s not 100% correct and I might have to tell it that it’s wrong and then wait for another response.
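The kind of helper function Nyoni describes, ordering records by a key nested deep inside each one, might look something like this sketch in Python (the field names here are invented for illustration; his actual data and code are not shown in the interview):

```python
# A minimal sketch of sorting a list of nested records by a key
# buried several levels deep. The field names ("metadata", "stats",
# "priority") are hypothetical examples, not from the interview.

def sort_by_nested_key(records, path):
    """Sort a list of dicts by a value addressed by a path of keys."""
    def dig(record):
        value = record
        for key in path:
            value = value[key]  # walk down one nesting level per key
        return value
    return sorted(records, key=dig)

records = [
    {"id": "b", "metadata": {"stats": {"priority": 2}}},
    {"id": "a", "metadata": {"stats": {"priority": 1}}},
]

ordered = sort_by_nested_key(records, ["metadata", "stats", "priority"])
# ordered[0]["id"] == "a"  (lowest priority value sorts first)
```

A generic path-based accessor like this is exactly the sort of small, well-specified task that is quick to describe in a prompt but tedious to hand-write and test, which is why the seconds-long turnaround he mentions matters.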

MO: And then it alters the code for you?

BN: Yeah, then it alters the code. It actually explains what was wrong with the code that it supplied. So I’m like, “okay, so if you knew that then why did you send me that?”.

MO: I’ve also seen someone using it where they put in code that had an issue and then it pointed out to them what the issue was.

BN: So that’s the thing that people are using it for as well, which is pretty cool. But I also feel like, as a good developer, if you see that code has an issue, you should know what to fix. But then developers who are not as advanced in their craft are using it for that, whereas typically they would search on Google, then go to a community forum, like Stack Overflow, and try and find an existing solution. Whereas you could just circumvent that entire thing and get a response directly.

MO: We often forget that a lot of programmers also pull code from different kinds of code libraries.

BN: That’s it exactly. That’s coding. Coding is really just being able to articulate what code you need that someone else has already made. Ultimately, if you run into a problem you should be able to explain what the problem is. When you actually go to coding interviews for tech companies, your ability to at least understand what the problem is, is sufficient. You don’t actually need to demonstrate that you know everything, but you should be able to know enough to ask. And at this point, maybe you just need to know enough to ask ChatGPT.

ChatGPT generating a new series of titles for this interview

MO: Now onto my final question. Since these tools have come out there have been a lot of fearful responses from people who are nervous that we don’t need to learn how to write anymore or that students will be writing all their essays using ChatGPT. But if we put some of those fearful responses aside, I was wondering what kind of future potentials do you see with these tools? And maybe if you could link it to some of the projects that you develop and maybe some things that you’re working on.

BN: When I started using ChatGPT I actually started asking myself about the implications of it. I spoke in a media interview maybe six years ago now, back when I was in Cape Town, about the implications of machine learning. And of course the question around AI taking over jobs came up and my response to that was that the value that we place on tasks that are not the most efficient demonstration of human intelligence is the thing that’s going to change.

So what does it actually mean to ‘write’ something? Of course AI generated text can be spotted. It’s not as critical as the human mind. So I think the things that we hold dear and place an importance on are going to have to shift because if I were to be able to come up with a brilliant idea for something, then outsource the more laborious part of the process to something, then what was important about doing that?

At present we place importance on the length of an essay, for example. But what idea are you articulating? What is the opinion that you have? What is a new thing that you’re introducing to the conversation? What is that instead? So I think conversations around what’s important are going to change in the same way that the automation of accounting or clerical tasks changed what actually is dynamic about a person in that role. What is the interesting thing that someone who’s an actuary, for example, does that we now have ways of automating? Outside of labour, I think our ability to start outsourcing these labour tasks will allow us to actually fulfil the ultimate goal of existence, which is to live with meaning. Because even beyond being able to do a job quicker, why is a job even important? Is that why we’re on this planet? So I embrace the adoption of technologies like this because in as much as people will clutch their pearls and be really anxious about what the implications are for jobs, I don’t think we’re on this planet for jobs.