What gets lost in the AI debate: It can be really fun

   

The viral fake Drake and The Weeknd song tells us a lot about the future of AI.

A viral fake Drake and The Weeknd song, which an anonymous user posted online and claimed to make using AI, shows how good AI is getting at entertaining us.


You’ve probably heard a lot lately about AI.

Everyone from Elon Musk to Joe Biden has worried that AI could take over our jobs, spread misinformation, or even — if we’re not careful — one day kill us all. Meanwhile, some AI experts say that instead of fixating on hypothetical long-term doomsday scenarios, we should focus on how AI is actively harming people right now, and on the concentration of power in the handful of companies controlling its development. Already, the error-prone technology has been used to invent slanderous lies about people, hack bank accounts, and mistakenly arrest criminal suspects.

But near-term and long-term concerns aside, there’s a major reason why it feels like AI is suddenly taking the world by storm: It’s fun.

Over the past few weeks, I’ve been playing around with the latest AI tools and talking to people who use them. I’ve found that the most exciting forms of AI right now are not the kind people are using to increase productivity by crunching spreadsheets or writing emails. (Although bosses love that idea!) They’re the kind being used to entertain us.

In just the past six months, AI has come an incredibly long way in helping people create all kinds of media. With varying degrees of instruction, AI can craft photorealistic illustrations, design video games, or compose catchy tunes with top-40 potential.

So what should we make of the fact that people are enthusiastic about using a technology that clearly has serious flaws and consequences?

“I think it’s *completely* reasonable for people to be excited and having fun,” Margaret Mitchell, chief ethics scientist for AI platform Hugging Face, wrote in a text. Mitchell previously founded the Ethical AI team at Google, where she was controversially fired after co-authoring a paper calling out the risks associated with the large language models that power many AI apps. Mitchell and her co-authors were prescient early critics of the shortcomings of recent AI technology — but she acknowledges its potential, too.

Pietro Schirano is a design lead at financial services startup Brex. He was also an early adopter of GPT-4, the latest iteration of the technology from the company behind the viral ChatGPT app, OpenAI.

When GPT-4 came out in March, Schirano couldn’t wait to use it. He decided to test its ability to write working lines of code from simple prompts. So Schirano set out to recreate the video game Pong because, in his words, “it was the first video game ever, and it would be cool to do it.”

In less than 60 seconds, after feeding GPT-4 a few sentences, copying the code, and pasting it into a code engine, Schirano had a working Pong he could play with. He was amazed.
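For readers curious what that loop can look like in practice, here is a minimal sketch using the OpenAI Python library. The prompt wording, model name, and the step of pasting the output into a browser are illustrative assumptions, not Schirano’s exact workflow.

```python
# A minimal sketch of the prompt-to-code loop described above, using the
# OpenAI Python library. The prompt text is illustrative; Schirano's exact
# prompt and setup are not public.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Write a complete, playable Pong game as a single HTML "
                   "file using JavaScript and the canvas element.",
    }],
)

# The generated code can then be copied into a browser or an online code
# playground (the "code engine" mentioned above) and run immediately.
print(response.choices[0].message.content)
```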

“That was the first time that I had this sort of, like, ‘oh shoot’ moment where I’m like, oh my god,” he said. “This is different.”

His tweet posting a video of the process went viral.

When I asked Schirano if he worries about AI replacing the jobs of people like him, he said he wasn’t too concerned. He says he uses ChatGPT at work to help him be more productive and focus on higher-level decision-making.

“The way that I see these tools is actually not necessarily replacing us, but basically making us superhuman, in a way,” said Schirano.

As my colleague Rani Molla has reported, many workers are in the same camp as Schirano. They don’t think their jobs can fully be replaced by AI, and they aren’t particularly terrified of it — for now.

I talked to Ethan Mollick, a professor at the University of Pennsylvania’s Wharton business school, about the wide range of reactions to new AI tools. Any kind of new “general purpose technology,” Mollick said — think electricity, steam power, or the computer — has the potential for major disruption, but it also catches on because of unexpected, novel, and often entertaining use cases. New advancements in AI like GPT-4, he added, fit squarely into that general-purpose technology category.

AI is “absolutely supercharging creativity,” said Mollick. “How can you not spend every minute trying to make this thing do stuff? It’s incredible. I think it could both be incredible and terrifying.”

These fun, creative uses of AI also raise the question of authenticity: Will it replace human creativity, or merely help us produce it?

Last week, a hip-hop song that sounded like a mashup of the artists Drake and The Weeknd went viral on social media. The song, which the anonymous user “ghostwriter” posted and claimed to have made with AI, racked up millions of plays before major platforms took it down.

The proliferation of this kind of AI-generated media has spooked record labels enough that Universal Music Group asked streaming services like Spotify to stop AI companies from using its music to train their models, citing intellectual property concerns. And last month, a coalition of record industry unions and trade groups launched the “Human Artistry Campaign” to make sure AI doesn’t “replace or erode” artists.

A few artists, though, have embraced the AI concept. The musician Grimes even asked fans to co-create music with her likeness and offered to split 50 percent of the royalties.

Mollick compared the debate around whether AI will replace artists to the introduction of the synthesizer to modern music. When the synthesizer first came out, people debated whether it was “ruining music,” and whether people who used the instrument were real musicians.

Ultimately, the real dangers of AI may lie not so much in the technology itself as in who controls it and how it’s used, Mitchell said.

“My issues are more with tech leaders who mislead and push out tech inappropriately rather than creative people exploring new technology,” Mitchell said.