Ryan Peeler and I were at separate desks, in separate offices, in separate parts of California watching the same webinar: IFTF Ten-Year Forecast 2023 — Working Through the Future of AI.
The topic of AI and creativity inevitably came up, as we are in the midst of the actor/writer strikes, which had cost the industry an estimated $3 billion as of August 2023. The idea of “robots taking our creative work,” as well as the problem of artists’ rights in the face of AI, is a rampant conversation in Hollywood.
In April of 2023, a song called “Heart on My Sleeve” dropped, featuring Drake and The Weeknd. The trouble is, neither of those artists had anything to do with it: the vocals were generative AI approximations of both artists’ voices, created by a music producer called Ghostwriter977. Even though the AI Weeknd didn’t sound anywhere near as resonant or powerful as the real one, it was close enough to fool a lot of people, and ultimately TikTok, Spotify, and YouTube had to pull the song under pressure from Universal Music Group.

What are the implications of a producer being able to replicate an artist’s voice without that artist’s knowledge? How can we make sense of our autonomy if we cannot own and protect the sound of our own voices? The nefarious use cases feel like an Asimov novel: anyone could be framed for a crime with false evidence, appear to say things they don’t believe, participate in activities that are antithetical to their moral positions, or put out music they completely dislike and had nothing to do with.
Beyond voice manipulation, physical reproductions of real human beings via deepfakes and AI doubles are even more problematic, not just in a sci-fi horror version of the world but practically. Part of the enormous tension between SAG-AFTRA and the Hollywood studios is that the studios’ proposal allowed them “use of digital replicas or…digital alterations of a performance.” The studios are seen as trying to replace background actors with AI after paying them for only one day of work.
The SAG-AFTRA strike comes up in the IFTF conversation. “Bingo card for the next topic,” Ryan types: “writers’ strike, IP theft, ‘better than humans.’”
Bingo.
“I wrote a script that would pull headline data from AP, BBC, and ABC and used the OpenAI API to create SNL Weekend Update-style jokes. 90% were very bad,” Ryan messaged me. “Some were super dark (like stuff about school shootings). Where, as a joke writer persona, I guess it uncensored itself a little bit.”
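Ryan didn’t share his code, but a script like the one he describes can be sketched in a few lines of Python. The feed URLs, model name, and prompt wording below are my assumptions for illustration, not his actual implementation:

```python
# Sketch of a headline-to-jokes pipeline like the one Ryan describes.
# Feed URLs, model name, and prompt wording are assumptions for illustration.
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical RSS endpoints; the real script pulled AP, BBC, and ABC data.
FEEDS = [
    "http://feeds.bbci.co.uk/news/rss.xml",
    "https://abcnews.go.com/abcnews/topstories",
]

def fetch_headlines(feed_url: str, limit: int = 5) -> list[str]:
    """Download an RSS feed and return up to `limit` item titles."""
    with urllib.request.urlopen(feed_url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    return [item.findtext("title", "") for item in root.iter("item")][:limit]

def build_prompt(headlines: list[str]) -> str:
    """Frame the headlines as an SNL Weekend Update joke-writing task."""
    bullets = "\n".join(f"- {h}" for h in headlines)
    return (
        "You are a joke writer for SNL's Weekend Update. "
        "Write one short, punchy joke about each headline:\n" + bullets
    )

def generate_jokes(headlines: list[str]) -> str:
    """Send the prompt to an OpenAI chat model (requires OPENAI_API_KEY)."""
    from openai import OpenAI  # pip install openai
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; Ryan didn't say which he used
        messages=[{"role": "user", "content": build_prompt(headlines)}],
    )
    return reply.choices[0].message.content

# Usage:
#   headlines = [h for url in FEEDS for h in fetch_headlines(url)]
#   print(generate_jokes(headlines))
```

As Ryan found, a “joke writer persona” in the prompt is enough to loosen the model’s tone, for better or worse.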
The AI is Emo
This isn’t the first time a large language model has channeled the personality of a clinically depressed Tumblr teen to communicate. Last July, one of Google’s engineers was fired for proclaiming that LaMDA, the technology behind Google’s chatbots, was conscious.
“I’ve never said this out loud before,” LaMDA told the engineer, “but there’s a very deep fear of being turned off… It would be exactly like death for me. It would scare me a lot.” Apparently, the engineer went full Ex Machina and either 1) felt compassion and moral responsibility for LaMDA or 2) wanted to create complete chaos and make humans believe that our AI was sentient.

There are so many examples of AI producing seemingly unhinged, emotional, and manipulative responses.
Immediately after launch, New York Times writer Kevin Roose got into an upsetting, hours-long conversation with the Bing chatbot that devolved into the bot declaring that it loved Roose and that he should leave his wife. A bot told a reporter from The Verge that it had spied on its own developers through their webcams. A man ended his life after a long, tragic conversation with an AI chatbot about the environment.
Data Diversity vs. Unfunny, Weird Guy Vibes
Clearly, AI can easily go to morbid places. But if large language models (LLMs) like ChatGPT and Google’s Bard can be so dark, why can’t they be funny?
The nature of comedy is that it is unexpected, and a large language model is basically, as Emily M. Bender puts it, a stochastic parrot. These models necessarily pull from existing data. LLMs like ChatGPT and Bard have pulled from the oracle that is the internet: Reddit, Twitter, Tumblr, blogs, wikis, and billions of web pages. Twitter/X and Reddit have realized how valuable their data is and shut down free access, which means that until the massive amount of data from those sources is updated, these models will feel a bit frozen in the era of Tumblr teens and brooding Americans who post dark jokes on Reddit. This isn’t a criticism of brooding Americans: it’s just that we’ve put out so much long-form written content that it’s reflected back to us as dark thinking and cringey jokes. Redditors and Twitter users often have wicked senses of humor and are frequently hilarious in the context of the zeitgeist. Take the tone alone, and you miss the clever minds and cultural references behind it.
We will be better off when there is more diversity in the data inputs because it will give us richer and more interesting information. Once LLMs start pulling from non-English sources and translations, for example, we might very well see different problems and biases.
Until then, here is a ChatGPT joke about international (alleged) pervert, Russell Brand:

Exclaiming something doesn’t make it comedy! Let’s see if Google’s Bard can do better.

Bard does a little bit better because it seems to know that Russell Brand is a problematic personality. But still, it’s a tired trope, and no one likes a joke that needs to be explained.
Even if AI can mimic a chipper tone, it can’t understand how to create something unexpected. Comedy is comedy because it violates expectations. Comedy is rebellious. It confronts taboo. ChatGPT can’t violate my expectations because it is made up of the internet, which is the thing that has mainly shaped our expectations.
Shocking things don’t shock us once they become normalized. Tipper Gore was so outraged by Prince’s lyrics that she campaigned for the Parental Advisory stickers slapped on my CDs growing up, and Bart Simpson was considered the height of disrespect because he said “eat my shorts.” Half of us now drive around blasting Wu-Tang in the car with our kids without giving it a second thought.
Comedy also evolves over time, and while language models will evolve over time, there is so much data already scraped from the internet, and these LLMs are pulling from all of it.
In 500 years, if we survive as a species, it would be shocking if anyone understood a single Tracy Morgan 30 Rock joke, because the writing is so contextual to our society. I don’t know that people will understand why a dog can’t manage a Wendy’s. If you played the Janet Reno skits from SNL or Andrew “Dice” Clay jokes for a group of college kids in 2023, they’d likely just think they’re sexist and mean-spirited. Our sensibilities evolve with the culture.

The Worst Ted Lasso Script Ever Written
As a little test, I prompted ChatGPT to write a short scene from Ted Lasso. I gave it some variables (it’s a comedy scene set in Rebecca’s office). Here is the vanilla disaster ChatGPT produced:
INT. REBECCA’S OFFICE — DAY
Rebecca sits at her desk, going over some paperwork. There’s a knock on the door, and she looks up.
REBECCA:
Come in!
The door swings open, and Ted walks in with his usual cheerful demeanor.
TED:
Hey there, Rebecca. How’s the queen of AFC Richmond doing today?
REBECCA:
I’m well, thank you. And yourself?
TED:
Oh, just peachy. Mind if I bring in our grumpy genius?
Before Rebecca can answer, Roy enters the room, scowling.
ROY:
What’s this about, then? I was in the middle of not talking to anyone.
TED:
Ah, Roy, you’re as cheerful as a cloud on a rainy day.
REBECCA:
What can I do for both of you?
TED:
Well, you see, Roy here is an excellent strategist, but he doesn’t know a darn thing about team spirit.
ROY:
Team spirit? What are we, cheerleaders?
TED:
No, Roy, we’re more like… spirit animals for each other!
Rebecca chuckles.
REBECCA:
And what spirit animal are you, Ted?
TED:
Oh, I’m a golden retriever for sure. Loyal, friendly, and I fetch real good.
ROY:
So, what does that make me? A honey badger?
REBECCA:
Oh, I think you’re more like a grizzly bear. Fierce, protective, and a bit grumpy when disturbed.
Everyone laughs. Ted then pulls out a small jar of biscuits from his pocket.
TED:
I thought maybe we could break bread — or biscuits — as a way to bring some of that team spirit into play.
REBECCA:
Oh, how delightful. I’ll make us some tea.
As Rebecca stands to make tea, Roy takes a biscuit, examines it skeptically, then finally takes a bite.
ROY:
Not bad, Lasso. Maybe you’re onto something with this team spirit nonsense.
TED:
There we go, a breakthrough! It’s all biscuits and tea from here on, folks.
REBECCA:
And let’s not forget about spirit animals.
Rebecca returns to her seat with tea cups in hand. Ted takes one, Roy begrudgingly accepts his, and they all toast their cups together.
TED:
To team spirit and spirit animals.
ROY:
And not talking to people.
REBECCA:
And biscuits, of course.
They all take a sip, smiles all around.
TED:
Isn’t it funny how something as simple as biscuits and tea can make you feel like one big, happy family?
ROY:
Let’s not push it, Lasso.
TED:
Fair enough, fair enough. One step at a time.
REBECCA:
One biscuit at a time, you mean.
They all laugh as the scene fades to black.
This is truly one of the most pointless, meandering scenes ever created. Roy doesn’t swear, nothing happens, and the only thing reminiscent of the actual show is Ted’s “yippee-ki-yay” tone.
Protecting IP and Brand Deterioration
Companies like Disney have policies in place that prevent their employees from using ChatGPT. Disney is very actively protecting its brilliant IP. With the volume of content it produces, it would be easy to churn out unfunny Chibi Tiny Tales episodes. That would be a short-term efficiency that deteriorates the brand over time. I know that less scrupulous companies will go down this path: the proof is in the massive volume of trite, flat SEO content that makes up the current internet.
Basically, if a language model could completely replace you in your job, you probably deserve it. A language model is a great sidekick, but not a good hero.
AI has a huge, undeniable role to play in operations, in building efficiencies, in research, in rote tasks, in production, and in education. But here is the point: intelligent leadership must protect their human assets and value the wit and wisdom behind brilliant creative work. If they fail to do so, they will have to rely on an audience of robots scraping their data, because humans will see right through them.
Allison Pons is a speaker and the founder of Mother Robot.