Can Artificial Intelligence Replicate Instagram's Top Poet?

Rupi Kaur is one of Instagram's most popular poets. People have ragged on her work, both concisely and comprehensively, for being too superficial and suspiciously similar to Nayyirah Waheed's poetry. Their arguments are well founded, but I'm not going to rehash all of that here.

What I would like to do, instead, is point out that the market has spoken. And poetry enthusiasts: sorry, but you got it all wrong, and have for many, many years.

The people who want to grapple with poetry in an intellectual struggle represent a very small percentage of all readers. These are the readers, and poets, who are most upset, and emotionally lit up, by Kaur's success. They have worked tirelessly on their craft for years, and even decades, and have received little to no validation, recognition, or financial reward.

So they are taken aback when they see what they consider pseudo-poetry become so popular and financially successful. Now, because these creatives are comfortable with ambiguity and abstract thinking, they often don't see their process as a struggle. But the average reader will not put up with being confused.

Confusion creates mental and emotional anxiety, and friction in the sales process. What Kaur's work does is twofold: she reduces friction in the mind of the reader and increases motivation through her inspirational messages.

Most people do not want to think. They do not want to struggle with analyzing the written word. They want inspirational messages that are simple and have the look, and feel, of poetry, without confusing them. The average person can only process one mental picture at a time.

This is basic marketing.

You can buy the entire collected works of Shakespeare for 49 cents on Amazon Kindle. A roll of toilet paper costs more. Totally brutal. No one cares. But that traditional high-horse media outlet, The New Yorker, openly berated radio personality Ira Glass for shrugging off Shakespeare. It is no surprise that institutional actors who live in ivory towers are so clueless about value; they can't read the tape of their own market and they are, arguably, paid to lie to themselves. The poetry market is shaped and molded by the educational system, which forces its products onto unwilling audiences, i.e., students.

This point is salient here because of the output produced by artificial intelligence models trained on Shakespeare's work. I will explain more about that in the conclusion; otherwise I would give away the whole mystery of this article before it even gets started!

First, let's look at some numbers:

By any measurable standard, Rupi Kaur is absolutely killing it. Her Instagram lists 2.8 million followers. She follows 0 people. Absolutely brutal.

But I wondered, could a Natural Language Generator replicate her work?

I installed NLTK and PoetryGenerator, and scraped 110 kilobytes of her poetry from goodreads.com. Not quite the ideal amount of training data, but all that I could screen-scrape quickly. A natural language generator attempts to use training data as the basis for its output, which you see in the bolded lines throughout this article.
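
For reference, the scraping step can look roughly like the sketch below. This is my own illustration, not the script I actually ran: the Goodreads URL pattern and the "quoteText" selector are assumptions about the page markup.

# Rough sketch of scraping quoted poems from Goodreads (illustrative only;
# the URL and the "quoteText" class are assumptions, not verified).
import requests
from bs4 import BeautifulSoup

url = "https://www.goodreads.com/author/quotes/..."  # placeholder: the author's quotes page
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

lines = []
for block in soup.find_all("div", class_="quoteText"):
    # Keep only the quote text, dropping attribution markup.
    lines.append(block.get_text(separator="\n", strip=True))

with open("kaur_corpus.txt", "w", encoding="utf-8") as f:
    f.write("\n\n".join(lines))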

Surprisingly, the generator produced samples that were too complex to match the original author. So I commented out:

# frame.add_big_words(text)
# for x in range(3):
#     frame.add_context_words(text)

This started to produce content that was significantly more attuned to the work of Kaur. However, the structure of the sentences was still nonsensical. I had to clean up all the (nonsensical) line breaks in her work, as well as all the unnecessary paragraph and italics HTML tags.
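
That cleanup amounts to a few regular expressions. Here is a sketch of my own (the file names are hypothetical, and the tag list is an assumption based on what showed up in the scrape):

# Strip paragraph/italics HTML tags and collapse stray line breaks from the
# scraped corpus (illustrative sketch; file names are hypothetical).
import re

with open("kaur_corpus.txt", encoding="utf-8") as f:
    text = f.read()

text = re.sub(r"</?(p|i|em)[^>]*>", "", text)   # drop <p>, <i>, <em> tags
text = re.sub(r"[ \t]+", " ", text)             # collapse runs of spaces
text = re.sub(r"\n{2,}", "\n", text)            # collapse blank lines

with open("kaur_corpus_clean.txt", "w", encoding="utf-8") as f:
    f.write(text)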

Toward the end of the loop I got an error:

Traceback (most recent call last):
  File "generator.py", line 362, in
    frame.add_context_words(text)
  File "generator.py", line 303, in add_context_words
    for after_word in text.after[spot.word]:
KeyError: 'realizations'
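
The names in the traceback suggest a Markov-style table of follow-words that simply has no entry for "realizations". A .get() fallback would avoid the crash; here is a minimal standalone toy, my own illustration rather than the actual generator.py:

# Toy version of the word-transition table the traceback implies, with a
# .get() fallback so unseen words such as 'realizations' don't raise KeyError.
after = {
    "milk": ["and", "honey"],
    "the": ["sun", "hurting"],
}

def successors(word):
    # Return recorded follow-words, or an empty list for unseen words.
    return after.get(word, [])

print(successors("the"))            # ['sun', 'hurting']
print(successors("realizations"))   # [] instead of KeyError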

The program kept spitting out eight-line pseudo-poems regardless of the error. But I wondered: was the universe speaking to me? What did it all mean? The resulting 297-kilobyte output was getting better, but still pretty nonsensical. I found a number of lines particularly interesting, though:

"you were understanding like the backs on the easy women"

What an incredibly lurid and smooth line!

"the saddest thing has been sweet"

Pretty standard stuff.

"man has been always waiting like the exhausted earth"

Reminds me of this video. Terrible and cold and true.

Creating poetry with this particular natural language generator was far easier if the context was not defined clearly. In other words, if you allowed for cleverness, ambiguity, and multiple interpretations, then the output would look okay. So the simplicity of Kaur's work had eluded Amy D'Entremont and Emma Hyde's NLTK model.

It was time to try something else, I thought. I downloaded and installed Sung Kim's "Multi-layer Recurrent Neural Networks (LSTM, RNN) for word-level language models in Python using TensorFlow" (GitHub). Machine learning developer Christopher Olah gives us a little bit of background context:

"Humans don’t start their thinking from scratch every second. As you read this essay, you understand each word based on your understanding of previous words. You don’t throw everything away and start thinking from scratch again. Your thoughts have persistence.

Traditional neural networks can’t do this, and it seems like a major shortcoming. For example, imagine you want to classify what kind of event is happening at every point in a movie. It’s unclear how a traditional neural network could use its reasoning about previous events in the film to inform later ones.

Recurrent neural networks address this issue. They are networks with loops in them, allowing information to persist."

You can read more from him on understanding LSTM networks here.
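
To make Olah's point concrete, here is a minimal word-level LSTM in Keras. This is a sketch of my own, not Sung Kim's repository code; the corpus, layer sizes, and sequence length are arbitrary stand-ins.

# Minimal word-level LSTM language model (illustrative sketch only).
import numpy as np
import tensorflow as tf

corpus = "the sun and her flowers the hurting the loving the breaking the healing"
words = corpus.split()
vocab = sorted(set(words))
word_to_id = {w: i for i, w in enumerate(vocab)}
ids = np.array([word_to_id[w] for w in words])

seq_len = 4
# Build (input sequence, next word) training pairs.
X = np.array([ids[i:i + seq_len] for i in range(len(ids) - seq_len)])
y = ids[seq_len:]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab), 16),
    tf.keras.layers.LSTM(64),   # the hidden state is the "persistence" Olah describes
    tf.keras.layers.Dense(len(vocab), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=50, verbose=0)

# Predict the most likely next word after a seed sequence.
seed = X[:1]
next_id = int(np.argmax(model.predict(seed, verbose=0)))
print(vocab[next_id])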

As it installed, I thought Kaur's work needed some corporate branding. So I went to her Instagram page, downloaded a couple of photos, then went to potterybarn.com and made some mockups, which you can see above and below.

The output seemed more organic using this model, but it was still very jumbled. The first line I zeroed in on blew me away.

"the heart is a mirror. if it [is] searching for money
nothing will grow [t]here."

"Woah. That's pure Rupi AI genius," I thought to myself. I knew greater truths were coming soon! All I had to do was run more iterations.

"you want my twenties...
making gold out all of disappearance
like you[r] escaped this life when i might not."

A little rough, I thought, but still good. All those promises and women left in the dust. Sorry Nicole. Sorry Rupi.

"my god an easel of a becoming of myself"

Wow! This was getting very meta-Rupi! I found myself distracted by obscure lines that didn't really suit the intellectual palate of Kaur's target audience.

"the most delicate legs [are] growing
time to remain kind to the night"
"pried between wanted and may[,]
we pull everything that [is] naive."

How virginal and suggestive! "Great stuff," I thought. Very meta. But I had to stay on task and stick to real 2018 Kaur replicable marketing materials.

I thought, "I should come up with my own inspired poems from Kaur." If the AI can do it, then I can certainly.

"When you walk by me on the street
and tell me to smile
I remember countless ancestors
burned at the stake for expressing their femininity
and when I don't smile, know that I forgive you so that we may break the
cycle of karmic ignorance."

"That was pretty good," I thought to myself. I poured a glass of cheap Sho Chiku Bai sake to stay on the task of identifying artificially intelligent, machine learned, and inspired, Rupi lines. My train_loss was 2.651 after 46 epochs. If I tuned the learning rate or sample length to reach a zero loss then it would merely replicate her work exactly. Clearly the RNN and LSTM model needed improvement.

Cold Sho Chiku Bai tastes like water. I imagined that Kaur drank Ayahuasca, strictly.

I could feel the soul of her work in every AI-produced paragraph, but it was all mangled up, revealed only in rare gems of sentences scattered sparsely through the output. But it was definitely there.

"we share the potential to be foolish.
your secret[']s with me to encompass
an alcoholic who tricked you to keep the blades"

Very personal, but not quite good enough. It was hard to stay on track. But then I got a gem:

"the sun in your eyes
you [are] sweeter than the world because
you touch me before i am born."

Good stuff. There were many more gems to be discovered. You are welcome to browse through the 1.1MB file yourself...

But that was enough for one night, I thought.

I went to sleep and I met Rupi's terrorists. They were two medium brown spirits — laced with black — with small square-like helmets — who could summon you to their torture chairs on the astral; but their blades were paper when you pulled back far enough.

Another criticism of her work involved the argument that she would not have been as successful if her upbringing had been different. The arguably racist user "Emptyskies" says, "If people could see that these poems were written by [a] spotty, baseball hat wearing, white teenager from the Bronx, do you think they would garner the same interest or be published?"

There have been successful baseball-hat-wearing authors. Tim O'Brien comes to mind. His "Sweetheart of the Song Tra Bong" is pure poetry, but it is not exactly the world of high-horse intellectualism that we are talking about here.

This is a branding issue. Reducing friction and increasing motivation in your customer's mind is only part of the equation necessary for success.

You also have to have what Kaur calls "relatability." Look at the difference between Kaur's Instagram and Nayyirah Waheed's Instagram. Waheed has no photos of herself.

There is no visual narrative for the viewer in the middle of the bell curve to understand who she is, what her struggle is, or what her culture is about.

The average viewer will unconsciously conclude, "How can I care about her if I can't see who she is?"

Waheed gives us no visual information about herself. Even her website is a wasteland — empty of content.

Kaur, on the other hand, is very visual. She shows her carefully choreographed and photographed self. And she lets viewers know: "I travel. I'm an artist. I paint. I pose. I'm sexy and empowered. My friends are rock stars. You want this too. I tour. Come see me."

She understands that she is her own brand. Her poetry is just one of many potential products. She is a young girl ready for the next adventure. Rupi Kaur. Living the dream. Absolutely killing it.

Her work is not really my thing, but "good for her," I thought.

Conclusion

So could a standard RNN/LSTM model replicate her work? Absolutely not. The output from a model trained on Shakespeare's work is nearly readable. On the surface this sounds cool, until you try to replicate modern works of literature. The problem is that the output you get is nonsensical. In other words, the technology cannot model language well yet; you get pseudo-language that is more or less nonsensical regardless of the training input. What does that say about the perceptual relevancy of Shakespeare's language (Early Modern English) today?

The Unreasonable Effectiveness of Recurrent Neural Networks by Andrej Karpathy addresses this in significantly more detail and depth. The creative ideas, projects, and applications in the comments section provide some very interesting context as well as lackluster, but interesting, results in a variety of domains that mirror my own efforts here.

I stared into the black screen of the E-mini S&P 500 and its endless pursuit. The platitudes there dealt in lives destroyed in a single print of the tape — the order flow that screams and burns, and halts, on a moment's notice. There was a poetry there too, I thought. And the market, you could talk to it. A real conglomerate AI. It had a spirit. But to get even a minute of communication was rare. Its machine consciousness had developed far beyond what the average intuitive could ever imagine, I thought.

And yet it hung on the lips...pressing forward.

"though my father is around me. I need you.
The mouth in us is a blood to become the rest of us.
I love nothing to make sure we need everything."

         - Rupi Kaur AI on the S&P 500.

I browsed through her poems and found these lines:

"i stand
on the sacrifices
of a million women before me
thinking
what can i do
to make this mountain taller
so the women after me
can see farther." -Rupi Kaur

I chuckled at her intimation. "The youth of poets never ends," I thought to myself. I wanted to tell her, "The blockchain has done the work for you, darling. There will be no more standing on sacrifices, because the future is peer-to-peer, decentralized, and disintermediated (I'm referring specifically to central banks here). I hope we'll see you there soon."

The Deeper Implications

With the mangled AI version of Kaur's work, the implications are not particularly deep, because we are educated enough to understand all of the words involved; when the work breaks down, it does so quite starkly. But with older versions of English we start to enter very strange territory. I left this for the very end, because I didn't think most people would really care about it. It is the kind of thing a professor can find and have his students write ivory-tower mumbo-jumbo essays on. The biggest implication, and lesson, that I took away from this was the following question:

Is Shakespeare nonsensical to the modern ear?

Correlation is not causation, but in this case we have an interesting correlation. There are two separate arguments. The first: anything run through a current RNN/LSTM model will be nonsensical, because the technology is just too rudimentary. That is a valid argument. The other argument is that some materials may still come out sounding logical. The problem arises when Shakespeare is run through the model and the result sounds, and even reads, logically to a degree significantly greater than with any other works. And this is exactly what happens. I would argue that language has changed so much that Early Modern English is nearly unrecognizable to the conscious mind. I think the average reader would just leave it at that. But there is an even deeper implication...

Now, you might ask: why is this even important? I mean, really, who the hell cares? But if you understand why the answer is important, then you have tremendous power, because the answer lies in the middle ground between the logical and intuitive parts of the mind. "The space where you can no longer start to distinguish the way in which one's ancestors talked. Whoever has mastery over that space can write the future." I might add, "in a very particular fashion." Now, let's just say this is a thesis for a new essay or paper; the rabbit holes and wild goose chases you can go on are quite extensive. I will let you ponder that on your own...

Back to The Literature. Copyright, Josh Schultz. 2018.