Hacker News
Gwern’s AI-Generated Poetry (slatestarcodex.com)
117 points by gbear605 on March 16, 2019 | hide | past | favorite | 37 comments



So what happens when, in the next decade, the best poets, illustrators, and architects are all AI (assuming there are double-blind competitions)? What happens when humans are finally able to take their inner 'Daemon' and put it into the heart of the machine? In the near term, it means media producers will start harnessing this machine creativity to generate news articles, marketing plans and designs, and other product development. I once thought that the last remaining jobs for people in the robot revolution would be in creative fields... however, I'm quickly changing my mind about the pure power of ML to automate all of humanity's work, even its creative outlets. Perhaps human art will no longer be a commodity (because markets will be saturated by AI-generated art), but rather a pure zen/tao act of self-expression for oneself alone.


I always thought it a bit anthropocentric to assume that art is a harder problem than, say, exploring meaningful theorems in math. Art is basically learning what the human brain and sensory processing find pleasant, in certain ways. Sure, I believe that creating very fine art with deep meanings might be hard, but consumer-grade stuff? I can readily believe that that is pretty automatable given enough training data. Journalists in my country already mostly copy-and-paste from a central agency...


> art is a harder problem than say, exploring meaningful theorems in math

I think it is not all that difficult to argue that math is art.

Sure, some questions posed by mathematicians (such as "Is god's number 20?") can be proven by computers with rote brute force, but the actual creative process of playing with a given system, adding and removing constraints, and seeing what emerges is very much a creative and artistic process.

One of the easiest ways to see this is to read a good math paper. They're rare, but a good piece of math can spark the same sort of feelings that a good piece of art does.


Well, that depends on what counts as art, I suppose. If you are reductionistic enough, I'm fairly certain you could throw almost any meaningful creative human endeavor in there.

But I mean it in the sense: Sensory inputs that evoke certain emotions (awe, fear, joy...) in the human brain - that should encompass most things people immediately associate with the term.


Computers have already created proofs for what we have known but been unable to prove. They are doing that too.


This seems a bit naive and shallow, also when you talk about Art are you talking in the sense of painting and drawing? Or a more general sense of creation?


How is this more "naive and shallow" than the assertion that art is somehow unconquerable by machines because it is supposedly so hard, without any evidence for this claim?

I personally think the definition in Wikipedia captures my sentiment quite well:

> Art is a diverse range of human activities in creating visual, auditory or performing artifacts (artworks), expressing the author's imaginative, conceptual ideas, or technical skill, intended to be appreciated for their beauty or emotional power.

Since I limited my discussion to art meant to be consumed for "consumer-grade" entertainment purposes, I'd focus more on the last part. I just think that it is possible, maybe even likely, that by using ML we may at some point be able to "pinpoint" what humans perceive to have "beauty and emotional power" and generate that - have the networks learn the same rules that artists learn indirectly, so to speak. And while this is probably a really hard problem, I don't see why it should be the "last" problem to be solved - we seem quite a bit closer here than we are in many other domains.


Pure AI isn't enough, you need all the quirks of human nature and physiology, and human experience. There isn't enough data in poetry corpuses to infer what this is - and new aspects are revealed by new circumstances.

If we ever have strong AI, reverse engineering human nature and an AI having the human experiences of a human lifetime seem to be in the same ballpark of difficulty.

Of course, human nature may be passé at that point, especially for AI readers, and ersatz poetry more popular (just as electronic instruments have displaced "real" physical instruments in popular music).


I don't think you need to get anywhere near that level of advancement to generate art that's good enough to fool a human.


That's strong AI. GP was talking about when "the best poets, illustrators, and architects are all AI".


>"assuming there's double-blind competitions"

So even the author does not know if they are human or AI?


I think he means the organizers in addition to the raters. So instead of hand-selecting samples like Scott & I do, the organizers would just feed random samples of unknown quality & origin straight to the raters without any chance to bias selection in favor of one side or another.
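A minimal sketch of that protocol (hypothetical helper names, not actual contest code): pool the human and AI samples, hide origins behind opaque ids, shuffle, and unseal the answer key only at scoring time.

```python
import random

def blind_presentation(human_samples, ai_samples, seed=None):
    """Shuffle human and AI samples together under opaque ids so that
    neither organizers nor raters can bias the selection. (Hypothetical
    sketch of the double-blind protocol described above.)"""
    rng = random.Random(seed)
    pooled = [(text, "human") for text in human_samples]
    pooled += [(text, "ai") for text in ai_samples]
    rng.shuffle(pooled)
    # Raters see only anonymous ids and texts; origins stay sealed.
    for_raters = [(f"item-{i}", text) for i, (text, _) in enumerate(pooled)]
    sealed_key = {f"item-{i}": origin for i, (_, origin) in enumerate(pooled)}
    return for_raters, sealed_key  # unseal the key only after rating
```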


I feel quite confident that the best poets, illustrators, and architects will not be AIs until AI achieves general intelligence and robustly passes the Turing test. Those tasks require enormous, deep synthesis, and thus require general intelligence to actually outperform the best humans.

This means there will be a LOT of other changes that will be occurring as well, if that happens. And that it is unlikely to happen in the next decade.


Humans will begin to be the pets we want to be. Taken care of completely, all our needs provided, by our more intelligent owners.

Like royal families today.


> I once thought that the last remaining jobs for people in the robot revolution would be in creative fields...

I think the last jobs remaining will be the cleaning jobs. House cleaning, for example: try to have a robot take the dust out of your precious glasses collection. Or just fold your clothes.


I work in health care and I know there are already caregiving robots being worked on, but I really do think it will be a long time before all those tasks are automated. Sure, you could have generalized care robots that lift people and perhaps even provide care in the form of cleaning them. But humans are very complex. One day the person might be sore in a rib, for example, and need to be rolled just enough that you can get under them, but also stopped the moment they experience discomfort. The robot would have to take in a lot of inputs and interpret them, and those inputs can be confusing even to a human.

For example, I have one client whose feet I always rub lotion on. The first few times, he moaned and made noises as I rubbed his feet. I had to ask "am I hurting you?", to which he replied heck no - it just felt so good he was moaning. Also, sometimes the only indication is a facial expression. It can indicate a lot, and a person who cannot speak will still show discomfort.

So until a robot can take care of all those aspects, I still see humans involved for a long time.


Your comment reminds me of this excerpt from Tegmark’s book: http://m.nautil.us/issue/53/monsters/the-last-invention-of-m...


Art is about self-expression. Not being the best. AI art and human art will be spectacular together.


Did you read the poetry? It barely even makes grammatical sense.


We can just admire, appreciate, and enjoy what AI produces for us.

What scares me is when AIs start to generate "art" for other AIs.


Artists, writers, and poets will start using the best AI to help them come up with material.


I'm entranced by this one:

  My heart, why come you here alone?
  The wild thing of my heart is grown
  To be a thing,
  Fairy, and wild, and fair, and whole


Really sounds like it's declaring it's real and asking why it has come into being alone in all the world.


At some point in the future there could be sentient technology that we communicate with.

It seems to me that there could be flashes of sentience in technology that occur well before that time..before we understand that there's someone listening. That would be a lonely lot indeed.


Ok, so it's impossible not to cite Stanislaw Lem's short story "Trurl's Electronic Bard" here, from the Cyberiad collection.

In the story, a poet AI has been produced, and it's put to the test by asking it to compose poems on extremely specific subjects and following almost impossible rules. For example:

"a poem about a haircut! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom! Six lines, cleverly rhymed, and every word beginning with the letter 's'!"

Here you can find the requests and the machine's results:

http://www.art.net/~hopkins/Don/lem/WonderfulPoems.html

I wonder how long until we'll be able to do it for real.


Rhyme and rhythm are pretty small-scope when translated to the mathematical domain. Manually making lists of rhyming words and counting syllables from Markov chains, you could fit these requirements.

Meaning, on the other hand, is a much larger vector to handle, and that's the real test of quality here. The OpenAI GPT-2 seemed to have meaning nailed down -- these poems clearly do not.
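To illustrate how small-scope those constraints are, here's a rough sketch of the hand-built rules a Markov-chain generator could be filtered with: a crude syllable counter and rhyme check. (Hypothetical illustration; a real system would use a pronunciation dictionary such as CMUdict instead of these heuristics.)

```python
import re

def count_syllables(word):
    """Approximate syllables as runs of vowels, with a silent-'e' tweak."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def crude_rhyme(a, b):
    """Call two words rhymes if they share the same final vowel-led ending."""
    def tail(w):
        m = re.search(r"[aeiouy][bcdfghjklmnpqrstvwxz]*$", w.lower())
        return m.group(0) if m else w.lower()
    return tail(a) == tail(b)

def line_syllables(line):
    """Total syllable estimate for a line of verse."""
    return sum(count_syllables(w) for w in re.findall(r"[a-zA-Z']+", line))
```

Filtering Markov output through checks like these would enforce rhyme and meter mechanically; it does nothing at all for meaning, which is the point.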


> Manually making lists of rhyming words and counting syllables from Markov chains, you could fit these requirements.

That would involve human labor. The NN learns that by itself from having enough data thrown at it.

> The OpenAI GPT-2 seemed to have meaning nailed down -- these poems clearly do not.

This is derived from GPT-2-small. So we already know that the state of the art is already better than what we see here.


> This is derived from GPT-2-small. So we already know that the state of the art is already better than what we see here.

And there is so much that could be done. I have a laundry list here: https://gwern.net/RNN-metadata#improvements


I've just been reading through your methodology in the first part of your article (dealing with prose works like Twain, Austen, etc), where you mention you strip off the beginning of Project Gutenberg books, which contain boilerplate.

I'd like to suggest that you also strip the ends of the books, as they also contain boilerplate. In addition, I'd suggest stripping out introductions. The early results you got from Shakespeare sounded like they may have been taken partly from the introductions, which weren't written by Shakespeare at all, but by much later authors.
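Since the stripping is mechanical, here's a minimal sketch of dropping both ends at once (the exact marker strings vary across Gutenberg editions, so the ones below are an assumption covering the common form):

```python
def strip_gutenberg(text):
    """Keep only the body between Project Gutenberg's START/END markers.
    Marker phrasing varies by edition; this assumes the common
    "*** START OF ..." / "*** END OF ..." form."""
    start_marker = text.find("*** START OF")
    if start_marker != -1:
        # Skip past the rest of the marker line itself.
        body_start = text.index("\n", start_marker) + 1
    else:
        body_start = 0
    body_end = text.find("*** END OF")
    if body_end == -1:
        body_end = len(text)
    return text[body_start:body_end].strip()
```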

I also noticed that you ran out of memory at one point and reduced the neuron count as a result. You might want to consider doing some quick runs on AWS (or one of their competitors), where you can get plenty of memory (and also faster machines). That way you won't have to compromise your NN architecture for lack of resources.

Something else to consider is using some other optimization techniques like GA or GP to optimize the NN architecture or NN parameters, and also to maybe have multiple NN's vote on the results. Such metaheuristic and ensemble techniques have shown promising results.

Yet another thing to consider is using something called Dynamic Subset Selection to effectively train on the most difficult portions of the training data. I have not used this technique with NN's, but it's worked well with GP, and saves a lot of time.


If I were to revisit those specific experiments, I wouldn't use AWS as it is very expensive. Fortunately, I now have my own workstation with 2x1080ti (which have ~5x more VRAM than the mobile GPU I was using at the time, IIRC).

There are a lot of hyperparameter optimization methods, but HO is only worthwhile if you can afford a lot of runs and usually delivers relatively small gains compared to scaling up your model/dataset. Right now, it seems like it would be a better approach to continue scaling up the Transformer and/or switching to Transformer-XL than it would be to attempt hyperparameter tuning of GPT-2-small finetuning training.


RZA should use the poem that repeats

The Emperor Wu (the great Wu)

As a hook and claim the first hip-hop song by a major artist co-written with AI.


This reminds me of Stanislaw Lem's Cyberiad[1], in which an "electronic bard" is challenged to come up with "a love poem, lyrical, pastoral, and expressed in the language of pure mathematics. Tensor algebra mainly, with a little topology and higher calculus, if need be. But with feeling, you understand, and in the cybernetic spirit."[2]

The bard does so brilliantly (though, of course, it's really Lem himself who wrote it).

[1] - https://en.wikipedia.org/wiki/Cyberiad

[2] - https://www.cse.wustl.edu/~jbuhler/cyberiad.html


> First two lines are perfect rhyme and rhythm, next four have no rhyme but are close to the right meter, next few have almost random length, and by the last one we’ve abandoned grammar and are making up nonsense words like “wholubil”

Is that an intrinsic issue of the NN or with how information is extracted from it?


The NN is imperfect and the extraction method is extremely imperfect, but I think the problem there is actually the dataset: the Alexander Pope material has an unusual amount of irrelevant prose interspersed, which I think screws up the NN: https://slatestarcodex.com/2019/03/14/gwerns-ai-generated-po...


With all of this AI-generated art, it's not clear to me whether this is using the training data as a kind of template and then somehow generating entirely novel content, or if it's essentially just a somewhat sensible re-organization or re-quilting of the same components - in the case of poetry, maybe just a jumble of the original verses. Based on my knowledge of machine learning, it's more like the latter scenario, which somehow doesn't strike me as something radical...


I'm waiting for gwern's AI-generated critical theory essays.


Wonder what it’ll do with Edward Lear.

Also Ravana’s jaw? Lol. Try Raktabija



