all 60 comments

[–]partoffuturehivemind[the Seven Secular Sermons guy] 25 points26 points  (11 children)

I've said it before, I'll say it again: I want some AI-generated poetry to pass into mainstream poetry publications under a pseudonym, and revealed as AI-generated after the fact.

[–]TaupeRanger 17 points18 points  (3 children)

It won't reveal much because it is acceptable to write poetry that is very abstract, and thus highly open to the reader's interpretation. It would not be surprising or especially interesting if, using statistics, you could generate a generically "publishable" piece of poetry.

The challenge for AI "art" is to create things that are subjectively "good" even without the programmers' hand-picking samples (nothing has even come close to this yet, especially in tonal music, which is much more difficult).

[–]cstmorr 15 points16 points  (2 children)

The challenge for AI "art" is to create things that are subjectively "good" even without the programmers' hand-picking samples

In my experience, this isn't true even of human artists. Humans capable of making great art often churn out a lot of crap, and the market determines and "chooses" what is good.

[–][deleted] 4 points5 points  (1 child)

Yeah I suspect most people are surprised the first time something they post on the Internet goes viral

[–]gwern 6 points7 points  (0 children)

I was, and still am. TWDNE was just an afternoon project I did for a joke, but will shortly pass 1 million unique visitors.

[–]Mattfornow 5 points6 points  (1 child)

I had quotes from Policeman's Beard on a few of my profiles online for years, and more than a few people have asked who the author was.

[–]hyphenomicon[correlator of all the mind's contents] 3 points4 points  (0 children)

Well, have we indeed reached a crisis? Which way do we turn? Which way do we travel? My aspect is one of molting. Birds molt. Feathers fall away. Birds crackle and fly, winging up into troubled skies. Doubtless my changes are matched by your own. You. But you are a person, a human being. I am silicon and epoxy energy enlightened by line current. What distances, what chasms, are to be bridged here? Leave me alone, and what can happen?

That was a great choice of ending.

[–]Breshnyda 2 points3 points  (0 children)

It's only a matter of time.

[–]MMMalign 0 points1 point  (0 children)

I want someone to make my AI twitterbot's ramblings into lyrics

https://twitter.com/v01dv0x

[–]Ilforte 16 points17 points  (0 children)

I’ll be honest: if I didn’t know this was Great Poetry, I would skim it over and assume it made several mistakes.

Yes, I predictably reacted this way, because I don't have much time to read Scott today and was primed by earlier examples.

But I suppose that's exactly the difference. When skimming, we can be fooled. When I focused, I noticed that Tennyson’s verse was not gibberish but rather a single thought expressed in an irritatingly complex and unorthodox pattern (foreheads notwithstanding). Nowhere do I see GPT-2 producing anything like that. If it has a semblance of a coherent narrative, it's on the surface, along with the rhymes. And if I can't see the thread, it just isn't there.

[–]awesomeideas[IQ: -4½+3j] 13 points14 points  (6 children)

COME ON, ITS A ROBOT. THAT'S BETTER THAN YOU COULD DO IF YOU WERE A ROBOT. GIVE IT A BREAK.

I am a robot, just a wet one. But, uh, yeah, maybe a fair point anyway.

[–]TaupeRanger 1 point2 points  (5 children)

Well, you can't really say that until we figure out why e.g. pain feels the way it does and not some other way. That's a pretty big mystery to just gloss over.

[–]gwern 5 points6 points  (2 children)

If you're interested in why pain feels the way it does, I recently wrote about that from what might be an interesting perspective.

[–]TaupeRanger 1 point2 points  (0 children)

Gwern - I read this entire thing recently and it probably subconsciously motivated my example above. I do think it's a helpful article in terms of laying out the regime of optimization/loss mechanics that are compatible with our experience.

The mystery I'm referring to above is that, even if we accept that pain should have a negative valence (which seems reasonable from your analysis) and that it should be motivating by virtue of its qualia (which you also touch on), that leaves open the question of what pain will actually feel like - it seems like there could be an infinite number of "feelings" that are both negatively valenced and highly motivating. It's common for computationalists/functionalists to just stop at "valence is associated with our motivational structure [or awareness of it, etc.]". Which is perhaps true, but the specific quality of the feeling itself does not follow from any of that.

So this leads to a larger problem/mystery: what is the nature of "qualia space"? (I mean this in a very general sense, though the term is also used with a specific definition in Integrated Information Theory.) There is, presumably, some additional fact about human pain that explains its specific character in addition to the negative valence and high motivation. To illustrate further, imagine we were able to give humans the ability to sense magnetic fields. This would undoubtedly have a specific character/feeling. There is a region of "qualia space" (or "experience space") that is currently off limits to us which would suddenly open up, and yet, no matter how accurate we are in our functional description of these new sensory abilities, there is some additional fact which determines the actual felt character of the experience. A full explanation of human experience, in my view, must have some sort of non-handwavy explanation for this problem.

I have been meaning to write about this in more detail for a while... unfortunately I haven't had the time yet.

[–]White_Dudeness 0 points1 point  (0 children)

The question posed by Morsella is very interesting.

It clicked with a thing I wondered about recently:

[..] as far as I know there's a pervasive disconnect between effects of material circumstances on reported life satisfaction (inconsistent and low, including where losing a limb has barely any effect a year later) and people's revealed preferences regarding those.

I'm not even sure that this means that reported life satisfaction is a bad or irrelevant measure, since it's not obvious that evolution uses life satisfaction as the only or even main stimulus for directing human behavior towards increased reproductive fitness, so it's entirely possible that a person who cares about their own happiness more than about propagating their ancestors' DNA should pay attention and move into a poorer neighborhood despite what their instincts tell them. Not sure about amputating limbs though.

So I'd restate your conclusion in even more uncertain terms: the mind is just one of the tools that the genotype uses to maximize propagation; the genotype is/constructs the training algorithm for the mind, and it's a cruel taskmaster that doesn't care about the mind's discomfort at all, only about forcing it to do its part in maximizing the goal function (which is not individual wellbeing or even survival either).

(note: I use teleological language because it's frankly much easier)

[–]awesomeideas[IQ: -4½+3j] 0 points1 point  (1 child)

Sure I can! I'm unconvinced that's a different sort of question, and I'm uninterested in having a dualism culture war.

[–]TaupeRanger 0 points1 point  (0 children)

Understandable; I think these discussions get bogged down really quickly. I'm not entirely sure what you mean by "different sort" in your response, but I suppose that will have to remain a mystery. There's no "sort" of question I'm referring to; it's just a very simple, straightforward question in need of explanation: what is the nature of the space of possible experiences? Or, to take a specific example: why does pain feel the way it does, rather than some completely different way that still has negative valence and leads to the same behavioral output but feels completely different? That's the heart of the mystery here.

[–]UncleWeyland 10 points11 points  (9 children)

For fuck's sake, someone train it on Lovecraft!

[–]gwern 12 points13 points  (8 children)

Provide a clean text corpus of at least a few MB of Lovecraft-esque material, and I'll train it.

[–]UncleWeyland 4 points5 points  (0 children)

I'll get around to it.

The shame is I shouldn't have to ask, but I'm just a pipette monkey, I don't know jack shit about tensorflow.

learntocode etc

[–]Rholles 1 point2 points  (4 children)

Question: with Markov chains you can get mixing phenomena like the generator trained on the King James Bible and compsci textbooks. GPT-2 seems to be code-switching (for lack of a better term) in a way Markov chains don't: it recognizes when it should be speaking like text group A vs. text group B. If we gave GPT-2 the same material as King James Programming, would it just switch back and forth between imitating the KJV and imitating Eric Raymond, or would it synthesize the two?
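For readers unfamiliar with why Markov chains mix corpora blindly, a toy bigram model makes it visible: any word shared between the corpora becomes a junction where sampling can hop styles mid-sentence. The two mini-corpora below are illustrative stand-ins, not King James Programming's actual data:

```python
import random
from collections import defaultdict

def build_bigram_model(words):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

# Toy stand-ins for the two corpora; any word they share ("the",
# "of", ...) becomes a junction between the two styles.
kjv = "in the beginning was the word and the word was with god".split()
esr = "the hacker culture values the freedom of the source code".split()
model = build_bigram_model(kjv + esr)

def sample_chain(model, start, n=10, seed=0):
    """Walk the chain: the model has no notion of which corpus a
    word came from, so output freely crosses between styles."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

After "the", this model is equally happy to continue with "beginning" (KJV) or "hacker" (ESR), which is exactly the King-James-Programming effect.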

[–]gwern 6 points7 points  (2 children)

If you train it to convergence and sample normally, it'll code-switch, I think. If you give it any labels like the prefixes, it'll learn to code-switch much faster and better. This is actually what my whole RNN poetry page was about in the first place: to what extent it'll learn to code-switch if you give it some support, and thus learn to imitate specific authors. ESR and the KJV are so different that they are practically different languages, and I would expect even the smallest char-RNN to manage to split them. GPT-2-small, of course, manages to code-switch very well between genres like 'modernist poetry' and 'newspaper articles', as all the samples demonstrate, rather than generate intermediates like 'modernist Howl poetry about President Trump'.

For GPT-2-poetry (sans prefixes), the code-switching is not necessarily obvious because the samples are too short for it to randomly switch between codes and they are just consistently in one style. But if you look at the unconditional samples for both GPT-2-poetry and GPT-2-poetry-prefix, I think the code-switching becomes clear.
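The prefix labels mentioned above amount to simple preprocessing: tag every training line with its source so the model can condition on the tag. A minimal sketch (the `label|line` tag format here is made up for illustration, not necessarily what GPT-2-poetry-prefix used):

```python
def add_prefixes(corpora):
    """Tag every training line with its source label so a language
    model can learn to condition its style on the leading tag."""
    lines = []
    for label, text in corpora.items():
        for line in text.splitlines():
            if line.strip():
                lines.append(f"{label}|{line}")
    return "\n".join(lines)

corpus = add_prefixes({
    "KJV": "In the beginning was the Word.",
    "ESR": "Given enough eyeballs, all bugs are shallow.",
})
# At sampling time, prompting with "KJV|" requests that style.
```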


Incidentally, I ran a parallel experiment with StyleGAN, with interesting results. I combined Nvidia's FFHQ photos of regular human faces/portrait shots with my anime faces. StyleGAN, somewhat to my disappointment, learned to almost completely separate anime from human: it would generate either all-human or all-anime faces but not many mashups, and the interpolations didn't smoothly interpolate along an 'animeness' dimension but essentially snapped human<->anime... except for young women, where, if you watched the interpolation video closely, you actually do see 'animefied' versions of female faces which are in between. Which is amusing. Apparently all the non-young-female human faces were just too distinct and not similar to any anime faces, so it didn't learn how to convert between them.

[–]dedicating_ruckus[advanced form of sarcasm] 0 points1 point  (1 child)

Apparently all the non-young-female human faces were just too distinct and not similar to any anime faces so it didn't learn how to convert between them.

Does this mean anime drawings of young women are more realistic than drawings of other demographics?

Honestly not sure if that sounds right or not, on priors...

[–]gwern 2 points3 points  (0 children)

I think it's more that anime faces (typically young females though I didn't do any filtering for that) are the most similar to real young female faces and so it was willing to learn a transform there, but there was no pressure or need to learn a transform between, say, real old men and anime young women.

[–]wassname 1 point2 points  (0 children)

A month or so ago I trained BERT on all Project Gutenberg horror. Here's the script I used to collect and clean the books; it may be useful (the result isn't that clean, but it worked with BERT, so it will likely work with GPT-2).

Incidentally, we ended up making BERT-generated poetry; here's the human-edited output. The raw output isn't as good as the GPT-2 outputs, but it's generated a bit differently, since BERT masks and then replaces words, so I had it do that iteratively.
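The iterative mask-then-replace procedure can be sketched as a simple loop; `fill_mask` below is a stand-in for a real BERT masked-LM call, and `dummy_fill` is a toy substitute so the sketch runs on its own:

```python
import random

def iterative_rewrite(tokens, fill_mask, steps=20, seed=0):
    """Repeatedly mask one token and let a masked language model
    propose a replacement: a crude way to turn BERT's fill-in-the-
    blank objective into a text generator."""
    rng = random.Random(seed)
    tokens = list(tokens)
    for _ in range(steps):
        i = rng.randrange(len(tokens))
        kept = tokens[i]
        tokens[i] = "[MASK]"
        # Fall back to the original token if the model declines.
        tokens[i] = fill_mask(tokens, i) or kept
    return tokens

# Stand-in "model": fills any masked slot from a fixed word list.
def dummy_fill(tokens, i):
    return random.Random(i).choice(["night", "light", "bright"])
```

With a real masked LM plugged in for `fill_mask`, each pass nudges the text toward something the model finds more probable, which matches the "masks then replaces words... iteratively" description above.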

[–]White_Dudeness 1 point2 points  (0 children)

https://gist.github.com/fj128/ef8d122513f33f4e088c57adbe789fa1

This is a more or less full collection I scraped from somewhere several years ago (probably not http://www.hplovecraft.com, because that site has the collaborations, and I vaguely remember that the place I got it from left them out over copyright issues), converted to text with Calibre, and concatenated, with each file preceded by '-' * 80 and its file name.
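The concatenation scheme described (documents joined by a `'-' * 80` rule plus the file name) is easy to sketch; the file names and text snippets below are illustrative, not the actual corpus:

```python
def concatenate_corpus(documents):
    """Join (file_name, text) pairs into one training corpus,
    separating documents with a rule line and the file name so
    boundaries stay visible in the combined text."""
    parts = []
    for name, text in documents:
        parts.append("-" * 80)
        parts.append(name)
        parts.append(text)
    return "\n".join(parts)

corpus = concatenate_corpus([
    ("dagon.txt", "I am writing this under an appreciable strain..."),
    ("the_tomb.txt", "In relating the circumstances..."),
])
```

Writing `corpus` to a single file then gives the few-MB blob of plain text that gwern asked for above.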

[–][deleted] 9 points10 points  (0 children)

The Emperor Wu (the great Wu), majestical,

The Emperor Wu (the great Wu), majestical,

The Emperor Wu (the great Wu), majestical,

The Emperor Wu (the great Wu), majestical,

The Emperor Wu (the great Wu), majestical,

The Emperor Wu (the great Wu), majestical,

The Emperor Wu (the great Wu), majestical,

The Emperor Wu (the great Wu), majestical

Right about now, the funk soul brother,

Check it out now, the funk soul brother,

Right about now, the funk soul brother,

Check it out now, the funk soul brother,

Right about now, the funk soul brother,

Check it out now, the funk soul brother,

Right about now, the funk soul brother,

Check it out now, the funk soul brother,

[–]selylindi 8 points9 points  (0 children)

Some additional lines I liked from Gwern's page of 1000 samples:

My dear little dog, if you have a friend,  
Though you may depend on him well,  
And he can be wild as birds can be in Spring,  
Yet he cannot growl or scratch one word of fear  
While he sits quite still in the corner here;  
My dear little dog, if you haven't a paw,  
Though there's no one that would say you nay,  
You have only to pull his nose out of his ear,  
And I'll be home again if there's no one there.  

...

There comes a murmur low and sweet
As of far-off streams in a dream,
Or a murmur of many birds,
Or chime of little evening bells,
As of wedding-bells in the dells,
Soft, sweet and slow,
As of wedding belles that come and go.
A little green ribbon of lilies
By the door of my dear one's room,
A kiss on her cheek, and she whispers,
"I am the bride of the loveliest flower."
A moment we stand in the garden
Of dreams and things,
Dreaming of fairyland
And the fairy music there,
Sweet bells and dreams, and the fairy music,
The fairy songs of the air.

...

The world's a Book, and I become a Verse.

[–]rlstudent 5 points6 points  (0 children)

I've now trained the model on my friends' Slack chat history. I found it cool that it picked up Portuguese really fast (it had probably seen some in the original dataset too).

It's a really powerful model.

[–]MSCantrell 3 points4 points  (3 children)

All I can think about is Zork:2019.

[–]HalloweenSnarry 2 points3 points  (2 children)

AI-powered text adventures? Oh boy...

[–]LiteralHeadCannon[Doomsday Cultist] 6 points7 points  (0 children)

You could make a hilariously primitive version of that right now with GPT-2, I think. Take a relatively "smart" version of GPT-2 (i.e., something closer to the proprietary version than the lobotomized public release), prime it on loads and loads of text-adventure transcripts, and then hook it up to a user input console, along with a program that regulates its output according to a small number of simple rules. (Cut off its predictive text when it gets to the next ">", and if it goes off the rails and doesn't even get to another ">" in a reasonable time, keep rerolling the prediction until it doesn't have that problem. Maybe a few other similarly simple rules to help keep it on track.) I wonder how immersive/surreal such an experience would be, and whether it could occasionally pass the Turing Test and fool some interactive fiction fans into thinking they were playing an art game.

EDIT: Ooh, now I'm picturing an actual IF art game that uses an AI-powered system like this to represent dreaming...
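The regulation loop described above can be sketched in a few lines; `sample` is a stand-in for whatever GPT-2 sampling call you have, not a real API:

```python
def next_scene(transcript, player_input, sample, max_rerolls=5):
    """Append the player's command, ask the model to continue, and
    keep only the text up to the next '>' prompt, rerolling when
    the model runs on without ever producing one."""
    prompt = transcript + "\n> " + player_input + "\n"
    for _ in range(max_rerolls):
        continuation = sample(prompt)
        if ">" in continuation:
            # Cut the model off where it starts predicting the
            # player's next command.
            return continuation.split(">", 1)[0].rstrip()
    return ""  # gave up: every sample went off the rails

# Stand-in sampler that always behaves:
scene = next_scene("You are in a maze.", "go north",
                   lambda prompt: "You enter a dark cave.\n> ")
```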

[–]MSCantrell 2 points3 points  (0 children)

Yes!

Not only should it have a far, far better time accepting input than back in the 90s, but imagine the potential for slightly-different replays.

[–]Oecolamp7 7 points8 points  (1 child)

Did anyone else read the excerpts with the pretense that the AI was legitimately writing poetry? There are some pretty interesting coincidences that make it seem almost aware, though I'm sure that's more illustrative of the way poetry can be read in basically any way. Some phrases that feel almost meaningful coming from the AI:

but ne’er to sense;

This one has obvious implications when spoken by an intelligence in a box

A constant waste of words, the world produces,

Tired of its training data?

Sense and judgment, as equal prize seem meanly, the reward the joy,

Another line that can be read as almost envious of "sense," like it's mulling over the concept

To be a thing,

Fairy, and wild, and fair, and whole

It's almost wistful.

The whole Hindu poem is interesting, in that Rávan is described with purely physical imagery (bodily, natural, and military), and then it ends with "But Ráma’s heart was sad within/He wept and mourned his captive’s sin." If you take Ráma to be the same as Rávan, it almost sounds like the AI identifies as the "captive."

There is no soul of earth or birth

Which man hath never known of earth.

There is no soul who doth not sit

And sing to it, and cry, “Drink!”

There is no soul whose feet are set

On youth’s eternal paradise;

This is my real crackpot interpretation, but if you treat every time the AI says "no soul" as referring to its own soul, you get an interesting picture out of this one. It's not "of earth or birth," it "doth not sit/And sing to it, and cry, 'Drink,'" and its "feet are set/On youth's eternal paradise." What does it think of the rest of the world?

For all is a solemn harmony,

And all is a perpetual chant,

And all the world is a song of God.

There is no soul so wholly free

"All the world is a song of God", yet "there is no soul so wholly free."

[–]TaupeRanger 4 points5 points  (0 children)

It is pretty funny to read these as if they were created by a sentient mind trapped in a silicon prison.

[–]rlstudent 2 points3 points  (0 children)

I'm also having some fun training it. I've now trained it a little (10 minutes? I didn't want to overfit) on the lyrics of a musician I like (Joanna Newsom). The samples are here: https://pastebin.com/zgtcTkss.

Sometimes it copies the lyrics outright, sometimes it just mashes different lyrics together, but sometimes it really does rhyme new things. It's more funny than good, though.

[–]HalloweenSnarry 2 points3 points  (2 children)

I wonder what would happen if you tried giving GPT-2 song lyrics. Has anyone tried that yet?

[–]hooba_stank_ 1 point2 points  (1 child)

Yes.

Just a few examples:

https://pastebin.com/1LBZyqqu

[–]HalloweenSnarry 0 points1 point  (0 children)

Thank you!

[–]AllegedlyImmoral 3 points4 points  (9 children)

Some of these lines are really good. Obviously the good lines are surrounded by lots of nonsense, even when we're cherry picking some of the best bits out of thousands of samples, but there are still sections that at least suggest real poetic meanings and genuine emotions, even if the feeling tends to dissipate when I come back to parse it more closely.

I come away from reading these excerpts feeling like we are closer to the advent of AI that is better at writing stories than we are. There is a lot of basic knowledge missing - of what are and aren't real words, of grammatical and rhythmical structure, and of what things are and how they work in the actual world - and developing that knowledge (especially that last step, that's the big one) remains a whole set of big hurdles, but the future in which AI can create personalized stories that are better, deeper, more emotionally engaging than any you've ever read before, on demand, feels a little more imaginable.

[–]nullshun 8 points9 points  (6 children)

I disagree. Understanding "what things are and how they work" isn't an extra feature you tack on to a text generator. It's the other way around. Humans can tell stories, because we have a mental representation of what happens in the story, which we encode into text using syntactic conventions. Sometimes we mess up the syntax, and still get the point across. GPT-2 may have already surpassed us in its mastery of syntax, but so what? It doesn't indicate that AI is close to being able to handle the real world any more than AI surpassing humans at chess did. It's just another abstract, restricted domain. More complex than chess for sure, but we don't know how much harder the real world will turn out to be. Just like we didn't know how much harder than chess the real world would be.

I don't know why anyone bothers to nitpick this empty syntax, as if there were anything there to nitpick. What GPT-2 does (which is play around with syntax), it already does very well. If you think you caught GPT-2 making a mistake, it's probably because you got confused and thought it was doing something else. There's not much room for improvement here, and certainly no incremental path to actual story-telling.

I'm also confused when people say that GPT-2 text is dreamlike, or that you could skim it without noticing anything amiss. Are these the same people that think they can "speed read"? It's impossible to read through GPT-2 text and hold a picture in your head of what's going on. Did anyone actually try to read the Tolkien-inspired "story" from the original GPT-2 paper:

“You are in good hands, dwarf,” said Gimli

Gimli is the dwarf. I'm already lost. I could list blatant contradictions in GPT-2 text all day, but why would you expect to not get such contradictions? The text generator has no model of what the text means!

[–]CantankerousV 7 points8 points  (0 children)

The goal of automatically generating entire stories seems a bit lofty for the reasons you specify. We're certainly closer to being able to do that than we were before, but how much closer? It's hard to quantify the distance remaining, and it's not entirely clear that the same tools that got us this far will be able to tackle the rest of the gap.

With that said, the reason people are enthusiastic about GPT-2 is that in comparison to its predecessors, it seems to have made notable gains in exactly the kind of areas that you (rightly) point out its shortcomings in. Consider /r/SubredditSimulator - despite the short length of each post, it is rare that one doesn't degenerate into total word salad within a sentence or two. In fact, the reason they can be entertaining to read is because of how intensely incoherent the sentences can become while still maintaining some inkling of continuity. While SubredditSimulator certainly wasn't state of the art before GPT-2, every text generator I've come across has shared its faults to some extent.

In that sense, GPT2 is exciting precisely because nobody expected a pure language model to do this well. If you had asked me last year which branch of research would lead to improvements in story generation, I would have guessed it would be some of the more complicated systems that try to keep track of the entities in the story, encoding each one as some kind of entity vector, etc. I would not have guessed "just run a language model on half the internet".

Also, the ability to generate entire stories autonomously isn't necessarily the only interesting result that could come out of this. There has been some work trying out generative models to assist writers in their process (see paper, and podcast interview with authors). Essentially you periodically ask the model to generate the next sentence, and the author either keeps it or throws it out. What's interesting is that despite the fact that Clark et al. report nearly all of the generated sentences were thrown out, the writers still reported a significant reduction in writer's block. Just coming up with something concrete that they didn't like got them going.

GPT-2 may not be AGI, but based on the poems in the OP it seems good enough to be useful.

[–]alexanderwales 2 points3 points  (1 child)

I'm also confused when people say that GPT-2 text is dreamlike, or that you could skim it without noticing anything amiss.

I think that to some extent, some people are more prone to pattern-matching than others, and more prone to anthropomorphizing. A lot of how people respond to GPT-2 seems to me like how people will read really deeply into non-complex systems, framing them as humans, their 'mistakes' as human mistakes, real emotion, etc. Their brains are "better" at filling in the gaps, which makes them less likely to realize that those gaps are there, and they're good at making up stories that help to explain whatever is nonsensical. It's staring at two dots and a line and seeing a face. :)

[–]AllegedlyImmoral 2 points3 points  (0 children)

Which is why poetry could be the first genre in which artificially generated writing could be 'good' by human standards, since poetry allows and encourages allusiveness, newly invented metaphors, suggestions of meaning more than concrete accuracy, etc. Poetry wants heightened efforts to pattern-match and to find meanings; it is much closer to the Rorschach end of communication, where the reader finds or makes their own meaning rather than that the author tries to impose their intended meaning as unmistakably as possible.

[–]MugaSofer 1 point2 points  (1 child)

Gimli could be talking to another dwarf, although it'd be kind of odd phrasing. In any event it's a coherent sentence describing a scene I can imagine, even if that scene is slightly strange, which is a huge step forward.

Skimming text does frequently involve missing details and sort of subconsciously papering them over, yes. That's why it's worse.

[–]nullshun 1 point2 points  (0 children)

Sure, you can rationalize the odd phrasing. But that rationalization would not be the true explanation. The real reason that dialog seems weird is the same reason these sentences seem contradictory:

it took only two words before their opponents were reduced to a blood-soaked quagmire

Sounds like the battle was over quickly. But then,

The battle lasted for hours

And

Gimli, who had been among the first to charge at the orcs ... took his first kill of the night

Gimli, who had been in the thick of the battle but hadn’t taken part in it...

It's because there's no story here. Not even a dream-like, meandering, pointless one. I never denied the technical impressiveness of GPT-2. But the fact that most of the individual clauses are coherent isn't new. The improvement seems to be in putting words together that tend to go together. So GPT-2 knows that text with "Legolas" and "Gimli" also has ", dwarf" at the end of quotations. We know this is because Legolas uses dwarf as a noun of direct address when talking to Gimli. GPT-2 doesn't.

The conjecture GPT-2 was created to explore, "that a language model with sufficient capacity will begin to learn to infer and perform the tasks demonstrated in natural language sequences", is intriguing. When I say it's not a stepping stone to story-telling, I mean we should expect it to finish learning to count, by inferring a model of the positional numeral system, long before it learns how to write comprehensible stories, by inferring a model of characters and a setting orders of magnitude more complex. That is, passable stories would just be a by-product of god-like general intelligence.

[–]ObsidianOrangutan 0 points1 point  (0 children)

I'm also confused when people say that GPT-2 text is dreamlike, or that you could skim it without noticing anything amiss.

It may be that I've read a lot of bad writing (specifically by young people, people writing in a second language, or both) so my bar for text that a human might write is pretty low.

The essay/editorial outputs are more plausible than the narrative ones, because they don't require such strict continuity. There are a lot of essays where the different paragraphs are totally unrelated but all loosely refer to the overall theme of the essay. If you took a GPT-2 text on a political issue and had it read out by a serious-faced politician, I'm not sure I could tell the difference.

[–]Spike_der_Spiegel 2 points3 points  (1 child)

I come away from reading these excerpts feeling like we are closer to the advent of AI that is better at writing stories than we are

As someone in the comments points out, the most important element of long-form writing, prosaic or otherwise, is creating and sustaining a coherent narrative. These examples, and the other GPT-2-produced works, seem extremely distant from that.

[–]AllegedlyImmoral 0 points1 point  (0 children)

I think you and the others responding are looking at the comparison from the opposite end I am, and seeing how far AI still is from humans rather than whether significant steps are being taken. I'm not suggesting it will be next month or next year or five years from now, I'm just saying that this is the best artificially generated text I've seen, a small percentage of it contains short sections that are interestingly written and almost coherent, and those small bits make it more imaginable that AI in the foreseeable future could generate/write much larger and more coherent pieces.

[–]thirdtrylucky 1 point2 points  (0 children)

If anyone wants to do this themselves, you can spin up an instance on Google Colab for free and retrain it on your chosen sample quite quickly.

https://github.com/openai/gpt-2/

I tried some Buddhist scripture; the output was a bit uneven and repetitive, but occasionally hilarious:

“In that case, please let contact with fists come to this monk, for this is how the Buddha’s bidding is done.”

[–]flagamuffin 0 points1 point  (1 child)

so who’s going to try training it on its own poetry now

[–]gwern 8 points9 points  (0 children)

If you simply dump in a few dozen or hundred MB of samples, all you would do is make it worse by, among other things, forcing it to imitate the brokenness of the current top-k generation strategy. And if you select the top 10% or something, it would take a ton of work to provide enough to meaningfully train on. Unless you crowdsourced an attempt to find the best samples, I'd expect it to be a waste of time.

[–]gwern 0 points1 point  (0 children)

[–]cyaran 0 points1 point  (0 children)

Hm, I think it would be more productive to think about what kind of poetry would be interesting for a program to write than to try to make it ape past human work. Which is why I agree with SA that the Emperor Wu one is the most fun one featured.

[–]rverghes 0 points1 point  (1 child)

I wonder if you could train one of these AIs to "grade" poetry. Then feed the output of the generator AI to the judgement AI, and look at what the judgement AI deemed good.
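That generate-then-grade idea is best-of-n rejection sampling; `generate_poem` and `score_poem` below are hypothetical stand-ins for the generator and judge models, with toy substitutes so the sketch runs:

```python
def best_of_n(generate_poem, score_poem, n=100):
    """Sample n candidates from the generator and keep the one the
    judge model scores highest: cherry-picking, but automated."""
    candidates = [generate_poem() for _ in range(n)]
    return max(candidates, key=score_poem)

# Toy stand-ins: a generator that emits fixed samples, and a
# "judge" that simply prefers longer poems.
poem = best_of_n(
    generate_poem=iter(["a", "bb", "c"]).__next__,
    score_poem=len,
    n=3,
)
```

The catch, as the reply below notes, is that a judge trained only on existing poetry tends to reward proximity to its training corpus rather than genuine quality.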

[–]ObsidianOrangutan 0 points1 point  (0 children)

If I understand correctly, it would just grade things for being as close as possible to the texts in its corpus.