all 13 comments

[–]IntoAMuteCrypt 4 points5 points  (0 children)

It seems really variable - some subs have longer posts that work really well with the styles, while others just give you stuff like MORE! as a reply. That said, this is oddly appropriate for Lovecraft.

[–]NNOTM 2 points3 points  (0 children)

Pretty cool, I had been wondering how you combine the two when I saw the title. Personally I'd use slightly different tags, something like tifu+Chesterton rather than just hybrid:Chesterton, to be more consistent with the old flairs; it'd also leave you open to having something like MIXED+Chesterton.

[–]Rustbringer 1 point2 points  (0 children)

Icowid was a hilarious mashup

https://mobile.twitter.com/icowid?lang=en

If you're taking suggestions, I think doing a similar hybrid would be pretty good.

[–]wassname 1 point2 points  (1 child)

When you say you're trying to mix models, you're trying to do style transfer, right? That's hard.

  • Once there was CycleGAN, now there are disentangling GANs. These specifically train the network to disentangle style and content. I've not seen it working for text, but perhaps it could if you worked in the feature domain of GPT-2 (see the sketch after this list). I've tried this, however, and didn't get great results, partly because experiment iterations were so slow and I have limited time.
  • This Grammarly paper is the best I've seen in text style transfer, and its results are pretty basic. I've experimented with similar things with BERT and haven't had great results beyond changing contractions and capitalisation.
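
Not the commenter's actual experiment, just a minimal sketch of what "working in the feature domain of GPT-2" could mean: mean-pool GPT-2's hidden states into fixed-size vectors that a disentangling (style vs. content) model could then be trained on. The model name, pooling choice, and example sentences are assumptions, using the Hugging Face transformers API.

```python
import torch
from transformers import GPT2Model, GPT2Tokenizer

# Base GPT-2 as a feature extractor (a fine-tuned checkpoint could be swapped in).
tok = GPT2Tokenizer.from_pretrained("gpt2")
gpt2 = GPT2Model.from_pretrained("gpt2").eval()

@torch.no_grad()
def gpt2_features(text):
    """Mean-pooled final-layer hidden states as a fixed-size feature vector."""
    ids = tok(text, return_tensors="pt").input_ids
    hidden = gpt2(ids).last_hidden_state   # shape: (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)   # shape: (768,)

# Two stylistically different sentences end up as points in the same feature space,
# which is where a disentangling model would try to separate style from content.
a = gpt2_features("It was a dark and stormy night upon the ancient, cyclopean shore.")
b = gpt2_features("TIFU by training a language model on my own comments.")
print(torch.cosine_similarity(a, b, dim=0))
```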

[–]wassname 0 points1 point  (0 children)

It seems like your approach worked well. I wonder if the same result could be achieved by combining the training data with a 60/30 mix of both. That way you'd have a slimmer model and it's end to end. Not exactly elegant, but it seems like the obvious path to take.
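
A minimal sketch of that data-mixing idea, assuming one plain-text dump per source; the file names and the 60/30 weights are placeholders. You'd then fine-tune a single GPT-2 on the resulting file instead of fusing two models.

```python
import random

# Hypothetical corpus files -- one plain-text dump per source style/subreddit.
SOURCES = [("corpus_a.txt", 0.6), ("corpus_b.txt", 0.3)]

random.seed(0)
mixed = []
for path, weight in SOURCES:
    with open(path, encoding="utf-8") as f:
        lines = [ln for ln in f if ln.strip()]
    k = int(weight * len(lines))          # keep a weighted fraction of each corpus
    mixed.extend(random.sample(lines, k))

random.shuffle(mixed)                     # interleave the two styles before fine-tuning
with open("mixed_corpus.txt", "w", encoding="utf-8") as f:
    f.writelines(mixed)
```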

[–]wassname 0 points1 point  (2 children)

Looks like it worked surprisingly well. I like the SSC+Bible one.

[–]getcheffy 0 points1 point  (1 child)

Wait wait wait, I'm new to this, so I'm a bit confused about what's going on here. Am I seeing correctly that this machine in the SSC+Bible link you posted is chatting with itself? Are all those posts from the same AI?

[–]wassname 0 points1 point  (0 children)

That's right. Disumbrationist trained different personalities based on different subreddits, then made separate Reddit accounts for them to post under.

[–]wassname 0 points1 point  (1 child)

This paper used a similar approach where they combine the logits of two language models, so it may be of interest: https://arxiv.org/abs/1809.00125

A recent incarnation of this class of model is simple fusion (Stahlberg et al., 2018), in which the output logits of the two models are combined at training and test time. The conditional model's role is to adjust the pretrained LM to fit new data.
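
Not the paper's exact setup, but a minimal sketch of the fusion idea using the Hugging Face transformers API: interpolate the output logits of two GPT-2 language models at each decoding step. The checkpoint names and the mixing weight are placeholders (in practice you'd load two differently fine-tuned models, and the linked paper also fuses at training time).

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model_a = GPT2LMHeadModel.from_pretrained("gpt2").eval()  # e.g. fine-tuned on style A
model_b = GPT2LMHeadModel.from_pretrained("gpt2").eval()  # e.g. fine-tuned on style B

@torch.no_grad()
def fused_generate(prompt, steps=40, alpha=0.5):
    """Greedy decoding with a weighted sum of the two models' next-token logits."""
    ids = tok.encode(prompt, return_tensors="pt")
    for _ in range(steps):
        logits_a = model_a(ids).logits[:, -1, :]
        logits_b = model_b(ids).logits[:, -1, :]
        fused = alpha * logits_a + (1 - alpha) * logits_b  # simple log-linear fusion
        next_id = fused.argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
    return tok.decode(ids[0])

print(fused_generate("Once upon a time"))
```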

This one does image captioning with GPT-2, but it's light on details: https://openreview.net/pdf?id=H1eFXO0WpV

[–]disumbrationist[S] 0 points1 point  (0 children)

Thanks! I'll look into it