
[–]stucchio 1 point (1 child)

I'd suggest an alternate model, which unfortunately has a lot more free variables. I don't know how much data would be needed to converge.

  1. Assume mana is drawn from some parameterized distribution, and choose a prior on its parameters.
  2. Assume each activity has a mana cost c[i], which is unknown (but try to pick some reasonable priors).
  3. Assume mana[j] = sum_i c[i] a[i,j] (plus some noise term), where a[i,j] is the count of activity i on day j and mana[j] is drawn from the probability distribution in (1). The choice of activities is arbitrary, not a random variable.
  4. Run MCMC, max likelihood, or something like that.

The main thing this model predicts is the probability distribution of mana[j] = sum_i c[i] a[i,j] going forward. So what I'd suggest for model validation is to make a histogram of the inferred mana[j] in the training data, and check that it agrees with the probability distribution inferred in (1).
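A minimal sketch of steps 1-4 plus the validation check, under stated assumptions: all names, costs, and numbers are hypothetical; the activity data is simulated by a budget-spending process; maximum likelihood (rather than MCMC) is used for simplicity; and, since costs are only identified up to scale, c[0] is pinned to 1.

```python
# Hypothetical sketch of the model above: latent mana[j] ~ Normal(mu, sigma),
# unknown activity costs c[i], observed activity counts a[i,j]. We fit
# c, mu, sigma by maximum likelihood, assuming mana[j] = sum_i c[i] a[i,j]
# (the noise term is folded into the Normal).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# --- Simulate data under the assumed generative story (all numbers made up) ---
true_costs = np.array([2.0, 5.0, 9.0])    # hypothetical mana costs
n_days, n_acts = 500, 3
A = np.zeros((n_days, n_acts))            # A[j, i] = count of activity i on day j
for j in range(n_days):
    budget = rng.normal(30.0, 4.0)        # mana[j] ~ Normal(mu, sigma)
    while True:
        affordable = np.flatnonzero(true_costs <= budget)
        if affordable.size == 0:
            break
        i = rng.choice(affordable)        # arbitrary (non-random-variable) choice
        A[j, i] += 1
        budget -= true_costs[i]

# --- Negative log-likelihood of mana[j] = A[j] @ c under Normal(mu, sigma) ---
def nll(theta):
    # Fix c[0] = 1: costs are only identified up to an overall scale.
    c = np.concatenate([[1.0], theta[:n_acts - 1]])
    mu, log_sigma = theta[n_acts - 1], theta[n_acts]
    sigma = np.exp(log_sigma)             # keep sigma positive
    mana = A @ c
    return 0.5 * np.sum(((mana - mu) / sigma) ** 2) + mana.size * np.log(sigma)

theta0 = np.concatenate([np.ones(n_acts - 1), [A.sum(axis=1).mean(), 0.0]])
res = minimize(nll, theta0, method="L-BFGS-B")
c_hat = np.concatenate([[1.0], res.x[:n_acts - 1]])
mu_hat, sigma_hat = res.x[n_acts - 1], np.exp(res.x[n_acts])

# --- Validation step: compare inferred mana[j] against the fitted Normal ---
inferred_mana = A @ c_hat
print("fitted mu/sigma:      ", mu_hat, sigma_hat)
print("inferred-mana mean/sd:", inferred_mana.mean(), inferred_mana.std())
```

As the comment notes, identifiability and the amount of data needed to converge are open questions; in particular the MLE here can be biased, so the histogram-vs-fitted-distribution check is the honest diagnostic rather than the recovered costs themselves.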

That's how I'd approach this, at least. But I haven't figured out how I'd move forward to something you can optimize on.

(The last time I looked at something like this, I was helping reverse-engineer a point-based admissions process: how many points the GMAT, backward-caste status, going to IIT, etc. are worth at a place that admits on points.)

[–]gwern[S] 1 point (0 children)

The idea of rephrasing it as 'mana costs' is somewhat similar to what Jim Savage is suggesting on Twitter: https://twitter.com/jim_savage_/status/1034620853206609920

[–]manic_panic 1 point (2 children)

I recommend you read Bollen & Bauldry (2011) in Psychological Methods if you can access it. PM me if you need a copy.
