all 14 comments

[–]mrjack2 Sunshine Regiment 8 points (0 children)

(why do reviews decline so much in the middle when updates were quite regular?).

Is it because many people might discover HPMOR and read the whole thing up to the last posted chapter, leaving just a single review for the "whole work" at the last completed chapter?

[–]Bntyhntr 4 points (0 children)

People who leave reviews might be more biased to come back and not abandon the story too, but that's just a whole new can of worms. Overall, I like having a visual representation of this chunk of data anyway, so whatever. Plus, a little bit of scripting now and then is fun.

[–]noking Chaos Legion Lieutenant 6 points (11 children)

Not exactly sure what was accomplished by it all but it looked like it was a lot of hard work so... at least you practised your data analysis skills!

Interesting that Chap5 has so many reviews.

But I'm generally concerned about using the reviews on FF.net as a data source from which to draw conclusions. I know anyone sufficiently like myself would not be included in that sample, at least. I have no idea of what the culture is like on the site, why people leave reviews, or if the sort of people that do so are unusual in some significant way.

[–][deleted] 11 points (0 children)

> Not exactly sure what was accomplished by it all but it looked like it was a lot of hard work so... at least you practised your data analysis skills!

That's the thing about looking for patterns in data: sometimes you can't find any real patterns. It's important not to see patterns where there aren't any -- that way lies madness, dragons, and parapsychological research -- and it's definitely good to report negative results as well as positive ones.

[–]gwern[S] 1 point (9 children)

My reasoning is that pretty big data like 18000+ reviews makes up for a multitude of sins; many LWers reviewed, I noticed, idly paging through the names.

(One fellow on IRC objected on the grounds that he read the PDF version and so the results couldn't be accurate. I pointed out that this meant he had left no reviews in any time period, rather than reviewing in some periods but not others, and so any distortion would be limited by that very fact.)

[–]MacDancer 3 points (1 child)

The person on IRC may not have changed the review rate, but if IRC regulars are more or less patient than FF.net reviewers, then the overall readership will look different than predicted by your model.

18,000 points is a big dataset, but my understanding is that while this gives your conclusions more statistical power, it will not let you generalize to other populations. I think it'd be fairly reasonable to use FF.net review rate as a rough indicator of FF.net readership, but I'd say that it's pretty weak evidence where the readership as a whole is concerned.

All that said, despite the limited applicability, that was a pretty cool analysis. The unfiltered date vs. chapter graph was especially neat. I'll be interested to see any future analyses you do.

[–]gwern[S] 1 point (0 children)

> I think it'd be fairly reasonable to use FF.net review rate as a rough indicator of FF.net readership, but I'd say that it's pretty weak evidence where the readership as a whole is concerned.

Given that the null hypothesis is that FF.net reviewers are like everyone else, and that no one here has presented any actual evidence that they differ as opposed to noking remarking anecdotally on himself, I'm perfectly fine with believing the results will generalize.

(Bare possibility of a systematic bias is not a useful suggestion, especially if you can't even say what the bias would look like. I am reminded of a guy in my logic class who went, 'but professor, what if we were all brains in vats?! How do you know this logic stuff is even meaningful!' Well, if you have a good argument or interesting evidence, please tell us, otherwise you're annoying everyone & wasting class time...)

[–]noking Chaos Legion Lieutenant 0 points (6 children)

> My reasoning is that pretty big data like 18000+ reviews makes up for a multitude of sins

Uh, how?

[–]gwern[S] 2 points (5 children)

When you ask 'how', what sort of answer are you looking for? One with phrases like 'the standard error of the test statistic shrinks approximately with the root of n', or what?
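The √n claim itself is easy to verify empirically. A minimal sketch (assuming normally distributed noise; the helper name `se_of_mean` is just illustrative): quadrupling the sample size should roughly halve the standard error of the sample mean.

```python
import random
import statistics

random.seed(0)

def se_of_mean(n, trials=2000):
    """Empirical standard error: the spread of the sample mean
    across many repeated samples of size n."""
    means = []
    for _ in range(trials):
        sample = [random.gauss(0, 1) for _ in range(n)]
        means.append(statistics.fmean(sample))
    return statistics.stdev(means)

se_100 = se_of_mean(100)   # theory predicts ~ 1/sqrt(100) = 0.10
se_400 = se_of_mean(400)   # theory predicts ~ 1/sqrt(400) = 0.05
print(se_100, se_400, se_100 / se_400)  # ratio ~ 2
```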

[–]noking Chaos Legion Lieutenant 0 points (4 children)

Well, I levelled a criticism against the use of FF.net reviews that had nothing to do with the size of the sample. I don't see how the size of it has any capacity to 'make up for' its potential bias?

[–]gwern[S] 1 point (3 children)

Yeah, sheer sample size can't necessarily make up for systematic biases (though it does drive the unsystematic error down to small levels), but as I pointed out, the sample size is so large that many people are represented in it whom you wouldn't think of as FF.netters.
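The distinction between the two kinds of error can be sketched with a toy simulation. All the numbers here are hypothetical: suppose the true population mean rating were 3.0 but reviewers skewed 0.5 higher. Growing n shrinks the random noise, yet the estimate converges on the biased value, not the true one.

```python
import random
import statistics

random.seed(1)

TRUE_MEAN = 3.0  # hypothetical population value
BIAS = 0.5       # hypothetical systematic skew of the sampled subgroup

def biased_estimate(n):
    """Sample mean drawn from the biased subgroup, not the population."""
    sample = [random.gauss(TRUE_MEAN + BIAS, 1.0) for _ in range(n)]
    return statistics.fmean(sample)

for n in (100, 10_000, 1_000_000):
    print(n, round(biased_estimate(n), 3))  # approaches 3.5, never 3.0
```

In other words, a large n buys precision, not accuracy: it tightens the estimate around whatever the sampled subgroup's value happens to be.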

[–]noking Chaos Legion Lieutenant 0 points (2 children)

> the sample size is so large that many people are represented in it even though you wouldn't think of them as FF.neters

But not proportionally, which is my whole point!

[–]Squirrelloid Chaos Legion 0 points (1 child)

Do you have evidence of systematic bias? Without evidence of that, there's no reason to assume ff.net readers as a population differ from other populations of readers.

[–]noking Chaos Legion Lieutenant 1 point (0 children)

No, I don't have evidence, so you're free to disagree. It seems obvious to me though, for what that's worth.