Goliath broke into the top 100 this week at #9. Also, Leviathan somehow dropped two spots from #3 to #5.
Interestingly enough, Goliath dropped off the list this week, causing a ton of rides to move up one spot. The only two scenarios where I see this happening are that it very narrowly qualified last week but didn't this week, or that Jeff was messing with the algorithm. Jeff, if you could shed some light on this, that would be terrific. Also, Leviathan slid back up to tie The Voyage for the third spot.
Loving Maverick since 2007!
The code hasn't changed. The composition of the rating pool did, because more people rated more rides, driving up the cutoff.
Jeff - Editor - CoasterBuzz.com - My Blog
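The mechanism Jeff describes implies the qualifying cutoff floats with the rating pool rather than being a fixed constant. Purely as an illustration (the actual CoasterBuzz algorithm isn't described in this thread, and all of these numbers are invented), a pool-driven cutoff might look like:

```python
# Hypothetical sketch: a qualification cutoff derived from the rating
# pool itself rather than hard-coded. All counts below are invented.

def rating_cutoff(rating_counts, percentile=0.25):
    """Return the minimum number of ratings a ride needs to qualify,
    taken as a percentile of the per-ride rating counts."""
    counts = sorted(rating_counts)
    index = int(len(counts) * percentile)
    return counts[index]

# Last week's pool: Goliath's 37 ratings clear the cutoff.
last_week = [12, 30, 37, 44, 60, 75, 90, 120]
# This week more people rated more rides, lifting the cutoff past 37.
this_week = [25, 38, 37, 44, 60, 75, 90, 120, 150, 200]

print(rating_cutoff(last_week))   # 37 -> Goliath's 37 ratings qualify
print(rating_cutoff(this_week))   # 38 -> 37 ratings no longer qualify
```

Under a scheme like this, a ride can drop off the list with no change to its own ratings at all, which matches what happened to Goliath.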
maverick master said:
Jeff, if you could shed some light on this, that would be terrific.
I couldn't help but hear Bill Lumbergh in my head as I read this.
Goliath only had 37 ratings when it was on the list last week. Outlaw Run only has 38, so I suppose the same thing could happen to it.
Fury 325 debuted this week at #2, matching what Outlaw Run did earlier this summer! Speaking of Outlaw Run, it dropped from 2nd to 9th based on one additional rating. I guess someone didn't enjoy it quite as much...
Those ratings are going to be a lot more volatile with fewer ratings. I'm not crazy about having so few ratings, but including more would mean allowing less experienced track records to have more say. I'm not sure if that trade-off matters... I could run some experiments. Pretty good representation by B&M in recent years in the top 10, though.
Jeff - Editor - CoasterBuzz.com - My Blog
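The volatility Jeff mentions is just small-sample averaging at work: with n ratings, one new rating moves the average by roughly (x - mean)/(n + 1). A quick sketch with invented numbers (the pool sizes are loosely modeled on the Outlaw Run example above, not real data):

```python
# Illustration only: how much one new rating moves a small pool
# versus a large one. All ratings below are invented.

def average_after(ratings, new_rating):
    """Average after adding one more rating to the pool."""
    return (sum(ratings) + new_rating) / (len(ratings) + 1)

# A ride with 38 ratings averaging 4.8 (a small pool)...
small_pool = [4.8] * 38
# ...versus one with 500 ratings at the same average.
large_pool = [4.8] * 500

# One disgruntled "1" barely dents the big pool but drags the small one.
print(round(average_after(small_pool, 1), 3))  # 4.703
print(round(average_after(large_pool, 1), 3))  # 4.792
```

That roughly 0.1 swing from a single rating is easily enough to knock a ride from 2nd to 9th when the rides around it are separated by hundredths.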
What is the justification for requiring a certain amount of "experience" anyway? Firstly, the cutoff for ride count seems arbitrary. Secondly, imo, even with greater ride counts I'm unlikely to change my previous evaluations. It's only a five-point scale. People can tell a great ride from an okay or bad one by visiting only a single park with a diverse set of coasters.
"The term is 'amusement park.' An old Earth name for a place where people could go to see and do all sorts of fascinating things." -Spock, Stardate 3025
bjames said:
What is the justification for requiring a certain amount of "experience" anyway? [...] People can tell a great ride from an okay or bad one by visiting only a single park with a diverse set of coasters.
If you've only ever been to your home park, you'd probably rate the best ride at that park a "5" regardless of how good it is compared to other rides. I imagine there are a lot of accounts made on CoasterBuzz that follow the forums for a week and then never return, so those rankings probably don't count for much.
What he said. We don't need a bunch of fanboys creating track records with all of the rides from one park as fives. That would dramatically alter the results.
Jeff - Editor - CoasterBuzz.com - My Blog
And even if it's a legitimate account with legitimate ratings, I really don't put much stock in a "5" given by someone who only has 30 coasters under their belt. Of the first 30 coasters I rode, I don't think any would still have a 5 today, but I would've given the likes of Vortex (CGA), Invertigo, and Hershey's boomerang 5+'s at the time.
Hobbes: "What's the point of attaching a number to everything you do?"
Calvin: "If your numbers go up, it means you're having more fun."
Different metrics can be selected to produce "better" results. Though the selection of one over another is arbitrary--at least without definitive evidence that it's better--and may well be worse. More accurately, the results are just different.
But presumably there is some real "goodness" that these things are ostensibly trying to get at. If 99% of the people who had ridden both MF and Mean Streak prefer MF, it would be hard to call a methodology that put Mean Streak ahead "different" instead of "wrong."
Hobbes: "What's the point of attaching a number to everything you do?"
Calvin: "If your numbers go up, it means you're having more fun."
Yeah, statistical significance isn't arbitrary. It's statistically significant.
Jeff - Editor - CoasterBuzz.com - My Blog
But the poll isn't necessarily doing that, is it? Not every coaster being counted needs to be ridden by everyone whose rating counts. Based on the stated criteria, you could have two rides included in the top 100 that had no common riders, right? So you don't necessarily know how many people rode both MF and Mean Streak. And even if there is overlap (and as they are both at CP, I suspect there is a lot), the ratings of people who haven't ridden both are included in the overall ratings. And the level of overlap will vary from ride to ride. Change the percentage of people you include in the poll and/or the number of track records in which a coaster must appear, and you likely get different results. Which ones are "better?"
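The no-common-riders point is easy to demonstrate with track records as sets. This is a toy illustration with purely hypothetical users and records, not how the poll actually stores anything:

```python
# Hypothetical track records: user -> set of rides they've rated.
track_records = {
    "alice": {"Millennium Force", "Mean Streak"},
    "bob":   {"Millennium Force"},
    "carol": {"El Toro", "Nitro"},
    "dave":  {"El Toro"},
}

def raters_of(ride):
    """Everyone whose track record includes the given ride."""
    return {user for user, rides in track_records.items() if ride in rides}

# MF and Mean Streak share a rater, but MF and El Toro share none --
# yet all of these rides could still qualify for the same list.
print(raters_of("Millennium Force") & raters_of("Mean Streak"))  # {'alice'}
print(raters_of("Millennium Force") & raters_of("El Toro"))      # set()
```

So a ranking built from per-ride averages never actually compares head-to-head preferences; two adjacent entries on the list may rest on completely disjoint sets of opinions.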
Ultimately, the issue is taking something subjective (which coaster, or even which type of coaster or element of coasters, is best), applying numbers (the 1-5 scale being arbitrary) and criteria for which coaster ratings and whose coaster ratings to include (both arbitrary), and producing a sorted list of averages (of random online profiles) carried out to 5 decimal places as if it's an objective result. A similar issue is identified in the dialogue in Andy's signature.
I've had this discussion a million times with Jeff. :)
No one seems to want to admit that there's an 'art' to presenting the info you collect. It's that touch that creates the little variances between the otherwise ridiculously identical polls that everyone does.
ApolloAndy said:
But presumably there is some real "goodness" that these things are ostensibly trying to get at. If 99% of the people who had ridden both MF and Mean Streak prefer MF, it would be hard to call a methodology that put Mean Streak ahead "different" instead of "wrong."
True, but closer to the reality of what would be happening is this: 100 people say MF is good and 50 say Mean Streak is. But after accounting for track record and experience, 60 of those MF opinions are ignored and only 5 of the Mean Streak opinions are. Suddenly, based on the pollster's decision about what qualifies as an "informed decision" (the 'art' of creating a list like this), you have Mean Streak getting more support than MF.
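The arithmetic in that scenario is worth spelling out, using the same invented counts from the example above:

```python
# Invented counts from the example: raw support before filtering...
mf_raw, mean_streak_raw = 100, 50
# ...and opinions discarded by the track-record/experience filter.
mf_ignored, mean_streak_ignored = 60, 5

mf_counted = mf_raw - mf_ignored                             # 40
mean_streak_counted = mean_streak_raw - mean_streak_ignored  # 45

# Despite twice the raw support, MF now trails Mean Streak.
print(mf_counted, mean_streak_counted)  # 40 45
```

The flip comes entirely from the filter, not from the underlying opinions, which is exactly the "art" being described.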
And it gets weirder when there's just not enough opinion to be statistically significant and a coaster isn't included simply because of a lack of data. Like the Outlaw Run example a few posts back: it is loved among Buzzers. I'm sure looking at the data that was clear (it did debut on the list at #2). But it wasn't included because of the perceived significance - or lack thereof - of the collected data. (And this was another debate/discussion I know I've had with Jeff in the past.)
With it excluded, the list is incomplete at best and flat-out incorrect at worst. At what point does the pollster infuse their expertise on the subject (it's not hard to know which coasters we're all raving about) and allow some of the art to lean the other direction? Sticking so rigidly to mathematical methodology and a scientific approach in this case creates an inaccurate list... for the past three years.
In this case (coaster lists aren't exactly life-changing endeavors), I tend to prefer the idea of using whatever data you have, along with enthusiast expertise and a sprinkling of common sense, to create complete - and likely mostly accurate, especially for the purposes at hand - lists, over sticking so rigidly to significance and methodology and ending up with valid, verifiable numbers-based lists that are incomplete and, ironically, essentially incorrect, as they're "best coasters" lists that exclude certain coasters.
ApolloAndy said:
I think I'd rather have a ride appear and be slightly inaccurate than not appear.
Me too. Completeness trumps accuracy in this case, and any inaccuracies are going to be so slight that they're irrelevant.
The LG approach is closer to "just different" than to objectively better. As it would be more labor-intensive, I would prefer the existing approach over it.
I'm sure I overexplained as I always do.
Point is, sticking rigidly to statistical significance creates situations where the list is useless because it's wrong due to incompleteness. Over time it works itself out, but when a 'real time' poll takes three seasons to list a new coaster, something is off. Creating *some system* to allow for and account for rides with fewer votes and/or less "qualified" riders makes so much more sense to me and always has.