Post by Deleted on Nov 20, 2014 22:20:50 GMT -5
Back in the day when I was a student at Kellogg and computer time had to be reserved well in advance, I had a lot of fun doing multiple regressions to find formulae that best explained "past results." They were amazingly accurate, but were ridiculous for forecasting--which I think most people in VB and business are much more interested in. (e.g., in correlating the price of aluminum, the price of Heinz Ketchup was a strong independent variable, though I have not followed up to see how it performed as a predictor.)
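This in-sample "accuracy" is easy to reproduce with synthetic data: fit enough unrelated predictors to a short noise series and the regression will "explain" the past almost perfectly while forecasting nothing. A minimal sketch of the effect (all data randomly generated; the variable counts are arbitrary, not the original Kellogg exercise):

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 observations each of a "price" series and 15 unrelated predictors.
# Everything is pure noise: there is genuinely nothing to predict.
n_train, n_test, n_pred = 20, 20, 15
X_train = rng.normal(size=(n_train, n_pred))
X_test = rng.normal(size=(n_test, n_pred))
y_train = rng.normal(size=n_train)
y_test = rng.normal(size=n_test)

def r_squared(y, y_hat):
    """Coefficient of determination."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Ordinary least squares with an intercept column.
A_train = np.column_stack([np.ones(n_train), X_train])
beta, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)

in_sample = r_squared(y_train, A_train @ beta)

A_test = np.column_stack([np.ones(n_test), X_test])
out_sample = r_squared(y_test, A_test @ beta)

print(f"in-sample R^2:     {in_sample:.3f}")   # "amazingly accurate" on the past
print(f"out-of-sample R^2: {out_sample:.3f}")  # useless as a forecast
```

With 15 spurious predictors fitted to 20 observations, the in-sample R^2 comes out high while the out-of-sample R^2 sits near or below zero.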
Post by The Bofa on the Sofa on Nov 20, 2014 22:30:49 GMT -5
Quoting Deleted: "They were amazingly accurate, but were ridiculous for forecasting--which I think most people in VB and business are much more interested in."

I disagree. Look at the constant complaints about rankings whenever so-and-so is ranked behind a team they beat. Similarly, look at the NCAA's use of RPI: they aren't interested in predicting, and only want a good assessment of what has been done. And what's a better assessment of what has been done than to rank teams in a manner that best reflects who won and lost?

Now, I can guarantee that when I post the final rankings, there are going to be complaints of "How can such-and-such a team be ranked there?" The answer is simple: if you change their rating, the overall percentage correct drops. But that won't suffice, and it will be, "Oh, this isn't right." But I figure, OK, if you want to focus on who won, let's do that--and let's do it right.

Then again, this is something you could combine with Pablo to get a mixed rating to wash out the extremes.

88.42% right now
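One way to read the "overall percentage correct" figure is as an objective function: choose ratings that maximize the fraction of matches in which the higher-rated team won. A toy sketch of that idea using a naive hill climb (the teams, results, rating scale, and search method are all invented for illustration, not the actual data or algorithm behind the 88.42%):

```python
import random

# Hypothetical match results as (winner, loser) pairs.
results = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
           ("B", "D"), ("A", "D"), ("D", "B")]
teams = sorted({t for match in results for t in match})

def pct_correct(ratings):
    """Fraction of matches in which the higher-rated team won."""
    right = sum(1 for w, l in results if ratings[w] > ratings[l])
    return right / len(results)

rng = random.Random(0)
ratings = {t: rng.uniform(0, 1000) for t in teams}
best = pct_correct(ratings)

# Naive hill climb: nudge one team's rating and keep the change
# unless the overall percentage correct gets worse.
for _ in range(20000):
    t = rng.choice(teams)
    old = ratings[t]
    ratings[t] = old + rng.uniform(-200, 200)
    new = pct_correct(ratings)
    if new >= best:
        best = new
    else:
        ratings[t] = old

print(f"best retrodictive accuracy: {best:.2%}")
```

In this toy data the ceiling is 6/7, because B and D split their two meetings: any rating order gets exactly one of those two results wrong.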
Post by n00b on Nov 20, 2014 22:40:01 GMT -5
I'd be satisfied if they selected the field strictly based on a ranking like this. I imagine I'd be in the minority there though.
Post by The Bofa on the Sofa on Nov 20, 2014 22:57:12 GMT -5
Quoting n00b: "I'd be satisfied if they selected the field strictly based on a ranking like this."

I agree with you. What I figure is, if you want to say you are rewarding teams for what they have done, at least measure that as well as you can, instead of doing it the easy and crappy way.
Post by volleyguy on Nov 20, 2014 23:27:10 GMT -5
The NCAA has a lot of inadequacies, but watching out for their own self-interests generally isn't one of them. It's entirely plausible, and quite likely, that the NCAA isn't interested in this iteration of RPI as a predictor of the past, present or future, because their greater interest in "non revenue" sports, and quite clearly in volleyball, has ALWAYS been promoting parity and growth, even by artificially promoting entire regions or individual programs. The fans, and even the individual member institutions, are not the stakeholders and opinion-makers the NCAA answers to; rather, it's the Executive Committee (Management Council), the conferences--especially the power conferences--and the sponsors that matter.
I'm certain someone at the NCAA understands generally what RPI does and does not measure. If there were a pressing need to change it, they would, just as they have done for basketball. This particular RPI algorithm is doing exactly what the NCAA wants it to do: it allows them to distribute the tournament bids and assemble the brackets in defensible and politically beneficial ways without fundamentally compromising the integrity of the crowning of the National Champion. The RPI "bias" (particularly towards West Coast teams, and principally at the outer margin of the field) is an acceptable trade-off for a more geographically "balanced" field and for the lift it provides to non-volleyball conferences (SEC, ACC).
I like where this analysis is going, but I wouldn't hold out hope that it will result in changes to the tournament selection process unless the underlying revenue or political reality changes significantly.
Post by n00b on Nov 20, 2014 23:33:13 GMT -5
Quoting volleyguy: "The RPI 'bias' (particularly towards West Coast teams) ... is an acceptable trade-off for a more geographically 'balanced' field."

The RPI was not designed to promote geographical balance. This is a phenomenon in one division of one sport, because there is a disproportionately large number of good teams in the part of the country with the fewest teams. They use RPI across all the sports for the sake of uniformity.
I'm not defending the RPI as a good measure of team strength, but there is no ulterior motive behind the NCAA using it.
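For context, the NCAA RPI discussed here is commonly described as a weighted sum: 25% a team's own winning percentage, 50% its opponents' winning percentage (computed excluding their games against the team itself), and 25% its opponents' opponents' winning percentage. A sketch under that description, with a made-up results list (this ignores sport-specific details like home/away weighting):

```python
# Hypothetical season: (winner, loser) pairs.
results = [("A", "B"), ("A", "C"), ("B", "C"),
           ("C", "B"), ("A", "D"), ("D", "C")]
teams = sorted({t for match in results for t in match})

def win_pct(team, exclude=None):
    """Winning percentage, optionally ignoring games against `exclude`."""
    wins = sum(1 for w, l in results if w == team and l != exclude)
    losses = sum(1 for w, l in results if l == team and w != exclude)
    games = wins + losses
    return wins / games if games else 0.0

def opponents(team):
    """Every opponent faced, once per meeting."""
    return [l if w == team else w for w, l in results if team in (w, l)]

def rpi(team):
    wp = win_pct(team)
    opps = opponents(team)
    # OWP: opponents' winning pct, each computed without their games vs `team`.
    owp = sum(win_pct(o, exclude=team) for o in opps) / len(opps)
    # OOWP: average of each opponent's opponents' winning pct.
    oowp = sum(
        sum(win_pct(oo) for oo in opponents(o)) / len(opponents(o))
        for o in opps
    ) / len(opps)
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

for t in teams:
    print(t, round(rpi(t), 4))
```

Note how much of the number is schedule (75%) rather than the team's own results (25%), which is exactly why it rewards or punishes whole regions.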
Post by volleyguy on Nov 20, 2014 23:44:40 GMT -5
Quoting n00b: "The RPI was not designed to promote geographical balance. ... there is no ulterior motive behind the NCAA using it."

I didn't say that RPI was designed to promote geographic balance. I am saying that they're not oblivious to the effects of RPI with regard to volleyball, but that it's an acceptable trade-off.
Post by jgrout on Nov 20, 2014 23:45:46 GMT -5
The final AVCA poll. Hindsight is always 20/20.
Post by Babar on Nov 20, 2014 23:55:51 GMT -5
Quoting jgrout: "Hindsight is always 20/20." Not for Dick Cheney!
Post by pogoball on Nov 21, 2014 0:38:55 GMT -5
Let's assume that this method proves to be a much better predictor of actual past results than RPI. In other words, it proves to be a far superior measure of the accomplishments of the teams over the course of the season. In my opinion, this should then be the primary means of tournament selection. The teams who accomplished the most are the teams that should be invited to the tournament.
What would it take to get the NCAA selection committee to consider or even prioritize this ranking in its selections? What would have to happen politically?
Post by n00b on Nov 21, 2014 1:11:36 GMT -5
Quoting pogoball: "What would it take to get the NCAA selection committee to consider or even prioritize this ranking in its selections? What would have to happen politically?"

Pablo taking Mark Emmert's job.
Post by johnbar on Nov 21, 2014 11:16:36 GMT -5
It's a really interesting approach. The result that stuck out (to me) was Seton Hall tied for 26th.
Post by The Bofa on the Sofa on Nov 21, 2014 11:29:32 GMT -5
Quoting johnbar: "The result that stuck out (to me) was Seton Hall tied for 26th."

They didn't end up there. I am working on some formatting issues and will be posting results soon.
Post by The Bofa on the Sofa on Nov 21, 2014 11:35:19 GMT -5
OK, here comes the whole shebang. I've got rankings, ratings AND the rating range, which is how much each team's rating can be varied while maintaining the same overall quality (I capped the range at a change of 500, so teams that hit 500 can keep going pretty much indefinitely).
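The "rating range" idea--how far one team's rating can move before the overall percentage correct drops, capped at 500--might be probed with something like the following sketch. The teams, ratings, and results are invented, and the naive step-out search is an assumption, not the actual method:

```python
# Invented ratings and results; the base accuracy here is 100%.
results = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
ratings = {"A": 900.0, "B": 700.0, "C": 500.0, "D": 300.0}

def pct_correct(r):
    """Fraction of matches in which the higher-rated team won."""
    return sum(r[w] > r[l] for w, l in results) / len(results)

def rating_range(team, step=1.0, cap=500.0):
    """How far `team`'s rating can move down/up before accuracy drops."""
    base = pct_correct(ratings)
    span = []
    for sign in (-1.0, 1.0):
        delta = 0.0
        while delta < cap:
            trial = dict(ratings)
            trial[team] = ratings[team] + sign * (delta + step)
            if pct_correct(trial) < base:
                break
            delta += step
        span.append(delta)
    return tuple(span)  # (room below, room above)

for t in ratings:
    print(t, rating_range(t))
```

In this toy data, A can fall 199 points (to the edge of B's rating) or rise all the way to the 500 cap without changing which results the ranking gets right--the "keep going pretty much indefinitely" case.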
Post by badgerbreath on Nov 21, 2014 11:44:29 GMT -5
So here's a question: how much worse is the AVCA ranking, or the RPI ranking, or Pablo for that matter? I presume you could just enter those rankings and assess their relative optimality using the same metric. (AVCA is tricky because only some teams are ranked.)
Another question: what is the basis for the base score? And what are the three columns to the left?
EDIT: Never mind... you are changing them!
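One way to put AVCA, RPI, or Pablo on the same footing is to score any ordering by the fraction of results it gets "right," skipping matches the poll cannot decide. A sketch with invented data; treating unranked teams as tied below the poll is an assumption, not an established convention:

```python
# Invented results and a partial poll (like an AVCA top-N), best team first.
results = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("E", "D")]
poll = ["A", "B", "C"]  # D and E unranked

def retro_accuracy(order, results, unranked_last=True):
    """Fraction of results where the better-ranked team won.

    Unranked teams are treated as tied below the poll; matches the
    ordering cannot decide (e.g. two unranked teams) are skipped.
    """
    pos = {t: i for i, t in enumerate(order)}
    default = len(order) if unranked_last else None
    right = total = 0
    for w, l in results:
        pw, pl = pos.get(w, default), pos.get(l, default)
        if pw is None or pl is None or pw == pl:
            continue
        total += 1
        right += pw < pl  # lower index = better rank
    return right / total if total else 0.0

print(retro_accuracy(poll, results))
print(retro_accuracy(["C", "B", "A"], results))
```

Here the E-vs-D result is skipped (both unranked), so the partial poll is scored only on the four matches it can decide.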