Post by mikegarrison on Nov 18, 2014 11:44:20 GMT -5
Post by firedup on Nov 18, 2014 11:47:24 GMT -5
Boy, you are fired up about this......

I'm firedup about this and everything else!! The answer to the question of "why should this team be in the tournament?" is obvious. It's Michigan. They are the only team that can beat Stanford!
Post by bluepenquin on Nov 18, 2014 12:02:17 GMT -5
USC is sort of on the extreme - they have the #1 RPI SOS (they still need to qualify). But generally, to your question - RPI takes care of this. Washington State and Cal have great RPI SOS - and their RPI stinks because they don't win any of these games. Utah was in the same boat until just recently. Ohio State last year missed out because they couldn't win enough games in a very tough Big 10. RPI gives a pretty big penalty for losing. And then there is the RPI bias against the WCC, Mt West, and Big West. I do think that LMU and Santa Clara had about the same kind of path/opportunity to an RPI bid as Oregon State and Utah. They had to win more games - but then their conference isn't nearly as good.

Is it possible, instead of a bias, that the WCC, Mt West and Big West teams just aren't that good and RPI is actually accurate? Are there any head-to-head matches of the bubble teams from these West Coast conferences against bubble teams from the East Coast that actually show they are getting screwed somehow?

I define bias as the difference between RPI and Pablo. It is possible that Pablo is 'wrong' - but the past evidence is that Pablo is far more accurate. I should be more careful in lumping all WCC, Mt West, and Big West teams together. The bias is not uniform. Last I checked, there is no bias in favor of the ACC - if anything, they are suffering more of a negative bias. Purdue and Penn State are certainly not getting favors from RPI relative to Pablo. And not all teams in the WCC and Big West are getting 'screwed'. However, those conferences are far more likely to suffer a negative bias than any of the other conferences.
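For anyone wondering what "RPI takes care of this" looks like mechanically, here is a rough sketch assuming the commonly cited 25/50/25 RPI weighting (own winning percentage, opponents' winning percentage, opponents' opponents' winning percentage). The NCAA's actual volleyball formula has further adjustments, and the numbers below are purely hypothetical.

# Rough RPI sketch - assumed 25/50/25 weights; the real NCAA formula
# excludes games against the team itself and applies other tweaks.
def rpi(wp, owp, oowp):
    # wp = own winning pct, owp = opponents' winning pct,
    # oowp = opponents' opponents' winning pct
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Hypothetical team with a brutal schedule (great SOS) but a losing record:
print(rpi(0.45, 0.70, 0.60))   # ~0.61
# The same schedule with a winning record - winning is what moves the number:
print(rpi(0.75, 0.70, 0.60))   # ~0.69

The schedule terms prop the number up, but losses still drag it down, which is why a great RPI SOS with a mediocre record doesn't save you.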
Post by BeachbytheBay on Nov 18, 2014 12:14:20 GMT -5
Is it possible, instead of a bias, that the WCC, Mt West and Big West teams just aren't that good and RPI is actually accurate? Are there any head-to-head matches of the bubble teams from these West Coast conferences against bubble teams from the East Coast that actually show they are getting screwed somehow?

I define bias as the difference between RPI and Pablo. It is possible that Pablo is 'wrong' - but the past evidence is that Pablo is far more accurate. I should be more careful in lumping all WCC, Mt West, and Big West teams together. The bias is not uniform. Last I checked, there is no bias in favor of the ACC - if anything, they are suffering more of a negative bias. Purdue and Penn State are certainly not getting favors from RPI relative to Pablo. And not all teams in the WCC and Big West are getting 'screwed'. However, those conferences are far more likely to suffer a negative bias than any of the other conferences.

It is not possible that Pablo (or Massey) is wrong. Data is data. And actually, the most glaring disparity of teams that are 'screwed' isn't even in the top 50 or bubble teams; it's in the 50-150 range that huge disparities in team/conference strengths tend to show up - so it manifests itself more in top-100 W/Ls and in what is considered a 'bad loss'. Basically, there isn't a single team in the Pac-12 where I'd call a loss to them a 'bad loss', and only one such team in the WCC (Portland).
Post by The Bofa on the Sofa on Nov 18, 2014 12:33:15 GMT -5
Hawaii is interesting this year. Their resume is very suspect. Basically their best wins are Ohio & Northridge (at home), and they have a bad road resume. Heavy home schedule, which could get them degraded a lot in selection. No bad losses. What is interesting is that despite a suspect resume, their RPI is good and their Massey power rating is relatively very good (22 power vs. 31 Massey ranking), which indicates they score points. With their very good block, they could be a tough out in the tournament if they have a good passing game. Seems like Washington would be the likely destination for the first round, possibly Stanford.

Hawai'i's home schedule has been like this for years. Playing extra matches at home will not hurt them come selection time.

Interesting dichotomy. Since it appears that Hawaii's excessive home matches have hurt them in the past (recall that year when they weren't seeded? The fact that they played so many home matches certainly was an argument against them), it seems weird that you would insist it won't hurt them this year, while referring to the past.
Post by The Bofa on the Sofa on Nov 18, 2014 12:34:58 GMT -5
I agree. I would just like to see a concession from the committee that RPI overrates teams from the east coast and underrates teams from the west, and take that into account.

Is Hawaii a west coast team? Because I would contend they are way overrated in RPI this year. How does that relate to the east coast bias claim?

Hawaii current RPI: 30
Hawaii current Pablo: 20

Are you sure they are "way overrated" in RPI? Other than blatant assertion?
Post by Wolfgang on Nov 18, 2014 12:38:05 GMT -5
Hawaii will generally be screwed year in, year out, because of the travel distance and associated travel costs. They schedule more home games to lessen the travel burden - and also because home games make more $$$.
My family used to have a family reunion once every two years back in the day. We would rotate among various meet-up locations, always some family member's hometown. Towns in California, North Carolina, Texas, Illinois, Washington, Colorado. The family member who got screwed over the most? The one who lived in Alaska. We NEVER held any family reunions in Alaska. Unfair? Yes.
Post by Barefoot In Kailua on Nov 18, 2014 12:42:26 GMT -5
Hawai'i's home schedule has been like this for years. Playing extra matches at home will not hurt them come selection time.

Interesting dichotomy. Since it appears that Hawaii's excessive home matches have hurt them in the past (recall that year when they weren't seeded? The fact that they played so many home matches certainly was an argument against them), it seems weird that you would insist it won't hurt them this year, while referring to the past.

Bofa, Hawai'i's RPI has been sitting near 30 for quite a while now. They will get into the tournament provided they don't fall apart in their remaining 4 games. The location of the matches during the pre-conference schedule will not hurt them. It's wins and losses that matter; the location of the matches, not so much. The arguments against Hawai'i teams in the past that you reference were made by members of this forum, not by the Committee.
Post by mikegarrison on Nov 18, 2014 13:01:41 GMT -5
I define bias as the difference between RPI and Pablo. It is possible that Pablo is 'wrong' - but the past evidence is that Pablo is far more accurate. I should be more careful in lumping all WCC, Mt West, and Big West teams together. The bias is not uniform. Last I checked, there is no bias in favor of the ACC - if anything, they are suffering more of a negative bias. Purdue and Penn State are certainly not getting favors from RPI relative to Pablo. And not all teams in the WCC and Big West are getting 'screwed'. However, those conferences are far more likely to suffer a negative bias than any of the other conferences.

It is not possible that Pablo (or Massey) is wrong. Data is data. And actually, the most glaring disparity of teams that are 'screwed' isn't even in the top 50 or bubble teams; it's in the 50-150 range that huge disparities in team/conference strengths tend to show up - so it manifests itself more in top-100 W/Ls and in what is considered a 'bad loss'. Basically, there isn't a single team in the Pac-12 where I'd call a loss to them a 'bad loss', and only one such team in the WCC (Portland).

Certainly it is possible that Pablo or Massey is "wrong". They are both algorithms and also implementations of those algorithms. There could be errors in the implementation. Or the input data could be incorrect. Either of these would be a case where they are "wrong".

But in terms of being wrong about which teams are stronger than which other teams - that's a trickier question. Designing a metric or a rating algorithm is easy. Coming up with a specific set of criteria for judging it is harder. And improving it until it does well against those criteria is harder still. One thing that is very important to Pablo that many people fail to understand is that the creator of Pablo has stated that the criterion for success is that it does a good job of predicting future results. That's the yardstick that changes to Pablo are measured against. You need something like that in order to say whether a metric or a ranking algorithm is successful.

I worked on an international committee that met for more than two years just to design a metric for measuring a particular thing. We first had to come up with a document we could all agree to, listing "key criteria" that a successful metric would meet. Some of these conflicted with others, so the final result was certain to be a balance of different criteria. We got a lot of data and analyzed it in many different ways, arguing about whether different metrics met the various key criteria. Eventually we were able to narrow down the possibilities and finally end up with a single answer that was judged to be the best at meeting our initial list of criteria. This sort of thing is NOT easy when it is done thoroughly!
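For anyone curious what "does a good job of predicting future results" means as a concrete yardstick, here is a minimal sketch. The team names, ratings, and match results are made up, and this is only the bookkeeping, not Pablo's actual model.

# Score a rating system by how often the team it rated higher going into
# a match actually won. All names and numbers here are hypothetical.
def prediction_accuracy(ratings, matches):
    # ratings: {team: rating}, higher = stronger
    # matches: list of (winner, loser) pairs played after the ratings were set
    correct = sum(1 for winner, loser in matches if ratings[winner] > ratings[loser])
    return correct / len(matches)

ratings_rpi   = {"A": 0.62, "B": 0.58, "C": 0.55}
ratings_pablo = {"A": 5400, "B": 5600, "C": 5100}
results = [("B", "A"), ("A", "C"), ("B", "C")]

print("RPI  :", prediction_accuracy(ratings_rpi, results))    # 2 of 3 correct
print("Pablo:", prediction_accuracy(ratings_pablo, results))  # 3 of 3 correct

Run the same check on every rating system over a full season of results and you have a yardstick for saying which one predicts better.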
Post by BeachbytheBay on Nov 18, 2014 13:04:06 GMT -5
Is Hawaii a west coast team? Because I would contend they are way overrated in RPI this year. How does that relate to the east coast bias claim?

Hawaii current RPI: 30
Hawaii current Pablo: 20

Are you sure they are "way overrated" in RPI? Other than blatant assertion?

Which points out that this year's Hawaii team has no problem scoring points, they just don't win very well - which means they could very possibly pull off an upset (or two) in the tournament. They block extremely well and they score points - good combo.
Post by BeachbytheBay on Nov 18, 2014 13:14:07 GMT -5
It is not possible that Pablo (or Massey) is wrong. Data is data. And actually, the most glaring disparity of teams that are 'screwed' isn't even in the top 50 or bubble teams; it's in the 50-150 range that huge disparities in team/conference strengths tend to show up - so it manifests itself more in top-100 W/Ls and in what is considered a 'bad loss'. Basically, there isn't a single team in the Pac-12 where I'd call a loss to them a 'bad loss', and only one such team in the WCC (Portland).

Certainly it is possible that Pablo or Massey is "wrong". They are both algorithms and also implementations of those algorithms. There could be errors in the implementation. Or the input data could be incorrect. Either of these would be a case where they are "wrong". But in terms of being wrong about which teams are stronger than which other teams - that's a trickier question. Designing a metric or a rating algorithm is easy. Coming up with a specific set of criteria for judging it is harder. And improving it until it does well against those criteria is harder still. One thing that is very important to Pablo that many people fail to understand is that the creator of Pablo has stated that the criterion for success is that it does a good job of predicting future results. That's the yardstick that changes to Pablo are measured against. You need something like that in order to say whether a metric or a ranking algorithm is successful. I worked on an international committee that met for more than two years just to design a metric for measuring a particular thing. We first had to come up with a document we could all agree to, listing "key criteria" that a successful metric would meet. Some of these conflicted with others, so the final result was certain to be a balance of different criteria. We got a lot of data and analyzed it in many different ways, arguing about whether different metrics met the various key criteria. Eventually we were able to narrow down the possibilities and finally end up with a single answer that was judged to be the best at meeting our initial list of criteria. This sort of thing is NOT easy when it is done thoroughly!

'Wrong' is such a black and white word.

Bottom line is, the published predictions I have seen in the past show that Pablo and Massey predict better than RPI - hence they are better INDICATORS of team strength. It's that simple - the data has supported that they provide a better estimation of order, or ranking, or whatever term one wants to assign to it.

It is also very simple that past published data shows the West Coast has the bias against it with RPI (and it is not just volleyball, it is notable in college baseball) - hence these are the two sports in which, year in and year out, west coast coaches alternately shrug or complain about imbalance in both selection and seeding.

It would be a very simple (not complicated) thing for the NCAA to provide Pablo/Massey top 25/50/100 'rap sheets' along with RPI, which would at least convince the stakeholders that the committee is taking into account a comprehensive (vs. limited) evaluation of the field it selects and seeds.

Now, whether the NCAA is interested in such a simple guideline (and why would 75% of the conferences/membership be interested when there is basically no negative impact to them from not changing the RPI-centric rating???), only the NCAA could answer - but it would be interesting for them to actually address why or why not that couldn't be an added, formalized criterion.
Post by mikegarrison on Nov 18, 2014 13:26:39 GMT -5
But in terms of being wrong about which teams are stronger than which other teams - that's a trickier question. Designing a metric or a rating algorithm is easy. Coming up with a specific set of criteria for judging it is harder. And improving it until it does well against those criteria is harder still.

Bottom line is, the published predictions I have seen in the past show that Pablo and Massey predict better than RPI - hence they are better INDICATORS of team strength. It's that simple - the data has supported that they provide a better estimation of order, or ranking, or whatever term one wants to assign to it.

So your criterion is that they are better predictors. But what makes you think that is the NCAA's criterion? The NCAA may have some criteria you are not considering. They may be concerned about data availability, and so their criterion may be that the metric can only use certain parameters (like wins and losses but not points). They may have a criterion that the metric cannot create an incentive to run up the score. They may have a criterion that the metric should influence teams to schedule in a certain way, or to not schedule in a certain way. They may have a criterion that the process is entirely controlled by the NCAA with no outside involvement. There are lots of criteria the NCAA might have that could be in conflict with the one you have focused on.

Personally, I happen to think that predicting the probabilities of future results is a very good criterion, but I'm not the NCAA.
Post by The Bofa on the Sofa on Nov 18, 2014 13:35:45 GMT -5
So regarding regional biases in RPI, recall an exercise that I have described before, where I examine inter-regional matches that have been played and compare them to the relative RPI values. I know the NCAA insists that RPI has nothing to do with predicting outcomes, but you'd think it should do a good job of reflecting outcomes for matches that have been played, right?
But then you see things like this: There have been 27 matches between teams from the East region this year against teams from the Pacific region. Of those 27 matches, RPI correctly reflects the team that won 23 times, so 85%. Great, right?
However, if you look more closely, it's not so great. For example, of those matches, the Pacific team has the higher RPI in 19 of them. And in those matches where the Pacific team does have the better RPI, they have won all 19.
In contrast, teams from the East have the better RPI for 8 matchups, but they only won 4 of them.
What this tells us is that the Pacific teams' RPIs are lower than they should be to best account for the results. The proper RPIs should give something more like 7/8 for the East and 19/22 for the Pacific teams.
Alternatively, the way to think about it is that in matches that the east team won, the east team is rated higher 4/4 times. However, when the Pacific team won, they are only ranked higher 19/23 times, or 83%.
Now, this is only one comparison, and there are only 4 data points on the East winning, so we don't want to draw too many conclusions from this alone, but look at something like South vs West. There, when the South team wins, they end up ranked higher 90% of the time, but when the West team wins, they are ranked higher only 75% of the time. And this is like 30-40 matches. Conversely, when the South team has the higher RPI, they won 71% of the time. However, when the West has the higher RPI, they win 92% of the time. Same with the Pacific - when the South has the higher RPI, they won 65% of the time. When the Pacific has the higher RPI, they are 11/11. Conversely, when the South team wins, they ALWAYS (13/13) end up with a higher RPI than the Pacific team. However, when the Pacific team wins, they end up higher in RPI only 11/18 (61%).
Seriously, this is a conspiracy on the Herb Stempel level ("A Gentile always beats a Jew, and the Gentile goes on to win more money").
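If anyone wants to replicate that kind of tally, the bookkeeping is simple - a minimal sketch below. The match list, regions, and RPI ranks are made-up placeholders, not the actual 2014 results.

# Tally how often the winner of a cross-region match also held the
# better (lower) RPI rank, broken down by the winner's region.
# The matches below are placeholders, not real results.
from collections import defaultdict

matches = [
    # (winner_region, winner_rpi_rank, loser_region, loser_rpi_rank)
    ("East", 25, "Pacific", 40),
    ("Pacific", 18, "East", 60),
    ("Pacific", 55, "East", 30),   # a win that RPI does not reflect
]

agree = defaultdict(int)   # wins where the winner also had the better RPI
total = defaultdict(int)   # all wins, keyed by the winner's region

for w_region, w_rpi, l_region, l_rpi in matches:
    total[w_region] += 1
    if w_rpi < l_rpi:              # lower rank number = better RPI
        agree[w_region] += 1

for region in total:
    print(f"{region}: winner had the better RPI in {agree[region]} of {total[region]} wins")

If one region's wins are almost always reflected by RPI and another region's are not, that is the asymmetry described above.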
Post by BeachbytheBay on Nov 18, 2014 13:37:08 GMT -5
Bottom line is, the published predictions I have seen in the past show that Pablo and Massey predict better than RPI - hence they are better INDICATORS of team strength. It's that simple - the data has supported that they provide a better estimation of order, or ranking, or whatever term one wants to assign to it.

So your criterion is that they are better predictors. But what makes you think that is the NCAA's criterion? The NCAA may have some criteria you are not considering. They may be concerned about data availability, and so their criterion may be that the metric can only use certain parameters (like wins and losses but not points). They may have a criterion that the metric cannot create an incentive to run up the score. They may have a criterion that the metric should influence teams to schedule in a certain way, or to not schedule in a certain way. They may have a criterion that the process is entirely controlled by the NCAA with no outside involvement. There are lots of criteria the NCAA might have that could be in conflict with the one you have focused on. Personally, I happen to think that predicting the probabilities of future results is a very good criterion, but I'm not the NCAA.

Well, we KNOW it's not the NCAA's criterion, because they don't formalize using it!!

There are ways around the 'run up the score' argument - they could dictate no benefit for 'blow-outs' (say, anything below 16 points), and I seriously doubt blow-out differentials (say, 8 vs. 12) are of much use anyway. As to data availability, this is the information age; that's a non-starter. The fact that 'blow-out' data could not be used should in large part satisfy the scheduling concern, i.e., scheduling teams to maximize points won't really help - you still want to schedule a reasonable number of non-conference teams that will challenge you.

I would agree the one thing RPI does is encourage west coast teams to 'schedule east', but that can be easier said than done. I've seen teams with a positive RPI differential one year have a massive negative RPI differential the next with similar types of schedules against eastern teams.
Post by The Bofa on the Sofa on Nov 18, 2014 13:42:06 GMT -5
Bottom line is, the published predictions I have seen in the past show that Pablo and Massey predict better than RPI - hence they are better INDICATORS of team strength. It's that simple - the data has supported that they provide a better estimation of order, or ranking, or whatever term one wants to assign to it.

So your criterion is that they are better predictors. But what makes you think that is the NCAA's criterion?

That's true, Mike, but I will tell you something else. See my post above, looking not at predictions, but at how well RPI reflects what HAS happened. The NCAA can tell me all they want that they don't care about predicting, and I'll take it, but at that point you darn well better be interested in reflecting things that have already happened, because if you don't care about that, then you've got nothing.

And if you look at what RPI has done in terms of reflecting what has happened, you can see a very strong east/south bias, particularly against the west/pacific. When teams from the east win, they always end up rated higher. When teams from the west win, they are much more likely to be ranked behind the team that beat them.

I know upsets happen. I was the one who taught everyone here that not only do upsets happen but that they MUST happen, so I get that. But when it just happens that east teams are the ones always pulling off the upsets and never getting upset themselves, there is a problem. And that's what we find with RPI.