|
Post by n00b on Oct 30, 2014 10:27:43 GMT -5
In the same way that the original BCS formula partially used a margin of victory measurement, which led to some running up the score by power teams, you would probably see a bit less "sportsmanship" in volleyball if you incentivize point-scoring.

But volleyball is not the same as these other sports. You can't "run up the score." You have to play every set to 25. Are you saying there are coaches who advocate throwing a certain number of points in a match in the name of "sportsmanship"? Anyway, Washington has experimented with lineups this year, and has also pulled starters out of matches where the opponent was clearly overmatched. That hasn't stopped them from ending up number 1 in pablo this week. People have been raising these concerns about pablo for a long time, but there has never been any actual evidence that the concerns are valid.

I think it's fair to say that Washington is an outlier. They are one of only 2 teams to outscore their opponent in every match this season (and those two teams are #1 and #2). Again, the issue isn't that those things are currently negatively affecting the rankings; it's that if Pablo became an official criterion, coaches would change how they coached.
|
|
|
Post by n00b on Oct 30, 2014 10:28:56 GMT -5
Coaches start seniors on senior day. Coaches often start non-starters against teams they know they are going to beat, leading to closer set scores. Coaches put in backups to get them playing time late in sets that they are going to win. Do they still want to win those points? Of course. But they aren't doing everything they can to win those points. They're conceding some points in order to make their bench players happy.

And despite all this, the relationship between point percentage in the first match and the winning percentage in the second match is indistinguishable from what Pablo expects it to be. So either the stuff you claim is happening isn't happening, or it's not significant. The only place where we see deviation from the model is if we look at blowouts, but there the issue is not that teams do better than expected based on points, which would suggest they "let up" in blowouts, but that teams don't do as well as expected after blowouts, which is exactly the opposite of what you'd expect if teams were letting off the gas. If anything, it is like the other team gives up.

You failed to quote my next paragraph, which agreed that it isn't significant. But it WOULD cause coaches to coach differently if Pablo was an official criterion.
|
|
|
Post by The Bofa on the Sofa on Oct 30, 2014 10:59:25 GMT -5
And despite all this, the relationship between point percentage in the first match and the winning percentage in the second match is indistinguishable from what Pablo expects it to be. So either the stuff you claim is happening isn't happening, or it's not significant. The only place where we see deviation from the model is if we look at blowouts, but there the issue is not that teams do better than expected based on points, which would suggest they "let up" in blowouts, but that teams don't do as well as expected after blowouts, which is exactly the opposite of what you'd expect if teams were letting off the gas. If anything, it is like the other team gives up.

You failed to quote my next paragraph, which agreed that it isn't significant. But it WOULD cause coaches to coach differently if Pablo was an official criterion.

But if it isn't significant, then why would coaching differently by not doing these things make a difference? If you can't tell a difference when they are doing them, how do you expect to tell a difference if they stop? The current results are indistinguishable from what you'd expect if they weren't doing these things.
|
|
|
Post by mikegarrison on Oct 30, 2014 11:14:01 GMT -5
You failed to quote my next paragraph, which agreed that it isn't significant. But it WOULD cause coaches to coach differently if Pablo was an official criterion.

But if it isn't significant, then why would coaching differently by not doing these things make a difference? If you can't tell a difference when they are doing them, how do you expect to tell a difference if they stop? The current results are indistinguishable from what you'd expect if they weren't doing these things.

Exactly. There is no evidence that coaches would change how they coach if seedings were based on pablo. You know what would change, though? Scheduling. Right now teams interested in raising their RPI have a very specific interest in scheduling matches against the best teams from the worst conferences. And I think the NCAA likes it that way. Certainly they like it that way in men's basketball, the birthplace of RPI. There were a whole bunch of teams from non-power conferences who were being excluded from the tournament in favor of the bottom teams from the power conferences, and they wanted a chance to show they deserved to be in. But the power conference teams refused to schedule them. RPI changed that.
|
|
|
Post by The Bofa on the Sofa on Oct 30, 2014 11:20:42 GMT -5
But if it isn't significant, then why would coaching differently by not doing these things make a difference? If you can't tell a difference when they are doing them, how do you expect to tell a difference if they stop? The current results are indistinguishable from what you'd expect if they weren't doing these things.

Exactly. There is no evidence that coaches would change how they coach if seedings were based on pablo. You know what would change, though? Scheduling. Right now teams interested in raising their RPI have a very specific interest in scheduling matches against the best teams from the worst conferences. And I think the NCAA likes it that way. Certainly they like it that way in men's basketball, the birthplace of RPI. There were a whole bunch of teams from non-power conferences who were being excluded from the tournament in favor of the bottom teams from the power conferences, and they wanted a chance to show they deserved to be in. But the power conference teams refused to schedule them. RPI changed that.

I have long said that. This is one reason why I think the NCAA likes RPI. It provides an incentive for teams at the top of conferences to schedule each other.
|
|
|
Post by s0uthie on Oct 30, 2014 14:09:22 GMT -5
Remember that the upper threshold for Pablo scoring affecting your rating is approximately 59% of points scored. If you beat a team 25-18, 25-18, 25-17, Pablo handles that game identically to one with scores of 25-0, 25-1, 25-3. I think anecdotally, we can all agree that coaches would coach differently if points scored against those terrible teams in your conference mattered. Statistically, people are also correct in saying that it won't actually matter if they do coach differently, because Pablo doesn't really care about those super blowouts where the score is 20-10 and you empty your bench. Does that bring the two camps closer to understanding each other?
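The capping behavior described above can be sketched in a few lines. This is a hypothetical illustration, not Pablo's actual code: the function name is made up, and the exact cap value is just the "approximately 59%" figure from the post.

```python
# Hypothetical sketch of the point-share cap described above: once a team's
# share of total points passes ~59%, any further margin no longer moves the
# rating. The name and exact threshold are assumptions, not Pablo's code.

PABLO_CAP = 0.59  # approximate upper threshold quoted in the post

def capped_point_share(own_points, opp_points, cap=PABLO_CAP):
    """Fraction of total points won, clipped to the band [1 - cap, cap]."""
    share = own_points / (own_points + opp_points)
    return min(max(share, 1 - cap), cap)

# 25-18, 25-18, 25-17 sweep: 75 of 128 points, about 58.6% -- at the cap's edge
close_sweep = capped_point_share(75, 53)

# 25-0, 25-1, 25-3 sweep: 75 of 79 points, about 94.9% -- clipped down to 59%
total_blowout = capped_point_share(75, 4)
```

Under this sketch the two sweeps land within half a point of each other, which is the post's point: past the threshold, running up the score buys essentially nothing.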
|
|
|
Post by BeachbytheBay on Oct 30, 2014 14:47:34 GMT -5
Remember that the upper threshold for Pablo scoring affecting your rating is approximately 59% of points scored. If you beat a team 25-18, 25-18, 25-17, Pablo handles that game identically to one with scores of 25-0, 25-1, 25-3. I think anecdotally, we can all agree that coaches would coach differently if points scored against those terrible teams in your conference mattered. Statistically, people are also correct in saying that it won't actually matter if they do coach differently, because Pablo doesn't really care about those super blowouts where the score is 20-10 and you empty your bench. Does that bring the two camps closer to understanding each other?

I don't get the argument that 'two camps' need to be brought closer together. Why exactly should they need to be brought together? It's been shown Pablo is a superior RATING system for predicting outcomes - end of story.

Now as to ranking: using RPI because it factors in wins - I can understand using it for setting up some order or groupings of teams - but to compound its flaws and also use it to measure the 'quality' of wins/losses is very suspect, because it introduces biases that can be very significant in evaluating the quality of wins/losses when there are 20-40 position disparities in how RPI 'ranks' teams.
|
|
|
Post by volleyguy on Oct 30, 2014 14:52:00 GMT -5
Remember that the upper threshold for Pablo scoring affecting your rating is approximately 59% of points scored. If you beat a team 25-18, 25-18, 25-17, Pablo handles that game identically to one with scores of 25-0, 25-1, 25-3. I think anecdotally, we can all agree that coaches would coach differently if points scored against those terrible teams in your conference mattered. Statistically, people are also correct in saying that it won't actually matter if they do coach differently, because Pablo doesn't really care about those super blowouts where the score is 20-10 and you empty your bench. Does that bring the two camps closer to understanding each other?

I don't get the argument that 'two camps' need to be brought closer together. Why exactly should they need to be brought together? It's been shown Pablo is a superior RATING system for predicting outcomes - end of story. Now as to ranking: using RPI because it factors in wins - I can understand using it for setting up some order or groupings of teams - but to compound its flaws and also use it to measure the 'quality' of wins/losses is very suspect, because it introduces biases that can be very significant in evaluating the quality of wins/losses when there are 20-40 position disparities in how RPI 'ranks' teams.

But is it compounding, or a consistent application of the measure? Are you suggesting that they should use another measure of good wins and bad losses after having used RPI to determine the field? That's what I don't understand about your argument.
|
|
|
Post by redbeard2008 on Oct 30, 2014 15:05:58 GMT -5
You know what would change, though? Scheduling. Right now teams interested in raising their RPI have a very specific interest in scheduling matches against the best teams from the worst conferences. And I think the NCAA likes it that way. Certainly they like it that way in men's basketball, the birthplace of RPI. There were a whole bunch of teams from non-power conferences who were being excluded from the tournament in favor of the bottom teams from the power conferences, and they wanted a chance to show they deserved to be in. But the power conference teams refused to schedule them. RPI changed that.

Yeah, I remember when Washington refused, for years, to schedule Seattle U in men's basketball, because they "had nothing to gain."
|
|
|
Post by BeachbytheBay on Oct 30, 2014 19:49:04 GMT -5
I don't get the argument that 'two camps' need to be brought closer together. Why exactly should they need to be brought together? It's been shown Pablo is a superior RATING system for predicting outcomes - end of story. Now as to ranking: using RPI because it factors in wins - I can understand using it for setting up some order or groupings of teams - but to compound its flaws and also use it to measure the 'quality' of wins/losses is very suspect, because it introduces biases that can be very significant in evaluating the quality of wins/losses when there are 20-40 position disparities in how RPI 'ranks' teams.

But is it compounding, or a consistent application of the measure? Are you suggesting that they should use another measure of good wins and bad losses after having used RPI to determine the field? That's what I don't understand about your argument.

Yes, that is exactly what I am saying. If they use RPI for an initial ordering/ranking, they should then do the next level of evaluation for top 25/50/100 W/L based on another system that provides a better evaluation of how 'good' a team is (like Pablo or Massey). Being consistent by using a flawed measuring stick is being consistently wrong. It's like measuring a floor with a ruler that is not calibrated (it is 11" but says it is 12"), and then checking your measurement with the same ruler - you just reinforce the incorrect result.
If you use a different tool (Pablo or Massey) to check your result, you can see where there are small differences (in which case a team is correctly placed) or large differences (where a team is significantly over/undervalued, and those teams deserve a closer look).

Example: a team is RPI 60 and has an RPI top 50 W/L of 1-2, but its Pablo top 50 W/L is 4-3. That is really significant, in that it points to the team actually A) having played significantly tougher matches than RPI indicated, and B) having beaten a lot more tough teams than RPI indicated.

That way the flaws of RPI don't get multiplied in the analysis - and it wouldn't be complicated: just show the team's RPI in one column and its Pablo top 50/top 100 W/L in another column (vs. the RPI W/L), and then you have ensured a more reasonable counterbalance/check of the RPI data.

A real example: the team currently ranked 29 in RPI has a 7-5 top 100 and 1-2 top 25 RPI record - compare that team's Massey record: 3-5 top 100, and 1-2 top 50 Massey. That's a significant difference in how the team actually performed versus how it is perceived using RPI.

Now, not every team will have such a large difference between ratings, but for the teams that do, at least they get a calibrated look rather than just being ordered by RPI.
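The uncalibrated-ruler analogy from the post above can be put in concrete numbers. This is purely illustrative (the lengths are made up): a ruler that is really 11 inches but marked as 12 over-reads every measurement by the same factor, so re-checking with the same ruler reproduces the error, and only an independent tool exposes it.

```python
# The uncalibrated-ruler analogy in numbers: a "12-inch" ruler that is really
# 11 inches long over-reads every length by a factor of 12/11. Checking the
# measurement with the same ruler gives the same wrong answer; only an
# independently calibrated ruler reveals the bias. Numbers are illustrative.

TRUE_LENGTH = 110.0  # the floor's actual length, in inches

def measure(true_length, ruler_true_inches, ruler_marked_inches=12.0):
    """Reading obtained when the ruler's markings assume it is 12 inches."""
    return true_length * ruler_marked_inches / ruler_true_inches

first_reading = measure(TRUE_LENGTH, 11.0)  # over-reads: 120 instead of 110
recheck       = measure(TRUE_LENGTH, 11.0)  # same ruler, same wrong answer
independent   = measure(TRUE_LENGTH, 12.0)  # calibrated ruler: reads 110
```

The parallel to the post: checking RPI-based "quality wins" with RPI itself is the recheck; Pablo or Massey plays the role of the independent ruler.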
|
|
|
Post by The Bofa on the Sofa on Oct 30, 2014 20:10:32 GMT -5
But is it compounding, or a consistent application of the measure? Are you suggesting that they should use another measure of good wins and bad losses after having used RPI to determine the field? That's what I don't understand about your argument.

Yes, that is exactly what I am saying. If they use RPI for an initial ordering/ranking, they should then do the next level of evaluation for top 25/50/100 W/L based on another system that provides a better evaluation of how 'good' a team is (like Pablo or Massey). Being consistent by using a flawed measuring stick is being consistently wrong.

The problem is even deeper than that. Remember that RPI is not just based on winning percentage, OWP, and OOWP; there are the super secret corrections that get applied, in which teams are given RPI boosts for things like...wins over top 25 teams, and they are given RPI penalties for losses to bad teams. So if you now do an RPI nitty gritty and focus on wins against top 25 teams and bad losses, you are actually double counting them! IOW, the current version of the RPI ALREADY credits teams for good wins and penalizes them for bad losses, as determined by RPI. That's why it would be so beneficial to use something else to give a different view of good wins and bad losses.
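The structure being described can be sketched as follows. The 25/50/25 weighting of winning percentage, opponents' winning percentage (OWP), and opponents' opponents' winning percentage (OOWP) is the standard published RPI formula; the bonus and penalty values, however, are placeholders, since the post itself notes the NCAA's actual adjustments are not public.

```python
# Sketch of the RPI structure described above. The 25/50/25 weighting is the
# standard published base formula; the bonus/penalty magnitudes below are
# placeholder assumptions -- the real adjustments are "super secret."

def base_rpi(wp, owp, oowp):
    """Standard base RPI: 25% own winning pct, 50% OWP, 25% OOWP."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

def adjusted_rpi(wp, owp, oowp, top25_wins, bad_losses,
                 bonus=0.0005, penalty=0.0005):
    """Base RPI plus a good-win bonus and bad-loss penalty of the kind the
    post describes. Because the adjustment already rewards top-25 wins and
    punishes bad losses, a committee that then ALSO weighs those same wins
    and losses in the nitty-gritty is counting the same information twice."""
    return base_rpi(wp, owp, oowp) + bonus * top25_wins - penalty * bad_losses
```

The double-counting argument is visible in the second function: `top25_wins` and `bad_losses` have already moved the number before the committee ever looks at a team's record against the top 25.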
|
|
|
Post by volleyguy on Oct 30, 2014 23:46:30 GMT -5
Yes, that is exactly what I am saying. If they use RPI for an initial ordering/ranking, they should then do the next level of evaluation for top 25/50/100 W/L based on another system that provides a better evaluation of how 'good' a team is (like Pablo or Massey). Being consistent by using a flawed measuring stick is being consistently wrong.

The problem is even deeper than that. Remember that RPI is not just based on winning percentage, OWP, and OOWP; there are the super secret corrections that get applied, in which teams are given RPI boosts for things like...wins over top 25 teams, and they are given RPI penalties for losses to bad teams. So if you now do an RPI nitty gritty and focus on wins against top 25 teams and bad losses, you are actually double counting them! IOW, the current version of the RPI ALREADY credits teams for good wins and penalizes them for bad losses, as determined by RPI. That's why it would be so beneficial to use something else to give a different view of good wins and bad losses.

I see. So using a straight RPI cut-off, which everyone howled about when it happened just a few years ago, was actually less problematic?
|
|
|
Post by The Bofa on the Sofa on Oct 31, 2014 7:02:00 GMT -5
The problem is even deeper than that. Remember that RPI is not just based on winning percentage, OWP, and OOWP; there are the super secret corrections that get applied, in which teams are given RPI boosts for things like...wins over top 25 teams, and they are given RPI penalties for losses to bad teams. So if you now do an RPI nitty gritty and focus on wins against top 25 teams and bad losses, you are actually double counting them! IOW, the current version of the RPI ALREADY credits teams for good wins and penalizes them for bad losses, as determined by RPI. That's why it would be so beneficial to use something else to give a different view of good wins and bad losses.

I see. So using a straight RPI cut-off, which everyone howled about when it happened just a few years ago, was actually less problematic?

To an extent, yes. I mean, it still suffers from the failings of RPI, but it's more logically consistent. I'd prefer the NCAA just drop the stupid corrections and let the committee deal with it.
|
|
|
Post by redbeard2008 on Oct 31, 2014 17:25:26 GMT -5
Which raises the question of who is on the committee and whether simply giving them carte blanche would be an improvement. If representative of Div 1 as a whole, the committee would be heavily weighted in favor of eastern schools. Part of the idea of RPI and other criteria is to take the politics out of it, or at least to cover the committee's tracks.
|
|
|
Post by BeachbytheBay on Oct 31, 2014 18:05:13 GMT -5
Which raises the question of who is on the committee and whether simply giving them carte blanche would be an improvement. If representative of Div 1 as a whole, the committee would be heavily weighted in favor of eastern schools. Part of the idea of RPI and other criteria is to take the politics out of it, or at least to cover the committee's tracks.

lol, yet RPI favors eastern schools
|
|