|
Post by kiyoat on Nov 21, 2019 17:59:01 GMT -5
Thanks for the clarification (both posters that replied). I guess this Group 1 and 2 scenario is why I struggle with this RPI. So Team A could play three matches against Group 1, and Team B plays against Group 2. Team A goes 1-2, but beats UCLA. Team B goes 3-0. Team A has a Top 25 win and has played a much tougher schedule than Team B, who has not played a Top 50 team. Yet Team B gains a significant advantage on 75% of the RPI calculation despite not really playing any team of any caliber. I know that a sample size of 3 matches is small, and I am sure that someone will suggest that over the full schedule, the 3rd component will balance it all out. But this formula rewards teams for scheduling teams that traditionally compete for conference championships in the weaker, one-bid conferences instead of a middle-of-the-pack Big10/Pac12/SEC team, because they will have an equal or better chance to win and will get a big bump on the 50% portion of the RPI calc.

I'll give you an even simpler example of why the RPI needs a replacement metric: so-called "good wins" and "bad losses" really have nothing to do with your RPI score/rank. Zero. Those things are factors in the human evaluation of teams by the committee, but don't really affect your score. So if Team A knocks off a top-15 team (measured by RPI), then loses to a 300-ranked team, they would net the same exact RPI score as Team B, who beats the same 300 team and loses to the same top-15 team. The RPI doesn't care WHICH teams you beat, only the fact that Team A and Team B both are 1-1 and played the same schedule. That aspect makes very little sense, IMHO.

Sure, the above example would be caught by the committee, but only if they were actually scrutinizing the teams closely, which likely only happens with bubble teams and in seeding top teams. For the rest, it's way easier to just go with the RPI rank. Also, in evaluating "top-whatever" wins, they are still using the straight, unmodified RPI rank of your opponents. So they use the RPI to check the validity of the RPI. That's called circular logic in my book. Why not use a weighted average of other ranking systems, or literally anything else?

Anyway, sorry for the rant. I'm sure most of you on this board have heard anti-RPI propaganda for decades. Nothing new to see here. Carry on. No need to respond.
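A minimal sketch of the "good wins don't matter" point above, using made-up records and a simplified formula (no exclusion of games against the rated team, no home/away weighting, no bonuses): two teams that go 1-1 against the same two opponents get the exact same RPI, no matter which opponent they actually beat.

```python
# Toy RPI calculation: RPI = 0.25*WP + 0.50*OWP + 0.25*OOWP
# (hypothetical records, simplified -- no game exclusions, weights, or bonuses)

def rpi(wp, owp, oowp):
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Both Team A and Team B play the same two opponents and go 1-1:
# a top-15-caliber team at 20-2 and a ~300-ranked team at 2-20.
wp_a = wp_b = 1 / 2                      # identical win percentage
owp = (20 / 22 + 2 / 22) / 2             # same opponents, so same OWP
oowp = 0.500                             # assume the same OOWP for both

# Team A beat the top-15 team and lost to the 300-ranked team;
# Team B did the reverse. The formula cannot tell them apart.
print(f"Team A RPI: {rpi(wp_a, owp, oowp):.3f}")
print(f"Team B RPI: {rpi(wp_b, owp, oowp):.3f}")   # identical value
```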
|
|
bluepenquin
Hall of Fame
4-Time VolleyTalk Poster of the Year (2019, 2018, 2017, 2016), All-VolleyTalk 1st Team (2021, 2020, 2019, 2018, 2017, 2016) All-VolleyTalk 2nd Team 2023
Posts: 13,306
|
Post by bluepenquin on Nov 21, 2019 18:31:45 GMT -5
You may want to start by gaining a better understanding of how RPI is calculated: en.wikipedia.org/wiki/Rating_percentage_index

It's strictly a mathematical formula with the following components: "The current and commonly used formula for determining the RPI of a . . . team at any given time is as follows. RPI = (WP * 0.25) + (OWP * 0.50) + (OOWP * 0.25), where WP is Winning Percentage, OWP is Opponents' Winning Percentage and OOWP is Opponents' Opponents' Winning Percentage." There are also bonuses related to scheduling, etc., but they are not all that relevant in a basic RPI discussion. The perceived "strength" of the teams on a schedule does not factor into the RPI of the team playing that schedule.

Consider the following two groups of teams (listed RPIs are from 11/18):

Group 1: (RPI 19) Southern Cal (15-11); (23) UCLA (14-11); and (45) Illinois (13-12)
Group 2: (RPI 84) Winthrop (22-4); (85) Eastern Tennessee (23-5); and (131) Fairfield (22-5)

Most knowledgeable volleyball fans would agree that the teams in Group 1 are "better" than the teams in Group 2. However, from an RPI standpoint, a team would benefit more from playing the teams in Group 2 rather than the teams in Group 1. Smart RPI scheduling involves playing (and winning) non-conference matches against teams that are likely to dominate their conference and thus end up with an outstanding overall record.

(quoting kiyoat's post above)

Yes - but it is more complicated than this. A good RPI schedule for a Top 10 team is completely different than a good RPI schedule for a Top 25 team, and then a Top 50 team. The general rule: you want to schedule teams you can beat that you think will have the best record. Illinois' schedule this year would have been great for last year's team. It came close to being a disaster for this year's team. In the end, Illinois has too many losses on their schedule, and this is killing their RPI. If you are a top 10 team and are interested in being a regional seed, then you probably don't want Missouri's schedule. No one is getting a regional seed by playing only high-record one-bid conference teams. Finally, RPI is going to hammer middle-of-the-road teams from the two power conferences, because they will lose too many games. Those teams in the current environment are always way better than their RPI would indicate, and they become RPI scheduling problems for their opponents. No one wanted Oregon on their schedule this year - or Illinois.
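A rough sketch of the Group 1 vs. Group 2 comparison, using the records quoted above and the same simplifications as before (opponents' records taken as-is, no game exclusions, home/away weights, or bonuses). The Group 2 opponents feed a much larger number into the OWP term, which carries half the weight in the RPI formula.

```python
# Compare the OWP contribution of the two groups from the post above.
# Simplified: raw records, no exclusion of head-to-head games, no weighting.

def win_pct(wins, losses):
    return wins / (wins + losses)

group1 = [(15, 11), (14, 11), (13, 12)]   # Southern Cal, UCLA, Illinois
group2 = [(22, 4), (23, 5), (22, 5)]      # Winthrop, Eastern Tennessee, Fairfield

owp1 = sum(win_pct(w, l) for w, l in group1) / len(group1)
owp2 = sum(win_pct(w, l) for w, l in group2) / len(group2)

print(f"OWP from Group 1 opponents: {owp1:.3f}")   # ~0.552
print(f"OWP from Group 2 opponents: {owp2:.3f}")   # ~0.827
# OWP carries a 0.50 weight in the RPI, so the Group 2 schedule is worth far
# more RPI-wise, even though Group 1 is clearly "better" by the eye test.
```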
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Nov 21, 2019 19:18:43 GMT -5
(quoting the exchange above)

So after all the explanations, I am more convinced that the RPI is an average to poor indicator of a team's relevant rank. Go back to my two original teams - Missouri and Illinois, both of whom I have seen more than once. The eye test says that Missouri is not one of the top teams in the discussion of a Top 16 seed, even though their RPI suggests they are. And Illinois is not a bubble team, despite the 12 losses. I think 9 of those losses are against Top 25 teams (Coaches Poll), 7 against Top 10. Yet, based on their RPI, they are one of the last 4 in. Enough of my venting.
|
|
|
Post by mikegarrison on Nov 21, 2019 19:30:46 GMT -5
(quoting the post above: "I am more convinced that the RPI is an average to poor indicator of a team's relevant rank.")

Yes, this is the case.

If you have ever written software, you may have encountered the "kludge". You write some quick and dirty program that uses a funky trick to get the answer you want. Then this becomes part of your standard process. Pretty soon you have to keep tweaking it and adding to it because it was only kind of accidental that it worked in the first place. Eventually you have spent more time programming around the limitations of your original code than you would have spent just doing it right the first time.

Well, that's the NCAA D1 volleyball selection process. A shorthand version of what's wrong with RPI is that it is trying to solve the problem that win/loss records don't always reflect team strength, so to solve that it uses ... win/loss records. Hello, is anybody home in there? And since that doesn't work so well and RPI produces some bad results, they tweak it by giving bonuses for beating teams with ... high RPI.

Yes, that's right. Win/loss records can be misleading, so they fix it by using more win/loss records. And RPI can be misleading, so they fix that by giving you bonuses based on the RPI of the teams you beat. Nobody at the NCAA ever heard that when you are stuck in a hole, maybe "keep digging" isn't the best response.
|
|
|
Post by trianglevolleyball on Nov 22, 2019 20:09:47 GMT -5
Lots of big matches for bubble teams tonight and some interesting results so far.

NW up 2-0 on OSU
LVille up 2-0 on ND
South Carolina at UGA tied 1-1
|
|
|
Post by trianglevolleyball on Nov 22, 2019 20:47:29 GMT -5
Ohio State goes down in 3 to NW. Have to think that ends their tourney hopes short of winning their final 3 tough matches. Louisville takes down ND and solidifies itself as a top 25 win for Pitt.
|
|
|
Post by Kingsley on Nov 22, 2019 20:49:09 GMT -5
(quoting trianglevolleyball's post above)

Nothing bursts your bubble quite like getting swept at home by a team that's 2-14 in conference play.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Nov 23, 2019 11:17:30 GMT -5
"Louisville takes down ND and solidifies itself as a top 25 win for Pitt."

Not sure this is true. Neither team was in the latest Top 30 coaches poll. LVille was 27th RPI, so maybe with the goofy RPI calculations they crack the Top 25. But "solidifies" might be a bit strong.
|
|
trojansc
Legend
All-VolleyTalk 1st Team (2023, 2022, 2021, 2020, 2019, 2018, 2017), All-VolleyTalk 2nd Team (2016), 2021, 2019 Fantasy League Champion, 2020 Fantasy League Runner Up, 2022 2nd Runner Up
Posts: 31,599
|
Post by trojansc on Nov 23, 2019 11:24:51 GMT -5
(quoting the exchange above)

The coaches poll means nothing. See 2011: USC #1, Hawaii #3. Two potential Final Four teams. Seeds from the committee? USC #7 vs. Hawaii #10.
|
|
|
Post by redbeard2008 on Nov 23, 2019 12:55:48 GMT -5
Your Group 2 is also why Eastern schools tend to have overly-inflated RPIs. More conferences = more RPI fodder. There are a lot fewer (horrible) conferences on the West Coast. West Coast teams are more prone to "incestuous" scheduling, where in pre-conference you're playing someone who is playing someone who is playing you. In the extreme case, you're playing someone who is playing several someones who are playing you. If that sounds similar to what happens in conferences, it is - that's why round-robin conference members cancel each other out, RPI-wise.

So, at best, RPI is a rough sorting tool, which is used to initially order the potential field. Two main tasks remain: 1) sorting the "bubble teams" to figure who's in, and out, and 2) seeding. Both are done, not with "raw", but with "adjusted" RPI. If adjusted RPI is only used as a "second-level" sorting tool, with a "third-level" sort following, further "correcting the corrections," by bringing other factors into play (theoretically, they can look at anything, but never do), it would be well and fine.

The problem is that a lazy committee, or a committee running out of time, can stop short, by not sufficiently correcting the second-level sort, allowing too many "sore thumbs" to remain in the seed, for instance. This is also where politics and favoritism, by commission or omission, can sneak in, allowing overly easy regional brackets ("primrose paths") and overly difficult regionals ("regionals of death"). Committees can also get a lot of flak if their brackets veer too far from expectations, which may or may not be correct.
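A small sketch of why round-robin conference members "cancel each other out, RPI-wise" (made-up teams, random results): within a closed round-robin group, every win is matched by a loss inside the same group, so the group's collective winning percentage against itself is always .500, and no member gets an OWP boost from the others.

```python
# Toy example: a closed round-robin always nets out to a .500 group record,
# regardless of who wins the individual matches (hypothetical teams).
import itertools
import random

teams = ["A", "B", "C", "D", "E", "F"]
wins = {t: 0 for t in teams}
losses = {t: 0 for t in teams}

for home, away in itertools.combinations(teams, 2):
    winner = random.choice([home, away])          # result doesn't matter
    loser = away if winner == home else home
    wins[winner] += 1
    losses[loser] += 1

group_wp = sum(wins.values()) / (sum(wins.values()) + sum(losses.values()))
print(f"Group winning percentage against itself: {group_wp:.3f}")  # always 0.500
```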
|
|
|
Post by trollhunter on Nov 23, 2019 13:08:41 GMT -5
(quoting redbeard2008's post above)

Some excellent points about scheduling, RPI as a rough sorting tool, politics, expectations, etc. This is good insight into many aspects of how NCAA bids really work.

A minor gripe about where/why many (not just you specifically) get the idea that committees only look at RPI, and not the other primary criteria as well (W/L, SOS, SigWinLoss, H2H, CommonOpponents). I can assure you that they do look at all the primary criteria, aka "other factors". It does not appear that many VT'ers do this, except maybe trojansc. VT'ers prefer eye-test, bashing RPI, homerism, etc. - NOT looking at Nitty Gritty sheets and primary criteria.
|
|
|
Post by redbeard2008 on Nov 23, 2019 14:40:38 GMT -5
(quoting trollhunter above: "I can assure you that they do look at all the primary criteria, aka 'other factors'.")

They do. I was being intentionally brief. They are part of the "third-level" sort, or even a "fourth-level" sort, if choosing to break it down further. The sequence is: raw RPI -> adjusted RPI -> corrected RPI (using RPI-based factors, including SOS, significant wins, last ten matches, etc.) -> finalized field/seeding (using non-RPI factors, such as H2H, common opponent, etc.). I don't think they specifically look at significant losses, however, because they are already baked into raw or adjusted RPI.
|
|
|
Post by trollhunter on Nov 23, 2019 14:59:27 GMT -5
(quoting the exchange above)

Perhaps you were exaggerating as well as being brief, since you did write "they never do": "by bringing other factors into play (theoretically, they can look at anything, but never do), it would be well and fine." I'm sure there have been some instances where they run out of time, overlook something, get lazy, etc., but not often.

Overall your description is close to what I believe the process is currently, except for a couple of items:

1. Corrected RPI does give a bonus for overall schedule and individual wins over top teams - but separate from that is a category called Significant Wins and Losses that is used in the finalized field/seeding phase. It compares good wins and bad losses specifically between close teams.

2. W/L and SOS are also categories used in the finalized phase, but are typically split for teams with similar RPI. Team A wins the W/L category while Team B wins SOS. But there are exceptions due to OppOpp results.

3. Last 10 matches is a secondary criterion that is not used unless the comparison of all primary criteria is deemed tied (W/L, SOS, RPI, SigWinLoss, H2H, CommonOpponent).
|
|
|
Post by BeachbytheBay on Nov 23, 2019 15:07:36 GMT -5
It's interesting - the Big West and WCC have been the conferences with the most RPI bias against them.
This year, though, there was such a chasm between the top 3 in each of the two conferences and the bottom that it turned out RPI wasn't a big deal, although Pepperdine isn't safely in yet.
|
|
|
Post by mikegarrison on Nov 23, 2019 15:26:06 GMT -5
In my opinion, which I warn you is very cynical on this subject, the committee members mainly use all those "secondary" criteria to rationalize decisions they already want to make but that can't be justified by the primary criteria.
|
|