|
Post by The Bofa on the Sofa on Nov 28, 2015 10:38:08 GMT -5
[quote]You're making a different argument.[/quote]
It's not a different argument. It's addressing your claim. Meaningless special pleading. Who cares how they beat them? What matters is that they are just as likely to beat them later on. That is inconsistent with the hypothesis that teams change to different degrees to any significant extent. If teams changed a lot in different ways, that early season match would lose its significance.

If you want to see an example of that, look at the NFL. In the NFL, within about 2 months, the result of the earlier played game becomes essentially meaningless, and tells you almost nothing about who will win the second meeting (it's basically no different than home/road). Margin of victory helps that a little, but not much (actually, it's not so much margin of victory as points scored; points allowed is a lousy predictor in the NFL). The NFL is a case where teams change a ton, due to personnel or strategy or whatever. And that shows up in the predictability of early matches. That just doesn't happen in volleyball.

Jen, I've DONE this analysis. I know how small that coefficient is. It's very small, smaller than is normally implied in the "apply greater weight to late season matches" discussions. Yes, there are variables that are much stronger than others; HCA is a minor component, for example. But who cares? Again, it doesn't matter why. What matters is that Pablo is just as successful at predicting the first week of matches as it is for matches over the rest of the season. If there were significant changes in relative team quality over the season, there would be a dropoff in Pablo's success week by week after the rankings come out. But that's not observed.

Arizona St is 1 team in 330, and an extreme case. And even then, it's not so clear. Compare Arizona St's season to Iowa, for example. While not at as high a level as ASU, Iowa started out the season very, very strong. They had some legitimate wins, including over Iowa St and Texas A&M. They also played well against Hawaii. By the end of September, Iowa was looking very good, and if they had been able to keep that up, they were an easy NCAA qualifier. Then the B1G season started and they flopped big time, losing to everyone but Rutgers and IU. And everyone says, "Oh, the real Iowa has shown up."

But if you look at it, it's not all that much different from Arizona St's season. ASU started at a higher level, but the concept is the same. What's different is that ASU has an injury to point to, and so the narrative changes. I know early season expectations were high for ASU, but the week Gardner got hurt, they were ranked 3rd in Pablo. Did anyone expect them to maintain that level of play? With Gardner at the beginning of the season, they were playing at the level of #3. Without Gardner, they've been more in the 40 range, is my guess. Pretty clear cause and effect, right?

But Iowa, after the first week of the B1G season (and losing twice to Nebraska), was playing at the #34 level in the country. Since then, they've been more like 100. Where is the cause of that? Or is there even one? Maybe it's just one of those things? And if that is the case, why couldn't a lot of ASU's season also be "just one of those things"? It's possible that it's due to the loss of Gardner, sure. But that isn't required to account for their season. Don't let the narrative drive the analysis.
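The week-by-week check described above (does a fixed rating's predictive accuracy decay as the season ages?) can be sketched in a few lines. This is a rough illustration only, not Pablo's actual method or data; the teams, ratings, and results below are invented:

```python
from collections import defaultdict

def accuracy_by_week(matches, rating):
    """matches: list of (week, team_a, team_b, winner); rating: dict team -> number.
    Predict the higher-rated team each match; return fraction correct per week.
    If relative quality drifted during the season, late weeks should score worse."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for week, a, b, winner in matches:
        predicted = a if rating[a] >= rating[b] else b
        totals[week] += 1
        hits[week] += (predicted == winner)
    return {w: hits[w] / totals[w] for w in sorted(totals)}

# Made-up example: fixed ratings, two weeks of results, one week-2 upset
rating = {"A": 1500, "B": 1400, "C": 1300}
matches = [
    (1, "A", "B", "A"), (1, "B", "C", "B"),
    (2, "A", "C", "A"), (2, "A", "B", "B"),  # upset: B beats A
]
print(accuracy_by_week(matches, rating))  # {1: 1.0, 2: 0.5}
```

A flat curve across weeks, which is what the post reports for volleyball, is what you'd expect if relative team strength is stable; the NFL pattern described above would show up as a downward slope.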
|
|
|
Post by huskerjen on Nov 28, 2015 12:17:47 GMT -5
[quote author="The Bofa on the Sofa"]Don't let the narrative drive the analysis.[/quote]
Re: last point. What is the typical range of variance over a season? Is it normally distributed, with high-talent and low-talent teams showing the least variance?
|
|
|
Post by The Bofa on the Sofa on Nov 28, 2015 12:47:39 GMT -5
[quote]Re: last point. What is the typical range of variance over a season? Is it normally distributed, with high-talent and low-talent teams showing the least variance?[/quote]
Given that Pablo correctly predicts about 80% of matches that have been played, with a typical schedule of 30 matches, that means that each team will have about 6 upsets (for and against) in its record. There is no evidence that these 6 upsets are non-randomly distributed, and that holds irrespective of quality. The movement over the course of the season will depend on the distribution of those 6 upsets.

If every team had 6 upsets over the course of the season (evaluated at the end), with each upset equally likely to be a win or a loss, then 1/64 of teams would have their three upset wins happening first and then three upset losses (about 5 teams out of 330). Meanwhile, 1/64 will have all three losses first and then three wins. Now, the 5 teams that have three upset wins first will look to have dropped off. Meanwhile, the 5 teams that have their three upset losses first will look to have undergone a massive improvement. Yet this is all based on the PREMISE that no team changes throughout the season. Even so, we've got 5 teams that look like they have massively improved and 5 that dropped off the planet. That's why I said, don't let the narrative drive the analysis.

If you think that there is a good team/bad team difference in these distributions, Jen, I am going to challenge you to provide the evidence. I will suggest it will be really, really hard to get enough data to detect a trend if there is one. And I doubt there is one (there's really no reason to think there should be).
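The thought experiment above is easy to check by simulation. A minimal sketch, assuming (as the post does) exactly 6 upsets per team, each independently equally likely to be an upset win or an upset loss; the team and season counts are just the post's numbers:

```python
import random

random.seed(1)
N_TEAMS = 330
N_SEASONS = 2000  # repeat many seasons to average out noise

# Count teams whose season reads WWWLLL: all upset wins first, then all
# upset losses -- the pattern that "looks like" a team dropped off.
looks_like_dropoff = 0
for _ in range(N_SEASONS * N_TEAMS):
    season = [random.choice("WL") for _ in range(6)]
    if season == list("WWWLLL"):
        looks_like_dropoff += 1

per_season = looks_like_dropoff / N_SEASONS
print(f"teams per 330 that 'drop off' by chance alone: {per_season:.1f}")
# expectation is 330/64, i.e. roughly 5 teams per season, matching the post
```

The symmetric LLLWWW count (apparent "massive improvement") comes out the same way, so purely random upsets manufacture about ten dramatic narratives per season.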
|
|
|
Post by huskerjen on Nov 28, 2015 13:07:11 GMT -5
[quote]There is no evidence that these 6 upsets are non-randomly distributed, and irrespective of quality. If you think that there is a good team/bad team difference in these distributions, Jen, I am going to challenge you to provide the evidence. I will suggest it will be really, really hard to get enough data to detect a trend if there is one. And I doubt there is one (there's really no reason to think there should be).[/quote]
(1) I would suspect that the quality of upset would show structure just due to scheduling, i.e. non-conference vs. conference, and that the so-called major upsets would be less likely to occur if those matches were scheduled later in the season. However, that's just my speculation.
(2) That's what I presumed: the sample size is inadequate to assess the structure of the distribution. Ultimately, it's fine; this is just more about me wanting to see if Pablo reveals any non-random trends. Thanks.
|
|
|
Post by jiml on Nov 28, 2015 13:15:05 GMT -5
We know that teams improve a lot as the season progresses and they accumulate more playing time, e.g. compare early season WI backcourt play to current. The null hypothesis is apparently that all teams improve on the same trajectory, with no major changes in relative strength, in spite of the visible difference in absolute strength.
|
|
|
Post by huskerjen on Nov 28, 2015 13:32:03 GMT -5
[quote]We know that teams improve a lot as the season progresses and they accumulate more playing time, e.g. compare early season WI backcourt play to current. The null hypothesis is apparently that all teams improve on the same trajectory, with no major changes in relative strength, in spite of the visible difference in absolute strength.[/quote]
Interesting thought. What's the best way to test relative vs. absolute improvements within a group in this scenario?
|
|
|
Post by jaypak on Nov 28, 2015 14:35:39 GMT -5
Nova down 13-19 in the 1st.
|
|
|
Post by The Bofa on the Sofa on Nov 28, 2015 15:25:46 GMT -5
[quote]There is no evidence that these 6 upsets are non-randomly distributed, and irrespective of quality.[/quote]
[quote]What would constitute evidence that the upsets are randomly distributed?[/quote]
The best evidence is the lack of a non-random distribution. Which you don't have, nor do I. If there's no evidence for it, either because you've not looked or because the data aren't sufficient to establish it, then you can't just claim it is true.

Then control for the level of competition. What does this have to do with good teams vs. not-as-good teams?

Yes. So it doesn't make sense to claim there ARE non-random effects.
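One standard way to look for the kind of non-random structure being argued about is a Wald-Wolfowitz runs test on a team's sequence of upset results: too few runs means the wins and losses cluster in time. A small sketch, with a made-up sequence (this is not anything Pablo computes, just an illustration of the test):

```python
import math

def runs_test(seq):
    """seq: string of 'W' (upset win) / 'L' (upset loss) in season order.
    Returns (number of runs, z-score vs. the random-order expectation).
    A strongly negative z means streakier than chance."""
    n1 = seq.count("W")
    n2 = seq.count("L")
    n = n1 + n2
    runs = 1 + sum(a != b for a, b in zip(seq, seq[1:]))
    mean = 1 + 2 * n1 * n2 / n
    var = (2 * n1 * n2 * (2 * n1 * n2 - n)) / (n ** 2 * (n - 1))
    z = (runs - mean) / math.sqrt(var)
    return runs, z

# The "dropped off" pattern from earlier in the thread: wins first, losses last
runs, z = runs_test("WWWLLL")
print(runs, round(z, 2))  # 2 -1.83
```

With only 6 upsets per team, |z| of about 1.8 for the most extreme possible pattern illustrates the point made above: a single season is far too short to establish non-randomness for any one team.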
|
|
|
Post by The Bofa on the Sofa on Nov 28, 2015 15:32:51 GMT -5
[quote]We know that teams improve a lot as the season progresses and they accumulate more playing time, e.g. compare early season WI backcourt play to current. The null hypothesis is apparently that all teams improve on the same trajectory, with no major changes in relative strength, in spite of the visible difference in absolute strength.[/quote]
That's not a "null hypothesis"; it's a hypothesis based on the data. I see no reason to assume, a priori, that teams change to the same degree over the course of a season, but that's what the results I presented seem to suggest. See my discussion of the NFL for the counter-example.

OTOH, it's kind of a nonsensical question. It's not "do teams change relative to each other over the course of the season," but "how much do teams change relative to each other." In the NFL, it's a lot. In volleyball, it's apparently very little. I've never tried to look at other sports. NBA basketball would be a good one, because teams play each other a lot, so there are a lot of rematches during the season.
|
|
|
Post by The Bofa on the Sofa on Nov 28, 2015 15:35:28 GMT -5
[quote]Interesting thought. What's the best way to test relative vs. absolute improvements within a group in this scenario?[/quote]
For individual teams, that is a good question. However, in any practical sense, it doesn't matter. What matters is how teams compare to each other.
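The relative-vs-absolute distinction in this exchange has a simple formal version: if every team improves by the same amount, absolute ratings rise but relative standings don't move, and centering each week's ratings on the league mean removes the shared component entirely. A tiny sketch with invented numbers:

```python
def relative(ratings):
    """ratings: dict team -> rating for one week.
    Return each rating minus the week's league mean, i.e. the relative part."""
    mean = sum(ratings.values()) / len(ratings)
    return {team: r - mean for team, r in ratings.items()}

week1 = {"A": 1500, "B": 1400, "C": 1300}
week10 = {"A": 1560, "B": 1460, "C": 1360}  # everyone +60: purely absolute gain

print(relative(week1) == relative(week10))  # True
```

This is why a ranking system only needs to track the centered values: uniform league-wide improvement, like the early-vs-late backcourt play mentioned above, is invisible to (and irrelevant for) match predictions between teams in the same league.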
|
|
|
Post by n00b on Nov 28, 2015 15:51:58 GMT -5
Creighton def Villanova 25-18, 25-18, 25-17.
Bluejays look like they could be a dangerous unseeded team. Villanova on the other hand will be very nervous for the next 30 hours.
|
|
|
Post by n00b on Nov 28, 2015 15:53:22 GMT -5
Arkansas and Texas A&M are tied at 1-1 with the Aggies leading set three 15-13.
|
|
|
Post by The Bofa on the Sofa on Nov 28, 2015 15:59:37 GMT -5
[quote]Creighton def Villanova 25-18, 25-18, 25-17. Bluejays look like they could be a dangerous unseeded team.[/quote]
Or maybe the whole Big East is overrated by RPI? Creighton is the only team in the top 50 in Pablo.
|
|
|
Post by n00b on Nov 28, 2015 16:03:08 GMT -5
[quote]Or maybe the whole Big East is overrated by RPI? Creighton is the only team in the top 50 in Pablo.[/quote]
I don't think there is any doubt that's true, but that doesn't mean Creighton won't be a tough out.
|
|
|
Post by n00b on Nov 28, 2015 16:05:51 GMT -5
Texas A&M wins set three 25-23 to take a 2-1 lead over Arkansas.
|
|