|
Post by The Bofa on the Sofa on Sept 26, 2009 13:44:01 GMT -5
That is very much not an "of course" observation. In fact, it is probably the most misunderstood concept of measuring things. Many, many times people think of a measurement and then try to figure out what it explains. It is far better to ask a question and then see if you can find a measurement that answers it. Well, hitting pct exists, regardless of what we want to do. In that respect, it is important to figure out what it means. However, in the analysis above, I examined it in light of the standard, "How well does it reflect the contribution of the activity (in this case, attacks) to winning?" in terms of scoring points (which leads to more wins). So that goal underlies the entire analysis.
|
|
|
Post by The Bofa on the Sofa on Sept 26, 2009 13:48:08 GMT -5
One minor flaw in your comparison of hitting/attack efficiency versus kill percentage is that one player could have one or more zero attacks that are dug plus a kill on the same rally. It would almost be like counting it twice, since the zero attack(s) still helped lead to a point on the rally, but that point is already counted by the kill. I have been thinking about this. That will favor the non-error approach. From a hitting % standpoint, it means that fewer than 15 points are scored for those 15 attacks. However, the more attacks that are terminated, the more points are accounted for. So if a player hits 9/6/15 = .200, the team scores 60% of those points. However, if she hits 3/0/15, that can come on fewer than 15 points. Let's see, suppose 5 of the attacks come as second chances; that would mean there are 10 points total. 3 were definite kills, and the other 7 are 50/50, or 3.5. That means the team would score 65% of the points, which is better.
|
|
|
Post by The Bofa on the Sofa on Sept 26, 2009 13:59:33 GMT -5
I don't think of it as a percentage; it's actually the average number of points scored per attack. 6 kills and 3 errors means you scored a net 3 points for your team; in 15 attempts, that's an average of 0.200 points per attack. In addition to what mike says, that is .200 points per YOUR attack, but ultimately the ball is going to hit the floor, and what matters more is whether your team gets the point. My analysis points out that, if the opponent converts your non-kill, non-error attacks at something other than a 50% rate, then the two contributions are not the same, despite the same hitting %. In fact, thinking of it as 0.2 points/attack is the wrong approach. It is better to view it as 0.6 points/attack, because that is, roughly, how many points the team scores on average. Admittedly, the conversion isn't obvious (I can't come up with it off the top of my head), but how much the attack contributed to the team scoring is more important than the individual performance. Now, if we assume a 50% conversion rate for the opponent, then all .200s are the same; if it is not 50%, then the details matter more.
|
|
|
Post by The Bofa on the Sofa on Sept 26, 2009 14:04:11 GMT -5
Let's see: hitting .000 means the team wins the point 50% of the time; hitting .200 (9/6/15) gives 60%; hitting .400 (6/0/15) = 10.5/15 = 70%.
OK, so: team point pct = .5 + hitting pct/2
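That relation can be sketched in a few lines. This is a minimal sketch under the assumption stated above (the opponent converts every in-play attack at exactly 50%); the function names are mine, and the three stat lines are the examples from this thread.

```python
def hitting_pct(kills, errors, attempts):
    """Standard attack efficiency: (K - E) / Att."""
    return (kills - errors) / attempts

def team_point_share(kills, errors, attempts, opp_conversion=0.5):
    """Expected fraction of those rallies won by the attacking team.
    Kills score outright, errors lose outright, and each in-play
    (non-kill, non-error) attack is won at (1 - opp_conversion)."""
    in_play = attempts - kills - errors
    expected_points = kills + in_play * (1 - opp_conversion)
    return expected_points / attempts

# The three examples above: .000 -> 50%, .200 -> 60%, .400 -> 70%
for k, e, a in [(0, 0, 15), (9, 6, 15), (6, 0, 15)]:
    print(f"{k}/{e}/{a}: hit% = {hitting_pct(k, e, a):.3f}, "
          f"team share = {team_point_share(k, e, a):.0%}")
```

With the default 50% conversion rate the function reduces algebraically to .5 + hitting pct/2; passing a different `opp_conversion` shows how two .200 hitters diverge when that assumption is dropped.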
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Sept 26, 2009 14:09:14 GMT -5
So the interesting step would be to evaluate the zero attacks. What was the ultimate result of the rally? Point for your team, your zero attack is a +. Point for your opponent, your zero attack is a -.
Then you can accurately compare the two .200 hitters -- at least if they're both playing for the same team. Maybe.
Wouldn't this be similar to the plus/minus ratings of hockey players? Plus if you're on the ice when a goal is scored, minus if the opposition scores?
There's that old VB adage about "bettering the ball". It's a cliche, but personally I don't think there's anything more important in the sport: has your touch improved your team's chances of scoring, or made them worse?
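The hockey-style plus/minus suggested above is easy to sketch. This is a hypothetical illustration, not an established stat: the data format (a list of player/outcome pairs, one per in-play attack) is made up, and a real version would have to be tallied from play-by-play logs.

```python
from collections import defaultdict

def zero_attack_plus_minus(touches):
    """touches: iterable of (player, rally_won) pairs, one per zero
    (in-play) attack, where rally_won is True if the attacker's team
    eventually won that rally.  Returns {player: plus/minus}."""
    pm = defaultdict(int)
    for player, won in touches:
        pm[player] += 1 if won else -1  # + if your team scored, - if not
    return dict(pm)

# Hypothetical rallies: player A's zero attacks led to 2 points and
# 1 loss (+1); both of player B's led to opponent points (-2).
touches = [("A", True), ("A", False), ("A", True),
           ("B", False), ("B", False)]
print(zero_attack_plus_minus(touches))  # {'A': 1, 'B': -2}
```

As with hockey plus/minus, this credits (or blames) a player for rally outcomes partly beyond her control, so it is noisy for small samples, which is the "Maybe" above.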
|
|
|
Post by cruncher on Sept 26, 2009 14:25:32 GMT -5
All measures of performance involve a tradeoff between ease of collection and accuracy. Attack efficiency (hah! neither an average nor a percentage) is popular because of its high correlation with winning and its ease of collection and calculation. Not perfect, but pretty darn good.
|
|
|
Post by TheSantaBarbarian on Sept 26, 2009 14:36:13 GMT -5
To me the more important "stat" is "points". One player goes 15-0-30 and another goes 15-10-30, and they both get the same points. That is just wrong.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Sept 26, 2009 14:38:19 GMT -5
Except when people are comparing middles to OHs, or even opposites to OHs. Which happens all the time.
Pretty darned good, if used correctly. Kinda like an enema.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Sept 26, 2009 14:41:50 GMT -5
To me the more important "stat" is "points". One player goes 15-0-30 and another goes 15-10-30, and they both get the same points. That is just wrong. Yeah, player 2 should really be credited with 5 points. But that doesn't help us with the question raised: which is better, 5-0-30 or 15-10-30? What we do know is that the two lines are not equal.
|
|
|
Post by cruncher on Sept 26, 2009 14:58:58 GMT -5
Philosophically, I would prefer the 15-10-30 player because the ability to terminate at 50% is a rare commodity. Certainly more rare than a hitter who can keep the ball in play and pick up a few kills here and there.
|
|
|
Post by curly on Sept 26, 2009 16:24:46 GMT -5
We know that the scoring success on serve receive, for example, is only about 70%, and I wouldn't expect the average dug ball to be easier than serve reception. Not to quibble too much, but I'm pretty sure that for women's Division I play it is lower than 70%. I used to have some numbers on this that I'll try to find, but I also just did a quick sampling that suggests the number is more like 60%, which is consistent with what I remember.
In 2007, Michigan State's published box scores often included the team's actual sideout percentages for each game (set). You can see an example at: www.msuspartans.com/sports/w-volley/stats/2007-2008/msu15.html It's the two rightmost columns in the table TEAM ATTACK PER GAME. I found these very interesting, and I don't understand why they aren't a standard part of the box score.
I just went back and compiled all the sideout percentages I could find from those games. There were 36 games (in 10 matches), each with a winner and a loser, so it's a sampling of 72 sideout percentages. Perhaps not enough to be statistically significant, but enough to be interesting. The lowest was 37% for MSU in a 17-30 loss to Wisconsin in East Lansing, and the highest was an astonishing 94% by Penn State in a 30-13 win in State College. The mean for the winning team in each match was 70% (rounded), which exactly matched the median. The mean for the loser was 56%, with a median of 55%. Overall, the mean was 63% and the median was 62%. (I did not try to add up all the raw numbers, such as 17 sideouts out of 31 attempts, because it was too much work, and I would guess it would not have made much difference.)
MSU finished 5th in the conference that year, so it seems as though this should be a reasonable selection of average Big Ten matches. The higher the level of play, the higher the sideout percentage normally is; if the top two Big Ten teams were playing each other, I would expect slightly higher sideout percentages. If we looked at the average for all of Division I, it would almost certainly be lower, perhaps 60% or even less.
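The summary calculation above is simple to reproduce. This sketch uses a made-up sample of per-set sideout percentages, NOT the actual MSU box-score data; the point is only the mean/median comparison between winners and losers.

```python
from statistics import mean, median

# Hypothetical per-set sideout percentages (winner vs loser of each set)
winners = [0.70, 0.68, 0.72, 0.94, 0.66]
losers  = [0.55, 0.37, 0.58, 0.60, 0.52]

for label, data in [("winners", winners), ("losers", losers),
                    ("overall", winners + losers)]:
    print(f"{label}: mean {mean(data):.0%}, median {median(data):.0%}")
```

The same loop run over the real 72-value sample would produce the 70%/56%/63% means reported above.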
|
|
|
Post by macroman on Sept 26, 2009 20:05:02 GMT -5
Relative hitting percentage predicts wins and losses better than any other single statistic. It does not explain the reasons why.
Relative hitting percentage = (higher hitting pct) - (lower hitting pct).
Sideout pct is a pretty good indicator of team performance as well but it gives more weight to service errors than I like.
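A minimal sketch of that single statistic, computed from each team's attack line. Taken from the winning team's side, a positive value matches the higher-minus-lower definition above; the match lines here are hypothetical.

```python
def hitting_pct(kills, errors, attempts):
    """Standard attack efficiency: (K - E) / Att."""
    return (kills - errors) / attempts

def relative_hitting_pct(team_line, opp_line):
    """Each line is a (kills, errors, attempts) tuple; returns the
    team's hitting pct minus the opponent's."""
    return hitting_pct(*team_line) - hitting_pct(*opp_line)

# Hypothetical match: team hits .250, opponent hits .150
print(f"{relative_hitting_pct((50, 20, 120), (40, 22, 120)):+.3f}")
```

The claim above is that the sign and size of this difference track match outcomes better than either team's raw hitting percentage alone.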
|
|
|
Post by psuvbfan10 on Sept 26, 2009 21:36:05 GMT -5
I've always liked to look at the attack efficiency, (K-E)/Att, alongside the kill ratio, K/Att. I've coached players whose efficiency and ratio are too close together: they are afraid to make errors, and their non-kills aren't very well placed attacks. The 'terminators' may have a mediocre attack efficiency but a better kill ratio. IMHO you need a mix, just as you need some servers who can get teams out of system but will make mistakes, and servers who give you a chance to score by keeping the ball in play more.
BTW, am I the only one who hates it when announcers say hitting efficiency is like a batting average? That's bogus. The end number may be indicative of a good hitter in both sports (above .300 is good), but in baseball you don't subtract errors (outs). Pet peeve!
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Sept 26, 2009 21:40:19 GMT -5
Yep, it's one of the ten or so cliches they are always trotting out, most of which over-simplify the sport.
That's my biggest gripe. Most of them treat every match as if they are broadcasting to Eskimos. Eskimos who just came out of 15 year comas. Eskimos who just came out of 15 year comas with severe brain damage.
|
|
|
Post by macroman on Sept 26, 2009 22:01:45 GMT -5
.... BTW, am I the only one who hates it when announcers say hitting efficiency is like a batting average? That's bogus. The end number may be indicative of a good hitter in both sports (above .300 is good), but in baseball you don't subtract errors (outs). Pet peeve! Announcers get this from coaches; if coaches don't use the analogy, then neither will announcers. I understand where the comparison comes from, though, since the people using it are trying to reach out to a wider fan base that does not have a volleyball background. I would be happy enough if it worked, even though it's not strictly correct. I like the idea of scoring the points off of each rotation position as a coaching tool, but I can't see it getting any popular acceptance. How hard do you guys want the fans to work, anyhow?
|
|