It's the offseason, so now is as good a time as any to do some thinking about the way baseball players are evaluated and ranked.
One premise I'd like to throw out there is that team-building strategies that may be sound, even very good, in fantasy baseball won't work in real baseball. One example is the "stars and scrubs" strategy: put all your eggs in one or two baskets and surround the stars with replacement-level, freely available talent. For now, rather than debate the merits of this strategy, I'd just like to propose that we call this team-building strategy "stars and scrubs" and that, in light of Littlefield's recent comments about the A-Rod-in-Texas model, we agree it's one the Pirates have considered and rejected.
Fantasy baseball players want good, accurate projections. Non-fantasy fans would like them too, and since the first group largely overlaps with the second, it's fair to say that the various projection systems available to fantasy gamers (Rotowire's, Primer's ZiPS, Baseball Prospectus's PECOTA, etc.) have a growing influence on the expectations of the fans who actually buy tickets and go to the park.
The better projection systems use "park effects" to translate statistics from one environment to another. The theory is that a park like Coors Field inflates offensive statistics in regular and predictable ways. Therefore any given player's real performance data can be "translated," or adjusted, by multiplying his raw numbers by a park-effect number.
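To make the arithmetic concrete, here is a minimal sketch of that multiplicative translation. The factor values are invented for illustration; no real projection system's numbers are being quoted.

```python
# A minimal sketch of a multiplicative park-effect translation.
# The factor values here are invented, not real data.
def park_adjust(raw_stat: float, park_factor: float) -> float:
    """Translate a raw stat to a park-neutral value.

    A park factor of 1.10 means the park inflates that stat by 10%,
    so we divide to take the park back out.
    """
    return raw_stat / park_factor

# e.g. 40 home runs hit in a park assumed to inflate HR by 10%
neutral_hr = park_adjust(40, 1.10)
print(round(neutral_hr, 1))  # 36.4
```

Real systems do more than this, of course, but at bottom the translation is a scaling step like the one above.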
The park-effect translation math also plays a key role in the various "unistats," or single measures of a player's overall contribution or value, and these things, be they Win Shares or VORP measures, appear to be having a growing influence as fans debate the relative merits and overall value of individual players.
A problem I have with park effects, as they are currently used (or as they are currently described), is that they seem to me to be amazingly crude and uneven. Some stadiums have years and years of data supporting their park effect. Others are one or two years old. And yet both park effect numbers appear to be trusted equally.
A second way that the park effect measures strike me as really crude is in their apparent inability to discriminate between kinds of players. All Pirate fans know that PNC Park is tougher on right-handed hitters, for example, than it is on left-handed hitters. Yet most park effect measures that I've seen describe the park as something like perfectly neutral. Handedness is only the most obvious way that all hitters can be broken down into subgroups. It seems pretty obvious to me that PNC ought to have one park effect for right-handed hitters and another for left-handed hitters. And ditto for pitchers. How much of an advantage will a lefty like Mark Redman have from PNC Park? So long as the park is described only as neutral, we can't get a good idea of that.
Hitters can be further broken down as power hitters and singles hitters, and pitchers can be broken down as soft-tossers and as power pitchers. Each group should have its own park effect.
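What I'm asking for, in other words, is a lookup table rather than a single number. Here's a sketch of what a split park-effect table for PNC Park might look like; every number in it is an invented placeholder, chosen only to show the shape of the thing, not a measurement.

```python
# Hypothetical park-effect table for PNC Park, split by batter
# handedness and hitter type. All factor values are invented
# placeholders for illustration, not real measurements.
PNC_FACTORS = {
    ("L", "power"):   1.02,  # short right-field porch helps lefty power
    ("L", "contact"): 1.00,
    ("R", "power"):   0.93,  # deep left-center suppresses righty power
    ("R", "contact"): 0.99,
}

def lookup_factor(bats: str, profile: str) -> float:
    # Fall back to neutral (1.00) when no split is available.
    return PNC_FACTORS.get((bats, profile), 1.00)

print(lookup_factor("R", "power"))    # 0.93
print(lookup_factor("S", "contact"))  # 1.00 (no split: treated as neutral)
```

A single "neutral" factor is just this table collapsed to one cell, which is exactly the information loss I'm complaining about.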
The only protest I can imagine against providing more specific park effect numbers - what we need, really, are park effect tables for each park - is that dividing the data will shrink the sample sizes. Yet park effects are immediately provided for new parks like Petco, and at the end of the year the VORP calculations of players in that one-year park appear on the same lists, as if they were just as valid as the VORP calculations of players who have played half their games in much better-known ballparks.
Finally, I know that many calculations of park effects are not stadium-specific but look instead at the whole schedule of games a team played. So much Coors and so much Dodger Stadium go into some of those Petco numbers. But doesn't that compound the problem even further? The overall park effect number made up by averaging the various effects of all the parks a team played in may work with the aggregate numbers. But aren't these aggregates too muddled for use on the individual level? I wonder. There's also a problem with using the previous year's data to project the upcoming year. Last year the Pirates played many of their interleague games on the West Coast. Will those stadiums go into the calculation of the park effect for 2005, when the Pirates won't play any interleague games out there? Shouldn't a 2005 projection with an aggregated park effect measure instead consider the handful of games they'll play, for example, in Fenway Park? Maybe some of the systems already do this. It's not my intention to single out one projection system or one measure of park effects and call it wrong, but to raise the questions we ought to ask about these numbers before we trust them very much.
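The aggregation step itself is simple enough to sketch: weight each park's factor by the share of games actually played there. Again, every factor and game count below is invented for illustration, and real systems surely do something more elaborate, but the sketch shows why next year's schedule matters.

```python
# Sketch of a schedule-weighted park factor: weight each park's
# factor by the share of games played there. All factors and game
# counts are invented for illustration.
def schedule_factor(games_by_park: dict, factors: dict) -> float:
    total = sum(games_by_park.values())
    return sum(factors[park] * g for park, g in games_by_park.items()) / total

# A hypothetical Pirates schedule: mostly PNC, a few road trips out west.
games   = {"PNC": 81, "Coors": 3, "Petco": 3, "other": 75}
factors = {"PNC": 1.00, "Coors": 1.25, "Petco": 0.90, "other": 1.00}

print(round(schedule_factor(games, factors), 3))  # 1.003
```

Swap the 2004 schedule for the 2005 one and the blended factor shifts, which is exactly the issue with feeding last year's schedule into next year's projection.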
My guess is that ten years from now, we'll look back at the state of today's easily available advanced metrics and marvel at their crudity, much as we look at the baseball video games of the 1980s and wonder how we could ever have thought they were so cutting-edge.