When filling out my most recent BlogPoll ballot, I paid little attention to my previous ballot and I did not look at the top 25 rankings compiled by the Associated Press, the coaches, the B.C.S. or any of its components, any other BlogPollster, or any other person or group.
Instead, based upon assurances I gave in response to a question raised by Burnt Orange Nation's Peter Bean, I made a concerted effort to employ the methodology known as "resume ranking."
Because Peter is a leading advocate of that selfsame method, I was curious how SportsBlogs Nation's Texas Longhorns weblog would rank the top 25 teams in the land following last weekend's action. Since Peter and I take the same approach to compiling our rankings, it stands to reason that we would produce similar results, even though I cast my ballot with no knowledge of how he cast his and (presumably) he was likewise unaware of the vote I submitted.
One of the intercollegiate athletics blogosphere's deepest thinkers, Sunday Morning Quarterback, listed these as the strengths of the resume ranking method:
On the other hand, S.M.Q. saw these as the weaknesses of this approach:
Far be it from me to sound like a French semiotician or anything (indeed, I like resume ranking for the same reason I like the New Criticism, as both focus on evaluating what is in front of you rather than being distracted by everything surrounding it), but I think it's fair to say that we all have biases. I was therefore interested to see whether S.M.Q. was right to be concerned that perception would intrude upon Peter's and my attempts to use identical evidence (all the games played thus far in the 2006 football season) to reach the same conclusion (an accurate list ranking the top 25 teams in the country).
As it turns out, Burnt Orange Nation's ballot and mine had a lot of similarities. First of all, 22 of the 25 teams ranked on my ballot also appeared on Peter's. One-fifth of the teams on my ballot (No. 1 Ohio State, No. 4 Florida, No. 5 Arkansas, No. 15 Wisconsin, and No. 17 Tennessee) occupied precisely the same position on Peter's ballot.
Four teams were separated by just one poll position in our respective rankings: Michigan, second on my ballot but third on Peter's; Southern California, third and second, respectively; Rutgers, eighth and seventh; and Louisville, ninth and eighth. Four other teams were two spaces apart on the two ballots: Oklahoma, 13th on mine and 11th on Peter's; Boise State, 14th and 12th; Cal, 20th and 22nd; and Hawaii, 21st and 23rd.
That's 13 teams---half of the squads on each of our ballots---that were no more than two spots away from one another in the Burnt Orange Nation and Dawg Sports rankings. Furthermore, while we disagreed somewhat on the sequence, Peter and I agreed completely on which teams were deserving of inclusion in the top 15. So far, resume ranking is looking like it produces the consistent results predicted by Sunday Morning Quarterback.
Of the remaining 12 teams on our respective ballots, one-third of them were within three slots of one another (No. 12/No. 9 L.S.U., No. 7/No. 10 Texas, No. 11/No. 14 West Virginia, and No. 18/No. 21 Nebraska). Note that I lowballed the S.E.C. team and Peter lowballed the two Big 12 teams, including his own.
An additional two schools appeared four spaces apart (No. 10/No. 6 Notre Dame and No. 16/No. 20 B.Y.U.), leaving three teams that were separated by five or more notches (No. 6/No. 13 Auburn, No. 24/No. 18 Georgia Tech, and No. 25/No. 16 Boston College).
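For the numerically inclined, the head-to-head comparison above can be tallied mechanically. Here is a minimal Python sketch using only the 22 shared teams and the ballot positions quoted in this post (listed as Dawg Sports rank, Burnt Orange Nation rank); the three teams unique to each ballot are left out, since their slots are not comparable:

```python
from collections import Counter

# The 22 teams appearing on both ballots, with the positions given in the
# post: (Dawg Sports rank, Burnt Orange Nation rank).
shared = {
    "Ohio State": (1, 1), "Michigan": (2, 3), "Southern California": (3, 2),
    "Florida": (4, 4), "Arkansas": (5, 5), "Auburn": (6, 13),
    "Texas": (7, 10), "Rutgers": (8, 7), "Louisville": (9, 8),
    "Notre Dame": (10, 6), "West Virginia": (11, 14), "L.S.U.": (12, 9),
    "Oklahoma": (13, 11), "Boise State": (14, 12), "Wisconsin": (15, 15),
    "B.Y.U.": (16, 20), "Tennessee": (17, 17), "Nebraska": (18, 21),
    "Cal": (20, 22), "Hawaii": (21, 23), "Georgia Tech": (24, 18),
    "Boston College": (25, 16),
}

# Tally how far apart each shared team sat on the two ballots.
gaps = Counter(abs(mine - peters) for mine, peters in shared.values())

print(len(shared))                               # 22 shared teams
print(gaps[0])                                   # 5 identical placements
print(sum(gaps[d] for d in (0, 1, 2)))           # 13 within two spots
print(max(abs(m - p) for m, p in shared.values()))  # 9 (Boston College)
```

Running the tally reproduces the counts in the paragraphs above: five identical placements, thirteen teams within two spots, and a maximum gap of nine (Boston College).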
Peter ranked Clemson, Virginia Tech, and Wake Forest, none of which appeared on my ballot, and I ranked Navy, Texas A&M, and Texas Christian, none of which appeared on his ballot.
Clearly, perception is coming into play, just as S.M.Q. said it would, but it is noteworthy that these matters of interpretation produce much more divergent results farther down on the two ballots, where the losses pile up and the significance to be assigned to particular outcomes becomes more subjective.
I also find it interesting that, for all the familiar assumptions about local biases, the opposite appears to be the case. It was the Georgia fan who ranked a pair of Lone Star State teams (Texas A&M and T.C.U.), while the Texas fan omitted those squads. On the other hand, I live within easy driving distance of several Atlantic Coast Conference campuses, yet Peter ranked a trio of A.C.C. teams that did not appear on my ballot. Has familiarity bred contempt?
Maybe, maybe not. Our most pronounced disagreements concerned heated rivalries: the Aggies made my top 20, but they did not make Peter's top 25; the Yellow Jackets were ranked six spots higher on Peter's ballot than on mine; Clemson is nowhere to be found in my rankings but the Tigers made the grade in his.
Then again, we disagreed about Auburn, a team I ranked seven spaces higher than Peter did . . . and I hate Auburn. He and I were in absolute agreement on Florida and Tennessee, the Bulldogs' perennial challengers for S.E.C. East supremacy, and we substantially agreed on Oklahoma, the Longhorns' major rival in the Big 12 South. Moreover, the team about which we disagreed the most (Boston College) is not a team that evokes particularly strong emotions for either of us.
None of that may prove a thing, of course, but I couldn't help noticing it when I saw the ballot B.O.N. had cast and, since Peter has expressed an interest in furthering the discussion of voting methodology, I thought it was worth examining. As George Will is fond of pointing out, "data" is the plural of "anecdote."
If nothing else, though, S.M.Q., B.O.N., and Dawg Sports all agree that poll voters should know whether teams won or lost before ranking them.
Go 'Dawgs!