Final 2007 BlogPoll Out, Discussed More Than You'd Have Thought Possible

The final 2007 BlogPoll was released on Thursday and the good news is that I didn't win (or even finish among the top five contenders for) any of the dubious honors such as "Mr. Bold" or "Mr. Manic-Depressive." That is just as well, because I've caught enough flak lately (about which more forthwith).

The really good news is that the Bulldogs finished second behind Louisiana State. The Bayou Bengals deservedly received 41 of 44 first-place votes, including mine, but one of the other three ballots ranked the Classic City Canines first in the land. (Sunday Morning Quarterback had the 'Dawgs ranked second, which is always an encouraging sign.)

Trailing Georgia were No. 3 Southern California, No. 4 West Virginia, and No. 5 Missouri. Also of interest to Bulldog Nation were the placements of Tennessee at No. 12, Florida at No. 13, Auburn at No. 14, Hawaii at No. 25, and Kentucky among those also receiving votes.

Rich Brooks could not be more thrilled to learn that somebody voted for his team in the BlogPoll.

As noted above, my explanation of why I ranked the teams as I did generated some discussion here at Dawg Sports. At times, the conversation got just a bit heated, but cooler heads eventually prevailed. Because the issues that had been raised deserved attention on the main page, I decided to devote a series of postings to the subject.

This is the first installment in that series, which I begin by attempting to answer the following questions posed by DC Trojan:

Since I am not a resume ranker by nature, I'm curious about a couple of things in the post and accompanying comments.

Re: BYU. If Virginia Tech gets credit for winning their second shot at Boston College, why does BYU not get a significant boost from beating UCLA - albeit narrowly - in a bowl game?

Re: relative rankings of Virginia Tech and Kansas. I'm biased in this area because I think that Beamer and his teams have been getting a pass for years for playing cynical percentage football with deeply unbalanced teams. Having watched the Orange Bowl, I was struck that many members of the Hokies team might be more individually talented than Kansas, but when VT was faced with a team that attacked their weaknesses and had an offense, they lost. In other words, all the Hokies really had to do was let their defense do their thing, hand the ball to Ore a few times, and not let their QBs do much of anything - and yet the coaches couldn't respond to pressure from Kansas, and the players wobbled repeatedly. That to me doesn't say "winnar," but I am openly biased in this instance.

Re: penalties for conference structure. I can't get riled about the relative weight that you assign to "teams that X beat" when it comes to USC because the record shows what it does. You play in the schedule / conference that you have, after all. The Pac 10 was all over the place this year, and Idaho was supposed to be the only non-conference patsy until Nebraska and Notre Dame decided to have seasons that were rank on the order of hakarl (Icelandic putrefied shark, if you were wondering).

However, why is it that a Pac 10 winner / team gets slightly penalized for playing one game fewer because the Pac 10 doesn't need a conference championship? There's nothing magical about a conference championship, it just means that you've had one more game to account for a conference apparently having too many teams for them to play one another in the course of a 12 game season.

Finally, when you're looking back at the season based on resume, why are you adjusting your impression of a loss based on events after the fact? This struck me most with the Cal - Oregon game. When the game kicked off, both teams were 4 - 0, Cal was ranked #6 and Oregon was ranked #11. Cal hit a late season skid of epic proportions, but to look at Oregon and say "well you lost to a 7 - 6 Cal team" introduces an ex post facto re-evaluation that isn't strictly pertinent to the quality of the contest at the time it was played. Oregon lost to a team that went 5 - 0 on the strength of that result and which had beaten Tennessee. If Oregon had lost to late-season Cal, I could better understand the (mild) opprobrium.


I replied to the first of DC Trojan's inquiries in the following manner:
B.Y.U. got something of a boost for beating U.C.L.A. in the Las Vegas Bowl (a quality mid-major gets points for beating a B.C.S. conference opponent on a neutral field), but, differences in overall schedule strength aside, the Cougars benefited from their rematch less than the Hokies did for two reasons:
  1. Virginia Tech lost narrowly to Boston College in the regular season but beat the Eagles fairly convincingly in the postseason. Brigham Young lost to the Bruins by a large margin in the regular season but beat U.C.L.A. narrowly in the postseason. A close loss followed by a convincing win is better than a large loss followed by a close win.
  2. Boston College was a significantly better team in 2007 than U.C.L.A. turned out to be, so there is less dishonor in losing to the Eagles and more benefit to beating them.

DC Trojan's second question concerned my controversial decision to rank Virginia Tech ahead of the Kansas team that beat the Hokies in the Orange Bowl. I freely admit that, at first glance, it appears counterintuitive to rank a 12-1 team twelfth while ranking the 11-3 team it just defeated to end the season eighth. As an advocate of resume ranking, though, I am looking at a team's whole record of achievement on a weekly basis.

You can give Clint Eastwood all the credit in the world for every Western he made from "The Good, the Bad, and the Ugly" to "Unforgiven," for every movie in which he played a trigger-happy cop or rode around in a truck with an orangutan, and for putting Uga in "Midnight in the Garden of Good and Evil," but you still have to deduct points from his score for making a chick flick with Meryl Streep.

From my perspective, it isn't enough to say that one team won and another team lost, so their poll positions relative to one another should shift accordingly, based upon last week's rankings. Under that method, last week's rankings are based upon the rankings from the week before that, which ultimately are traceable back to the preseason rankings we all compiled based upon sheer guesswork. (My own initial preseason BlogPoll ballot had Michigan first, Texas third, Louisville seventh, U.C.L.A. ninth, Florida State twelfth, Georgia 14th, Cal 16th, Rutgers 19th, Oklahoma State 22nd, Texas Christian 23rd, Boston College 24th, and Oregon 25th, with Alabama, Arizona, Arizona State, Iowa, Nebraska, Notre Dame, South Carolina, and South Florida among the unranked teams receiving consideration. Since I am lousy at making predictions, I decided I'd do better to go with what I knew to be true rather than what I thought might be the case.)
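(For readers who prefer to see the distinction laid out mechanically, here is a rough sketch in Python, using invented teams, invented results, and an invented scoring rule. It illustrates the difference between nudging last week's poll and rebuilding the ballot from the season's full body of work; it is not the actual process by which I compile my ballot.)

```python
# Purely illustrative: two ways of ordering teams after a week of games.
# The teams, results, and scoring rule are invented for this example.

last_week_order = ["Team A", "Team B", "Team C"]   # hypothetical prior poll
season_results = [("Team C", "Team B")]            # (winner, loser) pairs to date

def nudge(order, games):
    """Adjust last week's poll: move each winner ahead of the team it beat."""
    order = list(order)
    for winner, loser in games:
        w, l = order.index(winner), order.index(loser)
        if w > l:                                   # winner ranked behind the loser
            order.insert(l, order.pop(w))
    return order

def resume(games):
    """Ignore last week's order; rebuild a score from every result to date.
    A win is worth one point plus a small bonus per victory the beaten
    opponent has earned, so the quality of the victim matters."""
    wins = {}
    for winner, loser in games:
        wins.setdefault(loser, 0)
        wins[winner] = wins.get(winner, 0) + 1
    score = {team: 0.0 for team in wins}
    for winner, loser in games:
        score[winner] += 1 + 0.1 * wins[loser]
    return sorted(score, key=score.get, reverse=True)

print(nudge(last_week_order, season_results))   # ['Team A', 'Team C', 'Team B']
print(resume(season_results))                   # ['Team C', 'Team B']
```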

Accordingly, I explained my relative placement of the Hokies and the Jayhawks in two parts:

At this stage of the season, plenty of teams will be ranked behind teams they defeated. No one would argue, for instance, that Tennessee deserves to be ranked ahead of Georgia, even though the Volunteers beat the Bulldogs by three touchdowns in a game that wasn't as close as the score indicated.

The Hokies and the Jayhawks, by contrast, played a close game. Although K.U. certainly and deservedly gets credit for winning that contest, other factors come into play, as well:

  • Virginia Tech played in and won its conference championship game. Kansas did neither.
  • Virginia Tech beat six Division I-A teams with winning records. Kansas beat seven Division I-A teams with losing records.
  • Virginia Tech played nine games against bowl teams. Kansas played six games against bowl teams.
  • Virginia Tech beat four teams that won eight or more games. Kansas beat two teams that won eight or more games.
  • Virginia Tech's second-best victory was over Clemson. Kansas's second-best victory was over Central Michigan. [I neglected to mention the pertinent datum that Clemson beat Central Michigan by a 70-14 margin.]
The Jayhawks' Orange Bowl win boosted their standing by quite a bit, but one quality victory---which is all Kansas has to show for its pretty but largely empty record---wasn't enough to overcome V.P.I.'s season-long record of solid achievement. . . .

I understand the logic behind [the] question, "[I]f you're not going to consider the winner of a BCS bowl better than the loser, then why play the game?"

I hope I will not come across as flippant when I respond, "If you're going to consider the winner of a B.C.S. bowl better than the loser without regard to the rest of the season, then why play the season?"

If one is a playoff advocate---as many people are, and as you may be, but as I am not---the point raised by your question is a tenable one. Irrespective of whether we should have a playoff, though, the fact is that we do not have a playoff at present, and my method of ranking teams is designed to take into account the season in its entirety. One win counts as that---one win---and only as that; its value is only as good as the quality of the opposition.

Certainly, Kansas deserves substantial credit for beating Virginia Tech. I gave the Jayhawks that credit by placing them as high as I did, which was higher than I ever before had placed them. Likewise, the Hokies' ledger was diminished by the defeat and V.P.I. dropped accordingly.

It seems undeniable to me, though, that Virginia Tech had the better season, even if Kansas had the better game. Prior to beating the Hokies, the 'Hawks had not defeated so much as a single team that finished with fewer than six losses. One good night in Miami cannot overcome a string of victories over cupcakes.

Look at it this way: Kansas played two good teams in 2007, Missouri and Virginia Tech. The Jayhawks faced both at neutral sites, winning one by a close margin and being beaten handily in the other.

V.P.I., by contrast, faced six teams better than the second-best team K.U. beat: Boston College (twice), Clemson, East Carolina, Kansas, Louisiana State, and Virginia. Virginia Tech tangled with three of those teams on the road and with two of them at neutral sites.

In their seven outings against those half-dozen opponents, the Hokies went 4-3, with two of the three losses, and none of the four victories, being close. Two of those wins were on the other team's home field.

When we separate the wheat from the chaff, we find that Virginia Tech was over .500 against good teams, winning by more than a touchdown every time the Hokies won and losing by more than a touchdown only a third of the times they lost. K.U., on the other hand, was right at .500 against good teams, faced fewer quality competitors, never won by more than a touchdown when the Jayhawks won, and never lost a close contest when they lost.

As I have indicated before, I understand and respect the perspective expressed by you on behalf of "a 12-1 Orange Bowl Champion whose only loss was to a Top 10 team." Once again, I recognize that this is a valid point of view and I freely admit that Kansas's schedule, while weak, was not of the low caliber of Hawaii's.

Over the course of the campaign, though, even with the win over Virginia Tech that gave them their only quality scalp of the season, the 'Hawks accomplished less than the Hokies, which is why I gave V.P.I. the nod over K.U.
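One virtue of this sort of comparison is that the bullet points above are all countable. Purely for illustration, here is a minimal Python sketch of how such tallies could be pulled from a season's worth of results; the game log below is made up, and this is not the spreadsheet behind my actual ballot.

```python
# Illustrative only: tallying a few resume metrics from a made-up game log.
# Each entry: (opponent, opponent_final_wins, opponent_final_losses,
#              opponent_made_bowl, we_won)
schedule = [
    ("Opponent A", 10, 3, True,  True),
    ("Opponent B",  8, 5, True,  True),
    ("Opponent C",  4, 8, False, True),
    ("Opponent D", 12, 2, True,  False),
]

wins_over_winning_teams = sum(
    1 for _, w, l, _, won in schedule if won and w > l)
games_against_bowl_teams = sum(
    1 for _, _, _, bowl, _ in schedule if bowl)
wins_over_eight_win_teams = sum(
    1 for _, w, _, _, won in schedule if won and w >= 8)

print(wins_over_winning_teams)      # 2
print(games_against_bowl_teams)     # 3
print(wins_over_eight_win_teams)    # 2
```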


DC Trojan's next question concerns what he characterizes as "penalties for conference structure," sensibly asking, "[W]hy is it that a Pac 10 winner / team gets slightly penalized for playing one game fewer because the Pac 10 doesn't need a conference championship? There's nothing magical about a conference championship, it just means you've had one more game to account for a conference apparently having too many teams for them to play one another in the course of a 12 game season."

That's a fair point. Prior to the admission of Arizona and Arizona State to the league, the Pac-8 played a round-robin seven-game conference schedule in which each team faced every other team in the conference. This made it easy to break ties in the conference standings, as every squad had gone head-to-head with every one of its coevals over the course of the campaign.

The addition of the two Copper State schools caused this practice to be discontinued temporarily, but, when the twelve-game regular season was made permanent in 2006, the Pac-10 did not use the extra outing to schedule an additional non-conference game. Instead, the league went back to a round-robin slate in which each team played all nine of its conference competitors.

I have noted before that this generally puts the Pac-10 champion and the S.E.C. champion on fairly even footing as far as conference schedules are concerned, as the first-place finishers in both leagues will have survived a nine-game league slate. While there are bound to be some variations from year to year, it generally will come out in the wash, as the Pac-10 champs are assured of playing every good team in the conference and every bad team in the conference, whereas the S.E.C. champs will miss two or three league teams (depending upon whether there is a rematch in the Georgia Dome) and whether those teams are good or bad will vary.

The S.E.C. champion, for instance, may miss out on Vanderbilt, but the Pac-10 champion is guaranteed to play Stanford. O.K., bad example.

Generally speaking, then, except in years in which the gap between the conferences is particularly large (which certainly is not the norm), winning a Pac-10 title and winning an S.E.C. title will represent approximately equal achievements. Due to the number of conference games played, finishing atop either the Pac-10 or the S.E.C.---or, for that matter, the A.C.C. or the Big 12---ordinarily will be more difficult than capturing first place in the Big Ten, in which each team plays eight conference games, or in the Big East, in which each team plays seven conference games.

(Please note that this is not an indictment of the quality of the Big East or the Big Ten as conferences; it's just a fact that, absent a greater than normal gap in strength between the members of any two given conferences, it's harder to win nine conference games than it is to win seven or eight. An S.E.C. or Big 12 team that did not appear in its conference championship game---and, hence, played only eight games against league opposition---likely has attained a level of success approximately equivalent to that of a Big Ten team with an identical conference ledger, although, obviously, none of this has any bearing on the quality, vel non, of a team's non-conference slate.)

It was not my intention to penalize a Pac-10 team, such as Oregon, that played nine conference games as part of its regular-season schedule by rewarding an S.E.C. team, such as Tennessee, that played nine conference games because it represented its division in the conference championship game. DC Trojan makes a fair point that the Southeastern Conference squad's extra game in the Georgia Dome merely brings the two teams even in terms of their respective conference slates.

Although I did not address the question specifically in this manner, I acknowledged the closeness of the call in differentiating between the Ducks and the Volunteers:

Oregon and Tennessee truly are comparable teams, which is why I ranked them within one poll position of one another. Neither the Ducks nor the Big Orange faced a Division I-AA team and both beat six teams with winning records and three teams with losing records.

Oregon beat Arizona State, Michigan, and Southern Cal, whereas Tennessee beat Arkansas, Georgia, and Wisconsin. Both teams played nine conference games and both teams beat 9-4 teams from B.C.S. conferences in bowl games.

I gave Tennessee the nod for two reasons. First of all, the Volunteers represented their division in the S.E.C. championship game and, therefore, played an extra game. All other things being equal, 10-4 trumps 9-4.

Secondly, and more importantly, Tennessee had the better set of losses. The Vols lost to Cal in Berkeley; the Ducks lost to Cal in Eugene. Tennessee lost to a pair of seven-win teams; Oregon lost to a pair of seven-loss teams. Tennessee's best loss was a close contest against the eventual national champion; Oregon's best loss was a close contest against the eventual Emerald Bowl champion.

In the end, though, Oregon's and Tennessee's resumes are close and, due to the margins by which the Ducks beat U.S.C. and the Big Orange beat Georgia, it is reasonable to call the Trojans' setback the better of the two. These are splittable differences. . . .


The problematic part of that, from DC Trojan's perspective, is the antepenultimate paragraph, in which I gave the Big Orange (perhaps undue) credit for playing an extra game. I stand by the sentiment that, "[a]ll other things being equal, 10-4 trumps 9-4," but the key question is whether all other things are equal.

In this particular instance, the Ducks went 4-0 against a non-conference schedule (including the Sun Bowl) that consisted of wins against three nine-win teams (Humanitarian Bowl champion Fresno State, Capital One Bowl champion Michigan, and South Florida) and one eight-win team (Houston, which appeared in the Texas Bowl). Oregon won two of those games at home, one on the road, and one at a neutral site.

The Volunteers went 4-1 against an out-of-conference slate (including the Outback Bowl) that included wins over five-win Arkansas State, three-win Louisiana-Lafayette, seven-win Southern Mississippi, and nine-win Wisconsin, as well as a loss to the selfsame Armed Forces Bowl champion California team to whom the Ducks also lost. (Oregon and Tennessee each caught the Golden Bears early in the season, prior to their decline. The Ducks lost a close game in Eugene and the Vols lost a blowout in Berkeley.) Of the Big Orange's five non-S.E.C. outings, three were at home, one was on the road, and one was at a neutral site.

DC Trojan makes a valid point to which I shall have to give further consideration when compiling next year's final BlogPoll ballot. While Tennessee played one more game (and posted one more win) than Oregon, there is a legitimate sense in which that was an extra non-conference game rather than an extra S.E.C. contest, since the league title tilt only brought the Volunteers' conference slate numerically into line with the Ducks'.

Similarly, breaking out the all-orange uniforms brought the Volunteers' fashion faux pas hideously into line with the Ducks'.

One could still legitimately rank Tennessee ahead of Oregon, of course. The Big Orange went 6-3 in its nine S.E.C. contests, while the Ducks were 5-4 in their nine Pac-10 outings. Both squads lost two conference road games that were not competitive: Tennessee, to Alabama and Florida; Oregon, to Arizona and U.C.L.A. The Bruins and the Crimson Tide had roughly analogous campaigns (6-6 regular seasons followed by minor bowl berths), but falling to the Gators in Gainesville is far more forgivable than losing to the Wildcats in Tucson . . . yet there were extenuating circumstances, as Dennis Dixon's departure from that game was the latest in a series of debilitating injuries to the Ducks. To complicate matters further, the Saurians subsequently lost a neutral-site game to the selfsame Wolverines whom Oregon pulverized in Ann Arbor (albeit, once again, at an historic low point for Michigan, one week after the Wolverines' devastating loss to Appalachian State).

As noted above, a close loss on December 1 to the eventual Emerald Bowl champion is in no way equivalent to a close loss on December 1 to the eventual national champion, even if Oregon State received a bowl berth below its ability level. However, Tennessee's usually somewhat stout non-conference schedule was in no sense the equal of Oregon's, particularly since the Volunteers' best regular-season out-of-conference clash was a loss to a team that beat the Ducks far less handily.

There is no easy answer to this conundrum. If anything, I now believe Oregon and Tennessee are more closely comparable than I believed them to be before, and I thought they were pretty close in the first place. I have struggled with the proper role of conference affiliation in determining rankings before and DC Trojan has revealed an additional layer of complexity to this question. This consideration deserves further thought and I invite DC Trojan or any other interested person to discuss this subject in the comments below, as I genuinely would appreciate some guidance in wrestling with this question.

Finally, DC Trojan raised an issue that has been brought to my attention before. One of the virtues of resume ranking is that it is not static; an early-season win or loss rises or falls in value with the performance of what may have been an overrated or underrated team at the start of the campaign.

This, for instance, would be an example of a team that was overrated throughout the season.

The potential pitfall of this method is one into which I may have tumbled, docking Oregon and Tennessee alike for what was, at the time, a quality loss to a good Cal team that ran out of steam and finished with a mediocre record but was far from middling on the day of the game. A similar albatross hangs about the Bulldogs' necks, as their loss to South Carolina had some credibility in September but lost quality late in the season, when the Gamecocks faded due partly to a season-ending injury to a key player who had been healthy for their game against Georgia.

I am somewhat troubled by this because, in a very meaningful way, I believe very strongly that it should not matter when a team wins or loses a game. One of the reasons I did not grouse when the Red and Black did not make it into the national championship game is my vehement opposition to the notion that the system for determining the top team in the sport should reward late-season hot streaks in a way that renders early-season results inconsequential. Down that road lie such affronts as the 2006 world champion St. Louis Cardinals . . . or, at a minimum, the possibility that a team like the 2001 Colorado Buffaloes could make it into the national title mix. I much prefer the system that correctly told the 1989 Florida State Seminoles that a ten-game winning streak including victories over Auburn, Miami (Florida), and Nebraska could not erase the stain of back-to-back season-opening losses to Southern Mississippi and Clemson. It became a cliché because it's true: every game counts.

When I am ranking teams, I am looking at their entire body of work, at the first game no less than at the last. Early-season wins by Louisiana State over Virginia Tech and by Missouri over Illinois took on added luster as the season went along, while victories by Georgia over Oklahoma State, Southern California over Nebraska, and Oklahoma over Miami (Florida) saw their value diminished over time. The worth of a particular win or a particular loss is not fixed; it fluctuates throughout the autumn, and it is a mistake to attempt to keep it frozen and immobile, like a fly in amber, before every game has been played.
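(Put in mechanical terms, the value of a given result is recomputed each week from the opponent's record as it then stands, never frozen at the moment of kickoff. A minimal sketch of that idea follows; the penalty formula is invented for illustration and is not a formula I actually apply.)

```python
# Illustrative sketch: the same loss is revalued as the opponent's record evolves.
# The penalty formula below is invented for the example.

def loss_penalty(opponent_wins, opponent_losses):
    """Penalize a loss more heavily the worse the opponent's record looks."""
    games = opponent_wins + opponent_losses
    winning_pct = opponent_wins / games if games else 0.5
    return round(1.0 - winning_pct, 2)

# A loss to a team sitting at 5-0 looks nearly blameless in early October ...
print(loss_penalty(5, 0))    # 0.0
# ... but the same loss costs more once that opponent has faded to 7-6.
print(loss_penalty(7, 6))    # 0.46
```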

Nevertheless, the circumstances surrounding a specific outcome, including its timing, are consequential. Climatic conditions, venues, kickoff times, injuries, suspensions, and such intangible factors as momentum unquestionably play a part and DC Trojan is right that I probably subtracted too much merit from Oregon's loss to Cal . . . although, to be fair, I deducted equal weight from Tennessee's loss to the Bears, as well.

And, again, the Vols lost points for all that orange.

We knew all along that it was not merely a matter of plugging figures into a formula; ranking college football teams, like most endeavors not wholly reducible to numbers, is an inexact science. That, frankly, is half of the fun. Perhaps the balance to be struck---and I apologize for the necessarily messy nature of this solution, but these are slippery issues by their very nature---is to discount as completely as possible when a result occurred but factor in why it occurred.

In some sense, we do this already. I pay attention to margins of victory and home field advantage because these can offer real insights into the reasons games turned out the way they did. There is no getting around the fact that Oregon and Tennessee each lost to a team that ended up with a 7-6 record, but, if there is a bedrock premise underlying my approach to compiling my top 25, it is that not all identical records are created equal. That Jeff Tedford's squad was playing better than its eventual record would indicate at the time of the Golden Bears' victories over the Ducks and the Volunteers cannot be doubted and DC Trojan is correct that there must be room for this datum on the scale when various teams' respective resumes are being weighed.

I wish to stress two points in conclusion. First of all, I am grateful to DC Trojan for asking these questions and getting the discussion started. Secondly, I put a great deal of thought into my BlogPoll ballot and, because I wish to be as conscientious a voter as I can, I actively solicit your participation in this conversation in the comments below. I really want to know what you think.

Next, I shall begin addressing the related issues raised by BCSBusters. Because he has brought up subjects of equal (or even greater) substance, my focus will be on answering him. Please take this opportunity to continue the discussion begun in this posting in my absence. Also . . . a few more comments like this one wouldn't exactly hurt my feelings.

Go 'Dawgs!