4 Things the Presidential Election Can Teach Us About Handicapping

Big Brown winning the 2008 Kentucky Derby.

Prior to the 2008 Belmont Stakes, Richard Dutrow, trainer of Kentucky Derby and Preakness winner Big Brown, was supremely confident that his colt would become the first Triple Crown winner since Affirmed in 1978.

“I feel that he will do it. It’s actually a foregone conclusion for me,” Dutrow said during a conference call with the media. “I see the horses in with him and I see our horse, and I expect him to win it.”

Big Brown pulled up on the turn for home — some say to save face — and did not finish the race, which was eventually won by longshot Da’ Tara.

Likewise, for most of the run-up to last night’s election, pundits and statistical gurus thought Hillary Clinton was a foregone conclusion to become the next president of the United States.

The New York Times gave Clinton an 85 percent chance of winning, Daily Kos pegged Clinton’s chances at 92 percent, the Huffington Post gave the former senator and Secretary of State a 98 percent shot at becoming the first female president, while Princeton professor Sam Wang tabbed Clinton’s White House hopes at 99 percent. In fact, Wang was so confident of a Clinton presidency that, in a tweet on Oct. 18, he promised to “eat a bug” if Trump garnered more than 240 electoral votes (I would suggest a cutworm, as I’ve heard that crows prefer them).

While considerably less bullish, vaunted statistician Nate Silver still believed there was a 71 percent — excuse me, 71.8 percent (nothing amuses me more than precise measurements of imprecise data) — likelihood that Clinton would occupy the Oval Office come Jan. 20. Interestingly, Silver’s tempered enthusiasm for Clinton throughout the election cycle prompted Huffington Post Washington Bureau Chief Ryan Grim to accuse Silver of deliberately skewing his own data at FiveThirtyEight.com to elevate Trump’s chances — a charge that Silver, um, vehemently denied.


So, how does all of this political stuff relate to handicapping? Let me go Elizabeth Barrett Browning on all of you and count the ways:

1. Favorites lose.

Every bettor claims to know this, yet very few act like they know it. I cannot tally the number of times I’ve been called “stupid” (usually spelled with two o’s, which reduces the sting a bit) or been told I “don’t know anything about racing” because I picked against a heavy favorite, especially a heavy fan favorite like Big Brown.

Favorites lose. Heavy favorites lose. Heavy favorites in Grade I stakes events also lose.

Many consider Secretariat to be the greatest horse in thoroughbred racing history — heck, even Trump referenced the two-time Horse of the Year in his victory speech — yet Big Red lost four times of his own accord and once via disqualification.

He was favored to win in every one of those defeats.

2. Know the difference between good science and bad science.

We live in a society that, for good reason, puts a lot of stock in science, which is great. However, very few understand what science is or the difference between “good” and “bad” science, which is not so great.

For months, many legitimately questioned the accuracy of the polls… and they were roundly trashed for doing so (in the name of science, of course). Yet the polls are exactly what Silver and the other stats guys fed into their statistical algorithms. And it was clear, at least to me, that there were issues with many of those algorithms long before last night.

In fact, in his war of words with Grim, Silver pointed out one of these issues himself.

One of the reasons that Grim and others took exception to Silver’s model was the big fluctuations it showed. For example, on July 31, Silver tabbed Trump’s chances of becoming the 45th president at 39.1 percent; a day later, the number stood at 32 percent. A seven-point swing hardly seems reasonable over a 24-hour period.
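It is easy to see with a toy example why a poll-driven model can swing so much. The sketch below is my own illustration, not Silver’s actual model; the polling margins and the six-point uncertainty figure are made-up numbers. It converts a polling margin into a win probability by assuming the true margin is normally distributed around the polled one:

```python
from math import erf, sqrt

def win_probability(polling_margin, uncertainty):
    """P(candidate wins), assuming the true margin ~ Normal(polling_margin, uncertainty).

    polling_margin: candidate's lead in percentage points (negative = trailing)
    uncertainty:    standard deviation of the expected polling error, in points
    """
    # Normal CDF evaluated at zero, i.e. the chance the true margin is positive
    return 0.5 * (1 + erf(polling_margin / (uncertainty * sqrt(2))))

# A one-point move in the polls shifts the headline number by about six points:
print(win_probability(1.5, 6.0))  # one day's polling average
print(win_probability(2.5, 6.0))  # one day and one noisy poll later
```

With an uncertainty that large, the headline probability is simply very sensitive to small, noisy movements in its inputs — which is one way an honest model can lurch from 39 percent to 32 percent overnight.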

Similarly, handicappers should strive to be aware of which factors are relevant and which are not. It also helps to know the difference between what is predictive and what is profitable, but that is a topic for another day.

3. Sample size matters.

Related to the above, the size of one’s sample counts, especially when it comes to creating horse racing and sports betting algorithms, where the markets are continually becoming more efficient.

Frankly, I am unimpressed when I hear that so-and-so’s statistical model has correctly predicted the winner in 30 past presidential elections, because I know: a) most of the data was back-tested, i.e. tested after the results were already known, and b) there haven’t been many presidential elections (57 prior to last night), so there’s not a lot of data to work with, period.

Likewise, players need to be wary of the “great” trainer who is 2-of-3 with shippers in maiden claiming sprints running for a $10,000 tag or less. Such stats are nearly always cherry-picked and the sample size is way too small to be truly meaningful.
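A quick way to see how little a 2-of-3 record tells you is to put a confidence interval around it. This is a standard 95 percent Wilson score interval — nothing specific to racing — applied to the hypothetical trainer record above:

```python
from math import sqrt

def wilson_interval(wins, starts, z=1.96):
    """95% Wilson score confidence interval for a true win rate."""
    p = wins / starts
    denom = 1 + z * z / starts
    center = (p + z * z / (2 * starts)) / denom
    spread = (z / denom) * sqrt(p * (1 - p) / starts + z * z / (4 * starts * starts))
    return center - spread, center + spread

# The "great" trainer who is 2-of-3 with cheap shippers:
low, high = wilson_interval(2, 3)
print(f"{low:.0%} to {high:.0%}")  # roughly 21% to 94%
```

In other words, three starts are consistent with a true strike rate anywhere from about one win in five to nearly automatic — which is another way of saying the stat tells you almost nothing.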

4. Learn from your mistakes.

In a blog post on March 24 entitled “Failure Is Moving Science Forward,” Silver explained the way science works (or is supposed to work).

“It’s a process of becoming less wrong over time,” he wrote.

I couldn’t agree more (not that Silver f-ing cares). And the problem we have today is that too many people accept learned opinions as the truth, the whole truth and nothing but the truth, even when those holding such opinions make no such claims — or are flat-out wrong.

For instance, the notion that horses are becoming increasingly fragile is a theory, not a fact. Yes, many influential people in the horse racing industry believe it to be true — and it might be. But “might be” and “is” are as different as “foregone conclusion” and “Stunning Trump Win,” which is how the Los Angeles Times described Trump’s unlikely triumph.

Of course, learning from one’s mistakes is hard. After Trump wrapped up the GOP nomination in May, Silver issued a mea culpa for underestimating The Donald’s chances (last November, he estimated Trump’s national following to be roughly equal to the percentage of people who believe the Apollo moon landing was faked).

In a piece titled “How I Acted Like A Pundit And Screwed Up On Donald Trump,” dated May 18, 2016, Silver wrote the following:

Since Donald Trump effectively wrapped up the Republican nomination this month, I’ve seen a lot of critical self-assessments from empirically minded journalists — FiveThirtyEight included, twice over — about what they got wrong on Trump. This instinct to be accountable for one’s predictions is good since the conceit of “data journalism,” at least as I see it, is to apply the scientific method to the news. That means observing the world, formulating hypotheses about it, and making those hypotheses falsifiable. (Falsifiability is one of the big reasons we make predictions.) When those hypotheses fail, you should re-evaluate the evidence before moving on to the next subject. The distinguishing feature of the scientific method is not that it always gets the answer right, but that it fails forward by learning from its mistakes.

Unfortunately, after penning this brilliant paragraph, Silver let himself off the hook by openly wondering whether he and other data journalists were guilty of too much “self-flagellation.”

Look, I understand what he’s talking about. We all know that guy at the track who agonizes over every wrong decision he makes. Hey, sometimes things just happen. As the great philosopher Tupac Shakur once noted: “Life goes on.” Still, I think it is better to be introspective than to constantly make excuses for one’s shortcomings.

One of my all-time favorite justifications for a loss involved Orb, winner of the 2013 Kentucky Derby. After the son of Malibu Moon failed to repeat that victory (which I believe was pace-aided) in the Preakness Stakes, Orb fans claimed it was because he didn’t like racing on the inside.

And then he lost the Belmont… and the Travers (with a different jockey). By the time the Jockey Club Gold Cup rolled around, I was convinced that Orb didn’t like racing on any part of the racetrack, but that didn’t stop his diehard supporters from betting him down to 7-2.

Orb raced 3-4 wide in the Gold Cup and finished last.

My point here is that, in order to improve one’s handicapping, one must be willing to honestly assess the results and see if any changes to one’s approach are warranted. Sometimes the answer is no, but sometimes the answer is yes.
