2009 NFL Point Spread Picks Week 11

Pick 1: San Diego -3 correct NFL point spread pick +$54.55
Pick 2: San Francisco +6.5 correct NFL point spread pick +$54.55
Pick 3: Arizona -9 incorrect NFL point spread pick -$60
Pick 4: New Orleans -11 correct NFL point spread pick +$54.55

Just like the stock market, bankroll and ATS percentage are expected to oscillate, regardless of your handicapper or strategy. We do our best here to provide you not only with above-average picks, but with a betting strategy that comes out winning at the end of the season. I cannot stress enough the importance of managing your bankroll: as it decreases, so do your bets, and vice versa. If our expected ATS percentage is 58-60%, then by the end of the year you will bank a profit.

Again, this week we'll risk 20% of the bankroll. With all games weighted equally, that means we're betting ~$60 per game.
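The sizing and payout arithmetic works out as a few lines (a minimal sketch; the bankroll figure is an assumption inferred from 20% risk spread over four ~$60 bets, and the payout assumes a standard -110 line):

```python
# Bankroll sizing sketch for this week's card. The bankroll value is
# a hypothetical, back-calculated from 20% risk over 4 games at ~$60.
bankroll = 1200.0          # hypothetical bankroll
risk_fraction = 0.20       # 20% of bankroll at risk this week
n_games = 4                # equal stakes across the four picks

stake = bankroll * risk_fraction / n_games   # dollars per game
win_payout = stake / 1.1                     # profit on a win at -110

print(f"stake per game: ${stake:.2f}")       # $60.00
print(f"profit on a win: ${win_payout:.2f}") # $54.55
```

The $54.55 shown next to each winning pick above is exactly this -110 payout on a $60 stake.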

Now, to this week's games. Again we have strong teams playing weaker teams, with spreads ranging from 9 to 11: ARI @ STL, NO @ TB, PIT @ KC, WAS @ DAL, CIN @ OAK. New Orleans has lost against the spread in each of the last two weeks, and although I believe they'll rebound this week, the model didn't show much confidence. KC is a disaster and one might be tempted to be all over this game, but according to my estimates it is "priced right". Dallas might fall a bit short of 11 points according to the estimates; although they're the only 'strong' team on this list playing at home, and Washington showed signs of life only last week, it is a toss-up. As for Cincinnati, I've decided to stay away and see how the Bengals play without Cedric Benson.

ARI @ STL $60 - Bulger played well against New Orleans in a mistake-driven game last week. Steven Jackson is averaging 101 rushing yards per game, with only two TDs. Meaning: the Rams can't score, and boy can the Cardinals build a lead fast with their passing game.

SD @ DEN $60 - Great divisional game! Question for you: how many points did the spread shift to account for Orton's injury? My guess is about 3. Last week's loss to Washington didn't help either. Momentum is with the Chargers after wins against the Eagles and Giants in the last two games. The Denver defense that allowed 6 points per game over the first three games has allowed 28 per game over the last three. What happened? More third-down conversions allowed, rushing yards sky-rocketed, and plenty of other reasons. Ask Mike Nolan; my guess is it continues this week.

SF @ GB $60 - Great game for Green Bay last week, but that does not sum up their season. This spread seems inflated by recency bias. The 49ers are at the top of the ATS standings. Although they beat the Bears last week, their offense was not impressive at all: 4 INTs, and the game came down to the last play, an interception.

NO @ TB $60 - Sharper and Greer could return for this week's game against Tampa Bay. If injuries do not plague the Saints, I don't see them surrendering a strong lead in the last quarter as they did against the Falcons and Rams.

Without further ado, here are the NFLpickles free point spread estimates for Week 11 of 2009.

Game                        Vegas Line  Estimate  Pred-Vegas  Confidence
SAN DIEGO @ DENVER               3         6          3          62%
SAN FRANCISCO @ GREEN BAY     -6.5       0.7        7.2          60%
CINCINNATI @ OAKLAND           9.5      12.2        2.7          59%
NEW ORLEANS @ TAMPA BAY         11      18.6        7.6          57%
ARIZONA @ ST LOUIS               9      20.6       11.6          57%
TENNESSEE @ HOUSTON           -4.5      -9.3       -4.8          54%
MIAMI @ CAROLINA                -3       0.5        3.5          54%
ATLANTA @ NY GIANTS           -6.5      -2.3        4.2          53%
WASHINGTON @ DALLAS            -11      -9.0        2.0          53%
NY JETS @ NEW ENGLAND        -10.5      -9.5        1.0          53%
INDIANAPOLIS @ BALTIMORE        -1       1.9        2.9          51%
PITTSBURGH @ KANSAS CITY        10      11.6        1.6          50%
BUFFALO @ JACKSONVILLE        -8.5      -6.0        2.5          50%
PHILADELPHIA @ CHICAGO           3       4.3        1.3          48%
SEATTLE @ MINNESOTA            -11      -9.8        1.2          47%
CLEVELAND @ DETROIT           -3.5      -5.4       -1.9          44%


How to read the table:
  • Vegas Line: a NEGATIVE number means the spread favors the HOME team
  • Estimate: the NFLpickles spread estimate
  • Pred-Vegas: Estimate minus Vegas Line; POSITIVE means the VISITING team is expected to cover the spread
  • Confidence: the probability that the spread pick is on the correct side
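The legend amounts to a one-line rule, sketched here as a small function (numbers taken from the San Francisco @ Green Bay row of the table):

```python
def pick_side(vegas_line, estimate):
    """Which side the model favors against the spread.

    Both inputs follow the table's convention: a negative number means
    the HOME team is favored. A positive (estimate - vegas_line) gap
    means the model expects the VISITING team to cover.
    """
    edge = estimate - vegas_line
    return ("visitor" if edge > 0 else "home"), edge

# San Francisco @ Green Bay: Vegas line -6.5, estimate 0.7
side, edge = pick_side(-6.5, 0.7)
print(side, round(edge, 1))   # visitor 7.2 -> take San Francisco +6.5
```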

11 comments:

siggy said...

Jamie, a question on the Arizona line: do you realize that in games where your model showed a 10-point or greater variation in favor of the away favorite, your record is 0-4?

Joel said...

This week I don't have any picks that match yours. Here's what I've got:

Vikings -11
Giants -6.5
Patriots -10.5

Vikings: They know how to score, but so do the Seahawks. The difference? The Seahawks are mistake-prone; for every turnover A.D. has, the Seahawks should have double. Why I might be wrong: the Brett Favre of late last year starts to return, but even if he does, A.D. will mitigate that effect.

Giants -6.5: Sure, they have been losing, but coming off a bye week and knowing they're still in the hunt for the division, they will come out and play hard against an Atlanta Falcons team that was lucky against the 49ers.

Patriots -10.5: They blew it last week. Embarrassing. The poor Jets may take the hardest beating of their lives this Sunday, provided Sanchez doesn't produce a miracle.

Jaime said...

Siggy,

Does that mean we're due for a win in that category? The confidence measure takes that into account, i.e., how far the estimate is from the line. Go Cardinals!

HappyBreathNet said...

Our models are pretty different this week. Mine doesn't actually bet against yours, though; it just doesn't bet with you.

I've got:
PIT -10 (3 units)
CLE +3.5 (2 units)
MIN -10.5 (1 unit)
IND -1 (1 unit)

I've also got:
Over 50.5 in NO vs TB (3 units)
Under 42.5 in MIA vs CAR (2 units)
Under 38 (1 unit) in DET vs CLE

Good luck this week

- Happy

HappyBreathNet said...

Jaime,

Since you are a computer programmer (among other things) and since you have multiple models for predicting NFL outcomes, I thought your site would be an appropriate place to pose a question about object oriented predictive models.

Take, for example, a simple base model such as Brian Burke's, based on team offensive and defensive efficiencies. Then take a toolbox of correction-factor models that can be applied based on the situation. For example (I don't have real numbers; this is hypothetical), suppose teams with heavy defensive lines tend to outperform the base model's expectations when they play teams with a heavy RB1. An if statement might read: if the average DL weight for Team A is greater than (threshold) and the RB1 weight for Team B is greater than (a different threshold), then apply an appropriate correction factor (based on the data that suggested this was a statistically distinct relationship) to the projected result.

The advantage of being able to apply each different relationship as correction factor to the base model has to do with the limits of computational power. Rather than having 3^44 possibilities, it would be more manageable to have 3^8 + 3^10 + 3^6 + 3^12 + 3^8.

The advantage of being able to apply correction factors conditionally has to do with capturing interactions without computing the entire range of possibilities (similar to the advantage of a fractional factorial DOE vs. a full factorial design). Using if statements to apply relationships that are found to be statistically distinct when appropriate and ignoring them when they don't apply holds the best hope, in my estimation, of being able to concurrently harness the benefits of multiple models' strengths without also incorporating their weaknesses.
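For concreteness, the conditional correction-factor idea described above might be sketched like this (every function name, threshold, and correction value here is hypothetical, not real modeling data):

```python
# Sketch of a base model plus conditional correction factors.
# All thresholds and correction values below are made up.

def base_prediction(team_a, team_b):
    # Stand-in for an efficiency-based margin estimate (a stub).
    return team_a["off_eff"] - team_b["def_eff"]

def heavy_dl_vs_heavy_rb(team_a, team_b):
    # Fires only when team A's line is heavy AND team B leans on a big RB1.
    if team_a["avg_dl_weight"] > 310 and team_b["rb1_weight"] > 225:
        return -1.5   # hypothetical correction, in points
    return 0.0

CORRECTIONS = [heavy_dl_vs_heavy_rb]   # new modules get appended here

def predict_margin(team_a, team_b):
    margin = base_prediction(team_a, team_b)
    for correction in CORRECTIONS:
        margin += correction(team_a, team_b)   # zero when not applicable
    return margin
```

Each correction is a self-contained module that returns zero when its situation doesn't apply, which is what lets the model be built up in steps without touching the base.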

This approach also allows a model to be constructed in steps, which is nice when starting out (i.e. me), because modules can be added as they are found without nullifying or re-inventing the base model or the previously developed correction factors.

The danger, of course, is that there could be interactions between the correction factors that are developed. Say, for example, teams with inexperienced quarterbacks have a negative correction against teams with strong run defenses and teams with rookie quarterbacks have a positive correction against teams with weak pass rushes. Since a team with a strong run defense may be so in part because they are less aggressive in their blitz packages, these parameters are likely to have an interaction.

I am curious to what degree you have been able to blend your different models and whether you think this object oriented approach has any hope of success.

Best Regards,

Happy

Joel said...

Happy,

I would imagine using the actual values of the factors, with all factors involved, instead of an "if" statement. You may lose a lot of information in your data if you apply categories. You could very well be using something other than a regression-type model, though. Using all the factors, you can always find the relationship between "heavy defensive lines" and the "heavy RB1". Of course, you would definitely want to account for team or skill level; how you would accomplish that, who knows. The problem is: do you have enough degrees of freedom to do it?

When you categorize the factors, you are going to lose a lot of degrees of freedom for other things you may want to implement. If you can spare them, by all means go for it, but I still think you lose too much information by converting continuous variables into categorical ones.

In your model, you can always check how good your factors are at determining the point spread using p-values from whatever test. In a way, the model will most likely implement this correction factor in the prediction and detect this behavior on its own.

In terms of prediction, you don't have to worry much about the variables being heavily correlated with each other, but if you wanted to check, look at the Variance Inflation Factor or other collinearity measures.
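For anyone wanting to run the collinearity check mentioned here, a variance inflation factor can be computed with plain NumPy (a sketch: columns are centered first, so no intercept column is needed):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of design matrix X.

    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j
    on the remaining columns. Values well above ~5-10 flag collinearity.
    """
    X = np.asarray(X, dtype=float)
    X = X - X.mean(axis=0)                      # center the columns
    factors = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r_squared = 1.0 - resid.var() / y.var()
        factors.append(1.0 / (1.0 - r_squared))
    return factors
```

Near-duplicate predictors (say, rushing yards and rushing attempts) will show large VIFs, while unrelated predictors sit near 1.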

My two cents, don't know if that is what you were trying to imply Happy.

HappyBreathNet said...

Joel,

Thanks for your input. I do understand what you're saying. The trouble, of course, is with the "who knows." Too many variables in one model quickly become untenable from a calculation standpoint. Having many smaller models is much easier, and they can in fact be computed in parallel.

A compromise may be to use the outputs of the correction factors as inputs into a larger model. This still reduces the number of variables in the master model without assuming additive properties for each of the corrections and without ignoring interactions between the correction factors. This assumes that each correction factor has fewer outputs than inputs.

Thanks again,

Happy

DDW said...

Jamie,

The last few weeks have been a struggle. Hopefully we can turn things around. My picks vs yours are:

Den +3
GB -6.5
Stl +9
No -11

I don't think i'll be taking any of these games, but good luck this week.

DDW

dtBy said...

Looks like 75% this week. That's got to feel better.

I feel so disconnected! It's already Monday morning here, and they won't let me switch the TV from Bloomberg to SNF coverage.

Even though I can't (easily) watch the games, I'm probably going to start running my models again this week. I just need to figure out if I can keep up with the news in a timely way.

Good luck going forward!

HappyBreathNet said...

Nice week. Probably would've been 4/4 if Warner didn't go down.

Jaime said...

It was about time right?