
Monte Carlo Simulation: Election Forecasting and Exit Poll Modeling


Richard Charnin

Updated: July 8, 2012

MONTE CARLO SIMULATION

Monte Carlo simulation is a random process of repeated experimental “trials” applied to a mathematical system model. The Election Simulation Model runs 200 trial “elections” to determine the expected electoral vote and win probability.

Ideally, statistical polling (state and national) is an indicator of current voter preference. Pre-election poll shares are adjusted for undecided voters, and state win probabilities are calculated. The probabilities are input to a Monte Carlo simulation driven by random numbers. The final probability of winning the electoral vote is simply the number of winning election trials divided by the total number of trials (200 in the ESM; 5,000 in the Election Model).

The only forecast assumption is the allocation of undecided/other voters. Historically, 70-80% of undecided voters break for the challenger. If the race is tied at 45-45, a 60-40% split of undecided voters results in a 51-49% projected vote share.
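As a quick check of that arithmetic, a minimal Python sketch (the candidate labels are generic placeholders):

# 45-45 tie, 10% undecided, 60/40 split to the challenger:
challenger = 45 + 0.60 * 10   # 51.0
incumbent = 45 + 0.40 * 10    # 49.0
print(challenger, incumbent)  # 51.0 49.0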

The theoretical expected electoral vote for a candidate is a simple calculation: it is just the sum of the 51 products of each state's electoral vote and its win probability. In the simulation, the average (mean) electoral vote converges to the theoretical value as the number of election trials increases.
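A minimal sketch of that calculation in Python (the three states and win probabilities below are hypothetical placeholders for the 51 actual inputs):

# Expected electoral vote = product sum of electoral votes and win probabilities.
states = {
    "Oregon":  (7,  0.90),   # (electoral votes, win probability) - hypothetical
    "Ohio":    (20, 0.55),
    "Florida": (27, 0.48),
}
expected_ev = sum(ev * p for ev, p in states.values())
print(round(expected_ev, 2))   # 7*0.90 + 20*0.55 + 27*0.48 = 30.26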

Sensitivity Analysis

A major advantage of the Monte Carlo method is that the win probability is not sensitive to minor deviations in the state polls. It is not an all-or-nothing proposition as far as allocating the electoral vote is concerned. A projected 51% vote share has less electoral “weight” than a 52% share, etc. Electoral vote projections from media pundits and Internet bloggers use a single snapshot of the latest polls to determine a projected electoral vote split. This can be misleading when the states are competitive and often results in wild electoral vote swings.

In the Election Model, five projection scenarios are executed over a range of undecided voter allocation assumptions to display the effects on aggregate vote share, electoral vote, and win probability.

Snapshot projections do not provide a robust expected electoral vote split and win probability. That's because, unlike the Monte Carlo method, they fail to consider the two bedrocks of statistical analysis: the Law of Large Numbers and the Central Limit Theorem.

For example, assume that Florida's polls shift 1% from 46-45 to 45-46. In a snapshot projection, this would have a major impact on the electoral vote split. In a Monte Carlo simulation, on the other hand, the change would have just a minimal effect on the expected (average) electoral vote and win probability. The 46-45 poll split means that the race is too close to clearly project a winner; both candidates have a nearly equal win probability.

ELECTION FORECASTING METHODOLOGY

The Law of Large Numbers is the basis for statistical sampling. All things being equal, polling accuracy is directly related to sample size – the larger the sample, the smaller the Margin of Error (MoE). In an unbiased random sample, there is a 95% probability that the vote will fall within the MoE of the mean.

There are two basic methods used to forecast presidential elections:
1) Projections based on state and national polls
2) Time-series regression models

Academics and political scientists build linear regression models to forecast election vote shares and run the models months in advance of the election. The models utilize time-series data such as economic growth, inflation, job growth, interest rates, foreign policy, historical election results, incumbency, and approval ratings. Regression modeling is an interesting theoretical exercise, but it does not account for daily events which affect voter psychology.

Polling and regression models are analogous to the market value of a stock and its intrinsic (theoretical) value. The latest poll share is the equivalent of the current stock price. The intrinsic value of a stock is based on forecast cash flows. The intrinsic value is rarely equal to the market value.

The historical evidence is clear: state and national polls, adjusted for undecided voters and estimated turnout, are superior to time-series models executed months in advance.

Inherent problems exist in election models, the most important of which is never discussed: Election forecasters and media pundits never account for the probability of fraud. The implicit assumption is that the official recorded vote will accurately reflect the True Vote and that the election will be fraud-free.

ELECTORAL AND POPULAR VOTE WIN PROBABILITIES

The probability of winning the popular vote is a function of the projected 2-party vote share and the polling margin of error. These are input to the Excel normal distribution function. The simulation generates an electoral vote win probability that is not sensitive to minor changes in the state polls.

Prob(win) = NORMDIST(Proj, 0.50, MoE/1.96, TRUE)
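The same calculation can be sketched in Python with scipy's normal CDF, the equivalent of Excel's NORMDIST with cumulative = TRUE (the 51% projection and 3% MoE are illustrative values only):

from scipy.stats import norm

proj = 0.51   # projected 2-party vote share (illustrative)
moe = 0.03    # poll margin of error (illustrative)

# Prob(win) = NORMDIST(Proj, 0.50, MoE/1.96, TRUE)
p_win = norm.cdf(proj, loc=0.50, scale=moe / 1.96)
print(round(p_win, 3))   # about 0.743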

For each state in an election trial, a random number (RND) between 0 and 1 is generated and compared to the probability of winning the state. For example, if Kerry has a 90% probability of winning Oregon and RND is less than 0.90, Kerry wins its 7 electoral votes; otherwise Bush wins them. The procedure is repeated for all 50 states and DC. The winner of an election trial is the candidate with at least 270 electoral votes.
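A minimal sketch of one way to code the trial loop in Python (the three states and win probabilities are hypothetical placeholders; the actual models run all 50 states plus DC over 200 or 5,000 trials):

import random

# Hypothetical (electoral votes, win probability) pairs, e.g. Oregon, Ohio, Florida.
states = [(7, 0.90), (20, 0.55), (27, 0.48)]
TOTAL_EV = sum(ev for ev, _ in states)
NEEDED = TOTAL_EV // 2 + 1        # 270 in the full 538-EV model
TRIALS = 5000

wins = 0
ev_sum = 0
for _ in range(TRIALS):
    # A state is won when the random number falls below its win probability.
    ev = sum(votes for votes, p in states if random.random() < p)
    ev_sum += ev
    if ev >= NEEDED:
        wins += 1

print("mean electoral vote:", ev_sum / TRIALS)   # converges to the expected value
print("win probability:", wins / TRIALS)         # winning trials / total trials

As the number of trials increases, the mean electoral vote converges to the product-sum formula above.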

The electoral vote win probability is directly correlated to the probability of winning the national popular vote. But electoral vote win probabilities in models developed by academics and bloggers are often incompatible with the projected national vote shares.

For example, assume a 53% projected national vote share. If the corresponding EV win probability is given as 88%, the model design/logic is incorrect; the 53% share and 88% win probability are incompatible. For a 53% share, the win probability is virtually 100%. This is proved using Monte Carlo simulation based on state win probabilities in which the aggregate projected national share is 53%.

The state win probability is based on the final pre-election polls, which typically sample 600 likely voters (a 4% MoE). In the 2004 Election Simulation Model, the electoral vote is calculated using 200 election trials. The average (mean) electoral vote is usually within a few votes of the median (middle value). As the number of simulation trials increases, the mean approaches the theoretical expected value. That is due to the Law of Large Numbers.

2004 ELECTION MODEL

The simulation model consists of 200 election trials based on pre-election state polls and post-election exit polls. The results provide strong circumstantial evidence that the election was stolen.

In the pre-election model, the state and national polls are adjusted for the allocation of undecided voters. The post-election model is based on unadjusted and adjusted state exit polls. Monte Carlo simulation is used to project state and aggregate vote shares and calculate the popular and electoral vote win probabilities.

The state win probability is a function of 1) the projected vote shares (after allocating undecided voters) and 2) the state poll margin of error.

The expected (theoretical) electoral vote can be calculated using a simple summation formula. It is just the product sum of the state win probabilities and corresponding electoral votes.

The purpose of the simulation is to calculate the overall probability of winning the electoral vote. As the number of election trials increases, the average (mean) electoral vote will approach the theoretical expected value.

The electoral vote win probability is the ratio of the number of winning election trials to the total number of trials.

In every presidential election, millions of voters are disenfranchised and millions of votes are uncounted. Forecasting models should have the following disclaimer:

Note: The following forecast will surely deviate from the official recorded vote. If they are nearly equal, then there must have been errors in a) the input data, b) the assumptions, and/or c) the model logic and methodology.

Kerry led the weighted pre-election state and national polls by 1%. After allocating 75% of undecided voters to him, he was projected to win by 51.4-47.7%. Kerry had 51.1% in the unadjusted state exit poll aggregate (76,000 respondents) and 51.7% in the unadjusted National Exit Poll (13,660 respondents).

The National Election Pool, a consortium of six media giants, funds the exit polls. The published National Exit Poll is always forced to match the recorded vote. The Final 2004 NEP adjusted the actual exit poll responses to force a match to the recorded vote (Bush by 50.7-48.3%).

The large discrepancy between the exit polls and the vote count indicates that either a) the pre-election and unadjusted exit polls were faulty, b) the votes were miscounted, or c) a combination of both. Other evidence confirms that the votes were miscounted in favor of Bush.

The True Vote always differs from the official recorded vote due to uncounted, switched and stuffed ballots. Were the pollsters who forecast a Bush win correct? Or were Zogby and Harris correct in projecting that Kerry would win?

None of the pollsters mentioned the election fraud factor – the most important variable of all.

MODEL OVERVIEW

The workbook contains a full analysis of the 2004 election, based on four sets of polls:

(1) Pre-election state polls
(2) Pre-election national polls
(3) Post-election state exit polls
(4) National Exit Poll

Click the tabs at the bottom of the screen to select:
MAIN: Data input and summary analysis.
SIMULATION: Monte Carlo Simulation of state pre-election and exit polls.
NATPRE: Projections and analysis of 18 national pre-election polls.
In addition, three summary graphs are provided in separate sheets.

Calculation methods and assumptions are entered in the MAIN sheet:
1) Calculation code: 1 for pre-election polls; 2 for EXIT polls.
2) Undecided voter allocation (UVA): Kerry’s share (default 75%).
3) Exit Poll Cluster Effect: increase in margin of error (default 30%).
4) State Exit Poll Calculation Method:
1= WPD: average precinct discrepancy.
2= Best GEO: adjusted based on recorded vote geographic weightings.
3= Composite: further adjustment to include pre-election polls.
4= Unadjusted state exit polls

Note: The Composite state exit poll data set (12:40am) was downloaded from the CNN election site by Jonathan Simon. The polls were in the process of being adjusted to the incoming vote counts and weighted to include pre-election polls. The final adjustment at 1am forced a match to the final recorded votes.

Simulation forecast trends are displayed in the following graphs:

State aggregate poll trend
Electoral vote and win probability
Electoral and popular vote
Undecided voter allocation impact on electoral vote and win probability
National poll trend
Monte Carlo Simulation
Monte Carlo Electoral Vote Histogram

POLL SAMPLE-SIZE AND MARGIN OF ERROR

Approximately 600 voters were surveyed in each of the state pre-election polls (a 4% margin of error). The national aggregate has a lower MoE; approximately 30,000 were polled. In 18 national pre-election polls, the samples ranged from 800 (a 3.5% MoE) to 3,500 (a 1.7% MoE).

In the exit polls, 76,000 voters were sampled. Kerry won the unadjusted state exit poll aggregate by 51.1-47.5%. He also won the unadjusted National Exit Poll (NEP) by 51.7-47.0%. The NEP is a 13,660-respondent subset of the state exit polls. The NEP was adjusted to match the recorded vote, using the same 13,660 respondents.

Assuming a 30% exit poll “cluster effect” (1.1% MoE), Kerry had a 98% probability of winning the popular vote. The Monte Carlo simulation indicates he had better than a 99% probability of winning the Electoral Vote.

The Election Model was executed weekly from August to the election. It tracked state and national polls which were input to a 5000 trial Monte Carlo simulation. The final Nov. 1 forecast had Kerry winning 51.8% of the two-party vote and 337 electoral votes. He had a 99.8% electoral vote win probability: the percentage of trials in which he had at least 270 electoral votes.

https://docs.google.com/spreadsheets/d/1NS7iJ-8zPFoP-4TlKqPBS7XnZl2chTW448qiTqzolRc/edit#gid=0

The “cluster effect” is the percentage increase in the theoretical exit poll margin of error. When it is not practical to carry out a pure random sample, a common shortcut is to use an area cluster sample: Primary Sampling Units (PSUs) are selected at random within the larger geographic area.

The Margin of Error (MoE) is a function of the sample size (n) and the polling percentage split: MoE = 1.96* Sqrt(P*(1-P)/n)
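A quick check of this formula in Python, using the sample sizes cited in this post (the helper function name is my own):

import math

def moe(n, p=0.5):
    # MoE = 1.96 * sqrt(P*(1-P)/n) at a 95% confidence level
    return 1.96 * math.sqrt(p * (1 - p) / n)

print(round(moe(600), 3))          # 0.04  -> the 4% state pre-election poll MoE
print(round(moe(3500), 3))         # 0.017 -> 1.7% for the largest national poll
print(round(moe(1963) * 1.20, 4))  # ~0.0265: Ohio exit poll with a 20% cluster effect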

NORMAL DISTRIBUTION

The Excel NORMDIST function has a very wide range of applications in statistics, including hypothesis testing.

NORMDIST (x, mean, stdev, cumulative)
X is the value for which you want the distribution.
Mean is the arithmetic mean of the distribution.
Stdev is the standard deviation of the distribution.
Cumulative is a logical value: TRUE returns the cumulative distribution function; FALSE returns the probability density function.

EXAMPLE: Calculate the probability Kerry would win Ohio based on the exit poll assuming a 95% level of confidence.
Sample Size = 1963; MoE = 2.21%; Cluster effect = 20%; Adj. MoE = 2.65%
Std Dev = 1.35% = 2.65% / 1.96

Kerry win probability:
Kerry = 54.0%; Bush = 45.5%; StdDev = 1.35%
Kerry 2-party share: 54.27% = 54.0 / (54.0 + 45.5)
Probability = NORMDIST(.5427, .5, .0135, TRUE) = 99.92%
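The same Ohio calculation, sketched with scipy's normal CDF:

from scipy.stats import norm

share = 0.5427     # Kerry 2-party share: 0.54 / (0.54 + 0.455)
std_dev = 0.0135   # adjusted MoE / 1.96 = 0.0265 / 1.96

# NORMDIST(0.5427, 0.5, 0.0135, TRUE)
print(round(norm.cdf(share, loc=0.5, scale=std_dev), 4))   # 0.9992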

BINOMIAL DISTRIBUTION

BINOMDIST is used in problems with a fixed number of tests or trials, where the outcome of any trial is success or failure. The trials are independent. The probability of success is constant in each trial (heads or tails, win or lose).

EXAMPLE: Determine the probability that the state exit poll MoE is exceeded in at least n states assuming a 95% level of confidence. The one-tail probability of Bush exceeding his exit poll share by the MoE is 2.5%.
N = 16 states exceeded the MoE at 12:22am in favor of Bush.
The probability that the MoE is exceeded in at least 16 polls for Bush:
= 1- BINOMDIST (15, 50, 0.025, TRUE)
= 5.24E-14 or 1 in 19,083,049,268,519
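A quick verification of this tail probability in Python, using scipy's binomial survival function:

from scipy.stats import binom

# P(X >= 16) where X ~ Binomial(n=50, p=0.025):
# equivalent to 1 - BINOMDIST(15, 50, 0.025, TRUE)
p = binom.sf(15, 50, 0.025)   # survival function: P(X > 15)
print(p)                      # about 5.2e-14, roughly 1 in 19 trillion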

A SAMPLING PRIMER

The following is an edited summary from http://www.csupomona.edu/~jlkorey/POWERMUTT/Topics/data_collection.html

A random sample is one in which each member of the population has an equal probability of being included. It provides an unbiased estimate of the characteristics of the population, since the respondents are representative of the population as a whole.

The reliability of the sample increases with the size of the sample. Ninety-five times out of a hundred, a random sample of 1,000 will be accurate to within about 3 percentage points. The sample has a margin of error of approximately plus or minus (±) 3 percent at a 95 percent confidence level. If a random sample of 1,000 voters shows that 60 percent favor candidate X, there is a 95 percent chance that the real figure in the population is in the 57 to 63 percent range.

Beyond a certain point, increasing the sample size makes little difference: the confidence interval is not reduced dramatically. Therefore, pre-election national polls usually don't survey more than about 1,500 respondents. Increasing this number increases the cost proportionately, but reduces the margin of error only a little.

Often it is not practical to carry out a pure random sample. One common shortcut is the area cluster sample. In this approach, a number of Primary Sampling Units (PSUs) are selected at random within a larger geographic area. For example, a study of the United States might begin by choosing a subset of congressional districts. Within each PSU, smaller areas may be selected in several stages down to the individual household. Within each household, an individual respondent is then chosen. Ideally, each stage of the process is carried out at random. Even when this is done, the resulting sampling error will tend to be a little higher than in a pure random sample, but the cost savings may make the trade-off well worthwhile.

Somewhat similar to a cluster sample is a stratified sample. An area cluster sample is appropriate when it would be impractical to conduct a random sample over the entire population being studied. A stratified sample is appropriate when it is important to ensure inclusion in the sample of sufficient numbers of respondents within subcategories of the population.

Even in the best designed surveys, strict random sampling is a goal that can almost never be fully achieved under real world conditions, resulting in non-random (or “systematic”) error. For example, assume a survey is being conducted by phone. Not everyone has one. Not all are home when called. People may refuse to participate. The resulting sample of people who are willing and able to participate may differ in systematic ways from other potential respondents.

Apart from non-randomness of samples, there are other sources of systematic error in surveys. Slight differences in question wording may produce large differences in how questions are answered. The order in which questions are asked may influence responses. Respondents may lie.

Journalists who use polls to measure the “horse race” aspect of a political campaign face additional problems. One is trying to guess which respondents will actually turn out to vote. Pollsters have devised various methods for isolating the responses of “likely voters,” but these are basically educated guesses. Exit polls, in which voters are questioned as they leave the voting area, avoid this problem, but the widespread use of absentee voting in many states creates new problems. These issues are usually not a problem for academic survey research. Such surveys are not designed to predict future events, but to analyze existing patterns. Some are conducted after the election. The American National Election Study, for example, includes both pre- and post-election interviews. Post-election surveys are not without their own pitfalls, however. Respondents will sometimes have a tendency to report voting for the winner, even when they did not.

The American National Election Study split its sample between face to face and telephone interviews for its 2000 pre-election survey. The response rate was 64.8 percent for the former, compared to 57.2 percent for the latter. An analysis of a number of telephone and face-to-face surveys showed that face-to-face surveys were generally more representative of the demographic characteristics of the general population. Note that many telephone surveys produce response rates far lower than that obtained by the ANES.

Another approach is the online poll, in which the “interaction” is conducted over the Internet. Like automated “robo” polls, online polls are less expensive than traditional telephone surveys, and so larger samples are feasible. Because they require respondents to “opt in,” however, the results are not really random samples.

When samples, however obtained, differ from known characteristics of the population (for example, by comparison with recent census figures), they can be weighted to compensate for under- or over-representation of certain groups. There is still no way of knowing, however, whether respondents and non-respondents within these groups differ in their political attitudes and behavior.

 