A Simple 2000-2012 Electoral Vote Simulation Model

Richard Charnin
July 27, 2015
Updated: Oct. 5, 2015
Links to website and blog posts
Look inside the books:
Matrix of Deceit: Forcing Pre-election and Exit Polls to Match Fraudulent Vote Counts
Reclaiming Science: The JFK Conspiracy

The purpose of the Monte Carlo Electoral Vote Simulation Model is to calculate the probability of a candidate winning at least 270 Electoral votes.

The Total EV is calculated as the sum of the products of the state win probabilities and the corresponding electoral votes. Each state's win probability is also required to calculate the total probability of winning 270 EV; it is calculated from the projected two-party vote share and the margin of error (MoE), used as inputs to the Normal distribution.

Prob = NORMDIST(vote share, 0.5, MoE/1.96, TRUE)

The probability of winning the election is the ratio of winning simulation trials (at least 270 EV) to the total number of simulation trials (200).
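
A minimal sketch of this calculation in Python (the state shares, MoEs and electoral votes below are purely illustrative, not the model's spreadsheet inputs; scipy's norm.cdf plays the role of Excel's NORMDIST):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical inputs for 51 "states" (the real inputs are the projected
# two-party shares, MoEs and electoral votes for the 50 states plus DC)
share = rng.normal(0.51, 0.05, 51)        # projected two-party vote shares
moe = np.full(51, 0.03)                   # margins of error
ev = rng.integers(3, 56, 51)              # electoral votes (illustrative)

# State win probability, as above: NORMDIST(share, 0.5, MoE/1.96, TRUE)
win_prob = norm.cdf(share, loc=0.5, scale=moe / 1.96)

# Monte Carlo simulation: each trial decides every state with a Bernoulli draw
trials = 200
wins = rng.random((trials, 51)) < win_prob     # True where the state is won
trial_ev = wins.astype(int) @ ev               # total EV in each trial

threshold = ev.sum() // 2 + 1                  # 270 with the real 538-EV total
p_win = np.mean(trial_ev >= threshold)
print(f"Probability of winning at least {threshold} EV: {p_win:.2f}")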

The model contains the following 2-party vote shares:
2000- Gore unadjusted state and national exit polls and recorded shares
2004- Kerry unadjusted state and national exit polls and recorded shares
2008- Obama unadjusted state and national exit polls and recorded shares
2012- Obama state and national True Vote and recorded shares
(In 2012, 19 states were not exit polled)

Only ONE input (code 1-8) is required to indicate the election and method:
2000: 1- exit poll, 2- recorded votes
2004: 3- exit poll, 4- recorded votes
2008: 5- exit poll, 6- recorded votes
2012: 7- True vote, 8- recorded votes

The Electoral Vote Histogram shows the results of 200 simulation trials.

There are three Total Electoral Vote calculations:
1- Theoretical EV: the product sum of the state win probabilities and corresponding EVs.
2- Snapshot EV: the sum of the projected electoral votes (the EVs of the states the candidate is currently projected to win).
3- Mean EV: the average EV over all simulation trials.
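
The three summaries can be sketched the same way (again with purely illustrative per-state numbers and trial totals standing in for the model's outputs):

import numpy as np

# Hypothetical per-state win probabilities, projected shares and EVs,
# plus EV totals from five illustrative simulation trials
win_prob = np.array([0.95, 0.70, 0.40, 0.15])
share    = np.array([0.55, 0.52, 0.49, 0.45])
ev       = np.array([55, 29, 38, 20])
trial_ev = np.array([84, 84, 122, 84, 46])

theoretical_ev = float(win_prob @ ev)         # 1- product sum: 90.75 here
snapshot_ev    = int(ev[share > 0.5].sum())   # 2- EVs of states projected won: 84
mean_ev        = float(trial_ev.mean())       # 3- average over trials: 84.0
print(theoretical_ev, snapshot_ev, mean_ev)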

In 2000, Gore defeated Bush by just 544,000 recorded votes. But he won the unadjusted state exit poll aggregate by 51.7-46.8%. Given that there were 105.4 million recorded votes, the exit polls imply that he won by at least 5 million votes. There were 11 states in which he led the exit polls but which flipped to Bush in the recorded vote. Had he won just one of them, he would have won the election; had he won all 11, he would have had 408 electoral votes.

In 2004, Kerry had a 48.3% recorded share and 252 EV, and lost by 3 million votes. But the unadjusted state and national exit polls indicate that he had 51-52%, won by 5-6 million votes and had 349 EV. Seven states with 97 electoral votes flipped from Kerry in the exit polls to Bush in the recorded vote: CO, FL, IA, MO, NV, OH, VA. Had he won those states, Kerry would have had 252 + 97 = 349 electoral votes. The True Vote Model indicates that he had 53.5% and won by 10 million votes.

In the 2008 Election Model, Obama's expected theoretical electoral vote of 365.3 was a near-perfect match to his recorded 365 EV. The simulation mean EV was 365.8 and the snapshot EV was 367. Obama won all 5,000 election trials. His projected 53.1% share was a close match to the 52.9% recorded share.

The 2008 TVM exactly matched Obama’s 58% share of the unadjusted state exit polls: he won by 23 million votes (not the 9.5 million recorded) and had 420 electoral votes. Obama led the unadjusted National Exit Poll (17,836 respondents, 2% MoE) by 61-37%, an astounding 30 million vote margin.

The 2012 Monte Carlo Simulation Forecast exactly matched Obama’s 332 electoral votes and 51.0% total vote share. In the True Vote Model he had 55.6% and 391 Electoral votes.

Pre-election Registered Voter (RV) polls projected a 57% Obama share, which closely matched the True Vote Model. Likely Voter (LV) polls are a subset of the RV polls. The LV screens eliminate many new voters or others who did not vote in the prior election, cutting the projected Democratic share.

LV polls have an excellent track record in predicting the bogus recorded vote, as proven by the 2008 and 2012 Election Models. Final pre-election LV polls are used by the political pundits for their projections. After all, the media is paid to forecast the official recorded vote – not the true vote.


Exit Pollsters at Edison Research: Never Discuss the Election Fraud Factor

Richard Charnin
July 20, 2015

Charnin Website
Look inside the book: Matrix of Deceit: Forcing Pre-election and Exit Polls to Match Fraudulent Vote Counts
Look inside the book: Reclaiming Science: The JFK Conspiracy

Frustrated voters who have seen their elections stolen need to know the facts. The corporate media never discusses Election Fraud – the third-rail of American politics. But it is no longer the dirty little secret it was before the 2000 election.

This is an analytic overview of Historical Election Fraud: https://richardcharnin.wordpress.com/2013/01/31/historical-overview-of-election-fraud-analysis/

Edison Research conducts exit polls. In this report, ER once again fails to mention the Election Fraud factor, which has skewed the True Vote in national, state and local elections for decades. http://statistical-research.com/wp-content/uploads/2014/08/Probability-Based-Exit-Poll-Estimation.pdf

In all exit polls, the pollsters adjust returning voters and/or vote shares to match the recorded vote. EDISON RESEARCH MAKES THE INVALID ASSUMPTION THAT THE RECORDED VOTE IS THE TRUE VOTE. IT IS AN UNSCIENTIFIC MYTH WHICH ONLY SERVES TO PERPETUATE FRAUD.

The following is a summary of the major points in the Edison Research article. My comments are in bold italics.

Edison: Of the surveys there were 19 states where the sample size was too small for individual state demographic or other breakouts.
That is absolute nonsense. In 2012, the National Election Pool (NEP) of six media giants which funds the exit polls said it did not want to incur the cost, so exit polls would not be run in 19 states. That was a canard. Could it be that the NEP and the pollsters did not want the full set of 50 state exit polls to be used in a True Vote analysis? The continued pattern of discrepancies would just further reveal built-in systematic fraud. That is also why the question “How Did You Vote in 2008?” was not published along with the usual cross tabs. The “How Voted” crosstab is the Smoking Gun of Election Fraud. In every election since 1988, the crosstab shows how the pollsters adjust the number of returning Republican and Democratic voters (as well as the current vote shares) to match the recorded vote.
https://richardcharnin.wordpress.com/2014/11/19/the-exit-poll-smoking-gun-how-did-you-vote-in-the-last-election/

Edison: The majority of interviews are conducted in-person on Election Day in a probability sample that is stratified based on geography and past vote.
The past vote is the bogus recorded vote which favors the Republicans. Any stratification strategy is therefore biased and weighted to the Republicans.

Edison: The goal in this paper is not to provide a comprehensive and exhaustive discussion of the intricacies of the operational and statistical aspects of an exit poll but to provide additional discussion on various ways to incorporate probability distributions into an exit poll framework. The core of this discussion is based on discrete data in the exit poll. The examples used in this paper will be based on the data obtained from the 2012 presidential election and will specifically address the use of the Dirichlet and Normal distributions.
There is nothing intricate about forcing unadjusted exit polls to match the recorded vote. It is quite simple. And it happens in every election.

How does Edison explain the massive exit poll discrepancies?

– In 2008, Obama had 61% in the National Exit Poll (17,836 respondents) and 58% in the weighted aggregate of the state exit polls. But he had a 52.9% recorded share. The probability of the discrepancy is ZERO.

– In 2004, John Kerry had 51.7% in the unadjusted National Exit Poll (13,660 respondents). He led the state aggregate by 51.1-47.6%. But Kerry lost the recorded vote by 50.7-48.3%.

– In 2000, Al Gore led the unadjusted National Exit Poll by 48.5-46.3%. He led the state aggregate polls by 50.8-44.4% (6 million votes). But Gore was held to a 48.4-47.9% margin (540,000 votes) in the recorded vote.
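
A rough check of that “ZERO” claim for 2008 (a back-of-the-envelope calculation assuming simple random sampling; the actual cluster design widens the interval somewhat, but nowhere near enough to matter):

from math import sqrt
from scipy.stats import norm

# 2008 National Exit Poll: 17,836 respondents showing 61% for Obama,
# versus a 52.9% recorded share. If the recorded share were the true
# population value, how far out is a 61% poll result?
n, poll, recorded = 17836, 0.61, 0.529
se = sqrt(recorded * (1 - recorded) / n)   # standard error of a sample proportion
z = (poll - recorded) / se                 # about 21.7 standard errors
p = norm.sf(z)                             # upper-tail probability, effectively zero
print(f"z = {z:.1f}, P = {p:.1e}")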

Edison: A useful characteristic relating to probability distributions is the ability to use known data and then simulate from the posterior distribution. Using the exit poll framework, the statewide candidate estimates can be used and applied using the Dirichlet distribution approach. This means that the estimates from each state can be used to determine the probability that a given candidate will win each state. With the probability of success established for each state we can incorporate these probabilities into a winner-take-all Binomial distribution for all 50 states and the District of Columbia.
A simulation is not required to calculate the expected electoral vote. The expected EV is the product sum of the state win probabilities and corresponding EVs.
EV = SUMPRODUCT[prob(i) * EV(i)], where i = 1 to 51 (the 50 states plus DC).
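
In Python, the same product sum is a one-liner (illustrative probabilities and electoral votes only):

# win_prob[i] = probability of winning state i, state_ev[i] = its electoral votes
win_prob = [0.95, 0.60, 0.30, 0.85]   # hypothetical values for illustration
state_ev = [55, 29, 38, 20]
expected_ev = sum(p * ev for p, ev in zip(win_prob, state_ev))
print(expected_ev)   # 98.05 with these illustrative numbers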

In the 2012 True Vote Election Model, pre-election state win probabilities were calculated based on final Likely Voter (LV) polls. The model exactly projected Obama’s 332 EV. But Obama’s True Vote was much better than his recorded share. Note: LV polls are a subset of Registered Voter (RV) polls; the LV screen eliminates new, mostly Democratic, “unlikely” voters.
https://richardcharnin.wordpress.com/2012/10/17/update-daily-presidential-true-voteelection-fraud-forecast-model/

Edison: Clearly, ‘calling’ a national election based purely on sample data is not the most favorable strategy due to sampling variability. However, updating the probability that a candidate will win with additional known data in each of the given states will decrease the variability in the posterior distribution. This can be accomplished by using additional known prior data or, as is often the case in elections, by adding the final precinct election results provided shortly after the polling places close.

This is all good theoretically, but it assumes that the final precinct data has not been manipulated. In any case, a 10 million trial simulation is overkill. Only 500 Monte Carlo trials are necessary to calculate the probability of winning the electoral vote.
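
(A rough check of that claim, not a figure from the Edison paper: the standard error of a win probability estimated from N independent simulation trials is sqrt(p(1-p)/N). Even in the worst case of p = 0.5, 500 trials give a standard error of about sqrt(0.25/500) ≈ 0.022, roughly two percentage points, and far less when the race is not close.)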

Edison: This can be accomplished by using additional known prior data or, as is often the case in elections, by adding the final precinct election results provided shortly after the polling places close. Due to the nature of elections, informed priors are often available and can be incorporated into the estimates to improve the probability distribution. In this way, specific models can be developed to handle states with more or less available prior data and improve the overall model.
Again, no mention of the votes being flipped in the precincts.

Edison: We can take the currently collected data and model the results using other quantities that are available. In some ways, due to the nature of linear regression, prior information is already implicitly included in exit poll regression models.
But the prior election returning voter mix in five presidential elections was mathematically and physically impossible. The exit polls indicate that there were more returning Nixon and Bush voters from the prior election than were actually still alive. This is absolute proof that the published exit polls were adjusted to match vote-miscounts. Garbage in, garbage out.

Edison: There are two primary goals that are addressed by regression models in this paper:
1) general understanding of the data within a given state. In other words identifying variables that aid in a linear prediction of the candidate’s vote; and
2) predicting y, given x, for future observations.
Which data? The adjusted demographic data or the actual pristine data?
If Y = f(X), then X should not be forced to fit the recorded result.

Edison: For the purposes of this paper the sample of polling locations using the final end of night results are used as the response variable. Generally for all states past data tends to be a very good predictor of current results. In some states there are other predictors (e.g. precinct boundary changes, current voter registration, weather, etc.) that work well while in other states those same predictors provide no additional information and make the model unnecessarily complex.
But past data does not reflect the prior True Vote, so any regression analysis cannot predict the True Vote. It will however predict the bogus, recorded vote.

Edison: Again, the regression model presented here is an example model used for demonstration purposes (i.e. no formal model selection procedure was used). Furthermore, for this same purpose the non-informative prior is used. It’s clear from the output of the regression summary that there is a strong effect for 2008 candidate vote percentage, precincts with high Democrat vote in 2008 tend to have a very predictable Democrat vote in 2012. As one would expect the 2012 exit poll results have a strong effect when predicting the final polling location results. This example regression model for Florida is provided in Equation 2.
E(CAND_j | x, θ) = β0 + β1 · CAND_EP2012_j + β2 · CAND_2008_j

All this says is that a candidate’s vote share is predictable using regression analysis based on the 2008 recorded vote and the 2012 adjusted precinct exit poll data. But if the precinct data is biased, the projection will reflect the bias. And the cycle continues in all elections that follow.
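
A minimal sketch of a regression of that form, using hypothetical precinct data and ordinary least squares in place of whatever estimation and priors Edison actually uses:

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical precinct data for one state:
# x1 = 2012 exit-poll Democratic share, x2 = 2008 recorded Democratic share,
# y  = final 2012 polling-place Democratic share
n = 200
x2 = rng.uniform(0.2, 0.8, n)
x1 = np.clip(x2 + rng.normal(0, 0.03, n), 0, 1)
y = np.clip(0.02 + 0.6 * x1 + 0.4 * x2 + rng.normal(0, 0.02, n), 0, 1)

# E(CAND_j | x, theta) = b0 + b1 * CAND_EP2012_j + b2 * CAND_2008_j
X = np.column_stack([np.ones(n), x1, x2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
print(round(b0, 3), round(b1, 3), round(b2, 3))

# If x1 and x2 are themselves adjusted or miscounted, the fitted model
# simply reproduces that bias in its predictions.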

Edison: We can check to see if the observed data from the polling places are consistent with the fitted model. Based on the model and the predictive distribution, the model fits quite well without outliers in any of the precincts.
Of course the model will fit the bogus recorded vote quite well, because it was forced to match the recorded vote. But what if the observed precinct vote data has been manipulated?

Edison: Several important conclusions about the analysis of exit poll data can be drawn from this review of approaches using probability distributions. First, it is clear that there are many probability distribution components to an exit poll.
But the prior information (recorded vote and adjusted exit polls) used in the probability analysis is bogus as long as there is no consideration of the Election Fraud Factor.
Recorded Vote = True Vote + Fraud

Edison: This research on exit polling serves as an exploration of ways to investigate and analyze data and to provide alternate, complementary approaches that may be more fully integrated into standard election (and non-election) exit polling. These procedures are only a few of the many ways that can be used to analyze exit poll data. These approaches provide an alternate way to summarize and report on these data. It also provides additional visualization and ways to view the data and how the data are distributed.
But the core problem is not addressed here. All alternative models are useless if they are based on prior and current recorded vote data which has been corrupted.

Edison: Further topics include small sample sizes, missing data, censored data, and a deeper investigation into absentee/early voting. Additionally, these approaches can be used to investigate various complex sample design techniques (e.g. stratified, cluster, multi-phase, etc.) and evaluate how the designs interact with probabilistic approaches in an exit polling context. Further hierarchical modeling may provide additional insight into the complexities of the exit poll data.
These sample design techniques are all based on recorded vote data. Why are pristine exit polls always adjusted (forced) to match the Election Day recorded vote to within 0.1%?
Proof: Unadjusted Exit Polls are forced to match the Recorded vote:
https://docs.google.com/spreadsheet/ccc?key=0AjAk1JUWDMyRdFIzSTJtMTJZekNBWUdtbWp3bHlpWGc#gid=15
