
Exit Pollsters at Edison Research: Never Discuss the Election Fraud Factor


Richard Charnin
July 20 2015

Charnin Website
Look inside the book: Matrix of Deceit: Forcing Pre-election and Exit Polls to Match Fraudulent Vote Counts
Look inside the book: Reclaiming Science: The JFK Conspiracy

Frustrated voters who have seen their elections stolen need to know the facts. The corporate media never discusses Election Fraud – the third-rail of American politics. But it is no longer the dirty little secret it was before the 2000 election.

This is an analytic overview of Historical Election Fraud: https://richardcharnin.wordpress.com/2013/01/31/historical-overview-of-election-fraud-analysis/

Edison Research conducts exit polls. In this report, ER once again fails to mention the Election Fraud factor, which has skewed the True Vote in national, state and local elections for decades. http://statistical-research.com/wp-content/uploads/2014/08/Probability-Based-Exit-Poll-Estimation.pdf

In all exit polls, the pollsters adjust returning voters and/or vote shares to match the recorded vote. EDISON RESEARCH MAKES THE INVALID ASSUMPTION THAT THE RECORDED VOTE IS THE TRUE VOTE. IT IS AN UNSCIENTIFIC MYTH WHICH ONLY SERVES TO PERPETUATE FRAUD.

The following is a summary of the major points in the Edison Research article. My comments are in bold italics.

Edison: Of the surveys there were 19 states where the sample size was too small for individual state demographic or other breakouts.
That is absolute nonsense. In 2012, the National Election Pool (NEP) of six media giants which funds the exit polls said it did not want to incur the cost, so they would not run exit polls in 19 states. That was a canard. Could it be that the NEP and the pollsters did not want the full set of 50 state exit polls to be used in a True Vote analysis? The continued pattern of discrepancies would just further reveal built-in systematic fraud. That is also why the question “How Did You Vote in 2008” was not published along with the usual cross tabs. The “How Voted” crosstab is the Smoking Gun of Election Fraud. In every election since 1988, the crosstab illustrates how pollsters adjust the number of returning Republican and Democratic voters (as well as the current vote shares) to match the recorded vote.
https://richardcharnin.wordpress.com/2014/11/19/the-exit-poll-smoking-gun-how-did-you-vote-in-the-last-election/

Edison: The majority of interviews are conducted in-person on Election Day in a probability sample that is stratified based on geography and past vote.
The past vote is the bogus recorded vote which favors the Republicans. Any stratification strategy is therefore biased and weighted to the Republicans.

Edison: The goal in this paper is not to provide a comprehensive and exhaustive discussion of the intricacies of the operational and statistical aspects of an exit poll but to provide additional discussion on various ways to incorporate probability distributions into an exit poll framework. The core of this discussion is based on discrete data in the exit poll. The examples used in this paper will be based on the data obtained from the 2012 presidential election and will specifically address the use of the Dirichlet and Normal distributions.
There is nothing intricate about forcing unadjusted exit polls to match the recorded vote. It is quite simple. And it happens in every election.

How does Edison explain the massive exit poll discrepancies?

– In 2008, Obama had 61% in the National Exit Poll (17,836 respondents) and 58% in the weighted aggregate of the state exit polls. But he had a 52.9% recorded share. The probability of the discrepancy is ZERO.

– In 2004, John Kerry had 51.7% in the unadjusted National Exit Poll (13,660 respondents). He led the state aggregate by 51.1-47.6%. But Kerry lost the recorded vote by 50.7-48.3%.

– In 2000, Al Gore led the unadjusted National Exit Poll by 48.5-46.3%. He led the state aggregate polls by 50.8-44.4% (6 million votes). But Gore was held to a 48.4-47.9% recorded margin (540,000 votes).
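The size of these discrepancies can be sanity-checked with the normal approximation to the binomial. The sketch below uses the 2008 National Exit Poll figures cited above; it ignores the design effect of cluster sampling, which widens the interval somewhat, but not by anywhere near the order of magnitude needed to explain an 8-point gap.

```python
import math

def discrepancy_z(poll_share, recorded_share, n):
    """Z-score of a recorded share versus an exit poll share,
    using the normal approximation to the binomial."""
    se = math.sqrt(poll_share * (1 - poll_share) / n)
    return (poll_share - recorded_share) / se

def one_tail_p(z):
    """One-tailed probability of a discrepancy at least this large."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# 2008: 61% in the National Exit Poll (17,836 respondents) vs 52.9% recorded
z = discrepancy_z(0.61, 0.529, 17836)
p = one_tail_p(z)
print(f"z = {z:.1f}, p = {p:.2e}")
```

A z-score above 20 corresponds to a probability that is effectively zero, which is the basis of the claim above.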

Edison: A useful characteristic relating to probability distributions is the ability to use known data and then simulate from the posterior distribution. Using the exit poll framework, the statewide candidate estimates can be used and applied using the Dirichlet distribution approach. This means that the estimates from each state can be used to determine the probability that a given candidate will win each state. With the probability of success established for each state we can incorporate these probabilities into a winner-take-all Binomial distribution for all 50 states and the District of Columbia.
A simulation is not required to calculate the expected electoral vote. The expected EV is the product sum of the state win probabilities and corresponding EVs.
EV = SUMPRODUCT[prob(i) * EV(i)], where i = 1 to 51.
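The product sum can be sketched in a few lines. The three states and their win probabilities below are hypothetical, purely for illustration:

```python
# Expected electoral vote as a product sum of win probabilities and EVs.
# The states, probabilities, and EV counts here are hypothetical.
states = {
    # state: (win probability for the candidate, electoral votes)
    "A": (0.90, 29),
    "B": (0.55, 18),
    "C": (0.20, 6),
}

expected_ev = sum(p * ev for p, ev in states.values())
print(f"Expected EV = {expected_ev:.1f}")
```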

In the 2012 True Vote Election Model, pre-election state win probabilities were calculated based on final Likely Voter (LV) polls. The model exactly projected Obama’s 332 EV. But Obama’s True Vote was much better than his recorded share. Note: LVs are a subset of Registered Voter (RV) polls which eliminate new, mostly Democratic, “unlikely” voters.
https://richardcharnin.wordpress.com/2012/10/17/update-daily-presidential-true-voteelection-fraud-forecast-model/

Edison: Clearly, ‘calling’ a national election based purely on sample data is not the most favorable strategy due to sampling variability. However, updating the probability that a candidate will win with additional known data in each of the given states will decrease the variability in the posterior distribution. This can be accomplished by using additional known prior data or, as is often the case in elections, by adding the final precinct election results provided shortly after the polling places close.

This is all good theoretically, but it assumes that the final precinct data has not been manipulated. In any case, a 10 million trial simulation is overkill. Only 500 Monte Carlo trials are necessary to calculate the probability of winning the electoral vote.
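A 500-trial winner-take-all simulation of this kind can be sketched as follows; the state win probabilities and EV counts are again hypothetical, for illustration only:

```python
import random

def win_probability(states, trials=500, target=270, seed=1):
    """Monte Carlo estimate of the probability of reaching `target`
    electoral votes, given per-state win probabilities (winner-take-all)."""
    random.seed(seed)
    wins = 0
    for _ in range(trials):
        # Each trial: the candidate carries a state with its win probability.
        ev = sum(v for p, v in states.values() if random.random() < p)
        if ev >= target:
            wins += 1
    return wins / trials

# Hypothetical probabilities and EV counts for illustration only.
states = {"A": (0.90, 200), "B": (0.55, 100), "C": (0.20, 50)}
print(win_probability(states))
```

With only 500 trials the estimate is stable to within a couple of percentage points, which is the point being made: millions of trials add nothing.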

Edison: Due to the nature of elections, informed priors are often available and can be incorporated into the estimates to improve the probability distribution. In this way, specific models can be developed to handle states with more or less available prior data and improve the overall model.
Again, no mention of the votes being flipped in the precincts.

Edison: We can take the currently collected data and model the results using other quantities that are available. In some ways, due to the nature of linear regression, prior information is already implicitly included in exit poll regression models.
But the prior election returning voter mix in five presidential elections was mathematically and physically impossible. The exit polls indicate that there were more returning Nixon and Bush voters from the prior election than were actually still alive. This is absolute proof that the published exit polls were adjusted to match vote-miscounts. Garbage in, garbage out.
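The impossibility argument is simple arithmetic: returning voters from the prior election cannot exceed the prior voters who are still alive. The sketch below uses hypothetical vote totals and a rough annual mortality rate, not the actual poll figures:

```python
# Feasibility check on a "returning voters" crosstab.
# All numbers here are hypothetical, for illustration only.
def max_returning(prior_votes, annual_mortality=0.0125, years=4, turnout=1.0):
    """Upper bound on returning voters: prior voters who survived the
    inter-election period, assuming every survivor turns out (turnout=1.0
    is the most generous possible case)."""
    survival = (1 - annual_mortality) ** years
    return prior_votes * survival * turnout

prior = 50_000_000            # hypothetical prior-election vote total
bound = max_returning(prior)  # roughly 47.5 million can possibly return
claimed = 49_000_000          # hypothetical implied returning voters
print(claimed > bound)        # an impossible crosstab prints True
```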

Edison: There are two primary goals that are addressed by regression models in this paper:
1) general understanding of the data within a given state. In other words identifying variables that aid in a linear prediction of the candidate’s vote; and
2) predicting y, given x, for future observations.
Which data? The adjusted demographic data or the actual pristine data?
If Y = f(X), then X should not be forced to fit the recorded result.

Edison: For the purposes of this paper the sample of polling locations using the final end of night results are used as the response variable. Generally for all states past data tends to be a very good predictor of current results. In some states there are other predictors (e.g. precinct boundary changes, current voter registration, weather, etc.) that work well while in other states those same predictors provide no additional information and make the model unnecessarily complex.
But past data does not reflect the prior True Vote, so any regression analysis cannot predict the True Vote. It will however predict the bogus, recorded vote.

Edison: Again, the regression model presented here is an example model used for demonstration purposes (i.e. no formal model selection procedure was used). Furthermore, for this same purpose the non-informative prior is used. It’s clear from the output of the regression summary that there is a strong effect for 2008 candidate vote percentage, precincts with high Democrat vote in 2008 tend to have a very predictable Democrat vote in 2012. As one would expect the 2012 exit poll results have a strong effect when predicting the final polling location results. This example regression model for Florida is provided in Equation 2.
E(CANDj | x, θ) = β0 + β1·CANDEP2012j + β2·CAND2008j

All this is saying that a candidate’s vote share is predictable using regression analysis based on the 2008 recorded vote and 2012 adjusted precinct exit poll data. But if the precinct data is biased; the projection will reflect the bias. And the cycle continues in all elections that follow.

Edison: We can check to see if the observed data from the polling places are consistent with the fitted model. Based on the model and the predictive distribution, the model fits quite well without outliers in any of the precincts.
Of course the model will fit the bogus recorded vote quite well, because it was forced to match the recorded vote. But what if the observed recorded precinct vote data is manipulated?

Edison: Several important conclusions about the analysis of exit poll data can be drawn from this review of approaches using probability distributions. First, it is clear that there are many probability distribution components to an exit poll.
But the prior information (recorded vote and adjusted exit polls) used in the probability analysis is bogus as long as there is no consideration of the Election Fraud Factor.
Recorded Vote = True Vote + Fraud

Edison: This research on exit polling serves as an exploration of ways to investigate and analyze data and to provide alternate, complementary approaches that may be more fully integrated into standard election (and non-election) exit polling. These procedures are only a few of the many ways that can be used to analyze exit poll data. These approaches provide an alternate way to summarize and report on these data. It also provides additional visualization and ways to view the data and how the data are distributed.
But the core problem is not addressed here. All alternative models are useless if they are based on prior and current recorded vote data which has been corrupted.

Edison: Further topics include small sample sizes, missing data, censored data, and a deeper investigation into absentee/early voting. Additionally, these approaches can be used to investigate various complex sample design techniques (e.g. stratified, cluster, multi-phase, etc.) and evaluate how the designs interact with probabilistic approaches in an exit polling context. Further hierarchical modeling may provide additional insight into the complexities of the exit poll data.
These sample design techniques are all based on recorded vote data. Why are pristine exit polls always adjusted (forced) to match the Election Day recorded vote to within 0.1%?
Proof: Unadjusted Exit Polls are forced to match the Recorded vote:
https://docs.google.com/spreadsheet/ccc?key=0AjAk1JUWDMyRdFIzSTJtMTJZekNBWUdtbWp3bHlpWGc#gid=15

 
