Election Fraud Lockdown: No Discussion by Politicians, Forecasters and Media Pundits

October 31, 2011

Richard Charnin (TruthIsAll)

Election forecasters measure their performance against the recorded vote. But there is a fundamental flaw in their models: election fraud is never mentioned as a factor. The implicit assumption is that the official recorded vote represents the True Vote (i.e., that the election is fraud-free). But it cannot, since we know that millions of votes go uncounted in every election.

The forecasters disregard the Systemic Election Fraud Factor.
Recorded Vote = True Vote + Election Fraud

http://www.richardcharnin.com/AcademicandMediaNeverDiscussElectionFraud.htm

Forecasters who predicted a Bush win in 2000 and 2004 were only “correct” because of rigged recorded vote counts. Gore won the recorded vote by 540,000; he won the True Vote by 3 million. Kerry lost the recorded vote by 3 million; he won the True Vote by 10 million. The pattern continued in 2008. Obama won the recorded vote by 9.5 million; he won the True Vote by nearly 23 million.

This graph summarizes the discrepancies between the 1988-2008 State Exit Polls and the corresponding Recorded Votes.

In 2004, Kerry had a slight 1% lead in the weighted pre-election state and national polls. After allocating the 6% undecided voters, he was projected to win by 51.4-47.7%. Kerry had 51.7% in both the unadjusted state exit poll aggregate (70,000 respondents) and the unadjusted National Exit Poll, a subset of 13,660 respondents.

The 2004 Election True Vote Model is based on total votes cast in 2000 (including uncounted votes), adjusted for voter mortality and the 2004 turnout of living 2000 voters. Vote shares are based on the 2004 National Exit Poll “Voted 2000” crosstab. The model indicates that Kerry won by 53.2-45.4% (66.9-57.1m). It shows that for Bush to obtain his 3.0m margin in 2004, he would have required 21.5% of returning Gore voters!
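For concreteness, here is a minimal sketch of the returning-voter arithmetic in Python. The mortality rate, turnout rate, and 125.7m total are from the model as described; the 2000 vote split and Bush's crosstab shares are hypothetical placeholders, so the computed defection share only approximates the 21.5% figure.

```python
# Sketch of the True Vote Model's returning-voter arithmetic.
# Mortality, turnout, and the 125.7m total come from the post; the
# 2000 vote split and Bush's crosstab shares are hypothetical.

MORTALITY = 0.0122   # annual voter mortality (from the post)
TURNOUT   = 0.95     # turnout of surviving 2000 voters (from the post)

def returning(votes_2000_m: float) -> float:
    """Millions of 2000 voters alive and voting again in 2004."""
    return votes_2000_m * (1 - MORTALITY) ** 4 * TURNOUT

# 2000 votes cast (millions, including uncounted): hypothetical split.
gore_ret, bush_ret, other_ret = map(returning, (54.0, 50.0, 6.8))

total_2004 = 125.7   # 2004 votes cast (from the post)
new_voters = total_2004 - (gore_ret + bush_ret + other_ret)

# Hypothetical Bush shares of returning Bush/other and new voters.
s_bb, s_ob, s_nb = 0.90, 0.25, 0.40
bush_needed = 62.0   # Bush's recorded 2004 total (millions)

# Share of returning Gore voters Bush needed for his recorded total:
required = (bush_needed - s_bb * bush_ret - s_ob * other_ret
            - s_nb * new_voters) / gore_ret
print(f"Required share of returning Gore voters: {required:.1%}")
```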

Bush won the official recorded vote by 50.7-48.4%. The Final National Exit Poll was forced to match the recorded vote.

https://docs.google.com/spreadsheet/ccc?key=0AjAk1JUWDMyRdGN3WEZNTUFaR0tfOHVXTzA1VGRsdHc#gid=31

In 2008, the national aggregate of the unadjusted state exit polls (81,388 respondents, weighted by voting population) indicated that Obama won by 58.0-40.2%. There is a 97.5% probability that he had at least 57.5% (assuming an unbiased sample).

The unadjusted 2008 National Exit Poll (17,836 respondents) is a subset of the state polls. Obama won by a massive 61.0-37.2% margin. The probability is 97.5% that he had at least 60% (assuming an unbiased sample).
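Both probability statements follow from the binomial margin of error. A minimal sketch of that calculation, treating each poll as a simple random sample (no exit-poll cluster design effect is applied):

```python
from math import sqrt

def lower_bound_975(p: float, n: int) -> float:
    """One-sided 97.5% lower bound on a poll share, assuming a
    simple random sample (no cluster design effect)."""
    return p - 1.96 * sqrt(p * (1 - p) / n)

print(f"State aggregate: {lower_bound_975(0.580, 81388):.3f}")  # ~0.577
print(f"National poll:   {lower_bound_975(0.610, 17836):.3f}")  # ~0.603
```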

The 2008 True Vote Model is based on 2004 votes cast and the 2008 NEP “Voted 2004” crosstab. It indicates that Obama won by 58.0-40.5%.

Obama won the recorded vote by 52.9-45.6%. The Final National Exit Poll was forced to match the recorded vote.

Prominent election forecasters discussed their methodologies in the International Journal of Forecasting. The articles range from descriptions of diverse election forecasting models, such as those that use political futures markets and historical analysis, to those which evaluate the success of election forecasting in past elections. But none mention the taboo subject of historical election fraud. Are they that clueless? Or are they fearful of jeopardizing their positions by daring to suggest that our “democracy” is a myth?

This statement is from the American Association of Public Opinion Research (AAPOR):
“What is important to note is that at the close of Election Day, exit poll results are weighted to reflect the actual election outcomes. It is in this way that the final exit poll data can be used for its primary and most important purpose – to shed light on why the election turned out the way it did. That is, exit polls are just as important for the information they gather about the voters’ demographics and attitudinal predispositions towards the candidates and the campaign issues as they are for making the projections reported by news organizations on Election Night”.

So the purpose of the final exit poll is to get accurate demographic data by matching to the actual vote count. Is this the way to conduct statistical research, by adjusting the results to fit the recorded vote? What if the vote count is corrupted? They never even ask the question. The charade continues unabated.

Uncounted votes have steadily declined as a percentage of total votes cast, from 10.4% in 1988 to 2.7% in 2004. When they are added to the recorded vote to derive total votes cast for the 1988-2004 elections, the average Democratic unadjusted exit poll share was within 0.1% of the adjusted vote. But the 2004 exit poll discrepancies were different in kind and scope from those of the prior elections; they cannot be explained by uncounted votes alone.

This article will discuss the following topics:
- Election 2004 Forecast Models: The Track Record
- The American Association of Public Opinion Research (AAPOR)
- Uncounted Votes and Exit Poll Discrepancies (1988-2004)
- Projection and Post-election Models: Monte Carlo Simulation vs. Regression Analysis
- Implausible: Returning Gore voters required for Bush’s 3.0m margin in 2004

___________________________________________________________________________________

Election 2004 Forecast Models: The Track Record

The following election forecast models were executed 2-9 months before the 2004 election. All except one forecast that Bush would win the 2-party popular vote with an average 53.9% share. Bush had a 51.2% recorded share, but just 47.5% according to the aggregate unadjusted state exit polls. Furthermore, the estimated popular vote win probabilities were incompatible with the forecast vote shares (they were too low). None of the models forecast the electoral vote. None mentioned the possibility of election fraud.

Author      Date    Pick   2-pty  Win Prob
Recorded    2-Nov   Bush   51.2   Final

Beck-Tien   27-Aug  Kerry  50.1   50
Abramowitz  31-Jul  Bush   53.7   -
Campbell    06-Sep  Bush   53.8   97
Wlezien     27-Jul  Bush   52.9   75
Holbrook    30-Aug  Bush   54.5   92
Lockerbie   21-May  Bush   57.6   92
Norpoth     29-Jan  Bush   54.7   95

Compare the above projections to these pre-election poll and exit poll-based models.

Election Model (11/01/04)
Assumption: Kerry wins 75% of undecided voters
Kerry 51.8%; 99.9% win probability
Monte Carlo EV Simulation: 4995 wins/5000 trials

Final 5 National Polls: Kerry 51.6%; 94.5% win probability
2004 Election Model Graphs
National Trend
http://www.richardcharnin.com/index_files/ElectionModel_9609_image001.png
Electoral vote and win probability
http://www.richardcharnin.com/index_files/ElectionModel_9609_image002.png
Electoral and popular vote
http://www.richardcharnin.com/index_files/ElectionModel_9609_image003.png
Undecided voter allocation impact on electoral vote and win probability
http://www.richardcharnin.com/index_files/ElectionModel_9609_image004.png
National Poll Trend
http://www.richardcharnin.com/index_files/ElectionModel_9609_image008.png
Monte Carlo Simulation
http://www.richardcharnin.com/index_files/ElectionModel_9609_image011.png
Monte Carlo Electoral Vote Histogram
http://www.richardcharnin.com/index_files/ElectionModel_9609_image012.png

Unadjusted State Exit Polls (70,000 respondents)
State Aggregate: Kerry 52.5%; 99.1% win prob.

National Exit Poll (12:22am, 13,047 respondents)
NEP 1: Kerry 51.9%; 96.9% win prob
39/41 Gore/Bush weights

NEP 2: Kerry 52.9%; 99.8% win prob.
37.6/37.4 adjusted, plausible weights

True Vote Model
Kerry 53.7%; 99.99% win prob.
12:22am NEP, 125.7m votes cast; 1.22% annual voter mortality, 95% voter turnout

The following article describes the methodologies used by a number of 2008 election forecasters. None of the articles discuss historical evidence of election fraud or its likely impact on the forecast.
__________________________________________________________________________________

Election Forecasters Preparing for Historic Election

Science Daily (June 23, 2008) — Anticipating what is likely to be one of the most interesting elections in modern history, University at Buffalo professor of political science James E. Campbell and Michael S. Lewis-Beck, professor of political science at the University of Iowa, have assembled the insights of prominent election forecasters in a special issue of the International Journal of Forecasting published this month.

Each of the articles demonstrates the challenges of election forecasting, according to Campbell, chair of UB’s Department of Political Science, who since 1992 has produced a trial-heat-and-economy forecast of the U.S. presidential election. His forecast uses the second-quarter growth rate in the gross domestic product and results of the trial-heat (preference) poll released by Gallup near Labor Day to predict what percentage of the popular vote will be received by the major party candidates.

The articles range from descriptions of diverse election forecasting models, such as those that use political futures markets and historical analysis, to articles that evaluate the success of election forecasting in past elections. Two of the articles address a topic particularly pertinent to the 2008 presidential election: whether open seat and incumbent elections should be treated differently by election forecasters.

“One of the biggest misunderstandings about election forecasting is the idea that accurate forecasts must assume that the campaign does not matter,” Campbell explains. “This is not true. First, one of the reasons that forecasts can be accurate is that they are based on measures of the conditions that influence campaigns. So campaign effects are, to a significant degree, predictable. Second, forecasters know that their forecasts are not perfect. Forecasts are based on imperfect measures and may not capture all of the factors affecting a campaign. Some portion of campaign effects is always unpredictable.”

Though some campaign effects are unpredictable, “the extent of these effects is usually limited,” Campbell points out. In the historic contest between presumptive presidential nominees Barack Obama and John McCain, one thing is certain: “Forecasting this election will be more difficult than usual,” Campbell says. “First, there isn’t an incumbent. Approval ratings and the economy are likely to provide weaker clues to an election’s outcome when the incumbent is not running. Second, Democrats had a very divided nomination contest and it is unclear how lasting the divisions will be. Third, many Republicans are not very enthusiastic about McCain and it is unclear how strong Republican turnout will be for him.”

Of the six different forecast models described in the journal articles, only two have a forecast at this point. The other four will have forecasts between late July and Labor Day. The journal articles can be downloaded at sciencedirect.com. Below are brief descriptions:

In “U.S. Presidential Election Forecasting: An Introduction” journal co-editors Campbell and Lewis-Beck provide a brief history of the development of the election forecasting field and an overview of the articles in this special issue.

In “Forecasting the Presidential Primary Vote: Viability, Ideology and Momentum,” Wayne P. Steger of DePaul University takes on the difficult task of improving on forecasting models of presidential nominations. He focuses on the forecast of the primary vote in contests where the incumbent president is not a candidate, comparing models using information from before the Iowa Caucus and New Hampshire primary to those taking these momentum-inducing events into account.

In “It’s About Time: Forecasting the 2008 Presidential Election with the Time-for-Change Model,” Alan I. Abramowitz of Emory University updates his referenda theory-based “time for a change” election forecasting model first published in 1988. Specifically, his model forecasts the two-party division of the national popular vote for the in-party candidate based on presidential approval in June, economic growth in the first half of the election year, and whether the president’s party is seeking more than a second consecutive term in office.

In “The Economy and the Presidential Vote: What the Leading Indicators Reveal Well in Advance,” Robert S. Erikson of Columbia University and Christopher Wlezien of Temple University ask what is the preferred economic measure in election forecasting and what is the optimal time before the election to issue a forecast.

In “Forecasting Presidential Elections: When to Change the Model?” Michael S. Lewis-Beck of the University of Iowa and Charles Tien of Hunter College, CUNY ask whether the addition of variables can genuinely reduce forecasting error, as opposed to merely boosting statistical fit by chance. They explore the evolution of their core model – presidential vote as a function of GNP growth and presidential popularity. They compare it to a more complex “jobs” model they have developed over the years.

In “Forecasting Non-Incumbent Presidential Elections: Lessons Learned from the 2000 Election,” Andrew H. Sidman, Maxwell Mak, and Matthew J. Lebo of Stony Brook University use a Bayesian Model Averaging approach to the question of whether economic influences have a muted impact on elections without an incumbent as a candidate. The Sidman team concludes that a discount of economic influences actually weakens general forecasting performance.

In “Evaluating U.S. Presidential Election Forecasts and Forecasting Equations,” UB’s Campbell responds to critics of election forecasting by identifying the theoretical foundations of forecasting models and offering a reasonable set of benchmarks for assessing forecast accuracy. Campbell’s analyses of his trial-heat and economy forecasting model and of Abramowitz’s “time for a change” model indicate that it is still at least an open question as to whether models should be revised to reflect more muted referendum effects in open seat or non-incumbent elections.

In “Campaign Trial Heats as Election Forecasts: Measurement Error and Bias in 2004 Presidential Campaign Polls,” Mark Pickup of Oxford University and Richard Johnston of the University of Pennsylvania provide an assessment of polls as forecasts. Comparing various sophisticated methods for assessing overall systematic bias in polling on the 2004 U.S. presidential election, Johnston and Pickup show that three polling houses had large and significant biases in their preference polls.

In “Prediction Market Accuracy in the Long Run,” Joyce E. Berg, Forrest D. Nelson, and Thomas A. Rietz of the University of Iowa’s Tippie College of Business compare the presidential election forecasts produced from the Iowa Electronic Market (IEM) to forecasts from an exhaustive body of opinion polls. Their finding is that the IEM is usually more accurate than the polls.

In “The Keys to the White House: An Index Forecast for 2008,” Allan J. Lichtman of American University provides an historian’s checklist of 13 conditions that together forecast the presidential contest. These “keys” are a set of “yes or no” questions about how the president’s party has been doing and the circumstances surrounding the election. If fewer than six keys are turned against the in-party, it is predicted to win the election. If six or more keys are turned, the in-party is predicted to lose. Lichtman notes that this rule correctly predicted the winner in every race since 1984.
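The Keys method reduces to a simple threshold rule. A toy sketch (the 13 key definitions themselves are Lichtman's and are not reproduced here):

```python
def keys_forecast(keys_against_in_party: int) -> str:
    """Lichtman's threshold rule: six or more of the 13 keys turned
    against the in-party predict a loss; fewer than six, a win."""
    assert 0 <= keys_against_in_party <= 13
    return "in-party loses" if keys_against_in_party >= 6 else "in-party wins"

print(keys_forecast(5))  # in-party wins
print(keys_forecast(6))  # in-party loses
```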

In “The State of Presidential Election Forecasting: The 2004 Experience,” Randall J. Jones, Jr. reviews the accuracy of all of the major approaches used in forecasting the 2004 presidential election. In addition to examining campaign polls, trading markets, and regression models, he examines the records of Delphi expert surveys, bellwether states, and probability models.

___________________________________________________________________________________

The American Association of Public Opinion Research (AAPOR)

This paragraph from the article says it all:
“What is important to note is that at the close of Election Day, exit poll results are weighted to reflect the actual election outcomes. It is in this way that the final exit poll data can be used for its primary and most important purpose – to shed light on why the election turned out the way it did. That is, exit polls are just as important for the information they gather about the voters’ demographics and attitudinal predispositions towards the candidates and the campaign issues as they are for making the projections reported by news organizations on Election Night”.

The purpose of the Final exit poll is to get accurate demographic data by matching to the actual vote count? Is this the way to conduct statistical research? What if the vote count is fraudulent? What is their Null Hypothesis? AAPOR refers to challenges facing exit pollsters, but they ignore the challenge of calculating the impact of election fraud on the recorded vote.

If the vote counts were accurate, the demographics would be correct. Since the recorded vote counts are bogus, so are the demographics. Assuming the vote count is pristine immediately invalidates the demographics that are forced to match it. It’s a very simple concept if you really want to do the best analysis possible and get at the truth: it’s Basic Statistics 101. We need to analyze the raw, unadjusted exit poll data. One would assume that this august group would want to see it. But in their world, corruption is non-existent. They believe that the Recorded Vote is identical to the True Vote.

AAPOR also claims: “An exit poll sample is not representative of the entire electorate until the survey is completed at the end of the day. Different types of voters turn out at different times of the day”. But they don’t mention that Kerry led the exit polls by a steady 51-47% from 4pm (8,349 respondents) through 7:30pm (11,027) to 12:22am (13,047). Nor do they mention that uncounted votes are 70-80% Democratic and contribute significantly to the exit poll discrepancies.

AAPOR parrots the Reluctant Bush Responder (rBr) hypothesis used by exit pollsters Edison-Mitofsky: “In recent national and state elections, Republicans have declined to fill out an exit poll questionnaire at a higher rate than Democratic voters, producing a slight Democratic skew”. But the 2004 Final Exit Poll indicated that Bush 2000 voters comprised 43% of the 2004 electorate, compared to 37% for returning Gore voters. The 43% weighting was mathematically impossible: it implies 52.6 million returning Bush voters (43% of the 122.3 million recorded 2004 votes), more than the 50.5 million votes Bush received in 2000, before even accounting for voter mortality and turnout. And according to the E-M report, the highest exit poll refusal rates were in Democratic states. So much for the rBr myth.

___________________________________________________________________________________

1988-2004: Uncounted Votes and Exit Poll Discrepancies

Uncounted Votes have steadily declined as a percent of total votes cast – from 10.4% in 1988 to 2.7% in 2004. When added to the recorded vote in order to derive the total votes cast for the five elections from 1988-2004, the average Democratic unadjusted exit poll share is within 0.1% of the adjusted vote.

Comparing the adjusted vote to the aggregate exit poll and recorded vote (2-party exit poll share in parentheses):

Year  Democrat  Recorded  Exit Poll (2-party)  Adjusted
1988  Dukakis   45.6      46.8 (47.3)          48.7
1992  Clinton   43.0      45.7 (56.8)          45.7
1996  Clinton   49.2      50.2 (55.8)          51.4
2000  Gore      48.4      49.4 (51.4)          49.7
2004  Kerry     48.3      51.8 (52.3)          49.0
Avg             46.9      48.8 (52.7)          48.9

Look at this graph. In each of the last five elections the unadjusted Democratic exit poll share exceeded the recorded vote. But which of the five stands out from the rest? The 2004 exit poll discrepancies were different in kind and scope from those of the prior four elections. Unlike 1988-2000, the 2004 discrepancies cannot be explained by uncounted votes alone.

Some exit poll critics claim that the large 1992 exit poll discrepancy (5.4 WPE, within-precinct error) proves that 2004 exit poll analyses (7.1 WPE) indicating the election was stolen are “crap” and “bad science”. After all, they say, there were no allegations of fraud in 1992. They fail to mention (or are unaware) that in 1992 Clinton beat Bush I by a recorded 43.6-38.0m (43.0-37.4%), but 9.4m votes were uncounted, and 70-80% of them were Democratic. When the uncounted votes are added, the adjusted vote becomes 50.7-40.3m (45.7-36.4%), which exactly matches Clinton’s unadjusted exit poll.
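As a check on that arithmetic, a minimal sketch assuming 75% of the uncounted votes were Democratic (the midpoint of the 70-80% range cited above):

```python
# 1992 recorded totals and uncounted votes (millions), from the text.
clinton_rec, bush_rec, uncounted = 43.6, 38.0, 9.4
dem_share_uncounted = 0.75   # assumed: midpoint of the 70-80% range

clinton_adj = clinton_rec + dem_share_uncounted * uncounted
bush_adj    = bush_rec + (1 - dem_share_uncounted) * uncounted

print(f"Clinton: {clinton_adj:.1f}m, Bush: {bush_adj:.1f}m")
# Roughly matches the adjusted 50.7-40.3m figures quoted above.
```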

From 1988-2000, after the uncounted-vote adjustment, there was a 0.85% average Democratic exit poll discrepancy and a 2.9 WPE. In 2004, after the 3.4m uncounted-vote adjustment, there was a 2.8% discrepancy, and Bush’s margin was reduced from 3.0m (62.0-59.0) to 1.3m (62.9-61.6). But uncounted votes were only one component of Election Fraud 2004: the Election Calculator model determined that approximately 5m votes were switched from Kerry to Bush.

___________________________________________________________________________________
Projection and Post-election Models: Monte Carlo Simulation vs. Regression Analysis

There are two basic methods used to forecast presidential elections:
1) Projections based on state and national polling trends which forecast the popular and electoral vote, updated frequently right up to the election.
2) Regression models based on historical time-series which forecast the popular vote, executed months before the election.

Polling models, when adjusted for undecided voters and estimated turnout, are superior to regression models. Models which predicted a Bush win in 2000 and 2004 were technically “correct”; Bush won the recorded vote. But Gore and Kerry won the True Vote. Except for the Election Calculator (below), all models assume that elections will be fraud-free.

Academics and political scientists create multiple regression models which utilize time-series data as relevant input variables: economic growth, inflation, job growth, interest rates, foreign policy, historical election vote shares, etc. Regression modeling is an interesting theoretical exercise but does not account for the daily events which affect voter psychology. Fraud could conceivably skew regression models and media tracking polls.

Statistical analyses provided by Internet bloggers concluded that BushCo stole the 2004 election. Their findings were dismissed by the media as “just another conspiracy theory”. A few “conspiracy fraudsters” were banned after posting on various liberal discussion forums. And even today, the most popular polling sites never discuss election fraud. But the Democrats haven’t raised the issue after two presidential and scores of congressional and gubernatorial elections were stolen, and neither has the media, supposedly the guardian of democracy. Is there anyone who still truly believes that elections are legitimate?

There has been much misinformation regarding electoral and popular vote win probability calculations. In the Election Model, the latest state pre-election polls are used to project the vote after allocating undecided voters. The model assumes the election is held on the day of the projection.

The projections determine the probability of winning each state for input to the simulation. The probability of winning the popular vote is based on the 2-party projected vote share and an estimated margin of error:
P = NORMDIST (vote share, 0.50, MoE/1.96, True).

The expected electoral vote is the average of all the election trials. The probability of capturing at least 270 electoral votes is a simple ratio of the number of winning trials divided by the total number of trials.
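A minimal sketch of this two-step procedure, with illustrative inputs (the state shares, margins of error, and electoral vote counts below are placeholders, not the Election Model's actual data):

```python
import random
from statistics import NormalDist

# Illustrative inputs: projected 2-party share, MoE, electoral votes.
# These are placeholders, not the Election Model's actual 2004 data.
states = {
    "OH": (0.51, 0.03, 20),
    "FL": (0.49, 0.03, 27),
    "PA": (0.52, 0.04, 21),
}
SAFE_EV = 247  # electoral votes assumed safe (placeholder)

def win_prob(share: float, moe: float) -> float:
    # P = NORMDIST(vote share, 0.50, MoE/1.96, True), as in the text.
    return NormalDist(0.50, moe / 1.96).cdf(share)

def simulate(trials: int = 5000):
    probs = {st: win_prob(s, m) for st, (s, m, _) in states.items()}
    wins = total_ev = 0
    for _ in range(trials):
        ev = SAFE_EV + sum(v for st, (_, _, v) in states.items()
                           if random.random() < probs[st])
        total_ev += ev
        wins += ev >= 270
    # Expected EV is the average over all trials; the win probability
    # is the simple ratio of winning trials to total trials.
    return total_ev / trials, wins / trials

mean_ev, p_win = simulate()
print(f"Expected EV: {mean_ev:.0f}  P(at least 270 EV): {p_win:.1%}")
```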


4 responses to “Election Fraud Lockdown: No Discussion by Politicians, Forecasters and Media Pundits”

  1. Sugel

    November 26, 2011 at 9:16 am

    The Roper Center is making available to researchers its entire collection of state election day exit polls. This collection consists of numerous studies dating back to 1978 right up to the most recent polls conducted in 2010. Questionnaires and computer-readable datafiles are available for each study. Typically, state election day exit polls consist of congressional and/or gubernatorial vote questions in addition to questions on important statewide and local issues at the time of the polls. Like national polls, state Election Day exit polls include basic demographic variables in each study. Sample sizes usually range from 800-2000 respondents. Please email Data Services at DataServices-RoperCenter@uconn.edu for information concerning fees, other studies, or to answer any questions you may have.

  2. Lorbee

    August 2, 2012 at 5:36 pm

    You’re forgetting those hearings that followed and the witnesses from the Florida Panhandle who were told the polls were closed and the race was already called. They still had an hour to vote and were coming home from work, but were called on their phones to head home since it was “too late.” There is a time difference in the panhandle, and ultimately, thanks to the liberal news channels, Florida had been called FOR GORE. I will never forget hearing Mary Matalin live on air crying out “why are they calling this NOW?” No one seems to remember that little tidbit. The panhandle would have definitely been for Bush and no one wants to admit it.

  3. Lorbee

    August 2, 2012 at 5:37 pm

    Not to mention all those votes from the military that languished on ships, somehow got “hung up” and were never counted either. Nice.
