
Monthly Archives: October 2011

Election Fraud Lockdown: No Discussion by Politicians, Forecasters and Media Pundits

Richard Charnin (TruthIsAll)

Election forecasters measure their performance against the recorded vote. But there is a fundamental flaw in their models: Election Fraud is never mentioned as a factor. The implicit assumption is that the official recorded vote represents the True Vote (i.e., that the election is fraud-free). But it cannot, since we know that millions of votes are uncounted in every election.

The forecasters disregard the Systemic Election Fraud Factor.
Recorded Vote = True Vote + Election Fraud

http://www.richardcharnin.com/AcademicandMediaNeverDiscussElectionFraud.htm

Forecasters who predicted a Bush win in 2000 and 2004 were only “correct” because of rigged recorded vote counts. Gore won the recorded vote by 540,000; he won the True Vote by 3 million. Kerry lost the recorded vote by 3 million; he won the True Vote by 10 million. The pattern continued in 2008. Obama won the recorded vote by 9.5 million; he won the True Vote by nearly 23 million.

This graph summarizes the discrepancies between the 1988-2008 State Exit Polls and the corresponding Recorded Votes.

In 2004, Kerry had a slight 1% lead in the weighted pre-election state and national polls. After allocating the 6% undecided voters, he was projected to win by 51.4-47.7%. Kerry had 51.7% in both the unadjusted state exit poll aggregate (70,000 respondents) and the unadjusted National Exit Poll, a subset of 13,660 respondents.

The 2004 Election True Vote Model is based on 2000 votes cast (includes uncounted votes), adjusted for voter mortality and 2000 voter turnout in 2004. Vote shares are based on the 2004 National Exit Poll “Voted 2000” crosstab. The model indicates that Kerry won by 53.2-45.4% (66.9-57.1m). It proves that for Bush to obtain his 3.0m margin in 2004, he would have required 21.5% of returning Gore voters!
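Here is a minimal sketch of the True Vote Model arithmetic in Python. The 2000 votes-cast split and the "Voted 2000" crosstab shares are rough placeholders for illustration (the exact inputs are in the spreadsheet linked below), but even with these placeholder inputs the skeleton lands close to the 66.9m, 53.2% result:

```python
# Illustrative True Vote Model skeleton. Inputs marked "approx." are placeholders;
# the post's exact figures are in the linked spreadsheet.
MORTALITY_4YR = 0.05        # ~1.22% per year over four years (from the post)
TURNOUT_PRIOR = 0.95        # turnout of surviving 2000 voters (from the post)
VOTES_CAST_2004 = 125.7e6   # total 2004 votes cast (from the post)

# 2000 votes cast, including uncounted (approx. split, in line with a 3m Gore margin)
gore_2000, bush_2000, other_2000 = 53.5e6, 50.5e6, 3.9e6

def returning(prior_votes):
    """2000 voters still living and turning out in 2004."""
    return prior_votes * (1 - MORTALITY_4YR) * TURNOUT_PRIOR

ret_gore, ret_bush, ret_other = map(returning, (gore_2000, bush_2000, other_2000))
new_voters = VOTES_CAST_2004 - (ret_gore + ret_bush + ret_other)

# Kerry's share of each returning-voter group, per the "Voted 2000" crosstab (approx.)
kerry = 0.91 * ret_gore + 0.10 * ret_bush + 0.64 * ret_other + 0.57 * new_voters
print(f"Kerry: {kerry/1e6:.1f}m of {VOTES_CAST_2004/1e6:.1f}m "
      f"({kerry/VOTES_CAST_2004:.1%})")   # roughly the post's 66.9m, 53.2%
```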

Bush won the official recorded vote by 50.7-48.4%. The Final National Exit Poll was forced to match the recorded vote.

https://docs.google.com/spreadsheet/ccc?key=0AjAk1JUWDMyRdGN3WEZNTUFaR0tfOHVXTzA1VGRsdHc#gid=31

In 2008, the national aggregate of the unadjusted state exit polls (81,388 respondents, weighted by voting population) indicated that Obama won by 58.0-40.2%. There is a 97.5% probability that he had at least 57.5% (assuming an unbiased sample).

The unadjusted 2008 National Exit Poll (17,836 respondents) is a subset of the state polls. Obama won by a massive 61.0-37.2% margin. The probability is 97.5% that he had at least 60% (assuming an unbiased sample).
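The "97.5% probability" figures can be read as the lower bound of a 95% confidence interval around the poll share. A minimal sketch follows, assuming a simple binomial margin of error inflated by a cluster-design factor of about 1.3 (an assumption; the exact MoE formula is not stated here):

```python
import math

def lower_bound_95(share, n, cluster=1.3):
    """Lower end of a 95% CI for a poll share; under an unbiased-sample
    assumption the true share exceeds this bound with ~97.5% probability."""
    moe = 1.96 * cluster * math.sqrt(share * (1 - share) / n)
    return share - moe

print(f"State aggregate (n=81,388): {lower_bound_95(0.580, 81388):.3f}")  # the "at least 57.5%"
print(f"National poll  (n=17,836): {lower_bound_95(0.610, 17836):.3f}")   # the "at least 60%"
```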

The 2008 True Vote Model is based on 2004 votes cast and the 2008 NEP “Voted 2004” crosstab. It indicates that Obama won by 58.0-40.5%.

Obama won the recorded vote by 52.9-45.6%. The Final National Exit Poll was forced to match the recorded vote.

Prominent election forecasters discussed their methodologies in the International Journal of Forecasting. The articles range from descriptions of diverse election forecasting models, such as those that use political futures markets and historical analysis, to those which evaluate the success of election forecasting in past elections. But none mention the taboo subject of historical election fraud. Are they that clueless? Or are they fearful of jeopardizing their positions by daring to suggest that our “democracy” is a myth?

This statement is from the American Association of Public Opinion Research (AAPOR):
“What is important to note is that at the close of Election Day, exit poll results are weighted to reflect the actual election outcomes. It is in this way that the final exit poll data can be used for its primary and most important purpose – to shed light on why the election turned out the way it did. That is, exit polls are just as important for the information they gather about the voters’ demographics and attitudinal predispositions towards the candidates and the campaign issues as they are for making the projections reported by news organizations on Election Night”.

So the purpose of the final exit poll is to get accurate demographic data by matching to the actual vote count. Is this the way to conduct statistical research, by adjusting the results to fit the recorded vote? What if the vote count is corrupted? They never even ask the question. The charade continues unabated.

Uncounted votes have steadily declined as a percent of total votes cast – from 10.4% in 1988 to 2.7% in 2004. When they are added to the recorded vote to derive total votes cast from 1988-2004, the average Democratic unadjusted exit poll share was within 1% of the adjusted vote. But the 2004 exit poll discrepancies were different in kind and scope from those of the prior elections; they cannot be explained by uncounted votes alone.

This article will discuss the following topics:
. Election 2004 Forecast Models: The Track Record
. The American Association of Public Opinion Research (AAPOR)
. Uncounted Votes and Exit Poll Discrepancies (1988-2004)
. Projection and Post-election Models: Monte Carlo Simulation vs. Regression Analysis
. Implausible: Returning Gore voters required for Bush’s 3.0m margin in 2004

___________________________________________________________________________________

Election 2004 Forecast Models: The Track Record

The following election forecast models were executed 2-9 months before the 2004 election. All except one forecast that Bush would win the 2-party popular vote with an average 53.9% share. Bush had a 51.2% recorded share, but just 47.5% according to the aggregate unadjusted state exit polls. Furthermore, the estimated popular vote win probabilities were incompatible with the forecast vote shares (they were too low). None of the models forecast the electoral vote. None mentioned the possibility of election fraud.

Author Date Pick 2-pty Win Prob
Recorded 2-Nov Bush 51.2 Final

Beck-Tien 27-Aug Kerry 50.1 50
Abramowitz 31-Jul Bush 53.7 -
Campbell 06-Sep Bush 53.8 97
Wlezien 27-Jul Bush 52.9 75
Holbrook 30-Aug Bush 54.5 92
Lockerbie 21-May Bush 57.6 92
Norpoth 29-Jan Bush 54.7 95

Compare the above projections to these pre-election poll and exit poll-based models.

Election Model (11/01/04)
Assumption: Kerry wins 75% of undecided voters
Kerry 51.8%; 99.9% win probability
Monte Carlo EV Simulation: 4995 wins/5000 trials

Final 5 National Polls: Kerry 51.6%; 94.5% win probability
2004 Election Model Graphs
National Trend
http://www.richardcharnin.com/index_files/ElectionModel_9609_image001.png
Electoral vote and win probability
http://www.richardcharnin.com/index_files/ElectionModel_9609_image002.png
Electoral and popular vote
http://www.richardcharnin.com/index_files/ElectionModel_9609_image003.png
Undecided voter allocation impact on electoral vote and win probability
http://www.richardcharnin.com/index_files/ElectionModel_9609_image004.png
National Poll Trend
http://www.richardcharnin.com/index_files/ElectionModel_9609_image008.png
Monte Carlo Simulation
http://www.richardcharnin.com/index_files/ElectionModel_9609_image011.png
Monte Carlo Electoral Vote Histogram
http://www.richardcharnin.com/index_files/ElectionModel_9609_image012.png

Unadjusted State Exit Polls (70,000 respondents)
State Aggregate: Kerry 52.5%; 99.1% win prob.

National Exit Poll (12:22am, 13,047 respondents)
NEP 1: Kerry 51.9%; 96.9% win prob
39/41 Gore/Bush weights

NEP 2: Kerry 52.9%; 99.8% win prob.
37.6/37.4 adjusted, plausible weights

True Vote Model
Kerry 53.7%; 99.99% win prob.
12:22am NEP, 125.7m votes cast; 1.22% annual voter mortality, 95% voter turnout

The following article describes the methodologies used by a number of 2008 election forecasters. None of the articles discuss historical evidence of election fraud or its likely impact on the forecast.
__________________________________________________________________________________

Election Forecasters Preparing for Historic Election

Science Daily (June 23, 2008) — Anticipating what is likely to be one of the most interesting elections in modern history, University at Buffalo professor of political science James E. Campbell and Michael S. Lewis-Beck, professor of political science at the University of Iowa, have assembled the insights of prominent election forecasters in a special issue of the International Journal of Forecasting published this month.

Each of the articles demonstrates the challenges of election forecasting, according to Campbell, chair of UB’s Department of Political Science, who since 1992 has produced a trial-heat-and-economy forecast of the U.S. presidential election. His forecast uses the second-quarter growth rate in the gross domestic product and results of the trial-heat (preference) poll released by Gallup near Labor Day to predict what percentage of the popular vote will be received by the major party candidates.

The articles range from descriptions of diverse election forecasting models, such as those that use political futures markets and historical analysis, to articles that evaluate the success of election forecasting in past elections. Two of the articles address a topic particularly pertinent to the 2008 presidential election: whether open seat and incumbent elections should be treated differently by election forecasters.

“One of the biggest misunderstandings about election forecasting is the idea that accurate forecasts must assume that the campaign does not matter,” Campbell explains. “This is not true. First, one of the reasons that forecasts can be accurate is that they are based on measures of the conditions that influence campaigns. So campaign effects are, to a significant degree, predictable. Second, forecasters know that their forecasts are not perfect. Forecasts are based on imperfect measures and may not capture all of the factors affecting a campaign. Some portion of campaign effects is always unpredictable.”

Though some campaign effects are unpredictable, “the extent of these effects is usually limited,” Campbell points out. In the historic contest between presumptive presidential nominees Barack Obama and John McCain, one thing is certain: “Forecasting this election will be more difficult than usual,” Campbell says. “First, there isn’t an incumbent. Approval ratings and the economy are likely to provide weaker clues to an election’s outcome when the incumbent is not running. Second, Democrats had a very divided nomination contest and it is unclear how lasting the divisions will be. Third, many Republicans are not very enthusiastic about McCain and it is unclear how strong Republican turnout will be for him.”

Of the six different forecast models described in the journal articles, only two have a forecast at this point. The other four will have forecasts between late July and Labor Day. The journal articles can be downloaded at sciencedirect.com. Below are brief descriptions:

In “U.S. Presidential Election Forecasting: An Introduction” journal co-editors Campbell and Lewis-Beck provide a brief history of the development of the election forecasting field and an overview of the articles in this special issue.

In “Forecasting the Presidential Primary Vote: Viability, Ideology and Momentum,” Wayne P. Steger of DePaul University takes on the difficult task of improving on forecasting models of presidential nominations. He focuses on the forecast of the primary vote in contests where the incumbent president is not a candidate, comparing models using information from before the Iowa Caucus and New Hampshire primary to those taking these momentum-inducing events into account.

In “It’s About Time: Forecasting the 2008 Presidential Election with the Time-for-Change Model,” Alan I. Abramowitz of Emory University updates his referenda theory-based “time for a change” election forecasting model first published in 1988. Specifically, his model forecasts the two-party division of the national popular vote for the in-party candidate based on presidential approval in June, economic growth in the first half of the election year, and whether the president’s party is seeking more than a second consecutive term in office.

In “The Economy and the Presidential Vote: What the Leading Indicators Reveal Well in Advance,” Robert S. Erikson of Columbia University and Christopher Wlezien of Temple University ask what is the preferred economic measure in election forecasting and what is the optimal time before the election to issue a forecast.

In “Forecasting Presidential Elections: When to Change the Model?” Michael S. Lewis-Beck of the University of Iowa and Charles Tien of Hunter College, CUNY ask whether the addition of variables can genuinely reduce forecasting error, as opposed to merely boosting statistical fit by chance. They explore the evolution of their core model – presidential vote as a function GNP growth and presidential popularity. They compare it to a more complex, “jobs” model they have developed over the years.

In “Forecasting Non-Incumbent Presidential Elections: Lessons Learned from the 2000 Election,” Andrew H. Sidman, Maxwell Mak, and Matthew J. Lebo of Stony Brook University use a Bayesian Model Averaging approach to the question of whether economic influences have a muted impact on elections without an incumbent as a candidate. The Sidman team concludes that a discount of economic influences actually weakens general forecasting performance.

In “Evaluating U.S. Presidential Election Forecasts and Forecasting Equations,” UB’s Campbell responds to critics of election forecasting by identifying the theoretical foundations of forecasting models and offering a reasonable set of benchmarks for assessing forecast accuracy. Campbell’s analyses of his trial-heat and economy forecasting model and of Abramowitz’s “time for a change” model indicates that it is still at least an open question as to whether models should be revised to reflect more muted referendum effects in open seat or non-incumbent elections.

In “Campaign Trial Heats as Election Forecasts: Measurement Error and Bias in 2004 Presidential Campaign Polls,” Mark Pickup of Oxford University and Richard Johnston of the University of Pennsylvania provide an assessment of polls as forecasts. Comparing various sophisticated methods for assessing overall systematic bias in polling on the 2004 U.S. presidential election, Johnston and Pickup show that three polling houses had large and significant biases in their preference polls.

In “Prediction Market Accuracy in the Long Run,” Joyce E. Berg, Forrest D. Nelson, and Thomas A. Reitz from the University of Iowa’s Tippie College of Business, compare the presidential election forecasts produced from the Iowa Electronic Market (IEM) to forecasts from an exhaustive body of opinion polls. Their finding is that the IEM is usually more accurate than the polls.

In “The Keys to the White House: An Index Forecast for 2008,” Allan J. Lichtman of American University provides an historian’s checklist of 13 conditions that together forecast the presidential contest. These “keys” are a set of “yes or no” questions about how the president’s party has been doing and the circumstances surrounding the election. If fewer than six keys are turned against the in-party, it is predicted to win the election. If six or more keys are turned, the in-party is predicted to lose. Lichtman notes that this rule correctly predicted the winner in every race since 1984.

In “The State of Presidential Election Forecasting: The 2004 Experience,” Randall J. Jones, Jr. reviews the accuracy of all of the major approaches used in forecasting the 2004 presidential election. In addition to examining campaign polls, trading markets, and regression models, he examines the records of Delphi expert surveys, bellwether states, and probability models.

___________________________________________________________________________________

The American Association of Public Opinion Research (AAPOR)

This paragraph from the article says it all:
“What is important to note is that at the close of Election Day, exit poll results are weighted to reflect the actual election outcomes. It is in this way that the final exit poll data can be used for its primary and most important purpose – to shed light on why the election turned out the way it did. That is, exit polls are just as important for the information they gather about the voters’ demographics and attitudinal predispositions towards the candidates and the campaign issues as they are for making the projections reported by news organizations on Election Night”.

The purpose of the Final exit poll is to get accurate demographic data by matching to the actual vote count? Is this the way to conduct statistical research? What if the vote count is fraudulent? What is their Null Hypothesis? AAPOR refers to challenges facing exit pollsters, but they ignore the challenge of calculating the impact of election fraud on the recorded vote.

If the vote counts were accurate, the demographics would be correct. Since the recorded vote counts are bogus, so are the demographics. Assuming that the vote count is pristine immediately invalidates the demographics based on it. It’s a very simple concept if you really want to do the best analysis possible to get at the truth: it’s Basic Statistics 101. We need to analyze the raw, pristine, unadjusted exit poll data. One would assume that this august group would want to see it. But in their world, corruption is non-existent. They believe that the Recorded Vote is identical to the True Vote.

AAPOR also claims that: “An exit poll sample is not representative of the entire electorate until the survey is completed at the end of the day. Different types of voters turn out at different times of the day”. But they don’t mention the fact that Kerry led the exit polls from 4pm (8349 sampled voters) to 7:30pm (11027) and 12:22am (13047) by a steady 51-47%. Or that uncounted votes are 70-80% Democratic and contribute significantly to the exit poll discrepancies.

AAPOR parrots the Reluctant Bush Responder (rBr) myth used by exit pollsters Edison-Mitofsky: “In recent national and state elections, Republicans have declined to fill out an exit poll questionnaire at a higher rate than Democratic voters, producing a slight Democratic skew”. But the 2004 Final Exit Poll indicated that returning Bush 2000 voters comprised 43% of the 2004 electorate (which was mathematically impossible), as opposed to 37% for returning Gore voters. And according to the E-M report, the highest exit poll refusal rates were in Democratic states. So much for the rBr myth.

___________________________________________________________________________________

1988-2004: Uncounted Votes and Exit Poll discrepancies

Uncounted Votes have steadily declined as a percent of total votes cast – from 10.4% in 1988 to 2.7% in 2004. When added to the recorded vote in order to derive the total votes cast for the five elections from 1988-2004, the average Democratic unadjusted exit poll share is within 0.1% of the adjusted vote.

Comparing the adjusted vote to the aggregate exit poll and recorded vote (2-party exit poll in parenthesis):

Year Democrat Recorded Exit Poll Adjusted
Average share 46.9% 48.8% (52.7%) 48.9%

1988 Dukakis 45.6 46.8 (47.3) 48.7
1992 Clinton 43.0 45.7 (56.8) 45.7
1996 Clinton 49.2 50.2 (55.8) 51.4
2000 Gore 48.4 49.4 (51.4) 49.7
2004 Kerry 48.3 51.8 (52.3) 49.0

Look at this graph. In each of the last five elections the unadjusted Democratic exit poll share exceeded the recorded vote. But which of the five stands out from the rest? The 2004 exit poll discrepancies were different in kind and scope from those of the prior four elections. Unlike 1988-2000, the 2004 discrepancies cannot be explained by uncounted votes alone.

Some exit poll critics claim that the large 1992 exit poll discrepancy (5.4 WPE) proves that 2004 exit poll analyses (7.1 WPE) indicating that the election was stolen are “crap” and “bad science”. After all, they say, there were no allegations of fraud in 1992. They fail to mention (or are unaware of) the fact that in 1992 Clinton beat Bush I by a recorded 43.6-38.0m (43.0-37.4%), but 9.4m votes were uncounted – and 70-80% of them were Democratic. When the uncounted votes are added, the adjusted vote becomes 50.7-40.3m (45.7-36.4%), which exactly matched Clinton’s unadjusted exit poll.
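The 1992 adjustment can be checked directly. A minimal sketch, assuming roughly 75% of the 9.4 million uncounted votes were Democratic and 25% Republican (the 70-80% Democratic range is given above):

```python
# Recorded 1992 totals from the post: Clinton 43.6m (43.0%), Bush 38.0m (37.4%)
clinton_rec, bush_rec = 43.6, 38.0
total_rec = clinton_rec / 0.430     # implied total recorded vote, ~101m
uncounted = 9.4

clinton_adj = clinton_rec + 0.75 * uncounted   # assumed 75% Democratic
bush_adj = bush_rec + 0.25 * uncounted         # assumed 25% Republican
total_adj = total_rec + uncounted

print(f"Clinton adjusted: {clinton_adj:.1f}m ({clinton_adj/total_adj:.1%})")  # ~50.7m, 45.7%
print(f"Bush adjusted:    {bush_adj:.1f}m ({bush_adj/total_adj:.1%})")        # ~40.4m, 36.4%
```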

From 1988-2000, after the uncounted adjustment, there was a 0.85% average Democratic exit poll discrepancy and 2.9 WPE. In 2004, after the 3.4m uncounted vote adjustment, there was a 2.8% discrepancy and Bush’s margin was reduced from 3.0m (62.0-59.0) to 1.3m (62.9-61.6). But uncounted votes were only one component of Election Fraud 2004. The Election Calculator Model determined that approximately 5m votes were switched from Kerry to Bush.

___________________________________________________________________________________
Projection and Post-election Models: Monte Carlo Simulation vs. Regression Analysis

There are two basic methods used to forecast presidential elections:
1) Projections based on state and national polling trends which forecast the popular and electoral vote, updated frequently right up to the election.
2) Regression models based on historical time-series which forecast the popular vote, executed months before the election.

Polling models, when adjusted for undecided voters and estimated turnout, are superior to regression models. Models which predicted a Bush win in 2000 and 2004 were technically “correct”; Bush won the recorded vote. But Gore and Kerry won the True Vote. Except for the Election Calculator (below), all models assume that elections will be fraud-free.

Academics and political scientists create multiple regression models which utilize time-series data as relevant input variables: economic growth, inflation, job growth, interest rates, foreign policy, historical election vote shares, etc. Regression modeling is an interesting theoretical exercise but does not account for the daily events which affect voter psychology. Fraud could conceivably skew regression models and media tracking polls.

Statistical analyses provided by Internet bloggers concluded that BushCo stole the 2004 election. Their findings were dismissed by the media as “just another conspiracy theory”. A few “conspiracy fraudsters” were banned after posting on various liberal discussion forums. And even today, the most popular polling sites never discuss election fraud. But the Democrats haven’t raised the issue after two presidential and scores of congressional and gubernatorial elections were stolen, and neither has the media, supposedly the guardian of democracy. Is there anyone who still truly believes that elections are legitimate?

There has been much misinformation regarding electoral and popular vote win probability calculations. In the Election Model, the latest state pre-election polls are used to project the vote after adjusting for undecided voters. The model assumes the election is held on the day of the projection.

The projections determine the probability of winning each state for input to the simulation. The probability of winning the popular vote is based on the 2-party projected vote share and an estimated margin of error:
P = NORMDIST (vote share, 0.50, MoE/1.96, True).

The expected electoral vote is the average of all the election trials. The probability of capturing at least 270 electoral votes is a simple ratio of the number of winning trials divided by the total number of trials.
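Here is a minimal sketch of both calculations in Python, using synthetic state inputs rather than the actual 51 state projections and their polling margins of error:

```python
import random
from statistics import NormalDist

random.seed(0)
MOE = 0.03   # assumed 95%-level margin of error for each state projection

def win_prob(share, moe=MOE):
    """P(win state) = NORMDIST(share, 0.50, MoE/1.96, TRUE)."""
    return NormalDist(mu=0.50, sigma=moe / 1.96).cdf(share)

# Synthetic stand-ins for the 51 state projections: (electoral votes, 2-party share)
states = [(random.randint(3, 55), random.uniform(0.44, 0.56)) for _ in range(51)]
probs = [(ev, win_prob(share)) for ev, share in states]
threshold = sum(ev for ev, _ in probs) / 2   # 270 of 538 in the real model

# Expected EV: product sum of win probabilities and electoral votes
expected_ev = sum(ev * p for ev, p in probs)

# Monte Carlo: each trial decides every state by its win probability
trials = 5000
wins = sum(1 for _ in range(trials)
           if sum(ev for ev, p in probs if random.random() < p) > threshold)

print(f"Expected EV: {expected_ev:.1f}   EV win probability: {wins / trials:.1%}")
```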

 
Posted on October 31, 2011 in Election Myths

An Electoral Vote Forecast Formula: Simulation or Meta-analysis Not Required

Richard Charnin

Oct. 31, 2011
Updated: Dec 9, 2012

Track Record: 2004-2012 Forecast and True Vote Models https://docs.google.com/document/d/1zRZkaZQuKTmmd_H0xMAnpvSJlsr3DieqBdwMoztgHJA/edit

Regardless of the method used for state projections, only the state win probabilities are needed to calculate the expected electoral vote. A simulation or meta-analysis is needed only to calculate the electoral vote win probability.

Calculating the expected electoral vote is a three-step process:

1. Project the 2-party vote share V(i) for each state(i) as the sum of the final pre-election poll share PS(i) and the undecided voter allocation UVA(i):
V(i)= PS(i) + UVA(i)

2. Compute the probability of winning each state given the projected share and the margin of error at the 95% confidence level:
P(i) = NORMDIST (V(i), 0.5, MoE/1.96, true)

3. Compute the expected electoral vote as the sum of each state’s win probability times its electoral vote:
EV = ∑ P(i) * EV(i), for i = 1,51
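A minimal sketch of the three steps in Python (the poll shares, undecided allocations and electoral votes are illustrative placeholders, not actual inputs):

```python
from statistics import NormalDist

MOE = 0.03   # assumed 95%-level margin of error for each state poll

def expected_ev(states):
    """states: iterable of (poll_share, undecided_allocation, electoral_votes)."""
    total = 0.0
    for ps, uva, ev in states:
        v = ps + uva                               # step 1: projected 2-party share
        p = NormalDist(0.50, MOE / 1.96).cdf(v)    # step 2: state win probability
        total += p * ev                            # step 3: probability-weighted EV
    return total

# Three illustrative states
print(expected_ev([(0.49, 0.02, 27), (0.50, 0.01, 20), (0.47, 0.02, 21)]))
```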

Monte Carlo simulation is the most efficient method for estimating the electoral vote win probability; the technique is widely used in diverse applications where an analytical solution is prohibitive.

The 2012 Presidential True Vote and Election Fraud Simulation Model snapshot forecast exactly matched Obama’s 332 Electoral Votes. The model also forecast a 320.7 theoretical (expected) EV and a 320 simulation (mean) EV.

In 2008, it was just the opposite. Obama’s 365.3 expected theoretical electoral vote was a near-perfect match to his recorded 365 EV. The simulation mean EV was also a near-perfect 365.8. The snapshot EV forecast was a near-perfect 367. The 2008 Election Model exactly matched Obama’s 365 EV. His win probability was 100%; he won all 5000 election trials. His projected 53.1% share was a close match to the recorded 52.9%. But the Election Model was wrong. It utilized pre-election likely voter (LV) polls which understated Obama’s True Vote. The National registered voter (RV) polls projected 57% which was confirmed by the post-election True Vote Model (58%,420 EV), the unadjusted state exit polls (58%,420 EV) and the unadjusted National Exit Poll (61%).

What does this prove? That no more than 500 simulation trials are required to approach the theoretical (expected) EV. The simulation is based strictly on state win probabilities. The only reason a simulation is required is to calculate the electoral vote win probability (the percentage of winning election trials that exceed 269 EV). A simulation is not required to forecast the EV, which is merely the product sum of the state win probabilities and electoral votes.


Election blogs, media pundits and academics develop models for forecasting the recorded vote but do not apply basic probability, statistics and simulation concepts in their overly simplistic or complex models. They never mention the systemic election fraud factor. But it is a fact: the recorded vote differs from the True Vote in every election.

In each of the 1988-2008 elections, the unadjusted state and national presidential exit polls have differed from the recorded vote. The Democrats won the unadjusted poll average by 52-42% compared to the 48-46% recorded margin. The exit polls confirmed the 1988-2008 True Vote Model in every election.

The 2004 Monte Carlo Election Simulation Model calculates 200 election trials using final state pre-election polls and post-election exit polls.

2004 Election Model

The 2004 Election Model used a 5000-trial simulation. The win probability is the percentage of winning election trials. The average electoral vote approaches the theoretical value (the EV summation formula) as the number of trials increases: the Law of Large Numbers (LLN) applies. The average and median EVs are very close to the theoretical mean; no more than 5000 election trials are required to accurately derive the EV win probability.

The model projected that Kerry would have 337 electoral votes with a 99% win probability and a 51.8% two-party vote share. I allocated 75% of the undecided vote to Kerry.

Exit pollsters Edison-Mitofsky, in their Jan. 2005 Election Evaluation Report, showed an average within precinct discrepancy of 6.5%. This meant that Kerry had 51.5% and 337 electoral votes, exactly matching the Election Model.

The unadjusted state exit poll aggregate (76,000 respondents) on the Roper UConn archive website had Kerry winning by 51.0-47.5%. The unadjusted National Exit Poll (13,660 respondents) shows that he won by 51.7-47.0%.

Kerry had 53.5% in the post-election True Vote Model – a 67-57 million vote landslide. But it was not enough to overcome the massive fraud which gave Bush his bogus 3.0 million vote “mandate”.

The Election Model includes a sensitivity (risk) analysis of five undecided voter (UVA) scenario assumptions. This enables one to view the effects of the UVA factor variable on the expected electoral vote and win probability. Kerry won all scenarios.

Electoral vote forecasting models which do not provide a risk factor sensitivity analysis are incomplete.

Princeton Professor Wang projected that Kerry would win 311 electoral votes with a 98% win probability, exactly matching pollster John Zogby – and closely matching the exit polls.

But Wang was incorrect to suggest in his post-mortem that his forecast was “wrong” because Bush won the late undecided vote. All evidence indicates that Kerry easily won the late undecided vote, and the historical record indicates that challengers win undecideds 80% of the time.

Based on historic evidence, the challenger is normally expected to win the majority (60-90%) of the undecideds, depending on incumbent job performance. Bush had a 48% approval rating on Election Day. Gallup allocated 90% of undecided voters to Kerry, pollsters Zogby and Harris: 75-80%. The National Exit Poll indicated that Kerry won late undecided voters by a 12% margin over Bush.

Wang never considered that the election was stolen. Then again, neither did AAPOR, the media pundits, pollsters, academics or political scientists. But overwhelming statistical and other documented evidence indicates massive election fraud was required for Bush to win.

Meta-analysis is an unnecessarily complex method and overkill for calculating the expected Electoral Vote; the EV is calculated by the simple summation formula given above.

2004 Election Model Graphs
State aggregate poll trend
Electoral vote and win probability
Electoral and popular vote
Undecided voter allocation impact on electoral vote and win probability
National poll trend
Monte Carlo Simulation
Monte Carlo Electoral Vote Histogram

2008 Election Model Graphs

Aggregate state polls and projections (2-party vote shares)
Undecided vote allocation effects on projected vote share and win probability
Obama’s projected electoral vote and win probability
Monte Carlo Simulation Electoral Vote Histogram

The 2012 Election Model exactly projected Obama’s 332 Electoral Votes (the actual snapshot total). The Expected EV based on the summation formula was 320.7

This is a one-sheet summary of 2004 and 2008 True Vote calculations with many links to relevant posts and data.

 


The Unadjusted 2004 National Exit Poll: Closing the Book on the returning Gore voter “False Recall” Myth

Richard Charnin (TruthIsAll)

Oct. 17, 2011

“False recall” was the final argument promoted by exit poll naysayers to explain away the mathematically impossible 43/37% returning Bush/Gore voter mix in the 2004 Final National Exit Poll (NEP). It was an attempt to cast doubt on the preliminary NEP and the unadjusted state exit poll aggregate (Kerry won by 51-48%). It was a last-ditch attempt to maintain the fiction that Bush really did win fairly and that the unadjusted and preliminary exit polls “behaved badly”. The bottom line: exit polls should not be trusted (or even used) here in the U.S. – but they work fine in far away places like Ukraine and Georgia.

“False recall” stated that the mathematically impossible Final NEP mix was due to returning Gore voters who had the temerity to misstate their past vote to the exit pollsters, claiming they had actually voted for Bush. This strange behavior was apparently due to faulty memory – a “slow-drifting fog” unique to Gore voters and/or a desire to be associated with Bush, the official “winner” of the 2000 election. The fact that he actually lost by 540,000 recorded votes was dismissed as irrelevant.

The unadjusted 2004 NEP on the Roper website should finally put “false recall” to eternal rest. Of the 13,660 respondents, 7064 (51.7%) said they voted for Kerry, 6414 (47.0%) for Bush and 182 (1.3%) for other third-parties. The NEP is a subset of unadjusted state exit polls (76,000 respondents). The weighted average of the aggregate state polls indicated that Kerry was a 51.1-47.5% winner.

1988-2008 State and National Unadjusted Exit Polls vs. Recorded Votes

This graph summarizes the discrepancies between the 1988-2008 State Exit Polls and the corresponding Recorded Votes.

But what did the respondents really say about how they voted in 2000? Of the 3,182 respondents who were asked, 1,222 (38.4%) said they voted for Gore, 1,257 (39.5%) said Bush, 119 (3.75%) said Other. The remaining 585 (18.4%) were either first-timers or others who did not vote in 2000. When the actual Bush/Gore 39.5/38.4% returning voter mix and the 12:22am preliminary NEP shares are used to calculate the total vote shares, Kerry has 51.7% – exactly matching the unadjusted NEP. But Kerry must have done better than that. The unadjusted 2000 exit poll indicated that Gore won by 5-6 million, so there had to be more returning Gore voters than Bush voters in 2004.

Although there is no evidence that Gore voters came to love Bush (even after he stole the 2000 election), or that returning Gore voters were more forgetful and dishonest than Bush voters, the “false recall” canard has been successful in keeping the “bad exit poll” myth alive. Such is the power of the mainstream media.

“False recall” was the equivalent of the famous “Hail Mary” touchdown pass. It followed the “reluctant Bush responder” (rBr) and “Swing vs. Red-shift” arguments, both of which had been refuted (see the links below).

Since unadjusted 2004 NEP data was not provided in the mainstream media, “false recall” was a possibility, however remote and ridiculous the premise. It was a very thin reed that has been surprisingly resilient. Apparently it still is to Bill Clinton, Al Franken and Michael Moore. Not to mention the mainstream “liberal” media who continue to maintain the fiction that Bush really did win.

We now have absolute proof that in order to match the recorded vote, the exit pollsters had to adjust the NEP returning voter mix from the (already adjusted) 12:22am timeline; the 41/39% mix was changed to an impossible 43/37%. But they had to do more than just that; the pollsters also had to inflate the 12:22am Bush shares of new and returning voters to implausible levels.

The earlier proof that the returning voter mix was adjusted in the Final NEP (even though it was mathematically impossible) to match the recorded vote is confirmed by the data itself. Now, with the actual responses to the question “Who did you vote for in 2000?”, there is no longer any question as to whether Gore voters forgot or lied or were in a “slow-moving” fog. The “pristine” results show that the actual Bush/Gore returning voter mix (39.5/38.4%) differs substantially from the artificial, mathematically impossible Final NEP mix (43/37%).

https://docs.google.com/spreadsheet/ccc?key=0AjAk1JUWDMyRdFIzSTJtMTJZekNBWUdtbWp3bHlpWGc#gid=7

This is irrefutable evidence that the Final NEP is not a true sample. Of course, we knew this all along. The exit pollsters admit it but they don’t mention the fact that it’s standard operating procedure to force ALL exit polls to match the recorded vote. This is easily accomplished by adjusting returning voter turnout from the previous election to get the results to “fit”. Of course, the mainstream media political pundits never talk about it. So how would you know?

Political sites such as CNN, NY Times and realclearpolitics.com still display the 2004 Final National Exit poll and perpetuate the fiction that Bush won. But it’s not just the 2004 election. ALL FINAL exit polls published by the mainstream media (congressional and presidential) are forced to match the recorded vote. Unadjusted exit polls don’t “behave badly” – but the adjusted Finals sure do.

The unadjusted 1988-2008 state and national exit polls are now in the True Vote Model:
https://docs.google.com/spreadsheet/ccc?key=0AjAk1JUWDMyRdGN3WEZNTUFaR0tfOHVXTzA1VGRsdHc#gid=34

False recall followed the “reluctant Bush responder” (rBr) and “Swing vs. Red-shift” arguments (see links below), both of which have been refuted.
http://richardcharnin.com/2004FalseRecallUnadjEP.htm
http://richardcharnin.com/FalseRecallRebuttal.htm
http://richardcharnin.com/ConversationAboutFalseRecall.htm
http://richardcharnin.com/FalseRecallPetard.htm
http://richardcharnin.com/SwingVsRedshift1992to2004.htm
http://richardcharnin.com/SwingRedShiftHoisted.htm

The Final NEP is mathematically impossible since the number of returning Bush voters implied by the 43% weighting is 52.6 million (122.3 million votes were recorded in 2004). Bush only had 50.46 million recorded votes in 2000. Approximately 2.5 million died, therefore the number of returning Bush voters must have been less than 48 million. Assuming 98% turnout, there were 47 million returning Bush voters, 5.6 million fewer than implied by the Final NEP.
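A minimal sketch of that arithmetic, using the figures quoted above:

```python
votes_2004 = 122.3e6           # total recorded 2004 vote
bush_2000_recorded = 50.46e6   # Bush's recorded 2000 vote
deaths = 2.5e6                 # Bush 2000 voters who died before 2004 (approx.)
turnout = 0.98                 # assumed turnout of surviving Bush 2000 voters

implied = 0.43 * votes_2004                         # returning Bush voters per Final NEP
maximum = (bush_2000_recorded - deaths) * turnout   # most who could have returned

print(f"Implied by the 43% weighting: {implied/1e6:.1f}m")              # ~52.6m
print(f"Maximum possible returnees:   {maximum/1e6:.1f}m")              # ~47.0m
print(f"Excess (impossible) voters:   {(implied - maximum)/1e6:.1f}m")  # ~5.6m
```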

Based on 12:22am NEP vote shares, Kerry wins by 10m votes with 53.2% – assuming equal 98% turnout of returning Bush and Gore voters. He wins by 7 million given 98/90% Bush/Gore turnout. Total votes cast in 2000 and 2004 are used to calculate returning and new voters.

The Kerry vote share trend was a constant 51% at the 7:33pm (11027) and 12:22am (13047) time lines. Kerry gained 1085 votes and Bush 1025 from 7:33pm to 12:22am. Third-parties declined by 90 due to the 4% to 3% change in share of the electorate.

False recall is disproved in a number of ways.

1. False recall is based on a 3168 subset of the Final NEP 13660 respondents who were asked how they voted in 2000. But all 13660 were asked who they JUST voted for in 2004.

2. In the preliminary 12:22am NEP of 13047 respondents, approximately 3025 of the 3168 were asked how they voted in 2000. This estimate was derived by applying the same 95.4% percentage (13047/13660) to the 3168. The weighted result indicated that returning Bush voters comprised 41% (50.1m) of the electorate. The Final NEP “Voted in 2000” crosstab (and all other crosstabs) was forced to match the recorded vote. This required that 43% (52.6m) of the electorate be returning Bush voters. The increase in the returning Bush 2000 voter share of the 2004 electorate (from 41% at 12:22am to 43% in the Final) was clearly impossible since it was based on a mere 143 (25% of 613) additional respondents.

a) There was an impossible late switch in respondent totals. Between 7:33pm and 12:22am, the trend was consistent: Kerry gained 254 votes, Bush 239. Third-parties declined by 13. But between 12:22am and the Final, Kerry’s total declined by 13, Bush gained 182 and third party lost 26.

b) It was also impossible that returning Bush voters would increase from 41% to 43% (122) and returning Gore voters would decline from 39% to 37% (8). Regardless, the Final 43/37% split was mathematically impossible. It implied there were 5.6 million more returning Bush voters than could have voted, assuming that 47 (98%) of the 48 million who were alive turned out.

c) The increase in Bush’s share of new voters from 41% to 45% (+31) was impossible; there were just 24 additional new voters. Kerry lost 2.

d) The changes in the Gender demographic were impossible. The Kerry trend was consistent at the 11027 and 13047 respondent time lines. Kerry gained 1085 and Bush 1025. Third-parties declined by 90.

e) There was an impossible shift to Bush among the final 613 respondents (from 13047 to 13660). Kerry’s total declined by 99, while Bush gained 706. Third-parties gained 6. That could not have happened unless weights and vote shares were adjusted by a human. In other words, it could not have been the result of an actual sample.

3. False recall assumes that 43/37% was a sampled result. But we have just shown that it is mathematically impossible because a) it implies there were 5.6 million more returning Bush voters than could have voted in 2004 and b) the 41/39% split at 12:22am could not have changed to 43/37% in the Final with just 143 additional respondents in the “Voted 2000″ category.

4. The exit pollsters claim that it is standard operating procedure to force the exit poll to match the recorded vote. The Final was forced to match the recorded vote by a) adjusting the returning Bush/Gore voter mix to an impossible 43/37% and b) simultaneously increasing the Bush shares of returning Bush, Gore and new voters to implausible levels using impossible adjustments.

5. Just reviewing the timeline, it is obvious that the exit pollsters do in fact adjust weights and vote shares to force a match to the recorded vote. It’s SOP. But this immediately invalidates the naysayer claim that the 43/37 split was due to Gore voter false recall. No, it was due to exit poll data manipulation.

6. Which is more believable: a) that the exit pollsters followed the standard procedure of forcing the poll to match the vote, or b) that at least 8% more returning Gore voters claimed they voted for Bush in 2000 than returning Bush voters claimed they voted for Gore?

7. As indicated above, there was a maximum number of returning Bush 2000 voters who could have voted in 2004: the ones who were still living. So the 43/37% split is not only impossible, it is also irrelevant. It doesn’t matter what the returning voters said regarding their 2000 vote. We already know the four-year voter mortality rate (5%) and maximum LIVING voter turnout (98%).

8. False recall assumes that the returning voter mix is a sampled result. But the 4% increase in differential between returning Bush and Gore voters (from 2% to 6%) is impossible since the total number of respondents increased by just 143 (from 3025 to 3168).

9. The false recall claim is based on NES surveys of 500-600 respondents that indicate voters misstate past votes. But the reported deviations are based on the prior recorded vote – not the True Vote. There have been an average of 7 million net uncounted votes in each of the last eleven elections. The majority (70-80%) were Democratic. In 2000, there were 5.4 million. When measured against the True Vote (based on total votes cast, reduced by mortality and voter turnout), the average deviations are near zero. Therefore, the NES respondents told the truth about their past vote.

10. The 2006 and 2008 Final National Exit Polls were forced to match the recorded vote with impossible 49/43% and 46/37% returning Bush/Kerry voter percentages. The 2008 Final required 12 million more returning Bush than Kerry voters. These anomalies are just additional proof that false recall is totally bogus – a final “Hail Mary” pass to divert, confuse and cover-up the truth. The exit pollsters just did what they are paid to do.

 
 


To Believe that Obama Won in 2008 by 9.5 Million Votes with a 52.87% Share, You Must Believe …

Richard Charnin (TruthIsAll)
Dec. 16, 2011

Track Record: 2004-2012 Forecast and True Vote Models https://docs.google.com/document/d/1zRZkaZQuKTmmd_H0xMAnpvSJlsr3DieqBdwMoztgHJA/edit

You must believe that the Final 2008 National Exit Poll (NEP) is correct since it matched the recorded vote.

1. The Final NEP indicated that 46% (60.5 million) of the 131.4 million who voted in 2008 were returning Bush voters; 37% (48.6 million) returning Kerry voters.
2. 103% turnout of living Bush 2004 voters was required to match the 2008 recorded vote.
3. The Final 2008 NEP implied there were 12 million more returning Bush than Kerry voters.
4. The Final implied that Bush won in 2004 by 52.6-42.3%. He won the recorded vote by 50.7-48.3%.
5. Kerry won the 2004 unadjusted state exit polls (70,000 sample) by 52-47%.

You must believe that the 2008 and 2004 unadjusted state and national exit polls were wrong even though…
6. Obama won the state exit polls (81,388 sample) by 58.0-40.5% – a 23 million vote margin.
7. Obama won the unadjusted NEP (17,836), a subset of the state exit polls by 61-37%.
8. Of the 17,836, 4,178 were asked how they voted in 2004: 43.4% said Kerry, 38.6% Bush.
9. Obama’s 58.0% share was confirmed by the 2008 NEP shares and the 43.4/38.6% mix.

You must believe the True Vote Model (TVM) was wrong even though…
10. The TVM was the third confirmation of Obama’s 58.0% exit poll share.
11. It used Final 2008 NEP vote shares, combined with a realistic, plausible return voter mix (based on Kerry’s True Vote) which replaced the impossible Final NEP mix.
12. The sensitivity analysis shows that Obama won the worst case scenario by 19.5 million votes and a 56.7% share (he had 67% of new voters and 15% of returning Bush voters). Obama had a 58.0% True Vote share in the most-likely base case scenario based on his Final NEP 72% share of new voters and 17% share of returning Bush voters.

You must believe there is nothing suspicious about the following…

13. Obama had 52.3% of 121 million votes counted on Election Day and 59.2% of the final 10 million late (paper ballot) votes recorded after Election Day.
14. According to the Final 2008 NEP, returning 2004 third-party voters comprised 5.2 million (4%) of the electorate. But only 1.2 million third-party votes were recorded in 2004. This anomaly indicates that third-party votes were uncounted and/or switched.
15. In the unadjusted 2008 NEP subsample, 1,815 (43.4%) said they voted for Kerry and 1,614 (38.6%) said Bush. But to match the recorded vote, the percentage mix had to be adjusted to 46% Bush/37% Kerry: the number of Kerry respondents was reduced from 1,815 to 1,546 (-14.8%) and Bush respondents were increased from 1,614 to 1,922 (+19.2%).

Proof that Obama won by at least 20 million votes:
http://richardcharnin.com/ObamaProof.htm

 
Posted on October 13, 2011 in 2008 Election, Election Myths

A Database and True Vote Model for Analyzing 1988-2008 State Exit Poll Discrepancies

Richard Charnin

Updated: April 10, 2012

Unadjusted exit polls are based on actual respondent totals. Final Exit Polls are always forced to match the recorded vote. The unadjusted exit polls are confirmed in other surveys and the True Vote Model.

This workbook includes 1988-2008 unadjusted “pristine” state exit poll data and reflects actual voter response.
http://richardcharnin.wordpress.com/2011/11/13/1988-2008-unadjusted-state-exit-polls-statistical-reference/

This graph summarizes the discrepancies between the 1988-2008 State Exit Polls and the corresponding Recorded Votes.

The True Vote Model

The TVM is based on Census votes cast, mortality, prior election voter turnout and National Exit Poll vote shares. The TVM closely matched the unadjusted exit polls in each election from 1988-2008. 

The 1988-2008 State and National True Vote Model
https://docs.google.com/spreadsheet/ccc?key=0AjAk1JUWDMyRdGN3WEZNTUFaR0tfOHVXTzA1VGRsdHc#gid=0

Roper Center: The Data Source
http://www.ropercenter.uconn.edu/elections/common/state_exitpolls.html#.Tr60gD3NltP

In the 1988-2008 presidential elections, the exit poll margin of error was exceeded in 126 of 274 state exit polls, with 123 shifting in favor of the Republicans and just 3 for the Democrats. At the 95% confidence level, one would expect that the margin of error would be exceeded in 7-8 states in favor of the GOP and 7-8 for the Democrat. The probability that the margin of error would be exceeded in a state election is 5% (2.5% for the Democrat and 2.5% for the Republican).

The probability P that 123 of 274 exit polls would shift to the GOP beyond the margin of error is calculated as:
P = Poisson(123, 0.025*274, false) = 5.4E-106, or
1 in 1.8 BILLION TRILLION TRILLION TRILLION TRILLION TRILLION TRILLION TRILLION TRILLION!
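The quoted figure is the Poisson probability of exactly 123 such shifts when only about 6.85 (2.5% of 274) would be expected by chance. A minimal sketch reproducing it with Python's standard library:

```python
from math import exp, factorial

n, k = 274, 123    # 274 state exit polls; 123 shifted to the GOP beyond the MoE
p = 0.025          # chance a poll exceeds the MoE in the GOP's favor by chance
lam = p * n        # Poisson mean, ~6.85

prob = exp(-lam) * lam**k / factorial(k)   # Poisson(k, lam, false)
print(f"P = {prob:.1e}")                   # ~5.4e-106
```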

Therefore it is proof beyond ANY DOUBT that election fraud is systemic and virtually always favors the GOP. Based on the UNADJUSTED exit polls, the Democrats should have won ALL SIX elections.

https://docs.google.com/spreadsheet/pub?key=0AjAk1JUWDMyRdFIzSTJtMTJZekNBWUdtbWp3bHlpWGc&output=html

Data Summary

From the Intro worksheet:
The Democrats led in all the 1988-2008 presidential election averages…
1) recorded vote: 47.9 – 45.9%
2) unadjusted state exit poll aggregate: 51.8 – 41.6%; unadjusted national exit poll: 51.7- 41.7%
3) True Vote Model (methods 2-3): 51.6 – 42.9%
4) True Vote Model (method 4): 53.0 – 41.0%
5) Exit Poll (WPE/IMS method): 50.8 – 43.1%

These states flipped to the GOP from the exit poll to the recorded vote:

1988: CA CO IL LA MD MI MT NM PA SD VT 
Dukakis had a 51-47% edge in 24 battleground state polls.
He lost by 7 million votes.

1992: AK AL AZ FL IN MS NC OK TX VA 
Clinton had an 18 million vote margin in the state exit polls.
He won the recorded vote by just 6 million.

1996: AK AL CO GA ID IN MS MT NC ND SC SD VA 
Clinton had a 16 million vote margin in the state exit polls.
He won by just 8 million recorded votes.

2000: AL AR AZ CO FL GA MO NC NV TN TX VA 
Gore needed just ONE of these states to win the election.
He won the state exit polls by 6 million, matching the TVM. 

2004: CO FL IA MO NM NV OH VA
Kerry needed FL or OH to win.
He won the national and state exit polls by 5-6 million with 51-52%.
He won the TVM by 10 million with 53.6%.

2008: AL AK AZ GA MO MT NE 
Obama had 58% in the state exit polls, a 23 million margin (9.5 recorded).
He had 61% in the unadjusted National Exit Poll.
The True Vote Model indicated that he had 58.0% exactly matching the unadjusted state exit polls.

To force State and National Exit Polls to match the recorded vote, all demographic category weights and/or vote shares have to be adjusted.

Bush Approval Ratings
For example, to adjust Kerry’s 51.1-47.5% unadjusted exit poll margin to Bush’s 50.7-48.3% recorded margin in the Final National Exit Poll, Bush’s 50.3% unadjusted approval rating was increased to 53%, and the corresponding vote shares were increased as well.

Bush had just 48% approval in the final pre-election polls.
With 48% approval applied to the NEP shares, Kerry had 53.7% (see the sensitivity analysis table below).

Party-ID
Dem/Rep Party-ID was changed from 38.5-35.1% to 37-37% to match the vote.
There was a near-perfect correlation between Bush’s unadjusted state exit poll shares, approval ratings and Party-ID.

Note:
US Count Votes analysis of the Ohio 2004 exit poll discrepancies:
http://www.electionmathematics.org/em-exitpolls/OH/2004Election/Ohio-Exit-Polls-2004.pdf

USCV proved the impossibility of the exit pollsters’ reluctant Bush responder (rBr) hypothesis. The rBr theory was promoted to explain the cause of the 6.5% exit poll discrepancies. It claimed that 56 Kerry voters agreed to be interviewed for every 50 Bush voters.
http://www.electionmathematics.org/em-exitpolls/USCV_exit_poll_analysis.pdf

The Exit Poll Response Optimizer confirmed the USCV simulation.
http://www.richardcharnin.com/ExitPollResponseOptimization.htm

 
Posted on October 7, 2011 in True Vote Models

The 2012 Presidential True Vote Model: How Obama Could Lose

Richard Charnin

Oct. 4, 2011
Updated: Aug. 11, 2012

The 2012 Presidential True Vote Model was created to take a first look at the election before the state and national pre-election polls became widely available. The model indicates that Obama needs a 55% True Vote share to overcome the 5% fraud factor and win the majority of the popular vote.

On April 27, 2012, the Presidential True Vote Simulation Election Model (TVM) was created. The model utilized pre-election state and national polls in a Monte Carlo simulation to forecast the electoral vote and win probability. The combination of a True Vote Model and pre-election polling simulation is a unique forecasting tool to determine the likelihood of Obama overcoming the fraud factor and winning re-election.

In 2008, Obama had a 52.9% recorded vote share. If that recorded share (rather than the True Vote) is used as the basis for returning 2008 voters, Obama loses by 2.6 million votes, a 9 million vote switch in margin from the True Vote scenario. Bottom line: Obama needs at least a 55% True Vote share to win in 2012 if, as in 2008, he loses 5% due to fraud.

The National Exit Poll (NEP) is ALWAYS forced to match the recorded vote. It indicated that returning Bush 2004 voters comprised 46% of the 2008 electorate compared to just 37% for returning Kerry voters. This astounding anomaly is never discussed by academics, political scientists and media pundits.

In the TVM, the impossible 46/37 return voter mix was replaced with a feasible mix based on Kerry’s true 53.6% share, 5% voter mortality and an estimated 97% “habitual voter” turnout rate. Obama had a 58% True Vote share. The assumed TVM shares of new and returning voters were identical to the 2008 Final NEP shares.

Obama had 58.1% of 81,388 state exit poll respondents (weighted by state votes cast). The NEP is a subset (17,836 respondents) of the state exit polls. Obama had a remarkable 61.0% in the unadjusted NEP.

Of the 17,836 NEP respondents, 4,178 were asked how they voted in 2004: 43.4% said they voted for Kerry, 38.6% for Bush, 4.5% for Other, and 13.4% did not vote. The percentages implied that Kerry won by 50.2-44.6%.

Using the implied 2004 shares, Obama’s share increases from 52.9% to 58.0% – exactly matching the True Vote Model and the unadjusted/weighted state exit poll aggregate!

An impossible returning/new voter mix was required in the 2008 NEP to force it to match the 52.9% recorded share.

2008 Unadjusted State and National exit polls vs. recorded and True Vote:
https://docs.google.com/spreadsheet/ccc?key=0AjAk1JUWDMyRdFIzSTJtMTJZekNBWUdtbWp3bHlpWGc#gid=1

For 2012, consider the following base case scenario:
– Obama’s 58% True Vote share is the basis for calculating returning voters.
– 90% of living 2008 Obama voters and 97% of McCain voters turn out in 2012.
– Obama wins 85% of his 2008 voters and 10% of returning McCain voters.
– Obama splits returning third-party (Other) and New (DNV) voters with the Republican candidate.
Based on these assumptions, Obama wins the election by 6.4 million votes with a 52.4% True Vote share.
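A minimal sketch of this base-case arithmetic in Python. The 2008 True Vote totals are rounded from the figures above; the four-year survival rate and the 2012 total-vote figure are placeholder assumptions:

```python
# 2008 True Vote basis (rounded from the post): 131.4m votes, Obama 58.0%, McCain 40.5%
obama_08, mccain_08, other_08 = 76.2e6, 53.2e6, 2.0e6
survival = 0.95        # assumed four-year voter mortality of ~5%
total_2012 = 130.0e6   # assumed total 2012 vote (placeholder)

ret_obama = obama_08 * survival * 0.90     # 90% of living Obama voters return
ret_mccain = mccain_08 * survival * 0.97   # 97% of living McCain voters return
ret_other = other_08 * survival * 0.95     # assumed third-party turnout
new = total_2012 - (ret_obama + ret_mccain + ret_other)

# Base case: 85% of returning Obama voters, 10% of returning McCain voters,
# and an even split of returning third-party and new voters
obama_12 = 0.85 * ret_obama + 0.10 * ret_mccain + 0.5 * (ret_other + new)
margin = obama_12 - (total_2012 - obama_12)

print(f"Obama share: {obama_12/total_2012:.1%}, margin: {margin/1e6:.1f}m")
# roughly reproduces the post's base case: ~52.4% share, ~6.4m margin
```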

Table 1: Nine vote share scenarios
In each scenario, 90% of living Obama voters and 97% of living McCain voters turn out.
In the worst case scenario, Obama wins 80% of returning Obama voters and 5% of returning McCain voters. Obama loses by 5 million votes with a 48.1% share.
In the most likely base case scenario, Obama has 85% of Obama and 10% of McCain voters. Obama wins by 6.4 million with a 52.4% share.
In the best case scenario, Obama wins 90% of returning Obama voters and 15% of returning McCain voters. Obama wins by 17.7 million with a 56.6% share.

Table 2: Nine voter turnout scenarios
In each scenario, Obama wins 85% of returning Obama and 10% of McCain voters.
In the worst case scenario, 85% of Obama and 100% of McCain voters turn out. Obama wins by 2.6 million with a 51.0% share.
In the most likely base case scenario, 90% of Obama and 97% of McCain voters turn out. Obama wins by 6.4 million with a 52.4% share.
In the best case scenario, 95% of Obama and 92% of McCain voters turn out; Obama wins by 10.9 million with a 54.0% share.

Note that these scenarios are based on the 2008 True Vote. Unfortunately, pollsters, academics and media pundits do not consider or mention the True vote or Election Fraud for that matter. It’s not in their vocabulary. They can’t mention one without the other (the Recorded vote is equal to the True Vote plus an Election Fraud factor). To these forecasters, the recorded vote is sacrosanct. They base all of their pre-election and post-election analysis on the recorded vote. That is what they do.

 
 


 