Author Archives: Richard Charnin

About Richard Charnin

In 1965, I graduated from Queens College (NY) with a BA in Mathematics. I later obtained an MS in Applied Mathematics from Adelphi University and an MS in Operations Research from the Polytechnic Institute of NY. I started out as a numerical control engineer/programmer for a major defense/aerospace manufacturer and then moved to Wall Street as a manager/developer of corporate finance quantitative applications for several major investment banks. I consulted in quantitative applications development for major domestic and foreign financial institutions, investment firms and industrial corporations. In 2004 I began posting weekly "Election Model" projections based on state and national polls. As "TruthIsAll", I have been posting election analysis to determine the True Vote ever since.

A Simple Electoral Vote Simulation Model


Richard Charnin
July 27, 2015

Charnin Website
Look inside the book: Matrix of Deceit: Forcing Pre-election and Exit Polls to Match Fraudulent Vote Counts
Look inside the book: Reclaiming Science: The JFK Conspiracy

The purpose of the Monte Carlo Electoral Vote Simulation Model is to calculate the probability of a candidate winning at least 270 Electoral votes.

The model contains the following Obama 2-party vote shares:
2008- Unadjusted state exit polls and recorded votes
2012- True Vote Model shares (19 states were not exit polled) and recorded votes

The Electoral Vote Histogram shows the results of the 200 simulation trials.

There are four input methods (enter 1, 2, 3, or 4):
2008: 1 - exit poll, 2 - recorded votes;
2012: 3 - True Vote, 4 - recorded votes.

In order to see the effects of changes, a blank column is inserted so that vote shares can be overridden.

The Total Electoral Vote is calculated using individual state projections. But the probability of winning each state is required in order to calculate the total probability of winning 270 EV. The state win probability is calculated using the projected two-party vote share and the margin of error (MoE).

The Total EV is calculated as the sum of the products of the state win probabilities and corresponding electoral votes.
Prob = NORMDIST(vote share, 0.5, MoE/1.96, TRUE)

1- The theoretical expected EV is the sum of the 51 state win probabilities multiplied by the corresponding EVs.
2- The snapshot EV is just the sum of the projected electoral votes. It can be misleading if state elections are close.
3- The mean EV is the average of the 200 simulation trials.
The three methods yield similar EVs.
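The three calculations can be sketched in a few lines of Python. This is a minimal illustration, not the model's spreadsheet: the state names, shares, MoEs and electoral votes below are hypothetical, and the stdlib `NormalDist` stands in for Excel's NORMDIST.

```python
import random
from statistics import NormalDist

def win_probability(share, moe):
    # Equivalent to NORMDIST(share, 0.5, MoE/1.96, TRUE)
    return NormalDist(mu=0.5, sigma=moe / 1.96).cdf(share)

# Hypothetical states: (projected two-party share, MoE, electoral votes)
states = {"A": (0.55, 0.03, 10), "B": (0.48, 0.04, 20), "C": (0.52, 0.03, 8)}

# 1 - Theoretical expected EV: sum of win probability x EV
expected_ev = sum(win_probability(s, m) * ev for s, m, ev in states.values())

# 2 - Snapshot EV: sum the EV of every state projected above 50%
snapshot_ev = sum(ev for s, m, ev in states.values() if s > 0.5)

# 3 - Mean EV over Monte Carlo trials: draw each state's share from a
#     normal centered on the projection, award its EV on a win, average
def mean_ev(states, trials=200):
    total = 0
    for _ in range(trials):
        for share, moe, ev in states.values():
            if random.gauss(share, moe / 1.96) > 0.5:
                total += ev
    return total / trials

random.seed(2015)
avg = mean_ev(states)
```

With enough trials the simulation mean converges on the theoretical expected EV, which is why the three methods agree.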

The 2012 Monte Carlo Simulation Forecast exactly matched Obama’s 332 electoral votes and 51.0% total vote share. In the True Vote Model he had 55.6% and 391 Electoral votes.

In the 2008 Election Model, Obama's 365.3 expected theoretical electoral vote was a near-perfect match to his recorded 365 EV. The simulation mean EV was 365.8 and the snapshot was 367. Obama won all 5,000 election trials. His projected 53.1% share was a close match to the 52.9% recorded share.

Pre-election Registered Voter (RV) polls projected 57% for Obama. Likely Voter (LV) pre-election polls are a subset of RV polls. The LV screens eliminate many new voters and others who did not vote in the prior election; therefore they understate the projected Democratic vote. But political pundits assume LV polls are accurate; after all, they have an excellent track record in predicting the recorded vote. The LV polls used in the Election Model perfectly projected the recorded vote.

But it has been proven beyond any doubt that the recorded votes are bogus and therefore so are the LV polls and the adjusted exit polls which are forced to match the recorded vote.

The RV polls were confirmed by the post-election True Vote Model. The TVM is based on a feasible estimate of returning and new voters – and corresponding candidate vote shares.

The TVM exactly matched the aggregate of the 2008 unadjusted state exit polls (58%, 420 EV). If accurate, Obama won by 23 million votes, not the 9.5 million recorded. Obama led the unadjusted National Exit Poll (17,836 respondents, 2% MoE) by 61-37%. If accurate, he won by an astounding 30 million votes.



Posted on July 27, 2015 in 2012 Election



The Media and Scott Walker’s 2014 Election Fraud

Richard Charnin
July 25, 2015

This is an informative article and video from the We the People Dane County blog. It contains links to Election Fraud articles (including many of my blog posts) and related videos.

Analysis of Scott Walker's 2012 recall and November 2014 election results shows that they are mathematically implausible and cannot represent voter intent. The chance that Scott Walker has, in two consecutive election cycles, "won" with vote totals that each violate the Law of Large Numbers is zero.

While Scott Walker bases his 2016 presidential campaign on the claim that he has won 3 elections in 4 years, at least 2 of those elections can be demonstrated to have been stolen. The embedded video below explains and highlights the media's role in election fraud.


Posted on July 25, 2015 in Uncategorized



Wisconsin 2010 Senate: True Vote Model and Cumulative Vote shares indicate Feingold won

Wisconsin 2010 Senate True Vote Analysis

Richard Charnin
June 16, 2011
Updated May 6, 2012 to include unadjusted exit polls
Updated July 21, 2015 to include Cumulative Vote share analysis

Charnin Website
Wisconsin blog posts

2010 Wisconsin Senate True Vote Model

Wisconsin exit polls
This is an updated analysis of the 2010 Wisconsin Senate race. The WI Exit Poll was forced to match the recorded vote (Johnson defeated Feingold by 52-47%). Forcing a match to the recorded vote is standard operating procedure. In order to force a match in the 2004 and 2008 presidential elections, the exit pollsters had to assume an impossible number of returning Bush voters from the previous election.

The returning voter mix should reflect the previous election True Vote, not the recorded vote. In the adjusted 2010 exit poll, 49% of the recorded votes were cast by returning Obama 2008 voters and 43% by returning McCain voters. The ratio is consistent with Obama’s 7.5% national recorded vote margin.

In Wisconsin, Obama had a 56.2% recorded share; Feingold just 47%. But Obama led the unadjusted Wisconsin exit poll by 63-36% (2,545 respondents; 2.4% margin of error). In Oregon, Obama had a 57% recorded share. Ron Wyden, a progressive Democratic senator running for re-election, had an identical 57%.

The probability is 97.5% that Obama’s true Wisconsin vote share exceeded 61%. Assuming Obama had 61%, how could Feingold have had just 47% two years later?
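The 97.5% figure is the standard one-sided confidence bound: the true share exceeds (poll share minus MoE) with 97.5% probability. A sketch with the numbers above, using the stdlib `NormalDist` (the 60.6% bound is what the post rounds to 61%):

```python
from statistics import NormalDist

exit_share = 63.0   # Obama's unadjusted WI exit poll share (%)
moe = 2.4           # stated margin of error (%)
sigma = moe / 1.96  # implied standard error

# One-sided bound: true share exceeds (share - MoE) with 97.5% probability
lower_bound = exit_share - moe   # 60.6%
prob_above = 1 - NormalDist(exit_share, sigma).cdf(lower_bound)
```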

In the 2010 WI exit poll, vote shares were not provided for returning third-party (Other) voters or new (DNV) voters, which represented 3% and 5% of the total recorded vote, respectively. In order to match the recorded vote, Johnson must have won these voters by approximately 60-35%, which is highly unlikely. In 2008, Obama won returning third-party voters by 66-20%.

A comparison of the demographic changes from 2004 to 2010 yields interesting results – but the 2010 numbers are suspect, as they are based on the 2010 recorded vote:
– Johnson needed 70% of voters who decided in the final week to win.
From 2004 to 2010:
Females: 53% > 50% (not plausible)
Voters over 45: 50% > 62% (seems high)
Party ID: 38R/35D > 37D/36R (more Democrats, so how did Feingold lose?)
Independents for Feingold: 62% > 43% (implausible)
Labor for Feingold: 66% > 59% (why would he lose his base support?)
Milwaukee County for Feingold: 68% > 61% (10% of his base defected?)
Suburban/Rural for Feingold: 51% > 43%

The True Vote Model
Using the unadjusted 2008 Wisconsin presidential exit poll as a basis, Feingold won by 52.6-45.5%, a 154,000 vote margin. The model assumes McCain returning voter turnout of 70% in 2010, compared to just 63% of Obama voters. It also assumes the adjusted exit poll shares that were required to match the recorded vote. The adjusted poll indicates that Feingold had an implausibly low 84% share of returning Obama voters. If Feingold had 89% (all else being equal), he would have won by 289,000 votes with a 56% total share.

Sensitivity Analysis
Vote shares are displayed for various scenarios of a) returning Obama and McCain voter turnout and b) Feingold’s share of returning and new voters. Although the exit poll was forced to match the recorded vote, the True Vote Model uses the adjusted vote shares as the base case. It is likely that the vote shares were also adjusted to force a match to the recorded vote.

The True Vote Base Case analysis assumes a 1.0% annual voter mortality rate, a 63% turnout of living Obama voters and a 70% turnout of McCain voters. The percentage mix of returning 2008 third-party (other) voters could not have been the 3% indicated in the WI exit poll. That would mean there were 65,000 third-party voters but there were just 44,000. Therefore, the model assigned the 1.5% excess of Other voters to New/DNV (first-time voters and others who did not vote in 2008).
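The returning-voter arithmetic in the base case is simple: prior votes, reduced by mortality over the intervening years, times the turnout rate of survivors. A sketch using the base-case rates above and approximate 2008 WI recorded totals in thousands (illustrative round numbers, not the model's exact inputs):

```python
def returning_voters(prior_votes, years=2, annual_mortality=0.01, turnout=0.63):
    # Survivors of the prior electorate, times the share who vote again
    surviving = prior_votes * (1 - annual_mortality) ** years
    return surviving * turnout

# Approximate 2008 WI recorded votes, in thousands (illustrative)
obama_back = returning_voters(1677, turnout=0.63)   # ~1035
mccain_back = returning_voters(1262, turnout=0.70)  # ~866
```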

Feingold was the winner in all scenarios of returning Obama and McCain voters. But it is important to keep in mind that the adjusted WI exit poll gave Feingold just 84% of returning Obama voters. It is difficult to accept the premise that nearly one in six Obama voters defected to Johnson.

Cumulative Vote Shares
The sharply increasing Johnson cumulative vote share in Milwaukee and other counties defies explanation; Democratic vote shares normally rise as votes accumulate in large urban precincts.


Posted on July 23, 2015 in Uncategorized



Edison Research Exit Poll Analysis: No Discussion of the Election Fraud Factor

Richard Charnin
July 20, 2015

Charnin Website
Look inside the book: Matrix of Deceit: Forcing Pre-election and Exit Polls to Match Fraudulent Vote Counts
Look inside the book: Reclaiming Science: The JFK Conspiracy

Edison Research conducts exit polls. In this report, ER once again fails to mention the Election Fraud factor, which has skewed the True Vote in national, state and local elections for decades.

Frustrated voters who have seen their elections stolen need to know the facts. The corporate media never discusses Election Fraud – the third-rail of American politics. But it is no longer the dirty little secret it was before the 2000 election. This is an analytic overview of Historical Election Fraud:

My comments are in bold italics.

Edison: Of the surveys there were 19 states where the sample size was too small for individual state demographic or other breakouts.
That is absolute nonsense. In 2012, the National Election Pool (NEP), the consortium of six media giants that funds the exit polls, said it did not want to incur the cost, so it did not run exit polls in 19 states. That was a canard. Could it be that the NEP and the pollsters did not want the full set of 50 state exit polls to be used in a True Vote analysis? The continued pattern of discrepancies would just further reveal built-in systematic fraud.

That is also why the question “How Did You Vote in 2008” was not published along with the usual cross tabs. The “How Voted” crosstab is the Smoking Gun of Election Fraud. In every election since 1988, the crosstab illustrates how pollsters adjust the number of returning Republican and Democratic voters (as well as the current vote shares) to match the recorded vote.

Edison: The majority of interviews are conducted in-person on Election Day in a probability sample that is stratified based on geography and past vote.
The past vote is the bogus recorded vote which favors the Republicans. Any stratification strategy is therefore biased and weighted to the Republicans.

Edison: The goal in this paper is not to provide a comprehensive and exhaustive discussion of the intricacies of the operational and statistical aspects of an exit poll but to provide additional discussion on various ways to incorporate probability distributions into an exit poll framework. The core of this discussion is based on discrete data in the exit poll. The examples used in this paper will be based on the data obtained from the 2012 presidential election and will specifically address the use of the Dirichlet and Normal distributions.
There is nothing intricate about forcing unadjusted exit polls to match the recorded vote. It is quite simple. And it happens in every election.

How does Edison explain the massive exit poll discrepancies?

– In 2008, Obama had 61% in the National Exit Poll (17,836 respondents) and 58% in the weighted aggregate of the state exit polls. But he had a 52.9% recorded share. The probability of the discrepancy is ZERO.

– In 2004, John Kerry had 51.7% in the unadjusted National Exit Poll (13,660 respondents). He led the state aggregate by 51.1-47.6%. But Kerry lost the recorded vote by 50.7-48.3%.

– In 2000, Al Gore led the unadjusted National Exit Poll by 48.5-46.3%. He led the state aggregate polls by 50.8-44.4%. But Gore was held to a 48% tie with Bush in the recorded vote.
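A sketch of why the 2008 discrepancy has effectively zero probability, assuming the 2% margin of error cited elsewhere on this blog for the National Exit Poll (MoE/1.96 gives the implied standard error):

```python
from statistics import NormalDist

poll = 0.61          # unadjusted 2008 National Exit Poll share
recorded = 0.529     # recorded share
sigma = 0.02 / 1.96  # implied standard error from the 2% MoE

z = (poll - recorded) / sigma  # roughly 8 standard errors
p = NormalDist().cdf(-z)       # one-tailed probability of the gap by chance
```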

Edison: A useful characteristic relating to probability distributions is the ability to use known data and then simulate from the posterior distribution. Using the exit poll framework, the statewide candidate estimates can be used and applied using the Dirichlet distribution approach. This means that the estimates from each state can be used to determine the probability that a given candidate will win each state. With the probability of success established for each state we can incorporate these probabilities into a winner-take-all Binomial distribution for all 50 states and the District of Columbia.
A simulation is not required to calculate the expected electoral vote if we have already calculated the 51 state win probabilities. The expected EV is the product sum of the probabilities and the corresponding EVs:
EV = SUMPRODUCT[prob(i) * EV(i)], i = 1 to 51.

In the 2012 True Vote Election Model, pre-election state win probabilities were calculated based on final Likely Voter (LV) polls. The model exactly projected Obama’s 332 EV. But Obama’s True Vote was much better than his recorded share. Note: LVs are a subset of Registered Voter (RV) polls which eliminate new, mostly Democratic, “unlikely” voters.

Edison: Clearly, ‘calling’ a national election based purely on sample data is not the most favorable strategy due to sampling variability. However, updating the probability that a candidate will win with additional known data in each of the given states will decrease the variability in the posterior distribution. This can be accomplished by using additional known prior data or, as is often the case in elections, by adding the final precinct election results provided shortly after the polling places close.

This is all good theoretically, but it assumes that the final precinct data has not been manipulated. In any case, a 10 million trial simulation is overkill. Only 500 Monte Carlo trials are necessary to calculate the probability of winning the electoral vote.

Edison: This can be accomplished by using additional known prior data or, as is often the case in elections, by adding the final precinct election results provided shortly after the polling places close. Due to the nature of elections, informed priors are often available and can be incorporated into the estimates to improve the probability distribution. In this way, specific models can be developed to handle states with more or less available prior data and improve the overall model.
Again, no mention of the votes being flipped in the precincts.

Edison: We can take the currently collected data and model the results using other quantities that are available. In some ways, due to the nature of linear regression, prior information is already implicitly included in exit poll regression models.
But prior election data is based on vote-miscounts. Garbage in, garbage out.

Edison: It is quite clear that the past Democrat vote from 2008 and the current exit poll vote from 2012 are very good predictors of the 2012 final precinct reported vote. Furthermore, using the classical linear regression, the R2 value is 0.95 indicating that a significant amount of variation in vote is explained by these two predictor variables.

Edison: There are two primary goals that are addressed by regression models in this paper:
1) general understanding of the data within a given state. In other words identifying variables that aid in a linear prediction of the candidate’s vote; and
2) predicting y, given x, for future observations.
Which data? The adjusted demographic data or the actual pristine data?
If Y = f(X), then X should not be forced to fit the recorded result.

Edison: For the purposes of this paper the sample of polling locations using the final end of night results are used as the response variable. Generally for all states past data tends to be a very good predictor of current results. In some states there are other predictors (e.g. precinct boundary changes, current voter registration, weather, etc.) that work well while in other states those same predictors provide no additional information and make the model unnecessarily complex.
But past data does not reflect the prior True Vote, so any regression analysis cannot predict the True Vote. It will however predict the bogus, recorded vote.

Edison: Again, the regression model presented here is an example model used for demonstration purposes (i.e. no formal model selection procedure was used). Furthermore, for this same purpose the non-informative prior is used. It’s clear from the output of the regression summary that there is a strong effect for 2008 candidate vote percentage, precincts with high Democrat vote in 2008 tend to have a very predictable Democrat vote in 2012. As one would expect the 2012 exit poll results have a strong effect when predicting the final polling location results. This example regression model for Florida is provided in Equation 2.
E(CAND_j | x, θ) = β0 + β1·CANDEP2012_j + β2·CAND2008_j
All this says is that a candidate's vote share is predictable using regression analysis based on the 2008 recorded vote and 2012 adjusted precinct exit poll data. But if the precinct data is biased, the projection will reflect the bias. And the cycle continues in all elections that follow.

Edison: We can check to see if the observed data from the polling places are consistent with the fitted model. Based on the model and the predictive distribution, the model fits quite well without outliers in any of the precincts.
Of course the model will fit the bogus recorded vote quite well because it was forced to match the recorded vote.
But what if the observed recorded precinct vote data is manipulated?

Edison: Several important conclusions about the analysis of exit poll data can be drawn from this review of approaches using probability distributions. First, it is clear that there are many probability distribution components to an exit poll.
But the prior information (recorded vote and adjusted exit polls) used in the probability analysis is bogus as long as there is no consideration of the Election Fraud Factor.
Recorded Vote = True Vote + Fraud

Edison: This research on exit polling serves as an exploration of ways to investigate and analyze data and to provide alternate, complementary approaches that may be more fully integrated into standard election (and non-election) exit polling. These procedures are only a few of the many ways that can be used to analyze exit poll data. These approaches provide an alternate way to summarize and report on these data. It also provides additional visualization and ways to view the data and how the data are distributed.
But the core problem is not addressed here. All alternative models are useless if they are based on prior and current recorded vote data which has been corrupted.

Edison: Further topics include small sample sizes, missing data, censored data, and a deeper investigation into absentee/early voting. Additionally, these approaches can be used to investigate various complex sample design techniques (e.g. stratified, cluster, multi-phase, etc.) and evaluate how the designs interact with probabilistic approaches in an exit polling context. Further hierarchical modeling may provide additional insight into the complexities of the exit poll data.
These sample design techniques are all based on recorded vote data. Why are pristine exit polls always adjusted (forced) to match the Election Day recorded vote to within 0.1%?

Proof: Unadjusted Exit Polls are forced to match the Recorded vote:


Posted on July 20, 2015 in Uncategorized



2016 Presidential Election: Will voter turnout overwhelm the built-in fraud factor?

Richard Charnin
July 16, 2015

Charnin Website
Look inside the book: Matrix of Deceit: Forcing Pre-election and Exit Polls to Match Fraudulent Vote Counts
Look inside the book: Reclaiming Science: The JFK Conspiracy


Obama won the 2012 True Vote by 55-43%.
In 2016, the Democrat wins:
– 91% of returning Obama voters,
– 6% of returning Romney voters, and
– 50% of new voters.

To win the popular vote, the GOP would need 97% of Romney voters to return compared to 77% of Obama voters. But that is implausible since Obama won the 2012 True Vote by approximately 15 million. A 20% split in 2012 voter turnout is not feasible; the GOP cannot win a fair election.

View the spreadsheet:

The Democrat would win easily if 90% of Obama 2012 voters turned out and the votes were counted fairly. But since the True Vote is never equal to the recorded vote, Democratic voters must come out in droves to overcome vote-switching and vote-dropping on proprietary voting machines which have been in place since 2002. The GOP realized that it could never win an honest election. HAVA look:

The published, official adjusted National Exit Poll is always forced to match the Election Day recorded vote. The NEP exactly matched Obama’s Election Day recorded share in 2008 and 2012. Was this just a coincidence?

In 2008, Obama had 52.71% and McCain 45.35% on Election Day.
The ADJUSTED National Exit Poll Gender cross tab matched the recorded vote exactly:
Obama 52.71%; McCain 45.35%.

Obama had 59.2% of 10.2 million Late Votes recorded after Election Day.

Obama won the UNADJUSTED 2008 National Exit Poll by 61-37%.
The UNADJUSTED 2008 state exit poll aggregate matched the True Vote Model:
Obama led both by 58.0-40.5%.

In 2012, Obama had 50.34% and Romney 48.07% on Election Day.
In the Gender crosstab, it was a near perfect match:
Obama led by 50.30-47.76%.
Obama had 60.23% of 11.7 million Late Votes.

In 2012, the National Election Pool decided not to run exit polls in 19 states.
The NEP claimed the polls were too expensive.
Or was it because the UNADJUSTED exit polls would be too revealing?

2008-2012 Adjusted National Exit Poll
..........2012 ......... 2008......... 2016 Tie Vote scenario
Gender Pct Obama Romney Obama McCain Dem Repub

Male....47.0 45.0 52.0 49.0 48.0 ... 43.4 53.7
Female..53.0 55.0 44.0 56.0 43.0 ... 54.0 45.0
Total..100.0 50.3 47.8 52.7 45.3 ... 49.0 49.1

2016 Tie Vote Scenario
2012.........Pct Dem Repub Ind Turnout
Obama.... 39.4% 91% 6% 3% 77%
Romney... 38.8% 6% 94% 0% 97%
Other..... 1.8% 47% 48% 5% 95%
DNV.......20.0% 50% 47% 3%
Votes......100% 66.2 66.4 2.5
Share......100% 49.0% 49.1% 1.9%

2012 True Vote
2008.....Pct Obama Romney Other

Obama.. 53.8% 90% 07% 3%
McCain. 37.2% 07% 93% 0%
Other....1.5% 51% 45% 4%
DNV......7.5% 55% 42% 3%
Vote.....100% 72.2 54.5 2.5
Share........ 55.9% 42.2% 1.9%
Recorded..... 65.9 60.9 2.3
Share........ 51.0% 47.2% 1.8%

Unadjusted 2008 National Exit Poll (17,836 respondents)
Total....... Sample Obama McCain Other
Respondents 17,836 10,873 6,641 322
Vote Share. 100.0% 60.96% 37.23% 1.81%

Unadjusted 2008 National Exit Poll
2004 Votes %Mix Obama McCain Other

DNV.....17.7 13.4 71 27 2
Kerry...57.1 43.4 89 09 2
Bush....50.8 38.6 17 82 1
Other....5.9 4.50 72 26 2
Total..131.5 100% 58.0% 40.4% 1.6%
Vote...........131.5 76.3 53.0 2.2
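Each total in these crosstabs is just the sum-product of the returning-voter mix and the candidate's share of each group. Checking the unadjusted 2008 table above:

```python
# Returning-voter mix (% of the 2008 electorate) and each candidate's
# share of each group, from the unadjusted 2008 National Exit Poll
mix    = {"DNV": 13.4, "Kerry": 43.4, "Bush": 38.6, "Other": 4.5}
obama  = {"DNV": 71, "Kerry": 89, "Bush": 17, "Other": 72}
mccain = {"DNV": 27, "Kerry": 9, "Bush": 82, "Other": 26}

obama_total  = sum(mix[g] * obama[g]  for g in mix) / 100  # approx. 58.0
mccain_total = sum(mix[g] * mccain[g] for g in mix) / 100  # approx. 40.4
```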

Final Adjusted 2008 National Exit Poll
(forced to match recorded vote with impossible returning Bush voters)
2004....Votes %Mix Obama McCain Other

DNV.....17.1 13 71 27 2
Kerry.. 48.6 37 89 9 2
Bush... 60.5 46 17 82 1
Other... 5.3 04 72 26 2
Total.. 131.4 100% 52.9% 45.6% 1.5%
Votes............... 69.50 59.95 2.02


Posted on July 16, 2015 in Uncategorized


Impossible Odds of a 70-68 score: The Isner–Mahut tennis match at 2010 Wimbledon

Richard Charnin
July 15, 2015

Charnin Website
Look inside the book: Matrix of Deceit: Forcing Pre-election and Exit Polls to Match Fraudulent Vote Counts
Look inside the book: Reclaiming Science: The JFK Conspiracy

The Isner–Mahut match at the 2010 Wimbledon Championships is the longest match in tennis history, measured both by time and number of games. In the Men’s Singles tournament first round, the American 23rd seed John Isner defeated the French qualifier Nicolas Mahut after 11 hours, 5 minutes of play over three days, with a final score of 6–4, 3–6, 6–7(7–9), 7–6(7–3), 70–68 for a total of 183 games.

What is the probability of a 70-68 set?

It’s the same as flipping a coin 70 times and always coming up heads
or a basketball player with a 50% average sinking 70 foul shots in a row
or a mediocre .500 baseball team winning 70 games in a row
or of 23 unnatural deaths among 1400 JFK witnesses in the first year following the assassination…

Assume the players are equally matched (each has a 50% chance of winning any game).
The probability P = .5^70 = 8.47E-22 = 1 in a BILLION TRILLION!
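Under the author's equal-odds assumption, the arithmetic checks out in two lines:

```python
p = 0.5 ** 70  # probability of 70 straight wins at even odds: about 8.47e-22
odds = 1 / p   # about 1.18e21, i.e. roughly 1 in a billion trillion
```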

It never happened before and never will.
It is by far the most astounding result in sports history.
It defies explanation.
But it actually happened.


Posted on July 15, 2015 in Uncategorized



2016 Presidential Election: True Vote Model Preliminary Analysis


Richard Charnin
July 2, 2015

My Website: Election Fraud and JFK
Look inside the book: Matrix of Deceit: Forcing Pre-election and Exit Polls to Match Fraudulent Vote Counts
Look inside the book: Reclaiming Science: The JFK Conspiracy

On Election Day 2012, 117.4 million votes were recorded. Obama led by 50.34-48.07%. The National Exit Poll was published the day after the election. It was adjusted to match Obama’s Election Day share: 50.30-47.76%. However, 11.7 million Late votes were recorded after Election Day. Obama won them by 60.2-39.8%. The surge in Obama’s late votes increased his final total margin to 51.03-47.19%. But he actually had a 55% True Vote share. The systematic red-shift struck again.

It is way too early to make any predictions 16 months in advance. There is no reason to believe the 2016 election will be fraud-free. The Democratic True Vote is always greater than the recorded vote. But we can run True Vote Model scenarios to see what it would take for Clinton, Bush and Sanders to win.

There are two calculation methods:
Method 1: returning 2012 voters are based on the recorded vote (Obama had 51%).
This calculation assumes the election will be fraudulent since the prior recorded vote was fraudulent. Therefore, returning voter estimates are implausible. In any case, the model generates vote share scenarios based on various assumptions of Obama and Romney voter turnout.

Method 2: returning voters are based on the 2012 True Vote – Obama had 55%.
This calculation assumes that the election will be essentially fraud-free since the estimated number of returning voters is plausible.

Base case assumptions:
1) 2012 recorded vote shares.
2) 1.25% annual voter mortality (total 5%)
3) 95% turnout of living Obama and Romney voters.

Sensitivity Analysis

For Clinton to win, she needs at least 90% of returning Obama voters, 7% of returning Romney voters and 55% of new voters.

View four sensitivity analysis tables and graphs:
Clinton’s total vote share and margin for incremental changes in her shares of
1) New (51-59%) and returning Romney voters (5-9%)
Vote margins (in millions): Low: 0.91, Base: 4.59, High: 8.26
2) Returning Obama (88-92%) and Romney voters (5-9%)
Vote margins: Low: 0.01, Base: 4.59, High: 9.17

3) Clinton’s total vote share for (89-97%) Obama and (93-97%) Romney voter turnout
Vote margins: Low: 0.84, Base: 4.59, High: 6.58

4) Clinton’s probability of winning the popular vote if she wins (88-92%) of returning Obama voters and (5-9%) of Romney voters.
Win probabilities: Low: 50.28%, Base: 99.46%, High: 100.00%

For Bush to win, it is a fair guess that the media will report he had 8% of returning Obama voters, 95% of returning Romney voters, and matched Clinton's 45% share of voters who did not vote (DNV) in 2012 (recorded vote basis).

To calculate what Bush really needs to win, we assume the 2012 True Vote as a basis.
He needs at least 17% of returning Obama voters, 92% of returning Romney voters, and to match Clinton's 47% of voters who did not vote in 2012.

For Sanders to win, he needs at least 50% of returning Obama voters, 20% of returning Romney voters and 40% of voters who did not vote in 2012 (recorded vote basis).

View the Clinton, Sanders, Bush Win Scenarios at the bottom of this sheet

Track record:


Posted on July 2, 2015 in 2016 election



Richard Charnin's Blog

JFK Conspiracy and Systemic Election Fraud Analysis

