Category Archives: Uncategorized

The Media and Scott Walker’s 2014 Election Fraud

Richard Charnin
July 25, 2015

This is an informative article and video from the We the People Dane County blog. It contains links to election fraud articles (including many of my blog posts) and related videos.

Analysis shows that Scott Walker’s 2012 recall and November 2014 election results are mathematically implausible and cannot represent voter intent. The chance that Scott Walker, in two consecutive election cycles, “won” with vote totals that each violate the Law of Large Numbers is zero.

Scott Walker bases his 2016 presidential campaign on the claim that he has won three elections in four years; in fact, at least two of those elections can be demonstrated to have been stolen. The embedded video below highlights the media’s role in election fraud.


Wisconsin 2010 Senate: True Vote Model and Cumulative Vote shares indicate Feingold won

Wisconsin 2010 Senate True Vote Analysis

Richard Charnin
June 16, 2011
Updated May 6, 2012 to include unadjusted exit polls
Updated July 21, 2015 to include Cumulative Vote share analysis

Charnin Website
Wisconsin blog posts

2010 Wisconsin Senate True Vote Model

Wisconsin exit polls
This is an updated analysis of the 2010 Wisconsin Senate race. The WI Exit Poll was forced to match the recorded vote (Johnson defeated Feingold by 52-47%). Forcing a match to the recorded vote is standard operating procedure. In order to force a match in the 2004 and 2008 presidential elections, the exit pollsters had to assume an impossible number of returning Bush voters from the previous election.

The returning voter mix should reflect the previous election True Vote, not the recorded vote. In the adjusted 2010 exit poll, 49% of the recorded votes were cast by returning Obama 2008 voters and 43% by returning McCain voters. The ratio is consistent with Obama’s 7.5% national recorded vote margin.

In Wisconsin, Obama had a 56.2% recorded share; Feingold just 47%. But Obama led the unadjusted Wisconsin exit poll by 63-36% (2,545 respondents; 2.4% margin of error). In Oregon, Obama had a 57% recorded share; Ron Wyden, a progressive Democratic senator running for re-election, had an identical 57%.

The probability is 97.5% that Obama’s true Wisconsin vote share exceeded 61%. Assuming Obama had 61%, how could Feingold have had just 47% two years later?

In the 2010 WI exit poll, vote shares were not provided for returning third-party (Other) voters and new (DNV) voters, which represented 3% and 5% of the total recorded vote, respectively. To match the recorded vote, Johnson must have won these voters by approximately 60-35%, which is highly unlikely; in 2008, Obama won returning third-party voters by 66-20%.

A comparison of the demographic changes from 2004 to 2010 yields interesting results, but the 2010 numbers are suspect as they are based on the 2010 recorded vote:
– Johnson needed 70% of voters who decided in the final week to win.
From 2004 > 2010:
Females: 53% > 50% (not plausible)
Voters over 45: 50% > 62% (seems high)
Party ID: 38R/35D > 37D/36R (more Democrats, so how did Feingold lose?)
Independents for Feingold: 62% > 43% (implausible)
Labor for Feingold: 66% > 59% (why would he lose his base support?)
Milwaukee County for Feingold: 68% > 61% (10% of his base defected?)
Suburban/Rural for Feingold: 51% > 43%

The True Vote Model
Using the unadjusted 2008 Wisconsin presidential exit poll as a basis, Feingold won by 52.6-45.5%, a 154,000 vote margin. The model assumes McCain returning voter turnout of 70% in 2010, compared to just 63% of Obama voters. It also assumes the adjusted exit poll shares that were required to match the recorded vote. The adjusted poll indicates that Feingold had an implausibly low 84% share of returning Obama voters. If Feingold had 89% (all else being equal), he would have won by 289,000 votes with a 56% total share.

Sensitivity Analysis
Vote shares are displayed for various scenarios of a) returning Obama and McCain voter turnout and b) Feingold’s share of returning and new voters. Although the exit poll was forced to match the recorded vote, the True Vote Model uses the adjusted vote shares as the base case. It is likely that the vote shares were also adjusted to force a match to the recorded vote.

The True Vote Base Case analysis assumes a 1.0% annual voter mortality rate, a 63% turnout of living Obama voters and a 70% turnout of McCain voters. The percentage mix of returning 2008 third-party (other) voters could not have been the 3% indicated in the WI exit poll. That would mean there were 65,000 third-party voters but there were just 44,000. Therefore, the model assigned the 1.5% excess of Other voters to New/DNV (first-time voters and others who did not vote in 2008).
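As a quick sketch of the base-case arithmetic just described (1% annual voter mortality, 63% and 70% returning turnout), here is the returning-voter calculation in Python. The 2008 vote totals below are hypothetical round numbers for illustration, not the unadjusted exit poll figures the model actually uses:

```python
# Returning-voter pools under the base case described above.
# Assumption: the 2008 totals (in millions) are hypothetical round numbers.
MORTALITY = 0.01   # 1% annual voter mortality
YEARS = 2          # 2008 -> 2010

def returning(votes_2008, turnout):
    """Living 2008 voters who turn out again in 2010 (millions)."""
    return votes_2008 * (1 - MORTALITY) ** YEARS * turnout

obama_ret  = returning(1.68, 0.63)   # 63% turnout of living Obama voters
mccain_ret = returning(1.26, 0.70)   # 70% turnout of living McCain voters

# Moving Feingold's share of returning Obama voters by 5 points
# (84% -> 89%) shifts the margin by twice those 5 points of that pool:
swing = 2 * 0.05 * obama_ret
print(round(obama_ret, 3), round(mccain_ret, 3), round(swing, 3))  # 1.037 0.864 0.104
```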

Feingold was the winner in all scenarios of returning Obama and McCain voters. But it is important to keep in mind that the adjusted WI exit poll gave Feingold just 84% of returning Obama voters. It is difficult to accept the premise that nearly one of six Obama voters defected to Johnson.

Cumulative Vote Shares
The sharply increasing Johnson cumulative vote share in Milwaukee and other counties defies explanation: Democratic vote shares normally rise in large urban voting precincts.


Edison Research Exit Poll Analysis: No Discussion of the Election Fraud Factor

Richard Charnin
July 20, 2015

Charnin Website
Look inside the book: Matrix of Deceit: Forcing Pre-election and Exit Polls to Match Fraudulent Vote Counts
Look inside the book: Reclaiming Science: The JFK Conspiracy

Edison Research conducts exit polls. In this report, ER once again fails to mention the Election Fraud factor, which has skewed the True Vote in national, state and local elections for decades.

Frustrated voters who have seen their elections stolen need to know the facts. The corporate media never discusses Election Fraud – the third-rail of American politics. But it is no longer the dirty little secret it was before the 2000 election. This is an analytic overview of Historical Election Fraud:

My comments are in bold italics.

Edison: Of the surveys there were 19 states where the sample size was too small for individual state demographic or other breakouts.
That is absolute nonsense. In 2012, the National Election Pool (NEP) of six media giants, which funds the exit polls, said it did not want to incur the cost, so it would not run exit polls in 19 states. That was a canard. Could it be that the NEP and the pollsters did not want the full set of 50 state exit polls to be used in a True Vote analysis? The continued pattern of discrepancies would just further reveal built-in systematic fraud.

That is also why the question “How Did You Vote in 2008” was not published along with the usual cross tabs. The “How Voted” crosstab is the Smoking Gun of Election Fraud. In every election since 1988, the crosstab illustrates how pollsters adjust the number of returning Republican and Democratic voters (as well as the current vote shares) to match the recorded vote.

Edison: The majority of interviews are conducted in-person on Election Day in a probability sample that is stratified based on geography and past vote.
The past vote is the bogus recorded vote which favors the Republicans. Any stratification strategy is therefore biased and weighted to the Republicans.

Edison: The goal in this paper is not to provide a comprehensive and exhaustive discussion of the intricacies of the operational and statistical aspects of an exit poll but to provide additional discussion on various ways to incorporate probability distributions into an exit poll framework. The core of this discussion is based on discrete data in the exit poll. The examples used in this paper will be based on the data obtained from the 2012 presidential election and will specifically address the use of the Dirichlet and Normal distributions.
There is nothing intricate about forcing unadjusted exit polls to match the recorded vote. It is quite simple. And it happens in every election.

How does Edison explain the massive exit poll discrepancies?

– In 2008, Obama had 61% in the National Exit Poll (17836 respondents) and 58% in the weighted aggregate of the state exit polls. But he had a 52.9% recorded share. The probability of the discrepancy is ZERO.

– In 2004, John Kerry had 51.7% in the unadjusted National Exit Poll (13,660 respondents). He led the state aggregate by 51.1-47.6%. But Kerry lost the recorded vote by 50.7-48.3%.

– In 2000, Al Gore led the unadjusted National Exit Poll by 48.5-46.3%. He led the state aggregate polls by 50.8-44.4%. But Gore was held to a 48% tie with Bush in the recorded vote.

Edison: A useful characteristic relating to probability distributions is the ability to use known data and then simulate from the posterior distribution. Using the exit poll framework, the statewide candidate estimates can be used and applied using the Dirichlet distribution approach. This means that the estimates from each state can be used to determine the probability that a given candidate will win each state. With the probability of success established for each state we can incorporate these probabilities into a winner-take-all Binomial distribution for all 50 states and the District of Columbia.
A simulation is not required to calculate the expected electoral vote if we have already calculated the 51 state win probabilities. The expected EV is the product sum of the probabilities and corresponding EVs:
EV = SUMPRODUCT[prob(i) * EV(i)], where i = 1 to 51.
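The product-sum formula can be checked with a toy calculation; the three state win probabilities below are made up purely for illustration:

```python
# Expected electoral vote as a probability-weighted sum (SUMPRODUCT).
# The win probabilities here are hypothetical, not from any actual model.
probs = {"FL": 0.60, "OH": 0.75, "PA": 0.90}   # P(Dem wins state)
evs   = {"FL": 29,   "OH": 18,   "PA": 20}     # electoral votes at stake

expected_ev = sum(probs[s] * evs[s] for s in probs)
print(round(expected_ev, 1))  # 48.9  (= 0.60*29 + 0.75*18 + 0.90*20)
```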

In the 2012 True Vote Election Model, pre-election state win probabilities were calculated based on final Likely Voter (LV) polls. The model exactly projected Obama’s 332 EV. But Obama’s True Vote was much better than his recorded share. Note: LVs are a subset of Registered Voter (RV) polls which eliminate new, mostly Democratic, “unlikely” voters.

Edison: Clearly, ‘calling’ a national election based purely on sample data is not the most favorable strategy due to sampling variability. However, updating the probability that a candidate will win with additional known data in each of the given states will decrease the variability in the posterior distribution. This can be accomplished by using additional known prior data or, as is often the case in elections, by adding the final precinct election results provided shortly after the polling places close.

This is all good theoretically, but it assumes that the final precinct data has not been manipulated. In any case, a 10 million trial simulation is overkill. Only 500 Monte Carlo trials are necessary to calculate the probability of winning the electoral vote.
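For comparison, here is a minimal Monte Carlo sketch of the win-probability calculation described above, again with made-up state probabilities and a hypothetical safe-state EV base:

```python
import random

# Monte Carlo estimate of the electoral-vote win probability.
# Assumptions: state win probabilities and the 237 safe-state EV base
# are hypothetical numbers for illustration only.
random.seed(1)
probs = {"FL": 0.60, "OH": 0.75, "PA": 0.90}
evs   = {"FL": 29,   "OH": 18,   "PA": 20}
base_dem_ev = 237   # hypothetical EVs already banked from safe states
TRIALS = 500        # a few hundred trials suffice, as noted above

wins = 0
for _ in range(TRIALS):
    ev = base_dem_ev + sum(evs[s] for s in probs if random.random() < probs[s])
    if ev >= 270:
        wins += 1

print(wins / TRIALS)  # estimated win probability (~0.85 for these inputs)
```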

Edison: This can be accomplished by using additional known prior data or, as is often the case in elections, by adding the final precinct election results provided shortly after the polling places close. Due to the nature of elections, informed priors are often available and can be incorporated into the estimates to improve the probability distribution. In this way, specific models can be developed to handle states with more or less available prior data and improve the overall model.
Again, no mention of the votes being flipped in the precincts.

Edison: We can take the currently collected data and model the results using other quantities that are available. In some ways, due to the nature of linear regression, prior information is already implicitly included in exit poll regression models.
But prior election data is based on vote-miscounts. Garbage in, garbage out.

Edison: It is quite clear that the past Democrat vote from 2008 and the current exit poll vote from 2012 are very good predictors of the 2012 final precinct reported vote. Furthermore, using the classical linear regression, the R2 value is 0.95 indicating that a significant amount of variation in vote is explained by these two predictor variables.

Edison: There are two primary goals that are addressed by regression models in this paper:
1) general understanding of the data within a given state. In other words identifying variables that aid in a linear prediction of the candidate’s vote; and
2) predicting y, given x, for future observations.
Which data? The adjusted demographic data or the actual pristine data?
If Y = f(X), then X should not be forced to fit the recorded result.

Edison: For the purposes of this paper the sample of polling locations using the final end of night results are used as the response variable. Generally for all states past data tends to be a very good predictor of current results. In some states there are other predictors (e.g. precinct boundary changes, current voter registration, weather, etc.) that work well while in other states those same predictors provide no additional information and make the model unnecessarily complex.
But past data does not reflect the prior True Vote, so any regression analysis cannot predict the True Vote. It will however predict the bogus, recorded vote.

Edison: Again, the regression model presented here is an example model used for demonstration purposes (i.e. no formal model selection procedure was used). Furthermore, for this same purpose the non-informative prior is used. It’s clear from the output of the regression summary that there is a strong effect for 2008 candidate vote percentage, precincts with high Democrat vote in 2008 tend to have a very predictable Democrat vote in 2012. As one would expect the 2012 exit poll results have a strong effect when predicting the final polling location results. This example regression model for Florida is provided in Equation 2.
E (CANDj |x,θ) = β0 +β1 ·CANDEP2012j + β2 ·CAND2008j
All this says is that a candidate’s vote share is predictable using regression analysis based on the 2008 recorded vote and 2012 adjusted precinct exit poll data. But if the precinct data is biased, the projection will reflect the bias. And the cycle continues in all elections that follow.

Edison: We can check to see if the observed data from the polling places are consistent with the fitted model. Based on the model and the predictive distribution, the model fits quite well without outliers in any of the precincts.
Of course the model will fit the bogus recorded vote quite well because it was forced to match the recorded vote.
But what if the observed recorded precinct vote data is manipulated?

Edison: Several important conclusions about the analysis of exit poll data can be drawn from this review of approaches using probability distributions. First, it is clear that there are many probability distribution components to an exit poll.
But the prior information (recorded vote and adjusted exit polls) used in the probability analysis is bogus as long as there is no consideration of the Election Fraud Factor.
Recorded Vote = True Vote + Fraud

Edison: This research on exit polling serves as an exploration of ways to investigate and analyze data and to provide alternate, complementary approaches that may be more fully integrated into standard election (and non-election) exit polling. These procedures are only a few of the many ways that can be used to analyze exit poll data. These approaches provide an alternate way to summarize and report on these data. It also provides additional visualization and ways to view the data and how the data are distributed.
But the core problem is not addressed here. All alternative models are useless if they are based on prior and current recorded vote data which has been corrupted.

Edison: Further topics include small sample sizes, missing data, censored data, and a deeper investigation into absentee/early voting. Additionally, these approaches can be used to investigate various complex sample design techniques (e.g. stratified, cluster, multi-phase, etc.) and evaluate how the designs interact with probabilistic approaches in an exit polling context. Further hierarchical modeling may provide additional insight into the complexities of the exit poll data.
These sample design techniques are all based on recorded vote data. Why are pristine exit polls always adjusted (forced) to match the Election Day recorded vote to within 0.1%?

Proof: Unadjusted Exit Polls are forced to match the Recorded vote:


2016 Presidential Election: Will voter turnout overwhelm the built-in fraud factor?

Richard Charnin
July 16, 2015

Charnin Website
Look inside the book: Matrix of Deceit: Forcing Pre-election and Exit Polls to Match Fraudulent Vote Counts
Look inside the book: Reclaiming Science: The JFK Conspiracy

2016 Presidential Election: Will voter turnout overwhelm the built-in fraud factor?

Obama won the 2012 True Vote by 55-43%
In 2016, the Democrat wins
91% of returning Obama voters,
6% of Romney voters and
50% of New voters.

To win the popular vote, the GOP would need 97% of Romney voters to return compared to 77% of Obama voters. But that is implausible since Obama won the 2012 True Vote by approximately 15 million. A 20% split in 2012 voter turnout is not feasible; the GOP cannot win a fair election.

View the spreadsheet:

The Democrat would win easily if 90% of Obama 2012 voters turned out and the votes were counted fairly. But since the True Vote is never equal to the recorded vote, Democratic voters must come out in droves to overcome vote-switching and vote-dropping on proprietary voting machines which have been in place since 2002. The GOP realized that it could never win an honest election. HAVA look:

The published, official adjusted National Exit Poll is always forced to match the Election Day recorded vote. The NEP exactly matched Obama’s Election Day recorded share in 2008 and 2012. Was this just a coincidence?

In 2008, Obama had 52.71% and McCain 45.35% on Election Day.
The ADJUSTED National Exit Poll Gender cross tab matched the recorded vote exactly:
Obama 52.71%; McCain 45.35%.

Obama had 59.2% of 10.2 million Late Votes recorded after Election Day.

Obama won the UNADJUSTED 2008 National Exit Poll by 61-37%.
The UNADJUSTED 2008 state exit poll aggregate matched the True Vote Model:
Obama led both by 58.0-40.5%.

In 2012, Obama had 50.34% and Romney 48.07% on Election Day.
In the Gender crosstab, it was a near perfect match:
Obama led by 50.30-47.76%.
Obama had 60.23% of 11.7 million Late Votes.

In 2012, the National Election Pool decided not to run exit polls in 19 states.
The NEP claimed the polls were too expensive.
Or was it because the UNADJUSTED exit polls would be too revealing?

2008-2012 Adjusted National Exit Poll
..........2012 ......... 2008......... 2016 Tie Vote scenario
Gender Pct Obama Romney Obama McCain Dem Repub

Male....47.0 45.0 52.0 49.0 48.0 ... 43.4 53.7
Female..53.0 55.0 44.0 56.0 43.0 ... 54.0 45.0
Total..100.0 50.3 47.8 52.7 45.3 ... 49.0 49.1

2016 Tie Vote Scenario
2012.........Pct Dem Repub Ind Turnout
Obama.... 39.4% 91% 6% 3% 77%
Romney... 38.8% 6% 94% 0% 97%
Other..... 1.8% 47% 48% 5% 95%
DNV.......20.0% 50% 47% 3%
Votes......100% 66.2 66.4 2.5
Share......100% 49.0% 49.1% 1.9%
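The tie-scenario shares in the table above are mix-weighted sums of the vote shares; a few lines verify the arithmetic:

```python
# Reproducing the 2016 tie-vote scenario: each candidate's total share is
# the sum over returning-voter groups of (group mix) * (group vote share),
# using the mix and share percentages from the table above.
mix = {"Obama": 0.394, "Romney": 0.388, "Other": 0.018, "DNV": 0.200}
dem = {"Obama": 0.91, "Romney": 0.06, "Other": 0.47, "DNV": 0.50}
rep = {"Obama": 0.06, "Romney": 0.94, "Other": 0.48, "DNV": 0.47}

dem_share = sum(mix[g] * dem[g] for g in mix)
rep_share = sum(mix[g] * rep[g] for g in mix)
print(round(dem_share * 100, 1), round(rep_share * 100, 1))  # 49.0 49.1
```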

2012 True Vote
2008.....Pct Obama Romney Other

Obama.. 53.8% 90% 07% 3%
McCain. 37.2% 07% 93% 0%
Other....1.5% 51% 45% 4%
DNV......7.5% 55% 42% 3%
Vote.....100% 72.2 54.5 2.5
Share........ 55.9% 42.2% 1.9%
Recorded..... 65.9 60.9 2.3
Share........ 51.0% 47.2% 1.8%

Unadjusted 2008 National Exit Poll (17,836 respondents)
Total....... Sample Obama McCain Other
Respondents 17,836 10,873 6,641 322
Vote Share. 100.0% 60.96% 37.23% 1.81%

Unadjusted 2008 National Exit Poll
2004 Votes %Mix Obama McCain Other

DNV.....17.7 13.4 71 27 2
Kerry...57.1 43.4 89 09 2
Bush....50.8 38.6 17 82 1
Other....5.9 4.50 72 26 2
Share..131.5 100% 58.0% 40.4% 1.6%
Vote...........131.5 76.3 53.0 2.2

Final Adjusted 2008 National Exit Poll
(forced to match recorded vote with impossible returning Bush voters)
2004....Votes %Mix Obama McCain Other

DNV.....17.1 13 71 27 2
Kerry.. 48.6 37 89 9 2
Bush... 60.5 46 17 82 1
Other... 5.3 04 72 26 2
Total.. 131.4 100% 52.9% 45.6% 1.5%
Votes............... 69.50 59.95 2.02



Impossible Odds of a 70-68 score: The Isner–Mahut tennis match at 2010 Wimbledon

Richard Charnin
July 15, 2015

Charnin Website
Look inside the book: Matrix of Deceit: Forcing Pre-election and Exit Polls to Match Fraudulent Vote Counts
Look inside the book: Reclaiming Science: The JFK Conspiracy

The Isner–Mahut match at the 2010 Wimbledon Championships is the longest match in tennis history, measured both by time and number of games. In the Men’s Singles tournament first round, the American 23rd seed John Isner defeated the French qualifier Nicolas Mahut after 11 hours, 5 minutes of play over three days, with a final score of 6–4, 3–6, 6–7(7–9), 7–6(7–3), 70–68 for a total of 183 games.

What is the probability of a 70-68 set?

It’s the same as flipping a coin 70 times and always coming up heads
or a basketball player with a 50% average sinking 70 foul shots in a row
or a mediocre .500 baseball team winning 70 games in a row
or of 23 unnatural deaths among 1400 JFK witnesses in the first year following the assassination…

Assume the players are equally matched (each has a 50% chance of winning any given game).
The probability P = 0.5^70 = 8.47E-22 = 1 in a BILLION TRILLION!
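The arithmetic is a one-liner:

```python
# Probability of one player winning 70 games in a row at 50% per game,
# i.e. the coin-flip analogy above.
p = 0.5 ** 70
print(p)  # ≈ 8.47e-22, about 1 in a billion trillion
```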

It never happened before and never will.
It is by far the most astounding result in sports history.
It defies explanation.
But it actually happened.




JFK: Sensitivity Analysis of Unnatural Deaths and Homicides


Richard Charnin
June 30, 2015

JFK Blog Posts
Twitter Chronological Links

The cover of Reclaiming Science: The JFK Conspiracy shows a graph displaying probabilities of unnatural witness deaths assuming 1500, 2000 and 2500 witnesses over a range of 0 to 50 unnatural deaths. Of the 122 suspicious deaths in the JFK Calc spreadsheet, 78 were ruled unnatural (34 homicides, 24 accidents, 16 suicides, 4 unknown).

The x-coordinate of the peak in each curve is the EXPECTED number of unnatural deaths given the number of JFK-related witnesses.

The graph illustrates the power of SENSITIVITY ANALYSIS to display how a target variable (the probability) changes as input variables change in value. In the JFK Calc spreadsheet, the calculation is given by the Poisson function.

N= 1500 = JFK-related witnesses (the universe)
n= 78 officially ruled unnatural deaths
T= 15 years (1964-78)
R= 0.000822 = average national unnatural mortality rate (unweighted)

We need to calculate E, the expected number of unnatural deaths:
E = N*R*T = 1500 * 0.000822 * 15 = 18.5
Now we can calculate P, the probability of 78 unnatural deaths:
P = POISSON(78, 18.5, false) = 4.15E-25 (1 in a TRILLION TRILLION)
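The same E = N*R*T and Poisson calculation can be done in a few lines of Python; the pmf is computed in log space because a naive factorial of 78 underflows floating point:

```python
from math import exp, lgamma, log

# Log-space Poisson pmf, equivalent to the spreadsheet POISSON(n, mean, false).
def poisson_pmf(n, mean):
    return exp(n * log(mean) - mean - lgamma(n + 1))

N, R, T = 1500, 0.000822, 15
E = N * R * T                 # expected unnatural deaths
P = poisson_pmf(78, E)        # probability of exactly 78 unnatural deaths
print(round(E, 1))            # 18.5
print(P < 1e-24)              # True: on the order of 1 in a trillion trillion
```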

The graph shows the probability of
30 unnatural deaths; 1500 witnesses: 0.0034 (1 in 300)
40 unnatural deaths; 2000 witnesses: 0.0011 (1 in 1000)
50 unnatural deaths; 2500 witnesses: 0.0003 (1 in 3000)

The probability of 78 unnatural deaths for
1500 witnesses: 4.15 E-25 (base case: 1 in a trillion trillion)
2000 witnesses: 5.00 E-18 (1 in a 200,000 trillion)
2500 witnesses: 3.92 E-13 (1 in 2 trillion)

Note: The JFK-weighted unnatural rate is 0.000247 (weighted by official cause of death). Using this rate, the probabilities of the unnatural deaths are much lower than the above unweighted probabilities of 1 in trillions. So why bother?

In 1964-78, there were at least 34 officially ruled JFK-related homicides. A statistical estimation of the expected cause of death indicates that approximately 50 officially ruled accidents, suicides, heart attacks and sudden cancers were likely homicides (at least 80 homicides among the 122 suspicious deaths).

Given the average 0.000084 homicide rate for 1964-78, the probability of
34 homicides; 1500 witnesses: 1.4 E-30 (1 in a million trillion trillion)
50 homicides; 2000 witnesses: 3.7 E-46 (1 in a billion trillion trillion trillion)
80 homicides; 3000 witnesses: 6.7 E-75 (1/trillion^6)
80 homicides; 1500 witnesses: 3.7 E-98 (1/trillion^8)

If we triple the average homicide rate to 0.000252, the probability of
34 homicides; 1500 witnesses: 5.4 E-16 (1 in 2,000 trillion)
50 homicides; 2000 witnesses: 1.7 E-24 (1 in a trillion trillion)
80 homicides; 3000 witnesses: 5.0 E-40 (1 in a trillion trillion trillion)
80 homicides; 1500 witnesses: 1.3 E-61 (1/trillion^5)

View the JFK Calc: Sensitivity Analysis Tables

Of the 656 JFK-related individuals, 70 died suspiciously
(44 were ruled unnatural, including 22 homicides).
The probability of…
44 Unnatural deaths: 5.50E-19 (1 in one million trillion)
22 Homicides: 5.89E-24 (1 in 100 billion trillion)

JFK Calc: Simkin JFK Index

552 testified, 31 deaths suspicious, of which 16 were ruled unnatural (4 homicides).

Unnatural deaths; probability
16 4.91E-09 (1 in 200 million – base case)
18 8.81E-11 (1 in 11 billion)
21 1.43E-13 (1 in 7 trillion)

Homicides; probability
4 4.92E-03 (1 in 200 – base case)
10 3.77E-11 (1 in 30 billion)
17 3.11E-18 (1 in 300,000 trillion)

JFK Calc: Called to Testify

20 Suspicious deaths: 13 unnatural, 14 testified at Warren Commission

Witnesses; Probability of Unnatural Death
300; 3.60E-10 (1 in 2.7 billion)
400; 1.06E-08 (1 in 90 million)
500; 1.36E-07 (1 in 7 million)
600; 1.02E-06 (1 in 1 million)

JFK Calc: Dealey Plaza

HSCA – 1977
Suspicious deaths of 7 FBI officials called to testify in 6 month period
Official cause of death: 5 heart attacks, 2 accidents
FBI est.
called ; Probability
8 8.72E-18 (1 in 100,000 trillion)
20 5.22E-15 (1 in 200 trillion)
50 3.07E-12 (1 in 300 billion)
100 3.68E-10 (1 in 1 billion)

I calculated a 1 in 100,000 trillion probability of 18 material witness deaths (13 unnatural) in the three years following the assassination (the actual total was over 40).
Weighted average unnatural mortality rate: 0.000209

Witnesses; probability of at least 13 unnatural deaths
454; 9.83 E-18 (1 in 100,000 trillion)
600; 3.36 E-16 (1 in 3,000 trillion)
800; 1.25 E-14 (1 in 80 trillion)
1000; 2.00 E-13 (1 in 5 trillion)
1200; 1.89 E-12 (1 in 500 billion)
1500; 2.85 E-11 (1 in 35 billion)
2000; 8.76 E-10 (1 in 1 billion)
5000; 1.99 E-05 (1 in 50,000)
10000; 7.07 E-03 (1 in 140)

Michael Benson: Who’s Who in the JFK Assassination – 1,400+ JFK-related individuals (97 suspicious deaths).
John Simkin: Spartacus Educational JFK Index – 656 JFK-related individuals (66 suspicious deaths).
Jim Marrs: Crossfire
Richard Belzer and David Wayne: Hit List
Craig Roberts: Dead Witnesses




JFK: Analysis of Suspicious Witness Deaths in Simkin’s JFK Index


Richard Charnin
Updated: June 27, 2015
Click Reclaiming Science: The JFK Conspiracy to look inside the book.
JFK Blog Posts
JFK Calc Spreadsheet Database

This is a summary update of a previous post on John Simkin’s Index of 656 JFK-related individuals.

Simkin’s Index:

The list is in JFK Calc for reference and probability calculations.

Sixty-six (66) individuals in the JFK Index are also included among the 122 suspicious deaths in the JFK Calc spreadsheet. Of the 122 suspicious deaths in JFK Calc, approximately 67 were called to testify in four investigations. The fact that both lists share more than 60 names shows that the deaths are relevant; naysayers can no longer make the argument that they are not JFK-related.

Of the 66 suspicious deaths in Simkin’s index, 42 were OFFICIALLY RULED UNNATURAL, including 22 homicides. Only 8 unnatural deaths and ONE homicide would be expected in a random group of 656 from 1964-78 based on historical mortality rates.
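The expected counts quoted above follow from the same E = N*R*T arithmetic used in the sensitivity-analysis post, applying the unnatural-death and homicide rates cited in these posts to the 656 individuals over 1964-78:

```python
# Expected unnatural deaths and homicides in a random group of 656 people
# over 15 years, using the rates cited in these posts (0.000822/yr unnatural,
# 0.000084/yr homicide).
N, YEARS = 656, 15
exp_unnatural = N * 0.000822 * YEARS   # rounds to the "8" quoted above
exp_homicide  = N * 0.000084 * YEARS   # roughly the "ONE" quoted above
print(round(exp_unnatural, 1), round(exp_homicide, 2))  # 8.1 0.83
```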

The probability of 22 homicides among the 656 is 1 in 150 billion trillion (6.4E-24). If we triple the 0.000084 national homicide rate, the probability of 22 homicides is higher: 1 in 23 trillion (4.3E-14).

But these probabilities are too HIGH. Statistical expectation indicates that of the 45 suspicious deaths (officially ruled accidents, suicides, heart attacks and sudden cancers) approximately 26 were HOMICIDES. So there were approximately 48 homicides among the 66 suspicious deaths.

The probability of 48 homicides from 1964-78 among the 656 in the JFK Index is 1 in a trillion trillion trillion trillion trillion!

The Simkin JFK Index of 656 key individuals consists of 4 categories.
Suspicious deaths include:
10 of 190 Important Figures;
15 of 86 Important Witnesses;
5 of 206 Investigators, Researchers and Journalists;
36 of 174 Possible Conspirators


Richard Charnin's Blog

JFK Conspiracy and Systemic Election Fraud Analysis

