Guilt does not require a normally working psyche. It requires understanding the difference between right and wrong. And by that we mean understanding that society has made some things illegal and expects you not to do them.
I am certain that Ethan Crumbley knew that some things are illegal. Therefore he is capable of guilt.
In all of those fringe cases, 12 people thought the person was guilty beyond any reasonable doubt. And beyond any reasonable doubt basically means 100% certainty (ie any doubt is unreasonable).
People who think it’s ok to execute someone when guilt is “100% certain” are the people who designed the current system.
I think you can. For example, I am 100% sure that Ethan Crumbley shot his classmates. (That doesn’t mean I think he should be executed though).
If they get one “unimportant” fact wrong, then why should I trust the “important” facts?
If none of the facts need to be correct except that police pointed a gun at someone’s head, why read the other 2000+ words in the article?
I don’t think they said there is footage of him buying rope; they said there is evidence of him buying rope. That could be something like a credit card charge, eyewitness reports, etc.
he was close enough to the ground that at least his feet were touching
FWIW, he was found in a seated position
Even if he was the only one saying that, why are we giving him credit for it?
Maybe he was the first, but going forward anyone can follow his example and say things like, “Harris has a very real chance of winning. So does Trump. Also, Cruz and Allred both have very real chances of winning. So do Elizabeth Warren and her opponent, John Deaton”.
Silver showed that if you hedge by replacing a testable prediction with a tautology, then you can avoid criticism regardless of the result. I don’t think that is useful political analysis.
They can. IIRC, Amazon apps would check to make sure the Amazon App Store was still installed. And I’m pretty sure Netflix games stop working when you unsubscribe from Netflix.
Because the developers want to check whether you got the app from the Play Store.
If the developers don’t care where you get the app from, then they won’t check.
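For illustration, the check can be as simple as comparing the reported installer package against an allow-list. Here’s a hypothetical sketch of that logic (on Android the installer name would come from PackageManager, e.g. getInstallSourceInfo; the function name and structure here are my own invention):

```python
# Hypothetical sketch: gate a feature on where the app was installed from.
# On Android the installer would be read via PackageManager (e.g.
# getInstallSourceInfo); here it is just a parameter so the logic runs
# anywhere.

ALLOWED_INSTALLERS = {"com.android.vending"}  # the Play Store's package name

def store_check_passes(installer_package):
    """Return True only if the app came from an allowed store."""
    return installer_package in ALLOWED_INSTALLERS

print(store_check_passes("com.android.vending"))  # True: got it from Play
print(store_check_passes(None))                   # False: sideloaded APK
```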
Ok. But Silver’s model is proprietary and the details of its workings have not been presented to the public. So on what basis should we trust it?
I don’t expect a model to be perfect. But it is certainly possible for one model to be better than another; for example, one might think the Weather Channel forecast is less accurate than AccuWeather’s (at least for your region).
Which, in turn, means that it is possible to decide when one forecast is more “right” or “wrong” than another, because what other basis would you have for judging which is better?
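To make that concrete, here’s a toy sketch (all numbers invented, and the two forecasters are just placeholders) of one common way to score two sets of rain forecasts against what actually happened:

```python
# Toy comparison of two forecasters using the Brier score (mean squared
# error between predicted probability and outcome). Lower is better.
# All numbers below are made up for illustration.

observed = [1, 0, 0, 1, 1, 0, 1]  # 1 = it rained that day, 0 = it didn't

forecasts = {
    "Forecaster A": [0.9, 0.2, 0.1, 0.7, 0.8, 0.3, 0.6],
    "Forecaster B": [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
}

def brier(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(outcomes)

for name, probs in forecasts.items():
    print(f"{name}: Brier score {brier(probs, observed):.3f}")
```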
First, we need to distinguish Silver’s state-by-state prediction from his “win probability”. The former was pretty unremarkable in 2016, and I think we can agree that, like everyone else, he incorrectly predicted WI, MI, and PA.
However, his win probability is a different algorithm. It considers alternate scenarios, eg Trump wins Pennsylvania but loses Michigan. It somehow finds the probability of each scenario, and somehow calculates a total probability of winning. This does not correspond to one specific set of states that Silver thinks Trump will win. In 2016, it came up with a 28% probability of Trump winning.
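For what it’s worth, here’s a toy sketch of that scenario-aggregation idea: simulate many elections by flipping each swing state according to an assumed probability, and count how often the candidate reaches 270 electoral votes. All the numbers are invented, and Silver’s real (proprietary) model would also correlate the states rather than flipping them independently:

```python
import random

# Toy sketch of scenario aggregation -- NOT Silver's actual model, which
# is proprietary. Simulate many elections, flip each swing state with an
# assumed win probability, and count how often the candidate reaches 270
# electoral votes. All numbers here are invented.

STATES = {            # state: (assumed win probability, electoral votes)
    "PA": (0.45, 20),
    "MI": (0.40, 16),
    "WI": (0.42, 10),
    "FL": (0.50, 29),
}
BASE_EV = 250         # assumed votes from states treated as safe
TO_WIN = 270

def simulate_once():
    ev = BASE_EV
    for prob, votes in STATES.values():
        if random.random() < prob:  # candidate carries this state
            ev += votes
    return ev >= TO_WIN

trials = 100_000
wins = sum(simulate_once() for _ in range(trials))
print(f"Estimated win probability: {wins / trials:.1%}")
```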
You say that’s not “getting it wrong”. In that case, what would count as “getting it wrong”? Are we just supposed to have blind faith that Silver’s probability calculation, and all its underlying assumptions, are correct? Because when the candidate with a higher win probability wins, that validates Silver’s model. And when that candidate loses, that “is not evidence of an issue with the model”. Heads I win, tails don’t count.
If I built a model with different assumptions and came up with a 72% probability of Trump winning in 2016, that differs from Silver’s result. Does that mean that I “got it wrong”? If neither of us got it wrong, what does it mean that Trump’s probability of winning is simultaneously 28% and 72%?
And if there is no way for us to tell, even in retrospect, whether 28% is wrong or 72% is wrong or both are wrong, if both are equally compatible with the reality of Trump winning, then why pay any attention to those numbers at all?
We are talking about testing a model in the real world. When you evaluate a model, you also evaluate the assumptions made by the model.
Let’s consider a similar example. You are at a carnival. You hand a coin to a carny. He offers to pay you $100 if he flips heads. If he flips tails then you owe him $1.
You: The coin I gave him was unweighted so the odds are 50-50. This bet will pay off.
Your spouse: He’s a carny. You’re going to lose every time.
The coin is flipped, and it’s tails. Who had the better prediction?
You maintain you had the better prediction because you know you gave him an unweighted coin. So you hand him a dollar to repeat the trial. You end up losing $50 without winning once.
You finally reconsider your assumptions. Perhaps the carny switched the coin. Perhaps the carny knows how to control the coin in the air. If it turns out that your assumptions were violated, then your spouse’s original prediction was better than yours: you’re going to lose every time.
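If it helps, the two predictions are easy to replay in code, using the numbers from the example ($100 on heads, $1 on tails, 50 flips):

```python
import random

# Replaying the carnival bet under each side's assumption: win $100 on
# heads, pay $1 on tails, over 50 flips.

def play(n_flips, p_heads):
    """Total winnings over n_flips given a probability of heads."""
    total = 0
    for _ in range(n_flips):
        if random.random() < p_heads:
            total += 100  # carny pays out on heads
        else:
            total -= 1    # you hand over $1 on tails
    return total

print("Your assumption (fair coin):   ", play(50, 0.5))  # usually a big profit
print("Spouse's assumption (no heads):", play(50, 0.0))  # always -50
```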
Likewise, in order to evaluate Silver’s model we need to consider the possibility that his model’s many assumptions may contain flaws. Especially if his prediction, like yours in this example, differs sharply from real-world outcomes. If the assumptions are flawed, then the prediction could well be flawed too.
Person B’s predicted outcome was closer to the truth.
Perhaps person A’s prediction would improve if multiple trials were allowed. Perhaps their underlying assumptions are wrong (ie the coins are not unweighted).
You are describing how to evaluate polling methods. And I agree: you do this by comparing an actual election outcome (eg statewide vote totals) to the results of your polling method.
But I am not talking about polling methods, I am talking about Silver’s win probability. This is some proprietary method that takes other people’s polls as input (Silver is not a pollster) and outputs a number, like 28%. There are many possible ways to combine the poll results, giving different win probabilities. How do we evaluate Silver’s method, separately from the polls?
I think the answer is basically the same: we compare it to an actual election outcome. Silver said Trump had a 28% win probability in 2016, which means he should win 28% of the time. The actual election outcome is that Trump won 100% of his 2016 elections. So as best we can tell, Silver’s win probability was quite inaccurate.
Now, if we could rerun the 2016 election, maybe his estimate would look better over multiple trials. But we can’t do that; all we can ever do is compare 28% to 100%.
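To show what that single-trial comparison looks like in practice, here’s a minimal sketch scoring both numbers from upthread (Silver’s 28% and the 72% from my hypothetical alternate model) against the one observed outcome. The Brier score is my choice of yardstick here, not anything from Silver’s method:

```python
# Score each forecast against the single observed outcome (Trump won,
# so outcome = 1) with the Brier score: (probability - outcome)^2.
# Lower is better. One trial is all the data we will ever have.

outcome = 1  # Trump won the 2016 election

for label, p in [("Silver's model: 28%", 0.28), ("Alternate model: 72%", 0.72)]:
    print(f"{label} -> Brier score {(p - outcome) ** 2:.4f}")

# 0.72 scores better on this one outcome, but a single observation
# can't establish that either probability was "correct".
```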
If you can only run an election once, then how do you determine which of these two results is better (given that Trump won in 2016):