# Phil 3.20.15

7:00 – 4:00 SR

• Still working on the lab problem. I’m using rolls of a six-sided die to represent labs. Here’s the issue.
• For an even distribution over a set of options, the probability of an event converges nicely:
• A random distribution converges, but not so nicely:
• Loaded dice also converge, but in a different way:
• The question is, how do I tell that the difference between 1 and 2 is acceptable, but that the difference between 2 and 3 isn’t? Also very important: how many “rolls” do you need before you can say that the die is loaded one way or another? I was looking at the chi-square test, but that doesn’t seem to be right. Currently looking at binomial distributions.
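The convergence behavior above can be sketched with a quick simulation. This is an illustration, not the notebook’s actual model: the face weights and roll counts here are made-up examples of a fair versus a loaded die.

```python
import random

def empirical_probs(weights, n_rolls, seed=0):
    """Roll a six-sided die with the given face weights n_rolls times
    and return the empirical probability of each face."""
    rng = random.Random(seed)
    faces = [1, 2, 3, 4, 5, 6]
    counts = {f: 0 for f in faces}
    for _ in range(n_rolls):
        counts[rng.choices(faces, weights=weights)[0]] += 1
    return {f: counts[f] / n_rolls for f in faces}

# Fair die: all faces converge toward 1/6 as rolls accumulate.
fair = empirical_probs([1, 1, 1, 1, 1, 1], 10_000)

# Loaded die (face 1 weighted 3x): face 1 converges toward 3/8 instead.
loaded = empirical_probs([3, 1, 1, 1, 1, 1], 10_000)
```

Tracking these empirical probabilities as `n_rolls` grows shows the fair case converging “nicely” and the loaded case converging to a different limit, which is the distinction the charts were meant to show.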
• Ok, it looks like Binomial Distributions are what we want. Here’s what it looks like:
• In the chart, probabilities accumulate for the first 100 rolls, and then a cumulative binomial distribution is calculated for at least one success happening in the next ten trials. The Excel function that does this is:
• `=1-BINOM.DIST(<successes-1>,<trial number starting at 2>, <calculated probability>, TRUE)`
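The same quantity can be computed outside Excel. This is a hedged Python sketch of what that formula does: `1 - BINOM.DIST(k-1, n, p, TRUE)` is the probability of at least `k` successes in `n` trials, i.e. one minus the binomial CDF at `k-1`.

```python
from math import comb

def prob_at_least(successes, trials, p):
    """P(at least `successes` successes in `trials` trials, each with
    per-trial probability p). Mirrors the Excel formula
    =1-BINOM.DIST(successes-1, trials, p, TRUE)."""
    cdf = sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
              for k in range(successes))
    return 1 - cdf

# Example: at least one success in the next 10 trials at p = 1/6,
# which equals 1 - (5/6)^10.
p_hit = prob_at_least(1, 10, 1/6)
```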
• Note that the probabilities of a future event sum to more than 100%. This is because “one face comes up at least once in the next ten trials” can be true for several faces at the same time; the per-face events are not mutually exclusive, so their probabilities add up past 100%. That being said, the differences between the likelihoods remain consistent until they go asymptotic at 100%. This is a pretty good indicator of how far out in the future a prediction can be made. In this case, it looks like 5-15 trials in the future before the high values converge. Note, though, that the very low values (F5 and F6, which were near zero) still do not have a significant impact on the outcome in this time horizon. So the ability to project into the future is strongly dependent on how spread out the initial probabilities are when the binomial calculations are invoked.
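The sum-past-100% effect is easy to verify numerically. This small example (an illustration, assuming a fair die for simplicity) computes each face’s chance of appearing at least once in the next ten rolls and adds the six values:

```python
# Probability that a specific face shows up at least once in the next
# ten rolls of a fair die: 1 - (5/6)^10, about 0.84 per face.
p_face = 1 - (5/6)**10

# Summing across all six faces gives roughly 5.03, far above 1.0,
# because the six "at least once" events overlap rather than exclude
# one another.
total = 6 * p_face
```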