Simulation of Probability as a means to benchmark a CPU
Computers have been able to simulate probability for quite some time. I propose a program that shows just how fast they can do it.
The application has since been updated to calculate more repetitions per second than this screenshot shows, but the concept stays the same.
This program, which I call "ProbabilityApp" for lack of a better name, simulates probability events with a varying number of outcomes, ending after a set number of events, after a time limit, or once a given pattern is reached.
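The pattern-based end condition can be sketched like this. This is my own illustration in Python, not code from ProbabilityApp; the function name and interface are made up for the example:

```python
import random

def flips_until_pattern(pattern="HTH", seed=None):
    """Flip a fair coin until `pattern` appears in the sequence of
    outcomes; return the total number of flips it took.
    (Hypothetical sketch, not ProbabilityApp's actual code.)"""
    rng = random.Random(seed)
    history = ""
    while not history.endswith(pattern):
        history += rng.choice("HT")  # one coin flip: heads or tails
    return len(history)

print(flips_until_pattern("HTH", seed=1))
```

A fixed seed makes a run reproducible; with no seed, each run ends after a different number of flips, which is the behavior a benchmark trial would exercise.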
The program can also conduct multiple trials of this experiment to form aggregate data. In this example, conducting 720 one-minute trials took roughly twelve hours and showed, at the time the experiment was conducted, that my quad-core Ryzen 3 1300X can simulate about 78.5 million repetitions per second, with a standard deviation of roughly 564 thousand.
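The trial-and-aggregate idea looks roughly like this in Python (a sketch under my own assumptions; `run_trial` and the tiny trial duration are illustrative, not the app's real parameters):

```python
import random
import statistics
import time

def run_trial(duration_s=0.05):
    """Count how many coin flips complete within `duration_s` seconds."""
    rng = random.Random()
    flips = 0
    deadline = time.perf_counter() + duration_s
    while time.perf_counter() < deadline:
        rng.getrandbits(1)  # one fair coin flip
        flips += 1
    return flips

# Aggregate many short trials into a mean and standard deviation,
# the same two statistics the post reports (78.5 million/s, sd ~564k).
counts = [run_trial() for _ in range(10)]
print(f"mean: {statistics.mean(counts):.0f} flips/trial, "
      f"stdev: {statistics.stdev(counts):.0f}")
```

The standard deviation is what makes this more than a raw speed test: a low spread across trials means the CPU held its pace.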
This gives a CPU benchmarker a way to see how well a CPU performs on average, and how consistently it can sustain that performance.
The experiment is also multi-threaded: each thread simulates events with its own random number generator.
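A minimal sketch of that per-thread-RNG design, in Python for brevity (ProbabilityApp itself presumably uses native threads; note that CPython's GIL limits true parallelism here, so this only illustrates the structure):

```python
import random
import threading

def worker(n_flips, results, idx):
    # Each thread gets its own RNG instance, so threads neither
    # contend on nor correlate through shared generator state.
    rng = random.Random()
    heads = sum(rng.getrandbits(1) for _ in range(n_flips))
    results[idx] = heads

n_threads, n_flips = 4, 100_000
results = [0] * n_threads
threads = [threading.Thread(target=worker, args=(n_flips, results, i))
           for i in range(n_threads)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total_heads = sum(results)
print(f"{total_heads} heads out of {n_threads * n_flips} flips")
```

Giving each thread its own generator also keeps the benchmark honest: a single shared, locked RNG would serialize the very work the threads are supposed to do in parallel.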
I published this program on GitHub (Link here) and released it under the MIT License, so anyone is free to use, edit, and contribute to the project. I accept no liability for whatever happens, since stressed CPUs run hot, draw more voltage, and can sometimes crash.
The program can export data as CSV, plain text, LaTeX, and HTML, into its own folder.
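The CSV case might look like the following. This is a guess at the idea only: the column names, file name, and output folder are invented for the example and need not match ProbabilityApp's actual layout:

```python
import csv
from pathlib import Path

def export_csv(trial_counts, out_dir="results"):
    """Write one row per trial (trial index, flips counted) into a
    dedicated output folder. Illustrative sketch, not the app's code."""
    path = Path(out_dir)
    path.mkdir(exist_ok=True)
    out_file = path / "trials.csv"
    with out_file.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["trial", "flips"])
        for i, n in enumerate(trial_counts, start=1):
            writer.writerow([i, n])
    return out_file

print(export_csv([78_500_000, 78_200_000, 79_100_000]))
```

One row per trial is the natural granularity, since the mean and standard deviation can always be recomputed from the raw trial counts later.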
It's a new way to benchmark a CPU: it gives a basic idea of the CPU's performance relative to a different one, and of how well it can sustain that performance.
Update: May 13, 2018 -- I left version 1.0.0.2 of my program running for 1,440 trials of 60 seconds each (24 hours in total) on my Ryzen 3 1300X overclocked to 3.7 GHz. The aggregate results of that run are below:
In 24 hours, my computer "flipped a coin" 25 trillion times, averaging around 293 million flips per second. Each trial's result was within three million flips per second of that average 85.42% of the time, within six million 95.35% of the time, and within nine million 96.74% of the time. At its best, it was doing 297 million flips per second; at its worst, 276 million.
Why is this information important?
It's a way to measure the relative performance of a CPU, and how well it can keep that performance. If the CPU "thermal throttles" (gets so hot that it deliberately under-performs to cool itself), that will show up in a trial like this.
In this case, my minimum was well below the average, indicating an outlier trial. This may have happened while I was checking in on the computer through TeamViewer to see whether it was done yet.