Voting Fraud or Misunderstood CompSci?
With the joy that is and has been this election cycle, I stumbled across a website that proposes a rather nefarious plot, one that caught my attention. It claims that countless voting machines have been deliberately designed to make it trivially easy to change votes. The mechanism for this heinous action: decimal data types.
To paraphrase, the theory is that converting the vote counts from an integer data type to a decimal data type allows someone to round and scale those values. Therefore, our evildoers can scale certain votes to count as only 66% of a vote, scale others to count as 133% of a vote, and so on. This sounds completely unjust, very illegal, and generally not a nice thing to do. Except, maybe that’s not what’s going on.
Computers can’t represent all numbers perfectly. Numbers are an abstract mathematical idea, while even the most powerful computer is just a physical piece of equipment. To handle the different ways programmers need to use numbers on a computer, there are multiple ways of representing them depending on what we need to do, and each representation has its own pros and cons.
To start with, we have integers. Integers represent positive and negative whole numbers. They are great for counting things, and working with them is very fast on a computer. In general you cannot represent decimal numbers with integers. And there are size limits on them. On a 32 bit system, a native integer is 32 bits, which means the largest number a signed integer can store is 2,147,483,647 (about 2 billion); give up the sign and an unsigned integer tops out at 4,294,967,295 (about 4 billion).
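To make those limits concrete, here is a minimal C sketch. It assumes a platform where int is 32 bits, which is typical on modern desktop hardware:

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* The largest values a 32 bit integer can hold. */
    printf("signed max:   %d\n", INT_MAX);   /* 2,147,483,647 */
    printf("unsigned max: %u\n", UINT_MAX);  /* 4,294,967,295 */

    /* Unsigned arithmetic silently wraps around at the limit.
       (Signed overflow is undefined behavior in C, so we avoid it.) */
    unsigned int count = UINT_MAX;
    count = count + 1;
    printf("UINT_MAX + 1 wraps to: %u\n", count);  /* 0 */
    return 0;
}
```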
If you need decimal numbers in your program, you can instead use a floating point number. Floating point numbers let you store decimals and also handle a lot of weird edge cases for you as best as possible. In general they are significantly slower to work with than integers. And there are size limits on these values too. If you don’t need a lot of precision, you can use a smaller 32 bit value. If you need more precision you can use a 64 bit value. These 64 bit values are usually called double precision numbers, or doubles for short. They have been standardized for a very long time (IEEE 754, first published in 1985) and have a very strict specification regarding how they must work.
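A short sketch of the trade-off, assuming standard IEEE 754 types: doubles cover an enormous range, but common decimal fractions like 0.1 can only be stored approximately, and single precision floats run out of digits even sooner.

```c
#include <stdio.h>

int main(void) {
    /* Many decimal fractions have no exact binary representation,
       so floating point math is approximate. */
    double sum = 0.1 + 0.2;
    printf("%.17f\n", sum);                    /* 0.30000000000000004 */
    printf("equal to 0.3? %d\n", sum == 0.3);  /* 0, i.e. false */

    /* Single precision gives up even more: 2^24 + 1 does not fit. */
    float f = 16777217.0f;
    printf("%.1f\n", f);                       /* 16777216.0 */
    return 0;
}
```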
Doubles have another non-obvious but pretty useful property. They can also store integer values precisely, up to a limit. For a double precision number that limit is 9,007,199,254,740,992 (2^53). Using a double, you can count to 9 quadrillion (9 million billion) without losing a single count. While they typically aren’t used this way, if a programmer does need to make sure they can count higher than 4 billion, using a double precision number is a fine way to go.
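A small C sketch of that limit, again assuming IEEE 754 doubles: below 2^53 every whole number is stored exactly, and at 2^53 the type quietly starts skipping numbers.

```c
#include <stdio.h>

int main(void) {
    double limit = 9007199254740992.0;    /* 2^53 */

    /* Below the limit, every whole number is exact. */
    printf("%.0f\n", limit - 1.0 + 1.0);  /* 9007199254740992 */

    /* At the limit the gap between representable doubles becomes 2,
       so adding 1 is silently rounded away. */
    printf("%.0f\n", limit + 1.0);        /* still 9007199254740992 */
    return 0;
}
```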
Using double precision values to count votes does not, in and of itself, imply anything malicious about the design of a voting machine, and it does not prove deliberate fraud. I’m not saying there isn’t fraud, and I’m not saying there is. Given the importance that a voting machine has to our country and our principles, I strongly believe that all software used for voting should be independently vetted by multiple sources. It should also be available to the general public for review. Presenting the source code for public scrutiny would alleviate countless concerns and accusations.
Afterword
What about 64 bit integers?
In the interest of being fair, I’d like to briefly discuss 64 bit integers. There are and have been integers that are 64 bits in size, and these integers can store values up to 9,223,372,036,854,775,807 (over 9 quintillion, or 9 billion billion). The problem is that support for this integer data type was still maturing while the first generation of voting machines was being built. The format wasn’t really standardized until the widespread adoption of 64 bit machines, and even now there are legacy formats that are not always interoperable. Double precision numbers have been standardized for much longer, and at the time they would have been a better choice than the still evolving 64 bit integer.
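For illustration, here is where 64 bit integers finally got a portable spelling in C. Before C99’s stdint.h, you were stuck with compiler-specific types like "long long" or Microsoft’s "__int64", which is exactly the kind of maturing support described above.

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* C99 standardized fixed-width integer types and their
       printf format macros, making 64 bit math portable. */
    int64_t big = INT64_MAX;
    printf("64 bit signed max: %" PRId64 "\n", big);
    /* 9223372036854775807 */
    return 0;
}
```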
How to cheat with just integers.
The implicit argument that decimal types are needed to scale values is also a bit shortsighted. As a programmer I can scale any integer you give me by any amount without ever needing to change it to a decimal type. For example, if you want a subtotal of votes to only count for 66%, all that needs to be done is ((votes * 2) / 3), all as integers. And if you want other votes to count for 133%, we simply calculate ((votes * 4) / 3), and not a double precision number in sight.
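Here is that trick as a complete C sketch; the function names and the sample count of 3,000 votes are mine, but the arithmetic is exactly the formulas above.

```c
#include <stdio.h>

/* Scale vote subtotals using nothing but integer math. Multiplying
   before dividing keeps the truncation from integer division small. */
int scale_to_66(int votes)  { return (votes * 2) / 3; }
int scale_to_133(int votes) { return (votes * 4) / 3; }

int main(void) {
    int votes = 3000;
    printf("original:   %d\n", votes);               /* 3000 */
    printf("about 66%%:  %d\n", scale_to_66(votes)); /* 2000 */
    printf("about 133%%: %d\n", scale_to_133(votes));/* 4000 */
    return 0;
}
```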
Decimal Integers.
You actually can represent decimals as integers using fixed point arithmetic, but that’s typically only used when you don’t have any floating point support at all or you are really, really concerned about speed.
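For the curious, a minimal fixed point sketch in C, using hundredths as the scale (the names fixed_t and SCALE are made up for this example):

```c
#include <stdio.h>

/* Store values in hundredths, so the integer 12345 means 123.45. */
typedef int fixed_t;
#define SCALE 100

int main(void) {
    fixed_t total = 12345;            /* 123.45 */
    fixed_t fee   = total * 8 / 100;  /* an 8% fee, still all integer math */
    printf("total: %d.%02d\n", total / SCALE, total % SCALE);
    printf("fee:   %d.%02d\n", fee / SCALE, fee % SCALE);
    return 0;
}
```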