How JavaScript's Number Type Behaves Under IEEE 754

I recently explored how JavaScript's number type behaves under IEEE 754, and what I found surprised me. While most developers (myself included) focus on ECMAScript compliance, I learned that even standards-compliant code can produce unexpected results in mission-critical systems. Here is what I learned, with real-world examples that show why precision matters.

For example, 0.1 + 0.2 === 0.3 returns false in JavaScript due to binary floating-point precision. This is not a bug; it is IEEE 754 doing its job.

IEEE 754 in Action: Unsafe Comparison

```javascript
const a = 0.1;
const b = 0.2;
const sum = a + b;

console.log(sum);         // 0.30000000000000004
console.log(sum === 0.3); // false
```

Safe Comparison Using Tolerance

```javascript
function nearlyEqual(x, y, epsilon = 1e-10) {
  return Math.abs(x - y) < epsilon;
}

console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```

This method checks whether two numbers are close enough, which is the recommended approach for comparing floats.

Safer Arithmetic with Integers

```javascript
// Represent $0.10 and $0.20 as 10 and 20 cents
const priceCents = 10 + 20;    // 30 -- exact integer arithmetic
console.log(priceCents / 100); // 0.3 -- convert back only for display
```

Use integers for money, counts, and critical logic to avoid rounding surprises.
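The integer-cents trick has its own limit: Number can only represent integers exactly up to Number.MAX_SAFE_INTEGER (2^53 - 1). Beyond that, JavaScript's built-in BigInt keeps integer arithmetic exact. A minimal sketch:

```javascript
// Past 2^53 - 1, plain Number silently loses integer precision.
const maxSafe = Number.MAX_SAFE_INTEGER;  // 9007199254740991
console.log(maxSafe + 1 === maxSafe + 2); // true -- two different sums collapse to one value

// BigInt arithmetic stays exact at any magnitude.
const big = BigInt(maxSafe);
console.log(big + 1n === big + 2n);       // false -- still distinct

// BigInt and Number never mix implicitly; convert explicitly, ideally only for display.
const cents = 30n;
console.log(Number(cents) / 100);         // 0.3
```

Note that BigInt is integer-only, so it pairs naturally with the cents representation above but cannot hold fractional values itself.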
Decimal Libraries for Precision

In the browser (HTML):

```html
<script type="module">
  import Decimal from 'https://lnkd.in/gi4T53Fe';

  const x = new Decimal('0.1');
  const y = new Decimal('0.2');
  const z = x.plus(y);
  console.log(z.toString()); // "0.3"
</script>
```

In Node.js with ES Modules (rename your file to .mjs, or add "type": "module" to your package.json):

```javascript
import Decimal from 'decimal.js';

const x = new Decimal('0.1');
const y = new Decimal('0.2');
const z = x.plus(y);
console.log(z.toString()); // "0.3"
```

In CommonJS (Node.js), use require():

```javascript
const Decimal = require('decimal.js');

const x = new Decimal('0.1');
const y = new Decimal('0.2');
const z = x.plus(y);
console.log(z.toString()); // "0.3"
```

Libraries like decimal.js or Big.js offer exact decimal arithmetic, ideal for financial and mission-critical applications.

This got me curious about real-world consequences of precision loss. I came across examples like:

- Patriot Missile failure (1991): 0.34-second clock drift due to rounding -> 28 lives lost
- Ariane 5 rocket explosion (1996): float-to-int overflow -> $370M loss
- Knight Capital (2012): float error in trading logic -> $440M loss in 45 minutes
- Medical devices: insulin pumps miscalculating dosage due to float rounding

These are not edge cases; they are reminders that even basic arithmetic can be dangerous in the wrong context.

I am not an IoT or finance expert, but this learning made me rethink how I handle numbers in code. For critical logic, I now prefer:

- Using integers (e.g., cents instead of dollars)
- Leveraging BigInt or libraries like decimal.js
- Adding runtime checks that catch values which have lost precision
- Avoiding implicit float-to-int conversions

Have you ever run into float precision issues in your own projects?
