Why JavaScript's floating-point math can be inaccurate

⚙️ Why does 12.4338 + 10 become 22.433799999999998 in JavaScript?

If you’ve ever noticed small inaccuracies in JavaScript math, such as unexpected decimal results, you’ve encountered a fundamental limitation of floating-point arithmetic — not a bug.

JavaScript represents all numbers using the IEEE 754 double-precision (64-bit) standard. In this format, numbers are stored in binary, not decimal. Many decimal fractions cannot be represented exactly in binary, just as 1/3 cannot be represented exactly in decimal (0.3333…).

For example, the number 12.4338 cannot be stored exactly as written. Internally, it becomes something like 12.433799999999998, so when you add 10, the result inherits this slight inaccuracy. These rounding artifacts appear in every language that relies on binary floating-point math (JavaScript, Python, C#, etc.).

If you are working with financial data or require exact decimal representation, it’s recommended to:

- Use explicit rounding (.toFixed() or .toPrecision()), or
- Use a decimal arithmetic library such as decimal.js, big.js, or bignumber.js.

Precision is rarely free — and understanding how numbers are stored is key to avoiding subtle data issues.

#JavaScript #TypeScript #WebDevelopment #SoftwareEngineering #Programming #CleanCode #CodeQuality #TechEducation #FrontendDevelopment #FloatingPoint #IEEE754 #DeveloperTips #CodePrecision #LearnToCode #EngineeringExcellence


The .toFixed() method has saved me on many occasions.

