Why null should not exist in programming languages

Should null exist in a programming language? The short answer is: no. A few moments from my own work describe my perspective:
- Cowboy developers can't decide which approach to use: the null-object pattern or classic null handling.
- Null handling nested ten levels deep.
- A mess in the end: code checkers flag a possible null reference that, in practice, can never be null.
- Built-in Option types are the balance. It's about safety: true or false, you don't need other states.

#programming #programmingLanguages

Nulls exist whether you're sheltered from them or not. Some languages have runtime checks or patterns defined to shelter you from null conditions. Others do not. It's possible to use a Hindley-Milner type checker with ADTs to build up monadic decomposition to avoid these checks. Or one can use templates or macros to hide them. The moment you deal with the underlying machine or an FFI, however, nulls come back and must be accounted for. There is no real escape; there are only edifices built in the air that must eventually come to rest on the shifting sands of reality. My personal favorite is to build function contracts and use a model checker to enforce null checks: if a null dereference is possible anywhere, a model checker should be used to verify its absence. The developers working on Rust's standard library agree with me, which is why they use Kani to verify their unsafe code.
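To make the FFI point concrete, here is a minimal Rust sketch. The `c_style_lookup` function is hypothetical, simulating a C library call that returns a possibly-null raw pointer; a real binding would be declared in an `extern "C"` block. The idea is that the null check happens exactly once, at the boundary, and the rest of the code only ever sees an `Option`:

```rust
use std::ptr;

// Hypothetical stand-in for an FFI call: a C-style lookup that returns
// a raw pointer which may be null. (Simulated here for illustration.)
fn c_style_lookup(key: i32) -> *const i32 {
    static VALUE: i32 = 42;
    if key == 0 { &VALUE } else { ptr::null() }
}

// Safe wrapper: the null check is done once, at the boundary, turning
// the raw pointer into an Option the rest of the code must handle.
fn lookup_checked(key: i32) -> Option<i32> {
    let p = c_style_lookup(key);
    if p.is_null() { None } else { Some(unsafe { *p }) }
}
```

The single `unsafe` block is also exactly the kind of spot a tool like Kani would be pointed at.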

Amazing how many people say that null isn't a problem, when Tony Hoare, who introduced null references, said over a decade ago that they've cost the industry a billion dollars. Good software engineers aren't really a solution either: no one is infallible, and the more complex a system gets, the more likely you are to see a null pointer exception. Maybe/Option/Optional types are vastly superior and let you abstract over absence if you want. Using them eliminates that entire class of errors. There's virtually no productivity penalty, and not having those errors is a net positive for code quality overall. People protesting against Maybe-like types sound like religious zealots trying to prevent the adoption of penicillin.
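As a small Rust sketch of the "abstract over it" point (the helper names are made up for illustration): a function that may not find a result returns `Option`, and callers compose over the absent case with combinators instead of writing null checks.

```rust
// Returns the first even number, if any; absence is visible in the type.
fn first_even(xs: &[i32]) -> Option<i32> {
    xs.iter().copied().find(|x| x % 2 == 0)
}

// Callers abstract over absence with combinators like `map`,
// rather than scattering if-null checks at every call site.
fn double_first_even(xs: &[i32]) -> Option<i32> {
    first_even(xs).map(|x| x * 2)
}
```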

Nowadays modern typed languages have optional null safety; depending on the language, that means arguments can specify whether or not to allow null values. Allowing nullables is an explicit choice, and violating these constraints can trigger either a compile error or a runtime error. From a language/compiler perspective this is a solved problem, but not all languages evolve and adopt at the same pace.

Nulls are a technical decision made in the past that got, well, "copy-pasted", even if not that literally. For "local" objects/structs, null can be completely avoided if we know an object is needed. The same goes for input parameters (in C++, references, whether const or not, cannot be null unless you do some hacky stuff). Yet for pointers... either everything starts as garbage without initialization... or we accept that they might start "uninitialized" (or initialized with the lack of a value)... and null was the preferred path. So... my issue isn't with nulls... but with languages, like C#, that can't really enforce that non-nullable parameters or fields are respected. And for languages like C++... the fact that pointer syntax is so easy is the issue, but we can have safer alternatives.
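Rust is one data point for how a language can enforce this: a shared reference `&T` can never be null, and nullability has to be opted into explicitly with `Option<&T>`. The compiler even exploits the non-null invariant so that the opt-in is free, with `None` reusing the null bit pattern (the so-called niche optimization, which Rust guarantees for references). A tiny check, as a sketch:

```rust
use std::mem::size_of;

// &i32 can never be null in Rust. Because of that guaranteed invariant,
// Option<&i32> stores None in the otherwise-unused null bit pattern,
// so making nullability explicit costs no extra space.
fn nullable_ref_is_free() -> bool {
    size_of::<Option<&i32>>() == size_of::<&i32>()
}
```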

Sooner or later, you need a way to represent the case that "This value is not known". Then your code needs to be robustly written to deal with that case. Problems with null are invariably to do with the latter and not the former. Anyone saying "my language does it better" is really only saying "my language has a cuter way of expressing it". As noted, the issue is not whether you can express it, but whether your code deals well with those situations.
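One way a language can push code toward "dealing well with it" is exhaustive pattern matching, here sketched in Rust with a hypothetical `describe` function: the compiler rejects a `match` on an `Option` that forgets the absent case, so handling cannot be silently skipped.

```rust
// `match` on Option is exhaustive: deleting the None arm is a compile
// error, so the "value is not known" case cannot be silently ignored.
fn describe(reading: Option<f64>) -> String {
    match reading {
        Some(v) => format!("reading: {v}"),
        None => "reading unavailable".to_string(),
    }
}
```

This doesn't make the handling code correct by itself, but it does make "forgot to handle it" a compile-time rather than a runtime discovery.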


NULL is a valuable tool. I'd rather use NULL than an arbitrary value that could lead to misunderstanding, errors, or even be dangerous in some situations. Typing your values to include NULL explicitly, and not using loose "falsy" comparisons, are key to using it properly though. But then you should have that covered anyway; otherwise your code is relying on luck to run properly in all situations.
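The contrast between an arbitrary sentinel and a typed "no value" can be sketched in Rust (both functions are illustrative, not from any library):

```rust
// Sentinel version: -1 means "not found", but nothing stops a caller
// from feeding the magic value straight into arithmetic or indexing.
fn index_of_sentinel(xs: &[i32], target: i32) -> i32 {
    xs.iter().position(|x| *x == target).map_or(-1, |i| i as i32)
}

// Typed version: absence is part of the return type, not a magic value
// the caller has to remember to compare against.
fn index_of(xs: &[i32], target: i32) -> Option<usize> {
    xs.iter().position(|x| *x == target)
}
```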

In theory we don't need null, that's true. But it comes in very handy to express that a reference is empty/not initialized, and it works for every user-defined or primitive type. Otherwise we would have to define this empty/uninitialized value in every type's domain, which would make the code verbose. But I agree that a specialized domain type system should not use null; that specific type domain will look more elegant without it.
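A generic option type is one answer to the verbosity worry: a single marker for "no value" that works for every type, instead of a bespoke empty member per domain. A Rust sketch with a hypothetical `Point` type:

```rust
// One generic "no value" marker that works for any type, instead of
// defining EMPTY_STRING, -1, Point::ZERO, ... per domain.
#[derive(Debug, Clone, PartialEq)]
struct Point { x: f64, y: f64 }

fn origin_or(p: Option<Point>) -> Point {
    // The default lives at the use site; Point itself needs no
    // magic "empty" value in its domain.
    p.unwrap_or(Point { x: 0.0, y: 0.0 })
}
```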
