
Not to hurt anyone's feelings, but no, I don't get the idea. I find this irritating and very unconvincing: so much work and boilerplate to prove peanuts. Have you ever seen anyone doing serious software development like that? I mean not just toy examples, where a 10-line bean-counting program gets blown up to 100 lines.

> void foo(ranged_int<1, 5> a, ranged_int<3, 7> b)

Why would I _ever_ write a function like that? Practical proofs need to be about runtime values, not just constants and types. They also need to relate multiple values to one another, handle type casts, and much more. Constant integer ranges might be of some use, but they cover a tiny fraction of the invariants a programmer juggles in practice, and the source code is already blown out of proportion just for these ranges. This thing is just not gonna fly.

Just multiply some "good" values a few times and there is no way to statically prove that the result stays in range as well, even though it might always be the case due to how the values combine each time (simple example: an offset + size pair that the code has already checked to be within array bounds). Invariants in practice are relations over the runtime values of multiple variables; the focus on single values is basically already where types fail as a correctness tool.



> Just multiply some "good" values a few times and there is no way that you can statically prove that the result is going to be in the range as well, even though it might always be the case due to how the values combine each time (simple example: offset + size that the code checked to be within array bounds).

I don't understand; there's literally a CVE about that elsewhere in this thread. Basically:

    struct my_structure { 
      int size;
      int offset;
    };
Either you restrict both size and offset to e.g. 2^30 so that you can safely sum them, or you will have an issue, because you'll be writing if (size + offset < max_size) and size + offset overflows. Absolutely no one can be trusted to write overflow checks by hand; you gotta use a library for that.


If you're not making assumptions about the values stored in your variables, then every little arithmetic operation can lead to unexpected wraps (UB for signed types). It follows that we need to make assumptions. Don't write an if conditional in front of every operation like a forgetful person; instead, assume that "my_structure" contains good values that you can safely add.

Usually it's very easy: require that size and offset validly index an object, and require that objects are of a sane size (I rarely have in-memory objects larger than a few megabytes, and for most code I posit that it is a mistake not to chunk and stream huge datasets). Done, no real need to worry.

If you cannot make any assumptions about the values contained in a "my_structure", then the code is at an API boundary, which is exactly where such a check (an if-statement) must be placed. At internal code locations, checks are wasted effort; at most, a compiler switch to detect wraps (logic errors) would often be a much better choice than paranoid code where each line can't trust the previous one, and where you can't see the actual work being done because it's all buried in mistrust. Unnecessary mistrust (i.e. mistrust at places strictly inside maintenance boundaries which are not strategic checkpoints) only makes it hard to write code that actually does something, and might well cause more bugs than it could ever prevent.



