Hacker News

It likely doesn't need arbitrary precision arithmetic either. A lot of algorithms truly want modular arithmetic (hashes, checksums), and many deal in domains that can't practically exceed a machine integer's size (the length of a string, the number of vertices in a graph, etc.). There is no reason every library needs to deal in arbitrary precision.
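For instance, a hash like FNV-1a actually wants wraparound semantics: in C, unsigned arithmetic is defined to wrap modulo 2^N, so a fixed-width unsigned type is exactly the right domain and arbitrary precision would be wrong. A minimal sketch (the function name is mine, the constants are the standard 32-bit FNV parameters):

```c
#include <stdint.h>
#include <stddef.h>

/* FNV-1a: the multiply is *supposed* to wrap modulo 2^32.
   Unsigned overflow is well-defined in C, so uint32_t is the
   natural domain here -- bignums would change the result. */
uint32_t fnv1a_32(const unsigned char *data, size_t len) {
    uint32_t h = 2166136261u;          /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 16777619u;                /* FNV prime; wraps by design */
    }
    return h;
}
```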


> domains that can't practically exceed a machine integer size

This assumption is common among authors of C code, but it is sometimes exploitably incorrect. Even if you really don't care about supporting things larger than a certain size, you still have to correctly account for overflow, not just ignore it.
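A classic instance is computing an allocation size: even if no legitimate caller needs more than a machine word, `count * size` can silently wrap and produce an undersized buffer. GCC and Clang expose checked-arithmetic builtins for exactly this; a sketch (assuming those builtins are available, and the helper name is mine):

```c
#include <stddef.h>
#include <stdbool.h>

/* Stores count*size in *out and returns true only if the product
   fits in size_t; returns false on overflow instead of wrapping.
   Relies on the GCC/Clang __builtin_mul_overflow intrinsic. */
bool alloc_size(size_t count, size_t size, size_t *out) {
    return !__builtin_mul_overflow(count, size, out);
}
```

A caller would then refuse the allocation (rather than call malloc with a wrapped, too-small value) whenever this returns false.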


Yes, overflow checks are critical, and tricky in the absence of language support. Yes, testing them is tricky, and they are often exploitably buggy in the absence of such tests. No, this does not mean that (for example) the Linux kernel should use arbitrary precision to represent pids. Yes, this does mean that better approaches for dealing with overflow more systematically are a good idea.



