
I do not disagree with that, but unless some real-life example of Go vs. Java GC is shown, I am inclined to believe the advantage is mostly theoretical.


binary-trees is Hans Boehm's benchmark of garbage collection performance (mostly throughput). Bump allocation in the nursery ends up being king here, and it's reflected in the numbers: https://benchmarksgame.alioth.debian.org/u64q/performance.ph...
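The allocation pattern that makes binary-trees a nursery stress test can be sketched like this (a minimal illustration, not the benchmarks-game source; the class and method names are my own):

```java
// Sketch of the binary-trees allocation pattern: complete trees of tiny,
// short-lived nodes are built and immediately discarded, so throughput is
// dominated by the speed of the GC's nursery (bump-pointer) allocation path.
public class BinaryTrees {
    static final class Node {
        Node left, right;
    }

    // Build a complete binary tree of the given depth.
    static Node bottomUp(int depth) {
        if (depth == 0) return new Node();
        Node n = new Node();
        n.left = bottomUp(depth - 1);
        n.right = bottomUp(depth - 1);
        return n;
    }

    // Walk the tree and count nodes so the allocations cannot be elided.
    static int check(Node n) {
        if (n == null) return 0;
        return 1 + check(n.left) + check(n.right);
    }

    public static void main(String[] args) {
        int depth = 10;
        long total = 0;
        for (int i = 0; i < 1000; i++) {
            // Each tree becomes garbage as soon as it is checked.
            total += check(bottomUp(depth));
        }
        System.out.println(total); // prints 2047000 (1000 * (2^11 - 1))
    }
}
```

Because every node dies almost immediately, a generational collector services nearly all of this from the nursery, which is where bump allocation pays off.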


From Hans Boehm's Java program --

"The results are only really meaningful together with a specification of how much memory was used. It is possible to trade memory for better time performance. This benchmark should be run in a 32 MB heap, though we don't currently know how to enforce that uniformly."

For some years, the benchmarks game did show an alternative task where memory was limited -- but only for half-a-dozen language implementations.

Figuring out an appropriate max heap size for each program was too much hands-on trial and error.
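For reference, each runtime can be capped individually even if there is no uniform knob across language implementations: the JVM takes a hard max-heap flag, and Go (1.19+) takes a soft memory limit. A sketch, with illustrative program names:

```shell
# JVM: hard-cap the heap at 32 MB, per Boehm's suggested configuration
# (BinaryTrees is a hypothetical class name).
java -Xmx32m BinaryTrees

# Go 1.19+: set a soft memory limit; the GC runs more aggressively to
# stay under it (./binary-trees is a hypothetical binary name).
GOMEMLIMIT=32MiB ./binary-trees
```

Note the semantics differ: the JVM flag is a hard limit that can produce OutOfMemoryError, while Go's is a soft target, which is part of why enforcing the limit "uniformly" is hard.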


I am not sure how often, in the kinds of apps Go and Java mostly target, people are implementing binary trees as their core/dominating business logic.


Header comment from Hans Boehm's original test program --

"This is no substitute for real applications. No actual application is likely to behave in exactly this way. However, this benchmark was designed to be more representative of real applications than other Java GC benchmarks of which we are aware."


Most allocations in real-world apps are nursery allocations (that's the generational hypothesis after all), so the speed of nursery allocations, which is what results in the throughput differential here, very much matters in practice.
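To make that concrete, here is a hypothetical request-handling loop (names are my own): every iteration allocates strings and a list that become garbage as soon as the iteration ends, so under the generational hypothesis almost all of its allocations are serviced by the nursery:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration of the generational hypothesis: per-request temporaries die
// before the next request, so nursery (bump-pointer) allocation speed
// dominates the program's allocation cost overall.
public class NurseryDemo {
    static int handle(int requestId) {
        List<String> parts = new ArrayList<>();        // short-lived list
        for (int i = 0; i < 8; i++) {
            parts.add("field-" + requestId + "-" + i); // short-lived strings
        }
        // Only the small result survives; all the temporaries die young.
        return String.join(",", parts).length();
    }

    public static void main(String[] args) {
        long total = 0;
        for (int id = 0; id < 100_000; id++) {
            total += handle(id); // per-request garbage is nursery-collected
        }
        System.out.println(total);
    }
}
```

Nothing here resembles a binary tree, yet the GC-relevant behavior (many small, short-lived allocations) is the same thing binary-trees measures.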



