
OK, having posted something defending Go in this thread, now let me exasperate everyone by going the other way. Because Erlang's GC is per Erlang-process whereas Go's GC is still OS-process-global, there really isn't a "faster than/slower than" comparison available, because their workloads are so dissimilar. When an Erlang GC runs, it may be running across a mere few hundred or thousand bytes, freezing only that one Erlang-process that was quite likely not running at the moment anyhow. Erlang also has the GC-time advantage that it doesn't have pointers, so there's no pointer-fixup penalty. (It may be a disadvantage at other times, but it's certainly an advantage at GC time.)


Golang's more recent async GC changes begin to resemble Erlang's per-process GC in how they would affect overall system performance.

When people talk about Go's GC freezes, they're talking about the stop-the-world spinup/spindown time before the async GC kicks in. That part is incomparable to Erlang, but it's a part which has gotten much faster recently, specifically by virtue of becoming smaller.


> Golang's more recent async GC changes begin to resemble Erlang's per-process GC in how they would affect overall system performance.

They resemble generational GC more than anything. Generational GC has some of the advantages of Erlang (though I think the traditional HotSpot generational GC will end up working better than the one Go is going with) in the minor collections, but not in the major collections.


Go is adding per-process heaps (there will still be a global heap).


Further complicating things are large binaries, which are handled in yet another way in Erlang. That might be an issue for some situations, and not at all a concern in others.


Do you have any benchmarks showing that running many small GC collections (per process) is faster than collecting one big heap?


Define "faster".

I actually have experience here: looping over all Erlang processes and running a GC on each of them is definitely human-clock-time orders of magnitude slower than a Go garbage collection across a similar set of data. But who cares? First of all, that was a bit of a desperation play on my part anyhow, run for diagnostic purposes in the REPL, not an operation you do all the time, and secondly, only one process at a time was frozen then anyhow, so I didn't care that it took about 10 seconds. It didn't take my service down.

Which was my point in the first place: "faster" and "slower" don't really apply here, because what they're doing is so different from each other. There are too many different possible definitions of faster. And you have to be careful to use one that matters to your code, not just an artificial benchmark that shows your preferred choice in the better light.

(For those who may be curious, the problem that led me to that play was some now long-fixed issues with large binaries.)


Also important is the fact that short-lived processes usually don't ever need a GC on their heap. When they finish, they just free the full heap. For a web service this is very useful.


Stop the world vs stop one country while the rest of the world carries on.



