
There was a comment several days ago (https://news.ycombinator.com/item?id=29825516) that made me reconsider the whole enterprise of operator overloading even for math, specifically its last paragraph.

The gist is that operator overloads can easily be built in a way that precludes useful optimizations and efficient execution, and being fast often matters more than having syntactic sugar; hence explicit function calls like multiply_add(a, b, c) instead of a + b*c. If you really want syntactic sugar for math, operator overloading probably isn't the way to implement it. It would be nicer to have something that sees the full expression context so it can perform optimizing reductions. Lisp macros can do that; you could use some other kind of parser (one that might have to work on strings); or, with sufficient cleverness, you could build a nest of overloaded operators that accumulate context as operations-to-perform, and either require a doMath wrapper at the end or a final overload that produces a fully computed return type.

I prefer languages that don't cripple expressive freedom, so overall I'm not anti-operator-overloading in general, even though I think some overloads are pretty questionable (I dislike C++'s arrow overload for Optionals). But I no longer think that, say, a math-focused library is an obvious win or an obvious exception to the downsides of the expressive power that operator overloading grants.


