
"done right" is a nice phrase to toss around, but unless you're also looking at the accompanying requirements, it's completely subjective.

if it works but slowly, then it was almost certainly done correctly, in my book. you've implied as much - otherwise, you'd be making it correct first and foremost.

"done fast" more often than not involves unmeasured details from parties hovering outside the orbit of the customer, and you can bet trying to hit both targets leads to delays in shipping a v1 that appears to work.

i praise those who ship software that works correctly, understanding that requirements, deadlines, and project timelines are completely lost when the source code is read on its own.



I don't see how any of that makes sense in the context of database queries. Slow-running queries are done correctly? Google has it all wrong then. I can't believe we're even arguing about the merits of low latency. If I can return the same data in 1/1000th of the time, what exactly is the issue?


you said you work in a corporate context, right?

let's take an example that's not too far-fetched: legal comes and says that every month, you need to generate a report of some sort to comply with some regulation that corporations over a certain size must comply with.

you sit down and run v1 of the software which works but takes 24 hours to run. is your first instinct to get infuriated, as you said, even though from a requirements/company perspective, this is completely "done right"?

"the issue" is that changing software involves risks, which may be acceptable to you, but may not be acceptable to e.g. legal. they couldn't care less if it took 29 days to run or 29 ms. what they require - again, this is the requirement - is a monthly report generated, correctly.

and yeah, 99 times out of 100 you change the SQL correctly and it runs in 1/1000 of the time the first time. then for whatever reason it messes up one month and legal asks "what the F was this guy doing mucking around with this software which worked 'right'"?


I'll give you some real corporate context. A scheduled task, pulling from a 65GB table with hundreds of millions of rows, and no indexes. It runs for hours, and completes with accurate data. During that time, it also saturated a 10Gbit interface (seriously), and consumed half of the IOPS on one of our NetApp controllers. Now multiply that by 2x, 10x, 1000x for all of the other junk running in the wild. Slow, accurate, and impacting the rest of the organization.
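The pattern above (a huge table filtered with no index) is easy to reproduce in miniature. A hedged sketch using Python's sqlite3 module, with invented table and column names; the plan strings come from SQLite and vary slightly by version:

```python
# Sketch only: shows how an index changes the query plan for a filtered
# read from a full table scan to an index search. Schema is made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, account_id INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i, i % 100, "x") for i in range(1000)],
)

query = "SELECT * FROM events WHERE account_id = 42"

# Without an index: SQLite reports a full scan of the table.
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan[0][3])  # e.g. "SCAN events"

# With an index, the same query becomes an index lookup.
conn.execute("CREATE INDEX idx_events_account ON events (account_id)")
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan[0][3])  # e.g. "SEARCH events USING INDEX idx_events_account ..."
```

On a 65GB table the difference between those two plans is the difference between hours of saturated I/O and a near-instant lookup.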


A competent and responsible programmer should know how good, on a scale from cheaply done prototype to provably optimal, the 24-hour report is. In the first case, rewriting for better performance is part of the first implementation, not a risky change.


i.e., premature optimization is the root of all evil.

The example you give is sooo right. And understanding the many levels of compromise between code speed, quality, legal implications, customer needs, management needs, cost, deadline, maintainability, technological choices made elsewhere, etc. is part of the job.

(Personally, my priorities are: meeting the deadline comes first (descoping included), management on the customer side comes next, then end users, and, in the end, code speed.)


> i.e., premature optimization is the root of all evil

I'm gonna repost a chart I made previously[0]:

  Spectrum of performance:
  LO |---*-------*--------*------------*-------| HI
         ^       ^        ^            ^
         |       |        |            |_root of all evil if premature
         |       |        |_you should be here
         |       |_you can be here if you don't do stupid things
         |_you are here
Point being, people tend to invoke this cliche way too early. It's true that mucking with working software is a potentially risky thing, and in corporate context may require approval, but it's also kind of the job you're hired to do as a software engineer, and it's especially important if that extra efficiency buys company value.

--

[0] - https://news.ycombinator.com/item?id=20389856


> premature optimization is the root of all evil

Aka the first refuge of the intellectually lazy.


When you add time constraints, budget constraints, and human constraints, believe me, declining optimization effort is not an act of laziness.

(and the one who answers you has spent thousands of hours optimizing assembly routines for 3D engines, optimizing SQL queries to get the max out of some server, optimizing network traffic to optimize parallel computations,...)


I disagree. It sure isn't a cliché. I've often seen programmers gold-plating their solutions. As a programmer myself I understand that very well. But most of the time, a half-baked solution will unlock many things that are more important than the code. With some prototype-level code, you can already test ideas, show things to a customer, start integrating with others, etc. And afterwards, you have the opportunity to decide if optimizing for speed/space is a worthy trade-off, or if optimizing at all is important.

Now, this won't work for every kind of industry. Right now I'm in the business/government stuff. There, prototyping is much more important than speed of code. When I was in the gaming industry, the lack of speed was most of the time technical debt. But even then, having my code working was much more important to the overall team effort than my code being fast.

So instead of your chart, I prefer: 1/ code is working, 2/ code is correct, 3/ check for other priorities, 4/ optimize as needed.

As for what you're expected to do and company value, my experience is that not many people understand the link between actual code and company value (esp. in the top management where I sit regularly). You'd be surprised to see how much more important a deadline is than a fully working/optimized program ('cos for example the deadline is a trigger for an enormous change in the organization you work for (although we know the lack of speed in the code will have a negative impact on dozens of end users)).

so, well, it depends :-) but still, a word of caution sounds right to me :-)


Yeah when someone said that to me as an excuse for an indexless query on my team, it took a lot to remain calm and help them learn. I'd rather change it to: "excessive premature optimisation is the root of all evil".

If you're making a query that lots of users will run regularly, that needs an index. Period. Lots of the time there'll already be one, but that doesn't excuse you for not confirming and adding it if it's not there.
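The "confirm it exists and add it if it's not there" step can itself be scripted. A minimal sketch with Python's sqlite3 (table, column, and index names are invented; identifiers are assumed trusted since they're interpolated into DDL):

```python
# Sketch: create an index on (table, column) only if no existing index
# already leads with that column. Names here are hypothetical.
import sqlite3

def ensure_index(conn, table, column):
    """Return the name of an index covering `column`, creating one if needed."""
    for (idx_name,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'index' AND tbl_name = ?",
        (table,),
    ):
        # PRAGMA index_info rows are (seqno, cid, column_name).
        cols = [row[2] for row in conn.execute(f"PRAGMA index_info({idx_name})")]
        if cols and cols[0] == column:
            return idx_name  # an existing index already leads with this column
    name = f"idx_{table}_{column}"
    conn.execute(f"CREATE INDEX {name} ON {table} ({column})")
    return name

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
print(ensure_index(conn, "orders", "customer_id"))  # creates the index
print(ensure_index(conn, "orders", "customer_id"))  # finds it, no-op
```

Other databases expose the same information through their own catalogs (e.g. `pg_indexes` in PostgreSQL); the check-then-create shape is the same.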


You'd probably be happier with the full quote:

"Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%." ~Donald Knuth (1974)


Yeah that makes a lot more sense. Although I think there's something to be said for optimisation habits when such habits don't add work or complexity.


There is. On the diagram I posted above, this is how you reach the "you can be here if you don't do stupid things" point. It's all simple things that you can do better by default by learning a bit about the language, runtime and libraries you use, and by caring at least a little about not being wasteful. They have little to no impact on readability, and arguably yield simpler code at times.
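One concrete example of such a default, sketched in Python with invented names: choosing the right container costs nothing in readability but changes the complexity class of a lookup.

```python
# Illustrative sketch: membership tests against a list scan every element
# (O(n)), while a set uses a hash lookup (O(1) on average).
import timeit

banned = [f"user{i}" for i in range(10_000)]
banned_set = set(banned)  # one-time conversion, O(1) membership afterwards

def check_list(name):
    return name in banned      # scans the whole list on a miss

def check_set(name):
    return name in banned_set  # constant-time hash lookup

slow = timeit.timeit(lambda: check_list("nobody"), number=1000)
fast = timeit.timeit(lambda: check_set("nobody"), number=1000)
print(f"list: {slow:.4f}s  set: {fast:.4f}s")
```

Both versions read the same at the call site; only the default choice of data structure differs.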


> it's especially important if that extra efficiency buys company value.

If you've figured out that this is the case, the optimization is no longer premature.


I can't recall the last time I heard someone say "premature optimization is root of all evil" as a sanity check to someone who was overdoing things. In fact it may have been more than 20 years ago.

For one, there are usually better arguments around code comprehension.

Pretty much the only time I hear this statement is as a response to criticism. That should raise some questions about the motivations of the speaker. Quite often it comes off as a dodge, and what exactly are they dodging, and why? Additionally, the whole quote is often at odds with the point the speaker is trying to make. It's not small inefficiencies, it's factors of 5, or 10, or an extra 6 months before we run out of headroom on something.


The issue is that it takes time and effort by good people to get there. No one prevents you from writing fast queries at BigCo. Mostly they end up being slow because the person writing them doesn't know any better, and there is no motivation at BigCo. to push back and have the developer spend time learning optimizations.

Nothing is free. Developers with more skills tend to cost more. BigCo. probably has some, but they are probably tasked with something else that BigCo. finds of greater value.


There is also your own gained knowledge, or a fresh outlook if you look at the problem again, but as you said, these things take time.



