Also, in terms of security, there was generally a much smaller potential attack surface, and those surfaces were harder to reach because we were connected much less of the time.
I wouldn't go that far. As soon as you went online all bets were off.
In the 90s we had Java applets, then Flash; browsers would open local HTML files and read/write from C:; people were used to exchanging .exe files all the time and would open them without scrutiny (or warnings), and so on. It was not a good time for security.
Then dial-up was so finicky that you could literally disconnect someone by sending them a ping packet. Then came Windows XP, then Blaster and its variants, and all hell broke loose. Pre-SP2 you could install a fresh copy of XP and have it pwned inside 10 minutes if it was connected to a network.
Servers weren't any better; ssh exploits were all over the place (even The Matrix Reloaded featured a real ssh exploit), and so on...
The only difference was that "the scene" was more about the thrill, the boasting, and learning and less about making a buck out of it. You'd see "x was here" or "owned by xxx" in page "defaces", instead of encrypting everything and asking for a reward.
Software has gotten drastically more secure than it was in 2000. It's hard to comprehend how bad the security picture was in 2000. This very much, extremely includes Linux.
Except that when you did connect Windows to anything, it was hacked in less than 30 seconds (the user ignored the "apply these updates first, and then connect..." advice because they wanted some keyboard driver; hacked, whoops, now you waste time doing a wipe and reinstall; this was back when many places had no firewalls). IRIX would fall over and die if you pointed a somewhat aggressive nmap at it (some buggy daemon listening by default on TCP/0, IIRC). There was code in ISC DHCPD along the lines of "windows is buggy, but we work around it with this here kluge...", and so on, and so on.
Not just dhcpd. Besides the entire existence of Wine and Samba, Qemu has a workaround for win2k. Mkudffs has a workaround for MS-Windows not being able to read the filesystem without an MBR. Libc can work with local system time for those who dual-boot. Git can work around the difference in line endings. There are probably more of these kludges than you can shake a stick at.
But there was much less awareness of buffer overflows and none of the countermeasures that are widespread today. It was almost defining of the Win95 era that applications (e.g. Word) frequently crashed because of improper and unsafe memory management.
I remember when opening a webpage and getting hacked seemed far more likely. Adobe Flash and Java had more vulnerabilities and weaker sandboxes (if any) than JavaScript.
Another top tip is how to respond to "can I just confirm...". No, they can't "just confirm" any details until they have confirmed who they are, which they can't do without us calling them back on the company's published support number.
Luckily my parents are appropriately cynical and have not fallen for anything like that, but I know a couple of people of my generation who have (in the worst case losing 5K+ in savings, back when there was no onus on UK banks to take any responsibility for such fraud through their systems so it was properly lost to them).
When I'm at home with the old man (mam is unfortunately in a care home), it _really_ irritates me how many scam calls he gets some days. Most of them are obvious: they just hang up when you pick up; the line is very bad or the caller is otherwise barely intelligible (i.e. they are speaking their 4th language); they refer to an account that doesn't exist or a fictitious government agency. But the occasional one is very smooth, and some even have a few details about Dad's life and/or accounts that give pause (either of the form "could this actually be real?" or "I wonder how they collected and associated that?").
If my family are anything to go by, they definitely target the elderly more than even one generation down (so it isn't just that younger generations often only have mobile phones while landlines are more heavily targeted), because they know the elderly tend to be more susceptible to the con and more likely to have some savings worth pillaging.
Also in DayJob, some of our C*s and others associated with them (PAs, office managers) have seen some pretty sophisticated phishing attempts, both targeting the business's dealings and their personal accounts. I get the impression that these are reducing in number ATM (or the filtering of them is improving) but that those coming in are making an increasing effort to be convincing.
I've worked for five companies, on the same products (well, variations thereof over time), for 25 years, due to take-overs (I nearly left ~10 years ago due to management numskullery, but a timely buy-out of the bit I worked for fixed my problems while the rest of the company died off).
Hanging around for a while (a long while) doesn't necessarily mean dedication worth investing in, it could just be that I have a shocking lack of ambition :)
> This is going to catch some heat, but what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?
If it does go as far that way as many seem to expect (or, indeed, want), then most people will be able to do it, there will be a dearth of jobs and many people wanting them, so it'll be a race to the bottom for all but the lucky few: development will become a minimum wage job, or so close to it that it'll make no odds. If I'm earning minimum wage, it isn't going to be sat on my own doing someone else's prompting; I'll find a job that doesn't involve sitting alone in front of a screen and reclaim programming for hobby time (or just stop doing it at all; I have other hobbies to divide my time between). I dislike (effectively) being a remote worker already, but put up with it for the salary; if the salary goes because "AI" turns it into a race-to-the-bottom job, then I'm off.
Conversely: if that doesn't happen then I can continue to do what I want, which is program and not instruct someone else (be it a person I manage or an artificial construct) to program. I'm happy to accept the aid of tools for automation and such, I've written a few of my own, but there is a line past which my interest will just vanish.
What the people excited about the race to the bottom scenario don’t seem to understand is that it doesn’t mean low skill people will suddenly be more employable, it means fewer high skill people will be employable.
No one will be eager to employ “ai-natives” who don’t understand what the llm is pumping out, they’ll just keep the seasoned engineers who can manage and tame the output properly. Similarly, no one is going to hire a bunch of prompt engineers to replace their accountants, they’ll hire fewer seasoned accountants who can confidently review llm output.
And those that do have not yet understood what will happen when those seasoned workers retire and there are no juniors or mid-levels who could have grown into the role, because they were replaced by AI.
> What the people excited about the race to the bottom scenario
I'm not excited about it. I just see it as a logical consequence if what people are predicting comes to pass, and I've thought about how I will deal with that.
To be that guy for a moment: those are not ASCII as those characters are not in the standard 7-bit ASCII set. The closest you get to rounded corners with just ASCII is something like:
/-----\ .-----.
| bah | or | bah | (the latter usually looks best IMO, but that can depend on font)
\-----/ `-----'
You'd get away with calling the linked diagram ANSI. While that is technically incorrect, most people accept the term for anything using one of the common 8-bit character sets that include those box-drawing characters (CP437, CP850, and friends), because "ANSI" is what MS has long called its 8-bit code pages in its tools.
> is already a red flag
I wouldn't call using box-drawing characters a red flag; I've known people to use them this way for years and do so myself. The LLM generates them because they were in its training data, because people use them. It might be something to consider amongst other flags, but I don't consider it a strong signal on its own. The red flag for me is the alignment: if you are going to have your documentation ghostwritten, at least make the effort to do a cursory editing pass afterwards.
My visceral AI disgust response here is just a subset of my more general lazily-slapped-together-without-sufficient-testing-or-other-review disgust response. If it doesn't look bad in that way, whether hand-made or ghostwritten, then I'll not react to it that way. But if someone can't be bothered to do a simple clean-up pass on their documentation, what mess could they have left in the code too?
As far as I'm concerned the social contract (“the rules”) has already been broken by people taking insufficient care, and my reaction to that is a healthy one from the PoV of self-preservation. Acting in good faith works both ways, or it doesn't.
>Acting in good faith works both ways, or it doesn't.
Well said; polluting Hacker News with low-effort slop for self-promotion cannot be good faith.
The rules are enforced on commenters, but good faith is somehow not expected of submissions.
Nah, working hours would make global assumptions of 0900-1230/1330-1730, M-F, and need an overly convoluted way to specify what working hours actually are in the relevant location(s).
Probably went with the simplest implementation if starting from the current "seconds since epoch" value: let the user do any calculations needed to translate "three days" into that measurement.
It also efficiently annoys the most people at once: those who want hours will complain if it is set to days, and those who want days will complain if hours are used. By using minutes or seconds you can wind up both camps while not offending those who rightly don't care, because they can cope with a little arithmetic :)
Though doing what sleep(1) does would be my preference: default to seconds but allow m/h/d to be added to change that.
Hence the way I would do it (and have for other purposes), as stated in my final sentence. Have the human state the intent and convert to your own internally preferred units as needed.
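That approach (accept a human-readable suffix, convert to an internally preferred unit) is only a few lines. A minimal sketch in Python, mimicking sleep(1)'s s/m/h/d suffixes; the function name and the choice of seconds as the internal unit are illustrative, not from any particular tool:

```python
# Sketch of sleep(1)-style duration parsing: a bare number means seconds,
# and an optional s/m/h/d suffix scales it. Everything is normalised to
# seconds internally, so the rest of the program only deals in one unit.
_UNITS = {"": 1, "s": 1, "m": 60, "h": 3600, "d": 86400}

def parse_duration(text: str) -> float:
    """Parse '30', '30s', '10m', '2h' or '3d' into seconds."""
    text = text.strip()
    suffix = text[-1] if text and text[-1] in "smhd" else ""
    number = text[:-1] if suffix else text
    return float(number) * _UNITS[suffix]
```

So `parse_duration("3d")` gives 259200.0 seconds, and the user never has to do the multiplication themselves.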
No no no, see now we just say "computer! do tedious math!", and it will do some slightly different math for us and compliment us on having asked it to do so.
The one true unit of time is hexadecimal encoded nanoseconds since the unix epoch. (I'm only half joking because I actually have authored code that used that before.)
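For anyone who wants to join the half-joke, the encoding round-trips trivially; a sketch in Python (function names are made up for illustration):

```python
# Half-serious illustration: timestamps as hexadecimal nanoseconds
# since the Unix epoch, and the conversion back to float seconds.
import time

def hex_ns_now() -> str:
    """Current time as hex nanoseconds since the epoch."""
    return format(time.time_ns(), "x")

def from_hex_ns(s: str) -> float:
    """Convert a hex-nanosecond timestamp back to float seconds."""
    return int(s, 16) / 1e9
```

For example, `from_hex_ns("3b9aca00")` is exactly 1.0 second (0x3b9aca00 ns = 1,000,000,000 ns).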