I do M&A at my company, as CTO. I have seen lots of successful companies' codebases, and literally none of them was elegant. Including very profitable companies with good, loved products.
The only good code I know of is in the open source world and in the demoscene. Commercial code is mostly crap - and it still makes money.
True, but it's really hard to name a family of commercial devices with security features in hardware, even serious ones, that wasn't eventually hacked.
Worse still, for new mainstream devices that are believed to be safe, state-sponsored actors will likely operate unpublished exploits, and will exploit the misplaced faith that people and the judiciary put in device attestation. I don't think the very likeable people who worked on Pegasus found themselves respectable jobs - they are likely still selling that sophisticated crap to every authoritarian regime.
Exactly this. And what's more, the idea of device attestation makes people trust those devices, while the history of rooting consoles and phones proves that nothing holds - not even tech backed by billions in commercial interest.
The point about reducing the blast radius is valid - by all means make this optional and allow users to elect to tie their identity to the device. For everyone else, validate the actual transactions, not just user secrets and device secrets.
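To make that concrete, here is a minimal sketch of what I mean - all names are hypothetical, this is not any real banking API. The idea is that the limits and checks live on the server and apply per transaction, so attestation becomes an optional, friction-reducing signal rather than the gate:

    from dataclasses import dataclass

    @dataclass
    class Transaction:
        account_id: str
        amount: float
        payee: str
        device_attested: bool  # optional signal, never sufficient on its own

    # Hypothetical server-side check: hard, per-transaction rules that hold
    # even if the device is fully compromised.
    def validate_transaction(tx: Transaction, daily_total: float,
                             known_payees: set[str]) -> bool:
        if tx.amount + daily_total > 1000.0:  # server-side daily spending cap
            return False
        if tx.payee not in known_payees and tx.amount > 100.0:
            return False  # new payee over the limit: reject, require step-up auth
        # A user who opted in to device binding could earn higher limits when
        # tx.device_attested is True, but a missing attestation only tightens
        # the rules - it never becomes the sole gate.
        return True

    # Example: a 250.00 transfer to an unknown payee is rejected regardless
    # of what the device claims about itself.
    tx = Transaction("acct-1", 250.0, "new-payee", device_attested=True)
    assert validate_transaction(tx, daily_total=0.0, known_payees=set()) is False

With rules like these, rooting the device or faking attestation changes nothing about what an attacker can actually move; at worst it costs the legitimate user some convenience.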
Fingers crossed for the judiciary - if the implementers ignore the intention of the law, then lawyers will have to help them understand the limits of corner cutting - and block this.
Step by step. We realize we will not get there in one day.
It's the same as with bicycle paths. Initially they make no sense, leading from nowhere to nowhere. Give it a few years, and a usable network emerges.
Right now there is serious money and brainpower being poured into sovereign cloud tech. Thanks to the gift of open source and standards, it's actually not impossible to create modern systems with zero US dependency.
I fear, though, that as with everything else, Microsoft Excel will be the hardest dependency to deal with.
If this were a gym subscription, it would be the equivalent of some people going to the gym themselves, and some people sending their android to the gym every day, for the whole day, and using as much equipment as the gym policy allows.
It would be like some people sending a competitor's android to the gym instead of the android the gym provides. Said gym also doesn't have enough equipment for everyone's gym-appointed android, despite being more expensive. Said gym doesn't want to admit this, nor does it want to raise prices on an already more expensive subscription. Said gym doesn't want the competitor's android to gain market share. So said gym blames the competitor's android for using up gym equipment, despite the gym's own android being capable of using just as much.
> using as much equipment as the gym policy allows.
which said customer paid for. And now they want to back out of it, because it turns out users actually did what the company assumed they wouldn't.
I say they ought to be punished under consumer competition laws - they need to uphold the terms of the subscription as the customer understood them at sign-up.
Dunno, maybe there is a bug. I was a subscriber, had the extra usage enabled, and had paid for extra usage before - and still didn't get the extra credits. I am on the Max plan, so I was rather looking forward to the extra $100 to burn on /fast mode.
My main home office has 5 monitors, and I still have to swipe between desktops regularly. I used to have 6, but two ultrawides stacked one above the other were a bit painful, and I developed back pain after a while.
My on-the-road setup typically involves a folding portable monitor (an ASUS ZenScreen Duo, or something to that effect - that is, 2x 1080p). That's easily enough, and I don't really see a decrease in my efficiency.
But I sometimes take long-distance flights, and then I code/work on a single screen. I can absolutely do the same things I do with my 6-screen setup, with an almost unnoticeable effect on productivity. Could it be that the extra screens are just useless, an illusion of added productivity?
I notice a real loss on a single screen in many cases; so much so that I'll get up and move downstairs to get the extra screens.
It really depends on what kind of work I'm doing - and if I'm on a plane, I'm likely going to do work that does well on a single screen: replying to emails, dicking around on HN, etc.
But in max-screen mode (or at least with two screens), I'm "doing" something on the main screen while looking at reference material, output, chat, and other things on the second.
If I am writing backend code, I am mostly in a single IDE window; moving tabs with code files to other screens works, but it is inconvenient.
When I work on frontend, I'd much rather have the preview on a second screen, and most likely reference material next to it.
When writing documentation or requirements, I cannot imagine working on a single screen, as I am essentially integrating multiple data sources into one: I need to see how the app looks now and before the release, and what changed, and still have working space for my draft.
Switching windows to quickly look up documentation is fine, but when creating requirements - taking the time to understand what needs to be in which place and how it has to evolve - I need to have everything right there so that my imagination doesn't run away.
Easy. That is the only social media site that is so comically bad that its feed does not trigger me in any way. I am using it as a way to reach out to colleagues from the past - a bit like Facebook circa 10 years ago.
I can't stand any of the other social media sites and deleted my accounts there years ago. So if I need to organize a small reunion with friends from high school, LinkedIn is the easiest solution.
I am old enough to remember the outages of AWS, GCP, and Azure that predate the gen-AI thing. And of course the countless, endless, hopeless procession of bugs in just about everything else.
I am running it at a large mid-cap company (~25bn revenue). For the first time we are releasing stuff that does not suck, and we are releasing it 5x faster than before. It's real for us, and it produces real, measurable economic value.
Now, how Anthropic or Google makes any money on those 250-a-month subs, I have no idea.