Hacker News | new | past | comments | ask | show | jobs | submit | jhatemyjob's comments | login

This is definitely not worth using. It doesn't even say what hypervisor it's using. Is it using QEMU? Docker? Podman? Lima? Colima?

And also this chart is super weird:

    Solution  Latency  HiDPI    Native Integration  Setup Complexity
    Cocoa-Way Low      Yes      Native windows      Easy
    XQuartz   High     Partial  X11 quirks          Medium
    VNC       High     No       Full screen         Medium
    VM GUI    High     Partial  Separate window     Complex
A standard VM will always be the easiest to set up by far. And latency should be the same across all 4. I mean after all it's a VM running on your local machine. Honestly I don't even know what it means when it says "Latency".

I also looked at some of the code and it's using OpenGL 3.3 Core which is... super old. But it makes sense in the context of this being LLM-generated since most of its training data is probably OpenGL 3.3 Core code....

Overall this project is very strange. It makes me feel more confident in my skills; AI isn't all that great. It's all hype. You can get to the front page of HN, and if you're Peter Steinberger you can get acquired by OpenAI for a billion dollars. But that's about it. The code isn't getting any better.

This reminds me of that C-compiler-in-Rust publicity stunt by Anthropic. There's no substance. It's just a headline.


> It doesn't even say what hypervisor it's using. Is it using QEMU? Docker? Podman? Lima? Colima?

I don't think it does; you use whatever you want, and then run this to connect to Wayland on it. It could even be a separate machine with a Linux host, AIUI.


You are correct. I jumped the gun there a little bit.

I agree with most of your points, but why would a Wayland compositor need a hypervisor at all?

While I agree with the rest of your comment, they do mention they use OrbStack as their hypervisor in their demo video.

Gotcha, thanks for that info. Yeah, that's insane. You have to read the description of a YouTube video to understand what a project on GitHub is doing. There's no architecture documentation here.

Right under the link to the video, the README states:

> True protocol portability: Cocoa-Way rendering Linux apps from OrbStack via Unix sockets.

And it's been there for at least two months.


Good eye.

Windows was so bad that it made the web bad. Imagine the world we'd be in today if Internet Explorer never existed.

Well back in the 1990s Apple was on the ropes.

Classic MacOS was designed in 1984 to handle events from the keyboard, mouse and floppy, and adding events from the internet broke it. It was fun using a Mac and being able to get all your work done without touching a command line, but for a while it crashed, crashed and crashed when you tried to browse the web, until that fateful version where they added locks to stop the crashes, but then it was beachball... beachball... beachball...

They took investment from Microsoft at their low point, and then they came out with OS X, which is as POSIXy as any modern OS and was able to handle running a web browser.

In the 1990s you could also run Linux and at the time I thought Linux was far ahead of Windows in every way. Granted there were many categories of software like office suites that were not available, but installing most software was

   ./configure
   make
   sudo make install

but if your system was unusual (Linux in 1994, Solaris in 2004) you might need to patch the source somewhere.

If it wasn't for NeXT and Valve we would still be in the dark ages. Linux sucked for gaming until Valve poured all that money into Wine.

I started with Windows 98. Didn't experience OSX until 2010. 9 years wasted.


It still sucks for gaming; those are Windows games running on Proton, not much different from running arcade games with MAME or Amiga games with WinUAE...

I think it is different. As someone joked, "thanks to Wine, Win32 is the 'stable Linux ABI'" -- translating system calls is a lot different than emulating hardware, and the results prove it.

And with the R&D that went into the Steam Frame, the difference between x64 and arm64 is becoming negligible. You can target x64 Windows and can reasonably expect it to run on Android via Winlator.

And to target it, studios use Windows alongside Visual Studio.

Similar experience. I use these AI tools on a daily basis. I have tons of examples like yours. In one recent instance I explicitly told it in the prompt to not use memcpy, and it used memcpy anyway, and generated a 30-line diff after thinking for 20 minutes. In that amount of time I created a 10-line diff that didn't use memcpy.

I think it's the big investors' extremely powerful incentives manifesting in the form of internet comments. The pace of improvement peaked at GPT-4. There is value in autocomplete-as-a-service, and the "harnesses" like Codex take it a lot farther. But the people who are blown away by these new releases either don't spend a lot of time writing code, or are being paid to be blown away. This is not a hockey stick curve. It's a log curve.

Bigger context windows are a welcome addition. And stuff like JSON inputs is nice too. But these things aren't gonna like, take your SWE job, if you're any good. It's just like, a nice substitute for the Google -> Stack Overflow -> Copy/Paste workflow.


Most devs aren't very good. That's the reality, it's what we've all known for a long time. AI is trained on their code, and so these "subpar" devs are blown away when they see the AI generate boring, subpar code.

The second you throw a novel constraint into the mix things fall apart. But most devs don't even know about novel constraints let alone work with them. So they don't see these limitations.

Ask an LLM to not allocate? To not acquire locks? To ensure reentrancy safety? It'll fail - it isn't trained on how to do that. Ask it to "rank" software by some metric? It ends up just spitting out "community consensus" because domain expertise won't be highly represented in its training set.

I love having an LLM to automate the boring work, to do the "subpar" stuff, but they have routinely failed at doing anything I consider to be within my core competency. Just yesterday I used Opus 4.6 to test it out. I checked out an old version of a codebase that was built in a way that is totally inappropriate for security. I asked it to evaluate the system. It did far better than older models but it still completely failed in this task, radically underestimating the severity of its findings, and giving false justifications. Why? For the very obvious reason that it can't be trained to do that work.


The people glazing these tools can't design systems. I have this founder friend who I've known for decades, he knows how to code but he isn't really interested in it; he's more interested in the business side and mostly sees programming as a way to make money. Before ChatGPT he would raise money and hire engineers ASAP. When not a founder he would try to get into management roles etc etc. About a year ago he told me he doesn't really write code anymore, and he showed me part of his codebase for this new company he's building. To my horror I saw a 500-line bash script that he claimed he did not understand and just used prompts to edit it.

It didn't need to be a bash script; it could have been written in any scripting language. I presume it started off as bash when he was exploring the idea, and since it was already bash he decided to just keep going with it. But it was just one of those things where I was like: these autocomplete services will never stop and tell you "maybe this 500-line script should be rewritten in Python", they will just continue to affirm you and pile onto the tech debt.

I used to freak out and think my days were numbered when people claimed they stopped writing code. But now I realize that they don't like writing code, don't care about getting better at it, don't know what good code looks like, and would hire an engineer if they could. With that framing, whenever I see someone say "Opus 4.6 is nuts. Everything I throw at it works. Frontend, backend, algorithms—it does not matter." I know for a fact that "everything" in that person's mind is very limited in scope.

Also, I just realized that there was an em-dash in that comment. So there's that. Wasn't even written by a person.


> I know for a fact that "everything" in that person's mind is very limited in scope.

I agree, and I think it's quite telling what people are impressed by. Someone elsewhere said that Opus 4.6 is a better programmer than they are and... I mean, I kinda believe it, but I think it says way more about them than it does about Opus.


Yep that's from the same comment I quoted. Decent chance it's not even a real person.


> people who are blown away by these new releases either don't spend a lot of time writing code, or are being paid to be blown away

Careful, or you're going to get slapped by the stupid astroturfing rule... but you're correct. There's also the sunk cost fallacy, post-purchase rationalization, choice-supportive bias... hell, look at r/MyBoyfriendIsAI. Some people get very attached to these bots, like they're their work buddies or pets, so you don't even need to pay them; they'll glaze the crap out of it themselves.


> those were to allay fears of my partners to allow me to make the gift

I respect Carmack so much more now. I always scratched my head over why he made Quake GPL. It was such a waste. Now it doesn't matter anymore. I'm so thankful copyleft is finally losing its teeth. It served its purpose 30 years ago; we don't need it anymore.


Corporate lawyers for id wanted GPL iirc


That's more or less what Carmack said in the tweet in the OP, no?


I unironically want this service to exist. The GNU GPL "is a tumor on the programming community, in that not only is it completely braindead, but the people who use it go on to infect other people who can't think for themselves."

Historically, it was a good license, and was able to keep Microsoft and Apple in check in certain respects. But it's too played out now. In the past, a lot of its value came from it not being fully understood. Now it's a known quantity. You will never again have a situation where NeXT is forced to open-source their Objective-C frontend, for example.


It seems hostile to you, but surely you can see what he's replying to is way more hostile and passive aggressive?


Yes, seems clear, right? It was an extremely hostile pompous criticism of something he didn't understand at all, and the questions were rhetorical, not asked sincerely or in good faith:

> Is this a “this dev” thing, a Zig thing, or am just out of touch with modern language (or even larger scale development) projects?

No, none of those, it's him making numerous rash assumptions.

But my snarky post was probably poor judgment on my part. I won't be commenting further.


Yea, it's better to not feed the troll. I'm kind of shocked AndyKelley took the bait.


When I hit that part I was like... you don't enjoy the work. It has nothing to do with AI. One way or another you're gonna get smoked by the ones who actually like doing this stuff. There's good money in the fitness industry.


> We built a custom hardware-accelerated renderer on WebCodecs / WebGL2, there’s no server-side rendering, no plugins, everything runs in your browser (client-side).

Aight imma head out. Holy moly.


haha xD


They don't have access to the same information as us. There's another commenter who replied to you and brought up enshittification. I guarantee you he has not read the blog post by apenwarr, or even knows who apenwarr is.


It seems to think there's a valet, combined with the redditor anti-car, pro-walkable-cities ideology.

