barbegal's comments | Hacker News

The actual research paper shows it's pure nonsense

https://cdn.skoda-storyboard.com/2026/04/Skoda-DuoBell-Resea...

As expected, ANC headphones cancel less noise at low frequencies, so I guess 780 Hz is a trade-off: high enough in frequency to sound like a bell, but low enough to be attenuated a little less than higher frequencies.

The research paper is pretty poor quality and this is mainly a marketing exercise.


I don't understand why people get so hung up on Chrome using so much memory. A lot of this memory is "discardable", so it will be dropped when the system comes under memory pressure, and the amount allocated for this kind of use depends on how much memory your system has available. If Chrome is using lots of memory, it's almost always because your system has lots of memory available. It lets the browser cache large images and video assets that would otherwise have to be re-downloaded over the internet.


Or another process will die at random instead, which might be your desktop environment, the main browser process, Signal (10% chance of corrupting message history each time), a large image you were working on in Gimp...

Firefox has gotten very good at safely handling allocation failures, so instead of crashing it keeps your memory snugly at 100% full and renders your system entirely unusable until the kernel figures out (2-20 minutes later) that it really cannot allocate a single kilobyte anymore and it decides to run the OOM killer

but also

it's not cheap? Why should everyone upgrade to 32GB RAM to multitask when all the text, images, and data structures in open programs take only a few megabytes each? How can you not get hung up about the senseless exploding memory usage


That's not how it works. Killing processes is one of the last ways memory is recovered; Chrome starts giving memory back well before that happens. Try compiling something and watch how Chrome's RAM usage changes: most of your tabs will be discarded.


I've already described above what the browser's behavior is. That your browser works differently is good for you; I'm not using a Google product as my main browser. There are also other downsides that this behavior doesn't fix, mentioned in sibling comments.


This is not a Chrome problem but an OS problem. Android does a much better job here by comparison. Desktop Linux is simply not well optimized for low-RAM users.


"your tabs will be discarded" is not an excuse for using 2.4GB for a tab

I dunno I have 96GB of RAM and I still get the whole "system dies due to resource exhaustion" thing. Yesterday I managed to somehow crash DWM from handle exhaustion. Man, people really waste resources....


Reclaiming memory is not free.

It's better not to use 2.4 GB of RAM in the first place. Imagine if LinkedIn weren't so hostile to users and instead actually cared about user experience.


Well, a few GB here and a few GB there, soon you’re talking about real RAM issues.

The other day Safari was using over 50GB with only a few tabs open.

Maybe we should also acknowledge that some companies have no compassion for users (or their desires and needs) and see them merely as hurdles on the way to their money.


It's memory that the kernel cannot use to cache other applications' files.


This isn't true for OSes like Windows, where the kernel is informed that the memory is discardable and can prioritize discarding it as necessary. It's a shame that Linux doesn't have something similar.


Linux supports it too through madvise():

       MADV_FREE (since Linux 4.5)
              The application no longer requires the pages in the range
              specified by addr and size.  The kernel can thus free these
              pages, but the freeing could be delayed until memory
              pressure occurs.
and

       MADV_DONTNEED
              Do not expect access in the near future.  (For the time
              being, the application is finished with the given range, so
              the kernel can free resources associated with it.)

              After a successful MADV_DONTNEED operation, the semantics
              of memory access in the specified region are changed:
              subsequent accesses of pages in the range will succeed, but
              will result in either repopulating the memory contents from
              the up-to-date contents of the underlying mapped file (for
              shared file mappings, shared anonymous mappings, and shmem-
              based techniques such as System V shared memory segments)
              or zero-fill-on-demand pages for anonymous private
              mappings.
Does Chrome use it, though?


...still not an excuse for using 2.4GB for a tab.

I want my compiler, language server, and IDE to do that, not LinkedIn.


Um.

The websites are jam-packed with trackers and ads. I am utterly concerned about Chrome's memory usage because it's passively allowing all this to occur.

How about you let me automatically blacklist sites that are using too much memory? All that would reveal is that those website owners FUCKING HATE THE REST OF US.

Any solution to this epic fucking problem would be wonderful.


uBlock Origin on Firefox or Brave will block most of the tracker bloat that causes the RAM spikes. It's not a perfect fix, but it will cut out a significant chunk. Tab Wrangler also helps by automatically suspending inactive tabs. You should try out both.


Step 0: don't use a browser created by an ad company.


Nope - I have to close Chrome in order to compile.


I use a Mac, which has really good memory management, but seeing 10 GB of my SSD clogged up with useless junk just because modern development systems are complete and utter crap still feels bad.

March is "MARCHintosh" month for retro Macintosh computing; for fun I wrote a networked chat client. It has some creature-comfort features like loading chat history from the server, mentions, user info, background notifications, and multiple sessions. It runs in 128 kilobytes of RAM.

Automatic garbage-collected memory management was a mistake. The memory leaks we had when people forgot to free memory were nothing compared to the leaks we have now that people don't even consider what memory is.


Does the KV cache really grow to use more memory than the model weights? The reduction in overall RAM relies on the KV cache being a substantial proportion of the memory usage but with very large models I can't see how that holds true.


For long context, yes this is at least plausible. And the latest models are reaching context lengths of 1M tokens or perhaps more.


This series of graphs https://www.bmj.com/content/bmj/387/bmj-2024-082194/F1.large... shows that whilst those two professions are at the bottom of the distribution, they are not particular outliers, and cherry-picking has occurred. The statistical analysis should have adjusted for picking the best 2 of the 443 occupations in the study; that would likely show very little statistical significance.


Total receipts numbered over 11,000, so more like 100 hours, or around $2000: a similar price to the LLM.


This is good work. I wish branch predictors were better reverse engineered so CPU simulation could be improved. It would be much better to accurately predict how software will perform on other processors in simulation, rather than having to go out and buy hardware to test on (which is still the way we do things in 2026).


The memory overhead is fairly significant: it uses between 1.5 and 3 times the space of the data stored.


For a truly over-engineered spinning top: https://youtu.be/QLTsxXNekVE?si=S31kpZQHiYlUSedx


It's fascinating that you can get to the level of atomic material properties as a spinning top hacker. Diamond seems like it'd be the obvious winner, if you could somehow get a perfectly polished and smooth surface.

I'd love to see a small Prince Rupert's drop for a tip and a ruby/sapphire spinning surface - you'd need to make a ton of drops, probably, but having a round, nearly spherical contact geometry and super smooth surface seems like a winning combo.


Until it wears just a smidgen and explodes violently!


Link with spyware removed: https://youtu.be/QLTsxXNekVE


Thank you! This is what I really wanted!


Thanks! I came across http://www.pocketwatchrepair.com/how-to/jewels.php recently, hadn't realised the jewels weren't for aesthetics.


Only 2 students actually used an LLM in his exam, one well and one poorly, so I'm not sure there is much you can draw from this experience.

In my experience LLMs can significantly speed up the process of solving exam questions. They can surface relevant material I don't know about, they can remember how other similar problems are solved a lot better than I can and they can check for any mistakes in my answer. Yes when you get into very niche areas they start to fail (and often in a misleading way) but if you run through practise papers at all you can tell this and either avoid using the LLM or do some fine tuning on past papers.


An interesting idea, but in theory just three correct pass codes and some brute force will reveal the secret key, so you'd have to be very careful to input the pass code only to sites you trust well.

It's definitely computable on a piece of paper and reasonably secure against replay attacks.


I was wondering about the overall security. How did you determine that 3 pass codes and brute force will reveal the secret key?


Thinking about it, there are only 10 billion different keys and somewhat fewer S-boxes.

So given a single pass code and the login time, you can just compute the pass codes for all possible keys. Since more than one key could produce the same pass code, you would need 2 or 3 to narrow it down.

In fact, you don't even need to know the login time really, even just knowing roughly when would only increase the space to search by a bit.


Also @MattPalmer1086: the best solution I have for this now is to have several secret keys and rotate their usage. Would be nice to have some additional security boosts.


Key rotation among a set of keys only partially mitigates the issue (an attacker just has to obtain more samples).

It has its own sync problems (can you be sure which key to use next, and did the server update the same as you, or did the last request not get through?).

This post on security stack exchange seems relevant.

https://security.stackexchange.com/questions/150168/one-time...


Yep, known issue; I was hoping someone could spice the protocol up without making it mentally too heavy. HN is full of smart, playful people.


Yep, I am aware: 2 or 3 OTPs and timestamps plus some brute forcing using the source code. Server-side brute force by input should or could be made implausible. But that is why I am signaling here that I would love a genius or a playful expert/enthusiast to contribute a bit or two to it, or to become a co-author.


I'm not an expert, but roughly know the numbers. Usually with password-based key derivation, one would increase resource needs (processor time, memory demand) to counter brute forcing. Not an option for a human brain, I guess.

So the key would have to be longer. And random, or a lot longer. Over 80 random bits is generally a good idea. That's roughly 24 decimal digits (random!). I guess about 16 alphanumeric characters would do too, again random. Or a very long passphrase.

So it's either remembering long, random strings or doing a lot more math. I think it's doable but really not convenient.


A handful of words is generally more memorizable than the same number of bits as a random alphanumeric string. You wouldn’t need a very long pass phrase for 80 bits as long as you’re using a large dictionary.

