pearjuice's comments | Hacker News

As long as the US is a military super power there is no problem whatsoever.

See https://en.wikipedia.org/wiki/List_of_countries_by_military_...

Not trying to be a low-effort reply, but any Econ 101 textbook will tell you this kind of debt/interest/inflation mix is impossible. In practice, though, the world is too dependent on the USD in one way or another. If countries try to break loose, they may find themselves confronted with those military expenditures, which is a good enough incentive to keep using the USD as the global reserve currency.


I wonder how much of the regression of ChatGPT is due to it ingesting new content that itself originated from ChatGPT. Blog and SEO spam full of ChatGPT fluff is going through the roof; eventually all of that will get crawled too, and the model will just get positively reinforced on its own output. Or is that not a concern?


0.1% chance

My reasons are:

- I don't recall seeing any evidence that OpenAI has included new data in pretraining beyond the previous limit (Sept. 2021?) for GPT-3.5 or GPT-4

- Maybe they did finetuning or RLHF on new data but this is likely to be highly curated data

- AI-generated content should be absolutely tiny in comparison to the data they are already working with.


Why would he be annoyed? The lifetime business value and goodwill from this public analysis probably earned Patrick a lot more than any consulting gig Colin would have paid for.


Non-paywall link: https://archive.is/hejHt


And for some reason archive.is doesn't work on my MacBook.


It's a problem with Private Relay; it works if you turn that off.


They don't have to. They just need enough time and belief from investors to dump it on retail (IPO) or get bought out by another company.


>fight back hard

Bad advice, especially in the UK and in OP's age bracket. The UK has seen a sharp rise in knife incidents, more so at younger ages.

https://news.sky.com/story/fatal-stabbings-in-england-and-wa...


It is also worth pointing out that there is a nascent increase in guns in addition to knives now. Plus, fighting back often leads to academic consequences even in cases of self-defence. I was placed into a behavioural management program the one time I fought back.


I'd take that over permanent brain damage any day.


Theoretically, but in the moment it's hard to perceive the threat level posed by your perpetrator, and it can affect your education long term. Often bullies will be satisfied with only mild violence or insults; trying but failing to defend yourself can often worsen matters, and more permanent solutions are off the table thanks to criminal law.


Information and propaganda wars are so asymmetrical that I sometimes wish there were less reporting until it was actually clear what was going on. It's all just one fog of uncertainty, and whoever thinks what is being reported right now is correct is probably mistaken. Somehow this buildup of troops went completely unreported, even though any intelligence agency worth its salt should have seen the concentration and movement of troops for days if not weeks. Completely unreported in the media, AFAIK, and then all of a sudden every outlet is reporting that Wagner is going full steam ahead towards Moscow.


There were Russian and Chinese blogs reporting this for days, but nobody took them seriously.

I guess it's too hard to tell signal from noise.


What do Chinese-language sources have to say about this?


Eh, around the time of the Wagner pullout from Bakhmut there were signs of a large amount of discontent between the mercenaries and the MoD. They had been throwing threats at each other, along with rumors of occasionally shooting at each other.

Oddly enough, on some of the more hypothetical and conspiratorial parts of the net, possibilities like this have been discussed for weeks. It's just so odd to see it happen in real life.


I don't know how well the results hold up after a year, but according to Redis, the Dragonfly performance test is biased, and with Redis configured properly it reached higher throughput than Dragonfly. YMMV, but just putting this up here. Personally I've never used Dragonfly, so I wonder if the "marketing metrics" actually hold up in production.

https://redis.com/blog/redis-architecture-13-years-later/


Redis Cluster has a lot of limitations, though. It's unusable for multi-key operations, there's no scan, no transactions, a single database only, the client has to support it, the way it works makes it unusable when connecting to it from outside the network (see https://redis.io/docs/management/scaling/), etc. At that point Redis Cluster is not Redis anymore, and it's disingenuous to call it that. I would rather have slightly lower performance and not have to deal with those limitations AND not have to deal with orchestration.
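The multi-key limitation comes from Redis Cluster's hash-slot scheme: each key is mapped to one of 16384 slots via CRC16, and multi-key commands like MGET only work when all keys land in the same slot, which is what hash tags are for. A minimal sketch of the slot calculation (`key_slot` here is an illustrative reimplementation, not a redis-py API):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the variant Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of 16384 slots, honoring {hash tag} syntax."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # non-empty tag: hash only the tag, not the whole key
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# keys sharing a hash tag land in the same slot, so multi-key ops on them work
print(key_slot("{user1}.followers") == key_slot("{user1}.following"))  # True
```

Keys without a common tag usually map to different slots, which is why a naive MGET across arbitrary keys fails with a CROSSSLOT error.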


Yeah, but isn't the Redis one just as biased?

What might have been interesting would be to test on a range of cores / clusters, and consider the overhead of managing 1VM vs 64VMs etc.


The Dragonfly benchmark runs one Redis instance on a 64-CPU machine and compares it with one Dragonfly instance on the same machine.

But there is nothing stopping you from running 64 Redis instances on one machine if it has 64 cores, which is what Redis did (actually, they ran just 40). That actually seems like a nicer design overall, as it scales "naturally" to multiple machines without any extra effort/code, it keeps the code simpler, you can also have one of these Redis instances segfault without bringing your entire cache down.
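A per-core setup like the one just described could be scripted roughly as follows (the `taskset`/`redis-server` flags are standard, but the exact configuration Redis used in their rebuttal may differ; this builds the commands as a dry run):

```python
def shard_commands(n_instances: int, base_port: int = 6379) -> list[str]:
    """Build one CPU-pinned redis-server launch command per core (dry run)."""
    return [
        f"taskset -c {i} redis-server --port {base_port + i} --daemonize yes"
        for i in range(n_instances)
    ]

# the Redis rebuttal ran 40 instances on the 64-CPU machine
for cmd in shard_commands(40)[:2]:
    print(cmd)
```

Each instance gets its own port and its own core, so there is no lock contention between them; clients (or a proxy) then shard keys across the ports.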

Other than that, they seem to have run the same benchmark. YMMV for other types of workloads of course, and perhaps Dragonfly could be configured better in some way.

Either way: it seems the Dragonfly benchmark is not just biased, but highly misleading. And while the Redis benchmark may be biased, it certainly doesn't seem highly misleading.


To me, spinning up multiple copies of the database is cheating. You're comparing a box of apples to a single apple.

Yes, using a Redis cluster is the only way to get Redis to actually use system resources effectively, but it's a relatively complex thing to create and manage compared to just running one server.


I don't think it's cheating at all; it's how it's designed to work.

If you want to say "but this is more difficult": okay, fair enough (although in my experience it's not difficult at all), but then say that, instead of posting a misleading benchmark which runs Redis in a way it's not supposed to be run. You can place all sorts of artificial "yeah, but I don't want to do it like this" constraints on all sorts of things.


Hmm, and ships are designed to sail, yet you use planes to cross the Atlantic. The Nokia was designed as the strongest and most affordable phone, yet you use an iPhone that costs $1000. It's not about how it was designed but whether it addresses your current needs. Developers do not want to manage a cluster of single-CPU processes. Not on their laptops and not in production. And it's not just about management complexity. See https://github.com/dragonflydb/dragonfly/issues/1229, and it's just one example. A single CPU is just not enough for today's use cases.


That may all very well be the case – let us assume it is for the sake of the argument, although I have some comments about that as well – but that still means the argument is "Redis is too complex to run on multiple CPUs" and/or "Redis is poor for these workloads" (I didn't investigate that issue in depth), and not "Redis is unable to do much work with this very powerful AWS instance". These two are very different things. There is no nuance anywhere in the benchmark. A reader might very well believe that this is all the performance they're going to get out of Redis on that machine, which is clearly not the case.

> The Nokia was designed as the strongest and most affordable phone, yet you use an iPhone that costs $1000

Actually I have a Nokia :-)


You are an exception, then :) But I still stand by the claim that fragmenting your stateful workload (i.e. Redis) into a bunch of processes instead of having a single endpoint per instance is an acceptable approach in 2023. When your processes are excessively tiny, their load variability overshadows their average load. This imbalance results in unpredictable pauses, latencies, and out-of-memory (OOM) issues. This primarily occurs due to the absence of resource pooling under a single process. While it's challenging to exhibit this issue via synthetic benchmarks, it's certainly present.


I think you forgot "not" there before "an acceptable approach in 2023".

These are all fair and reasonable opinions to have, and to some degree I even agree with them, but none of that is captured in the rather simplistic benchmark. Everyone understands that even with the best of efforts it's hard to capture everything in a benchmark, but in this case it's just missing a very obvious way to run Redis.

It's like benchmarking PostgreSQL connections and coming to the conclusion there is no way PostgreSQL can handle more than n connections and that OtherSQL is much better. Is this true? Yes. But it's also true that half the world is running pg_bouncer and that this is widely seen as the way to run PostgreSQL if you need loads of connections. Is it a pain you need to run this and something that should be addressed in PostgreSQL? Absolutely. Such a benchmark would be correct in a strict narrow technical sense, but at the same time also misrepresentative of the real-world situation.


I understand what you are saying. How would you suggest presenting it, then? Dragonfly is not faster than Redis when running on a single CPU. It cannot be, just because it has the overhead of the internal virtualization layer that composes all the operations over multiple shards (in the general case). But Dragonfly can scale vertically with low latency and high throughput, unlike other attempts at making a multi-threaded Redis that used spinlocks or mutexes. So how do we demonstrate the added value?


> But Dragonfly can scale vertically with low latency and high throughput, unlike other attempts at making a multi-threaded Redis that used spinlocks or mutexes. So how do we demonstrate the added value?

Provide more advanced benchmarks which demonstrate those types of differences better.

The situation is that the differences are complex, both in terms of performance and operationally (e.g. running multiple instances is not a huge obstacle, but it is harder). That's always going to be hard to capture in a single graph or a single tagline; I appreciate this isn't easy.

It's your website; you can do what you want with it. And maybe I'm just a grumpy old curmudgeon who has seen too many hype cycles, but to me it just comes off as "too good to be true" – which it kind of is – and leaves a more negative than positive impression. The same applies to "The most performant in-memory data store on Earth" tagline, which seems a bit hyperbolic (what is "fastest" depends, as you mentioned that Redis will always be faster on a single core – some people only need a single core!)

I have the business acumen of a goat, so what do I know? But it seems to me that a lot of people appreciate it when products are straightforward about their weaker points as well, and even straight-up say they're not the best fit for all scenarios, and that in the long run this is more beneficial.


> To me, spinning up multiple copies of the database is cheating.

What if the database was designed to be run that way?

> You're comparing a box of Apples to a single apple.

Precisely. Dragonfly is a box of apples. Redis is a single apple that can be put in a box with other apples. If you run a "benchmark" comparing your box of apples against a sole apple, you're being either stupid or dishonest.


At least on AWS it is kind of hard to get 40 tiny VMs with sufficient speed on the infra side. Given that laptops have 40+ vCores these days, I think a single instance of anything should have some multithreading.


The comment you replied to explicitly said (so you don't even have to read the Redis article, which also clearly says so):

> The Dragonfly benchmark runs one Redis instance on a 64-CPU machine and compares it with one Dragonfly instance on the same machine.

They were not running 40 tiny VMs!


They should have chosen a 1024-core box and really shocked the world.


> and consider the overhead of managing 1VM vs 64VMs

They clearly are not running 64 VMs in the test they are describing.

They compare both databases on one VM of the exact same size, both deployed as their makers recommend to deploy them.


Meaning JPM was also granted a new exemption from the 10% deposit cap in this acquisition? There are regulations[0] preventing banks holding >=10% of US deposits from growing larger by acquiring other banks. Another win for TBTF theorists.

[0] https://corpgov.law.harvard.edu/2018/05/29/regulatory-reform...


A win for anti-capitalist theorists as well.


Don't forget the cryptocurrency adherents.


Another bank, another leeway for the FDIC insurance limit?


It's an interesting question in this case because last month a consortium of banks made a big show of depositing $30B "uninsured" as a show of confidence, and it's not clear whether it's still there.


What do you mean it's not clear it's still there? It was there when they had their earnings call 4 days ago.


4 days is a long time in bank deposit land now!


I guess they just bought FRC's loan book at par.


Silicon Valley Bank had a sudden collapse. While I don't agree with the FDIC's decision to make everyone whole, I understand why.

First Republic Bank was collapsing for more than a month. If anyone still kept uninsured money there, it's mostly on them.


> First Republic Bank was collapsing for more than a month. If anyone still kept uninsured money there, it's mostly on them.

The big banks deposited $30 billion uninsured to try to rescue FRC. Are you saying that you don't want big banks trying to rescue smaller banks anymore?

Moral hazards, all the way down. We like the deposit they made, but they did so because the FDIC seemed to offer assurances to cover even uninsured deposits. We're back to the SIVB questions, just delayed by a month.


Big banks and small banks are businesses. I don't want a big bank to "rescue" a smaller bank if it doesn't make economic sense to them.


> If anyone still kept uninsured money there, it's mostly on them.

It's been very difficult for small businesses to open accounts at other banks to move money to. The waiting list is very long, because business accounts require a lot of KYC.

And importantly, the moving out of uninsured deposits (by people who were fortuitous enough to have other accounts to move it to) actually caused the problems we see today.


The whole point of the FDIC is that it's not a good idea to reward people for participating in a bank run.


No the point is to prevent a bank run in the first place.


Are you genuinely confused about the mechanism which accomplishes this?


...Actually, there is no disincentive built into the FDIC for people to participate in bank runs. In fact, the entire point of the FDIC was to insulate depositors from banks doing stupid things with everyone's money.

Now that the Fed has bailed out the creme de la creme, I'd like to see the argument employed when the FDIC hasn't got the liquidity left to backstop every other depositor in subsequent bank failures.


They don't have to argue anymore; this isn't the era you grew up in. That world is gone. A reporter who has a question about it won't be selected to ask one.

Recent example of how fake it all is: https://nypost.com/2023/04/26/biden-cheat-sheet-shows-he-had...


Yeah, maybe learn how fake the NY Post is. This is typical preparation for a high-profile Q&A: https://www.cnn.com/2023/04/27/politics/biden-note-card-white...


CNN isn't exactly the gold standard either.


Except for the incoming question being on there, per your link:

> While it was notable that a potential question was written on Biden’s card, every White House press office takes scrupulous care to prepare their president for news conferences.


Except the question on there wasn't the question that was asked. Do you understand how Q&A (or, for that matter, court) prep goes? The question on there was an example question that they expected would be along the lines of what that person would be interested in and how they would phrase it (or what particular things they would try to attack/draw out). The article you just quoted even said that.


True. That's why the FDIC should fulfill its obligations, but nothing more.


That's just ensuring anyone over the insured limits participates in a bank run on every bank that isn't too big to fail (and the first to do so get rewarded by getting to keep their money).


No, that's ensuring that we follow FDIC rules. Change the rules, and be upfront about the cost; don't rely on one-off precedents.

The SVB bailout was a scam. They said taxpayers wouldn't pay for it and that it would be paid for by banks. Which means customers. I won't pay for it as a taxpayer; I'll just pay for it as a bank customer - lovely.


No. The people that didn't participate in a bank run are the ones that get hurt. The whole point of deposit insurance is that people like you and me, with less than $250,000 in a single account, shouldn't get screwed over when billionaires like Peter Thiel decide to play games with other people's money.


> The whole point of the FDIC is that it's not a good idea to reward people for participating in a bank run.

> No. The people that didn't participate in a bank run are the ones that get hurt.

These are the same, right?


No. Because the people that participated got their money out, leaving the people that didn't participate with nothing in their accounts. This happens because of fractional reserve banking.

Think about it. Assume we both have an account with $1 dollar in it, and the bank only has $1 on hand. Now I create a run where I take my dollar out, but you don’t participate. I have my dollar, and now you have nothing because the bank failed.

How are we the same?
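The toy scenario above can be run as a tiny simulation (a sketch only; `run_bank` is an illustrative helper, not real banking mechanics):

```python
def run_bank(reserves: float, withdrawals: list[float]) -> list[float]:
    """Pay first-come-first-served withdrawal requests until reserves run out."""
    paid = []
    for requested in withdrawals:
        take = min(requested, reserves)  # a depositor can only get what's left
        paid.append(take)
        reserves -= take
    return paid

# two depositors with $1 each, but the bank holds only $1 in reserve:
print(run_bank(1.0, [1.0, 1.0]))  # [1.0, 0.0] — the runner is paid, the holdout isn't
```

The asymmetry is entirely in the ordering: whoever withdraws first is made whole out of the shared reserve, and whoever waits absorbs the shortfall.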


> and the bank only has $1 on hand

Well, that's not the truth either. The bank is holding onto 30Y mortgages / 30Y Treasuries that will be worth $2 in the year 2050, but is only worth $1.4 (fair market value) right now.

This loan was good two years ago (i.e., its fair-market value was $2), back in 2021. But the Fed's rate hikes have caused the loan's value to collapse, and so here we are.
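The mark-to-market loss described here is just discounting: the fair value of a future payment is its face value divided by (1 + r)^n, so a rise in rates shrinks it. A sketch with illustrative numbers (the rates and horizon below are assumptions, not the bank's actual book):

```python
def present_value(face: float, rate: float, years: int) -> float:
    """Fair value today of a zero-coupon claim paying `face` in `years` years."""
    return face / (1 + rate) ** years

# $2 due in 27 years, valued at a low 2021-style rate vs. a post-hike rate
before_hikes = present_value(2.0, 0.005, 27)  # ~1.75
after_hikes = present_value(2.0, 0.015, 27)   # ~1.34
print(round(before_hikes, 2), round(after_hikes, 2))
```

Nothing about the loan itself changed; only the discount rate did, which is enough to turn a par asset into one worth roughly $1.4 on the open market.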

You withdraw $1, but the bank doesn't want to sell the bonds because that would lock in the loss. The Fed provides a loan at the full principal of the bond (so the Fed now backstops the missing money). The Fed is now acting as the lender of last resort, providing $2 of true dollars to backstop the $1.4 (fair market value) of the bond, which will truly become $2 by the year 2050. The Fed will exist that long, so everything should be kosher, in theory.

Or so goes the story one month ago. Why didn't this work? Why is FRC still collapsing despite these loans from the Fed?

----------

Believe it or not, life is a bit more complicated than just "fractional reserve banking". You're missing a huge part of the story if that's all your mind is open to. I'm not claiming to have all the answers, but I think it would behoove you to at least try to understand the current situation with a bit more nuance.


That's a lot of words to intentionally obfuscate the simple fact that banks don't have the money on hand to allow every depositor to simultaneously withdraw their money.


It's important to know the difference if you hope to fix the system.

Blaming fractional reserve banking for something totally unrelated will help nobody.


That wasn’t the point.

