Hacker News | jrochkind1's comments

When you serve video from R2, you do it directly from R2 to client, not with an additional Cloudflare CDN in front, and that works fine? I have been trying to understand video and R2.

I had not heard of the "La Liga situation", but googled and what I learned was that La Liga is a Spanish football (soccer) team, and their players did a protest action about not wanting a match to be staged in Florida, and the team owners tried to say it was an illegal strike, but a court recently disagreed and said it was protected protest....

I still have no idea what any of this has to do with any clients moving from Cloudflare to Bunny.net, what am I missing?


Cloudflare is blocked country-wide during matches. For example https://community.cloudflare.com/t/website-inaccessible-from...

La Liga is the national soccer organisation, which organizes the championship. They force the ISPs to block Cloudflare during games to block illegal streaming websites. But then it blocks a lot of websites that have nothing to do with it, and there are games fairly often.

As an anti piracy measure, La Liga (Spain's biggest football association) was able to push the government so that all ISPs have to block Cloudflare's IPs during matches.

It's ridiculous.


what am i missing?


It’s a reference to Iran attacking AWS data centres in those countries.

Iran made those AWS data centers... unhappy.

The comment is disingenuous, though, since Locker doesn't need AWS S3 to function.


It was meant as humor, not technical advice lol.

I wonder if it would be possible to do something like this that had transparent end-to-end encryption.

Honestly I think their status page just got more honest -- and they are graphing this in such a way that any partial outage to any service looks really bad on the chart.

There were definitely partial outages to services inside that row of horizontal green dots, that the status page just wasn't advertising.


No way that holds up in court when they are marketing it for things other than entertainment.

AI makes this all go exponential.

We've had almost universal video surveillance of all public spaces for a few years now, but pre-AI it just wasn't realistic to monitor or search it all. Well, it is now. Video is just one example of the surveillance data firehose that will become legible to the state -- or anyone else that can centralize their access to it all.

I think this will actually end up being the most impactful effect of AI on society, and it's not going to be great.


> Republican Rep. Chris Richardson, an Elbert County Republican, argued that the bill is too broad and could regulate standard analytic usage in the workplace, such as a human resources software that recommends a pay band for employees based on performance.

Does he not realize this is just selling the bill further? Oh no, it might prohibit software from automatically determining my wages -- how could we even have a society if we don't let computers figure out the least they can pay me without me quitting?


Oh man the Colorado GOP is a complete mess these days.

One of the frontrunners for the governorship is just spouting straight antisemitic garbage: https://www.9news.com/article/news/politics/gop-gubernatoria...

Edit: He withdrew this morning and is running for the GOP chair now.

Boebert is quiet these days but I'm sure she'll ramp up after her primary closes.

The various school boards are perennial sources of idiocy. My (former) board would go into public meetings and just openly and freely admit to crimes.

The county commissioners in DougCo recently decided to fine the victims of shoplifting for not reporting it. No, you didn't read that wrong.

So, in summary, the GOP and many, but not all, of their state level membership aren't really sending their best these days.


How do you get from "pay band [...] based on performance" to "least they can pay me without quitting"?

Why would corporate software be incentivized to recommend a pay band any higher than the least the employee would take? The incentives are not aligned.

Choosing a pay band based on performance and setting the pay bands as low as they can without losing all their employees are orthogonal.

Suppose you are an employer and you have 5 junior engineers. You wish to promote one to senior engineer, which includes a move to a higher pay band. How do you decide which one gets the promotion?

Most companies are going to decide which one to promote at least partly based on performance data. Do they consistently finish things on time? What is the defect rate in their work? Do they work well with others? Do they need a lot of help compared to their peers, or are they the ones their peers turn to when the peers need help? Does their work show skill above what would normally be found in junior engineer work?

From what has been quoted about the objections that one representative had, it seems he thinks the bill has been written too broadly and could be construed as prohibiting the use of job performance data like that in deciding promotions.


My team lead decides to promote me, not robots or algorithms.

Most companies want not the least an employee will take right now, but the least that will keep that employee around rather than jumping ship.

And when an industry at large is using RealPage for wages, those two numbers may become increasingly similar.

I think most people getting a paycheck get there on their own, and this guy is accidentally helping to sell the bill.

current and historical capitalist trends, that's how

“I absolutely agree that consumers and wage earners should not be exploited by the use of their data,” he said. “But it’s still overly broad and it’s still overly vague in very important parts. And I believe it’s overly simplistic in its definition of wage setting.”

That's standard conservative speak for "don't interfere with business practices". "Overly broad" is just a way to shut down discussion.

That’s juicy coming from the current Republican Party.

If you didn't have a sibling to do it for you free/cheap, I wonder how many months of a human receptionist (or service) the fee to build (and maintain) such a thing would cover.


"For a reviewer, it’s demoralizing to communicate with a facade of a human."

This is so important. Most humans like communicating with other humans. For many (note, I didn't say all) open source collaborators, this is part of the reward of collaborating on open source.

Making them communicate with a bot pretending to be a human instead removes the reward and makes it feel terrible, like the worst job nobody would want. If you spent any time at all actually trying to help the contributor understand and develop their skills, you just feel like an idiot. It lowers the patience of everyone in the entire endeavor, ruining it for everyone.


Already back in ye olde times there was "let me google that for you," which I see so often posted on Reddit. Sometimes you just wanna exchange with a human and absorb some of their wisdom, which is the whole point of asking a question. Not so different from wanting to shop at a butcher you can establish a relationship with, rather than a faceless supermarket meat counter.


This is a similar hot issue in academia right now. The ability to generate content in papers via LLM far outstrips the ability to thoughtfully review them. There are now two tracks, at least at ICML from what I saw: one for AI-submitted papers and one for non-AI-submitted papers. And it works the same respectively for reviewers. However, even for AI-submitted papers, you cannot have only AI review them. Of course they need human analysis, but it's still tricky what you are going to get. And they are reviewing whether anonymity can still stand or if tying your credibility to the review process is now necessary.

As for open source PRs, I wonder if for trust's sake you would need to self-identify the use of AI in your response (all AI, some AI, no AI). And there would need to be some sort of AI detection algorithm to flag your response as % AI. I wonder if this would force people to at least translate the LLM responses into their own words. It would for sure stop the issue of someone's WhatsApp 24/7 claw bot cranking out PR slop. Maybe this could lessen the reviewer's burden. That being said, more thought is needed to distinguish helpful LLM use that enhances the objective from unhelpful slop that places burden on the reviewer.

For instance, I copy-pasted the above into Gemini and it produced an excellent condensing of my thoughts: "It is now 10x easier to generate a 'plausible' paper or Pull Request (PR) than it is to verify its correctness."


It’s probably already too late to put these horses back in the barn, but an “allow AI commits / PRs” setting would probably have been a good idea for GitHub to make available to projects. Even better might have been something like a robots.txt for repos, with rules that could be auto-evaluated and PRs auto-rejected if they weren’t followed.

Then again, we see how well robots.txt was honored in practice over the years. As with everything in late-stage capitalism, the humans who showed up with good intentions to legitimately help typically did the right things, and those who came to extract every last gram of value out of something for their own gain ignored the rules with few consequences.
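To make the "robots.txt for repos" idea concrete, here's a minimal sketch of what an auto-evaluated policy check might look like. Everything here is invented for illustration: the `.aipolicy` file name, the keys, and the PR labels are hypothetical, and GitHub offers no such built-in mechanism -- this would have to run as a CI job against PR metadata.

```python
# Hypothetical sketch: a per-repo AI contribution policy that a CI job
# could evaluate before accepting a PR. All names are made up.

POLICY_TEXT = """\
# .aipolicy -- hypothetical per-repo AI contribution rules
allow-ai-commits: false
require-disclosure: true
"""

def parse_policy(text):
    """Parse simple 'key: value' lines, skipping comments and blanks."""
    policy = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        policy[key.strip()] = value.strip().lower() == "true"
    return policy

def pr_allowed(policy, pr_labels):
    """Reject AI-generated PRs that the policy disallows or that
    lack a disclosure label when disclosure is required."""
    is_ai = "ai-generated" in pr_labels
    if is_ai and not policy.get("allow-ai-commits", True):
        return False
    if is_ai and policy.get("require-disclosure") and "ai-disclosed" not in pr_labels:
        return False
    return True

policy = parse_policy(POLICY_TEXT)
print(pr_allowed(policy, {"bugfix"}))        # True: human PR passes
print(pr_allowed(policy, {"ai-generated"}))  # False: AI PR disallowed here
```

As the comment above notes, though, the enforcement problem is the same as with robots.txt: a check like this only constrains contributors who label their PRs honestly in the first place.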

