How is this a counter-argument? I often read this, as if there were some trusted international organization of logical thinkers that has approved the slippery slope's inclusion in a list of logical fallacies that must never be invoked in a conversation.
Every single time five years later it turns out that the slope actually was slippery.
Why do people imagine that I said words I didn’t say, get mad at those words, then reply as if I had said them? This happens all the time.
Humans are stupid and I sincerely believe that we, as a species, will fail because we are so prone to this kind of behavior. We really are a garbage race.
Everyone who rants about slippery slopes being a fallacy also loves the boiling frog analogy (which technically might be a bit closer to what they're going for).
> Everyone who rants about slippery slopes being a fallacy also loves the boiling frog analogy
I didn’t. So why do you say “everyone”? Stop imagining people saying things that they didn’t actually say.
Every step we take down this “slope” is intentional and happens because there is more force pushing things down the slope than there is force resisting that push. There is no slippage, just people who refuse to act in their own best interests letting people who are acting in their own best interests do whatever they want.
If I had to guess, I think it’s marketing — just like adding weights to the insides to make them feel more “premium”.
I’d guess that the target audience would argue that real lossless music experience requires high-bandwidth wires, and is not possible over the air without degradation.
But that’s the thing: apparently it’s not using Bluetooth, but instead uses their new wireless chips to transmit the data over radio (maybe WiFi, maybe something else). Bluetooth doesn’t have enough bandwidth for lossless.
I don’t think “it’s just marketing” is the reason. Apple has always positioned itself as the premium option with these things, so being the only wireless lossless headphone would be right on Apple’s expected feature list.
I think the most feasible solution is to make companies liable for damages — not in a light way, but such that every person can sue (or join a class action) for hefty amounts, so that a breach could cost e.g. $100M+.
That should incentivize them to actually invest some money in security. Right now it's just tiny numbers, which are easier to pay off and forget about.
You'd have to deal with all of the binding arbitration agreements first.
That said, class action lawsuits also are part of the cost of business. Nothing is ever going to change unless the boards of directors (not CEOs) can be held liable for the behavior of the companies that they direct.
As I understand it, lidars don't work well in rain/snow/fog. So in the real world, where you have limited resources (research and production investment, talent, AI training time and dataset breadth, power consumption) to split between two systems (vision and lidar), and one system would contradict the other in dangerous driving conditions, it's smarter to just max out vision and ignore lidar altogether.
When it's not safe to drive, it's not safe to drive.
I've been in zero-road-speed whiteout conditions several times. The only move is to pull to the side of the road without getting stuck and turn on your flashers.
Low-light cameras would not have worked. Sonar would not have worked. Infrared would not have worked.
I think the weather where cameras/sensors start having problems is much better than zero-vis whiteout.
If we could make sensors that let an autonomous vehicle drive reliably in any snow/rain where a human could drive (albeit carefully), then we're good. But we are a long way from that. Especially since a lot of sensor tech, cameras included, tends to fail in two ways: performance degrades in adverse conditions, and the sensor can simply stop functioning at all if it's covered in ice/snow/water.
If you have multi-return lidar, you can see through certain occlusions. If the fog/rain isn't that bad, you can filter for the last return and get the hard surface behind the occlusion. The bigger problem with rain is that you get specular reflection and your laser light just flies off into space instead of coming back to you. Lidar not work good on shiny.
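The last-return filtering idea is easy to picture in code. A toy sketch with made-up beam data (the distances and beam names are illustrative, not from any real lidar API): each beam reports every return it got, and keeping only the last return per beam recovers the hard surface behind light occlusion like fog.

```python
# Toy multi-return lidar data: each beam may record several returns,
# e.g. an early return off fog/rain droplets and a late return off a wall.
# All values are made up for illustration.
beams = {
    "beam_0": [2.1, 14.8],        # fog droplet at 2.1 m, wall at 14.8 m
    "beam_1": [15.0],             # clear air: single return off the wall
    "beam_2": [1.9, 3.2, 14.9],   # two fog returns, then the wall
}

# Keep only the last (farthest) return per beam -- the hard surface.
last_returns = {beam: returns[-1] for beam, returns in beams.items()}
```

In heavy rain this breaks down for a different reason, as noted above: with specular reflection off wet surfaces there may be no return at all to filter.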
No, it isn't "smarter." Camera-only driving is the product of a stubborn dogmatic boss who can't admit a fundamental error. "Just make it work" is a terrible approach to engineering.
Criticism of Musk isn't hate of Musk. The point is completely valid, and the results of this management style infuse all of his businesses, albeit with differing results.
It's significant that a truly hard problem like autonomous driving doesn't respond to a "brute force" management style. Rockets aren't in this category because the required knowledge and theory is fairly complete, whereas real autonomous driving is completely novel.
Hmm. Is it ragebaiting to respond to a tired and wrong statement by saying that it's tired and wrong and that the situation is merely the product of piss poor management decisions? People get understandably frustrated seeing the same wrong talking point that people with domain knowledge in computer vision and robotics have repeatedly explained is wrong in extremely fundamental ways.
> I don't own a Tesla.
n.b. The shoe/foot comment was not about you. It was about Musk. It wouldn't make any idiomatic sense for the expression to be about you given what you said and what you were responding to. If they'd said "pot, meet kettle", then it would have been about you. In that context, saying that you don't own a Tesla feels like a weird thing for you to insert in your comment. It potentially comes across as suspiciously defensive.
Why does this matter? You have to slow down in rain/snow/fog anyway, so only having cameras available doesn't hurt you all that much. But then in clear weather lidar can only help.
If your vision is good enough to drive in rain/snow/fog, you don't need lidar in clear conditions. If you planned to spend $10B on vision and $10B on lidar — you would be better off spending $20B on better vision.
It still infuriates me that Tesla went so long being able to call their feature “Autopilot.” Then they had the audacity to call it user error when people thought the car would automatically pilot itself.
> If yo[u can] drive in rain/snow/fog, you don't need lidar in clear conditions
Of course you do, you're driving at much higher speeds and so is the surrounding traffic. You can't just guess what you might be looking at, you have to make clear decisions promptly. Lidar is excellent in that case.
Also, military sensor use shows the best answer is to have as many different types of sensors as possible and then do sensor fusion. So machine vision, lidar, radar, etc.
That way you pick up things that are missed by one or more sensor types, catch problems and errors from any of them, and end up with the most accurate ‘view’ of the world — even better than a normal human would have.
It’s what Waymo is doing, and, unsurprisingly, they also have the best self driving right now.
People who don't understand that sensor fusion is an entire field of study with tons of existing work and lots of expertise have been fooled by a fake argument of "If the camera and lidar disagree, what do you do?"
It's frustrating to still see it repeated over a decade later. It was always bullshit. It was always a lie.
This is silly. Cameras are cheap. Have both. Sensors that behave differently in different conditions are not an exotic new problem. The Kalman filter has existed for about a billion years, and machine learning filters do an even better job.
1) it's not cheap to produce lidars at a stable predictable quality in millions;
2) car driving training data sets for lidars are much scarcer (and will always be much scarcer due to cameras' higher prevalence) and at a much lower quality;
3) combined camera+lidar data sets are even scarcer.
> 1) it's not cheap to produce lidars at a stable predictable quality in millions;
It wasn't cheap to produce accelerometers at a stable predictable quality in millions before smart phones either. Mass production shakes things up somewhat. See the headline for reference.
1. Automotive LiDAR is down to $350 in China already. BYD is starting to put LiDAR in even entry level cars. (It's been in their mid and high end cars for a while).
2+3. BYD collects extensive training data from customers, much like Tesla does. They will have no trouble with training.
Do cameras work well in those conditions? Nope. Also, cameras don't work well with certain angles of glare, so as a consumer I'd rather have something over-engineered for my safety to cover all edge cases...
> Countries rise to power because they are in the right place at the right time, even if monarchs and nationalists will always attribute it to God preference
Isn’t that literally God’s preference for a country in that place and time, from both secular and religious points of view?
I would guess that you just remember at which file offsets you need to insert what, and which offset ranges you need to delete from the original file — and on file save you just do a single linear sweep to update the file contents on disk.
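That scheme can be sketched in a few lines. This is a hypothetical illustration of the idea described above, not any particular editor's implementation: edits are recorded as (offset, bytes-to-delete, bytes-to-insert) against the original file, and on save a single linear sweep splices them all in.

```python
# Hypothetical edit log against the ORIGINAL file contents:
# each entry is (offset, n_deleted, inserted_bytes), with
# non-overlapping deleted ranges. Saving is one linear pass.

def apply_edits(original: bytes, edits):
    out = []
    pos = 0
    for offset, n_deleted, inserted in sorted(edits):
        out.append(original[pos:offset])   # copy the unchanged span
        out.append(inserted)               # splice in the insertion
        pos = offset + n_deleted           # skip the deleted range
    out.append(original[pos:])             # copy the tail
    return b"".join(out)

# Replace "hello" (5 bytes at offset 0), then insert before "world".
result = apply_edits(b"hello world",
                     [(0, 5, b"goodbye"), (6, 0, b"cruel ")])
```

This is essentially a piece-table-style approach: the original file is never mutated in memory, and the save cost is one sequential read/write regardless of how many edits accumulated.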
As far as I understand, bad faith actors already have wide possibilities for disruption and abuse. This system allows for better good-faith coordination for mutual benefit.
From what I see, the code reads “messages” from the TCP socket stream incorrectly, and will fail randomly in production with messages longer than 1500 bytes, and sometimes even with shorter ones.
Instead, the TCP socket must be treated as a stream of bytes: either use some delimiter as the message boundary (like \n, while escaping any newlines inside the JSON), or write the message size before the message bytes themselves, so the code knows how many bytes to read until the full message has arrived.
Edit: to clarify, TCP protocol does not guarantee that if you write some bytes in one go, they will be read in one go as well. Instead, they may be split into multiple “reads”, or glued together with the preceding chunk, or both. It’s a “stream of bytes” protocol, it only guarantees that written bytes come one after another in the same order.
So the “naive” message separation used in code above (read a chunk and assume it’s the entire message that was written) will work in manual tests, and likely even in local automated tests, but will randomly break when exposed to real network conditions.
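The length-prefix approach can be sketched in a few lines. This is a generic illustration of the technique, not the code under discussion: each message is preceded by a 4-byte big-endian length, and the reader loops until it has the full message, no matter how TCP chunks the bytes.

```python
import io
import struct

def encode_message(payload: bytes) -> bytes:
    # 4-byte big-endian length prefix, then the payload.
    return struct.pack(">I", len(payload)) + payload

def read_exactly(recv, n: int) -> bytes:
    # Loop until exactly n bytes arrive -- a single recv() on a TCP
    # socket may return fewer bytes than requested.
    buf = b""
    while len(buf) < n:
        chunk = recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def read_message(recv) -> bytes:
    (length,) = struct.unpack(">I", read_exactly(recv, 4))
    return read_exactly(recv, length)

# Simulate the worst case: the network delivers one byte at a time.
stream = io.BytesIO(encode_message(b"hello") + encode_message(b"world"))
recv_one_byte = lambda n: stream.read(1)
msg1 = read_message(recv_one_byte)
msg2 = read_message(recv_one_byte)
```

With a real socket, `recv` would be `sock.recv`; the read loop is what makes it robust against both split and coalesced writes.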
Thanks - I had a quick scan through the code, noticed the 4096-byte buffers, and wondered how larger messages were handled. I couldn't see anything, but wondered if I was missing something!