YouTuber Mark Rober has kicked the hornet's nest with his latest video. But are the Tesla fanboys right that mistakes were made?
In the piece — titled "Can You Fool a Self Driving Car?" — Rober found that a Tesla car on Autopilot was fooled by a Wile E. Coyote-style wall painted to look like the road ahead of it, with the electric vehicle plowing right through it instead of stopping.
The footage was damning enough, with slow-motion clips showing the car not only crashing through the styrofoam wall but also plowing into a mannequin of a child. The Tesla was also fooled by simulated rain and fog.
I can’t wait for all this brand loyalty and fan people culture to end. Why is this even a thing? Like talking about box office results, companies’ financials, and stocks… If you’re not an investor of theirs, just stop. It sounds like you’re working for free for them.
I think it comes from Depression-era kids who found a brand that didn’t create cheap junk, so they spread the word. But of course, that has been co-opted by capitalist pirates who buy a brand famous for quality, gut the expensive manufacturing in favor of cheap alternatives, and then count on making a profit before word-of-mouth catches up to them.
Sears. Gibson guitars. Just off the top of my head; thousands more examples over the years.
My guess as to why people do it today is that their grandparents or earlier generations did it as a survival necessity, but now we’re seeing the behavior warped from its original purpose. Like how opening and raising your right hand to show you had no weapon became the friendly wave hello we use today. Maybe that’s not a perfect analogy, but you get the idea.
I can’t wait for all this brand loyalty and fan people culture to end.
My blackest pill in my adult life was the realization that we've leveled off as a species. This is as good as it gets.
Our brains made monumental leaps in development over the last half-million years, with the strongest changes coming during the last ice age: times when resources were scarce, survival was extremely difficult, and humanity was caught up in endless fights with other humans, animals, and the weather alike. Our brains were shaped to do a couple of things better than anything else: invent stories to explain feelings, and join communities. These adaptations worked amazingly well; they allowed us to band together and pool resources, to defend each other and spot signs of danger. They allowed us to develop language and agriculture and formed our whole society, but let's not forget what they are at heart: brains invent stories to explain feelings, and we all want a social identity and an in-group. Deeply. This shit is hardwired into us.
Nearly every major societal problem we have today can be traced back to this response system in the average human brain: invent a story to explain a discomfort, where that discomfort is often just the unmet desire for a group identity.
Our world will get more complicated, but our brains aren't moving. We can only push brains so far. They weren't designed to form words and do calculus; we trained our brains to do those things, and our systems are now far more complicated than language and calculus. Complex problems produce results like lack of necessities, which create negative feelings, which the brain invents stories to explain (or is handed stories by the ruling class).
So this is it. Nobody is coming. Nothing is changing.
We MIGHT be able to rein in our worst responses over enough time, we MIGHT be able to form large enough groups with commonalities that we achieve tenuous peace. But we will never be a global species, we will never form a galactic empire, we will never rise above war and hate and starvation and greed. Not in our current forms at least. There's no magic combination of political strategies and social messages that will make everyone put down their clubs and knives.
This is it: a cursed, stupid primate on a fleck of dust spinning around a spark in a cloud of sparks, just looking at every problem like it's either a rival tribe or a saber-toothed cat hiding in the bushes. Maybe, if we don't destroy ourselves, someday our AI descendants will go out into the larger universe, but it certainly won't be us.
Well said. Thank you for sharing. This is a nice piece to help people self-reflect once in a while; it feels… grounding. Curious what the positive sequel would be…
I hope some of you actually skimmed the article and got to the "disengaging" part.
As Electrek points out, Autopilot has a well-documented tendency to disengage right before a crash. Regulators have previously found that the advanced driver assistance software shuts off a fraction of a second before impact.
It's a highly questionable approach that has raised concerns over Tesla trying to evade guilt by automatically turning off any possibly incriminating driver assistance features before a crash.
It’s a highly questionable approach that has raised concerns over Tesla trying to evade guilt by automatically turning off any possibly incriminating driver assistance features before a crash.
That is like writing that Musk made an "awkward, confused gesture" at what a few people might call a questionable time and place.
Don't get me wrong, autopilot turning itself off right before a crash is sus, and I wouldn't put it past Tesla to do something like that (I mean, come on, why don't they use lidar?), but maybe it's so the car doesn't try to power the wheels or something after impact, which could potentially worsen the event.
On the other hand, they're POS cars, and the autopilot probably just shuts off because of poor assembly, standards, and design resulting from cutting corners.
Wouldn't it make more sense for autopilot to brake and try to stop the car instead of just turning off and letting the car roll? If it's certain enough that there will be an accident, applying the brakes until there's user override would make much more sense.
I see your point, and it makes sense, but I would be very surprised if Tesla did this. I think the best option would be to turn off the features once an impact is detected. It shutting off beforehand feels like a cheap ploy to avoid guilt.
Rober seems to think so, since he says in the video that it's likely disengaging because the parking sensors detect the object in front and conclude the car is parked, which shuts off the cruise control.
Yeah but that's milliseconds. Ergo, the crash was already going to happen.
In any case, the problem with Tesla autopilot is that it doesn't have radar. It can't see objects and there have been many instances where a Tesla crashed into a large visible object.
That's what's confusing me. Rober's hypothesis is that without lidar the Tesla couldn't detect the wall. But claiming that autopilot shut itself off before impact means the Tesla detected the wall and decided impact was imminent, which undercuts his point.
If you watch the in-car footage, autopilot is on for all of three seconds, and by the time it's on, impact was already going to happen. That said, Teslas should have lidar and probably do something other than disengage before hitting the wall, but I suspect their cameras were good enough to detect the wall through lack of parallax or something like that.
I've heard that too, and I don't doubt it, but watching Mark Rober's video, it seems like he's deathgripping the wheel pretty hard before the impact, which seems the more likely reason for it disengaging. Each time, you can see the wheel tug slightly to the left, but his deathgrip pulls it back to the right.
It doesn't guarantee them protection from liability, but it makes it easier to muddy the waters.
They never have to claim that autopilot or self-driving was on during a crash in any comment to the press, or the courts. They never have to admit that it directly caused the crash, only that it "could have" led to the crash.
It just makes PR easier, and allows them to delay the resolution of court cases.
A vacuum doesn't run outdoors, and accidentally running into a wall doesn't generate lawsuits.
But, yes, any self-driving car should absolutely be required to have lidar. I don't think you could find any professional in the field who would argue that lidar isn't the proper tool for this.
...what is your point here, exactly? The stakes might be lower for a vacuum cleaner, sure, but lidar - or a similar time-of-flight system - is the only consistent way of mapping environmental geometry. It doesn't matter if that's a dining room full of tables and chairs, or a pedestrian crossing full of children.
Yes, it plays like an infomercial for lidar, so take that portion with some skepticism. I can think of some issues exclusive to lidar, like 2+ lidar cars blinding each other, which needs to be solved, e.g. with some kind of light pattern encoding to mask out unwanted signals.
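For the curious, here is a minimal sketch of the kind of pulse-coding idea that comment gestures at. This is purely illustrative, not how any shipping lidar actually works: each unit tags its pulses with its own pseudorandom code and matched-filters the return, so another car's pulses don't correlate into a strong peak.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each lidar unit gets its own pseudorandom +/-1 pulse code (hypothetical scheme).
CODE_LEN = 64
my_code = rng.choice([-1.0, 1.0], size=CODE_LEN)
other_code = rng.choice([-1.0, 1.0], size=CODE_LEN)

# Simulated received signal: our own echo delayed by 300 samples,
# plus an interfering car's pulse at 500 samples, plus noise.
signal = np.zeros(1024)
signal[300:300 + CODE_LEN] += my_code       # our own return
signal[500:500 + CODE_LEN] += other_code    # interference from another unit
signal += 0.2 * rng.standard_normal(signal.size)

# Matched filter: correlate against our own code. Our echo produces a sharp
# peak; the other car's (nearly orthogonal) code does not.
corr = np.correlate(signal, my_code, mode="valid")
echo_delay = int(np.argmax(corr))
print(f"detected echo at sample {echo_delay} (expected 300)")
```

Real systems would need to handle much messier conditions, but the basic trick of rejecting signals that don't carry your code is the same.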
It absolutely 100% demonstrates the issue with camera-only technology in Tesla vehicles.
Teslas used to have cameras + radar, but they cheaped out and removed the radar. I think it would have passed all the tests if they still had the front-facing radar, but they don't. The problem with cameras alone is obvious: they can't see what they can't see, and they probably don't have an innate sense to slow down for rain, fog, ice, or whatever else would make a human slow down.
The hyper-positivity and enthusiasm is because his content is aimed at kids as much as it is adults. A lot of the kid-oriented science content I remember, from TV shows and documentaries to guest speakers to science-centre guides, had that affect.
As Electrek points out, Autopilot has a well-documented tendency to disengage right before a crash. Regulators have previously found that the advanced driver assistance software shuts off a fraction of a second before impact.
This has been known.
They do it so they can evade liability for the crash.
That makes so little sense...
It detects it's about to crash then gives up and lets you sort it?
That's like the opposite of my Audi, which does detect when I'm about to hit something and either gives me a warning or just actively hits the brakes if I don't have time to handle it.
If this is true, this is so fucking evil it's kinda amazing it could have reached anywhere near prod.
The point is that they can say "Autopilot wasn't active during the crash." They can leave out that autopilot was active right up until the moment before, or that autopilot directly contributed to it. They're purely leaning into the technical truth that it wasn't on during the crash. Whether it's a courtroom defense or their own next published dataset: "Autopilot was not active during any recorded Tesla crashes."
Even your Audi is going to dump to human control if it can't figure out what the appropriate response is. Granted, your Audi is probably smart enough to be like "yeah, don't hit the fucking wall," but eh... it was put together by people who actually know what they're doing, and care about safety.
Tesla isn't doing this for safety or because it's the best response. The cars are doing this because they don't want to pay out for wrongful death lawsuits.
If this is true, this is so fucking evil it’s kinda amazing it could have reached anywhere near prod.
It's Musk. He's fucking vile, and this isn't even close to the worst thing he's doing, or has done.
Every Tesla driver would need superhuman reaction speeds to respond in 17 frames, i.e. 680 ms (I didn't check the recording framerate, but 25 fps is the slowest reasonable), less than a second.
They're talking about avoiding legal liability, not about actually doing the right thing. And of course you can see how it would help them avoid legal liability. The lawyers will walk into court and honestly say that at the time of the accident the human driver was in control of the vehicle.
And then that creates a discussion about how much time the human driver has to have in order to actually solve the problem, or gray areas about who exactly controls what when, and it complicates the situation enough where maybe Tesla can pay less money for the deaths that they are obviously responsible for.
If the disengage-to-avoid-legal-consequences feature does exist, then you would think there would be some false positives where it turns off for no apparent reason.
I found some with a search, which are attributed to bad software. Owners are discussing new patches fixing some problems and introducing new ones. None of the incidents caused an accident, so maybe the owners never hit the malicious code.
The given reason is simply that it will return control to the driver if it can’t figure out what to do, and all evidence is consistent with that. All self-driving cars have some variation of this. However, yes, it’s suspicious when it disengages right when you need it most. I also don’t know of data to show whether this is a pattern or just a feature of certain well-publicized cases.
Even in those false positives, it’s entirely consistent with the AI being confused, especially since many of these scenarios get addressed by software updates. I’m not trying to deny it, just saying the evidence is not as clear as people here are claiming.
If it randomly turns off for no apparent reason, people are going to be like "oh, that's weird" and leave it at that. Tesla certainly isn't going to admit that their code is malicious like that. At least not until the FBI is digging through their memos to show it was. And maybe not even then.
Brakes require a sufficient stopping distance given the current speed, driving surface conditions, tire condition, and the amount of momentum at play. This is why trains can't stop quickly despite having brakes (and very good ones at that, with air brakes on every wheel): there's so much momentum at play.
If autopilot is being criticized for disengaging immediately before the crash, it's pretty safe to assume it's too late to stop the vehicle and avoid the collision.
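To put rough numbers on "too late to stop": a back-of-envelope sketch (my own figures, not from the video) using the textbook stopping-distance estimate, reaction distance plus v²/(2μg), with an assumed dry-asphalt friction coefficient:

```python
# Rough stopping-distance estimate: reaction distance + braking distance.
# Assumptions (mine, illustrative): mu = 0.7 (dry asphalt), g = 9.81 m/s^2,
# 1.0 s driver reaction time, 42 mph as reported for the wall crash.
MU, G, REACTION_S = 0.7, 9.81, 1.0

def stopping_distance_m(speed_mph: float) -> float:
    v = speed_mph * 0.44704              # mph -> m/s
    reaction = v * REACTION_S            # distance covered before braking starts
    braking = v ** 2 / (2 * MU * G)      # kinetic energy dissipated by friction
    return reaction + braking

print(f"{stopping_distance_m(42):.1f} m to stop from 42 mph")  # ~44 m
```

So at highway-ish speeds the car needs tens of meters of warning; a disengagement a fraction of a second out changes nothing about the physics.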
So, as others have said, it takes time to brake. But also, generally speaking, autonomous cars are programmed to dump control back to the human if there's a situation they can't see an 'appropriate' response to.
What's happening here is the "oh shit, there's no action that can stop the crash" case, because braking takes time (hell, even coming to that decision takes time; activating the whoseitwhatsits that activate the brakes takes time). The normal thought is, if there's something it can't figure out on its own, it's best to let the human take over. It's supposed to make that decision well before, though.
However, as for why Tesla is doing that when there's not enough time to actually take control?
It's because liability is a bitch. Given how many Teslas are on the road, even a single ruling of "yup, it was Tesla's fault" is going to start creating precedent, and that gets very expensive, very fast. Especially for something that can't really be fixed.
For some technical perspective, I pulled up the frame rates on the camera system (I'm not seeing a frame rate for the cabin camera specifically, but it seems to be either 36 in older models or 24 in newer).
14 frames @ 24 fps is about 0.6 seconds; @ 36 fps, it's about 0.4 seconds. For comparison, the average human reaction time to just see a change and click a mouse is about 0.3 seconds. If you add in needing to assess the situation... that's going to be significantly more time.
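The arithmetic behind those figures, for anyone who wants to check it (the frame counts and rates are the estimates above, not confirmed specs):

```python
# Convert an on-camera frame count to wall-clock time at a given frame rate.
def frames_to_seconds(frames: int, fps: float) -> float:
    return frames / fps

for fps in (24, 36):
    print(f"14 frames @ {fps} fps = {frames_to_seconds(14, fps):.2f} s")
# 14 frames @ 24 fps = 0.58 s
# 14 frames @ 36 fps = 0.39 s -- both barely longer than the ~0.3 s it takes
# a human just to register a simple visual cue, leaving almost nothing for
# assessing the situation and actually acting on it.
```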
AEB (automatic emergency braking) was originally designed not to prevent a crash, but to slow the car when an unavoidable crash was detected.
It's since gotten better and can also prevent crashes now, but slowing the speed of the crash was the original important piece. It's a lot easier to predict an unavoidable crash than to detect a potential crash and stop in time.
Insurance companies offer a discount for having any type of AEB as even just slowing will reduce damages and their cost out of pocket.
Not all AEB systems are created equal though.
Maybe disengaging AP when an unavoidable crash is detected is what triggers the AEB system? Like, maybe for AEB, which should always be running, to take over, AP has to be off?
It's a highly questionable approach that has raised concerns over Tesla trying to evade guilt by automatically turning off any possibly incriminating driver assistance features before a crash.
So, who's the YouTuber that's gonna test this out? Since Elmo has pushed his way into the government in order to quash any investigation into it.
Notice how they're mad at the video and not the car, manufacturer, or the CEO. It's a huge safety issue yet they'd rather defend a brand that obviously doesn't even care about their safety. Like, nobody is gonna give you a medal for being loyal to a brand.
These people haven't found any individual self identity.
An attack on the brand is an attack on them. Reminds me of the people who made Star Wars their meaning and crumbled when a certain trilogy didn't hold up.
Thus it ever is with Conservatives. They make $whatever their whole identity, and so take any critique of $whatever as a personal attack against themselves.
I blame evangelical religions' need for martyrdom for this.
So literally every single above average sports fan?
The pathological need to be part of a group so bad it overwhelms all reason is a feature I have yet to understand. And I say that as someone who can recognize in myself those moments when I feel the pull to be part of an in-group.
To be fair, and ugh, I hate to have to stand up for these assholes, but...
To be fair, their claim is that the video was a lie and that the results were manufactured. They believe that Teslas are actually safe and that Rober was doing some kind of Elon Musk takedown trying to profit off the shares getting tanked and promote a rival company.
They actually do have a little bit of evidence for those claims:
The wall changes between different camera angles. In some angles the wall is simply something painted on canvas. In other angles it's a solid styrofoam wall.
The inside-the-car view in the YouTube video doesn't make it clear that Autopilot mode is engaged.
Mark Rober chose to use Autopilot mode rather than so-called Full Self Driving.
They did the experiment twice, once with a canvas wall, then a few weeks later with a styrofoam wall. The car smashed right into the wall the first time, but it wasn't very dramatic because the canvas just blew out of the way. They wanted a more dramatic video for YouTube, so they did it again with a styrofoam wall so you could see the wall getting smashed. This included pre-weakening the wall so that when the car hit it, it smashed a dramatic Looney-Tunes looking hole in the wall. When they made the final video, they included various cuts from both the first and second attempts. The car hit the wall both times, but it wasn't just one single hit like it was shown in the video.
There's apparently a "rainbow" path shown when the car is in Autopilot mode. [RAinbows1?!? DEI!?!?!?!] In the cut they posted to YouTube, you couldn't see this rainbow path. But, Rober posted a longer cut of the car hitting the wall where it was visible. So, it wasn't that autopilot was off, but in the original YouTube video you couldn't tell.
He used Autopilot mode because, from his understanding as a Tesla owner (this was his personal vehicle being tested), Full Self-Driving requires you to enter a destination address. He just wanted to drive down a closed highway at high speed, so he used Autopilot instead. In his understanding as a Tesla owner and engineer, there would be no difference in how the car dealt with obstacles in Autopilot mode vs. Full Self-Driving, but he admitted that he hadn't tested it, so it's possible that so-called Full Self-Driving would have handled things differently.
Anyhow, these rabid MAGA Elon fanboys did pick up on some minor inconsistencies in his original video. Rober apparently didn't realize what a firestorm he was wading into. His intention was to make a video about how cool LIDAR is, with a cool scene of a car smashing through a wall as the hook. He'd apparently been planning and filming the video for half a year, and he claims it just happened to get released right at the height of the Tesla firebombings.
The styrofoam wall had a pre-cut hole to weaken it, and some people are using it as a gotcha proving the video was faked. It would be funny if it wasn't so pathetic.
Yeah, but it's styrofoam. You could literally run through it. And I'm sure they did that more as a safety measure so that it was guaranteed to collapse so nobody would be injured.
But at the same time it still drove through a fucking wall. The integrity doesn't mean shit because it drove through a literal fucking wall.
For more background, Rober gave an interview and admitted that they ran the test twice. On the first run, the wall was just fabric, which did not tear away in a manner that was visually striking. They went back three weeks later and built a styrofoam wall knowing that the Tesla would fail, and pre-cut the wall to create a more interesting impact.
I believe the outrage is that the video showed that autopilot was off when they crashed into the wall. That's what the red circle in the thumbnail is highlighting. The whole thing apparently being a setup for views like Top Gear faking the Model S breaking down.
Autopilot shuts itself off just before a crash so Tesla can deny liability. It's been observed in many real-world accidents before this. Others have said much the same, with sources, in this very thread.
I am not going to click a link to X, but this article covers that, and links this raw footage video on X which supposedly proves this claim to be false.
In addition to the folks pointing out it likes to shut itself off (which I can neither confirm nor deny)
Some skeptical viewers claim Autopilot was not engaged when the vehicle ran into the wall. These allegations prompted Rober to release the "raw footage" in an X post, which shows the characteristic signs of Autopilot being engaged, such as a rainbow road appearing on the dash.
When he was in the Tesla asking if he should go for a ride I was screaming "Yes! Yes Mr. President! Please! Elon, show him full self driving on the interstate! Show him full self driving mode!"
The president can't drive by law unless on the grounds of the White House and maybe Camp David. At least while in office. They might be allowed to drive after leaving office...
This isn't true at all. I can't tell if you're being serious or incredibly sarcastic, though.
The reason presidents (and generally ex-presidents, too) don't drive themselves is that the kind of driving needed to escape an assassination attempt takes a higher level of training than the vast majority of people ever get. There's no law saying presidents are forbidden from driving.
In any case, I would be perfectly happy if they let him drive a CT and it caught fire. I'd do a little jig, and I wouldn't care who sees it.
Man, these cars don't have radar? Only eyes, like most animals? Not even as a backup? Not talking about lasers, but radar? Truck drivers, better not paint scenery on the back of your truck.
Tesla used to have radar, I believe. Musk overruled using lidar in Teslas and insisted on vision only. Lidar-powered autonomous vehicles like Waymo are already cruising the streets unsupervised, beating Tesla.
A genius! For decades, all carmakers have tried to fix human errors and compensate for our lack of abilities. Then there’s Elon, cutting out technologies because "humans don’t have them."
LIDAR generally works better at relatively short distances (like less than a km). Several other car companies are going with LIDAR and do alright. Musk thinks cameras with image recognition would be sufficient without anything else. It goes without saying that Musk is very wrong.
Yeah, he’s a total idiot. That decision has held them back from so much progress… and for what? Saving a negligible amount of money on a very expensive car. Nice.
Yes, but I am not talking about running the system through AI. I only mean radar as a backup, for redundancy only, which takes control to save you and tells the AI to shut the fuck up.
To be fair, if you were to construct a wall and paint it exactly like the road, people would run into it as well. That being said, Tesla shouldn't rely on cameras.
Edit: having just watched the video, that was a very obvious fake wall. You can see the outlines of it pretty well. I'm also surprised it failed other tests when not on autopilot; seems pretty fucking dangerous.
Watch the video; it's extremely obvious to a human driver that there is something wrong with the view ahead. It's even pointed out in the video that humans use additional visual clues when a situation is ambiguous.
The cars don't have deduction and reasoning capabilities, so they need additional sensors to give them more information to compensate for their lack of brains. So it's not really sensible to compare self-driving systems to humans: humans have limited sensory input, but it's compensated for by reasoning ability; self-driving cars have no reasoning ability, but it's compensated for by enhanced sensory input.
Huh, I thought the exact opposite. The clues were small. While they were sufficient for a focussed driver at slow speeds, it also looked like something that would fool a human at typical speeds and attention span.
Painting exactly like the road is a gimmick that really doesn’t demonstrate anything.
Personally, I wish they'd gone full Looney Tunes, to better entertain us and to demonstrate that even significant clues may not be enough.
To be fair, if you were to construct a wall and paint it exactly like the road, people would run into it as well.
This isn't being fair. It's being compared to the other, better autopilot systems that use both LIDAR and radar in addition to daylight and infrared optics to sense the world around them.
Teslas only use daylight and infrared. Neither a LIDAR nor a radar system would have been deceived.
The new BYD cars that are coming out also have self-driving, probably to compete directly with Tesla.
However, they do use lidar, and radar, and cameras, and infrared cameras, and ultrasonic sensors. All have to be working or the car won't go into self-drive. So other companies consider even one failing sensor enough to disable self-driving capabilities, yet Tesla claims it's perfectly safe to drive around with those features not even installed, let alone functional.
I don’t see how this test demonstrates anything is better. It is a gimmick designed for a specific sensor to get an intended result. Show me a realistic test if they want to be useful, or go full Looney Tunes if they want to be entertaining.
I'd take that bet. I imagine at least some drivers would notice something sus (due to depth perception, which should be striking as you get close, or the lack of ANY movement, or some kind of reflection) and either
slow down
use a trick, e.g. flicking lights or driving a bit to the sides and back, to try to see what's off
or probably both. But anyway, as others already said, it's being compared to other autopilot systems, not human drivers.
The video does bring up human ability too with the fog test ("Optically, with my own eyes, I can no longer see there's a kid through this fog. The lidar has no issue.") But, as they show, this wall is extremely obvious to the driver.
I wondered how the hell it managed to fool LIDAR, well...
The stunt was meant to demonstrate the shortcomings of relying entirely on cameras — rather than the LIDAR and radar systems used by brands and autonomous vehicle makers other than Tesla.
If I could pass one law, requiring multiple redundant scanning tech on anything autonomous large enough to hurt me might be it.
I occasionally go to our warehouses, which have robotic arms, autonomous forklifts, etc. All of those have far more safety features than a self-driving Tesla, and they aren't in public.
The tl;dr here is that Elon said that humans have eyes and they work, and eyes are like cameras, so use cameras instead of expensive LIDAR. Dick fully inside car door for the slam.
In theory he's not wrong, except for that part where neither the optics nor (especially) the software come anywhere close to matching the performance of human eyes and brains and won't for the foreseeable future.
The worst part is that LiDAR isn’t even expensive anymore. Hell, my phone has LiDAR. He originally said that to justify the fact that they were dealing with a component shortage and he needed to keep shipping vehicles. So he simply shipped them without the LiDAR systems that he couldn’t get ahold of, and claimed it was because he didn’t need LiDAR.
But now LiDAR is much more advanced and cheaper. But since he refused to admit it was because of a component shortage, adding LiDAR now would require Musk to publicly admit he was wrong. And we all know that will never happen.
The entire premise of the joke is that we could mistake a sufficiently detailed image of a road for an actual road. That humans are susceptible to such a failure does not mean it is reasonable for a robot to share the same flaw.
To be clear, Elon Musk removed radar from Tesla vehicles and not Lidar, but a) he had it removed even from vehicles that had the hardware for radar and b) radar would have been enough to pass all the tests in the video anyway.
Cost cutting. Lidar is cheaper now, but back then it was relatively expensive and increased tech debt and maintenance. Also, he legit thought "human see good, then car see good too." Tesla is being led by a literal idiot.
Read about this somewhere. Iirc, Elon felt cameras were better than LiDAR at a time when that was kinda true, but the technology improved considerably in the interim and he pridefully refuses to admit he needs to adapt. [Edit: I had hastily read the referenced article and am incorrect here; link to accurate statements is linked in a reply below.]
I don't even understand that logic. Use both. Even if one is significantly better than the other, they each have different weaknesses and can mitigate for each other.
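As a toy illustration of the "use both" argument (my own sketch, not any carmaker's actual logic): a conservative arbiter that brakes if any one of several independent sensors reports an obstacle inside the stopping distance, so each sensor's blind spots are covered by the others.

```python
from typing import Optional

# Hypothetical per-sensor obstacle distances, in meters (None = nothing seen).
# Each sensor fails differently: cameras on painted walls and in fog,
# lidar in heavy rain, radar on small or static clutter.
def should_brake(camera_m: Optional[float],
                 radar_m: Optional[float],
                 lidar_m: Optional[float],
                 stopping_distance_m: float) -> bool:
    """Conservative fusion: trust whichever sensor sees the nearest obstacle."""
    readings = [d for d in (camera_m, radar_m, lidar_m) if d is not None]
    return bool(readings) and min(readings) <= stopping_distance_m

# Painted-wall scenario: the camera sees open road, but radar and lidar
# both report a solid surface inside the ~44 m stopping distance.
print(should_brake(camera_m=None, radar_m=38.0, lidar_m=37.5,
                   stopping_distance_m=44.0))  # True -> brake
```

The point of the redundancy is exactly that the sensors disagree in the dangerous cases; a vote that errs toward braking turns one sensor's weakness into a non-event.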
He didn't think they were better. He thought Tesla could get away without the more expensive lidar. Basically "humans can drive with just vision, so that should be enough for an autonomous vehicle also." He did it because lidar is more expensive.
He uses lidar at SpaceX because he knows it's the right tool for that specific job.
His stance is that it's not that cameras are better, but that for a true AV the cameras have to be so good that putting effort into both means you never make the cameras good enough, and you end up relying on lidar instead. That, and cost.
If the car can't process and understand the world via cameras, it's doomed to fail at a mass scale anyway.
It might be a wrong stance, but it's not that lidar is flawed.
Tesla even uses lidar to ground-truth their cameras.
Edit: just adding a late example. Waymo, Cruise, and probably everyone else out there still use humans to tell the car what to do if it gets stuck. I'd even bet Tesla will if they ever launch a robotaxi, as they need a way to somehow help a stuck car. When we see these failures with Waymo and Cruise, it's less "is something there" and more "I don't understand this situation". The understanding comes from vision. Lidar just gives the "something is there", but it isn't solving their problem.
There was a comedy channel on YouTube aeons ago that would do "if X were honest" videos. Their slogan for Valve was "We used to make games. Now we make money."
You can get a Tesla for $42,000... They aren't that expensive.
With that said, they've really cheaped out and even removed the cheaper radar sensors they used to have because Elon wanted to save a buck and really thinks all you need is cameras because he's an idiot.
Because they commonly use radar instead; the modern sensors that are also used for adaptive cruise control even have heaters to defrost the sensor housing in winter.
Lidar and radar systems don't work internationally because they're functionally banned in many Asian and European countries. Instead of making one system that was almost completely finished, they went all-camera, and now none of it works right.
I got a Tesla from my work before Elon went full Reich 3, and try this:
brake on bridge shadows on the highway
start wipers on shadows, but not on rain
brake on cars parked on the roadside if there's a bend in the road
disengage autopilot and brake when driving towards the sun
change set speed at highway crossings because fuck the guy behind me, right?
engage the emergency brake if a bike waits to cross at the side of the road
To which I'll add:
moldy frunk (short for fucking trunk, I guess?), no ventilation whatsoever, water comes in, water stays in
pay attention noises for fuck-all reasons masking my podcasts and forcing me to rewind
the fucking cabin camera nanny - which I admittedly disabled with some chewing gum
the worst mp3 player known to man, the original Winamp was light years ahead - won't index, won't search, will reload USB and lose its place with almost every car start
bonkers UI with no integration with Android or Apple - I'm playing podcasts via low rate Bluetooth codecs, at least it doesn't matter much for voice
unusable airco in auto mode, insists on blowing cold air in your face
Say what you want about European cars, at least they got usability and integration right. As did most of the auto industry. Fuck Tesla, never again. Bunch of Steve Jobs wannabes.
This. If the so-called Tesla fans even drive the car, they know all of the above is more or less true. Newer cars have fewer of these issues, but the camera-based Autopilot system is still in place. The car doesn't even allow you to use cruise control under certain circumstances, because the car deems visibility too poor. The camera also only detects rain when it's pouring; in every other situation it will just randomly engage/disengage.
I drive a Tesla Model 3 (2024) daily, and I wouldn't trust the car driving itself towards a picture like that. It would be interesting to have these "Tesla fans" run the same experiment and use a concrete wall for some additional fun. I bet they won't even conduct the experiment, because they know the car won't detect the wall.
Frunk is short for front trunk. The mp3 issues mostly go away if you pay for LTE on the car. The rest of the issues I can attest to. Especially randomly changing the cruise control speed on a highway because Google Maps says so, I guess? Just hard braking at high speeds for no fucking reason.
I know what frunk stands for: funky trunk, given the smell. And I had premium connectivity included; it doesn't do squat unless you use Spotify, which, no thanks (for different reasons). I have carefully curated "car music" on a USB drive, but noo. Search will only return Spotify results.
Our Mazda 3's adaptive cruise thought a car that was exiting was in our lane and hit the brakes, right in front of a car I had just passed. Sorry, dude, I made the mistake of trusting the machine.
Incidents like that made me realize how far we have to go before self driving is a thing. Before we got that car, I thought it was just around the corner, but now I see all the situations that car identifies incorrectly, and it's like, yeah, we're going to be driving ourselves for a long time.
Had a situation driving behind a Tesla on a freeway in clear conditions with no other cars nearby, where it suddenly braked, strongly. I actually had to swerve to go around him. He looked over at me sheepishly. I was a skeptic about the FSD concerns before that happened. Now I try to be cautious, but there are so many Teslas on the road now, I can't double-check that all of them won't suddenly freak out.
It costs too much. It's also why you have to worry about panels falling off the swastitruck if you park next to them. They also apparently lack any sort of rollover frame.
He doesn't want to pay for anything, including NHTSA crash tests.
It's literally what Drumpf would have created if he owned a car company. Cut all costs, disregard all regulations, and make the public the alpha testers.
The guy bankrupted a casino, not by playing against it and being super lucky, but by owning it. Virtually everything he has ever touched in business has turned to shit. How in the living fuck do you ever screw up steaks at Costco? My cousin with one good eye and a working elbow could do it.
And now it's the country's second try. This time unhinged, with all the training wheels off. The guy is stepping on the pedal while stripping the car for parts and giving away the fuel. The guy doesn't even drive; he just fired the chauffeur and is dismantling the car from the inside with a shotgun, full steam ahead onto a nice brick wall and an infinity cliff, ready to take us all with him. And Canada and Mexico and Gina. Three and three quarters of a year more of daily atrocities and lawbreaking. At least Hitler boy brought back the astronauts.
Sorry, but I don't get it. You can get a robot vacuum with lidar for $150. I understand automotive lidars need more reliability, range, etc., but I don't understand how it's not even an option on a $30k car.
You don't necessarily need to implement lidar the way Waymo does with the spinning sensor. iPad Pros have them. They could have at least put a few of these on the front without significantly affecting aesthetics.
There's a reason they do that; he actually covers it in the video. Lidar spins a single scan line many, many, many times a second. Processing the differences from that line scan to the scene makes point-cloud generation many times easier, allowing the scan to be exponentially more dense.
The iPhone uses a diffraction grating to shoot static dots at your face and looks for the subtle movements of your face and phone to generate a 3D scan.
The diffraction method is tiny and good for static identification but bad for high-speed outdoors.
The spinny towers give it a better field of view. You could probably put shorter towers on each corner, or even build them into the body panels, but it's a delicate, expensive instrument, and it's not what's currently holding back self-driving anyway :)
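To make the spinning-line idea concrete, here is a toy sketch (mine, heavily simplified) turning one revolution of (bearing, range) returns into 2D points; the point cloud mentioned above is essentially many of these scan lines stacked per second.

```python
import math

# One simulated revolution of a single-line spinning lidar:
# 360 (bearing, range) returns, converted to x/y points in the car's frame.
def scan_to_points(ranges_m):
    points = []
    for i, r in enumerate(ranges_m):
        theta = math.radians(i)            # one return per degree of rotation
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Toy scene: open road (30 m returns) except a wall dead ahead at 10 m,
# spanning bearings within 5 degrees of straight ahead.
ranges = [10.0 if (i <= 5 or i >= 355) else 30.0 for i in range(360)]
cloud = scan_to_points(ranges)
print(cloud[0])  # (10.0, 0.0): a hit straight ahead at 10 m
```

A painted wall shows up here as geometry, no image interpretation required, which is the whole argument for time-of-flight sensing.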
Everyone and their dog uses radar for distance sensing for adaptive cruise control. You take the same high-speed sensor and use it for wall detection. It's how the emergency-stop functions work, where it detects a car in front of you slamming on the brakes.
Tesla cars are stupid tech. As the cars that use lidar demonstrated, this is a solved problem. There don’t have to be self driving cars that run over kids. They just refuse to integrate the solution for no discernible reason, which I’m assuming is really just “Elon said so.”
It's even worse than that. Not only is it a solved problem, but Tesla had it solved (or closer to solved, anyway) and then intentionally regressed on the technology as a cost cutting measure. All the while making a limp-wristed attempt to spin the removal of key sensor hardware -- first the radar and later the ultrasonic proximity sensors -- as a "safety" initiative.
There isn't a shovel anywhere in the world big enough for that pile of bullshit.
Yeah, it's infuriating! Elon said something along the lines of "humans drive all the time just using their eyes, so we can replicate that with just cameras." Leaving out the fact that one of the benefits of a self-driving system should surely be that it's in many ways BETTER than humans, who are often terrible at driving in fog, torrential rain, low light, night time, etc.! It was almost a point of pride that his cars would be every bit as shitty as a human driver, to a fault!
I guess his robots are going to be just as weak and frail as humans, and need sick days, and simulate getting tired and dropping things too? I can just imagine one of his robots entering a room and saying "What did I come in here for again? I think I need a nap!"
He said it was his own car when he was interviewing with Phillip DeFranco. He also said he's still planning on getting another Tesla when an updated model comes out.
I have no clue what you're trying to say, but the significant amount of outrage a day or two later that I suddenly saw explode on Twitter was mind boggling to me. Couldn't tell if it was bots or morons but either way, people are big mad about the video.
For the record, I do want the bar for self-driving safety to be high. I also want human drivers to be better... Because even not-entirely-safe self-driving cars may still be safer than humans at a certain point.
Right, those were the failures that really matter, and Rober included the Looney Tunes wall to get people sharing and talking about it. A scene painted on a wall is a contrived edge case, but pedestrians and obstacles in precipitation are common.
Props to the LiDAR car for trying to drive through that heavy rain. Does it just have enough resolution to see through the droplets to determine that there isn't a solid object within braking distance?
EDIT: actually maybe it didn't, and just stopped when it realised that it couldn't see through the water?
It was super annoying how scared he acted when he knew it was styrofoam and it wasn't even going to leave a scratch on the car. I would have liked it much better if the car had crashed into an actual wall and burst into flames.
Instinctively, human brains generally don't like large objects coming at them unbidden at high speed. That isn't going to help things, even if you're consciously aware that the wall is relatively harmless.
What would definitely help the discussion is if Mark Rober the scientist had left a fucking crumb of scientific approach in his video. He didn't really explain how he was testing it; he just slammed the car into things for views. This, plus a collaboration with a company that makes lidar, left the video open to every possible criticism, and it's a shame.
Did he enable the autopilot? When? What were his inputs to the car? Is it FSD? What car is that?
You can make every car hit a wall; that is the obvious part. But since he claims (truthfully, I have no doubt) that the car hit it on its own, I would like to know what made it do it.
But I do feel there is something strange about the car disengaging the auto pilot (cruise control) just before the crash. How can the car know it's crashing while simultaneously not knowing it's crashing?
I drive a Model 3 myself, and there is so much bad shit about the autopilot and rain sensors. But I have never experienced, or heard of anyone else experiencing, a false positive where the car disengages the autopilot under any conditions the way shown in the video, with no sound or visual cue. Considering how bad the sensors on the car are, it's strange that they're state of the art every time an accident happens. There is a dissonance between the claims.
Mark shouldn't have made so many cuts in the upload. He locks the car at 39 mph in the video, but crashes at 42 mph. He should have kept it clean and honest.
I want to see more of these experiments in the future. But Mark's video is pretty much a commercial for the lidar manufacturer. And commercials shouldn't be trusted.
Actually, his methodology was very clearly explained. Did you watch the whole video? He might have gushed a bit less about LiDAR but otoh the laymen don't know about it so it stands to reason he had to explain the basics in detail.