"Self driving cars" are a growing menace to cyclists and pedestrians

Self-driving cars are never going to be safe in anything but the most controlled of conditions. I’m a longtime software developer, and I would never trust any driving aids more complex than regular old cruise control and, to a lesser extent, proximity sensors that alert a fully in-control human driver to things nearby. This Tesla tried to veer into a bollard-separated bike lane.

Tesla FSD Beta Caught Hitting Something On Camera (jalopnik.com)

5 Likes

Driver-assist, especially the way Tesla is implementing it, is just following current patterns of driver sociopathy. I got a loaner car from my mechanic and it was the first car I’ve ever driven that has driver-assist features. It’s a Honda Civic Sport. I do not like it.

The first feature I involuntarily tried was “smart” following distance with cruise control. While it’s convenient that it slows down from or speeds back up to the set speed based on the vehicle in front, it is not capable of planning ahead. I’m always watching for brake lights far up ahead, as well as through the immediate car’s windshield to the next car; ordinarily, if I see a slowdown coming, I hit “cancel” on my cruise to start decelerating without touching my brakes. This system doesn’t predict slowdowns, it only reacts to them (later than I would), which results in slowing down too aggressively and too late. This is the exact sort of pattern that leads to the inchworm effect in traffic slowdowns (if you’ve ever watched a traffic study video, you know what I’m talking about): one set of brake lights triggers a slowdown that ripples backward and can be felt in heavy traffic for a long time. It gets even worse when someone changes lanes in front of the car. Instead of just easing off, it feels like it’s stomping on the brakes to create as much space in front of itself as fast as possible. Unfortunately, I could not turn off this auto-follow distance without also turning off cruise control.
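To show what I mean by the mechanism (not this particular Civic’s controller, which I obviously don’t have access to), here’s a toy car-following sketch: every follower brakes only in reaction to a shrinking gap, with a bit of reaction delay, and a brief tap of the lead car’s brakes turns into a deeper and deeper slowdown toward the back of the line. All the names, gains, and numbers below are made up purely for illustration.

```python
# Toy car-following simulation: a purely reactive controller (brake only when
# the gap ahead shrinks, after a short reaction delay) amplifies a small
# slowdown by the lead car into a backward-propagating "inchworm" wave.
# Illustrative only -- this is not any manufacturer's actual adaptive-cruise logic.

import numpy as np

N_CARS = 10          # cars in a single lane, index 0 is the lead car
DT = 0.5             # simulation step, seconds
STEPS = 240          # 2 minutes of simulated time
V_SET = 25.0         # cruise set speed, m/s (~55 mph)
GAP_TARGET = 30.0    # desired following gap, m
K_GAP = 0.4          # reactive gain on gap error (made up)
REACTION = 2         # steps of reaction delay (1 s) before the controller responds

pos = np.array([-i * GAP_TARGET for i in range(N_CARS)], dtype=float)
vel = np.full(N_CARS, V_SET)
gap_history = [[GAP_TARGET] * (N_CARS - 1) for _ in range(REACTION)]
min_speed = np.full(N_CARS, V_SET)   # track how slow each car ends up going

for t in range(STEPS):
    gaps = pos[:-1] - pos[1:]               # gap each follower currently sees
    gap_history.append(gaps.copy())
    delayed = gap_history[-1 - REACTION]    # controller acts on stale information

    # Lead car taps the brakes briefly (a small, temporary slowdown), then recovers.
    if 20 <= t < 30:
        vel[0] = max(vel[0] - 1.5 * DT, 20.0)
    else:
        vel[0] = min(vel[0] + 0.5 * DT, V_SET)

    # Followers: purely reactive proportional control on the (delayed) gap.
    for i in range(1, N_CARS):
        accel = K_GAP * (delayed[i - 1] - GAP_TARGET)
        vel[i] = np.clip(vel[i] + accel * DT, 0.0, V_SET)

    pos += vel * DT
    min_speed = np.minimum(min_speed, vel)

print("minimum speed reached by each car (m/s):")
print(np.round(min_speed, 1))   # the dip typically grows toward the back of the line
```

The exact values don’t matter; the point is that a reactive-only controller with delay tends to amplify disturbances instead of smoothing them out, which is exactly the inchworm/shockwave pattern you see in those traffic study videos.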

The second feature I tried, which is more elective, is lane control (keeping the car between the lines, so to speak). On the freeway it did an adequate job, but I did observe that it cut to the very apex of every curve, hugging the line tightly. On a multi-lane highway, that is not the way to do it. On surface streets, it would hug the door zone of parked cars (uncomfortably close) and stay as far away from the centerline as possible. I immediately turned it off in that situation. The one place it did a better job than I’ve seen humans do is on N Vancouver (I didn’t end up on N Williams), where it completely respected staying out of the bike lane’s buffer zone, which most drivers straddle, either over the left line or across the whole buffer. That is, it did well there except where the buffer zone’s left line was worn down so much that the car immediately started veering right into the newly available “free” lane space.

The whole experience makes me doubly doubt PBOT’s and ODOT’s penchant for using paint to delineate safe spaces for bicycling.

6 Likes

IIRC computers still have trouble understanding contextual cues. For example, they can’t instinctively identify that a boat driving down the road is an exception.

1 Like

I had a near miss with a Tesla that I believe was on Autopilot: it was crossing over to a right-turn lane, and a human driver paying attention would clearly have seen my arm signals right out in front of them, instead of the car just cautiously pulling up right beside me.

In their current form, they are extremely unsafe for cyclists and pedestrians.

Here’s some Tesla stans/Elon Musk worshippers making a video about the car’s Full Self Driving (FSD) feature. In the middle of the video, the car tries to turn into a cyclist. It’s crazy these things are allowed to be used in public.

This Tesla Using FSD Beta Tries Driving Into A Cyclist (jalopnik.com)

2 Likes

We’re probably training these things every time we do a CAPTCHA. Notice how they’re frequently street/vehicle related? That’s no coincidence, and not an inspiring one, either.

1 Like

Yes, especially reCAPTCHA. It’s amazing how much data they use from those: Why AI developers love those annoying CAPTCHAs

1 Like

What are the ethical differences between fatalities caused by autonomous vehicles as opposed to human drivers?

Let’s make it ceteris paribus, so don’t assign special values to categories of victims such as “vulnerable road users” or “other car occupants.” I already understand that ethical distinction and I’m just asking about the abstract analysis. In other words, if the only difference in a death is the controller of the vehicle, are there different ethical considerations based on the nature of that controller?

It’s a sincere question; I do not know.

1 Like

That’s a little too philosophical for me. I’m more interested in the practical. Get the things off the road before they injure or kill more people than they already have. And if they’re on the road, who’s legally responsible for the people they injure and kill? For damn sure, don’t beta test the things on public roads, as is already happening en masse thanks to a number of irresponsible corporations like Google and Tesla.

Those videos are scary, indeed. If I were riding with a human who drove like that I would be concerned and not ride with that person again.

I’m still curious what others think about my question on ethics, but just slightly less philosophically: how would y’all feel about self-driving cars if they actually caused fewer deaths than human drivers?

I don’t know if that’s true or not, but at least some people make that claim, for example:

  • With Autopilot disengaged and without active safety features, Tesla vehicles were involved in one accident for every 978,000 miles driven in Q1 2021, which is down from one for every 1.42 million miles driven in Q1 2020.
  • With Autopilot engaged, Tesla vehicles were involved in one accident for every 4.19 million miles driven in Q1 2021, which is actually down from one for every 4.68 million miles driven in Q1 2020. (A quick back-of-the-envelope comparison of these figures follows below.)
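To put those two bullets on the same scale, here’s the quick back-of-the-envelope comparison mentioned above, using only the figures as quoted. It says nothing about where or how those miles were driven, so it can’t by itself answer the “fewer deaths” question.

```python
# Convert "one accident per X miles" (figures as quoted above) into
# accidents per million miles so the two modes can be compared directly.

miles_per_accident = {
    "no Autopilot, no active safety": 978_000,
    "Autopilot engaged": 4_190_000,
}

rates = {mode: 1_000_000 / miles for mode, miles in miles_per_accident.items()}

for mode, rate in rates.items():
    print(f"{mode}: {rate:.2f} accidents per million miles")

ratio = rates["no Autopilot, no active safety"] / rates["Autopilot engaged"]
print(f"ratio: {ratio:.1f}x more accidents per mile without Autopilot")
# Note: these raw figures don't control for road type or driving conditions.
```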

Maybe the question should be: what’s the threshold of deaths a device can cause before it is considered a menace to society? Do microwave ovens cause deaths? Do toaster ovens?
I’m being serious, as I really don’t have the answer (and I’m too lazy to search the internet).
If microwave ovens started causing 10 deaths a week (just to pull a number out of the air, for whatever reason), would we issue a recall? Should other technology be any different? If the newfangled XYZ Self Drive Car started causing 10 deaths a week too, shouldn’t it be recalled as well?
I think the goal should be 0 deaths, of course. But in reality, accidents do happen. So what is society’s threshold before we start issuing recalls for those killer microwave ovens and self-driving cars?

In their current state, self-driving cars should be banned, period, from any roads with cyclists on them. I say they should only be enabled on interstates or busy highways. The AI is not ready yet.

Also, if you watch any of ThunderF00t’s videos on YouTube, you’ll see that Elon Musk’s projects (Starlink, the Boring tunnel, Neuralink, etc.) are all scams that consistently miss their goal timelines by miles, and that Elon Musk is the world’s greatest con artist.