Self-driving cars are often marketed as safer than human drivers, but new data suggests that may not always be the case.

Citing data from the National Highway Traffic Safety Administration (NHTSA), Electrek reports that Tesla disclosed five new crashes involving its robotaxi fleet in Austin. The new data raises concerns about how safe Tesla’s systems really are compared to the average driver.

The incidents included a collision with a fixed object at 17 miles per hour, a crash with a bus while the Tesla vehicle was stopped, a crash with a truck at four miles per hour, and two cases where Tesla vehicles backed into fixed objects at low speeds.

    • No1@aussie.zone · 1 hour ago

      Bro, anybody who has watched a Predator movie knows this is fact.

      Just how much K do you need to take to argue this?

    • slevinkelevra@sh.itjust.works · 1 day ago

      Yeah that’s well known by now. However, safety through additional radar sensors costs money and they can’t have that.

      • tomalley8342@lemmy.world · 22 hours ago

        Nah, that one’s on Elon just being a stubborn bitch and thinking he knows better than everybody else (as usual).

        • 73ms@sopuli.xyz · 8 hours ago

          Well, I mean, if you believe it can be done safely, it’s the one thing Tesla has going for it compared to Waymo, which is way ahead of them. Personally I don’t believe it, but I can see the sunk cost.

        • ageedizzle@piefed.ca · 21 hours ago

          He’s right in that, if current AI models were genuinely intelligent in the way humans are, cameras would be enough to achieve at least human-level driving skills. The problem, of course, is that AI models are not nearly at that level yet.

          • CheeseNoodle@lemmy.world · 6 hours ago

            Also, the human brain is still on par with some of the world’s best supercomputers; I doubt a Tesla has that much onboard processing power.
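
            For scale, a rough back-of-the-envelope, with the big caveat that “brain-equivalent compute” estimates are contested and span several orders of magnitude:

            ```python
            # Order-of-magnitude sketch only; both figures are estimates, not measurements.
            BRAIN_OPS = 1e16   # a mid-range published estimate of brain-equivalent compute, ops/s
            HW3_OPS = 144e12   # Tesla's reported ~144 TOPS for the HW3 FSD computer

            print(f"{BRAIN_OPS / HW3_OPS:.0f}x")  # -> 69x, and high-end brain estimates go far higher
            ```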

            • ageedizzle@piefed.ca · 4 hours ago

              Good point. Though I’ve heard some of these self-driving cars connect remotely to a person who helps drive when the AI doesn’t know what to do, so I guess it’s conceivable that the car could connect to the cloud. That would be super error-prone though. Connectivity issues could brick your car.

          • T156@lemmy.world · 20 hours ago

            Even if they were, would it not be better to give the car better senses?

            Humans don’t have LIDAR because we can’t just hook something into a human’s brain and have it work. If you can do that with a self-driving car, why cut it down to human senses?

          • kameecoding@lemmy.world · 13 hours ago

            I am a human, and there have been occasions where I couldn’t tell whether something on the road was an obstacle or a weird shadow…

            • merc@sh.itjust.works · 52 minutes ago

              And, we humans have built-in binocular vision that we’ve been training for at least 1.5 decades by the time we’re allowed to drive.

              Also, think about what you do in that situation where there’s a weird shadow. You slow down, sure. But you also move your head up and down and side to side, using that powerful binocular vision to get different angles on the strange shadow. How many front-facing cameras does a Tesla have? Maybe 3, and one of those is mounted on the bumper? In theory, 3 cameras could give it 3 different “viewpoints” for binocular vision, but that’s not as good as a human driver who can shift their eyes around to multiple points to examine a situation. And if one of those 3 cameras is obscured (say, the one on the bumper), you’re down to basic binocular vision without even the ability to take a look from a different angle.
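
              For a sense of why that matters, here’s the basic stereo-depth arithmetic. The camera numbers below are made up for illustration, not Tesla’s actual specs:

              ```python
              # depth = focal_length * baseline / disparity
              FOCAL_PX = 1000.0   # assumed focal length, in pixels
              BASELINE_M = 0.10   # assumed spacing between the two cameras, in meters

              def depth_from_disparity(disparity_px: float) -> float:
                  """Distance to a point seen by both cameras, from its pixel disparity."""
                  return FOCAL_PX * BASELINE_M / disparity_px

              print(depth_from_disparity(2.0))  # 50.0 m
              print(depth_from_disparity(1.0))  # 100.0 m -- one pixel of error doubles the estimate
              ```

              A wider baseline (or a head that can move around) makes the disparity larger and the estimate far less fragile.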

              Plus, we have evidence that Tesla isn’t even able to use its cameras to achieve binocular vision. If it worked, it shouldn’t have fallen for the Wile E. Coyote trick.

            • ageedizzle@piefed.ca · 12 hours ago

              Yes. In theory, cameras should be enough to get you up to human-level driving competence, but even that is a low bar.

              • NιƙƙιDιɱҽʂ@lemmy.world · 3 hours ago

                I feel like camera-only could theoretically surpass human performance, but that hinges entirely on AI models that do not currently exist, and on those models, when they do exist, being capable of running inside of a damn car.

                At that point, it’d be cheaper to just add LiDAR…

      • paraphrand@lemmy.world · 24 hours ago

        just one more AI model, please, that’ll do it, just one more, just you wait, have you seen how fast things are improving? Just one more. C’mon, just one more…

      • parzival@lemmy.org · 22 hours ago

        I’m not too sure it’s about cost; it seems to be about Elon not wanting to admit he was wrong, since he made a big point of lidar being useless.

      • halcyoncmdr@piefed.social · 22 hours ago

        I don’t think it’s necessarily about cost. They were removing sensors before costs rose and before supply became more limited with things like the tariffs.

        Too many sensors also cause issues; adding more is not an easy fix. Sensor fusion is a notoriously difficult part of robotics. It can help with edge cases and verification, but it can also exacerbate issues. Sensors will report different things at some point. Which one gets priority? Is a sensor failing or reporting inaccurate data? How do you determine what is inaccurate if the data is still within normal tolerances?
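
        A toy sketch of that arbitration problem (invented numbers, obviously not real autopilot code):

        ```python
        # Three sensors report range to the same obstacle; one is wrong.
        readings = {"camera": 42.0, "radar": 45.0, "lidar": 12.0}  # meters, made up

        # A naive average lets the one bad reading drag the estimate to ~33 m:
        naive = sum(readings.values()) / len(readings)

        # Median voting survives a single outlier...
        fused = sorted(readings.values())[len(readings) // 2]  # -> 42.0

        # ...but with only two sensors there is no majority, and you're back
        # to "which one gets priority?"
        print(f"naive={naive:.1f} m, median={fused:.1f} m")
        ```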

        More on topic though… My question is why is the robotaxi accident rate different from the regular FSD rate? Ostensibly they should be nearly identical.

        • NotMyOldRedditName@lemmy.world · 17 hours ago

          Regular FSD rate has the driver (you) monitoring the car, so there will be fewer accidents IF you properly stay attentive, as you’re supposed to.

          The FSD rides with a safety monitor (in the passenger seat) have a button to stop the ride.

          The driverless cars with no monitor have nothing.

          So you get more accidents as you remove that supervision.

          Edit: this would be on the same software versions… it will obviously get better to some extent, so comparing old versions to new versions only tells us whether it’s getting better or worse relative to past rates, but on the same software all three scenarios should still show different accident rates.
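
          Something like this (all numbers invented, purely to illustrate the comparison):

          ```python
          # Compare supervision tiers by crashes per million miles on the SAME
          # software version, not by raw crash counts.
          tiers = {
              "driver supervised (FSD)":     (2, 5_000_000),  # (crashes, miles)
              "safety monitor in passenger": (5, 1_000_000),
              "fully driverless":            (1, 50_000),
          }
          for name, (crashes, miles) in tiers.items():
              print(f"{name}: {crashes / miles * 1e6:.1f} crashes per million miles")
          ```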

          • 73ms@sopuli.xyz · 14 hours ago

            The unsupervised cars are very unlikely to be involved in these crashes yet because, according to Robotaxi tracker, only a single one of them was operational, and only for the final week of January.

            As you suggest, though, there’s a difference in how much a passenger-seat monitor can really do about FSD misbehaving compared to a driver in the driver’s seat. On the other hand, they’re still required to have the monitor behind the wheel in California, so you wouldn’t expect a difference in accident rate from that there; it would be interesting to compare.

            • NotMyOldRedditName@lemmy.world · 4 hours ago

              There are multiple unsupervised cars around now. It was only the one before the earnings call (and that one went away); then, a few days after earnings, they came back and weren’t followed by chase cars. There are a handful of videos out there now, over many days, if you want to watch any. The latest gaffe video I’ve seen is from last week, where one drove into a road-closed construction zone that wasn’t blocked off.

              I would still expect a difference between California and people like you and me using it.

              My understanding is that in California they’ve been told not to intervene unless necessary, but when someone like us is behind the steering wheel, what we consider necessary is going to be different from what they’ve been told to consider necessary.

              So we would likely intervene much sooner than the safety driver in California, which means we’d be letting the car get into fewer situations we perceive as dicey.

              • 73ms@sopuli.xyz · 3 hours ago

                Yeah, I’ve seen that video, and another where they rode back and forth for an hour in a single unsupervised Tesla. One thing to note is that they’re all geofenced to a single, extremely limited route: about a 20-minute drive along Riverside Dr and S Lamar Blvd, with the ability to drive on short sections of some of the crossing streets there. That’s it.