• tony@l.bxy.sh
    1 year ago

    There are a few versions of this and several generations with different capabilities. […]

    This raises its own issues, but that is the nature of the “move fast and break things” ethos of tech today. While it has its benefits, is it suitable for vehicles, particularly their safety systems? It isn’t clear to me, as it is a double-edged sword.

    As a Level 2 system, the Tesla is not capable of injuring or killing someone. The driver is responsible for that.

    But I’d ask- if a Tesla saw YOUR loved one in the road, and it would have reacted but it wasn’t in FSD mode and the human driver reacted too slowly, how would you feel about that? I say this not to be contrarian, but because we really are approaching the point where the car has better situational awareness than the human.

    I would be angry that such a modern car with any form of self driving doesn’t have emergency braking. Though, that would require additional sensors…

    I’d also be angry that L2 systems were allowed in that environment in the first place, but as you say it is ultimately the driver’s fault.

    Like cruise control having minimum speeds that generally prevent it from being used in town; I would hope that the manufacturer would make it difficult to use L2 outside of motorway driving. This doesn’t prevent people from bypassing it, but it means anyone who does was knowingly trying to do something they shouldn’t.

    With a connected vehicle, limiting L2 use outside of motorways should be straightforward.

    Then it becomes akin to disabling traction control or adaptive cruise control and having an accident that could be prevented. The tools are there, the default is on, a driver deliberately disabled it. The manufacturer did as much as they reasonably could.
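    The gating logic described above could be sketched roughly like this. Everything here is illustrative: the road classes, the idea that the car’s map data reports one, and the override flag are all assumptions, not any manufacturer’s actual API.

```python
from enum import Enum

# Illustrative road classes; a real vehicle would derive this from map data.
class RoadClass(Enum):
    MOTORWAY = "motorway"
    RURAL = "rural"
    URBAN = "urban"

def l2_permitted(road_class: RoadClass, driver_override: bool = False) -> bool:
    """Allow the L2 assist mode only on motorways by default.

    A deliberate override remains possible, mirroring the point above:
    the default is safe, and disabling it is a conscious driver choice.
    """
    return road_class is RoadClass.MOTORWAY or driver_override
```

    Under this sketch, a driver who enables L2 in town has explicitly overridden the default, much like switching off traction control.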

    In the example above, if the car didn’t have the self-driving package because the guy couldn’t afford it, wouldn’t you prefer that a decent, better-than-human self-driving system was on the car?

    I would prefer they had no self driving rather than be under the mistaken impression the car could drive for them in the current configuration. The limitations of self driving (in any car) are often not clear to a lot of people and can vary greatly. I feel this is where accidents are most likely - in the stage between fully manual and fully autonomous.

    If Tesla offer a half-way system for less money, would you not expect the consumer to take the cheapest option? If they have an accident it is more likely someone else is injured, so why pay more to improve the self driving when it doesn’t affect them?

    If you can use cameras and make a system that costs the manufacturer $3000/car, and it’s 50 times safer than a human, or use LiDAR and cost the manufacturer $10,000/car, and it’s 100 times safer than a human, which is safer?
    The answer is the cameras, because it will be on more cars, thus deliver more overall safety.
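    The trade-off in that hypothetical can be made concrete with some made-up numbers. The fleet size, baseline accident rate, and adoption shares below are all assumptions, chosen only to illustrate the fleet-level effect being claimed:

```python
FLEET = 1_000_000     # assumed total cars on the road
HUMAN_RATE = 0.001    # assumed accidents per car per year with a human driving

def fleet_accidents(adoption: float, safety_multiplier: float) -> float:
    """Expected yearly accidents when a fraction of the fleet has the system."""
    equipped = FLEET * adoption
    unequipped = FLEET - equipped
    return equipped * HUMAN_RATE / safety_multiplier + unequipped * HUMAN_RATE

# Cheaper camera system (50x safer) assumed on 60% of cars, vs. pricier
# LiDAR system (100x safer) assumed on only 20% of cars:
cameras = fleet_accidents(adoption=0.60, safety_multiplier=50)    # ≈ 412/year
lidar = fleet_accidents(adoption=0.20, safety_multiplier=100)     # ≈ 802/year
```

    Under these assumptions the less capable but more widely deployed system prevents more accidents overall; the conclusion is entirely sensitive to the adoption figures, which is exactly the part of the argument being debated below.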

    I agree an improvement is better than none, but I’m not sure your conclusion can be drawn so easily. Tesla is the only company I know of that steadfastly refuses to use any other sensor types, and the only reason I see is price.

    Thinking about it, drum brakes are cheaper than disc brakes… (said with tongue-firmly-in-cheek)

    Another concern is that any Tesla incidents, however rare, could do huge damage to people’s perception of self driving. People mightn’t know there is a difference between Tesla’s and other manufacturers’ autonomous driving abilities.

    For many people, Tesla is self-driving cars. If a Tesla has an accident in L2, even though this is the driver’s fault, the headlines will be “Tesla autopilot hits school child”, not “Driver inappropriately uses limited motorway assistance mode of car in small town, hitting school child”.

    What about the impact on the industry? If Tesla is much cheaper than LiDAR-equipped vehicles, will this kill a better/safer product, à la Betamax?

    IMHO safety shouldn’t take lower priority than price or CEO demands. Consumers often don’t know, and frankly shouldn’t need to know, the details of these systems.

    Do you pick your airline based on the plane they fly and its safety record, or the price of the ticket, being confident all aviation is held to rigorous safety standards?

    As has been seen recently with a certain submarine, safety measures should not be taken lightly.

    • SirEDCaLot@lemmy.fmhy.ml
      1 year ago

      While it has its benefits, is it suitable for vehicles, particularly their safety systems? It isn’t clear to me, as it is a double-edged sword.

      Perhaps, but if you are developing a tech that can save lives, doesn’t it make sense to put that out in more cars faster?

      I would be angry that such a modern car with any form of self driving doesn’t have emergency braking. Though, that would require additional sensors…

      Tesla does this with cameras whether you pay for FSD or not. It can also detect if you’re near an object and slam on the gas instead of the brake; if so, it will cancel that out. These are options you can turn off if you don’t want them.

      I’d also be angry that L2 systems were allowed in that environment in the first place, but as you say it is ultimately the driver’s fault.

      I’m saying- imagine if the car has L2 self driving, and the driver had that feature turned off. The human was driving the car. The human didn’t react quickly enough to prevent hitting your loved one, but the computer would have.
      Most of the conversation around FSD-type tech revolves around what happens when it does something wrong that the human would have done right. But as the tech improves, we will get to the point where the tech makes fewer mistakes than the human. And then this conversation reverses: rather than ‘why did the human let the machine do something bad’, it becomes ‘why did the machine let the human do something bad’.

      I would hope that the manufacturer would make it difficult to use L2 outside of motorway driving.

      Why? Tesla’s FSD beta L2 is great. It’s not perfect, but it does a very good job for most parts of driving on surface streets.

      I would prefer they had no self driving rather than be under the mistaken impression the car could drive for them in the current configuration. The limitations of self driving (in any car) are often not clear to a lot of people and can vary greatly.

      This is valid. I think the name ‘full self driving’ is somewhat problematic. I do think it will get to the point of actually being fully self-driving, and soon (the next year or two). But they’ve been using that term for several years now, and especially the first few versions of ‘FSD’ were anything but. And before they started with driver monitoring, there were a bunch of people who bought ‘FSD’ and trusted it a lot more than they should have.

      If Tesla offer a half-way system for less money, would you not expect the consumer to take the cheapest option? If they have an accident it is more likely someone else is injured, so why pay more to improve the self driving when it doesn’t affect them?

      That’s not how their pricing works. The safety features are always there. The hardware is always there. It’s just a function of what software you get. And if you don’t buy FSD when you buy the car, you can buy it later and it will be unlocked over the air.
      What you get is extra functionality. There is no ‘my car ran over a little kid on a bike because I didn’t pay for the extra safety package’. It’s ‘my car won’t drive itself because I didn’t pay for that, I just get a smart cruise control’.

      Tesla is the only company I know of that steadfastly refuses to use any other sensor types, and the only reason I see is price.

      Price, yes, and the difficulty of integrating different data sets. On their higher-end cars they’ve re-introduced a high-resolution radar unit. I haven’t seen much on how that’s being used, though.
      The basic answer is that Tesla believes it can get where it needs to go with cameras alone because its software is better than everyone else’s. For any other automaker that doesn’t have Tesla’s AI systems, LiDAR is important.

      Another concern is that any Tesla incidents, however rare, could do huge damage to people’s perception of self driving.

      This already happens whether the computer is driving or not. Lots of people don’t understand Teslas and think that if you buy one it’ll drive you into a brick wall and then catch on fire while you’re locked inside. Bad journalists will always put out bad journalism. That’s not a reason to stop tech progress, though.

      If Tesla is much cheaper than LIDAR-equipped vehicles will this kill a better/safer product a-la betamax?

      Right now FSD isn’t a main selling point for most drivers. I’d argue that what might kill others is not that Tesla’s system is cheaper, but that it works better and more of the time. Ford and GM both have self-driving systems, but they only work on certain highways that have been mapped with centimeter-level LiDAR ahead of time. Tesla has a system they’re trying to make general purpose, so it can drive on any road. So if the Tesla system takes you driveway-to-driveway and the competition takes you onramp-to-offramp, the Tesla system is more flexible and thus more valuable regardless of the purchase price.

      Do you pick your airline based on the plane they fly and its safety record, or the price of the ticket, being confident all aviation is held to rigorous safety standards? As has been seen recently with a certain submarine, safety measures should not be taken lightly.

      I agree standards should apply; that’s why Tesla isn’t L3+ certified, even though on the highway I really think it’s ready for it.