Tesla crashing, but why?

formatting link

For Brian, it's a video of a Tesla crashing into a test car made out of what looks like cardboard.

Tesla uses radar detection, doesn't it? Is the cardboard car a radar reflector? It's not obvious that it is.

Reply to
dennis

Amongst other things, I would hope ....

Reply to
Jethro_uk


A better description of what happened follows.

The Tesla is driving along at, say, about 30 mph following another car. This other car sees a stopped car in front of it, so it gently pulls into another lane to avoid it. The Tesla then continues to drive straight into the back of the parked life-size cardboard car.

Personally I don't understand how this can happen, but it seems it has.

This is why and where cognitive recognition is needed; it's the next thing after AI, as AI just doesn't 'cut the mustard' is the only term I can think of.

LIDAR IIRC

But it's not a car; it's cardboard painted to look like a car.

To the human eye it looks like a car, but it isn't.

What we need to know is: did the Tesla see it, and what did it think it was, a low-hanging cloud?

Reply to
whisky-dave

The car or the company?

formatting link
or
formatting link

Reply to
Chris Hogg

AI doesn't really exist at the moment. When computers can understand language, I'll be more interested.

Reply to
Jethro_uk

Assuming they didn't fudge the test by setting up a flimsy model that the Tesla couldn't see, there's another simple explanation, as follows.

The car in front of the Tesla masks the stationary vehicle until the last moment. Then it turns sharply into the next lane. A very strange manoeuvre, probably practised a few times.

The Tesla then sees the stationary vehicle at the last moment and applies the brakes, but there just isn't enough space to stop.

The first question is why the Tesla did not switch lanes to avoid the impact. The answer is probably that it has been programmed just to apply the brakes, not to make emergency evasive manoeuvres.

The second question is whether a human driver of the Tesla would have done any better. I suspect not.
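
Some back-of-envelope numbers support that. Here is a minimal stopping-distance sketch; the speed, reaction times, braking rates and gap are all assumptions for illustration, not measurements from the test:

    # Rough stopping-distance check for the scenario above. All numbers
    # are assumptions for illustration, not taken from the actual test.

    def stopping_distance_m(speed_mph: float, reaction_s: float,
                            decel_g: float) -> float:
        """Distance travelled while reacting, plus distance braking to a halt."""
        v = speed_mph * 0.44704                    # mph -> m/s
        return v * reaction_s + v ** 2 / (2 * decel_g * 9.81)

    gap_m = 6 * 4.5   # assume ~6 car lengths of 4.5 m each = 27 m

    # React promptly (0.5 s) and brake hard (0.8 g): stops with room to spare.
    print(stopping_distance_m(30, 0.5, 0.8))       # ~18.2 m < 27 m

    # Classify the obstacle 1.5 s late and brake gently (0.5 g): out of road.
    print(stopping_distance_m(30, 1.5, 0.5))       # ~38.5 m > 27 m

On those assumed numbers, whether the car stops in time hinges almost entirely on how quickly the unmasked obstacle is classified as a threat.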

Reply to
GB

Cardboard is transparent to radio waves. Myself, I think we have a long way to go before driverless cars are safe. However, it has also to be remembered that cars with drivers are not safe either, and it could be argued that distraction of a driverless system is less likely.

One person said the other day that the programmers do not know how self-learning software makes decisions, but do we really know how we make decisions? I learned that we have two decision-making systems in our brains. One is very energy-intensive and slow, but makes good decisions; the other is quick and dirty, and simply looks up similar situations and does what worked before. That is probably how most software will do it. One decision, however, is made on the outcome of a previous one, and the trail gets very blurred in both human and computer cases.
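
As a minimal sketch of that "quick and dirty" system (the feature encoding and remembered situations here are invented for illustration):

    # Look up the most similar past situation, do what worked before.
    # Features and memory contents are invented for illustration.
    import math

    # (closing_speed_m_s, gap_m, obstacle_moving) -> action that worked
    memory = [
        ((0.0, 40.0, 1.0), "follow"),
        ((5.0, 25.0, 1.0), "ease_off"),
        ((13.0, 15.0, 0.0), "emergency_brake"),
    ]

    def recall_action(situation):
        """Pick the action paired with the nearest remembered situation."""
        _, action = min(memory, key=lambda m: math.dist(m[0], situation))
        return action

    # A stopped car dead ahead at speed looks most like the third memory:
    print(recall_action((13.4, 20.0, 0.0)))   # -> "emergency_brake"

The blurred trail is exactly the weakness: a situation unlike anything in the memory gets matched to whatever happens to be least dissimilar.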

If you came around a blind corner on a dark road and somebody had put a huge mirror across the road, with the only free path where you saw the reflection of your own car, you would probably swerve straight into the object hidden by the mirror.

Brian

Reply to
Brian Gaff

It didn't appear to slow down at all, and certainly didn't attempt an emergency stop.

Reply to
Max Demian

I would expect it to avoid running into empty cardboard boxes fallen from a lorry. A human would do so, if possible.

It couldn't see the artic in the first Tesla Autopilot fatality.

Reply to
Max Demian

And the answer is ... Tesla "Autopilot" is no such thing.

Reply to
Huge

Why do we care if a Tesla doesn't avoid a fake car? It's designed to avoid crashing into real objects.

Reply to
Jimmy Wilkinson Knife

I prefer braking myself, instead of swerving into the unknown.

Reply to
Jimmy Wilkinson Knife

Wrong wrong wrong. Studies have shown driverless cars are already TWENTY times safer than the average human driver.

Reply to
Jimmy Wilkinson Knife

The same as a couple of recently reported US accidents. I still think it is amazing that they do as well as they do; the algorithms must be pretty complicated. Some of the responses obviously need to be fairly well damped, and it sort of sounds as if, some of the time, the equivalent of our emergency-stop reflex is over-damped.
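
To put the damping idea in concrete terms, here is a minimal sketch, with an invented filter constant and threshold, of how smoothing the sensors' threat signal trades false alarms against reaction time:

    # The raw "threat" signal is smoothed before it can trigger hard
    # braking, so one noisy frame doesn't slam the brakes. Over-damp it
    # and real threats fire late. Constants are invented for illustration.

    def frames_until_brake(alpha: float, threshold: float = 0.8) -> int:
        """Frames before a smoothed, suddenly-real threat crosses the threshold."""
        smoothed = 0.0
        raw = 1.0                                  # obstacle visible from frame 0
        for frame in range(1, 100):
            smoothed += alpha * (raw - smoothed)   # exponential smoothing
            if smoothed >= threshold:
                return frame
        return -1

    print(frames_until_brake(alpha=0.5))    # lightly damped: brakes on frame 3
    print(frames_until_brake(alpha=0.05))   # heavily damped: frame 32, far too late

At, say, 10 sensor frames per second, that is the difference between braking after 0.3 s and after 3.2 s.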

And the "other" problem, of recognising the difference between an overhead gantry and a trailer crossing in front of you that only a Lamborghini can get under, is probably pretty difficult too.

Reply to
newshound

Not quite. However, I was not arguing for one over the other. Did you read my whole message about the way we are fallible?

The point is that the safety of any vehicle depends on the understanding of the situation at any given moment in time. If it's not in the realm of experience of either a human or a machine, unpredictable things happen. The problem is that we seem to be playing the blame game here, rather than trying to understand the requirements to make things as safe as possible.

If you notice, many drivers seldom get taught motorway driving, or snow driving, probably as there is not a lot of snow about, but with simulators now around this could be done, of course. So the real issue is: do we trust programmers to think of everything when they produce software to drive a car, or anything else? We do, however, seem very happy to put a car in the control of a young bloke with hardly any training.

Brian

Reply to
Brian Gaff

It depends on how you define "safe". If you simply go for "has fewer accidents per mile", then chances are they already have humans well and truly beaten.

You can see a time when insurance premiums will rise for drivers who want to take manual control of their cars ;-)

Reply to
John Rumm

formatting link

My latest car (a 2013 model) has collision avoidance radar, for which I received an insurance discount. It 'bongs' at you if it thinks you're going to run into the car in front, and if you don't apply the brakes, it applies them for you. Irritatingly, it has a warning light to say it is on, whereas one to warn you it was off would make more sense.

It also has "distance keeping" cruise control, as part of the same system.
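
I don't know how the manufacturer's logic actually works, but a simple time-to-collision rule would behave much as described; a minimal sketch, with invented thresholds:

    # Warn ("bong") first, then brake for the driver. Thresholds are
    # invented for illustration; the real system's logic isn't public.

    def time_to_collision_s(gap_m: float, closing_speed_m_s: float) -> float:
        """Seconds until impact if neither car changes speed."""
        if closing_speed_m_s <= 0:
            return float("inf")        # not closing: no threat
        return gap_m / closing_speed_m_s

    def respond(gap_m: float, closing_speed_m_s: float) -> str:
        ttc = time_to_collision_s(gap_m, closing_speed_m_s)
        if ttc < 1.2:
            return "auto-brake"        # driver hasn't reacted: brake for them
        if ttc < 2.5:
            return "bong"              # warn the driver first
        return "ok"

    print(respond(gap_m=40.0, closing_speed_m_s=5.0))   # ttc 8.0 s -> "ok"
    print(respond(gap_m=10.0, closing_speed_m_s=5.0))   # ttc 2.0 s -> "bong"
    print(respond(gap_m=5.0,  closing_speed_m_s=5.0))   # ttc 1.0 s -> "auto-brake"

The distance-keeping cruise control presumably reuses the same gap and closing-speed measurements, just with a much larger target headway.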

Reply to
Huge

What do you mean by "couldn't see"? Hasn't AI evolved to see things?

Isn't that the case with most driving situations? I don't drive myself, but I'm pretty sure that when I've been in a car there are usually a few other cars on the road, not just two.

didn't seem that sharp.

I guess no one has needed to make such a manoeuvre; seems a little odd to me.

Does it? There's a gap of about six or so car lengths. Maybe it was too far away to be seen, but the humans saw it, so why didn't the Tesla?

It didn't look like it applied the brakes.

Maybe there's not enough i (eye) in AI to 'see' :-D

I suspect they would have, although that depends on the person, I guess. Who only observes the car in front when driving?

Reply to
whisky-dave

Yes, I agree, but surely there's more than just radar to detect such things. But why didn't those running the test know that? Perhaps it was a setup (conspiracy theory).

Yes a very long way.

But that distraction was needed: the attention needed to be diverted from a moving vehicle to a stationary one. Perhaps the AI assumed a stationary vehicle can't cause an accident.

That's what fuzzy logic was for.

Like closing our eyes if something is liable to enter them, and why we sometimes see things in slow motion even though the speed of the actual event hasn't changed.

Yes it gets fuzzy.

Reply to
whisky-dave

Because it could have been a warning sign saying "bridge collapsed". Or, if it was a box fallen from a lorry, surely it's best to avoid a collision with any object, and why didn't it try to avoid the object?

But what I'd want to know is why they bothered making a cardboard car; surely they could have found a real car to use.

Reply to
whisky-dave
