Audi’s AI future tackles autonomous driving one solution at a time

APD NEWS


If you want a self-driving car, you need artificial intelligence. At its first Audi Tech Summit, the automaker unveiled Audi AI, its long-term plan to stay ahead of -- or at least keep pace with -- its competitors.

To help understand where the company is right now and where it intends to go (and when), we spoke with Dr. Miklos Kiss, head of predevelopment and driver assistance systems at Audi.

The automaker is tackling the tough problem of autonomy piecemeal, pursuing simpler solutions at the various levels of automation (Level 0 is basically manual driving; Level 5 is full autonomy).
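For orientation, those levels follow the industry's SAE J3016 scale. Here is a minimal Python sketch of that scheme (the enum names are paraphrases, not Audi's terminology):

    from enum import IntEnum

    class AutomationLevel(IntEnum):
        """SAE J3016 driving-automation levels, paraphrased."""
        L0_MANUAL = 0       # driver does everything
        L1_ASSISTED = 1     # one assist at a time (e.g., cruise control)
        L2_PARTIAL = 2      # combined assists; driver must keep monitoring
        L3_CONDITIONAL = 3  # system drives in a defined scenario; driver is fallback
        L4_HIGH = 4         # no driver attention needed within a limited domain
        L5_FULL = 5         # no driver needed anywhere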

What is the state of AI within Audi?

"AI within Audi means Audi intelligence. That means we present more functionality than the customer would expect. That differs very much from the AI topic that is just machine learning.

What we are presenting right now is that Audi has the first passenger car that is ready for Level 3 driving, an automation level where, for the first time, we take over the whole responsibility of the driving task.

So the driver no longer has to monitor the driving task in a traffic jam up to 60 kilometers per hour (37 miles per hour); you just sit back and relax.

That's new. Therefore we needed a new architecture. We now have a central computing unit for driver assistance systems that gathers all the sensor data in one unit."
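To make that architecture concrete, here is a hedged Python sketch of the idea: one central unit receives every sensor feed and decides whether the Level 3 traffic-jam function may engage. The class and field names are illustrative assumptions, not Audi's actual software (its production controller is known as zFAS):

    from dataclasses import dataclass

    @dataclass
    class SensorFrame:
        """One synchronized snapshot of all sensor inputs (hypothetical schema)."""
        camera_objects: list
        radar_objects: list
        ego_speed_kmh: float
        on_divided_highway: bool
        traffic_jam_detected: bool

    class CentralDriverAssistUnit:
        """A single computing unit that fuses every sensor feed, replacing
        separate assist ECUs that did not talk to each other."""

        MAX_L3_SPEED_KMH = 60.0  # traffic-jam pilot limit quoted in the interview

        def traffic_jam_pilot_available(self, frame: SensorFrame) -> bool:
            # Level 3 may engage only in a jam, on a highway, below 60 km/h.
            return (frame.traffic_jam_detected
                    and frame.on_divided_highway
                    and frame.ego_speed_kmh <= self.MAX_L3_SPEED_KMH)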

How is that different from what Audi's been doing before?

"It differs very much from what we did before where certain systems didn't talk to each other. So we had a system responsible for the longitudinal under control.

We had the Audi Lane Assist responsible for lateral control, and these were two completely independent systems. Now we have brought them together into an integrated system that does both longitudinal and lateral control. This gives us many more possibilities for new scenarios in traffic."
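A minimal sketch of what integration buys: instead of two controllers that cannot see each other's state, one planner issues a single command covering both axes. The proportional gains and interfaces below are illustrative assumptions, not a production control law:

    from dataclasses import dataclass

    @dataclass
    class DriveCommand:
        acceleration_mps2: float   # longitudinal: throttle/brake request
        steering_angle_rad: float  # lateral: steering request

    class IntegratedMotionController:
        """Combines longitudinal control (the old cruise-control role) and
        lateral control (the old Lane Assist role) in one decision."""

        def step(self, speed_error_mps: float, gap_error_m: float,
                 lane_offset_m: float, heading_error_rad: float) -> DriveCommand:
            accel = 0.5 * speed_error_mps + 0.2 * gap_error_m
            steer = -0.1 * lane_offset_m - 0.4 * heading_error_rad
            # Because one unit computes both, coupled maneuvers become
            # possible, e.g., braking while steering around an obstacle.
            return DriveCommand(acceleration_mps2=accel, steering_angle_rad=steer)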

Right now you have Level 3. What do you see for the future? Is Level 4 just around the corner?

"Well the first thing is the levels of automotive driving. The second thing is the scenarios of automated driving. For the highway scenario, we see Level 3 for the next couple of years.

It's very hard to get to Level 4 because we would have to cope with any conceivable situation on any highway in any country.

That's quite a hard one. On the other hand, a Level 3 system on the highway is very comfortable for the driver. So it's a valuable system.

When you think about the automated parking garage pilots, a Level 3 system wouldn't help very much because the driver would still have to park, go to the store and then walk back out to their car, just as before.

But a Level 4 system in the parking garage would be a real comfort system. Exit the car at the entrance, go on your way and the car parks itself.

That would be nice. So this is the kind of limited, low-speed use case where we can imagine a Level 4 system arriving much earlier than on the highway.

So the important thing is to separate the levels from the functions."
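That separation of levels from functions can be modeled as a simple lookup: each function pairs a scenario (its operational design domain) with an automation level. The entries below are a hypothetical sketch based only on the examples in this interview:

    FUNCTIONS = {
        "traffic_jam_pilot":    {"scenario": "highway jam, up to 60 km/h", "level": 3},
        "highway_pilot":        {"scenario": "open highway",               "level": 3},
        "parking_garage_pilot": {"scenario": "parking garage, low speed",  "level": 4},
    }

    def driver_can_leave_vehicle(function: str) -> bool:
        # At Level 3 the driver must remain available as a fallback; only
        # from Level 4 on can the driver step out within the scenario.
        return FUNCTIONS[function]["level"] >= 4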

How do you train these systems?

"Well, they're two different things. We have object detection algorithms and we train them with pictures and video streams we have from all over the world. Lots of data.

The second thing is the algorithm for the maneuvering itself. This is physics-based. We have no trained algorithms in the car right now because the maneuvers are described in physical algorithms written by engineers. So this is the big difference: there's no self-learning car that drives on AI data."
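The split Kiss describes, learned perception versus hand-engineered motion, might look like this in outline. The maneuvering step below uses a standard kinematic bicycle model, one common choice of "physical algorithm" but an assumption here, not Audi's confirmed method:

    import math

    def plan_step(x: float, y: float, heading: float, speed_mps: float,
                  steering_rad: float, dt: float = 0.1,
                  wheelbase_m: float = 2.9) -> tuple:
        """Physics-based maneuvering: advance the ego pose one time step
        with a kinematic bicycle model. No learned weights in this path."""
        x += speed_mps * math.cos(heading) * dt
        y += speed_mps * math.sin(heading) * dt
        heading += (speed_mps / wheelbase_m) * math.tan(steering_rad) * dt
        return x, y, heading

    # Perception, by contrast, is trained offline on worldwide image and
    # video data; at runtime the car only runs inference, along the lines of
    #   objects = trained_detector(camera_frame)  # hypothetical call
    # The detector's weights are learned; plan_step above is pure physics.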

Is the goal to have a self-learning car that learns, uploads what it learns to the cloud, and has the cloud share it with other cars?

"We call it a vision, not an actual goal."

What is the vision for the next five to 10 years?

"The vision is more complex scenarios. When we envision a future research project of driving in the city, we're facing complex intersections where there are pedestrians, cyclists, cars and trucks, and distinguishing all the possible situations and do the right hypothesis. I think this is the big thing we can do with AI.

At the moment, there are much simpler things. For example, describing a parking lot. What is a parking lot? What's not? That's very hard to describe to an algorithm.

But for a self-learning algorithm, we could feed it millions of pictures from all over the world.

That's a possible solution. So we're testing simple scenarios and coming to the more complex ones."
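As a sketch of the "feed it millions of pictures" idea, here is a minimal binary image classifier in PyTorch: is this picture a parking lot or not? The tiny network and random smoke-test data are placeholders showing the pattern, not Audi's pipeline:

    import torch
    from torch import nn

    # Stand-in CNN; a real system would use a far larger model and
    # millions of labeled photos from around the world.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 1),  # one logit: parking lot vs. not
    )
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
        """One gradient step; images are (N, 3, H, W), labels (N, 1) in {0, 1}."""
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Smoke test on random tensors (replace with real labeled images).
    print(train_step(torch.randn(8, 3, 64, 64),
                     torch.randint(0, 2, (8, 1)).float()))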

(ENGADGET)