Robotaxis

Elon Musk claims Tesla will have 1 million robotaxis on roads next year. Currently, Tesla offers Autopilot – an advanced driver-assistance system – as a standard feature in its cars. According to the company’s website, Autopilot can automatically hold a car in its lane and accelerate or brake on its own, for example, in response to pedestrians or other cars in its path. Tesla can also improve Autopilot with new features (or bug fixes) over time via over-the-air updates. In addition, Tesla sells a “Full Self-Driving,” or FSD, package for its vehicles.

Some would say that Tesla is years ahead of the competition when it comes to powertrain efficiency and software. Since October 2016, all Teslas sold have been “capable of full autonomy” pending software and an updated processor released this year. Each Tesla has 360-degree vision from 8 cameras, plus 12 ultrasonic sensors and a forward-looking radar.

Despite the equipment that comes with a Tesla, there are people who believe Tesla’s vision-based autonomy strategy will not work with existing computer vision and AI. Sean Chandler of Seeking Alpha doesn’t think it will be reliable enough for regulatory approval.

Sean thinks it is clear that, in the long run, a vision-based autonomous car is the most cost-effective way to go. Humans drive with just two eyes; a Tesla has 8 eyes, 12 ultrasonic sensors, and a forward radar. However, humans have a deep understanding of the roads we drive and the things we see. Having more eyes and sensors means that the car doesn’t have to be as smart as a human, but it still needs to be very smart. He would argue that Tesla has yet to prove that it can handle autonomy across a variety of scenarios.

Earlier this year, during a podcast with ARK Invest, Elon Musk said that the cars would soon be feature complete, or Level 3/4 on the scale below.

[Chart: SAE J3016 levels of driving automation. Source: SAE.org]
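
For readers without the chart handy, the J3016 scale can be summarized in a few lines of Python (the descriptions are paraphrased from SAE’s definitions, not verbatim):

```python
# SAE J3016 driving automation levels (descriptions paraphrased).
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # human does all the driving
    DRIVER_ASSISTANCE = 1       # steering OR speed assistance (e.g., cruise control)
    PARTIAL_AUTOMATION = 2      # steering AND speed assistance; driver supervises
    CONDITIONAL_AUTOMATION = 3  # car drives itself; driver must take over on request
    HIGH_AUTOMATION = 4         # no driver needed within a limited domain
    FULL_AUTOMATION = 5         # no driver needed anywhere

# Autopilot today is generally considered Level 2:
print(SAELevel(2).name)  # PARTIAL_AUTOMATION
```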

Here’s what Elon Musk said during the podcast:

I think we will be feature complete full self driving this year. Meaning the car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention. This year. I would say that I am certain of that; that is not a question mark.

Surprisingly, on Tesla’s recent earnings call he confirmed that it’s still possible that something will be out this year:

  • “While it’s going to be tight, it still does appear that there will be at least an early access release of a feature-complete full self-driving feature this year. So it’s not for sure, but it appears to be on track for at least an early access release of a fully functional Full Self-Driving by the end of this year.”

With the exception of a demo in 2016 and a demo for investors earlier this year at Autonomy Day, customers and investors have no idea what to expect (videos of the demos: 2016, 2019). One positive we have seen, though, is substantial improvement to Autopilot and Navigate on Autopilot, the best Level 2 system on the market.

However, unlike companies like Waymo (NASDAQ:GOOG) or Zoox, which constantly release footage of what their vehicles are capable of, Tesla doesn’t release anything.

Here are videos of a Waymo car navigating a dust storm and a police-controlled intersection. Zoox has videos of its cars navigating San Francisco and Las Vegas, such as passing double-parked cars and yielding to dozens of pedestrians.

Recent updates to Autopilot have shown that it can detect stoplights, stop signs, and cones. Tesla’s Autopilot driver-assist features appear to be more advanced than those of any other production car available to consumers.

Tesla justified the latest price hike to its Full Self-Driving option with the release of Smart Summon, which arrived days before Q3 ended, allowing Tesla to recognize $30M of Full Self-Driving revenue in its quarterly report.

Sean thinks Smart Summon is great and appears to work most of the time, but it’s far from perfect. Parking lots are chaotic, and they happen to be an entirely new driving category for Tesla.

Last month, Scott Kubo released a video of his Model 3 heading straight towards a movable barrier on Coronado Bridge in San Diego. This happened to be an extremely rare situation.

The concrete barrier is moved twice per day to regulate traffic flow between San Diego and Coronado Island. The driver was heading toward Coronado Island at approximately 5:40 PM during sunset, likely before or while the barrier was being moved back (the move usually starts around 5:30).

As Scott said in his video, the sun was setting, and the barrier on the right was casting a long shadow across the road and part of the movable barrier on the left. Only a tiny part of the barrier was in direct sunlight.

Dash Cam Footage. Source: Scott Kubo

The reflection happens to be almost exactly the size of a lane marking, and the point at which the barrier meets the road is actually quite subtle.

The Autopilot system, which is constantly searching for lane markings, likely identified the reflection of the sun as a lane marking and misidentified the barrier as part of the road.
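
To make the failure mode concrete, here is a toy brightness-threshold “lane detector” – purely an illustrative assumption, not Tesla’s actual pipeline – showing that any sufficiently bright stripe passes such a test, whether it is paint or sun glare:

```python
# Toy lane-candidate detector: flags bright pixels in a grayscale frame.
# Illustration only; real lane detection is far more sophisticated.
import numpy as np

def lane_candidates(gray: np.ndarray, thresh: int = 200) -> np.ndarray:
    """Return a boolean mask of bright, lane-marking-like pixels."""
    return gray > thresh

# Synthetic road: dark asphalt, one painted line, one patch of sun glare.
frame = np.full((100, 100), 60, dtype=np.uint8)  # asphalt ~60/255
frame[:, 30:33] = 230                            # painted lane marking
frame[:, 70:73] = 225                            # glare off the barrier's edge

mask = lane_candidates(frame)
print(np.where(mask.any(axis=0))[0])  # both stripes pass: [30 31 32 70 71 72]
```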

With computer vision, a big problem with roads is that they can vary in shades of gray. Some sections are concrete (light gray), some are asphalt (dark gray), and some sections may be newer (darker still).

When Smart Summon was delayed, Elon Musk said the team was obsessed with curbs and that the hardest part about concrete is the excessive shades of gray. Throw in shadows, which fall at varying angles and intensities throughout the day and year, and this becomes even more difficult for a vision-only system.
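
A couple of rough, assumed numbers make the point: under harsh lighting, the intensity ranges of different surfaces overlap, so no single fixed gray threshold can separate “road surface” from “curb or barrier”:

```python
# Illustrative figures only – the reflectance and lighting values are assumptions.
SUN, SHADOW = 1.0, 0.35           # crude illumination factors
concrete, asphalt = 180, 90       # assumed base brightness (0-255 grayscale)

print("concrete in shadow:", concrete * SHADOW)  # 63.0
print("asphalt in sunlight:", asphalt * SUN)     # 90.0 – brighter than the concrete
```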

The situation Scott came across is so rare that merely replicating the conditions would be difficult. During the summer, when the sun sets much later (long after the barrier is moved), these conditions aren’t possible. And due to the thickness of the shadow, the event can only happen for a few minutes each day.

This is one of those scenarios where engineers are chasing the 9s to make the system 99.9999-something percent accurate.
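
A quick back-of-the-envelope calculation – the fleet mileage below is an assumed, illustrative figure – shows why each additional 9 matters at fleet scale:

```python
# "Five nines" per mile still produces a steady stream of failures
# once mileage is measured in the millions per day.
reliability_per_mile = 0.99999            # assumed failure-free odds per mile
fleet_miles_per_day = 20_000_000          # assumed fleet-scale daily mileage
expected_failures = fleet_miles_per_day * (1 - reliability_per_mile)
print(round(expected_failures))           # ~200 failure events per day
```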

On the plus side, Tesla’s large fleet gets to encounter and discover unique situations like this that most likely wouldn’t occur in a simulation.

On the flip side, though, it raises questions about autonomy under Tesla’s strategy, which relies primarily on vision. The system also uses ultrasonics and a front radar, but I’m not sure what role they played in this case.

While there are many ways that this situation could have been prevented (such as reading and understanding the flashing “Merge Right” sign, or the traditional merge sign), it goes to show that the system needs a lot of work.

In this case, a redundancy such as LiDAR would have been an easy way to identify the barrier.
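
As a sketch of why that redundancy helps – the interfaces below are invented for illustration, not any vendor’s API – when vision says the path is clear but a ranging sensor reports solid returns inside the driving corridor, the disagreement can be resolved in favor of the sensor that measures geometry directly:

```python
# Minimal cross-check between a vision verdict and LiDAR returns (illustrative).
from dataclasses import dataclass

@dataclass
class LidarReturn:
    distance_m: float  # range to the reflecting surface
    lateral_m: float   # offset from the vehicle's centerline

def path_is_clear(vision_says_clear: bool,
                  returns: list[LidarReturn],
                  corridor_half_width_m: float = 1.5,
                  horizon_m: float = 50.0) -> bool:
    """Trust geometry: any return inside the driving corridor vetoes vision."""
    obstacle = any(abs(r.lateral_m) <= corridor_half_width_m
                   and r.distance_m <= horizon_m for r in returns)
    return vision_says_clear and not obstacle

# Vision misreads the barrier as road; LiDAR still sees a solid return dead ahead.
print(path_is_clear(True, [LidarReturn(distance_m=30.0, lateral_m=0.2)]))  # False
```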

When it comes to LiDAR, Elon Musk calls it “lame”, or more specifically an expensive “crutch” for self-driving cars. He’s not wrong: LiDAR is expensive, and once computer vision is mastered it may not be necessary. However, the question remains as to how long it will take for computer vision to be mastered. Zoox CTO Jesse Levinson said it could be decades. That’s quite far off, but even if it took 3-5 years, that could be devastating, at least if the competition figures it out first.

In suburban Phoenix, Waymo is operating a fleet of self-driving cars, and some of those cars no longer use safety drivers. If Waymo can successfully deploy a large fleet of fully autonomous vehicles in this test area, its efforts will likely be accelerated across the globe. Mastering a single location would likely reduce complications with regulators; in fact, if Waymo is successful in one city, regulators elsewhere may be asking for their city to be next on the list.
