BY THIS POINT, a modified Lincoln MKZ driving itself around San Jose isn’t anything special. With more than two dozen companies testing autonomous tech in California, what’s one more joining the pack?
Not much, until you find out what’s missing on the sedan cruising down the highway and winding through city streets at night. No spinning lidar sensor on the roof. No radar tucked behind the body panels. No ultra-accurate GPS unit—all standard self-driving hardware. In fact, according to Jianxiong Xiao, the car navigates using nothing but a handful of cameras he bought at Best Buy for $50 a pop.
Xiao is the founder of AutoX, the company behind this latest spin on autonomy. He left his role running Princeton’s Computer Vision and Robotics lab last year to launch the startup, with the mission to “democratize autonomy and make autonomous driving universally accessible to everyone.”
That is to say, if you can lower the price of entry by ditching expensive hardware, you can deliver the tech’s safety and convenience benefits to more people, faster. Xiao (who goes by “Professor X” because people make a mess pronouncing his name, and it sounds cool) says more established players have a fixed mindset about the sensor suite robocars demand, and don’t give a lot of thought to budget.
“Google has traditionally been the most dominant, and lots of companies have tried to copycat their approach,” he says. He believes big companies could just throw money at the problem, particularly in the early days, but that doesn’t scale to mass production.
Hence the Best Buy run. AutoX slapped seven cheap cameras around the exterior of the car, pointing in different directions for a 360-degree view. Instead of Differential GPS, which gives a location accurate to a few inches, the engineering team just grabs the signal from the car’s navigation system, accurate to more like 50 feet.
Don’t think this means you can withdraw a few hundred bucks and nab some leftover PC bits to make your car drive itself. First off, Xiao says that eventually, he’ll incorporate other sensors to build a fully operational, totally safe system. That includes lidar, the ultra-expensive laser scanning system that most experts (though not Tesla) think is essential for 100 percent driving safety. (The price is dropping thanks to increased competition in the space, and Xiao says he’ll wait until it’s affordable to add it to his cars.)
And second, you don’t have the stuff that Xiao is really focused on: the AI software that processes the data you pull from your sensors. That, and a trunk stuffed with $10,000 worth of chips from Intel and Nvidia. Maybe Xiao will need lidar down the road, but his first demonstration video seems to prove that his software can not only handle feeds from seven cameras at once but also use their data to steer a car around San Jose.
AutoX plans to work as a supplier, slinging self-driving software to automakers and others in need. In tech terms, the company is building the “full stack,” with a system that can perceive the environment, plan a course of action, and control the car. At the moment, those tasks are typically split: Mobileye’s popular cameras, for example, are awesome at perception, but don’t do the other bits. (Given that Intel just bought Mobileye for $15 billion, Xiao might have a nice payday in his future.)
Outsiders buy into Xiao’s idea that an integrated approach to self-driving software could make a more sophisticated “brain.” “You could imagine a better architecture would allow some feedback,” says Bart Selman, professor of computer science at Cornell. “The planning unit could ask questions, and go back to the vision system to say, ‘Reanalyze that part of the road because I’m uncertain about it.’” That kind of “active vision” is a long-standing goal in the AI world.
But if AutoX wants to master this field, it needs more than clever software. “There are so many interacting components, planning, behavioral rules, perception, location and localization, and all these need to work together,” says Raj Rajkumar, who studies autonomous driving at Carnegie Mellon. “That takes a big team.”
Xiao won’t provide a timeframe for getting his system into cars en masse, but says the software could be ready in two to three years. So when your car eventually drives you to your favorite big-box retailer, there’s still a chance you’ll be able to say “this is where it all started.”