The shift to self-driving cars is coming from two directions.
Google is developing a top-down solution. Their cars are designed from the start to be completely autonomous.
Car manufacturers are working on gradual improvements to their existing platforms, so that cars will steadily become more autonomous in certain situations, such as freeway driving and parking.
The Technology Is Already Here
The New Division of Labor by Frank Levy and Richard Murnane, published in 2004, put information processing on a spectrum. At one end were simple tasks that require only the application of clear rules; if that was all your job required, a computer has already taken it. But at the other end of the spectrum are jobs that can't be boiled down to simple rules, especially those requiring the human skill of pattern recognition. As an example, they cited driving a car in traffic. Such a job, they said in 2004, was never going to be taken by a computer in the foreseeable future.
That judgment seemed to be confirmed by the first DARPA (Defense Advanced Research Projects Agency) Grand Challenge: to build a completely autonomous vehicle that could complete a 150-mile course through the Mojave Desert. The first run, in 2004, was a debacle. Fifteen vehicles entered; two didn't make it to the start line, one flipped over in the start area, and the "winner" managed only seven miles of the 150 before veering off course and into a ditch.
But by 2012 Google's automated cars could safely drive anywhere Google had meticulously mapped the environment, and were licensed for the road in Nevada.
We are at an inflection point - a time when the rules change dramatically.
The inflection comes from the effect of exponential growth. Moore's Law says, roughly, that the amount of computing power per dollar doubles every eighteen months or so. Every so often someone predicts that Moore's Law must end because of some fundamental physical limit, but "brilliant tinkering" has found ways around the various roadblocks every time.
There has been no time in history when cars or planes got twice as fast or twice as efficient every year, so we have trouble understanding the impact of this constant doubling. The best way to explain it is the old story of the grain of rice doubled on each square of a chessboard.
The chessboard story is important because the board has two halves. We can get our heads around the numbers on the first half of the chessboard: after 32 squares the emperor had 'only' given away about four billion grains of rice - roughly the yield of one large field. But once we get into the second half of the board, the numbers quickly get weird - way past our comprehension.
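The arithmetic is easy to check for yourself. A minimal sketch - nothing here beyond the doubling rule itself:

```python
# Rice on the chessboard: one grain on the first square, doubling each square.
grains_first_half = sum(2**n for n in range(32))    # squares 1-32
grains_whole_board = sum(2**n for n in range(64))   # squares 1-64

print(f"first half:  {grains_first_half:,}")   # 4,294,967,295 - about 4 billion
print(f"whole board: {grains_whole_board:,}")  # 18,446,744,073,709,551,615
```

The first half really does come to about four billion grains; the full board comes to roughly 18 quintillion.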
.... a computer called ASCI Red, designed to be the first supercomputer to process more than one teraflop. A 'flop' is a floating-point operation, i.e. a calculation involving numbers which include decimal points (these are computationally much more demanding than calculations involving whole numbers). A teraflop is a trillion such calculations per second. Once Red was up and running at full speed, by 1997, it really was a specimen. Its power was such that it could process 1.8 teraflops. That's 18 followed by 11 zeros. Red continued to be the most powerful supercomputer in the world until about the end of 2000.
I was playing on Red only yesterday - I wasn't really, but I did have a go on a machine that can process 1.8 teraflops. This Red equivalent is called the PS3: it was launched by Sony in 2005 and went on sale in 2006. Red was only a little smaller than a tennis court, used as much electricity as eight hundred houses, and cost $55 million. The PS3 fits underneath a television, runs off a normal power socket, and you can buy one for under two hundred quid. Within a decade, a computer able to process 1.8 teraflops went from being something that could only be made by the world's richest government for purposes at the furthest reaches of computational possibility, to something a teenager could reasonably expect to find under the Christmas tree.
All the technology is in place: the sensors, the hardware and the computer programs have all been invented.
They are not yet cheap enough, nor reliable enough in less-than-ideal conditions, for cars to be completely autonomous.
But all the hard stuff has been done in the last 10 years. The refining won't take another ten.
How Tech Advances
The sensors need to improve, and this is how it's happening.
Superfine visual accuracy for driverless cars:
The new chip uses an established detection and ranging technology called LIDAR (Laser Illuminated Detection And Ranging), already used in autonomous vehicles. With LIDAR, a target object is illuminated with scanning laser beams, and the reflected light is analyzed to provide information about the object's size and its distance from the laser, building up an image of the surroundings.
However, the new camera chip has an entire array of tiny LIDARs on a coherent imager, so “we can simultaneously image different parts of an object or a scene without the need for any mechanical movements within the imager,” Hajimiri says.
Hajimiri says the current array of 16 pixels could also easily be scaled up to hundreds of thousands. By creating such vast arrays of these tiny LIDARs, the imager could one day be applied to a broad range of applications: very precise 3D scanning and printing, helping driverless cars avoid collisions, and improving motion sensitivity in superfine human-machine interfaces, where the slightest movements of a patient's eyes and the most minute changes in a patient's heartbeat can be detected on the fly.
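The ranging principle underneath all of this fits in a few lines. Here is a minimal sketch of time-of-flight ranging, the calculation every LIDAR - chip-scale or not - rests on (the function name and example timing are illustrative, not from any particular device):

```python
# Time-of-flight ranging: distance is the round-trip travel time of a light
# pulse multiplied by the speed of light, halved (the pulse goes out and back).
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Return the distance in metres to the reflecting object."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after 200 nanoseconds hit something about 30 m away.
print(range_from_time_of_flight(200e-9))  # ~29.98
```

Sweeping that measurement across many angles is what turns individual ranges into an image of the surroundings.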
Vision algorithms are now good enough to let a car drive automatically with just one camera.
Most self-driving vehicles, including Google’s various prototypes, are bedazzled with sensors, including cameras, ultrasound, high-accuracy GPS, and expensive laser-ranging instruments known as lidar. These devices help the cars build up a composite picture of the surrounding world in order to drive safely. But some components, such as the lidar, cost tens of thousands of dollars.
In a demo that shows how quickly some of the technology is advancing, Magna, a company that supplies components to most large carmakers, recently showed that it can make a car drive itself (on the highway, at least) using just a single camera embedded in the windshield. Magna hasn’t said how much the technology would cost carmakers, but vehicle camera systems tend to cost hundreds of dollars rather than thousands. The feat is made possible by rapid progress in the software used to interpret the scene, which comes from the Israeli company Mobileye.
Nathaniel Johnson, lead control algorithm engineer at Magna, took me for a ride in a Cadillac equipped with the technology. After pulling onto the I-94 just north of Ypsilanti, Michigan, he pressed a button on the steering wheel to activate the system, and then sat back and let the car take over.
“It can drive itself in many situations,” Johnson explained, as the car followed the curve of the road. “It uses various image-processing techniques.”
The entertainment display on the car’s dashboard showed the video feed being processed by MobileEye’s software. Lane markings were highlighted in green, and green boxes were drawn around each vehicle ahead, with numbers showing their distance in feet. The software also instantly recognized traffic signs, and Johnson explained that the automated driving system could be configured to stick to whatever speed the signs showed. It was possible for him to take the wheel for a few seconds, then relinquish it, and have the self-driving system retake control.
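Mobileye's production software is proprietary, so as a rough illustration of the kind of image processing involved, here is a classic single-camera lane-finding pipeline in OpenCV - edge detection followed by a Hough line transform. The thresholds are arbitrary placeholders, and real systems layer tracking and learned detectors on top of primitives like these:

```python
# Single-camera lane-line detection sketch: greyscale -> blur -> edges ->
# road-region mask -> straight-line fitting with a probabilistic Hough transform.
import cv2
import numpy as np

def detect_lane_lines(frame: np.ndarray) -> np.ndarray:
    """Return an array of line segments (x1, y1, x2, y2) found in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only the lower half of the image, where the road surface appears.
    mask = np.zeros_like(edges)
    h, w = edges.shape
    mask[h // 2:, :] = 255
    roi = cv2.bitwise_and(edges, mask)

    # Fit straight segments to the remaining edge pixels.
    lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return lines if lines is not None else np.empty((0, 1, 4), dtype=np.int32)
```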
Magna has been testing the technology for the past year in trials in the U.S., Germany, the U.K., and most recently China.
The technology wouldn’t be used this way by a carmaker, but would likely be combined with other sensor systems. Even so, it shows that automated driving capabilities could be added to vehicles relatively cheaply. “For higher levels of autonomy, we will require more sensors,” Johnson said. “But this is a nice introductory level of autonomy. It’s something people can afford, and get into their cars.”
Today, automated driving features such as adaptive cruise control and hands-free parallel parking are only offered on high-end vehicles. The Mercedes S-Class sedan, which can automatically follow the car ahead in stop-and-go traffic and will take the wheel to help swerve around obstacles, starts at $94,400 in the U.S. and can cost as much as $222,000.
The price of sensors and related systems will need to come down significantly if the technology is to have as big an impact as many people hope it will.
New Lidar Tech
The eyes of a self-driving car are its LIDAR sensors.
LIDAR is a portmanteau of “light” and “radar.” In essence, these sensors monitor their surroundings by shining a light on an object and measuring the time needed for it to bounce back. They work well enough, but they aren’t without their drawbacks. Today’s self-driving cars typically use LIDARs that are quite large and expensive. Google, for instance, used $80,000 LIDARs with its early designs. “Most vehicles in the DARPA urban challenge put half-a-million-dollars worth of sensors on the car,” says Daniela Rus, the director of MIT’s Computer Science and Artificial Intelligence Laboratory, referring to the government-backed competition that helped spawn Google’s autonomous vehicles.
But researchers at the University of California, Berkeley say they’ve developed a new breed of laser technology that could significantly reduce the size, weight, cost, and power consumption of LIDARs, potentially leading to a much broader range of autonomous vehicles. “This is important for unmanned vehicles on land and in the sky,” says Weijian Yang, one of the researchers behind the project.
Yang’s work is part of a wider effort to refine LIDARs and build a cheaper breed of autonomous cars and other vehicles. A German company called SICK already offers a LIDAR that sells for less than $10,000, and researchers from MIT and the National Research Foundation of Singapore, including Rus, recently built a self-driving golf cart using no more than four of these units. As LIDAR technology improves - and as we improve the algorithms that process the data gathered from these sensors - we’ll bring autonomy not just to cars but also to smaller contraptions, including golf carts, robots, and flying drones.
Anatomy of a LIDAR
A LIDAR operates by repeatedly changing the wavelength of its laser, so that the sensor can properly identify the light as it bounces off an object and returns. Such wavelength changes require the precise manipulation of a mirror - or sometimes multiple mirrors. Typically, a separate electrical device moves these mirrors to and fro, but at Berkeley, Yang and his team developed a new option: they can move the mirrors with the laser itself.
“You don’t need an external electrical source,” says Yang, the lead author on the paper describing the technology, which was published today in the journal Scientific Reports. “The laser can change the position of the mirror automatically. The light has some kind of force.”
The result: because they don’t need that outside electrical device, the sensor is smaller and lighter, and it consumes less power. The laser can be integrated with the mirror, and the whole device can squeeze into a few hundred square micrometers of space. And it can be powered with the equivalent of a AA battery.
A More Accurate Picture
According to Yang, this same technology could improve optical coherence tomography, or OCT, which is used in medical imaging equipment. But the most intriguing possibilities lie in the world of robotics. Among other things, Yang explains, Berkeley’s method allows lasers to change wavelengths more frequently—one microsecond versus 10 or so milliseconds—and that means a LIDAR could potentially take more readings, more quickly. In other words, it could provide a more accurate picture of its surroundings.
Emilio Frazzoli, an MIT researcher who worked alongside Rus on those self-driving golf carts, says that smaller, cheaper LIDARs aren’t essential to the near future of self-driving cars. “Right now, these sensors are still expensive, but they’re becoming better and cheaper, and I don’t see them as a bottleneck,” he says, pointing out that even with today’s sensors, the price of a self-driving car compares favorably to how much you’d spend on a standard car and a full-time driver. But he says that better sensors are certainly welcome, particularly for other applications. Indeed, Yang believes that his work could help drive the creation of additional autonomous vehicles and robots, including contraptions the size of a smartphone. In the years to come, more machines will have eyes than you might expect.
Real-Time Data Sharing
Car manufacturers and the U.S. government are seriously researching two technologies that would enable future cars to communicate with each other and with objects around them.
Imagine approaching an intersection as another car runs a red light. You don't see it at first, but your car gets a signal from the other car that it's directly in your path, and warns you of the potential collision or even hits the brakes automatically to avoid an accident. A developing technology called vehicle-to-vehicle communication, or V2V, is being tested by automotive manufacturers like Ford as a way to help reduce the number of accidents on the road.
V2V works by using wireless signals to send information back and forth between cars about their location, speed and direction, which nearby cars then use to keep safe distances from each other. At MIT, engineers are working on V2V algorithms that use this information to determine the best evasive measure should another car stray into the vehicle's projected path. A study put out by the National Highway Traffic Safety Administration in 2010 says that V2V has the potential to reduce target-vehicle crashes on the road by 79 percent.
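To make the idea concrete, here is a minimal sketch of the geometry such a system starts from: predicting the time and distance of closest approach from two cars' broadcast positions and velocities. This is not MIT's algorithm; the names and thresholds are purely illustrative:

```python
# Collision-warning sketch from V2V-style data. Given two cars' positions and
# velocities, find the time of closest approach and warn if the predicted
# separation falls below a safety threshold within a short time horizon.
import math

def time_of_closest_approach(p1, v1, p2, v2):
    """Positions in metres, velocities in m/s, as (x, y) tuples."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    dv2 = dvx * dvx + dvy * dvy
    if dv2 == 0.0:
        return 0.0  # identical velocities: the separation never changes
    return max(0.0, -(dx * dvx + dy * dvy) / dv2)

def collision_warning(p1, v1, p2, v2, threshold_m=4.0, horizon_s=5.0):
    t = time_of_closest_approach(p1, v1, p2, v2)
    sep = math.hypot(p2[0] + v2[0] * t - (p1[0] + v1[0] * t),
                     p2[1] + v2[1] * t - (p1[1] + v1[1] * t))
    return t <= horizon_s and sep < threshold_m

# Two cars approaching the same intersection at right angles:
print(collision_warning(p1=(0, -40), v1=(0, 13), p2=(-45, 0), v2=(15, 0)))  # True
```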
But researchers aren't only considering V2V communication; vehicle-to-infrastructure communication, or V2I, is being tested as well. V2I would allow vehicles to communicate with things like road signs or traffic signals and receive information about safety issues. V2I could also request traffic information from a traffic management system and work out the best possible routes. Reports by the NHTSA say that incorporating V2I into vehicles, along with V2V systems, would reduce all target-vehicle crashes by 81 percent.
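The "best possible routes" part is, at bottom, a shortest-path search over live travel times. A minimal sketch using Dijkstra's algorithm - the road graph and timings here are invented for illustration:

```python
# Fastest-route sketch: Dijkstra's algorithm over edge weights that represent
# live travel times (in seconds) reported by a traffic management system.
import heapq

def fastest_route(graph, start, goal):
    """graph: {node: [(neighbour, travel_seconds), ...]}"""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, secs in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + secs, nxt, path + [nxt]))
    return float("inf"), []

roads = {"A": [("B", 90), ("C", 30)],
         "B": [("D", 60)],
         "C": [("B", 20), ("D", 120)],
         "D": []}
print(fastest_route(roads, "A", "D"))  # (110.0, ['A', 'C', 'B', 'D'])
```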
Driving In The Dark
A self-driving car has successfully navigated winding desert roads in complete darkness in trials in the United States.
The car drove at night through the Arizona desert without human intervention and with its headlights switched off, to test the limits of its artificially intelligent navigation system.
Manufacturers of self-driving cars believe that their vehicles will be much safer to drive at night than human-controlled cars. More than half of all deaths in traffic accidents occur after dark, even though the number of miles driven decreases substantially at night compared with the daytime, according to a UK study.
The test car, a Ford Fusion Hybrid, was equipped with laser-based radar, or light detection and ranging (lidar), as its main road-sensing system. Engineers wearing night-vision goggles were on board ready to take control if the vehicle strayed off the test track.
Wayne Williams, who sat in one of the vehicles, said: “Inside the car, I could feel it moving, but when I looked out the window, I saw only darkness. I was following the car’s progression in real time using computer monitoring.”
Lidar systems on self-driving cars are normally complemented by cameras on the outside, which track road markings, surface patterns and other indicators. However, the cameras are useless in pitch-black conditions, meaning that vehicles must navigate using lidar, radar and, in some cases, global positioning systems.
Lidar is used in most self-driving cars and is recognisable by the spinning cylinders on the roof. Every second these cylinders shoot out millions of laser beams. The system measures the distance that each beam travels so that the car’s computer can create a constantly updated three-dimensional map of the car’s surroundings.
High-resolution 3D maps in the car’s “brain” contain detailed information about roads, such as signs, topography and buildings. The live lidar maps are matched against these to allow the car to determine its exact whereabouts. The live lidar maps are also complemented by data from the car’s radar, GPS and, in the daytime, its cameras.
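The first step in building those live maps is pure trigonometry: each lidar return is an azimuth, an elevation and a measured range, which the computer converts into a Cartesian point. A minimal sketch, assuming sensor-centred coordinates (the example numbers are illustrative):

```python
# Convert one lidar return (beam angles plus measured range) to a 3D point.
import math

def lidar_return_to_xyz(azimuth_deg, elevation_deg, range_m):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)   # forward
    y = range_m * math.cos(el) * math.sin(az)   # left
    z = range_m * math.sin(el)                  # up
    return x, y, z

# A return 25 m out, 30 degrees to the left, 2 degrees below the horizontal:
print(lidar_return_to_xyz(30.0, -2.0, 25.0))   # ~(21.6, 12.5, -0.9)
```

Millions of such points per second, accumulated and matched against the stored maps, give the car its sense of where it is.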
“Thanks to lidar, the test cars aren’t reliant on the sun shining, nor cameras detecting painted white lines on the asphalt,” Jim McBride, from Ford, said. “Lidar allows autonomous cars to drive just as well in the dark as they do in the light of day.”
Ford’s tests took place on private roads without traffic lights, pedestrians or other obstructions. The company is to triple the size of its autonomous vehicle test fleet to about 30 cars this year.
Four trials of driverless cars will take place in England this year. Pod-style vehicles will be tested in pedestrianised areas of Greenwich, southeast London, and Milton Keynes.
Getting To Acceptance
You may not expect the humble lift to capture the interest of Silicon Valley’s boffins, but Google is studying the history of the elevator. I learnt this week that when elevators were first unveiled they were terrifying.
And for good reason — you’re suspended hundreds of feet in the air in a box attached to a wire. A mishap could easily spell a messy death. The first automatic elevator was invented in 1900 and people hated it. They wanted an elevator operator — a man to push the buttons for them.
This continued for nearly half a century. But in 1945 New York’s elevator operators went on strike. More than a million office workers couldn’t get to their offices. Enough was enough: the era of the automatic lift had arrived.
Today similar anxieties surround self-driving cars. Google is spending a fortune on inventing one. But will we trust them?
If the history of the lift is a guide, we eventually will. But probably because of some nifty manipulation of our perceptions.
What made the automatic lift acceptable, apparently, was a big red button with “stop” printed on it. Of course, if something went seriously wrong, the button would do nothing. But it gave passengers the illusion of control. My thoughts went back to a story I wrote a while ago about one of Google’s prototype driverless cars. It doesn’t have a steering wheel.
But guess what — there’s a big button that says “stop”.