One month removed from the announcements at CES, competition in the autonomous vehicle space continues to intensify, as government regulators search for answers and philosophers ponder the technology’s moral implications.
US DOT Autonomous Car Summit March 1
One of the biggest question marks around the deployment of driverless cars on American roads has been how the government will regulate the technology. As of now, the technology has been advancing so fast that it has rendered a variety of current auto regulations and laws effectively obsolete.
Perhaps hoping to clear up some of that uncertainty, the US Department of Transportation recently announced an autonomous cars summit on March 1, where auto manufacturers, tech industry leaders and US policymakers will gather to hash through some of these challenges.
Among the big focal points will be policy proposals that could accelerate the launch of autonomous vehicles on public streets. Given the fevered pace at which companies are already working on this technology, it’s unclear whether the conference can speed up development much more.
What the summit could do, however, is establish a timeline for greenlighting these vehicles for public use, which would help hasten consumer adoption (a major goal, as we’ve pointed out before, and one that even LeBron James has been called in to assist).
While the Trump administration has said new regulations for autonomous cars will be announced this summer, current safety standards around the technology were written with the assumption that a human driver would be present in the vehicle.
Given that the vision for companies like Lyft is to launch fleets of completely self-driving, ride-hailing cars onto urban American streets within the next couple of years, hopefully this conference will address the reality that the only humans inside these vehicles will be the passengers.
Elon Musk Calls Out LIDAR Again
The race to develop and perfect self-driving cars is so expansive, fast, and relentless that it’s unlike almost anything else currently happening in product development. In one way, though, it still bears the hallmark of American innovation: competition.
While no two companies are developing autonomous vehicles in exactly the same way, one of the common ingredients for many organizations is LIDAR — an advanced laser sensor that provides a 360-degree view around the vehicle, enabling the self-driving system to anticipate potential road hazards and react appropriately.
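To make the idea concrete, here is a toy sketch (not any company’s actual perception pipeline) of the kind of range check a system might run on LIDAR returns: each return is a point in space relative to the sensor, and points within some threshold distance get flagged as potential hazards. The point values and threshold are invented for illustration.

```python
import math

def nearby_obstacles(points, max_range=30.0):
    """Flag LIDAR returns within max_range meters of the sensor.

    points: iterable of (x, y, z) coordinates in meters, sensor at origin.
    Returns the subset of points close enough to treat as potential hazards.
    """
    return [p for p in points if math.hypot(p[0], p[1]) <= max_range]

# A tiny mock scan: two returns inside the 30 m threshold, one far away.
scan = [(5.0, 2.0, 0.1), (80.0, -3.0, 0.5), (12.0, 12.0, 0.0)]
hazards = nearby_obstacles(scan)
print(hazards)  # the 80 m return is filtered out
```

A real system would of course work on dense point clouds, cluster the returns into objects, and track them over time; the 360-degree coverage is what lets it do this in every direction at once.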
As is so often the case, Tesla has been an exception. During an earnings call earlier this month, Tesla’s founder and CEO, Elon Musk, reiterated a stance he’s made in the past: LIDAR isn’t necessary for autonomous vehicle development.
“In my view, it’s a crutch that will drive companies to a local maximum that they will find very hard to get out of,” Musk said, according to The Verge. He added, “Perhaps I am wrong, and I will look like a fool. But I am quite certain that I am not.”
In Musk’s view, LIDAR is too expensive and bulky for Tesla’s self-driving designs. Instead, Tesla’s autonomous vehicles rely on cameras, radar and ultrasonic sensors. Time will tell which side wins the LIDAR debate, but Musk has never been one to bow to conformity.
Philosophers Creating Ethical Algorithms for Self-Driving Cars
Underneath all the technological advancements for autonomous vehicles lie some significant questions about their impact on our society.
Not just questions about the new scenarios they’ll introduce into our culture (naps, movies and video games while you roll down the highway!) but also deeper questions about morality.
For instance, what happens if a driverless car in a major city loses control and has to make a choice between putting the vehicle’s passengers in danger or perhaps harming pedestrians? How should the vehicle be programmed to respond to events where there is no good outcome?
With the backing of the National Science Foundation, a small team of philosophers is working with an engineer to write algorithms targeting some of the toughest ethical theories driverless cars will face.
Assuming all lives are equally weighted, there’s definitely some debate over which theory is correct for given situations.
“We might think that the driver has some extra moral value and so, in some cases, the car is allowed to protect the driver even if it costs some people their lives or puts other people at risk,” Nicholas Evans, a philosophy professor at UMass Lowell who is working on the study, told Quartz. “As long as the car isn’t programmed to intentionally harm others, some ethicists would consider it acceptable for the vehicle to swerve in defense to avoid a crash, even if this puts a pedestrian’s life at risk.”
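The view Evans describes can be sketched as an expected-harm calculation: score each available maneuver by the harm it risks, optionally giving occupants extra moral weight, and pick the lowest-scoring one. This is a minimal illustration of that idea only — the function name, maneuvers, and probabilities below are all hypothetical, not the research team’s actual algorithm.

```python
def choose_maneuver(maneuvers, occupant_weight=1.0):
    """Pick the maneuver with the lowest weighted expected harm.

    maneuvers: dict mapping a maneuver name to a pair
        (p_occupant_harm, p_pedestrian_harm) of probabilities.
    occupant_weight: 1.0 treats all lives equally; values above 1.0
        give occupants the extra moral weight Evans describes.
    """
    def expected_harm(probs):
        p_occupant, p_pedestrian = probs
        return occupant_weight * p_occupant + p_pedestrian

    return min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))

# Hypothetical probabilities for a no-good-outcome scenario.
options = {
    "brake_straight": (0.2, 0.10),  # riskier for occupants
    "swerve_left":    (0.1, 0.25),  # riskier for pedestrians
}
print(choose_maneuver(options))                       # equal weighting
print(choose_maneuver(options, occupant_weight=3.0))  # occupants favored
```

With equal weighting the car brakes (total risk 0.30 vs. 0.35), but tripling the occupants’ weight flips the choice to swerving — which is exactly the kind of disagreement between ethical theories the philosophers are trying to encode.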
Once the algorithms are finished, Evans hopes to share them with companies developing autonomous vehicles so the two groups can collaborate.
Learn how a Fortune 100 semiconductor company is meeting challenges and functional safety standards for its automotive-related technologies with an integrated and compliance-ready solution in our white paper, “Driving Compliance with Functional Safety Standards for Software-Based Automotive Components.”