The cars use artificial intelligence, as reported by The New York Times, to detect obstacles and respond to road conditions the way a human driver would.

Each car has a technician behind the wheel in case the computer malfunctions, but the tests have produced surprising results. Google's seven test cars have driven 1,000 miles with zero human assistance and 140,000 miles with only occasional human intervention. There has been just one accident: one of the test cars was rear-ended while stopped at a traffic light.

As The New York Times reports, “Robot drivers react faster than humans, have 360-degree perception and do not get distracted, sleepy or intoxicated, the engineers argue. They speak in terms of lives saved and injuries avoided — more than 37,000 people died in car accidents in the United States in 2008. The engineers say the technology could double the capacity of roads by allowing cars to drive more safely while closer together. Because the robot cars would eventually be less likely to crash, they could be built lighter, reducing fuel consumption. But of course, to be truly safer, the cars must be far more reliable than, say, today’s personal computers, which crash on occasion and are frequently infected.”
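That "double the capacity" claim comes from simple headway arithmetic: a lane's throughput is roughly the travel speed divided by the space each car occupies (vehicle length plus following gap), so shrinking the gap raises the count. Here is a minimal back-of-envelope sketch; the speeds and gaps are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope lane capacity: vehicles/hour = speed / (car length + gap).
# All numbers below are illustrative assumptions, not data from the article.

def lane_capacity(speed_kmh: float, car_length_m: float, gap_m: float) -> float:
    """Vehicles per hour for one lane at a steady speed and following gap."""
    speed_m_per_h = speed_kmh * 1000
    return speed_m_per_h / (car_length_m + gap_m)

# Roughly a 2-second human headway at 100 km/h vs. a tighter robotic gap.
human = lane_capacity(speed_kmh=100, car_length_m=4.5, gap_m=55)
robot = lane_capacity(speed_kmh=100, car_length_m=4.5, gap_m=25)

print(f"human-driven: {human:.0f} veh/h, robot-driven: {robot:.0f} veh/h")
# Halving the following gap roughly doubles throughput, which is the
# engineers' capacity claim in miniature.
```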

The cars can also be programmed to mimic individual driving styles, behaving more cautiously or more aggressively depending on the settings the driver chooses.
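Neither the article nor Google explains how that style setting works under the hood; conceptually, though, it could be a single tunable parameter that the planner maps to things like following distance and cornering limits. The sketch below is purely hypothetical, for illustration only.

```python
from dataclasses import dataclass

# Hypothetical illustration of a driving-style knob. Google has not published
# how (or whether) such a setting is actually implemented in its cars.

@dataclass
class DrivingStyle:
    aggressiveness: float  # 0.0 = maximally cautious, 1.0 = maximally assertive

    def following_gap_seconds(self) -> float:
        # Cautious keeps about 3 s of headway; aggressive shrinks it toward 1 s.
        return 3.0 - 2.0 * self.aggressiveness

    def max_lateral_accel(self) -> float:
        # Gentler cornering when cautious, sportier when aggressive (m/s^2).
        return 1.5 + 2.0 * self.aggressiveness

cautious = DrivingStyle(aggressiveness=0.2)
sporty = DrivingStyle(aggressiveness=0.8)
print(cautious.following_gap_seconds(), sporty.following_gap_seconds())  # 2.6 vs 1.4
```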

Google’s willingness to investigate future technologies is nothing new for the company. And while it may not have a clear business model for capitalizing on the project if it ever reaches the market, it could plausibly sell navigation software to complement an autonomous vehicle.

Does this mean we’ll be seeing autonomously driven cars in the future? Right now it’s too early to say, but Google is committed to making the technology safe and to pushing the boundaries of what man and machine can achieve.
