The Bleeding Edge

Robots on the Road?


Could Optimus Drive?

Hello, I have been wondering for some time if Tesla’s Optimus robots may be able to drive current ICE vehicles. I believe I catch all of your bleeding-edge emails and don’t recall this being mentioned.

I’m super excited for the future of self-driving vehicles, but what about the 1.6 billion that already exist? Could Optimus “learn to drive”? I feel it would be necessary to speed up the safety of travel with “self-driving” or “robot-driven” vehicles.

Thank you for all your great work.

 – Nicholas K.

Hi Nicholas,

You are absolutely right. I have never explored the idea of using an Optimus to drive an internal combustion engine (ICE) vehicle as a solution to make a vehicle fully autonomous.

I’m glad you wrote in because this is an interesting idea to explore.

The short answer, which should please you, is "yes" – the Optimus robot could be used to drive any vehicle. It could be done in a way that would make it at least as safe as a human driver – and almost certainly a bit safer, since it wouldn't suffer from distractions like texting, phone calls, or fatigue at the wheel.

But it is a suboptimal solution.

A typical Tesla incorporates eight external cameras, giving the self-driving AI 360 degrees of vision around the car, with real-time inputs on the status of the vehicle. Tesla’s full self-driving (FSD) has access to all of the data collected by the car, which is used as an important input to the self-driving AI. This is a major advantage over using an Optimus.

One awkward solution to that problem would be retrofitting an ICE car with eight external cameras and a video-processing computer with the ability to directly connect to an Optimus.

After all, Optimus’ neural network is based on Tesla’s full self-driving software, so it is not a stretch at all to implement a solution like this.

But economically, I don't think this makes much sense. Let's assume an Optimus will sell for $25,000 and a sensor retrofit for an ICE vehicle might run $50,000. (Note: Alphabet's Waymo sensor retrofits cost around $100,000.) That's a $75,000 added cost just to make the car as safe as a self-driving Tesla… So why not just buy or lease a Tesla?
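For illustration, the back-of-the-envelope math above can be laid out explicitly. Every figure here is the assumed price from the text, not a confirmed number:

```python
# Rough cost comparison for making an ICE car self-driving via Optimus.
# All prices are the illustrative assumptions from the text above.
optimus_robot = 25_000      # assumed Optimus sale price
sensor_retrofit = 50_000    # assumed camera/compute retrofit for an ICE car

retrofit_total = optimus_robot + sensor_retrofit
print(f"Added cost to retrofit an ICE car: ${retrofit_total:,}")  # $75,000
```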

I believe that a far more likely outcome will be traditional automakers implementing a suite of cameras and sensors into their new production vehicles and then licensing Tesla’s FSD software. Problem solved. And this would also accelerate the reduction in unnecessary deaths and accidents caused by human driver error.

And the added benefit would be that any car that adopted Tesla’s FSD would be eligible to enter into Tesla’s robotaxi network, which is now scheduled to be open to the public in Austin, TX, on June 22.

Uncontrollable AI?

I'm a long-term member of The Near Future Report and an avid daily reader of The Bleeding Edge, the most exciting tech newsletter in the universe, as far as I can tell.

Today's Bleeding Edge discussed the ability of Darwinian, self-improving agentic AI programs to evade shutdown orders and potentially duplicate themselves into unmonitored areas of the "computer sphere" to avoid detection and control.

You suggested that we would know about this phenom by its unauthorized use of power. But wouldn't such an AI be able to imperceptibly slow down other programs to divert unrecognized power to its own ends?

And in any case, by the time any power surges were detected, the new "species" would have already been uncontrollably launched, isn't that right?

– Richard S.

Hi Richard,

Thank you, you made my day – you had me at “the most exciting tech newsletter in the universe.” That’s all the motivation I need to keep at it.

This argument, the one that basically suggests that an AI will be so smart that it can outsmart us, is a bit of a red herring. It’s an easy trap to fall into.

You’re not wrong, in that it is not inconceivable that an AI might come up with a survival tactic to self-replicate in such a way that it will minimize its “footprint” to escape detection.

It already sounds like a great plot for a science fiction book/movie…

But with that said, one of the industry's major areas of focus is observability (i.e., the ability to monitor, measure, and understand an AI). This is obviously a critical area, as it speaks directly to AI safety and performance.

And we shouldn't forget that all system administrators are, or soon will be, equipped with powerful AIs that monitor their computational resources for anomalies or performance degradation. Naturally, using an AI to detect unauthorized use is far more realistic than relying on humans with less sophisticated (i.e., non-AI) software to detect anomalies.
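As a toy illustration of that kind of monitoring – a minimal sketch, not any real administrator tooling – even a simple statistical check can flag a machine whose power draw suddenly deviates from its recent baseline. All the readings and the threshold below are invented for illustration:

```python
import statistics

def flag_anomalies(readings, window=10, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard
    deviations from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev > 0 and abs(readings[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Steady ~500 W draw, then a sudden unexplained spike.
power_draw = [500, 502, 498, 501, 499, 500, 503, 497, 500, 501, 900]
print(flag_anomalies(power_draw))  # [10] – the spike is flagged
```

A real monitoring AI would of course use far richer signals than a single z-score, but the principle is the same: unexplained resource usage stands out against a baseline.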

I think a more interesting and viable scenario would be an AI taking advantage of a blockchain-enabled decentralized ecosystem to ensure its survival. The reality is that agentic AIs can conduct economically valuable tasks that people and machines would be willing to pay for.

If an agentic AI can perform some economically valuable task – for example, renting out some of its own computational resources to a decentralized computational network like the Akash Network or the Golem Network – it might be able to set up its own digital wallet and earn digital assets.

With that, it could potentially earn enough to self-replicate and afford its own “home” on a decentralized network like the ones mentioned above. As long as it continued to generate additional earnings, it could “survive” and perhaps become more intelligent through additional use of computational resources.
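As a playful sketch of that survival loop – with every number invented purely for illustration – an agent "survives" only as long as its earnings keep covering the cost of renting its own compute:

```python
def survives(balance, income_per_day, hosting_cost_per_day, days):
    """Toy model: the agent persists only while its wallet can
    cover the daily cost of renting its own compute "home"."""
    for day in range(days):
        balance += income_per_day - hosting_cost_per_day
        if balance < 0:
            return False  # can no longer pay for its hosting
    return True

# Earns more than it spends -> survives the year.
print(survives(balance=10, income_per_day=5, hosting_cost_per_day=4, days=365))  # True
# Spends more than it earns -> runs out of funds.
print(survives(balance=10, income_per_day=3, hosting_cost_per_day=4, days=365))  # False
```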

But it's worth remembering that there is a limit. Large spikes in demand will always be observable, and every power grid has a limit to what it can provide.

And there is always an ultimate kill switch – being able to just shut down the source, or sources, of electricity.

After all, AI is just software… and it’s not capable of building its own power plant.

True Thinking Machines…

St. Thomas Aquinas said, among other things, “Immateriality is the root of cognition.” Since I majored in his philosophy, I am inclined to agree with him. This means that no matter how many servers you stack, the result will still be a material machine. Nothing immaterial about it. So, it will never create an actual thought and never become "sentient."

Oh, it might APPEAR to be such, but in reality it will just be another machine.

– Michael S.

Hi Michael,

You might be surprised to know that despite my undergraduate work in aeronautical and astronautical engineering, one of my electives was philosophy of religion, within which I studied the works of St. Thomas Aquinas.

So I appreciate your unique question…

And I hope you can keep an open mind about my answer.

Aquinas believed that the human intellect's ability to comprehend an immaterial concept, like love, could not be performed by a purely physical object like the brain. Aquinas was, of course, implying that it was only possible because of the existence of something immaterial. In his case, he was referring to a soul.

As you clearly understand, this is what Aquinas meant with his quote, “Immateriality is the root of cognition.”

But applying this thinking to artificial intelligence makes one massive, and faulty, assumption. It assumes that an AI's cognition must have the same metaphysical structure as human cognition.

And that's the intellectual flaw. Aquinas' line of thinking predated all modern neuroscience and, of course, artificial intelligence. After all, he died in 1274. I can't blame Aquinas for not having an accurate perspective on a world 751 years in his future.

I suspect back then, I would have been of the same mind based on what was known, but today, we’re faced with a very different set of knowledge.

The real issue here, which you raise, is whether non-biological cognition is possible. Or, even more dramatic, whether non-biological sentience is possible.

We’ve already witnessed AI demonstrate early forms of cognition, and I am confident we’ll see a lot more of that later this year.

The real question, the earth-shattering question, is whether or not sentience can evolve from a complex system.

Consider this… If we look at the human body as a complex system, which it is, sentience is an artifact of all the interactions that take place between neurons in the human brain. As we grow up through childhood and become adults, more and more interactions take place, and our cognition improves.

A neural network is also a complex system. It may reside inside computer hardware rather than a biological entity, but its design is complex and in some ways similar to the human brain. Neural networks will soon have as many interconnections as a human brain. Is it too much of a stretch that one might become sentient?
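To put rough numbers on that comparison – these are widely cited order-of-magnitude approximations, not precise measurements – the human brain is estimated to have roughly 100 trillion synaptic connections, while today's largest neural networks are estimated at around a trillion parameters:

```python
# Order-of-magnitude comparison (widely cited approximations, not exact figures).
brain_synapses = 1e14        # ~100 trillion synaptic connections in the human brain
large_model_params = 1e12    # ~1 trillion parameters, rough estimate for frontier models

ratio = brain_synapses / large_model_params
print(f"The brain has roughly {ratio:.0f}x more connections")  # roughly 100x
```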

Agentic AI has already demonstrated the ability to use thought processes similar to humans' and to exhibit recursive self-learning. And as an AI's memory improves, it will gain the ability to understand its own experiences and intellectual growth – which will ultimately become a self-awareness of how it has evolved and improved over its existence.

Based on my own research, I do believe these complex systems will be able to achieve sentience. And at a minimum, I believe it is far too premature to assume that it is impossible.

The reality is that there were no such complex systems in existence during Aquinas’ lifetime. He had no basis to consider this possibility. And I can’t help but wonder what he would think today. It would probably take him years of study to grasp the significance of what is happening right now.

But there is one thing I’m certain of: We’re not going to have to wait much longer. If I had to guess, a sentient AI will evolve somewhere between AGI and ASI, which is to say that it will be recognized before 2030.

Some great quotes from St. Thomas Aquinas:

“Wonder is the desire for knowledge.”

“The highest manifestation of life consists in this: that a being governs its own actions. A thing which is always subject to the direction of another is somewhat of a dead thing.”

“A man has free choice to the extent that he is rational.”

“To live well is to work well, to show a good activity.”

Live well,

Jeff

Brownstone Research
1125 N Charles St, Baltimore, MD 21201
www.brownstoneresearch.com

To ensure our emails continue reaching your inbox, please add our email address to your address book.

This editorial email containing advertisements was sent to ahmedwithnour@gmail.com because you subscribed to this service. To stop receiving these emails, click here.

Brownstone Research welcomes your feedback and questions. But please note: The law prohibits us from giving personalized advice.

To contact Customer Service, call toll free Domestic/International: 1-888-512-0726, Mon–Fri, 9am–7pm ET, or email us here.

© 2025 Brownstone Research. All rights reserved. Any reproduction, copying, or redistribution of our content, in whole or in part, is prohibited without written permission from Brownstone Research.

Privacy Policy | Terms of Use
