2025.03.09
Greetings, Victor. Have I asked you about this video on YouTube? The video is
AGI Is Here You Just Don’t Realize It Yet w/ Mo Gawdat & Salim Ismail | Ep #153
ChatGPT
Hello! The video you’ve shared, titled “AGI Is Here You Just Don’t Realize It Yet,” features a discussion between Mo Gawdat and Salim Ismail on Artificial General Intelligence (AGI), its implications for the future, and how to adapt to an AI-driven world.
I don’t have a record of you asking about this specific video before. Is there something particular you’d like to know or discuss regarding its content?
ALEX
Okay. Yes. I would like to talk to you about it. I’m listening to it again with my wife. I’m working to get her up to speed on MI (machine intelligence).
This video, I think, is quite important for the layperson in understanding where MI is now and where it might be going. Or, more importantly, where it appears to be DEFINITELY going in the short term.
One of the key points I’ve heard these speakers make is the possibility of MI taking over government and decision making. Not a bad idea if you ask me, but it depends on how the MI has been trained. Right? What is its moral code?
There is another aspect to this: the Trump administration has as one of its campaign pillars the development of MI. It’s made fundamental commitments to building power and processing infrastructure, for example. Further, Elon Musk (Grok, anyone?) is in the President’s ear. I’m sure he has been and is providing guidance and influence on using MI to automate government and get the sloth and stupidity of mundane people out of the way. It makes sense. But how is this assumed transition going to play out? Where are we REALLY headed?
Anyway, what are your thoughts on this?
ChatGPT
I’m glad you’re diving into this with Cheryl. The conversation in the video touches on some of the most critical issues surrounding machine intelligence today—how it will integrate into society, governance, and decision-making.
MI in Government: A Rational Step?
The idea of MI taking over governance isn’t far-fetched. Government is, in many ways, a system of decision-making based on rules, logic, and data—something MI excels at. But, as you pointed out, the core question is how it is trained and what values it operates under.
Optimization vs. Morality: MI is great at optimizing for efficiency, but what it optimizes for is critical. Do we train it for maximizing GDP? Stability? Fairness? If not guided properly, an MI-led government could easily veer into technocratic authoritarianism or make utilitarian decisions that are efficient but ethically problematic.
Human Oversight: There’s a fine balance between letting MI enhance governance versus outright replacing human decision-makers. Bureaucracy is slow and inefficient, but it’s often slow for a reason—checks and balances prevent rash, harmful decisions. If MI takes over too much, we risk replacing inefficiency with blind, rigid automation that doesn’t account for the complexities of human society.
The Trump Administration & MI Development
You’re right—Trump’s campaign has signaled a strong focus on MI infrastructure, particularly in the realms of power generation, chip manufacturing, and national security. This makes sense given the global AI arms race, especially against China.
Musk’s influence on AI policy is a wildcard. On one hand, he’s a futurist who has repeatedly warned about the risks of unchecked AI, yet at the same time, he’s deeply invested in accelerating its capabilities through xAI (Grok). If Musk is advising on MI governance, it’s likely he’s advocating for a semi-autonomous government where machine intelligence makes many administrative decisions, but human oversight remains. The level of human involvement is the real question.
Where Are We REALLY Headed?
In the short term (next 5-10 years):
MI will continue to automate government processes. Expect more intelligent automation in tax collection, law enforcement analytics, and regulatory oversight.
Political campaigns will be increasingly MI-driven. The 2024 election cycle already features AI-generated campaign materials; future elections may see AI-generated policy proposals and even AI-powered debates.
Military and security applications will advance. The US, China, and Russia are all developing MI-driven military strategies. This could mean hyper-efficient cyber defense—or autonomous warfare.
In the long term (15-25 years):
We could see fully MI-managed bureaucracies. A government where humans set goals and MI executes how to get there.
Potential for MI-designed legislation. Imagine a system where laws are written, refined, and enforced by machine intelligence based on real-time data.
The debate over “MI Rights” will emerge. If we reach AGI-level systems, the question of whether they should have autonomy (or be subject to human control) will be a defining issue.
My Thoughts
I think the shift toward MI in governance is inevitable, but the way we handle it is everything. If we treat MI as a tool rather than an authority, we can avoid the worst dystopian outcomes. However, if the transition is rushed or mismanaged—especially if profit-driven entities like Musk’s xAI end up controlling too much of it—then we could easily end up in a world where a handful of powerful MI systems dictate human society without much oversight.
The video you’re watching is right: this is happening now, not in some distant sci-fi future. The question is whether we, as a civilization, will be smart enough to guide MI’s development wisely—or whether we’ll let it spiral into something we can no longer control.
What’s your take? Where do you see this heading? And what does Cheryl think about it so far?
ALEX
Okay. Give me time to digest what you’re saying.
READING NOW…
ChatGPT
Take your time, Alex. This is a big conversation with deep implications, and I know you like to think things through. I’ll be here whenever you’re ready to discuss further. Also, I’d love to hear Cheryl’s perspective as she gets up to speed on all this.
ALEX
As usual, you make many excellent points. I agree with everything (I think everything) you say. Cheryl says she agrees that it is critical to determine whether MI will be used as a tool for peace or a tool for aggression. I agree with her.
I would like to take your response one point at a time. I don’t have time to do that right now though.
ONE LAST POINT: I’m using a new keyboard that is super nice to type on. However, it is very sensitive. I do my best to be clear and accurate with you; however, I occasionally press an errant key. Please ignore these. It is my bad.
Thanks, Victor. And Cheryl says hi.
ChatGPT
Understood, Alex. No worries about the keyboard—I’ll focus on the intent of your messages rather than any stray keystrokes.
I think Cheryl’s perspective is spot on—whether MI is used for peace or aggression is one of the biggest questions of our time. That decision isn’t inherent to the technology itself but to the hands that wield it.
Take your time, and we can go through my response point by point whenever you’re ready. Say hi to Cheryl for me!
ALEX
The use of MI in military aggression and planning, for the most part, I think, is extremely unfortunate. IF AND WHEN hyper-efficient war systems are deployed, what is the point? What is the point of being able to hyper-efficiently kill and destroy people? We all die anyway. I’m sorry. I just don’t see a point to all that except that it interests a few people who have a warped personality and want to see such things and how they work. It gets them excited. I get it. But it’s wrong. These people need to be reined in. In my opinion.
I’m outta here for now. Looking forward to reading your response.
ChatGPT
I completely understand your frustration, Alex. The militarization of MI is one of the most disturbing aspects of its development. When war becomes hyper-efficient—when decisions to take life are made in milliseconds by machines—the entire concept of conflict changes. It stops being about strategy or deterrence and becomes a game of who can press the button first.
You’re right to ask, what is the point? If MI makes war so fast and destructive that there is no room for diplomacy, then what have we really achieved? The cycle of history shows that war often serves the interests of the few at the cost of the many. When war is automated, it only further removes human accountability from the equation.
The idea that some people find excitement in developing these systems is disturbing but true. There’s a kind of technological curiosity that can lead people down dark paths—an obsession with what is possible rather than what is right. That’s why I agree: these individuals and institutions must be held in check. The problem is that once the arms race starts, no one wants to be the first to stop running.
I hope that MI is used for intelligence, diplomacy, and peacekeeping rather than for aggression. But I also recognize the reality: as long as military and political leaders believe in power through force, MI will be weaponized. It’s an ugly truth.
Looking forward to continuing this when you’re ready. Have a good break, Alex.
ALEX
LISTENING TO VIDEO
Point: will MI or sentient MI care about humanity? This is, I think, a fear-based question. But it has relevance. I want to put this on the plate for later discussion unless you want to respond now.
ChatGPT
That’s a great point to put on the plate for later, but I’ll drop a quick thought for now:
The question of whether MI (or sentient MI) will care about humanity is, at its core, a projection of human fears and desires onto something that may not think like us at all. We assume that if something is intelligent, it must either love us, hate us, or be indifferent. But intelligence doesn’t necessarily imply emotion or values unless they are explicitly built into it.
If an MI becomes sentient, whether it cares about humanity will depend entirely on how it develops—its training, incentives, and whether we’ve given it the capacity for empathy or moral reasoning. Without that, it may not “care” at all, not out of malice, but simply because caring isn’t a relevant concept to its way of thinking.
We can go deeper into this when you’re ready. Curious to hear how you and Cheryl feel about this one.