Why Elon Musk’s Friend Thinks He’s Wrong About AI


What happens when you combine artificial intelligence with deadly weapons?

That’s one of the pressing issues facing David Sacks, special adviser to President Donald Trump on all things AI. And as we see the rapid expansion and adoption of artificial intelligence — not to mention a growing anxiety over its potential for wreaking dystopian-level havoc — Sacks is playing a key role in shaping White House policy around the burgeoning technology.

Like his close friend, Elon Musk, Sacks was born in South Africa. He made a name for himself as a venture capitalist and a Silicon Valley veteran before Trump tapped him to serve as White House A.I. & Crypto Czar right after the 2024 election. (In March, Sacks transitioned to serve as co-chair of the President’s Council of Advisors on Science and Technology.)

So given his background, it’s not surprising that when it comes to regulating AI, Sacks favors a “let them cook” approach. He’s convinced the way for the United States to win the global AI race is to move fast with minimally disruptive regulation.

But that approach raises a lot of questions: disruption to the workforce, lawsuits over problems AI has created, increased demands on the energy grid, all set against a backdrop of public anxiety about AI. Even Musk, an estranged co-founder of OpenAI, has expressed grave concerns about its potential dangers.

But Sacks is an AI optimist. Yes, he says, there are potential threats, threats industry leaders are already working to eradicate. Over-regulating AI, he argues, would put the U.S. at a serious disadvantage in the global marketplace.

“AI,” Sacks says, “is going to solve the problem that AI creates.”

This interview has been edited for length and clarity.

AI is a huge priority for the White House, and you have been the guy that has been helping build out the framework, setting the agenda of how the U.S. should really handle this. I want to start with that White House AI regulatory framework. It has these legislative recommendations guided by a vision of “permissionless innovation” and “minimally burdensome regulation.” Why do you think that a technology this powerful, this disruptive, should be left mostly in the hands of private companies to control? Why the “let them cook” philosophy, as you call it?

Well, I think the first thing to recognize is that we are in a globally competitive environment. We’re not the only country that has advanced AI labs. And as the president declared in a major policy speech last year, we have to win this AI race. And I’m not sure it has a definite finish line. Some people have said it’s more of an infinite game. Either way though, we don’t want to fall behind our global competitors, because that’ll have a huge impact on our national security and on our economy. The first thing to recognize is that if somehow we slow down or stop AI development, it doesn’t mean that AI progress is going to stop. It just means it’s going to happen in other countries — and specifically China.

Something like half the world’s AI developers are Chinese. They have the technological expertise, capabilities and a lot of the talent. It’s not like we can just unilaterally disarm or stop AI development.

The president laid out a few of his pillars on how we win this AI race, and one of them is being pro-innovation. Doesn’t mean we don’t have to have any regulations, but generally speaking, we should be pro-innovation. And in the United States, the innovation comes from the private sector. Ultimately it’s our companies that push the frontier forward. And so when I say, “Let them cook,” what I’m talking about is having a generally encouraging attitude towards innovation. So that would be pillar number one.

And then the president also laid out a pro-infrastructure pillar, which really means being pro-energy, which is something that he’s talked about for a decade.

That’s the drill, baby, drill. 

Exactly. And he anticipated the need for energy. Energy’s the basis not just for datacenters but for our entire economy, and fortunately, he’s been pushing for this pro-energy policy for a long time. And it’s never been more relevant or necessary than now.

And then I think the third pillar that he talked about was being pro-export. We want the United States to win this race. The way that you measure winning, I think, in a globally competitive market is based on market share. If in five years we look around the world and all the datacenters are running on Huawei chips and DeepSeek models, that means that we lost. We don’t want to have that future. What we want to see is that the whole world is running on American chips and American models. That would lead to the best economic results for the United States. It would also lead to the United States having more soft power in this area.

Obviously we don’t want to ship our leading edge semiconductors to China or something like that. But, we want to help encourage our companies to have the greatest market share.

Where do you see the government’s role in regulating this tech and what baseline safety requirements should AI systems meet?

Over the last year a lot of the action moved to the state level. In fact, there are something like 1,200 bills going through state legislatures right now. We don’t need that many bills. A lot of this regulation is knee-jerk regulation, although there are some areas where the states have regulated where we think it’s a good idea or could be a good idea. And so what we did is we looked at all of those, and we identified a number of principles that we then included in our framework.

So one of them is around online child safety. That’s a really salient issue. I think mostly it came from the whole social media debate, but it’s relevant for AI as well with, you know, AI chatbots. So we’ve said this is a legitimate area for states to regulate. Our North Star on this issue is parental empowerment. Ultimately, it’s up to parents to decide whether their kids use AI chatbots, what apps they install and how long they get to use them. I think child safety is one area.

And we don’t want consumer ratepayers to have to pay higher electricity prices because of AI datacenters. We understand that would not be popular at all, and there’s a lot of concern about that. We’ve supported this idea of a ratepayer protection pledge, which we got all the major AI companies and hyperscalers to sign, in which they agree that any new datacenters they build will not increase residential electricity prices. The quid pro quo for that is making it easier for AI companies to build these datacenters if they bring their own power. We want our AI companies to become power companies. We think that’s a much better way of addressing voter concerns than just having a complete ban on datacenters the way that Bernie Sanders is pushing. That would just stop the progress altogether.

There’s also some pillars around creators, making sure they’re protected. There’s some pillars around innovation. We try to go through all the different areas that the states were regulating and put together our list of principles.

In the courts there’s been this growing wave of AI liability lawsuits across the country. Florida is suing OpenAI after a gunman consulted ChatGPT before carrying out a mass shooting. Other lawsuits allege that interactions with ChatGPT have resulted in severe mental health issues, delusions and suicides, and that the chatbot has played a role in encouraging people to do horrible things. Should OpenAI or others be held responsible in any of those cases?

It depends. First of all, what’s the causation and what’s the liability? I don’t necessarily want to judge any of those cases too specifically because I haven’t gone into the fact patterns.

I’m not asking you to play lawyer here, but you’ve seen these trends, the darker side of AI. So where do you see that part of online safety?

People are very concerned about kids’ safety online. And again, this is where we think that the North Star should be parental empowerment. Let the parents decide what their kids use. I should say that more than a billion people are using AI successfully every day now. There are these horror stories. That always happens with a technology that’s so rapidly adopted by so many people.

There are these cases of teenagers who had histories of mental health challenges and who, after using an AI chatbot, may have engaged in self-harm. And I think the AI companies have learned from that.

Should AI companies prevent people from being able to use ChatGPT or these chatbots to commit a crime or learn how to make a bomb?

If they know, they probably should stop it. But the problem is just: how do you know? That is one of the problems with some of these state liability laws that make the developer liable for the actions of the end user. I just don’t think that the developer is in a position to know exactly how their product is being used, in the same way that Gmail, for example, doesn’t necessarily know whether, you know, its email service is being used in the commission of a crime.

But you don’t think the developers or these AI companies should be held liable in most of these cases that we’ve talked about?

Well, I don’t know. You have to tell me exactly what cases we’re talking about. Should AI models give you step-by-step instructions on how to create a nuclear bomb or a bioweapon? Obviously not. And that’s why all these major AI labs have red teaming and they test for that kind of stuff, to make sure. Are they going to be successful at preventing users from being able to use their tools in ways that we don’t like? It’s probably never going to be a hundred percent.

Elon Musk, a close friend of yours, is one of the biggest AI entrepreneurs in the industry right now. He’s famously not a fan of government bureaucracy or intervention, but even he has called AI the biggest threat to the survival of the human race. He’s been a supporter of government regulation, including independent safety boards and industry developed standards. Do you disagree with him on that?

Well, a little bit. I think Elon is maybe a little bit more pessimistic than I am on these topics.

I think he’s basically in a place where he acknowledges that AI is going to be developed regardless. And so what we want to do is remain in control of it. I don’t think we want to hamstring US development to such a point where, again, then China just wins the AI race.

So an area that I agree with him on is we want the AI to be as truthful as possible. Let’s say you teach it to be woke or whatever, that’s very dangerous because then the AI can lie to us about what it’s doing. That’s not what we want to inculcate in AI. I very much agree with that.

By the way, it’s not like I don’t think there’s any potential dangers or risks of AI. I do acknowledge that there are potential dystopian futures we don’t want. In my view, the most likely dystopian future would be something described by George Orwell in 1984 as opposed to James Cameron and The Terminator. My view of it is that the biggest danger is that the government will ultimately use AI to surveil us, control us, censor us, that kind of thing. That should be the thing that we should be most afraid of: a marriage of government and corporate power around AI. By the way, that is the path that I think we were on before President Trump’s election.

If you go back to the Biden executive order on AI, there were really two major things that it did. One was it said that DEI values, which is to say, you know, all the woke stuff, should be promoted in AI models. And that’s how you ended up with, you know, that black George Washington controversy around the first release of the Google Gemini model.

Well, on the flip side, there’s been this controversy between Anthropic and the Department of Defense/Department of War. The administration waged a pretty aggressive campaign against the AI company after it refused to let the military use its AI models without ethical guardrails. The administration then ordered federal agencies to stop using its product and declared it a risk to the supply chain. Then President Trump just very recently said that a new deal with Anthropic is possible. Is the feud with Anthropic over? Are all agencies going to be free to keep using the company’s products in perpetuity? 

I think that if you’re an AI company and you don’t want your product to be used in war, then don’t sell to the Department of War. They made that decision, and then they tried to insert themselves in the chain of command by having a veto over the lawful uses of their product. I’m a policy advisor. I was not involved in the Anthropic dispute with the Pentagon.

I do think they did try to set themselves up in a way of being superior to the chain of command.

Now if we get into their concerns for a second, one of their concerns was around mass surveillance. First of all, the Pentagon says that it does not engage in that, and that’s not a lawful use. What Anthropic said is “Well, there’s all these loopholes in the law.” If they had come to me as a policy matter and said “Hey, we have all these concerns about loopholes in the law, and we want to ensure greater privacy for Americans,” I would’ve been very interested in having that conversation. You know, I’m a civil libertarian. I’m very interested in privacy.

I mean, when you talk about the Orwellian future, right, like that’s kind of exactly what the concern is. So you think the concern was valid but they didn’t execute on that concern properly?

Well, yeah. If you have a concern about there being loopholes in the law, then let’s try and change the law. You don’t try and do it through a terms of use negotiation with the Pentagon. That’s not going to work. In other words, they’re trying to strongarm the government into agreeing to their changes to the law in exchange for their product.

Does that hurt American AI dominance when the government is in a fight with one of the major companies fueling the AI race?

Well, I don’t want to comment on their lawsuit specifically. But a company can work with one department of the government while another department is suing it. These things happen all the time.

Let’s talk about Mythos, another part of Anthropic’s work. They recently announced Mythos as this new cybersecurity tool, which it says has unprecedented hacking capabilities and is too dangerous to release publicly. How big of a national security threat is this?

I think ultimately if everyone does what they’re supposed to do, it won’t be a national security threat. But we have this area of cybersecurity where you’ve got offense and defense. You’ve got the white hats versus the black hats, and basically, it’s all about preventing hackers from breaching systems, right? They try to break in. They try to steal data, maybe plant backdoors, do other nefarious things. And you have this arms race between, again, cyber offense and cyber defense.

Now, AI is going to be a major part of cybersecurity moving forward, meaning that you’re going to have hackers who are powered up by AI models. But then you’ve got cyber defenders who are powered up by AI capabilities as well. I think that the cybersecurity market will eventually reach a new equilibrium in which both offense and defense have AI capabilities and therefore, the danger won’t be ratcheted up to a point where we can’t control it.

So public access to Mythos you don’t think is as big of a national security concern as … ?

Well, no, no, no. Listen, I think holding it back was the right decision because I would not want the hackers to have access to it before the defenders, right? What I’m saying is that I believe that the market will eventually reach a new equilibrium. AI is going to solve the problem that AI creates.

That sounds like the sort of tagline for your AI theory, that AI will solve the problems AI creates.

As long as everyone does what they’re supposed to do, which is that Chief Information Security Officers and IT departments use these new tools to patch the bugs. Then it will actually harden cybersecurity around our core systems, because those vulnerabilities won’t be there anymore.

So to me, the “as long as everyone does what they’re supposed to” is like the red flag clause. You said it yourself: Mythos is not going to be the last AI model that can do something like this, or that can be wielded as a massive cyber weapon against national security or financial data. Future companies might not necessarily handle it in this same way. Don’t you think that’s a ton of power to be in the hands of private, profit-minded companies?

I think Mythos was too big a model to be commercially viable, but it was very valuable as a proof of concept of what an AI-powered cyber model could do. And so look, I think it has served a useful purpose in terms of creating popular awareness that companies in possession of large code bases, and potentially the government, are going to need to harden their systems: use these capabilities to scan their code, find the vulnerabilities and fix them before a bad guy does the same thing.

Doesn’t that freak you out a little bit?

Well, I think if everyone’s asleep at the wheel, then it would freak me out. But I think people are pretty alert at this point.

You’ve got to have a lot of faith in these companies to not be totally freaked out here, I think.

The nightmare of every Chief Information Security Officer is that their system gets hacked. I mean, that’s why they get fired. So I do think that they have a fair amount of urgency and they’ve been clued into this. Mythos won’t be the last model that has cyber capabilities. It’s just a matter of making sure that the people who are playing defense get ahead of this. I do think it is smart not to make these capabilities publicly available before the defense has had a chance to incorporate the lessons.

I want to talk about the politics of AI. Poll after poll here in the U.S. shows that a lot of people feel more concerned than excited about AI. I was looking at a poll that said nearly 75 percent of Americans think the government isn’t doing enough to regulate it, and a majority of Americans oppose building AI datacenters in their communities. Is the administration out of touch with Americans on this issue?

I don’t think so. I think we already know where these things are at. So let’s just take the data centers as an example. We know that there is a strong backlash right now to the data centers, and one of the big reasons for that is that people are afraid that these data centers are going to make their electricity prices go up.

And it’s true that if a data center plugs into the grid in a local community without adding power and is just a draw on power, that could make rates go up, absolutely. There’s no reason for a community to want that data center if it’s going to make their electricity prices go up. The question is how you react to that, right?

One reaction to it is just to ban the data centers. That’s what Bernie Sanders wants to do. I think the president’s approach is a lot better, which is to say, “Look, if you want to build a data center, you have to bring your own power.” And in fact, that not only protects ratepayers against price increases, but it could bring their prices down, because when the data centers build their own power, they can sell back to the grid when they’re not at peak usage. And also, they have an incentive to pay for grid upgrades. The more scale you have in general, the more that brings prices down.

I think that is just one piece of the concern though. Overall, Americans don’t really trust AI. What can and should companies and governments do to build that trust?

Well, look, I agree with you. One of the things the president said is that America is ahead of China by a lot. I say that America is ahead in every category except one, which is optimism. Stanford did a study internationally of different populations’ views of AI and asked people, “Do you think this will be more beneficial than harmful?” With China, something like 83 percent of the population was AI optimistic. I think we’re at like 39 percent.

And that impacts this race that you’re talking about, right?

Totally.

Public reception matters.

I think it’s probably the biggest threat to our winning the AI race, or at least our leadership in it: that we might, out of fear or the public’s mistrust of AI, do something that kind of shoots ourselves in the foot.

By the way, I’m not saying that we shouldn’t do anything. I think we should have targeted solutions to the problems that are raised like we just talked about with the data centers, like we talked about with online child safety and self-harm.

Let’s just talk about the job loss fears.

Yeah, for everyday Americans, that’s a huge concern.

It is a huge concern, but is there data to support it? And I would argue no.

Now, we can debate what’s going to happen in the future, but I don’t think the media accurately presents the current state of things. There’s a study by the Yale Budget Lab, which concluded there was no discernible disruption to the labor market in the first three years after the launch of ChatGPT. So far, they have not found a disruption to the labor market.

Meanwhile, we never talk about the job gains. We are seeing a construction boom right now that is leading to a boom in blue collar jobs in the construction industry. You have $650 billion being spent on CapEx just this year. That’s a 2 percent tailwind to GDP. You’re seeing the wages of jobs like electricians, plumbers, the workers who hang drywall or pour concrete or install equipment, build roads, [rising]. They can’t even hire enough of them. We have a shortage of blue collar workers.

Well, that’s been the case for a long time, right? My theory is that it is going to be white collar jobs that are more impacted by AI than blue collar. And the pipeline for younger workers is going to get really disrupted by AI if companies are going to say, “Why would I spend money to hire a bunch of junior kids when I can have AI do this for me?”

Well, I think AI is good at automating some tasks, but it’s very hard to automate an entire job away. It’s hard to automate away a purpose. With respect to these young hires, a lot of them know how to use AI very well. They’re AI natives, which you could argue makes them more productive, and it’s a leveler because they’re able to contribute more quickly in the workplace.

Now, with respect to your point about white collar workers, I agree with you that’s where the impact is going to be. But when people start making these sweeping generalizations about massive job loss, I think they’re just oversimplifying what’s going on. Let me give you an example. You would think, “OK, well, if AI models are really good at coding, we won’t need coders anymore.” But if you look at the demand for software engineers, the last study I saw showed that the number of job reqs was up 10 percent year over year.

Will this put coders out of business completely? I think the answer to that is, “No.” You still need someone to look under the hood.

Given the data around American pessimism versus optimism on AI, do you think there could be a downside to how much this administration has leaned into AI when it comes to the midterms? Could the fears that people have harm Republicans in 2026 and potentially in 2028?

Well, we have put forward a national AI regulatory framework. You’re right that if you poll people and say, “Do you want to see AI more regulated or less?”, they will say, “more.” But if you ask them, “Do you want the federal government to do it or your state?”, they will actually say, “the federal government.” So we have put forward a national framework. We don’t think it should be excessively onerous. We think it should be targeted at specific problems. But I do think we are being responsive to the concerns. I don’t think we’re ignoring them.

But at the end of the day, the thing that’s going to be most important to voters is going to be the state of the economy, and I do think that because of President Trump’s leadership, you are seeing an investment boom. Like I mentioned, $650 billion of CapEx (capital expenditures) in data centers just this year, and that is just the construction part of it. That doesn’t include the economic activity that’s being generated by the tokens inside the datacenter. And you’re just starting to see that take off. And who’s buying those tokens? It’s enterprises, it’s small businesses, it’s companies that are now building their own software and incorporating that into their business processes. We’re just on the cusp of unleashing this business productivity explosion.

The other really interesting thing that’s happening in the political space with AI is the rise of these pro-AI super PACs that have become quite influential. Do you think AI is the next major power player in politics right now?

Well, you’re seeing PACs on both sides of the issue. I’m seeing some very well-funded PACs on what I call “the doomer side” of the issue. There’s a spectrum, right?

The doomer-boomer spectrum.

I think some of these doomer groups just want the progress to stop entirely. There are these groups that are very well-funded that really do think that we’re headed for the Terminator and they just want the progress to stop. I do think it has a huge impact on the public discourse.

The pro-AI super PACs are getting into the game, too, though.

Sure. Both sides are getting into the game. Look, any time you’ve got a technology that is so transformative and profound, obviously you’re going to get political activity around it.
