Opinion

Biden botches again: On AI

The president’s executive order to regulate AI is the wrong path. It will create new bureaucracies and stifle positive innovation.


We have reached the proverbial fork in the road with respect to regulation of artificial intelligence. 

One path is to watch carefully for harmful abuses or misuses of AI that the market cannot or will not fix, and to create regulations that resolve or minimize those problems. 

The other path is to create regulations designed to prevent any imaginable or possible abuse or misuse of AI from occurring in the first place. 

The path we’ve been on has been tremendously successful. It allows for innovation and exploration, with a certain amount of trial and error. It relies on the shared interest of AI providers and their customers in products that users find valuable and safe, and on providers’ incentive to fix any problems that occur before they affect most consumers. 

Regulation’s role is to step in to solve real problems that emerge when government intervention is the only fix. This approach is often called permissionless innovation. 

The new path we are being asked to consider leaves little room for innovation and exploration. It entails various bureaucracies charged with defining in advance what can and cannot be tried and explored. Innovators and researchers must explain what they want to try and obtain permission before doing so. 

Meanwhile, bureaucrats who are not inventors try to imagine what ideas might be out there and how they might go wrong, and then regulate those theoretical harms before they happen. This approach is often called the precautionary principle. 

On Oct. 30, President Biden released an executive order on regulating AI. Unfortunately, its regulatory procedures fully embrace the second approach, casting AI as a risk that must be carefully controlled rather than an opportunity to trust but verify. 

Indeed, the executive order justifies federal regulation of AI by invoking the Korean War-era Defense Production Act, a law originally meant to ensure the production of war materials when needed and to prevent U.S. firms from supplying our enemies. But increasingly, this law has been used to justify all sorts of federal interventions in the economy.


What is AI?

Ten years ago, almost anyone you asked that question would have answered that it is machine intelligence with a human-like ability to reason. The famous Turing Test was an early benchmark: If you interacted repeatedly with an entity and could not tell it wasn’t human, it counted as AI. 

But the meaning is different today. That old definition is still there, but thanks mostly to clever marketing, we now give the AI label to a wide range of mathematical programs (algorithms) and software code that can learn and solve problems.

In many ways, the now-famous ChatGPT and other similar AI programs are like spellcheck programs on steroids. They take a large amount of information and apply complex algorithms to organize that information in response to user queries. Software programs have been trying to get better at this for many years, and now they are finally reaching a new level of success. 

Let’s be clear: They are not attempts to create an independently reasoning intelligence.


Problems with Biden’s order

U.S. law and President Biden’s executive order recognize this and define AI as algorithms: software tools designed to do certain tasks very well and applied in ways that people find exciting and useful. 

The executive order starts by asserting that these programs may be used by bad actors and therefore must be regulated. It requires regulation of any “dual use” AI model, meaning any AI program that has one intended purpose but could conceivably be used to create a national security threat. The definition is broad: a program qualifies not only if it can be used for ill, but even if it can merely be used to evade oversight or control. 

Well, anyone with a tiny bit of imagination can think of a way virtually any AI program could be used by bad actors to do bad things. Just like they can use Microsoft Word or Excel, text messaging, database programs — you name it — to facilitate their crimes or attacks. 

So, industry has already pointed out that this essentially means all AI products are now regulated. The executive order requires sharing with regulators any new AI model under development, including its inner workings and innovations; reporting any collaboration with foreign researchers or providers; and asking permission to develop new types of AI, a sure recipe for ensuring that many new ideas and innovations never get anywhere. 

It also helps ensure that big companies, which can more easily wade through the regulatory swamp, will have huge new advantages over startups and small competitors that lack that capacity. 

The executive order also requires reporting by companies acquiring, developing or possessing “large-scale computing clusters.” Any entity merely capable of running an AI model, whether or not it has one or ever plans to build one, is subject to the executive order’s requirements. 


Labor protections

Not surprisingly, given the Biden administration’s collaborations with organized labor, the executive order directs the secretary of labor to work with labor unions and publish “principles and best practices” for mitigating AI’s potential harms to jobs.  

Companies seek the most efficient and least costly way to provide goods and services to customers. As technologies improve that often means substituting machines of various kinds for labor.  

Centuries of technological history show that this process always creates more jobs than it replaces, though it does require workers to transition when their jobs become feasible for machines to perform.  

But there is also a long history of legislators trying to “protect” workers from technological change. It hasn’t worked and has only harmed consumers by getting in the way of advances in production, improvements in products and reductions in costs. 

AI can and should replace some human workers. Rather than trying to stop that, we should be working to help workers navigate the transition to new roles. Biden’s executive order does recognize this as well. It asks federal agencies to look at ways AI can help workers be more productive and to “modernize immigration pathways for experts in AI and other critical and emerging technologies.”


Embrace change

I have only started learning how to use ChatGPT and other AI tools, but I have colleagues who are doing amazing things with them, finding ways to accomplish tasks that simply would have been too hard or too labor intensive just two years ago. I want to see more of that happening in our economy, in our schools and in our arts and entertainment. 

This executive order makes a terrible mistake in treating AI algorithms as guilty until proven innocent. 

This can only discourage innovation and the advancement of what are already amazing new software tools with possibilities most of us cannot imagine. 

The way to prevent harmful uses of such innovations is not to imagine bad acts and try to stop them while they are still imaginary. 

That way lies stagnation. 

The government’s energy would be much better spent understanding and evaluating AI developments, and honing its ability to identify real market failures: cases where problems emerge that the market does not solve and regulation is necessary. 

That way lies progress.



Adrian Moore

Adrian Moore is vice president of the Reason Foundation and a regular contributor to the Observer. He lives in Sarasota.
