Tech leaders sound off on new AI regulations

AI is looking regulation in the face—and many in the industry are nervous about what they see.

Last month, the Biden administration issued a sweeping executive order on artificial intelligence. The order focused in particular on privacy concerns and the potential for bias in AI-aided decision-making, either of which could violate citizens' civil rights. It was a tangible indication that AI is on the government's regulatory radar.

We spoke to AI practitioners about the order and found they were concerned about both the nature of the proposed regulations and the potential for further restrictions. No industry likes being regulated, of course, but it's worth listening to what those working in the trenches have to say. Their comments highlight the likely pain points of future interactions between the US government and the fast-growing AI industry.

Regulation can help minimize risk

Some practitioners were encouraged that the federal government was beginning to regulate AI. "Speaking as both an investor and a member of society, the government needs to play a constructive role in managing the implications here," says Chon Tang, founder and general partner of Berkeley SkyDeck Fund. "The right set of regulations will absolutely boost adoption of AI within the enterprise," he says. Clarifying the guardrails around data privacy and discrimination will help enterprise buyers and users "understand and manage the risks behind adopting these new tools."

Specific areas of the executive order also came in for praise. "The most promising piece of the EO is the establishment of an advanced cybersecurity program to detect and fix critical vulnerabilities," says Arjun Narayan, head of trust and safety for SmartNews. "This program as well as the big push on advancing AI literacy and hiring of AI professionals will go a long way to establishing much-needed oversight, improving safety guardrails, and most importantly governance—without stifling much-needed AI-driven research and innovation across critical sectors of the economy."

Enforcement is key ... and unclear

Much of the reaction was less than positive, however. For instance, one of the most critical aspects of regulation is enforcement—but many in the AI industry say it is unclear how the executive order will be enforced.

"There appears to be no tangible framework that this EO will be enforceable or manageable at this stage," says Yashin Manraj, CEO of Pvotal Technologies. "It remains just a theoretical first step towards governance."

Bob Brauer, founder and CEO of Interzoid, believes that the lack of specifics will hold back real-world practitioners. "Much of the document remains ambiguous, planting the seeds for future committees and slow-moving regulatory bodies," he says. "Concern arises, for example, with mandates that AI models must clear yet-to-be-defined government 'tools' before their release. Considering the rapid development and diversity of AI models, such a system seems impractical."

Scott Laliberte, managing director and global leader at Protiviti's Emerging Technology Group, elaborates on the gaps between the order's mandates and the realities of their practical application. "Many of the [executive order's] suggestions do not have feasible solutions yet, such as the AI-generated content marking and bias detection," he says. "Common methodologies for suggested processes, such as red-team safety tests, do not exist, and it will take some work to develop an industry-accepted approach." Laliberte says the call for global coordination "is commendable, but we have seen the struggle for years to come up with a common approach for global privacy, and getting a global consensus on guidelines for AI will prove even more difficult."

The threat of a quiet exodus

The global AI landscape was top of mind for many of the experts we spoke to. Regulation in the absence of international coordination can lead to "regulatory arbitrage," in which relatively portable industries seek out the least regulated jurisdictions to do their work. Many practitioners believe that AI, which has captured the imagination of technologists around the world, is particularly ripe for such moves.

"The oversight model will severely slow down the rate of progress and put complying US firms at a significant disadvantage to companies operating in countries like China, Russia, or India," says Pvotal's Manraj. "We are already seeing a quiet exodus from startups to Dubai, Kenya, and other locations where they will have more freedom and cheaper overhead. Manraj notes that companies that set up shop elsewhere can still benefit from US technologies "without being hindered by government-imposed regulatory concerns."

As the founder of Anzer, a company focused on AI-driven sales, Richard Gardner is definitely feeling the pressure. "Given these considerations alone, we're considering relocating AI operations outside of the United States," he says. "No doubt that there will be a mass exodus of AI platforms contemplating the same move, particularly since new reporting obligations will put a halt to R&D activities."

Tang of the Berkeley SkyDeck Fund sees the issue extending beyond the corporate world. "There is a real risk that some of the best open source projects will choose to locate offshore and avoid US regulation entirely," he says. "A number of the best open source LLM models trained over the past six months include offerings from the United Arab Emirates, France, and China." He believes the solution lies in international cooperation. "Just like arms control requires global buy-in and collaboration, we absolutely need countries to join the effort to design and enforce a uniform set of laws. Without a cohesive coordinated effort, it's doomed to failure."

An unbalanced playing field for startups

Even within the United States, there are worries that regulations could be onerous enough to create an unbalanced playing field. "Centralized regulations impose hidden costs in the form of legal and technical compliance teams, which can unfairly favor established companies, as smaller businesses may lack the resources to navigate such compliance effectively," says Sreekanth Menon, global AI/ML services leader at Genpact. This burden makes it difficult, he says, "for enterprises to jump on the centralized regulatory bandwagon."

Jignesh Patel is a computer science professor at Carnegie Mellon University and co-founder of DataChat, a no-code platform that enables business users to derive sophisticated data analytics from simple English requests. Patel is already contemplating what future regulations might mean for his startup. "Right now, the executive order does not significantly impact DataChat," he says. "However, if, down the line, we begin to go down the path of building our own models from scratch, we may have to worry about additional requirements that may be posed. These are easier for bigger companies like Microsoft and Meta to meet, but could be challenging for startups."

"We should make sure the cost of compliance isn't so high that 'big AI' begins to resemble 'big pharma,' with innovation really monopolized by a small set of players that can afford the massive investments needed to satisfy regulators," adds Tang. "To avoid the future of AI being controlled by oligarchs able to monopolize data or capital, there must be specific carve-outs for open source."

Why reinvent the wheel?

While almost all the experts we spoke to believe in the potentially transformative nature of AI, many wondered if creating an entirely new framework of regulations was necessary when the government has decades of rules around cybersecurity and data safety on the books. For instance, Interzoid's Brauer found the privacy-focused aspects of the executive order somewhat puzzling. "AI-specific privacy concerns seem to overlap with those already addressed by existing search engine regulations, data vendors, and privacy laws," he says. "Why, then, impose additional constraints on AI?"

Joe Ganley, vice president of government and regulatory affairs at Athenahealth, agrees. "Regulation should focus on AI's role within specific use cases—not on the technology itself as a whole," he says. "Rather than having a single AI law, we need updates to existing regulations that utilize AI. For example, if there is bias inherent in tools being used for hiring, the Equal Employment Opportunity Commission should step in and change the requirements."

Some practitioners also noted that the administration's executive order seems to have a lighter touch with some industries than others. "The executive order is surprisingly light on firm directives for financial regulators and the Treasury Department as compared to other agencies," notes Mark Doucette, senior manager of data and AI at nCino. "While it encourages beneficial actions related to AI risks, it largely avoids imposing binding requirements or rulemaking mandates on financial oversight bodies. This contrasts sharply with the firmer obligations and directives imposed on departments like Commerce, Homeland Security, and the Office of Management and Budget elsewhere in the sweeping order."

However, Protiviti's Laliberte assumes that the weight of the federal government will come down on most industries' use of AI eventually—and, as Ganley and Brauer suggest, will do so within existing regulatory frameworks. "While US regulation in this space will take time to come together, expect the executive branch to use existing regulations and laws to enforce accountability for AI, similar to how we saw and still see it use the Federal Trade Commission Act and Consumer Protection Act to enforce privacy violations," he says.

Prepare now for regulation to come

Despite the worries and talk of a mass AI exodus, none of the practitioners said they believed industry upheaval was imminent. "For most US technology companies and businesses, the executive order will not have immediate consequences and will have a negligible effect on day-to-day operation," says Interzoid's Brauer. Still, he adds, "Anyone vested in the nation's innovative landscape should vigilantly monitor the unfolding regulations."

Protiviti's Laliberte believes that anyone in the AI space needs to realize that the wild west days may be coming to an end—and should start getting ready for regulation now. "Companies, especially those in regulated industries, should prepare by having an AI governance function, policy, standard, and control mapping to avoid claims of negligence should something go wrong," he says. "It would also be advisable to avoid or at least put heavy scrutiny on the use of AI for any functions that could lead to bias or ethical issues, as these will likely be the initial focus for any enforcement actions." With this order, he says, "the executive branch has signaled it is ready to take action against bad behavior involving the use of AI." 
