
Artificial Intelligence, Part 4: What's next for AI? To regulate, or not to regulate?


The camera is seen on a facial recognition device as U.S. Customs and Border Protection officers use it at Miami International Airport to screen travelers entering the United States on February 27, 2018, in Miami, Florida. (Photo by Joe Raedle/Getty Images)

With a grasp now on how artificial intelligence can both help and harm humanity, the ultimate question plaguing lawmakers and scientists is how to integrate it into society safely.

But the prospect introduces a plethora of legal concerns right out of the gate.

One concern many have flagged is the use of facial recognition for its potential to intrude on people’s privacy. Cynthia Rudin, a computer science professor at Duke University, cited a case last year in which a lawyer from a firm suing the parent company of Radio City Music Hall in New York City was kicked out of the venue after facial recognition technology identified her trying to see a Rockettes performance.

“Right now, you walk down the street and you have a reasonable expectation of biometric privacy,” Rudin said. “I am concerned that that reasonable expectation of privacy is going to disappear.”

It’s true that law enforcement, retail stores and airports are ramping up their biometric surveillance. You may have opened your phone or tablet using similar technology just so you didn’t have to type in a password.

States have passed laws limiting facial recognition technology, but on the national level, lawmakers have tried and failed twice now to get anywhere.

Senate Democrats introduced another bill this Congress that would prevent the government from using any such technology. They cite research finding the faces of black, brown and Asian individuals are 100 times more likely to be misidentified than white male faces. The future of the legislation remains unclear.

Lawmakers like Rep. Ted Lieu, D-Calif., hope to get Congress moving on regulation. He’s been one of the few vocalizing concerns in Congress. Just last week, he and colleagues from both sides of the aisle introduced a bill to prevent AI from launching a nuclear weapon – but more is on the to-do list, and he admitted Congress is “absolutely” behind the curve.

“One of the reasons [why] is because of the lightning speed at which technology and artificial intelligence is moving,” Lieu said. “But also, we do move far too slowly on a number of issues in Congress, and I’m trying to at least get Congress more up to speed on both the amazing benefits of AI and the potential harms.”

The Democrat from California is currently working on legislation that would create a bipartisan Blue Ribbon Commission to make recommendations to Congress on what types of AI to regulate and how to go about it.

The issue has the potential for cooperation across political affiliations. Rep. Michael McCaul, R-Texas, said in a statement how the U.S. learns to leverage AI “could determine whether our nation continues to lead or shrinks back on the world stage.”

“Technology, as developed, is neutral, but as we’ve seen in authoritarian countries, it can do harm in the wrong hands,” McCaul said. “That’s why it’s critical America prioritize the development of artificial intelligence that is safe, secure and supplements human ability to ensure this rapidly advancing technology is used for good.”

Senate Majority Leader Chuck Schumer, D-N.Y., unveiled a framework for regulating AI last month in an effort to hold AI labs accountable. It would require companies to allow independent experts to review and test their products before they’re released to the public.

Schumer said he’ll meet with academics, advocates and industry leaders to refine the proposal to “prevent potentially catastrophic damage to our country while simultaneously making sure the U.S. advances and leads in this transformative technology.”

Sen. Michael Bennet, D-Colo., also introduced a bill last week to create a task force to look at U.S. policies on AI and how to reduce threats to privacy, civil liberties and due process. The AI Task Force would include an official from the Office of Management and Budget, the National Institute of Standards and Technology and the Office of Science and Technology Policy.

That task force would work for 18 months before issuing a final report.

Over at the White House, the Biden administration said it’s seeking public comment on potential accountability measures for AI systems. President Joe Biden himself said it remains to be seen whether AI is dangerous.

“Tech companies have a responsibility, in my view, to make sure their products are safe before making them public,” he said.

The Federal Trade Commission warned it won’t “hesitate to crack down” on businesses that use AI for nefarious purposes.

“There is no AI exemption to laws on the books,” said FTC Chair Lina Khan.

Last year, the White House released a non-binding “AI Bill of Rights” in hopes of getting ahead of AI companies and ensuring they deploy their technology wisely and limit surveillance.

But all of these efforts fall short of any sort of federal law regulating AI or its applications.

Yet, there are many areas law experts are already puzzling over when it comes to AI. One of those is defamation.

What could be the first lawsuit against OpenAI is pending in Australia, brought by a regional mayor, Brian Hood.

Members of the public informed him ChatGPT was falsely naming him as a guilty party in a foreign bribery scandal – something he denies, pointing to the fact he was never charged with a crime.

The chatbot said Hood had served time in prison over it, so his lawyers told OpenAI to fix the error, or they’d sue for defamation. OpenAI has not commented on the suit or responded to requests for comment for this article.

“Who’s responsible for that? That’s a very difficult question.”

Dan Burk is the chancellor's professor of law at the University of California, Irvine, and specializes in intellectual property and copyright law involving technology. He pointed out that making a defamation case in the U.S. requires “actual malice” (actual knowledge of falsity or reckless disregard for the truth) by the entity that made the statement against a public figure.

In cases with AI, the plaintiff would have to prove the machine, the person entering the prompt or the people who programmed the system knew the truth and disregarded it.

But, the machine has no awareness or intent to do something; the person entering the prompt into the chatbot had no idea what would come out; and the people who programmed the system can’t monitor every output or foresee every falsehood.

“It looks like nobody was intended to be hurt here, which might mean that nobody is responsible,” Burk said. “That would be a real problem, right?”

He suggested a new approach could be something like disclaiming liability for a product, but it’s never been done before with technology like this. Other lawyers say a creative plaintiff’s lawyer could argue AI programmers didn’t deploy proper safeguards to ensure “journalistic” standards.

Another murky area is copyright law. ChatGPT and programs like it are able to generate new texts, images and other content – but they’re trained to generate these works by being exposed to large quantities of existing works.

One artist highlighted this by submitting an AI-generated piece, called The Electrician, to the World Photography Organization’s Sony World Photography Awards. The piece looks like an old photograph with two women, one crouching behind the other. The artist, Boris Eldagsen, won, but declined the first-place award, stating, “They are different entities. AI is not photography.”

“Just as photography replaced painting in the reproduction of reality, AI will replace photography,” he wrote in a description. “Don’t be afraid of the future. It will just be more obvious that our mind always created the world that makes it suffer.”

The contest’s organizers said they didn’t know the extent to which the work utilized AI, and accused Eldagsen of deliberately misleading them.

Burk pointed to the fact that U.S. copyright law allows “fair use,” meaning it’s OK to copy all or part of a creative work without permission if you’re doing it for the right reasons or to create something else that doesn’t compete with the original work.

“The argument being made right now is maybe this machine learning output from machine learning systems should be the same thing,” he said. “It’s true, we have to copy data in order to train the systems. But if we’re not selling those copies, if we’re not using those copies to compete and we’re just using them to create the statistical model that artificial intelligence systems use to create new works, maybe that’s a fair use. Maybe that’s OK.”

That assertion is currently being litigated in the U.S. and the United Kingdom. Getty Images – a stock photo provider – is suing AI company Stability AI Inc., which generates images from text inputs.

Getty accused Stability of copying millions of its photos without a license to train Stable Diffusion to generate images – despite the fact Getty has licensed “millions of suitable digital assets” to other leading AI labs. The suit also accuses Stability of infringing on its trademarks, because some of the AI-generated images show Getty’s watermark.

The European Union is ahead of the U.S. in proposing new copyright rules for generative AI. Companies behind tools like ChatGPT would have to disclose any copyrighted material used to develop their systems.

However, the “who’s responsible?” question remains to be answered. Burk said the requirement of intent is what will separate these everyday, accidental-damage situations from cybercrimes like deepfakes, misinformation and fraud, since prosecutors can argue those in court and a jury can decide whether someone meant to harm another with AI.

As we’ve covered, there’s harm AI can cause that humans can’t even foresee yet. But people will sue over these harms, and current U.S. law doesn’t quite determine who’s responsible for harm caused by a robot, chatbot or other unaware, unconscious entity. The issue is working its way through the courts.

Some people are asking if a machine can be considered an inventor, but Burk said asking those questions misses the point.

“This is not your science fiction movie, this is not your favorite Star Trek or Star Wars robot,” he said. “It has no awareness, no cognition, no idea of itself. It’s following its programming, the statistical model that it has of its data.”

He predicted that as the technology gets litigated around the country and the world, the blame will probably fall on the people who designed the machine, or Congress will write a law protecting them from liability, as it did with Section 230 and social media companies.

Some advocates don’t want to wait for courts to litigate these issues, though. Anthony Aguirre of the Future of Life Institute proposed something similar to what Schumer is pushing – third-party verification of what companies are working on.

“This is what we would do with many other industries,” Aguirre said. “If you’re doing a biology experiment or something that involves, say, human subjects, you go to the Institutional Review Board. Somebody else signs off.”

He said even soft law, like simple norms and industry standards, could go a long way toward keeping a level playing field and preventing potential harm. Otherwise, companies may take advantage of an unregulated market and rush incomplete products out faster without testing them for problems.

Rudin pointed out AI companies already are testing their products on human subjects, including children and teenagers.

“Right now, all the tech companies are controlling us, and the people have not spoken back,” she said. “If we don’t sort it out now, those tech companies are going to get entrenched to the point where there’s no way out of it. I don’t want to be subjected to those few tech companies and their monopolistic practices.”

But considering how Congress is still working on regulating other types of technology like social media platforms, concrete federal law may be a faraway solution.

The Congressional Research Service, a nonpartisan body that provides shared staff to congressional committees and members, wrote on the topic of AI and copyright law earlier this year and said lawmakers do have the option of considering amendments to the Copyright Act or other legislation.

But the service added, “Given how little opportunity the courts and Copyright Office have had to address these issues, Congress may wish to adopt a wait-and-see approach. As the courts gain experience handling cases involving generative AI, they may be able to provide greater guidance and predictability in this area through judicial opinions.”

Burk said this is the more prudent approach, as opposed to trying to amend the law or propose new laws as the EU has. Judges will make rulings, and if there’s enough of a common theme pointing to a change in the law, or if courts aren’t taking it in the direction people and lawmakers think it should go, then Congress can step in.

“Technology is always out ahead of the law, right? People who are being innovative and creating new technologies tend not to be thinking about regulation or about the social impact,” Burk said. “It’s almost impossible to foresee what’s going to happen, because the technology is continually changing, continually being updated.

“If the legislature gets involved too early with whatever the technology is today, tomorrow, it’s going to look different, and whatever laws they enacted are probably going to be the wrong laws.”

Regardless of whether and when lawmakers intervene to try to mitigate AI’s harms and maximize its benefits, there was one thing all the experts agreed on: AI is transformative – it will upend life as we know it.

In 1765, it was mechanization. In 1870, it was oil and gas. In 1969, it was nuclear energy. Now, the fourth industrial revolution may be in the 2020s, and it may very well be artificial intelligence.

Remember when Economics Professor Anton Korinek said no job is safe from AI? He reiterated – that may not be a bad thing.

“We may be on the verge of entering the ‘Age of AI,’ in which machines and robots are going to produce our food. And we, humans, can focus on what makes life the most fun. We can focus on social connections, we can focus on the activities that give us the most meaning.

“That would really be a radical transition. We could be much happier.”
