
Senators reaffirm worries about AI during hearing with ChatGPT founder


OpenAI CEO Sam Altman speaks before a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence, Tuesday, May 16, 2023, on Capitol Hill in Washington. (AP Photo/Patrick Semansky)

As tech companies race to develop cutting-edge artificial intelligence products, many in Congress are realizing they should act just as quickly to regulate them.

“There’s no way to put this genie in a bottle. Globally this is, it’s exploding,” Sen. Cory Booker, D-N.J., said Tuesday during a Senate Judiciary subcommittee hearing on oversight of AI, which is essentially nonexistent at this point.

Lawmakers have begun exploring ways to impose oversight that strikes a balance between fostering technological innovation and ensuring moral and ethical practices. Members of Congress, the executive branch, scholars and industry leaders have raised concerns about the risks of rapid AI advancement, including copyright infringement, the spread of misinformation, impersonation and election interference, among others.

The New York Times reported that researchers at Microsoft found AI showed signs of operating like a human brain.

Potential regulations include licensing, product testing requirements, and disclosures or watermarks on AI-generated material.

Among the hearing witnesses was Samuel Altman, the CEO of OpenAI, the company behind ChatGPT. He called for a combination of companies behaving well, government regulation and public education.

He said his company recognizes people are anxious about the ways AI could change their way of life, and that the company is anxious, too.

“I think if this technology goes wrong, it can go quite wrong," Altman said.

Anjana Susarla, a professor of Responsible AI at Michigan State University, said the public education component is critical, something she referred to as "algorithmic literacy" – understanding the impact of algorithms on people's lives.

She is particularly concerned about data collection.

“AI is everywhere in our life. We don’t realize, even if you order an Uber or you know, you’re going to Facebook and checking your feed or you're ordering groceries. Pretty much every aspect of our life there is a lot of data being collected about us. We generate what I would call our digital traces," Susarla said.

She said at the very least, the federal government should impose AI data privacy and protection safeguards.

“Different states are adopting a very piecemeal approach. I think it would be greatly wonderful if we have some sort of national move toward you know, we have some control over what data is collected and how companies use it," Susarla said.

The potential for AI to facilitate the misuse or abuse of facial recognition technology has also been raised by experts like Dr. Cynthia Rudin, a professor of computer science and engineering at Duke University and last year's recipient of the prestigious Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity.

Rudin said she's unsure about the efficacy of a mandatory label to distinguish all AI-generated material, arguing overuse could diminish its significance. She does, however, advocate for inspecting AI products before they go to market.

"And currently these companies you know, it benefits them to just release it because they’re making a huge amount of money from it, right? And the question is, is it good for the rest of us when they do that?” Rudin said.

She argues time is of the essence for lawmakers to act.

“It’s very hard to undo it,” Rudin said. “Once the technology is there and it’s in everybody’s phones, it’s very difficult to say, ‘Oh nope, you’re not allowed to use that anymore.’”
