The CEO of the company that made ChatGPT has expressed concerns about the lack of regulation in the field of artificial intelligence (AI).
Sam Altman, CEO of the Microsoft-backed startup OpenAI, said: “People in our industry criticize regulation a lot. We have been advocating for regulation, but only for the most powerful systems.
“Models that are 10,000 times more powerful than GPT-4, models that are as intelligent as human civilization, probably deserve some regulation.”
Speaking at an AI event in Taiwan hosted by the charitable foundation of Terry Gou, the founder of major Apple supplier Foxconn, Mr. Altman mentioned that there is a “reflexive anti-regulation sentiment” in the tech industry.
He said that while excessive government regulation is possible, he is not overly concerned about it.
“Regulation hasn’t been pure good, but it has been beneficial in many ways. I don’t want to have to evaluate the safety of every airplane I board, but I trust that they are generally safe, and I believe regulation has played a positive role there,” he said.
“It is possible for regulation to be flawed, but we don’t live in fear of it. In fact, we believe some form of regulation is important.”
Many countries are currently planning to regulate AI, with the UK hosting a global AI safety summit at Bletchley Park, Britain’s Second World War codebreaking base, in November.
Greg Clark, chair of the science and technology committee and a Conservative MP, warned that the government may need to act with “greater urgency” to ensure that potential legislation does not quickly become outdated as powers such as the US, China, and the EU consider their own rules regarding AI.
The conference will focus on understanding the risks posed by the emerging technology and how to establish national and international frameworks for its regulation.
‘Nowhere close’ to existential AI threat
This comes after the Pentagon’s chief digital and artificial intelligence officer stated that the world is “nowhere close” to facing an existential threat from AI.
Dr. Craig Martell clarified that recent headlines about generative models like ChatGPT have given people a false impression of their capabilities.
“AI is not a singular technology where its presence guarantees success or its absence poses a danger,” he said.
“It is neither a panacea nor Pandora’s box.”