Latest Turing Award winners again warn of AI dangers | The Verge



Two trailblazing scientists who today received this year’s Turing Award for creating fundamental artificial intelligence training techniques are using the spotlight to draw attention to the dangers of rushing AI models out for public consumption.

University of Massachusetts researcher Andrew Barto and former DeepMind research scientist Richard Sutton warned that AI companies are not thoroughly testing products before releasing them, likening the development to “building a bridge and testing it by having people use it,” according to The Financial Times.

The Turing Award, often referred to as the “Nobel Prize of Computing,” carries a $1 million prize and was jointly awarded to Barto and Sutton for developing “reinforcement learning” — a machine learning method that trains AI systems to make optimized decisions through trial and error. Google senior vice president Jeff Dean described the technique as “a lynchpin of progress in AI” that has remained “a central pillar of the AI boom,” leading to breakthrough systems like OpenAI’s ChatGPT and, before that, Google DeepMind’s AlphaGo.
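The trial-and-error idea behind reinforcement learning can be illustrated with a minimal sketch. This is a toy Q-learning example on a hypothetical five-cell corridor (not the large-scale methods used in systems like ChatGPT or AlphaGo): the agent starts at the left end, is rewarded only on reaching the right end, and gradually learns which action to take in each cell.

```python
import random

# Toy reinforcement learning sketch: tabular Q-learning on a 1-D corridor.
# The agent starts at cell 0 and earns a reward of +1 only at the last cell.
N_STATES = 5
ACTIONS = [-1, +1]                  # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated value of taking each action in each state.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; reward 1.0 on reaching the goal cell."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):                # trial-and-error episodes
    state = 0
    while state != N_STATES - 1:
        if random.random() < EPSILON:   # explore: try a random action
            action = random.choice(ACTIONS)
        else:                           # exploit: take the best-known action
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        # Q-learning update: nudge the estimate toward the observed
        # reward plus the discounted value of the next state.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# The learned policy: the best action in each non-goal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

After training, the learned policy moves right in every cell, purely from reward feedback — no one ever told the agent where the goal was. Scaled up with neural networks in place of the table, this is the family of techniques Barto and Sutton's award recognizes.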

“Releasing software to millions of people without safeguards is not good engineering practice,” Barto told The Financial Times. “Engineering practice has evolved to try to mitigate the negative consequences of technology, and I don’t see that being practised by the companies that are developing.”

Yoshua Bengio and Geoffrey Hinton — two of the “godfathers of AI” and previous Turing Award recipients themselves — have voiced similar criticisms of unsafe AI development. In 2023, a group of top AI researchers, engineers, and CEOs, including OpenAI CEO Sam Altman, issued a statement warning that “mitigating the risk of extinction from AI should be a global priority.”

Barto called out AI companies for being “motivated by business incentives” instead of focusing on advancing AI research. OpenAI, which has made repeated promises to improve AI safety and briefly ousted Altman, in part, for “over commercializing advances before understanding the consequences,” announced plans in December to transform itself into a for-profit company.
