In downtown Berkeley, an old hotel has become a temple to the pursuit of artificial intelligence and the future of humanity. Its name is Lighthaven.
Covering much of a city block, this gated complex includes five buildings and a small park dotted with rose bushes, stone fountains and neoclassical statues. Stained glass windows glisten on the top floor of the tallest building, called Bayes House after an 18th-century mathematician and philosopher.
Lighthaven is the de facto headquarters of a group of people who call themselves the Rationalists. Their interests span mathematics, genetics and philosophy, but one belief overrides the rest: artificial intelligence can deliver a better life, if it doesn't destroy humanity first. And the Rationalists believe it is up to the people building A.I. to ensure that it is a force for the greater good.
The Rationalists were talking about A.I. risks years before OpenAI created ChatGPT, which brought A.I. into the mainstream and turned Silicon Valley on its head. Their influence has quietly spread through many tech companies, from industry giants like Google to A.I. pioneers like OpenAI and Anthropic.
Many of the A.I. world’s biggest names — including Shane Legg, a co-founder of Google’s DeepMind; Anthropic’s chief executive, Dario Amodei; and Paul Christiano, a former OpenAI researcher who now leads safety work at the U.S. Center for A.I. Standards and Innovation — have been influenced by Rationalist philosophy. Elon Musk, who runs his own A.I. company, has said that many of the community’s ideas align with his own.