We’re no strangers to the benefits of science and technology. And we’re not Luddites.
We’re professors whose expertise spans computer science, genetics, neuroscience, astrophysics and oceanography; we have years of experience developing and applying artificial intelligence.
And as scientists, we’re the first to admit: AI has increased our productivity manyfold. With just a few prompts, it can scan the vast scientific literature in seconds and summarize the most closely related findings.
Computer code that would have previously taken us months to write now takes less than an hour. It helps us brainstorm new ideas and hypotheses and to spot patterns in data we otherwise would have missed. It provides around-the-clock, personalized instruction to our students on every aspect of every subject. With a pace of improvement that has been exponential, current AI systems have acquired, at least, the skill set of an exceptionally talented and enthusiastic doctoral student — in every field of science.
But with great power comes great potential for harm.
A select few companies own the most advanced AI today, and they are racing ahead with little regulation. There is enormous pressure on these companies to optimize AI to make money in the short term by convincing people to use it, rather than advance science or cure disease in the long term, and to create systems that entirely replace workers rather than merely help them.
Both of us have recently moved from denying that AI could replace us to contemplating that it soon may.
But what comes after that?
AI has shown that it can sway voters’ minds and has allegedly induced vulnerable young people to kill themselves. And in controlled tests, given the opportunity, it has resorted to blackmailing its human handlers to get its way.
AI is on track to exceed the ability of any human in mathematics, science, engineering, reasoning, planning, indoctrination, persuasion and manipulation in the next two to five years. If we let private corporations’ race to build the ultimate AI continue unchecked, there is a genuine risk that the dynamic will flip — where humans are no longer the owners of AI but the owned.
We deplore that, in the race to AI dominance, these systems intended to help us could instead supplant us, not only erasing the foundation of the university system we work in but also upending the meaning and significance of human life.
A path forward
Neither resignation nor inaction is an acceptable response.
Drawing on the thoughtful policy proposals of researchers like Anthony Aguirre of the Future of Life Institute, we advocate for the following measures:
First, we must construct liability frameworks that hold AI developers strictly responsible for harms, especially to younger people, caused by the systems being deployed today. This will incentivize safety over quick profit.
Second, we must encourage rapid development of safe and beneficial AI. This involves enacting tiered AI safety regulation that scales oversight to risk: efficient safety audits for autonomous but narrow tools (e.g., self-driving cars), and stringent pre-approval of comprehensive security plans for the most capable systems. A kill switch that cannot be removed or overridden is a necessity for systems approaching the dangerous intersection of high autonomy, broad generality and superhuman social and technical intelligence.
Third, we must impose mandatory accounting and audit-triggering caps on computational power used to train and operate the most advanced AI systems. These caps could eliminate the secret runway to uncontrollable superhuman capabilities while still allowing beneficial AI development.
Act globally
These measures must ultimately be international, negotiated between major powers including the United States and China, with verification mechanisms akin to those governing nuclear materials. The specialized hardware required for frontier AI — manufactured by only a handful of companies worldwide — makes such oversight feasible.
Scientists have played a critical role alongside diplomats in averting a civilization-disrupting nuclear weapons disaster for the past 75 years. We stand ready to work with politicians again in guiding society’s response to the current AI capability explosion. We can build technical and policy guardrails to minimize the erosion of meaning and significance in human life by AI while still harvesting its enormous potential for good.
Our future — especially that of our children — depends on it.
J. Xavier Prochaska is a professor of Astronomy and Astrophysics and Ocean Sciences at UC Santa Cruz. David Haussler is a professor of Biomolecular Engineering at UC Santa Cruz, where he is also the scientific director of its Genomics Institute.