A Privacy Hero’s Final Wish: An Institute to Redirect AI’s Future


Yesterday, hundreds in Eckersley’s community of friends and colleagues packed the pews for an unusual sort of memorial service at the church-like sanctuary of the Internet Archive in San Francisco—a symposium with a series of talks devoted not just to remembrances of Eckersley as a person but also to a tour of his life’s work. Facing a shrine to Eckersley at the back of the hall filled with his writings, his beloved road bike, and some samples of his Victorian goth punk wardrobe, Turan, Gallagher, and 10 other speakers gave presentations about Eckersley’s long list of contributions: his years pushing Silicon Valley toward better privacy-preserving technologies, his co-founding of a groundbreaking project to encrypt the entire web, and his late-life pivot to improving the safety and ethics of AI.

The event also served as a kind of soft launch for AOI, the organization that will now carry on Eckersley’s work after his death. Eckersley envisioned the institute as an incubator and applied laboratory that would work with major AI labs to take on the problem Eckersley had come to believe was, perhaps, even more important than the privacy and cybersecurity work to which he’d devoted decades of his career: redirecting the future of artificial intelligence away from the forces causing suffering in the world, toward what he described as “human flourishing.”

“We need to make AI not just who we are, but what we aspire to be,” Turan said in his speech at the memorial event, after playing a recording of the phone call in which Eckersley had recruited him. “So it can lift us in that direction.”

The mission Eckersley conceived of for AOI emerged from a growing sense over the last decade that AI has an “alignment problem”: that its evolution is hurtling forward at an ever-accelerating rate, but with simplistic goals that are out of step with humanity’s health and happiness. Instead of ushering in a paradise of superabundance and creative leisure for all, Eckersley believed that, on its current trajectory, AI is far more likely to amplify all the forces that are already wrecking the world: environmental destruction, exploitation of the poor, and rampant nationalism, to name a few.

AOI’s goal, as Turan and Gallagher describe it, is not to try to restrain AI’s progress but to steer its objectives away from those single-minded, destructive forces. They argue that’s humanity’s best hope of preventing, for instance, hyperintelligent software that can brainwash humans through advertising or propaganda, corporations with god-like strategies and powers for harvesting every last hydrocarbon from the earth, or automated hacking systems that can penetrate any network in the world to cause global mayhem. “AI failures won’t look like nanobots crawling all over us all of a sudden,” Turan says. “These are economic and environmental disasters that will look very recognizable, similar to the things that are happening right now.”

Gallagher, now AOI’s executive director, emphasizes that Eckersley’s vision for the institute wasn’t that of a doomsaying Cassandra, but of a shepherd that could guide AI toward his idealistic dreams for the future. “He was never thinking about how to prevent a dystopia. His eternally optimistic way of thinking was, ‘how do we make the utopia?’” she says. “What can we do to build a better world, and how can artificial intelligence work toward human flourishing?”
