Salamon, Anna, and Luke Muehlhauser. 2012. Singularity Summit 2011 Workshop Report.
Technical Report, 2012-1. The Singularity Institute, San Francisco, CA.
The Machine Intelligence Research Institute was previously known as the Singularity Institute.
Workshop Participants
Anna Salamon (organizer)
Nick Beckstead
David Brin
David Chalmers
Paul Christiano
Kevin Fischer (Entrepreneur)
Alexander Funcke
Julia Galef (Science writer)
Phil Goetz (Researcher, J. Craig Venter Institute)
Katja Grace
Robin Hanson
Louie Helm
Jey Kottalam
Nathan Labenz (Co-founder, Stik.com)
Zvi Mowshowitz (Co-founder, Variance)
Luke Nosek (Co-founder, PayPal)
Carl Shulman (Researcher, Singularity Institute)
Tim Sullivan (Co-founder, CO2Stats)
Jaan Tallinn (Co-founder, Skype)
Michael Vassar
Alexander Wissner-Gross
Eliezer Yudkowsky
Executive Summary
The Singularity Summit 2011 Workshop was held at the Marriott Courtyard Hotel in
New York City on October 17–18, 2011. Participants discussed many topics in small
and large groups. This report includes just two of the results:
1. The results of the longest single discussion, about whole brain emulation.
2. A list of questions to which participants said they would like to know the answer.
estimate from a weighted average of others' estimates, weighted by his estimate of each
participant's expertise. His estimates were a roughly 14% chance of a win if WBE
or neuromorphic AI comes first and, coincidentally, a roughly 14% chance of a win if de
novo AI (Friendly AI or not) comes first.
The original plan for deciding whether to recommend the acceleration of WBE-relevant tech was to figure out which win probability was larger: prob(win | WBE or
neuromorphic AI first) or prob(win | de novo AI first).
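The aggregation rule described above, an expertise-weighted average of the participants' probability estimates, can be sketched as follows (the estimates and weights below are hypothetical illustrations, not the workshop's actual numbers):

```python
# Sketch of an expertise-weighted average of probability estimates.
# The function is generic; the sample inputs are hypothetical.

def weighted_estimate(estimates, weights):
    """Combine per-participant probability estimates using expertise weights."""
    total = sum(weights)
    return sum(p * w for p, w in zip(estimates, weights)) / total

# Hypothetical estimates of prob(win | WBE or neuromorphic AI first),
# with hypothetical expertise weights for three participants:
estimates = [0.10, 0.20, 0.12]
weights = [2.0, 1.0, 1.0]

print(round(weighted_estimate(estimates, weights), 3))  # 0.13
```

The same function would be applied separately to the prob(win | de novo AI first) estimates, and the two aggregates compared.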
The tiebreaker consideration deemed most important was timelines. Pushing on
WBE-related tech would, if anything, cause machine superintelligence to be created
sooner. All else being equal, sooner timelines would seem to decrease the probability of
a win scenario. Thus, despite many shortcomings in the design of this expert elicitation,
the group's current collective best guess, given the considerations evaluated so far, is that
humanity should not accelerate WBE-related tech.
Tough Questions
At another point during the workshop, participants wrote down a list of questions for
which knowing the answer would plausibly affect their immediate actions. Below is a
complete list of the questions.
Can you measure curiosity? How do you increase it?
Do values converge?
How well does dual n-back work?
Can large groups be genuinely goal-seeking, rather than simply continuing to do
what the group already does?
How much does rationality help ventures?
How plausible are certain AI architectures? When should they arrive?
How do the benefits of rationality training compare to other interventions like
altruism training?
How hard is it to create Friendly AI?
What growth rates can be sustained?
How strong is the feedback from neuroscience to AI, as opposed to brain emulation?
Is synthetic biology easier, and likely to become a threat sooner, than we realize?