


The potentially cosmic expected utility[1] which can be gained from reducing Existential Risk makes this intervention a candidate for being a top utilitarian priority. -- Jesper.ostman 04:54, 30 December 2009 (UTC)
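As a rough sketch of this expected-utility argument, here is a toy calculation; every number in it (future population, achievable risk reduction, donation size) is an assumption chosen purely for illustration, not a figure taken from the cited sources.

```python
# Toy expected-value calculation for existential-risk reduction.
# All numbers are illustrative assumptions, not estimates from any source.

future_lives = 1e16      # assumed number of worthwhile future lives if humanity survives
risk_reduction = 1e-9    # assumed reduction in extinction probability bought by the donation
donation = 1_000.0       # assumed donation size in dollars

expected_lives_saved = future_lives * risk_reduction      # 10,000,000
lives_saved_per_dollar = expected_lives_saved / donation  # 10,000

print(f"Expected lives saved: {expected_lives_saved:,.0f}")
print(f"Expected lives saved per dollar: {lives_saved_per_dollar:,.0f}")
```

Even with a vanishingly small probability of making a difference, the assumed size of the future makes the expected value per dollar enormous; this is the sense in which the payoff is "potentially cosmic".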

The Singularity Institute for Artificial Intelligence

The Singularity Institute for Artificial Intelligence (SIAI) researches the singularity, among other futurist and rationalist topics.

Own statements

"Offering unusually good philanthropic returns — meaning greater odds of a positive Singularity and lesser odds of human extinction" -SIAI [2]

"Among SIAI's core aims is to continue studying "Friendly AI": AI that acts benevolently because it holds goals aligned with human values" -SIAI [3]

"we've been doing more than laying the groundwork for Friendly AI. We've been raising the profile of AI risk and Singularity issues in academia and elsewhere, forming communities around enhancing human rationality, and researching other avenues that promise to reduce the most severe risks the most effectively." -SIAI [4]

Evaluation

Here is a list of what SIAI did in 2009: 2009 SIAI Accomplishments.

As of 2010, SIAI has stopped its suboptimal coding projects and begun focusing on publishable academic research, with the goal of breaking into academic communities and potentially attracting major-donor attention (e.g., a Bill Joy or Gates donation at some point, or planting the idea with larger government agencies or foundations for, say, a $1 billion AI-risk project done outside SIAI).

The case for and against donating

  • Summary. If readers are looking for a public charity where they might donate, I would provisionally, based on my current knowledge, recommend contributing toward research grants at the Singularity Institute for Artificial Intelligence (SIAI). While I have a few concerns about possible outcomes of friendly artificial intelligence, I largely support the effort. Moreover, much of the research that SIAI intends to do would be extremely valuable for those interested in preventing vast amounts of suffering in the multiverse; from the perspective of maximizing your donation's expected value, I think research organizations like SIAI win hands-down. SIAI's donation page allows visitors to make a small, quick contribution; donors contemplating a larger gift might consider funding a particular research project of high value for utilitarians, and should contact the team to discuss the options in this area further.[5]
  • Felicifia: "Reasons SIAI (and research generally) is not optimal?" [6]
  • The possibility of earmarking money for a certain program (or even proposing a new one). By earmarking donations it may be possible to fund projects with an even higher expected utility than the average SIAI program:

Earmarked donations

If you make a donation to the Singularity Institute, you can choose which grant proposal your donation should help to fill. Any time a grant proposal is fully funded, it goes into our “active projects” file: it becomes a project that we have money enough to fund, and that we are publicly committed to funding. (Some of the projects will go forward even without earmarked donations, with money from the general fund — but many won’t, and since our work is limited by how much money we have available to support skilled staff and Visiting Fellows, more money allows more total projects to go forward.) [7]

"Any remaining money allocated to partially funded grants on Feb 28 (at the close of the Challenge Campaign) will be returned to the general fund."[8]

Some work has been done on estimating which matching donations would have the highest utility; see the discussion on the Reducing Suffering blog.

These are past grant proposals. [9]

Objection to earmarked donations as providing higher expected utility than general donations to SIAI

The possibility of funding particular research programs may give donors an illusion of control. This is because money earmarked to a certain program frees up non-earmarked money that would have been spent on that program even without the earmarked donation. Compare the discussion of restricted funding on the GiveWell blog. As GiveWell points out, this problem is avoided if the organisation has no unrestricted funding. However, that does not appear to be the case for SIAI. [10]
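A minimal numeric sketch of this fungibility worry, using hypothetical amounts:

```python
# Toy illustration of fungibility: an earmarked gift that covers spending the
# organisation already planned mostly frees up unrestricted money instead.
# All amounts are hypothetical.

planned_on_program_x = 20_000  # assumed amount the org would spend on program X anyway
earmarked_gift = 20_000        # donor earmarks this amount for program X

freed_general_funds = min(earmarked_gift, planned_on_program_x)
extra_for_program_x = earmarked_gift - freed_general_funds

print(f"General funds freed for other uses: {freed_general_funds}")              # 20000
print(f"Additional funding program X actually receives: {extra_for_program_x}")  # 0
```

On these numbers the earmarked gift behaves exactly like an unrestricted donation, which is the GiveWell point; the reply below concerns cases where the project would not have been funded from general funds anyway.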

One reply is that it is uncertain how many of the proposed papers would be funded without earmarked money; the objection is strongest if we have good reason to believe that all of them would be. SIAI's own claim that "many won't" suggests there is some risk that papers go unfunded without earmarking. The problem can also be avoided entirely by proposing a new project: SIAI has been known to pursue donor-proposed projects that it would not otherwise have pursued.

Timing of donations

SIAI collects statistics on its own operations, and it estimated a 100% internal rate of return over the past 12 months or so. This may be biased high, but it does suggest that market rates of return are not good rules of thumb when considering projects like this. On the other hand, implicit rates of return on personal wisdom can be even larger, even a million or a billion percent.
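To see why a claimed 100% internal rate of return, taken at face value, swamps ordinary market returns, here is a rough one-year comparison; the 7% market rate is an assumption for illustration, not a figure from SIAI.

```python
# One-year comparison of a claimed 100% internal rate of return with an
# assumed ~7% market return (both figures illustrative).

donation = 1_000.0
claimed_irr = 1.00     # 100% per year, per SIAI's own estimate (possibly biased high)
market_return = 0.07   # assumed typical market rate of return

impact_after_year = donation * (1 + claimed_irr)      # 2000.0 of impact-equivalent value
invested_after_year = donation * (1 + market_return)  # 1070.0 available to donate later

print(impact_after_year, invested_after_year)
```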

  • It may be more effective to invest now and donate later than to donate now (see the sketch after this list). [11]
  • It may be beneficial to wait for more knowledge about the expected utility of different causes before donating. If one's predictions about which donations would maximize expected utility have fluctuated in recent years, or if their expected utility (judged by one's present state of belief) has kept increasing substantially, this argument may be strong. On the other hand, if they have converged more and more on SIAI, it would be weaker. This objection can be avoided by funding research that would increase this knowledge, such as the Academic Paper Grant: AI Risks Philanthropy: How Many Lives Can We Save per Dollar?. That is what Gaverick advocates in his piece.
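The trade-off in the first bullet above can be made concrete with a small sketch. The rates below are assumptions chosen only to show the mechanics; nothing here estimates SIAI's actual returns.

```python
# Toy comparison of "donate now" versus "invest and donate later".
# Which option wins depends on whether the charity's rate of return exceeds
# the rate the money earns while invested. All rates are assumptions.

def donate_now(amount: float, charity_rate: float, years: int) -> float:
    """Impact if donated immediately and compounded at the charity's rate."""
    return amount * (1 + charity_rate) ** years

def invest_then_donate(amount: float, market_rate: float,
                       charity_rate: float, years: int, wait: int) -> float:
    """Impact if invested for `wait` years at the market rate, then donated."""
    grown = amount * (1 + market_rate) ** wait
    return grown * (1 + charity_rate) ** (years - wait)

amount, horizon, wait = 1_000.0, 10, 5
print(donate_now(amount, charity_rate=0.30, years=horizon))            # ~13,786
print(invest_then_donate(amount, market_rate=0.07,
                         charity_rate=0.30, years=horizon, wait=wait))  # ~5,208
```

On these assumed rates donating now wins; if the market rate exceeded the charity's rate of return, the ordering would reverse.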

Objections to donating

  • Non-utilitarian motivations?
  • "Among SIAI's core aims is to continue studying Friendly AI: AI that acts benevolently because it holds goals aligned with human values[12]. This could be a problem from a utilitarian perspective. Human values and utilitarian values need not coincide, even if it is likely that they do. If the motivations of the people working in SIAI are not completely utilitarian, their work promoting FAI may assume a non-utilitarian definition of 'friendly'. If their research will influence future AI development this could at worst lead to a future controlled by beings with non-utilitarian motivations, something which could lead to disutility at a cosmic scale. On the other hand, if human values are thought to approximate utilitarianism, then such a proposal could be the most utilitarian kind of AI proposal, because it is popular and thus more likely to succeed than purely utilitarian AI.
  • The focus on FAI

If much of SIAI's resources goes toward research aimed at preventing unfriendly AI, and other existential risks are more likely or cheaper to reduce, it may be unwise to donate to SIAI.

Alternatives to SIAI

The Future of Humanity Institute and possibly the Lifeboat Foundation are alternatives to SIAI when it comes to reducing existential risk. However, these organisations share many collaborators. To what degree are they distinct?

Does Lifeboat achieve very much? SIAI has given many utilitarians world-view-changing "aha" moments, and Nick Bostrom (of the Future of Humanity Institute) has some mainstream publicity (TED talks, Nature commentaries, etc.).

Further Reading

SIAI Homepage

SIAI Wikipedia Entry

