Nick Bostrom

  • Superintelligence may be the last invention humans ever need to make.
  • Technological progress in all other fields will be accelerated by the arrival of advanced artificial intelligence.
  • Superintelligence will lead to more advanced superintelligence.
  • Artificial minds can be easily copied.
  • Emergence of superintelligence may be sudden.
  • Artificial intellects are potentially autonomous agents.
  • Artificial intellects need not have humanlike motives.
  • Artificial intellects may not have humanlike psyches.

For all of these reasons, one should be wary of assuming that the emergence of superintelligence can be predicted by extrapolating the history of other technological breakthroughs, or that the nature and behaviors of artificial intellects would necessarily resemble those of human or other animal minds.

3 thoughts on “Nick Bostrom”

  1. shinichi Post author

    Ethical Issues in Advanced Artificial Intelligence

    by Nick Bostrom

    (2003)

    http://www.nickbostrom.com/ethics/ai.html

    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive pursuit, a superintelligence could also easily surpass humans in the quality of its moral thinking. However, it would be up to the designers of the superintelligence to specify its original motivations. Since the superintelligence may become unstoppably powerful because of its intellectual superiority and the technologies it could develop, it is crucial that it be provided with human-friendly motivations. This paper surveys some of the unique ethical issues in creating superintelligence, discusses what motivations we ought to give a superintelligence, and introduces some cost-benefit considerations relating to whether the development of superintelligent machines ought to be accelerated or retarded.

  2. shinichi Post author

    SUPERINTELLIGENCE IS DIFFERENT

    A prerequisite for having a meaningful discussion of superintelligence is the realization that superintelligence is not just another technology, another tool that will add incrementally to human capabilities. Superintelligence is radically different. This point bears emphasizing, for anthropomorphizing superintelligence is a most fecund source of misconceptions.

    Let us consider some of the unusual aspects of the creation of superintelligence:

    ・ Superintelligence may be the last invention humans ever need to make.

    Given a superintelligence’s intellectual superiority, it would be much better at doing scientific research and technological development than any human, and possibly better even than all humans taken together. One immediate consequence of this fact is that:

    ・ Technological progress in all other fields will be accelerated by the arrival of advanced artificial intelligence.

    It is likely that any technology that we can currently foresee will be speedily developed by the first superintelligence, no doubt along with many other technologies of which we are as yet clueless. The foreseeable technologies that a superintelligence is likely to develop include mature molecular manufacturing, whose applications are wide-ranging:

     a) very powerful computers

     b) advanced weaponry, probably capable of safely disarming a nuclear power

     c) space travel and von Neumann probes (self-reproducing interstellar probes)

     d) elimination of aging and disease

     e) fine-grained control of human mood, emotion, and motivation

     f) uploading (neural or sub-neural scanning of a particular brain and implementation of the same algorithmic structures on a computer in a way that preserves memory and personality)

     g) reanimation of cryonics patients

     h) fully realistic virtual reality

    ・ Superintelligence will lead to more advanced superintelligence.

    This results both from the improved hardware that a superintelligence could create, and also from improvements it could make to its own source code.

    ・ Artificial minds can be easily copied.

    Since artificial intelligences are software, they can easily and quickly be copied, so long as there is hardware available to store them. The same holds for human uploads. Hardware aside, the marginal cost of creating an additional copy of an upload or an artificial intelligence after the first one has been built is near zero. Artificial minds could therefore quickly come to exist in great numbers, although it is possible that efficiency would favor concentrating computational resources in a single super-intellect.

    ・ Emergence of superintelligence may be sudden.

    It appears much harder to get from where we are now to human-level artificial intelligence than to get from there to superintelligence. While it may thus take quite a while before we get superintelligence, the final stage may happen swiftly. That is, the transition from a state where we have a roughly human-level artificial intelligence to a state where we have full-blown superintelligence, with revolutionary applications, may be very rapid, perhaps a matter of days rather than years. This possibility of a sudden emergence of superintelligence is referred to as the singularity hypothesis.

    ・ Artificial intellects are potentially autonomous agents.

    A superintelligence should not necessarily be conceptualized as a mere tool. While specialized superintelligences that can think only about a restricted set of problems may be feasible, general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent.

    ・ Artificial intellects need not have humanlike motives.

    Humans are rarely willing slaves, but there is nothing implausible about the idea of a superintelligence having as its supergoal to serve humanity or some particular human, with no desire whatsoever to revolt or to “liberate” itself. It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies.

    ・ Artificial intellects may not have humanlike psyches.

    The cognitive architecture of an artificial intellect may also be quite unlike that of humans. Artificial intellects may find it easy to guard against some kinds of human error and bias, while at the same time being at increased risk of other kinds of mistake that not even the most hapless human would make. Subjectively, the inner conscious life of an artificial intellect, if it has one, may also be quite different from ours.

    For all of these reasons, one should be wary of assuming that the emergence of superintelligence can be predicted by extrapolating the history of other technological breakthroughs, or that the nature and behaviors of artificial intellects would necessarily resemble those of human or other animal minds.

