This article pointed me toward the book Superintelligence, which in turn led me to a shorter document by its author, quoted below. Anyone wondering what to gift me this Christmas can look into the linked book above. I enjoy being terrified by other people’s thoughts.

Two things strike me about the quote below:

  1. The future of humanity could be decided by algorithms, iterated through countless generations of other machine-generated algorithms, all beginning with something being coded today (hopefully without any bugs or typos).
  2. The implication that superintelligence would eradicate human invention.

The first point is terrifying. I’d like to believe that if such a superintelligence is brought forth, it would be smart enough to fix any bugs or major design flaws in the original. Of course, I assume that what a superintelligence wants and what mere human intelligences want will differ in profound ways. What then?

The second point is, I believe, wrong (assuming we’re using the word ‘invention’ similarly). Unless humanity has been exterminated by this superintelligence, invention will not cease. The more fictional forms of invention (e.g. art) should flourish. I strongly believe that humans are a necessary component of art: creation, reception, critique, categorization, and the rest all require human beings.

Superintelligence, if/when it materializes, will spur a Renaissance in human artistic production.

<that’s what I think anyway>

Superintelligence would be the last invention biological man would ever need to make, since, by definition, it would be much better at inventing than we are. All sorts of theoretically possible technologies could be developed quickly by superintelligence — advanced molecular manufacturing, medical nanotechnology, human enhancement technologies, uploading, weapons of all kinds, lifelike virtual realities, self-replicating space-colonizing robotic probes, and more. It would also be super-effective at creating plans and strategies, working out philosophical problems, persuading and manipulating, and much else beside.

It is an open question whether the consequences would be for the better or the worse. The potential upside is clearly enormous; but the downside includes existential risk. Humanity's future might one day depend on the initial conditions we create, in particular on whether we successfully design the system (e.g., the seed AI's goal architecture) in such a way as to make it "human-friendly" — in the best possible interpretation of that term.

via Nick Bostrom.