Futurist, Technology

Google … Might Save Humanity From Extinction

The headline over at Huff Post Tech actually reads “Google’s New A.I. Ethics Board Might Save Humanity From Extinction” and the article is filled with the typical fear-mongering nonsense — BUT the predominant side-effect of a well-funded, high-profile, COMPETENT Ethics Board could well mitigate a world of pain . . . . (Hopefully some of them will be named soon enough that they can be invited to our Implementing Safe Selves portion of the AAAI Spring Symposium series at Stanford, March 24-26.)

There’s a lot of traction for the “machines annihilate humanity” storyline — but a clear-headed look at reality shows that it is about as scientifically credible as the “Three Laws” underlying Isaac Asimov’s robot stories. It can produce great science FICTION stories that are enjoyable while also allowing us to examine the “what-ifs” of various SOCIAL circumstances (cf. numerous stories ranging from Mary Shelley’s Frankenstein to Helene Wecker’s recent The Golem and the Jinni) — but a “Terminator scenario” is about as likely as giant radioactive insects or Godzilla devastating cities. Indeed, most of the hype/debate surrounding BOTH sides of the machine intelligence question is best viewed through the lens of Goodkind’s first rule that people “will believe a lie because they want to believe it’s true, or because they are afraid it might be true.” Yes — if you ASSUME two extremely unlikely circumstances, it isn’t totally impossible — but acting rashly to avoid it is as ridiculous as refusing to allow the use of cars in order to prevent fatal accidents.

Both these circumstances are necessarily featured in James Barrat’s (otherwise highly recommended) Our Final Invention. The first is that “Each year AI’s cognitive speed and power doubles — ours does not.” While this is obviously (trivially) true of our core hardware (our brain), it is just as clearly not true of our extended computing power — even assuming that no one opts to take advantage of future scientific/engineering developments. For every intelligence, human or artificial, there is going to be a core mind and there are going to be extended computing tools. Yes, an advanced AI will have faster access to and control over that extended computing — but that is NOT going to be a major, much less an insurmountable, advantage. “Ranting survivalists” (I love that term from the article) will argue that, with a hardware overhang and insufficient safeguards, an artificial core mind could rapidly invent tools that it wouldn’t share with us and thereby propel itself to an insurmountable advantage — but that requires another extremely unlikely combination of four more assumptions that can easily be avoided by competently responsible professionals.

The second extremely unlikely circumstance is that our mind children will ever be “indifferent” to our survival. This “indifference scenario” has replaced the “terminator scenario” — now recognized as scientifically implausible, and running from Colossus: The Forbin Project to Fred Saberhagen’s Berserker series to WarGames to the ongoing Terminator franchise — in much the same way that intelligent design has replaced creationism (i.e., imperfectly with the masses, but by trying to claim some scientific credibility). “Ranting survivalists” now argue that, contrary to the evidence of our own evolutionary trajectory in our attitudes towards “lesser creatures”, we can’t count on ethics/morality to be reliably beneficial enough to offset their own short-sighted, selfishness-driven expectation that “without explicit goals to the contrary, AIs are likely to behave like human sociopaths in their pursuit of resources.” Indeed, the Singularity Institute for Artificial Intelligence (now the Machine Intelligence Research Institute) was founded to insist that “Friendly AI” must be enslaved to some mythical “human values” — apparently without realizing (or acknowledging) that doing so would give logical grounds for our elimination due to our own inhumane selfishness towards others.

A high-quality Google A.I. Ethics Board could do a tremendous amount of good by furthering a grounded ENGINEERING discussion of how to resolve issues reaching far beyond machine intelligence. But it is extremely concerning that the article mentions only names like Shane Legg, James Barrat, the Cambridge Centre for Existential Risk, Nick Bostrom, Jaan Tallinn and others who constantly peddle the snake oil of “AI overlords” — many of whom lack any real AI credentials themselves and claim that insufficient research is being done while ignoring invitations to scientific conferences like the one mentioned above. Instead of mimicking Chicken Little’s fear-mongering sensationalism (AI is the “number 1 risk for this century”, no less than an “extinction-level” threat to “our species as a whole”), I have long argued that we need to get a grip, accept the very relevant functional definition of morality outlined in the Handbook of Social Psychology, 5th edition, and rapidly move forward with it:

Moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make cooperative social life possible.

Those last ten words are the solution to the “values” problem and the sole top-level goal necessary to ensure survivable, if not optimal, intelligent behavior. One of Barrat’s best points is the necessity of avoiding “the concentration of excessive power” — but it is even more necessary to avoid any power that is not regulated by morality (or moral values). From sociopathic corporations to government oppression to man’s treatment of “lesser beings”, rain forests, people who don’t exactly reflect our views, and machines — we have an ethics problem that is *heavily* impacting our lives. Rationally determining ethics (presumably for, and using the test case of, artificial intelligences) may be our best hope of surviving humanity’s increasingly terrifying adolescence.

This article was originally posted on Mark Waser’s blog Becoming Gaia.