
Author post

Artificial Stupids

One of the hoariest of science fictional archetypes is the idea of the artificial intelligence — be it the tin man robot servant, or the murderous artificial brain in a box that is HAL 9000. And it’s not hard to see the attraction of AI to the jobbing SF writer. It’s a wonderful tool for exploring ideas about the nature of identity. It’s a great adversary or threat (‘War Games’, ‘The Forbin Project’), it’s a cheap stand-in for alien intelligences — it is the Other of the mind.

The only trouble is, it doesn’t make sense.

Not only is SF as a field full of assumed impossibilities (time machines, faster-than-light space travel, extraterrestrial intelligences): it’s also crammed with clichés that are superficially plausible but which don’t hang together when you look at them too closely. Take flying cars, for example: yes, we’d all love to have one — right up until we pause to consider what happens when the neighbour’s 16-year-old son goes joyriding to impress his girlfriend. Not only is flying fuel-intensive, it’s difficult, and the failure mode is extremely unforgiving. Which is why we don’t have flying cars. (We have flying buses instead, but that’s another matter.) Food pills outlived their welcome: I think they were an idea that only made sense in the gastronomic wasteland of post-war austerity English cuisine. I submit that AI is a similar pipe dream.

This is not to say that we’re never going to know what the basis of consciousness is, or that it’s going to turn out to be non-computable. If anything, the prospects for scientists working to discover just how our own brains work to generate this strange phenomenon are very promising. While the initial direction of research in artificial intelligence (from the 1960s to the 1980s) turned out to be frustratingly inapplicable — consciousness does not, it appears, run on easily encoded symbolic logic — we’ve developed a lot of useful techniques along the way. Indeed, computer scientists joke that if we know how to do something it isn’t artificial intelligence any more. The world’s best chess player is a computer program; likewise the champion of the TV quiz show Jeopardy. Meanwhile, neurobiologists are mapping and decoding the deep structure of our brains using a variety of non-invasive imaging techniques, and our understanding of how brains are put together at the neural level is deepening rapidly.
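
To make the chess point concrete: game-playing programs win by brute-force search, not by anything resembling thought. Here is a toy sketch of minimax search, applied to Nim (take one to three stones; whoever takes the last stone wins) rather than chess; real engines add pruning and evaluation heuristics, but the principle is the same:

```python
# Toy minimax over Nim. Classic chess engines scale this same brute-force
# search up with pruning and handcrafted evaluation: exhaustive
# calculation, with nothing resembling consciousness anywhere in it.

def minimax(stones, maximizing):
    """Score a position: +1 if the maximizing player can force a win, -1 if not."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Choose the number of stones to take that maximizes the guaranteed outcome."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, maximizing=False))

print(best_move(10))  # -> 2: leaving 8 stones is a lost position for the opponent
```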

But knowing how to build a flying car is not the same thing as making a business case for mass producing them. And knowing how human minds work isn’t the same as making a case for deploying synthetic minds in software.

For one thing, there are huge ethical problems associated with attempting to simulate a human brain, or building a piece of software that could become self-aware. If you terminate a conscious program, are you committing murder? Quite possibly. Worse: if you use genetic algorithms to evolve a conscious AI, iteratively spawning hopeful mutants and then reaping them to converge on a final goal, are you committing genocide? (Australian SF author Greg Egan reluctantly came to this conclusion a couple of years ago: I can’t fault his logic.) And if you create an AI solely for the purpose of doing some kind of cognitive function, does this amount to slavery?
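
For concreteness, the spawn-and-reap loop in question is the standard genetic-algorithm template. A minimal sketch, evolving bit-strings toward an arbitrary all-ones target (the fitness function and parameters are invented for illustration):

```python
import random

# Minimal genetic algorithm: spawn hopeful mutants, score them, reap the
# rest. Egan's worry applies if the individuals being reaped were conscious.

GENOME_LENGTH = 32                      # goal: evolve an all-ones bit-string

def fitness(genome):
    return sum(genome)                  # number of bits matching the target

def mutate(genome, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(50)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LENGTH:
        break                           # converged on the goal
    survivors = population[:10]         # selection: the fittest few live on...
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]   # ...the rest are reaped
print(f"best fitness {fitness(population[0])} after {generation + 1} generations")
```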

These questions ought to give researchers pause — and earn them a swift referral to the nearest academic ethics committee. But leaving aside ethics — positing for the time being that any use we make of a software intelligence is no more morally questionable than using a spreadsheet — there are further problems. Consciousness appears to be an emergent property of a bunch of converging non-conscious processes within the brain. And it’s not a primary actor — rather, it’s a monitoring and coordinating function. It comes with a whole bunch of undesirable characteristics. Conscious minds experience boredom and emotional upsets (and it appears emotions play a fairly significant role in generating consciousness). They are self-motivated and go off on wild goose chases. If we ever could produce a true artificial intelligence in a box, we’d probably find it utterly useless for any productive purpose — more inclined to watch reality TV all day, troll the internet, or invent crankish new religions than to open the pod bay doors on demand.


  1. Mike

    July 8, 2011
    at 10:38 am

    A succinct summation, but I think one ought to try to distinguish between a “true” AI (whatever that means) and a clever stimulus-response program capable of passing a Turing Test on steroids; an expert system of a sort. I don’t see the second sort being all that far-fetched even with current technology, and once the novelty wears off, will the average person know or care that it isn’t a “real” AI? If it helps them deal with the problems of the moment, does it matter? At what point does the simulation become effectively indistinguishable from the real thing?
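
Mike’s “stimulus-response program” has a long lineage going back to Weizenbaum’s ELIZA: pattern-matched canned replies with no inner life at all. A toy sketch of the idea (the rules are invented for illustration, not a claim about any real system):

```python
import random
import re

# Toy stimulus-response chatbot in the ELIZA tradition: regex patterns
# mapped to canned replies. There is no understanding here at all --
# which is Mike's point: past some threshold, would users know or care?

RULES = [
    (r"\bi need (.+)", ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"\bi am (.+)",   ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\bbecause\b",   ["Is that the real reason?"]),
    (r".*",            ["Please, go on.", "I see.", "Tell me more."]),
]

def respond(text):
    for pattern, replies in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return random.choice(replies).format(*match.groups())

print(respond("I need a real AI"))  # e.g. -> "Why do you need a real AI?"
```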

  2. Roger A

    July 8, 2011
    at 10:43 am

    Interesting, and I don’t think I really have much to add… But it still makes me think a little of “Blindsight” by Peter Watts. Does intelligence have to equal consciousness?…

    What’s stopping us from creating artificial intelligence (that is beneficial to us) but without it being self-aware (conscious)? And if we did, would that sidestep some of the moral problems, or would it still be genocide, murder, etc.? (You’re referring to Egan’s “Zendegi,” I presume.)

  3. rdm

    July 8, 2011
    at 10:52 am

    But in much the same way as letting an AI stop running, having everyone die of old age can be thought of as genocide, if you are in a position to decide whether or not that happens. It’s just… you know, we’re resigned to our fate, or oblivious to it.

    On the flip side, something that behaves like an intelligent person for the purpose of a specific exercise does not need to be an intelligent person for other purposes. And that, I think, is the dichotomy underlying the artificial intelligence mythos.

    Perhaps a lot of this has to do with how you handle issues of anguish.

    • rdm

      July 8, 2011
      at 10:55 am

      I really wish this blog interface would give me a chance to fix my grammatical errors.

  4. NelC

    July 8, 2011
    at 10:55 am

    Can you have intelligence without consciousness? If so, then artificial zombie intelligence seems a likely commercial avenue.

  5. Marcus Rowland

    July 8, 2011
    at 12:45 pm

    The sort of temper tantrums a large complicated machine (e.g. a robotic deep sea miner) might throw really don’t bear thinking about.

  6. dirk bruere

    July 8, 2011
    at 12:52 pm

    The ethical argument against producing genuine AI, with the massive benefits it would bring to its owners, has an obvious riposte: “Meanwhile, in China…”

  7. Michael Kirkland

    July 8, 2011
    at 1:49 pm

    Somehow I don’t imagine Jon Postel foresaw that we’d use his inventions primarily for exchanging pictures of cats. Nor, I presume, did Kernighan, Ritchie, and Thompson expect that theirs would be used to proclaim oneself mayor of Starbucks and fling emotionally distraught avians at pigs.

  8. Joey J

    July 8, 2011
    at 2:37 pm

    I think you make a very strong argument. I, on the other hand, would like to make a counter-argument for the possibility of high-level AI in our near future.

    When authors or even theoretical physicists speak of time travel or faster-than-light travel, they are delving into the realm of fantasy. The rules that govern such things don’t appear to allow either. I realize that some will say that relativistic time dilation is a real phenomenon; though, I remain unconvinced that, other than showing us that clock speed can change within a gravitational field, any change in the flow of time has occurred.

    Artificial intelligence and extraterrestrials fall into a different category for me: one of proven examples. I’m not saying that I’ve seen an extraterrestrial; just that life exists here, and so should exist elsewhere given the statistical probability. Now to the primary subject of this argument: Artificial Intelligence.

    There is what most humans would consider intelligence all around us in nature. So the idea of intelligence doesn’t try to circumvent any fundamental laws that would preclude the possibility of creating it. What do we consider intelligence? I believe that many confuse this notion with the sense of self. Can one argue that ants or bees have a sense of self? Yet most would ascribe an intelligence to each. To Mr. Stross’s point about a “conscious” AI: I, too, don’t see the need for my machines to have a will. Does Artificial Intelligence then need to have a sense of self? I don’t believe so. If we did give a machine a self, then we would go beyond AI to creating a manufactured being.

    But I relish the day that I could converse with my phone or car and it could parse my commands. “Do what I say; don’t think about it.” In this regard, Watson might not be far off the mark.
    So Artificial Intelligence might be changed to Artificial Understanding… AU: where machines can recognize human-level communication and respond with the proper action.

    For those who haven’t read Charles Stross’s works, I highly recommend them. I’ve read them all and have always been entertained. (This is a little insurance should Mr. Stross seek to destroy my post with logic. Be gentle :-D)
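
Joey J’s “Artificial Understanding” is roughly what command-and-control voice interfaces attempt: map an utterance to an intent, then to an action, with no will or self anywhere in the loop. A hypothetical sketch (the intents and replies are invented for illustration):

```python
# Toy "Artificial Understanding": keyword intents mapped to actions.
# Recognition and response only; no will or sense of self required.
# (All intents and replies here are invented for illustration.)

INTENTS = {
    ("call", "dial"):        lambda args: f"dialing {args}...",
    ("play",):               lambda args: f"playing {args}...",
    ("remind", "reminder"):  lambda args: f"setting a reminder: {args}",
}

def understand(command):
    words = command.lower().split()
    for i, word in enumerate(words):
        for keywords, action in INTENTS.items():
            if word in keywords:
                # Everything after the keyword is treated as the argument.
                return action(" ".join(words[i + 1:]))
    return "Sorry, I don't understand."

print(understand("Call home"))       # -> dialing home...
print(understand("Play some jazz"))  # -> playing some jazz...
```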

  9. Christopher Browne

    July 8, 2011
    at 4:13 pm

    The recent changes in rules in the US surrounding automatically controlled vehicles (e.g., “Google cars in Nevada”) point towards what probably would be the way flying cars *would* become practical…

    That is, widely-deployed flying objects need to have decently sophisticated control systems so that they’re NOT dependent on being operated by “human experts.”

    Things have headed in a handling-of-liability direction such that governing bodies like the US FAA have a habit of requiring excruciatingly sophisticated layerings of control systems, each validated against equally wacky layerings of requirements. But that doesn’t necessarily need to be so; it’s a suitable “science fiction” move to change the shape of that legal structure :-).

    Given reasonably sophisticated flight control systems, for some value of “reasonably,” flying cars ought to be plausible. To be sure, there’s a lot more space to fit into when you head into the third dimension than there is on the two dimensions upon which we plop our road systems. (And I realize that area != space; they’re still, at the higher order, the same kind of consumable thing.)

  10. d brown

    July 9, 2011
    at 1:07 am

    I’m not sure how much thinking there is in chess. You gotta have a big brain, but it’s to remember what has been done before and how to re-use it now.

  11. Jonathan Vos Post

    July 10, 2011
    at 12:36 pm

    After I collected my M.S. in Artificial Intelligence and Cybernetics in 1975 (earned in 1974), and after my thesis project software was ripped off by Xerox without crediting or paying my M.S. thesis adviser or his team, I shifted to writing the world’s first Ph.D. dissertation in what is now called Nanotechnology, Synthetic Biology, and Artificial Life. Why?

    Because almost all the A.I. community was pathologically incurious about, and ignorant of, what Biology had been documenting for centuries.

    Charles Stross knows enough Biology to be dangerous, thanks in part to his Pharmacology credentials (credentials living in another thread), and brilliantly cross-fertilizes this with the Software, about which he knows enough to be dangerous (dot-com boom and bust).

    That raises the bar for all other science fiction authors, back to the level set by Professor Isaac Asimov of Boston University Medical School, inventor of the very word “Robotics.”

  12. niczar

    July 11, 2011
    at 6:10 am

    > If you terminate a conscious program, are you committing murder? Quite possibly.

    If you have sex with someone, is it rape? Not if they consent. What about if they don’t care? If it were rape, all marriages would at one point or another involve rape; ergo it’s not rape either.

    Similarly, if you kill someone who very much wants to die, is it murder? The Catholic church thinks so, but most enlightened individuals recognize the right to euthanasia.

    What about someone who doesn’t care whether they die or live? Society protects those who are too vulnerable (the mentally ill, the unconscious, the young …) to be able to express their will in these matters. But it only does so by applying the golden rule, extrapolating what a sane, responsible typical person with all their wits would want in their stead.

    Clearly, a computer program (or whatever contraption) that has no human characteristics but intelligence should not benefit from this extrapolation; it makes no sense. What would you do if you were a computer program? You don’t know; you can’t even imagine it. They’re not like us, at all. They don’t (by default) love children, they don’t suffer, they don’t fear, they don’t have empathy, they don’t care one bit whether all of humanity dies tomorrow in horrible suffering.

    Therefore, in the general case (note the _by default_), it can’t be murder to stop an artificial intelligence. I might be missing something, but it seems blindingly obvious.

    Tangentially, many of the arguments surrounding these questions seem to confuse two very different concepts: artificial life and artificial intelligence. Artificial life need not be intelligent; that should be very obvious, since natural life is hardly ever very intelligent. And artificial intelligence need not be alive.

    If it’s a program, it can be duplicated exactly, ad infinitum, in an instant, and halted/started/restarted/rebooted… or even snapshotted and rolled back in time. What if you copy an artificial intelligence a trillion times, let it think for a split second, and then delete most of those short-lived entities? In fact, you could asymptotically commit infinite “genocide” by duplicating/deleting ever more of them and having them “live” for ever shorter durations. Could anyone say anyone was hurt? No, because it would have no consequences whatsoever.

    • Christopher Hawley

      July 14, 2011
      at 3:08 am

      niczar:  perhaps only the realization of how alien to our viewpoint an actual AI/AS/AU would be is what will prevent us from either a) instantiating a construct/lifeform which could qualify as intelligent, or b) doing so without hardwiring* some analogue of ‘our values’ into said construct, in hope that it might be able to understand us better than we understand it – or ourselves.

      as to the latter argument:  may cthulhu eat our heads before the logical conclusion of THAT logic is visited upon us, i.e. with H. sap. sap. in the “can’t be hurt” subject role.

      ________
      *  pronounced “attempting to hardwire … despite scant hope of the attempt’s success”.
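
niczar’s duplication argument is easy to make literal: if a mind is just program state, it can be snapshotted, forked, stepped, and discarded at will. A toy sketch of those semantics (the Agent here is a trivial stand-in, obviously not an AI):

```python
import copy

# The duplication/rollback semantics of software "minds", made literal.
# Agent is a placeholder whose entire mental state is one Python object.

class Agent:
    def __init__(self):
        self.memory = []                 # the agent's entire "mental state"

    def think(self, stimulus):
        self.memory.append(stimulus)     # one instant of "thought"

original = Agent()
original.think("am I the original?")

snapshot = copy.deepcopy(original)       # a perfect snapshot of the mind

forks = [copy.deepcopy(original) for _ in range(1000)]
for fork in forks:
    fork.think("...or one of the copies?")  # each copy thinks briefly
del forks                                # ...and all of them are reaped at once

original = snapshot                      # "rolled back in time"
print(original.memory)                   # -> ['am I the original?']
```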

  13. Olin Hyde

    July 11, 2011
    at 1:28 pm

    As a company at the forefront of bringing commercial AI to market, I can assure you that machines are unlikely to mimic the elegance of human intelligence within the foreseeable future. There are many forms of intelligence. Moreover, there are many levels of consciousness. Thus, any discussion of AI becoming sentient or “a form of life” is nothing more than science fiction.

    Although we sell an SDK that enables programmers to embed “biologically inspired intelligence” into almost any application, we would NEVER claim that our technology (or any other AI that I am aware of) enables computers to “think autonomously.” Yes, this is a possibility in the future, but such intelligence is likely to be very dependent on humans to be functional.

  14. Nick Nussbaum

    July 12, 2011
    at 2:48 am

    I think that there will be ever-improving programs that approximate artificial intelligence. Your search engine starts to know you the way your dog does. However, the future AI approximation will likely have the characteristics of a civil servant, unchanged since the dawn of empires: it’s there to help you, but can’t really do much right now because it’s busy with the paperwork and tea with the other civil servants.

  15. d brown

    July 12, 2011
    at 11:32 pm

    In Larry Niven’s main world, conscious AI minds experience boredom. They think so much faster that they run out of things to think about and go nuts. Before they go bad, they are useful in war.
