Artificial Intelligence: Smart, More or Less

Editor and General Manager

Less is more, except for some literal-minded folks who think more is more. Go figure. The phrase “less is more” came to mind as I was thinking about thinking. The items that precipitated this recent bout of cogitation were a profile of Karl Deisseroth,1 the Stanford-based neuroscientist who conceived and led the development of CLARITY (Clear Lipid-exchanged Acrylamide-hybridized Rigid Imaging/immunostaining-compatible Tissue hYdrogel) and, less loftily, a promotional trailer for the artificial-intelligence-themed movie Ex Machina.

CLARITY is a process in which liquid acrylamide is used to replace the water and fat in the brain and render the neuroanatomy visible. Ex Machina, at least as far as I can discern from the trailer, is a movie in which a sexy robot terrifies and maybe (the ending wasn’t given away) overpowers her creators and handlers. Being Hollywood sci-fi, the other possible outcome is that humans win the day. Peaceful coexistence is not likely part of the script.

If less is more, then reality is more real than virtual reality, and intelligence is more real than artificial intelligence. (Go ahead, reject the premise.) Is AI simply supercomputing on steroids? Supercomputing programmed to incorporate some of the quirks of human thought? But pop sci-fi tropes have their corollaries in the real world. Stephen Hawking himself, one of the great wonderers, said in a BBC interview last December that “Computers will overtake humans with AI at some point within the next 100 years.” He followed up with, “When that happens, we need to make sure that the computers have goals aligned with ours.” Yikes, good luck.

Technological development tends to get ahead of legislative and regulatory control—how do you control what’s not yet developed? (Fracking and GMOs come to mind.) No one is drafting a bill saying you can’t make computers too smart. How would that even be defined? Thus, AI development will continue apace and will be what it’s going to be. And what is that, exactly?

You can feed all the information in the world into the greatest processing machine ever, teach it algorithms, endow it with predictive abilities and sight and hearing (camera and microphone) and teach it to accurately recognize and interpret expressions and sounds…and yet, perhaps my wondering is deficient, but I can’t completely buy the worst predictions. Because there is this mass of water and fat and amino acids and carbohydrates that combine to form cortexes and signaling pathways, that together pilot and drive a mass of organs and bones and muscles, that give it excitement and despair and hope and grief, and that allow it to anticipate the future and fall in love and move and react and…you get the picture.

And then there is intuition and creativity and, to my unclear mind, most important of all—inspiration. That thing that is just there, resident in thoughts, unexpectedly present, the path to a goal, a problem solved, a verse that will move millions. Unasked for, at least not consciously, but intelligence at its highest. Can that be programmed? Can a subconscious be implanted, if, indeed, that is where inspiration comes from? I wonder, if you build the biggest mass of stem-cell-derived neurons and perfectly executed cortexes and feed it all the information in the world with infinite input and output and processing capability, will it really laugh? I mean, it would “get” the joke, recognize the incongruity or whatever juxtaposition creates humor, and make laughing sounds (audio processing)—but could it even conceive of humor in the first place, and appreciate its value? The AI-guided machine could build CLARITY. But would it see the need? Would the way to the solution appear to it while engaged in some other activity? Could it meditate? Would it need to? Would it come up with a construct like “less is more” that is self-contradictory and yet has a meaning that is clear?

Or do I misunderstand—or lack enough information? Are those things that I think inform intelligence—make it real, make it human—not part of the AI “package”? Computing power and the “intelligence” it brings will no doubt continue growing, and will almost certainly be incorporated into more and more of daily life, at home, in the lab, in the world. Will it always remain essentially artificial? Opinions abound about the power—and beyond question it will be powerful—and dangers of AI. People who work with AI are exhilarated by its potential and awed by its prospects.

Perhaps a lot of what I’m reading and hearing is just an example of other qualities that human intelligence brings to its various endeavors—hope and hype.

Reference

  1. The New Yorker, May 18, 2015, pp 74–83.

Steve Ernst is editor and general manager, American Laboratory/ Labcompare; [email protected]