Wikipedia defines “The Singularity” as “a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature. Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.”
When most people think of situations like The Singularity, they think of movies or fiction books (Skynet, The Matrix, 1984, etc.). Each envisions a world where “the machines” are smarter than us and think on their own, without regard for the human condition or well-being. Famous entrepreneur and investor Peter Thiel has a particularly interesting take on all of this, much of which I agree with. To save you the time of watching the link, he says “all we [humanity] need is The Singularity.” The reason I agree with the fundamental point of his message is that technology “usually” makes our lives easier and more efficient. More importantly, a new advancement in artificial intelligence would create new jobs where humans control the “smart machines”. Think of when machinery was first introduced to factories during the Industrial Revolution. The advent of machines in factories lowered the barrier to entry to the industry and encouraged new competition; for example, it took less “man-power” to produce thousands of garments in a day. Curating and maintaining a machine’s artificial intelligence will be the “factory job” of the future. This may sound odd, but that may be because artificial intelligence is commonly misunderstood.
Working at a startup that gets looped into the “artificial intelligence” realm and its discussions, I’m well aware of its recent resurgence in popularity. And what’s both comical and frustrating is the public’s view of what artificial intelligence is. We’ve made it a point in sales meetings to explain the different “flavors” of artificial intelligence. So let me set the record straight: artificial intelligence, by and large, can be 1.) self-learning (machine-learning), 2.) mimicking current human behavior, or 3.) deep-learning. [Head over to the Narrative Science blog for more interesting pieces on this discussion.]
My larger point about all three types of AI is that, despite what you may think, every one of them requires human interaction and intervention. A computer has no sense of what is right and what is wrong unless verified by a human. Computers can find interesting patterns in data (machine-learning), but only a human can verify whether a correlation actually reflects causation. For example, a computer might find that the number of kiwis harvested increases fairly linearly with the number of deer killed each season in Wisconsin. This is a useless, spurious correlation that we wouldn’t want computers to identify as significant.
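To make the kiwi-and-deer point concrete, here is a minimal sketch (with made-up numbers, not real Wisconsin data) of how a statistical routine would happily report a near-perfect correlation between two quantities that have nothing to do with each other. Both series simply trend upward over ten seasons, which is all a correlation measure can see:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    std_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (std_x * std_y)

# Hypothetical, causally unrelated series: both just grow season over season.
kiwis_harvested = [1100, 1250, 1380, 1500, 1660, 1790, 1900, 2050, 2200, 2310]
deer_killed = [410, 455, 500, 540, 600, 640, 690, 730, 780, 820]

r = pearson(kiwis_harvested, deer_killed)
print(f"correlation: {r:.3f}")  # very close to 1.0, yet neither causes the other
```

The machine dutifully reports a strong relationship; it takes a human to recognize that the “relationship” is an artifact of two independent trends, not a cause worth acting on.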
Sure, there are ways to program a computer, and to help it with metadata, so it can judge what could be true or false. But don’t believe that these machines are learning all on their own; they need validation, which usually happens “off-line”, aka by humans (and that includes Watson and every other piece of AI). That’s where artificial intelligence creates jobs, as opposed to consuming the very jobs people are concerned it will replace.