CNCCookbook publishes the "Speeds and Feeds" calculator I'm using, GWizard. The owner is a guy named Bob Warfield. Bob's an interesting guy; he has founded a handful of companies in the software world, and I think he says CNCCookbook is his seventh company.
Before I launch into my reaction to these all too common predictions that AI is right around the corner and will take all of our jobs, let me establish my own credentials. Hey, anyone can have an opinion, but like everyone else, I think my opinion is better!

I have worked in what many would call the field of Artificial Intelligence. The largest return I've ever made came from selling one of my six venture capital startups to another company. The technology we built was able to automatically test software.

Bob points out that AI has been riding the Gartner Hype Cycle for a long time. In last summer's Gartner summary, they put it near the peak. For the third time. Here's what Gartner places at or near the peak:
- Deep Neural Network ASICs
- Level 3 Vehicle Autonomy
- Smart Robots
- Virtual Assistants
- Deep Learning
- Machine Learning
- Autonomous Vehicles
- Intelligent Apps
- Cognitive Computing
- Computer Vision
- Level 4 Vehicle Autonomy
- Commercial UAVs (Drones)
One of the marketing gurus doing AI demos says "all we gotta do" is wait for computers that are about 100,000 times faster than what we have, and then overstates Moore's Law to claim we'll have them in 25 years. If computers get twice as fast every two years (actual clock speeds plateaued around 2006 and aren't going up, but let's ignore that and say we double anyway through architectural improvements), it takes 17 doublings, or 34 years, for computers to get 100,000 times faster. It's gonna be a long time before we have HAL saying "Open the pod bay doors." Besides, I have evidence Moore's Law died in 2012, so we may never get there.
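The back-of-the-envelope math above is easy to check. This sketch takes the article's own assumptions (a 100,000x speedup target and one doubling every two years) and computes how many doublings, and therefore years, that actually requires:

```python
import math

# Assumptions from the text: we need a 100,000x speedup,
# and performance doubles once every two years.
target_speedup = 100_000
years_per_doubling = 2

# Smallest number of doublings that reaches the target:
# 2^17 = 131,072 >= 100,000, so 17 doublings are needed.
doublings = math.ceil(math.log2(target_speedup))
years = doublings * years_per_doubling

print(doublings, years)  # 17 doublings -> 34 years, not 25
```

So even granting the doubling assumption, the guru's 25-year estimate falls short by nearly a decade.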
- Medical diagnosis better than what human doctors could do. See Mycin for prescribing antibiotics, for example. It was claimed to be better than human doctors at its job but never saw actual use.
- All manner of vision and manipulation. Blocks? So what. Driving cars? Yeah, right. Turn 'em loose against a New York cabbie and we'll see how they do. The challenge for autonomous vehicles has always been the people, not the terrain. No matter how many autonomous cars drive across the desert (talk about the easiest possible terrain), they're nowhere until they can deal with stupid carbon units, i.e. people, without killing them or creating liability through property damage.
By the way, despite awarding numerous prizes of one million dollars and up, the DARPA Grand Challenge has so far failed to meet the goal Congress set when it awarded funding: to make one third of all military vehicles autonomous by 2015. But the demos sure are sweet!
- Computers have been proving mathematical theorems for ages. In some cases they even generate better proofs than the humans. Cool. But if they're so good, why haven't they already pushed mathematics ahead by centuries? Something is not quite right with a demo that can only prove theorems already proven and little else.
- Oooh, yeah, computers are beating chess masters! Sure, but not in any way that remotely resembles how people play chess. They are simply able to consider more positions. That, and the fact that their style of play is just odd and off-putting to humans, is why they win. What good is it? One source claims Deep Blue cost IBM $100 million. When are those algorithms going to genuinely add $1 billion to IBM's bottom line? Building still more specialized computers to beat humans at Jeopardy or Go is just creating more demos that solve no useful problems and do so in ways that humans don't. Show me the AI system that starts from nothing and can learn to beat any human at any game in less than a year, and I will admit I have seen Deep Skynet.
Think about the problem the other way, though: if a computer 100,000 times faster would be as good as a human brain (and we still have serious gaps in our understanding of just how the brain works, including whether or not our brains do quantum computing), what would be the comparison to today's computers? Could we get useful work out of what we have?
So we need artificial brains that are 100,000 times more powerful. In essence, we can compare today's AI to brains the size of what cockroaches have. Yet we're worried they're going to take all of our jobs.

Decades ago, I read about a drug company that trained pigeons to be Quality Control inspectors on their production lines. The gelatin capsules coming off the line would sometimes stick together, so you'd get two tops or two bottoms stuck in each other. The production inspectors would watch the molding machine's output on something like a conveyor belt and pick out the defective gelatin caps. The humans would get bored with such a menial task, their attention would wander, and defective capsules would get through. The pigeons found it interesting enough that they paid more attention. As a result, the pigeons were actually better inspectors than the humans: they found 99% of the bad capsules. The only reason they didn't make the pigeons permanent inspectors after the experiment? They were afraid of what the competition would say about them if they discovered the company was using pigeons.
Are you in a job that a cockroach could do? I hope not.
So far, I am not aware of anyone having harnessed cockroaches to do their bidding, but they are cheap, plentiful, and just as smart as today's AIs. Maybe smarter, if their brains are quantum computers too.
Maybe it would be cheaper to spend billions learning how to make cockroaches useful?
I don’t know, but we don’t even seem to be able to make much smarter animals useful. Are there dogs running machinery somewhere in China? Is a particularly adept German Shepherd behind the latest quant trading engine on Wall Street?
Can you imagine cockroaches on the production line doing this job? Maybe you pay them with the gelatin capsules they reject. And can you imagine what the competition would say about having trained cockroaches inspect the medical capsules?
Again, let me leave you with a quote I've used before, because I think it's great.
William Bossert, legendary Harvard professor, summed it up by saying, “If you’re afraid that you might be replaced by a computer, you probably can be—and probably should be.” While it may not be comforting, it could be a wake-up call for continued education.