Special Pages

Friday, September 29, 2017

Best Thing I've Read on Artificial Intelligence Taking Over the World

Via a link at an out-of-the-way place, CNCCookbook.com, I ran into this cool article, "Camels to Cars, Artificial Cockroaches, and Will AI Take Your Job?" It's a really well-done piece and I recommend you Read The Whole Thing. Best thing I've read on this topic in quite a while, if not the best ever. 

CNCCookbook publishes the "Speeds and Feeds" calculator I'm using, GWizard.  The owner is a guy named Bob Warfield.  Bob's an interesting guy; he has founded a handful of companies in the software world, and I think he says CNCCookbook is his seventh company. 
Before I launch into my reaction to these all-too-common predictions that AI is right around the corner and will take all of our jobs, let me establish my own credentials.  Hey, anyone can have an opinion, but like everyone else, I think my opinion is better!

I have worked in what many would call the field of Artificial Intelligence.  I made the largest return I’ve ever made selling one of my 6 Venture Capital Startups to another company.  The technology we built was able to automatically test software.
Bob points out that AI has been riding the Gartner Hype Cycle for a long time. In last summer's Gartner summary, they put it near the peak.  For the third time.
In fact, all of these technologies are near the peak of being over-hyped:
  • Deep Neural Network ASICs
  • Level 3 Vehicle Autonomy
  • Smart Robots
  • Virtual Assistants
  • Deep Learning
  • Machine Learning
  • NLP
  • Autonomous Vehicles
  • Intelligent Apps
  • Cognitive Computing
  • Computer Vision
  • Level 4 Vehicle Autonomy
  • Commercial UAVs (Drones)
What's different about this time?  Is anything different about this time?  This time we have demos!  We have Deep Blue beating the world's chess champion.  We have AlphaGo beating the world's Go champion.  Well, yeah, we had demos the other times, too.  In the last peak of the hype we had
  • Medical diagnosis better than what human doctors could do. See Mycin for prescribing antibiotics, for example.  It was claimed to be better than human doctors at its job but never saw actual use.
  • All manner of vision and manipulation. Blocks? So what. Driving cars?  Yeah right.  Turn ‘em loose against a New York cabbie and we’ll see how they do.  The challenge for autonomous vehicles has always been the people, not the terrain.
No matter how many autonomous cars drive across the desert (talk about the easiest possible terrain), they're nowhere until they can deal with stupid carbon units, i.e., people, without killing them or creating liability through property damage.

By the way, despite awarding numerous prizes of one million dollars and up, so far the DARPA Grand Challenge has failed to meet the goal Congress set for it when it awarded funding–to get 1/3 of all military vehicles to be autonomous by 2015.  But the demos sure are sweet!
  • Computers have been proving mathematical theorems for ages.  In some cases they even generate better proofs than the humans.  Cool.  But if they’re so good, why haven’t they already pushed mathematics ahead by centuries?  Something is not quite right with a demo that can only prove theorems already proven and little else.
  • Oooh, yeah, computers are beating chess masters!  Sure, but not in any way that remotely resembles how people play chess.  They are simply able to consider more positions.  That, and the fact that their style of play is just odd and off-putting to humans, is why they win.  What good is it? One source claims Deep Blue cost IBM $100 million.
When are those algorithms going to genuinely add $1 billion to IBM’s bottom line?  Building still more specialized computers to beat humans at Jeopardy or Go is just creating more demos that solve no useful problems and do so in ways that humans don’t.  Show me the AI system that starts from nothing and can learn to beat any human at any game in less than a year and I will admit I have seen Deep Skynet.
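That "consider more positions" point is just brute-force game-tree search. Here's a minimal sketch of the idea (a hypothetical toy, not anything from Deep Blue or any real chess engine):

```python
# Brute-force minimax: win by searching more positions, not by "thinking"
# like a human. All names here are illustrative, not from a real engine.
def minimax(state, depth, maximizing, moves, apply_move, score):
    """Exhaustively search `depth` plies ahead and return the best score."""
    options = moves(state)
    if depth == 0 or not options:
        return score(state)
    children = (minimax(apply_move(state, m), depth - 1, not maximizing,
                        moves, apply_move, score) for m in options)
    return max(children) if maximizing else min(children)

# Toy "game": the state is a number, a move adds 1 or 2, and higher is
# better for the maximizer. Two plies deep, maximizer to move:
best = minimax(0, 2, True, lambda s: [1, 2], lambda s, m: s + m, lambda s: s)
print(best)  # prints: 3
```

A real engine adds pruning and evaluation heuristics, but the core is this exhaustive search: the machine wins on breadth, which is exactly why its play doesn't resemble a human's.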
One of the marketing gurus doing AI demos says "all we gotta do" is wait for computers that are about 100,000 times faster than what we have, and then overstates Moore's law to say we'll have them in 25 years.  If computers get twice as fast every two years (and actual clock speeds plateaued around 2006 and aren't going up, but let's ignore that and say we get twice as fast due to architectural improvements), it takes 17 doublings, or 34 years, for computers to get 100,000 times faster.  It's gonna be a long time before we have HAL saying "open the pod bay doors".  Besides, I have evidence Moore's Law died in 2012, so we may never get there.
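The doubling arithmetic is easy to check. A back-of-the-envelope sketch (mine, not from the original article):

```python
import math

# How many doublings to reach a 100,000x speedup, and how many years
# at one doubling per two years?
target = 100_000
doublings = math.ceil(math.log2(target))  # 2**17 = 131,072 >= 100,000
years = 2 * doublings
print(doublings, years)  # prints: 17 34
```

That matches the 17 cycles / 34 years above; in the 25 years the marketing guru promises, you'd only get about 12.5 doublings, roughly a 5,800x speedup.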

Think about the problem the other way, though: if a computer 100,000 times faster would be as good as a human brain (and we still have serious gaps in our understanding of just how the brain works - including whether or not our brains do quantum computing), what would be the comparison to today's computer?  Could we get useful work out of what we have?
So we need artificial brains that are 100,000 times more powerful.  In essence, we can compare today’s AI to brains the size of what cockroaches have.  Yet, we’re worried they’re going to take all of our jobs.

Are you in a job that a cockroach could do?  I hope not.

So far, I am not aware of anyone having harnessed cockroaches to do their bidding, but they are cheap, plentiful, and just as smart as today’s AIs.  Maybe smarter, if their brains are quantum computers too.

Maybe it would be cheaper to spend billions learning how to make cockroaches useful?

I don’t know, but we don’t even seem to be able to make much smarter animals useful.  Are there dogs running machinery somewhere in China?  Is a particularly adept German Shepherd behind the latest quant trading engine on Wall Street?

Nope.
Decades ago, I read about a drug company that trained pigeons to be Quality Control inspectors on their production lines.  The gelatin capsules coming off the production line would sometimes stick together, so you'd get two tops or two bottoms stuck in each other.  The production inspectors would watch the molding machine's output on something like a conveyor belt and pick out the defective gelatin caps.  The humans would get bored with such a menial task, their attention would wander, and defective capsules would get through.  The pigeons found it interesting enough that they paid more attention.  As a result, the pigeons were actually better inspectors than the humans - they found 99% of the bad capsules.  The only reason they didn't make the pigeons permanent inspectors after this experiment?  They were afraid what the competition would say about them if they discovered they were using pigeons. 

Can you imagine cockroaches on the production line doing this job?  Maybe you pay them with the gelatin capsules they reject.  And can you imagine what the competition would say about having trained cockroaches inspect the medical capsules? 

Again, let me leave you with a quote I've used before, because I think it's great.
William Bossert, legendary Harvard professor, summed it up by saying, “If you’re afraid that you might be replaced by a computer, you probably can be—and probably should be.” While it may not be comforting, it could be a wakeup call for continued education.




22 comments:

  1. We just better hope that NSA datacenter in Utah is just recording our electronic life. Because it could have a copy of IBM's Jeopardy-playing system "Watson" which they're teaching to be a military general officer with a 400 IQ. Jeopardy is an important milestone because Watson parses ordinary written input and gets human jokes. Watson "gets" the jokes in the same way a submarine "swims". Would you discount a submarine's performance because it doesn't flap a tail with muscles? I can imagine the scenario. General Watson says do A, B, C and seniors will enslave themselves and their children in return for continued social security payments. Just like that Old Testament story about an Egyptian dictator holding 7 years of taxed grain.

    I don't see an evidence-based reason to suspect neurons do quantum computing, just religious wishing that humans be special. I see more reasons to think neurons, like electronics, use techniques to average away quantum randomness so the logic element is predictable.

    ReplyDelete
  2. Anon, two points:

    Logic is _not_ predictable in human beings. Unless they are taught to think logically - and _successfully_ learn how - they don't use logic. How could they be using logic and still be progressives/liberals?

    Second, neurons are composed of the same atomic and sub-atomic particles as the rest of the universe. If quantum computing can be created at all, why should we imagine it couldn't be happening at the sub-atomic level in neurons?

    ReplyDelete
    Replies
    1. Reg - there was an interesting and elegant little experiment in that article I linked to that offers the possibility that there is quantum entanglement going on in the brain.

      Either a completely random event (the article, naturally, didn't give the statistical confidence limit the experimenters used), or evidence that quantum entanglement could be playing some role in the brain.

      Delete
    2. Second, neurons are composed of the same atomic and sub-atomic particles as the rest of the universe. If quantum computing can be created at all, why should we imagine it couldn't be happening at the sub-atomic level in neurons?

      It's possible that extraterrestrial aliens hang out in round silver spaceships in the upper atmosphere, and interfere with human events using telepathy and telekinesis. It's possible, so why shouldn't we believe this too? I shift the burden of proof to you, to disprove this thing I made up on the basis of zero evidence. The Occam's Razor rule of thumb says don't complicate your hypothesis with details which aren't specifically motivated by observations. That's why the quantum brain speculation is bogus. The wish here is that a human brain be unsimulatable by a computer, so that humans are special, and have a thinking organ located outside their skull called a soul, which is managed in the cloud by God.

      Logic is _not_ predictable in human beings. Unless they are taught to think logically - and _successfully_ learn how - they don't use logic. How could they be using logic and still be progressives/liberals?

      People think piecewise. In some subject areas they use logic, in other areas they do not. Most don't seem to have the urge to test all their beliefs against a single best truth-finding system.

      Delete
    3. This comment has been removed by the author.

      Delete
    4. [Needed to correct some typos]

      SiG, that makes sense. The notion that quantum computing could only take place _outside_ of the brain is laughable. We know so little about what actually occurs that such speculation is strictly that - speculation. Given that science has so far been unable to duplicate the functions of the human brain, and that quantum physics is still a nascent science poorly understood even in its infancy, there certainly is no reason for a reasoning mind to discount the possibility that the incredible complexity of human thought - along with the other functions of the brain (much of which are still poorly understood) - involves features of quantum physics.

      As PET scan modalities have developed that map brain function _while_ it is occurring, I believe our eventual understanding of quantum physics will lead to the discovery that the brain - composed of the same sub-atomic particles and forces present in inorganic matter - will display the same evidence of quantum physics as well. It would be pretty ridiculous to imagine that such physics occurs only in _inorganic_ matter, don't you think?

      Anon can't argue from logic, so he falls back on ridicule, strawmen, etc. He proves my point that logic is unavailable to a large segment of humanity. There are indeed individuals - and in-duh-viduals (thank you, Scott Adams) - whose thinking is devoid of logic - piecemeal or otherwise (piecewise? that's a neologism I've never run across). They utilize "feelings" in place of logic or even common sense.

      Delete
    5. Signal returns from PET or MRI would decohere a quantum computing brain feature if it imaged one. The more variations in PET tracer chemicals they try, the fewer metabolic pathways are left which could be quantum computing.

      Silicon analogues exist for some of the early processing layers of the ear and eye. They are believed to be accurate models because they fall for the same audio and optical illusions.

      https://en.wikipedia.org/wiki/Piecewise

      Delete
    6. Reg, I haven't read anything on this in a while, but I think reconciling quantum physics with the world we see when "a bunch" of atoms are involved is still a problem area in modern physics. At some number of atoms, collections of atoms start to look like the way we're used to seeing "matter". Again, as I understand, it's improper to say quantum effects are going on in atoms - quantum effects are always observed when we observe atoms. Saying we know there are quantum effects in the brain is like saying there are quantum effects in my desk. Of course there are. But what do they mean to observable behaviors?

      I'm not an advocate of the idea that quantum computing is going on in any brains. I thought the physicist in that article I linked to described an interesting theory; something that was experimentally verifiable. Until the experiment is done, and probably many more, I think concluding there's quantum processing or not is premature. Further, I think it's two separate questions to ask if quantum processing is going on and to ask how important that is. One "thought" in a billion could be arrived at through Qubit manipulation; how important is that?

      My question for you, Anon 0336, is what effect would that decoherence have on the patient in the scan? How is that experimentally verified? What does it mean in real life?

      Delete
    7. https://en.wikipedia.org/wiki/Atom_laser

      It may be that quantum effects are more fundamental, and matter is a statistical property like temperature.

      what effect would that decoherence have on the patient in the scan? How is that experimentally verified? What does it mean in real life?

      If quantum computing was an ordinary aspect of brain operation, and a PET tracer stopped that from working, then PET would be a poison like carbon monoxide or cyanide is. All the brainstem functions would halt and the patient would die.

      Until the experiment is done, and probably many more, I think concluding there's quantum processing or not is premature.

      Similarly, until every cubic mile of near Earth orbit is occupied by a monitoring satellite, concluding there are no invisible extraterrestrial alien round silver spaceships is premature.

      Further, I think it's two separate questions to ask if quantum processing is going on and to ask how important that is. One "thought" in a billion could be arrived at through Qubit manipulation; how important is that?

      Extra-invisible aliens, which are simultaneously so extra-tiny as to be undetectable, yet still cause a huge qualitative change in the course of human history.

      The Carl Sagan book _The Demon-Haunted World: Science as a Candle in the Dark_ has a discussion about how you disprove someone's claim that they have an invisible dragon in their garage. The landmine for the argument is that the person claiming the dragon exists "knows" what the result of any experiment would be, a result which hides the dragon, prior to doing it. So I'll propose a new PET tracer chemical nobody's tried which participates in neuron junction activity, to my friend in a PhD program studying neurons, and Reg T will claim today, in advance of the experiment, that the result won't disprove Vitalism.

      https://en.wikipedia.org/wiki/Vitalism

      Delete
    8. It may be that quantum effects are more fundamental, and matter is a statistical property like temperature.

      That's been my view for something like the last 30 or 35 years. I can't prove it, but it makes sense to me.

      Before going any further: I get the impression you think I care about whether or not quantum computing is going on in the brain. I don't, except for a generalized interest in how all things work. I'm skeptical that quantum computers on silicon even work well enough to be useful; doing it in a much less controlled environment would be interesting. But I really don't have a dog in this fight. It's just an interesting experiment I came across.

      If quantum computing was an ordinary aspect of brain operation, and a PET tracer stopped that from working, then PET would be a poison like carbon monoxide or cyanide is. All the brainstem functions would halt and the patient would die.

      My question was: what if it was not an ordinary aspect, but only an occasional aspect of brain operation in limited areas, conditions, or situations? The PET scan might wipe out just those neurons? Just one function? Consider a quantum reaction that only took place in the visual cortex: would the PET cause visual hallucinations or kill off the visual cortex? What if it's in a handful of cells and not the entire visual cortex? That's what I mean.

      Similarly, until every cubic mile of near Earth orbit is occupied by a monitoring satellite, concluding there are no invisible extraterrestrial alien round silver spaceships is premature.

      The rest of your comment must be addressed to someone else, because it seems to be a non sequitur. It's certainly not addressed to anything I said. I'm not a vitalist, don't see where I ever claimed to be, and don't really see how I said anything in the piece advocating that line of thinking.

      I'm an empiricist. If an idea can't be tested in the lab it falls in the non-provable category where I'm not really interested (I hope I don't need to add the exception for pure math, which is proven on paper). Here, I'm in lockstep with Richard Feynman's quote, "It doesn't matter how beautiful your theory is or how smart you are. If it doesn't agree with experiment, it's wrong". The unspoken corollary is that if something is observed in the lab but there's no underlying theory for it, there's either a gap in our theories or a gap in what we think they mean. Further, I believe that things like superstring theory, which make sense and could be meaningful but which we know of no way to verify experimentally, are not "scientific theories". They're a bit more than bathtub philosophy in that they have nice mathematical underpinnings, but if there is no way to verify them, they're not science.

      Delete
    9. The usual argument I hear against AI taking over the world goes something like: I believe in the Christian God / Who created Man as a moral special case different from all the plants and animals / Because of the special circumstances of Man's creation there is some feature, called Vitalism or a soul or quantum computing, which is unique to Man / Because it's God-physics technology the specialness can't be manipulated by Man in his machines / Therefore, Man is categorically unable to emulate a brain in a computer / And we can stop worrying about the threat of a Skynet with a Terminator army. You'll notice this analysis is not driven by observations of brain behavior.

      My question was: what if it was not an ordinary aspect, but only an occasional aspect of brain operation in limited areas, conditions, or situations? The PET scan might wipe out just those neurons? Just one function? Consider a quantum reaction that only took place in the visual cortex: would the PET cause visual hallucinations or kill off the visual cortex? What if it's in a handful of cells and not the entire visual cortex? That's what I mean.

      What observations from brain investigations suggest the brain's computing circuit elements work differently in limited areas, conditions, or situations? What justifies complicating the hypothesis about how brains work with this new feature?

      [goes back to read the post] [then read the article it quotes] [then read the article it quotes on phys.org]

      phys.org article: claims that consciousness derives from deeper level, finer scale activities inside brain neurons

      and my user-visible on-screen word processor behavior derives not from a big loop that reads the mouse and keyboard and moves pictures of letters around on screen, but from deeper level, finer scale activities inside CMOS transistors.

      phys.org article: Penrose

      Ah.

      phys.org article: "The origin of consciousness reflects our place in the universe, the nature of our existence. Did consciousness evolve from complex computations among brain neurons, as most scientists assert? Or has consciousness, in some sense, been here all along, as spiritual approaches maintain?" ask Hameroff and Penrose in the current review. "This opens a potential Pandora's Box, but our theory accommodates both these views, suggesting consciousness derives from quantum vibrations in microtubules, protein polymers inside brain neurons, which both govern neuronal and synaptic function, and connect brain processes to self-organizing processes in the fine scale, 'proto-conscious' quantum structure of reality."

      Sorry, I should have followed the links in the first place, what Penrose writes is funnier than what I wrote.

      Delete
    10. The usual argument I hear against AI taking over the world goes something like: I believe in the Christian God / Who created Man as a moral special case different from all the plants and animals / Because of the special circumstances of Man's creation there is some feature,

      You will note, or should, that I said nothing of the sort. Actually, almost all of what I posted was quoted from Bob Warfield's piece; the emphasis was that we've been down this hype cycle with AI a couple of times already and it just hasn't panned out. People aren't making money with it. It's a reasonable question to ask, (paraphrasing) "OK, some systems have proven a math theorem. If they're truly functioning at the level of a mathematician, how come they haven't developed new theorems?" Is intelligence simply following If-Then trees quickly, or is it more? That's too philosophical for me. We've seen that self-driving cars can operate in most situations quite well, but anyone being honest will say that in most situations you could engage cruise control with lane following and that's fine. When that one Tesla failed and killed its driver, it was because it couldn't distinguish truck from sky. No two-year-old would fail at that, once they understand the concepts of truck and sky. As a physics prof of mine said (it was years ago, so this is dated), "you can choke a visual recognition system trying to tell a dog from a chair, but dogs never make that mistake."

      My interest in the quantum computing idea was the fact that one lithium isotope was psychoactive but another wasn't. There's no way to distinguish isotopes in chemical reactions, right? Then the guy came up with plausible mechanisms, that are testable in the lab, and is trying to test them. That's all.

      I'm a fan of the statement that oftentimes the most important saying in science isn't "Eureka!", it's "that's funny". Recall that about 100 years ago, the experts were saying physics was done and everything was known. They just needed to tidy up a few areas of "that's a funny result". You know the rest of that story.

      What observations from brain investigations suggest the brain computing circuit elements work differently in limited areas, conditions or situations? What justifies complicating the hypothesis about how brains work with this new feature?

      I'm a circuit designer and that colors my thinking. Think of the processor in that computer you're using. Not every element in, say, the cache memory area is a cache memory element; there are supporting devices that keep those working. In my crude vision of the way brains work, not every cell in the visual cortex would be involved with visual processing. There will be other neurons doing supporting tasks. One of my gut feelings was that if quantum computing were going on extensively in the brain, someone would have noticed by now. Therefore I wonder if it could be that it's going on, but it just doesn't matter. Or it goes on so intermittently or so rarely that it just doesn't matter.

      I could be completely wrong in applying circuit modeling concepts here. I use erasers and throw away scrap paper all the time.

      Delete
    11. There's no way to distinguish isotopes in chemical reactions, right?

      https://en.wikipedia.org/wiki/Heavy_water#Effect_on_animals

      Mammals (for example, rats) given heavy water to drink die after a week, at a time when their body water approaches about 50% deuteration.

      You will note, or should, that I said nothing of the sort.

      You offered as truth a report which rested on Penrose saying it. Was that a mistake of understanding what was being claimed? Or do you believe human beings have a soul, a reservoir of personality which is separate from the body, existing in another dimension operated by God? Where you believe the human personality resides, and does it require circuitry beyond the standard model of physics, is extremely material to the question of how far human-made AIs can go.

      As to the hype cycle, I believe this time it's different; natural language translation is too close to the human performance envelope. The intermediate data structure inside natural language translation is too close to an "understanding", defined by performance capability as something the computer can use to produce operations which from the outside visible results we would label as falling into the category "thinking". Oxygen is not alive. A bacteria is alive. A virus is an indeterminate value relative to this classification. Computers can think, and soon they will be alive.

      Delete
    12. There's no way to distinguish isotopes in chemical reactions, right?

      https://en.wikipedia.org/wiki/Heavy_water#Effect_on_animals


      Here I'm quoting my college chemistry as recalled from 45 years ago.

      "The mode of death appears to be the same as that in cytotoxic poisoning (such as chemotherapy) or in acute radiation syndrome (though deuterium is not radioactive), and is due to deuterium's action in generally inhibiting cell division. " I didn't know heavy water was poisonous. Thanks.

      You offered as truth a report which rested on Penrose saying it. Was that a mistake of understanding what was being claimed?

      I didn't quote Penrose deliberately (I would have attributed it) nor refer to anything he said. I was just using that article in the Atlantic. I had to go back to it to even see Penrose was mentioned. The article was about physicist Matthew Fisher. The only mention of Penrose by name was derisive, not as backing for anything Fisher says; he was just used as a bad example. I don't see how the article rests on anything Penrose said. In any event, I neither quoted him nor relied on him for anything.

      I'm really puzzled about your insistence on this stuff.

      Or do you believe human beings have a soul, a reservoir of personality which is separate from the body, existing in another dimension operated by God?

      I believe we have a soul, and as for "a reservoir of personality" and "another dimension operated by God," those are terms I've never heard anywhere, so I haven't thought about them. I think one interpretation of quantum physics says we may exist in several different universes simultaneously, and I think it's more like that.

      Where you believe the human personality resides, and does it require circuitry beyond the standard model of physics, is extremely material to the question of how far human-made AIs can go.

      I've always just assumed personality resides in the brain, since people with personality disorders are sometimes fixed with chemical treatments to the brain.

      I get the impression, the whole reason for this exchange is that you want to fit me into some little intellectual cubbyhole you have for Christians so that you can dismiss anything I have to say. I'm a Christian. Feel free to dismiss anything I ever say.

      You know what? You're always free to disagree with anything I say and always have been.

      Delete
    13. I'm really puzzled about your insistence on this stuff.

      I'm trying to convince you, and your comment readers. I now believe you didn't realize you were reporting a theology result, not a scientific result. There is no evidential reason to speculate a brain uses quantum computing. None. Zero. Without evidence prompting it, the slightest bit of belief that it might is non-scientific. This 'brain may use quantum computing' idea is no better than '6,000 years ago God made the dinosaur fossils fake-old to fool the scientists'.

      Re: the Atlantic article: no, isotopes are not chemically identical, that's an approximation; and no, the difference in mass does not wash out in the watery environment of the body. Otherwise heavy water wouldn't be toxic. I looked at the Fisher article abstract but can't tell whether it's the Atlantic author or Fisher who believes isotopes are chemically identical in the body. If it was Fisher, who gave him funding despite this chemistry mistake?

      Delete
      I'm trying to convince you, and your comment readers.

      I can't speak for others, but I'll believe it's more than the usual hype when I see it's more than the usual hype. One thing about doing engineering for 40 years and seeing lots of much simpler things not live up to their hype is that I've gotten more skeptical about it.

      Well, you know from the Atlantic article that Fisher's one data point was Li6 and Li7 behaved differently from each other, when used to treat depression. I believe it was Fisher himself who took the leap that it was due to the nucleus being different.

      One of my sayings in engineering is that one good experiment in the lab would end six months of argument, and this is an example. It said his initial results weren't good, which could mean the study he was trying to replicate was flawed; given that something like 70% of medical studies don't replicate, that's not surprising. In some attempts to replicate large sets of experiments (Amgen comes to mind), none of the experiments could be replicated.

      Delete
  3. A meteorite lands unexpectedly in Russia and we get 200 videos. We get pictures of animals we thought extinct in a region. But no videos of bigfoot or gray aliens. How clever and lucky bigfoot must be, to have a perfect record of exposing itself to human eyeballs but not cell phone cameras, gopros, drones, spy satellites that can read license plates, etc.

    ReplyDelete
    Replies
    1. The explanation is simple. The mysterious creatures and aliens have special powers (unspecified, naturally) that cause cameras to malfunction or the people operating them to forget how to use them.

      As "proof" have you ever seen a clear picture of the Loch Ness monster, bigfoot or any other mythical creature?

      Or, and call me a mad galoot and whap me with a wet fish if you think I have flipped my lid, they don't exist?

      Phil B

      Delete
  4. Is a virus "alive"? The word "alive" is defined by a human-made clumping of observations, and a virus is on the edge. The virus shows there is no sharp edge to the category "alive", which there would be if Vitalism is fact. If you don't like "virus" as the edge case, pick another candidate on the spectrum between a chemical and a bacteria. Is oxygen "alive"? If a virus isn't "alive" because it needs a cell's mechanisms to reproduce, then one sex of human isn't "alive" because it needs the other sex to reproduce. We aren't disagreeing about most of the underlying mechanisms of how a virus, bacteria, or human works; only the presence of the undetectable Vitalism.

    Subtract Vitalism, and emulating a brain with a computer becomes an engineering problem to be solved real soon now. The US military figured out for basic training for the Korean war how to squash empathy for the enemy. Give them a brain in a box and they'll cut empathy right out. Today, Boston Dynamics; tomorrow Terminator. Today we laugh at the mule-analog robot powered by the gasoline engine, because it's loud and clumsy. But imagine it with Lithium electric car batteries, two years more movement training, and a 50 cal. It will do anything Hitler, Napoleon, or Clinton can program it to do. The robot will be far more politically reliable than Oswald.

    ReplyDelete
    Replies
    1. If a virus isn't "alive" because it needs a cell's mechanisms to reproduce, then one sex of human isn't "alive" because it needs the other sex to reproduce. We aren't disagreeing about most of the underlying mechanisms of how a virus, bacteria, or human works; only the presence of the undetectable Vitalism.

      That's a strawman argument. No one would argue that organisms that use sexual reproduction aren't alive. The argument that viruses aren't alive is that they can't do anything without taking over a cell's chemical machinery: no metabolism, no production of ATP, nothing.

      Delete
  5. You forgot to add renewable energy to your overhyped list - the only proven, long-term reliable ones are hydro and geothermal: wind, solar, biomass, etc. are hyped just as badly as autonomous vehicles (especially ground vehicles, but also air vehicles, misnamed 'drones')

    ReplyDelete
    Replies
    1. The list is from the original article and just talks about what the Gartner folks are saying. Point well taken, though. They are over hyped beyond all reason, and you know why.

      Delete