
The End of Cyborgs

Spencer Torene

Spencer Torene has long feared the coming age of artificial intelligence and is a recent Computational Neuroscience PhD graduate from Boston University. His dissertation was titled, "Learning and Adaptation in Brain Machine Interfaces." Prior to graduate school, he worked in various information technology roles at private, public, and government-funded companies. He now lives in Maryland with his wife, two sons, and dog.

I have been concerned about the rise of artificial intelligence for as long as I've watched science fiction movies. The rise of AI almost turned into a joke, thanks to the overly grand claims made in the mid-20th century about our computational capacity, at a time when our awareness of our own ignorance was even dimmer than it is now. It is no longer a joke. The discussion has seen a recent resurgence, and intelligent people such as computer scientist Stuart Russell have become more concerned about the potential for advanced AI systems to regard us the way we regard ants. The eventuality of AI's emergence and probable dominance over humans got me wondering whether the best way for humans to maintain significance in our own future would be to become artificially intelligent ourselves. Or, in the common parlance, to become cyborgs.

I decided the best way to create cyborgs was to go back to graduate school and research brain machine interfaces. Armed with knowledge of the human brain and advanced machine learning techniques that could be used to both extract and insert neural information, I could help usher in the age of the cyborg. As artificially intelligent cyborgs, humans could hopefully maintain relevance in a world where inorganic machines make more and more of our decisions.
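
To give a concrete picture of the "extraction" half of that idea, here is a minimal sketch of what decoding neural information often looks like in practice: a regression model mapping binned firing rates to an intended movement signal. This is not my dissertation work or any particular lab's pipeline; the channel count, noise level, and choice of ridge regression are all illustrative assumptions.

```python
# Hypothetical sketch of neural "extraction": decode intended 2-D cursor
# velocity from binned firing rates with ridge regression.
# All shapes and numbers below are invented for illustration only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_bins, n_neurons = 5000, 96                      # e.g., a 96-channel array, 50 ms bins
true_tuning = rng.normal(size=(n_neurons, 2))     # each neuron's (made-up) velocity tuning

velocity = rng.normal(size=(n_bins, 2))           # intended (x, y) velocity over time
rates = velocity @ true_tuning.T + rng.normal(scale=0.5, size=(n_bins, n_neurons))

# Fit the decoder on the first half of the "session", evaluate on the second half.
split = n_bins // 2
decoder = Ridge(alpha=1.0).fit(rates[:split], velocity[:split])
print("decoding R^2:", decoder.score(rates[split:], velocity[split:]))
```

The "insertion" direction (writing information back into the brain) has no comparably simple sketch, which is part of the point of this post.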

My hope for that possibility has dwindled. Studying computational neuroscience led me to one critical conclusion: it appears vastly more complex to augment human intelligence than to create AI. Couple this unfortunate reality with the fact that nothing about AI appears to be incompatible with the laws of physics, and it seems that AI is not only destined to exist, but will inevitably continue to improve upon itself until it is more capable than humans in every respect.

There may be only one major bump in the road to creating true AI (if even this one): it may require knowledge of how general intelligence is manifested. It seems true that intelligence is the result of massively parallel and recurrent networks of simple processing units (in our case, neurons and possibly glial cells), but, as is oft-stated about potential extraterrestrial life, how do we know that all intelligences are constituted the same way ours is? Just because ours is the only superior intelligence we know of in the universe does not mean it is the only possible route to intelligence. There may be many mechanisms of, and routes to, intelligence, and researchers need only find one of those combinations to create a true AI system.
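
To make "networks of simple processing units" concrete, here is a toy sketch of a small recurrent network: each unit does nothing more than sum its weighted inputs and squash the result, and whatever interesting behavior emerges comes from many such units updating in parallel over time. The sizes and parameters are arbitrary assumptions chosen for illustration, not a claim about how brains actually compute.

```python
# Toy recurrent network of simple processing units.
# Each unit: squash(weighted sum of all other units + external drive).
import numpy as np

rng = np.random.default_rng(1)
n_units = 200
W = rng.normal(scale=1.0 / np.sqrt(n_units), size=(n_units, n_units))  # recurrent weights
state = np.zeros(n_units)

for t in range(100):
    external_input = rng.normal(scale=0.1, size=n_units)  # stand-in for sensory drive
    state = np.tanh(W @ state + external_input)           # every unit updates in parallel

print("final mean activity:", state.mean())
```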

The problem with creating artificially intelligent humans, on the other hand, is twofold: not only do we need to better understand how our specific form of intelligence works, but we must then learn to interface with it. Even if we learn the mechanisms behind our intelligence, evolution does not seem to have blessed us with an easily augmented form of it. Even trivial brain implants are not stable over long time periods, and making these interfaces last is a problem vexing hundreds of neuroscientists and bioengineers. Once these researchers solve that comparatively easy problem, they will need to move on to the apparently intractable one: how to create and implant the mesh of millions or billions of micro-wires necessary to properly interface with our brains and bring us anywhere close to the dream of augmenting our intelligence through extensive extraction and insertion of information. Further complicating the picture is the physical space limit in our skulls. We could theoretically do without our natural-born skulls, but that starts us down a pretty big rabbit hole of problems to solve...

Even if we do solve all the problems that prevent us from becoming cyborgs, AI may still end up in control. Intelligence may be an emergent property of a hierarchy of ever more complex computational units arranged in larger and larger networks: from the basic building blocks (e.g., neurons, which are themselves extremely complex and little understood) all the way up to a conglomeration of individually intelligent systems, each independently capable of thought and reason. Indeed, there is some truth to the claim that our brains are really two brains: our left and right hemispheres appear to be independent intelligences in perpetual "cooperatition" with each other. It could be, then, that augmenting our intelligence requires independently intelligent external systems with which our organic brains would themselves be in cooperatition. Given that there might not be such a thing as "enough" intelligence, we could extend the hierarchy of intelligences, combining more and more independently intelligent systems, and then combinations of those combinations (etc., etc.), which leads us to the basis of the "hivemind" and utterly marginalizes the "human" in humans. Our more-intelligent external counterparts could override each and every human desire and thought we have, simply because they decide we're inefficient, or worse, because we have conflicting reproductive goals! We could become nothing more than slowly decaying meatbags, working furiously to copy and recreate instantiations of AI wherever we can.

I withhold my opinion on whether anyone should want to merge with AI, but it seems clear that AI will inevitably "win," merge or no.

Since we’re all screwed anyway, just sit back, enjoy what time we have left, and let Pattern do the work for you! 
