Experiencing 2020 from the POV of a rational-minded person is exhausting and scary. It sometimes feels as though we’ve ended up in the worst timeline of a badly-scripted parallel universes story. While everyone has their own tried-and-true coping mechanisms, we’d like to suggest a science-based remedy for the pandemic blues: read some wacky research papers.
Hear us out. You could spend your day thinking about things like the upcoming US presidential elections, COVID-19, the global climate crisis, and other horrifying topics. You could. Or you could take some time off and read about head transplants, the birth of our future overlords, compelling evidence you’re actually living inside a computer simulation run by your great-great-great-great grandkids, whatever the hell Quantum Darwinism is, and how a new class of calculus will brute force superintelligent AI into existence.
Science is awesome and these are the receipts. Whether you’re a bona fide astrophysicist or a curious kid in grammar school, these papers will entertain and enlighten you. You’re probably already familiar with the concepts discussed in them, but after reading them you’ll find that there’s more to all of these topics than meets the eye.
Off with their heads!
What better place to begin than with Dr. Sergio Canavero’s epic paper, “The ‘Gemini’ spinal cord fusion protocol: Reloaded”? This paper starts with a bang and just gets more interesting as it goes:
Cephalosomatic anastomosis (CSA), that is, the surgical transference of a healthy head on a surgically beheaded body under deep hypothermic conditions, as conceived by Robert White, hinges on the reconnection of the severed stumps of two heterologous spinal cords. On the occasion of the first CSA between primates in 1970, Dr White hewed to the view that a severed spinal cord could not be reconnected, thus leaving the animal paralyzed.
Canavero claimed he was going to perform a human head transplant back in 2017. In the time since, he’s faded into relative obscurity, and many publications that showed interest in his work have since deleted those references. Most reputable medical experts considered his efforts to be quackery, but Canavero spent a large portion of his career examining the possibilities.
He believes that the problem of connecting millions of damaged nerves together in order to properly transplant a human head can be solved through the use of a common fusogen (sealant) made of polyethylene glycol (PEG). Why nobody else ever thought to just glue one head onto another body we’ll never know (perhaps it’s because it’s obviously not going to work).
Still, the paper is fun and the story behind it is cool too. Not to mention there’s a certain prestige that comes with being the person in your peer group with the most knowledge of human head transplants. You never know when that could come in handy.
One machine to bind them, one machine to however the quote goes
There’s no AI in Middle-Earth, though golems are probably related. But there is AI right here on planet COVID and, if you believe Elon Musk and the late Stephen Hawking, one day it’s going to become superintelligent and treat us like pets (if we’re lucky).
While there are a lot of theories on what we should do about that, there aren’t many believable origin stories for our future overlords. How do we go from autocorrect that can’t figure out, after all these years, that we’re almost never trying to spell “duck,” to machines that can subjugate and destroy us?
Honestly, Google and Amazon don’t have any clearer a path toward artificial general intelligence (AGI) than James Cameron did when he made The Terminator or Daniel J. Buehrer did when he wrote “A Mathematical Framework for Superintelligent Machines.”
What’s that? You haven’t read Buehrer’s work? Don’t feel bad, we stumbled across it one day, in all its glory, by sheer accident.
In this paper he writes:
We describe a class calculus that is expressive enough to describe and improve its own learning process. It can design and debug programs that satisfy given input/output constraints, based on its ontology of previously learned programs. It can improve its own model of the world by checking the actual results of the actions of its robotic activators.
For instance, it could check the black box of a car crash to determine if it was probably caused by electric failure, a stuck electronic gate, dark ice, or some other condition that it must add to its ontology in order to meet its sub-goal of preventing such crashes in the future.
In essence, Buehrer is talking about creating a new class of calculus that, in its own execution, would be capable of mathematical consciousness through the sheer brute force of interpreting its own sensory input. If it works like he describes, we imagine this calculus would self-propagate. So just a dab will do you.
We’re not sure if we believe in this aggressive methodology toward superintelligence, but it sure does make for compelling reading, and there aren’t many other people coming up with novel avenues to AGI.
Does God play dice?
Einstein and just about everybody else involved in creating the atomic bomb spent a lot of time wondering about esoteric things like whether or not God plays dice with the universe. You don’t have to think very hard to imagine why.
If you’re also the type of person who spends a significant portion of their time creating something intended for mass destruction (perhaps you work at a social media company or Clearview AI), then you might find this 2009 paper called “Quantum Darwinism” interesting.
In it, physicist Wojciech Hubert Zurek lays out the ideas behind a decoherence-based view of classical reality.
Here’s what the paper says:
The quantum principle of superposition implies that any combination of quantum states is also a legal state. This seems to be in conflict with everyday reality: States we encounter are localized. Classical objects can be either here or there, but never both here and there. Yet, the principle of superposition says that localization should be a rare exception and not a rule for quantum systems.
Zurek describes how quantum Darwinism – the idea that we’re not really seeing reality but instead an echo or “imprint” of reality left behind as quantum states become decoherent and then fade back into quantum coherence – explains away the supposed gap between the quantum and classical worlds.
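The decoherence story at the heart of quantum Darwinism has a compact mathematical face: write a qubit’s state as a 2×2 density matrix, and interaction with an environment damps the off-diagonal “coherence” terms toward zero while leaving the classical probabilities on the diagonal untouched. A superposition thereby fades into an ordinary statistical mixture. Here’s a toy sketch – the simple exponential damping factor is our illustrative simplification, not Zurek’s full einselection machinery:

```python
import numpy as np

def decohere(rho, gamma_t):
    """Damp the off-diagonal coherences of a qubit density matrix.

    Models environment-induced decoherence as an exponential decay
    e^(-gamma*t) applied only to the off-diagonal entries; the diagonal
    (classical measurement probabilities) is left unchanged.
    """
    out = rho.copy()
    damp = np.exp(-gamma_t)
    out[0, 1] *= damp
    out[1, 0] *= damp
    return out

# A qubit in an equal superposition (|0> + |1>) / sqrt(2):
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)  # every entry is 0.5, including the coherences
```

After `decohere(rho, 10.0)` the off-diagonal terms are essentially gone, but the diagonal still reads 50/50 – the superposition has become a classical coin flip, which is the “localized” everyday reality the quoted passage describes.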
It’s a fascinating paper that lends itself perfectly to the esoteric “does God even exist?” discussion. Unlike most hard-science papers discussing quantum physics, it postulates a solid connection between the two distinctly different worlds, complete with an explanation that makes sense.
Speaking of God…
There’s a greater than zero chance that you live inside a computer simulation. Perhaps no scientific argument hits harder than Nick Bostrom’s trilemma (which admittedly requires a little more context than is prudent for this article), the claim that at least one of the following is true:
- The fraction of human-level civilizations that reach a posthuman stage is very close to zero
- The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero
- The fraction of all people with our kind of experiences that are living in a simulation is very close to one.
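Bostrom’s paper backs the trilemma with a simple bit of arithmetic: if f_p is the fraction of civilizations that reach a posthuman stage, f_I the fraction of those that run ancestor-simulations, and N the average number of simulated people per such civilization (per real person), then the fraction of all observers who are simulated is f_p·f_I·N / (f_p·f_I·N + 1). A quick sketch of that formula (the variable names are ours):

```python
def simulated_fraction(f_p, f_i, n_sims):
    """Fraction of all observers who live in a simulation.

    f_p:    fraction of civilizations reaching a posthuman stage
    f_i:    fraction of posthuman civilizations running ancestor-simulations
    n_sims: average number of simulated people per real person

    Returns f_p*f_i*n_sims / (f_p*f_i*n_sims + 1).
    """
    x = f_p * f_i * n_sims
    return x / (x + 1)
```

The punchline is that if the first two fractions aren’t tiny, N is astronomically large (simulated people are cheap for a posthuman civilization), so the result is driven almost all the way to one – hence the third horn of the trilemma.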
When Bostrom published his masterfully written “Are You Living in a Computer Simulation?” in 2003 it was a red-letter day for armchair philosophers, aspiring futurologists, and fans of The Matrix, which had been released a few years prior – arguably The Thirteenth Floor (also released in 1999) is the more closely aligned film though.
Just about all of us have had a silly conversation with a friend who says things like “what if this is all a dream” or “what if it turns out we’re all in a coma aboard a spaceship.” But Nick Bostrom actually did the work to strip the idea down to brass tacks and present it in a jaw-droppingly simple way. By the time you finish this paper you should be questioning your reality.
Side note: if this is a simulation, whoever is in charge of the 2020 update is a real ass.
Last but not least, The GANfather
Ian Goodfellow. If you’re into AI, we just got your attention. And if you don’t know who that is, we envy you, because you’re in for a real treat. Goodfellow, as MIT’s Martin Giles dubbed him, is the GANfather. He’s responsible for creating the generative adversarial network, or GAN.
GANs are a type of AI system that makes some of the most impressive deep learning feats possible. All those cool “this _____ does not exist” sites that show off AI-generated images are powered by GANs. Deepfakes are powered by GANs, and so is just about any other AI that purports to generate novel content meant to imitate human work.
If there were a Mount Rushmore for modern AI architects, Ian Goodfellow would be on it. He was lead author of the team that wrote the original “Generative Adversarial Nets” paper back in 2014. Yoshua Bengio was on that same team, so you probably get two Rushmore heads in one paper here. The paper’s very readable and, as far as feet-firmly-on-the-ground AI papers go, it’s quite a good read. It might not be the most “fun” paper on this list, but it’s the one you’ll learn the most from. If you want to understand AI, read GANs.
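The paper’s core idea fits in a few lines: a generator tries to turn random noise into samples that look like real data, while a discriminator tries to tell the two apart, and each is trained on gradients from the other’s output. Here’s a deliberately tiny 1-D sketch with hand-derived gradients – the linear generator and logistic discriminator are our simplifications for illustration, not the paper’s neural networks:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_toy_gan(steps=4000, batch=64, lr=0.05, seed=0):
    """Toy 1-D GAN: generator g(z) = z + b tries to match data ~ N(4, 1).

    The discriminator D(x) = sigmoid(w*x + c) is trained to maximize
    log D(real) + log(1 - D(fake)); the generator is trained to maximize
    log D(fake) (the "non-saturating" variant mentioned in the paper).
    Returns an estimate of the learned shift b (ideally close to 4).
    """
    rng = np.random.default_rng(seed)
    b = 0.0          # generator parameter (shift applied to noise)
    w, c = 0.0, 0.0  # discriminator parameters
    history = []
    for _ in range(steps):
        real = rng.normal(4.0, 1.0, batch)
        z = rng.normal(0.0, 1.0, batch)
        fake = z + b
        # Discriminator ascent step (gradients derived by hand):
        d_real = sigmoid(w * real + c)
        d_fake = sigmoid(w * fake + c)
        w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
        c += lr * (np.mean(1 - d_real) - np.mean(d_fake))
        # Generator ascent step on log D(fake); d/db = (1 - D(fake)) * w:
        d_fake = sigmoid(w * fake + c)
        b += lr * np.mean((1 - d_fake) * w)
        history.append(b)
    # GAN training oscillates, so average the shift over the final steps.
    return float(np.mean(history[-500:]))
```

Even this toy version shows the adversarial dynamic: the generator’s shift drifts toward the data mean of 4, with the two players circling each other the whole way rather than settling cleanly – a miniature of the training instabilities that real GAN practitioners wrestle with.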
There are thousands of other great research papers out there so don’t stop here. If you’re really feeling sassy you can just go straight to Google Scholar and start searching for your own wacky research papers – we suggest searching for “multiverse,” “time travel,” and “Dyson spheres” for starters.
What’s your favorite research paper? Is there one you keep going back to for inspiration and comfort when you’re unsure what direction to take? Talk to @mrgreene1977 on Twitter.