For the last eight months, serial entrepreneur Bryan Johnson has conducted an experiment: He invites a small group of the smartest people he knows to dinner and asks them what they think needs to happen to reach their vision of an ideal world by 2050. The answers, from solving the climate crisis to curing cancer, never focus on improving human intelligence. But Johnson, who has committed $100 million of his own money to develop a wildly ambitious “neural prosthetic” that would essentially be able to reprogram the brain, believes that making humans smarter is key to helping solve every other problem.
First of all: what do we mean by "smarter"? Most people seem to think "smarter" means "I have more facts at my command", but here we are in 2017 walking around with always-connected supercomputers in our pockets and the sum total of human knowledge one Google search away, and even many "smart" people still don't know the difference between "equity" and "equality". Don't even ask what the dumb ones don't know!
Any discussion of augmenting anything should open with a discussion of what that thing actually is before we try to "augment" or "improve" it. It's not as if we aren't doing that work; it's that such work is agonizingly slow and difficult, and a lot of what we thought we knew about the brain, about intelligence, about human behavior, etc. has been shown to be misguided, or flat-out wrong, or just plain incoherent. I still run into people who analogize brains and computers, even though a digital computer is an entirely inapposite analogy for a brain — but there's been little incentive to disabuse people of that delusion.
From what my own feeble little brain has been able to cobble together, intelligence has little to do with command of facts, and more to do with having well-developed systems for testing reality. Skepticism, for instance, is a better marker of intelligence than theory-building. And by skepticism I mean the ability to robustly test things and act on the results, not simply swapping po-faced gullibility for compulsive rejection of "mainstream" concepts. (Alex Jones is not a skeptic of anything, least of all himself; and the one thing any person needs to be most skeptical of is the guy in the mirror.)
I am not in principle against people using technology to augment their brains, or bodies, or selves. I'm convinced we've been doing that since the beginning of human history anyway; the only thing that's different now is the methodology and the degree of intimacy between the human being and the tech in question. What I'm objecting to is the idea that augmentation automatically equals improvement, that the support we would get from neural prosthetics would make us smarter in the ways that actually matter.
I have an analogy that might make some of you twitch, but here goes. The other day I was talking about religion, and I said something like, "It doesn't give you anything you don't already have." Blunter: If you're already a jerk, getting religion will just make you a jerk who got religion. If you're already a nice guy, religion may help augment that. But it can't give you anything you don't already have, or anything you're not willing to achieve on your own.
Neural prosthetics seem like they have the same issue. If you're already motivated to improve your mind, you'll improve it by way of whatever means are available. A brain-plug might make the process faster, or more convenient, but for all I know that speed and convenience might well come at the cost of other things, leaving it no more useful in the end than just learning things on your own the so-called hard way. (Maybe it's hard for a good reason.) And again, if you don't particularly want to leverage it to improve yourself — in a real way, a way that would last when you don't have it anymore — then I doubt it'll matter much.
Part of the novel I'm currently working on, Always Outnumbered, Never Outgunned, has a story element in this vein. My earlier book Flight of the Vajra also touched on this, although as part of a more general discussion of human perfectibility by way of technological evolution. In AONO, there's a technology that has vague parallels to the problem I'm talking about here — not a brain-improvement technology per se, but a kind of neural prosthetic all the same. (If I'm cagey about details, it's only because I hate spoilers.)
It quickly became clear to me, as I wrote the book, that someone whose worldview is mostly shallow, self-centered, self-seeking, zero-sum, etc. would not gain much from such a thing except a powerful way to justify and bolster their existing worldview. It might make them a more efficient machine for one-upping others, but that's about it. Not much of a win, if you ask me.
Few people who have an investment — figuratively or literally — in human perfectibility want to hear stuff like this. There's a strong urge on their part to demonstrate that human problems are mainly technical problems, and that the main difference between their attitude now and the officious failed utopianisms of the past is that they are nice guys who mean well. And that they have better technology on their side. The latter is almost certainly true, but the former makes no difference I can discern.
One genuine advantage I could see for a technology like this, and one noted in the linked article, is how it could be used to overcome disabilities, injuries, or disease. That to my mind is the most sensible place to start with something like this: to fix things that are actually broken or that detract from the quality of one's life, instead of attempting to turn people pre-emptively into synthetic geniuses. (And, I should add, only the kind of genius that could be created by way of a brain prosthetic — in short, a genius of factual intelligence, not a genius of compassion, or a genius of creativity, or so on.)
I need to point out I'm not against making things less awful. I'm against the idea that things can be perfected. The former is about reducing, wherever possible and practical, the amount of wholly gratuitous suffering in the world, and about not adding to the existing reservoir of suffering if you can possibly help it. The latter is about aiming for a state where suffering of any kind is anathema, which to me sounds not like life but its opposite.
Want to see what else I have in mind? Check out my (new!) novel Welcome To The Fold, and show your support for it by registering at Inkshares and adding the book to your "Follow" list! Failing that, you can always buy one of my existing books, available on Amazon Kindle and in dead-tree format.