all 28 comments

[–][deleted] 22 points23 points  (10 children)

I've seen this link before (or perhaps some other version of it) and I think it's ridiculous. It starts off with an argument from ignorance: "That space flight is nearly impossible is clear from the failure of physics to provide any such method." Those words could have been written less than 70 years ago. The idea that because we have yet to discover a way to do something, it must be impossible, is simply a logical fallacy. Ironically we probably had a better understanding of space and physics in 1943 or even 1923 than we have of the brain today, despite all the advances in neuroscience.

From there it proceeds with a rambling, disjointed attempt to somehow substantiate the argumentum ad ignorantiam with a series of often inconsistent assertions. Evolution is a process that finds local, not global, optima... yet that couldn't possibly be the case with intelligence? Despite the enormous success of medicine in improving human lifespans and general health since the stone age, we're still employing the same old bullshit argument that nature knows best?

It isn't even as if nootropics for healthy humans is a thriving field of study. Most nootropics research is aimed at developing drug therapies for people whose illnesses or injuries have left them at reduced mental capacity, because that is what is marketable and profitable today. If such substances improve cognition in healthy humans at all, it's incidental. Gwern writes as if all the best brains in science have tried their hands at creating nootropics for healthy humans and failed, which is far from true. He also writes as if we have a complete understanding of cognition and its physical manifestation in the central nervous system, and have therefore all but exhausted our options there. We haven't even gotten started.

I'm not sure what the point of this essay is, since at times it seems to veer off in opposite directions, and also because elsewhere the author has expressed optimism about nootropics, even if the effects of the currently known batch are less dramatic than we might desire.

The human body is extremely complex. Not only is it enormously complex per se, it exists in symbiosis with billions of exogenous microorganisms. Only recently have we discovered the surprising fact that even such a seemingly irrelevant thing as the strain of bacteria in your gut can strongly impact thoughts, feelings and behavior. The brain is an ingenious mechanism we're far from understanding even halfway. And in just the past five years we have probably developed more psychoactive drugs than in the fifty before that, due to the explosion of the research chemical/designer drug market. Affordable personal DNA sequencing is on its way; services like 23andme, and whatever more advanced versions come along in the future, allow for a much more targeted, individual approach to anything health-related--including the possibility of improving cognition--than ever before. We have so much to learn, and we are living in truly exciting times when it comes to the brain, intelligence, learning, chemistry and pharmacology.

I'm a skeptic and pessimist at heart, but I like to think I have some sort of rational instinct to go along with it. I don't think the super-nootropics we have been promised, and which intrepid explorers are group-buying as we speak, can deliver on the idea of expanding your IQ by a standard deviation. I would not be surprised if, at the end of my lifespan, we are still no closer to finding such an intervention that actually works. But neither would it greatly surprise me if we actually did. We know so little in the big picture, and there are so many avenues of research still open to us, that it seems--excuse the wording--let's just say mentally challenged to flatly deny even the possibility of very effective nootropic interventions in healthy people on the basis of flimsy theoretical arguments that rely on false assumptions and faulty syllogisms.

The sheer arrogance with which this was written should be enough to turn many of you away from it. As if we know all there is to know already, and we've tried everything, and nothing worked, and now we should just give up. It's only been a century since Einstein taught us about the relativity of space and time, for fuck's sake. Less than that since we started developing quantum mechanics, and only a few decades after that we had nuclear power and nuclear weapons. Less than a century since the invention of information theory! Less than a century since Turing and Church invented the computer! Half a century since the internet! We've discovered more new shit in the last hundred years than in the previous thousand, and no comparable breakthrough has yet been achieved in neuroscience or the study of cognition in general.

This poorly argued pessimist drivel ought to be buried, and its author should be intellectually ashamed of himself.

[–]shitalwayshappens 4 points5 points  (0 children)

Having read several of /u/Gwern's essays, I think his/her style is usually scattered (in more explicit terms, without a point), but at the same time more ground gets covered (for better or worse). By common standards this may be regarded as poor writing, but I don't particularly mind. This particular article might be even more scattered than the others.

I'm regarding this as a rough heuristic brainstorming of different ideas lying around in the futurist/transhumanist/etc. community, applied to different concrete substances. Nevertheless, I find it lacking in rigor. In particular, evolution is treated as an explicit object even though it is never defined in the article. I think it's much more appropriate to treat it as the general game-theoretic phenomenon of "survival of the fittest", or as a hill-climbing algorithm, rather than the more specific instance of reproductive fitness. Specifically, Gwern talks about us having tools that evolution doesn't: but why isn't "us having tools" part of "evolution"? Why isn't Gwern writing this essay part of "evolution"? Clearly "evolution" is ill-defined here --- but I think in general he/she uses it to mean the process of attaining a local maximum with respect to reproductive fitness.

Ideally, he should set up a mathematical framework in which we can more clearly discuss consequences, invariants, increasing entropies, etc. Nash equilibrium might be a simple starting point, or perhaps a more general decision theory, or even reducing to chemistry and physics. But this would be too effortful for a brainstorming session like this essay.
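As a toy illustration of the local-vs-global-optimum point (my own sketch, not anything from the essay), here is a greedy hill-climber on a made-up one-dimensional fitness landscape with two peaks; whether it ends up at the global optimum depends entirely on where it starts:

    def hill_climb(f, x, step=0.1, iters=1000):
        """Greedily step to the best-scoring neighbour; stop when the current point is best."""
        for _ in range(iters):
            best = max([x - step, x, x + step], key=f)
            if best == x:
                break  # no neighbour is better: stuck at a local optimum
            x = best
        return x

    def f(x):
        # Made-up landscape: a small peak at x=1 (height 0) and a taller one at x=4 (height 3).
        return -min((x - 1) ** 2, (x - 4) ** 2 - 3)

    print(hill_climb(f, 0.0))  # stops at the small peak near x=1
    print(hill_climb(f, 5.0))  # reaches the tall peak near x=4 only because it started nearby

The same greedy dynamic is why "evolution optimized the brain" does not by itself imply the brain sits at a global optimum for intelligence.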

In general, though, I find it not very useful to utter philosophical mumblings. An "easier" way to achieve the same effect is to engineer the desired result, or to prove it impossible within some theory (like the no-go theorems of quantum mechanics). For example, computers nowadays can shut up any philosopher from a century ago who claimed machines could never replace human functions, and the psychological experiments of the last few decades can shut up Enlightenment philosophers on free will and tabula rasa.

That said, it seems like an overreaction to respond with such strong language.

[–]Rocketbird 2 points3 points  (0 children)

This is a good response. I think he has some good points but I got halfway through and had to stop reading because he wasn't making any sense anymore. The writing in this is awful, and incoherent at times. That said, he's not wrong in saying that changes seem to require trade-offs. I also liked the suggestion that nootropics enhance motivation, not intelligence. But things get a little muddy when you try to separate constructs in that way. Is "ability to focus" part of motivation, or is it part of intelligence? What about memory? What about appreciation of music? These are certainly things I've experienced with Piracetam, but to say that they're affecting my motivation and not my intelligence wouldn't be a wholly accurate statement.

My big question is: is intelligence an evolutionarily desirable trait? I hate to cite a movie, but I always felt the central argument of Idiocracy was pretty solid. Lower classes tend to be undereducated and have more children. Whether these children will somehow, someday overtake the intelligent folks is a question of future fiction, but at the very least, if being intelligent doesn't improve your potential to reproduce, that is somewhat hindering our progress as a species.

[–][deleted]  (5 children)

[deleted]

    [–]its4thecatlol 1 point2 points  (0 children)

    Gwern is one of the smartest, most generous people in this community. That's where my agreement with you ends.

    You just shit on simen, another great poster, without actually addressing anything he wrote! You tried to turn this into a battle of semantics, when the underlying argument is actually crystal clear.

    @OP: This has been discussed before a few months ago, use the search function. There was a great thread with gwern himself.

    [–][deleted] 1 point2 points  (3 children)

    You begin your reply by directly contradicting the first sentence in the essay. You end it by suggesting I should go die in a fire. Congratulations. The only thing missing from your post is an accusation that I'm literally Hitler. It's a good thing you plan on not further participating in this thread, because I can't even imagine how you could possibly stoop any lower, but I'm sure you'd find a way.

    [–][deleted] -3 points-2 points  (2 children)

    I seem to remember you suggested the other author should be shamed because of your imaginary disagreement with him. But I'm sure that's as low as you stoop.

    [–][deleted] 1 point2 points  (1 child)

    I think he should be ashamed of his intellectually shoddy work and his arrogant attitude on this matter, yes. I didn't call him a dick and I certainly don't want him to die in a fire. In fact I have a lot of respect for some of gwern's other writings, but this piece in particular and the idea it represents needs to die. I wouldn't exactly call that stooping particularly low, but whatever you say.

    [–]chyckun 0 points1 point  (0 children)

    Telling someone to die in a fire isn't "stooping low" apparently

    [–]virnovus 0 points1 point  (0 children)

    Also, this article all but ignores the fact that evolution does not select for any human measure that can be defined as "success". Really, the only thing that drives evolution is the number of offspring you have that reach sexual maturity.

    Still, I do get the feeling that intelligence is mostly genetic. Things like motivation and working memory can be improved with nootropics, but from all the literature I've read, it seems probable that intelligence is inherent to the structure of the brain, which is something that you can't change with drugs. For example, you can give a chimpanzee all the nootropics you want, but it won't ever bring him up to the same level as a person.

    A lot of articles written about intelligence tend to follow this "sour grapes" pattern. That is, the author seems a little bit miffed that he or she isn't more intelligent, and so postulates that intelligence really isn't that important, or is overrated, or something along those lines. It's funny to see. Of course, someone who is truly intelligent will seek out and eliminate biases in himself.

    [–]Bean_Ender 4 points5 points  (5 children)

    "Whatever receptors or buttons piracetam pushes could already be pushed by the brain the usual way. There is nothing novel about piracetam in that sense."

    Why take antidepressants when your brain could just say "let's produce and use more serotonin"? You could use that as an argument against taking any drug. Although I have to admit that's all I read; I just saw "piracetam", started reading there, and stopped shortly after, so maybe I missed something.

    [–]chyckun 0 points1 point  (4 children)

    It doesn't change the fact that he stated that

    [–]Bean_Ender 1 point2 points  (3 children)

    That's right. And your post doesn't change the fact that I stated something. So I guess I don't follow what you mean? Should we never discuss things that already happened since we can't change the past?

    [–]chyckun 1 point2 points  (2 children)

    I was agreeing with you

    [–]Bean_Ender 0 points1 point  (1 child)

    Sorry, I was 50/50 on that.

    [–]chyckun 1 point2 points  (0 children)

    Haha I wasn't very clear, sorry about that

    [–]ohsnapitsnathan 3 points4 points  (1 child)

    You could apply a version of this argument (actually a much stronger version) to infectious disease: multicellular organisms have been in evolutionary conflict with pathogenic bacteria for billions of years, so there's likely not much we can do to improve our defenses, because they're already at the limits of what's possible without serious compromises. And, using this approach, you would completely fail to anticipate the invention of vaccination or antibiotics.

    [–]Goof-trooper 1 point2 points  (0 children)

    Haha.

    If one is using more acetylcholine, one needs to create more acetylcholine (the brain cannot borrow indefinitely like the US federal government).

    [–]weatherram 0 points1 point  (0 children)

    Can someone please do an ELI5 for this? My brain hurt after about three paragraphs.

    [–]chyckun 0 points1 point  (0 children)

    Shitstorm of comments aside, that was a sad movie :(

    [–]Deeviant 1 point2 points  (7 children)

    I hate when people talk about intelligence as if it is something that is A) precisely defined and B) able to be precisely measured.

    [–]shitalwayshappens 2 points3 points  (2 children)

    Then what is a way to talk about intelligence that you don't hate? Or do you just hate talking about intelligence in general?

    Serious question, not trolling.

    [–]Deeviant 0 points1 point  (1 child)

    I believe it is nearly impossible for the vast majority of people to separate talk about intelligence from feelings stemming from ego.

    But in general, I find discussions around attempting to better define what exactly intelligence is somewhat productive. For the most part, however, I feel that people define intelligence like this: "Hmm, look at the stuff I'm good at; intelligence is X!", X being the stuff they are good at.

    [–]shitalwayshappens 0 points1 point  (0 children)

    Then what I wrote down below in response to /u/TheSystem_IsDown might be to your taste.

    [–]HarryLillis -1 points0 points  (3 children)

    How about an IQ test?

    [–]TheSystem_IsDown 1 point2 points  (2 children)

    Yes, that's what he hates. IQ (and EQ) tests can easily be gamed, and are affected by elements outside IQ and EQ. I don't think there will ever be a 'perfect IQ test' because you can't precisely measure something that is amorphous.

    The Heisenberg uncertainty principle in particle physics says that the more precisely you know a particle's position, the less precisely you can know its momentum; likewise, the more precisely you pin down a particle's energy, the less precisely you can pin down the time over which it has that energy. I feel this is a good metaphor for the brain: the more you drill down into one aspect with a test, the less you can know about other aspects.
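    For reference, the textbook statements of the two relations (standard physics, not something from this thread) are:

        \Delta x \, \Delta p \ge \hbar / 2 \qquad \text{and} \qquad \Delta E \, \Delta t \ge \hbar / 2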

    [–]shitalwayshappens 0 points1 point  (0 children)

    I think a more appropriate model is minimum message length, or data compression/encoding/decoding given a probability distribution over the world (for example via arithmetic coding). If the distribution models an agent's view of the universe, then the more the agent takes IQ tests, or otherwise ruminates about them, the more the distribution shifts toward a neighborhood of "IQ tests" in theory space, which means that compressing, encoding, and decoding data relevant to IQ tests (for example, the questions on them) takes less space and becomes faster, ignoring the space/time overhead of the model (i.e. the distribution) itself. At the same time, shrinking the encodings of IQ-test questions and answers necessarily expands the encodings of things outside them, by virtue of (lossless) compression being a bijection. Hence the agent becomes slower at handling other items.

    Ugh let me try a clearer analogy.

    A probability distribution can be set up to correspond to your model of the world. Roughly, the more you experience something and the better you know it, the bigger the probability assigned to it. So say you spend a semester studying chemistry; over the course of the semester, the items associated with chemistry get a bigger share of your total probability mass of 1. Another way to look at it: if you can describe something in only a few words, it has a big probability mass, and conversely, if you can't talk to me about it without a wall of text, it has a small probability mass in your model of the world. So, if you already knew what I'm trying to say, I could just say "ah, minimum message length is a better way to look at it", and you would understand (because the idea associated with "minimum message length" has a big probability mass in your head). But I doubt that's the case, so I'm writing a wall of text to explain my ideas agnostic of any underlying model of the universe (correspondingly, a probability distribution), so that you can understand without prior knowledge. Similarly, if you have studied chemistry for a semester, then ideas like "principal quantum number" can be wielded using only those three words, instead of the entire textbook chapter set up to describe the concept. That means it becomes easier to process these concepts, because 1) they take fewer words, and thus less time, to process, and 2) they take less "space" in your brain (specifically, working memory).

    Now this is very nice, since it means that concepts that you frequently use can be communicated more economically, and you can manipulate them in your head more easily.

    But this also means that you will have a harder time in a serendipitous conversation with a friend about how Jane Eyre is about the suppression of human sexuality in the Victorian age, when the last literary theory you touched was two years ago. These concepts shift down in probability mass in your model of the universe in response to the shift up by your newfound chemistry concepts, along with whatever else you learned in those two years. Hence you forget key terms like "bildungsroman" and "madwoman in the attic" and instead have to resort to "Jane Eyre really paints a good picture of the toils of growing up" and "it's pretty dehumanizing to entirely hide away one of the most important figures in the plot" to get at the same ideas, and so are less effective in the exchange.

    Similarly, by immersing yourself in IQ tests, you get pretty great at answering IQ questions because your model/distribution builds up to effectively compress, encode, and decode these concepts. But that also means that you become less wonderful at everything else.
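    Here is a toy numerical version of that zero-sum point (the numbers are invented purely for illustration; the only real fact used is that an ideal code assigns about -log2(p) bits to an item of probability p, and the probabilities must sum to 1):

        import math

        def code_lengths(dist):
            """Ideal (Shannon) code length in bits for each item: -log2(p)."""
            return {item: -math.log2(p) for item, p in dist.items()}

        # A made-up "model of the world": probability mass over a few concept clusters.
        before = {"IQ-test items": 0.10, "chemistry": 0.30, "literature": 0.30, "everything else": 0.30}

        # After months of IQ-test practice, mass shifts toward IQ-test items.
        after = {"IQ-test items": 0.55, "chemistry": 0.15, "literature": 0.15, "everything else": 0.15}

        for label, dist in (("before", before), ("after", after)):
            print(label)
            for item, bits in code_lengths(dist).items():
                print(f"  {item}: {bits:.2f} bits")

    IQ-test items drop from about 3.3 bits to about 0.9 bits, while every other cluster rises from about 1.7 to about 2.7 bits: making the practiced items cheap necessarily makes everything else more expensive, which is the bijection argument above in miniature.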

    In general, this can explain why, for a specific narrow-purpose task, it will always be cheaper in time and energy to engineer and/or use a machine that accomplishes exactly that, rather than a more general intelligence: it's easier and cheaper to hardcode the relevant probability distribution required for this job, than to shape a flexible distribution converging toward the same result.

    (We can also ditch the probability nonsense and instead use Kolmogorov complexity, AIXI, universal compression, etc., which would make the concepts described above much closer to reality, but also much harder to compute.)

    Finally, this is of course an imperfect model that doesn't explain everything in a human mind, but I think it very clearly shows certain information-theoretic principles that no intelligence can escape.

    [–]HarryLillis 0 points1 point  (0 children)

    Anything is measured more accurately when nobody is cheating; that a test can be gamed has no bearing on the efficacy of the examination itself.

    You can use whatever terminology you want, but the scientific consensus is that IQ is an accurate measurement.

    [–]phrresehelp 0 points1 point  (0 children)

    Not this stupid rubbish again, written by an author who appears to be in a manic state caused by whatever he or she ingested! I mean, come on, the whole fucking thing unravels and stops making sense halfway through his drivel.