Overweening Generalist


Thursday, October 6, 2016

Our Neurogenetic Archives: A Few Notes

I have a guitar student, and she had a high school assignment to write on John Locke and was worried. I piped up, unwisely: "Ask me anything about John Locke! I'm here to help ya!" She had the vaguest notion of what Locke was up to, but she did know he influenced the risk-takers and revolutionaries who established Unistat. I told her Locke has been shown to be pretty far wrong with his notion of our minds at birth as tabula rasa. Already, I had lost her.

But aye...I think the jury has come in with a unanimous decision on this: we come equipped, fully loaded. For presumably many but not all imaginable things. This has been established, in historical time, a few seconds ago. Or say 1950-now.

But to what extent are we loaded? Is it only activated with experience in-the-world, with language, with education? Certainly we inherit a shuffled deck of genes from mom and dad. Is that it?

(Aside: this genetic inheritance, modified by drugs, learning, changes in environment, bombardment by cosmic rays, alterations in diet, etc: this is my best unpacking of "Plato's Problem" as mentioned briefly in the review of Knight's book on Chomsky, below.)

In his lecture after winning the Nobel Prize for Medicine in 1968, Marshall Nirenberg talked about "genetic memories." Well of course, our genes can be said to have "memories" in a certain metaphorical sense, but details about this metaphorical sense? As I tried to read his lecture (quite technical...but it turns out Nirenberg was wrong about "nonsense codons"!), I couldn't get a line on it. He's certainly not going off about how the Akashic Records were "right after all!" or anything like that. Nirenberg gets as close to mentioning the astral plane as Keanu Reeves gets to winning Best Actor.

But that was way back in 1968.

Since then, there's been an explosion of knowledge about epigenetics: it turns out the experience-in-the-world of our immediate forebears does have influence on our genes/lives. Poverty has been linked to epigenetic changes and mental illness, for example. Epigenetics is the study of how genes get expressed, and the more I read about it the more my head spins. RNA has much ado about gene expression. It's not merely a "messenger," as many of us were told in skool. Some genes get turned on or off like a binary light switch; others get modulated like a rheostat, dialed gradually brighter or dimmer.

Here's another example from the past year: the methylation of the genes coding for the hormone oxytocin - a hormone linked to nurturing, trust and social skills - can get taxed by intense emotional experiences. What a wonderful example of the new reality of understanding biology: a gene that helps us do very important things such as falling in love with baby as soon as she is born? It's processed in the brain, like a drug. (Hell: I see oxytocin as one of the more interesting endogenous drugs we have, and we can synthesize it too!) This hormone/drug, via social interaction in the world, affects our behavior, and the social world/environmental feedback can alter the expression of the gene. This circular-causal feedback looping of nature/nurture ---> nature/nurture, ad infinitum, till death do us part - seems like a microcosm of how Everything works. (And remember: then the epigenetic effects can get inherited by the next generation, via what happened historically in the environment, and just, wow. So: death is not the end of our story. We're connected in ways we didn't know before.)

Gosh dad!: Father may pass down more than his genes: his life experience too?
Oh, my: a bad night's sleep can epigenetically alter your genes.
Our genetic cups runneth over: epigenetic drugs are in the works.
Not fair: Study of Holocaust survivors shows trauma passed on to children's genes.

Think of how all this impacts the roiling and boiling issue of income inequality...

There's plenty more where that came from. A fine readable book for non-specialists that I can point to 'cuz I read it and was enthralled: Nessa Carey's The Epigenetics Revolution: How Modern Biology Is Rewriting Our Understanding of Genetics, Disease, and Inheritance.

Combine this with a few books on the new synthetic biology, CRISPR techniques, and what the hell: quantum computing and ye head shall be spaghettified.



But back to the neurogenetic archives. They seem to have some ontological status outside the drawing room where the Theosophical expert waxes on about past lives. But to what degree?

Darold Treffert is a psychiatrist who's been studying savants and autistic people with extraordinary abilities in some domain of life. He's been at it for many decades. He became personal friends with Kim Peek, the person "Rain Man" was based on (though that character was a composite of many savants, says Treffert). In the beginning he was a traditional scientist who read Jung and thought it wasn't science: too soft. Now he thinks Jung was on to something; he thinks we may have genetic memories of things experienced in the past by others whom we often cannot identify. See his two books (mentioned in the text linked to) and give us a better explanation.

How wild this is! We can inherit knowledge? We can get bashed in the head and suddenly write symphonies, when before we couldn't even carry a tune? (Being somewhat conservative in certain areas, I'd rather not get my head bashed in and instead risk the continuance of not being a genius.) Treffert says we inhabit a metaphorically left-brain (linear, rational) society; maybe we can activate latent abilities by spending more time doing what the Kulchur tells us is "wasting time": making art. (Here's yet another argument for Basic Income?)

Timothy Leary and Robert Anton Wilson have a collective, dizzyingly rich series of speculations on neurogenetic memory, based on their reading in genetics, mythology, neuroscience, history, anthropology, and literature; they scattered their ideas throughout their many books, and I'd point to Leary's Info-Psychology and Wilson's Prometheus Rising for starters...

David Foster Wallace, in an essay on David Lynch collected in A Supposedly Fun Thing I'll Never Do Again, riffs on our topic, saying our internal impressions and moods are, "An olla podrida of neurogenetic predisposition and phylogenetic myth and psychoanalytic schema and pop culture iconography." (p.199 in my copy) I hadda look up "olla podrida."

Well now, I said to myself, "I think I write too much for this texting world. I'll try to make this OG spew a short one," and so I'll end with a quote from my favorite cognitive neurolinguist, George Lakoff:

"When we understand all that constitutes the cognitive unconscious, our understanding of the nature of consciousness is vastly enlarged. Consciousness goes way beyond mere awareness of something, beyond the mere experience of qualia, beyond the awareness that you are aware, and beyond the multiple takes on immediate experience provided by various centers of the brain. Consciousness certainly involves all of the above, plus the immeasurably vast constitutive framework provided by the cognitive unconscious, which must be operating for us to be aware of anything at all."
Philosophy In The Flesh: The Embodied Mind and Its Challenge to Western Thought, p.11

Thanks for bringing your immeasurably vast constitutive framework of your cognitive unconscious to the OG: see ya!

                                      artist Bobby Campbell made this graphic for me

Thursday, March 12, 2015

Free Will: The Law, Philosophy, and Microbes

It must have been around age 15 when I first encountered "the problem" of "free will." What a kick that some Greeks had said that everything reduces to atoms, and every billiard-ball atom in the universe impinges on every other billiard ball - including the ones in my "mind" and muscles - and so there's no free will. For a long time I thought it was sophistry - after I learned what that word meant.

At some point I started reading on "free will" and realized I could continue to read and think on it my entire life; there's quite a lot of ink already spilled on the issue. If it's an issue at all to you, that is. Because almost all Westerners assert their wills are free. Doesn't it just seem like you decided to read Overweening Generalist today, rather than skip it? You WILLED it, and it was so. And now I'm already boring you. I'll try to be amusing to keep you from "willing" your way on to the next of 23 zappazillion other possible Netpages.

I've found that there have been times since I was 15 - a long-assed time ago, friends! - when I thought like William James about the "free will vs. determinism" Big Q: To paraphrase James: Of course the world and everything is determined. And yet our wills are free. A sort of "free-willed determinism" must be the run of things. It was lines like this that won James lots of "man-in-the-street" fans; his academic colleagues? Not so much.

In other words, for James it's a tired subject. We've been debating it for 2500 years and can't come to a consensus, so let's just change the conversation, aye?

But then I'll get into periods when the topic is really hot. For example, I have mostly thought arguments against free will were hilarious, deep-down. Their adherents may have been dead serious. But really? You, Philosopher, did not have a choice but to write this rather dry piece of argumentation just so I could not choose to read it? (And yet I did read it...did I choose to or not? If it was so "dry" why didn't I choose to do something else? Clearly, there are more pleasurable things. Like masturbation...which is a lot like what I've just read from Mr. Philosopher.) Or: what if he's right?

Gedankens
I've spent entire weeks reclining on a divan trying to remind myself there's no "free will." Not even for Rush's drummer and lyricist Neil Peart. Neil was confused when he wrote that, probably under the spell of a ditzy libertarian. When I did or said something I later thought was ignorant I gave myself a break: It's just the way Things are. To borrow from William James again, I found this a delightful way to take a "moral holiday." I found that when I or others did kind things for others, it was probably a nice grace built into the fabric of existence. Others who acted like jerks couldn't help themselves. My stress levels seemed to dip. I'd constantly catch myself thinking in the "free will-ist" mode and remind myself that that was not allowed until next Monday, or whenever.

Being some sort of agnostic hedonist with Buddhistic and Taoist tendencies, I couldn't see this thinking linked to Judeo-Xtian ideas, although clearly: the Free Will v. Determinist worldviews (which I will from here on out refer to as The Main Event) have had huge play in theology and law. In Law, we apparently have a very, very strong need for people to be blameworthy, and therefore what's commonly called the compatibilist view holds sway. Things are determined, but there's enough room for moral choices, unless you were coerced, or drugged, or not "of sound mind" - and many other very interesting hedges...like maybe you murdered top public officials in cold blood at point-blank range because you were addicted to Hostess Twinkies, an addiction which was symptomatic of a non compos mentis-level of depression.

Compatibilists
In 1962 Strawson had the audacity to argue for a particularly hardcore compatibilist position: let's say you are a sober, healthy bus driver and a child runs out in front of your bus. You have no time to react. You hit the kid and he dies. Strawson thought - if I read him correctly - that the consequences of your actions are enough to hold you culpable. Forget about any extenuating circumstances. If I have free will of the kind I hope I have, I hold Strawson in contempt for a sort of robotic punitive dickishness all too common among fascist Law and Order types. (Funny: one week I decided to adopt a hardcore No Free Will and I'd read Strawson: he couldn't help it. Poor guy. Oh well, it's part of some larger, Weirder Plan?)

                                                    Dr. Samuel Johnson

Sidelight
You've all heard/read the bit about Boswell relating to Dr. Johnson about Bishop Berkeley's views about "reality": we can only have experiences of things, we cannot know, do not experience Abstract Nouns. According to Berkeley, you're having an experience reading this rather prolix blogger write about The Main Event on a "computer." That's about all you can say about it. You see the computer, the words. You can feel the computer. You cannot infer about anything else "out there" that's causing you to have further abstracted notions about what might be going on; "God" put all those interesting ideas in your head. The sense data came from Him too. Anyway: Johnson hears this and it pisses him off and he kicks a rock and says to Boswell, "I refute him thus!" Supposedly Johnson hurt his foot, took the pain to illustrate that things really are "out there" and he chose to demonstrate and feel it. Guerrilla ontologist Berkeley never meant to rouse ire in a dude like Johnson, but Johnson seems to have taken it as a challenge to his own free will, and the will to believe other stuff is really "out there." Like things to kick by the side of the road. I see merit in both writers' ideas. Johnson's "I refute him thus!" and the kick has been referred to by wiseacres of much reading as argumentum ad lapidem. I hear this fallacy in bars all the time.

Incompatibilists
These folks see The Main Event as skewed toward determinism and so are reluctant to blame. However, libertarian incompatibilists see free will as far more important than whatever there is that determines us (genes, history, our upbringing, environment, etc), so they pretty much reject determinism and do find others blameworthy. A few incompatibilists who do not find people blameworthy are pre-determinists of the olde fashioned kind: atoms and billiard balls and all that: we cannot possibly trace the contours of causation. Do you know why you have a headache? How memories are formed? How vision is processed? Do you have access to how a suite of genes are turning on right now, coding for proteins, turning other genes off, modulating others like a rheostat?

Far more incompatibilists are "skeptical" ones; a shorthand for them: we don't have enough free will to find others blameworthy. As a general reader, it seems to me more biologist-types are going this way. My favorite thinker who's an incompatibilist is the eminent neurophysiologist and baboonologist Robert Sapolsky of Stanford. The eminent philosopher Daniel Dennett claims 59% of philosophers in 2009 were compatibilists, according to a PhilPapers survey. He says only 12% of philosophers were determinists.

                                         Barbara Fried of Stanford Law; she's a
                                         strong proponent of skeptical incompatibilism

Sidelight
Isaac Bashevis Singer, 20th c. novelist, was asked about whether he believed in free will. He replied yes, he had no choice. Which reminds me of an old joke. Think about the eagle, frog and truck driver as thinking they were executing their actions freely:

Moses, Jesus, and a bearded old man are playing golf. Moses drives a long one, which lands on the fairway but rolls directly toward the pond. Moses raises his club, parts the water, and the ball rolls safely to the other side. 

Jesus also hits a long one toward the same pond, but just as it's about to land in the center, it hovers above the surface. Jesus casually walks out on the pond and chips one onto the green.

The bearded man's drive hits a fence and bounces out onto the street, where it caroms off an oncoming truck and back onto the fairway. It's headed directly for the pond, but it lands on a lily-pad, where a frog sees it and snatches it into his mouth. And then an eagle swoops down, grabs the frog, and flies away. As the eagle and frog pass over the green the frog drops the ball out of its mouth and the ball lands in the cup for a hole-in-one. 

Moses turns to Jesus and says, "I hate playing with your dad."

Why I'm Once Again On a Kick Over The Main Event
It has to do with the literally hundreds of articles I've been reading over the past year on how bacteria in our gut have been strongly linked to debilitating dis-ease and how those microbes can influence our thinking. Our microbiome in general (bacteria, viruses, fungi and other microbes) outnumbers "our" own cells 10-1. Peer-reviewed, well-designed studies have linked our microbiome to obesity, diabetes, atherosclerosis, asthma, colon cancer, ulcers, irritable bowel syndrome, lymphoma, malnutrition, hypertension, liver cancer, psoriasis, even ear wax. Whether we were born vaginally or by Caesarian seems to have an enormous influence on our microbiome, and therefore our immune systems and therefore our general health. All of this makes me think 1.) In scientific endeavors, we, as a species, hardly know anything so far. 2.) This stuff is very exciting and offers real hope for the cure or alleviation of lots of human suffering, but the complexity is staggering. 3.) If all of this stuff is true, it has bewilderingly fascinating implications for Qs surrounding The Main Event, no? One thing most of us can say: "Boy, my gut bacteria have really done a number on me!...I just don't know to what extent, to what part of 'me'...and why. I don't even know all that much about how bacteria work."

                                                Toxoplasmosis cycle

Briefly: A Few Other Mitigators
Enforced miseducation. Public Relations and advertising. Kahneman and Tversky's uncovering of a litany of unconscious biases in human minds, even the best of those minds, including these two last named, one of whom won a Nobel Prize. Side effects, TV, social media, memes, possibly quantum indeterminism.

Sounds Like Science Fiction: Toxoplasma gondii
A Czech biologist named Jaroslav Flegr, sitting in then-Soviet-controlled Czechoslovakia, was reading a Richard Dawkins book. Dawkins wrote about how a flatworm gets into an ant and hijacks its nervous system by altering a protein. When the temperature drops, ants normally go underground. But a zombified ant instead heads to the top of a blade of grass, where its mandibles clamp hard onto the tip of the blade, until it's eaten by a cow or a sheep. In the ungulate's stomach, the flatworm is in the perfect environment to reproduce. Around 1990, Flegr started thinking about his own behavior and the ant's. He knew some things about Toxoplasma gondii, a single-celled protozoan that cats carry in their bodies. Flegr remembered walking out into heavy traffic and not jumping out of the way when cars honked. He had openly criticized the communists running Czechoslovakia, which could have led to his imprisonment, but he was lucky. When he was studying in Turkey, there was sectarian violence and gunfire, but he stayed calm, to the surprise of his colleagues and even himself.

Flegr wondered if he had been infected by Toxo and if it had caused this odd behavior. Luckily Charles University in Prague, where he'd recently been hired, had just developed a superior test for Toxo: he had it.

It turns out around 10-20% of Unistatians are infected with Toxo. 30-40% of Czechs are infected. In France: up to 55%, probably due to eating undercooked meat. Billions worldwide are infected with it. It gets into your brain - I know this sounds like a Lovecrafty psychedelic horror story, but it's true! - and hibernates there, causing cysts. Now: take a guy like Flegr: he's not all that worked up about it, and he has it and knows as much about it as anyone. So it's not that horrible. But it is pretty effing weird...

Here's what Toxo does in your brain, according to the latest research: it forms little cysts inside certain neurons, quietly altering connections. If you watched the Sapolsky video I linked to above, you've heard a lot of this already. Toxo alters those parts of the brain that respond to dopamine and it just so happens its primary actions on human behavior have to do with the basics, the primal circuits: sexual arousal, fear, and anxiety. The protozoan Toxo knows exactly what to do to ramp up production of dopamine, causing pleasure-seeking: sex, drugs, rock and roll.

Toxo alters trust in others and how outgoing we are, and this is sex-specific: men become more introverted and suspicious while women become more extroverted and trusting. Isn't this the WEIRDEST stuff? Toxo alters our response to scents. Lab rats infected with Toxo loved the smell of cat urine, which is supposed to scare the hell out of them. Toxo-infected rats become easy prey for cats...which is just what Toxo wants! Be aware of the kitty litter box! Wash your vegetables very well, cook your meat well-done, try not to drink water that might be contaminated with cat feces. (Difficult in much of the Third World, but they have other problems.) Toxo is linked to car crashes, suicides, and schizophrenia.

Car crashes? Yea: people just aren't as vigilant on the road with Toxo on the brain. Toxo-infected folks are 2 1/2 times more likely to be in a car accident. People don't mean to be bad drivers; they just sorta don't care all that much behind the wheel. Lapses of concentration, possibly an increased propensity to drift into bizarre daydreams? Sapolsky thinks this is just the tip of the iceberg: there are probably all sorts of "puppetmaster" microbes we haven't identified yet. Sapolsky also says the damage done by Toxo to drivers is not as bad as that done by drunk or texting drivers. To make the roads safer, deal with drunks and texters first. Just the fact that a protozoan can get into your brain and influence us in such intimate ways: ain't life grand?

How does all this alter your ideas about The Main Event?

The world of biology is thronged with stories about insects, fish and crustaceans becoming "zombified" by some other organism with its own plans.

Toxo aside, Sapolsky the incompatibilist thinks we can't come to grips with not having free will, and we suffer for it. For him, every move we make is part of an intricate cascade of genetic, cellular, cultural and personal factors. Toxo is just one more Damned Thing.

                                    Public intellectual and philosopher Daniel Dennett

Daniel Dennett
I'll give a compatibilist the last word here. I find his arguments nuanced but I also find him arrogant. He thinks neurobiology is no place to think about The Main Event. He rejects quantum indeterminism because we must look for explanations for a "free will worth wanting" at a higher level of complexity, a more human-level explanatory scheme. For Dennett, we are indeed enmeshed in causality and yet we are autonomous free-willists. One narrative he goes for comes from John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior: when we take an intentional stance we have a theory of mind: we know what others know, and know they know we know X, Q, Z, etc. On this level, we are free agents who can plan for possible exigencies, make rational decisions and be held accountable. He's used animals and plants as examples of living things that cannot possibly have the intentional stance. But some of the stuff I've read about plant and animal behavior lately? I wonder if Dennett is guilty of not reading enough outside his own world of Expertise. I do like this passage from him, from an article in Prospect magazine, on our topic:

What people seem to want - though articulating this idea causes them to backtrack in embarrassment - is to be a sort of god, perched somehow on the edge of the physical universe, neither a part of it or remote from it, able to interfere "at will" with its ongoing streams of causation, without at the same time being caused by these very streams to choose which of the options to favor.

Thanks for reading this bloated rant...but maybe you had no choice?

just a few sources that were used here:
Dennett's review of Sam Harris's book Free Will (2012):
http://www.naturalism.org/Dennett_reflections_on_Harris's_Free_Will.pdf

Dennett's article from Prospect, in which he champions fellow philosopher Alfred Mele:
http://www.prospectmagazine.co.uk/features/are-we-free

In case you missed the link: Sapolsky's 25 min video-talk on Toxo:
http://edge.org/conversation/toxo

Kathleen McAuliffe's 2012 article on Flegr from Atlantic:
http://www.theatlantic.com/magazine/archive/2012/03/how-your-cat-is-making-you-crazy/308873/

Discussion about UC Davis study about how random fluctuations in the brain may allow for free will:
http://www.3quarksdaily.com/3quarksdaily/2014/09/randomness-the-ghost-in-the-machine.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+3quarksdaily+%283quarksdaily%29

Barbara Fried's "Beyond Blame," makes a case for abandoning blame:
http://bostonreview.net/forum/barbara-fried-beyond-blame-moral-responsibility-philosophy-law

Sci-Am: "Is Free Will An Illusion?":
http://www.scientificamerican.com/article/is-free-will-an-illusion/

"The Body's Ecosystem," The Scientist, Aug 2014:
http://www.the-scientist.com/?articles.view/articleNo/40600/title/The-Body-s-Ecosystem/

microbiome superstar Rob Knight's TED talk:
http://www.ted.com/talks/rob_knight_how_our_microbes_make_us_who_we_are?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+TEDTalks_video+%28TEDTalks+Main+%28SD%29+-+Site%29

"Turd transplant leads to rapid weight gain and obesity" Boing Boing:
http://boingboing.net/2015/02/07/turd-transplant-leads-to-rapid.html

"The E. Coli Made Me Do It": gut microbes and human behavior, New Yorker:
http://www.newyorker.com/tech/elements/the-e-coli-made-me-do-it
"Do Gut Bacteria Rule Our Minds?":
http://www.ucsf.edu/news/2014/08/116526/do-gut-bacteria-rule-our-minds
"Can Microbes in the Gut Influence the Brain?":
http://www.livescience.com/49373-google-hangout-on-brain-and-microbiome.html

"The Super-Abundant Virus Controlling Your Gut Bacteria":
http://www.newscientist.com/article/dn25954-the-superabundant-virus-controlling-your-gut-bacteria.html#.VQFhESgx9FL

Plato and a Platypus Walk Into A Bar..., by Cathcart and Klein

The Epigenetics Revolution, by Nessa Carey


Monday, March 24, 2014

Recent Research on Odors

Last July I read a delightful essay by a former chemistry teacher, who was responding to an article in Scientific American that defended the minor, competing theory of how olfaction works in humans and, presumably, other mammals and critters: that each molecule has a distinctive quantum vibration, and this is what distinguishes smells for us. A hydrogen atom in a molecule was substituted with a heavier deuterium isotope, which technically did not change the molecular structure of the original, but both flies and people could smell the difference. Previously, the idea of a quantum vibration working in the nervous system was laughed off by detractors as "fashionable junk science." The reigning idea of olfaction is that a molecule docks in one of our 400-or-so different types of odor receptors, and each receptor acts in concert with the others. Once docked, a chain reaction occurs and the brain recognizes, "Hey, that smells like sandalwood," or "Ewww! What smells like rotten eggs in here?" The author of the essay, Ruchira Paul, wasn't entirely convinced of the quantum vibrational theory, but was still open to it.

What I liked most were Ruchira's observations that we have a limited vocabulary for odors, that we don't have accurate standards for measuring smells, that our memory of odors lasts longer than our memory of sights, and that our sense of scents seems uniquely intimate in its link with our own biographies and memories, our history. This seems probable because olfaction is part of the limbic system. She writes that smell is the "forgotten sense" in the sense that, among psychophysical tests of our perceptual apparatus, researchers have had better instruments for testing our range of detection of differences in sight and sound, because those are purely physical phenomena, while our senses of smell and taste are chemical and thus more unwieldy and difficult to measure.



The physics: we have three types of color receptors (cones), and researchers have estimated humans can distinguish about 10 million colors. Wavelengths of light turned out to be quite amenable to measurement.

Our ears are very complex, miraculous little organs working in concert, and biophysical research in the branch of physics called acoustics found that humans could distinguish differences in around 500,000 wavelengths of sound, and we now know that this number diminishes with age. (<---Depressingly, I found I dropped out at 14kHz. And yes, I'm close to twice 25.)

But what about odors? Chemistry-detection/measurement turned out to be more of a sticky wicket. Many of us grew up hearing and believing that we humans are completely defeated by dogs in our ability to detect odors. If you have an old biology textbook hanging around the house it probably says humans can perceive about 10,000 different odors, while dogs detect 300,000,000. This turns out to be a vastly un-empirical guesstimate from the 1920s. How far-off the guesstimate was we'll get to in a moment.

A year ago, March 2013, I began doing some research on medical diagnoses via analysis of odors, because of all those articles on dogs I'd read over the years: how they could detect cancer and all that. It turned out to be fascinating, hot, exciting stuff, and maybe I'll do a separate blogspew on it one day, but just a quick diversion into that area...

Researchers at Cedars-Sinai in Los Angeles say, of course: obesity is not difficult to detect. I mean, just look at that dude! And why are people obese? Well, obviously: they eat too much and don't exercise enough. Jeez, no foolin', Sherlock? Tell me something we don't know! How about this: certain gas-emitting bacteria from the microbiome in your gut show up on your breath, and the ratio of gut bacteria associated with fatness to those that are not may be a deeper reason why you're obese. The implications are <ahem> large. And this gives our overweight loved ones cause for hope, because if we can figure out how to alter our gut bacteria ratio via drugs or even simple probiotics? We could be on the way to defeating obesity. (And oh man! This has become a hot research topic; there's a lot riding on making this work.) ("Doctors Detect Obesity Bug On Breath")



Also, dig: 11 months ago, in PLOS ONE, a possible discovery of individual human metabolic phenotypes! (Human wha?) Okay, our gadgets are now becoming so sophisticated that the chemical world is becoming much easier to map, finally. Maybe it will soon catch up with the purely physical phenomena we can measure. Researchers in Zurich noted that, despite fluctuating factors involving diet and the gut microbiome, people's urine remains "highly individual," and that urine phenotypes (phenotypes: that which we can observe; genotypes: an organism's genetic makeup, which codes for the genetic expression that largely gives rise to that which is phenotypic) persist over time. The Zurich researchers had a group of subjects exhale into a mass spectrometer over nine days, and found that, "consistent with previous metabolomic studies based on urine, we conclude that individual signatures of breath composition exist." I've even heard this individual-breath-signature idea bandied about as a way to get rid of all our passwords, but I'm not sure if the geek was joking or not.

Back to our general sense of smells...

Last September a study appeared in PLOS ONE that I found intriguing: in 1985 a book appeared called Atlas of Odor Character Profiles, by Andrew Dravnieks. Researchers used this book as a basic data set to start with, and with it they have determined there are around ten basic, tightly-structured categories of odor. They are:

1. fragrant
2. woody/resinous
3. fruity (non-citrus)
4. chemical
5. minty/peppermint
6. sweet
7. popcorn
8. lemon
9. pungent
10. decayed
[The last two are both "sickening"]

What they hope to do now is demonstrate the soundness of this - to me, overly-rationalistic take, but what do I know? - model by predicting how a given chemical compound is likely to smell. (My first guess? Number 4.) The researchers used very elaborate statistical techniques to arrive at the 10 and will continue to do so as they test their model. And maybe I shouldn't be so snarky: the basic model for the sense of taste has remained the same for very many years, only going from four to five since 1985 in the West (sweetness, sourness, bitterness, saltiness, with the relatively recent addition of umami).

I hope these guys are on to something, if only for the reason that we can all internalize these ten categories and then invent more words to describe nuances within each category. With this research, Ruchira Paul's observation that we don't yet have accurate standards by which to measure smell will have been eclipsed by some new, "objective" model.

The Latest: Humans Can Detect One TRILLION Odors ("Conservative Estimate")
You may have heard the news from last week. See HERE for a decent overview. Researchers at Rockefeller University took 26 participants and used 128 different odor molecules. (In the actual phenomenological-existential "real" world there are vastly more odors, but that's why this research is so brilliant.) They would test a person with three vials: two containing the same mixture, one containing a different mixture drawn from the same 128 odors. The result: even when the two different mixtures shared up to about half of their component molecules, people could still smell the difference and pick out the odd vial. The researchers admitted that the admixtures of odors from the original 128 were often "nasty and weird." Think about it: they could mix 10, 20, or 30 odors, in any combination from the 128. This yields trillions of different scents. And people could detect the differences! One basic odor of the 128 may have been "orange," another "spearmint" or "anise." But they mixed them together in all sorts of groupings. No wonder they were "nasty and weird."
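The combinatorics here are easy to sanity-check with a few lines of Python. This is just a back-of-the-envelope count of raw mixtures; the study's actual "one trillion odors" estimate came from extrapolating measured discrimination thresholds, not from this simple count:

```python
from math import comb

# How many distinct mixtures can you build by choosing 10, 20, or 30
# components from a palette of 128 odor molecules? (Raw counts only.)
for k in (10, 20, 30):
    print(f"choose {k} of 128: {comb(128, k):,} possible mixtures")
```

Even the smallest case, 10-component mixtures, runs to hundreds of trillions of possibilities, so "trillions of different scents" is, if anything, conservative.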

We have around 400 different smell receptors, working in concert. The smell of a rose comprises around 275 different molecules in unique combination.

One of the olfactory researchers, Andreas Keller, said, "The message here is that we have far more sensitivity in our sense of smell than we give ourselves credit for. We just don't pay attention to it and don't use it in everyday life."

I want to see this study replicated many times. It almost seems too wonderful to be true. I hope there's no Clever Hans Effect tainting the research. It makes me wonder about training humans to smell cancer like dogs, but we seem so biased toward sophisticated gadgetry in this regard, and against dogs and human perceptual apparatus alone, that I won't hold my breath...or nose.

Imagining Smells: An Uncommon Gift
Or so Oliver Sacks tells us. Most of us have little trouble conjuring in our minds a sight or sound from our vivid past. But it's rare to summon an induced hallucination of odor. However, some can do this, and Sacks relates what one "Gordon C." wrote to him in 2011:

Smelling objects that are not visible seems to have been a part of my life for as long as I can remember....If, for instance, I think for a few minutes about my long dead grandmother, I can almost immediately recall with near-perfect sensory awareness the powder that she always used. If I'm writing to someone about lilacs, or any specific flowering plant, my olfactory senses produce that fragrance. This is not to say that merely writing the word "roses" produces the scent; I have to recall a specific instance connected with a rose, or whatever, in order to produce the effect. I always considered this ability to be quite natural, and it wasn't until adolescence that I discovered it was not normal for everyone. Now I consider it a wonderful gift of my specific brain.
-pp.45-46, Hallucinations

On the other hand, there are other, more terrifying olfactory hallucinations described in this wonderful book: people who had traumatic accidents, who were violently attacked, or who witnessed something horrific may, when by chance encountering the smell (or similar smells) associated with the traumatic moment, experience a shell-shocked "replay" of the Very Unpleasant Moment.

Let us tend instead to those more common moments when some odor sends us back in time to a comforting or interesting moment; the olfactory/memory nexus seems to trigger these far more often than it triggers traumatic memories.

Wednesday, October 2, 2013

The Drug Report: Crisis In Psychopharmacology

It's been at least 30 years since a truly new drug has hit the market that addresses the needs of patients suffering from depression, anxiety, manic depression (now rather bloodlessly called "bipolar disorder"), and schizophrenia. Any "new" drugs in the last 30 years have been basically variations on older, established drugs (called "Me Too" drugs), an effort by drug companies to keep up with the competition. These non-new "new" drugs are almost always marketed as "blockbuster" or "revolutionary" therapeutics, touting fewer side effects than older, competing drugs. They are not new, and the side effects are just different, not fewer. 50 or so psychiatric drugs bring in $25 billion a year in Unistat alone. And they're pretty lousy.

(I know, I know: you'd be far worse off without the one that worked for you. Hey: they do some good. For some people. I want better drugs for you, is all. And we were promised them with the 2000 mapping of the human genome. So...where are they? Later.)

                                                        serotonin

The drugs people use - by every estimate I've seen, between 20% and 25% of the Unistat population takes at least one of these - were discovered by accident. By serendipity. In the 15 years after 1945. In 1952 a tuberculosis drug didn't work for TB, but iproniazid sure elicited euphoria when tested! Bingo: the first antidepressant. The drug that became Tofranil was supposed to work for schizophrenics, but it didn't help them, only made them run naked into town, laughing. Another antidepressant. In 1949 lithium was discovered, by accident, to treat manic depression. In 1957 Leo Sternbach was about ready to give up his research into a class of antihistamines - things were looking like a dead end - when he stumbled onto the benzodiazepines: your Valium, Xanax, Lorazepam, Klonopin, etc.: an empire of anti-anxiety drugs, and a huge influence on the tonality of culture in the West in the latter half of the 20th century.

With better technics, we learned much more about neurons and neurotransmitters. The SSRIs seemed to treat depression and anxiety. They were really the last big breakthrough. Ever since then, clinical trials that have made it to Stage III have been nothing but huge, sad, very expensive wastes. And so Novartis, GlaxoSmithKline, AstraZeneca, Pfizer, Sanofi and Merck have by and large quit trying. They've halted clinical trials and moved on to research that shows more promise. The pipeline for new psychopharmacological drugs is dry.

                                    psilocybin, very much like serotonin in structure

Wait a minute: with more neuroscientists than ever before, far better imaging devices, and a tremendous acceleration of knowledge about the human brain over the past 30 years...why? And mental illness takes an increasing toll on us. If not you, then someone you know. Why is this so difficult? Is it because what R.D. Laing called "the medical model" finally showed its hand? (A pair of nines?)

Again: our technology to map our cells, genes, and organs at ever finer grain is greater than ever. We now have a deeper understanding of the human genome, an explosive discovery of the complexity of the epigenome, and increasing understanding of how our environment and microbes interact with us...so why don't we have a drug that will cure depression by now? Are we simply too complex to understand? Were we destined to be granted a brief window of time in which a few "happy accidents" would yield up as-good-as-it-gets, and it all ended 30 years ago? What about our computing power and pharmacological knowledge? Isn't it also subject to Moore's Law: a doubling roughly every 18 months? Shouldn't we have had a bevy of breakthroughs by now?

What are we doing wrong?

In 2011 Eli Lilly thought they had a breakthrough for schizophrenia. They'd given PCP to mice, then their new drug and...the mice calmed down! Everything went well. They got to Stage III clinical trials (humans) and 18 months later the drug was dead. Placebos worked just as well. Lilly is another company that has all but given up now too.

                                    LSD: like psilocybin and serotonin, structurally

Some New Ways of Thinking and Genuine Promise 
Steven Hyman of Harvard and M.I.T. knows this field well. He was quoted in an article I read as admitting of his colleagues, "People are tired of curing mice."

Let's go back to the last breakthrough: Prozac and all its cousins.

It had been assumed that, when those happy accidents occurred, there must be a theoretical basis. Pharmacologists have always acted like they were on top of what was going on, but the trade secret was they were faking it: when a drug worked, it went on the market, people used it, and it "worked" well enough, but at first the chemists and psychiatrists had no idea why. With better understanding of the brain, they revived the ancient model of the imbalance of humors as an explanatory scheme. Only they juiced it up: they found these drugs altered neurotransmitters. Therefore, the lack of the neurotransmitter caused the disease! It seemed quite plausible, and very much like the hardcore finding that insulin works for diabetics.



Nassim Nicholas Taleb says this is a classic case of the "reverse-engineering problem": drop an ice cube on the floor and then go play cards with your friends in the other room. Can you visualize the cube melting down into a tiny pool of water? Of course you can. You walk back into the kitchen and see a tiny pool of water where you had dropped the cube. It's pretty straightforward. Now: imagine walking down the street and coming upon a tiny pool of water. A little spot of wet. How many ways can you dream up the cause of this spot?

A cop comes upon a drunken man looking for his keys, at night, under a streetlight. The cop asks the drunk why he keeps looking under the streetlight, and the drunk says it's because the light is so much better there.

Obviously, even our best researchers have been looking where the light was bright. And the reverse-engineered explanation of our not-all-that-great/we-can-do-better psychopharmacological drugs? Human. All-too human.



The neurotransmitters are not the cause of the mental illness. They merely point at the underlying cause; neurotransmitters (dopamine, serotonin, norepinephrine, etc) are tangential and partial. Reverse-engineering to allow more serotonin to remain in the synaptic gap between neurons was a genius move; too bad there are a handful of studies that show SSRIs work little better than placebos. (For some people they have worked well enough; I don't want to slight this!) All in all, there's a "truthiness" about depression drugs.

We treat everyone the same in studies, while knowing they have variable epigenomes; this is receiving major research attention and seems quite promising, to my eyes. We also have a semantic problem: experts deal with a patient, make observations and run tests, then name the disease they "have." This is a major problem, because people and diseases do not fall into our socially constructed, convenient categories as neatly as we'd like. This problem is now far more acknowledged than ever, which seems promising to me. One example is the Research Domain Criteria: map behavioral abnormalities and symptoms and link them to specific causes in the brain, without labels like "schizophrenia" or "panic disorder." Why is this approach better? Because it's more targeted. Instead of looking for one or two neurotransmitters that "cause" schizophrenia, we try to find out specifically what causes people to hear voices, or become catatonic.

The idea that we must take 18 years from conception through clinical trials is being re-thought. Even more crucially for mental disease: non-human animal studies long ago reached diminishing returns. Now the idea is that small-scale, carefully controlled studies on humans will speed up the process and may yield breakthroughs in shorter periods.

Another area of promise: when a drug failed, it often worked for a few people. But our gold standard of drug testing: double-blind and placebo-controlled? The rules were that if the placebo worked as well as the drug, throw out the drug. But the people who were helped probably should have told us something.

Along those lines, there is a strong call to restore abandoned or "invisible" clinical trials to correct the scientific record. We may learn some very interesting things from "failed" trials.

The techniques surrounding stem cells have accelerated at a dizzying pace, and for the better: researchers can now test cells and drugs in a dish and make very good guesses as to whether a compound would have some efficacy.

With the mapping of human genome in 2000, hundreds of utopian promises were made that now seem embarrassing or outright quackery. But there was reason to be optimistic. We thought because we were very complex, we'd have the most genes, but instead of 100,000 we only had about 21,000. Grapes have more genes than us: this was nothing like what we'd expected. Worse: 13 years later we now know that a "bigger" system - in terms of complexity - governs the genome: the epigenome. It turns out that RNA plays a far, far bigger part than we'd thought. The complexity can seem overwhelming.

In 2002 researcher Andrew Hopkins came up with an eye-opening paper on the "druggable genome." Okay: we'd thought we had 100,000 genes; we have closer to 21,000. He estimated that only about 10% of those genes code for proteins that can bind to small molecules, which is how drugs work, basically. So: about 2,100 genes. But he estimated that, of those, only about 20% would be likely to be involved in disease. So now we're down to about 420 possible targets. And then he guessed we'd already discovered 50% of those (probably accidentally?). We only had 210 targets left? For all diseases, not just mental illnesses? Not exactly a rosy scenario. But...
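Hopkins's funnel is just multiplication; here's a quick sketch (the percentages are the rough estimates quoted above, nothing more precise):

```python
# Hopkins's "druggable genome" back-of-the-envelope funnel.
genes = 21_000
binds_small_molecules = int(genes * 0.10)           # ~10% code for proteins that bind small molecules
disease_linked = int(binds_small_molecules * 0.20)  # ~20% of those likely involved in disease
already_found = int(disease_linked * 0.50)          # ~half assumed already discovered
targets_left = disease_linked - already_found
print(binds_small_molecules, disease_linked, targets_left)  # 2100 420 210
```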

Cheminformatics! This is a burgeoning discipline using the aforementioned computational doubling: there are tens of thousands of compounds in digitized libraries. Do you test them all? Two guys wrote an algorithm to teach a computer to sift through a welter of data on TB, which is becoming antibiotic-resistant. A Big Deal, quite threatening to all of us, potentially. Their algorithm said: find all compounds that are like the drugs that used to work on tuberculosis. So you get that data set. Then the algorithm says: throw out every compound known to be toxic to mammalian cells. You have a smaller set, but a safer one to work with. The algorithm rediscovered a 40-year-old drug that had been shown to have anti-TB properties but had been forgotten.
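The two filtering steps can be sketched in a few lines. All the data and field names below are my own invention for illustration; real cheminformatics pipelines score structural similarity with chemical fingerprints, not simple name lookups:

```python
# Toy sketch of the two-step TB cheminformatics filter described above.
known_tb_actives = {"isoniazid", "rifampicin"}   # drugs that used to work on TB
toxic_to_mammals = {"compound_x"}                # known mammalian-cell toxins

digital_library = [
    {"name": "compound_x",  "resembles": "isoniazid"},
    {"name": "old_drug_42", "resembles": "rifampicin"},  # stand-in for the forgotten 40-year-old drug
    {"name": "compound_y",  "resembles": "aspirin"},
]

# Step 1: keep compounds that resemble drugs that once worked on TB.
candidates = [c for c in digital_library if c["resembles"] in known_tb_actives]
# Step 2: throw out anything known to be toxic to mammalian cells.
safe = [c["name"] for c in candidates if c["name"] not in toxic_to_mammals]

print(safe)  # ['old_drug_42']
```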




Even more interesting and promising: researchers in Cambridge, MA have taken messenger RNA (mRNA), an ultra-fragile molecule which, when injected, activates the body's immune response, tweaked a couple of "letters" in its nucleotide sequence, and made a non-fragile mRNA that does not turn on the immune system. What this could do is take the information from the DNA in a gene and "fix" missing or broken proteins in a cell, in effect causing a patient with a (probably inherited?) protein abnormality to make a drug inside their own cells!

Nessa Carey, a gifted explainer of how epigenetics works in our bodies, has urged us to be cautious about getting too excited over drugs based on DNA-RNA, because so far, "One of the major problems with this kind of approach therapeutically may sound rather mundane. Nucleic acids, such as RNA-DNA, are just difficult to turn into good drugs. Most good existing drugs - ibuprofen, Viagra, antihistamines - have certain characteristics in common. You can swallow them, they get across your gut wall, they get distributed around your body, they don't get destroyed too quickly by your liver, they get taken in by cells, and they work their effects on the molecules in or on the cells. Those all sound like really simple things, but they're often the most difficult things to get right when developing a new drug."

Finally, there is a very real call to combine all our new technologies with an active hunt for happy accidents, like in the 1945-60 period. We find as many compounds as possible that could have efficacy, get people willing to be guinea pigs to try them (we have far better ways to guess at what's likely to have horrendous side effects or death-dealing qualities, but we're by no means "covered" here), and see what happens! Yes, the dark side is that the poor will probably be the ones to sign up...How do we find new things to try? "Scientists Map All Possible Drug-Like Chemical Compounds." It turns out the drunk looking for his keys was far more accurate an analogy than we might've guessed. Or wanted to guess. Check out all the unexplored chemical "space" yet to be charted! It reminds me of the incredible number of phenethylamines and tryptamines that Alexander Shulgin mapped: but a drop in the ocean? (Shulgin deserved the Nobel Prize for Chemistry: just read up on his career! It's almost criminal he didn't get the Prize.) It's like looking for signs of life in the Milky Way! Or more prosaically: like geologists learning how to more profitably drill for oil. It's also about algorithms and possibilities and adventure and hellacious mistakes yet to be made.

To all of us looking for better living through chemistry: Bon appétit! I do think we may make it through this bottleneck to a whole new world of more sophisticated drugs that will make all the ones we've had since 1945 look primitive. Maybe?

Some Of The Works Consulted:
The Epigenetics Revolution by Nessa Carey
"No New Meds," by Laura Sanders:
http://www.sciencenews.org/view/feature/id/348115/description/No_New_Meds
Happy Accidents: Serendipity In Modern Medical Breakthroughs, by Morton A. Meyers
"The Psychiatric Drug Crisis" by Gary Greenberg:
http://www.newyorker.com/online/blogs/elements/2013/09/psychiatry-prozac-ssri-mental-health-theory-discredited.html
PIHKAL: A Chemical Love Story, by Alexander and Ann Shulgin
"Where Are All The Miracle Drugs?" by Brian Palmer:
http://www.slate.com/articles/health_and_science/human_genome/2013/09/human_genome_drugs_where_are_the_miracle_cures_from_genomics_did_the_genome.single.html
"Messenger RNAs Could Create a New Class of Drugs," by Susan Young:
http://www.technologyreview.com/news/512926/messenger-rnas-could-create-a-new-class-of-drugs/
"Faster, Smarter and Cheaper Drug Discovery":
http://www.sciencedaily.com/releases/2013/03/130321131920.htm
Serendipity: Accidental Discoveries In Science, by Royston Roberts
Hope or Hype: The Obsession With Medical Advances and the High Cost of False Promises, by Richard A. Deyo and Donald L. Patrick
"Experts Propose Restoring Invisible and Abandoned Trials to 'Correct the Scientific Record'":
http://www.sciencecodex.com/experts_propose_restoring_invisible_and_abandoned_trials_to_correct_the_scientific_record-114055
The Black Swan: The Impact of the Highly Improbable, by Nassim Nicholas Taleb

Saturday, November 3, 2012

Human Brains: Enchanted Looms, Electro-Colloidal Computers, Flying Lasagna, and Other Grey Matters

A Generalist trying to study and write about the human brain seems bound to tax attention: there's simply so much there to get all worked up over, especially since the 1990s "Decade of the Brain" and the resultant supernovae of imaging machines, knowledge of genes and epigenetics, experimental psychologies, and an ungainly amount of scientific data. No PhD in Neuroscience can keep up with all of it; one must specialize. We now have Neuroeconomics. Finally!

But the Generalist is at free play in the dense, massive fields.

I had wanted to do an entire blogspew on the materiality of the human brain, simply because I find descriptions of it so trippy. Full Disclosure: I have never held a human brain in my hands. But I've read and seen enough from people who have, or surgeons who have performed brain surgery, to palpably - in my imagination - "feel" the majesty of it all. But first: the human brain from another level: how we perceive or make "reality," and how tenuous it all seems.



Two Quotes From Disparate Recent Readings
We've learned a lot about how memory works in the last 20 years, but there's a lot left to learn. Just about any textbook minted in the last ten years will discuss how different declarative memory ("knowing that") works versus procedural memory ("knowing how"...like navigating a stairwell, riding a bike, or tying your shoes).

Discussing recent findings using imaging machines, Amini, Lannon and Lewis write, "While explicit memory (basically: declarative - OG) is swift and capacious, a fallacious sense of accuracy attends its frequently erroneous returns. New scanning technologies show that perception activates the same brain area as imagination. Perhaps for this reason, the brain cannot reliably distinguish between recorded experience and internal fantasy." - A General Theory of Love, p.104

Before you go thinking about you and your friends and everyone you love here - not to mention how this might impact "personal responsibility" and the Law! - dig this quote from Douglas Rushkoff's Program Or Be Programmed: Ten Commands For The Digital Age:

"But the latest research into virtual worlds might suggest the lines between the two (digital models of reality and our own being-in-the-world models - OG) may be blurring. A Stanford scientist testing kids' memories of virtual reality experiences found that at least half the children cannot distinguish between what they really did and what they did in the computer simulation. Two weeks after donning headsets and swimming with virtual whales, half of the participants interviewed believed they had actually had the real world experience. Likewise, Philip Rosedale - the quite sane founder of the virtual reality community Second Life - told me he believes that by 2020, his online world will be indistinguishable from real life." - p.69

[Note: This all may dovetail mindblowingly with Nick Bostrom's ideas about humans being a computer simulation, which I touched on HERE, and this recent article, "Physicists May Have Evidence Universe Is A Computer Simulation". Caution: If you're not familiar with these ideas yet and have wanted to do a psychedelic drug such as psilocybin mushrooms but can't find any, these ideas may prove an adequate substitute.]



Three Pound Universe: Dissection Witness
I liked Zoe Williams's brief article on her experience in the room with a neuropathologist and his "special chopping board and really sharp knife." I'll watch anything on the science channels on TV that are about the materiality of the brain, and I can't get enough of reading about the sacred object you're using now to decode what I'm trying to say. For us, it seems plausible that the brain is the most complex object in the universe. And when Williams describes it as "jaundiced pallor and pronounced bulge, like pickled eggs," it activates those circuits in my own brain that have to do with...surrealism.

Maybe that's just me.

Dr. Gentleman, who seems to love his job of slicing and dicing recently deceased brains, works for a UK brain bank devoted to researching Parkinson's, Alzheimer's, and Multiple Sclerosis, roughly in that order. He can use the naked eye to read the sorts of suffering the human underwent. It's always interesting to hear about something like, for example, strokes.

"'It's pot luck with strokes,' he explains at one point - you can have a stroke and not notice. Or you can have a stroke that leaves you with a cystic cavity, or what a layperson might call a big hole in your head."

Gentleman cuts away in front of the journalist, pointing out, "that's the main event; personality, executive function, reason." I find the high number of errors interesting: people, while living, had been diagnosed with some brain disease - they and their loved ones were at least given a name for their malady - and far more often than I would've thought, the diagnosis was wrong, judging by the visual evidence of the physical insults to the person's actual brain. Clearly, we have a long row to hoe here.

All this stuff not only puts me in the mood of surrealism; encountering the actual material brain brings a combination of dreamlike wonder juxtaposed with ghastly existential terror, then back to dreamlike wonder. And quite often a dark humor suffuses the scene.

If you're still interested in this stuff, HERE's another: cutting through the deeply-buried pineal gland. You can thank me later.

                                                            Richard Brautigan

Richard Brautigan's Brains, "Literally"
You can make this stuff up, but you must have an eldritch, poetic mind. But this story is true: poet J.J. Phillips wanted to do research on the counterculture novelist/poet Richard Brautigan (have you read Trout Fishing in America?). Stephen Gerz tells the story in his edifying book blog Booktryst in a post "Novelist Richard Brautigan's Brains At Bancroft Library: A Grand Guignol Adventure," which you must read; I can't do it justice.

Maybe I should've posted this on Halloween.

I take some odd and demented delight in knowing most of the action here took place in my neighborhood. The owner of Serendipity Books, Peter Howard? His legend grows by the year. Did he know for sure that's why some of Brautigan's papers were so messy? Phillips had to call in a coroner to confirm. And what of the librarians at the UC Berkeley Bancroft Library? Phillips thought they were acting "squirrelly and obfuscatory." And I think Phillips has a point: what if Brautigan had had Mad Cow disease?

Being a fan of Codrescu, I can only imagine his reaction upon hearing the story. Wow.

Another Poet
I'd like to leave you with a link to "Brain," by C.K. Williams. Here the brain is traversed by the poet, a cavern, a maze of corridors...and where is a comforting soul?

Who knows what's real? All "I" - this is my brain speaking here - know is, I'm hungry and it's time for dinner, so I bid this spew adieu.

Saturday, May 21, 2011

Robert Sapolsky On Us And Our Cousins

This is one of the most eloquent talks I've ever seen on humans and how they are alike and different from their other-animal cousins. By the end your socks ought to have been knocked off, out the window and never to be worn again. Sapolsky is probably the funniest of all the Third Culture thinkers, and I cherish all of his books. I hope you enjoy it. It's less than 38 minutes long, and pretty darned spellbinding, to my eyes/ears. 2009 graduation at Stanford. The dean talks for about five minutes. Sapolsky is introduced around 4:50. (If you've ever wanted to study neurobiology via audio or DVD, Sapolsky has a knockout course available here.)