In the past few weeks I've been reading the so-called New Atheists - articles and passages in books by and about them, interviews, etc - and the more I read them the more each thinker seems slightly different than the others. The ones I'm talking about are Dawkins, Hitchens, Sam Harris, Dennett, Ayaan Hirsi Ali, Ibn Warraq, Pinker, Jerry Coyne, Victor Stenger, Michael Shermer, and Lawrence Krauss.
Ayaan Hirsi Ali, moving the New Atheist debate forward, although I must wonder why she's allied with the retrograde American Enterprise Institute
I could go into why I started to see individuation with each of these thinkers, but that's for some other day; what really fascinated me was what was not said, and the idea that this sort of thinking is "new;" it is not. Atheism has a long pedigree, even in Unistat, but it's largely been marginalized. I don't recall any atheist thinker being singled out in any class I ever took in high school. When I started reading compendia of atheist thought, one thing led to another and I realized it was just another marginalized discourse in Unistat; Randall Collins would say that the social and intellectual conditions were not right for a more mainstream discussion of atheism in the culture at large. It is no accident that this "new" discourse (also a publishing phenomenon, but it wouldn't be if people weren't buying the books) exploded after 9/11.
Randall Collins's magisterial The Sociology of Philosophies has a robust theory about why ideas gain ground at certain times and not others. He seems heavily influenced by Erving Goffman in developing ideas about emotional energies gathered in groups around a seminal thinker, and how the group branches out and disseminates and develops ideas, depending on culture and history, space and political propinquities. Cultural capital is actualized around attention spaces and I'll just quote from him to give you a feel for where Collins is coming from:
Imagine a large number of people spread out across an open plain - something like a landscape by Salvador Dali or Giorgio de Chirico. Each one is shouting "Listen to me!" This is the intellectual attention space. Why would anyone listen to anyone else? What strategy will get the most listeners? [...]
A person can pick a quarrel with someone else, contradicting what the other is saying. That will gain an audience of at least one; and if the argument is loud enough, it might attract a crowd. Now, suppose everyone is tempted to try it. Some arguments start first, or have a larger appeal because they contradict the positions held by several people; and if other persons happen to be on the same side of the argument, they gather around and provide support. There are first-mover advantages and bandwagon effects. The tribe of attention seekers, once scattered across the plain, is changed into a few knots of arguments. The law of small numbers says that the number of these successful knots is always about three to six. The attention space is limited; once a few arguments have partitioned the crowds, attention is withdrawn from those who would start another knot of argument. Much of the pathos of intellectual life is in the timing of when one advances one's own argument. (p.38)
Randall Collins, sociologist extraordinaire
The so-called New Atheists' arguments seem to have reached a plenum, but quite possibly we will be surprised by some new development in their lines of argument. I do think Unistat needed an intellectual avalanche of books and articles espousing atheism for one reason or another. I find the right wing Christian ideology - which to me always seemed closer to fascism than Jesus's words from the Gospels - stultifying. And no doubt there were plenty of people who didn't believe but found themselves in pockets of Unistat in which ostracism for "coming out" was a very real threat; so they endured Sunday mornings. Possibly the New Atheists, as their ideas trickle into the capillaries of small town thought, make it just a little bit easier to realize oneself. The rise of mainstream atheism in Unistat seemed dialectically necessary. We'll see where it goes. Meanwhile, I have my fascinations with the two thinkers mentioned in my title: their quasi-atheistic ideas don't seem to have captured an attention space.
Collins's ideas are about ideas appearing at the right place, right time, under the right conditions. Nonetheless we are free-thinking agents and do not place a high value on following the main streams in order to have the correct ideas to trot out at cocktail parties.
Nassim Nicholas Taleb (NNT)
I combine two thinkers (Taleb and Wilson) of seemingly disparate personal disposition, each with seemingly quite different audiences, yet both thinkers have produced a body of work that shows a fascination with chaos, randomness, erudition and epistemological doubt. Indeed, Taleb's three major works (Fooled By Randomness; The Black Swan; Antifragile) are now being labeled "The Incerto Trilogy." A taste of Nassim's basic incertitude: "Prediction requires knowing about technologies that will be discovered in the future. But that very knowledge would almost automatically allow us to start developing those technologies right away. Ergo, we do not know what we will know." (p.173, The Black Swan)
Taleb - who seems a strident character who insists he has a good sense of humor, so I'll take him at his word - thinks it's a bad idea to bash religion, even though he himself seems an atheist. Why not bash like Dawkins, Harris, et al.? Because nature abhors a vacuum, and he points to the atheistic USSR under Stalin: something else takes the place of irrational religion, and it could lead to far worse outcomes. He traces the first suicide bombers. Were they actuated by fundamentalist religious fervor? No, they were not Islamic terrorists from the Middle East. Rather, they were Greek Orthodox Communists in Lebanon. The vacuum left in the wake of the State's abolition of or proscriptions against religion is filled by "all kinds of crazy beliefs." NNT also would have rival religions not be in physical contact, which seems a tall order but an interesting idea. Top-down attacks on religion do not work, and NNT points to the diminution of Catholicism in Southern Europe and Ireland, which saw an accompaniment of usury and debt. (Unforeseen consequences?) And here's one of my favorite passages from NNT; it gives much of the flavor of his overall philosophical cast of mind:
I am most irritated by those who attack the bishop but somehow fall for the securities analyst - those who exercise skepticism against religion but not against the economists, social scientists, and phony statisticians. Using the confirmation bias, these people will tell you that religion was horrible for mankind by counting deaths from the Inquisition and various religious wars. But they will not show you how many people were killed by nationalism, social science, and political theory under Stalinism or during the Vietnam War. Even priests don't go to bishops when they feel ill: their first stop is the doctor's. (p.291, The Black Swan)
Later Nassim said that if you're critical of religion but invest in the stock market you're a hypocrite, which reminded me of Dawkins saying that the postmodernists who questioned the fundamental laws of physics and then got on airplanes were hypocrites. Skepticism is "domain-dependent," and 19th century "rational" Western medicine no doubt killed more people than it saved. When you have "experts" you have the "illusion of control."
NNT thinks that if religion has survived for millennia we shouldn't uproot it unless we can be damned well sure we can replace it with something less damaging. (But we cannot be sure, right?)
Like the late Robert Bellah and Robert Anton Wilson, NNT thought religion was not about "belief" but about action, and it starts with ritual. We have ideas about "God" all mixed up. Most religions started off with rituals, then developed deities post hoc. Religion makes people do things, and then the King arrives and uses the local religion for social control.
Further, NNT sees very strong historical lessons in Christianity and Islam that support his idea that history does not crawl but "jumps" and is best not thought of as something that develops slowly and relatively predictably: in noting the paucity of extant writings by contemporary thinkers in or near Jesus's time, "Apparently, few of the big guns took the ideas of a seemingly heretical Jew seriously enough to think he would leave traces for posterity." And: "How about the competing religion that emerged seven centuries later; who forecast that a collection of horsemen would spread their empire and Islamic law from the Indian subcontinent to Spain in just a few years? Even more than the rise of Christianity, it was the spread of Islam (the third edition, so to speak) that carried full unpredictability; many historians looking at the record have been taken aback by the swiftness of the change." NNT follows up these observations by making a general note about our study of history: "These kinds of discontinuities in the chronology of events did not make the historian's profession too easy: the studious examination of the past in the greatest detail does not teach you much about the mind of History; it only gives you the illusion of understanding it." (p.11, op.cit)
Illusions of understanding: this is at the heart of NNT's work.
For NNT, the Economist's religion of probability is as primitive as that of religious fundamentalists; here NNT's deliberate provocation seems to dovetail with Robert Anton Wilson's guerrilla ontological takes on "serious" bodies of thought. NNT reminds us that Syria, Egypt and Iraq were "secular" states, that churches are standing-room-only in Russia now, and that Dennett's argument for "science" clashes with most individual scientists, who understand how very much we do not know in the scientific world. Almost every decision every day is probabilistic, and faith in the stock market or communism or "capitalism" works really well...until it doesn't.
NNT has also observed something interesting about the three monotheistic religions that most people would consider "good" or "fair" and I rarely see this mentioned: Christianity's ideas about sex ended the anthropologist's "Big Man"'s monopolization of women. One man, one wife: the little guy was not left out any more. Islam came along and made a restriction to four wives. Judaism had been a polygamous religion, but in the Middle Ages it became monogamous. NNT observes that this may have been a political move that headed off potential revolutions of angry, sexually-deprived men fomenting violence from the bottom of society.
So, for Nassim Nicholas Taleb: no New Atheism for him. And yet he's not exactly a believer. With regard to the desirability of religious belief, there seemed much unsaid, much overlooked, and he tried to point some of it out.
Robert Anton Wilson (RAW)
Born poor into Catholicism on Long Island in the 1930s, RAW recalls, in a documentary about him, that he found out that Santa Claus wasn't real. He kept waiting for them to admit that God wasn't real, but they never did. RAW's atheology - I adopted the term after reading a piece in which he used the term within the context of the serious/joke religion Discordianism - seems more avant garde than NNT's. RAW began satirizing the Bible and all monotheistic religions in one of the first articles he ever published, "The Semantics of 'God'" in Paul Krassner's The Realist, in 1959. RAW's main riff: Why do we call God a "he"? If we do, we must assume He has a penis. And how large must it be? Then RAW pretended to use math in comparing King Kong and the average man's penis size, Kong's height and the relative penis-per-height ratio for gorillas, then speculating about the size of God's schlong. If we're not prepared to admit "God" has a penis, let's stop calling God "he" and say "It." Neurosemantically, we might derive a more sane view of "God" if we said "It."
RAW even neologized over the overwhelming male-ness in monotheistic religions - why women can't be priests, etc: "theogenderology."
After more than a decade of very intense self-experimentation with psychedelic drugs, abstruse Crowleyan magickal practices, an immersion in the most difficult High Modernist texts, and all sorts of other self-described "gimmicks" in order to see how malleable his own mind was, RAW decided he was a "model agnostic," taking Niels Bohr's Copenhagen Interpretation of quantum mechanics and combining it with phenomenological sociology, a studiously ironic take on conspiracy theories, and Korzybski's General Semantics to make a heady personal philosophical brew about the "self" and the world of perception, "reality tunnels" and ideologies, and a radical doubt filled with endless wonder about the world, of which we must always be uncertain.
Some scholars of hermeticism may be able to discern a long line from 14th century thinkers to Wilson; what's interesting to me is RAW's abiding interest in popular culture, surrealistic humor, neuroscience, the quantum theory, Einstein's relativity, the main strains of 20th century philosophy (including Existentialism, Phenomenology, Pragmatism, Logical Positivism, and the "Linguistic Turn"), combined with Crowley's synthesis of seemingly all the major alchemical and hermetic practices. He liked to quote Crowley's line from The Book of Lies:
I slept with Faith and found a corpse in my arms upon awakening; I drank and danced all night with Doubt and found her a virgin in the morning.
Doubt keeps the mind alive and questioning. And yet doubt requires belief. Why not watch your own nervous system as you decide to "believe" in some idea for a week, and then doubt it? Believe, then doubt; believe then doubt. See what happens to your ideas about "reality." RAW seems to dare his readers to try this. (At times he explicitly advocated it.)
Here's the thing: for RAW and many other modernistic antinomians, all gods and goddesses are "real" in the sense that they are projections the human genome has made; they are externalizations of deep inner aspects emanating from the biology of humanity. And so, on that level, let us use them to gain poetic insight about ourselves. Note: he doesn't believe the gods and goddesses of history "really" exist "out there;" they exist "in here," which seems "real" enough. I think it will be quite some time before the New Atheists' ideas, working dialectically with the traditional believers of a monotheistic God, create an intellectual space in which to consider "god" in these terms.
Moreover, I have oversimplified RAW's atheology, as at times in his writing career he considered himself a sort of theologian, and near the time of his death he seemed to still agree with a boyhood influence, Ezra Pound, about "seeing" gods. Here's a passage from Pound that gives us a tinge of the flavor:
We find two forces in history: one that divides, shatters and kills, and one that contemplates the unity of the mystery.
"The arrow hath not two points."
There is the force that falsifies, the force that destroys every clearly delineated symbol, dragging men into a maze of abstract arguments, destroying not one but every religion.
But the images of the gods, or Byzantine mosaics, move the soul to contemplation and preserve the tradition of the undivided light.
(pp.306-307, Selected Prose 1909-1965, Ezra Pound)
RAW at other times seemed to identify with William Blake in naming our creative spark as God.
[But is this not what the modern guerrilla-ontological trickster hermeticist does?]
In an article published in Oui magazine in 1977, RAW quoted a fellow counterculture-hero-writer, Kurt Vonnegut, about the clash between science and religion:
As Kurt Vonnegut says, "A great swindle of our time is the assumption that science has made religion obsolete. All science has damaged is the story of Adam and Eve and Jonah and the whale." Vonnegut goes on to say there is nothing in science that contradicts the works of mercy recommended by Saint Thomas Aquinas, which include: to teach the ignorant, to console the sad, to bear with the oppressive and troublesome, to feed the hungry, to shelter the homeless, to visit prisoners and the sick, and to pray for us all. (p.57, The Illuminati Papers)
In the bulk of RAW's writings on organized religion, though, he seems much more in the line of Nietzsche and Mencken and Carlin, with surreal barbed satire about good rich vicious Christians in church enjoying hell-fire sermons that seemed like the worst S&M trip ever, while they politically advocated "more bombs for Jesus."
Finally, a little article I found a while back made me think this would make RAW smile: The Claremont College Theology School desegregated the way the religious books were categorized and shelved in their library.
Some Sources:
Robert Bellah interview: Religion isn't so much about what we believe, but what we do
Nassim Nicholas Taleb on YouTube: 9 mins: On Role of Religion (live talk from Q&A with audience)
"Why Monotheism Leads To Theocracy," by Joshua Keating
"Atheism Is Maturing and it Will Leave Richard Dawkins Behind," Martin Robbins
The Overweening Generalist is largely about people who like to read fat, weighty "difficult" books - or thin, profound ones - and how I/They/We stand in relation to the hyper-acceleration of digital social-media-tized culture. It is not a neo-Luddite attack on digital media; it is an attempt to negotiate with it, and to subtly make claims for the role of generalist intellectual types in the scheme of things.
Saturday, January 4, 2014
Promiscuous Neurotheologist: The Atheologies of Nassim Nicholas Taleb and Robert Anton Wilson
Friday, October 11, 2013
Euclidean Quotidian: 90 Degree Angles and the Semantic Unconscious
Ten Scattershot Ideas, One For Each Finger and Two Thumbs
1.) Supposedly the medieval Europeans thought the author of the Elements was the same man we know as Eucleides of Megara, so olde books about geometry in Europe were attributed to "Megarensis." They weren't the same dude: Eucleides of Megara was a contemporary of Plato; the great Euclid of high school geometry was closer to being a contemporary of some of Plato's early students.
The Arabs got hold of Euclid and thought the name was made of ucli (the key) and dis (measure). At any rate, his Elements was the model of rationality ne plus ultra, and I'm writing this piece after pondering Euclid's influence on two philosophers, Vico and Spinoza, who were not the first to mimic the potent rhetorical form and structure of Euclid.
2.) In Peter Thonemann's review of three books for the TLS, note the story of the Malawi girl who, charged with learning how to set a dinner table English-style, experienced a steep learning curve, because the world she grew up in was curvilinear; there were no right angles. She had to learn the "order of things" we take for granted as "the way things are done." I also thought it interesting that as the Romans rolled through the peoples of Europe, they brought right angles and rectangles and ideas about straight lines and order with them, the Irish being the last to "convert," and it went along with Christianity.
There's a question of the "reading" of artefacts from the long-dead: if they built with right angles, was their social structure more authoritarian? Some think so. Others think what matters is the initial posit and then iterated forms that grew from there. Mikhail Okhitovich, Soviet sociological thinker of the 1930s, asserted that right angles originated with private land ownership, then extended to architectural forms, and represent a non-communistic mode of thought; because of this, curvilinear forms in architecture were the best and most egalitarian form.
Before rigid hierarchical forms of State, what was often found were circular forms, which have a center but seem to resist hierarchy...on some level. Do Euclidean forms give rise to a form of thought that permeates a culture, and if so, is this idea mostly unconscious, part of the paideuma?
Many non-communist Left-ish thinkers have assumed that dwellings based on rectangles and 90 degree angles were somehow metaphors for artificiality, non-organicism, or simply convention, and living in "boxes" tended to encourage conformist social ideas and a stifling of creativity. Look at any fat book on great 20th century architects and buildings. Look at Buckminster Fuller.
3.) A pop kulch example of a leftist strain in American thought is found in this folk song: "Little Boxes." Boxes and conformity. Boxes and restraint. Boxes and the suburbs, Levittowns.
4.) The distaste for "boxes" runs in countless intellectual and aesthetic fields. While Nietzsche lays out with this probe: "Mathematics would certainly not have come into existence if one had known from the beginning that there was no exactly straight line, no actual circle, no absolute magnitude," and we are left to wonder, our contemporary Nassim Nicholas Taleb writes in his Bed of Procrustes, "They are born, then put in a box; they go home to live in a box; they study by ticking boxes; they go to what is called 'work' in a box, where they sit in their cubicle box; they drive to the grocery store in a box to buy food in a box; they go to the gym in a box to sit in a box; they talk about thinking 'outside the box'; and when they die they are put in a box. All boxes, Euclidean, smooth boxes." (p.31)
5.) Art critic Jed Perl wonders about the state of painting and painters in today's art world. At one time the rectangle frame of the painting was a given. The artist played an outré role in society. But now practically all competing media are either rectangle shaped (iPod/iPad/iPhone?), or text is read within a rectangular-ish frame (the screen you're using now?); further: images in the most popular media are dynamic inside a rectangular frame: TV, films, the camera frame. Could it be that the "degree of stabilizing supremacy of that rectangle has been undermined by the technology that surrounds us?," Perl asks. He knows painters. It's his milieu. And Perl asserts that today's painter, because of the static image inside a rectangle, has been forced to go on the defensive or offensive, which presents a new hindrance. At the same time, Perl asserts that painting is not dead.
6.) In what appears to be an untitled poem, Tony Quagliano:
I read this poem about geometry
or shadows
or was it poetics, or
some analogy among the three---
that sounds right
a poem about science and art
itself some artful connection
opting for the poem of course (being a poem) slyly
saying math's impure
or at least not pure enough
for one geometer not impressed by Euclid
or more impressed by non-Euclid
or some such twist
and what gets me, why I mention this at all, is
that the poem was good
though no one bled directly in it
words were clean, scientific
stitched in artful lines for the anthologist
and while a slashed wrist would have to wait
this poem of shadows, or math
or some connection in the courtyard of art
this fragile suture, poet to geometer, takes life
over your dead body
and mine
and it was good
which is why I mention this at all.
-p.65, Language Matters: Selected Poetry
7.) I remember reading about some hotshot engineering students - probably at CalTech? - and the problem of stacking oranges at the grocery store. Because of their roundness, there's far more unused space (AKA "air") between oranges than between boxes. How to maximize the number of oranges stackable? Well, you obviously make square oranges, using the Lego-mind. Easier said than done.
I hadn't thought much about shipping containers and how they have made the world seem far smaller and distance irrelevant until I read Andrew Curry's fine piece in Nautilus not long ago. "Invisible to most people, (shipping containers) are fundamental to how practically everything in our consumer-driven lives works." As for packing as much stuff into a space as efficiently as possible, it doesn't get much better than shipping containers. ("Invisible to most people...")
Score one for rectilinearity.
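A numerical aside of my own, not from the engineering students: the orange problem has a precise answer. The Kepler conjecture (proven by Thomas Hales) says even the best possible stacking of equal spheres, the face-centered cubic lattice the grocer already uses, fills only about 74% of space, while cubes tile it completely. A minimal Python sketch of the comparison:

```python
import math

# Densest possible packing of equal spheres: the face-centered cubic
# lattice, density pi / (3 * sqrt(2)), proven optimal by Hales's
# resolution of the Kepler conjecture.
SPHERE_DENSITY = math.pi / (3 * math.sqrt(2))

# Cubes (and shipping containers) tile space with no gaps at all.
CUBE_DENSITY = 1.0

air = 1 - SPHERE_DENSITY
print(f"Best-case sphere packing fills {SPHERE_DENSITY:.1%} of space")
print(f"Cubes fill {CUBE_DENSITY:.0%}; 'air' left around the oranges: {air:.1%}")
```

So even a perfect stacker of round oranges ships about a quarter air; squaring the orange is the only way to close that gap, which is exactly the shipping container's trick.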
8.) One of the Prophets of Euclidean space and modern consciousness, Marshall McLuhan, in 1968:
The visual sense, alone of our senses, creates the forms of space and time that are uniform, continuous and connected. Euclidean space is the prerogative of visual and literate man. With the advent of electric circuitry and the instant movement of information, Euclidean space recedes, and the non-Euclidean geometries emerge. Lewis Carroll, the Oxford mathematician, was perfectly aware of this change in our world when he took Alice through the looking-glass into the world where each object creates its own space and conditions. To the visual or Euclidean man, objects do not create time and space. They are merely fitted into time and space. The idea of the world as an environment that is more or less fixed is very much the product of literacy and visual assumptions. In his book The Philosophical Impact of Contemporary Physics Milic Capek explains some of the strange confusions in the scientific mind that result from the encounter of the old non-Euclidean spaces of preliterate man with the Euclidean and Newtonian spaces of literate man. The scientists of our time are just as confused as the philosophers, or the teachers, and it is for the reason that Whitehead assigned: they still have the illusion that the new developments are to be fitted into the old space or environment.
-p. 347, Essential McLuhan, from an essay, "The Emperor's New Clothes," originally in Through the Vanishing Point: Space in Poetry and Painting, co-written with Harley Parker. McLuhan asserted in 1968 that "the artist is a person who is especially aware of the challenge and dangers of new environments presented to human sensibility." McLuhan thought artists were subversive because society expected the replication of existing orders and forms, but artists violated these expectations.
Three thoughts:
a.) In 1968 McLuhan may have been far more prophetic than he thought: not only are scientists still trying to come to terms with non-Euclidean findings in astrophysics, materials science, microbiology, and subatomic physics (though I do see some inroads), but also, going back to Jed Perl's essay on the "state of the art" in painting 45 years later, McLuhan's "with the advent of electric circuitry" seems borne out: I think painting, contra Perl, may be, if not dead, in the ICU, condition critical.
b.) When I do that mental yoga which allows me into McLuhan's thought-space, I realize how intensely Euclidean my assumptions seem, as based on the idea of Gutenberg Man and the space of the literate reader of texts, for hours every day, decades on end, eyes decoding 26 symbols with punctuation, left to right, linear left to right, left to right (THIS), left to right, punctuation. In my conditioned assumptions of quotidian reality, objects "really do" fit inside of space and time. I want them to create space and time themselves, by power of their sheer Being, capital be. But most of the time: no. I have to work on it. How do I get out of Gutenberg Euclidean head space? Cannabis, film, walks in nature, animation, humor and surrealism, reading Joyce or Pound, get into the Korzybski-Zen level of the phenomenal event-level, pre-language, observing without hypnotizing and misleading "woids," and then careful consciousness of abstracting, watching myself abstract until It all melts, or something strange in science. You have your ways.
c.) For such an overwhelmingly "straight" Euclidean man, Prof. McLuhan's mind (his personal politics were a sort of conservative Catholicism with tinges of anarchy?) was, to me, reliably non-Euclidean and psychedelic. His deep immersion in James Joyce and Ezra Pound was probably a significant influence here, but there was so so so so much more. He was an absolute virtuoso at playing with metaphors and combining those ideas with others, if only just to see if they were thrilling and made anyone else want to think about some idea in some new way. I find this an anarchist strain in McLuhan's thought. (How about I take a Catholic idea about the senses and think about the new electronic media, like radio or TV? I can add ideas I copped from Thomas Nashe, Wyndham Lewis, Ezra Pound, Harold Innis, and anthropologists. And Finnegans Wake! And mythology, Poe, Einstein, and painting's figure/ground and the rise of the Renaissance's vanishing point? And then: Vico! And commercials and comic strips!? And Walter J. Ong...and and and...)
9.) Robert Anton Wilson's extensions of Timothy Leary's ideas of the evolution of "circuits" in the human mind drew heavily on Euclid for the first three "domesticated primate" aspects of all of us: the oral/biosurvival circuit is about approach/avoidance and is represented in Euclidean metaphor as "forward-back." The second circuit stage of development (according to the theory, we "imprint" all of these circuits), the anal/territorial circuit, is about up/down, and represents the deeper levels of any thinking about politics, whether within the family, local city, national, or international. Notice up/down fits well in Euclidean space-thought.
The third circuit is about right/left and, for mammals like us, is based on the bilateral symmetry of the body and the nervous system, in which nature has seen fit to encourage a dominance of one side over the other: most people's left-hemisphere motor cortex encourages right-handedness. Conceptual thought and left-right equations (think: algebra!) and logic all fall under the third circuit.
Although neuroscientific ideas about hemispheric specialization in evolution and discrete modules of each of the brain's two hemispheres have moved away from a once-popular notion of the "holistic" right hemisphere and the "linear" left, these metaphors still seem to resonate. For Wilson, right-handedness and math and literacy in symbolic humans indicate a left-hemisphere domination (the left hemisphere controls the right side of the body) which has unconsciously biased "linear" and hierarchical forms in human history, beginning with writing. The right hemisphere, relatively "silent" and seemingly subdued by assumptions about "reality" made by the left hemisphere (especially in industrialized Western humans), houses an intuitive genius we have yet to harness.
So much ink has been spilled over these ideas, once extremely popular but now seemingly in a slow descent. Nevertheless, these ideas live, as you may have noticed from a conversation within the past few months. Why?
Well, I think it's because there's still some truth to the right/left brain modularity-of-function idea, although it's not as simple as those who popularized the findings of the Sperry and Gazzaniga "split brain" experiments made it seem. Also: I think Wilson was on to something: "Right-hand dominance, and associated preferences for the linear left-lobe functions of the brain, determine our normal modes of artifact-manufacture and conceptual thought, i.e., third circuit 'mind.' It is no accident, then, that our logic (and our computer-design) follows the either-or, binary structure of these circuits. Nor is it an accident that our geometry, until the last century, has been Euclidean. Euclid's geometry, Aristotle's logic, and Newton's physics are meta-programs synthesizing and generalizing first brain forward-back, second brain up-down and third brain right-left programs." - Cosmic Trigger vol 1, pp.199-200
For Wilson (and Leary) there were relatively "new" circuits that have appeared in human evolution over the last 11,000 years or so. And they seem non-Euclidean, more organic, curvilinear, and more inclusive of a holistic, total-floating body sense, as if we were meant to move through space/time.
To be clear: Euclid and his forebears the Pythagoreans wormed their way into our paideuma due to the natural evolution of mammals on a rocky watery planet with an atmosphere conducive to carbon-based replicative life forms under the purview of a energy-source star at a Goldilocks distance. We got Euclidean forms because that's the way we evolve. Which may Beg the Q, but it's one of my favored narratives, and my entire brain, both hemispheres, seem to harmonically resonate with it.
[Further extrapolations from Wilson on this complex of ideas: see Illuminatus! Trilogy, pp.793-795; Prometheus Rising, pp.97-100; Schrodinger's Cat Trilogy, pp.342-347.]
10.) I grew up in boxy architecture, and when I first encountered this idea - about rectangles and 90 degree angles and conformity - I also found out we forgot how we did it, but at some point we had to learn to see in 3-D spatial terms. Supposedly some cultural anthropologists had gone into deepest darkest rain forest Africa and lived with and studied pygmies, whose complete environment was always giant trees and vines and moving through those living breathing green spaces, always canopied by jungle thickness as "ceiling."And when they were taken to a clearing at the edge of the forest and the anthropologists pointed to a man and a jeep far off in the distance, the natives thought they were seeing a tiny man. They had not learned to see over vistas of "open space."
So, I lay in bed and looked at the point where the ceiling meets the walls. Two walls meet at the "point" of the ceiling. And I tried to remember what it was like to not see that as a point in space. It's akin to many visual illusions or the Necker Cube you've all seen. It was fruitless. Until, one day...O! Such little things that thrill me. Aye: the corner was on a flat plane. And then it pointed out toward me...
I attest, I assert that when I enter buildings of a non-Euclidean build, my consciousness is altered. An inventory of memories and anecdotes would bore you and me, but I wonder if you have felt the same? I love round rooms. A spiral staircase can really get me going. On and on. But here's the thing: if I grew up in a non-Euclidean house, I strongly suspect that entering a Euclidean "tiny box" house would alter my conscious also. Because I think these represent the unfamiliar structure of space...
I hope I didn't come off like some un-hep "square" in this blogspew.
1.) Supposedly the medieval Europeans thought the author of the Elements was the philosopher we know as Eucleides of Megara, so olde books about geometry in Europe were credited to "Megarensis." They weren't the same dude: Eucleides of Megara was a contemporary of Plato; the great Euclid of high school geometry was closer to being contemporary with some of Plato's early students.
The Arabs got hold of Euclid and thought the name was made of ucli (the key) and dis (measure). At any rate, his Elements was the model of rationality ne plus ultra, and I'm writing this piece after pondering Euclid's influence on two philosophers, Vico and Spinoza, who were not the first to mimic the potent rhetorical form and structure of Euclid.
2.) In Peter Thonemann's review of three books for the TLS, note the story of the Malawi girl who, charged with learning how to set a dinner table English-style, experienced a steep learning curve, because the world she grew up in was curvilinear; there were no right angles. We had to learn the "order of things" we take for granted as "the way things are done." I also thought it interesting that as the Romans rolled through the peoples of Europe, they brought right angles and rectangles and ideas about straight lines and order with them, the Irish being the last to "convert," and it went along with Christianity.
There's a question of the "reading" of artefacts from the long-dead: if they built with right angles, was their social structure more authoritarian? Some think so. Others think what matters is the initial posit and then the iterated forms that grew from there. Mikhail Okhitovich, Soviet sociological thinker of the 1930s, asserted that right angles originated with private land ownership, then extended to architectural forms, and represent a non-communistic mode of thought; because of this, curvilinear forms in architecture were the best and most egalitarian.
Before rigid hierarchical forms of State, what was often found were circular forms, which have a center but seem to resist hierarchy...on some level. Do Euclidean forms give rise to a form of thought that permeates a culture, and if so, is this idea mostly unconscious, part of the paideuma?
Many non-communist Left-ish thinkers have assumed that dwellings based on rectangles and 90 degree angles were somehow metaphors for artificiality, non-organicism, or simply convention, and living in "boxes" tended to encourage conformist social ideas and a stifling of creativity. Look at any fat book on great 20th century architects and buildings. Look at Buckminster Fuller.
3.) A pop kulch example of a leftist strain in American thought is found in this folk song: "Little Boxes." Boxes and conformity. Boxes and restraint. Boxes and the suburbs, Levittowns.
4.) The distaste for "boxes" runs through countless intellectual and aesthetic fields. Nietzsche lays out this probe: "Mathematics would certainly not have come into existence if one had known from the beginning that there was no exactly straight line, no actual circle, no absolute magnitude." And while we are left to wonder, our contemporary Nassim Nicholas Taleb writes in his Bed of Procrustes, "They are born, then put in a box; they go home to live in a box; they study by ticking boxes; they go to what is called 'work' in a box, where they sit in their cubicle box; they drive to the grocery store in a box to buy food in a box; they go to the gym in a box to sit in a box; they talk about thinking 'outside the box'; and when they die they are put in a box. All boxes, Euclidean, smooth boxes." (p.31)
5.) Art critic Jed Perl wonders about the state of painting and painters in today's art world. At one time the rectangular frame of the painting was a given. The artist played an outre role in society. But now practically all competing media are either rectangle-shaped (iPod/iPad/iPhone?), or text is read within a rectangular-ish frame (the screen you're using now?); further: images in the most popular media are dynamic inside a rectangular frame: TV, films, the camera frame. Could it be that the "degree of stabilizing supremacy of that rectangle has been undermined by the technology that surrounds us?" Perl asks. He knows painters. It's his milieu. And Perl asserts that today's painter, because of the static image inside a rectangle, has been forced to go on the defensive or offensive, which presents a new hindrance. At the same time, Perl asserts that painting is not dead.
6.) In what appears to be an untitled poem, Tony Quagliano:
I read this poem about geometry
or shadows
or was it poetics, or
some analogy among the three---
that sounds right
a poem about science and art
itself some artful connection
opting for the poem of course (being a poem) slyly
saying math's impure
or at least not pure enough
for one geometer not impressed by Euclid
or more impressed by non-Euclid
or some such twist
and what gets me, why I mention this at all, is
that the poem was good
though no one bled directly in it
words were clean, scientific
stitched in artful lines for the anthologist
and while a slashed wrist would have to wait
this poem of shadows, or math
or some connection in the courtyard of art
this fragile suture, poet to geometer, takes life
over your dead body
and mine
and it was good
which is why I mention this at all.
-p.65, Language Matters: Selected Poetry
7.) I remember reading about some hotshot engineering students - probably at CalTech? - and the problem of stacking oranges at the grocery store. Because of their roundness, there's far more non-used-up space (AKA "air") between oranges. How to maximize the number of oranges stackable? Well, you obviously make square oranges, using the Lego-mind. Easier said than done.
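The geometry behind the students' problem is classical: identical spheres, no matter how cleverly stacked, fill at most π/√18 of space (the densest packing, the grocer's pyramid), while cubes tile space with no gaps at all. A quick sketch of the arithmetic:

```python
import math

# Densest possible packing of identical spheres (face-centered cubic,
# i.e. the grocer's pyramid stack): fraction of space filled = pi / sqrt(18).
sphere_fill = math.pi / math.sqrt(18)

# Cubes ("square oranges") tile space perfectly.
cube_fill = 1.0

print(f"best sphere packing fills {sphere_fill:.1%} of the bin")  # -> 74.0%
print(f"wasted 'air': {1 - sphere_fill:.1%} for spheres, "
      f"{1 - cube_fill:.0%} for cubes")
```

Roughly a quarter of every orange bin is air, no matter what the students did.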
I hadn't thought much about shipping containers and how they have made the world seem far smaller and distance irrelevant until I read Andrew Curry's fine piece in Nautilus not long ago. "Invisible to most people, (shipping containers) are fundamental to how practically everything in our consumer-driven lives works." As for packing as much stuff into a space as efficiently as possible, it doesn't get much better than shipping containers.
Score one for rectilinearity.
8.) One of the Prophets of Euclidean space and modern consciousness, Marshall McLuhan, in 1968:
The visual sense, alone of our senses, creates the forms of space and time that are uniform, continuous and connected. Euclidean space is the prerogative of visual and literate man. With the advent of electric circuitry and the instant movement of information, Euclidean space recedes, and the non-Euclidean geometries emerge. Lewis Carroll, the Oxford mathematician, was perfectly aware of this change in our world when he took Alice through the looking-glass into the world where each object creates its own space and conditions. To the visual or Euclidean man, objects do not create time and space. They are merely fitted into time and space. The idea of the world as an environment that is more or less fixed is very much the product of literacy and visual assumptions. In his book The Philosophical Impact of Contemporary Physics Milic Capek explains some of the strange confusions in the scientific mind that result from the encounter of the old non-Euclidean spaces of preliterate man with the Euclidean and Newtonian spaces of literate man. The scientists of our time are just as confused as the philosophers, or the teachers, and it is for the reason that Whitehead assigned: they still have the illusion that the new developments are to be fitted into the old space or environment.
-p. 347, Essential McLuhan, from an essay, "The Emperor's New Clothes," originally in Through the Vanishing Point: Space in Poetry and Painting, co-written with Harley Parker. McLuhan asserted in 1968 that "the artist is a person who is especially aware of the challenge and dangers of new environments presented to human sensibility." McLuhan thought artists were subversive because society expected the replication of existing orders and forms, but artists violated these expectations.
Three thoughts:
a.) In 1968 McLuhan may have been far more prophetic than he thought. Not only are scientists still trying to come to terms with non-Euclidean findings in astrophysics, materials science, microbiology, subatomic physics (though I do see some inroads), but go back to Jed Perl's essay on the "state of the art" in painting 45 years later: McLuhan's "advent of electric circuitry" has kept working on the rectangle, and I think maybe painting, contra Perl, may be, if not dead, in the ICU, condition: critical.
b.) When I do that mental yoga which allows me into McLuhan's thought-space, I realize how intensely Euclidean my assumptions seem, as based on the idea of Gutenberg Man and the space of the literate reader of texts, for hours every day, decades on end, eyes decoding 26 symbols with punctuation, left to right, linear left to right, left to right (THIS), left to right, punctuation. In my conditioned assumptions of quotidian reality, objects "really do" fit inside of space and time. I want them to create space and time themselves, by power of their sheer Being, capital be. But most of the time: no. I have to work on it. How do I get out of Gutenberg Euclidean head space? Cannabis, film, walks in nature, animation, humor and surrealism, reading Joyce or Pound, get into the Korzybski-Zen level of the phenomenal event-level, pre-language, observing without hypnotizing and misleading "woids," and then careful consciousness of abstracting, watching myself abstract until It all melts, or something strange in science. You have your ways.
c.) For such an overwhelmingly "straight" Euclidean man, Prof. McLuhan's mind (his personal politics were a sort of conservative Catholicism with tinges of anarchy?) was, to me, reliably non-Euclidean and psychedelic. His deep immersion in James Joyce and Ezra Pound was probably a significant influence here, but there was so so so so much more. He was an absolute virtuoso at playing with metaphors and combining those ideas with others, if only just to see if they were thrilling and made anyone else want to think about some idea in some new way. I find this an anarchist strain in McLuhan's thought. (How about I take a catholic idea about the senses and think about the new electronic media, like radio or TV? I can add ideas I copped from Thomas Nashe, Wyndham Lewis, Ezra Pound, Harold Innis, and anthropologists. And Finnegans Wake! And mythology, Poe, Einstein, and painting's figure/ground and the rise of the Renaissance's vanishing point? And then: Vico! And commercials and comic strips!? And Walter J. Ong...and and and...)
9.) Robert Anton Wilson's extensions of Timothy Leary's ideas of the evolution of "circuits" in the human mind drew heavily on Euclid for the first three "domesticated primate" aspects of all of us: the oral/biosurvival circuit is about approach/avoidance and is represented in Euclidean metaphor as "forward-back." The second circuit stage of development (according to the theory, we "imprint" all of these circuits), the anal/territorial circuit, is about up/down, and represents the deeper levels of any thinking about politics, whether within the family, local city, national, or international. Notice up/down fits well in Euclidean space-thought.
The third circuit is about right/left and, for mammals like us, is based on the bilateral symmetry of the body and the nervous system, in which nature has seen fit to encourage a dominance of one side over the other: most people's left-hemisphere motor cortex encourages right-handedness. Conceptual thought and left-right equations (think: algebra!) and logic all fall under the third circuit.
Although neuroscientific ideas about hemisphericalization in evolution and discrete modules in each of the brain's two hemispheres have moved away from a once-popular notion of the "holistic" right hemisphere and the "linear" left, these metaphors still seem to resonate. For Wilson, right-handedness and math and literacy in symbolic humans indicate a left-hemisphere domination (the left hemisphere controls the right side of the body) which has unconsciously biased "linear" and hierarchical forms in human history, which begins with writing. The right hemisphere remains relatively "silent," seemingly subdued by assumptions about "reality" made by the left hemisphere (especially in industrialized Western humans), and we have yet to harness the intuitive genius housed there.
So much ink has been spilled over these ideas, once extremely popular but now seemingly in a slow descent. Nevertheless, these ideas live, as you may have noticed from a conversation within the past few months. Why?
Well, I think it's because there's still some truth to the right/left brain modularity-of-function idea, although it's not as simple as those who popularized the findings of the Sperry and Gazzaniga "split brain" experiments. Also: I think Wilson was on to something: "Right-hand dominance, and associated preferences for the linear left-lobe functions of the brain, determine our normal modes of artifact-manufacture and conceptual thought, i.e., third circuit 'mind.' It is no accident, then, that our logic (and our computer-design) follows the either-or, binary structure of these circuits. Nor is it an accident that our geometry, until the last century, has been Euclidean. Euclid's geometry, Aristotle's logic, and Newton's physics are meta-programs synthesizing and generalizing first brain forward-back, second brain up-down and third brain right-left programs." - Cosmic Trigger vol 1, pp.199-200
For Wilson (and Leary) there were relatively "new" circuits that have appeared in human evolution over the last 11,000 years or so. And they seem non-Euclidean, more organic, curvilinear, and more inclusive of a holistic, total-floating body sense, as if we were meant to move through space/time.
To be clear: Euclid and his forebears the Pythagoreans wormed their way into our paideuma due to the natural evolution of mammals on a rocky, watery planet with an atmosphere conducive to carbon-based replicative life forms under the purview of an energy-source star at a Goldilocks distance. We got Euclidean forms because that's the way we evolved. Which may Beg the Q, but it's one of my favored narratives, and my entire brain, both hemispheres, seems to harmonically resonate with it.
[Further extrapolations from Wilson on this complex of ideas: see Illuminatus! Trilogy, pp.793-795; Prometheus Rising, pp.97-100; Schrodinger's Cat Trilogy, pp.342-347.]
10.) I grew up in boxy architecture, and when I first encountered this idea - about rectangles and 90 degree angles and conformity - I also found out that at some point we had to learn to see in 3-D spatial terms, and that we've forgotten how we did it. Supposedly some cultural anthropologists had gone into deepest, darkest rain-forest Africa and lived with and studied pygmies, whose complete environment was always giant trees and vines and moving through those living, breathing green spaces, always canopied by jungle thickness as "ceiling." And when they were taken to a clearing at the edge of the forest and the anthropologists pointed to a man and a jeep far off in the distance, the natives thought they were seeing a tiny man. They had not learned to see over vistas of "open space."
So, I lay in bed and looked at the point where the ceiling meets the walls. Two walls meet at the "point" of the ceiling. And I tried to remember what it was like to not see that as a point in space. It's akin to many visual illusions or the Necker Cube you've all seen. It was fruitless. Until, one day...O! Such little things that thrill me. Aye: the corner was on a flat plane. And then it pointed out toward me...
I attest, I assert that when I enter buildings of a non-Euclidean build, my consciousness is altered. An inventory of memories and anecdotes would bore you and me, but I wonder if you have felt the same? I love round rooms. A spiral staircase can really get me going. On and on. But here's the thing: if I had grown up in a non-Euclidean house, I strongly suspect that entering a Euclidean "tiny box" house would alter my consciousness also. Because I think what does the altering is the unfamiliar structure of space...
I hope I didn't come off like some un-hep "square" in this blogspew.
Wednesday, October 2, 2013
The Drug Report: Crisis In Psychopharmacology
It's been at least 30 years since a truly new drug has hit the market that addresses the needs of patients suffering from depression, anxiety, manic depression (now rather bloodlessly called "bipolar disorder"), and schizophrenia. Any "new" drugs in the last 30 years have been basically some variation on an older, established drug ("Me Too" drugs), an effort by drug companies to keep up with the competition. These non-new "new" drugs are almost always marketed as "blockbuster" or "revolutionary" therapeutics, touting fewer side effects than older, competing drugs. They are not new, and the side effects are just different, not fewer. 50 or so psychiatric drugs bring in $25 billion a year in Unistat alone. And they're pretty lousy.
(I know, I know: you'd be far worse off without the one that worked for you. Hey: they do some good. For some people. I want better drugs for you, is all. And we were promised them with the 2000 mapping of the human genome. So...where are they? Later.)
serotonin
The drugs people use - by every estimate I've seen, between 20% and 25% of the Unistat population takes at least one of these - were discovered by accident. By serendipity. In the 15 years after 1945. In 1952 a tuberculosis drug didn't work for TB, but iproniazid sure elicited euphoria when tested! Bingo: the first antidepressant. The drug that became Tofranil was supposed to work for schizophrenics, but it didn't help them, only made them run naked into town, laughing. Another antidepressant. In 1949 lithium was discovered, by accident, to treat manic depression. In 1957 Leo Sternbach was about ready to give up his research into a class of antihistamines (things were looking like a dead end) when he stumbled onto the benzodiazepines: your Valium, Xanax, Lorazepam, Klonopin, etc: an empire of anti-anxiety drugs, and a huge influence on the tonality of culture in the West in the latter half of the 20th century.
With better technics, we learned much more about neurons and neurotransmitters. The SSRIs seemed to treat depression and anxiety. They were really the last big breakthrough. Ever since then, clinical trials that have made it to Phase III have been nothing but huge, sad, very expensive wastes. And so Novartis, GlaxoSmithKline, AstraZeneca, Pfizer, Sanofi and Merck have by and large quit trying. They've halted clinical trials, moved on to research that shows more promise. The pipeline for new psychopharmacological drugs is dry.
psilocybin, very much like serotonin in structure
Wait a minute: with more neuroscientists than ever before, far better imaging devices, a tremendous acceleration of knowledge about the human brain over the past 30 years...why? And mental illness takes an increasing toll on us. If not on you, then on someone you know. Why is this so difficult? Is it because what R.D. Laing called "the medical model" finally showed its hand? (A pair of nines?)
Again: our technology to map our cells, genes, and organs at ever finer grain is greater than ever. We now have a deeper understanding of the human genome, an explosive discovery of the complexity of the epigenome, increasing understanding of how our environment and microbes interact with us...why don't we have a drug that will cure depression by now? Are we simply too complex to understand? Were we destined to be granted a brief window of time in which a few "happy accidents" would yield up as good as it gets, and it all ended 30 years ago? What about our computing power and pharmacological knowledge? Isn't it also subject to Moore's Law: a doubling roughly every 18 months? Shouldn't we have had a bevy of breakthroughs by now?
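For scale, the Moore's Law comparison is easy to run: a doubling every 18 months over those 30 dry years would compound like this:

```python
# Moore's-Law-style compounding: one doubling every 18 months.
years = 30
doublings = years / 1.5              # 20 doublings in 30 years
improvement = 2 ** doublings
print(f"{doublings:.0f} doublings over {years} years -> "
      f"a {improvement:,.0f}-fold improvement")
# -> 20 doublings over 30 years -> a 1,048,576-fold improvement
```

Drug discovery, of course, has compounded at nothing remotely like a million-fold, which is the point of the question.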
What are we doing wrong?
In 2011 Eli Lilly thought they had a breakthrough for schizophrenia. They'd given PCP to mice, then their new drug and...the mice calmed down! Everything went well. They got to Phase III clinical trials (humans) and 18 months later the drug was dead. Placebos worked just as well. Lilly is another company that has all but given up now too.
LSD: like psilocybin and serotonin, structurally
Some New Ways of Thinking and Genuine Promise
Steven Hyman of Harvard and M.I.T. knows this field well. He was quoted in an article I read as admitting of his colleagues, "People are tired of curing mice."
Let's go back to the last breakthrough: Prozac and all its cousins.
It had been assumed that, when those happy accidents occurred, there must be a theoretical basis. Pharmacologists have always acted like they were on top of what was going on, but the trade secret was they were faking it: when a drug worked, it went on the market, people used it and it "worked" well enough, but at first the chemists and psychiatrists had no idea why. With better understanding of the brain, they reached for the ancient model of an imbalance of humors as an explanatory scheme. Only they juiced it up: they found these drugs altered neurotransmitters. Therefore, the lack of the neurotransmitter caused the disease! It seemed quite plausible, and very much like the hardcore finding that insulin works for diabetics.
Nassim Nicholas Taleb says this is a classic case of the "reverse-engineering problem": drop an ice cube on the floor and then go play cards with your friends in the other room. Can you visualize the cube breaking down into a tiny pool of water? Of course you can. You walk back into the kitchen and see a tiny pool of water where you had dropped the cube. It's pretty straight-forward. Now: imagine walking down the street and coming upon a tiny pool of water. A little spot of wet. How many ways can you dream up the cause of this spot?
A cop comes upon a drunken man looking for his keys, at night, under a streetlight. The cop asks the drunk why he keeps looking under the streetlight, and the drunk says it's because the light is so much better there.
Obviously, even our best researchers have been looking where the light was bright. And the reverse-engineered explanation of our not-all-that-great/we-can-do-better psychopharmacological drugs? Human. All-too human.
The neurotransmitters are not the cause of the mental illness. They merely point at the underlying cause; neurotransmitters (dopamine, serotonin, norepinephrine, etc) are tangential and partial. Reverse-engineering to allow more serotonin to remain in the synaptic gap between neurons was a genius move; too bad there are a handful of studies that show SSRIs work little better than placebos. (For some people they have worked well enough; I don't want to slight this!) All in all, there's a "truthiness" about depression drugs.
We treat everyone the same in studies, while knowing they have variable epigenomes; accounting for that variability is receiving some major research attention and seems quite promising, to my eyes. We have a semantic problem with experts dealing with a patient, making observations and tests, then naming the disease they "have," which is a major problem: people and diseases do not fall into our socially-constructed and convenient categories as well as we'd like. This problem is now far more acknowledged than ever, which seems promising to me. One example is the Research Domain Criteria (RDoC): we map behavioral abnormalities and symptoms and link them to specific causes in the brain, without the label of "schizophrenia" or "panic disorder." Why is this approach better? Because it's more targeted. Instead of looking at one or two neurotransmitters that "cause" schizophrenia, we try to find out specifically what causes people to hear voices, or become catatonic.
The idea that we must take 18 years from conception through clinical trials is being re-thought. Even more crucially for mental disease: non-human animal studies long ago reached diminishing returns. Now the idea is that small-scale, carefully controlled studies on humans will speed up the process and may yield breakthroughs in shorter periods.
Another area of promise: when a drug failed, it often worked for a few people. But our gold standard of drug testing: double-blind and placebo-controlled? The rules were that if the placebo worked as well as the drug, throw out the drug. But the people who were helped probably should have told us something.
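A toy simulation (all rates invented for illustration) shows how the old rule could throw away a drug that dramatically helps a minority: if only 15% of patients are true responders, the arm-wide averages barely separate.

```python
import random

random.seed(42)  # deterministic toy run

N = 1000                     # patients per arm (assumed)
PLACEBO_RATE = 0.30          # assumed placebo response rate
RESPONDER_FRACTION = 0.15    # assumed fraction of true responders
DRUG_BOOST = 0.60            # assumed extra response chance for responders

def response_rate(on_drug: bool) -> float:
    """Fraction of one trial arm counted as 'responding'."""
    responses = 0
    for _ in range(N):
        p = PLACEBO_RATE
        if on_drug and random.random() < RESPONDER_FRACTION:
            p = min(1.0, p + DRUG_BOOST)
        responses += random.random() < p
    return responses / N

drug_arm = response_rate(on_drug=True)
placebo_arm = response_rate(on_drug=False)
print(f"drug arm: {drug_arm:.0%}  placebo arm: {placebo_arm:.0%}")
# The expected gap is only ~9 points, even though the responders'
# chance of improving nearly triples: a strong signal averaged away.
```

Under these assumed numbers the drug looks marginal on average, yet 150 of the 1000 patients got a dramatic benefit - the people who "probably should have told us something."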
Along those lines, there is a strong call to restore abandoned or "invisible" clinical trials to correct the scientific record. We may learn some very interesting things from "failed" trials.
The techniques surrounding stem cells have accelerated at a dizzying pace, and for the better: now researchers can test cells and drugs in a dish and make very good guesses as to whether a compound would have some efficacy.
With the mapping of the human genome in 2000, hundreds of utopian promises were made that now seem embarrassing or like outright quackery. But there was reason to be optimistic. We thought that because we were very complex, we'd have the most genes, but instead of 100,000 we only had about 21,000. Grapes have more genes than us: this was nothing like what we'd expected. Worse: 13 years later we now know that a "bigger" system - in terms of complexity - governs the genome: the epigenome. It turns out that RNA plays a far, far bigger part than we'd thought. The complexity can seem overwhelming.
In 2002 researcher Andrew Hopkins came up with an eye-opening paper, the "druggable genome": Okay: we'd thought we had 100,000 genes. We have closer to 21,000. He estimated that only about 10% of those genes coded for proteins that could bind to small molecules, which is how drugs work, basically. So: about 2,100 genes. But he estimated that, of those, only about 20% would be likely to involve diseases. So now we're down to about 420 possibilities for targets. And then he guessed we'd already discovered 50% of those (probably accidentally?). We only had 210 targets left? For all diseases, not just mental illnesses? Not exactly a rosy scenario. But...
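Hopkins's funnel is simple multiplication; here it is with the numbers as given above (the original paper's figures differ somewhat):

```python
# The "druggable genome" back-of-envelope, using the figures in the text.
genes = 21_000
druggable = genes * 0.10              # code for proteins that bind small molecules
disease_linked = druggable * 0.20     # of those, likely involved in disease
undiscovered = disease_linked * 0.50  # half assumed already discovered

print(int(druggable), int(disease_linked), int(undiscovered))  # -> 2100 420 210
```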
Cheminformatics! This is a burgeoning discipline using the aforementioned computational doubling: there are tens of thousands of compounds in digitized libraries. Do you test them all? Two guys wrote an algorithm to teach a computer to sift through a welter of data on TB, which is becoming antibiotic-resistant. A Big Deal, quite threatening to all of us, potentially. Their algorithm said: find all compounds that are like the drugs that used to work on tuberculosis. So you get that data set. Then the algorithm says: throw out every compound known to be toxic to mammalian cells. You have a smaller set, but a safer one to work with. The algorithm discovered a 40-year-old drug that was shown to have anti-TB properties but had been forgotten.
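A minimal sketch of that two-step sieve, with invented compound records and a stand-in for chemical similarity (the real work would score similarity over molecular fingerprints, not class labels):

```python
# Toy version of the TB-compound sieve: (1) keep compounds resembling
# drugs that once worked on TB, (2) drop anything toxic to mammalian
# cells. All compound data below is invented for illustration.

known_tb_like_classes = {"isoniazid-like", "rifampicin-like"}

library = [
    {"name": "cmpd-001", "class": "isoniazid-like",  "toxic": False},
    {"name": "cmpd-002", "class": "isoniazid-like",  "toxic": True},
    {"name": "cmpd-003", "class": "unrelated",       "toxic": False},
    {"name": "cmpd-004", "class": "rifampicin-like", "toxic": False},
]

# Step 1: similarity filter; Step 2: toxicity filter.
candidates = [
    c["name"]
    for c in library
    if c["class"] in known_tb_like_classes and not c["toxic"]
]
print(candidates)  # -> ['cmpd-001', 'cmpd-004']
```

The payoff of the second filter is that the surviving set is not just smaller but safer to push toward testing.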
Even more interesting and promising: researchers in Cambridge, MA have taken messenger RNA (mRNA), an ultra-fragile molecule which, when injected, activates the body's immune response, tweaked a couple of "letters" in its nucleotide sequence, and made a non-fragile mRNA that does not turn on the immune system. What this could do is take the information from the DNA in a gene and make it "fix" missing or broken proteins in another cell, in effect causing a patient with a (probably inherited?) protein abnormality to make a drug inside their own cells!
Nessa Carey, a gifted explainer of how epigenetics works in our bodies, has urged us to be cautious about getting too excited over drugs based on DNA-RNA, because so far, "One of the major problems with this kind of approach therapeutically may sound rather mundane. Nucleic acids, such as RNA-DNA, are just difficult to turn into good drugs. Most good existing drugs - ibuprofen, Viagra, antihistamines - have certain characteristics in common. You can swallow them, they get across your gut wall, they get distributed around your body, they don't get destroyed too quickly by your liver, they get taken in by cells, and they work their effects on the molecules in or on the cells. Those all sound like really simple things, but they're often the most difficult things to get right when developing a new drug."
Finally, there is a very real call to combine all our new technologies with an active looking-out for happy accidents, like in the 1945-60 period. We find as many compounds that could possibly have efficacy, get people willing to be guinea pigs to try them (we have far better ways to guess at what's likely to have horrendous side effects or death-dealing qualities, but we're by no means "covered" here), and see what happens! Yes, the dark side is that the poor will probably be the ones to sign up...How do we find new things to try? "Scientists Map All Possible Drug-Like Chemical Compounds." It turns out the drunk looking for his keys was a far more accurate analogy than we might've guessed. Or wanted to guess. Check out all the unexplored chemical "space" yet to be charted! It reminds me of the incredible number of phenethylamines and tryptamines that Alexander Shulgin mapped: but a drop in the ocean? (Shulgin deserved the Nobel Prize for Chemistry: just read up on his career! It's almost criminal he didn't get the Prize.) It's like looking for signs of life in the Milky Way! Or more prosaically: like geologists learning how to more profitably drill for oil. It's also about algorithms and possibilities and adventure and hellacious mistakes yet to be made.
To all of us looking for better living through chemistry: Bon appetite! I do think we may make it through this bottleneck to a whole new world of more sophisticated drugs that will make all the ones we've had since 1945 look primitive. Maybe?
Some Of The Works Consulted:
The Epigenetics Revolution by Nessa Carey
"No New Meds," by Laura Sanders:
http://www.sciencenews.org/view/feature/id/348115/description/No_New_Meds
Happy Accidents: Serendipity In Modern Medical Breakthroughs, by Morton A. Meyers
"The Psychiatric Drug Crisis" by Gary Greenberg:
http://www.newyorker.com/online/blogs/elements/2013/09/psychiatry-prozac-ssri-mental-health-theory-discredited.html
PIHKAL: A Chemical Love Story, by Alexander and Ann Shulgin
"Where Are All The Miracle Drugs?" by Brian Palmer:
http://www.slate.com/articles/health_and_science/human_genome/2013/09/human_genome_drugs_where_are_the_miracle_cures_from_genomics_did_the_genome.single.html
"Messenger RNAs Could Create a New Class of Drugs," by Susan Young:
http://www.technologyreview.com/news/512926/messenger-rnas-could-create-a-new-class-of-drugs/
"Faster, Smarter and Cheaper Drug Discovery":
http://www.sciencedaily.com/releases/2013/03/130321131920.htm
Serendipity: Accidental Discoveries In Science, by Royston Roberts
Hope or Hype: The Obsession With Medical Advances and the High Cost of False Promises, by Richard A. Deyo and Donald L. Patrick
"Experts Propose Restoring Invisible and Abandoned Trials to 'Correct the Scientific Record'":
http://www.sciencecodex.com/experts_propose_restoring_invisible_and_abandoned_trials_to_correct_the_scientific_record-114055
The Black Swan: The Impact of the Highly Improbable, by Nassim Nicholas Taleb
(I know, I know: you'd be far worse off without the one that worked for you. Hey: they do some good. For some people. I want better drugs for you, is all. And we were promised them with the 2000 mapping of the human genome. So...where are they? Later.)
serotonin
The drugs people use - by every estimate I've seen, between 20% and 25% of the Unistat population takes at least one of these - were discovered by accident. By serendipity. In the 15 years after 1945. In 1952 a tuberculosis drug didn't work for TB, but iproniazid sure elicited euphoria when tested! Bingo: the first antidepressant. The drug that became Tofranil was supposed to work for schizophrenics, but it didn't help them; it only made them run naked into town, laughing. Another antidepressant. In 1949 lithium was discovered, by accident, to treat manic depression. In 1957 Leo Sternbach was about ready to give up his research into a class of antihistamines - things were looking like a dead end - when he stumbled onto the benzodiazepines: your Valium, Xanax, Lorazepam, Klonopin, etc.: an empire of anti-anxiety drugs, and a huge influence on the tonality of culture in the West in the latter half of the 20th century.
With better technics, we learned much more about neurons and neurotransmitters. The SSRIs seemed to treat depression and anxiety. They were really the last big breakthrough. Ever since then, clinical trials that have made it to Phase III have been nothing but huge, sad, very expensive wastes. And so Novartis, GlaxoSmithKline, AstraZeneca, Pfizer, Sanofi and Merck have by and large quit trying. They've halted clinical trials and moved on to research that shows more promise. The pipeline for new psychopharmacological drugs is dry.
psilocybin, very much like serotonin in structure
Wait a minute: with more neuroscientists than ever before, far better imaging devices, a tremendous acceleration of knowledge about the human brain over the past 30 years...why? And mental illness takes an increasing toll on us. If not you, someone you know. Why is this so difficult? Is it because what R.D. Laing called "the medical model" finally showed its hand? (A pair of nines?)
Again: our technology for mapping our cells, genes, and organs at ever finer grain is greater than ever. We now have a deeper understanding of the human genome, an explosive discovery of the complexity of the epigenome, increasing understanding of how our environment and microbes interact with us...why don't we have a drug that will cure depression by now? Are we simply too complex to understand? Were we destined to be granted a brief window of time in which a few "happy accidents" would yield up as good as it gets, and it all ended 30 years ago? What about our computing power and pharmacological knowledge? Aren't they also subject to Moore's Law: a doubling roughly every 18 months? Shouldn't we have had a bevy of breakthroughs by now?
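For what it's worth, the arithmetic the question invokes is easy to run: a doubling every 18 months over 30 years compounds to roughly a million-fold increase. A two-line sketch (the Moore's-Law figure is the one cited above, rhetorically applied - not a claim about drug discovery itself):

```python
# Compound the doubling the paragraph cites: every ~18 months, over 30 years.
years = 30
doublings = years / 1.5        # 20 doublings
fold_increase = 2 ** doublings
print(int(fold_increase))      # 1048576 - about a million-fold
```

A million-fold increase in computing power since the SSRIs, and still a dry pipeline: that's the puzzle.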
What are we doing wrong?
In 2011 Eli Lilly thought they had a breakthrough for schizophrenia. They'd given PCP to mice, then their new drug and...the mice calmed down! Everything went well. They got to Phase III clinical trials (humans) and 18 months later the drug was dead. Placebos worked just as well. Lilly is another company that has all but given up too.
LSD: like psilocybin and serotonin, structurally
Some New Ways of Thinking and Genuine Promise
Steven Hyman of Harvard and M.I.T. knows this field well. He was quoted in an article I read as admitting of his colleagues, "People are tired of curing mice."
Let's go back to the last breakthrough: Prozac and all its cousins.
It had been assumed that, when those happy accidents occurred, there must be a theoretical basis. Pharmacologists have always acted like they were on top of what was going on, but the trade secret was they were faking it: when a drug worked, it went on the market, people used it, and the drugs "worked" well enough, but at first the chemists and psychiatrists had no idea why. With better understanding of the brain, they reached for the ancient model of an imbalance of humors as an explanatory scheme. Only they juiced it up: they found these drugs altered neurotransmitters. Therefore, the lack of the neurotransmitter caused the disease! It seemed quite plausible, and very much like the hardcore finding that insulin works for diabetics.
Nassim Nicholas Taleb says this is a classic case of the "reverse-engineering problem": drop an ice cube on the floor and then go play cards with your friends in the other room. Can you visualize the cube breaking down into a tiny pool of water? Of course you can. You walk back into the kitchen and see a tiny pool of water where you had dropped the cube. It's pretty straightforward. Now: imagine walking down the street and coming upon a tiny pool of water. A little spot of wet. How many ways can you dream up the cause of this spot?
A cop comes upon a drunken man looking for his keys, at night, under a streetlight. The cop asks the drunk why he keeps looking under the streetlight, and the drunk says it's because the light is so much better there.
Obviously, even our best researchers have been looking where the light was bright. And the reverse-engineered explanation of our not-all-that-great/we-can-do-better psychopharmacological drugs? Human. All-too human.
The neurotransmitters are not the cause of the mental illness. They merely point at the underlying cause; neurotransmitters (dopamine, serotonin, norepinephrine, etc) are tangential and partial. Reverse-engineering to allow more serotonin to remain in the synaptic gap between neurons was a genius move; too bad there are a handful of studies that show SSRIs work little better than placebos. (For some people they have worked well enough; I don't want to slight this!) All in all, there's a "truthiness" about depression drugs.
We treat everyone the same in studies, while knowing they have variable epigenomes. This is receiving some major research attention and seems quite promising, to my eyes. We also have a semantic problem: experts examine a patient, make observations and run tests, then name the disease they "have" - but people and diseases do not fall into our socially constructed and convenient categories as well as we'd like. This problem is now far more acknowledged than ever, which seems promising to me. One example is the Research Domain Criteria (RDoC): we map behavioral abnormalities and symptoms and link them to specific causes in the brain, without the label of "schizophrenia" or "panic disorder." Why is this approach better? Because it's more targeted. Instead of looking at one or two neurotransmitters that "cause" schizophrenia, we try to find out specifically what causes people to hear voices, or become catatonic.
The idea that we must take 18 years from conception through clinical trials is being re-thought. Even more crucially for mental disease: non-human animal studies long ago reached diminishing returns. Now the idea is that small-scale, carefully controlled studies on humans will speed up the process and may yield breakthroughs in shorter periods.
Another area of promise: when a drug failed, it often worked for a few people. But our gold standard of drug testing: double-blind and placebo-controlled? The rules were that if the placebo worked as well as the drug, throw out the drug. But the people who were helped probably should have told us something.
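The point about responders buried in "failed" trials can be made with invented numbers: a drug that strongly helps a small subgroup can still look no better than placebo on average, and out it goes. All the response rates below are made up for illustration:

```python
# Toy illustration: average the responders in with everyone else and a drug
# that genuinely helps a subgroup can look barely different from placebo.
# All rates are invented for illustration.
n = 1000
responders = 100                      # a subgroup the drug actually helps
drug_improved = responders * 0.9 + (n - responders) * 0.3  # 90% vs 30% baseline
placebo_improved = n * 0.35           # placebo response across the whole sample

print(round(drug_improved), round(placebo_improved))  # 360 350
```

Judged on the averages, the drug "fails"; judged on the 100 responders, it's a breakthrough waiting for a better question.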
Along those lines, there is a strong call to restore abandoned or "invisible" clinical trials to correct the scientific record. We may learn some very interesting things from "failed" trials.
The techniques surrounding stem cells have accelerated at a dizzying pace, and for the better: now researchers can test cells and drugs in a dish and make very good guesses as to whether a compound will have some efficacy.
With the mapping of the human genome in 2000, hundreds of utopian promises were made that now seem embarrassing or outright quackery. But there was reason to be optimistic. We thought that because we were very complex, we'd have the most genes, but instead of 100,000 we only had about 21,000. Grapes have more genes than we do: this was nothing like what we'd expected. Worse: 13 years later we now know that a "bigger" system - in terms of complexity - governs the genome: the epigenome. It turns out that RNA plays a far, far bigger part than we'd thought. The complexity can seem overwhelming.
In 2002 researcher Andrew Hopkins published an eye-opening paper on the "druggable genome." Okay: we'd thought we had 100,000 genes; we have closer to 21,000. He estimated that only about 10% of those genes coded for proteins that could bind to small molecules, which is basically how drugs work. So: about 2,100 genes. But he estimated that, of those, only about 20% would be likely to be involved in disease. So now we're down to about 420 possible targets. And then he guessed we'd already discovered 50% of those (probably accidentally?). We only had 210 targets left? For all diseases, not just mental illnesses? Not exactly a rosy scenario. But...
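Hopkins's funnel is simple enough to recompute; the percentages below are the estimates quoted in the paragraph, not fresh data:

```python
# A back-of-the-envelope rerun of the "druggable genome" funnel quoted above.
genes = 21_000                              # protein-coding genes, roughly
druggable = int(genes * 0.10)               # ~10% bind small molecules
disease_linked = int(druggable * 0.20)      # ~20% of those tied to disease
already_found = int(disease_linked * 0.50)  # ~half presumably discovered

remaining = disease_linked - already_found
print(druggable, disease_linked, remaining)  # 2100 420 210
```

Each cut looks modest on its own; multiplied together they shrink 21,000 genes to 210 untried targets.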
Cheminformatics! This is a burgeoning discipline that uses the aforementioned computational doubling: there are tens of thousands of compounds in digitized libraries. Do you test them all? Two guys wrote an algorithm to teach a computer to sift through a welter of data on TB, which is becoming antibiotic-resistant. (A Big Deal, quite threatening to all of us, potentially.) Their algorithm said: find all compounds that are like the drugs that used to work on tuberculosis. So you get that data set. Then the algorithm says: throw out every compound known to be toxic to mammalian cells. You have a smaller set, but a safer one to work with. The algorithm rediscovered a 40-year-old drug that had been shown to have anti-TB properties but had been forgotten.
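That two-step filter can be sketched in a few lines. The compound records, drug names, and similarity sets below are invented for illustration - real cheminformatics would score structural similarity from molecular fingerprints rather than look it up in a table:

```python
# Toy sketch of the filter described above: keep compounds similar to drugs
# that once worked on TB, then drop anything known to be toxic to mammalian
# cells. All records here are invented stand-ins.
known_tb_drugs = {"isoniazid", "rifampicin"}

library = [
    {"name": "cmpd-A", "similar_to": {"isoniazid"}, "toxic": False},
    {"name": "cmpd-B", "similar_to": {"rifampicin"}, "toxic": True},
    {"name": "cmpd-C", "similar_to": set(), "toxic": False},
]

# Step 1: keep anything structurally similar to a drug that once worked.
candidates = [c for c in library if c["similar_to"] & known_tb_drugs]

# Step 2: throw out every compound known to be toxic to mammalian cells.
safe = [c["name"] for c in candidates if not c["toxic"]]
print(safe)  # ['cmpd-A']
```

The interesting part is how cheap each cut is: set intersections and boolean flags, run over tens of thousands of digitized compounds.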
Even more interesting and promising: researchers in Cambridge, MA have taken messenger RNA (mRNA), an ultra-fragile molecule which, when injected, activates the body's immune response, tweaked a couple of "letters" in its nucleotide sequence, and made a sturdier mRNA that does not turn on the immune system. What this could do is take the information from the DNA in a gene and make it "fix" missing or broken proteins in another cell, in effect causing a patient with a (probably inherited?) protein abnormality to make a drug inside their own cells!
Nessa Carey, a gifted explainer of how epigenetics works in our bodies, has urged us to be cautious about getting too excited over drugs based on DNA-RNA, because so far, "One of the major problems with this kind of approach therapeutically may sound rather mundane. Nucleic acids, such as RNA-DNA, are just difficult to turn into good drugs. Most good existing drugs - ibuprofen, Viagra, antihistamines - have certain characteristics in common. You can swallow them, they get across your gut wall, they get distributed around your body, they don't get destroyed too quickly by your liver, they get taken in by cells, and they work their effects on the molecules in or on the cells. Those all sound like really simple things, but they're often the most difficult things to get right when developing a new drug."
Finally, there is a very real call to combine all our new technologies with an active search for happy accidents, as in the 1945-60 period. Find as many compounds as could possibly have efficacy, get people willing to be guinea pigs to try them (we have far better ways to guess at what's likely to have horrendous side effects or death-dealing qualities, but we're by no means "covered" here), and see what happens! Yes, the dark side is that the poor will probably be the ones to sign up...How do we find new things to try? "Scientists Map All Possible Drug-Like Chemical Compounds." It turns out the drunk looking for his keys was a far more accurate analogy than we might've guessed. Or wanted to guess. Check out all the unexplored chemical "space" yet to be charted! It reminds me of the incredible number of phenethylamines and tryptamines that Alexander Shulgin mapped: but a drop in the ocean? (Shulgin deserved the Nobel Prize for Chemistry: just read up on his career! It's almost criminal he didn't get the Prize.) It's like looking for signs of life in the Milky Way! Or more prosaically: like geologists learning how to drill for oil more profitably. It's also about algorithms and possibilities and adventure and hellacious mistakes yet to be made.
To all of us looking for better living through chemistry: Bon appétit! I do think we may make it through this bottleneck to a whole new world of more sophisticated drugs that will make all the ones we've had since 1945 look primitive. Maybe?
Some Of The Works Consulted:
The Epigenetics Revolution by Nessa Carey
"No New Meds," by Laura Sanders:
http://www.sciencenews.org/view/feature/id/348115/description/No_New_Meds
Happy Accidents: Serendipity In Modern Medical Breakthroughs, by Morton A. Meyers
"The Psychiatric Drug Crisis" by Gary Greenberg:
http://www.newyorker.com/online/blogs/elements/2013/09/psychiatry-prozac-ssri-mental-health-theory-discredited.html
PIHKAL: A Chemical Love Story, by Alexander and Ann Shulgin
"Where Are All The Miracle Drugs?" by Brian Palmer:
http://www.slate.com/articles/health_and_science/human_genome/2013/09/human_genome_drugs_where_are_the_miracle_cures_from_genomics_did_the_genome.single.html
"Messenger RNAs Could Create a New Class of Drugs," by Susan Young:
http://www.technologyreview.com/news/512926/messenger-rnas-could-create-a-new-class-of-drugs/
"Faster, Smarter and Cheaper Drug Discovery":
http://www.sciencedaily.com/releases/2013/03/130321131920.htm
Serendipity: Accidental Discoveries In Science, by Royston Roberts
Hope or Hype: The Obsession With Medical Advances and the High Cost of False Promises, by Richard A. Deyo and Donald L. Patrick
"Experts Propose Restoring Invisible and Abandoned Trials to 'Correct the Scientific Record'":
http://www.sciencecodex.com/experts_propose_restoring_invisible_and_abandoned_trials_to_correct_the_scientific_record-114055
The Black Swan: The Impact of the Highly Improbable, by Nassim Nicholas Taleb
Labels: Alexander Shulgin, chemists, drugs, epigenetics, genomics, human brain, Nassim Nicholas Taleb, Nessa Carey, neurobiology, paradigm shifts, pharmaceuticals, placebos
Friday, August 9, 2013
Books: Passing Remarks On Select Titles, Fictional, Non-Fictional and Nonexistent
Not long ago I was nosing through Sally Wade's The George Carlin Letters. She was Carlin's love the last years of his life. Before they got together she was in Dalton's bookstore in Santa Monica and overheard his distinct voice: "I'll take Our Culture and What's Left of It, The Anatomy of Dirty Words, and Rationale of the Dirty Joke...if you can get 'em to me by Friday," he says to the clerk who's helping him, "I'll give ya a tip to buy yourself some weed." These titles may sound like they were made up in the mouth of Carlin, but as you can see by the links (which I do not profit monetarily by citing here; I'm merely a cheerleader for Book Kulchur), they're real. And I'm sort of surprised Carlin didn't own two of those already: Sagarin's Anatomy and Legman's Rationale. They had long been in his wheelhouse. (Maybe he lost his old copies?) I looked up Our Culture and it's by Theo. Dalrymple, of whom I've only read a few articles. This one seems reactionary, no? But I don't know; haven't read it.
Sagarin was influenced by Benjamin Lee Whorf and was one of the intellectual founders of the modern gay rights movement. What a fascinating figure and unsung hero! He was a pioneer in using sociological analysis to show that laws that persecuted people for "obscene" language or other behaviors the dominator culture labeled as "deviant" were unjust laws; these "deviant" behaviors and utterances were legitimate expressions and should be protected and not prosecuted.
Legman was one of the great lone archivist-intellectuals. I own a copy of Rationale and it's a stunning, thick work of readable deep scholarship about a "taboo" subject. Note the line from folklorist Susan Davis about Legman's term "Hell Boxes": they're "a substrate of material that almost everybody knows is there, but can't talk about in polite circles." Legman was all about mining the Hell Boxes, which seem a level or two "above" what Frobenius and Pound called the paideuma.
Robert Anton Wilson told me that the function of a good comedian was to touch on these subjects, because they discharged pent-up energy about the subjects into laughter.
Legman's archival bent reminds me of Ed Sanders, who supposedly has just an unbelievably large archive (500 banker's boxes? Wow) somewhere in upstate NY near Woodstock. In one of his books he mentions he was writing or had written (I lost my notes!) a history of surveillance by Authority of artists, poets, and other Thought Criminals. I have never seen it, and don't know of a library that owns it (a library I could borrow from). Here's a link to something called Sanders' Report: Surveillance Stories of the 60s and 70s, but I'm not convinced this - whatever it is - is the epic "surv" (as Sanders writes it) book from him. I hope something really huge comes from him, culled from his massive archive. As Charlie Parker blew, "Now's The Time."
This reminds me of a review of a book I haven't read: British Writers and MI5 Surveillance: 1930-1960, by Smith. The idea that intellectuals and poets were/are a threat to the existing order: artists seem to devoutly wish it were true, the evidence seems sketchy, and the spies and cops who persecute the artists, as Orwell points out as quoted in the review, don't know what the ideas "are" that have them arresting/harassing/bugging the "red" artist. The leader of the Communist Party in Great Britain considered the intellectuals "less than nothing of their value to the party."
Speaking of which: has anyone written an actual book called Theory and Practice of Oligarchical Collectivism? O'Brien et al. in Orwell's 1984 wrote it; now seems the time to write an actual version. I'd read it if it came out, probably right after The Grasshopper Lies Heavy (a book by Hawthorne Abendsen in Philip K. Dick's The Man In The High Castle), or The Bawdy Humor of Noam Chomsky (a sarcastic in-group joke title by McCawley, Lakoff and other of Chomsky's ex-students, as found in Randy Allen Harris's excellent Linguistics Wars).
While I'm on this stuff: William F. Buckley wrote The Wit and Wisdom of Vlad the Impaler in Robert Anton Wilson's Schrodinger's Cat Trilogy. RAW has to be in the Top 20 of authors who loved to make up titles of books; his books are stacked with fictional fiction and fictional non-fiction, usually in his own fiction. Wigner's Friend by Timothy Leary (a fictional non-fiction book - about the epistemological underpinnings of quantum mechanics - by a non-fictional person); Little People With Big Ideas by Markoff Chaney is a non-fiction book, presumably, by a fictional character who's related to the family that produced Lon Chaney, but Markoff is a midget (or "Mgt"), or "little person." His fictional name is a pun on a mathematical concept that I think I "grok" but maybe not. The idea that a "simple random walk" is an example of a Markov chain...and this is related to Brownian motion, chaos theory, and Monte Carlo methods? Maybe I don't grok it yet.
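For what it's worth, the "Markov" in the pun is easy to demonstrate: in a simple random walk, the next position depends only on the current position, never on the path that got you there. A minimal sketch:

```python
# The Markov property behind "Markoff Chaney": each step of a simple random
# walk looks only at where you are now, not at any of the history.
import random

random.seed(42)  # fixed seed so the walk is reproducible

def random_walk(steps):
    position = 0
    for _ in range(steps):
        position += random.choice([-1, 1])  # each step ignores all history
    return position

print(random_walk(1000))
```

Scale the steps down and speed the clock up and this memoryless jitter is Brownian motion; sample many such walks and you're doing Monte Carlo. So yes: all three relatives are in the family photo.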
Anyway, Markoff Chaney also wrote a book called Reality Is What You Can Get Away With. (citation: see the omnibus edition of Schrodinger's Cat Trilogy, p.538). This fictional character Chaney wrote a book about ontology, or how and what constitutes "reality" or Being-ness. Later, the writer who wrote Chaney's Being-ness into...some ontological status? himself wrote a book by that same title. There is no way to tell if both books "are" the same book, as one is a non-fiction book (maybe?) in a fictional work, while the other appears to be a "play" of some sort that includes a lot of non-fiction, but most librarians would consider it a "fictional" work...unless they classified it as "Screenplays - United States," which I'm not sure is a "real" Dewey or Library of Congress classification term...At any rate, "Robert Anton Wilson" appears to have written two books of the same title, and my educated guess is that the books are quite different. I hope I haven't lost you here, Dear Reader. The version with more ontological status seems to be HERE. Buy your own copy (more ontological status?) HERE. Do not confuse all this with something about Terry Gilliam!
A book that RAW seemed to have made up, but I found out was real, was Oral Sadism and the Vegetarian Personality, written and/or edited by Glenn C. Ellenbogen. RAW thought the book was a terrific parody of academic Psychology. I have not read the book, but when I need that sort of laff (PDQ), I'll seek it out.
Pseudo-Explicational Omnibus is the title of my own nonexistent book about certain writings by Borges, Pynchon, Tom Robbins, Robert Anton Wilson, and Stanislaw Lem. To give you an idea of what the book is about, see Lem's book A Perfect Vacuum.
Here's a book: Universal Ecstatic Tautology, by Alejandro Favian. This appears to exist. I found it in a book on that fabulous weirdo-genius-scientist-Egyptologist-Jesuit Athanasius Kircher. The title would sound like a satire on Kircher, but it was written by one of his greatest admirers, and it's in five volumes, totaling 3000 pages, and like Kircher's books, it's about everything.
3000 pages is nothing, really. The other day a few of us were talking about documentaries we'd seen that knocked our socks off. I struggled to recall the name of the documentary, but when I mentioned it was about the Outsider Artist Henry Darger, someone came forth with In The Realms of the Unreal, directed by Jessica Yu. (HERE's a trailer.) Darger invented his own world and painted it. The expansion of the mythos of his world, The Story of the Vivian Girls, In What Is Known as the Realms of the Unreal, of the Glandeco-Angelinnian War Storm, Caused By the Child Slave Rebellion, allegedly runs to 15,145 pages. I have not read it.
Medicine Chest Against All Heresies sounds like something Wilson made up (to me), but it was written by the orthodox early Christian, Epiphanius of Salamis. When I first saw the title it seemed like something parodical about fundamentalist materialists, Ayn Rand followers, or far-right-wing Christians. It appears Epiphanius was on the ep and ep.
The Etymologicon was a book Giambattista Vico imagined, and if he'd had the support of Readers, he might have written it. It was to be a book that would give all the deepest roots of every word in every language, so the reader could travel back to the Beginning of human language. I found out recently that Mark Forsyth had written a book by that title (2011), with the intriguing subtitle, "A Circular Stroll Through The Hidden Connections of the English Language." It's only English, but I think Forsyth has me with "hidden connections." I'll get to it soon, or at least take a "stroll" through it.
One of my favorite thinkers, Nassim Nicholas Taleb, doesn't like business books much. Neither do I. NNT gives us advice on reading: "With regular books, read the text and skip the footnotes; with academic books, read the footnotes and skip the text; and with business books, skip the text and footnotes." (The Bed of Procrustes, p.46). Also: "What we call 'business books' is an eliminative category invented by bookstores for writing that has no depth, no style, no empirical rigor, and no linguistic sophistication." (op. cit., p.47)
I'd like to end yet another books-spew of a blog by returning to Prof. George Carlin, who thought a book called Doorway to Norway would be a good idea for a travel book to that country. (See Napalm and Silly Putty, p.31)
Sagarin was influenced by Benjamin Lee Whorf and was one of the intellectual founders of the modern gay rights movement. What a fascinating figure and unsung hero! He was a pioneer in using sociological analysis to show that laws that persecuted people for "obscene" language or other behaviors the dominator culture labeled as "deviant" were unjust laws; these "deviant" behaviors and utterances were legitimate expressions and should be protected and not prosecuted.
Legman was one of the great lone archivist-intellectuals. I own a copy of Rationale and it's a stunning, thick work of readable deep scholarship about a "taboo" subject. Note the line from folklorist Susan Davis about Legman's term "Hell Boxes": they're "a substrate of material that almost everybody knows is there, but can't talk about in polite circles." Legman was all about mining the Hell Boxes, which seem a level or two "above" what Frobenius and Pound called the paideuma.
Robert Anton Wilson told me that the function of a good comedian was to touch on these subjects, because they discharged pent-up energy about the subjects into laughter.
Legman's archival bend reminds me of Ed Sanders, who supposedly has just an unbelievably large archive (500 banker's boxes? Wow) somewhere in upstate NY near Woodstock. In one of his books he mentions he was writing or had written (I lost my notes!) a history of surveillance by Authority of artists, poets, and other Thought Criminals. I have never seen it, and don't know of a library that owns it (a library I could borrow from). Here's a link to something called Sanders' Report: Surveillance Stories of the 60s and 70s, but I'm not convinced this - whatever it is - is the epic "surv" (as Sanders writes it) book from him. I hope something really huge comes from him, culled from his massive archive. As Charlie Parker blew, "Now's The Time."
This reminds me of a review of a book I haven't read: British Writers and MI5 Surveillance: 1930-1960, by Smith. The idea that intellectuals and poets were/are a threat to the existing order: artists seem to devoutly wish it were true, the evidence seems sketchy, and the spies and cops that persecute the artists, as Orwell points out as quoted in the review, don't know what the ideas "are" that has them arresting/harassing/bugging the "red" artist. The leader of the Communist Party in Great Britain considers the intellectuals "less than nothing of their value to the party."
Speaking of which: has anyone written an actual book called Theory and Practice of Oligarchical Collectivism? O'Brien, et.al in Orwell's 1984 wrote it; now seems the time to write an actual version. I'd read it if it came out, probably right after The Grasshopper Lies Heavy (a book by Hawthorne Abendsen in Philip K. Dick's The Man In The High Castle), or The Bawdy Humor of Noam Chomsky (a sarcastic in-group joke title by McCawley, Lakoff and other of Chomsky's ex-students, as found in Randy Allen Harris's excellent Linguistics Wars).
While I'm on this stuff: William F. Buckley wrote The Wit and Wisdom of Vlad the Impaler in Robert Anton Wilson's Schrodinger's Cat Trilogy. RAW has to be in the Top 20 of authors who loved to make up titles of books; his books are stacked with fictional fiction and fictional non-fiction, usually in his own fiction. Wigner's Friend by Timothy Leary (a fictional non-fiction book - about the epistemological underpinnings of quantum mechanics - by a non-fictional person); Little People With Big Ideas by Markoff Chaney is a non-fiction book, presumably, by a fictional character who's related to the family that produced Lon Chaney, but Markoff is a midget (or "Mgt"), or "little person." His fictional name is a pun on a mathematical concept that I think I "grok" but maybe not. The idea that a "simple random walk" is an example of a Markov Chain...and this is related to Brownian motion, chaos theory, and Monte Carlo? Maybe I don't grok it yet.
Anyway, Markoff Chaney also wrote a book called Reality Is What You Can Get Away With. (citation: see the omnibus edition of Schrodinger's Cat Trilogy, p.538). This fictional character Chaney wrote a book about ontology, or how and what constitutes "reality" or Being-ness. Later, the writer that wrote Chaney's Being-ness into...some ontological status? himself wrote a book by that same title. There is no way to tell if both books "are" the same book, as one is a non-fiction book (maybe?) in a fictional work, while the other appears to be a "play" of some sort that includes a lot of non-fiction, but most librarians would consider it a "fictional" work...unless they classified it as "Screenplays - United States," which I'm not sure is a "real" Dewey or Library of Congress classification term or not...At any rate, "Robert Anton Wilson" appears to have written two books of the same title, and my educated guess is that the books are quite different. I hope I haven't lost you here, Dear Reader. The version with more ontological status seems HERE. Buy your own copy (more ontological status?) HERE. Do not confuse all this with something about Terry Gilliam!
A book that RAW seemed to have made up, but I found out was real, was Oral Sadism and the Vegetarian Personality, written and/or edited by Glenn C. Ellenbogen. RAW thought the book was a terrific parody of academic Psychology. I have not read the book, but when I need that sort of laff (PDQ), I'll seek it out.
Pseudo-Explicational Omnibus is the title of my own nonexistent book about certain writings by Borges, Pynchon, Tom Robbins, Robert Anton Wilson, and Stanislaw Lem. To give you an idea of what the book is about, see Lem's book A Perfect Vacuum.
Here's a book: Universal Ecstatic Tautology, by Alejandro Favian. This appears to exist. I found it in a book on that fabulous weirdo-genius-scientist-Egyptologist-Jesuit Athanasius Kircher. The title would sound like a satire on Kircher, but it was written by one of his greatest admirers, and it's in five volumes, totaling 3000 pages, and like Kircher's books, it's about everything.
3000 pages is nothing, really. The other day a few of us were talking about documentaries we'd seen that knocked our socks off. I struggled to recall the name of the documentary, but when I mentioned it was about the Outsider Artist Henry Darger, someone came forth with In The Realms of the Unreal, directed by Jessica Yu. (HERE's a trailer.) Darger invented his own world and painted it. The expansion of the mythos of his world, The Story of the Vivian Girls, In What Is Known as the Realms of the Unreal, of the Glandeco-Angelinnian War Storm, Caused By the Child Slave Rebelllion, allegedly runs to 15,145 pages. I have not read it.
Medicine Chest Against All Heresies sounds like something Wilson made up (to me), but it was written by the orthodox early Christian, Epiphanius of Salamis. When I first saw the title it seemed like a parody of fundamentalist materialists, Ayn Rand followers, or far-right-wing Christians. It appears Epiphanius was on the ep and ep.
The Etymologicon was a book Giambattista Vico imagined, and if he'd had the support of Readers, he might have written it. It would have given the deepest roots of every word in every language, so the reader could travel back to the Beginning of human language. I found out recently that Mark Forsyth had written a book by that title (2011), with the intriguing subtitle "A Circular Stroll Through The Hidden Connections of the English Language." It's only English, but I think Forsyth has me with "hidden connections." I'll get to it soon, or at least take a "stroll" through it.
One of my favorite thinkers, Nassim Nicholas Taleb, doesn't like business books much. Neither do I. NNT gives us advice on reading: "With regular books, read the text and skip the footnotes; with academic books, read the footnotes and skip the text; and with business books, skip the text and footnotes." (The Bed of Procrustes, p.46) Also: "What we call 'business books' is an eliminative category invented by bookstores for writing that has no depth, no style, no empirical rigor, and no linguistic sophistication." (op. cit., p.47)
I'd like to end yet another blogspew on books by returning to Prof. George Carlin, who thought a book called Doorway to Norway would be a good idea for a travel book to that country. (See Napalm and Silly Putty, p.31)
Thursday, July 26, 2012
The Cosmic Schmuck Principle and Some of Its Family Resemblances
The Cosmic Schmuck Principle
Robert Anton Wilson minted the term "Cosmic Schmuck" in a similar spirit to Murphy's Law. The Cosmic Schmuck Principle seemed aimed at greater ethical behavior among the educated classes; I see this impetus in RAW as an influence from Ezra Pound and Confucius, and also Alfred Korzybski. Also: RAW wrote a lot about hearing and reading formulations like this while growing up:
An X (person or group) appears to have done something lousy.
Therefore all people who seem like Xes are suspect or bad or dangerous, or might do something lousy.
This formulation leads to incivility, bad ethics, injustices, violence, and even genocide. (Think of Hitler making the above statement, and replace X with Jews.)
Yes, but what is this thing called The Cosmic Schmuck Principle? It has to do with pretending to a level of certainty or knowledge that you are unlikely to have, and so acting like a schmuck. Oh, but let's have a concise statement from RAW:
The Cosmic Schmuck Principle holds that if you don't wake up, once a month at least, and realize you have recently been acting like a Cosmic Schmuck again, then you will probably go on acting like a Cosmic Schmuck forever; but if you do, occasionally, recognize your Cosmic Schmuckiness, you might begin to become a little less Schmucky than the general human average at this primitive stage of terrestrial evolution. - p.21, Natural Law: Or Don't Put A Rubber On Your Willy. HERE is the text of this incredible little anarcho-libertarian pamphlet on epistemology at Scribd.
Another website excerpts more from the page(s) with the quote I used above; I link to it in the interests of context. I don't know who the man is in the photo. It is not RAW.
Nota bene: what I find very lovable in the Cosmic Schmuck Principle is the heavy implication that we are all schmucks, to some degree. And RAW would acknowledge his own schmuckiness at times. This dovetails really well with the ideas about Wrongology from Kathryn Schulz, whom I write about near the end of this piece...
One of the main tropes that runs through Wilson's entire oeuvre is the embracing of uncertainty; one reason being that, in his epistemology, given our nervous systems and how we're wired, coupled with what we've found in quantum mechanics, cultural anthropology, genetics, perception psychology and neuroscience, linguistics, and a whole host of other disciplines, we cannot know anything but the most trivial things for certain, and maybe not even these trivial things. And secondly, this is something to be embraced, not because it is inevitable and seems to have been built into the fabric of the weirdness of "reality," but because it enables us to live with a sense of deep wonder, which he once said was "all the religion we need."
It could be that Pyrrho the Skeptic was the first to advocate for something along these lines (after encountering some "naked wise men" in India?); there seems much to dispute here.
Other ideas that seem to bear a family resemblance in the Wittgensteinian sense: fallibilism, aspects of the sociology of knowledge, Eric Hoffer's "True Believer," and many other forms of social epistemology. I want to discuss - and maybe even elucidate - a few others here.
Richard Rorty and "Knowingness"
Rorty, one of my favorite academic philosophers of the late 20th century (he died in 2007 at the age of 75), thought the educated classes, especially via too much theory, had fallen into a trap he called "knowingness," a term he may have gotten from someone else, possibly the literary critic Harold Bloom. Anyway, when I first read about knowingness in Rorty's sense it knocked me on my ass, and a definition has stuck in my neural circuits:
"Knowingness is a state of soul which prevents shudders of awe."
Think of the 23-year-old grad student who thinks he's "seen it all." He hasn't. Not even close. He "knows" too much. A 23-year-old grad student has hardly seen anything, but he is under the illusion he's seen it all. He has been trained to think analytically, and possibly over-analyzes everything, so that nothing is wonderful anymore. This seems born of a deep-seated fear, because another part of him knows he hasn't experienced much of the world yet. Academics up to the age of 80 have been known to fall deeply into the slough of knowingness. It looks pretentious to us, but they have defended their knowledge in learned paper after learned paper. Who reads these papers? Their colleagues, and hardly anyone else. Such an academic lives in a bubble of knowingness, and many of his fellow academics are hyper-theorizing and caught in the mire of knowingness too. Women are just as liable to this trap, this "state of soul," as men. It seems a lot like the Cosmic Schmuck Principle, but I seriously doubt Rorty ever read Wilson; they ran in different intellectual strata. Still, I think it would be a safe guess that, had someone asked Wilson (who also died in 2007) whether Cosmic Schmuckiness prevents a shudder of awe, he'd have said yes.
Likewise, having read his books and seen interviews with him, I can easily imagine Rorty embracing the idea of recognizing when you were pretending to know something you really didn't. His theory of truth - which was not a theory - was that truth is something that happens to an idea. And it happens because the idea is found to be good, pleasurable, helpful. When you're on that track - asking what helps you get through your days and nights with more humanity - I hazard that you're already bending toward less schmuckiness, less knowingness...
Rorty was often labeled a "neo-pragmatist." I think the Cosmic Schmuck Principle fits snugly into the pragmatist project (accused of being "anti-philosophy" by some).
When I read online criticism of people between 18 and 30 who are thought of as "Hipsters," I get a vague whiff that their critics think the Hipsters have too much knowingness, or are Cosmic Schmucks. But because I'm still not sure what truly constitutes Hipster-hood, I will neither defend Hipsters nor join in the scorn. And above all, I don't want to fall into the formulation near the top of this article ("An X appears to have done something lousy..."), as it's NEVER fair or just to do so. Moving on...
A hedgehog. This one is probably smarter than Thomas Friedman?
Hedgehogs
In 1953 Sir Isaiah Berlin wrote a famous essay on types of intellectuals, "The Hedgehog and The Fox: An Essay on Tolstoy's View of History," drawing on the ancient Greek poet Archilochus, who wrote that the fox knows many things, but the hedgehog knows one big thing. In Daniel Kahneman's recent - astonishingly erudite, endlessly worthsomewhiles - book, Thinking, Fast and Slow, he expounds on the Hedgehogs in our midst, the "experts" and (worse, to my eyes) the "pundits."
Kahneman, a psychologist who won the Nobel Prize for Economics (a fascinating story in itself), is the go-to guy for insight into our own biases, and how to stop acting like a sucker or Schmuck...even though it appears we're wired to fall into schmuckiness (not the word Kahneman uses!) by evolution.
"As Nassim Taleb pointed out in The Black Swan, our tendency to construct and believe coherent narratives of the past makes it difficult for us to accept the limits of our forecasting ability. Everything makes sense in hindsight, a fact that financial pundits exploit every evening, as they offer convincing accounts of the day's events. And we cannot suppress the powerful intuition that what makes sense in hindsight today was predictable yesterday. The illusion that we understand the past fosters overconfidence in our ability to predict the future." - p.218
Kahneman illustrates the role of chance in 20th-century history: at three tiny moments in time, the fertilized eggs that went on to become Mao, Hitler, and Stalin each had around a 50/50 chance of becoming female. And around 47 million people were murdered because of this chance. (My estimate, based on a few moments rustling around in some history books; Kahneman does not come up with a number in the text.)
[I may have taken tremendous liberty with this past example; I may have made something along the lines of an egregious error. If anyone would like to point it out, I would be happy to hear what that error might consist of. In other words: was I being a Cosmic Schmuck there? Or not? Or does my writing this bracketed paragraph somehow exonerate me from any Cosmic Schmuckery I may have been guilty of in the above paragraph? Are we in a Strange Loop right now?]
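To make the coin-flip framing concrete (a back-of-the-envelope sketch of my own, not a calculation Kahneman performs):

```python
# Back-of-the-envelope: if each of the three fertilized eggs had a
# roughly 50/50 chance of becoming female, the chance that all three
# became male is 0.5 cubed.
p_male = 0.5
p_all_three_male = p_male ** 3
print(p_all_three_male)  # 0.125 -- a one-in-eight roll of history's dice
```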
Daniel Kahneman then discusses Philip Tetlock's 20-year project of massive interviews and questionnaires with "experts" - pundits who forecasted political and economic trends - and how these experts panned out, with hindsight. The results, which should be far better known than they are, show that these pundits performed, as Kahneman writes, "worse than they would have if they had assigned equal probabilities to each of the three potential outcomes." (Tetlock gathered 80,000 predictions across a great many questions, with respondents picking whether the status quo would hold, or whether there would be more of something - say, political freedom or economic growth - or less. Tetlock's book is Expert Political Judgment: How Good Is It? How Can We Know?)
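Forecasts of this kind are often scored with something like the Brier score: the squared error between the stated probabilities and what actually happened, lower being better. Here's an illustrative sketch (the forecast numbers are invented, not Tetlock's data) of how a cocksure "hedgehog" forecast can score worse than the know-nothing equal-thirds baseline Kahneman mentions:

```python
# Illustrative only: the forecast numbers below are made up.
def brier(probs, outcome_index):
    """Multi-category Brier score: sum of squared errors between the
    stated probabilities and the one-hot actual outcome. Lower is better."""
    return sum((p - (1.0 if i == outcome_index else 0.0)) ** 2
               for i, p in enumerate(probs))

hedgehog = [0.9, 0.05, 0.05]   # supremely confident in outcome 0
uniform  = [1/3, 1/3, 1/3]     # the "I don't know" equal-thirds baseline

# Suppose outcome 2 ("less") is what actually happens:
print(brier(hedgehog, 2))  # 1.715 -- badly punished for overconfidence
print(brier(uniform, 2))   # ~0.667 -- the humble baseline wins
```

Over many questions, confident wrong calls like the hedgehog's pile up, which is how the experts ended up "worse than chance."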
Nassim Nicholas Taleb writes about this very thing, quite amusingly, in The Black Swan: quite often, instead of asking your stockbroker for tips on how to invest in the market, you can ask a taxi cab driver and you'll end up with the same amount of money. Similarly, Tetlock is quoted: "In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals - distinguished political scientists, area study specialists, economists, and so on - are any better than journalists or attentive readers of The New York Times in 'reading' emergent situations."
Tetlock falls back on Berlin's "Hedgehogs" when talking about "experts" and why we listen to them. Most of the pundits we see are not foxes - who know a lot of things - but "experts" who are loath to admit when they were wrong, and who, when forced to admit their wrongness, always have many ready-made excuses. They are dazzled by their own brilliance (to my mind the worst of the worst in Unistatian electronic corporate media are Thomas Friedman and David Brooks, not that you asked), and they're led astray not by what they believe but by how they think. They have a coherent model of the world, and they worship that model. Robert Anton Wilson called this "modeltheism." If you show them they have been wrong in their predictions, they get angry and say they were off by only a little bit, or the timing was a tad askew. As Kahneman writes, "They are opinionated and clear, which is exactly what television producers love to see on programs. Two hedgehogs on different sides of an issue, each attacking the idiotic ideas of the adversary, makes for a good show." (p.220)
These Hedgehogs seem like cousins to the Cosmic Schmucks (look at the astounding level of schmuckiness attained by a guy like Rush Limbaugh!), and they seem far too knowing also, eh?
The take-away message? Take the punditocracy with a massive salt-lick, and be a Fox. (Or a non-overweening generalist?)
Kathryn Schulz, who has a lot to say about being wrong
The Pessimistic Meta-Induction From The History Of Science
The wha? Get it straight, kids, from the sexy intellectual Kathryn Schulz. This same article is collected in the recent book This Will Make You Smarter: New Scientific Concepts To Improve Your Thinking, pp.30-31. Most of the theories of the past have fallen by the wayside, so why do we, as Schulz says, grant ourselves "chronological exceptionalism"? When I ran across this bit in the book, I was reminded of John Horgan's book The End of Science, in which he asks very many of the biggest names in science whether we know about 99% of what there is to know, or maybe it's closer to 1%? Fascinating book, wonderful on the sociology of scientific intellectuals and the hard-to-pin-down field Horgan calls "limitology," and it's quite readable, with an ending that, for me, had a twist and was surprising. (Horgan's attitude toward his own question.)
I liked what blogger Roger E. Breisch had to say about the Pessimistic Meta-Induction From The History Of Science and other ideas from Schulz's own book, Being Wrong: Adventures In The Margins of Error.
In my own reading of classics, I think I've seen variations of all of the above family members in the writings of Montaigne, and earlier, Lucretius. And still earlier, Epicurus. But I refuse to dogmatize about any of this and would rather declare that I think I've detected more than enough Cosmic Schmuckery in my own thinking and utterances lately...
Here's Kathryn Schulz, Wrongologist, talking about many of the ideas above. It's 4 mins and 19 seconds.
Saturday, May 19, 2012
Obesity, OR: "Does Our Butt Look Big In That?" (Pt. 3)
A lyricist named Bernie Taupin once wrote this line in a song called "The Bitch Is Back," sung by Elton John:
"Times are changin' now the poor get fat."
And if anyone wants to know why or how this historical turn of events took place, it's easy to find out: our ingenious modern era, with its manipulation of science and technology, has produced food at a level to mock Malthus, and cheaply, too. (In the rich countries.) Evolutionarily, for 99% of the time we've been Homo sapiens it's been a real slog to capture enough calories and eat a diet with enough protein, fat, and carbs to keep us going, and the average life expectancy had risen to the unheard-of high of 38 years in Unistat by 1850. Evolutionarily, we were pretty much programmed to die by 40. Why sit around as old people and use up the tribe's precious resources? Just for your stories and wisdom? Write that shit down, grandpa, and die already. You're taking up space and it's been at least five years since you used the plow worth a damn.
The Reverend Thomas Malthus was a catastrophist. If you were around when he was doing his version of what Sir Martin Rees is doing now, and you were prone, let's say, to "pessimistic thinking," you might have thought him a prophet. Basically he said we humans reproduce at a geometric (exponential) rate, while food production grows at a merely arithmetical rate. It was only a matter of time before famines became common and quite widespread. Malthus was a Man of God, too...no wonder his outlook was so prone to bleakness...(Truth be told, I tend to listen worriedly to Sir Martin Rees, but that's for another blogspew.)
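Malthus's two growth rates can be sketched in a few lines (a toy illustration with invented units, not his actual figures):

```python
# Toy Malthusian race: population doubling each generation (geometric)
# versus food supply growing by a fixed increment (arithmetical).
population, food = 1.0, 1.0
for generation in range(1, 9):
    population *= 2      # geometric: 2, 4, 8, 16, ...
    food += 1            # arithmetical: 2, 3, 4, 5, ...
    print(generation, population, food)
# By generation 8, population stands at 256 units to food's 9 -- the
# widening gap Malthus thought guaranteed famine, and the gap that
# technology (so far) has actually closed.
```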
Reverend Malthus, 1766-1834. Sociologist, economist,
pessimist.
All too human, Malthus did his futurology and prognostications while living in what Nassim Nicholas Taleb calls Mediocristan: he was using far too simple mathematics and couldn't factor in something totally unknowable but fairly Black Swan-ish: we were able to harness mind-power to produce more and more food in smaller and smaller areas, and quicker and quicker, and then transport got better and faster, and refrigeration came into its own...another futurologist proven wrong. (Temporarily?)
But in the ultra-short period of, say, 125 years, this easy access to sugar (which was always hard to find for 99% of our existence), fat, and carbs - all delightful, life-enriching and acting on dopamine levels in the brain (AKA the "reward system") - threw us a curve. We didn't know how to handle it. And then other sciences and technologies combined forces and made our lives comfier and comfier, to the point where, very very suddenly, on our evolutionary scale, we sit around all day long, every day, and eat rich, fatty food. Meanwhile, our bodies are basically the same ones we had a million years ago. No wonder we're fat!
Now we are so rich we've extended the average lifespan to double what it was in 1850, and we're dying of degenerative diseases. Now the game is not predicting when the food will run out, but when we'll learn how to handle the food. And maybe our analytical tools are more sophisticated than Malthus's.
We saw in my last entry that the NCHS/CDC say the stats show the obesity epidemic is leveling off already. A recent mega-research paper predicted 42% of the Unistat public would have a Body Mass Index of 30 or higher by 2030, but we have reasons to doubt that. The CDC in 2003 predicted that by 2010, 40% of the public would be clinically obese (BMI above 30); the number turned out to be 35.7%.
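For clarity, since BMI trips people up: it's weight in kilograms divided by the square of height in meters, a plain unitless number (not a percentage), with 30 as the conventional obesity cutoff. A minimal sketch, with example figures of my own choosing:

```python
# Standard BMI formula: kg / m^2. Conventional cutoffs: 25+ is
# "overweight," 30+ is "obese." The example numbers are invented.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

print(round(bmi(95.0, 1.78), 1))  # 30.0 -- just over the obesity line
```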
The British Dept of Health predicted in 1999 that by 2010, 25% of Brits would be obese. They updated this prediction in 2006 to 33%. By 2010 the number was 26.1%. Fudge factors? Yes, all sorts of them. First off, of course, the many who responded by admitting they were eating fudge as they spoke. Then again...
Some numbers were obtained by phone surveys asking people how much they weighed, and people tend to prevaricate in that situation. Nonetheless, the numbers are probably pretty close. They have turned out not to be as bad as our best predictors predicted. Do the predictors have a vested interest in their High Numbers? Yes, probably. More money gets thrown at Public Health and obesity-related problems, and some of that money sticks to the predictors and their colleagues. But still: we have a long row to hoe, and it's not going to be easy.
Chicago-style deep-dish pizza: now qualifies as a "vegetable"
in Unistat schools, thanks to the Goliath food and beverage
industry and their lobbyists. Man, this looks good right about
now! Eh?
Why will it be difficult? Well, that too is a very complex problem, but if we look at the Goliath-like Food and Beverage Industry and what it can afford in lobbying Congress, versus the public-interest groups that want to educate, restrict massive amounts of sugar and fat in schools, or curb advertising aimed at children, well, David gets stomped to death by Goliath like an ant. In the last three years, four government agencies sought to reduce sugar, salt, and fat in food marketed to kids: Congress killed the effort. The Center For Science in the Public Interest - a bunch of do-gooders who object to 9-year-olds who already weigh 170 pounds - spent $70,000 last year lobbying Congress. The Food and Beverage Industry spends that every 13 hours. Pizza is now classified as a "vegetable" in schools. According to this article from Reuters, the food/bev industry has never lost a significant political battle, and its tactics are the same as the tobacco industry's were: we're just giving people what they want in a free society; there's no real proof our food and drink is making people sick; they need to moderate their own intake and exercise more. If you made a hefty paycheck working as a lobbyist for big Food and Bev, wouldn't you say that too?
Note that Ol' Captain Buzzkill William Dietz makes an appearance in the above-cited article: "This may be the first generation of children that has a lower life span than their parents."
Here are two classic takes on why we're fat, from different points of view. First, check out Professor Richard McKenzie, who may be getting some of that sweet Food and Bev money alongside his emeritus professor dough. Yes, we're fatter. On average, Unistatians are 26 lbs heavier than they were in 1960. SUVs were made for fatties. Gurnies have had to be reinforced, stadium seats widened. Because we're on average 26 pounds fatter than 1960, we use an extra two billion gallons of gasoline and jet fuel. We create much more greenhouse gas and our medical costs have skyrocketed. But, as he argues in his book Heavy: The Surprising Reasons America Is the Land of the Free and Home of the Fat, it's all due to lowered tariffs, cheap imports, and "our growing economic freedoms," which go with political freedoms. No reason to change any of the freedom stuff! (I'll let you mull this one over on your own.)
I think it's a classic, valid libertarian view. There's much to say for it. I'm not completely sold on how we're economically freer now, though. But the freedom argument holds some appreciable weight (sorry!) with me. What I object to is the ultra-monied Food/Bev lobby and their louder bullhorns. They don't want frank education about food and what it's doing to us. For guys like McKenzie, money equals freedom, but I'd like more "freedom" for the educators.
Jonah Lehrer, brilliant popularizer of neuroscience,
the latest psychology, and very creative science writer,
born in 1981.
From Wired, here's a typically smart article from Jonah Lehrer. Why do people eat too much? Well, we're really bad at recognizing when we're full. (That long legacy of hungry homo saps.) Also, restauranteurs think we expect huge portions, and we probably do. So plates have gotten bigger and bigger. Serving sizes are up, Lehrer says, 40% over the last 25 years. We're prone to mimicking the behaviors of those around us. And yes, Big is Good. But why? Lehrer links this to primate status-seeking, which I find fascinating. The problem is: seeking high status by getting the big serving, we get obese, which lowers status. Talk about a vicious circle!
As always, Lehrer suggests a way our of the predicament: if we become mindful of the power/powerlessness module in our primate brain that links Big Food to High Status and therefore, Power, we realize the folly. Mindfulness. It's a big theme in much of Lehrer's writings on neuroscience. But it's easier said than done.
In closing, I suggest we meditate - or ruminate? - a bit on the epigraph Jonah Lehrer uses at the beginning of his article, the quote from M.F.K. Fisher. Is it true? If so, how much do you think it explains about our obesity problem? Do you think some subconscious part of our brain tends to equate food with security, security with love, love with food?
"Times are changin' now the poor get fat."
And if anyone wants to know why or how this historical turn of events took place, it's easy to find out: our ingenious modern era, with its science and technology, has produced food at levels that mock Malthus, and cheaply, too. (In the rich countries.) Evolutionarily, for 99% of the time we've been homo sapiens it's been a real slog to capture enough calories and eat a diet with enough protein, fat, and carbs to keep us going, and the average life expectancy had risen to the unheard-of high of 38 years in Unistat by 1850. Evolutionarily, we were pretty much programmed to die by 40. Why sit around as old people and use up the tribe's precious resources? Just for your stories and wisdom? Write that shit down, grandpa, and die already. You're taking up space and it's been at least five years since you used the plow worth a damn.
The Reverend Thomas Malthus was a catastrophist. If you were around when he was doing his version of what Sir Martin Rees is doing now, and you were prone, let's say, to "pessimistic thinking," you might have thought him a prophet. Basically he said we humans reproduce at a geometric (exponential) rate, while food production grows at a merely arithmetical rate. It was only a matter of time before famines became common and quite widespread. Malthus was a Man of God, too...no wonder his outlook was so prone to bleakness...(Truth be told, I tend to listen worriedly to Sir Martin Rees, but that's for another blogspew.)
Reverend Malthus, 1766-1834. Sociologist, economist,
pessimist.
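Malthus's core claim - population multiplying geometrically while food production only adds arithmetically - can be sketched in a few lines of Python. The starting values and growth rates below are invented for illustration, not Malthus's own figures; the only point is that a fixed yearly multiplication always overtakes a fixed yearly addition eventually.

```python
# Illustrative only: these numbers are assumptions, not Malthus's data.
# The shape of the argument is what matters: exponential growth must
# eventually outstrip linear growth, whatever the particular inputs.

def years_until_famine(pop=100.0, food=200.0,
                       pop_growth=1.03,   # population multiplies by 3% a year
                       food_gain=5.0):    # food adds a fixed amount a year
    """Count the years until population exceeds the food supply."""
    year = 0
    while pop <= food:
        pop *= pop_growth     # geometric (exponential) growth
        food += food_gain     # arithmetical (linear) growth
        year += 1
    return year

print(years_until_famine())   # -> 52
```

With these made-up numbers the crossing comes after 52 years, but the catastrophe is guaranteed by the shape of the curves, not the inputs - which is exactly why the argument felt so prophetic, and why it took a Black Swan to break it.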
All too human, Malthus did his futurology and prognostications while living in what Nassim Nicholas Taleb calls Mediocristan: he was using far too simple mathematics and couldn't factor in something totally unknowable but fairly Black Swan-ish: we were able to harness mind-power to produce more and more food in smaller and smaller areas, and quicker and quicker, and then transport got better and faster, and refrigeration came into its own...another futurologist proven wrong. (Temporarily?)
But in the ultra-short period of, say, 125 years, this easy access to sugar (which was always hard to find for 99% of our existence), fat, and carbs - all delightful, life-enriching and acting on dopamine levels in the brain (AKA the "reward system") - threw us a curve. We didn't know how to handle it. And then other sciences and technologies combined forces and made our lives comfier and comfier, to the point where, very very suddenly, on our evolutionary scale, we sit around all day long, every day, and eat rich, fatty food. Meanwhile, our bodies are basically the same ones we had a million years ago. No wonder we're fat!
Now we are so rich we've extended the average lifespan to double what it was in 1850, and we're dying of degenerative diseases. Now the game is not predicting when the food will run out, but when we'll learn how to handle the food. And maybe our analytical tools are more sophisticated than Malthus's.
We saw in my last entry that the NCHS/CDC say the stats showed the obesity epidemic is leveling off already. A recent mega-research paper predicted 42% of the Unistat public would have a Body Mass Index of 30 or higher by 2030, but we have reasons to doubt that. The CDC in 2003 predicted that by 2010 40% of the public would be clinically obese (BMI of 30 or above); the number turned out to be 35.7%.
The British Dept of Health predicted in 1999 that by 2010 25% of Brits would be obese. They updated this prediction in 2006 to 33%. By 2010 the number was 26.1%. Fudge factors? Yes, all sorts of them. First off, of course, many respondents admitted they were eating fudge as they spoke. Then again...
Some numbers were obtained by phone surveys, asking people how much they weighed, and people tend to prevaricate in that situation. Nonetheless, the numbers are probably pretty close, and they have turned out not to be as bad as our best predictors predicted. Do the predictors have a vested interest in their High Numbers? Yes, probably. More money gets thrown at Public Health and obesity-related problems, and some of that money sticks to the predictors and their colleagues. But still: we have a long row to hoe, and it's not going to be easy.
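For reference, the BMI threshold all these predictions turn on is simple arithmetic: weight in kilograms divided by height in meters squared, with 30 or above counting as clinically obese. The heights and weights below are made-up examples, not data from any of the surveys.

```python
# BMI = weight (kg) / height (m) squared; "obese" = BMI of 30 or above.
# Example figures are invented for illustration.

def bmi(weight_kg, height_m):
    """Body Mass Index: weight over height squared."""
    return weight_kg / height_m ** 2

def is_obese(weight_kg, height_m):
    return bmi(weight_kg, height_m) >= 30

# A 1.75 m (about 5'9") adult crosses the obesity line near 92 kg:
print(round(bmi(92, 1.75), 1))   # -> 30.0
print(is_obese(92, 1.75))        # -> True
```

Note how the phone-survey problem compounds here: shave a few self-reported kilograms off the numerator and a borderline-obese respondent slips under the line.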
Chicago-style deep-dish pizza: now qualifies as a "vegetable"
in Unistat schools, thanks to the Goliath food and beverage
industry and their lobbyists. Man, this looks good right about
now! Eh?
Why will it be difficult? Well, that too is a very complex problem, but if we look at the Goliath-like Food and Beverage Industry and what it can afford in lobbying Congress, versus the public interest groups that want to educate, restrict the massive amounts of sugar and fat in schools, or curb advertising aimed at children, well, David gets stomped like an ant by Goliath. In the last three years, four government agencies sought to reduce sugar, salt and fat in food marketed to kids: Congress killed the effort. The Center For Science in the Public Interest - a bunch of do-gooders who object to 9-year-olds who already weigh 170 pounds - spent $70,000 last year lobbying Congress. The Food and Beverage Industry spends that every 13 hours. Pizza is now classified as a "vegetable" in schools. According to this article from Reuters, the food/bev industry has never lost a significant political battle, and its tactics are the same ones the tobacco industry used: we're just giving people what they want in a free society; there's no real proof our food and drink is making people sick; they need to moderate their own intake and exercise more. If you made a hefty paycheck working as a lobbyist for big Food and Bev, wouldn't you say that too?
Note that Ol' Captain Buzzkill William Dietz makes an appearance in the above-cited article: "This may be the first generation of children that has a lower life span than their parents."
Here are two classic takes on why we're fat, from different points of view. First, check out Professor Richard McKenzie, who may be getting some of that sweet Food and Bev money alongside his emeritus professor dough. Yes, we're fatter. On average, Unistatians are 26 lbs heavier than they were in 1960. SUVs were made for fatties. Gurneys have had to be reinforced, stadium seats widened. Because we're on average 26 pounds fatter than in 1960, we use an extra two billion gallons of gasoline and jet fuel. We create much more greenhouse gas, and our medical costs have skyrocketed. But, as he argues in his book Heavy: The Surprising Reasons America Is the Land of the Free and Home of the Fat, it's all due to lowered tariffs, cheap imports, and "our growing economic freedoms," which go with political freedoms. No reason to change any of the freedom stuff! (I'll let you mull this one over on your own.)
I think it's a classic, valid libertarian view. There's much to say for it. I'm not completely sold on how we're economically freer now, though. But the freedom argument holds some appreciable weight (sorry!) with me. What I object to is the ultra-monied Food/Bev lobby and their louder bullhorns. They don't want frank education about food and what it's doing to us. For guys like McKenzie, money equals freedom, but I'd like more "freedom" for the educators.
Jonah Lehrer, brilliant popularizer of neuroscience,
the latest psychology, and very creative science writer,
born in 1981.
From Wired, here's a typically smart article from Jonah Lehrer. Why do people eat too much? Well, we're really bad at recognizing when we're full. (That long legacy of hungry homo saps.) Also, restaurateurs think we expect huge portions, and we probably do. So plates have gotten bigger and bigger. Serving sizes are up, Lehrer says, 40% over the last 25 years. We're prone to mimicking the behaviors of those around us. And yes, Big is Good. But why? Lehrer links this to primate status-seeking, which I find fascinating. The problem is: seeking high status by getting the big serving, we get obese, which lowers status. Talk about a vicious circle!
As always, Lehrer suggests a way out of the predicament: if we become mindful of the power/powerlessness module in our primate brain that links Big Food to High Status and therefore Power, we realize the folly. Mindfulness. It's a big theme in much of Lehrer's writing on neuroscience. But it's easier said than done.
In closing, I suggest we meditate - or ruminate? - a bit on the epigraph Jonah Lehrer uses at the beginning of his article, the quote from M.F.K. Fisher. Is it true? If so, how much do you think it explains about our obesity problem? Do you think some subconscious part of our brain tends to equate food with security, security with love, love with food?