Overweening Generalist

Showing posts with label Ray Kurzweil.

Sunday, January 10, 2016

Robert Anton Wilson's "Jumping Jesus Phenomenon": Fugitive Notes

Prelim
I've been reading with plenteous enjoyment John Higgs's recent alternative history of the Roaring 20th Century, Stranger Than We Can Imagine, and it's obvious that Robert Anton Wilson influenced Higgs almost throughout, although RAW is mentioned only in the bibliography, with Robert Shea and their Illuminatus! Trilogy.

I'll review Higgs's book in good time, but it's gotten me thinking about the semantic unconscious and what drove us into the vast unintelligible weirdness of the 20th c, and I kept thinking about attempts to quantify acceleration of information in history. I've run across a number of fascinating models, but my favorite one remains RAW's model, which he dubbed "The Jumping Jesus Phenomenon" in one of his choicer ludic moods.

For the uninitiated, check out these links; all others feel free to skip down:
"The Jumping Jesus Phenomenon by Robert Anton Wilson Updated by Bob Pauley" (the gist, pithy)

A short excerpt from RAW on this topic, first paragraph HERE, from his book Right Where You Are Sitting Now.

Here's a YouTube clip on Jumping J and The Singularity (you get to see RAW here). It will take 2 1/2 minutes of your sweet accelerating time. I'm pretty sure it's an excerpt from a 2006 Belgian documentary called Technocalyps.


             A Ray Kurzweil graph showing how Everything will lead to the Singularity by 2045
                      and then everything will be just ducky.


Influences
RAW states in the YouTube video that he first got turned onto the idea when reading Alfred Korzybski. (There seems to be far more about this idea in Korzybski's 1921 book Manhood of Humanity than in his 1933 magnum opus, Science and Sanity.) Other influences were Buckminster Fuller, Henry and Brooks Adams, Adrian Berry, Claude Shannon, Theodore Gordon, Carl Oglesby, Timothy Leary, Unistatian writer/biographer John Keats (died: 2000 CE), and many others. Wilson had one of the great capacious, compendious and generalist minds of the 20th c, so his model of acceleration of information in history must be considered something of a "meta" model. When he noticed later thinkers and ideas that seemed isomorphic to his model, at times he incorporated these new ideas, and in the last 15 years of his life sounded sanguine about Ray Kurzweil's models of acceleration, just to give one example. He also - in both his books and articles - frequently cited his immediate influences for a given idea.

It seems likely that, while RAW had long been fascinated by the idea of info-acceleration and its effect on societies and human nervous systems, what first got the Jumping J jumpin' was his reading of a 1979 book - RAW was 47 in 1979, if you're keeping score at home - on another of his enduring preoccupations, the prospects for human immortality: Conquest of Death, by Alvin Silverstein. (See pp. 134-141) Silverstein had found a 1973 study by economist Georges Anderla, for the OECD, that attempted to quantify knowledge acceleration in history.

Here's a slice of Silverstein:

Let us assume that by the year A.D. 1 we had accumulated an arbitrary single unit of knowledge. Fifteen hundred years later (A.D. 1500) the sum total of human scientific and technological knowledge had about doubled. At about this point the scientific revolution had begun, welding the natural curiosity of humanity to disciplined scientific techniques and quickening the pace of progress. It required only 250 years (through the age of Newton and the Enlightenment of the eighteenth century) for the store of data to double once more, to four units. With the rise of new secular academic institutions, science had a "workbench." The next doubling took only 150 years: by the year 1900 humanity had eight units in its "knowledge bank." (pp. 135-136)

The article Silverstein had drawn from appeared in an OECD publication called Information In 1985, though Anderla's paper was written in 1973. Silverstein uses "units"; RAW uses "Jesuses," after the scientists' practice of naming a unit of measurement after one of their own. Since Anderla's arbitrary single unit begins very near the time the person named Jesus the Christ was born, RAW thought it would be ironic, hip and catchy to re-name the unit a "Jesus."

It's a MODEL
Note in the Silverstein quote: it's technological and scientific knowledge, presumably because that's easier to quantify than the conception in my favorite model in the sociology of knowledge, Peter Berger and Thomas Luckmann's thesis that "the sociology of knowledge must concern itself with everything that passes for 'knowledge' in society." (The Social Construction of Reality, pp. 14-15)

Also: presumably there's a much more unwieldy problem lurking here: the difference between data, information, knowledge, and wisdom. In "The Neurogeography of Conspiracy" in Right Where You Are Sitting Now RAW includes a footnote on one of these elements:

No statement is made or intended about wisdom, which is private, not public, and somewhat more mysterious. (p.90)

In reading Wilson's numerous takes on his Jumping J Phenom we see the idea linked to vast historical migrations of world centers of wealth. When there's lots of wealth there's lots of ideas/data/new techniques/mathematical progress/information/knowledge. Because this meta-conceptualization is a map made by a human (RAW) from reading others' ideas about this topic, it's an intensely social project. Further, because Wilson is not a Platonist, human nervous systems make these models of models of models. Hence, the migration of wealth over the globe over the longue durée was called a "neurogeography" of wealth migration. Indeed: to remind us that (presumably) ALL of our studies are human and not given by the gods, even mathematics might be thought of as "neuromathematics." All of our maps and models of "reality" must first filter through the nervous systems of Beings like ourselves who live on this planet, in a gravity well, orbiting a Type G main-sequence star. There might be other intelligent Beings elsewhere with different systems for mapping "reality." Hilariously, RAW pointed out that, in reminding ourselves of this, we must think of the discipline of neuroscience as neuro-neuroscience. Was he fucking with us? Probably...

Being a generalist dunderhead, I can't begin to offer the distinct features that delineate data from information, or info from knowledge. The more intrepid New York intellectual Daniel Bell, in The Coming of Post-Industrial Society (1973), gives us:

1. data: descriptions of the empirical world
2. information: the data organized into meaningful systems, such as statistical analysis
3. knowledge: the use of information to make judgments.

Which...alright. Not bad.

Some of the best writing I've seen on this clusterfuck of ill-defined terms comes from David Weinberger. See his Everything Is Miscellaneous on the radical change in the way knowledge is structured these days/daze, due to the digital supernova we're all experiencing right where we are sitting now. (See esp. pp. 100-106.) For some golden passages on today's world and the relation of knowledge to the social construction of meaning, see the same book, pp. 199-230. But I digress...

In The Amazing Story of Quantum Mechanics, James Kakalios compares the doubling of energy storage with the doubling of information: they have not occurred at the same pace, because energy storage must obey the properties of atoms. "If energy storage also obeyed Moore's Law, experiencing a doubling of capacity every two years, then a battery that could hold its charge for only a single hour in 1970 would, in 2010, last for more than a century." (Anyone heard of a battery in the works that promises to last 100 years? I wouldn't be surprised if we get one of those in the next few years.)
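To put numbers on Kakalios's comparison, here's a minimal back-of-envelope sketch in Python, using his illustrative figures (a one-hour battery in 1970, capacity doubling every two years):

```python
# Kakalios's battery thought-experiment, worked out. The one-hour
# battery and the two-year doubling are his illustrative figures.
doublings = (2010 - 1970) / 2                 # 20 doublings in 40 years
hours = 1 * 2 ** doublings                    # 1,048,576 hours of charge
print(f"{hours / (24 * 365.25):.0f} years")   # ~120 years: "more than a century"
```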

I lump in energy doubling to screw with our already muddled minds, and 'cuz I get a heady buzz off this stuff. The great anthropologist Leslie White had attempted to quantify social evolution via a measurement of the energy through-put of a particular society.

We could model all this as Jumping Jesus vs. White's Law vs. Moore's Law, but that would be simplifying things far too much, in my estimation. Even Jumpin'J AND White's Law AND Moore's Law seems too simple.

By which I also mean too complex. At this level, logic breaks down for me. Pardon me, but it seems I am full of shit.

Moore's Law
In 1965 Gordon Moore - then at Fairchild Semiconductor; he'd go on to co-found Intel - observed that the number of transistors on a chip doubles roughly every 18 months to two years, and therefore nothing would stay the same: no technology would be safe from a successor, and computing costs would fall while computing power increased exponentially. The idea of a personal computer was hatched in the 1960s, but was as yet unfeasible. Yet Moore's Law was like the governing religion in Silicon Valley/San Francisco/Berkeley; it was widely assumed that this rate of exponential growth applied not only to technology, but to business, education, and even culture. Of course, Wilson loved this idea (not as a religion, though). The visionaries of the sixties knew they only needed to wait a few years and the personal computer would arrive.

I had been trying to keep up with Moore's Law for years. Just when it looked like it would slow down, somebody figured out something, and it kept going. Then, for the last few years, it looked like it had slowed from its exponential pace. Then, last month I read this article. Ok: a marriage of electrons and photons to fit 20 million transistors and 850 photonic components (whatever that means) onto a single 3x6 millimeter chip? Greater bandwidth for less power. "Ultrafast low-power data crunching." And it's quickly scalable for commercial production. It's clear from the article that it's going to be good for the environment (barring the Law of Unforeseen Consequences). But I wondered how this related to Moore's Law. Was Moore's Law even a "thing" anymore? So I emailed the lead researcher at Berkeley about this. Here's Dr. Vladimir Marko Stojanovic's response:


Moore's law has definitely slowed down. Even Intel postponed their 7nm process this year to next year - something that hasn't happened the last fifty years.
Also, the process nodes below 45nm are not really any faster and hardly denser, just more energy-efficient due to mobile demand.

Then Stojanovic wrote a sentence that was way over my head, but asserted this new breakthrough goes "beyond Moore's Law." I guess I'll take him at his word. Hey, why not? Stranger than we can imagine?

Stranger Than We Can Imagine
According to RAW's Jumping Jesus model, we had accumulated 8 Jesuses by 1900, right around the time Higgs's book picks up. Higgs covers the 20th century, in which information doubled again by 1950 to 16 Jesuses, by 1960 to 32 Js, by 1967 to 64, by 1973 to 128 Js...and then things get out of hand. I'm not sure anyone understands it.
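If you want to see the vertigo laid out, here's the doubling schedule as quoted in this post (Anderla's numbers, RAW's units), tabulated in a few lines of Python. The point is the shrinking interval between doublings, not the totals:

```python
# The Anderla/RAW doubling schedule, with the dates given in this post.
schedule = [(1, 1), (1500, 2), (1750, 4), (1900, 8),
            (1950, 16), (1960, 32), (1967, 64), (1973, 128)]
for (prev_year, _), (year, js) in zip(schedule, schedule[1:]):
    print(f"{js:>3} Jesuses by {year:>4}: doubling took {year - prev_year} years")
```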

It could be that once we got to 8 Js, human society began to produce knowledge it had no idea what to do with. Or at least: it had a very difficult time trying to come to grips with a hot mess of items like the Id, genocide, chaos mathematics (which plays a big part in forecasting futures), quantum mechanics and relativity, climate change, postmodernism, psychedelic drugs, and existentialism. Well, clearly we have not come to grips with any of these Ideas yet, eh?

A Few Quotes

The "investigative"poet/chronicler/counterculture historian and Egyptologist Ed Sanders said in 2000CE that our era is "data retentive." (post-Snowden: who can possibly say Sanders was wrong?)

The UC Berkeley information theorist Peter Lyman, who wrote How Much Information? and who died in 2007, said, "It's clear we're all drowning in a sea of information."

"Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?"
-T.S. Eliot, "The Rock"

Eric Schmidt said that from the dawn of civilization until 2003, humans had produced five exabytes of data, and that we now produce that much every two days - and it's accelerating. I don't know his source(s). I can barely grasp an "exabyte." I'm not really even sure how this was measured or what it could mean, but Schmidt seemed excited by it. I gleaned it from This Will Make You Smarter, ed. by John Brockman, p. 305.

Of course, cultural evolution, being Lamarckian, occurred pre-Jesus. Who knows when time-binding began? (My guess was on a Tuesday, probably rainy, may as well stay in the cave all day and...write something?) Vico says, "The Greek philosophers accelerated the natural course of their nation's development." (paragraph #158, New Science)
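For scale, a quick back-of-envelope on Schmidt's figure (his numbers, not mine, and his sourcing is unclear, as noted):

```python
# Schmidt's claim, as quoted in Brockman's book: five exabytes of data
# every two days. What does that come to as a flow rate?
exabyte = 10 ** 18                        # bytes
rate = 5 * exabyte / (2 * 24 * 3600)      # bytes per second
print(f"~{rate / 10**12:.0f} terabytes per second")   # ~29 TB/s
```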

"The ignorance of how to use new knowledge stockpiles exponentially." - Marshall McLuhan

"I suspect or intuit that this ever-accelerating info-techno-sociological rev-and-ev-olution follows the laws of organic systems and continually reorganizes on higher and higher levels of coherence, until something kills it." - Robert Anton Wilson brainmachines.com Manifesto

"Along the way to knowledge,
Many things are accumulated.
Along the way to widsom,
Many things are discarded."
-Lao-Tzu

In a 2004 interview, RAW makes it clear how crazy it is to try to be a generalist in these days of accelerated info. This interview is included in the book True Mutations: Interviews On The Edge of Science, Technology and Consciousness, by R. U. Sirius:

NF: Is there anything specific that you can think of that you feel has been superseded?
RAW: I can’t think about it right now. You can’t be a generalist in this world. So many things I’ve written about have changed by now I don’t know how out-of-date I am. I just know I must be out-of-date.


                                            art/graphic design by Bobby Campbell

Friday, April 5, 2013

Stephen Wolfram's Model of Information

In the 1940s, John von Neumann and Stanislaw Ulam began playing around with the idea that natural systems could be modeled as cellular automata: simple rules applied to initial conditions, then played out step by step. And I remember when I first read about cellular automata - James Gleick's book Chaos: Making A New Science had just come out - and it was filled with mind-spaghettifying ideas. Ideas like artificial life, the now-famous "Butterfly Effect," chaos mathematics and Benoit Mandelbrot and fractal geometry and fractal art, and much of it was way over my Generalist's head, but exciting. Cellular automata were in there. I had never heard of them.

                                                           Wolfram

Years later I picked up Stephen Wolfram's book after it came out in 2002: A New Kind of Science was about 1300 pages long, and was the manifesto of a guy who earned a PhD in particle physics from Caltech when he was 20, then received one of the first MacArthur "Genius" awards at age 21. This guy had a way to model just about everything: syntactic structures, social systems, particle physics. Just about everything. It turns out he was a big-time guy in cellular automata, carrying on in the tradition of another "Martian," John von Neumann.

Wolfram's math was over my head, but books like this make me excited just to be in the presence of this sort of compendious mind. It's the kind of book I take off the shelf and open at random and read, hoping for some sort of inspiration. It usually works. Wolfram models information in our world upon his forays into cellular automata, in which you set up a very basic system under initial conditions and watch it evolve. He developed a taxonomy of the sorts of systems that arise, which he called "Class 1," "Class 2," and so on. These first two classes exhibit a low order of complexity; they tend to reach a level of constancy and repetition that's sorta boring. There are no surprises. They go on and on, ad nauseam, or die. A system like this? A clock.

His Class 3 level I think of as "noise." You can't predict anything. It's seemingly entirely random, like being bombarded by cosmic rays. If there's any structure at all, it's too complex to make out. It seems akin to entropy. A system like this? Your TV tuned to a dead channel: all static and noise.

                                        cellular automata being simulated, played out. 

Wolfram's Class 4 is where the action is: these systems turn out lots of surprises. They're complex but there's structure; you can model from them and make a certain sense out of what's going on. Systems like this are intellectually exciting and basically describe any theory or "law" in the sciences. They're surfing the edge, almost falling into "noise" but never quite. It reminded me of Ilya Prigogine's ideas about complex adaptive systems and negative entropy, how life flourishes despite how "hot" it burns and uses resources. It creates information, structure, patterns, complexity. Indeed, Prigogine and Wolfram seem compatible enough to me...
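For the tinkerers: Wolfram's classes are easy to eyeball with an elementary cellular automaton. A minimal sketch in Python - Rule 30 is a standard Class 3 example, and Rule 110 is the famous Class 4 rule (later proved computationally universal):

```python
# An elementary cellular automaton, in the spirit of A New Kind of
# Science: one row of cells, each new cell determined by its three-cell
# neighborhood via an 8-bit rule number.
def step(cells, rule):
    """Apply an elementary CA rule to one row of 0/1 cells (wrapping edges)."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) |
                      (cells[i] << 1) |
                      cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, width=64, generations=24):
    row = [0] * width
    row[width // 2] = 1                  # a single "on" cell as initial condition
    for _ in range(generations):
        print(''.join('#' if c else '.' for c in row))
        row = step(row, rule)

run(30)     # Class 3: seething, random-looking static
run(110)    # Class 4: structured surprises -- "where the action is"
```

Run it and squint: Rule 30 looks like TV snow; Rule 110 throws off gliders and structures you didn't order.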

                                 Shannon's basic equation for information theory:
                                 world-shattering stuff, turns out


My Other Information Systems
Probably because of my intellectual temperament - which includes not being particularly adept at math - I had always been very impressed with guys like Wolfram and what they were able to do with math, but I have also been suspicious that they're somehow operating from the conceit...or rather, flawed assumption that numbers can describe everything and that everything that's interesting to us is really just stuff that's interacting with the environment and doing computations. I thought these weird math geniuses had become hamstrung by the computer metaphor, and as I saw how different the human brain was from what they had asserted it was - a "biological computer" - I felt my suspicions confirmed.

I remember Timothy Leary giving a talk in Hollywood. He had been reading a recent book and was very enthusiastic about it. It was titled Three Scientists and Their Gods, by Robert Wright. So of course I had to read it. It's about Ed Fredkin, E.O. Wilson, and Kenneth Boulding. Leary seemed taken by Fredkin especially. This was Everything-Is-Information-Processing, digitally, as worldview. Leary's psychedelic intellectual friend Robert Anton Wilson seemed interested in this view too, but never committed to it. RAW always seemed more committed to Claude Shannon's mathematical theory of communication - which is the gold standard for quantifying information - but Shannon's theory has information with no necessary semantic component; RAW made a heady brew from combining Shannon with Korzybski, who was all about semantics and our environment and how we make meanings.
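Since Shannon keeps coming up: his basic equation (the one in the graphic above) measures the average surprise of a source in bits, with no semantic component whatsoever - which is exactly the gap RAW filled with Korzybski. A minimal sketch:

```python
import math

# Shannon's H = -sum(p * log2(p)): average information per symbol, in
# bits. Note there's nothing about meaning in here -- only probabilities.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))      # a fair coin: 1.0 bit per flip
print(entropy([0.99, 0.01]))    # a loaded coin: ~0.08 bits -- little surprise
```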

Earlier, the originator of pragmatism, Charles Sanders Peirce, had developed a theory of semiotics that took into consideration the content of information, using signs and a mind interacting with signs; he had begun to work out a system for defining, quantifying, and taking into account the evolution of a piece of information. This was the "pragmatic theory of information," but it hasn't gone all that far. Shannon's 1948 paper blew it off the map. But still, "information" had to have some sort of semantic component to it, or I had difficulty grasping it. Shannon's and von Neumann's and Fredkin's and Wolfram's and Leary's ideas about "information" felt too disembodied to me; my intuition told me this couldn't be right. But I'm starting to come over to their side. Let me explain.


Modeling Natural Processes
Drawing on cellular automata theory and the gobs of other stuff a Mind like Wolfram's contains, he argued you can only get so far by modeling life as atoms, or genes, or natural laws, or matter existing in curved space at the bottom of a gravity well. More fruitfully, we can model any natural process as computation. Big deal, right? Yea, but think of what this implies: Wolfram thought we can model a redwood tree as a human mind as a dripping faucet as a wild vine growing along a series of trees in a dense jungle thicket in the Amazon. Why? Because all of these systems are "Class 4" systems, and these are the only really interesting things going on. All of these systems exhibit the behavior of "universal computation" systems. (If this reminds you of fractals and art and Jackson Pollock, you're right: I see all of this stuff as a Piece. And so, apparently, does the math.)

Also: you cannot develop an algorithm that can jump ahead and predict where such a system will be at Time X - a consequence of Alan Turing's 1936 work on uncomputability. You can't predict faster than the natural process itself; you have to wait and see what the system does. This blows to smithereens any Laplacian Demonic idea about knowing all the initial conditions and being able to predict everything. So what about guys like Ray Kurzweil, who has become more and more a sort of Prophet for quantifying the acceleration of information and making bold, even bombastic predictions about what will happen to our world, our society? Wolfram/Turing say no. There are no shortcuts, and our natural world is irreducible to anything close to Laplace's Demon. The system is too robust to reduce to even what Kurzweil seems to think it is. Robert Anton Wilson used the term "fundamentalist futurism" to criticize those groups of intellectuals in history that Karl Popper had called the enemies of the Open Society. I think the term may apply to Kurzweil too, but I'm not sure. Certainly it seems to apply to Hegelian historicism, most varieties of Marxism, Plato's Republic, and Leo Strauss and the Wolfowitz/Bush/Cheney Neo-Cons.

As I read Wolfram and Kurzweil, the latter seems to see our world, within Wolfram's classificatory scheme, as something like a Class 2 system: complex-looking, but fairly predictable if you know enough about the algorithm that undergirds the whole schmeer.

Arrogance? Aye, but human, all-too-human, as Fred N wrote.

                                                    Drew Endy, now at Stanford

Synthetic Biology
Leary, with his penchant for neologizing, had, in his 1970s book Info-Psychology, defined "contelligence" as "the conscious reception, integration and transmission of energy signals." There were eight fairly discrete levels of this reception ---> integration ---> transmission dynamic (modeled on the synaptic actions of the neuron). All well and good and trippy, but a team at Stanford led by Drew Endy has made a computer out of living cells.

Engineers at Stanford, MIT, and a bunch of other places have made biological computers. You know how a computer must be able to store lots of data? Well, it turns out storing data in DNA is insanely, wildly do-able, with more storage density than you can imagine. Perhaps you heard that some more of these everything-is-a-computer types stored all of Shakespeare's Sonnets in DNA. But that's small taters: it looks like we'll be able to store entire libraries, TV shows, movies, and CDs in DNA. Read THIS and see if you don't feel your mind getting a tad spaghettified.

So: a silicon chip uses transistors to control the flow of electrons along a path; Endy and his team at Stanford have developed a "transcriptor" to control the flow of RNA polymerase (a protein) along a strand of DNA, using Boolean Integrase Logic gates (or "BIL gates" - so there's your geek humor for the day!). Endy says their biological computers are not going to replace the thing you're using to read this, but they will be able to get into tiny, tight quarters and feed back info and manipulate data inside and between cells...something your Smart Phone cannot do.
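Here's a toy of the Boolean logic involved - my illustration of the idea, emphatically not Endy's actual genetic design:

```python
# A toy of Boolean Integrase Logic: each integrase flips a DNA segment
# into a permissive orientation or not, gating whether RNA polymerase
# "flows" through -- loosely analogous to a transistor gating electrons.
def bil_and(integrase_a: bool, integrase_b: bool) -> bool:
    # polymerase gets through only if both segments are permissive
    return integrase_a and integrase_b

def bil_or(integrase_a: bool, integrase_b: bool) -> bool:
    # either permissive segment lets polymerase through
    return integrase_a or integrase_b

print(bil_and(True, False))   # False: transcription blocked
print(bil_or(True, False))    # True: transcription proceeds
```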

Endy sees his biological computers inhabiting a cell and telling us if a certain toxin is present there. They could also tell us how often that cell has divided, giving us early info on cancer, for example. They could also tell us how a drug is interacting with the cell, making therapeutic drugs more individually tailored.

In a line that reminded me of dear old Crazy Uncle Tim, Endy told NPR that, "Any system that's receiving information, processing information, and then using that activity to control what happens next, you can think of as a computing system."

For more on bio-computing, see HERE and HERE.

I'm starting to swing more with Wolfram. But there are many other little snippets that are swaying me. I still like older forms of "information," more human-scaled and poetic and embodied.

But then there are the intelligent slime-molds, which I will leave you with. Grok in their fullness. Don't say I ain't never gave ya nuthin'!

How Brainless Slime Molds Redefine Intelligence.

Sunday, January 20, 2013

History and Perception of Time: Labeling and Control

I use the word "control" in the title but I think in this semantic sense it's human; oh-so human.

Here's What I'll Ramble On About Here:
- Noocene Epoch
- "human progress"
- acceleration of data, information
- Anthropocene Epoch
- Holocene Epoch
- a final riff

So: How do you think we're doing so far in the Noocene Epoch? (There oughtta be an umlaut over that second "o" in Noocene.) I copped this Epoch from The Biosphere and Noosphere Reader. There it was defined as something like: how we manage and adapt to the immense amount of knowledge we've created. My answer is: I don't know, but I suspect a lot of us are having birth-pangs of a rather longue durée, if we can use that term on a personal scale.

Mutt: We can't.
Jute: We can.
Mutt: You won't.
Jute: I will.

With something like an exponential increase in world population and technological development, including Teilhard's global media/communications vision of a noosphere (the human mind permeating the electromagnetic spectrum), we seem to be going a bit nuts; it may be coming too fast for our biologically-evolved selves. And are we making exponential-like gains in empathy, understanding, and a general updating of ethics and manners, a cosmopolitan outlook? My knee-jerk says nay; Steven Pinker wants to argue something like a "yes" to this in his recent doorstop The Better Angels Of Our Nature. And I so want to believe his basic thesis is right.



Human "Progress"
On the other hand, there's a long tradition of denial of "progress" by heavyweight thinkers. I usually read them as necessary correctives to a general cultural mindlessness about "progress." Chris Hedges has a bit of a jeremiad this week: the very technological boom we've created - it started only a few minutes ago, on the vast Homo sapiens sapiens timescale - is the very thing that may be taking us down. For those of us with an atavistic need for a Bad Time when there's one to be had, read Hedges's "The Myth of Human Progress."

Acceleration of Info
Robert Anton Wilson thought the general rise of social lunacy and conspiracy theory was related to the information flow-through in society, which, according to statistics he derived from French economist Georges Anderla, was doubling at ever-increasing rates. Bytes, Data, Information, Knowledge, and Wisdom may all "be" different things, indeed, but RAW's (and Kurzweil's, for that matter) notion of pegging a unit and a method for counting, then watching the curve rise absurdly quickly, seems an effective rhetoric for getting us to think about the acceleration of processes, however flawed the methodology may be.

Futurist Juan Enriquez talking about data-doubling for 2 minutes.
Ray Kurzweil's Law of Accelerating Returns (ancient!: from 2001)
Robert Anton Wilson and Terence McKenna on information doubling; 4 minutes

Anthropocene Epoch
According to RAW's Jumping Jesus, we were at 2 Jesuses in 1500, then 4 by 1750 and the start of the Industrial Age. The Industrial Age, I notice, is now increasingly described as the beginning of the Anthropocene Epoch. Can we get out of it unscathed? I increasingly doubt it. I don't mean the human experiment on this rocky watery planet will end soon, but I do think we will radically alter what it means to be "human" in the next 30-50 years.

                                                  Cesare Emiliani

Holocene Epoch
The Age of Faith. The call of Being. The Mind of Europe, the Ming Dynasty, the "postmodern," The Sixties...All of these ways of conceptualizing our time here (and any other one you can think of) happened during the Holocene Epoch. Cesare Emiliani proposed a calendar reform based on it: he thinks our calendar, which pivots on when a Jewish rabbi-carpenter-anarchist was born, is too subjective. The "entirely recent" (AKA "Holocene") is, for Emiliani, anything from 10,000 years ago to today, roughly the Neolithic to now. The last great Ice Age had receded; the human story is told in the last 10,000 years. So why don't we just add a "1" to the front of whatever year we're in now and think of time that way? So, we're living in 12013 now.
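For the literal-minded, Emiliani's conversion is just adding 10,000 years, which only reads as sticking a "1" out front for CE years 1 through 9999:

```python
# Emiliani's Holocene calendar: add 10,000 years to the CE year.
def holocene(ce_year: int) -> int:
    return ce_year + 10_000

print(holocene(2013))   # 12013 -- there's the "1" out front
```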

I confess I'm a sucker for romantic intellectuals who are so overweening in their grandiosity of ideation that they think they can change the basic calendar. Do I think Emiliani's idea will ever catch on? Not a chance. But it has caught on with me. I like the psychological sense of a new way to control my perception of time with the Holocene.

Final Riff
To whatever extent humanity's many problems represent an Existential Risk - climate change, lurking plagues, asteroid collisions, Mutually Assured Destruction, and continued overpopulation (the world held roughly 200 million people when the anarchist rabbi was born; 791 million in 1750; 1.6 billion in 1900; 2.9 billion in 1960; 3.6 in 1970; 4.4 in 1980; 5.2 in 1990; 6.0 in 2000; and we passed 7,000,000,000 around Halloween, 2011) - and whether there's another Great Dying, or a Robot Apocalypse, or a happy Singularity or Omega Point: we will need to pass through something Ahead that we might later think of as a Bottleneck Epoch.

On another level and despite the many charming cyclical models of Time and History proffered by some of our more ingenious thinkers, the ideas from Hegel, Marx, Heidegger and Derrida lead me to agree with Derrida: there is no lost original language or vocabulary that will restore our sense of being grounded in some sort of Absolute Ultimate. All that is or seems, seems as metaphor, and we must find our way bravely in this present (which we want, at times, to be "timeless"). We post-postmoderns: can we believe in a teleology for our species, within an historical trajectory? Do we take seriously an eschatology? Clearly some do, but they seem in a negligible minority. In the previous paragraph I hazarded a Bottleneck Epoch, my optimism winning out. I, like Buckminster Fuller, am biased: I like the humans and I, as Bucky said, want them "to be a success in universe."

Nonetheless, how do we think about our present eschatoteleological dilemma? (A: mostly, we don't.)

I wrote this entire post in hopes that someone will think me a Heavy Cat.

Tuesday, June 19, 2012

The Drug Report: June, 2012: The Trouble With Cholesterol

[Friends: I could write about DRUGS every day here for five years and never get tired of it, although most of you would be tired of me. The mandate I've placed on myself is to adopt the persona of a "generalist," so the drug-writing would do harm to the stated purpose of the blog...I'm not sure I've made a good case for my thesis yet anyway, although one of these days I'll arrive at the point...There are already quite a lot of good readable blogs on drugs out there anyway, and if you've seen one you'd like to give a shout-out to, go ahead and mention it in the comments section. - OG]

Lipitor and Other Statin Drugs, and Why I May Have an Excuse For Not Being As Smart As You Think I Oughtta Be...
...Which I'll get to shortly, but first: did you know that, despite our ability to synthesize new compounds as medicines/pharmaceuticals/DRUGS, we still derive most of our best-selling drugs from Nature? Can you imagine surgery before opiate drugs like morphine? Well, where did we find out about opium? From the opium poppy. This will never cease to invoke wonder in me: a molecule produced by a flower was found to produce euphoria and a diminution of pain in humans. We didn't know why or how this worked (morphine was first isolated from opium circa 1804) until the latter half of the Roaring 20th century.

Indeed, a recent study from Singapore shows that about 25% of the best-selling medicines were derived from compounds first found in leeches, snails, bacteria, fungi, and other critters. To meander away from the topic for one sentence, what I found interesting in the study linked to here was that these researchers think they've punched a hole in the reigning idea that beneficial-to-humans substances can be found evenly throughout the biosphere; they think there are hot pockets of classes of organisms where you get much more bang for your buck when looking for the novel stuff, and they liken this to the way petroleum geologists go looking for where to drill most profitably. But yea: if you use aspirin, antibiotics, Procodin for coughs, Ventolin for asthma attacks, Lantus for diabetes, Beserol as a muscle relaxant, or Drovan for hypertension, you have, behind all these drugs, researchers studying the "wee beasties" - as the Father of Microbiology, Anton van Leeuwenhoek (say "LAY-ven-hook"), called them - and the compounds they produce. But let me back up just a bit.

                                           statin-discoverer Akira Endo, at 75 in 2008

Lurking around since at least the 1950s was the idea that heart attacks were such a problem because people make cholesterol in their livers, derived from dietary constituents, and this cholesterol does all kinds of beneficial things for us, like maintaining cell walls and cellular skeletal structures. The liver makes an enzyme called HMG-CoA reductase (which I include in this post to try to impress you) that drives cholesterol production; the cholesterol does lots of good, but if there's too much of it circulating in the blood, it gets stuck on arterial walls and forms fatty plaques. These plaques build up over time; when they get too big they cause strokes, and when they come loose they cause heart attacks. Those obstructed arteries probably play havoc in many ways. Certainly cardiologists and heart surgeons believed this: they saw evidence of it with their own eyes. (I'll spare you the pics.)

But let me back up again a bit.

                                  Anton van Leeuwenhoek, lens-grinder, curious self-
                                  experimenter extraordinaire. Read the chapter on him
                                  in de Kruif's The Microbe Hunters!

In 1971, a researcher at Sankyo Pharmaceuticals, Akira Endo - not to be confused with the Akira Endo who's a Japanese-American conductor - began to muck around with chemicals produced by fungi that grow on things like rice. He is credited with discovering statins, a class of drugs that definitely lower LDL cholesterol. Flash forward 35 years and Endo's receiving accolades and awards and the Japanese equivalent of the Nobel Prize for medicine. In those same 35 years, every Big Pharma company developed its own statin, but Pfizer's Lipitor (atorvastatin) was the blockbuster, the Thriller, the Titanic of all pharmaceutical drugs. It entered the market in 1997 and has made Pfizer $81 billion. It's probably the best-selling pharma-drug of all time. At least 20 million Unistatians are taking it in 2012, most of them over 45 years old. Pfizer's patent protection ran out last November, and it has been aggressively dealing with insurance companies, lowering its price in order to compete with the new generics market. Meanwhile, studies done by academic researchers and governmental bodies keep finding that statins are something like miracle-drugs: not only do they demonstrably lower cholesterol and heart attacks and other cardiovascular morbidities, but they might inhibit Alzheimer's and, and, and...well, just all sorts of unforeseen wonderful things these statins do! But we consumers might want to start looking into these claims for ourselves. Chances are, we use statins ourselves or know someone who uses them. And let me just say this: make no mistake about it, Big Pharma largely funds just about every study you'll read on how great statins are.

Just one more back-up and then I'll keep it in drive from here on out?

Some Personal Stuff: About My Blood and Genes
Around 1997 I went in for a physical. I'm a lithe, ectomorphic dude. I'm omnivorous, but not a major meat-eater. I love eggs, but only have them once every two weeks or so. I exercise a lot, because I enjoy the mental states I get in when I'm hiking or riding my bike around town, and I love the endorphin buzz if I've exercised vigorously enough. But my blood tests showed too-high LDL (the "bad" cholesterol that could shorten lives). My doctor said he was surprised because of my lean body mass and asked about my parents. Well, my mom died of a massive heart attack in her sleep at age 53, but she had smoked cigarettes heavily from an early age. My dad? He's got a bit of a belly but he's in pretty good shape and yes, he's on cholesterol-lowering drugs. My doc thought I probably had a genetic predisposition to high cholesterol, and offered to write a prescription for Lipitor, which I balked at. He said I could try to lower my levels on my own through diet and exercise for six months, come back for a blood test, and if it's still high, I really ought to go on a statin. I said let's do that.

So I practically went vegan (not quite) and exercised with an added reason in mind, went back six months later: practically no change in LDL levels. So, I went on Lipitor.

Let me say this: I have never noticed any untoward effects of 10mg of Lipitor before bed. And when I went back four months later for a blood test, a few days later my doc called and said he'd never seen such a quick and dramatic lowering of cholesterol levels, especially the LDL baddies. So, as I understood it, I had moved into a mode of medical thought that was about preventing a disease before it has a chance to occur. This made sense to me, and seemed "progressive."

                                  Did mushroom spores arrive here from space? What 
                                  are some of them trying to tell us? Are you a mycophile,
                                  a mycophobe, or more neutral?

Among Us: Fungus
Lipitor hit the top of the charts; investors in Pfizer were euphoric. Later research showed all the competing statins from the other companies were just as good (Crestor, et al.), but Lipitor's advertising was stellar. And all these billions from something derived from a fungus on red yeast rice...something like that. In the contemporary taxonomy of living things, the Fungi have a Kingdom all to themselves. They reproduce via almost-invisible spores that fly through the air. Mycologists (AKA mushroom experts) are always fun to listen to; I've never heard a mycologist who was a dull speaker, and Paul Stamets (watch the gorgeous 150-second video in the lower right hand corner, "Fantastic Fungi: The Spirit of Good"!) and Terence McKenna (who was largely self-taught) are/were totally spellbinding in their own ways. Every mycophile I've known was eccentric and very intelligent. There's something about fungus I can't put my finger on...I learned from both Stamets and McKenna that some people tend to be paranoid about the creepy images and powers of mushrooms and other varieties of fungus. These people are called mycophobes. The very straight east coast banker R. Gordon Wasson, an American, was a mycophobe, until his Russian wife Valentina - a mycophile - showed him how wonderful fungi were. Wasson later tracked down the mushroom that Mexican shamans said allowed humans to contact the gods. When he wrote an article about it for Life in the late 1950s, it caused a big stir among Beatniks and artists and other intellectual ne'er-do-wells. As well it should've.

Yep: fungus can be tasty. It can create compounds with strong effects on humans, including alcohol, antibiotics, and hallucinogens. And it can lower cholesterol and save lives...according to the cardiologist model. But there are dissenters...Let's give 'em a hearing.

The Statin Contrarians
Largely shouted down by the Big-Pharma-backed studies, this small but concerned group of medical researchers (possibly the most notable being Beatrice Golomb of UC San Diego) has been raising questions about whether statins are vastly over-prescribed, about the manipulation of data by Big Pharma, about long-term side effects, and about whether statin use leads to Lou Gehrig's Disease and other neuromuscular diseases, dementia, depression, and impulsive behavior. Furthermore, if statins do have serious and widespread side effects, would we ever even find out, given the way the FDA tracks this stuff? Florida doctor Mark Goldstein even linked the massive use of statins to the 2008 world economic meltdown. I don't know how serious he was, but the impulsive behavior he saw in some of his patients who used statins made him think of the banking crash. In Tom Jacobs's article "Cholesterol Contrarians Question the Cult of Statins," a 2009 piece for Pacific Standard (then Miller-McCune), he concludes with these two rather paranoia-inducing (for me) paragraphs:

"So here's where we stand: A hugely profitable, largely self-regulated industry is aggressively promoting a line of newly developed products it assures us are safe and beneficial, when in fact they contain a significant element of risk. Much of the media takes the companies' claims at face value, leaving millions of people ignorant of the fact they are unwittingly participating in a huge, high-stakes gamble.

"Sound familiar? Statins may not have caused the financial meltdown, but the parallels between the two stories are positively heart-stopping."

In early 2012, the FDA issued new warnings for statins, mentioning possible increased risk of Type 2 Diabetes, and...I forget what the other thing was...oh yea: memory loss. (Can one monitor one's own side effects, always? With the 5% of the population that experience muscle fatigue and muscle pain with statin use, this seems easy. But note how often you or your friends blame their temporary inability to recall a name or a word in conversation. If you're over 40 and you have smart friends, they will darkly joke about early-onset dementia or Alzheimer's. The very significant segment of the over-45 population on statins that are reporting memory problems? Do we know this is caused by the drug and not that they're...aging? HERE is a horror story. But statistically, this is in fact rare, and the cost-benefit of using statins still seems to be in the statins' favor. I said "seems.")

A Gene Thing To Note
Researchers at Oxford found a "rogue" gene  (SLC01B1, just to keep you thinking I'm smart) that may account for 60% of the reasons why some people experience nasty-to-life-threatening side effects from statin use, especially neuromuscular disorders. If you have one copy of the gene, you're four times more likely to experience a nasty side effect; if you have two copies? 16 times more likely! And what's really a bit disconcerting: 25% of the population carries one or two copies of this gene, a number so high I think another reporter's use of the term "rogue" was misleading. This, I confess, I found more than a tad creepy.

Thinking For Ourselves
The statin contrarians say the public has been conditioned to be "cholesterol-phobic." They say that what heart specialists see should be balanced with what neurologists and other doctors see with regard to statin use. There's a very vocal crowd that seems mycophobic; they have argued that statins are a "mycotoxin" that obstructs the mevalonate pathway and, well, let me just give you the shrill title of a book I found: How Statin Drugs Really Lower Cholesterol and Kill You One Cell At A Time, by Yoseph and Yoseph. At the risk of sounding flippant, this reminded me of the plot from the old Twilight Zone episode, "To Serve Man." My gawd! The book To Serve Man? It's...it's...a cookbook!!!

[But then again, maybe the Yosephs are Cassandras and we statin users are guinea pigs in one of the worst biomedical disasters in the entire Pan-Galactic Archives? For now, I'm still swallowing my statin after brushing my teeth. You gather your information, sift, weigh the pros and cons, call 'em as you sees 'em, take responsibility for your decisions, think for yourself. Are you sure you want to be eating that Thing you had for lunch?]

But I have studied enough statistics to not be scared off statins for now. As you can see by yet another too-long blogspew, I keep up on this stuff. But before I leave you (as if anyone is still reading by now!), I want to add something that, for some reason, has generally gone unsaid in this statin side-effects brouhaha.



The Possible Role of Co-Enzyme Q-10
My favorite Media Doctor has always been Dr. Andrew Weil. I really like his books. He was at Harvard studying medicine when Timothy Leary was experimenting with psilocybin (from a fungus!), and Weil was writing for the Harvard Crimson. He seemed to want in on the experiments, but couldn't get in. He found that other undergraduates were in on the research, a violation of Harvard's code of ethics, so Weil blew the whistle on Leary, Metzner, and Alpert (Ram Dass), and eventually the psychedelic psychologists were kicked out of Harvard, or dropped out, or were asked to leave. Weil has since come to rather amiable terms with Leary (before Leary's death in 1996), but Ram Dass seems to have never forgiven him. (According to Don Lattin's The Harvard Psychedelic Club.) Anyway, I digress...because apparently I can't help myself. ("Impulsive behavior" driven by statins? No, I think I started digressing in writing around age 7...)

I went to see what Weil thought about statins, and he seemed to stress the use of the dietary supplement Co-Enzyme Q-10. (Hereafter CoQ10) So I read up on CoQ10. Very interesting. But I didn't start buying supplements of it.

Then I read a wonderful book of interviews by David Jay Brown called Mavericks of Medicine: Conversations on the Frontiers of Medical Research. Durk Pearson and Sandy Shaw got on the topic of statins, mitochondria, the politics of biomedical studies, etc. I will throw this out there for the general edification of anyone reading this: food for thought, grounds for your own research!

In the Realm of Conspiracy Theory
Pearson says (I'm reading from page 112) that statins indeed do block the synthesis of mevalonate, which is used to make cholesterol. "However, mevalonate is also used to make a substance called Co-Enzyme Q-10, which is part of the single electron transfer chain controlling chemistry in the mitochondria." Pearson suggests supplementing with more CoQ10 than I already use (I'm a convert!), and gives good reasons why. He also says our ability to synthesize our own CoQ10 degrades as we age, or as our mitochondria age. Pearson says you won't find this info in the Physicians' Desk Reference, so even doctors don't know we should be supplementing with CoQ10, much less the massive statin-using public. The FDA has been unresponsive to researchers and other doctors who have raised this issue. Why? Here's a nice little conspiracy theory, ladies and germs:

Pearson/Shaw (they're always together and, as Brown says, finish each other's sentences) say that Merck Pharmaceuticals has held a patent on any statin-plus-CoQ10 combination since around 1990. They're not making it, because it's really hard and expensive to get FDA approval of a combination drug, so they're sitting on this info. Yea, but why? If statins can be so debilitating, why not go ahead and try to get FDA approval anyway? Because by the time researchers knew about the drawbacks of statins, the drugs were already approved and making boffo dinero for Big Pharma. Coming out with the drawback data would have delayed the gravy train - gravy ironically elevating cholesterol levels to the point where your aorta congeals into a hockey puck, but there I go digressing again...On with the conspiracy theory:

Rather than pull the statins, then go through the long clinical trials of statins plus CoQ10, the Industry kept mum, lest the money-flow float out the window. And if the Public knew about the liabilities, they'd sue, sue, sue. The law among Big Pharma was like the law of the Mafia: omerta. Or: keep your mouth shut! Shaw/Pearson liken this to RJ Reynolds, the tobacco company, who did develop cigarettes that were safer, but didn't release them, because doing so would be an admission you'd already been poisoning the community. The FDA wouldn't let them say their new cigarettes were "less carcinogenic."

Here's Ray Kurzweil, from Brown's book:

"Co-Enzyme Q-10 is important. It never ceases to amaze me that physicians do not tell their patients to take CoQ10 when they prescribe statin drugs. This is because it's well-known that statin drugs deplete the body of CoQ10, and a lot of the side effects such as muscle weakness that people suffer from statin drugs [...] [CoQ10] is involved in energy generation within the mitochondria of each cell. Disruption of the mitochondria is an important part of the aging process and this supplement will help slow that down. CoEnzyme Q-10 has a number of protective effects including lowering blood pressure, helping to control free radical damage, and protecting the heart." (pp.242-243)

Here's a line from Nassim Nicholas Taleb's book of aphorisms, Bed of Procrustes: "Pharmaceutical companies are better at inventing diseases that match existing drugs, rather than inventing drugs to match existing diseases."

And with that ominous observation, I take full responsibility for...what I can take responsibility for...and  urge the Reader to look under rocks and see what squirms there, no matter how unpleasant. Because, whether the Truth shall set us free or not, trying to find more "truth" is bound to make our lives far more interesting than jelling out in front of the TV, no?

At any rate, if you think, after reading me, I'm sort of a dim-bulb, I have my excuse: I was only trying to save my own life!

Some Books and Articles Consulted, From Memory:
Happy Accidents: good on the discovery of statins and other drugs, very readable and delightful!
Scientists Greater Than Einstein: a modern version of the medical researcher/doctor as Hero - a chapter devoted to the heroic life-saving efforts of Akira Endo - in the mold of Paul de Kruif's classic The Microbe Hunters and Sinclair Lewis's fictional offshoot of that book, Arrowsmith, which was heavily influenced by de Kruif.

"Drug To Cut Cholesterol Tests Better Than Statin" (there may be much better drugs for controlling cholesterol coming down the pipes, but this one has to be injected.)

"American's Cholesterol Levels Shrink, Even as Their Waistlines Expand." (ties in with my obesity blogs?)

"Statins Cause Fatigue In Some People" (and yet, nothing on CoQ10)

"Lipitor Patent Ends; generic available: What Now?"

Saturday, March 3, 2012

Life Extension Notes

Completing the trifecta of Leary and Wilson's SMI2LE vision of a way out of technological materialism with no goal, no telos, is Life Extension. This one's a whole ungainly ball of wax, and I will have to do multiple posts on it over the coming months in order to feel like I've said anything substantial about it.

I previously posted on Space Migration HERE.
My stab at Intelligence Increase ("I" squared) was HERE.


Cellular Level
From my current view, the most exciting and radical research on life extension is the hard slog done at the level of the cell. According to some statistics, the combination of ever-faster computers and the fact that there are more scientists doing research now than at any time in history - by far! - means that key findings about how and why we age, and how we can slow it down, stop it or even reverse it, are imminent. (Let us define "imminent" as something like "in the next 15 years," just for kicks, although many in the field will say that's being conservative!)

Around 1961 cytogerontologist Leonard Hayflick and his colleague Paul Moorhead defined what is now commonly called the "Hayflick Limit": they showed that our cells can only divide about 60 times during a lifetime; with constant replications, mistakes are made, gunk gets into the next generation, and accumulates over time. I heard one theorist describe it as like when you or your friends bought a Beatles record and taped it for friends. Then they taped it from the tape they were given, then those friends taped it from the taped tape, etc.: after awhile you got a copy that sounded like crap. Well, when we get to be around 75, our cells are filled with gunk, the machinery has started to break down (I've seen the carburetor used as a metaphor a lot here too), and the processes that keep our hearts, brains, livers and other good stuff running...rust and break. (We probably sound as bad as a washed-out Beatles tape, too.) We die. It seems we're programmed to be like that.
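The Beatles-tape analogy reduces to easy arithmetic. A toy sketch (the 1% loss per copy is an arbitrary illustrative figure, not a measured biological rate):

```python
# The tape-of-a-tape analogy as arithmetic: each copy (cell division)
# degrades the signal a little, and the losses compound.
fidelity = 1.0
for division in range(60):       # ~60 divisions: the Hayflick Limit
    fidelity *= 0.99             # each copy loses an (arbitrary) 1%
print(f"signal left after 60 copies: {fidelity:.0%}")   # ~55%
```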

There are plenty of Very Bright People who think we can defeat these programs.

So far, the only proven method of extending life is to slow metabolism, which for humans means caloric restriction: eating less. The less food/calories, the less to metabolize, and the slower gunk and noise accumulate at the cellular level. The problem with this is that it's really not very fun. I think Woody Allen put it best when he said - I paraphrase from memory - "If you want to live to be 100, you need to give up all those things that make you want to live to be 100."

There's gotta be a better way. And it looks like there are a few prospects. I'll cover a few.

Sidelight: The literature on life extension is so extensive, it's easy to find oneself in a rocky and sedimented terminal moraine. This precariousness seems heightened if the reader is a non-specialist, and who's more non-specialist than an overweening generalist? Still, I have found a few hot rocks. They seem to shimmer brilliantly. I call each one a possible fleck from the Fountain of Youth. Let us call each item a "de Leonite clue."


Here's a possible de Leonite: ELLPs. What are they? They're Extremely Long-Lasting Proteins, and they were recently found on the surface of the nucleus of a neuron by researchers at the Salk Institute's Molecular and Cell Biology Lab, led by Martin Hetzer. What's the big deal? Well, most proteins last for a few days at most. The ELLPs last a lifetime, and they're part of the NPC, or Nuclear Pore Complex. Okay, look at the cross-section of the cell above and note things called "microtubules," "microfilaments," and "plasma membrane." A cell needs both to protect itself from outside bad stuff and to let in outside good stuff - to put it like a nine-year-old. The Salk researchers found that bad stuff can happen because these ELLPs, which armor the nuclear pores like marble, erode over the years, and this is probably how a lot of gunk (there's our carburetor metaphor again) gets in. Another way of putting it: the gatekeepers of the nucleus break down over a lifetime and allow toxins to get in, damaging DNA, and let us not even talk about the nasty stuff that happens when your DNA is damaged.

The results from this research could lead to better ways to treat or avert neurodegenerative diseases like Alzheimer's and Parkinson's. Hetzer's team were told the research they undertook would be too bold, difficult and expensive to conduct. You probably didn't hear about this in the "news" because it didn't contain enough blood, Bieber, ballgame or Beyonce.

Seriously, this is major stuff, because no one really knew ELLPs were what they were. Read more about it HERE.

Q: What are sirtuins? And what do they have to do with me?


I'm glad you asked. They are Silent Information Regulator Two proteins that act as enzymes, and they were linked to life extension around 1986. Mammals have seven types of sirtuins, and SIRT1 has been linked to the reason why caloric restriction works to slow aging. It gets really dodgy from here on out, for the OG.

How cool would it be if we could find something that would activate SIRT1 without our having to constantly feel hungry? When we start to starve - or restrict our caloric intake by 30-40% - the theory goes, some genes kick in and protect us against the stress of being hungry, and protect our cells and vital organs. These are the sirtuins. Here's where red wine comes in: it looks like resveratrol, a compound in certain red wines, activates a very SIRT1-like gene complex and has the same effects as caloric restriction. Instead of caloric restriction/quasi-starvation, drink a hearty glass of red zinfandel! Sound too good to be true? (Wait for it...)

MIT professor Leonard Guarente has been at the forefront of SIRT1 research, from caloric restriction to providing a theoretical platform for new anti-aging drugs based on his (and others') sirtuin findings. GlaxoSmithKline paid $720 million for a company called Sirtris, and...here's an article from two years ago. It's not going as well as we'd/they'd hoped. In many articles, Dr. Richard Miller, a professor of pathology at the University of Michigan and critic of the sirtuin hypothesis, has been saying that the story of the sirtuins was too simple, that sirtuins are probably just one of very many cell-signaling systems that influence aging. But Guarente is holding firm, recently publishing a long paper on the miraculous potentials of the sirtuins.

Meanwhile, research on the other sirtuins is hot - there's potentially gold in them thar genes! - and for the sufficiently geeky, see HERE (SIRT6 linked to longer lifespan!), HERE (researchers in London seem to see a chimera in the sirtuin hunt vis a vis longevity?), and HERE, for starters. Note that this last article suggests mutations that increased sirtuins were linked to anxiety and panic disorders. I think the sirtuin road led to a terminal moraine for this reader. But I will still pay attention to anything that might come up. You never know what SIRT5 might have in store for us, for example. The sirtuins may yet yield a dynamite de Leonite/piece of the puzzle.

I want to get into telomeres and telomerase, but first a word from an Oracle, AKA Kaku:

Stop me if you've heard this one, or just bear with me: each of our chromosomes has a sort of "cap" of base-pairs that can be visualized like the little piece of hard plastic at the end of a shoelace: without it, everything would shred and tying your shoes would be an unpleasant task. A telomere is one of these caps. It strongly appears that, with each cell division over a lifetime, the cap gets shorter. End of telomere = haywire/Hayflick Limit. Cells can't repair themselves; eventually, TAFUBAR.
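For the code-minded, here's a toy sketch of the shoelace-cap arithmetic in Python. It is emphatically not a biological model, and the numbers (a 10,000-base-pair cap, ~100 base pairs lost per division) are my own ballpark placeholders, but it shows why a finite cap implies a hard ceiling on divisions:

```python
# Toy model of telomere erosion. NOT biology - the figures below are
# rough placeholders, not measured values.
telomere_bp = 10_000      # assumed starting "cap" length, in base pairs
loss_per_division = 100   # assumed bp shaved off with each cell division
divisions = 0

while telomere_bp > 0:
    telomere_bp -= loss_per_division
    divisions += 1

# A finite cap eroded at a steady rate gives a hard ceiling on divisions:
print(f"Cap exhausted after {divisions} divisions")  # -> 100: our toy Hayflick Limit
```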
Now: there are two classes of cells that don't age: your germ/sex cells (sperm/eggs, which is good, or your baby will be born looking as old as you!), and our old nemesis, the emperor of all maladies, cancer. Cancer can just go on dividing forever! It's immortal! (Not all cancer cells, just...enough. Oy. As Professor Carlin wrote in Napalm and Silly Putty, "If you live long enough, everyone you know has cancer.") Until it - cancer - kills itself by killing its host (us or our loved ones). Let's not worry about cancer's problems. It's doing just fine for now.

So what makes sex and cancer cells immortal? Telomerase. Both germ and cancer cells produce the enzyme telomerase, which keeps the telomeres intact when cells divide. Can we come up with telomerase therapy that will effectively arrest aging, or potentially reverse some of it? Dr. Michael Fossel thinks so. He thinks it's about ten years away! (Other Big Brains I've read say 50-100 years before we have telomerase therapy. Aubrey de Grey says about 100 years, and that guy usually seems optimistic to me.) Speaking of Mr. de Grey, get a load of him if you haven't already. There are endless videos to be found if you like this one:



Axiological Level
What really interested me here is what de Grey says about those opposed to longevity research: mainstream media people, who seem to regard it as faintly ridiculous or science-fiction-y and throw in the word "immortality"; and gerontologists. This last I found very interesting, because de Grey seems to perceive those engaged in regenerative medicine as knowing things the gerontologists do not. Which I find totally plausible. If one reads Leonard Hayflick, one rapidly sheds any optimism for notable improvements in human lifespan anytime soon; Hayflick is a heavyweight in the field of human lifespan studies and he's not exactly sanguine about our prospects for immortality, to put it mildly. But I wonder if de Grey is really talking about the temperaments of researchers? Just note the impression each field's name makes on your own disposition towards improving human lifespans: "gerontology"...."regenerative medicine"? At what point do we reconcile these two, if ever? It gets weirder.

I think it has to do with axiology, the study of our basic values. We build whole worlds of thought, political systems, ideologies, laws, and dreams on the building blocks of our values. And where did we get these values that are so important to us? It's a difficult question. I've written on axiology HERE and HERE, but I have barely scratched the surface of the idea.

Here's one of the major founders of Transhumanism, Max More. Check out what he has to say between 5:26 and 8:07:

This gets to a recent book review I read in Reason magazine. I have not read The Body Politic: The Battle Over Science In America, by Jonathan D. Moreno. Not yet. Ronald Bailey's review seems to extend the values issues that Max More brought up in his famous Free Inquiry article from 1993, between "humanists" and "transhumanists" - the latter group, those enthused about More's ideas, probably not even self-defining as "transhumanists" at the time. Coupled with Aubrey de Grey's gerontologists vs. regenerative medicine researchers (and IT professionals, libertarians, and Canadians), we now have the "biopolitics" of strange bedfellows: biocons, who object to things like embryonic stem cell research on strictly "moral" grounds; and egalitarian leftists, who see a disaster in runaway new biological techniques: only for the rich, non-egalitarian, and lacking in human dignity. In this article, "biopolitics" is defined as "the nonviolent struggle for control over the actual and imagined achievements of the new biology and the world it symbolizes."

(I analyzed Rick Santorum's views on bioethics vis a vis this wider conversation and will charitably designate him as some sort of "paleobiocon." I think I'm letting him off easy, too.)

The Reader has to decide where they stand on all this. Where are you, bio-ethically? The more I study it, the more difficult it gets. On principle I like the idea of Unistat starting out as a country that values new knowledge very highly. And the history of the 19th/early 20th century Progressive movement and eugenics is sobering. On the other hand, I see Nassim Nicholas Taleb's book The Black Swan as prophetic (ironically!): we now live less and less in the old linear, Bell Curvey "Mediocristan" and more and more in Extremistan: the rich have gotten wildly richer and almost everyone else has stayed the same or fallen lower. And I don't see much brightness on the horizon. Just look at the overtly bought elections. Just look at the appalling decadence in this basic fact: the banks did what they did to our lives, got bailed out, and Obama has not been able to put in place any regulations with teeth to speak of, four years later.

However, as Taleb goes to great lengths to emphasize: we can only notice where there's risk/fragility built into a system and try to minimize it. All other bets about the future are off. No one knows. Certainly not "experts"!

I tend to be with Moreno's "bioprogressives," who welcome new advances, even if I probably won't be able to afford them. Hell, I can't even afford basic health insurance now...

Speaking of Taleb: "Don't talk about 'progress' in terms of longevity, safety, or comfort before comparing zoo animals to those in the wilderness." - p.7, Bed of Procrustes


David Jay Brown's marvelous book of interviews, Mavericks of Medicine: Conversations on the Frontiers of Medical Research (2006) was an overwhelmingly stimulating read this past week - on this subject of life extension and other topics - and is largely to blame for the gap between my last post and this one. Highly recommended! (It's dedicated to Robert Anton Wilson, too.)


To end on a lighter note, Professor Carlin thought that "Dying must have a survival value. Or it wouldn't be part of the biological process."

Ray Kurzweil - no relation to Peter Bogdanovich that I know of - on aging and supplements, for a minute and 40 seconds: "We can't rely on 'being natural,' that's not good enough..."

Wednesday, July 6, 2011

Musing on Two Mad Scientists, Part One

It turns out most of us have been using the term "Luddite" wrongly. Or rather, the way we use terms changes over time, so most of us have been using the term "correctly" because the now-accepted, socially-conventional meaning has to do with some form of antipathy to technology. A wife locks herself out of the house and pounds on the living room window, yelling at her husband, "Turn on your cell phone, you Luddite!"

But the Luddites were against the way their bosses were treating them; they actually liked technology. When they were treated poorly in the workplace, they took it out on the machines, smashing them. There was a five-to-seven-year period of labor wars in England, supposedly masterminded by "Ned Ludd," who probably never existed. Capitalists hired cops, and some workers were killed in melees. Isn't it marvelous how words change over time? Some background on the Luddites is here.
-------------------------
In a recent blogspew I mentioned how I'd grappled with my openness to novelty (see here), and back in May when I started blogging I wrote on futurology on Memorial Day. I have always been morbidly fascinated by Frankenstein and the morality of science and technology. For the most part, I think I could adopt a stance of CAUTION that's two or three times more "cautious" than I already feel now, but when it comes to the acceleration of technology, I think any stance I take will have about as much effect as throwing a glass of water over Niagara Falls: this technological imperative seems like some sort of species-drive to me, and we can only hope to change our attitudes about It as It comes to Its various fruitions.
-------------------------
Mad Scientist Number One: Hans Moravec. I've been following this guy ever since I read about him in Mondo 2000: A User's Guide To The New Edge, around 1990 or so. I read his Mind Children. I have paged through many of his subsequent books, and avidly look for video of him on the Net, or interviews. Here is a two minute piece of fairly recent vintage, to give you a picture of this guy's thinking; there's all kinds of stuff on him under "Hans Moravec" in your search engine:




Okay: a Moravec interview or video never fails to creep me out. I think he's clearly one of our best "mad scientists"; but remember: I said I like mad scientists. I want crazy geniuses to be weird, colorful characters with "impossible" ideas that they seem absolutely certain are inevitabilities. Maybe it's the little kid in me, the one who loved comic books; maybe it's just my love of an unstinting, grinning absurdity that threatens to be not-absurd. Clearly our take on these things is largely a matter of temperament. I like my Mad Scientists, even if they're actively malevolent. Moravec never fails to deliver. I'm pretty sure he's a "nice guy." That's what's so creepy about him. I would not go as far as Joseph Weizenbaum, who said Mind Children was like Mein Kampf. The brilliant mathematician-philosopher Roger Penrose thought many of Moravec's ideas were "horrific" when he read one of Moravec's books. "Horrific"? Yes...maybe. Moravec thinks that by 2040, based on Moore's Law, there will have been enough generations of ever-smarter robots to set off a "mind fire," and the robots will very quickly become smarter than us. Lately he has been diligently working on getting robots to "see" like we do (sorta), and it turns out that getting them to navigate around a clutter-filled house is a tough problem. I will get to why these guys also make me laugh at the end of this post.
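For the curious, here's the back-of-the-envelope arithmetic that powers claims like Moravec's, as a Python sketch. The 18-month doubling period is the figure usually tossed around, not Moravec's published number, and nothing guarantees it holds out to 2040:

```python
# Napkin Moore's Law: NOT Moravec's actual model. Assumes compute
# doubles every 18 months, which may or may not hold to 2040.

def compute_multiplier(start_year, end_year, doubling_years=1.5):
    """How many times over does compute grow between two years?"""
    doublings = (end_year - start_year) / doubling_years
    return 2 ** doublings

factor = compute_multiplier(2011, 2040)
print(f"~{factor:,.0f}x today's compute by 2040")  # roughly 660,000x
```

Whether 660,000 times the compute adds up to a "mind fire" is, of course, the whole argument.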
---------------------------------
Mad Scientist Number Two: Ray Kurzweil. Most of you probably know this guy. When I read him I find that I'm forced to take him seriously, if only because of his track record. He thinks that, by 2045, our technology will have merged with us - we will be cyborgs with the capacity to be a "billion times smarter" than we are now. And we could live forever. He thinks, like Moravec, that we will be able to back up the information in our nervous systems, download it into a silicon-based robot, and be immortal. If this stuff doesn't boggle your mind, nothing will. That is: you must treat his ideas as you do a movie - willfully suspend disbelief, enter into Kurzweil's world, grapple with his reasons for believing this is possible. Otherwise, you're just gonna get creeped out (and I do, no matter how much I enter into the Kurzweil reality tunnel), get very excited over the Technological Singularity, get paranoid, get outraged, become terrified, or just laugh. Become pixillated. That's what happens to me: I am so aghast at this stuff, I'm all those things: excited, terrified, outraged, and laughing. I think it's a reasonable response. I currently see the visions of the Singularitarians, with their seemingly precious seriousness, as a form of surrealism. And it makes me laugh. (But I admit this is only my reading, and they may have the last laugh...it's odd: despite the various scenarios for 2050, they still demand a non-surrealist reading. Let us read them in at least two ways.)
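Before leaving Kurzweil: the "billion times smarter" figure is napkin-able too. A billion is about thirty doublings, and the doubling period you assume does all the work (both periods below are my assumptions, not Kurzweil's published curves):

```python
# How long does a billion-fold increase take? About 30 doublings,
# since 2**30 is roughly 1.07 billion. Doubling periods are assumed.
import math

doublings = math.log2(1_000_000_000)    # ~29.9 doublings
for period_years in (1.0, 1.5):
    print(f"{period_years} yr/doubling: ~{doublings * period_years:.0f} years")
# ~30 years on a 1-year doubling, ~45 years on 18 months: starting
# from 2011, that straddles the 2045 date rather neatly.
```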

I'll speculate on why I don't think the Singularity will happen by 2045 in my next post.

Until then, I would like to leave you with an apt quote from the late madcap philosopher George Carlin, who asserted that "the future will soon be a thing of the past." Meditate on that! (?)


Monday, May 30, 2011

Forecasting and Futurology on Memorial Day in the U.S.

Today Unistatians (i.e., the People of Unistat, more commonly known as "the United States," where I happen to have been born and "raised," and within whose borders I currently reside) have a national holiday to drink, fight, watch sports, and air grievances with family and friends - among other things - in memory of people who died while fighting in one of Unistat's military services. For most Unistatians, the "reasons" why those people went to war and died in the first place are naive, embarrassingly wrong, or opaque. But fuck it! Let's parTAY!

Certain thinkers since, oh, let's say Diderot's Encyclopedia, have been wondering if we can somehow figure out a way to use rational-empirical physicalist methods to forecast the future. This kind of thing really got going with the advent of science fiction and brilliant Generalists such as H.G. Wells and Jules Verne. And it became a sort of cottage industry circa 1950. Futurology!

There seem to be two broad approaches linking physical processes to the acceleration of history and technology: 1) thermodynamic measurements, and 2) information theory and chaos mathematics.

The second one interests me more, mostly because, as a non-specialist, I find the Information Theory/Chaos Math folks have more engaging (and therefore plausible) narratives. (In my current state of ignorance it seems that thermodynamics and information intersected around 1944-48, with the work of scientific Illuminati like John von Neumann, Norbert Wiener, Erwin Schrödinger, and Claude Shannon. More some other day. When I've read up more on it.)

For the nigh-curious, here is a list of some Futurologists.

Info-flow and acceleration of technology (and madness, anxiety and reactionary politics?):
Around 1990, for example, the mathematician Theodore Gordon (for some reason left off the Wiki list) published a paper arguing that chaos increases as information flow throughout society increases, and that the two are probably intertwined. As I read him now, Gordon was saying that more and more Black Swan events would happen, events almost impossible to predict - or rather, events that ought to be easier to anticipate than they are, were it not for the cognitive biases of "experts" and "specialists." The global financial meltdown of 2008 should have been the Final Bell to start thinking about World Systems differently, building in more robustness and minimizing fragility, but there is no reason to believe this is happening or will happen, given events since August, 2008. (These "locked-in" cognitive biases of "experts" are a huge hurdle, seems to me.)
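I don't have Gordon's paper in front of me, so here's the classic toy example - the logistic map, not Gordon's model - of how turning up a single driving parameter tips a simple system from order into chaos. It gives at least the flavor of the information-flow/chaos claim:

```python
# The logistic map x -> r*x*(1-x): the textbook illustration of order
# tipping into chaos as a driving parameter r rises. An illustration
# of the general idea only, NOT Theodore Gordon's actual model.

def long_run(r, x0=0.4, warmup=500, keep=8):
    """Iterate the map, discard transients, return long-run behavior."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1 - x)
        out.append(round(x, 4))
    return out

for r in (2.8, 3.2, 3.5, 3.9):   # crank the "flow" knob up
    print(r, long_run(r))
# r=2.8 settles to a single value; 3.2 flips between two; 3.5 cycles
# through four; by 3.9 the orbit is chaotic and sensitive to x0.
```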

Moving along, ever-accelerating...

Sir Martin Rees has a sobering quasi-prediction for humanity's future, and if ya wanna give yourself an intellectual thrill-chill, watch his 18-minute talk from July 2005 right here:


(Much) more sanguine futurologists such as Ray Kurzweil think we will reach a point of "singularity," and rather soon, it seems. Different singularitarians forecast different dates for a whole new ballgame: our genomics, robotics, computer science, nanotechnology, etc., will keep accelerating until everything changes so radically that people will look back on us at this moment and laugh at how "primitive" we were. This is too big to go into here, but I'll address my personal pixillation over their whole compelling narrative in a future blog-spew. Suffice it to say: when I read the Extropians and Singularitarians, part of me is very excited and enthusiastic if they're right, and another part of me (intuition?) is horrified and tends to encourage a spiral toward the anxiety pole, damn my overweening imagination!

Anyway, something's coming, and pronto! Heads up!

Finally, I leave us with a quote/forecast from Nassim Nicholas Taleb, author of The Black Swan and Fooled By Randomness:

"The twentieth century was the bankruptcy of the social utopia; the twenty-first will be that of the technological one." - found on p.31 of Taleb's The Bed of Procrustes.


[Irony? Taleb had a NY Times best seller with The Black Swan, but now when you type those words into your search engine, you get the first three pages covered with information about a film about...ballet? (Just kidding: I saw it and thought it intense and psychedelic-Jungian. But poor Nassim! Was it a black swan event for him?)]


P.S.: Have a great Memorial Day! And in memoriam, think about reading General Smedley Butler's short book War Is A Racket; it just may blow your mind while leaving everything else intact.