"It's an instrument," Machine Gun Kelly said. "Play it." [1]
--------------------------------------------------
Lately I've been studying ideas about influence, coercion, advertising, hypnosis, and "mind control," particularly what is usually called "conspiracy theory" ideation. I'll just leave it at that.
Well...no. Let me add one thing: I have come to a tentative conclusion about that last item: Yes, some conspiracy theories about "mind control" seem to have varying degrees of validity, if not soundness. Others seem batshit crazy to me. But for those C-theorists with more scholarly minds - or even those who have attained the reading level of a bright 15-year-old - I think the richest depths to plumb are in the study of 1) Rhetoric and 2) Metaphor. You wanna learn how to control minds? Find out everything you can about both of those areas. You won't be drilling in a dry hole.
Can Chinatown be a metaphor? Who for? Why?
-----------------------------------------------------
In a prescient essay from 1996, "Farewell To The Information Age," UC Berkeley linguist Geoffrey Nunberg quotes John Perry Barlow, Ted Nelson and Michael Benedikt on how digitization wipes everything clean and is totally revolutionary. Barlow said something to the effect of, "We thought we were in the wine business but it turns out we're in the bottling business." Nunberg riffs off this - in 1996! - by writing, "We are breaking the banks and hoping still to have the river." (If I recall correctly, Nunberg is quoting Paul Duguid.)
No divagation here. Make up your own!
------------------------------------------------------
"You can only cruise the boulevards of regret so far, and then you've got to get back on the freeway again." [2]
------------------------------------------------------
"I am completely convinced that there is a wealth of information built into us, with miles of intuitive knowledge tucked away in the genetic material of every one of our cells. Something akin to a library containing uncountable reference volumes, but without any obvious route of entry. And, without some means of access, there is no way to even begin to guess at the extent and quality of what is there. The psychedelic drugs allow exploration of this interior world, and insights into its nature."
-Alexander Shulgin, PIHKAL p.xvi
Do you like to find out new things every day? The pleasure of learning a new thing gives you a bit of a dopamine buzz. Because you're learning. And possibly from books. Now: what if you already have the most marvelous stash of novelty-in-form-ation ensconced in your genes? Too bad you don't have a key to that library. Well, who is this Shulgin guy? Does he know of which he speaks? If he's right, what are some of the barriers that keep you/me from accessing the stupendously wondrous texts held within?
A friend of Ted Nelson's - Jaron Lanier - thinks the idea that all it will take is another thirty or fifty years of Moore's Law for our computers/AI to outrun Nature is probably wrong, even though it's widely accepted among his fellow Internet-inventors. And, because I love metaphors around books, Jaron says this:
"Wire and protocol-limited mid-twentieth-century computer science has dominated the cultural metaphors of both computation and living systems. For instance, Jorge Luis Borges described an imaginary library that would include all the books that ever were or might possibly be written. If you were lucky enough to live in a universe big enough to contain it (and we aren't), you'd need to invest the lives of endless generations of people, who would always wither away on starships trying to get to the right shelf. It would be far less work learning to write good books in the traditional way. Similarly, Richard Dawkins has proposed an infinite library of possible animals. He imagines the invisible and blind hand of evolution gradually browsing through this library, finding the optimal creature for each ecological niche. In both cases, the authors have been infected by the inadequate computer science metaphors of the twentieth century. While an alternative computer science is not yet formulated, it is at least possible to speculate about its likely qualities." - The Next Fifty Years (2002)
First off: are there any Borges experts out there? I wonder how much Borges was influenced by computer science in his marvelous "Library of Babel" versus notions of infinity he'd read about in kabbalah, Renaissance magicians, and sufism. Still, I guess Jaron's point holds regardless. And he's been trying to re-imagine a computer science for quite a while now, given the Net's quick advent and its obvious problems of inequality and surveillance.
The codex-book as metaphor seems so potent to literate minds. When I read Borges's famous short story, then read Lanier's literal interpretation, I realize I visualize the Library of Babel as something along the lines of Chomsky's "discrete infinity." I mean, I don't want to board a starship, but I do hazily recall many days of spending timeless hours in the stacks of very large libraries or used book stores, finding endlessly marvelous things, actually looking at books written in Chinese - completely mysterious and yet wondrous - and the Babel branch is like that, only it goes on forever. The place closes at 10 PM, and I realize I never ate dinner. And now that you mention it, I don't see any EXIT signs anywhere. How long have I been in here? How do I get back to the register?
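Just to feel the scale Lanier is gesturing at, here's a back-of-the-envelope calculation - my arithmetic, using the book dimensions I recall from Borges's story (410 pages, 40 lines per page, about 80 characters per line, a 25-symbol alphabet):

import math

# Borges's specs for one book, as I recall them (from memory, not the text itself):
PAGES, LINES, CHARS, ALPHABET = 410, 40, 80, 25
positions = PAGES * LINES * CHARS                 # 1,312,000 character slots per book
digits = int(positions * math.log10(ALPHABET))    # decimal digits in 25^positions
print(f"{positions:,} character positions per book")
print(f"about 10^{digits:,} distinct books in the full library")
# ~10^1,834,097 books, versus ~10^80 atoms in the observable universe:
# no universe big enough, no starship fast enough - Lanier's point in a nutshell.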
However, psychedelic drugs as accessing experiential book-like knowledge? I don't know. One often reads in visionary works of the problem of our "clouded lenses" - flawed vision as metaphor. In Erik Davis's Nomad Codes there's a metaphor around psychedelic drugs as keys that can open doors previously kept locked. Earlier (c.1976), Dr. Leary gave us the metaphor of DNA as text: "The DNA code contains the entire life blueprint - the history of the past and the forecast of the future. The intelligent use of the brain is to imprint the DNA code." - Info-Psychology, p.59. As an exercise, unpack all the metaphors there!
-----------------------------------------------------
Speaking of kabbalah: Joseph Dan discusses the structural argument of the Zohar: "Historical events, the phases of human life, the rituals of the Jewish sabbath, and the festivals are all integrated into this vast picture. Everything is a metaphor for everything else." - Kabbalah: A Very Short Introduction, p.33
-----------------------------------------------------
"The history of consciousness is the history of words, " Joyce said immediately. "Shelley was justified in his bloody unbearable arrogance, when he wrote that poets were the unacknowledged legislators of the world. Those whose words make new metaphors that sink into the public consciousness, create new ways of knowing ourselves and others." [3]
-----------------------------------------------------
Along the above lines, one of my favorite passages in Lit about the poet's magickal imaginative powers to alter reality comes from A Midsummer Night's Dream, where Theseus says imagination "bodies forth / The forms of things unknown," and the poet's pen:
Turns them to shapes and gives to airy nothing
A local habitation and a name.
You've probably seen this quote used to bolster all sorts of arguments in contemporary thought. There seem to "be" things "out there" as yet undiscovered OR: people experience something but have no words to label these "things" in experience. The neologist, the meme-propagator, the master rhetorician, the re-framing metaphor user who alters minds: these all seem to fit Theseus's poet's magickal workings.
In a delightful book on the neuroscience of music, Daniel J. Levitin discusses our need to categorize from an evolutionary standpoint. "Categorization entails treating objects that are different as of the same kind. A red apple may look different from a green apple, but they are both still apples. My mother and father may look very different, but they are both caregivers, to be trusted in an emergency [...] Leonard Meyer notes that classification is essential to enable composers, performers, and listeners to internalize the norms governing musical relationships, and consequently, to comprehend the implications of patterns, and experience deviations from stylistic norms." Then Levitin quotes The Bard's lines from above. - This Is Your Brain On Music, p.147
------------------------------------------------------
There may be one reader (I'm looking at YOU!) who has wondered, "Is this dude gonna address all the 'the brain is a computer' metaphors?" No. Because there's too much written about it. I swim in those waters. (Are you, by chance, feeling hyper-aware of metaphors right now? Hyper-aware of the so-called "tacit dimension"?) One of my favorite lines about "the brain is a computer" comes from some book I don't even remember reading, but it's in my notes. The brain is NOT a computer, but it is a Chinese restaurant: crowded, chaotic, lots of people running around, and yet stuff gets done. I apparently got this metaphor from Welcome To Your Brain, by Aamodt and Wang.
------------------------------------------------------
George Lakoff admits his empirical research on metaphor (of which I am a major amateur reader) had been preceded by Ernst Cassirer, I.A. Richards, Kenneth Burke, Benjamin Lee Whorf and a few others. The oldest thinker he names is Vico, who died in 1744. Lakoff argues strongly and convincingly that metaphor is not some fancy part of speech, as most of us were taught. It's deeply embedded in everything we say and do. I once wrote him that he never mentions Norman O. Brown, who said, "All that is, is metaphor." Lakoff wrote back and said NOB wasn't "empirical." Anyway, check out these lines from a guy who died in 1592 (if Vico was allowed, why not this guy?):
"To hear men talk of metonomies, metaphors and allegories, and other grammar words, would not one think that they signified some rare and exotic form of speaking? And yet they are phrases that are no better than the chatter of my chambermaid." - Montaigne "On the Vanity of Words"
Okay, maybe it's a stretch. Montaigne doesn't seem to be arguing that metaphor is basic to our speech - as Vico did - he just seems rather unimpressed by all the talk of metaphors. And yet he's using metaphors in every sentence. If Montaigne were here to have that pointed out to him, I suspect he'd find it all quite marvelous.
------------------------------------------------
A.) I recall Joseph Campbell talking about a lecture he gave on gods, goddesses, heroes, etc. A young man rose up and said these things didn't exist; they were lies. Campbell replied that they were metaphors. After a slightly rancorous exchange, Campbell suddenly realized the young man didn't know what a metaphor was. Campbell told him it's when you say something IS something else.
B.) Alfred Korzybski argued that humans suffer for taking literally what he called "The Is of Identity" and "The Is of Predication." If I say, "Cate Blanchett is the greatest actress alive now," (and I might if you were here, just for fun, but for now that would be missing the point entirely) I'm predicating/identifying - making "Best Actress In The World" and "Cate Blanchett" the same. But who knows how to logically prove my assessment? And even if I could prove - an impossibility, in my metaphysics - that Cate "really is" equal to the term "best actress in the world," Cate's so much more than that. I'm hypnotizing myself or you or both of us by leaving out Cate as a mother, Aussie, masturbator, gardener, philanthropist, a person with a rich private memory, as prankster, etc, etc, etc, etc.
How do we square A with B? And what about font size?
1. From the Hemingway-inspired short story by William S. Burroughs, "Where He Was Going," from Tornado Alley
2. Inherent Vice, Thomas Pynchon
3. Masks of the Illuminati, Robert Anton Wilson
Sunday, December 1, 2013
Big Data and Two Proposals For How We Should Be Compensated For It
Both Jaron Lanier and Evgeny Morozov have looked at the asymmetries in Big Data, seen how We have given our data away for free to gazillionaires, and done gedankenexperiments to see how the playing field might become slightly more level in the case of The People v. Google, WalMart, Goldman Sachs, Facebook, Twitter, Amazon, health insurance companies...the NSA.
I'll try to give a thumbnail, but some of you are ahead of me on this stuff, so feel free to chime in and correct my various egregious erroneous apprehensions. Even though you may be ahead of us here, can we agree this Big Data asymmetry qualifies as a Missing Public Discussion? On with it...
Morozov notes the winner-take-all aspects of Facebook and Google, et al., having the biggest computers to harvest the most data about us. They've got petabytes of data stored on us. And they need more. We thought we were just having fun and playing and they were "giving" us search or social connectivity. We volunteered our user data, belatedly realized there were such things as "data trails" and gigantic computers somewhere with fancy algorithms attached to our name/number and what we do, what we like, who we know, where we live, how much money we have...and of course the NSA has the goods on the sort of porn-loving perverts we are. We only devoutly wished they wouldn't "go there," assuming like three year olds that if we wished they'd be decent they would be. Back to Bad Boy Evgeny.
The problem of Big Data asymmetry is a democracy problem, and passing privacy laws would be like rearranging the deck chairs on the Lusitania. We need a civic solution.
He says the metadata should be thought of as the "social graph" and it's ours. Mostly. Anyway: it's time someone "paid" for it; we should be getting something back for...ourselves. For Morozov, this isn't money for us. The social graph should be given free to any startup. Google and Facebook: how are you going to compete with them? Level the data playing field! As it is, the situation is not good for "free market" competition. To say the least. If there were more competition for Google and Facebook (et al.), it could possibly lead to a reacquisition of some privacy...and innovation among the data-gatherers.
How about government getting in on it? Nope: public money can't compete here; the behemoths are way ahead of all that. They're that Big. However, our personal information and our social connections (which we gave them, remember) are not only part of the public...mind? and of the personal aspects of our selves; we and our connections may also outlast Facebook and Google and the other tentacles of the Behemoth. Historically, very, very few corporations have lasted 100 years. (I smell that last part as a component of a Bad Argument, but let me sally forth anyway...)
Morozov proposes the social graph as a public institution, to be regulated, maybe by a civil agency or even the UN. This would open up competition: say you wanted to start something to compete with a Behemoth: the social graph is there, and you access it. Morozov seems correct: if we went back in time before these Behemoths got started, we'd look at the hardware and algorithms and not be all that impressed. They only got there earlier and nabbed our data quicker than everyone else, and "won."
As a new competitor, maybe you'd guarantee anonymity, so users would opt in. Or they could opt out. The regulatory body would control how social graph data was collected and accessed.
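To make Morozov's sketch a little more concrete, here's a toy model - entirely my own invention, with made-up names, not anything Morozov specifies - of a shared social graph where each person sets a consent level and a would-be competitor can only read what that consent allows:

from collections import defaultdict

class PublicSocialGraph:
    """Toy sketch of a 'social graph as public institution' - my invention, not Morozov's spec."""
    def __init__(self):
        self.edges = defaultdict(set)   # person -> set of connections
        self.consent = {}               # person -> "opted_in", "anonymous", or "opted_out"

    def connect(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def set_consent(self, person, level):
        self.consent[person] = level    # the civil agency / UN body would audit how this is honored

    def read(self, person, requester):
        # a real version would also check that `requester` is registered with the regulator
        level = self.consent.get(person, "opted_out")
        if level == "opted_out":
            return None                                    # nothing for the startup
        if level == "anonymous":
            return {"degree": len(self.edges[person])}     # aggregates only, no names
        return {"connections": sorted(self.edges[person])}

graph = PublicSocialGraph()
graph.connect("alice", "bob")
graph.set_consent("alice", "anonymous")
print(graph.read("alice", requester="some_startup"))       # {'degree': 1}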
The NSA is mired in secrecy, with little meaningful congressional oversight, which seems like a clear violation of the 4th Amendment to many of us. And that's not to mention what they've been doing. Of course, NSA used Google's and Facebook's data on us. And Verizon's and AT&T's and holy muthafreakinshit what a mess this is for any semblance of privacy, Constitutional rights and protections, decency, democracy. You know: The little things.
NSA ain't goin' away, so let's take the NSA's data (they're being paid by us, our taxes!) and make some or a lot of it available, to enable more robust competition in social networking and search engines.
This is a basic sketch of Morozov's way of dealing with our current Worldwide Theatre of the Absurd and Big Data collection asymmetry. Personally, I think it's nuts. (He does call it a "modest proposal.") But, if implemented, can it make things worse? Or would it be more likely to make things better in some way? What am I missing here? One thing I like about his ideas here: he wants to fiercely politicize the public dialogue about privacy and data and democracy.
Lanier basically sees the mess we're in as a "collective action" problem: it can't be solved by individuals in a free market but only by a sort of paradigm-shift in the way We perceive the problem, and by an adjusted normative response.
Facebook employs fewer than 5,000 people, but it's worth over $65 billion. The heirs of the WalMart fortune are worth, according to one data point I saw recently, $147 billion...and they're just Sam Walton's offspring. How can a scenario like this be sustainable? It can't. (Lanier's book Who Owns The Future? fascinated me in many ways, but one of them was his explanation of how WalMart "won": they basically did what Facebook and Google did, but earlier: massive data banks [what Lanier calls "Siren Servers," the new "factories" for the Robber Barons of 2013] on consumers, buyers, distributors, every sort of technological minutiae imaginable, all to get a leg up on their competitors.)
The bigger the computer, the more likely you're gonna be the winner in a game that's basically winner-take-all. And what really makes you a winner? Data. Big Data. Gather the data, enter it. Pay hotshot computer people to write the algorithms. Pay others to keep the lights on and the data servers from overheating.
The Facebook game of "giving" consumers something they want then harvesting data about them? This will continue to spread throughout banking, health care, retailing: they'll give us good service at good prices...but soon most of us will be unemployed and at their mercy because The Behemoth is too good at doing what it does. Marc Andreessen wrote an essay for the Wall Street Journal in 2011: "Why Software Is Eating The World." With the acceleration of computing power, entire industries are replacing workers and distribution with a few dozen of the most talented programmers and a few dozen data servers.
Look at how Amazon ate up its competition. Look at how many people Kodak employed, with decent, middle-class-bolstering jobs. Then look at how many people worked at Instagram (hint: the number was 13) when Facebook bought it for about a billion dollars. How can anything like that be sustainable without some sort of "collective action" solution, as Jaron Lanier puts it?
Note: there are scads of smart free-market thinkers who think all of this is good. No collective action required. Andreessen is one of those guys. You probably know one yourself. Jaron Lanier is not one of them. He sees this as a disaster: you think the inequality between the 1% and the rest of us is bad now? He sees all this as making it much worse, and it's happening so fast we're stunned. I agree.
So what does Lanier propose? He's somewhat similar to Morozov in that he agrees the Behemoths have mostly gotten that way by collecting data about us. But his solution - and he's proposed variations on this scenario - is that we should be paid for our data, via micropayments. NSA and other governmental surveillance is out of control because there's no limit on the cost to them. If they took a picture of you with some camera on some street corner, ran facial recognition and stored that data...you should get some little bit back for that. It's your data. Who ever agreed to allow the government to be so intrusive in our lives? If they're going to do this sort of shit, they're going to have to pay. After all, We are the government, in a democracy....errr...right? In increasingly starry-eyed theory we're the government. We pay them out of our taxes to work for us. Imagine that.
And not only that: all the data about us that's being shuffled around and sold to other Behemoths and vendors: that represents us. If they're going to do business with our data, they're just gonna hafta pay. Literally. With "micropayments." Every bit of data about us can be tagged when it's used and we get a little bit back. If you write some article and all kinds of people link to it, tweet it, use it in some way (still not sure about the limits of this), you get something back. One of the godfathers of the Net, a fascinating genius named Ted Nelson, wanted hypertext links to be two-way, always pointing back to the origins of an idea. It didn't go Ted's way, but Lanier - who knows and loves Nelson - says there might be a way to tag our data to ourselves so that if our face ends up in an ad on Facebook, we get paid. This would seem to entail a reworking of the architecture of the Net, so I don't know how workable the idea is. In theory I like it more than Morozov's idea...which is, I know, anathema to the Everything FREE! vision we all love(d) so much.
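If you want to picture the kind of two-way attribution Lanier gestures at, here's a toy sketch - my own made-up classes and numbers, not Lanier's or Nelson's actual design - in which every datum keeps a link back to its originator and every use credits that person a tiny amount:

from collections import defaultdict

class ProvenanceLedger:
    """Toy micropayment/attribution ledger - an illustrative sketch, not a real protocol."""
    def __init__(self, rate=0.0001):
        self.rate = rate                    # dollars credited per use (made-up figure)
        self.origin = {}                    # datum_id -> the person it came from
        self.balances = defaultdict(float)  # person -> accrued micropayments

    def register(self, datum_id, person):
        self.origin[datum_id] = person      # the "two-way link" back to the source

    def use(self, datum_id):
        person = self.origin.get(datum_id)
        if person is not None:
            self.balances[person] += self.rate
        return person

ledger = ProvenanceLedger()
ledger.register("face_photo_042", "alice")
for _ in range(3):                          # an ad server uses Alice's face three times
    ledger.use("face_photo_042")
print(ledger.balances["alice"])             # about 0.0003 - pennies, but they add up at scale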
Some Sources Used
"Let's Make the NSA's Data Available For Public Use" by Evgeny Morozov
"The Real Privacy Problem" by Morozov
"Who Owns the Future?": Morozov reviews Lanier and thinks Lanier's ideas are lame. (Of course!)
"U MAD???: Evgeny Morozov, the Internet, and the Failure of Invective" by Maria Bustillos: a sort of smack upside the head for Evgeny; Bustillos rather likes Lanier. And Bustillos is one of our best interpreters of this whole scene, in my view; I love her.
video: "Jaron Lanier On Connected Media Universal Micropayments and Attribution": 2 minutes. I think Lanier had a dental problem here, which accounts for the lisp?
"In Venting, A Computer Visionary Educates," an article by John Markoff about Ted Nelson
- A bunch of other sources; presumably I'd have had to pay a little bit under the micropayment scheme, but then presumably I'd get something back from people reading this? However, when we look at it from Jaron Lanier's perspective, the Behemoths are gonna have to be paying us far more than we're paying them?
Wednesday, November 6, 2013
Rise of the Robots and Technological Unemployment
When I was in grammar school and high school I'd often ditch class and go to the library. One of the things I'd learned was good for laffs and the imagination: looking at microfilm of old Life magazines, or, if the library had bound volumes of an entire year of some old magazine, reading those. The ads in magazines like Collier's showed a doctor saying he preferred these cigarettes over all others because of their fine, smooth taste. His stethoscope around his neck, smiling. Wow! How things had changed since...1952!
Always wondrous were ads for gadgets that would eliminate drudgery and free up the woman of the house (it was always a woman) to live a life of leisure. The rhetoric of machines that would eliminate soul-numbing work captured my attention at a very early age because all you had to do was extrapolate...wouldn't it be cool if dad didn't have to go to work and he and mom would be there when I got home from school...doing...whatever it was they wanted to do? What would my world be like when I was an old man of 30?
As I began to study the history of the Industrial Revolution up to the present day, I found this rhetoric of labor and machines a constant: at some point in the future - possibly my own future - we would enter another Epoch: robots and computers (same thing) would do all the horrible work, leaving humans to create, socialize, dream. How would the bills get paid? I didn't know, never having paid bills. I figured the money went to others...who worked. But: their work would have gone away too, right?
Everyone would be playing games, painting, writing poetry or learned papers and books, learning new languages or music, or joyously goofing off.
"Because everything in her house in waterproof, the housewife of 2000..." Wow!
It doesn't look like it's going to happen like They Promised, does it? Why?
Well, the simple answer: instead of the populace understanding that any machine that puts people out of work was invented not only by a genius and his team, but that the genius and his team built upon millions of hours of previous work by earlier toilers and tinkerers and basic scientific research funded by everyone - all of whom were supported by farmers and mothers - we instead allowed the idea that whoever could buy the biggest and fastest machines owned All Of That.
There seem to be a few hundred choice entry points to tell this story to myself and y'all, but for now I'll cut to December of 2012.
Paul Krugman
In one of his shorter posts for the NYT, Krugman published "Rise of the Robots" on December 8, 2012. He notes that the "college premium" had been stagnant for a few years. In other words, the payoff for getting a degree was not showing its previous earning power in the marketplace. When he first started writing about income inequality twenty years earlier, it was about the gap between laborers and CEOs and other assholes, like hedge fund managers. Now it seems to be between workers and capital...and OMG Marxism! The dreaded Karl Marx, hibernating for a hundred years, suddenly stirs. Production rises; labor's income stays the same and then begins to fall. Why? Automation. Read the article. "If this is the wave of the future, it makes nonsense of just about all the conventional wisdom on reducing inequality." Education won't help when what we really have are a few people who own machines. The biggest and fastest machines. Those with the biggest and fastest machines are reaping all the rewards; everyone else gets the shaft. You buy the biggest machines, you pay 100-1000 of the brightest PhDs to collect data, write algorithms, maintain the data servers...you win! Everyone else is fucked.
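To see why "production rises, labor's income stalls" translates into a falling labor share, here's a quick illustrative calculation - round numbers of my own, not Krugman's:

# Made-up round numbers, just to show the mechanism: output grows, wages don't,
# so labor's slice of the pie shrinks even though nobody's paycheck was cut.
output = 100.0     # total output in year 0
wages = 62.0       # labor's income in year 0 (~62% share, a common ballpark figure)
for _ in range(20):
    output *= 1.02 # 2% annual growth, captured by whoever owns the machines
print(f"labor's share after 20 years: {wages / output:.1%}")   # about 41.7%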
Jaron Lanier
Jaron Lanier, computer whiz/prodigy/generalist/genius, says he was there (and he was, as numerous books on the history of Silicon Valley attest) when this really got going, and he and his famous friends thought it was going to be this incredible "information is free" thing that would make everyone's lives better. Now he says they were horribly wrong. Because it turns out that the NSA, Wal-Mart, Facebook, Goldman Sachs...all bought the biggest, fastest computers and hired an army of gifted geeks. He has ideas about how to save us, and I think they're a good place to begin our thinking.
I've followed Lanier's career for a long time. I think he's one of the best and most interesting thinkers in the world, but rather than talk about his ideas, I'd rather you took the time to watch what he's saying about the existential situation we're in now:
Here's 4 minutes on "Why Facebook isn't free."
Here he is interviewed by Andrew Keen, about Lanier's book Who Owns The Future? It's about 10 minutes and 40 seconds:
Finally, for 27 minutes or so - I think you'll find it well worthwhile - he's interviewed about his books and his changed thinking and what we might do to remedy this "jobless recovery" situation. NB around 5:20 to 6:00, in talking about the structural changes from Kodak to Instagram: "We pretend that the people who do the work don't exist." Another notable moment: from around 8:00 on: "honesty in accounting" could solve the mess the middle class is in. Also a fascinating point: around 11:30: "levees" and their history:
I have a bee in my bonnet and I'm afraid you're going to be hearing more from the OG on income inequality, American fascism, mob mentality, robots/automation/computers, Real Wealth vs. Money, the college loan bubble, Missing Public Discussions, the social fallout of Winner-Take-All Hypercapitalism and Privateering, and ideas about how we might extricate ourselves from rising misery.
Thursday, July 12, 2012
John B. Calhoun, Digital Media and Relative Sanity: Media Hygiene
Ironist-Ethologist John B. Calhoun: Some Background
Have you ever read science fiction writer John Brunner's Stand On Zanzibar? What about Tom Wolfe's non-fiction "New Journalism" book, The Pump House Gang? Ever seen a film called Soylent Green? (Of course you have! Enjoy your next meal...) Did you ever read (or read about) Dr. Paul Ehrlich's The Population Bomb? He appeared on The Tonight Show With Johnny Carson, starting around 1970, with his grim neo-Malthusian message about how overpopulation would cause famine, deplete our water resources, and we'd all die, doomed and sick, panicky and screamy and just tremendously bothered, sorta like a horror film. Sorta like Soylent Green. Ehrlich's appearances on the TV, with their dire fnord messages, made his book a best-seller.
(Hint to writers prospecting for writing gold: write a book on how we're all gonna die monstrously horrible deaths because we're not paying attention to something. Use a modicum of statistics, but use imagery like a novelist. You may be livin' on Easy Street before you know it! Then you and I and everyone we love will die in some catastrophe that had nothing to do with the one you warned about in your book. Your Thing was Bird Flu; what really wiped the humans out was an errant asteroid. It's a Win-Win for you [selling a lot of books] and the asteroid [gets to annihilate an advanced civilization, or pull off a "Milky Way Hit Job," as they say at Galactic Central]. You heard it here first!)
So yea: there was this ethologist named John B. Calhoun. Some of you Unistatian history buffs will know about John C. Calhoun. This middle initial B dude was quite different. He built what he called "utopias" for rodent populations, then sat back and watched them reproduce, interact, raise their young, etc, took feverish notes, tabulated his data, did it again and again. His work influenced all the books and the film (and many others, no doubt) I mentioned above. Some sources say Calhoun's term "behavioral sink" - which I think qualifies for what literary critic Harold Bloom called "strong poetry" - has become widely known, although in my experience with highly educated people, 2011-2012, few have heard the term. So I'll put some flesh on it for you, if you don't mind.
habit-trail for rodents: sorta between our City and a rabbit's warren?
The people most likely to know "behavioral sink" would be urban sociologists. What Calhoun found was that, when a utopian spatial set-up for rodents - everything they need to be "happy" and which he actually, with a wry smile, labeled as "heaven" - reaches a population density of X, things start to go rapidly downhill toward dystopia. Social norms break down, rodents get aggressive, narcissistic, they die earlier, they fail to pass their genes on, and when they do pass their genes on, they're bad parents. They also have weird sex and abuse drugs and eat too much. Some theorists have said that it's not so much the geometrical space - now cramped to the Xth point - which is bringing on this behavior, but it's the overabundance of social interactions that drives them nuts.
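One way to see why the "overabundance of social interactions" theory has teeth, as opposed to mere square footage: the number of possible pairwise encounters in a fixed enclosure grows roughly with the square of the population, so modest crowding multiplies forced contact. A back-of-the-envelope sketch - my arithmetic, not Calhoun's model or data:

# Possible pairwise encounters among n animals in one enclosure: n choose 2 = n*(n-1)/2.
# Toy numbers of my own, just to show the scaling.
for n in (20, 40, 80, 160):
    encounters = n * (n - 1) // 2
    print(f"{n:4d} rodents -> {encounters:6d} possible pairwise encounters")
# Doubling the population roughly quadruples the possible encounters:
# 20 -> 190, 40 -> 780, 80 -> 3160, 160 -> 12720.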
(Remember: we ARE talking about rodents, as studied in a non-Skinnerian way: a set-up that, initially, rodents would prefer - an environment as close to what they'd make themselves, in as natural a setting as possible. And FULL DISCLOSURE: I happen to enjoy some drugs and alcohol and weird sex, and again, I'm not even on Facebook or Twitter and I'm not a rodent to boot. I DO see short attention spans and very deep shallows in more places now, tons of stupidity among people with historically unparalleled access to Information, and lots of indifference to suffering, but am still not totally sold on Calhoun's rodents-to-humans idea. However, he did think that too many bad social interactions, coming too frequently, drive things down the Sink, whereas his more famous followers - like Ehrlich - thought population density would deplete water and food, and that's what will do us in. Calhoun thought the social stuff was enough to send us over. I worry that he's more accurate than I'd thought. Moreover, subsequent urban theorists have had some problems with Calhoun's findings, while still thinking he was on to something. Let this all-too-typical parenthetical by the OG constitute an "interlude"? Very well then. Let's move on.)
Try and tell me you weren't thinking of humans there. No big deal: that's what we do. We try to relate almost everything to how it might bear on our lives. Any questions so far?
Yea, the list of things Calhoun saw among rodents when "utopia" devolved into a "behavioral sink" seems a mite close to home, eh? I forgot to mention that the rodents, when socially crowded beyond equilibrium, developed stress-related physical illnesses, psychosomatic symptoms, and mental illness.
John B. Calhoun was once a big deal, but the vagaries of time and the hyper-mega-turbulent acceleration of ideas, knowledge, and media noise seem to have crowded him out since his heyday, circa 1948-72. Those dystopian books and movies neglected to emphasize an aspect of Calhoun's thought: he was an Ironist among social scientists. He was dismayed that those who were influenced by him were so starkly pessimistic. Because Calhoun wasn't. He believed in the creative ability of humans (let's face it: you study rodents like crazy to learn about humans) to solve their Big Problems. And he thought we needed to seek out what he called "creative deviants" in order to, among other solution-oriented ideas, COLONIZE SPACE!
Gadget Addiction and Social Interaction Quality
We live in a world in which some people have the job description of "Information Management Expert." Increasingly, in meeting harried and frazzled clients, they caution that drinking digital information from a fire hydrant is bound to make even the sanest, best-grounded of us feel like our brains are expired tapioca, our bodies the losing fighters in a Bruce Lee flick, our affects and outlooks on the world those of someone just told their dog died. Must it be like this? Apparently, yes.
Now: I know YOU have kept things in balance. You are "on the ball," but you know others who fit the description. And most importantly: you care. That's why you're here, reading this. It's just the Big-Hearted Person you are. You can't help it. Born that way. It's simply the way you roll. Ya gotsa do what ya gotsa do. Hey, I hear ya. How does anyone ever get by without Us around? Am I right? <cough>
In earlier blogspewage I'd written about the obesity epidemic. Part of the deeply structured reason we have the Problem is that sugar and fat are now cheap and bountiful, whereas for 99% of our time as hominids that stuff - which we need to stay sharp - was a total SCORE, and everyone in the wandering extended-family band society rejoiced: a bit of honey! Some meat from a large mammal! Now you drive thru Burger King. Same with information. Access to social Others is a bit more complex.
In this article about whether Internet Compulsion Disorder should be included in the upcoming DSM-V, we see the classic instant gratification via dopamine-circuit-buzz dealio described.
Here's an article that compares the surfeit of food in the Roman writer Petronius (he of the incredible "Dinner With Trimalchio") to our surfeit of information. Note the grad student who talks about life online as feeling less like a thinking, feeling human and more like a rat who must press the button for another pellet, forgetting why...Gluttony Going Viral indeed.
Oh, my. It would be easy to link another 400 articles here related to digital media and addiction. Hey, I'm not immune. Maybe it's the deeper reason - other than the surface reasons I tell myself - that I'm not on Facebook or Twitter? A social connection, however mediated, and depending on the individual's eccentric nervous system and his/her - to borrow from William S. Burroughs - "algebra of need," gives a little dopamine reward nod. And for very many of us, this is enough. But then look at how Calhoun's rodents reacted when their social space was crowded. This brings me to a related idea: Neophilia and Neophobia.
Winifred Gallagher, behavioral science writer
Leary and Calhoun
Before personal computers and cell phones - much less Facebook, Twitter, iPads, Siri, X-Box, et al. - Timothy Leary was a technophilic neophile who saw the pending Internet as a boon (and...it has been, right?) that would be the new LSD. I think he turned out to be basically right. We would live in a "virtual reality" and it would be so interesting that we wouldn't need psychedelic drugs: the new media would be psychedelic itself. And Jaron Lanier - for my money, one of the most interesting geniuses on the planet - pioneered virtual reality. And a young person very much influenced by Leary, Douglas Rushkoff, declared that the counterculture had "won": the ideas ushered in by Baby Boomers had become mainstream.
And yet, in Winifred Gallagher's terrific 2011 book on neophilia and its discontents, New: Understanding Our Need For Novelty and Change, she cites the other side of Leary's neophilic enthusiasm by invoking...John B. Calhoun and his idea of the "behavioral sink": yes, we have increased creativity, communication, ideas and access to information, but what has also accelerated - and will keep accelerating - is the number of social roles we're forced to play, and things are moving too fast for us to get a handle on them. We'll have more competition, more negative encounters, and a general dissatisfaction with daily life. Gallagher quotes Calhoun: "Everything is coming at us fast and faster, yet we can't even learn from our experiences unless we have refractory periods to digest them in." (p.169)
This "refractory period" is part of the OG's raison d'etre. I baldly state: we need more quiet, non-electronically-mediated face-to-face communication/interaction with others and we need more time alone in our interiorities, reading stimulating and challenging books. So by my own logic, stop reading my blog! (Waitaminnit: that can't be right...) All the gadgets are fine, but many of us are getting carried away. (Hey you! Are you reading this on a tablet computer while texting someone on the subway? Pay attention! You don't want to miss your stop like you did that one time.) I also think Douglas Rushkoff is right: just as in the Axial Age the very few rabbis or monks read the books to the rest of us, and a very few wrote the books while the rest of us read them post-Gutenberg, we need to learn to program like the programmers and not just consume their programmed gadgets, which have inherent biases in them. I have not taken up programming. The weak version of Rushkoff's advocation of being a "programmer" is to actively seek out and expose the biases of our gadgets. This I enjoy doing.
I am NOT against all our wonderful gadgets. I am for much more thinking about what makes us happy, though. I despise the Bewildered Herd. Making the decision to question how we're being programmed may be a crucial one. Things seem to be accelerating exponentially, and no one is in control; it cannot be stopped. But we can modify our choices. Am I preaching to the choir here? I suspect so...
This "refractory period" is part of the OG's raison d'etre. I baldly state: we need more quiet, non-electronically-mediated face-to-face communication/interaction with others and we need more time alone in our interiorities, reading stimulating and challenging books. So by my own logic, stop reading my blog! (Waitaminnit: that can't be right...) All the gadgets are fine, but many of us are getting carried away. (Hey you! Are you reading this on a tablet computer while texting someone on the subway? Pay attention! You don't want to miss your stop like you did that one time.) I also think Douglas Rushkoff is right: just as in the Axial Age the very few rabbis or monks read the books to the rest of us, and a very few wrote the books while the rest of us read them post-Gutenberg, we need to learn to program like the programmers and not just consume their programmed gadgets, which have inherent biases in them. I have not taken up programming. The weak version of Rushkoff's advocation of being a "programmer" is to actively seek out and expose the biases of our gadgets. This I enjoy doing.
I am NOT against all our wonderful gadgets. I am for much more thinking about what makes us happy, though. I despise the Bewildered Herd. Making the decision to question how we're being programmed may be a crucial one. Things seem to be accelerating logarithmically, and no one is in control; it cannot be stopped. But we can modify our choices. Am I preaching to the choir here? I suspect so...
Sherry Turkle, the M.I.T. scholar of social media - and of technology in general, the internet in particular, and how it all affects us - wrote earlier books on the exciting possibilities for individuality and play with identity. That was ten years ago. Now her latest book is Alone Together: Why We Expect More From Technology and Less From Each Other. The title gives the gist.
Jaron Lanier's 2010 book You Are Not A Gadget: A Manifesto is cited as "essential reading" in Douglas Rushkoff's 2010 book Program Or Be Programmed: Ten Commands For A Digital Age. I consider Rushkoff's to be THE manifesto for those of you worried about our digital algorithms increasingly programming us, and about how to wrest something human back for ourselves, for our sanity. Then read Lanier. Then read their influences and the books in their bibliographies.
I wonder where Uncle Tim would be on this issue if he were here today?
Or: just say fuck it, and go write on your Facebook wall about LAME blogging jeremiahs and how BORING they are.
My adrenal glands are faxing to my liver and heart, my hypothalamus-pituitary-adrenal axis has gone into fight/flight/feed/fuck mode, so I will bid my Dear Readers adieu for today.
Here's Sherry Turkle for 6 minutes and 17 seconds (not that you "have" that much time!), on Facebook and privacy and cell phones at the dinner table and other things: