Late last night I had a serious lapse of faith in social media — as we all must from time to time.
We should have serious questions about this stuff… Which is why I chuckle whenever I read editorials merely pointing out that “there are hazards” and that digitization “isn’t all good” — as if any sane person could completely overlook the risks.
But this time was different.
I was writing about Lawrence Lessig’s essay “Against Transparency,” published in The New Republic in October. He argued that more data may be dangerous because it opens the door to a lot of dubious correlation-equals-causation claims. He focused on campaign contributions, but the same principle applies to money doctors receive from pharmaceutical companies, and so on:
The most we could say–though this is still a very significant thing to say–is that the contributions are corrupting the reputation of Congress, because they raise the question of whether the member acted to track good sense or campaign dollars. Where a member of Congress acts in a way inconsistent with his principles or his constituents, but consistent with a significant contribution, that act at least raises a question about the integrity of the decision. But beyond a question, the data says little else.
At this point in the article I thought Lessig was really on to something, but as I continued reading, a disagreement began to germinate:
This is the problem of attention-span. To understand something–an essay, an argument, a proof of innocence– requires a certain amount of attention. But on many issues, the average, or even rational, amount of attention given to understand many of these correlations, and their defamatory implications, is almost always less than the amount of time required. The result is a systemic misunderstanding–at least if the story is reported in a context, or in a manner, that does not neutralize such misunderstanding. The listing and correlating of data hardly qualifies as such a context. Understanding how and why some stories will be understood, or not understood, provides the key to grasping what is wrong with the tyranny of transparency.
Several of those points triggered associations with notions I’ve been finding more and more use for.
“Attention-span” triggered an association with Tyler Cowen’s points about time horizons in Create Your Own Economy: while the web certainly does break up our attention into smaller spans from moment to moment, it also enlarges our span of broken-up attention, allowing us to incrementally pull everything back together by investing all of those tiny moments into long-term projects (or simply areas we know a lot about).
For example, the web might make it harder to focus on a big book about climate change or macroeconomics or modern art, but it also enables us to keep working away at the topic, keeping our interest alive and our curiosity active as the information is updated.
I’ve lost track of more books than I care to count, but while they sit on my shelf, unread (which is not entirely useless), I’m gradually finding and piecing together articles, reviews, commentary, counter-arguments, etc. on the same subjects around the web. There are all kinds of options for organizing all of these diffuse particles and shaping them into a coherent project.
On this topic, for example, I aggregate all of my bookmarks, shared items, and tweets into FriendFeed here. A lot of those also get developed into blog posts (here), and from there I take it up another level of generalization, using Prezi as a mind map to try to compose the whole domain into a coherent, presentable narrative.
We’re trading one kind of competence for another: instead of grinding down through a stack of books, it’s about maintaining momentum and a sense of direction on longer voyages of discovery.
And then there’s the social aspect… [insert platitude about engagement, relationships & community here].
That’s how science is supposed to work: somewhat meandering & driven by curiosity over a lifetime, with a lot of wrong turns along the way.
But at some point (in the past century, maybe between the WWII war effort and the space race) science came to fetishize (because it required) the ability to sit all day crunching massive quantitative problems.
Charles Darwin (for example) wasn’t an outstanding genius — smart enough, but no prodigy. What made him a great scientist was that he really wanted to know the origin of species and didn’t give up — for decades — until all of the connections fell into place. His process of gathering specimens wasn’t so different (on some level) from the process of gathering information from around the web, incrementally, as it appears.
Well… that’s an exaggeration. The point is that many of science’s most influential discoveries were made by people who were notoriously unable to focus on one thing for very long — but who were able to keep coming back when new evidence and insights appeared.
[That point is really open for discussion (a soft way of saying it might be bullshit); a lot of the most important theories were formulated by brilliant minds at a very young age... I guess my point is that science has benefited from both lifelong resilience and white-hot brilliance, and so can the web.]
That older spirit of persistence, continuity, openness, corroboration, and correction deserves priority over the more recent spirit of accomplishment, exclusive focus, and industrialized discipline.
The web is bringing that long-term spirit back. It’s teaching us to act despite doubt, and to learn — not just despite mistakes but because of them. Uncertainty and error are part of the process; they always have been, they just go covert sometimes, and people start to think we can avoid them altogether.
Now uncertainty, error, ambiguity and change are back in a big way. As Jared Cohen from the US State Department told NPR, “The 21st century is a very bad time to be a control freak.”
So we need to cultivate values, mindsets, and conceptions that accommodate the uncertainty; we need institutions and conventions that can learn and adapt to emerging circumstances; we need to be able to change our minds and shift our focus as new information becomes available to us.
The ability to block out distractions and grind through that stack of books might even be a disadvantage. In a world where so much changes so fast, there’s a very fine line between “the ability to avoid distractions” and “the inability to see warning signs that the course you’re on is becoming irrelevant.”
So we need a balance.
Obviously.
But trying to “settle on” a balanced approach and then steadfastly sticking to it simply perpetuates the old mindset.
I don’t think we can sit down and figure out what the balance should be — but we shouldn’t let that discourage us either.
Finding the right balance is going to be an iterative & ongoing process — something that has to be worked out over time and will probably never be settled.
This is exactly what we need to learn to get comfortable with.
As I’ve found in my own projects, once we become more accustomed to incrementally and pragmatically working towards a balance, the need to articulate a “solution” becomes redundant.
The practice is the solution.
Pragmatism is a skill we learn; like any other skill, once we’re good at it, it becomes enjoyable in itself. We shouldn’t fuss around too much trying to plan ways to direct or incentivize it (or even rationalize it); we simply have to encourage and enable the love of learning wherever we find an opportunity.
The more we learn, the more we learn to love it; the more we love it, the more we want to learn and the more we care about quality — and the better prepared we’ll be for an uncertain future.
On one hand it’s wrong to assume we can trust ideological reasons to guide us; on the other hand, we can’t simply trust “the way things are” either.
Life’s an adventure.
We need to conceive of democracy as an adventure too — an ongoing process of deliberation rather than a series of scheduled competitions framed by static rules, institutions, and procedures.
This is why I’m not as afraid of destroying our system of democracy as Lessig seems to be: I have a vision of what might replace it, and I want to push further in that direction. I can’t see exactly what’s over the edge, but I can at least see that it isn’t anything that will kill us.
Maybe it will be a positive development if the slightest correlations are misinterpreted as corruption. My response was, “Good: if everyone’s afraid of being accused of corruption, people will become less willing even to take that chance. They’ll find ways to run election campaigns that don’t rely on ever-increasing sums of money to finance the advertising arms race.”
And it just so happens that social media is creating opportunities to do that.
So maybe instead of attack ads and billboards — which politicians hypothetically won’t be able to afford because they’re afraid of accusations and scandals — we’ll start getting more genuine communication…
But then something terrifying occurred to me.
What if, instead of moving towards more open and deliberative types of campaigns, the advantages simply shift from those able to raise large sums of money to those who already have it?
Uh oh…
But we can’t afford to stand around speculating about what might or might not happen; we should be investing more in experiments and prototypes so we have a more empirical basis for answering these difficult questions and proposing solutions.
All changes have unintended consequences…
Let’s finally accept that and get on with the process in a more proactive way.
Stop saying “I have my doubts about social media” and “there’s a negative side to the web,” because there’s a negative side to everything, and we should always have been critical of our assumptions and tools — both new and old.
The most immediate example is the fact that I wouldn’t have read Lessig’s criticism without social media — and you wouldn’t have found what you’re reading right now (for what it’s worth).
Flawed as it may be, the web is becoming the best resource we have to learn and deliberate and pragmatically work through the process of overcoming our past (and future) mistakes.
What choice do we have?
[Philosophical background starts here.]