I just finished reading a recent post by Jeff Jarvis titled "Media after the site," which explores two ideas – what the next phase of media will be, and how we'll determine which information to trust. I think he put together some good thoughts, but I'd like to expand on them.
Jarvis starts with a vision of the next evolution of media:
The next phase of media, I’ve been thinking, will be after the page and after the site. Media can’t expect us to go to it all the time. Media has to come to us. Media must insinuate itself into our streams.
He proposes that this idea is personified in @stephenfry, a guy with a huge Twitter following who has built a brand out of himself – via the content he creates, which then spreads and finds its niches by getting retweeted and distributed by his followers. The idea is supplemented with a quote from a NYT story:
If the news is that important, it will find me.
He identifies this as a shift in the role and function of information on the web – that ‘the website’ will become more of an archive for information, but it will constantly have new life breathed into it when it enters the ‘stream’ of the real-time web:
Content will insinuate itself into streams and streams will insinuate themselves back into content. The great Mandala.
I think Jarvis made some good observations, but I think there’s a bigger picture to be considered. I’m going to rephrase his observations in a larger context:
The real-time web has changed the way we access and distribute information.
I’ve been fleshing out these ideas over the past few months, and I think looking at Jarvis’s points through the lens of the three key drivers of the web’s evolution brings a bit of clarity. Historically, as complexity increases, we develop new methods of sifting through all the information – of separating quality content from noise. It’s no different on the web. There are millions of blogs and sources of news and information, but how do we get to the gooey caramel center of stuff that matters to us?
Enter the real-time web (which, at the moment, means Twitter). What makes Twitter different is that it’s not so much a social network as a massive Idea Exchange. Critics say that Twitter’s 140-character limit is making us stupid, but I think it’s just the opposite: we’re being challenged to convey information in less space without losing its value. This is a method for managing complexity.
As Jarvis says, the information from blogs and websites ‘enters the stream’ (the twitterstream), gets chewed on and kicked around and validated via retweets, and then ends up back on our blogs where we break it down, analyze it, and try to find new insights hidden within it. If we think we’ve found an insight, we throw it back into the twitterverse and see what people think. And hence the cycle continues.
It’s a beautiful process we’re participating in, trying to collectively make sense of information. Whereas Jarvis says the process is exemplified by @stephenfry, I’m saying multiply that by millions. Each person is just one component of a worldwide net: a little hub that transmits information. You don’t need a million followers to convey a message; you just need the message, and the information will travel.
I’m seeing the real-time web evolving into a collective intelligence, a type of ‘global brain’, which I outlined a little further in ‘Twitter’s Intelligent, Welcome to Web 3.0’. I think that’s the big-picture idea that Jarvis didn’t directly mention, though he dances around it. He mentions how having access to ‘the stream’ will be increasingly important and relevant as our “always-connected and always-on devices” become more seamless and ubiquitous. I agree.
Once we really realize how to unlock the potential power of the real-time web, why wouldn’t we want to stay connected?
The second idea Jarvis addresses: how do we prioritize information, and whom do we trust to filter or explain it?
He references Clay Shirky’s post on Algorithmic Authority to explain that we’re developing new systems for getting quality information – namely, trusting other humans to filter it for us. But whom do you trust? I think Shirky summarized his whole post when he said this:
“…the criticism that Wikipedia, say, is not an “authoritative source” is an attempt to end the debate by hiding the fact that authority is a social agreement, not a culturally independent fact…”
This statement speaks directly to the kinds of shifts we’re experiencing in how we define “experts” and, to a degree, how we define “knowledge.” We’re beginning to really embrace the idea that we decide who’s an authority, and we’re doing that by gauging how much value what the person is saying brings us.
Meaning – there’s too much information out there. When we find people who are able to bring us the information that’s meaningful and relevant to us, that fits into some larger context, that we can apply in our own lives and careers in order to keep us ahead of the curve – that’s who becomes an authority. We trust them because they bring us value.
I think this is what remains of the web’s evolution. The point of all of this was to find a better way to connect people, ideas, and information. Now it’s a matter of refining that process. That means developing better ways to build our knowledge networks by knowing who’s out there (I’m calling for a methodology for visualizing human capital), and developing better ways to tag, store, and retrieve information.
From there, we’ll be able to really start exploring how to collectively make sense of information and solve problems. Yes, there will always be individuals who help us clarify information and bring it into focus, but each of them is still just a node. The brilliance of where the web is going is that we can pull apart and reconstruct reality collectively as we go, in real time, and at a scale that has simply never been possible before.
Thanks to @grahamhumph for pointing out Jarvis’s post.