Category betaworks

You gotta read this! / Thoughts about reading and internet media use

Back in May Mike Hudack posted a rant about the state of the news media. The gist of it is: here we are in 2014, the Internet is at scale — the mobile internet is in the pockets of 30%+ of adults worldwide, social networks are at a proportionate scale — and yet the news media seems to be becoming more and more dumb. Put another way: the world of news creation and access has been blown open, and yet most news organizations have hollowed out their news capabilities and are posting trivial listicles about “28 young couples you should know”. The response was interesting, in part because Mike works at Facebook. Alexis Madrigal summed up much of the sentiment in a sentence in the comments: “Hey, Mike, … My perception is that Facebook is *the* major factor in almost every trend you identified.”

A month later — here in New York our extended spring was rolling onward — on Sunday, June 8th, the University of Reading in the UK put out a press release saying that the Turing test had been passed for the first time, ever. The media ran with the story, or more accurately, reproduced the press release. The headlines were excellent, easily shareable, easily clickable. I for one saw it fly by in my stream and thought “wow, milestone passed, I will share that.” The problem was the press release wasn’t true, and neither were most of the stories that were published. Fast forward a month: right at the end of June the AP announced that it is going to start algorithmically writing stories. Using earnings report data, it is going to let machines “write” business stories.
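The AP's system itself isn't public, but the templated approach to generating stories from structured earnings data can be illustrated with a rough sketch. The company, field names, and wording below are all invented:

```python
# Minimal sketch of template-based story generation from earnings data.
# The AP's actual system is proprietary; everything here is illustrative.

def earnings_story(company, quarter, eps, eps_est, revenue):
    """Render a one-sentence-style earnings blurb from structured inputs."""
    verb = "beat" if eps >= eps_est else "missed"
    return (
        f"{company} reported {quarter} earnings of ${eps:.2f} per share, "
        f"which {verb} Wall Street expectations of ${eps_est:.2f}. "
        f"Revenue came in at ${revenue / 1e9:.1f} billion."
    )

print(earnings_story("Acme Corp", "second-quarter", 1.42, 1.35, 7.8e9))
```

The point isn't sophistication; it's that for highly structured beats like earnings, a fill-in-the-blanks template plus a data feed covers a surprising share of the work.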

Let’s take a step back and think a bit about what is going on here. We have a dominant social distribution system that favors shareability — case in point: the Hudack discussion. It’s biased towards speed, and that bias is short-circuiting fact checking, as the Turing example shows. And in the case of Facebook it’s mediated by algorithms that aren’t transparent. Layer in the economics, the cost of creating this “news”, add in the AP announcement, and you get a good idea of where this is headed. Algorithmically created news stories, mediated by algorithms, shared by people, people who are barely reading these posts. If we can all just get services like SocialFlow to do our sharing, we humans can completely quit this loop.

Maybe this isn’t the whole story?   Read on … 

What can homescreens tell us about the way people use their phones?

At betaworks we aim to build apps that people love: the essential apps that people use every day and that they obsessively want to have on the homescreen of their devices, one touch away. Yet, measuring progress against this goal is a challenge. We have internal analytics, tools, and KPIs that give us an indication of progress. We are obsessive users of Chartbeat, which we helped design specifically to track real-time social engagement. We use Twitter and social channels to measure the scale of engagement and its depth. Twitter is especially good at giving us a sense of depth: examining the language, the influencer clusters and the sentiment that people use to describe our work. When people talk about Dots as an obsession they love or a Tapestry story as something that moved them, we take these as indicators that we are accomplishing our goal. However, it’s just an indicator and the world we build in today is balkanized. More often than not, we can’t get enough visibility into many of the platforms on which we build experiences. Whether it’s the App Store, Facebook, Instagram, Snapchat, most platforms today are opaque in terms of metrics and data. But at the start of each year there is an elegant hack we apply.

Each new year, people share pictures of their homescreens on Twitter, Instagram and other social sharing platforms. If you search Twitter for #homescreen2014, you will see a stream of pictures of people’s homescreens — the primary screen of their phone with all the apps they choose to keep there. It is fascinating to browse through this stream of images — analyzing it is even more interesting. Right after the new year, we culled 1000 homescreen images from Twitter, cut up the images and tabulated the apps on the homescreens vs. those in folders. Admittedly, it’s a hack, and the sample is skewed: among all smartphone users, we’re biasing completely for people who use Twitter, and among Twitter users we’re selecting for the type of person who is willing to share a homescreen image. But, caveats aside, the data are fascinating. Eighty-seven percent of homescreens shared in our sample were iOS and 12 percent were Android (1 percent was Windows). For the sake of consistency, we focused the analysis below on iOS — the 87 percent.
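The tabulation step of this hack is easy to mechanize. A minimal sketch, assuming each homescreen image has been transcribed into a list of app names (the screens below are invented examples, not our data):

```python
from collections import Counter

# Hypothetical transcriptions: each entry is the list of apps visible
# on one shared homescreen image.
homescreens = [
    ["Mail", "Safari", "Instapaper", "Dots"],
    ["Mail", "Maps", "Dots", "Digg"],
    ["Safari", "Mail", "Camera"],
]

# Count how many sampled homescreens include each app.
counts = Counter(app for screen in homescreens for app in screen)
total = len(homescreens)

# Percent of sampled homescreens that include each app.
share = {app: 100 * n / total for app, n in counts.items()}
print(share["Dots"])  # Dots appears on two of the three screens
```

With a thousand transcribed screens, the same dictionary gives the per-app presence numbers discussed below.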

The first metric that we pull from the sample is the percent of people who have apps we are developing at the betaworks studio on their homescreens. We then look at the investments we have made. These are our KPIs, so let me start with them and then offer up some data and perspective beyond betaworks.

Our results. Betaworks apps we are building at the studio are on 17.3 percent of people’s phones, up from less than 5 percent at the start of 2013. In terms of the investments that betaworks has made — that haven’t exited — they account for a further 15 percent. That is a significant jump in presence and share.

Read more over on Medium

How Bloomberg does interviews …

I did a live interview last Friday on Bloomberg TV.  It was interesting.  A conversation about the early stage technology environment, increases in the cycles of change, new things at betaworks and the Facebook IPO.

Borthwick on Facebook IPO, Betaworks’ Strategy

May 12 (Bloomberg) — John Borthwick, chief executive officer of Betaworks, talks about the company’s investment strategy in technology startups, Facebook Inc.’s pending initial public offering and the outlook for its shares and competition.

 ____________________

If the subject of the Facebook IPO and the acceleration of the rate of technology change interests you there are two other posts on the subject I saw this weekend.

Here’s Why Google and Facebook Might Completely Disappear in the Next 5 Years

Mobile – Facebook And Google Can’t Live With It And They Can’t Live Without It

Back to Bloomberg and the live interview

Live TV is always interesting. I don’t enjoy it, but I love the fact that it’s live: it’s your words, no editing possible. That aside, the way that Bloomberg does these segments is fascinating. The host is wired up, standing in the atrium of the Bloomberg building, producer jammed into her ear. She has two screens in front of her, both Bloomberg terminals running Windows. First, check out that keyboard: Bloomberg terminals and airport check-in are the only places you see things like that. Back to the screens. From what I could gather, the one on the right was email and a chat window. Email was moving fast, a stream of a message or so every few minutes; Twitter sending new follows, notifications, @mentions, etc. On the left was an application that let the host compose, in real time, a feed into her teleprompter.

The segment began with a discussion of Facebook and Google. Partway through it the producer (in her ear) tells her there is a breaking story about JP Morgan. As soon as there is a pause she says “we are going to jump to a breaking story after the advertising break”. During the ads she composes the introduction to the breaking story on the screen on the left.

It’s fascinating to watch the process; the only thing missing is a Chartbeat terminal with a live feed of user metrics (i.e. who is watching what). The way media is made is changing as the real-time stream becomes an integral part of the creation / production process.

betaworks 2012 shareholder letter

Related links:

findings


Today at betaworks we are launching findings, a platform for sharing and discovering what people are reading.  You can see my bookshelf of what I’m reading here – this includes books as well as web pages from which I have clipped highlights that interest me.  You can see the quotes I have highlighted here, or the same collection as an xml feed.  All these quotes are then placed into a social framework where you can explore who I follow on findings and who follows me.  Users of findings get to choose whether to make their collections public or private.  The default is public because at betaworks we believe that making data open and sharable adds value to the data in its entirety. I’m not sure if it’s a squared relationship but I do believe that it’s more than linear.

Building findings was a slow brew or a “slow hunch”.  Back in 2005, Steven Johnson wrote a great blog post about his use of DevonThink software and how he was using it as his personal Memex.  The piece resonated with me.  At the time I had a flat-file system with years’ worth of collected quotes and clips; it was searchable, but the happenstance of the discovery tool that Devon offered opened up a whole new dimension to my collection.  Steven and I started to develop the first version of findings about four years ago with the goal of creating a platform to help people collect, share and discover things they were reading.

Along the journey Steven found this wonderful quote from Robert Darnton about the commonplace book:

“Unlike modern readers, who follow the flow of a narrative from beginning to end, early modern Englishmen read in fits and starts and jumped from book to book. They broke texts into fragments and assembled them into new patterns by transcribing them in different sections of their notebooks. Then they reread the copies and rearranged the patterns while adding more excerpts. Reading and writing were therefore inseparable activities. They belonged to a continuous effort to make sense of things, for the world was full of signs: you could read your way through it; and by keeping an account of your readings, you made a book of your own, one stamped with your personality.”  (You can see the origin of this clip and others by Darnton on this page.)

This quote exemplifies how I read, and write, today.  Despite this, the tools and the language of sharing quotes and marginalia are still only loosely formed. With findings.com, we take a step forward (or back!) to this future.

Back in 2007, there were no ebook readers, no kindles, no iPads – not even a nook. The iPhone was barely six months old and had no apps –  unless you decided to jailbreak.  In short, it was too early for findings so we bought the domain and shelved the development.  As a side note, we originally started with the domain findin.gs but that was a mouthful so we moved over to findings.com.  A bit easier to pronounce.  The project sat on the shelf for about 18 months.

About two years ago, Steven Johnson and I again started talking about the need for a common platform where quotes and marginalia could be shared, re-organized and re-combined.  With devices that enabled “networked” long-form reading on the market, the potential behind the findings idea seemed both timely and unbounded.  About a year ago Corey jumped on board and the three of us got to work; Corey built another beta.  Once again, we didn’t ship this second version; rather we tested, trialed and kicked it around at betaworks.  We kept asking ourselves how to make this useful, how to retain the context of the book yet give the atomic unit (aka the quote) a place to exist independently…and with social context.  Today we are launching findings.  The experience is simple yet the metadata that is processed in the background is complex.  I hope you will give it a try.  Sign up for an account, clip and sync your highlights, and let’s see what we can build around digital marginalia.

Since we started working on findings people like James Bridle have helped construct a roadmap.  As James has written, marginalia is a vital and vibrant part of the reading experience.  It’s both personal and social: “digital technologies do not just disseminate, they recombine, and in this reunification of our reading experiences is the future of the book”.  Thanks to James and others for their insight, we are collectively just starting to understand what is possible and what reading will be in the future.

Thanks to the current team of Steven, Corey, Jason, Jeffery, Neil and Alex: it’s great to see findings live.  And thank you to the original development team of Nate, Neil and Trevor.  Sometimes ideas need time to develop, simmer and brew.

And a final note: please be patient with us; the site had a lot more traffic than we expected today.

Steven’s post on the launch is here.

tumblr’s blow out round

tumblr is on a tear.  The growth numbers are insane and they have just announced a big, big funding round.  Back in May of this year TechCrunch ran a post outlining that tumblr was doing the same number of pageviews a day as it did in a month back in 2009: 250m pageviews a day.  If you look at the same metric today, the Quantcast pageview (impression) count is now close to 400m. The NYT reports that the service is now doing “13 billion page views per month from 2 billion page views per month. Since the site was first introduced, 30 million blogs have been created using the tool. Those 30 million blogs now generate more than 40 million posts each day.”

This is stunning growth and a testament to great work by David and the team over the past five years.  It’s also an indicator of how fast new social platforms can get to scale.  We are living in an age of multiple social platforms.  The next five years are going to be fascinating as the established platforms (i.e. Facebook, Twitter) relate to the new platforms (i.e. tumblr).

I remember meeting David before we started betaworks; I was still running Fotolog and David was working with the  Next New Networks team.  It was April of 2007, and my old friends Emil and Fred had recruited David to work as a contractor to help them build out the Next New product.  tumblr was a side project that David had created because he believed web publishing could be different. He believed publishing could be a simple and beautiful experience; holistic design of the publishing experience, from the post dashboard to the layout of every pixel, could be something simple and bold. I remember talking with David about the early forms of blogging and how tumblogging was emerging as a short variant.  We talked about dashboards and how they should be integrated into the published experience (vs. a toolset that sits outside), and we talked about re-blogging and different tools and forms to amplify and syndicate posts.  We also talked about reposting from other networks and how he wanted tumblr to retain the layout of posts vs. linking out.

The thing I remember the most from the conversation was David himself.  He is one of the best and most dedicated product entrepreneurs I have ever met – he thinks carefully and deeply about every interaction that he and his team create, and always has.  Every pixel has been considered with care.  It’s wonderful to see and work with someone who cares so much about the actual product experience. David is different and special. The rest of the story is history.  David left Next New Networks and started focusing on tumblr full time. I started betaworks and made tumblr one of our first investments. It’s been a pleasure to work with David over the years and to be part of what is becoming a great “banner” New York company in the social web.  Congratulations to David, John and the team.  Here is a video of David speaking at last year’s betaday event.

Note that as backdrop this talk was the day after tumblr had a large outage, so I think David had been pretty much up all night.  It’s a wonderful example of his dedication and commitment as a person and a builder.

Interview with WSJ’s Alan Murray about the future of social media

An interview with the Wall Street Journal’s @alansmurray discussing the impact of social media.

news.me

News.me launched this morning as an iPad app and as an email service. Here is some background on why and how we built News.me:

Why News.me? For a while now at bitly and betaworks, we have been thinking about and working on applications that blend socially curated streams with great immersive reading interfaces.

Specifically we have been exploring and testing ways that the bitly data stack can be used to filter and curate social streams.   The launch of the iPad last April changed everything. Finally there was a device that was both intimate and public — a device that could immerse you into a reading experience that wasn’t bound by the user experience constraints naturally embedded in 30 years of personal computing legacy.  So we built News.me.

News.me is a personalized social news reading application for the Apple iPad. It’s an app that lets you browse, discover and read articles that other people are seeing in their Twitter streams.   These streams are filtered and ranked using algorithms developed by the bitly team to extract a measure of social relevance from the billions of clicks and shares in the bitly data set. This is fundamentally a different kind of social news experience. I haven’t seen or used anything quite like it before. Rather than me reading what you tweet, I read the stream that you have selected to read — your inbound stream.  It’s almost as if I’m leaning over your shoulder — reading what you read, or looking at your book shelves: it allows me to understand how the people I follow construct their world.
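bit-rank itself is unpublished, but the general idea described here, ranking the links in an inbound stream by click-level social relevance, can be sketched roughly. The decay-weighted scoring below is an invented stand-in, not the actual algorithm:

```python
import math

# Sketch of ranking links in a stream by recent click activity.
# bit-rank is proprietary; this scoring function is an illustrative stand-in:
# clicks are discounted by age so fresh, heavily clicked links rise to the top.

def score(clicks, age_hours, half_life=6.0):
    """Clicks weighted by exponential time decay with the given half-life."""
    return clicks * math.exp(-math.log(2) * age_hours / half_life)

links = [
    {"url": "http://bit.ly/aaa", "clicks": 900, "age_hours": 20},
    {"url": "http://bit.ly/bbb", "clicks": 300, "age_hours": 1},
    {"url": "http://bit.ly/ccc", "clicks": 50,  "age_hours": 0.5},
]

ranked = sorted(links, key=lambda l: score(l["clicks"], l["age_hours"]),
                reverse=True)
```

Under this scheme a day-old link with many clicks can still lose to a fresh link with fewer: the stream surfaces what is socially relevant right now rather than what was popular yesterday.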

As with many innovations, we stumbled upon this idea.  We started developing News.me last August after we acquired the prototype from The New York Times Company. For the first version we wanted to simply take your Twitter stream, filter it using a bitly-based algorithm (bit-rank) and present it as an iPad app. The goal was to make an easy to browse, beautiful reading experience.  Within weeks we had a first version working.  As we sat around the table reviewing it, we started passing our iPads around saying “let me look at your stream.” And that’s how it really started.  We stumbled into a new way of reading Twitter and consuming news — the reverse follow graph wherein I get to read not only what you share, but what you read as well.  I get to read looking over other people’s shoulders.

What Others Are Reading…

On News.me you can read your filtered stream and also those of people you follow on Twitter who use news.me.  When you sign into the iPad app it will give you a list of people you are already following. Additionally, we are launching with a group of recommended streams. This is a selection of people whose “reading lists” are particularly interesting.  From Maria Popova (a.k.a. brainpicker), to Nicholas Kristof and Steven Johnson, from Arianna Huffington to Clay Shirky … if you are curious to see what they are reading, if you want to see the world through their eyes, News.me is for you. Many people curate their Twitter experience to reflect their own unique set of interests.   News.me offers a window into their curated view of the world, filtered for realtime social relevance via the bit-rank algorithm.

Streamline Your Reading

The second thing we strove to accomplish was to make News.me into a beautiful and beautifully simple reading experience. Whether you are browsing the stream, snacking on an item (you can pinch open an item in the stream to see a bit more) or you have clicked to read a full article, News.me seeks to offer the best possible reading experience.  All content that is one click from the stream is presented within the News.me application.  You can read, browse and “save for later” all within the app. At any given moment, you can click the browser button to see a particular page on the web. News.me has a simple business model to offer this reading experience.

Today we are launching the iPad News.me application and a companion email product.  The email service offers a daily, personalized digest of relevant content powered by the bit-rank algorithm, delivered to your inbox at 6 a.m. EST each morning.   The app costs $0.99 per week, and we in turn pay publishers for the pages you read.  The email product is free.

——————————————————–

How was News.me developed? News.me grew out of an innovative relationship between The New York Times Company and bitly.   The Times Company was the first in its industry to create a Research & Development group. As part of its mission, the group develops interesting and innovative prototypes based on trends in consumer media. Last May, Martin Nisenholtz and Michael Zimbalist reached out to me about a product in the Times Company’s R&D lab that they wanted to show us at betaworks.  A few weeks later they showed us the following video, accompanied by an iPad-based prototype. The video was created in January 2010, a few months prior to the launch of the iPad, and it anticipated many of the device’s gestures and uses, in form and function. Here are some screenshots of the prototype.

On the R&D site there are more screenshots and background.   The Times Company decided it would be best to move this product into bitly and betaworks where it could grow and thrive. We purchased the prototype from the Times Company in exchange for equity in bitly and, as part of the deal, a team of developers from R&D worked at bitly to help bring the product to market.


With Thanks … The first thank you goes to the team. I remember the first few product discussions, the dislocation the Times Company’s team felt having been airlifted overnight from The New York Times Building to our offices in the heart of the Meatpacking District. Throughout the transition they remained focused on one thing: building a great product. Michael, Justin, Ted, Alexis — the original four — thank you.  And thank you to Tracy, who jumped in midstream to join the team.  And thank you to the bitly team, without whom the data, the filtering, the bits, the ranking of stories would never be possible.  As the web becomes a connected data platform, bitly and its API are becoming an increasingly important part of that platform. The scale at which bitly is operating today is astounding for what is still a small company: 8bn clicks last month and counting.

I would also like to thank our new partners. We are launching today with over 600 publishers participating, some of whom you can see listed here; most are not. Thank you to all of them; we are excited about building a business with you.

Lastly, I would like to thank The New York Times Company for coming to betaworks and bitly in the first place and for having the audacity to do what most big companies don’t do. I ran a new product development group within a large company, and I would like to dispel the simplistic myth that big companies don’t innovate.   There is innovation occurring at many big companies.  The thing that big companies really struggle to do is ship: how to launch a new product within the context of an existing brand, an existing economic structure, an existing organizational structure; how not to impute a strategy tax on a new product.   These are the challenges that usually cause the breakdown, and where big-company innovation, in my experience, so often comes apart. The Times Company did something different here.  New models are required to break this pattern; maybe News.me will help lay the foundation of a new model.   I hope it does and I hope we exceed their confidence in us.

http://on.news.me/app-download

And for more information about the product see http://www.news.me/faq

#Jan25: “Sorry for the inconvenience, but we’re building Egypt.”

It’s been a remarkable few months in the Middle East.   Most recently the events in Egypt have captured the world’s attention, and Al Jazeera’s English web site has become the place to watch many of the events unfold.   Given that the channel isn’t carried by most US cable companies, the web site has been the means to view the channel live over the Internet.

Al Jazeera is also a user of Chartbeat.   Chartbeat offers a real-time window into what is happening on a web site right now.   Watching the traffic flows over the past few weeks has been fascinating — in Al Jazeera’s case, the site broke traffic record after record.   I wonder what popular TV show would compare to having 150,000 to 200,000 simultaneous users on a web site, most of them watching TV?

A lot has been and will be written about the role of social media in this revolution. Here is some data and perspective from the vantage point of traffic to the Al Jazeera web site yesterday, as seen via their Chartbeat dashboard right as Mubarak announced his resignation.

Many thanks to the Al Jazeera team and specifically Mohamed Nanabhay for letting us publish these snapshots.

———————————————————————————————————-
Just before noon yesterday, users started flooding into the Al Jazeera web site.

The screen shot below shows the traffic sources — links, social and search at noon EST.

If you zoom into the article level view you can see that 70%+ of the traffic is coming from social networks.   The picture on the left is the same as the one above — the one on the right zooms into the article level dashboard for the page titled “Hosni Mubarak resigns as President”.

Mohamed Nanabhay, Head of Online for Al Jazeera’s English web site, described the experience:   “As you can imagine our newsrooms and field teams have been on full throttle over the past three weeks. While Al Jazeera very quickly became the world’s window into the revolution in Egypt, Chartbeat proved invaluable as my window into our audience and website. From deploying resources to prioritizing updates, from rolling out new features to identifying technical issues on the site, we were able to make better decisions more quickly based on real-time data.”

Interesting snapshots and kind words from people who are monitoring the real time web in ways that could not have been imagined a revolution or two ago.

networked media

This is a different kind of post. I started thinking about “networked media” last August. This began in the same way my longer posts usually do: a slow process of thinking, writing, and editing that spans a few months. But the process took a left turn in October when I decided to speak about networked media at betaday. My work on the blog post ceased and I focused my attention on betaday. What I’m posting here is a compilation of the introduction that I wrote back in August, a video of the betaday talk, and my general notes.

The impact of the “socialization of the web” (i.e. the social components of the web that now pulse through every web page) is a fascinating subject that I think we are only just beginning to understand. Though “socialization” is a politically loaded word, my intent here is not political.   Rather, my use of the word “socialization” is three-fold: I seek 1) to show how media is changing as it becomes integrated with social experiences; 2) to note that the economics of media production are changing; and 3) to emphasize that this shift is a process, not a product.

Social disruption

Over the past few years I have written a fair amount about how the social web will change the way people discover and distribute information online. This started with a post in the spring of 2008 on the Future of News. Then in early ’09 I outlined how “social” would change the discovery process and disrupt traditional search. And then I wrote a long piece about what this shift in discovery means for the user experience on sites. These ideas, and subsequent posts, have informed a lot of what we have built and invested in at betaworks. New modes of navigation and discovery are being developed – from Summize to Tumblr to TweetDeck, and more recently from GroupMe to Ditto. It is now generally accepted that the impact of “social” on discovery and navigation is under way, but I believe the impact goes beyond discovery.

Undoubtedly, search has changed, and continues to change, the way we write, create pages, lay out pages, tag and relate to content. It has also encouraged the creation of sites with limited or distracting content that exist solely to optimize search.  But search has not driven a change in the content and user experience once a user is on a page that they value. By contrast, the “social web” is changing the web itself – “social” is altering the nature of what we find. Social experiences are becoming the backbone of many sites. A web page that is part of the “social web” transforms content into a liquid experience, giving rise to a new kind of media: networked media. In the video from betaday, I walk through this shift and show data we have at betaworks that illustrates this change.

_________________________________________________________

Link to: Networked Media presentation from betaday/10 on Vimeo.

_________________________________________________________

General Notes re: Networked Media from my September draft:

Starting about four years ago it became clear that the social, real-time web could change the way search and discovery happened online.   Fast forward to today and that has certainly happened. The impact of this shift in distribution economics isn’t over, but the trend tipped to scale during 2010.   Last year we saw site after site announce that the percent of traffic it is getting from the social web now exceeds, or is second only to, search.   In my post on how social will disrupt search two years back I used the example of YouTube, and showed the speed at which it had become the second largest search destination on the web.   Twitter, Facebook, tumblr and other vertical social networks are driving meaningful traffic to sites around the web.    Take the collection of sites in the chart below, from news to commerce, from TV-based media to sports: for many of them social is now the largest driver of traffic.  Nick Denton said last month that referrals to Gawker properties from Facebook had increased sixfold since the start of the year.    And this is different traffic from search traffic.  It’s socially referred, it’s of higher quality, and embedded in it is the multiplier effect that the social publishing platforms drive.

[Chart: traffic by referral source for a collection of sites, from news to commerce]

The socialization of the page

The question I would like to turn to now is how web pages and applications are being changed by the social, real-time web.  Search changed the way we discovered the web.  Web sites optimized their pages for search bots, but in most cases they didn’t actually change the content or substance of the page that was presented to the end user.   Put another way, search brought little tangible benefit to the end user beyond discovery.     Search certainly created new forms of sites.   Domain parking, content farms, link bait: search spawned thousands of sites that managed to game the discovery tool to gain attention, clicks and visits from users who find themselves on a site that has the metadata they were looking for but often little of the content.

But unlike search, the dynamic of a web page becoming part of the social web is transforming the experience and the content of that page into a liquid experience that is giving rise to a new kind of media.  Humor sites changed because of search.   This was the one exception I found.   Fred Seibert told me last summer how humor sites changed the content of their pages, placing the punch line up front, because that is what people searched for.

(for the interested, a short primer is here on what we do at betaworks)

Three steps re: how a page becomes networked

#1. An activity window opens up. Somewhere between 1-3 hours after a story is posted, a window of social activity opens.   An example, albeit a slightly unusual one: a product page on Amazon for a set of speaker cables that cost almost $7,000.  This past weekend the page all of a sudden took flight on Twitter and some of the social blogs.  The page was actually posted to reddit a month ago, yet for whatever reason the insanity of a $7,000 cable didn’t mesh with the zeitgeist until November 27th.   On the 27th the page was tweeted by @PaulandStorm, and off it went.   Screen shot of the page here.   In the video above you see this process happen in detail. I use Chartbeat to understand the progression and dispersion that occurs in this initial activity window.   Take a look at the dispersion patterns of typical stories on Fred’s AVC blog: you can clearly see the window of engagement happen; just look at what happens as Fred puts up a new post one morning.  The uptake starts about an hour after the post hits; usually the peak occurs at the 100-minute mark. Chartbeat data from thousands of large sites around the web suggests that for a blog the peak is usually around 60 minutes after posting, and for a news site it’s 130 minutes.  It’s great how open Fred is with this data; lots to learn. These are windows of meaningful, concurrent activity.   Concurrent users is the key metric to track at this point.     Amplification in the social web is what drives the metric, and amplification happens because of relative influence within your and other social groups.   Link and discuss: It’s Betweenness That Matters, Not Your Eigenvalue: The Dark Matter Of Influence: http://sto.ly/ii40vr
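The time-to-peak figures above come from Chartbeat data; as a small sketch of the underlying measurement, minutes-to-peak can be read straight off a series of (minute, concurrent users) samples. The samples here are invented:

```python
# Sketch: find the minutes-to-peak in a series of
# (minutes_since_post, concurrent_users) samples. Data is invented,
# shaped like the blog pattern described above (peak near the hour mark).

samples = [(0, 5), (20, 40), (40, 120), (60, 310), (80, 260), (100, 150)]

# The sample with the highest concurrent-user count marks the peak.
peak_minute, peak_users = max(samples, key=lambda s: s[1])
print(f"peak of {peak_users} concurrent users at {peak_minute} min after posting")
```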

#2. Social clustering occurs. With the engagement window open and concurrent users on the page peaking, clustering starts to happen. What separates this from just an open engagement window is the level of engagement. Users arrive on the site, they start posting comments and the conversation begins. “Each comment someone takes the time to leave serves as a proxy for 100 or so folks who probably echo that sentiment” (Battelle). Examples… the importance of the time of day that you publish into the social web. Timing relative to what your social group is talking about now is what triggers clustering. This is why SocialFlow works — it knows when is the right time to send the message that lights up the social web. Below is an image from some analysis that the NY Times did using bit.ly data. It shows the dispersion of a particular story — in this case a Kristof piece about the Pill — across the social web. In the image you can see the clustering occurring, this burst over time of influencers and social engagement.

#3. The page becomes networked. Snap: a synchronous experience occurs. A critical mass of users is on one page at the same time and something magical happens. Think about it as a page becoming a live event or a live site. Similar to a concert, there is a residue of the social experience when you go back, even if it’s way after the event. If you watch the opening of this live concert you will get a visceral sense of what this looks like and what happens when media becomes connected with the audience. It’s Springsteen’s Hungry Heart, and while he plays the opening of the song he turns it right over to the audience to pick it up and sing the opening. Forking of content.

— Rise of agile publishing: what is it? Lean editorial teams, instrumentation of sites, getting the data feedback, adaptive CMSs, the importance of posting at the right time, the importance of tracking social engagement, how every page is becoming a front page

— Serendipity. Some of this is science, some of it isn’t. An “old” page can become networked out of nowhere — point back to the Amazon example. You don’t know when or where it’s going to happen, so you need tools to track and alert you when it does

— We are moving into an age of networked media. danah boyd’s analysis of the shift from broadcast to networked media

— Closing of comments after the activity window – proximity references / boyd article; a couple of old ones are in close proximity to this one – structured data types to allow for debate topics.

Example: Gawker. Gawker is experimenting with a new design that is both more dynamic (real time) and more immersive, without the restrictions of reverse chronological order. Users are no longer navigating from page to page across isolated sites. Rather they are experiencing a subset of sites as a liquid experience, where there is a consistent flow from site to site and the consistent aspect is social. Users flow — an ambient experience of media.

Example: Dribbble and the iTunes icon — this became a networked media event.

Example: Yahoo bloggers adapt content to the referrers and link to whatever is spiking

Example: “the quality of the dynamics of the conversation shift from one where parlor tricks can sustain themselves beyond the quality of the content to one where we can get sort of immediate tactile connection with people” (source: 4.18.09 Gillmor Gang 1.01 min).

Example: Red State: Twitter’s 140 characters; they wish they could aggregate topics; need standardized metrics re: social engagement

Points of tension to discuss and think about further?

– Advertising as the primary mode of monetization: pulling people in vs. pulling them away.

– Tension between platform owners who monetize with advertising on their site, trying to integrate web sites into their monetization flow

– The monolithic assumption that one social platform will rule all. How vertical use cases of social (from Tumblr to Foursquare to GroupMe to Instagram) illustrate how social is fragmenting into specific workflows and uses. Do “digital network architectures naturally incubate monopolies” (Lanier)?

– How the economics of social media are affecting networked media. Ownership of data, ownership of content; if users are creating the content, what rights do they have over it?

– Importance of the link structure of the web: it’s the most fluid form; resist the temptation to vertically integrate and build “consumption” sites.

– Dimensionality reduction: too much data


– Heisenberg principle of social media: the act of a page becoming social changes it

My reading collection on networked media: http://bit.ly/bundles/johnb/u




Tweetdeck: multistream, unistream, getting it all streamed right

The Tweetdeck team have been hard at work for two years thinking about how to display and navigate streams on the web and on devices. The Android version that moved into beta yesterday is a big step forward. The tech blogs have done feature reviews and paid compliments to the user experience, the speed and simplicity of use, but there is more going on here. It is going to take some use to settle in on why this is different and what has changed; users are starting to see it.

What’s so different here is the concept of a single unified column for all your real time feeds. Inside the “home” column the different services are color coded and weighted to allow for the varying speed / cadence of different streams. In the screen shot below of the beta Android client, you are looking at my “home” column. It includes updates from all my Twitter accounts, Facebook, Foursquare, Buzz etc. You can see that a checkin is included in the home stream as a simple gesture that tells me “Sam checked in at Terminal 4”. It’s formatted differently from a Twitter update – it contains only the summary information I need: “someone is checking in somewhere”.

If I click on the “check in”, the view pivots around place, not person.

This cross stream integration is also evident in the “me” column — a single column that integrates all mentions across the various social services you have.   The “me” column is the first one to the right of home — you can see it in the screenshot below.  The subtle little dots on top offer a simple navigation note that you are now one column to the right of “home”.    And the “me” column again integrates mentions across streams — the top one is a reply to a Facebook update, if I click through I get the context, below it are Twitter mentions.

I wrote about the importance of context in the stream a while ago. Context is more important now than ever as the pace of updates, vertical services (ie: local, q&a, payments) and re-syndication continues to speed up. Previously Tweetdeck ran all of these services in separate columns – one for each. The Android version still has multiple columns, but the other columns are ways to track either topics (search) or people (individual people or groups of people) — you can see how those work here. It’s in beta and there is still work to do, but this new version of Tweetdeck breaks new ground — the team have created something very wonderful.

The original Tweetdeck broke new ground in how Twitter could be used. All the Twitter clients had until that time taken their DNA from the IM clients. They all sought to replicate a single column, a diminutive view of the stream. Tweetdeck on the desktop changed all of that, offering a multi column view that was immersive, intense and full on. As you move your service to different platforms (say from web to mobile) you are faced with the perplexing question of whether you re-think the service to fit the dimensions and features of the new platform (mobile) or offer users the same familiar experience. Tweetdeck Android is a ground up re-invention of the desktop experience — created for mobile. I have been using it for a few weeks now and it is changing the way I experience the real time web. Once again the Tweetdeck team have taken a big bold step into something new; you can get the beta here.

(note Tweetdeck is a betaworks co.)

bit.ly and platforms …

Twitter announced this week that they were launching their own URL shortener. There has been a lot of chatter about this over the past week. I thought it would be helpful to write a bit about how the partnership worked and what bit.ly’s relationship is to platforms, Twitter and others. To do something unusual for me, let me cut to the chase.

Twitter.com pretty much stopped using bit.ly to shorten URL’s in December. Since last fall the bit.ly team and Twitter have been talking about this transition. Today Twitter.com represents less than 1% of bit.ly links shortened — when the transition took place in December it was closer to 3-8%, depending on the UX on Twitter.com and the day. We continue to work with the Twitter team and we are currently figuring out how to get key whitelabel URL’s working on Twitter.com. The default shortening partnership worked well for a period of time – approximately six months — during a period of hyper growth. Today bit.ly is growing and continues to scale — irrespective of the change in rules last December re: shortening on Twitter.com. That is the summary — the detailed version follows.

bit.ly was launched in May of 2008. By the first quarter of 2009 bit.ly was growing fast, scaling well and offering a handful of key features beyond shortening that users – of both the API and the website – found critical in terms of understanding social distribution — most importantly real time metrics*. I believe Twitter’s insane growth trajectory started in December 2008 — by early 2009 many of the short URL services on Twitter were struggling to keep up with the scale and growth, and none of them offered the real time metrics that bit.ly had. So Twitter and bit.ly entered into an agreement where bit.ly would become the default URL shortener for Twitter. This feature rolled out in May 2009 and ran until December of 2009.

bit.ly knew this would be a short term agreement — it was done to help Twitter scale and without a doubt it helped bit.ly scale.     In late November / December 2009 Twitter.com stopped shortening URL’s — except under one very narrow use case (and if you can find out what that is I will send or buy you a drink!).    As Techcrunch reported this week bit.ly growth has continued.

When Twitter changed its shortener policy in December, Twitter.com represented 3-8% of bit.ly links created every day, so the change was barely noticeable in bit.ly systems. Today Twitter.com represents less than .5% of bit.ly links created or clicked on each day. There are other social platforms that are now larger than Twitter.com. Last month there were 3.4bn clicks on bit.ly links — up from 2.7bn in February and 2.5bn in January. bit.ly is fairly big for a little company; handling billions of clicks and real time metrics for hundreds of millions of URL’s each day isn’t trivial. Someone noted earlier this week that — they believe — Yahoo does about 7.5bn clicks a month on its search product; while these clicks are not comparable to the bit.ly click experience, in terms of reach and scale it’s an interesting benchmark.

On Tuesday we announced 6,000 sign-ups for bit.ly pro. As of today that number is over 7,000, and in the past 48hrs a subset have signed up for the enterprise version — so, revenue. The companies up and running include: nyti.ms, amzn.to, binged.it, huff.to, 4sq.com, pep.si, and n.pr — alongside a set of bloggers and individuals who use the bit.ly service for their URL’s. And incidentally — this Wednesday was our first day ever where over 150m bit.ly links were clicked on. (For more data and charts of historical growth see )

All that said, the noise level out there is, well, noisy — “is bit.ly screwed?”, “is bit.ly the next google?” — seemingly no one can make up their mind. We can — we love bit.ly. bit.ly is short, sweet and out of control. Someone asked me last summer, “is bit.ly part of the internet?”. We are working hard to make it part of the internet, or at least the social, real time web — in scale, breadth, trust and performance. bit.ly is the tracking tool that many, many people use to understand how many times a Tweet or a Facebook link was clicked on*.

We thank Twitter — everyone there — for the kick start it gave bit.ly. And we certainly hope we helped Twitter during a difficult scaling period — that was the intent. bit.ly still works and will continue to work on Twitter; most of the clients and Twitter related services use the API every day, and we are working right now with the Twitter team on some publisher related services. And most of all we thank our users — end users who use the bit.ly web site to shorten, share and track every day — bit.ly 1.3 will be out in the next few days and we hope you love it. And we thank our API users: the myriad of services who use our API to shorten, track and monitor the pulse of the real time web, and the publishers who are using it for domain level / enterprise tracking.

In terms of lessons learnt there are many — but four come to mind right now, and all four relate to broader points about web platforms. Over the past few years a set of platforms have emerged online that give start-ups a foundation to kick start building their audience and/or their business. Adwords/Adsense were probably the first scaled examples of this. And as these platforms mature it’s important for there to be clear boundaries between what the platform provider does and doesn’t do. Granted, these boundaries shift over time — but the boundaries have to be sustained for long enough for the platform provider to achieve scale and trust and to get a critical mass of applications running on it. They also have to be sustained long enough for businesses to be built on the platform — not just tweaks, real businesses.

To play out the Google example, take the UX of Google. Google understood they weren’t in the content business — they were in the navigation business. So for years the Google site just pointed outward. Now after 10 years the line is getting hazy in some areas — this is why the local search stuff, the Yelp conversations, resonate with people: Google has for whatever reason decided that local is something it needs to wrap more of an arm around. How long is that arm? How detrimental is it to local players? I’m not sure — but if I had to put a dollar down I would bet that Yelp and, say, Opentable will do just fine. So — clear sustained boundaries are necessary. The second point is that these boundaries become increasingly important and easy to define once the monetization approach of the underlying platform is defined. Emphasis is the reason why this is a separate point from the first one — vs. a subset — this is vital. The third point is that people bootstrapping on these platforms should also try to spread their relevance beyond a single platform – so Yelp should extend its business model beyond Adsense, Zynga beyond Facebook, etc. In 2010, unlike 10 years ago, we are building in a world of multiple, often overlapping platforms; it’s not a monolithic world anymore. That is what Stocktwits has done, same for bit.ly, Tweetdeck, Someecards, OMGpop etc… all of these services have a leg in multiple platforms.

Lastly, talk about holes and filling holes in platforms is misleading at best. Take a list of emerging to mature companies — great companies… Is Groupon a hole in Facebook? Facebook a hole in Google?? Google a hole in Microsoft??? Microsoft in IBM???? Maybe it’s holes all the way down? Innovation — building great companies — is about finding, filling and even creating holes. But entrepreneurs shouldn’t — and most don’t — focus on filling holes in other people’s platforms. They should think about how to build great things — things that in 2010 may be bootstrapped on platforms, but great products: products that people love, products that move people to organize their world differently, or to see the world differently. The slogan “Think different” captured most if not all of what entrepreneurs need. After 30yrs of personal computing history we have a lot of platform and application history to draw from — Apple understands this very well, so does Google, same for Microsoft, Amazon, and Ebay. And yes — once again the cycle of innovation is turning: great new platforms are emerging and great businesses will be developed on top of these new platforms.

* note: if you place a “+” on the end of any bit.ly link you will see real time traffic to that link
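That trick is easy to script. A tiny sketch that builds the stats-page URL by appending the “+” described in the note; the short link in the example is hypothetical:

```python
# Turn a bit.ly short link into its public real-time stats page
# by appending "+", per the note above. Pure string handling,
# no network calls and no API usage.
def stats_url(short_url: str) -> str:
    """Return the real-time stats page for a bit.ly short link."""
    return short_url.rstrip("/") + "+"

# Hypothetical short link, for illustration only.
print(stats_url("http://bit.ly/abc123"))  # http://bit.ly/abc123+
```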

betaworks series b / betawhat

Last week we announced a series b funding round at betaworks.  Since then several people have asked us what we do at betaworks and how.   Here goes.

betaworks is a company focused on the social, real time web. We believe this represents a radical shift in how people use the internet. We believe it is swiftly becoming the primary navigational interface to the internet and we believe it’s different, so different that we see the change underway as a 10 year shift.

It’s about dynamic streams of information not static pages. It’s about push not pull. It’s about enabling publishing tools, data and the ways you can touch and experience the web as widely as possible. It’s about an open architecture that permits software developers and users the ability to move data back and forth across services. It’s about letting users stitch together a set of services they want to use, rather than using data to lock people into a single service. It’s about creating great new companies at an accelerated rate using the infrastructure that has been built over the past ten years — from AWS to OAuth, from the Twitter API to Google maps API, from Facebook Connect to AppEngine. It’s as if the metabolic rate of the Internet is changing and this shift is what betaworks is centered on.

What we do at betaworks is build and invest in great companies that make up the loosely coupled experience outlined above. This all happens out of a company — not an incubator, not a fund. The company was formed and designed for this transformation; betaworks itself, the network of companies that are part of betaworks, are in a sense a mirror image of this shift. As the metabolic rate changes, the possibility for connection, recombination and innovation increases dramatically. As many people have observed, it is cheaper and cheaper to trial and test; yet, with appropriate instrumentation, it’s also cheaper to scale — scale product development, scale infrastructure and scale a business. We like to scale fast or fail fast at betaworks.

When we build a company, invariably it’s an idea we have come up with, an itch we want to scratch, and we have an ongoing operational role in building that company. These companies are the core of betaworks. The things we build are born of the focus on the real time social web — they are more often than not white spaces where we see a need that hasn’t yet been filled.

betaworks investments does seed stage investing in the ecosystem around this core. For us seed investing means first money; our average investment size is $150k. These investments are done as part of a syndicate of angels or early stage VC’s. Our requirements from an investment side are simple: it has to first fit the thesis, it has to fit the investment profile (early stage, tech centered etc.) and there needs to be a beta, public or not — a working product. Our office is a ppt free zone.

That’s it

Ongoing tracking of the real time web …

The last post that I did about real time web data mixed data with a commentary and a fake headline about how data is sometimes misunderstood in regards to the real time web.    This post repeats some of that data but the focus of the post is the data.   I will update the post periodically with relevant data that we see at betaworks or that others share with us.   To that end this post is done in reverse order with the newest data on top.

Tracking the real time web data

The measurement tools we have still only sometimes work for counting traffic to web pages, and they certainly don’t track or measure traffic in streams, let alone aggregate up the underlying ecosystems that are emerging around these new markets. At betaworks we spend a lot of time looking at and tracking this underlying data set. It’s our business and it’s fascinating. Like many companies, each of the individual businesses at betaworks has fragments of data sets, but because betaworks acts as an ecosystem of companies we can mix and match the data to get results that are more interesting and hopefully offer greater insight.

——————————-

(i) tumblr growth for the last half of 2009

Another data point re: growth of the real time web through the second half of last year, through to Jan 18th of this year. tumblr continues to kill it. I read this interesting post yesterday about how tumblr is leading its category through innovation and simple, effective product design. The Compete numbers quoted in that post are less impressive than these directly measured Quantcast numbers.


(h) Twitter vs. the Twitter Ecosystem

Fred Wilson’s post adds some solid directional data on the question of the size of the ecosystem.   “You can talk about Twitter.com and then you can talk about the Twitter ecosystem. One is a web site. The other is a fundamental part of the Internet infrastructure. And the latter is 3-5x bigger than the former and that delta is likely to grow even larger.”

(g) Some early 2010 data points re: the Real Time Web

  • Twitter: Jan 11th was the highest usage day ever (source: @ev via techcrunch)
  • Tweetdeck: did 4,143,687 updates on Jan 8, yep 4m. Or, 48 per second (source: Iain Dodsworth / tweetdeck internal data)
  • Foursquare: Jan 9th biggest day ever.    1 update or check-in per second (source: twitter and techcrunch)
  • Daily Booth: in past 30 days more than 10mm uniques (source: dailybooth internal data)
  • bit.ly: last week was the largest week ever for clicks on bit.ly links — 564m clicked on in total. On Jan 6th there was a record 98m decodes, about 1,100 clicks every second.
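A quick back-of-the-envelope check on that last bullet, using only the numbers quoted in the list above:

```python
# Back-of-the-envelope click rates from the bit.ly numbers above.
SECONDS_PER_DAY = 24 * 60 * 60          # 86,400
record_day_decodes = 98_000_000         # Jan 6th record day
weekly_decodes = 564_000_000            # total for the record week

record_rate = record_day_decodes / SECONDS_PER_DAY
weekly_avg_rate = weekly_decodes / (7 * SECONDS_PER_DAY)

print(f"record day: ~{record_rate:,.0f} clicks/sec")    # ~1,134
print(f"weekly avg: ~{weekly_avg_rate:,.0f} clicks/sec")  # ~933
```

The record day works out to roughly 1,134 clicks per second, which is where the "1,100 clicks every second" figure comes from.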

(f) Comparing the real time web vs. Google for the second half of 2009

Andrew Parker commented on the last post that the chart displaying the growth trends was hard to decipher and that it may be simpler to show month over month trending. It turns out that month over month is also hard to decipher. What is easier to read is this summary chart. It shows the average month over month growth rates for the RT web sites (the average from Chart A). Note 27.33% is the average monthly growth rate for the real time web companies in 2009 — that’s astounding. The comparable number for the second half of 2009 was 10.5% a month — significantly lower, but still a very big number for m/m growth.
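For the curious, an average month-over-month growth rate is computed like this; a minimal sketch with made-up monthly uniques, not the actual Chart A data:

```python
# Average month-over-month growth from a series of monthly uniques.
# The numbers here are invented for illustration.
monthly_uniques = [1.0, 1.3, 1.6, 2.1, 2.6, 3.4]  # e.g. millions of uniques

growth_rates = [
    (cur - prev) / prev
    for prev, cur in zip(monthly_uniques, monthly_uniques[1:])
]
avg_mom = sum(growth_rates) / len(growth_rates)
print(f"average m/m growth: {avg_mom:.1%}")
```

A sustained ~27% m/m rate compounds brutally fast: it is more than a 17x increase over a year.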

(e) Ongoing growth of the real time stream in the second half of 2009

This is a question people have asked me repeatedly in the past few weeks: did the real time stream grow in Q4 2009? It did. Not at the pace that it grew during Q1-Q3, but our data at betaworks confirms continued growth. One of the best proxies we use for directional trending in the real time web is bit.ly decodes. This is the raw number of bit.ly links that are clicked on across the web. Many of these clicks occur within the Twitter ecosystem, but a large number are outside of Twitter, by people and by machines — there is a surprising amount of diversity within the real time stream, as I posted about a while back.

Two charts are displayed below. On the bottom are bit.ly decodes (blue) and encodes (red) running through the second half of last year. On the top is a different but related metric. Another betaworks company is Twitterfeed, the leading platform enabling publishers to post from their sites into Twitter and Facebook. This chart graphs the total number of feeds processed (blue) and the total number of publishers using Twitterfeed, again through the second half of the year (note: if the charts inline are too small to read you can click through and see full size versions). As you can see, similar to the bit.ly chart, at Twitterfeed growth was strong for the entire second half of 2009.

Both these charts illustrate the ongoing shift that is taking place in terms of how people use the real time web for navigation, search and discovery. My preference is to look at real user interactions as strong indicators of user behavior. For example, I actually find Google Trends often more useful than comScore, Compete or the other “page” based measurement services. As interactions online shift to streams we are going to have to figure out how measurement works. I feel like today we are back in the early days of the web when people talked about “hits” — it’s hard to parse the relevant data from the noise. The indicators we see suggest that the speed at which this shift to the real time web is taking place is astounding. Yet it is happening in a fashion that I have seen a couple of times before.

(d) An illustration of the step nature of social growth. bit.ly weekly decodes for the second half of 2009.

Most social networks I have worked with have grown in a step function manner. You see this clearly when you zoom into the bit.ly data set and look at weekly decodes, illustrated above. You often have to zoom in and out of the data set to see and find the steps, but they are usually there. Sometimes they run for months — either up or sideways. You can see the steps in Facebook growth in 2009. I saw this effect up close with ICQ, AIM, Fotolog, Summize and now with bit.ly. Someone smarter than me has surely figured out why these steps occur. My hypothesis is that as social networks grow they jump in a sporadic fashion from one dense cluster of relationships to a new one. The upward trajectory is the adoption cycle of that new, dense cluster, and the flat part of the step is the period before the jump to the next cluster. Blended in here there are clearly issues of engagement vs. trial, but it’s hard to weed those out from this data set. As someone mentioned to me in regards to the last post, this is a property of scale-free networks.
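One way to make those steps visible in practice is to aggregate or smooth a noisy daily series. A sketch on synthetic data, where the plateaus and noise levels are invented purely for illustration and the roughly 3-week trailing window is an arbitrary choice:

```python
# Smooth a noisy daily series with a trailing moving average so
# step-like growth becomes visible. All data here is synthetic.
import random

random.seed(42)
# Synthetic "step" growth: a baseline with two upward jumps at
# days 30 and 60, plus gaussian noise layered on top.
daily = [
    100 + random.gauss(0, 8) + (50 if d >= 30 else 0) + (60 if d >= 60 else 0)
    for d in range(90)
]

def moving_average(series, window):
    """Trailing moving average; the first window-1 points are dropped."""
    return [
        sum(series[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(series))
    ]

smoothed = moving_average(daily, 21)  # ~3-week trailing window
# The smoothed curve now shows two clear ramps around days 30 and 60,
# separated by flat plateaus: the "steps" described in the post.
```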

(c) Google and Amazon in 2009

Google and Amazon — this is what it looked like in 2009:

It’s basically flat. Pretty much every user in the domestic US is on Google for search and navigation and on Amazon for commerce — impressive baseline numbers, but flat for the year (source: Quantcast). So then let’s turn to Twitter.

(b) Twitter – an estimate of Twitter.com and the Twitter ecosystem

Much ink has been spilt over Twitter.com’s growth in the second half of the year. During the first half of the year Twitter experienced hyper growth — and unprecedented media attention. In the second half of the year the media attention waned, and the service went through what I suspect was a digestion phase — that step again? Steps aside — because I don’t in any way seek to represent Twitter Inc. — there are two questions that in my mind haven’t been answered fully:

(i) what was international growth like in the second half of 2009? That was clearly a driver for Facebook in ’09. Recent data suggests growth continued to be strong.

(ii) what about the ecosystem?

Unsurprisingly it’s the second question that interests me the most. So what about that ecosystem? We know that approx 50% of the interactions with the Twitter API occur outside of Twitter.com, but many of those aren’t end user interactions. We also know that as people adopt and build a following on Twitter they often move up to one of the client or vertical-specific applications to suit their “power” needs. At TweetDeck we did a survey of our users this past summer. The data we got suggested 92% of them use Tweetdeck every day — 51% use Twitter more frequently since they started using TweetDeck. So we know there is a very engaged audience on the clients. We also know that most of the clients aren’t web pages — they are Flash, AIR, Cocoa, iPhone apps etc., all things that the traditional measurement companies don’t track.

What I did to estimate the relative growth of the Twitter ecosystem is the following. I used Google Trends and compiled data for Twitter and the key clients. I then scaled that chart over the Twitter.com traffic. Is it correct? No. Is it made up? No. It’s a proxy, and this is what it looks like (again, you can click the chart to see a larger version).
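Mechanically, that kind of proxy amounts to fitting a single scale factor that maps the unitless Trends index onto measured traffic, then applying the same factor to the ecosystem index. A sketch with made-up numbers, not the actual Trends or traffic data:

```python
# Proxy sketch: scale a unitless search-interest index onto measured
# traffic, then use the scaled index to estimate ecosystem traffic.
# All numbers below are invented for illustration.
site_uniques = [20, 24, 25, 25, 27]        # measured site traffic (millions)
trends_site = [40, 48, 50, 50, 54]         # Trends index for the site alone
trends_ecosystem = [55, 70, 78, 80, 90]    # Trends index for site + clients

# Fit one scale factor mapping the Trends index onto real traffic.
scale = sum(site_uniques) / sum(trends_site)

# Apply it to the broader index to get an ecosystem traffic estimate.
estimated_ecosystem = [scale * t for t in trends_ecosystem]
print([round(x, 1) for x in estimated_ecosystem])
```

It is a rough instrument: it assumes search interest tracks usage with a constant ratio, which is exactly why the post calls it a proxy rather than a measurement.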

Similar to the Twitter.com traffic, you see the flattening out of the ecosystem in the summer. But you see growth in the fourth quarter that returns to the summer time levels. I suspect if you could zoom in and out of this the way I did above you would see those steps again.

(a) The Real Time Web in 2009

Add in Facebook (blue) and Meebo (green), both steaming ahead — Meebo had a very strong end of year. And then tile on top the bit.ly data and the Twitterfeed numbers (bit.ly on the right hand scale) and you have an overall picture of growth of the real time web vs. Google and Amazon.

charting the real time web
OR
the curious tale of how TechCrunch traffic inexplicably fell off a cliff in December

For a while now I have been thinking about doing a post about some of the data we track at betaworks. Over the past few months people have written about Twitter’s traffic being up, down or sideways — the core question that people are asking is: is the real time web growing or not, is this hype or substance? Great questions — and from the data set I see, the answer to all of the above is yes. Adoption and growth is happening pretty much across the board — and in some areas it’s happening at an astounding pace. But tracking this is hard. It’s hard to measure something that is still emerging. The measurement tools we have still only sometimes work for counting traffic to web pages, and they certainly don’t track or measure traffic in streams, let alone aggregate up the underlying ecosystems that are emerging around these new markets. At betaworks we spend a lot of time looking at and tracking this underlying data set. It’s our business and it’s fascinating.

I was inspired to finally write something by first a good experience and then a bad one.    First the good one.    Earlier this week I saw a Tweet from Marshall Kirkpatrick about Gary Hayes’s social media counter.    It’s  very nicely done — and an embed is available.     This is what it looks like (note the three buttons on top are hot, you can see the social web, mobile and gaming):

The second thing was less fun, but I’m sure it has happened to many an entrepreneur. I was emailed earlier this week by a reporter asking about some data – I didn’t spend the time to weed through the analysis and the reporter published data that was misleading. More on this incident later.

Let’s dig into some data. First — addressing the question people have asked me repeatedly in the past few weeks: did the real time stream grow in Q4 2009? It did. Not at the pace that it grew during Q1-Q3, but our data confirms continued growth. One of the best proxies we use for directional trending in the real time web is bit.ly decodes. This is the raw number of bit.ly links that are clicked on across the web. Many of these clicks occur within the Twitter ecosystem, but a large number are outside of Twitter, by people and by machines — there is a surprising amount of diversity within the real time stream, as I posted about a while back. Two charts are displayed below. On the left are bit.ly decodes (blue) and encodes (red) running through the second half of last year. On the right is a different but related metric. Another betaworks company is Twitterfeed, the leading platform enabling publishers to post from their sites into Twitter and Facebook. This chart graphs the total number of feeds processed (blue) and the total number of publishers using Twitterfeed, again through the second half of the year (note: if the charts inline are too small to read you can click through and see full size versions). As you can see, similar to the left hand chart, at Twitterfeed growth was strong for the entire second half of 2009.

Both these charts illustrate the ongoing shift in how people use the real time web for navigation, search and discovery. My preference is to look at real user interactions as strong indicators of user behavior. For example, I often find Google Trends more useful than comScore, Compete or the other “page” based measurement services. As interactions online shift to streams, we are going to have to figure out how measurement works. I feel like today we are back in the early days of the web, when people talked about “hits” — it's hard to parse the relevant data from the noise. The indicators we see suggest that the speed at which this shift to the real time web is taking place is astounding. Yet it is happening in a fashion that I have seen a couple of times before.

Most social networks I have worked with have grown in a step function manner. You see this clearly when you zoom into the bit.ly data set and look at weekly decodes. It is less clear, but still visible, when you look at daily trending data (on the right) — but add a 3-week moving average on top of that and you can once again see the steps. You often have to zoom in and out of the data set to find the steps, but they are usually there. Sometimes they run for months — either up or sideways. I saw this with ICQ, AIM, Fotolog and Summize, through to bit.ly. Someone smarter than me has surely figured out why these steps occur. My hypothesis is that as social networks grow they jump in a sporadic fashion to a new dense cluster or network of relationships. The upward trajectory is the adoption cycle of that new, dense cluster, and the flat part of the step is the period before the jump to the next cluster. Blended in here there are clearly issues of engagement vs. trial, but it's hard to weed those out from this data set. I learnt a lot of this from Yossi Vardi and Adam Seifer — two people I had the privilege of working with over the years, two people whose DNA is wired right into this stuff. At Fotolog, Adam could take the historical data set and illustrate how these clusters moved — in steps — from geography to geography. It's fascinating.
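That zoom-in-and-smooth exercise is easy to try on any daily series. Here is a minimal sketch of the 3-week moving average trick — the daily numbers below are synthetic, constructed to contain a single step plus day-of-week noise; they are not bit.ly data:

```python
# Sketch: revealing step-function growth with a 3-week moving average.
# The daily series below is synthetic and illustrative only.

def moving_average(series, window):
    """Trailing moving average; the first window-1 points are omitted."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

# Synthetic daily decodes: two flat plateaus joined by a jump (a "step"),
# with a small day-of-week cycle layered on top as noise.
daily = [100 + (i % 7) * 3 for i in range(30)] + \
        [150 + (i % 7) * 3 for i in range(30)]

smoothed = moving_average(daily, 21)  # 3-week (21-day) window

# The smoothed curve is flat on each plateau and climbs through the jump,
# which is what makes the step visible once the noise averages out.
print(round(smoothed[0]), round(smoothed[-1]))  # → 109 159
```

Zooming out (bigger window) flattens the noise and exposes the plateaus; zooming in (raw daily data) shows the noise but hides the step — hence the back-and-forth.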

TechCrunch falls off a cliff

Ok, I'm sure there are some people reading who are thinking — well, this is interesting, but I actually want to read about TechCrunch falling off a traffic cliff. I'm sorry — I actually don't have any data to suggest that happened. After noting yesterday that a provocative headline is sometimes a substitute for data, I thought — heck, I can do this too! This section of the post is more of a cautionary tale — if you are confused by this twist, let me back up to where I started. I mentioned that there were two motivations for me sitting down and writing this post. The second one was that a TechCrunch story ran earlier this week saying that bit.ly's market share had shifted dramatically. It hasn't. The data was just misunderstood by the reporter. The tale (I did promise a tale) began last August, when TechCrunch ran the following chart about the market share of URL shorteners.

The pie chart showed the top 5 URL shorteners and then calculated the market share each had — what percent each was *of* the top five. The data looked like this:

bit.ly 79.61%
TinyURL 13.75%
is.gd 2.47%
ow.ly 2.26%
ff.im 1.92%
(79.61+13.75+2.47+2.26+1.92 = 100)
The comparable data from yesterday is:

bit.ly 75%
TinyURL 10%
ow.ly 6%
is.gd 4%
tumblr 4%
(75+10+6+4+4 = 100)
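The arithmetic behind the confusion is worth spelling out. A minimal sketch, with made-up counts (none of these are actual garden hose numbers), of how “share of the top five” diverges from share of the whole stream:

```python
# Sketch: "percent of the top five" vs. overall market share.
# All counts below are made up for illustration only.

counts = {
    "bit.ly": 7500, "tinyurl.com": 1000, "ow.ly": 600,
    "is.gd": 450, "tmblr.co": 450,
    "other": 5000,  # every other URL seen in the stream
}

top5 = {k: v for k, v in counts.items() if k != "other"}

# Share *of the top five* -- the figure the pie chart showed.
top5_total = sum(top5.values())
share_of_top5 = {k: v / top5_total for k, v in top5.items()}

# Share of *all* URLs in the stream -- the larger denominator.
grand_total = sum(counts.values())
share_of_all = {k: v / grand_total for k, v in top5.items()}

print(f"bit.ly of top 5: {share_of_top5['bit.ly']:.0%}")  # → 75%
print(f"bit.ly of all:   {share_of_all['bit.ly']:.0%}")   # → 50%
```

The top-five shares always sum to 100% regardless of what the rest of the stream does, which is exactly why shuffling inside the top five is not an overall market share story.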

Not much news in those numbers, especially when you consider they come from the Twitter “garden hose” (a subset of all tweets) and swing by as much as +/- 5% daily. Tumblr's entry into the top 5 and the ow.ly bump are nice shifts for them — but not really a story. The hitch was that the reporter didn't consider that there are other URLs in the Twitter stream aside from these five. Some are short URLs and some aren't. So this metric doesn't accurately reflect overall short URL market share — it shows the shuffling of market share amongst the top five. But media will be media. I saw a Tweet this week about how effective Twitter is at disseminating information — both true and false — despite all the shifts going on, headlines in a sense carry even more weight than in the “read all about it” days.

The lesson here for me was the importance of helping reporters and analysts get access to the underlying data — data they can use effectively. We sent the reporter the data, but he saw a summary data set that included the other URLs and didn't understand that back in August there were also “other” URLs. After the fact we worked to sort this out and he put a correction in his post. But the headline was off and running — irrespective of how dirty or clean the data was. Basic mistake — my mistake — and this was with a reporter who knows this stuff well. Given the paucity of data out there and the emergent state of the real time web, this stuff is bound to happen.

Ironically, yesterday bit.ly hit an all-time high in terms of decodes — over 90m. But back to the original question. There is a valid question the reporter was seeking to answer, namely: what is the market share of dem short thingy's? We track this metric — using the Twitter garden hose and identifying most of the short URLs to produce a ranking (note it's a sample, so the occurrences are a fraction of the actuals). And it's a rolling 24-hour view — so it moves around quite a bit — but nonetheless it's informative. This is what it looked like yesterday:

Over time this data set is going to become harder to use for this purpose. At bit.ly we kicked off our white label service before the holidays. Despite months of preparation, we weren't expecting the demand. As we provision and set up the thousands of publishers, bloggers and brands who want white label services, it's going to result in a much more diverse stream of data in the garden hose.
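For the curious, the counting step behind that ranking can be sketched in a few lines. This is a toy version only — it assumes tweets arrive as plain text, and the domain list and sample tweets are illustrative, not the actual garden hose pipeline:

```python
# Toy version of ranking short-URL domains seen in a stream sample.
# The domain list and sample tweets are illustrative only.
import re
from collections import Counter

SHORTENER_DOMAINS = {"bit.ly", "tinyurl.com", "ow.ly", "is.gd", "ff.im"}
URL_RE = re.compile(r"https?://([^/\s]+)/\S+")  # captures the host part

def rank_shorteners(tweets):
    """Count occurrences of known shortener domains, most common first."""
    tally = Counter()
    for text in tweets:
        for domain in URL_RE.findall(text):
            if domain.lower() in SHORTENER_DOMAINS:
                tally[domain.lower()] += 1
    return tally.most_common()

sample = [
    "reading this http://bit.ly/abc123 now",
    "wow http://bit.ly/xyz and also http://ow.ly/q1",
    "longform piece http://example.com/a/long/path",  # not a shortener
    "rt http://is.gd/zz",
]
print(rank_shorteners(sample))
# → [('bit.ly', 2), ('ow.ly', 1), ('is.gd', 1)]
```

In a rolling 24-hour version you would also attach a timestamp to each hit and drop entries older than a day before tallying — which is why the real ranking moves around so much.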

Real Time Web Data

Finally, I thought it would be interesting to try to get a perspective on the emergence of the real time web in 2009 — how did its growth compare and contrast with the incumbent web category leaders? Let me try to frame up some data around this. Hang in there — some of the things I'm going to do are hacks (at best). As I said, I was inspired! Let's start with user growth in the US among the current web leaders — Google and Amazon. This is what it looked like in 2009:

It's basically flat. Pretty much every user in the domestic US is on Google for search and navigation and on Amazon for commerce — impressive baseline numbers, but flat for the year (source: Quantcast). So then let's turn to Twitter. Much ink has been spilt over Twitter.com's growth in the second half of the year. During the first half of the year Twitter's growth, I suspect, was driven to a great extent by the unprecedented media attention it received — media and celebrities were all over it. Yet in the second half of the year that waned, and traffic to the Twitter.com web site was flat. That step issue again?

Placing steps aside — because I don't in any way seek to represent Twitter Inc. — there are two questions that haven't been answered: (a) what about international growth, which was clearly a driver for Facebook in '09 — where was Twitter internationally? (b) what about the ecosystem? Unsurprisingly, it's the second question that interests me the most. So what about that ecosystem?

We know that approximately 50% of the interactions with the Twitter API occur outside of Twitter.com, but many of those aren't end user interactions. We also know that as people adopt and build a following on Twitter they often move up to one of the client or vertical-specific applications to suit their “power” needs. At TweetDeck we did a survey of our users this past summer. The data we got suggested that 92% of them use TweetDeck every day — and 51% use Twitter more frequently since they started using TweetDeck. So we know there is a very engaged audience on the clients. We also know that most of the clients aren't web pages — they are Flash, AIR, Cocoa, iPhone apps, etc. — all things that the traditional measurement companies don't track.

What I did to estimate the relative growth of the Twitter ecosystem is the following. I used Google Trends and compiled data for Twitter and the key clients. I then scaled that chart over the Twitter.com traffic. Is it correct? No. Is it made up? No. It's a proxy, and this is what it looks like (again, you can click the chart to see a larger version):

Similar to the Twitter.com traffic, you see the flattening out in the summer. But similar to the data sets referenced above, you see growth in the fourth quarter. I suspect that if you could zoom in and out of this the way I did above, you would see those steps again. So let's put it all together! It's one heck of a busy chart. Add in Facebook (blue) and Meebo (green), both steaming ahead — Meebo had a very strong end of year. Then tile on top the bit.ly data and the Twitterfeed numbers (both on different scales) and you have an overall picture of the growth of the real time web vs. Google and Amazon.
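The scaling hack described above amounts to anchoring a relative index to one measured traffic point. A sketch with illustrative numbers — a Trends-style 0–100 index and a hypothetical anchor value, not the actual series I used:

```python
# Sketch of the proxy: rescale a relative-interest index (Google Trends
# style, 0-100) so it lines up with a measured traffic series at one
# anchor point. All numbers below are illustrative.

trends_index = [40, 55, 70, 70, 72, 85, 100]   # relative search interest
measured_visitors_at_anchor = 20_000_000       # known traffic at month 0
anchor = 0

# One known point fixes the scale factor; the rest of the curve follows.
scale = measured_visitors_at_anchor / trends_index[anchor]
estimated_traffic = [round(v * scale) for v in trends_index]

print(estimated_traffic)
# → [20000000, 27500000, 35000000, 35000000, 36000000, 42500000, 50000000]
```

The shape of the estimate is entirely the Trends curve and only the level comes from the measured point — which is why it's a proxy for relative growth, not a traffic measurement.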

Ok. One last snapshot, then I'm wrapping up. Chartbeat — yep, another betaworks company — had one of its best weeks ever this past week, no small thanks to Jason Calacanis's New Year post about his top 10 favorite web products of 2009. To finish up, here is a video of the live traffic flow coming into Fred Wilson's blog at AVC.com on the announcement of the Google Nexus One phone. Steve Gillmor mentioned the other week how sometimes interactions in the real time web just amaze one. Watching people swarm to a site is a pretty enthralling experience. We have much work to do in 2010. Some of it will be about figuring out how to measure the real time web. Much of it will be continuing to build out the real time web and learning about this fascinating shift taking place right under our feet.

random footnote:

A data point I was sent this morning by Iain that was interesting — yet it didn't seem to fit in anywhere else: Asian Twitter clients were yesterday over 5% of the requests visible in the garden hose.