Author: John Borthwick

7 on 7

This weekend (May 14th) 7on7 runs for the second time in NYC. The event brings together artists and technologists, who conceive and often build a project over the course of a single day. Some people have referred to it as a YCombinator for the art world, sort of, but last year it was a little more unconventional and irreverent than a YC event. Slamming an artist together with a technologist can have unexpected consequences.

Last year Matt Mullenweg and Evan Roth hacked WordPress to add a feature that would create random and unexpected experiences at points in the software that they described as lonely or threatening. Marc Andre Robinson & Hilary Mason created an umbrella with a homing beacon so that you could see patterns of use and rain across a region. Joshua Schachter & Monica Narula devised a concept for a guilt exchange. You can see a video of these three presentations here. The other four presentations were wonderful — the whole event from 2010 is posted here.

Why 7 on 7?

A handful of reasons: this event, and the process it represents, is something I have been fascinated by for a long time. The first site I created on the web was äda’web, back in 1994. It was a platform for artists and technologists to collaborate and create projects for the web — ones that were medium specific, i.e. it wasn’t about putting paintings on the web, rather it was about using the web to create. The site is still up and running courtesy of the Walker Art Center, to whom we (and AOL) donated äda’web in 1998. For more about what “äda’web is” see this interview with my co-founder, Benjamin Weil, and/or read this piece he wrote about äda’web as a digital foundry.

Back in the late nineties it struck me that the process an artist and a technologist apply to their craft is similar. There is much to write on this subject; rather than diving in here, there is a thread we started yesterday on Quora titled “Do Artists and Technologists create things the same way” – it spells out similarities between creating art and creating technology.

7on7 slams technology together with art. As such it is a great platform for pranksters. Pranksters have a vital role in any society — from jesters forward, they help us gain perspective and see and say things that might otherwise be socially unacceptable. I met a group earlier this year who set up a system to randomly wardial phone boxes in London — art or hack? I’m not sure; either way, fierce fun.

Last thought. Art and technology are two communities that are well represented here in New York, and yet they don’t intersect that frequently. This event was designed to become a bridge between these communities. As technology becomes more deeply ingrained in our lives and society, it will become part of what we consider to be art, and vice versa. See you on Saturday. I can promise something will surprise.

7on7, this Saturday, May 14th – details here.

note: I’m a board member at Rhizome and a member of the motley crew who came up with this idea – the others are Lauren Cornell, Peter Rojas and Fred Benenson.

 

news.me

News.me launched this morning as an iPad app and as an email service. Here is some background on why and how we built News.me:

Why News.me? For a while now at bitly and betaworks, we have been thinking about and working on applications that blend socially curated streams with great immersive reading interfaces.

Specifically we have been exploring and testing ways that the bitly data stack can be used to filter and curate social streams.   The launch of the iPad last April changed everything. Finally there was a device that was both intimate and public — a device that could immerse you into a reading experience that wasn’t bound by the user experience constraints naturally embedded in 30 years of personal computing legacy.  So we built News.me.

News.me is a personalized social news reading application for the Apple iPad. It’s an app that lets you browse, discover and read articles that other people are seeing in their Twitter streams. These streams are filtered and ranked using algorithms developed by the bitly team to extract a measure of social relevance from the billions of clicks and shares in the bitly data set. This is fundamentally a different kind of social news experience. I haven’t seen or used anything quite like it before. Rather than me reading what you tweet, I read the stream that you have selected to read — your inbound stream. It’s almost as if I’m leaning over your shoulder — reading what you read, or looking at your bookshelves: it allows me to understand how the people I follow construct their world.
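bit-rank itself is proprietary, so I am not describing its internals here; but the general shape of this kind of social-relevance ranking is easy to sketch: weight each story’s recent clicks and shares, discounted by age. A toy illustration only (the weights, names and data shape below are invented, not bit-rank):

```python
import math
import time

def toy_social_rank(stories, now=None, half_life_hours=3.0):
    """Rank stories by a made-up relevance score: clicks and shares,
    each discounted by how long ago they happened (exponential decay).
    Illustration only -- not the actual bit-rank algorithm."""
    now = now or time.time()
    decay = math.log(2) / (half_life_hours * 3600)

    def score(story):
        s = 0.0
        for event_time, kind in story["events"]:   # kind: "click" or "share"
            weight = 2.0 if kind == "share" else 1.0  # arbitrary weights
            s += weight * math.exp(-decay * (now - event_time))
        return s

    return sorted(stories, key=score, reverse=True)
```

The point of the decay term is simply that a click an hour ago says more about relevance right now than a click from last week.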

As with many innovations, we stumbled upon this idea. We started developing News.me last August, after we acquired the prototype from The New York Times Company. For the first version we wanted to simply take your Twitter stream, filter it using a bitly-based algorithm (bit-rank) and present it as an iPad app. The goal was to make an easy-to-browse, beautiful reading experience. Within weeks we had a first version working. As we sat around the table reviewing it, we started passing our iPads around saying “let me look at your stream.” And that’s how it really started. We stumbled into a new way of reading Twitter and consuming news — the reverse follow graph, wherein I get to read not only what you share, but what you read as well. I get to read looking over other people’s shoulders.

 

What Others Are Reading…

On News.me you can read your filtered stream and also those of people you follow on Twitter who use news.me.  When you sign into the iPad app it will give you a list of people you are already following. Additionally, we are launching with a group of recommended streams. This is a selection of people whose “reading lists” are particularly interesting.  From Maria Popova (a.k.a. brainpicker), to Nicholas Kristof and Steven Johnson, from Arianna Huffington to Clay Shirky … if you are curious to see what they are reading, if you want to see the world through their eyes, News.me is for you. Many people curate their Twitter experience to reflect their own unique set of interests.   News.me offers a window into their curated view of the world, filtered for realtime social relevance via the bit-rank algorithm.

 

Streamline Your Reading

The second thing we strove to accomplish was to make News.me into a beautiful and beautifully simple reading experience. Whether you are browsing the stream, snacking on an item (you can pinch open an item in the stream to see a bit more) or you have clicked to read a full article, News.me seeks to offer the best possible reading experience.  All content that is one click from the stream is presented within the News.me application.  You can read, browse and “save for later” all within the app. At any given moment, you can click the browser button to see a particular page on the web. News.me has a simple business model to offer this reading experience.

Today we are launching the iPad News.me application and a companion email product. The email service offers a daily, personalized digest of relevant content powered by the bit-rank algorithm, delivered to your inbox at 6 a.m. EST each morning. The app costs $0.99 per week, and we in turn pay publishers for the pages you read. The email product is free.


How was News.me developed? News.me grew out of an innovative relationship between The New York Times Company and bitly. The Times Company was the first in its industry to create a Research & Development group. As part of its mission, the group develops interesting and innovative prototypes based on trends in consumer media. Last May, Martin Nisenholtz and Michael Zimbalist reached out to me about a product in the Times Company’s R&D lab that they wanted to show us at betaworks. A few weeks later they showed us the following video, accompanied by an iPad-based prototype. The video was created in January 2010, a few months prior to the launch of the iPad, and it anticipated many of the device’s gestures and uses, in form and function. Here are some screenshots of the prototype.

On the R&D site there are more screenshots and background.   The Times Company decided it would be best to move this product into bitly and betaworks where it could grow and thrive. We purchased the prototype from the Times Company in exchange for equity in bitly and, as part of the deal, a team of developers from R&D worked at bitly to help bring the product to market.


With Thanks … The first thank you goes to the team. I remember the first few product discussions, the dislocation the Times Company’s team felt having been airlifted overnight from The New York Times Building to our offices in the heart of the Meatpacking District. Throughout the transition they remained focused on one thing: building a great product. Michael, Justin, Ted, Alexis — the original four — thank you. And thank you to Tracy, who jumped in midstream to join the team. And thank you to the bitly team, without whom the data, the filtering, the bits, the ranking of stories would never be possible. As the web becomes a connected data platform, bitly and its API are becoming an increasingly important part of that platform. The scale at which bitly is operating today is astounding for what is still a small company: 8bn clicks last month and counting.

I would also like to thank our new partners. We are launching today with over 600 publishers participating — some of whom you can see listed here; most are not. Thank you to all of them; we are excited about building a business with you.

Lastly, I would like to thank The New York Times Company for coming to betaworks and bitly in the first place and for having the audacity to do what most big companies don’t do. I ran a new product development group within a large company, and I would like to dispel the simplistic myth that big companies don’t innovate. There is innovation occurring at many big companies. The thing that big companies really struggle to do is to ship. How to launch a new product within the context of an existing brand, an existing economic structure, an existing organizational structure — how not to impute a strategy tax on the new product. These are the challenges that usually cause the breakdown; it is where big-company innovation, in my experience, so often comes apart. The Times Company did something different here. New models are required to break this pattern; maybe News.me will help lay the foundation of a new model. I hope it does and I hope we exceed their confidence in us.

http://on.news.me/app-download

And for more information about the product see http://www.news.me/faq

#Jan25: “Sorry for the inconvenience, but we’re building Egypt.”

It’s been a remarkable few months in the Middle East. Most recently the events in Egypt have captured the world’s attention, and Al Jazeera’s English website has become the place to watch many of those events unfold. Given that the channel isn’t carried by most US cable companies, the website has been the means to view the channel live over the Internet.

Al Jazeera is also a user of Chartbeat. Chartbeat offers a real-time window into what is happening on a website right now. Watching the traffic flows over the past few weeks has been fascinating — in Al Jazeera’s case, the site broke traffic record after record. I wonder what popular TV show would compare to having 150,000 to 200,000 simultaneous users on a website, most of them watching TV?
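For those curious how a “simultaneous users” number gets computed: conceptually (and this is my simplification, not a description of Chartbeat’s actual pipeline), each open page pings home every few seconds, and a visitor counts as concurrent if they pinged within the last minute or so. A sketch under that assumption:

```python
from collections import deque

class ConcurrentCounter:
    """Toy 'users on site right now' metric: a visitor is concurrent if
    we saw a ping from them in the last `window` seconds. Assumes pings
    arrive in timestamp order. Illustration only."""
    def __init__(self, window=60):
        self.window = window
        self.pings = deque()   # (timestamp, visitor_id), oldest first
        self.last_seen = {}    # visitor_id -> latest ping timestamp

    def ping(self, ts, visitor_id):
        self.pings.append((ts, visitor_id))
        self.last_seen[visitor_id] = ts
        # drop visitors whose newest ping has aged out of the window
        while self.pings and self.pings[0][0] < ts - self.window:
            old_ts, vid = self.pings.popleft()
            if self.last_seen.get(vid) == old_ts:
                del self.last_seen[vid]

    def concurrent(self):
        return len(self.last_seen)
```

Run that over a firehose of page pings and the `concurrent()` value is the number you watch spike on a day like yesterday.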

A lot has been and will be written about the role of social media in this revolution. Here is some data and perspective from the vantage point of traffic to the Al Jazeera website yesterday, as seen via their Chartbeat dashboard right as Mubarak announced his resignation.

Many thanks to the Al Jazeera team and specifically Mohamed Nanabhay for letting us publish these snapshots.

———————————————————————————————————-
Just before noon yesterday, users started flooding into the Al Jazeera web site.

[screenshot: Chartbeat dashboard as users flood into the site]

The screen shot below shows the traffic sources — links, social and search at noon EST.

[screenshot: traffic sources (links, social and search) at noon EST]

If you zoom into the article level view you can see that 70%+ of the traffic is coming from social networks.   The picture on the left is the same as the one above — the one on the right zooms into the article level dashboard for the page titled “Hosni Mubarak resigns as President”.

[screenshots: site-level dashboard and article-level dashboard for “Hosni Mubarak resigns as President”]

Mohamed Nanabhay, Head of Online for Al Jazeera’s English website, described the experience: “As you can imagine, our newsrooms and field teams have been on full throttle over the past three weeks. While Al Jazeera very quickly became the world’s window into the revolution in Egypt, Chartbeat proved invaluable as my window into our audience and website. From deploying resources to prioritizing updates, from rolling out new features to identifying technical issues on the site, we were able to make better decisions more quickly based on real-time data.”

Interesting snapshots and kind words from people who are monitoring the real time web in ways that could not have been imagined a revolution or two ago.

networked media

This is a different kind of post. I started thinking about “networked media” last August. This began in the same way my longer posts usually do: a slow process of thinking, writing, and editing that spans a few months. But the process took a left turn in October when I decided to speak about networked media at betaday. My work on the blog post ceased and I focused my attention on betaday. What I’m posting here is a compilation of the introduction that I wrote back in August, a video of the betaday talk, and my general notes.

The impact of the “socialization of the web” (i.e. the social components of the web that now pulse through every web page) is a fascinating subject that I think we are only just beginning to understand. Though “socialization” is a politically loaded word, my intent here is not political. Rather, my use of the word “socialization” is three-fold: 1) to show how media is changing as it becomes integrated with social experiences; 2) to note that the economics of media production are changing; and 3) to emphasize that this shift is a process, not a product.

Social disruption

Over the past few years I have written a fair amount about how the social web will change the way people discover and distribute information online. This started with a post in the spring of 2008 on the Future of News. Then in early ’09 I outlined how “social” would change the discovery process and disrupt traditional search. And then I wrote a long piece about what this shift in discovery means for the user experience on sites. These ideas, and subsequent posts, have informed a lot of what we have built and invested in at betaworks. New modes of navigation and discovery are being developed – from Summize to Tumblr to TweetDeck, and more recently from GroupMe to Ditto. It is now generally accepted that the impact of “social” on discovery and navigation is under way, but I believe the impact goes beyond discovery.

Undoubtedly, search has changed, and continues to change, the way we write, create pages, lay out pages, tag and relate to content. It has also encouraged the creation of sites with limited or distracting content that exist solely to optimize search. But search has not driven a change in the content and user experience once a user is on a page that they value. By contrast, the “social web” is changing the web itself – “social” is altering the nature of what we find. Social experiences are becoming the backbone of many sites. A web page that is part of the “social web” transforms content into a liquid experience, giving rise to a new kind of media: networked media. In the video from betaday, I walk through this shift and show data we have at betaworks that illustrates this change.

_________________________________________________________

Link to: Networked Media presentation from betaday/10 on Vimeo.

_________________________________________________________

General Notes re: Networked Media from my September draft:

Starting about four years ago it became clear that the social, real time web could change the way search and discovery happened online. Fast forward to today and that has certainly happened. The impact of this shift in distribution economics isn’t over, but the trend tipped to scale during 2010. Last year we saw site after site announce that the percentage of traffic it gets from the social web now exceeds, or is second only to, search. In my post on how social will disrupt search two years back I used the example of YouTube, and showed the speed at which it had become the second largest search destination on the web. Twitter, Facebook, Tumblr and other vertical social networks are driving meaningful traffic to sites around the web. Take the collection of sites in the chart below, from news to commerce, from TV-based media to sports: for many of them social is now the largest driver of traffic. Nick Denton said last month that referrals to Gawker properties from Facebook had increased sixfold since the start of the year. And this is different traffic from search traffic. It’s socially referred, it’s of higher quality, and embedded in it is the multiplier effect that the social publishing platforms drive.

[chart: social vs. search referral traffic for a collection of sites]
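The social/search/direct split in a chart like this comes from bucketing each visit by its HTTP referrer. A minimal sketch of that classification; the domain lists below are illustrative, nowhere near exhaustive:

```python
from urllib.parse import urlparse

SOCIAL = {"twitter.com", "t.co", "facebook.com", "tumblr.com", "reddit.com"}
SEARCH = {"google.com", "bing.com", "yahoo.com"}

def classify_referrer(referrer_url):
    """Bucket a visit by its HTTP referrer: social, search, direct, or other."""
    if not referrer_url:
        return "direct"                      # no referrer header at all
    host = urlparse(referrer_url).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    if host in SOCIAL:
        return "social"
    if host in SEARCH or any(host.endswith("." + d) for d in SEARCH):
        return "search"
    return "other"
```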

The socialization of the page

The question I would like to turn to now is how web pages and applications are being changed by the social, real time web. Search changed the way we discovered the web. Websites optimized their pages for search bots, but in most cases they didn’t actually change the content or substance of the page that was presented to the end user. Put another way, search brought little tangible benefit to the end user beyond discovery. Search certainly created new forms of sites: domain parking, content farms, link bait — search spawned thousands of sites that managed to game the discovery tool to gain attention, clicks and visits from users who find themselves on a site that has the metadata they were looking for but often little of the content.

But unlike search, the dynamic of a web page becoming part of the social web is transforming the experience and the content of that page into a liquid experience — one that is giving rise to a new kind of media. Humor sites were the one exception I found to the rule that search didn’t change content: Fred Seibert told me last summer how humor sites changed the content of their pages, placing the punch line up front — because that is what people searched for.

(for the interested, a short primer is here on what we do at betaworks)

Three steps re: how a page becomes networked

#1. An activity window opens up. Somewhere between one and three hours after a story is posted, a window of social activity opens. An example, albeit a slightly unusual one: a product page on Amazon for a set of speaker cables that cost almost $7,000 — this past weekend the page all of a sudden took flight on Twitter and some of the social blogs. The page was actually posted to reddit a month ago, yet for whatever reason the insanity of a $7,000 cable didn’t mesh with the zeitgeist until November 27th, when the page was tweeted by @PaulandStorm. And off it went. Screenshot of the page here. In the video above you see this process happen in detail; I use Chartbeat to understand the progression and dispersion that occurs in this initial activity window. Take a look at the dispersion patterns of typical stories on Fred’s AVC blog and you can clearly see the window of engagement happen — just look at what happens as Fred puts up a new post one morning. The uptake starts about an hour after the post hits, and the peak usually occurs at the 100-minute mark. Chartbeat data from thousands of large sites around the web suggests that for a blog the peak is usually around 60 minutes after posting, and for a news site it’s 130 minutes. It’s great how open Fred is with this data — lots to learn. These are windows of meaningful, concurrent activity. Concurrent users is the key metric to track at this point. Amplification in the social web is what drives the metric, and amplification happens because of relative influence within your own and other social groups. Link and discuss: It’s Betweenness That Matters, Not Your Eigenvalue: The Dark Matter Of Influence: http://sto.ly/ii40vr
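To make the “window” concrete: if you bucket concurrent users by minutes since posting, the activity window is the stretch where counts run well above baseline and the peak is simply the maximum bucket. A rough sketch; the threshold rule here is arbitrary, chosen purely for illustration:

```python
def engagement_window(counts, baseline_ratio=0.2):
    """counts: concurrent-user counts, one per minute since posting
    (assumed non-empty). Returns (start_minute, peak_minute, end_minute)
    of the activity window, defined here arbitrarily as the span where
    counts exceed a fraction of the peak."""
    peak_minute = max(range(len(counts)), key=lambda m: counts[m])
    threshold = baseline_ratio * counts[peak_minute]
    above = [m for m, c in enumerate(counts) if c > threshold]
    return above[0], peak_minute, above[-1]
```

Run against minute-bucketed data for a typical blog post, the peak_minute is the ~60-minute figure mentioned above; for a news site, something closer to 130.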

#2. Social clustering occurs. With the engagement window open and concurrent users on the page peaking, clustering starts to happen. What separates this from just an open engagement window is the level of engagement. Users arrive on the site, they start posting comments and the conversation begins. “Each comment someone takes the time to leave serves as a proxy for 100 or so folks who properly echo that sentiment” (Battelle). Examples… the importance of the time of day that you publish into the social web. Timing relative to what your social group is talking about now is what triggers clustering. This is why SocialFlow works — it knows when is the right time to send the message that lights up the social web. Below is an image from some analysis the NY Times did using bit.ly data. It shows the dispersion of a particular story — in this case a Kristof piece about the Pill – across the social web. In the image you can see the clustering occurring, this burst over time of influencers and social engagement.

#3. The page becomes networked. Snap — a synchronous experience occurs. A critical mass of users is on one page at the same time and something magical happens. Think about it as a page becoming a live event or a live site. Similar to a concert, there is a residue of the social experience when you go back — even if it’s way after the event. If you watch the opening of this live concert you will get a visceral sense of what this looks like and what happens when media becomes connected with the audience. It’s Springsteen’s Hungry Heart, and as he plays the opening of the song he turns it right over to the audience to pick it up and sing the opening. Forking of content.

– Rise of agile publishing: what is it? Lean editorial teams, instrumentation of sites, getting the data feedback, adaptive CMSs, the importance of posting at the right time, the importance of tracking social engagement, how every page is becoming a front page

– Serendipity. Some of this is science, some of it isn’t. An “old” page can become networked out of nowhere — point back to the Amazon example. You don’t know where or when it’s going to happen; you need tools to track and alert you when it’s happening

– We are moving into an age of networked media. danah boyd’s analysis of the shift from broadcast to networked media

– Closing of comments after the activity window – proximity references / boyd article (a couple of old ones are in close proximity to this one) – structured data types to allow for debate topics.

Example: Gawker. Gawker is experimenting with a new design that is both more dynamic (real time) and more immersive, without the restrictions of reverse chronological. Users are no longer navigating from page to page across isolated sites; rather they are experiencing a subset of sites as a liquid experience, where there is a consistent flow from site to site and the consistent aspect is social. Users flow — an ambient experience of media.

Example: Dribbble and the iTunes icon — this became a networked media event.

Example: Yahoo bloggers adapt content to the referrers and the links that spike

Example: “the quality of the dynamics of the conversation shift from one where parlor tricks can sustain themselves beyond the quality of the content to one where we can get sort of immediate tactile connection with people” (source: 4.18.09 Gillmor Gang 1.01 min).

Example: Red State: Twitter’s 140 characters; wish they could aggregate topics; need standardized metrics re: social engagement

Points of tension to discuss and think about further:

- Advertising as the primary mode of monetization — pulling people in vs. pulling them away.

- Tension between platform owners who monetize with advertising on their site, trying to integrate websites into their monetization flow

- The monolithic assumption that one social platform will rule all. How vertical use cases of social (from Tumblr to Foursquare to GroupMe to Instagram) illustrate how social is fragmenting into specific workflows and uses. Do “digital network architectures naturally incubate monopolies” (Lanier)?

- How the economics of social media are affecting networked media. Ownership of data, ownership of content — if users are creating the content, what rights do they have over it?

- The importance of the link structure of the web — it’s the most fluid form; resist the temptation to vertically integrate and “consume” sites

- Dimensionality reduction — too much data


- The Heisenberg principle of social media: the act of a page becoming social changes it

My reading collection on networked media: http://bit.ly/bundles/johnb/u




A case for basic rules of the road for the open Internet, now ….

Access to fast, affordable and open broadband, for users and developers alike, is, I believe, the single most important driver of innovation in our business. The FCC will likely vote next week on a framework for net neutrality – we got aspects of this wrong ten years ago; we can’t afford to be wrong again. For the reasons I outline below, we are at an important juncture in the evolution of how we connect to the Internet and how services are delivered on top of the platform. The lack of basic “rules of the road” for what network providers and others can and can’t do is starting to hamper innovation and growth. The proposals aren’t perfect, but now is the time for the FCC to act.

Brad Burnham stopped by our office earlier this week to talk about his proposal for the future of net neutrality. The FCC has circulated a draft of a set of rules about neutrality that the Commission will likely vote on this week. Though the rules are not public, Chairman Genachowski outlined their substance last week. Through a combination of the Chairman’s talk, the Waxman Proposal, and the Google/Verizon proposal, one can derive the substance of the issue and understand its opportunities and risks. I strongly support much of what the Chairman has proposed, and I support the clarifications that Burnham outlines. Before discussing this further, I have to ask: why does this matter now? Over the past few years there has been a lot of discussion, a lot of promises, and some proposals with regard to net neutrality.

Three reasons why this matters now: READ ON OVER AT TECHCRUNCH



bit.ly series b

I’m excited to announce that bit.ly has completed a Series B funding. The details are on the bit.ly blog. The round was led by RRE Ventures; general partner Eric Wiesen will be joining the bit.ly board. It’s been an amazing two and a half years since the founding of bit.ly. Growth has been the focus for much of that time — managing the growth, and managing to continue to push new product out, on the site and through the API.

So far this year over 40.6 billion bit.ly links have been clicked; last month alone the number was almost 6bn (5.96bn to be exact). The chart below shows the daily click volume — what we call decodes. The blue line is daily clicks, where you can see the variance across each week (i.e. higher click volume on weekdays, lower on the weekend), and the red line is a 3-week moving average. This past Tuesday we had our biggest day ever of bit.ly links created. Over 4bn unique URLs have been shortened using bit.ly — and for every one of these, and for all the 40+bn clicks, bit.ly offers real time metrics with the simple addition of a “+” at the end of the link (i.e. for traffic to this page see: http://bit.ly/bseries+). All this growth and progress has happened because of our team and our users.
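For reference, the red line is just a trailing moving average over 21 days (three weeks) of daily click counts, which smooths out the weekday/weekend cycle. Something like:

```python
def moving_average(daily_clicks, window=21):
    """Trailing moving average over `window` days (21 = 3 weeks).
    Returns a list the same length as the input; the first few days
    average over however many days are available so far."""
    out = []
    total = 0
    for i, clicks in enumerate(daily_clicks):
        total += clicks
        if i >= window:
            total -= daily_clicks[i - window]   # slide the window forward
        out.append(total / min(i + 1, window))
    return out
```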

Thank you — we love our users and the team at bit.ly is one of the best I have ever worked with, so thank you.     We now have much more work to do as we build out what is now a cornerstone of the real time / social web.

a live blog

There was a discussion on the Gillmor Gang last Friday that I wanted to flesh out a bit. The topic was the sale of TechCrunch to AOL. Much of the talk on the web, and some of it on the Gang, centered on TechCrunch as a media property. Are “content” acquisitions on the rise? What does this mean for content sites? How do old media and other content companies relate to this? Etc., etc. I don’t think these questions are that interesting. All media is internet media today — if the so-called “content” provider doesn’t place it on the net, it gets there regardless. It’s no longer the presence of content online that makes it interesting — it’s the type of engagement that occurs. TechCrunch is, in my mind, becoming a place — a real time, or live, conversational platform.

If you look at TechCrunch articles, the number of comments that stream into the page within the first hour after an article is posted is meaningful. It’s these real time interactions — the conversations that are happening on the page, the connections that are taking place in real time or close to it — that make TechCrunch such an interesting place. Yes, a place, not a site. TechCrunch and the Huffington Post (the other example I mentioned on the show) are becoming conversational places, platforms where the content provides context to the conversation and vice versa. A while ago I had a conversation with Bob Stein about writing, publishing and blogging. Bob told me about a test he had run at the Institute for the Future of the Book. In the test they placed comments on a blog to the right of the posts / articles. The result was meaningfully more interesting discourse. The comments weren’t placed at the bottom, hidden away like a letter to the editor — they were part of the body of the post. Think about it this way: if you took TechCrunch and placed the comments to the right of the posts and let them stream live (most recent first), wouldn’t it look like a mirror image of the new Twitter? Stream on the right, media on the left — Twitter is stream on the left, media on the right. Interesting.

TechCrunch is in my mind a conversational platform, and it’s that, plus the personalities of the team, that makes it interesting. The “that” bit is the real time participation of the users — it provides a degree of authenticity and connection. I think when Steve Gillmor was talking about Neil Young on the show it was this type of connection he meant. Arrington in his post “Why We Sold TechCrunch To AOL, And Where We Go From Here” says “I don’t want to get all teary-eyed here, but the best comment I ever saw on TechCrunch was years ago in response to when I quipped something like “This is my blog and I’ll write what I want” in response to a troll. The response was “No Mike, This is OUR blog. You just work here.”” When @Auerbach pointed that comment out to me this week, this thread of thoughts came together. That is what’s different here: the active, passionate users who are participating in the conversation, live – maybe we should call the category live blogs. Places like these are emerging; most of them are in news, politics, tech or gossip, but other vertical categories are starting to appear. In a sense I see these sites as children of the old BBSs. And it’s happening the way things happen on the web — it’s somewhat chaordic, it’s messy; there is a pull from the centralized services that have the advantage of tightly coupled integration, and a more gradual, but eventually greater, pull from the edge.

If this all sounds fairly general, I do have some data to back up the thesis. I’m going to talk about this data generally, since it’s not my data to publish in detail. Via Chartbeat (a company we built at betaworks) we see engagement on a variety of sites, in real time. The focus of Chartbeat is on how many people are on your site, right now, and what they are doing. Looking at the real time engagement dashboard on Chartbeat across a set of customers — say TechCrunch, WSJ, Gawker, Yahoo News, ChatRoulette and FoxNews — we see very different patterns of engagement.

The pace at which TechCrunch is published, the degree of engagement, the real time updating of comments, the requirement of the blog to post with your real name, the direct engagement from the authors … all of this contributes to what is much more of a live experience than most blogs. There is a public example of data around a live blog that I can point to: AVC.com. @FredWilson has made his Chartbeat dashboard open. Take a look at the engagement view as he publishes. Again, note the pace and consistency with which Fred blogs and the relationship he has to his audience. Or look at what Chamillionaire is doing … live is becoming live in a whole new way; participatory media is becoming more diverse and interesting. And for AOL this is, in a sense, a return to its roots of community and conversation. There is potential in this deal — potential for TechCrunch & AOL and the team to turn more of the web into more of a conversation. The vision of AOL as a next generation content platform might start to emerge out of this.

Tweetdeck: multistream, unistream, getting it all streamed right

The Tweetdeck team have been hard at work for two years thinking about how to display and navigate streams on the web and on devices. The Android version that moved into beta yesterday is a big step forward. The tech blogs have done feature reviews and paid compliments to the user experience, the speed and the simplicity of use, but there is more going on here. It is going to take some use to settle in on why this is different and what has changed; users are starting to see it.

What’s so different here is the concept of a single unified column for all your real time feeds. Inside the “home” column, the different services are color coded and weighted to allow for the varying speed / cadence of different streams. In the screenshot below of the beta Android client, you are looking at my “home” column. It includes updates from all my Twitter accounts, Facebook, Foursquare, Buzz etc. You can see that a check-in is included in the home stream as a simple gesture that tells me “Sam checked in at Terminal 4”. It’s formatted differently from a Twitter update – it contains only the summary information I need: “someone is checking in somewhere”.

If I click on the check-in, the view pivots around place, not person.

This cross-stream integration is also evident in the “me” column — a single column that integrates all mentions across the various social services you have. The “me” column is the first one to the right of home — you can see it in the screenshot below. The subtle little dots on top offer a simple navigation note that you are now one column to the right of “home”. And the “me” column again integrates mentions across streams — the top one is a reply to a Facebook update (if I click through I get the context); below it are Twitter mentions.
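Under the hood, a unified column like this amounts to merging heterogeneous, already-sorted feeds into one reverse-chronological stream, tagging each item with its service so it can be rendered differently (the color coding, the compact check-in format, and so on). A sketch of the idea; the field names below are invented for illustration, not Tweetdeck’s actual data model:

```python
import heapq
from dataclasses import dataclass

@dataclass
class StreamItem:
    timestamp: float   # epoch seconds
    service: str       # e.g. "twitter", "facebook", "foursquare"
    kind: str          # e.g. "update", "checkin", "mention"
    summary: str       # e.g. "Sam checked in at Terminal 4"

def unified_home_column(*feeds):
    """Merge several newest-first feeds into one newest-first column.
    Each input feed must already be sorted newest-first; rendering can
    then vary per item.service / item.kind."""
    return heapq.merge(*feeds, key=lambda item: item.timestamp, reverse=True)
```

The nice property of a streaming merge like this is that it never has to materialize all the feeds at once, which matters on a phone.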

I wrote about the importance of context in the stream a while ago. Context is more important now than ever, as the pace of updates, vertical services (i.e. local, Q&A, payments) and re-syndication continues to speed up. Previously Tweetdeck ran all of these services in separate columns – one for each. The Android version still has multiple columns, but the other columns are ways to track either topics (search) or people (individual people or groups of people) — you can see how those work here. It’s in beta and there is still work to do, but this new version of Tweetdeck breaks new ground — the team have created something very wonderful.

The original Tweetdeck broke new ground in how Twitter could be used. All the Twitter clients had until that time taken their DNA from the IM clients; they all sought to replicate a single column, a diminutive view of the stream. Tweetdeck on the desktop changed all of that, offering a multi-column view that was immersive, intense and full on. As you move your service to a different platform (say from web to mobile) you are faced with the perplexing question of whether to re-think the service to fit the dimensions and features of the new platform (mobile) or to offer users the same familiar experience. Tweetdeck Android is a ground-up re-invention of the desktop experience — created for mobile. I have been using it for a few weeks now and it is changing the way I experience the real time web. Once again the Tweetdeck team have taken a big bold step into something new; you can get the beta here.

(note Tweetdeck is a betaworks co.)

getting to know the iPad


I have been running an experiment for the eleven weeks or so since the iPad launched. Each weekend I spend time going through directories hunting for apps that begin to expose native attributes of the device. My assumption is that the iPad opens up a new form of computing, and we will see apps that are created specifically for this medium. Watching these videos of a two and a half year old and a 99 year old using the device for the first time offers a glimpse of its potential. Ease of introduction and interaction are the key points of distinction. I haven’t seen a full sized computing device that requires so little context or introduction.

When the iPad first came out, much of what was published was on either end of a spectrum of opinion. On one end were the bleary-eyed evangelists who considered it game changing; on the other, people who were uninterested or unimpressed. I think invariably the people who found it wanting were expecting to port their existing workflows to the device. They were asking to do “what I do on my PC” on the iPad. These people were frustrated and disappointed. They assumed this was another form of PC, with some modifications, and that it represented a transition similar to desktop to laptop. Take this post from TechCrunch: “Why I’m Craigslisting My iPads” — three of the four reasons the author lists for dumping his iPad are about his disappointment that the iPad isn’t a replacement for his laptop or desktop. But in the comments section of the post an interesting conversation emerges: what if this device’s potential is different? Just as video has transformed the way our culture interacts with images, what if gesture based computing has the potential to transform the way we use, create, and express ourselves?

The iPad is the first full sized computing device with wide scale adoption with:

  • Hardware and software that requires little to no context or learning
  • An input screen large enough to manipulate (touch and type) with both hands
  • A gesture based interface that is so immersive, and personal that it verges on intimate
  • Hardware with battery and heat management that, simply, doesn’t suck
  • An application metaphor that is well suited to immersive, chunky, experiences. As @dbennahum says: “The ipad is the first innovation in digital media that has lengthened the basic unit of digital media”
  • A tightly coupled, well developed and highly controlled app development environment

For some people these attributes sum up to the promise that this will be the “consumption” device that re-kindles print and protects IP-based video. That may occur, but for me that isn’t the potential. The iPad is a connected computing device that extends human gestures. If you step back from the noise and hype, after almost 15 years of web experience, we know a few things. Connected / networked devices have consistently generated use cases that center around communication and social participation vs. passive consumption. Connecting devices to a network isn’t just a more efficient means of distribution; it opens up new paths of participation and creation. The very term consumption maps to a world and a set of assumptions that I think is antithetical to the medium (for more on this see the Jerry Michalski quote on the Cluetrain). I believe the combination of the interface on the iPad and the entry level experience I outlined above is sufficiently intuitive that this device and its applications have the potential to become an extension of us and transform computing, similar to how the mouse did 45 years ago.

[image: Mind the Gap]

Douglas Engelbart and his mouse changed everything. Like the mouse, the multitouch interface lets you navigate the surface of the computer. But there is a key difference between this gesture based interface and the mouse. The mouse is separate from the working surface — connected to the body but separate from the actual place of interaction. With the iPad, gestures happen on the surface that you are creating on. I have a general theory that when you narrow the gap between the surface that you “create on” and the surface that you “read on”, you change the ratio of readers to writers: proportionally you reduce consumption as we used to know it and increase participation. Some examples. Images — still and video — where the tool you use to capture is increasingly the tool you use to view and edit. Remember the analog experience: shoot a roll of film on one media type (coated celluloid) and then develop / display on another (paper). The gap here was large. Digital cameras started to close the gap by eliminating the development process — by recording on a digital medium that permitted the direct transfer to a display and editing device (the PC). The incorporation of display screens on cameras shrunk the gap further. Now we are closing the gap even more: embedding cheap cameras in every display screen, so that what you see is also what you record, and embedding display screens into the front of cameras. With each closing of the gap between production and display, participation increases. Take the web itself: the advent of wikis, blogging, comments and writable sites. Or compare Facebook, Twitter and Tumblr vs. WordPress, Posterous and Typepad. They are all CMSs of one kind or another — but the experience is radically different in the first group. Why? Because they close that gap — specifically, they don’t abstract the publishing into a dashboard. You write on the surface you are reading on.

So, as a rule of thumb, when I see this gap narrow, I sit back and think. And it is for this reason that I believe the gesture based interface on this device has the potential to open up a new form of computing.

Back to that experiment. While it has been less than 12 weeks since the launch, I want to see if there are elements emerging in iPad apps that can tell us what this new medium has to offer — what are the things we are going to be able to create on this device. My process is as follows:

(a) Hunt and peck for native apps. The discovery / search process is imperfect. I spend a fair amount of time using services like Appshopper, Appadvice and Position App. I also spend time in the limited app store that Apple offers (limited in that it sure is one crappy interface to browse, compare and find apps). I do find the “people who liked this also liked this” feature useful. But hunt and peck is the apt term — it’s a tough discovery process. While Apple has done an awful lot to open up new forms of innovation, they are simultaneously compromising others — the web isn’t a good discovery platform for a lot of these apps because many of them aren’t “visible” to basic web tools. Anyway, that is how I find things.

(b) I use the apps for a few days at least. Given how visually seductive this platform is, it’s important for me to use the apps for a bit, let them settle into my workflow and interests, and see if they mature or fade. I then create a summary of the app, on the iPad (might as well use the medium). The app that I used to write many of these summaries was OmniGraffle.

Six of the summaries are inserted below aggregated under some broad topic areas. I wanted to lay them out side by side on the table and see what I had learnt thus far.  I have some commentary around most sections and then some conclusions at the end.

1. This is the first post I did — summarizing the goal:

[summary sketch: the goal of the experiment]

2. Extending the iPad

In the early days I was fascinated by the Camera A and Camera B application — it lets you use your iPhone camera on your iPad, over WiFi. It’s one of those wow apps — you show it to people and you can see their eyes open as they think of the possibilities it opens up. I think the possibility set it opens up relates to the device as an extension of other connected devices. There are a small handful of other applications I found that have done interesting things integrating iPads with other devices — i.e. Scrabble, iBrainstorm and Airturn. Airturn is brilliant in its simplicity and well defined use – using a Bluetooth foot pedal to turn the iPad into a sheet music reader. Apple might well have not put a camera on v1 of the iPad for commercial reasons (i.e. upgrade path), but the business restriction has opened up an opportunity.

CameraA/B is a good example of how those design choices are driving innovation. One of the first pictures I took was the requisite recursive image.

3. Take me back …

The only physical navigation on the device is a home button; like the iPhone, there is no back button. I wish there were one. I find myself using the home button time and time again to go back when I’m in an application. I love how conservative Apple is with its hardware controls, but a back button is missing — it’s one of the great navigational tools that the browser brought us, and I really want one on this device.

[summary diagram]

4. Jump on in …

There are a lot of interesting immersive app’s that are beginning to pop up on the iPad. These are good examples of the kind of experiences that are emerging:

[screenshot: immersive app]

This is another immersive application — the popular Osmos HD. I said at the outset that I avoided gaming apps, and this and the coaster are games. It’s the immersive navigation that I want to emphasize — today, there aren’t many better ways to explore this than apps like these. Both of them use the high resolution display, the multitouch interface and the accelerometer to give you a visceral sense of the possibilities.

5. Writing …

I want to write on the iPad, write with my hand. I tried getting a pen, but the experience was disappointing. The multitouch surface is designed for input from a finger — the pen simulates a finger. If you want to draw with a pen, or have large fingers, then a pen like this works, but it doesn’t work to actually write on the device. There also isn’t an application that lets you scale down words you have written with your finger — or at least I haven’t found one. But you can type!

I have also used a wireless keyboard — I typed most of this post using a keyboard; it works well.

6. Reading, readers and browsing …

There is a whole collection of reading related experiences coming out for the iPad; it’s one of the most active areas of development. My journey began with the book apps on the device: iBooks, the Kindle app and then a handful of dedicated reading apps (i.e. comic book apps). I don’t have much to say about any of these experiences, since they all pretty much use the device as a display to read on. They all work well, and the display is better on my eyes than I expected. I liked the Kindle e-ink display a lot, but unless you are reading outside, in full sun, the iPad display works very well. My favorite reading app is the Kindle app. The reading surface is clean and immersive. Navigation is simple and I love the “social highlight” feature. You can see it in the image below. While you are reading there are sections with a light, dotted underline — touch one and it tells you that x number of people have highlighted this section as well. I love stuff like this — a meaningful social gesture displayed with minimal UI.

[screenshot: Kindle app social highlights]

A few weeks after the launch I started using reader apps. I define this category as apps that offer a reading experience into either a social network (Twitter, Facebook), a selection of feeds (RSS), or a scraped version of websites. Some people are calling these clients – for me a client allows you to publish; these are readers of one kind or another. Skygrid was one of the first I used. Then came Pulse, GoodReader, Apollo and, last week, Flipboard. Most of these readers offer simple, fluid interfaces into the real time streams. Yet the degree to which we have turned the web into a mess is painfully evident in these applications. Take a look at the screenshots of web pages displayed in these applications. The highlight is mine, but the page is a mess. Less than 15% of the pixels on the first page below were actually written by the author.

[screenshots: web pages rendered inside reader apps]

It’s remarkable how the human brain can block out a visual experience in one context (web browser), but when it’s recontextualized into another experience (iPad) the insanity of the experience is clear. We have slow-boiled so many websites that we have turned the web into a mass of branding, redundant navigation and advertising. And some wonder why the value of these ads keeps falling. As the number of devices that access the internet increases, the possibility of forking the web, as Doc Searls calls it, increases. Remember PointCast, Sidewiki, Google News, the Digg bar — same questions. Something has to give here — surfing the web works very well on the iPad; the surfing works, the problem is that the websites don’t.

The issues embedded in these readers stretch back to the beginning of the web — all the way back to the moment that HTML and then RSS formed a layer, a standard, for the abstraction of underlying data vs. its representation. Regardless of your view of the touch based interface, it’s undeniable that the iPad represents a meaningful shift in how you can view information. Match that with the insanity of how many websites look today and you have a rich opportunity for innovation.

Users, publishers, advertisers, browsers, aggregators, widget makers — pretty much everyone is going to try to address this issue. Some of these reader apps use the criteria that RSS established (excerpt or full text) to determine whether to re-contextualize the entire page or just a snippet of it. Some of them just scrape the entire web page, and some of them are emerging as potentially powerful middleware tools. PressedPad is installed on this blog — it’s somewhere between a WordPress plugin and a theme (note to users: install it as a plugin). PressedPad gives me some basic controls re: how to display and manage the words on this site so that they are optimized for the iPad. Similar to WPtouch, it does a great job of addressing this issue by passing control over to the site creator. This approach makes sense but it will take time to scale. In the short term we are going to see a lot of false starts here. But ultimately the reading experience will get better because of this tension and evolution, both on the iPad and the web. And so will monetization. Now that the inanity of what we have done has been laid bare, we have to fix it.
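The excerpt-vs-full-text signal these readers lean on is visible in the feed itself: an entry can carry a short summary, a full content block, or both; which one a publisher fills in is their choice. With the feedparser library the check looks roughly like this (a sketch; the feed URL is hypothetical):

```python
import feedparser  # third-party: pip install feedparser

def entry_mode(entry):
    """Guess whether a feed entry offers full text or just an excerpt."""
    content = entry.get("content")        # Atom <content> / RSS content:encoded
    summary = entry.get("summary", "")
    if content and len(content[0].value) > len(summary):
        return "full-text"
    return "excerpt"

feed = feedparser.parse("http://example.com/feed")  # hypothetical feed URL
for entry in feed.entries[:5]:
    print(entry_mode(entry), entry.get("title", ""))
```

A reader can render “full-text” entries in place and fall back to scraping (or a snippet view) for the rest.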

Back to the apps themselves. Of all these reader apps, Flipboard is the most innovative. I’m still getting used to the experience – there is a lot to think about here. There is much that I like about Flipboard – it’s visually arresting for a start, beautifully laid out and stunning. Take the image below — some apps just stop you in your tracks with their ability to show off the visual capabilities of the device, and Flipboard is certainly one of these.

Visuals aside, the thing that I find interesting is Flipboard’s approach to Twitter and Facebook. It turns Twitter and Facebook into a well formatted reading experience — it takes a dynamic real time stream and re-prints it as if it were a magazine. I like the use of Tweets as headlines. I have often thought about Twitter’s 140 character length as headline publishing. Flipboard takes this literally — using the Tweet as the headline, with excerpts of the content displayed under it. The Facebook stream works less well. Facebook isn’t a news stream, it’s more of a social stream — and I find that Flipboard randomly drops me into Facebook at a level that I’m not interested in. I flip pages and I find myself browsing personal pictures from someone I barely know — something that I would have skipped by on Facebook.com.

But it is this representation of a stream as a magazine that I struggle with the most. The metaphor is overwrought in my mind. I hear the theoretical arguments that Scoble makes re: layout, but they don’t translate for me in practice. The stream of data coming from Twitter and Facebook isn’t a magazine — formatting it as such places it into a context that doesn’t fit particularly well and certainly doesn’t scale well (from a usage perspective). Because it looks like a magazine and feels like one, I tend to read it like one — and this content isn’t meant to be used like a magazine. The presentation feels too finished; I have written before about the need for unfinished media and how it opens the door for participation. This feels like it closes that door – it allows too narrow an entry path for interaction. And finally, what they are trying to do is technically hard. It’s hard to algorithmically determine which text should be large vs. small, where to place emphasis – just as it’s hard to algorithmically de-dup multiple streams, or to successfully display the images that correspond to the title.

These are my initial Flip thoughts. I am fascinated by this category and the conversations Pulse, Flipboard and others have started. The innovation here is just getting going and I can’t wait to see what comes next.

Browsers. I’m using Life Browser a lot and liking it. The Queue feature is great — enable the Q button and any links you click on the page get “queued up” behind in a stack. I’m interested to see things like Tab Candy on Firefox come to the iPad.

Some conclusions …

1. It’s early days.

There wasn’t a single application that I found that really stood out and remained interesting after a few weeks of use. Many were recast versions of iPhone applications. I did find things that are edging in the direction of truly native – most of those I outlined above. This conclusion isn’t surprising. It’s very hard to re-conceptualize interfaces and experiences. The launch of the Magic Trackpad demonstrates how committed Apple is to this interface. If this is truly a new form, three months is barely a teaser — we have much to do and much to learn here. In the past few weeks the pace of launches of interesting applications has started to pick up significantly. I’m spending more time in drawing apps and in some quasi-enterprise apps. I can’t wait to see what the next 6 months brings.

2. The visual dominates, gesture emerging.

Visually arresting applications are the things that pop today. Many of them are just beautiful to look at. The pond is lovely — have you been struck by the bookshelf in iBooks? I was — and what about the roller coaster? So are many of the games, so is Flipboard. But I suspect much of what I’m responding to is the quality of the screen and the images being displayed — i.e. the candy, not the sustenance. Many of the apps that had an initial wow factor I have now deleted. Visual graphics need to be part of the quality and essence of the experience, not just eye candy. And the visual needs to be integrated into the gestural. Maybe artists will take it across this threshold — I was sorry that the Seven on Seven event happened right around the launch; I hope that for the next one some artists will opt to produce something on the iPad. Gesture based interfaces are emerging — slowly, but they are coming. I used PressedPad to “iPad”-ize this blog and the experience works well(ish) — the focus is simply on making the navigation gesture applicable. But note even here — when I showed this iPad-enabled blog to @wesissman he mailed me “looks amazing – i cant figure out how to actually read the posts – but looks great”. We are in that early part of the experience of a new device where the visual is so astounding we in a sense need to get over it in order to figure out how we can make it useful.

3. It's a social device.

It's a social device, yet many of the applications are single-user and don't think through the connected aspects of the device. While the device is highly personal it's also a social device; it caters very well to multiple users and multiple devices. I haven't figured out why this is so, but for some reason the iPad has a highly personal, intimate feel — yet its social representation is far less personal. Try this out — leave an iPad lying around in a conference room and people will feel very comfortable using it. In the first few weeks it was fair to say that everyone simply wanted to try one — but the behaviour persists. In the same way, I have brought an iPad to meetings and passed it around the table; it's a very sharable, social device. In this mix of personal and not — single-user and multi-user/multi-device — is, I believe, a trove of opportunity for innovation. And then add connectivity to this mix. This device is designed as a connected device (connected both to other devices and to the network) — it will open up paths of connected innovation we can only imagine today.

4. Enterprise is a-coming

I have been struck by how popular VPN and other virtualization apps are. It suggests a lot of people are starting to use the iPad in the enterprise. I heard some numbers suggesting that more than 15% of the iPads sold are linked to corporate accounts. The use cases are a little outside of what I know and think about, but I suspect there is a lot that will emerge here. The device requires very little IT overhead — the total cost of ownership of these devices has to be a fraction of that of a normal PC.

So here is an initial set of thoughts about the iPad. I'm interested to hear what you think. One of the other incidental properties of the iPad is its initial lack of focus. The iPhone is in the first instance a phone, the Kindle is a book reader — the iPad is an open tablet, for us to create on. I believe there is much to do here — the tablet has been the next great form factor for a long time now, but I think it's finally arrived. We now have to build the experiences to suit the device.

bit.ly and platforms …

Twitter announced this week that they were launching their own URL shortener. There has been a lot of chatter about this over the past week. I thought it would be helpful to write about how the partnership worked and what bit.ly's relationship is to platforms, Twitter and others. To do something unusual for me, let me cut to the chase.

Twitter.com pretty much stopped using bit.ly to shorten URLs on Twitter.com in December. Since last fall the bit.ly team and Twitter have been talking about this transition. Today Twitter.com represents less than 1% of bit.ly links shortened — when the transition took place in December it was closer to 3-8%, depending on the UX on Twitter.com and the day. We continue to work with the Twitter team and we are currently figuring out how to get key white-label URLs working on Twitter.com. The default shortening partnership worked well for a period of time — approximately six months — during a period of hyper growth. Today bit.ly is growing and continues to scale — irrespective of the change in rules last December re: shortening on Twitter.com. That is the summary — the detailed version follows.

bit.ly was launched in May of 2008. By the first quarter of 2009 bit.ly was growing fast, scaling well and offering a handful of key features beyond shortening that users — of both the API and the website — found critical in terms of understanding social distribution, most importantly real-time metrics*. I believe Twitter's insane growth trajectory started in December 2008 — by early 2009 many of the short URL services on Twitter were struggling to keep up with the scale and growth, and none of them offered the real-time metrics that bit.ly had. So Twitter and bit.ly entered into an agreement whereby bit.ly would become the default URL shortener for Twitter. This feature rolled out in May 2009 and ran until December of 2009.

bit.ly knew this would be a short-term agreement — it was done to help Twitter scale, and without a doubt it helped bit.ly scale. In late November / December 2009 Twitter.com stopped shortening URLs with bit.ly — except under one very narrow use case (and if you can find out what that is I will send or buy you a drink!). As TechCrunch reported this week, bit.ly growth has continued.

When Twitter changed its shortener policy in December, Twitter.com represented 3-8% of bit.ly links created every day. So the change was barely noticeable in bit.ly systems. Today Twitter.com represents less than 0.5% of bit.ly links created or clicked on each day. There are other social platforms that are now larger than Twitter.com. Last month there were 3.4bn clicks on bit.ly links — up from 2.7bn in February and 2.5bn in January. bit.ly is fairly big for a little company; handling billions of clicks and real-time metrics for hundreds of millions of URLs each day isn't trivial. Someone noted earlier this week that — they believe — Yahoo does about 7.5bn clicks a month on its search product. While those clicks are not comparable to the bit.ly click experience, in terms of reach and scale it's an interesting benchmark.

On Tuesday we announced 6,000 sign-ups for bit.ly pro. As of today that number is over 7,000, and in the past 48 hours a subset have signed up for the enterprise version — so, revenue. The companies up and running include: nyti.ms, amzn.to, binged.it, huff.to, 4sq.com, pep.si, and n.pr — alongside a set of bloggers and individuals who use the bit.ly service for their URLs. And incidentally — this Wednesday was our first day ever where over 150m bit.ly links were clicked on. (For more data and charts of historical growth see )

All that said, the noise level out there is, well, noisy — "is bit.ly screwed?", "is bit.ly the next Google?" Seemingly, no one can make up their mind. We can — we love bit.ly. bit.ly is short, sweet and out of control. Someone asked me last summer, "is bit.ly part of the internet?" We are working hard to make it part of the internet, or at least of the social, real-time web — in scale, breadth, trust and performance. bit.ly is the tracking tool that many, many people use to understand how many times a Tweet or a Facebook link was clicked on*.

We thank Twitter — everyone there — for the kick start it gave bit.ly. And we certainly hope we helped Twitter during a difficult scaling period — that was the intent. bit.ly still works and will continue to work on Twitter; most of the clients and Twitter-related services use the API every day, and we are working right now with the Twitter team on some publisher-related services. And most of all we thank our users — end users who use the bit.ly web site to shorten, share and track every day — bit.ly 1.3 will be out in the next few days and we hope you love it. And we thank our API users — the myriad of services who use our API to shorten, track and monitor the pulse of the real-time web — and the publishers who are using it for domain-level / enterprise tracking.

In terms of lessons learnt there are many — but four come to mind right now, and all four relate to broader points about web platforms. Over the past few years a set of platforms have emerged online that give start-ups a foundation on which to kick-start building their audience and/or their business. AdWords/AdSense were probably the first scaled examples of this. As these platforms mature it's important for there to be clear boundaries between what the platform provider does and doesn't do. Granted, these boundaries shift over time — but they have to be sustained for long enough for the platform provider to achieve scale and trust and to get a critical mass of applications running on it. They also have to be sustained long enough for businesses to be built on the platform — not just tweaks, real businesses.

To play out the Google example, take the UX of Google. Google understood they weren't in the content business — they were in the navigation business. So for years the Google site just pointed outward. Now, after 10 years, the line is getting hazy in some areas — this is why the local search stuff, the Yelp conversations, resonate with people. Google has for whatever reason decided that local is something it needs to wrap more of an arm around. How long is that arm? How detrimental is it to local players? I'm not sure — but if I had to put a dollar down I would bet that Yelp and, say, OpenTable will do just fine. So — clear, sustained boundaries are necessary. The second point is that these boundaries become increasingly important, and easier to define, once the monetization approach of the underlying platform is defined. For emphasis this is a separate point from the first one — not a subset — it is vital. The third point is that people bootstrapping on these platforms should also try to spread their relevance beyond a single platform — so Yelp should extend its business model beyond AdSense, Zynga beyond Facebook, etc. In 2010, unlike 10 years ago, we are building in a world of multiple, often overlapping platforms; it's not a monolithic world anymore. That is what StockTwits has done; same for bit.ly, TweetDeck, Someecards, OMGPop etc. — all of these services have a leg in multiple platforms.

Lastly, talk about holes and filling holes in platforms is misleading at best. Take a list of emerging to mature companies — great companies… Is Groupon a hole in Facebook? Facebook a hole in Google?? Google a hole in Microsoft??? Microsoft in IBM???? Maybe it's holes all the way down? Innovation — building great companies — is about finding, filling and even creating holes. But entrepreneurs shouldn't — and most don't — focus on filling holes in other people's platforms. They should think about how to build great things — things that in 2010 may be bootstrapped on platforms, but great products: products that people love, products that move people to organize their world differently, or to see the world differently. The slogan "Think different" captured most if not all of what entrepreneurs need. After 30 years of personal computing history we have a lot of platform and application history to draw from — Apple understands this very well; so does Google, same for Microsoft, Amazon, and eBay. And yes — once again the cycle of innovation is turning: great new platforms are emerging and great businesses will be developed on these new platforms.

* note: if you place a "+" on the end of any bit.ly link you will see real-time traffic to that link
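
In code the "+" trick is literally just concatenation — a sketch, with a made-up short link:

```python
def stats_page(short_link):
    # Appending "+" to a bit.ly link yields its public real-time stats page,
    # e.g. http://bit.ly/abc123 -> http://bit.ly/abc123+  (the short code here is hypothetical)
    return short_link + "+"

print(stats_page("http://bit.ly/abc123"))
```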

betaworks series b / betawhat

Last week we announced a series b funding round at betaworks.  Since then several people have asked us what we do at betaworks and how.   Here goes.

betaworks is a company focused on the social, real-time web. We believe this represents a radical shift in how people use the internet. We believe it is swiftly becoming the primary navigational interface to the internet and we believe it's different — so different that we see the change underway as a 10-year shift.

It’s about dynamic streams of information not static pages. It’s about push not pull. It’s about enabling publishing tools, data and the ways you can touch and experience the web as widely as possible. It’s about an open architecture that permits software developers and users the ability to move data back and forth across services. It’s about letting users stitch together a set of services they want to use, rather than using data to lock people into a single service. It’s about creating great new companies at an accelerated rate using the infrastructure that has been built over the past ten years — from AWS to OAuth, from the Twitter API to Google maps API, from Facebook Connect to AppEngine. It’s as if the metabolic rate of the Internet is changing and this shift is what betaworks is centered on.

What we do at betaworks is build and invest in great companies that make up the loosely coupled experience outlined above. This all happens out of a company — not an incubator, not a fund. The company was formed and designed for this transformation; betaworks itself, and the network of companies that are part of betaworks, are in a sense a mirror image of this shift. As the metabolic rate changes, the possibility for connection, recombination and innovation increases, dramatically. As many people have observed, it is cheaper and cheaper to trial and test; yet, with appropriate instrumentation, it's also cheaper to scale — scale product development, scale infrastructure and scale a business. We like to scale fast or fail fast at betaworks.

When we build a company, invariably it's an idea we have come up with — an itch we want to scratch — and we have an ongoing operational role in building that company. These companies are the core of betaworks. The things we build are born of the focus on the real-time social web — they are more often than not white spaces where we see a need that hasn't yet been filled.

betaworks investments does seed-stage investing in the ecosystem around this core. For us seed investing means first money; our average investment size is 150k. These investments are done as part of a syndicate of angels or early-stage VCs. Our requirements from an investment side are simple: it has to first fit the thesis, it has to fit the investment profile (early stage, tech-centered etc.), and there needs to be a beta, public or not — a working product. Our office is a PPT-free zone.

That's it.

Ongoing tracking of the real time web …

The last post that I did about real-time web data mixed data with commentary and a fake headline about how data is sometimes misunderstood in regard to the real-time web. This post repeats some of that data, but here the focus is the data itself. I will update the post periodically with relevant data that we see at betaworks or that others share with us. To that end this post is written in reverse order, with the newest data on top.

Tracking the real time web data

The measurement tools we have still only sometimes work for counting traffic to web pages, and they certainly don't track or measure traffic in streams, let alone aggregate up the underlying ecosystems that are emerging around these new markets. At betaworks we spend a lot of time looking at and tracking this underlying data set. It's our business and it's fascinating. Like many companies, each of the individual businesses at betaworks has fragments of data sets, but because betaworks acts as an ecosystem of companies we can mix and match the data to get results that are more interesting and hopefully offer greater insight.

——————————-

(i) tumblr growth for the last half of 2009

Another data point re: growth of the real-time web through the second half of last year, through to Jan 18th of this year. tumblr continues to kill it. I read an interesting post yesterday about how tumblr is leading its category through innovation and simple, effective product design. The Compete numbers quoted in that post are less impressive than these directly measured Quantcast numbers.


(h) Twitter vs. the Twitter Ecosystem

Fred Wilson’s post adds some solid directional data on the question of the size of the ecosystem.   “You can talk about Twitter.com and then you can talk about the Twitter ecosystem. One is a web site. The other is a fundamental part of the Internet infrastructure. And the latter is 3-5x bigger than the former and that delta is likely to grow even larger.”

(g) Some early 2010 data points re: the Real Time Web

  • Twitter: Jan 11th was the highest usage day ever (source: @ev via TechCrunch)
  • TweetDeck: did 4,143,687 updates on Jan 8 — yep, 4m, or 48 per second (source: Iain Dodsworth / TweetDeck internal data)
  • Foursquare: Jan 9th was its biggest day ever. 1 update or check-in per second (source: Twitter and TechCrunch)
  • Daily Booth: in the past 30 days, more than 10mm uniques (source: DailyBooth internal data)
  • bit.ly: last week was the largest week ever for clicks on bit.ly links — 564m were clicked on in total. On Jan 6th there was a record 98m decodes, roughly 1,100 clicks every second (a quick back-of-the-envelope check follows this list).
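
For anyone checking the arithmetic on that last bullet, a minimal sketch:

```python
decodes_jan_6 = 98_000_000      # the record day cited above
seconds_per_day = 24 * 60 * 60  # 86,400

print(round(decodes_jan_6 / seconds_per_day))      # ~1,134 clicks/second on the record day
print(round(564_000_000 / (7 * seconds_per_day)))  # ~933/second averaged over the record week
```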

(f) Comparing the real time web vs. Google for the second half of 2009

Andrew Parker commented on the last post that the chart displaying the growth trends was hard to decipher and that it might be simpler to show month-over-month trending. It turns out that month over month is also hard to decipher. What is easier to read is this summary chart. It shows the average month-over-month growth rates for the real-time web sites (the average from Chart A). Note that 27.33% is the average monthly growth rate for the real-time web companies in 2009 — that's astounding. The comparable number for the second half of 2009 was 10.5% a month — significantly lower, but still a very big number for m/m growth.
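
To see why 27.33% a month is astounding, compound it — a quick sketch:

```python
monthly = 0.2733                          # average m/m growth across 2009
print(round((1 + monthly) ** 12, 1))      # ~18.2x if sustained for a full year

second_half = 0.105                       # m/m growth in H2 2009
print(round((1 + second_half) ** 6, 1))   # ~1.8x over six months -- still huge
```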

(e) Ongoing growth of the real time stream in the second half of 2009

This is a question people have asked me repeatedly in the past few weeks. Did the real-time stream grow in Q4 2009? It did. Not at the pace that it grew during Q1–Q3, but our data at betaworks confirms continued growth. One of the best proxies we use for directional trending in the real-time web is bit.ly decodes. This is the raw number of bit.ly links that are clicked on across the web. Many of these clicks occur within the Twitter ecosystem, but a large number are outside of Twitter, by people and by machines — there is a surprising amount of diversity within the real-time stream, as I posted about a while back.

Two charts are displayed below. On the bottom are bit.ly decodes (blue) and encodes (red) running through the second half of last year. On the top is a different but related metric. Another betaworks company is Twitterfeed — the leading platform enabling publishers to post from their sites into Twitter and Facebook. This chart graphs the total number of feeds processed (blue) and the total number of publishers using Twitterfeed, again through the second half of the year (note: if the charts inline are too small to read you can click through and see full-size versions). As you can see, similar to the decode chart, growth at Twitterfeed was strong for the entire second half of 2009.

Both these charts illustrate the ongoing shift that is taking place in terms of how people use the real-time web for navigation, search and discovery. My preference is to look at real user interactions as strong indicators of user behavior. For example, I often find Google Trends more useful than comScore, Compete or the other "page"-based measurement services. As interactions online shift to streams we are going to have to figure out how measurement works. I feel like today we are back in the early days of the web, when people talked about "hits" — it's hard to parse the relevant data from the noise. The indicators we see suggest that the speed at which this shift to the real-time web is taking place is astounding. Yet it is happening in a fashion that I have seen a couple of times before.

(d) An illustration of the step nature of social growth. bit.ly weekly decodes for the second half of 2009.

Most social networks I have worked with have grown in a step-function manner. You see this clearly when you zoom into the bit.ly data set and look at weekly decodes, illustrated above. You often have to zoom in and out of the data set to see and find the steps, but they are usually there. Sometimes they run for months — either up or sideways. You can see the steps in Facebook's growth in 2009. I saw this effect up close with ICQ, AIM, Fotolog, Summize and now with bit.ly. Someone smarter than me has surely figured out why these steps occur. My hypothesis is that as social networks grow they jump in a sporadic fashion from one dense cluster of relationships to a new one. The upward trajectory is the adoption cycle of that new, dense cluster, and the flat part of the step is the period before the jump to the next cluster. Blended in here there are clearly issues of engagement vs. trial, but it's hard to weed those out from this data set. As someone mentioned to me in regard to the last post, this is a property of scale-free networks.
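
For what it's worth, the cluster hypothesis is easy to caricature in a toy model — this is my own illustration of the intuition, not something fit to the bit.ly data: adoption runs hot inside a dense cluster, then the curve goes sideways until the network jumps to the next cluster.

```python
import random

def step_growth(clusters=4, weeks_per_cluster=10, idle_weeks=6):
    # Toy model: saturate one dense cluster of relationships (the upward
    # part of the step), then drift sideways until the next cluster.
    users, series = 1000, []
    for _ in range(clusters):
        base = users
        for week in range(weeks_per_cluster):
            # adoption curve inside the cluster: fast early, tapering off
            users += int(base * 0.25 * (1 - week / weeks_per_cluster))
            series.append(users)
        for _ in range(idle_weeks):
            users += int(users * random.uniform(0, 0.01))  # near-flat plateau
            series.append(users)
    return series

print(step_growth()[:12])  # plot the full series and the staircase is obvious
```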

(c) Google and Amazon in 2009

Google and Amazon — this is what it looked like in 2009:

It's basically flat. Pretty much every user in the domestic US is on Google for search and navigation and on Amazon for commerce — impressive baseline numbers, but flat for the year (source: Quantcast). So then let's turn to Twitter.

(b) Twitter – an estimate of Twitter.com and the Twitter ecosystem

Much ink has been spilt over Twitter.com's growth in the second half of the year. During the first half of the year Twitter experienced hyper growth — and unprecedented media attention. In the second half of the year the media attention waned and the service went through what I suspect was a digestion phase — that step again? Steps aside — because I don't in any way seek to represent Twitter Inc. — there are two questions that in my mind haven't been answered fully:

(i) what about international growth in the second half of 2009? That was clearly a driver for Facebook in '09. Recent data suggests growth continued to be strong.

(ii) what about the ecosystem?

Unsurprisingly it's the second question that interests me the most. So what about that ecosystem? We know that approximately 50% of the interactions with the Twitter API occur outside of Twitter.com, but many of those aren't end-user interactions. We also know that as people adopt and build a following on Twitter they often move up to one of the client or vertical-specific applications to suit their "power" needs. At TweetDeck we did a survey of our users this past summer. The data we got suggested 92% of them use TweetDeck every day — and 51% use Twitter more frequently since they started using TweetDeck. So we know there is a very engaged audience on the clients. We also know that most of the clients aren't web pages — they are Flash, AIR, Cocoa, iPhone apps etc., all things that the traditional measurement companies don't track.

What I did to estimate the relative growth of the Twitter ecosystem is the following: I used Google Trends and compiled data for Twitter and the key clients. I then scaled that chart over the Twitter.com traffic. Is it correct? — no. Is it made up? — no. It's a proxy, and this is what it looks like (again, you can click the chart to see a larger version).
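
Mechanically the proxy is simple enough to sketch: rescale the unitless trends index so it matches a measured traffic number at the start, then read the ecosystem off the scaled curve. All the numbers below are invented for illustration:

```python
def scale_to_baseline(index_series, baseline_value):
    # Rescale a unitless Google Trends index so its first point matches a
    # measured traffic number; the rest of the curve becomes the proxy.
    factor = baseline_value / index_series[0]
    return [round(v * factor) for v in index_series]

trends_index = [40, 55, 70, 68, 66, 71, 80]   # hypothetical: twitter + clients
baseline_uniques = 20_000_000                 # hypothetical Twitter.com baseline

print(scale_to_baseline(trends_index, baseline_uniques))
# [20000000, 27500000, 35000000, 34000000, 33000000, 35500000, 40000000]
```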

Similar to the Twitter.com traffic, you see the flattening out of the ecosystem in the summer. But you see growth in the fourth quarter that returns to the summertime levels. I suspect if you could zoom in and out of this the way I did above you would see those steps again.

(a) The Real Time Web in 2009

Add in Facebook (blue) and Meebo (green), both steaming ahead — Meebo had a very strong end of year. And then tile on top the bit.ly data and the Twitterfeed numbers (bit.ly on the right-hand scale) and you have an overall picture of growth of the real-time web vs. Google and Amazon.

charting the real time web
OR
the curious tale of how TechCrunch traffic inexplicably fell off a cliff in December

For a while now I have been thinking about doing a post about some of the data we track at betaworks. Over the past few months people have written about Twitter's traffic being up, down or sideways — the core question people are asking is: is the real-time web growing or not, is this hype or substance? Great questions — and from the data set I see, the answer to all of the above is: yes. Adoption and growth are happening pretty much across the board — and in some areas at an astounding pace. But tracking this is hard. It's hard to measure something that is still emerging. The measurement tools we have still only sometimes work for counting traffic to web pages, and they certainly don't track or measure traffic in streams, let alone aggregate up the underlying ecosystems that are emerging around these new markets. At betaworks we spend a lot of time looking at and tracking this underlying data set. It's our business and it's fascinating.

I was inspired to finally write something by first a good experience and then a bad one. First the good one. Earlier this week I saw a Tweet from Marshall Kirkpatrick about Gary Hayes's social media counter. It's very nicely done — and an embed is available. This is what it looks like (note the three buttons on top are hot; you can see the social web, mobile and gaming):

The second thing was less fun, but I'm sure it has happened to many an entrepreneur. I was emailed earlier this week by a reporter asking about some data — I didn't spend the time to weed through the analysis, and the reporter published data that was misleading. More on this incident later.

Let's dig into some data. First — addressing the question people have asked me repeatedly in the past few weeks: did the real-time stream grow in Q4 2009? It did. Not at the pace that it grew during Q1–Q3, but our data confirms continued growth. One of the best proxies we use for directional trending in the real-time web is bit.ly decodes. This is the raw number of bit.ly links that are clicked on across the web. Many of these clicks occur within the Twitter ecosystem, but a large number are outside of Twitter, by people and by machines — there is a surprising amount of diversity within the real-time stream, as I posted about a while back. Two charts are displayed below. On the left are bit.ly decodes (blue) and encodes (red) running through the second half of last year. On the right is a different but related metric. Another betaworks company is Twitterfeed — the leading platform enabling publishers to post from their sites into Twitter and Facebook. This chart graphs the total number of feeds processed (blue) and the total number of publishers using Twitterfeed, again through the second half of the year (note: if the charts inline are too small to read you can click through and see full-size versions). As you can see, similar to the left-hand chart, growth at Twitterfeed was strong for the entire second half of 2009.

Both these charts illustrate the ongoing shift that is taking place in terms of how people use the real-time web for navigation, search and discovery. My preference is to look at real user interactions as strong indicators of user behavior. For example, I often find Google Trends more useful than comScore, Compete or the other "page"-based measurement services. As interactions online shift to streams we are going to have to figure out how measurement works. I feel like today we are back in the early days of the web, when people talked about "hits" — it's hard to parse the relevant data from the noise. The indicators we see suggest that the speed at which this shift to the real-time web is taking place is astounding. Yet it is happening in a fashion that I have seen a couple of times before.

Most social networks I have worked with have grown in a step-function manner. You see this clearly when you zoom into the bit.ly data set and look at weekly decodes. This is less clear but also visible when you look at daily trending data (on the right) — but add a three-week moving average on top of that and you can once again see the steps. You often have to zoom in and out of the data set to see and find the steps, but they are usually there. Sometimes they run for months — either up or sideways. I saw this with ICQ, AIM, Fotolog, Summize through to bit.ly. Someone smarter than me has surely figured out why these steps occur. My hypothesis is that as social networks grow they jump in a sporadic fashion to a new dense cluster or network of relationships. The upward trajectory is the adoption cycle of that new, dense cluster, and the flat part of the step is the period before the jump to the next cluster. Blended in here there are clearly issues of engagement vs. trial, but it's hard to weed those out from this data set. I learnt a lot of this from Yossi Vardi and Adam Seifer — two people I had the privilege of working with over the years, two people whose DNA is wired right into this stuff. At Fotolog, Adam could take the historical data set and illustrate how these clusters moved — in steps — from geography to geography. It's fascinating.

TechCrunch falls off a cliff

OK, I'm sure there are some people reading who are thinking — well, this is interesting, but I actually want to read about TechCrunch falling off a traffic cliff. I'm sorry — I actually don't have any data to suggest that happened. After noting yesterday that a provocative headline is sometimes a substitute for data I thought — heck, I can do this too! This section of the post is more of a cautionary tale — if you are confused by this twist, let me back up to where I started. I mentioned that there were two motivations for me sitting down and writing this post. The second one was that a TechCrunch story ran earlier this week saying that bit.ly market share had shifted dramatically. It hasn't. The data was just misunderstood by the reporter. The tale (I did promise a tale) began last August when TechCrunch ran the following chart about the market share of URL shorteners.

The pie chart showed the top 5 URL shorteners and then calculated the market share each had  — what percent each was *of* the top five.     The  data looks like this:

bit.ly 79.61%
TinyURL 13.75%
is.gd 2.47%
ow.ly 2.26%
ff.im 1.92%
(79.61 + 13.75 + 2.47 + 2.26 + 1.92 ≈ 100)
The comparable data from yesterday is:

bit.ly = 75%
TinyURL = 10%
ow.ly = 6%
is.gd = 4%
tumblr = 4%
(again this adds up to 100%)

Not much news in those numbers, especially when you consider they come from the Twitter "garden hose" (a subset of all tweets) and swing by as much as +/- 5% daily. The tumblr growth into the top 5 and the ow.ly bump are nice shifts for them — but not really a story. The hitch was that the reporter didn't consider that there are other URLs in the Twitter stream aside from these five. Some are short URLs and some aren't. So this metric doesn't accurately reflect overall short-URL market share — it shows the shuffling of market share amongst the top five. But media will be media. I saw a Tweet this week about how effective Twitter is at disseminating information — true and false — despite all the shifts that are going on, headlines in a sense carry even more weight than in the "read all about it" days.
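
The distinction is easy to show in code. With invented counts, the "share of the top five" always sums to 100% by construction, while the true overall share depends on the "other" bucket the story missed:

```python
def shares(counts):
    total = sum(counts.values())
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

# hypothetical link counts from a garden-hose sample
sample = {"bit.ly": 7500, "TinyURL": 1000, "ow.ly": 600,
          "is.gd": 400, "tumblr": 400, "other": 5000}

top5 = {k: v for k, v in sample.items() if k != "other"}
print(shares(top5))    # share *of the top five* -- sums to 100 by definition
print(shares(sample))  # overall share -- lower for everyone once "other" counts
```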

The lesson here for me was the importance of helping reporters and analysts get access to the underlying data — data they can use effectively. We sent the reporter the data, but he saw a summary data set that included the other URLs and didn't understand that back in August there were also "other" URLs. After the fact we worked to sort this out and he put a correction in his post. But the headline was off and running — irrespective of how dirty or clean the data was. Basic mistake — my mistake — and this was with a reporter who knows this stuff well. Given the paucity of data out there and the emergent state of the real-time web, this stuff is bound to happen.

Ironically, yesterday bit.ly hit an all-time high in terms of decodes — over 90m. But back to the original question — there is a valid question the reporter was seeking to understand, namely: what is the market share of dem short thingy's? We track this metric — using the Twitter garden hose and identifying most of the short URLs to produce a ranking (note it's a sample, so the occurrences are a fraction of the actuals). And it's a rolling 24-hour view — so it moves around quite a bit — but nonetheless it's informative. This is what it looked like yesterday:

Over time this data set is going to become harder to use for this purpose. At bit.ly we kicked off our white-label service before the holidays. Despite months of preparation we weren't expecting the demand. As we provision and set up the thousands of publishers, bloggers and brands who want white-label services, it's going to result in a much more diverse stream of data in the garden hose.

Real Time Web Data

Finally, I thought it would be interesting to try to get a perspective on the emergence of the real-time web in 2009 — how did its growth compare and contrast with the incumbent web category leaders? Let me try to frame up some data around this. Hang in there — some of the things I'm going to do are hacks (at best); as I said, I was inspired! Let's start with the user growth in the US among the current web leaders — Google and Amazon. This is what it looked like in 2009:

It's basically flat. Pretty much every user in the domestic US is on Google for search and navigation and on Amazon for commerce — impressive baseline numbers, but flat for the year (source: Quantcast). So then let's turn to Twitter. Much ink has been spilt over Twitter.com's growth in the second half of the year. During the first half of the year Twitter's growth, I suspect, was driven to a great extent by the unprecedented media attention it received — media and celebrities were all over it. Yet in the second half of the year that waned, and the traffic numbers to the Twitter.com web site were flat for the second half of the year. That step issue again?

Placing steps aside — because I don't in any way seek to represent Twitter Inc. — there are two questions that haven't been answered: (a) what about international growth? That was clearly a driver for Facebook in '09 — where was Twitter internationally? (b) What about the ecosystem? Unsurprisingly it's the second question that interests me the most. So what about that ecosystem?

We know that approximately 50% of the interactions with the Twitter API occur outside of Twitter.com, but many of those aren't end-user interactions. We also know that as people adopt and build a following on Twitter they often move up to one of the client or vertical-specific applications to suit their "power" needs. At TweetDeck we did a survey of our users this past summer. The data we got suggested 92% of them use TweetDeck every day — and 51% use Twitter more frequently since they started using TweetDeck. So we know there is a very engaged audience on the clients. We also know that most of the clients aren't web pages — they are Flash, AIR, Cocoa, iPhone apps etc., all things that the traditional measurement companies don't track.

What I did to estimate the relative growth of the Twitter ecosystem is the following: I used Google Trends and compiled data for Twitter and the key clients. I then scaled that chart over the Twitter.com traffic. Is it correct? — no. Is it made up? — no. It's a proxy, and this is what it looks like (again, you can click the chart to see a larger version):

Similar to the Twitter.com traffic, you see the flattening out in the summer. But similar to the data sets referenced above, you see growth in the fourth quarter. I suspect if you could zoom in and out of this the way I did above you would see those steps again. So let's put it all together! It's one heck of a busy chart. Add in Facebook (blue) and Meebo (green), both steaming ahead — Meebo had a very strong end of year. And then tile on top the bit.ly data and the Twitterfeed numbers (both on different scales) and you have an overall picture of growth of the real-time web vs. Google and Amazon.

OK. One last snapshot, then I'm wrapping up. Chartbeat — yep, another betaworks company — had one of its best weeks ever this past week, no small thanks to Jason Calacanis's New Year post about his top 10 favorite web products of 2009. To finish up, here is a video of the live traffic flow coming into Fred Wilson's blog at AVC.com on the announcement of the Google Nexus One phone. Steve Gilmore mentioned the other week how sometimes interactions in the real-time web just amaze one. Watching people swarm to a site is a pretty enthralling experience. We have much work to do in 2010. Some of it will be about figuring out how to measure the real-time web. Much of it will be continuing to build out the real-time web and learning about this fascinating shift taking place right under our feet.

random footnote:

A data point I was sent this a.m. by Iain that was interesting — yet it didn't seem to fit in anywhere?! Asian Twitter clients were yesterday over 5% of the requests visible in the garden hose.

lines in the sand …

I had the good fortune of receiving an advance copy of Ken Auletta's forthcoming book "Googled, The End of the World as We Know It". It's a fascinating read, one that raises a whole set of interesting dichotomies related to Google and their business practices. Contrast the fact that the Google business drives open and free access to data and intellectual property, so that the world becomes part of their corpus of data — yet they tightly guard their own IP in regards to how to navigate that data. Contrast that the users and publishers who gave Google the insights to filter and search data are the ones who are then taxed to access that data set. Contrast Google's move into layers beyond web sites (e.g., operating systems, web browsers) with their apparent belief that they won't have issues stemming from walled gardens and tying. In Google we have a company that believes "Don't be evil" is a sufficient promise for their users to trust their intentions, yet it is a company that has never articulated what it thinks is evil and what is not (Google.cn, anyone?).

There is a lot to think about in Auletta's book — it's a great read. When I began reading, I hoped for a prescriptive approach, a message about what Google should do; instead Auletta provides the corporate history and identifies the challenging issues, leaving it to the reader to form a position on where they lead. In my case, the issue it got me thinking most about was antitrust.

My bet is that in the coming few years Google is going to get hauled into an antitrust episode similar to what Microsoft went through a decade ago. Google’s business has grown to dominate navigation of the Internet. Matched with their incredibly powerful and distributed monetization engine, this power over navigation is going to run headlong into a regulator. I don’t know where (US or elsewhere) or when, but my bet is that it will happen sooner rather than later. And once it does happen, the antitrust process will again raise the thorny issue of whether regulation of some form is an effective tool in the fast-moving technology sector.


I was a witness against Microsoft in the remedy phase of its antitrust trial, and I still think a lot about whether technology regulation works. I now believe the core position I advocated in the Microsoft trial was wrong. I don't think government has a role in participating in technology design, and I believe the past ten years have adequately illustrated that the pace of innovation and change will outrun any one company's ability to monopolize a market. There's no question in my mind that Microsoft still has a de facto monopoly on the market for operating systems. There's also no question that the US and EU regulatory environments have constrained the company's actions, mostly for the better. But the primary challenges for Microsoft have come from Google and, to a lesser extent, from Apple. Microsoft feels the heat today, but it is coming from Silicon Valley, not Brussels or Washington — and it would be feeling this heat no matter what had happened in the regulatory sphere. The EU's decisions to unbundle parts of Windows did little good for RealNetworks or Netscape (which had been harmed by the bundling in the first place), and my guess is that Adobe's Flash/AIR and Mozilla's Firefox would be thriving even if the EU had taken no action at all.

But if government isn’t effective at forward-looking technology regulation, what alternatives do we have? We can restrict regulation to instances where there is discernible harm (approach: compensate for past wrongs, don’t design for future ones) or stay out and let the market evolve (approach: accept the voracious appetite of these platforms because they’re temporary). But is there another path? What about a corporate statement of intent like Google’s “Don’t be evil”?

“Don’t be evil” resonated with me because it suggested that Google as a company would respect its users first and foremost and that its management would set boundaries on the naturally voracious appetite of its successful businesses.

In the famous cover letter in Google's registration statement with the SEC before its IPO, its founders said: "Our goal is to develop services that significantly improve the lives of as many people as possible. In pursuing this goal, we may do things that we believe have a positive impact on the world, even if the near term financial returns are not obvious." The statement suggests that there is a set of things that Google would not do. Yet as Auletta outlines, "don't be evil" lacks forward-looking intent, and most importantly it doesn't outline what good might mean.

Nudge please …

Is there a third way — an alternative that places the company builders in a more active position? After almost two decades of development, I believe many of the properties of the Internet have been documented and discussed, so why not distill these and use them as guideposts? I love reading and rereading works like the Stupid Network, the Cluetrain Manifesto, the Cathedral and the Bazaar, or (something seasonal!) the Halloween Memos. In these works, and others, there is a mindset, an ethos or culture that is philosophically consistent with the medium. When I first heard "Don't be evil" my assumption was that it — and by definition "good" — referred to that very ethos. What if we can unpack these principles, so that builders of the things that make up these internets can make explicit their intent and begin to establish a compact, vs. a loose general statement of "goodness" that is subject to the constraint that "good" can be relative to the appetite of the platform? Regulation in a world of connected data, where the network effect of one platform helps form another, has much broader potential for unintended consequences. How we address these questions is going to affect the pace and direction of technology-based innovation in our society. If forward-looking regulation isn't the answer, can companies themselves draw some lines in the sand, unpack what "don't be evil" suggested, and nudge the market towards an architecture in which users, companies, and other participants in the open internet signal the terms and expectations they have?

Below is a draft list of principles. It is incomplete, I’m sure — I’m hoping others will help complete it — but after reading Auletta’s book and after thinking about this for a while I thought it would be worth laying out some thoughts in advance of another regulatory mess.

1. Think users 

When you start to build something online the first thing you think about is users. You may well think about yourself — user #1 — and use your own workflow to intuit what others might find useful, but you start with users and I think you should end with users. This is less of a principle and more of a rule of thumb, and a foundation for the other principles. It's something I try to remind myself of constantly. In my experience with big and small companies this rule of thumb seems to hold constant. If the person who is running the shop you are working for doesn't think about end users and/or doesn't use your product, it's time to move on. As Eric Raymond says, you should treat your users as co-developers. Google is a highly user-centric company for one of its scale; they stated this in the preamble to the IPO S-1 and they have managed to stay relatively user-centric, with few exceptions (Google.cn likely the most obvious, maybe the Book deal). Other companies — i.e.: Apple, Facebook — are less user-centric. Working on the Internet is like social anthropology: you learn by participant observation — the practice of doing and building is how you learn. In making decisions about services like Google Voice, Beacon etc., users' interests need to be where we start and where we end.

2. Respect the layers

In 2004 Richard Whitt, then at MCI, framed the argument for using the layer model to define communication policy. I find this very useful: it is consistent with the architecture of the internet, it articulates a clear separation of content from conduit, and it has the added benefit of being a useful visual representation of something that can be fairly abstract. Whitt's key principle is that companies should respect the distinction between these layers. Whitt captures in a simple framework what is wrong with the cable companies or the cell carriers wanting to mediate or differentially price bits. It also helps to frame the potential problems that Sidewiki, or the iPhone, or Google Voice, or Chrome presents (I'm struck by the irony that "respecting the layers" in the case of a browser translates into no features from the browser provider being embedded into the chrome of the browser — calling the browser Chrome is suggestive of exactly what I don't want, i.e. Google-specific chrome!). All these products have the potential to violate the integrity of the layers, by blending the content and the application layers. It would be convenient and simple to move on at this point, but it's not that easy.

There are real user benefits to tight coupling (and the blurring of layers), in particular during the early stages of a product's development. There were many standalone MP3 players on the market before the iPod. Yet it was the coupling of the iPod to iTunes, and the set of business agreements that Apple embedded into iTunes, that made that market take off (note that occurred eighteen months after the launch of the iPod). Same for the Kindle — coupling the device to Amazon's store and to the wireless "Whispernet" service is what distinguishes it from countless other (mostly inferior) ebooks. But roll the movie forward: it's now six and a half years after the launch of the coupled iTunes/iPod system. The device has evolved into a connected device that is coupled both to iTunes and AT&T, and the store has evolved way beyond music. Somewhere in that evolution Apple started to trip over the layers. The lines between the layers became blurred, and so did the lines between vendors, agents and users. Maybe it started with the DRM issue in iTunes, or maybe the network coupling, which in turn resulted in the Google Voice issue. I'm not sure when it happened, but it has happened, and unless something changes it's going to be more of a problem, not less. Users, developers and companies need to demand clarity around the layers, and transparency into the business terms that bound the layers. As iTunes scales — to become what it is in essence, a media browser — I believe the pressure to clarify these layers will increase. An example of where the layers have blurred without the feature creep / conflict is the search box in, say, the Firefox browser. Google is the default, there is a transparent economic agreement that places them there, and users can adjust and pick another default if they wish. One of the unique attributes of the internet is that the platform on which we build things is the very same as the one we use to "consume" those things (remember the thrill of "view source" in the browser). Given this recursive aspect of the medium, it is especially important to respect the layers. Things built on the Internet can themselves redefine the layers.

3. Transparency of business terms

When a platform like Google, iTunes, Facebook, or Twitter gets to scale it rapidly forms a basis on which third parties can build businesses. Clarity around the business terms for inclusion in the platform, and around what drives promotion and monetization within the platform, is vital to the long-term sustainability of the underlying platform. It also reduces the cost of inclusion by standardizing the business interface into the platform. AdSense is a remarkable platform for monetization. The Google team did a masterful job of scaling a self-service (read: standardized) interface into their monetization system. The benefits of this have been written about at length, yet aspects of the platform like "smart pricing" aren't transparent. See this blog post from Google about smart pricing and some of the comments in the thread. They include: "My eCPM has tanked over the last few weeks and my earnings have dropped by more then half, yet my traffic is still steady. I'm lead to believe that I have been smart priced but with no information to tell me where or when"

Back in 2007 I ran a company called Fotolog. The majority of the monetization at Fotolog was via Google. One day our Google revenues fell by half. Our traffic hadn't fallen, and up to that point our Google revenue had been pretty stable. Something was definitely wrong, but we couldn't figure out what. We contacted our account rep at Google, who told us that there was a mistake on our revenue dashboard. After four days of revenues running at the same depressed level we were told we had been "smart priced". Google would not offer us visibility into how this is measured or what the competitive cluster is against which you are being tested. That opacity made it very hard for Fotolog to know what to do. If you get smart priced you can end up having to re-organize your entire base of inventory, all while groping to understand what is happening in the black box of Google. Google points out they don't directly benefit from many of these changes in pricing (the advertisers do pay less per click), but Google does benefit from the increased liquidity in the market. As with Windows, there is little transparency in regards to the pricing within the platform and the economics. This in turn leaves a meaningful constituent on the sideline, unsatisfied or unclear about the terms of their business relationship with the platform. I would argue that smart pricing, and a lack of transparency into how their monetization platform can be applied to social media, is driving advertisers to services like Facebook's new advertising platform.
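
For readers who don't live in ad metrics: eCPM is just revenue per thousand impressions, which is why a repricing event shows up instantly even when traffic is steady. A sketch with invented numbers:

```python
def ecpm(revenue, impressions):
    # effective CPM: revenue earned per 1,000 ad impressions
    return 1000 * revenue / impressions

# hypothetical day before and after a "smart pricing" event:
# traffic (impressions) unchanged, revenue halved overnight
print(ecpm(5000.0, 10_000_000))  # 0.5  -> $0.50 eCPM
print(ecpm(2500.0, 10_000_000))  # 0.25 -> $0.25 eCPM, with traffic steady
```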

Back to Apple. iTunes is, as I outlined above, a media browser — we think about it as an application because we can only access Apple stuff through it: a simple, yet profound, design decision. Apple created this amazing experience that arguably worked because it was tightly coupled end to end, i.e., the experience stretched from the media through the software to the device. Then when the device became a phone, the coupling extended to the network (here in the US, AT&T). I remember two years ago I almost bricked my iPhone — Apple reset my iPhone to its birth state — because I had enabled installing applications that weren't "blessed" by Apple. My first thought was, "isn't this my phone? What right does Apple have to control what I do with it — didn't I buy it?" A couple of months ago, Apple blocked Google Voice's iPhone application; two weeks ago Apple rejected someecards' application from the app store while permitting access to a porn application (both were designated 17+; one was satire, the other wasn't). The issue here isn't monopoly control, per se — Apple certainly does not have a monopoly on cell phones, nor AT&T on cell phone networks. The trouble is that there is little to no transparency into *why* these applications weren't admitted into the app store. (someecards' application did eventually make it over the bar; you can find it here.) Will Google Voice get accepted? Will Spotify? Rdio? someecards? As with the Microsoft of yesteryear (which, among other ills, forbade disclosure of its relationships with PC makers), there is an opaqueness to the business principles that underlie the iTunes app store. This is a design decision that Apple has made and one that, so far anyway, users and developers have accepted. And, in my opinion, it is flawed. Ditto for Facebook. This past week, the terms for application developers were modified once again. A lot of creativity, effort, and money has been invested in Facebook applications — the platform needs a degree of stability and transparency for developers and users.

4. Data in, data out?

APIs are a cornerstone of the emerging mesh of services that sit on top of and around platforms. The data flows from service providers should, where possible, be two-way. Services that consume an API should publish one of their own. The data ownership issues among these services are going to become increasingly complex. I believe that users have the primary rights to their data, and the applications that users select have a proxy right, as do other users who annotate and comment on the data set. If you accept that as a reasonable proposition, then it follows that service providers should have an obligation to let users export that data and also let other service providers "plug into" that data stream. The compact I outline above is meaningfully different from what some platforms offer today. Facebook asserts ownership rights over the data you place in its domain; in most cases the data is not exportable by the user or another service provider (e.g., I cannot export my Facebook pictures to Flickr, nor wire up my feed of pictures from Facebook to Twitter). Furthermore, if I leave Facebook they still assert rights to my images. I know this is technically the easiest answer — having to delete pictures that are now embedded in other people's feeds is a complex user experience — but I think that's what we should expect of these platforms. The problem is far simpler if you just link to things and then promote standards for interconnection. These standards exist today in the form of RSS or Activity Streams — pick your flavor, let users move data from site to site, and let users store and save their data.
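
As a concrete, if simplified, illustration of "data in, data out": anything a service exposes as RSS can be re-expressed as an Activity Streams-style object and handed to the next service. The field mapping below is my rough approximation, not the official spec:

```python
import json
import xml.etree.ElementTree as ET

RSS = """<rss><channel><item>
  <title>My photo</title>
  <link>http://example.com/photos/1</link>
  <pubDate>Sat, 07 Nov 2009 12:00:00 GMT</pubDate>
</item></channel></rss>"""

def rss_to_activities(rss_text, actor):
    # Map each RSS item onto a rough Activity Streams-style "post" activity.
    activities = []
    for item in ET.fromstring(rss_text).iter("item"):
        activities.append({
            "actor": actor,
            "verb": "post",
            "object": {"displayName": item.findtext("title"),
                       "url": item.findtext("link")},
            "published": item.findtext("pubDate"),
        })
    return activities

print(json.dumps(rss_to_activities(RSS, "someuser"), indent=2))
```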

5. Do what you do best, link to the rest

Jeff Jarvis's motto for newsrooms applies to service providers as well. I believe the next stage of the web is going to be characterized by a set of loosely coupled services — services that share data — offering end users the ability to either opt for an end-to-end solution or roll their own in a specific domain where they have depth of interest, knowledge or data. The first step in this process is that real identity is becoming public and separable from the underlying platform (vs. private in, say, The Facebook, or alias-based in most earlier social networks). In the case of services like Facebook Connect and Twitter OAuth this not only simplifies the user experience; identity also pre-populates a social graph into the service in question. OAuth flows identity into a user's web experience, vs. the disjointed efforts of the past. This is the starting point. We are now moving beyond identity into a whole set of services stitched together by users. Companies of yesteryear, as they grew in scale, started to co-opt vertical services of the web into their domain (remember when AOL put a browser inside of its client, with the intention of "super-setting" the web). This was an extreme case — but it is not all that different from Facebook's "integration" of email, providing a messaging system with no IMAP access: someone sends an email to my IMAP "email" account to tell me to check that I have a Facebook "email". This approach won't scale for users. Kevin Marks, Marc Cantor and Jerry Michalski are some of the people who have been talking for years about an open stack. In the latter half of this presentation Kevin outlines the emerging stack. I believe users will opt — over time — for best-in-class services vs. the walled-garden, roll-it-once approach.

 


6. Widen my experience – don't narrow it

Google search increasingly serves to narrow my experience on the web, rather than expand it. This is driven by a combination of the pressure inherent in their business model to push page views within their domain vs. outside (think Yahoo Finance, Google OneBox etc.) and the evolution of an increasingly personalised search experience, which tends to feed back to me, and amplify, my existing biases — serving to narrow my perspective vs. broaden it. Auletta talks about this at the end of his book. He quotes Nick Carr: "They (Google) impose homogeneity on the Internet's wild heterogeneity. As the tools and algorithms become more sophisticated and our online profiles more refined, the Internet will act increasingly as an incredibly sensitive feedback loop, constantly playing back to us, in amplified form, our existing preferences." Features like social search will only exacerbate this problem. This point is the more subtle side of the point above. I wrote a post a year or two ago about thinking of centres vs. wholes and networks vs. destinations. As the web of pages becomes a web of flows and streams, the experience of the web is going to widen again. You can see this in the data — the charts in the distribution now post illustrate the shift that is taking place. As the visible — user-facing — part of a web site becomes less important than the APIs and the myriad of ways that users access the underlying data, the web, and our experience of it, will widen, again.

Conclusions

I have outlined six broad principles that I believe can be applied as a design methodology for companies building services online today. They are inspired by many others; the full list would be very long, and I won’t attempt to document it since I would surely miss someone. Building companies on today’s internet is by definition an exercise in standing on the shoulders of giants. Internet standards from TCP/IP onward are the strong foundation of an architecture of participation. As users pick and choose which services they want to stitch together into their cloud, can companies build services on these shared data sets in a manner that is consistent with the expectations we hold for the medium? The web has a grain to it, and after 15 years of innovation we can begin to observe the outlines of that grain. We may not always be able to describe exactly what makes something “web consistent”, but we do know it when we see it.

The Microsoft antitrust trial is a case study in regulators acting as design architects. It didn’t work. Google’s “don’t be evil” mantra represents an alternative approach, one that is admirable in principle but lacking in specificity. I outline a third way here: we as company creators coalesce around a set of principles stating what we aspire to do and not to do, principles that will be visible in our words and our deeds. We can then nudge our own markets forward rather than waiting for the “helping hand” of government.


diversity within the real time stream

I got a call on Friday from a journalist at the Financial Times who was writing about the Twitter ecosystem. We had an interesting conversation, and he ran his piece over the weekend: Twitter branches out as London’s ‘ecosystem’ flies.

As the title suggests, the focus was on the Twitter ecosystem in London. Our conversation also touched on the overall size and health of the real-time ecosystem, a topic that didn’t make it into the article. It’s hard to gauge the health of a business ecosystem that is still very much under development and has yet to mature into one that produces meaningful revenues. Yet the question got me thinking — and it reminded me that it has been a while since I posted here. It has been one busy summer. I have a couple of long posts in the works, but for now I want to do a quick post on the real-time ecosystem and offer up some metrics on its health.

Back in June I did a presentation at Jeff Pulver’s 140conf, the topic of which was the real-time / Twitter ecosystem. Since then, I have been thinking about the diversity of data sources, notably the question of where people are publishing and consuming real-time data streams. At betaworks we are fairly deep into the real-time / Twitter ecosystem. In fact, every company at betaworks is a participant, in one manner or another, in this ecosystem — and that’s a feature, not a bug! Of the 20 or so companies in the betaworks network, there is a subset that we operate; one of those is bit.ly.

In an attempt to answer this question about the diversity of the ecosystem, let me run through some internal data from bit.ly. bit.ly is a URL shortener that offers, among other things, real-time tracking of the clicks on each link (add “+” to any bit.ly URL to see this data stream). With a billion bit.ly links clicked on in August — 300m last week — bit.ly has become almost part of the infrastructure of the real-time cloud. Given its scale, bit.ly’s data is a fair proxy for the activity of the real-time stream, at least for the links in the stream.

On Friday of this week (yesterday) there were 20,924,833 bit.ly links created across the web (we call these “encodes”). These 20.9m encodes are not unique URLs, since one popular URL might have been shortened by multiple people, but each encode represents intentionality of some form. bit.ly in turn retains a parent : child mapping, so that you can see what your sharing of a link generates vs. the population (e.g., I shared a video on Twitter the other day; my specific bit.ly link got 88 clicks, out of a total of 250 clicks on any bit.ly link to that same video — see http://bit.ly/Rmi25+).
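To make the parent : child idea concrete, here is a rough Python sketch of the bookkeeping involved. This is not bit.ly’s actual schema; the second hash is invented, and the click counts are taken from the example above:

```python
from collections import defaultdict

# A rough sketch of the parent:child idea, not bit.ly's real schema.
# One long URL (the parent) aggregates clicks across every individual
# short link (a child) that points at it. "xYz12" is a made-up hash;
# "Rmi25" is the link from the example above.
class LinkStats:
    def __init__(self, long_url):
        self.long_url = long_url
        self.child_clicks = defaultdict(int)  # child hash -> click count

    def record_click(self, child_hash):
        self.child_clicks[child_hash] += 1

    def total_clicks(self):
        return sum(self.child_clicks.values())

stats = LinkStats("http://example.com/video")
for _ in range(88):
    stats.record_click("Rmi25")   # clicks on my share of the video
for _ in range(162):
    stats.record_click("xYz12")   # clicks on everyone else's shares

print(stats.child_clicks["Rmi25"], "clicks via my link out of",
      stats.total_clicks(), "total")   # 88 out of 250
```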

So where were these 20.9m encodes created? Approximately half of the encodes took place within the Twitter ecosystem. No surprise there: Twitter is clearly the leading public real-time stream, and about 20% of the updates on Twitter contain at least one link, approximately half of which are bit.ly links. But here is something surprising: less than 5% of the 20.9m came from Twitter.com itself (i.e., from Twitter’s use of bit.ly as the default URL shortener). Over 45% of the total encodes came from other services associated in some way with Twitter — the Twitter ecosystem — a long and diverse list of services and companies that use bit.ly.
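For clarity, here is the back-of-envelope arithmetic behind those percentages (the shares are approximate):

```python
# Rough breakdown of Friday's encodes, using the approximate shares
# above. All percentages are estimates, not exact figures.
total_encodes = 20_924_833

twitter_ecosystem = 0.50 * total_encodes   # all Twitter-related encodes
twitter_dot_com = 0.05 * total_encodes     # Twitter.com's own bit.ly use
third_party_apps = twitter_ecosystem - twitter_dot_com

print(f"Twitter ecosystem:  {twitter_ecosystem:,.0f}")
print(f"Twitter.com itself: {twitter_dot_com:,.0f}")
print(f"Third-party apps:   {third_party_apps:,.0f}")  # ~45% of all encodes
```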

The balance of the encodes came from other areas of the real time web, outside of Twitter. Google Reader incorporated bit.ly this summer, as did Nokia, CBS, Dropbox, and some tools within Facebook. And then of course people use the bit.ly web site — which has healthy growth — to create links and then share them via instant-messaging services, MySpace, email, and countless other communications tools.

The bit.ly links that are created are also very diverse. It’s harder to summarise this without offering a list of 100,000 URLs, but suffice it to say that there are a lot of pages from the major web publishers, lots of YouTube links, lots of Amazon and eBay product pages, and lots of maps. And then there is a long, long tail of other URLs. When a pile-up happens in the social web it is invariably triggered by link-sharing, and so bit.ly usually sees it in the seconds before it happens.

This data says to me that the ecosystem as a whole is becoming fairly diverse. Lots of endpoints are publishing (i.e., creating encodes), and many endpoints are offering ways to use the data streams.

In turn, this diversity of the emerging ecosystem is, I believe, an indicator of its health. Monocultures aren’t very resilient to change; ecosystems tend to be more resilient and adaptable. For me, these few data points suggest that the real-time stream is becoming more and more interesting and more and more diverse.