Category API’s

What can homescreens tell us about the way people use their phones. 

Screen Shot 2014-02-22 at 5.07.15 PM At betaworks we aim to build apps that people love: the essential apps that people use every day and that they obsessively want to have on the homescreen of their devices, one touch away. Yet, measuring progress against this goal is a challenge. We have internal analytics, tools, and KPIs that give us an indication of progress. We are obsessive users of Chartbeat, which we helped design specifically to track real-time social engagement. We use Twitter and social channels to measure the scale of engagement and its depth. Twitter is especially good at giving us a sense of depth: examining the language, the influencer clusters and the sentiment that people use to describe our work. When people talk about Dots as an obsession they love or a Tapestry story as something that moved them, we take these as indicators that we are accomplishing our goal. However, it’s just an indicator and the world we build in today is balkanized. More often than not, we can’t get enough visibility into many of the platforms on which we build experiences. Whether it’s the App Store, Facebook, Instagram, Snapchat, most platforms today are opaque in terms of metrics and data. But at the start of each year there is an elegant hack we apply.

Each new year, people share pictures of their homescreens on Twitter, Instagram and other social sharing platforms. If you search Twitter for #homescreen2014, you will see a stream of pictures of people’s homescreens — the primary screen of their phone with all the apps they choose to keep there. It is fascinating to browse through this stream of images — analyzing it is even more interesting. Right after the new year, we culled 1000 homescreen images from Twitter, cut up the images and tabulated the apps on the homescreens vs. those in folders. Admittedly, it’s a hack, and the sample is skewed: among all smartphone users, we’re biasing completely for people who use Twitter, and among Twitter users we’re selecting for the type of person who is willing to share a homescreen image. But, caveats aside, the data are fascinating. Eighty-seven percent of homescreens shared in our sample were iOS and 12 percent were Android (1 percent was Windows). For the sake of consistency, we focused the analysis below on iOS — the 87 percent.

The first metric that we pull from the sample is the percent of people who have apps we are developing at the betaworks studio on their homescreens. We then look at the investments we have made. These are our KPIs, so let me start with them and then offer up some data and perspective beyond betaworks.

Our results. Betaworks apps we are building at the studio are on 17.3 percent of people’s phones, up from less than 5 percent at the start of 2013. In terms of the investments that betaworks has made — that haven’t exited — they account for a further 15 percent. That is a significant jump in presence and share.

Read more over on medium

Distribution … now

In February 1948, Communist leader Klement Gottwald stepped out on the balcony of a Baroque palace in Prague to address hundreds of thousands of his fellow citizens packed into Old Town Square. It was a crucial moment in Czech history – a fateful moment of the kind that occurs once or twice in a millennium.

Gottwald was flanked by his comrades, with Clementis standing next to him. There were snow flurries, it was cold, and Gottwald was bareheaded. The solicitous Clementis took off his own fur cap and set it on Gottwald’s head.

The Party propaganda section put out hundreds of thousands of copies of a photograph of that balcony with Gottwald, a fur cap on his head and comrades at his side, speaking to the nation. On that balcony the history of Communist Czechoslovakia was born. Every child knew the photograph from posters, schoolbooks, and museums.

Four years later Clementis was charged with treason and hanged. The propaganda section immediately airbrushed him out of history, and obviously, out of all the photographs as well. Ever since, Gottwald has stood on that balcony alone. Where Clementis once stood, there is only bare palace wall. All that remains of Clementis is the cap on Gottwald’s head.

Book of Laughter and Forgetting, Milan Kundera

The rise of social distribution networks

Over the past year there has been a rapid shift in social distribution online.    I believe this evolution represents an important change in how people find and use things online. At betaworks I am seeing some of our companies get 15-20% of daily traffic via social distribution — and the percentage is growing.    This post outlines some of the aspects of this shift that I think are most interesting.   The post itself is somewhat of a collage of media and thinking.

Distribution is one of the oldest parts of the media business.    Content is assumed to be king so long as you control the distribution flow to that content. From newspapers to NewsCorp companies have understand this model well.   Yet this model has never suited the Internet very well.     From the closed network ISP’s to Netcenter.   Pathfinder to Active desktop, Excite Lycos, Pointcast to the Network computer.   From attempts to differentially price bits to preset bookmarks on your browser — these are all attempts at gate keeping attention and navigation online.    Yet the relative flatness of the internet and its hyperlinked structure has offered people the ability to route around these toll gates.   Rather than client software or access the nexus of distribution became search.    Today there seems to be a new distribution model that is emerging.   One that is based on people’s ability to publically syndicate and distribute messages — aka content — in an open manner.    This has been a part of the internet since day one — yet now its emerging in a different form — its not pages, its streams, its social and so its syndication.    The tools serve to produce, consume, amplify and filter the stream.     In the spirit of this new wave of Now Media here is a collage of data about this shift.

Dimensions of the now web and how is it different?

Start with this constant, real time, flowing stream of data getting published, republished, annotated and co-opt’d across a myriad of sites and tools.    The social component is complex — consider where its happening.    The facile view is to say its Twitter, Facebook, Tumblr or FriendFeed — pick your favorite service.    But its much more than that because all these sites are, to varying degrees, becoming open and distributed. Its blogs, media storage sites (ie: twitpic) comment boards or moderation tools (ie: disqus) — a whole site can emerge around an issue — become relevant for week and then resubmerge into the morass of the data stream, even publishers are jumping in, only this week the Times pushed out the Times Wire.    The now web — or real time web — is still very much under construction but we are back in the dark room trying to understand the dimensions and contours of something new, or even to how to map and outline its borders. Its exciting stuff.

Think streams …

First and foremost what emerges out of this is a new metaphor — think streams vs. pages.     This seems like an abstract difference but I think its very important.    Metaphors help us shape and structure our perspective, they serve as a foundation for how we map and what patterns we observe in the world.     In the initial design of the web reading and writing (editing) were given equal consideration – yet for fifteen years the primary metaphor of the web has been pages and reading.     The metaphors we used to circumscribe this possibility set were mostly drawn from books and architecture (pages, browser, sites etc.).    Most of these metaphors were static and one way.     The steam metaphor is fundamentally different.  Its dynamic, it doesnt live very well within a page and still very much evolving.    Figuring out where the stream metaphor came from is hard — my sense is that it emerged out of RSS.    RSS introduced us to the concept of the web data as a stream — RSS itself became part of the delivery infrastructure but the metaphor it introduced us to is becoming an important part of our eveyday day lives.

A stream.   A real time, flowing, dynamic stream of  information — that we as users and participants can dip in and out of and whether we participate in them or simply observe we are are a part of this flow.     Stowe Boyd talks about this as the web as flow: “the first glimmers of a web that isnt about pages and browsers” (see this video interview,  view section 6 –> 7.50 mins in).       This world of flow, of streams, contains a very different possibility set to the world of pages.   Among other things it changes how we perceive needs.  Overload isnt a problem anymore since we have no choice but to acknowledge that we cant wade through all this information.   This isnt an inbox we have to empty,  or a page we have to get to the bottom of — its a flow of data that we can dip into at will but we cant attempt to gain an all encompassing view of it.     Dave Winer put it this way in a conversation over lunch about a year ago.    He said “think about Twitter as a rope of information — at the outset you assume you can hold on to the rope.  That you can read all the posts, handle all the replies and use Twitter as a communications tool, similar to IM — then at some point, as the number of people you follow and follow you rises — your hands begin to burn. You realize you cant hold the rope you need to just let go and observe the rope”.      Over at Facebook Zuckerberg started by framing the flow of user data as a news feed — a direct reference to RSS — but more recently he shifted to talk about it as a stream: “… a continuous stream of information that delivers a deeper understanding for everyone participating in it. As this happens, people will no longer come to Facebook to consume a particular piece or type of content, but to consume and participate in the stream itself.”    I have to finish up this section on the stream metaphor with a quote from Steve Gillmor.    He is talking about a new version of Friendfeed, but more generally he is talking about real time streams.     The content and the language — this stuff is stirring souls.

We’re seeing a new Beatles emerging in this new morning of creativity, a series of devices and software constructs that empower us with both the personal meaning of our lives and the intuitive combinations of serendipity and found material and the sturdiness that only rigorous practice brings. The ideas and sculpture, the rendering of this supple brine, we’ll stand in awe of it as it is polished to a sparkling sheen. (full article here)

Now, Now, Now

The real time aspect of these streams is essential.  At betaworks we are big believers in real time as a disruptive force — it’s an important aspect of many of our companies — it’s why we invested a lot of money into making bit.ly real time.  I remember when Jack Dorsey first saw bit.ly’s  plus or info page (the page you get to by putting a “+” at the end of any bit.ly URL) –  he said this is “great but it updates on 30 min cycles, you need to make it real time”.   This was August of ’08 — I registered the thought, but also thought he was nuts.    Here we sit in the spring of ’09 and we invested months in making bit.ly real time –  it works, and it matters.   Jack was right — its what people want to see the effects on how a meme is are spreading — real time.   It makes sense — watching a 30 min delay on a stream — is somewhere between weird and useless.   You can see an example of the real time bit.ly traffic flow to an URL  here. Another betaworks company, Someecards, is getting 20% of daily traffic from Twitter.   One of the founders Brook Lundy said the following “real time is now vital to what do.    Take the swine flu — within minutes of the news that a pandemic level 5 had been declared — we had an ecard out on Twitter”.    Sardonic, ironic, edgy ecards — who would have thought they would go real time.    Instead of me waxing on about real time let me pass the baton over to Om — he summarizes the shift as well as one could:

  1. “The web is transitioning from mere interactivity to a more dynamic, real-time web where read-write functions are heading towards balanced synchronicity. The real-time web, as I have argued in the past, is the next logical step in the Internet’s evolution. (read)
  2. The complete disaggregation of the web in parallel with the slow decline of the destination web. (read)
  3. More and more people are publishing more and more “social objects” and sharing them online. That data deluge is creating a new kind of search opportunity. (read)”

Only connect …

The social aspects of this real time stream are clearly a core and emerging property.   Real time gives this ambient stream a degree of connectedness that other online media types haven’t.  Presence, chat, IRC and instant messaging all gave us glimmers of what was to come but the “one to one” nature of IM meant that we could never truly experience its social value.    It was thrilling to know someone else was on the network at the same time as you — and very useful to be able to message them but it was one to one.    Similarly IRC and chats rooms were open to one to many and many to many communications but they usually weren’t public.   And in instances that they were public the tools to moderate and manage the network of interactions were missing or crude.   In contrast the connectedness or density of real time social interactions emerging today is astounding — as the examples in the collage above illustrate.    Yet its early days.    There are a host of interesting questions on the social front.    One of the most interesting is, I think, how willthe different activity streams intersect and combine / recombine or will they simple compete with one another?      The two dominant, semi-public, activity streams today are Facebook and Twitter.    It is easy to think about them as similar and bound for head on competition — yet the structure of these two networks is fairly different.    Whether its possible or desirable to combine these streams is an emerging question — I suspect the answer is that over time they will merge but its worth thinking about the differences when thinking about ways to bring them together.      The key difference I observe between them are:

#1. Friending on Facebook is symmetrical — on Twitter it’s asymmetrical.    On Facebook if I follow you, you need to follow me, not so on Twitter, on Twitter I can follow you and you can never notice or care.   Similarly, I can unfollow you and again you may never notice or care.   This is an important difference.   When I ran Fotolog I observed the dynamics associated with an asymmetrical friend network — it is, I think, a closer approximation of the way human beings manage social relationships.    And I wonder the extent to which the Facebook sysmetrical friend network was / is product of the audience for which Facebook was intially created (students).   When I was a student I was happy to have a symmetrical social network, today not so much.

#2. The data on Facebook is assumed to be mostly private, or shared within private groups, Facebook itself has been mostly closed to the open web — and Facebook asserts a level of ownership over the data that passes through its network.   In contrast the data on Twitter is assumed to be public and Twitter asserts very few rights over the underlying data.    These are broad statements — worth unpacking a bit.    Facebook has been called a walled garden — there are real advantages to a walled garden — AOL certainly benefited from been closed to the web for a long long time.   Yet the by product of a closed system is that (a) data is not accessible or searchable by the web in general –ie: you need to be inside the garden to navigate it  (b) it assumes that the pace innovation inside the garden will match or exceed the rate of innovation outside of the garden and (c) the assertion of rights over the content within the garden means you have to mediate access and rights if and when those assets flow out of the garden.   Twitter takes a different approach.     The core of Twitter is a simple transport for the flow of data — the media associated with the post is not placed inline — so Twitter doesnt need to assert rights over it.    Example — if I post a picture within Facebook, Facebook asserts ownership rights over that picture, they can reuse that picture as they see fit.    If i leave Facebook they still have rights to use the image I posted.    In contrast if I post a picture within Twitter the picture is hosted on which ever service I decided to use.   What appears in Twitter is a simple link to that image.   I as the creator of that image can decide whether I want those rights to be broad or narrow.

#3. Defined use case vs. open use case.    Facebook is a fantastically well designed set of work-flows or use cases.   I arrive on the site and it present me with a myriad of possible paths I can follow to find people, share and post items and receive /measure associated feedback. Yet the paths are defined for the users.   If Facebook  is the well organized, pre planned town Twitter is more like new urban-ism — its organic and the paths are formed by the users.    Twitter is dead simple and the associated work-flows aren’t defined, I can devise them for myself (@replies, RT, hashtags all arose out of user behavior rather than a predefined UI.   At Fotolog we had a similar set of emergent, user driven features.  ie:  groups formed organically and then over time the company integrated the now defined work-flow into the system).    There are people who will swear Twitter is a communications platform, like email or IM — other say its micro-blogging — others say its broadcast — and the answer is that its all of the above and more.   Its work flows are open available to be defined by users and developers alike.   Form and content are separated in way that makes work-flows, or use cases open to interpretation and needs.

As I write this post Facebook is rapidly re-inventing itself on all three of the dimensions above.    It is changing at a pace that is remarkable for a company with its size membership.     I think its changing because Facebook have understood that they cant attempt to control the stream — they need to turn themselves inside out and become part of the web stream.   The next couple of years are going to be pretty interesting.       Maybe E.M. Forrester had it nailed in Howard’s End:  Only connect! That was the whole of her sermon  … Live in fragments no longer.

The streams are open and distributed and context is vital

The streams of data that constitute this now web are open, distributed, often appropriated, sometimes filtered, sometimes curated but often raw.     The streams make up a composite view of communications and media — one that is almost collage like (see composite media and wholes vs. centers).     To varying degrees the streams are open to search / navigation tools and its very often long, long tail stuff.  Let me run out some data as an example.     I pulled a day of bit.ly data — all the bit.ly links that were clicked on May 6th.      The 50 most popular links  generated only 4.4% (647,538) of the total number of clicks.    The top 10 URL’s were responsible for half (2%) of those 647,538 clicks.  50% of the total clicks (14m) went to links that received  48 clicks or less.   A full 37% of the links that day received only 1 click.   This is a very very long and flat tail — its more like a pancake.   I see this as a very healthy data set that is emerging.

Weeding out context out of this stream of data is vital.     Today context is provided mostly via social interactions and gestures.    People send out a message — with some context in the message itself and then the network picks up from there.   The message is often re-tweeted, favorite’d,  liked or re-blogged, its appropriated usually with attribution to creator or the source message — sometimes its categorized with a tag of some form and then curation occurs around that tag — and all this time, around it spins picking up velocity and more context as it swirls.    Over time  tools will emerge to provide real context to these pile up’s.   Semantic extraction services like Calais, Freebase, Zemanta, Glue, kynetx and Twine will offer a windows of context into the stream — as will better trending and search tools.      I believe search gets redefined in this world, as it collides with navigation– I blogged at length on the subject last winter.   And filtering  becomes a critical part of this puzzle.   Friendfeed is doing fascinating things with filters — allowing you to navigate and search in ways that a year ago could never have been imagined.

Think chunk
Traffic isnt distributed evenly in this new world.      All of a sudden crowds can show up on your site.     This breaks with the stream metaphor a little — its easy to think of flows in the stream as steady — but you have to think in bursts — this is where words like swarms become appropriate.    Some data to illustrate this shift.   The charts below are tracking the number of users simultaneously on a site.    The site is a political blog.    You can see on the left that the daily traffic flows are fairly predictable — peaking around 40-60 users on the site on an average day, peaks are around mid day.    Weekends are slow  — the chart is tracking Monday to Monday, from them wednesday seems to be the strongest day of the week — at least it was last week.   But then take a look at the chart on the right — tracking the same data for the last 30 days.   You can see that on four occasions over the last 30 days all of a sudden the traffic was more than 10x the norm.   Digging into these spikes — they were either driven by a pile up on Twitter, Facebook, Digg or a feature on one of the blog aggregation sites.    What do you do when out of no where 1000 people show up on your site?

CB traffic minnesotaindependent.com

The other week I was sitting in NY on 14th street and 9th Avenue with a colleague talking about this stuff.   We were accross the street from the Apple store and it struck me that there was a perfect example of a service that was setup to respond to chunky traffic.     If 5,000 people show up at an Apple store in the next 10 minutes — they know what to do.   It may not be perfect but they manage the flow of people in and out of the store, start a line outside, bring people standing outside water as they wait. maybe take names so people can leave and come back.   I’ve experienced all of the above while waiting in line at that store.   Apple has figured out how to manage swarms like a museum or public event would.    Most businesses and web sites have no idea how to do this.    Traffic in the other iterations of the web was more or less smooth but the future isnt smooth — its chunky.    So what to do when a burst takes place?   I have no real idea whats going to emerge here but cursory thoughts include making sure the author is present to manage comments etc., build in a dynamic mechanism to alert the crowd to other related items?    Beyond that its not clear to me but I think its a question that will be answered — since users are asking it.    Where we are starting at betaworks is making sure the tools are in place to at least find out if a swarm has shown up on your site.    The example above was tracked using Chartbeat — a service we developed.    We dont know what to do yet — but we do know that the first step is making sure you actually know that the tree fell — real time.

Where is Clementis’s hat? Where is the history?

I love that quote from Kundera.    The activity streams that are emerging online are all these shards — these ambient shards of people’s lives.    How do we map these shards to form and retain a sense of history?     Like the hat objects exist and ebb and flow with or without context.    The burden to construct and make sense of all of this information flow is placed, today, mostly on people.    In contrast to an authoritarian state eliminating history — today history is disappearing given a deluge of flow, a lack of tools to navigate and provide context about the past.    The cacophony of the crowd erases the past and affirms the present.   It started with search and now its accelerated with the now web.    I dont know where it leads but I almost want a remember button — like the like or favorite.   Something that registers  something as a memory — as an salient fact that I for one can draw out of the stream at a later time.   Its strangely compforting to know everything is out there but with little sense of priority of ability to find it it becomes like a mythical library — its there but we cant access it.

Unfinished

This media is unfinished, it evolves, it doesnt get finished or completed.    Take the two quotes below — both from Brian Eno, but fifteen years apart — they outline some of the boundaries of this aspect of the stream.

In a blinding flash of inspiration, the other day I realized that “interactive” anything is the wrong word. Interactive makes you imagine people sitting with their hands on controls, some kind of gamelike thing. The right word is “unfinished.” Think of cultural products, or art works, or the people who use them even, as being unfinished. Permanently unfinished. We come from a cultural heritage that says things have a “nature,” and that this nature is fixed and describable. We find more and more that this idea is insupportable – the “nature” of something is not by any means singular, and depends on where and when you find it, and what you want it for. The functional identity of things is a product of our interaction with them. And our own identities are products of our interaction with everything else. Now a lot of cultures far more “primitive” than ours take this entirely for granted – surely it is the whole basis of animism that the universe is a living, changing, changeable place. Does this make clearer why I welcome that African thing? It’s not nostalgia or admiration of the exotic – it’s saying, Here is a bundle of ideas that we would do well to learn from.  (Eno, Wired interview, 1995)

In an age of digital perfectability, it takes quite a lot of courage to say, “Leave it alone” and, if you do decide to make changes, [it takes] quite a lot of judgment to know at which point you stop. A lot of technology offers you the chance to make everything completely, wonderfully perfect, and thus to take out whatever residue of human life there was in the work to start with. It would be as though someone approached Cezanne and said, “You know, if you used Photoshop you could get rid of all those annoying brush marks and just have really nice, flat color surfaces.” It’s a misunderstanding to think that the traces of human activity — brushstrokes, tuning drift, arrhythmia — are not part of the work. They are the fundamental texture of the work, the fine grain of it. (Eno, Wired interview, 2008)

The media, these messages, stream — is clearly unfinished and constantly evolving as this post will likely also evolve as we learn more about the now web and the emerging social distribution networks.

Gottwald minus Clementis

Addendum, some new links

First — thank you to Alley Insider for re-posting the essay, and to TechCrunch and GigaOm for extending the discussion.    This piece at its heart is all about re-syndication and appropriation – as Om said “its all very meta to see this happen to the essay itself”.     There is also an article that I read after posting from Nova Spivack that I should have read in advance — he digs deep into the metaphor of the web as a stream.    And Fred Wilson and I did a session at the social media bootcamp last week where he talked about shifts in distribution dynamics — he outlines his thoughts about the emerging social stack here.   I do wish there was an easy way to thread all the comments from these different sites into the discussion here — the fragmentation is frustrating, the tools need to get smarter and make it easier to collate comments.

bit.ly now

We have had a lot going on at bit.ly over the past few weeks — some highlights — starting with some data.

• bit.ly is now encoding (creating) over 10m URL’s or links a week now — not too shabby for a company that was started last July.

• We picked the winners of the API contest last week after some excellent submissions

• Also last week the bit.ly team started to push out the new real time metrics system. This system offers the ability to watch in real time clicks to a particular bit.ly URL or link  The team are still tuning and adjusting the user experience but let me outline how it works.

If you take any bit.ly link and add a “+” to the end of the URL you get the Info Page for that link.  Once you are on the info page you can see the clicks to that particular link updated by week, by day or live — a real time stream of the data flow.

An example:

On the 15th of February a bit.ly user shortened a link to an article on The Consumerist about Facebook changing their terms of service.  The article was sent around a set of social networks and via email with the following link http://bit.ly/mDwWb.   It picked up velocity and two days later the bit.ly info page indicates that the link has been clicked on over 40,000 times — you can see the info page for this link below (or at http://bit.ly/mDwWb+ ).

In the screenshot below

1.) you see a thumbnail image of the page, its title, the source URL and the bit.ly URL.    You also see the total number of clicks to that page via bit.ly, the geographical distribution of those clicks, conversations about this link on Twitter, FriendFeed etc and the names of other bit.ly users who shortened the same link.

2.) you see the click data arrayed over time.:

bit.ly live

The view selected in the screenshot above is for the past day — in the video below you can see the live data coming in while the social distribution of this page was peaking yesterday.

This exposes intentionality of sharing in its rawest form.   People are taking this page and re-distributing it to their friends.     The article from the Consumerist is also on Digg — 5800 people found this story interesting enough to Digg it.   Yet more than 40,000 people actually shared this story and drove a click through to the item they shared.     bit.ly is proving to be an interesting complement to the thumbs up.   We also pushed out a Twitter bot last week that publishes the most popular link on bit.ly each hour.    The content is pretty interesting.   Take a look and tell me what you think — twitter user name: bitlynow.

————–

A brief note re: Dave Winer’s post today on on bit.ly.

Dave is moving on from his day to day involvement with bit.ly — I want to thank him for his ideas, help and participation.     It was an amazing experience working with Dave.    Dave doesnt pull any punches — he requires you to think — his perspective is grounded in a deep appreciation for practice — the act of using products — understanding workflow and intuiting needs from that understanding.   I learnt a lot.     From bit.ly and from from me — thank you.

A pleasure and a privildege.

Creative destruction … Google slayed by the Notificator?

The web has repeatedly demonstrated its ability to evolve and leave embedded franchises struggling or in the dirt.    Prodigy, AOL were early candidates.   Today Yahoo and Ebay are struggling, and I think Google is tipping down the same path.    This cycle of creative destruction — more recently framed as the innovators dilemma — is both fascinating and hugely dislocating for businesses.    To see this immense franchises melt before your very eyes — is hard to say the least.   I saw it up close at AOL.    I remember back in 2000, just after the new organizational structure for AOL / Time Warner was announced there was a three day HBS training program for 80 or so of us at AOL.   I loath these HR programs — but this one was amazing.   I remember Kotter as great (fascinating set of videos on leadership, wish I had them recorded), Colin Powell was amazing and then on the second morning Clay Christensen spoke to the group.    He is an imposing figure, tall as heck, and a great speaker — he walked through his theory of the innovators dilemma, illustrated it with supporting case studies and then asked us where disruption was going to come from for AOL?    Barry Schuler — who was taking over from Pittman as CEO of AOL jumped to answer.   He explained that AOL was a disruptive company by its nature.    That AOL had disruption in its DNA and so AOL would continue to disrupt other businesses and as the disruptor its fate would be different.     It was an interesting argument — heart felt and in the early days of the Internet cycle it seemed credible.   The Internet leaders would have the creative DNA and organizational fortitude to withstand further cycles of disruption.    Christensen didn’t buy it.     He said time and time again disruptive business confuse adjacent innovation for disruptive innovation.   They think they are still disrupting when they are just innovating on the same theme that they began with.   As a consequence they miss the grass roots challenger — the real disruptor to their business.   The company who is disrupting their business doesn’t look relevant to the billion dollar franchise, its often scrappy and unpolished, it looks like a sideline business, and often its business model is TBD.    With the AOL story now unraveled — I now see search as fragmenting and Twitter search doing to Google what broadband did to AOL.

a5e3161c892c7aa3e54bd1d53a03a803

Video First

Search is fragmenting into verticals.     In the past year two meaningful verticals have emerged — one is video — the other is real time search.   Let me play out what happened in video since its indicative of what is happening in the now web.     YouTube.com is now the second largest search site online — YouTube generates domestically close to 3BN searches per month — it’s a bigger search destination than Yahoo.     The Google team nailed this one.    Lucky or smart — they got it dead right.    When they bought YouTube the conventional thinking was they are moving into media –  in hindsight — its media but more importantly to Google — YouTube is search.     They figured out that video search was both hard and different and that owning the asset would give them both a media destination (browse, watch, share) and a search destination (find, watch, share).  Video search is different because it alters the line or distinction between search, browse and navigation.       I remember when Jon Miller and I were in the meetings with Brin and Page back in November of 2006 — I tried to convince them that video was primarily a browse experience and that a partnership with AOL should include a video JV around YouTube.     Today this blurring of the line between searching, browsing and navigation is becoming more complex as distribution and access of YouTube grows outside of YouTube.com.    44% of YouTube views happen in the embedded YouTube player (ie off YouTube.com) and late last year they added search into the embedded experience.    YouTube is clearly a very different search experience to Google.com.       A last point here before I move to real time search.    Look at the speed at which YouTube picked up market share.  YouTube searches grew 114% year over year from Nov 2007 to Nov 2008!?!     This is amazing — for years the web search shares numbers have inched up in Google favor — as AOL, Yahoo and others inch down, one percentage point here or there.    But this YouTube share shift blows away the more gradual shifts taking place in the established search market.     Video search now represents 26% of Google’s total search volume.

summize_fallschurch

The rise of the Notificator

I started thinking about search on the now web in earnest last spring.    betaworks had invested in Summize and the first version of the product (a blog sentiment engine) was not taking off with users.   The team had created a tool to mine sentiments in real-time from the Twitter stream of data.    It was very interesting — a little grid that populated real time sentiments.   We worked with Jay, Abdur, Greg and Gerry Campbell to make the decision to shift the product focus to Twitter search.   The Summize Twitter search product was launched in mid April.   I remember the evening of the launch — the trending topic was IMAP — I thought “that cant be right, why would IMAP be trending”, I dug into the Tweets and saw that Gmail IMAP was having issues.    I sat there looking at the screen — thinking here was an issue (Gmail IMAP is broken) that had emerged out of the collective Twitter stream — Something that an algorithmically based search engine, based on the relationships between links, where the provider is applying math to context less pages could never identify in real time.

A few weeks later I was on a call with Dave Winer and the Switchabit team — one member of the team (Jay) all of a sudden said there was an explosion outside.   He jumped off the conference call to figure out what had happened.    Dave asked the rest of us where Jay lived — within seconds he had Tweeted out “Explosion in Falls Church, VA?”  Over the nxt hour and a half the Tweets flowed in and around the issue (for details see & click on the picture above).    What emerged was a minor earthquake had taken place in Falls Church, Virginia.    All of this came out of a blend of Dave’s tweet and a real time search platform.  The conversations took a while to zero in on the facts — it was messy and rough on the edges but it all happened hours before main stream news, the USGS or any “official” body picked it up the story.  Something new was emerging — was it search, news — or a blend of the two.   By the time Twitter acquired Summize in July of ’08 it was clear that Now Web Search was an important new development.

Fast forward to today and take a simple example of how Twitter Search changes everything.    Imagine you are in line waiting for coffee and you hear people chattering about a plane landing on the Hudson.   You go back to your desk and search Google for plane on the Hudson — today — weeks after the event, Google is replete with results — but the DAY of the incident there was nothing on the topic to be found on Google.  Yet at http://search.twitter.com the conversations are right there in front of you.    The same holds for any topical issues — lipstick on pig? — for real time questions, real time branding analysis, tracking a new product launch — on pretty much any subject if you want to know whats happening now, search.twitter.com will come up with a superior result set.

How is real time search different?     History isnt that relevant — relevancy is driven mostly by time.    One of the Twitter search engineers said to me a few months ago that his CS professor wouldn’t technically regard Twitter Search as search.   The primary axis for relevancy is time — this is very different to traditional search.   Next, similar to video search — real time search melds search, navigation and browsing.       Way back in early Twitter land there was a feature called Track.  It let you monitor or track — the use of a word on Twitter.    As Twitter scaled up Track didn’t and the feature was shut off.   Then came Summize with the capability to refresh results — to essentially watch the evolution of a search query.      Today I use a product called Tweetdeck (note disclosure below) — it offers a simple UX where you can monitor multiple searches — real time — in unison.    This reformulation of search as navigation is, I think, a step into a very new and different future.   Google.com has suddenly become the source for pages — not conversations, not the real time web.   What comes next?   I think context is the next hurdle.    Social context and page based context.    Gerry Campbell talks about the importance of what happens before the query in a far more articulate way than I can and in general Abdur, Greg, EJ, Gerry, Jeff Jonas and others have thought a lot more about this than I have.    But the question of how much you can squeeze out of a context less pixel and how context can to be wrapped around data seems to be the beginning of the next chapter.    People have been talking about this for years– its not that this is new — its just that the implementation of Twitter and the timing seems to be right — context in Twitter search is social.   74 years later the Notificator is finally reaching scale.

A side bar thought: I do wonder whether Twitter’s success is partially base on Google teaching us how to compose search strings?    Google has trained us how to search against its index by composing  concise, intent driven statements.   Twitter with its 140 character limit picked right up from the Google search string.    The question is different (what are you doing? vs. what are you looking for?)  but  the compression of meaning required by Twitter is I think a behavior that Google helped engender.     Maybe, Google taught us how to Twitter.

On the subject of inheritance.  I also believe Facebook had to come before Twitter.    Facebook is the first US based social network — to achieve scale, that is based on real identity.  Geocities, Tripod, Myspace — you have to dig back into history to bbs’s to find social platforms where people used their real names, but none of these got to scale.    The Twitter experience is grounded in identity – you knowing who it was who posted what.    Facebook laid the ground work for that.

What would Google do?

I love the fact that Twitter is letting its business plan emerge in a crowd sourced manner.   Search is clearly a very big piece of the puzzle — but what about the incumbents?   What would Google do, to quote Jarvis?   Let me play out some possible moves on the chess board.   As I see it Google faces a handful of challenges to launching a now web search offering.    First up — where do they launch it,  Google.com or now.Google.com?    Given that now web navigational experience is different to Google.com the answer would seem to be now.google.com.   Ok — so move number one — they need to launch a new search offering lets call it now.google.com.    Where does the data come from for now.google.com?    The majority of the public real time data stream exists within Twitter so any http://now.google.com/ like product will affirm Twitter’s dominance in this category and the importance of the Twitter data stream.    Back when this started Summize was branded “Conversational Search” not Twitter Search.     Yet we did some analysis early on and concluded that the key stream of real time data was within Twitter.    Ten months later Twitter is still the dominant, open, now web data stream.   See the Google trend data below – Twitter is lapping its competition, even the sub category “Twitter Search” is trending way beyond the other services.   (Note: I am using Google trends here because I think they provide the best proxy for inbound attention to the real time microbloggging networks.   Its a measure of who is looking for these services.    It would be preferable to measure actual traffic measured but Comscore, Hitwise, Compete, Alexa etc. all fail to account for API traffic — let alone the cross posting of data (a significant portion of traffic to one service is actually cross postings from Twitter).   The data is messy here, and prone to misinterpretation, so much so that the images may seem blurry).   Also note the caveat re; open.   Since most of the other scaled now web streams of data are closed / and or not searchable (Facebook, email etc.).

screenshot
gTrends data on twitter

Google is left with a set of conflicting choices.     And there is a huge business model question.     Does Ad Sense work well in the conversational sphere?   My experience turning Fotolog into a business suggests that it would work but not as well as it does on Google.com.    The intent is different when someone posts on Twitter vs. searching on Google.   Yet, Twitter as a venture backed company has the resources to figure out exactly how to tune AdSense or any other advertising or payments platform to its stream of data.    Lastly, I would say that there is a human obstacle here.     As always the creative destruction is coming from the bottom up — its scrappy and and prone to been written off as NIH.     Twitter search today is crude — but so was Google.com once upon a not so long time ago.     Its hard to keep this perspective, especially given the pace that these platforms reach scale.     It would be fun to play out the chess moves in detail but I will leave that to another post.   I’m running out of steam here.

AOL has taken a long time to die.    I thought the membership (paid subscribers) and audience would fall off faster than it has.    These shifts happen really fast but business models and organizations are slow to adapt.  Maybe its time for the Notificator to go public and let people vote with their dollars.   Google has built an incredible franchise — and a business model with phenomenal scale and operating leverage.   Yet once again the internet is proving that cycles turn — the platform is ripe for innovation and just when you think you know what is going on you get blindsided by the Notificator.

Note:    Gerry Campbell wrote a piece yesterday about the evolution of search and ways to thread social inference into  search.    Very much worth a read — the chart below, from Gerry’s piece, is useful as a construct to outline the opportunity.

gerry-campbell-emerging-search-landscape1

Disclosure.   I am CEO of betaworks.    betaworks is a Twitter shareholder.  We are also a Tweetdeck shareholder.  betaworks companies are listed on our web site.

An experiment in Microfunding and new forms of giving

Late last week we kicked off a drive to raise $25,000 for http://www.charitywater.org/ — a non-profit that brings clean and safe drinking water to people in developing nations. We launched this over Twitter — in partnership with Pistachio and Tipjoy.

In the first 24 hrs we raised $944 from 144 people. As of today — Saturday — we have pledges of $1400 from 213 people, a total of about $2600. This is amazing, the money is going to have a very real impact on people’s lives. Unclean water is the cause of about 80% of disease. 43,000 people died last week from bad drinking water. $2600 in 48 hours is an amazing start, all raised over the Twitter platform. Of the $2600 about half of it was raised via Tipjoy. Here is a live update of the pledges to Charity: Water (@Wellwishes) via tipjoy, and the payment (vs. pledge) rate.

frameborder=”0″ style=”padding:0em;” height=”115px” width=”275px”<br /> marginwidth=”0″ marginheight=”0″ hspace=”0″ vspace=”0″ scrolling=”no”<br /> allowtransparency=”true”>

You can add a $2 gift right here:

9c3fde421a95e575466ed510ea93cb3c.png

In terms the approach it feels like we are scratching on something radically new here. It intersects with a set of trends I am fascinated by: dynamic community formation and participation, the now web or real time cloud and micro-lending or in this case micro-giving. Laura Fitton (@Pistachio) has written about this before, as have others — its giving me a lot to think about as we head into the Christmas season and the snow falls here. A payment rate of 83% is astoundingly high.

We also put together a little video of the launch of this effort. Laura is testing, Chartbeat, an un-released product from betaworks — it can track the traffic surge from Twitter to Larura’s blog post. If anyone wonders the effects of Twitter this little video says a lot. Watch what happens 20 seconds in.

Laura had a technical reaction to the video:

holy AWESOMENESS.

chartbeat is going to be INSANELY valuable. that is SO cool.

Keep it Chunky, Sticky in 1996

Fred Wilson’s keynote this week at the Web 2.0 conference will be interesting. He is doing a review of the history of the internet business in New York, the slides are posted here. History is something we don’t do a lot of in our business we tend to run forward so fast that we barely look back. I shared some pictures with Fred and I am posting a few more things here.   I also found a random missive I scribed I think in 1996, its pasted below. I was running what we called a web studio back then — we produced a group of web sites, including äda ’web , Total New York and Spanker.


truism1.gif

äda ’web’s first project created in the fall of 1994 — Jenny Holzer’s, Please Change Beliefs. This project is still up and available at adaweb. The project was a collaboration between Jenny, ada and John F. Simon, Jnr. I learnt so much from that one piece of work. I am not putting up more ada pieces since unlike the other sites it is still up and running thanks to the Walker Arts Center.

Total NY sends Greg Elin across country for the Silicon Alley to Silicon Valley tour. Greg and this project taught me the fundamentals of what would become blogging

Greg_Elin_SA2SV.gif

Man meets bike meets cam … Greg Elin prepares for Silicon Alley to Silicon Valley. Don’t miss the connextix “eye” camera on the handle bar!?!

1995, Total NY’s Cosmic Cavern, my first forway into 2d+ virtual worlds, a collaboration with Kenny Scharf. This was a weird and interesting project. We created a virtual world with Scharf based on the cosmic cavern the artist had created at the tunnel night club. Then within the actual Cosmic Cavern we placed PC’s for people to interact with the virtual cavern. Trying to explain it was like a Borges novel. He is a picture of Scharf in the “real” cavern, feels like the 90′s were a long time ago.

kenny_scharf.jpg

Some other random pictures i found from that era:

Pics_from_mexico.jpg

borthwick_stallman.jpg

yahoo_1995-tm.jpg

Keep it Chunky, Sticky and Open:

As the director of a studio dedicated to creating online content, a question I spend a lot of time thinking about is: what are the salient properties of this medium? Online isn’t print, it isn’t television, isn’t radio, nor telephony–and yet we consistently apply properties of all these mediums to online with varied result. But digging deeper, what are the unique properties of online that make the experience interesting and distinct? Well, there are three that we have worked with here the Studio, and we like to call them: chunky, sticky and open.

Chunky
What is chunky content? It is bite sized, it is discrete and modular, it is quick to understand because it has borders. Suck is chunky, CNET and Spanker (one of our productions) are chunky. Arrive at these sites and within seconds you understand what is going on–the content is simple, its bite sized. Chunkiness is especially relevant in large database-driven sites. Yesterday, my girlfriend and I were looking for hardware on the ZD Net sites (PC Magazine, Net Buyer etc.). She had found a hardware review a day earlier and wanted to show them to me. She typed in the URL for PC Magazine but the whole site had changed. When she looked at the page she had no anchors, she had no bearings to find the review that was featured a day earlier. The experience would have been far less frustrating if the site had been designed with persistent, recursive, chunks. Chunky media offers you a defined pool of content, not a boundless sea. It has clear borders and the parameters are persistent. Bounded content is important; I want to know the borders of the media experience, where it begins and where it ends. What is more, given the distributed, packet-based nature of this medium, both its form and function evokes modularity. Discreet servings of data. Chunks.

Sticky
Some, but not all, content should stick. Stickiness is about creating an immersive experience. It’s content that dives deep into associations and relationships. The opposite of sticky is slippery, take basic online chat rooms: most of them aren’t sticky. You move from one room to another, chatting about this and that, switching costs are low, they are slippery. Contrast this to MUDS and MOO’s which are very sticky: in MUDS the learning curve is steep (view this as a rite of entry into the community), and context is high (they give a very real sense of place). What you get out of these environments is proportional to your participation and involvement, relationship between characters is deep and associative. When content sticks time slows down and the experience becomes immersive– you look up and what you thought was ten minutes was actually half an hour. Stickiness is evoked through association, participation, and involvement. Personalized information gets sticky as does most content that demands participation. Peer to peer communication is sticky. Community and games are sticky. People (especially when they are not filtered) are sticky. My home page is both chunky and sticky.

Open
I want to find space for me in this medium. Content that is open, or unfinished permits association and participation (see Eno’s article in Wired 3.05, where he talks about unfinished media). There is space for me. I often describe building content in this medium as drawing a 260 degrees circle. The arc is sufficient to describe the circle (e.g.: provide the context) but is open to let the member fill in the remainder. We laugh and cry at movies, we associate with characters in books, they move us. We develop and frame our identity with them and through them–to varying degrees they are all open. Cartoons, comedy, and most forms of humor, theatre, especially improvisational theater, are all open. A joke isn’t really finished till someone laughs, this is the closing of the circle, they got it. Abstraction, generalities and stereotypes, all these forms are open, they leave room for association, room for me and for you.

So, chunky, sticky and open. Try them out and tell me what you think (john@dci-studio.com). Lets keep this open, in the first paragraph I said I wanted to discuss the characteristics that make a piece of online content interesting, I did not use the words great or compelling. I don’t think that anything online that has been created to date is great. These are still early days and we still have a lot to learn and a lot to unlearn. No one has produced the Great Train Robbery of online–yet. But when they do, I would bet that pieces of it will be chunky, sticky and open.

Ok enough reminiscing, closing with Jenny Holzer.

firef.ly goes public beta

We are pushing firef.ly into a public beta today.   Exciting stuff for us here at betaworks.   Firef.ly is a light weight messaging layer that sits on top of a site — permitting a real time perspective on who is where on your site and basic chat.   It’s intentionally light weight — no sign in, no install for users — one line of java script for the web site publisher (available here: http://firef.ly/install).  You can use firefly on this page — just slide the slider to the left and have fun.

Couple of thoughts here — first this is another layer application, something i have posted about before, second this is for me a return to days when you could just chat on any page — without the encumbrances of today, captcha’s, sign in etc.   Its a layer of the now web that we are experimenting with.    Yes yes i know it might get some spam — but web site owners have the ability to ban spammers and our hope is that the lightweight, spontaneous nature of firef.ly may open up some new conversations.    As it did a while back when we first trialed it on a Scripting post.    Last point — try the twitter feature — it sends out a message to your followers that you are on a particular page, its pretty powerful.   Have fun.

Summize growth

Summize organic traffic growth, week over week.   Its astounding to see the Summize business grow from 0 to 14M queries a week in over the space of two months (note I updated the chart with the past week) —  traffic over the past 2 weeks has made the insanity of WWDC hard to see on the chart.

A testament to what a great product and UI can achieve in no time at all.   This past week with the launch of bit.ly I spent much of my time on Twitter, Summize, Friend Feed and a handful of other services.  Google is playing nxt to no part in the now-web that is emerging out of this ecosystem.   Rafer also pointed me to this chart on compete.    More on search and navigation to come, for now some pictures — Summize traffic and a wonderful fireworks display from this evening in Shelter Island.

bit.ly: a simple, professional URL shortener

We launched bit.ly yesterday and got an intense amount of buzz and attention. We thought this was an important piece of the puzzle but didn't fully appreciate the vacuum we were running into. A crazy day: Summize offered a great interface into the groundswell of activity, with Nate, Jay and the team iterating and updating the service throughout the day (you can see the updates here).

On the switchAbit/bitly/twitabit blog we did the official launch post. To save you the jump, here is a summary of what we offer and why it's different:

1. History: we remember the last 15 shortened URLs you've created. They're displayed on the home page next time you go back. Cookie-based; sign-in will come, but the first rule of the game was to keep it simple.

2. Click/referrer tracking: every time someone clicks on a short URL, we add one to the count of clicks for that page and for the referring page.

3. There’s a simple API for creating short URLs from your web apps.

4. We automatically create three thumbnail images for each page you link through bit.ly, in small, medium and large sizes. You can use these in presenting choices to your users.

5. We automatically mirror each page; you never know when you might need a backup. :-)

6. Most importantly for professional applications, you can access all the data about each page through a simple XML or JSON interface. Example.

7. All the standard features you expect from serious URL shorteners.
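
To make items 3 and 6 concrete, here is a hedged sketch of what calling an API like this could look like from a web app. The endpoint paths, parameter names and response fields below are illustrative assumptions, not bit.ly's documented interface:

    # Hypothetical client for a bit.ly-style shortener; endpoints, parameters
    # and response fields are assumptions for illustration only.
    import json
    import urllib.parse
    import urllib.request

    API_BASE = "http://bit.ly/api"  # assumed base URL

    def shorten(long_url):
        # Create a short URL (feature 3); assumes a JSON response.
        query = urllib.parse.urlencode({"url": long_url, "format": "json"})
        with urllib.request.urlopen(API_BASE + "/shorten?" + query) as resp:
            return json.load(resp)["short_url"]    # assumed field name

    def page_data(short_url):
        # Pull the click/referrer data behind a short URL (feature 6).
        query = urllib.parse.urlencode({"url": short_url, "format": "json"})
        with urllib.request.urlopen(API_BASE + "/stats?" + query) as resp:
            return json.load(resp)  # e.g. {"clicks": ..., "referrers": {...}}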

And it’s just the beginning, we’re tracking lots more data so that as more URLs are shortened by bit.ly we’ll be able to turn on more features.   Marshall talks about some of what we are going to do on the data side in the RWW article below. 

More to come on how this fits with SwitchAbit, twitabit and findings, the cluster of services we are building. For now, some commentary:

ReadWriteWeb

Bit.ly Is a Big Deal URL Shortener

Scripting News

Alley Insider

Summize

NilsR


 

Dimensionalizing the web

What is a web page today? If you look at the average web page, it's a compilation of a diverse set of data sources drawn into a construct that we think of as a concrete whole. It probably started with CGI (the first commercial application was likely the ad banner), but today the simple web page is made up of a whole mix of things: dynamic content, ads, widgets, sidebar tools, gadgets. The frame that we think of as a web page is now constructed from data streaming in from all these sources and more. This componentization of the page was the first step in what is becoming a different architecture for information delivery. What we have today are the equivalent of early life forms: necessary building blocks that evolution will use as more sophisticated lateral services develop. The organization of these data streams, and how they are constructed, relates to our understanding of the dimensions of the web.

A question: what would the web look like if you picked it up and looked at the bottom? I imagine what you would see would be a set of databases, with streams of data flowing between them, into these things we call web pages and between these things we call web sites. The metaphors we have applied to the web, pages and sites, are analogs that helped us grasp and structure it; yet like any proxy, they also impose limits on our perspective. RDF/RSS started me thinking about a lot of these ideas, but in the eight or so years since those standards were developed, our understanding of and approach to web sites as vertical businesses has barely evolved. The spatial assumption we imposed on the web, that a site is a discrete experience a publisher can control, maps both to a human need to impose hard edges on a dynamic, complex system and to how we have understood media for the past 100 years or so. I think those edges are being broken down, offering a different view of the web, and therefore of media companies: one that is less structured around the hard edges of a web page or site, less vertical, less about data silos, and more about dynamic, fluid use of data and connections between data points. Some examples.

Take a look at this picture of a post I found on Tumblr last week. This person, Erin, is using Tumblr to announce a meetup; email and reblogging are the tools she is using to confirm attendees. Shouldn't she use Meetup for this? Clearly it's her preference not to, but why?

(screenshot: Erin's tumblog post)

I would propose two theories: context and ease of use. First, context. Context is important: Erin has followers (an audience) on Tumblr, and she has an environment that is customized with a user experience she can control (nice background), so she wants her meetup to appear in that context. Second, ease of use. For a myriad of reasons it seems it was easier for her to roll her own meetup than to use meetup.com; to quote Pip Coburn, the perceived benefits outweighed the perceived pain of trying to learn something new. So here is an example of someone molding a use case (creating a meetup) into another web experience to fulfill a need.

Example #2: what about Twitter? What is the web site Twitter.com? The first answer, the one I would give a stranger in conversation, is that it's a destination to access and use the microblogging service provided by Twitter: "Want to try one-to-many micropublishing? Go to twitter.com." Sounds simple enough. Yet that conclusion isn't supported by the data. I don't have the exact number, but I think it's safe to say that more than half of the interactions with Twitter occur off Twitter.com, and the number is in all likelihood a lot higher than that. So is Twitter a protocol? Maybe. Maybe Ted Stevens actually understood the web better than we thought: thinking about Twitter as a pipe makes more sense than thinking about it as a destination. But it's not a pipe in the way that old media understood pipes; it's different. I'm not sure I understand exactly what that difference is going to yield, but what is clear today is that each interaction that takes place on the network adds value or context to further interactions. As data chunks move around Twitter they get organized and collated into conversations and memes. Similar to the Meetup example, each node on the Twitter network is contextualized in a form that makes sense for that particular interaction. But unlike Meetup, Twitter is powering all these interactions. The data becomes more valuable as it moves from interface to interface, not less. There is something very powerful happening with the simplicity and openness of this network. A network is the best metaphor I can think of for Twitter.

Another example: Iminlikewithyou, the Flash casual-gaming site, started off as a destination (disclosure note: a betaworks company). All of a sudden users started grabbing the code and syndicating their games onto their own web sites. But this isn't just the game, and it's not a widget model; it's the entire underlying game network that is getting syndicated. IILWY is closer to our understanding of old media, but it contains some of the bizarre distributed breadth and possibility that Twitter holds.

So where does all of this lead us? I believe we need new metaphors to understand, and place dimensions around, what a web experience is. I don't have an answer, but I do have a few thoughts on how we can begin to frame and understand the shape of what is to come.

i) Think centers vs. wholes; think networks vs. destinations

Last week I was re-reading Christopher Alexander's The Nature of Order. In the first book he has a section about wholes vs. centers. He makes the argument that composing visual structures as wholes (thinking of buildings, things, windows, anything as a whole) fails to recognize the context in which the object lives. He builds the argument up starting with a dot on a piece of paper, analyzing how the dot divides and structures our spatial understanding of the piece of paper. From this point he starts to frame a way of looking at the world based on thinking about centers, zones of spatial activity, vs. wholes. An example he cites:

“On one occasion, I was discussing the concept of centers, as it applied to some bedroom curtains, with my wife Pamela.     She made the comment that the use of the word “centers” as I had explained it to her, was already changing her view of everything around her, even as we were talking: “When I look at the curtain in the room, and think of the curtain, the curtain rod, the window, the sky, the light on the ceiling, as centers, then I become so much more cognizant of the relatedness of all things — it is as though my awareness increases”

I think Alexander’s point and work here is profoundly applicable to the web. If you start thinking about centers — clusters of information — vs. destinations and vertical sites, for me at least, it gives me a frame of reference a metaphor that is far more expansive and networked than the one in which we operate today.   At Fotolog I learned that centers can form and cluster with remarkable speed within a community — now this is starting to happen with information moving laterally between domains.

ii) Think what can move laterally and encourage it to move

People, those things we often call users, want to take data and move it laterally across the web. They want it to exist in contexts that make sense for a particular interaction. Whether it's data-portability standards or micro-content standards, people want to cross-post and move data from one service to another. There is much that needs to be done here. A year ago, when F8 was launched, it seemed that Facebook was driving headlong into this domain. Yet a year later it now seems like Facebook might become known as the last portal, the last walled-garden experience: data comes in but not out. Openness of interfaces and APIs, letting data come into and go out of a domain, is central to this thesis. The Facebook news feed could be a web-wide service; instead, the way it's articulated today is about retaining eyeballs and attention, a movie we have seen before. Last week we started talking publicly about SwitchAbit, a service designed to help drive this lateral movement of data across the web while retaining context. It's a small contribution we are hoping to make to this larger puzzle.

iii) Think about how to atomize context so that it can travel with the data

Atomizing content is one piece of the puzzle; the other is doing the same for context, so it can travel with the data as it moves around the web from center to center. Outside.in, Steven Johnson's creation, trawls through blog posts and attaches geo context to individual posts. I sometimes refer to Outside.in as a washing machine: dirty data comes in one end; Outside.in scrubs the data set and ships geo-pressed results out the other end. The geo-scrubbed post is now more useful for end users, publishers and advertisers. A bit of structure goes a long way when the data can then move into other structures. The breadth of what geo-scrubbing can do is staggering: think about pivoting or organizing any piece of information around a map, the spatial dimension most familiar to our species. A bit of context goes a long way. (Disclosure note: Outside.in is an investment of betaworks.)
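
As a toy illustration of the washing-machine idea (and only that; Outside.in's actual pipeline is far more sophisticated), imagine scrubbing posts against a tiny hand-made gazetteer. Every name and coordinate below is an assumption for the example:

    # Toy "washing machine": dirty posts in, geo-tagged posts out. The
    # two-entry gazetteer is a stand-in for a real place-name database.
    GAZETTEER = {
        "brooklyn": (40.6782, -73.9442),
        "shelter island": (41.0701, -72.3387),
    }

    def geo_scrub(post):
        # Attach place + coordinates when the text mentions a known place.
        text = post["text"].lower()
        for place, coords in GAZETTEER.items():
            if place in text:
                post["geo"] = {"place": place, "coords": coords}
                break
        return post

    post = geo_scrub({"text": "New pizza spot opening in Brooklyn next week"})
    print(post["geo"])  # {'place': 'brooklyn', 'coords': (40.6782, -73.9442)}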

iv) Think Layers

There is a layering dimension that is worth consideration: services are starting to emerge that offer functionality framed around exposing some separation between different layers of the web. Photoshop is the software that first introduced me to the layer metaphor; I still can't use Photoshop, but I think I get the layer idea. Google Earth has applied a layering concept to mapping. Similarly, services like PMOG are experimenting with layers. Back at betaworks, Billy Chasen started working with layers about eight months ago. He developed a simple navigational tool called Fichey that lets you navigate web pages independent of their domain, using a common navigational tool. Want to flip through the top Digg stories? Fichey makes it fairly easy and fast. This was just a beginning. Billy has since developed a service called firefly; it's in testing now, and over the coming weeks we will begin to preview it, but it's all about creating a layer of interactivity that is contextualized to the web site you are on yet exists independent of that web site.

v) Accept uncertainty, keep it rough on the edges

What did Rummy say about the known unknowns? As we experiment with and design these new forms of interaction, it's vital that we remain open to roughness and incompleteness on the edges of the web. The more we try to place these services into the convenient, existing models we have used to structure our thinking thus far, the more we limit our ability to look ahead and think about these things differently.

This is just a beginning. I hope these five areas help define and frame how to think about alternative data dimensions on the web. Time to wrap this post up; enough for now.

Switching bits

Betaworks is starting to roll out SwitchAbit, our first homegrown product. SwitchAbit is a content router, a switchboard to connect one service to another. It will let people shuttle a Flickr photo to Twitter, or to Tumblr, Facebook, Pownce or pretty much wherever they want. SwitchAbit doesn't aspire to be another UI to aggregate data; in fact it's the reverse: it assumes that people want to contextualize information streams within existing services and existing communities. I'm tired of companies seeking to jam users into a new user experience that is mostly designed to drive a business model rather than to drive new, relevant or meaningful interactions. As a consequence, SwitchAbit is designed to be a platform; Twittergram will be the first service powered by it.
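
To sketch the routing idea (this is my illustration, not SwitchAbit's actual code; the formatters and field names are assumptions), the core of a content router is a map from destinations to functions that re-shape an item into each service's native form:

    # Minimal content-router sketch: one item in, per-destination payloads out.
    def to_twitter(item):
        # Twitter wants a short status line.
        return {"status": (item["title"] + " " + item["link"])[:140]}

    def to_tumblr(item):
        # Tumblr wants a photo post with a caption.
        return {"type": "photo", "source": item["link"], "caption": item["title"]}

    FORMATTERS = {"twitter": to_twitter, "tumblr": to_tumblr}

    def route(item, destinations):
        # Shuttle one piece of content to each destination in its native form.
        return {dest: FORMATTERS[dest](item) for dest in destinations}

    photo = {"title": "Sunset over Shelter Island", "link": "http://flickr.com/photos/xyz"}
    print(route(photo, ["twitter", "tumblr"]))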

When we started working on SwitchAbit, one of the foundational services that inspired us was Twittergram, a service that Dave Winer created almost a year ago. Few individuals have been more innovative in finding ways to move data, live and static, laterally across the web. This lateral movement of data is exactly what SwitchAbit is about. Once we had an alpha version of SwitchAbit working, I sent it to a handful of people; one was Dave. After a rapid set of email exchanges we came to an agreement, and Dave is joining SwitchAbit as an advisor. The last deal we worked on was back in UserLand days, between AOL and UserLand; after months we never managed to finalize a relationship. This time around we managed to get it done end to end in about an hour. Good stuff.

It’s less than six months since we setup the development team at betaworks and this is the first of three products that will roll out in the coming months. As I started to outline last week betaworks is a company that through focus and structure is designed to drive linkages and accelerate innovation across what we call our network. The intent is to create a set of loosely coupled components — some wholly owned, some partially owned — and drive innovation, context and value across the network — thru the exchange of data. What people today call monetization, but monetization as it applies to a network, not two isolated nodes. Over time this network will look like a company — I guess a media company is the best analog we have today — but a little different in focus, structure and purpose. And we aren’t going to start talking about new media, again. For now we are very excited about getting SwitchAbit rolling.

F8 and that Telegraph road

The launch last week of Facebook's platform initiative, F8, has generated a lot of talk, much of it in the mainstream press. It's a compelling story: Facebook is becoming a platform, outmaneuvering Myspace, doing to the web what Microsoft did to the PC. It's a story we have heard before; it seems to recur periodically. However, the announcement last week was mostly about distribution. It involved neither deep nor open access to Facebook's data, nor open access to its infrastructure. F8 as it stands today is a partnering platform. This is one more small step in a long negotiation taking place between web sites over how data is owned, how it's shared between sites, and how people navigate through services from one site to another. That conversation is still in its infancy.

XML really began the process of lateral data flows between sites, and the vision of the semantic web offers a rich set of possibilities, yet these are early days: most sites still operate in vacuums, and most user data is still stuck in proprietary silos. And while the technology certainly needs to evolve, so do the scope and kind of business arrangements. The web of contracts, contracts between vertical sites and contracts between sites and users, needs to evolve in order for the vision of the semantic web to reach some of its compelling end points. Weaving back to the Facebook announcement: what happens next is more interesting than what happened last week. Facebook has taken a different approach from Myspace, which has opted to control much of its third-party innovation through fairly simplistic interfaces and binary, business-driven rules, more like a traditional media company, rather than letting the community really build on top of the service in a meaningful manner. As the Facebook platform evolves, there are a handful of things I will be watching:

1. How deep are the APIs that Facebook is going to present to the community? Facebook Markup Language is a proprietary API; the "platform" may be wide in terms of distribution, but it's not deep. There is little to no access for third parties to the social data or infrastructure that makes Facebook such an interesting service, and it's not open for developers to just build on: everyone accepted into the platform has to be sanctioned by Facebook. The degree of openness, real openness (vs. marketing gibberish), will dictate the depth and the value of the platform. Amazon has done a great job of developing a set of platform services, starting with the affiliate model, extending it into community, and then the Mechanical Turk and the elastic computing cloud services. These web services were built step by step, along with trust and a degree of openness that surprised many. Pretty much every startup I work with today is using EC2/S3. If Facebook is going to have the same influence over the web-application space, they need to open up more than a distribution funnel. iLike's weekend server hunt demonstrates a need on the infrastructure side, but there is also a real need re: social data. Offering Facebook users the ability to port their social data, their social network, across applications, and letting application developers innovate on top of that data set, would be really interesting.

2. How will the application metaphor evolve? I see the metaphor Facebook has applied as the most interesting thing in last week's announcement. The web has spawned many interesting platforms for micro-application development. Applets, plugins: from WordPress to Firefox to Myspace, there is an active ecosystem of development around many web sites. But the term application suggests user control beyond a widget or plug-in. Applications are often monolithic, and the management of applications by the underlying OS is usually benign and in service to the application (get me that device driver); the term application presents a high bar for Facebook to jump over. To me, the use of the term suggests a rich set of APIs and a clearly defined layer, a layering of both technical and business terms. It's an exciting challenge to see if they can make this a true application environment. And if they do, what is Facebook's relationship to these applications? The identity issue below only scratches the surface of this question. It was fascinating to me that in the announcement last week most of the mainstream press looked in the rear-view mirror for metaphors: this was going to be to the web what Windows was to the PC. I hope not. We don't need another OS; what we need are open development platforms and open access to data. I did a lot of work on platforms a long time back; in 1998 I invested in a company called WebOS that tried to go down the path of applying the desktop metaphor to the web, of duplicating the inadequacies of the desktop on the web. Few people compared last week's announcement to Adobe's Apollo, which is set up to be a more traditional, extensible platform. One of the companies I am working with, Iminlikewithyou, is developing much of its service in Apollo. Apollo is truly a web-application environment, offering state management outside of the browser; for example, Apollo will let me do my web mail while I am unconnected. But Adobe is building this as a platform service. Like Flash, the intent is to proliferate the tool set across the web: developers will adopt it, as will end users, and like Flash it will provide revenue from scaled developers paying Adobe a license fee. This is a platform business model that the market understands. A cross-platform runtime isn't as sexy-sounding as F8, but it might be more meaningful. And then there is Firefox 3, another valid comparison that didn't seem to come up in many discussions.

3. How will application providers be promoted in Facebook? This is critical to understanding the underlying business terms between the distributor and the application creator. Last week's announcement was about distribution, and it formalized an approach for Facebook partners: business development in a box, a highly scalable approach to partnering. But what are the underlying economic drivers? At AOL, promotion and positioning were usually governed by dollars spent. At Google it now seems to be about long-term strategic value: years ago, the services tiled above Google's search results were best in class; for finance-related searches (search for a stock ticker), Yahoo Finance was promoted, and Mapquest was the default when you searched for a location. Then, slowly over time, Google's own services received prominence equal to or better than others'. Today it's pretty much all Google services upfront, in default positions. It's nice to leave some pointers for competitors, but as Google knows well, defaults drive traffic and traffic drives revenue.

Screenshot of Facebook's application directory

Last week the COO at Facebook, Owen Van Natta, said: "How are we promising not to trump your application? We're going to level the playing field, developers won't be second-class citizens–we're going to compete directly with them." Accordingly, the Facebook application directory is organized today mostly by popularity. But mostly is different from always.

See the ringed sections of the screenshot: unlike third parties, Facebook doesn't list the number of users of its own applications (Marketplace is a Facebook application). And note that the application directory (boxed) starts with Facebook's top applications. Finally, as the user expands and contracts the application list (the "more" caret, where the arrow is pointing), Facebook's one advertisement on the page moves down, partially below the fold. Tell me this execution isn't set up to collide with business priorities.

In Japan, on the cell phone, DoCoMo understood that with a limited UI, placement of third-party services needed to be ranked by usage. Is Facebook headed down the same path, and what does the COO really mean? Facebook owns this garden; competing directly with application providers is going to be, well, interesting.

4. How will Facebook manage identity and data across third-party applications? Some sites promoted in F8 seem to be managing identity independently from Facebook; others are doing a one-click install and sign-in (though even in the case of Mosoto, you are signed in for chat, but to file-share you need to sign in again). Does Facebook become an alternative identity broker on the web? If so, they are going to have to be a lot more open in their approach to data; OpenID is a pretty high standard. Facebook has traditionally had a fairly rough privacy policy: they gather a lot of data about their users, and there has been a fair amount of controversy about it. As they manage data across applications, this is only going to get more challenging.

5. Lastly, how does Zuckerberg's social graph extend beyond the core college audience and behavior? The feed feature added a whole new dimension to Facebook and extended the time people were spending on the site significantly; Comscore data suggests it went up by over five minutes per day. Fotolog has a similar feature that alerts users to new uploads by friends; it's a significant driver of our navigation-based traffic. But how do the audience and the use cases evolve beyond the core? Will people outside of college enter real names into profiles, and will the social dynamics of the broader audience fit with services that were built for a student audience? Over the past year I have started to use LinkedIn more; it's starting to become useful. The network is large enough, and the alerts I get from LinkedIn are useful, not spam. I signed up for Facebook shortly after they opened up, but I didn't go back till friends started inviting me. Over the past six months I have visited the site to confirm friends, but there is nothing useful about Facebook for me yet. And useful aside, it had better be either personal or entertaining. Like so many other social networks, it's about collecting connections; what are the services that are going to drive usage for me? I don't see them yet.

This is a quote from GigaOm's review of the launch event; it's worth a slow read. "Zuckerberg says you can serve ads on your app pages and keep all the revenue, sell them yourselves or use a network, and process transactions within the site, keeping all the revenue without diverting users off Facebook. This was the opposite to what was stated in the WSJ article earlier this week, and gets by far the biggest reaction from the crowd."

This got the biggest reaction from the crowd?? Maybe a crowd packed with Web 2.0 service and feature developers in need of an audience found it interesting. If a user today opts in to use your site on Firefox, or your application on Windows, or even within the grandfather of walled gardens, AOL, you still get to keep the ad revenue. So why is this a big surprise? Maybe the attention the announcement garnered is also about the proliferation of web-based features searching for a destination to marry themselves to.

Intent and that Telegraph Road

A long time ago came a man on a track
Walking thirty miles with a pack on his back
And he put down his load where he thought it was the best
Made a home in the wilderness

I do think it's worth asking what the intent behind the Facebook announcement is: who is it meant to serve, and what is the need behind the F8 initiative? The Facebook was launched as a service for US college students. It was full of social tools; it let you build out your own network and post events, notes and photos; and, most importantly, it was all private, so that students could develop profiles that are real, vs. much of the fantasy-based profiling you see on Myspace and other sites. Facebook achieved a lot of its early traction for the same reason Cyworld did: you could enter your college and your year and actually find friends, colleagues, friends-to-be, crushes, etc. Because people used real names on the service, and emails were verified by domain, you could find anyone in your university. This was and is a big idea; few sites have a relationship with their users that maps to real identities. Anyone who has attended a US university or college knows exactly what this is about. Then came the monetization.

Facebook started with advertising, and they achieved some remarkable successes: by mid-2005 they had become profitable, with 2,000+ colleges and 20,000+ high schools on the service. And the audience was rabidly engaged; two-thirds of the active membership came to the site every day. But look at Facebook's reach through 2006 (reach tracked by Alexa): it is flat, because by 2006 they had tapped into an audience and grown the business about as far as it could go given its natural limitation: students. They were now faced with the question of how to scale the business beyond its base. They could go global; services like FriendsReunited in the UK and Australia are demonstrating, albeit with differences, that the market exists outside of the US for a Facebook-like service. And/or they could opt to extend the scope of the Facebook offering and try to reach a broader audience in the US beyond students. They decided to push on both fronts, but most significantly, in September last year Facebook opened up to users irrespective of whether they were in school or not. In 2007 Facebook's reach more than tripled. Before they opened the doors to the broader audience they were adding 15,000 members a day; today they are adding 100,000 a day (NYT stat; note Fortune says 150,000 a day). They now have 24M active users, posting mostly photos, notes and events.

Then came the churches then came the schools
Then came the lawyers then came the rules
Then came the trains and the trucks with their loads
And the dirty old track was the telegraph road

But now that reach has extended, they need to find ways to get people to spend more time on the site. Here comes the platform initiative. The platform released last week is about extending Facebook in a different manner from the other social-networking sites: it's about continuing to extend Facebook's features by offering distribution of third-party applications on Facebook. Yet the features being added are contained within the Facebook experience. Out of the gate it's a great opportunity for fledgling sites, particularly sites that are more of a feature than a destination; Facebook is offering one-click installs for applications within Facebook. It's about distribution, and it's about continuing to drive the amount of time people spend on the site, which in turn drives advertising. Facebook is playing the same game media aggregators have played since the dawn of time. Whether it's Disney, Yahoo or AOL, it's all about getting in front of the distribution firehose: they are selling their audience. Day one, it's not set up as a sale. But remember that AOL used to pay service providers to offer content and services within the walled garden; then in 1996, when AOL hit scale, it stopped paying providers and started charging. Bit by bit, AOL flipped the model. This all seems far less interesting and ambitious than the headlines suggest. Zuckerberg told Kirkpatrick that what Facebook was unveiling would be "the most powerful distribution mechanism that's been created in a generation." I hope it is more than that. If Facebook's F8 is about trying to extend the size and scale of innovation and services in what amounts to another walled-garden experience, it will be another building block in the long history of web hype. The Facebook has a great social platform to build off; I hope they are brave enough to let their users take their data and extend services beyond their control, beyond the walled garden.

A last point worth making is the absence of Microsoft, Yahoo, eBay and AOL from the platform / social-networking space. Live.com was meant to be a web development platform, but things hewed back to Windows with the launch of Vista. Microsoft developed much of the thinking behind the web as a platform, with Hailstorm and then Live.com, but IE7 and Live haven't taken the lead. Yahoo made all these great acquisitions, many of which they have left in silos and failed to build upon. eBay has this amazing social / trust network that links merchants and end users. We think of profiles as being specific to social networks, but eBay's profiles, as they relate to trust, commerce and communications (Skype), are a trove of data that could be opened up to users, applications and the web as a whole. And the merchant relationships: what about extending them into advertising? Likewise with AOL; there was a recent comment about the importance of opening up AIM, again. It's amazing to see the leaders of earlier generations of the web MIA, gone from this social-networking race.

The semantic web needs to be distributed at its core; another walled garden is too low a bar for a really powerful and interesting social network to aim for. I hope Facebook actually steps beyond the marketing hype and delivers a social platform for the web.

Choice, end-to-end control, distributed innovation and that iPhone thing

A lot of chatter about the iPhone. I just read Dave Winer's piece; there are lots of conspiracy theories about how real the Jobs demo was, and people are starting to focus on the question of how closed the platform is. Jobs has said that the platform will allow third-party development, but that it will be "restricted" and managed, like iPod games. Apple believes that in order to get a product into market, out of the box, end-to-end control of the hardware and software experience is the easiest and fastest way to deliver something that works to users. This worked in the case of the iPod: it wasn't the first MP3 player to hit the market, it was just the first to work as seamlessly as it did, from the device to the PC. There are smartphones of many flavors out there today, but they all require a lot of setup, maintenance, etc. The iPhone is clearly going to be different; take a look at Pogue's list of what it does and doesn't do.

Last year I lived in Italy for six months, and I made some notes about what an insanely mobile country it is: 57M people with 70M cell phones. There are more mobile phones there than fixed lines; estimates are that 18% of the population have cut the cord (chk). Kids and couples walk around listening to cell phones playing music, the way people 30 years ago would walk around listening to a radio. Someone we know was chatted up by a waiter at a restaurant; for follow-up, he offered her a SIM chip instead of offering his phone number. SMS is everywhere, and it's far more conversational than in the US; the rates and pricing plans push people to SMS. Wifi is fairly available, and the cell companies are clearly nervous about VoIP / Skype: 3 (Hutchison Whampoa) has an offer in market for $15 a month of unlimited VoIP calling to over 25 countries from your handset. And in Italy Apple has next to no presence (as of '06 they had no stores and next to no market share).

Over time the iPod functionality needs to merge into the phone. Yet Apple has created a business model that is based on tethering hardware to software and reaping all of the margins on the hardware. The result is that music I have "bought" on iTunes isn't transportable to other, non-Apple devices. I really haven't bought it; it's a rental agreement, with a right to listen to that music on five Apple PCs / devices. Jobs knows that the iPod is close to its peak and it's time to move the ball. The question in my mind is whether open and unlocked alternatives (Palm, Symbian, RIM and even Linux phones) can outrun Apple.

The pressure points, in my mind: (a) Apple's dependency on the iPod and its related business model. The iPhone needs to have everything the high-end iPod has; the focus will be on music, video and phone, so watch how they execute on core iPod features (e.g., access to the iTunes store from the device, which today is not available, and music and video sharing, also not available) and then on non-iPod functionality. The phone is a messaging device; music and iPod functionality need to be balanced against great messaging capabilities, voice and text. (Phones outside of the US are used more for messaging than voice; calling them phones is a cultural artifact. They are messaging devices with voice as a secondary feature.) (b) Apple's tie to Cingular (two years) and the associated restrictions this brings with it (no VoIP, no open wifi roaming, no HSDPA/3G, the requirement of a two-year contract, no unlocked alternative, etc.). (c) The tension between a closed, end-to-end platform with controlled innovation and an open platform with distributed innovation. And lastly, (d) the execution of the hardware / device and the lack of a keyboard. If this is mostly a media device, Apple will miss the broader market.

I have no doubt people will buy this product. It seems like a beautiful piece of hardware, and simply positioned as the highest-end iPod it will find a market, just like the Nano or the video iPod. But neither the Nano nor the video iPod defined a new category; they were devices in a long stream of innovation that started with the original iPod. The iPhone needs to define a whole new stream of innovation, independent from the iPod. And the business model will likely also have to evolve; in more developed markets the flip to a subscription model has already occurred (in South Korea, $5 a month for all the music you want / can eat). I am going to be watching the pressure points listed above to see whether, similar to PS3 vs. Wii, the low end offers some real alternatives, without all the restrictions that Apple's business model now imposes on it as the category leader. The mobile world needs to see some real innovation, and what I saw last week suggests it's not going to come from Apple.

Things to watch in 2007

7 4 07
(things to watch in 07)

1. Google will feel the tension between search and browse and their associated business models. Google Checkout will emerge as the company's key innovation beyond search and paid listings. Yahoo and eBay will follow AOL and be rolled into the operating theatre; the problem isn't technology (Panama, etc.), it's the business-model tradeoffs they have both made re: the tail.

2. Sector-wise, e-commerce will rise in importance as alternative currencies emerge as legitimate ways to transact. It's a different take on the subscription model: using in-game currencies to transact for other products (see the QQ coin). On the subject of virtual worlds, growth will continue apace, but Second Life will emerge as the one everyone could understand but few actually wanted to visit more than once.

3. Geographically, the rest of the world will come into focus as internet and media companies search for customers, growth and innovation. ROW will start to be a legitimate force of innovation rather than just a platform for duplicating US business models.

4. Connectivity-wise, wireless broadband will finally become a force to be contended with.

5. Policy-wise: the Net Neutrality debate will recede as it becomes evident that while network providers need the ability to manage bits, those who think they can manage or shape the transport layer to bias one application or service over another will be proven wrong. The influence and relative progress of the ROW will help here. And while the focus is on policy, the internet policy debate will switch to US broadband adoption and the relative speed/price of offerings in the US vs. the ROW.

6. In terms of protocols and the evolution of the web: web 2.0, given that it has moved from a useful definition to an undefined meme, will recede in importance, and the semantic web will begin to take shape; standards and APIs will be extended to form the basis for the next iteration of the internet.

7. Hardware- and device-wise: Vista's influence will be mostly in the enterprise, the iPod starts looking tired, and the iTV box becomes a big deal. Leopard will be a bigger deal than most expect. Xbox 360 will get squeezed from the bottom (Wiiiii!), and PS3 will make its numbers; the product is pretty good, not as much fun as the Wii, but nonetheless good. And Linux phones should be on your radar; they are on mine.

Gmail Just Got Perfect?

"Techcrunch » Blog Archive » Uh Oh, Gmail Just Got Perfect Google quietly added a small feature to Gmail this week called Mail Fetcher. When that feature launched, Gmail became perfect."

Gmail perfect? Not yet. All too often I find that Google's religion gets in the way of it becoming a great service. Google's world view is defined by and through the lens of search. This drives features that are sometimes bent (no folders, only labels; pray tell, what's the difference? metaphors are important, no need to bend them), features that are sorely lacking (e.g., IMAP: in a search-centric world where everything lives in the cloud, no one needs to sync with clients or devices, so why bother with IMAP? Or is it because IMAP would break the conversations feature, or because it would give users a path around the ads?), features which are good but not great (like the conversations feature, which every so often mis-files a mail and suddenly mail is a mess), and a data / privacy policy that serves search, not users. Last, in a world where a rich set of tools is emerging for client-based email (e.g., here, or here, or there), wouldn't some APIs make sense in Gmail?

There is so much headroom for improvement in mail. Gmail made some great strides forward, but perfect? Not yet, and not for most of the world; at least that's what the data suggest. The last time I saw usage data for web-mail services in the US, Yahoo was the leader with 40+% share and Gmail had less than 3%. I often hear that internationally Gmail is meant to be way ahead, but I recently saw a piece on web-mail market share in India: Gmail has 5%; Yahoo, Rediff and Hotmail have most of the rest of the market. Alpha geeks seem to gloss over this data with the assumption that it's only a question of time and the rest of the world will figure it out. Two and a half years after the launch of Gmail, the rest of the world still hasn't figured it out. And btw, in the quest to follow Google, no one seems to talk much about Myspace's 20% domestic share of email; the Newscorp UK / Google deal is interesting for that reason and more.