
lines in the sand …

I had the good fortune of receiving an advance copy of Ken Auletta’s forthcoming book “Googled, The End of the World as We Know It“. It’s a fascinating read, one that raises a whole set of interesting dichotomies related to Google and its business practices. Contrast the fact that Google’s business drives open and free access to data and intellectual property, so that the world becomes part of its corpus of data, yet the company tightly guards its own IP in regards to how to navigate that data. Contrast that the users and publishers who gave Google the insights to filter and search data are the ones who are then taxed to access that data set. Contrast Google’s move into layers beyond web sites (e.g., operating systems, web browsers) with its apparent belief that it won’t face issues stemming from walled gardens and tying. In Google we have a company that believes “Don’t be evil” is a sufficient promise for its users to trust its intentions, yet it is a company that has never articulated what it thinks is evil and what is not (Google.cn, anyone?).

There is a lot to think about in Auletta’s book – it’s a great read. When I began reading, I hoped for a prescriptive approach, a message about what Google should do; instead Auletta provides the corporate history, identifies the challenging issues, and leaves it to the reader to form a position on where they lead. In my case, the issue it got me thinking about most was antitrust.

My bet is that in the coming few years Google is going to get hauled into an antitrust episode similar to what Microsoft went through a decade ago. Google’s business has grown to dominate navigation of the Internet. Matched with their incredibly powerful and distributed monetization engine, this power over navigation is going to run headlong into a regulator. I don’t know where (US or elsewhere) or when, but my bet is that it will happen sooner rather than later. And once it does happen, the antitrust process will again raise the thorny issue of whether regulation of some form is an effective tool in the fast-moving technology sector.


I was a witness against Microsoft in the remedy phase of its antitrust trial, and I still think a lot about whether technology regulation works. I now believe the core position I advocated in the Microsoft trial was wrong. I don’t think government has a role in participating in technology design, and I believe the past ten years have adequately illustrated that the pace of innovation and change will outrun any one company’s ability to monopolize a market. There’s no question in my mind that Microsoft still has a de facto monopoly on the market for operating systems. There’s also no question that the US and EU regulatory environments have constrained the company’s actions, mostly for the better. But the primary challenges for Microsoft have come from Google and, to a lesser extent, from Apple. Microsoft feels the heat today, but it is coming from Silicon Valley, not Brussels or Washington, and it would be feeling this heat no matter what had happened in the regulatory sphere. The EU’s decisions to unbundle parts of Windows did little good for RealNetworks or Netscape (which had been harmed by the bundling in the first place), and my guess is that Adobe’s Flash/AIR and Mozilla’s Firefox would be thriving even if the EU had taken no action at all.

But if government isn’t effective at forward-looking technology regulation, what alternatives do we have? We can restrict regulation to instances where there is discernible harm (approach: compensate for past wrongs, don’t design for future ones) or stay out and let the market evolve (approach: accept the voracious appetite of these platforms because they’re temporary). But is there another path? What about a corporate statement of intent like Google’s “Don’t be evil”?

“Don’t be evil” resonated with me because it suggested that Google as a company would respect its users first and foremost and that its management would set boundaries on the naturally voracious appetite of its successful businesses.

In the famous cover letter in Google’s registration statement with the SEC before its IPO, its founders said: “Our goal is to develop services that significantly improve the lives of as many people as possible. In pursuing this goal, we may do things that we believe have a positive impact on the world, even if the near term financial returns are not obvious.” The statement suggests that there are a set of things that Google would not do. Yet as Auletta outlines, “don’t be evil” lacks forward-looking intent, and most importantly, it doesn’t outline what good might mean.

Nudge please …

Is there a third way — an alternative that places the company builders in a more active position? After almost two decades of development, many of the properties of the Internet have been documented and discussed, so why not distill these and use them as guideposts? I love reading and rereading works like the Stupid Network, the Cluetrain Manifesto, the Cathedral and the Bazaar, or (something seasonal!) the Halloween Memos. In these works, and others, there is a mindset, an ethos or culture that is philosophically consistent with the medium. When I first heard “Don’t be evil” my assumption was that it, and by definition good, referred to that very ethos. What if we unpack these principles, so that the builders of the things that make up these internets can make their intent explicit and begin to establish a compact, rather than a loose general statement of “goodness” that is subject to the constraint that “good” can be relative to the appetite of the platform? Regulation in a world of connected data, where the network effect of one platform helps form another, has much broader potential for unintended consequences. How we address these questions is going to affect the pace and direction of technology-based innovation in our society. If forward-looking regulation isn’t the answer, can companies themselves draw some lines in the sand, unpack what “don’t be evil” suggested, and nudge the market towards an architecture in which users, companies, and other participants in the open internet signal the terms and expectations they have?

Below is a draft list of principles. It is incomplete, I’m sure — I’m hoping others will help complete it — but after reading Auletta’s book and after thinking about this for a while I thought it would be worth laying out some thoughts in advance of another regulatory mess.

1. Think users 

When you start to build something online the first thing you think about is users. You may well think about yourself — user #1 — and use your own workflow to intuit what others might find useful, but you start with users and I think you should end with users. This is less of a principle and more of a rule of thumb, and a foundation for the other principles. It’s something I try to remind myself of constantly. In my experience with big and small companies this rule of thumb seems to hold constant: if the person who is running the shop you are working for doesn’t think about end users and/or doesn’t use your product, it’s time to move on. As Eric Raymond says, you should treat your users as co-developers. Google is a highly user-centric company for one of its scale; it stated this in the preamble to its IPO filing and it has managed to stay relatively user-centric, with few exceptions (Google.cn likely the most obvious, maybe the Book deal). Other companies — e.g., Apple, Facebook — are less user-centric. Working on the Internet is like social anthropology: you learn by participant observation — the practice of doing and building is how you learn. In making decisions about services like Google Voice, Beacon, etc., users’ interests need to be where we start and where we end.

2. Respect the layers

In 2004 Richard Whitt, then at MCI, framed the argument for using the layer model to define communication policy. I find this very useful: it is consistent with the architecture of the internet, it articulates a clear separation of content from conduit, and it has the added benefit of being a useful visual representation of something that can be fairly abstract. Whitt’s key principle is that companies should respect the distinction between these layers. He captures in a simple framework what is wrong with cable companies or cell carriers wanting to mediate or differentially price bits. It also helps to frame the potential problems that Sidewiki, the iPhone, Google Voice, or Chrome present. (I’m struck by the irony that “respecting the layers,” in the case of a browser, means the provider should embed none of its own features into the chrome of the browser — calling the browser Chrome is suggestive of exactly what I don’t want, i.e. Google-specific chrome!) All these products have the potential to violate the integrity of the layers by blending the content and application layers. It would be convenient and simple to move on at this point, but it’s not that easy.

There are real user benefits to tight coupling (and the blurring of layers), in particular during the early stages of a product’s development. There were many standalone MP3 players on the market before the iPod. Yet it was the coupling of the iPod to iTunes, and the set of business agreements that Apple embedded into iTunes, that made that market take off (note that this occurred eighteen months after the launch of the iPod). Same for the Kindle — coupling the device to Amazon’s store and to the wireless “Whispernet” service is what distinguishes it from countless other (mostly inferior) ebook readers. But roll the movie forward: it’s now six and a half years after the launch of the coupled iTunes/iPod system. The device has evolved into a connected device that is coupled both to iTunes and to AT&T, and the store has evolved way beyond music. Somewhere in that evolution Apple started to trip over the layers. The lines between the layers became blurred, and so did the lines between vendors, agents and users. Maybe it started with the DRM issue in iTunes, or maybe the network coupling, which in turn resulted in the Google Voice issue. I’m not sure when it happened, but it has happened, and unless something changes it’s going to be more of a problem, not less. Users, developers and companies need to demand clarity around the layers, and transparency into the business terms that bound the layers. As iTunes scales — to become what it is in essence, a media browser — I believe the pressure to clarify these layers will increase. An example of where the layers have blurred without the feature creep/conflict is the search box in, say, the Firefox browser: Google is the default, there is a transparent economic agreement that places it there, and users can pick another default if they wish.
One of the unique attributes of the internet is that the platform on which we build things is the very same one we use to “consume” those things (remember the thrill of “view source” in the browser). Given this recursive aspect of the medium, it is especially important to respect the layers. Things built on the Internet can themselves redefine the layers.

3. Transparency of business terms

When a platform like Google, iTunes, Facebook, or Twitter gets to scale it rapidly forms a basis on which third parties can build businesses. Clarity around the business terms for inclusion in the platform, and around what drives promotion and monetization within it, is vital to the long-term sustainability of the underlying platform. It also reduces the cost of inclusion by standardizing the business interface into the platform. AdSense is a remarkable platform for monetization. The Google team did a masterful job of scaling a self-service (read: standardized) interface into their monetization system. The benefits of this have been written about at length, yet aspects of the platform like “smart pricing” aren’t transparent. See this blog post from Google about smart pricing and some of the comments in the thread. They include: “My eCPM has tanked over the last few weeks and my earnings have dropped by more then half, yet my traffic is still steady. I’m lead to believe that I have been smart priced but with no information to tell me where or when”.

Back in 2007 I ran a company called Fotolog. The majority of Fotolog’s monetization was via Google. One day our Google revenues fell by half. Our traffic hadn’t fallen, and up to that point our Google revenue had been pretty stable. Something was definitely wrong, but we couldn’t figure out what. We contacted our account rep at Google, who told us that there was a mistake on our revenue dashboard. After four days of revenues running at the same depressed level, we were told we had been “smart priced”. Google would not offer us visibility into how this is measured or what the competitive cluster is against which you are being tested. That opacity made it very hard for Fotolog to know what to do. If you get smart priced you can end up having to reorganize your entire base of inventory, all while groping to understand what is happening inside the black box of Google. Google points out that it doesn’t directly benefit from many of these changes in pricing (advertisers do pay less per click), but Google does benefit from the increased liquidity in the market. As with Windows, there is little transparency in regards to the pricing and economics within the platform. This in turn leaves a meaningful constituent on the sidelines, unsatisfied or unclear about the terms of their business relationship with the platform. I would argue that smart pricing, and the lack of transparency into how the monetization platform applies to social media, is driving advertisers to services like Facebook’s new advertising platform.
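To make the opacity concrete, here is a minimal sketch of the kind of monitoring that surfaced the problem at Fotolog: eCPM collapsing while traffic holds steady. All names, numbers and thresholds are invented for illustration; this is not any real AdSense API.

```python
def ecpm(revenue, impressions):
    """Effective CPM: earnings per thousand impressions."""
    return revenue / impressions * 1000

def detect_repricing(days, drop_threshold=0.4):
    """Flag days where eCPM fell sharply while traffic stayed steady.

    `days` is a list of (revenue, impressions) tuples in date order.
    Returns the indices of suspicious days.
    """
    alerts = []
    for i in range(1, len(days)):
        prev_rev, prev_imp = days[i - 1]
        rev, imp = days[i]
        ecpm_drop = 1 - ecpm(rev, imp) / ecpm(prev_rev, prev_imp)
        traffic_change = abs(imp - prev_imp) / prev_imp
        # eCPM collapsed but traffic is flat: pricing changed, not audience.
        if ecpm_drop >= drop_threshold and traffic_change < 0.1:
            alerts.append(i)
    return alerts

# Day 0: normal; day 1: revenue roughly halves on steady traffic
# (the pattern Fotolog saw). Figures are hypothetical.
history = [(1000.0, 2_000_000), (480.0, 1_980_000)]
print(detect_repricing(history))  # -> [1]
```

The point of the sketch is that a publisher can only detect the symptom; without transparency into how the competitive cluster is defined, there is nothing actionable to do with the alert.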

Back to Apple. iTunes is, as I outlined above, a media browser — we think about it as an application because we can only access Apple stuff through it: a simple, yet profound, design decision. Apple created this amazing experience that arguably worked because it was tightly coupled end to end, i.e., the experience stretched from the media through the software to the device. Then when the device became a phone, the coupling extended to the network (here in the US, AT&T). I remember two years ago I almost bricked my iPhone — Apple reset it to its birth state — because I had enabled installing applications that weren’t “blessed” by Apple. My first thought was: isn’t this my phone? What right does Apple have to control what I do with it? Didn’t I buy it? A couple of months ago, Apple blocked Google Voice’s iPhone application; two weeks ago Apple rejected someecards’ application from the app store while permitting access to a porn application (both were rated 17+; one was satire, the other wasn’t). The issue here isn’t monopoly control, per se — Apple certainly does not have a monopoly on cell phones, nor AT&T on cell phone networks. The trouble is that there is little to no transparency into *why* these applications weren’t admitted into the app store. (someecards’ application did eventually make it over the bar; you can find it here.) Will Google Voice get accepted? Will Spotify? Rdio? As with the Microsoft of yesteryear (which, among other ills, forbade disclosure of its relationships with PC makers), there is an opaqueness to the business principles that underlie the iTunes app store. This is a design decision that Apple has made, and one that, so far anyway, users and developers have accepted. And, in my opinion, it is flawed. Ditto for Facebook: this past week, the terms for application developers were modified once again.
A lot of creativity, effort, and money has been invested in Facebook applications — the platform needs a degree of stability and transparency for developers and users.

4. Data in, data out?

APIs are a cornerstone of the emerging mesh of services that sit on top of and around platforms. The data flows between service providers should, where possible, be two-way. Services that consume an API should publish one of their own. The data-ownership issues among these services are going to become increasingly complex. I believe that users have the primary rights to their data, and that the applications users select have a proxy right, as do other users who annotate and comment on the data set. If you accept that as a reasonable proposition, then it follows that service providers have an obligation to let users export that data and to let other service providers “plug into” that data stream. The compact I outline above is meaningfully different from what some platforms offer today. Facebook asserts ownership rights over the data you place in its domain; in most cases the data is not exportable by the user or another service provider (e.g., I cannot export my Facebook pictures to Flickr, nor wire up my feed of pictures from Facebook to Twitter). Furthermore, if I leave Facebook it still asserts rights to my images. I know this is technically the easiest answer. Having to delete pictures that are now embedded in other people’s feeds is a complex user experience, but I think that’s what we should expect of these platforms. The problem is far simpler if you just link to things and then promote standards for interconnection. These standards exist today in the form of RSS or Activity Streams — pick your flavor and let users move data from site to site, and let users store and save their data.
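As a sketch of what “data out” could look like in practice, a service can expose a user’s items as plain RSS 2.0 that any other provider can consume. This uses only Python’s standard library; the feed title, item data and URLs are invented for illustration.

```python
import xml.etree.ElementTree as ET

def to_rss(title, link, items):
    """Serialize a user's items as a minimal RSS 2.0 feed.

    `items` is a list of (title, url, description) tuples.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for item_title, url, desc in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = url
        ET.SubElement(item, "description").text = desc
    return ET.tostring(rss, encoding="unicode")

# A hypothetical photo service exporting one user's stream.
feed = to_rss(
    "jb's photos",
    "http://example.com/users/jb",
    [("Sunset", "http://example.com/photos/1", "Sunset over the Hudson")],
)
print(feed)
```

Because the output is a standard format rather than a proprietary one, any other service — a backup tool, a competing photo site, a stream aggregator — can plug into it without a business negotiation.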

5. Do what you do best, link to the rest

Jeff Jarvis’s motto for newsrooms applies to service providers as well. I believe the next stage of the web is going to be characterized by a set of loosely coupled services — services that share data — offering end users the ability either to opt for an end-to-end solution or to roll their own in a specific domain where they have depth of interest, knowledge, or data. The first step in this process is that real identity is becoming public and separable from the underlying platform (vs. private in, say, The Facebook, or alias-based in most earlier social networks). In the case of services like Facebook Connect and Twitter’s OAuth support, this not only simplifies the user experience; identity also pre-populates a social graph into the service in question. OAuth flows identity into a user’s web experience, vs. the disjointed efforts of the past. This is the starting point. We are now moving beyond identity into a whole set of services stitched together by users. Companies of yesteryear, as they grew in scale, started to co-opt vertical services of the web into their domain (remember when AOL put a browser inside its client, with the intention of “super-setting” the web?). That was an extreme case — but it is not all that different from Facebook’s “integration” of email: a messaging system with no IMAP access, one that sends me an email to my IMAP “email” account to tell me to check that I have a Facebook “email”. This approach won’t scale for users. Kevin Marks, Marc Canter, and Jerry Michalski are some of the people who have been talking for years about an open stack; in the latter half of this presentation Kevin outlines the emerging stack. I believe users will opt — over time — for best-in-class services vs. the walled-garden, roll-it-once approach.

 


6. Widen my experience – don't narrow it

Google search increasingly serves to narrow my experience on the web rather than expand it. This is driven by a combination of the pressure inherent in Google’s business model to push page views within its domain vs. outside it (think Yahoo Finance, Google OneBox, etc.) and the evolution of an increasingly personalised search experience, which tends to feed back and amplify my existing biases — serving to narrow my perspective vs. broaden it. Auletta talks about this at the end of his book, quoting Nick Carr: “They (Google) impose homogeneity on the Internet’s wild heterogeneity. As the tools and algorithms become more sophisticated and our online profiles more refined, the Internet will act increasingly as an incredibly sensitive feedback loop, constantly playing back to us, in amplified form, our existing preferences.” Features like social search will only exacerbate this problem. This point is the more subtle side of the point above. I wrote a post a year or two ago about thinking of centres vs. wholes and networks vs. destinations. As the web of pages becomes a web of flows and streams, the experience of the web is going to widen again. You can see this in the data — the charts in the “Distribution … now” post illustrate the shift that is taking place. As the visible, user-facing part of a web site becomes less important than the APIs and the myriad ways that users access the underlying data, the web, and our experience of it, will widen again.

Conclusions

I have outlined six broad principles that I believe can be applied as a design methodology for companies building services online today. They are inspired by others; the list would be very long, and I won’t attempt to document it for fear of missing someone. Building companies on today’s internet is by definition an exercise in standing on the shoulders of giants. Internet standards from TCP/IP onward are the strong foundation of an architecture of participation. As users pick and choose which services they want to stitch together into their cloud, can companies build services based on these shared data sets in a manner that is consistent with the expectations we hold for the medium? The web has a grain to it, and after 15 years of innovation we can begin to observe the outlines of that grain. We may not always be able to describe exactly what makes something “web consistent,” but we do know it when we see it.

The Microsoft antitrust trial is a case study in regulators acting as design architects. It didn’t work. Google’s “don’t be evil” mantra represents an alternative approach, one that is admirable in principle but lacking in specificity. I have outlined a third way here, one in which we as company creators coalesce around a set of principles saying what we aspire to do and not do, principles that will be visible in our words and our deeds. We can then nudge our own markets forward ourselves, instead of relying on the “helping hand” of government.


diversity within the real time stream

I got a call on Friday from a journalist at the Financial Times who was writing on the Twitter ecosystem. We had an interesting conversation, and he ran his piece over the weekend: “Twitter branches out as London’s ‘ecosystem’ flies”.

As the title suggests the focus was on the Twitter ecosystem in London.    Our conversation also touched on the overall size and health of the real-time ecosystem — this topic didn’t make it into the article. It’s hard to gauge the health of a business ecosystem that is still very much under development and has yet to mature into one that produces meaningful revenues. Yet the question got me thinking — it also got me thinking that it has been a while since I had posted here. It was one busy summer. I have a couple of long posts I’m working on but for now I want to do this quick post on the real-time ecosystem and in it offer up some metrics on its health.

Back in June I did a presentation at Jeff Pulver’s 140conf, the topic of which was the real-time / Twitter ecosystem. Since then, I have been thinking about the diversity of data sources, notably the question of where people are publishing and consuming real-time data streams. At betaworks we are fairly deep into the real-time / Twitter ecosystem. In fact, every company at betaworks is a participant, in one manner or another, in this ecosystem — and that’s a feature, not a bug! Of the 20 or so companies in the betaworks network, there is a subset that we operate; one of those is bit.ly.

In an attempt to answer this question about the diversity of the ecosystem, let me run through some internal data from bit.ly. bit.ly is a URL shortener that offers, among other things, real-time tracking of the clicks on each link (add “+” to any bit.ly URL to see this data stream). With a billion bit.ly links clicked on in August — 300m last week — bit.ly has become almost part of the infrastructure of the real-time cloud. Given its scale, bit.ly’s data is a fair proxy for the activity of the real-time stream, at least for the links in the stream.

On Friday of this week (yesterday) there were 20,924,833 bit.ly links created across the web (we call these “encodes”). These 20.9m encodes are not unique URLs, since one popular URL might have been shortened by multiple people. But each encode represents intentionality of some form. bit.ly in turn retains a parent:child mapping, so that you can see what your sharing of a link generates vs. the population (e.g., I shared a video on Twitter the other day; my specific bit.ly link got 88 clicks, out of a total of 250 clicks on any bit.ly link to that same video — see http://bit.ly/Rmi25+).
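The parent:child idea can be sketched in a few lines. This toy model (the class, encode names other than my own link, and counts are invented for illustration, not bit.ly’s actual implementation) shows the distinction between clicks on your specific encode and clicks on the whole population of encodes for the same long URL:

```python
from collections import defaultdict

class LinkStats:
    """Toy model of many short links (encodes) pointing at one long URL."""

    def __init__(self):
        self.long_url = {}               # encode -> long URL
        self.clicks = defaultdict(int)   # encode -> click count

    def encode(self, short, url):
        self.long_url[short] = url

    def click(self, short):
        self.clicks[short] += 1

    def my_clicks(self, short):
        """Clicks on one specific encode (what '+' shows for your link)."""
        return self.clicks[short]

    def total_clicks(self, short):
        """Clicks on every encode of the same long URL (the population)."""
        url = self.long_url[short]
        return sum(n for s, n in self.clicks.items()
                   if self.long_url[s] == url)

stats = LinkStats()
stats.encode("Rmi25", "http://example.com/video")  # my encode
stats.encode("xYz99", "http://example.com/video")  # someone else's encode
for _ in range(88):
    stats.click("Rmi25")
for _ in range(162):
    stats.click("xYz99")
print(stats.my_clicks("Rmi25"), stats.total_clicks("Rmi25"))  # 88 250
```

The numbers mirror the example above: my link drove 88 of the 250 total clicks on that video across all encodes.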

So where were these 20.9m encodes created? Approximately half of them took place within the Twitter ecosystem. No surprise here: Twitter is clearly the leading public real-time stream, and about 20% of the updates on Twitter contain at least one link, approximately half of which are bit.ly links. But here is something surprising: less than 5% of the 20.9m came from Twitter.com (i.e., from Twitter’s use of bit.ly as the default URL shortener). Over 45% of the total encodes came from other services associated in some way with Twitter — i.e., the Twitter ecosystem — a long and diverse list of services and companies that use bit.ly.

The balance of the encodes came from other areas of the real time web, outside of Twitter. Google Reader incorporated bit.ly this summer, as did Nokia, CBS, Dropbox, and some tools within Facebook. And then of course people use the bit.ly web site — which has healthy growth — to create links and then share them via instant-messaging services, MySpace, email, and countless other communications tools.

The bit.ly links that are created are also very diverse. It’s harder to summarise this without offering a list of 100,000 URLs — but suffice it to say that there are a lot of pages from the major web publishers, lots of YouTube links, lots of Amazon and eBay product pages, and lots of maps. And then there is a long, long tail of other URLs. When a pile-up happens in the social web it is invariably triggered by link-sharing, and so bit.ly usually sees it in the seconds before it happens.

This data says to me that the ecosystem as a whole is becoming fairly diverse. Lots of end points are publishing (i.e. creating encodes) and then many end points are offering ways to use the data streams.

In turn, this diversity of the emerging ecosystem is, I believe, an indicator of its health. Monocultures aren’t very resilient to change; ecosystems tend to be more resilient and adaptable. For me, these few data points suggest that the real-time stream is becoming more and more interesting and more and more diverse.

Distribution … now

In February 1948, Communist leader Klement Gottwald stepped out on the balcony of a Baroque palace in Prague to address hundreds of thousands of his fellow citizens packed into Old Town Square. It was a crucial moment in Czech history – a fateful moment of the kind that occurs once or twice in a millennium.

Gottwald was flanked by his comrades, with Clementis standing next to him. There were snow flurries, it was cold, and Gottwald was bareheaded. The solicitous Clementis took off his own fur cap and set it on Gottwald’s head.

The Party propaganda section put out hundreds of thousands of copies of a photograph of that balcony with Gottwald, a fur cap on his head and comrades at his side, speaking to the nation. On that balcony the history of Communist Czechoslovakia was born. Every child knew the photograph from posters, schoolbooks, and museums.

Four years later Clementis was charged with treason and hanged. The propaganda section immediately airbrushed him out of history, and obviously, out of all the photographs as well. Ever since, Gottwald has stood on that balcony alone. Where Clementis once stood, there is only bare palace wall. All that remains of Clementis is the cap on Gottwald’s head.

— The Book of Laughter and Forgetting, Milan Kundera

The rise of social distribution networks

Over the past year there has been a rapid shift in social distribution online.    I believe this evolution represents an important change in how people find and use things online. At betaworks I am seeing some of our companies get 15-20% of daily traffic via social distribution — and the percentage is growing.    This post outlines some of the aspects of this shift that I think are most interesting.   The post itself is somewhat of a collage of media and thinking.

Distribution is one of the oldest parts of the media business. Content is assumed to be king so long as you control the distribution flow to that content. From newspapers to NewsCorp, companies have understood this model well. Yet this model has never suited the Internet very well. From the closed-network ISPs to Netcenter, Pathfinder to Active Desktop, Excite, Lycos, and Pointcast to the network computer — from attempts to differentially price bits to preset bookmarks on your browser — these were all attempts at gatekeeping attention and navigation online. Yet the relative flatness of the internet and its hyperlinked structure has offered people the ability to route around these toll gates. Rather than client software or access, the nexus of distribution became search. Today a new distribution model seems to be emerging, one based on people’s ability to publicly syndicate and distribute messages — aka content — in an open manner. This has been part of the internet since day one, yet now it’s emerging in a different form: it’s not pages, it’s streams; it’s social, and so it’s syndication. The tools serve to produce, consume, amplify and filter the stream. In the spirit of this new wave of Now Media, here is a collage of data about this shift.

Dimensions of the now web: how is it different?

Start with this constant, real-time, flowing stream of data getting published, republished, annotated and co-opted across a myriad of sites and tools. The social component is complex — consider where it’s happening. The facile view is to say it’s Twitter, Facebook, Tumblr or FriendFeed — pick your favorite service. But it’s much more than that, because all these sites are, to varying degrees, becoming open and distributed. It’s blogs, media storage sites (e.g., Twitpic), comment boards and moderation tools (e.g., Disqus) — a whole site can emerge around an issue, become relevant for a week, and then resubmerge into the morass of the data stream. Even publishers are jumping in; just this week the Times pushed out the Times Wire. The now web — or real-time web — is still very much under construction, but we are back in the dark room trying to understand the dimensions and contours of something new, or even how to map and outline its borders. It’s exciting stuff.

Think streams …

First and foremost, what emerges out of this is a new metaphor — think streams vs. pages. This seems like an abstract difference, but I think it’s very important. Metaphors help us shape and structure our perspective; they serve as a foundation for how we map, and what patterns we observe in, the world. In the initial design of the web, reading and writing (editing) were given equal consideration — yet for fifteen years the primary metaphor of the web has been pages and reading. The metaphors we used to circumscribe this possibility set were mostly drawn from books and architecture (pages, browser, sites, etc.). Most of these metaphors were static and one-way. The stream metaphor is fundamentally different. It’s dynamic, it doesn’t live very well within a page, and it is still very much evolving. Figuring out where the stream metaphor came from is hard — my sense is that it emerged out of RSS. RSS introduced us to the concept of web data as a stream — RSS itself became part of the delivery infrastructure, but the metaphor it introduced us to is becoming an important part of our everyday lives.

A stream. A real-time, flowing, dynamic stream of information — that we as users and participants can dip in and out of, and whether we participate in it or simply observe, we are a part of this flow. Stowe Boyd talks about this as the web as flow: "the first glimmers of a web that isn't about pages and browsers" (see this video interview, view section 6 –> 7.50 mins in). This world of flow, of streams, contains a very different possibility set to the world of pages. Among other things, it changes how we perceive needs. Overload isn't a problem anymore, since we have no choice but to acknowledge that we can't wade through all this information. This isn't an inbox we have to empty, or a page we have to get to the bottom of — it's a flow of data that we can dip into at will, but we can't attempt to gain an all-encompassing view of it. Dave Winer put it this way in a conversation over lunch about a year ago. He said: "Think about Twitter as a rope of information — at the outset you assume you can hold on to the rope. That you can read all the posts, handle all the replies and use Twitter as a communications tool, similar to IM — then at some point, as the number of people you follow and who follow you rises, your hands begin to burn. You realize you can't hold the rope; you need to just let go and observe the rope." Over at Facebook, Zuckerberg started by framing the flow of user data as a news feed — a direct reference to RSS — but more recently he has shifted to talking about it as a stream: "… a continuous stream of information that delivers a deeper understanding for everyone participating in it. As this happens, people will no longer come to Facebook to consume a particular piece or type of content, but to consume and participate in the stream itself." I have to finish up this section on the stream metaphor with a quote from Steve Gillmor.
He is talking about a new version of Friendfeed, but more generally he is talking about real time streams.     The content and the language — this stuff is stirring souls.

We’re seeing a new Beatles emerging in this new morning of creativity, a series of devices and software constructs that empower us with both the personal meaning of our lives and the intuitive combinations of serendipity and found material and the sturdiness that only rigorous practice brings. The ideas and sculpture, the rendering of this supple brine, we’ll stand in awe of it as it is polished to a sparkling sheen. (full article here)

Now, Now, Now

The real-time aspect of these streams is essential. At betaworks we are big believers in real time as a disruptive force — it's an important aspect of many of our companies — it's why we invested a lot of money into making bit.ly real time. I remember when Jack Dorsey first saw bit.ly's plus or info page (the page you get to by putting a "+" at the end of any bit.ly URL) — he said this is "great, but it updates on 30 min cycles, you need to make it real time". This was August of '08 — I registered the thought, but also thought he was nuts. Here we sit in the spring of '09, and we invested months in making bit.ly real time — it works, and it matters. Jack was right — what people want is to see the effects of how a meme is spreading — in real time. It makes sense — watching a stream on a 30 min delay is somewhere between weird and useless. You can see an example of the real-time bit.ly traffic flow to a URL here. Another betaworks company, Someecards, is getting 20% of daily traffic from Twitter. One of the founders, Brook Lundy, said the following: "real time is now vital to what we do. Take the swine flu — within minutes of the news that a pandemic level 5 had been declared, we had an ecard out on Twitter". Sardonic, ironic, edgy ecards — who would have thought they would go real time? Instead of me waxing on about real time, let me pass the baton over to Om — he summarizes the shift as well as one could:

  1. “The web is transitioning from mere interactivity to a more dynamic, real-time web where read-write functions are heading towards balanced synchronicity. The real-time web, as I have argued in the past, is the next logical step in the Internet’s evolution. (read)
  2. The complete disaggregation of the web in parallel with the slow decline of the destination web. (read)
  3. More and more people are publishing more and more “social objects” and sharing them online. That data deluge is creating a new kind of search opportunity. (read)”

Only connect …

The social aspects of this real-time stream are clearly a core and emerging property. Real time gives this ambient stream a degree of connectedness that other online media types haven't had. Presence, chat, IRC and instant messaging all gave us glimmers of what was to come, but the "one to one" nature of IM meant that we could never truly experience its social value. It was thrilling to know someone else was on the network at the same time as you — and very useful to be able to message them — but it was one to one. Similarly, IRC and chat rooms were open to one-to-many and many-to-many communications, but they usually weren't public. And in the instances where they were public, the tools to moderate and manage the network of interactions were missing or crude. In contrast, the connectedness or density of real-time social interactions emerging today is astounding — as the examples in the collage above illustrate. Yet it's early days. There are a host of interesting questions on the social front. One of the most interesting is, I think: how will the different activity streams intersect and combine / recombine, or will they simply compete with one another? The two dominant, semi-public activity streams today are Facebook and Twitter. It is easy to think about them as similar and bound for head-on competition — yet the structure of these two networks is fairly different. Whether it's possible or desirable to combine these streams is an emerging question — I suspect the answer is that over time they will merge, but it's worth thinking about the differences when thinking about ways to bring them together. The key differences I observe between them are:

#1. Friending on Facebook is symmetrical — on Twitter it's asymmetrical. On Facebook if I follow you, you need to follow me; not so on Twitter, where I can follow you and you may never notice or care. Similarly, I can unfollow you and again you may never notice or care. This is an important difference. When I ran Fotolog I observed the dynamics associated with an asymmetrical friend network — it is, I think, a closer approximation of the way human beings manage social relationships. And I wonder the extent to which the Facebook symmetrical friend network was / is a product of the audience for which Facebook was initially created (students). When I was a student I was happy to have a symmetrical social network; today, not so much.
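The structural difference can be sketched as two tiny data structures — a hypothetical, minimal model for illustration only, not either service's actual implementation. A symmetric network stores friendship as a mutual edge created in both directions at once; an asymmetric network stores following as a one-way edge that one side can create unilaterally:

```python
# Illustrative sketch of the two friending models (hypothetical,
# not Facebook's or Twitter's actual data model).

class SymmetricNetwork:
    """Facebook-style: a friendship edge exists in both directions."""
    def __init__(self):
        self.friends = {}  # user -> set of friends

    def befriend(self, a, b):
        # Friendship is mutual: one action creates both directions.
        self.friends.setdefault(a, set()).add(b)
        self.friends.setdefault(b, set()).add(a)

class AsymmetricNetwork:
    """Twitter-style: following is one-directional and unilateral."""
    def __init__(self):
        self.following = {}  # user -> set of users they follow

    def follow(self, a, b):
        # a follows b; b need not notice, care, or reciprocate.
        self.following.setdefault(a, set()).add(b)

    def follows(self, a, b):
        return b in self.following.get(a, set())
```

The asymmetric version is the one that admits celebrities with millions of followers who follow back almost no one — a shape the symmetric model cannot express.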

#2. The data on Facebook is assumed to be mostly private, or shared within private groups; Facebook itself has been mostly closed to the open web — and Facebook asserts a level of ownership over the data that passes through its network. In contrast, the data on Twitter is assumed to be public, and Twitter asserts very few rights over the underlying data. These are broad statements — worth unpacking a bit. Facebook has been called a walled garden — there are real advantages to a walled garden — AOL certainly benefited from being closed to the web for a long, long time. Yet the by-product of a closed system is that (a) data is not accessible or searchable by the web in general — i.e. you need to be inside the garden to navigate it; (b) it assumes that the pace of innovation inside the garden will match or exceed the rate of innovation outside of the garden; and (c) the assertion of rights over the content within the garden means you have to mediate access and rights if and when those assets flow out of the garden. Twitter takes a different approach. The core of Twitter is a simple transport for the flow of data — the media associated with a post is not placed inline — so Twitter doesn't need to assert rights over it. Example: if I post a picture within Facebook, Facebook asserts ownership rights over that picture; they can reuse that picture as they see fit. If I leave Facebook they still have rights to use the image I posted. In contrast, if I post a picture within Twitter the picture is hosted on whichever service I decided to use. What appears in Twitter is a simple link to that image. I, as the creator of that image, can decide whether I want those rights to be broad or narrow.

#3. Defined use case vs. open use case. Facebook is a fantastically well designed set of work-flows or use cases. I arrive on the site and it presents me with a myriad of possible paths I can follow to find people, share and post items and receive / measure associated feedback. Yet the paths are defined for the users. If Facebook is the well organized, pre-planned town, Twitter is more like new urbanism — it's organic and the paths are formed by the users. Twitter is dead simple and the associated work-flows aren't defined; I can devise them for myself (@replies, RT and hashtags all arose out of user behavior rather than a predefined UI. At Fotolog we had a similar set of emergent, user-driven features — e.g. groups formed organically, and then over time the company integrated the now-defined work-flow into the system). There are people who will swear Twitter is a communications platform, like email or IM — others say it's micro-blogging — others say it's broadcast — and the answer is that it's all of the above and more. Its work-flows are open, available to be defined by users and developers alike. Form and content are separated in a way that makes work-flows, or use cases, open to interpretation and needs.

As I write this post, Facebook is rapidly re-inventing itself on all three of the dimensions above. It is changing at a pace that is remarkable for a company with a membership of its size. I think it's changing because Facebook has understood that it can't attempt to control the stream — it needs to turn itself inside out and become part of the web stream. The next couple of years are going to be pretty interesting. Maybe E.M. Forster had it nailed in Howards End: "Only connect! That was the whole of her sermon … Live in fragments no longer."

The streams are open and distributed and context is vital

The streams of data that constitute this now web are open, distributed, often appropriated, sometimes filtered, sometimes curated, but often raw. The streams make up a composite view of communications and media — one that is almost collage-like (see composite media and wholes vs. centers). To varying degrees the streams are open to search / navigation tools, and it's very often long, long tail stuff. Let me run out some data as an example. I pulled a day of bit.ly data — all the bit.ly links that were clicked on May 6th. The 50 most popular links generated only 4.4% (647,538) of the total number of clicks. The top 10 URLs were responsible for roughly half (2% of the total) of those 647,538 clicks. 50% of the total clicks (14m) went to links that received 48 clicks or fewer. A full 37% of the links that day received only 1 click. This is a very, very long and flat tail — it's more like a pancake. I see this as a very healthy data set that is emerging.
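The head-vs-tail arithmetic above is easy to reproduce. Here is a small sketch over synthetic click counts (the real May 6th bit.ly figures are not reproduced here — the distribution below is made up purely to illustrate the calculation):

```python
# Sketch: long-tail statistics over per-link click counts.
# The counts below are synthetic, for illustration only.

def tail_stats(clicks, top_n=50):
    """Return (share of all clicks captured by the top_n links,
               share of links that received exactly one click)."""
    ordered = sorted(clicks, reverse=True)
    total = sum(ordered)
    head_share = sum(ordered[:top_n]) / total
    one_click_share = sum(1 for c in ordered if c == 1) / len(ordered)
    return head_share, one_click_share

# A power-law-ish shape: a few huge links, then a vast flat tail.
clicks = [10000, 8000, 5000] + [50] * 200 + [1] * 500
head, singles = tail_stats(clicks, top_n=3)
```

With a distribution like this, a tiny head still captures a majority of clicks while most links get a single click each — the "pancake" shape described above.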

Weeding context out of this stream of data is vital. Today context is provided mostly via social interactions and gestures. People send out a message — with some context in the message itself — and then the network picks up from there. The message is often re-tweeted, favorited, liked or re-blogged; it's appropriated, usually with attribution to the creator or the source message — sometimes it's categorized with a tag of some form and then curation occurs around that tag — and all this time it spins around, picking up velocity and more context as it swirls. Over time, tools will emerge to provide real context to these pile-ups. Semantic extraction services like Calais, Freebase, Zemanta, Glue, Kynetx and Twine will offer windows of context into the stream — as will better trending and search tools. I believe search gets redefined in this world, as it collides with navigation — I blogged at length on the subject last winter. And filtering becomes a critical part of this puzzle. FriendFeed is doing fascinating things with filters — allowing you to navigate and search in ways that a year ago could never have been imagined.

Think chunk
Traffic isn't distributed evenly in this new world. All of a sudden, crowds can show up on your site. This breaks with the stream metaphor a little — it's easy to think of flows in the stream as steady — but you have to think in bursts; this is where words like swarms become appropriate. Some data to illustrate this shift. The charts below track the number of users simultaneously on a site. The site is a political blog. You can see on the left that the daily traffic flows are fairly predictable — peaking around 40-60 users on the site on an average day; peaks are around mid-day. Weekends are slow — the chart tracks Monday to Monday, and from that, Wednesday seems to be the strongest day of the week — at least it was last week. But then take a look at the chart on the right — tracking the same data for the last 30 days. You can see that on four occasions over the last 30 days, all of a sudden, the traffic was more than 10x the norm. Digging into these spikes — they were either driven by a pile-up on Twitter, Facebook or Digg, or a feature on one of the blog aggregation sites. What do you do when, out of nowhere, 1,000 people show up on your site?

[Chartbeat traffic charts: minnesotaindependent.com]

The other week I was sitting in NY at 14th Street and 9th Avenue with a colleague, talking about this stuff. We were across the street from the Apple store and it struck me that there was a perfect example of a service set up to respond to chunky traffic. If 5,000 people show up at an Apple store in the next 10 minutes — they know what to do. It may not be perfect, but they manage the flow of people in and out of the store, start a line outside, bring people standing outside water as they wait, maybe take names so people can leave and come back. I've experienced all of the above while waiting in line at that store. Apple has figured out how to manage swarms like a museum or public event would. Most businesses and web sites have no idea how to do this. Traffic in the other iterations of the web was more or less smooth, but the future isn't smooth — it's chunky. So what to do when a burst takes place? I have no real idea what's going to emerge here, but cursory thoughts include making sure the author is present to manage comments, and building in a dynamic mechanism to alert the crowd to other related items. Beyond that it's not clear to me, but I think it's a question that will be answered — since users are asking it. Where we are starting at betaworks is making sure the tools are in place to at least find out if a swarm has shown up on your site. The example above was tracked using Chartbeat — a service we developed. We don't know what to do yet — but we do know that the first step is making sure you actually know that the tree fell — in real time.
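Knowing "that the tree fell" can start with something as simple as a baseline-and-threshold check. This is a hypothetical sketch, not Chartbeat's actual method: keep a rolling window of concurrent-user samples and flag any sample that exceeds a multiple of the recent average — the "more than 10x the norm" pattern from the charts above:

```python
# Hypothetical swarm detector for concurrent-user counts
# (an illustrative sketch, not how Chartbeat actually works).

from collections import deque

class SwarmDetector:
    def __init__(self, window=288, threshold=10.0):
        self.window = deque(maxlen=window)  # e.g. 24h of 5-minute samples
        self.threshold = threshold          # "10x the norm"

    def observe(self, concurrent_users):
        """Record a sample; return True if it looks like a swarm."""
        baseline = sum(self.window) / len(self.window) if self.window else None
        self.window.append(concurrent_users)
        if baseline is None or baseline == 0:
            return False  # not enough history to judge
        return concurrent_users >= self.threshold * baseline
```

A real system would need to handle daily seasonality (the mid-day peaks above would otherwise inflate the baseline), but even this crude version would catch a 40-user site suddenly hosting 1,000 people.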

Where is Clementis’s hat? Where is the history?

I love that quote from Kundera. The activity streams that are emerging online are all these shards — these ambient shards of people's lives. How do we map these shards to form and retain a sense of history? Like the hat, objects exist and ebb and flow with or without context. The burden to construct and make sense of all of this information flow is placed, today, mostly on people. In contrast to an authoritarian state eliminating history — today history is disappearing given a deluge of flow, and a lack of tools to navigate and provide context about the past. The cacophony of the crowd erases the past and affirms the present. It started with search, and now it's accelerated with the now web. I don't know where it leads, but I almost want a remember button — like the like or favorite. Something that registers something as a memory — as a salient fact that I, for one, can draw out of the stream at a later time. It's strangely comforting to know everything is out there, but with little sense of priority or ability to find it, it becomes like a mythical library — it's there but we can't access it.

Unfinished

This media is unfinished: it evolves, it doesn't get finished or completed. Take the two quotes below — both from Brian Eno, but fifteen years apart — they outline some of the boundaries of this aspect of the stream.

In a blinding flash of inspiration, the other day I realized that “interactive” anything is the wrong word. Interactive makes you imagine people sitting with their hands on controls, some kind of gamelike thing. The right word is “unfinished.” Think of cultural products, or art works, or the people who use them even, as being unfinished. Permanently unfinished. We come from a cultural heritage that says things have a “nature,” and that this nature is fixed and describable. We find more and more that this idea is insupportable – the “nature” of something is not by any means singular, and depends on where and when you find it, and what you want it for. The functional identity of things is a product of our interaction with them. And our own identities are products of our interaction with everything else. Now a lot of cultures far more “primitive” than ours take this entirely for granted – surely it is the whole basis of animism that the universe is a living, changing, changeable place. Does this make clearer why I welcome that African thing? It’s not nostalgia or admiration of the exotic – it’s saying, Here is a bundle of ideas that we would do well to learn from.  (Eno, Wired interview, 1995)

In an age of digital perfectability, it takes quite a lot of courage to say, “Leave it alone” and, if you do decide to make changes, [it takes] quite a lot of judgment to know at which point you stop. A lot of technology offers you the chance to make everything completely, wonderfully perfect, and thus to take out whatever residue of human life there was in the work to start with. It would be as though someone approached Cezanne and said, “You know, if you used Photoshop you could get rid of all those annoying brush marks and just have really nice, flat color surfaces.” It’s a misunderstanding to think that the traces of human activity — brushstrokes, tuning drift, arrhythmia — are not part of the work. They are the fundamental texture of the work, the fine grain of it. (Eno, Wired interview, 2008)

This media — these messages, this stream — is clearly unfinished and constantly evolving, as this post will likely also evolve as we learn more about the now web and the emerging social distribution networks.

Gottwald minus Clementis

Addendum, some new links

First — thank you to Alley Insider for re-posting the essay, and to TechCrunch and GigaOm for extending the discussion.    This piece at its heart is all about re-syndication and appropriation – as Om said “its all very meta to see this happen to the essay itself”.     There is also an article that I read after posting from Nova Spivack that I should have read in advance — he digs deep into the metaphor of the web as a stream.    And Fred Wilson and I did a session at the social media bootcamp last week where he talked about shifts in distribution dynamics — he outlines his thoughts about the emerging social stack here.   I do wish there was an easy way to thread all the comments from these different sites into the discussion here — the fragmentation is frustrating, the tools need to get smarter and make it easier to collate comments.

bit.ly now

We have had a lot going on at bit.ly over the past few weeks — some highlights — starting with some data.

• bit.ly is now encoding (creating) over 10m URLs or links a week — not too shabby for a company that was started last July.

• We picked the winners of the API contest last week after some excellent submissions

• Also last week, the bit.ly team started to push out the new real-time metrics system. This system offers the ability to watch, in real time, clicks to a particular bit.ly URL or link. The team is still tuning and adjusting the user experience, but let me outline how it works.

If you take any bit.ly link and add a “+” to the end of the URL you get the Info Page for that link.  Once you are on the info page you can see the clicks to that particular link updated by week, by day or live — a real time stream of the data flow.
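The mechanics are trivially simple — a one-line sketch of the "+" convention described above:

```python
# Sketch: turn a bit.ly short link into its "+" info-page URL.

def info_page(short_url):
    """Append '+' to a bit.ly link to get its public info/metrics page.

    Stripping any trailing '+' first makes the function idempotent,
    so passing an info-page URL back in is harmless.
    """
    return short_url.rstrip("+") + "+"
```

So `info_page("http://bit.ly/mDwWb")` yields `http://bit.ly/mDwWb+` — the info page for the Consumerist link discussed below.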

An example:

On the 15th of February a bit.ly user shortened a link to an article on The Consumerist about Facebook changing their terms of service.  The article was sent around a set of social networks and via email with the following link http://bit.ly/mDwWb.   It picked up velocity and two days later the bit.ly info page indicates that the link has been clicked on over 40,000 times — you can see the info page for this link below (or at http://bit.ly/mDwWb+ ).

In the screenshot below

1.) you see a thumbnail image of the page, its title, the source URL and the bit.ly URL.    You also see the total number of clicks to that page via bit.ly, the geographical distribution of those clicks, conversations about this link on Twitter, FriendFeed etc and the names of other bit.ly users who shortened the same link.

2.) you see the click data arrayed over time:

bit.ly live

The view selected in the screenshot above is for the past day — in the video below you can see the live data coming in while the social distribution of this page was peaking yesterday.

This exposes intentionality of sharing in its rawest form.   People are taking this page and re-distributing it to their friends.     The article from the Consumerist is also on Digg — 5800 people found this story interesting enough to Digg it.   Yet more than 40,000 people actually shared this story and drove a click through to the item they shared.     bit.ly is proving to be an interesting complement to the thumbs up.   We also pushed out a Twitter bot last week that publishes the most popular link on bit.ly each hour.    The content is pretty interesting.   Take a look and tell me what you think — twitter user name: bitlynow.

————–

A brief note re: Dave Winer's post today on bit.ly.

Dave is moving on from his day-to-day involvement with bit.ly — I want to thank him for his ideas, help and participation. It was an amazing experience working with Dave. Dave doesn't pull any punches — he requires you to think — his perspective is grounded in a deep appreciation for practice — the act of using products — understanding workflow and intuiting needs from that understanding. I learnt a lot. From bit.ly and from me — thank you.

A pleasure and a privilege.

Creative destruction … Google slayed by the Notificator?

The web has repeatedly demonstrated its ability to evolve and leave embedded franchises struggling or in the dirt. Prodigy and AOL were early candidates. Today Yahoo and eBay are struggling, and I think Google is tipping down the same path. This cycle of creative destruction — more recently framed as the innovator's dilemma — is both fascinating and hugely dislocating for businesses. To see these immense franchises melt before your very eyes is hard, to say the least. I saw it up close at AOL. I remember back in 2000, just after the new organizational structure for AOL / Time Warner was announced, there was a three-day HBS training program for 80 or so of us at AOL. I loathe these HR programs — but this one was amazing. I remember Kotter as great (fascinating set of videos on leadership; wish I had them recorded), Colin Powell was amazing, and then on the second morning Clay Christensen spoke to the group. He is an imposing figure, tall as heck, and a great speaker — he walked through his theory of the innovator's dilemma, illustrated it with supporting case studies and then asked us where disruption was going to come from for AOL. Barry Schuler — who was taking over from Pittman as CEO of AOL — jumped to answer. He explained that AOL was a disruptive company by its nature. That AOL had disruption in its DNA, and so AOL would continue to disrupt other businesses, and as the disruptor its fate would be different. It was an interesting argument — heartfelt, and in the early days of the Internet cycle it seemed credible. The Internet leaders would have the creative DNA and organizational fortitude to withstand further cycles of disruption. Christensen didn't buy it. He said time and time again disruptive businesses confuse adjacent innovation for disruptive innovation. They think they are still disrupting when they are just innovating on the same theme that they began with. As a consequence they miss the grass-roots challenger — the real disruptor to their business. The company that is disrupting their business doesn't look relevant to the billion-dollar franchise; it's often scrappy and unpolished, it looks like a sideline business, and often its business model is TBD. With the AOL story now unraveled — I now see search as fragmenting, and Twitter search doing to Google what broadband did to AOL.


Video First

Search is fragmenting into verticals. In the past year two meaningful verticals have emerged — one is video, the other is real-time search. Let me play out what happened in video, since it's indicative of what is happening in the now web. YouTube.com is now the second largest search site online — YouTube generates, domestically, close to 3BN searches per month — it's a bigger search destination than Yahoo. The Google team nailed this one. Lucky or smart — they got it dead right. When they bought YouTube the conventional thinking was they were moving into media — in hindsight, it's media, but more importantly to Google, YouTube is search. They figured out that video search was both hard and different, and that owning the asset would give them both a media destination (browse, watch, share) and a search destination (find, watch, share). Video search is different because it alters the line or distinction between search, browse and navigation. I remember when Jon Miller and I were in the meetings with Brin and Page back in November of 2006 — I tried to convince them that video was primarily a browse experience and that a partnership with AOL should include a video JV around YouTube. Today this blurring of the line between searching, browsing and navigation is becoming more complex as distribution and access of YouTube grow outside of YouTube.com. 44% of YouTube views happen in the embedded YouTube player (i.e. off YouTube.com), and late last year they added search into the embedded experience. YouTube is clearly a very different search experience to Google.com. A last point here before I move to real-time search. Look at the speed at which YouTube picked up market share. YouTube searches grew 114% year over year from Nov 2007 to Nov 2008. This is amazing — for years the web search share numbers have inched up in Google's favor — as AOL, Yahoo and others inch down, one percentage point here or there. But this YouTube share shift blows away the more gradual shifts taking place in the established search market. Video search now represents 26% of Google's total search volume.

[Image: Summize / Falls Church search results]

The rise of the Notificator

I started thinking about search on the now web in earnest last spring. betaworks had invested in Summize, and the first version of the product (a blog sentiment engine) was not taking off with users. The team had created a tool to mine sentiments in real time from the Twitter stream of data. It was very interesting — a little grid that populated real-time sentiments. We worked with Jay, Abdur, Greg and Gerry Campbell to make the decision to shift the product focus to Twitter search. The Summize Twitter search product was launched in mid April. I remember the evening of the launch — the trending topic was IMAP — I thought "that can't be right, why would IMAP be trending?" I dug into the Tweets and saw that Gmail IMAP was having issues. I sat there looking at the screen — thinking here was an issue (Gmail IMAP is broken) that had emerged out of the collective Twitter stream — something that an algorithmically based search engine, based on the relationships between links, where the provider is applying math to context-less pages, could never identify in real time.

A few weeks later I was on a call with Dave Winer and the Switchabit team — one member of the team (Jay) all of a sudden said there was an explosion outside. He jumped off the conference call to figure out what had happened. Dave asked the rest of us where Jay lived — within seconds he had Tweeted out "Explosion in Falls Church, VA?" Over the next hour and a half the Tweets flowed in and around the issue (for details, see and click on the picture above). What emerged was that a minor earthquake had taken place in Falls Church, Virginia. All of this came out of a blend of Dave's tweet and a real-time search platform. The conversations took a while to zero in on the facts — it was messy and rough on the edges, but it all happened hours before mainstream news, the USGS or any "official" body picked up the story. Something new was emerging — was it search, news, or a blend of the two? By the time Twitter acquired Summize in July of '08 it was clear that Now Web Search was an important new development.

Fast forward to today and take a simple example of how Twitter Search changes everything. Imagine you are in line waiting for coffee and you hear people chattering about a plane landing on the Hudson. You go back to your desk and search Google for "plane on the Hudson" — today, weeks after the event, Google is replete with results — but the DAY of the incident there was nothing on the topic to be found on Google. Yet at http://search.twitter.com the conversations are right there in front of you. The same holds for any topical issue — lipstick on a pig? — for real-time questions, real-time branding analysis, tracking a new product launch — on pretty much any subject, if you want to know what's happening now, search.twitter.com will come up with a superior result set.

How is real-time search different? History isn't that relevant — relevancy is driven mostly by time. One of the Twitter search engineers said to me a few months ago that his CS professor wouldn't technically regard Twitter Search as search. The primary axis for relevancy is time — this is very different to traditional search. Next, similar to video search — real-time search melds search, navigation and browsing. Way back in early Twitter land there was a feature called Track. It let you monitor, or track, the use of a word on Twitter. As Twitter scaled up, Track didn't, and the feature was shut off. Then came Summize, with the capability to refresh results — to essentially watch the evolution of a search query. Today I use a product called TweetDeck (note disclosure below) — it offers a simple UX where you can monitor multiple searches — real time — in unison. This reformulation of search as navigation is, I think, a step into a very new and different future. Google.com has suddenly become the source for pages — not conversations, not the real-time web. What comes next? I think context is the next hurdle. Social context and page-based context. Gerry Campbell talks about the importance of what happens before the query in a far more articulate way than I can, and in general Abdur, Greg, EJ, Gerry, Jeff Jonas and others have thought a lot more about this than I have. But the question of how much you can squeeze out of a context-less pixel, and how context can be wrapped around data, seems to be the beginning of the next chapter. People have been talking about this for years — it's not that this is new — it's just that the implementation of Twitter and the timing seem to be right — context in Twitter search is social. 74 years later, the Notificator is finally reaching scale.
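The claim that "the primary axis for relevancy is time" can be made concrete with a toy scoring function — a hypothetical sketch, not Summize's or Twitter Search's actual ranking: keyword match acts as a gate, and recency does the ordering, decaying a post's score by its age:

```python
# Toy real-time relevance: recency-dominated scoring
# (a hypothetical sketch, not Twitter Search's actual algorithm).

def realtime_score(post_age_seconds, matches_query, half_life=3600.0):
    """Score a post: 0.0 if it doesn't match the query; otherwise
    decay exponentially with age.

    With a one-hour half-life, an hour-old post scores half what a
    fresh one does — so the result set is effectively newest-first.
    """
    if not matches_query:
        return 0.0
    return 0.5 ** (post_age_seconds / half_life)
```

Contrast this with link-graph ranking, where a page's score is nearly independent of when it was published — which is exactly why Google had nothing on "plane on the Hudson" the day it happened.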

A side bar thought: I do wonder whether Twitter's success is partially based on Google teaching us how to compose search strings. Google has trained us how to search against its index by composing concise, intent driven statements. Twitter, with its 140 character limit, picked right up from the Google search string. The question is different (what are you doing? vs. what are you looking for?) but the compression of meaning required by Twitter is, I think, a behavior that Google helped engender. Maybe Google taught us how to Twitter.

On the subject of inheritance: I also believe Facebook had to come before Twitter. Facebook is the first US based social network to achieve scale that is based on real identity. Geocities, Tripod, Myspace — you have to dig back into history to bbs's to find social platforms where people used their real names, and none of those got to scale. The Twitter experience is grounded in identity — you knowing who it was who posted what. Facebook laid the groundwork for that.

What would Google do?

I love the fact that Twitter is letting its business plan emerge in a crowd sourced manner. Search is clearly a very big piece of the puzzle — but what about the incumbents? What would Google do, to quote Jarvis? Let me play out some possible moves on the chess board. As I see it, Google faces a handful of challenges to launching a now web search offering. First up — where do they launch it, Google.com or now.google.com? Given that the now web navigational experience is different from Google.com, the answer would seem to be now.google.com. Ok — so move number one — they need to launch a new search offering, let's call it now.google.com. Where does the data come from for now.google.com? The majority of the public real time data stream exists within Twitter, so any http://now.google.com/ like product will affirm Twitter's dominance in this category and the importance of the Twitter data stream. Back when this started, Summize was branded "Conversational Search", not Twitter Search. Yet we did some analysis early on and concluded that the key stream of real time data was within Twitter. Ten months later Twitter is still the dominant, open, now web data stream. See the Google trend data below — Twitter is lapping its competition, and even the sub category "Twitter Search" is trending way beyond the other services. (Note: I am using Google trends here because I think it provides the best proxy for inbound attention to the real time microblogging networks. It's a measure of who is looking for these services. It would be preferable to measure actual traffic, but Comscore, Hitwise, Compete, Alexa etc. all fail to account for API traffic — let alone the cross posting of data: a significant portion of traffic to some services is actually cross postings from Twitter. The data is messy here, and prone to misinterpretation, so much so that the images may seem blurry.) Also note the caveat re: open — most of the other scaled now web streams of data are closed and/or not searchable (Facebook, email etc.).

screenshot
gTrends data on twitter

Google is left with a set of conflicting choices. And there is a huge business model question. Does AdSense work well in the conversational sphere? My experience turning Fotolog into a business suggests that it would work, but not as well as it does on Google.com. The intent is different when someone posts on Twitter vs. searching on Google. Yet Twitter, as a venture backed company, has the resources to figure out exactly how to tune AdSense or any other advertising or payments platform to its stream of data. Lastly, I would say that there is a human obstacle here. As always, the creative destruction is coming from the bottom up — it's scrappy and prone to being written off as NIH. Twitter search today is crude — but so was Google.com once upon a not so long time ago. It's hard to keep this perspective, especially given the pace at which these platforms reach scale. It would be fun to play out the chess moves in detail but I will leave that to another post. I'm running out of steam here.

AOL has taken a long time to die. I thought the membership (paid subscribers) and audience would fall off faster than it has. These shifts happen really fast, but business models and organizations are slow to adapt. Maybe it's time for the Notificator to go public and let people vote with their dollars. Google has built an incredible franchise — and a business model with phenomenal scale and operating leverage. Yet once again the internet is proving that cycles turn — the platform is ripe for innovation, and just when you think you know what is going on you get blindsided by the Notificator.

Note: Gerry Campbell wrote a piece yesterday about the evolution of search and ways to thread social inference into search. Very much worth a read — the chart below, from Gerry's piece, is useful as a construct to outline the opportunity.

gerry-campbell-emerging-search-landscape1

Disclosure: I am CEO of betaworks. betaworks is a Twitter shareholder. We are also a Tweetdeck shareholder. betaworks companies are listed on our web site.

Micro-giving on the Huff Po

Ran the following essay on the Huff Po over xmas. A piece by Ken Lerer and me on what we are learning from the charity: water drive and the possibilities of micro-giving.

Picture 19.png

Here is the article from the Huff Post:

Micro-Giving: A New Era in Fundraising

Thirty years ago, a young economics professor named Muhammad Yunus started a new kind of banking in Bangladesh — tiny loans to small entrepreneurs. Few thought these dreamers in a dirt-poor country would ever repay. But most did — and in 2006, Yunus won the Nobel Peace Prize.

Micro-lending has changed lives, built communities and created unlikely leaders.

Now a wave of friends and "loose ties" within the social media community is taking the micro-lending concept and applying it to charitable giving.

Call it “Micro-giving”.

Late last week Laura Fitton of Pistachio Consulting launched a new kind of fundraising drive: an effort to raise $25,000 for a nonprofit called charity: water, a cause that works to bring clean, safe water to developing countries. She chose Twitter as her platform for financial pledges. And because she was aware of the bleak economy bearing down on her friends, she didn’t want to lean on them for significant contributions. “I asked for $25,000,” she says, “which would be just $2 for each reader I have on Twitter.”

In four days, @wellwishes had raised over $5,000. The average pledge has been $8.50; the median is $2. And the beneficiary has taken notice. "I see micro-giving as the next stage of online fund raising," says Scott Harrison, founder and president of charity: water. "The idea of thousands of $2 gifts adding up to wells in Africa that impact thousands of lives is something everybody can get behind."

Though reminiscent of the Obama campaign's decentralized funding, @wellwishes is a whole new model because it incorporates convenient, tiny donations made right on Twitter — the word-of-mouth powered social network and microblogging platform. Using a payment service from a company called Tipjoy, it's both simple and social to give. Your pledge shows up on Twitter as "p $2 @wellwishes for charity: water to save lives" (this is shorthand for "pay $2 to the charity organization whose user name on Twitter is wellwishes"). And that message goes — instantly — to all of the people who follow you on Twitter.
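As a side note, the "p $2 @wellwishes" shorthand is structured enough to parse mechanically. Here is a hypothetical sketch — not Tipjoy's actual code, just an illustration of the format described above — that pulls the amount, recipient and memo out of such a tweet:

```python
import re

# Payment tweets look like: "p $2 @wellwishes for charity: water ..."
# (amount, then recipient handle, then an optional free-text memo).
PAY_RE = re.compile(r"^p \$(\d+(?:\.\d{2})?) @(\w+)(?:\s+(.*))?$")

def parse_payment(tweet: str):
    """Return (amount, recipient, memo), or None for a non-payment tweet."""
    m = PAY_RE.match(tweet.strip())
    if not m:
        return None
    amount, recipient, memo = m.groups()
    return float(amount), recipient, memo or ""

print(parse_payment("p $2 @wellwishes for charity: water to save lives"))
# → (2.0, 'wellwishes', 'for charity: water to save lives')
```

The point of the convention is exactly what the essay notes: the payment instruction and the broadcast are the same message, so every donation is also an advertisement to the donor's followers.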

Laura Fitton (her Twitter user name is Pistachio) kicked off the campaign with an announcement of the experiment:

p $2 @wellwishes just to practice my hand at using micropayments on @tipjoy

In a later Tweet, she made her appeal:

I want something TOTALLY insane for Christmas: 12,500 people each to donate $2 for clean water @wellwishes.

And many did. Okay, these are pledges, not donations. But just as poor people pay their micro-loans, so micro-donors make good on their pledges — so far, an astonishing 86% have come through.

And then there’s the fact that the request gets personalized as people pass it on. Some add just a phrase: “very cool”. Others say the same thing, but with more characters: “small bits via Twitter + big audience = good xmas”.

The message is as important as the medium — using Twitter/Tipjoy, everyone who participates is both a donor and a broadcaster.

That suggests we’re entering a new era in fundraising and perhaps other social/political causes. What’s new? Virtual tribes — networks of caring people with more commitment than cash.

And that’s what excites us about micro-giving: It takes so little. You might not have much to spare, but you’ve got a penny jar — and we all know that if you reach in and remove a handful of change, you’ll feel no pain. What’s great about the new, frictionless online giving we’re testing here is that, if you’ve got a good cause, you no longer need to spend a fortune on real-world marketing. Online, with word of mouth and simple technology, pennies can become serious money.

Muhammad Yunus says that we can create a poverty-free world “if we collectively believe in it.” That’s a lot of belief. It will be easier to create that world if good causes have adequate funding — and if they can get that funding a few pennies at a time.

That, it seems to us, is a “very cool” idea. So give it a whirl. Give here and support charity: water, and be among the first to try what we hope is a new way to give online — micro-giving. For which you get large thanks.

disclosure note: betaworks is an investor in Twitter and Tipjoy. Tipjoy waived all fees for this effort, and, with betaworks, is making a matching gift.

We are making solid progress towards the goal. You can see a running total here.

An experiment in Microfunding and new forms of giving

Late last week we kicked off a drive to raise $25,000 for http://www.charitywater.org/ — a non-profit that brings clean and safe drinking water to people in developing nations. We launched this over Twitter — in partnership with Pistachio and Tipjoy.

In the first 24 hrs we raised $944 from 144 people. As of today — Saturday — we have pledges of $1400 from 213 people, a total of about $2600. This is amazing, the money is going to have a very real impact on people’s lives. Unclean water is the cause of about 80% of disease. 43,000 people died last week from bad drinking water. $2600 in 48 hours is an amazing start, all raised over the Twitter platform. Of the $2600 about half of it was raised via Tipjoy. Here is a live update of the pledges to Charity: Water (@Wellwishes) via tipjoy, and the payment (vs. pledge) rate.


You can add a $2 gift right here:

9c3fde421a95e575466ed510ea93cb3c.png

In terms of the approach, it feels like we are scratching at something radically new here. It intersects with a set of trends I am fascinated by: dynamic community formation and participation, the now web or real time cloud, and micro-lending — or in this case micro-giving. Laura Fitton (@Pistachio) has written about this before, as have others — it's giving me a lot to think about as we head into the Christmas season and the snow falls here. A payment rate of 83% is astoundingly high.
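The payment rate is simple arithmetic — the fraction of pledged dollars that actually get paid. A quick sketch, with illustrative numbers only (the drive's real running totals are linked above):

```python
def payment_rate(paid: float, pledged: float) -> float:
    """Fraction of pledged dollars that have actually been paid."""
    if pledged <= 0:
        raise ValueError("no pledges yet")
    return paid / pledged

# Illustrative numbers only, chosen to land near the rate cited above:
print(f"{payment_rate(paid=2158.0, pledged=2600.0):.0%}")  # → 83%
```

For comparison, conventional pledge drives count on far higher attrition, which is what makes a rate in this range notable for donations collected over a social network.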

We also put together a little video of the launch of this effort. Laura is testing Chartbeat, an unreleased product from betaworks — it can track the traffic surge from Twitter to Laura's blog post. If anyone wonders about the effects of Twitter, this little video says a lot. Watch what happens 20 seconds in.

Laura had a technical reaction to the video:

holy AWESOMENESS.

chartbeat is going to be INSANELY valuable. that is SO cool.

Relative search

It's been a while since I have posted on my blog. There has been a lot of work to do and I'm in the process of consolidating this blog with my tumblog. More to come, but for now a quick post on Google SearchWiki.

I have been using it for the past few weeks and I finally figured out why it annoys me way more than the average beta product — I have philosophical issues with this product; it's inconsistent with what I want from a Google search.

I have become accustomed to Google representing a canonical point of view. Now all of a sudden Google has become a swamp of relativity. I have almost a physical response to seeing the arrows up and down on the results. It's fascinating that they would try this — after years of debate on the merits of personalized search — I'm surprised, and watching closely, but now from afar. I wish http://www.customizegoogle.com/ would add a switch to turn this off.

F17C6C3C-4257-46E0-A51E-E452F354720F.jpg

fresh bit.ly

Released a set of new features on bit.ly; these are the five things that I am using most.

1. Posting bit.ly URLs directly to Twitter from bit.ly. After shortening a URL in bit.ly, the new interface lets you add text and post to Twitter from the bit.ly page.

bitly1.png

2. The new bookmarklet grabs highlighted text from a web page, shortens the URL, then places these into a twitter-able message (the panel bookmarklet still works as it used to).

bitly3.png

3. I'm using the information pages a lot. They now offer traffic, conversation, and metadata information on each shortened URL. Click "Info" on any shortened URL and you see extensive data on referring sources, conversations on popular social networking services (Twitter, Delicious, FriendFeed), and recognized metadata contained in the web page.

bitly2.png

3b. Once you sign into bit.ly you can generate your own bit.ly URL and compare it to other people who have shared the same page, to track relative traffic data. I like this a lot — useful.

bitly6.png

4. New bit.ly accounts let you retrieve your complete history, while remembering your Twitter credentials for faster posting. You can also store multiple Twitter accounts with bit.ly and select which one to use at post time.

bitly5.png

What changed last week …

man with head in the ground.jpg

Excerpts from a note to the betaworks companies sent last week, with some other charts and some color added. This was already posted by SAI and Fred — thank you both.

“By now you have all read the stories from the media and venture capital world about how larger macroeconomic issues will affect the startup world.

Last week was one shitty week. That said, a downturn has been evident for a while here in NY — and most of you have already started to make adjustments to your plan; we first talked about a downturn at the brown bag in February. But two things did change this past week: (a) the credit crisis and the crisis of confidence in our markets got way worse, and (b) Silicon Valley woke up to the fact that something had changed. The first point is the one to focus on — the second is a distraction. So on to the first point — what changed this week? The confluence of events — from housing, to energy prices, to credit, to equity markets, to global coupling, to domestic panic, matched with domestic political uncertainty — has created a toxic mix for the economy — a perfect storm. You need to think about things differently; things have changed and your priorities should change. I broke up our thinking at betaworks into two blocks: the first is what you should be thinking about in terms of your business, and the second outlines some thoughts on what this means for the market.

03283fcba3b476adde9da72d78389f36.png

#1. Priorities for you and your business:

Remember it’s a cycle, but this is going to be a longer one than we expected (that perfect storm issue)
Andy, David and I, and many of you, have lived through a few business cycles. Things look ugly, but with distress come opportunities. Scarcity drives innovation — always has, always will. Do more with less. DO MORE with less — a trite one liner that you need to make part of your company's DNA. There will be more emphasis on user value, and more ways to make money from that value — we will finally fess up to the fact that many of the ad models of web 2.0 don't yield results, and we will invent ones that do. All around there will be more innovation. It's counterintuitive, but during an up cycle people accept conventional wisdom; during a down cycle people challenge it. That's good. Very good. And the cycle will winnow competition. Over the last cycle the people who were standing at the end came out on top — it sounds like a low bar but it's not. This week the shift (storm) got worse and it became global. Worse in that it's clear it's going to take a while for the broader economy to recover. And global in that for about a year financial analysts had been arguing about how linked — or coupled — the global economy was; this week they got the answer. This will affect your business, though how is not so clear. Some negatives, some positives — keep reading.

Follow the money
Many of you are running your businesses very cheaply right now and break-even is within reach. Get there. One of the headline shifts taking place is that people (partners, investors, the market) are going to shift focus from audience + revenue to just revenue. This happened in the last downturn, and a lot of entrepreneurs didn't adapt to the shift till it was too late. Investors have likely encouraged you to focus on audience; you now need to focus on revenue. Cash is king; cash gives you flexibility and options — once you get to break-even the whole world will look different. Making money, like everything you do, takes work, time and attention. It will take longer than you expect and it happens in ways you can't plan (see the voting example below). Start working on it now. If you have just raised money or are raising, get it closed. The cost of capital is going up — again: cash. Think runway, cash and revenue.

Watch your spend, make necessary cuts now
No surprises. If you think a piece of your product needs two developers to build it, do it with one. Be excessively creative in thinking about revenues and trying those ideas. Rethink *all* your projections, looking at how reductions in cost and accelerations in revenue strategies affect the numbers. Then redo them again. You'll be stronger. Face reality as it is, not as you wish it was. Change the mix of sales and performance based employees. Think about what you can outsource — and how you can distribute your costs. There are companies and people we are working with who are doing great things with outsourced teams — it's hard and it requires a different workflow, but when it works it can change your whole business makeup. And if you need to make cuts, make them now. Don't cut 10% now and then another 10% early next year — make the change in one fell swoop. Piecemealing your way through change kills momentum; it hurts culture and the team and is a chicken shit way to run a business. You know what your plan looks like. Figure out what your runway looks like and do more with less; figure out how to extend your runway till you get to break-even.
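The runway arithmetic above is simple enough to keep in a spreadsheet or a few lines of code. A sketch, with purely illustrative numbers (not anyone's real financials):

```python
def runway_months(cash: float, monthly_burn: float,
                  monthly_revenue: float = 0.0) -> float:
    """Months until cash runs out at the current net burn."""
    net_burn = monthly_burn - monthly_revenue
    if net_burn <= 0:
        return float("inf")  # at or past break-even: runway is unlimited
    return cash / net_burn

# Illustrative numbers only:
print(runway_months(cash=500_000, monthly_burn=60_000, monthly_revenue=10_000))
# → 10.0
```

The point of the formula is the one made above: cutting burn and growing revenue both extend the denominator's effect on runway, and at break-even the number stops mattering entirely.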

Know your data
You have heard me rant about this before, but you have to know what's going on — know your data really, really well. Financial data, your burn, your cash flow, revenues, runway and site usage data. You can't "follow the money" if you don't know where things stand. There are a lot of things you can do to improve everything from burn to traffic. But first you need to know where you stand, so that every week you have a picture of your position — make this a habit. I used to hate to do this, but once you make it a habit it becomes a tool. During an up cycle you can follow instinct, and usually your raw instinct is what you should follow; during a down cycle your instinct can lead you far astray (see the Zillow example below). You need to know your data. At betaworks we are going to offer some SEO / SEM / analytics help to our network. All part of knowing where things stand and optimizing from there.

Compete
Take the offensive. Many of your competitors are not as well positioned as you — this is an opportunity to take share. These points are in order of priority — once you know where you stand, where you are making money, and what your burn is, think aggressively about growth and market share.

This is my favorite slide from the Sequoia deck (link to the deck is below):

4ff630ec03516035119014d4a5a150c5.png
3D2C95CA-8BB7-4BF6-AE22-94C474C76F51.jpg

East Coast, West Coast drama
The fact that the west coast seemingly woke up to this shift — and spilled a lot of ink — this week is something that needs to be considered in a balanced way. "High time" was my first reaction to the Valley waking up — heck, back in March or thereabouts there was a small run on a CA bank; I thought that would be a wake up call to West Coast VC that things had changed, but seemingly it wasn't. My second reaction was: let's skip the fear and panic. Media will do what it always does; there is a lot of drama that will be injected into the conversation. Fear and loathing, RIP, Armageddon, War and Peace … all good movies / books, but the media will blow this out of proportion — that's what they do (I love this quote from the Sequoia meeting: "It's always darkest before it's pitch black" — really?!). Also note that many of the people writing went through the last contraction — it was painful for them personally and they are finally getting to talk about that pain. Leave that to shrinks. Focus on the fundamentals and how to adapt to change, and you will get to the other side of this stronger and better. Alan Patricof outlined more of an East Coast view yesterday: http://bit.ly/26hbqA . This cycle of technology and software innovation isn't stopping. Markets winnow out losers from winners. Entrepreneurship isn't easy — if you thought this was about a quick flip, it's time to go home now. You are building companies — you know what you signed up for. This isn't like the last cycle where companies had been spending like drunks; the last party we had at betaworks was the brownbag — it kinda said it all, bring your own lunch. Winners will emerge from this cycle — smart leaders will adapt, others will die. This is what we all signed up for when we decided to be entrepreneurs. Fear has become a key currency in our culture — don't trade in it, it's a distraction — use it to change if you need to, but then put it aside. Sorry about the Doll$ar image up above — it just cracks me up.
Read this if you need a kick in the pants: http://bit.ly/dcl0T

#2. Big, broad changes.
With the economy heading into the worst setback most of you — most of us — have ever seen, think big, broad changes. It's been a long week, but let me try to anticipate a few. It's useful to think about how things will change now.

Momentum and change.
Some of our business is based on momentum. That's taken a turn for the worse. You have to adjust fast — that's your job. I love this example that Gurley uses — where Zillow surveyed the average decline of housing prices across the US (20-30%) and *then* asked the surveyed people how much they had personally lost in value — people said 0%. As humans we accept change as something that someone else needs to adapt to. Think about your business: you have to change, not someone else. See: http://bit.ly/1wE8K2. And remember a Welch maxim or two: control your own destiny or someone else will, and change before you have to.

Cost of capital
The cost of capital has gone way up (again: face reality as it is, not as you wish it were). Don't panic — just make sure you realize the rules have changed.

Advertising
There will be a flight to quality; this always happens (history: http://bit.ly/ppHcO). But this time I think it's going to be more than that. For TV and print this has been an unusual year — the shift to online has been stemmed first by the Olympics and second by the election. That said, year over year % growth in ad spend has been down across the board (see slide 32 of the Sequoia deck, linked below). Expect the next year to be ugly and different. I think spend will move online very fast — print may slide downhill, right downhill. And people will look for ROI — real, measurable results. Monetizing social media is hard — two of our companies, Lotame and Lookery, are focused on just that. Much to do here, much money/share to make/take.

Beyond advertising
Much of web 2.0 was about advertising to the tail — the wonders of Google and AdSense. The truth that most people haven't spoken up much about is that (a) neither Google nor Yahoo did a great job of monetizing much outside of search, and (b) the Google business is still mostly in the head of the curve, not the tail. The scale focus on auction based ad buying has distracted us from other business models. This week I had a bite with the CEO of Hi Media (who bought Fotolog) — they are an ad network, but they are now making a lot of money on payments. And a significant chunk is via "microfame" payments — Fotolog users voting each other up in popularity based boards; in September some users spent more than $2k each voting on these boards (http://flog.fotolog.com/rank). Many of our companies are experimenting with payment models — Tipjoy, Ideeli, IILWY, Covestor, SomeEcards … there is money to be made here — from payments to item sales to t-shirts. Businesses to be built.

The elephants will dance
Pieces are going to move on the chess board — big pieces. Anticipate, watch and plan. This shouldn't be your focus, but things are going to change around your business, and they might affect you. IMO, Yahoo is going to be sold or bought, AOL sold. Ebay will either be sold or bought or broken up. Facebook is going to have to change (cut spending, focus on revenue) or it will be bought; same for LinkedIn. Microsoft, News Corp., TWX and other media companies will be buyers. What does Google do in this cycle — freeze, or act boldly? The newspapers — do they act out of fear or freeze up? Telcos, cell cos, cable cos — the pipes — do they jump upstream? Why care? Well, as these pieces move around the chess board they may well affect your future, so watch carefully. If Paypal — which by some estimates is now 50% of the value of Ebay — gets spun out of Ebay, then they will accelerate services beyond advertising. Etc., etc. Remember September 11th, 2001 — how, the day after, it felt like just another day. It wasn't; the world had changed, and that awful day was just the demarcator that history used. Again, everything has changed, and now is simply the line that history will use. So consider the moves the elephants make; the equation for them, public or private, has changed.

Openness
I think this cycle is going to drive another significant shift in how open and interconnected the web is. This is good news for you; it is bad news for the Facebooks of the world who tried to replicate the walled garden strategy of web 1.0. Think about what happened through the last cycle … start with AWS. In the 1990's internet companies had to own everything top to tail — today you can use Amazon and other services to pop up a new box for hundreds of dollars, if that. That's a huge shift — it's also a shift towards interdependency. We are all now dependent on the Amazons of the world for parts of our infrastructure. I think this turn of the cycle (or screw) is going to drive a lot more openness. This in turn ties to the market figuring out how to rapidly establish bottom-up standards — this is about working with others and figuring out how to do things without having to do all the work.

some light reading for your weekend:

http://bit.ly/22BJrj
Silicon Valley Finds It Isn’t Immune From Credit Crisis – WSJ.com

http://bit.ly/3BbqV3
Sequoia Capital deck startups and the economic downturn

http://bit.ly/1fVGrx
Inside Details of Sequoia Capital’s Doomsday Meeting With its Companies – G…

http://bit.ly/26hbqA
VC dean Alan Patricof warns against panic, urges entrepreneurs to seize the…

http://bit.ly/dcl0T
Master of 500 Hats: Fear is the Mind Killer of the Silicon Valley Entrepren…

http://bit.ly/4DsSUw
Angel Investor Ron Conway Emails His Portfolio Companies Over Financial Mel…

http://bit.ly/1wE8K2
Benchmark Capital Advises Startups To Conserve Capital, Look For Opportunit…

http://bit.ly/ppHcO
How Bad Will The Ad Market Get? Time To Get Out The History Books

http://bit.ly/FvwGK
News Corp. Estimates Cut in advertising

Some notes I was sent from the Sequoia Capital meeting:

Today, Sequoia Capital hosted a mandatory CEO All-Hands Meeting on Sand Hill Road. There were about 100 CEOs in attendance and let me tell you, the mood was somber. I'm not one to perpetuate doom and gloom or bad news, but let me underscore this for you: We are in a serious economic downturn and this is just the beginning. Immediate, decisive and swift action is required, along with frugal, day-to-day management of expenses and our business.

***Here are my notes from the meeting. Keep this note in your in-box and read it every day. I'm serious folks, this is for our survival.***

Speakers:

· Mike Moritz, General Partner, Sequoia Capital (he moderated the speakers).

· Eric Upin, Partner, Sequoia Capital (Eric ran the $26-billion Stanford Endowment Fund and knows a few things about economics and investing.)

· Michael Beckwith, Partner, Sequoia Capital (Michael was recruited to start Sequoia's very first hedge fund, coming from Maverick Capital and Robertson Stephens.)

· Doug Leone, General Partner, Sequoia Capital

Slide projected on the huge conference room screen as people assembled
inside the conference center to take their seats: a gravestone with the
inscription: RIP, Good Times.

Mike Moritz:

· The only time Sequoia's assembled all CEOs like this was during the dot-com crash.

· We are in drastic times. Drastic times mean drastic measures must be taken to survive. Forget about getting ahead, we're talking survive. Get this point into your heads.

· For those of you that are not cash-flow positive, get there now. Raising capital is nearly impossible if you're too far off of cash flow positive.

· There will be consequences for those who hesitate. Act now.

Eric Upin:

· It's always darkest before it's pitch black.

· Survival of this storm means drastic measures must be taken now, so you will have the opportunity to capitalize on this downturn in the future.

· We are in the beginning of a long cycle, what we call a "Secular Bear Market." This could be a 15 year problem. [many slides on historical charts of previous recessions, averaging 17 year cycles.]

· The credit market [versus the equity markets] is the issue and will take time to recover.

· Inflection point: Make changes, slash expenses, cut deep and keep marching. You can't be a general if you turn back.

· This is a global issue and not a 'normal' time.

· There is significant risk to growth and your personal wealth.

· Advice:

o Manage what you can control. You can't control the economy, but you can control everything else.

§ Cut spending. Cut fat. Preserve capital.

§ Don't trust your models and spreadsheets. All assumptions prior to today are wrong.

§ Focus on quality.

§ Reduce risk.

Michael Beckwith:

· Note: Michael had a lot of slides that were charts, data points
and comparisons.

· A "V" shaped recovery is unlikely.

· Cuts in spending will accelerate in Q4/Q1. Look at eBay: just the beginning.

Doug Leone:

· This is a different animal and will take years to recover.

· Getting another round if you're not profitable will be rough.

· Do everything possible to get to cash flow positive. Now.

· Nail your Sales and Marketing message.

· Pound your competitors' shortcomings. They're hurting and they will be quiet.

· In a downturn, an aggressive PR and communications strategy is key.

· M&A will decrease dramatically and only lean companies, with proven
sales models will be acquired.

· Spectrum discussion:

o Capital Preservation ←———————————→ Grab Market

o Everyone should be far to the left (capital preservation)

· Requirements of our companies:

o You must have a proven product

o You must cut expenses. Now and deep.

o Your product should reduce expenses and drive revenue [NOTE: I want to
revisit this with the Management team. Our solution does both; we need to
quickly and crisply define the sound bite here.]

o Honestly assess your solution vs. your competitors.

o Cash is king [have you gotten this message yet?]

o You must get to profitability as soon as possible to weather this storm
and be self-sustaining.

· Operations review:

o Engineering: Since you already have a product, strongly consider
reducing the number of engineers that you have.

o Product: What features are absolutely essential? Choose carefully and
focus.

o Marketing: Measure everything and cut what is not working. You don’t
need large Product Marketing or Product Management teams.

o Sales & Business Development: What is your return on this investment?
The Valley has gotten fat with Sales people: big bases, big variables. Cut
base salaries on sales people, highly leverage them with upside (increase
variable) and make people pay for themselves via increased sales
productivity. Don’t add sales people until you’ve achieved your goals with
sales productivity. Be disciplined.

o Pipeline: Scrub the shit out of it and be honest with yourself.

o Finance: Defer payments, what is essential? Kill cash burn.

· Death Spiral (Nobody moves fast enough in times like these, so get
going and research later.)

o The death spiral sucks you in; you’re in it before you know it and then
you die.

o Survival of the quickest.

o Cutting deeper is the formula for survival.

o You should have at least one year’s worth of cash on hand.

o Tactics:

§ Assess your situation. Drop your assumptions, start with a blank page
and start zero-based budgeting.

§ Adapt quickly

§ Make your cuts

§ Review all salaries

§ Change sales comp

§ Bolster your balance sheet
and save it.

§ Spend like it’s your last dollar.

· Get Real or Go Home.

—— End of Forwarded Message

Keep it Chunky, Sticky in 1996

Fred Wilson’s keynote this week at the Web 2.0 conference will be interesting. He is doing a review of the history of the internet business in New York; the slides are posted here. History is something we don’t do a lot of in our business — we tend to run forward so fast that we barely look back. I shared some pictures with Fred and I am posting a few more things here. I also found a random missive I scribed, I think in 1996; it’s pasted below. I was running what we called a web studio back then — we produced a group of web sites, including äda ’web, Total New York and Spanker.


truism1.gif

äda ’web’s first project, created in the fall of 1994 — Jenny Holzer’s Please Change Beliefs. This project is still up and available at adaweb. The project was a collaboration between Jenny, ada and John F. Simon, Jr. I learnt so much from that one piece of work. I am not putting up more ada pieces since, unlike the other sites, it is still up and running thanks to the Walker Art Center.

Total NY sends Greg Elin across the country for the Silicon Alley to Silicon Valley tour. Greg and this project taught me the fundamentals of what would become blogging.

Greg_Elin_SA2SV.gif

Man meets bike meets cam … Greg Elin prepares for Silicon Alley to Silicon Valley. Don’t miss the Connectix “eye” camera on the handlebar!?!

1995, Total NY’s Cosmic Cavern, my first foray into 2d+ virtual worlds, a collaboration with Kenny Scharf. This was a weird and interesting project. We created a virtual world with Scharf based on the Cosmic Cavern the artist had created at the Tunnel nightclub. Then within the actual Cosmic Cavern we placed PCs for people to interact with the virtual cavern. Trying to explain it was like a Borges novel. Here is a picture of Scharf in the “real” cavern; feels like the 90’s were a long time ago.

kenny_scharf.jpg

Some other random pictures I found from that era:

Pics_from_mexico.jpg

borthwick_stallman.jpg

yahoo_1995-tm.jpg

Keep it Chunky, Sticky and Open:

As the director of a studio dedicated to creating online content, a question I spend a lot of time thinking about is: what are the salient properties of this medium? Online isn’t print, it isn’t television, it isn’t radio, nor telephony — and yet we consistently apply properties of all these mediums to online with varied results. But digging deeper, what are the unique properties of online that make the experience interesting and distinct? Well, there are three that we have worked with here at the Studio, and we like to call them: chunky, sticky and open.

Chunky
What is chunky content? It is bite-sized, it is discrete and modular, it is quick to understand because it has borders. Suck is chunky, CNET and Spanker (one of our productions) are chunky. Arrive at these sites and within seconds you understand what is going on — the content is simple, it’s bite-sized. Chunkiness is especially relevant in large database-driven sites. Yesterday, my girlfriend and I were looking for hardware on the ZD Net sites (PC Magazine, Net Buyer etc.). She had found a hardware review a day earlier and wanted to show it to me. She typed in the URL for PC Magazine but the whole site had changed. When she looked at the page she had no anchors, she had no bearings to find the review that was featured a day earlier. The experience would have been far less frustrating if the site had been designed with persistent, recursive chunks. Chunky media offers you a defined pool of content, not a boundless sea. It has clear borders and the parameters are persistent. Bounded content is important; I want to know the borders of the media experience, where it begins and where it ends. What is more, given the distributed, packet-based nature of this medium, both its form and function evoke modularity. Discrete servings of data. Chunks.

Sticky
Some, but not all, content should stick. Stickiness is about creating an immersive experience. It’s content that dives deep into associations and relationships. The opposite of sticky is slippery; take basic online chat rooms: most of them aren’t sticky. You move from one room to another, chatting about this and that; switching costs are low; they are slippery. Contrast this to MUDs and MOOs, which are very sticky: in MUDs the learning curve is steep (view this as a rite of entry into the community), and context is high (they give a very real sense of place). What you get out of these environments is proportional to your participation and involvement, and relationships between characters are deep and associative. When content sticks, time slows down and the experience becomes immersive — you look up and what you thought was ten minutes was actually half an hour. Stickiness is evoked through association, participation, and involvement. Personalized information gets sticky, as does most content that demands participation. Peer-to-peer communication is sticky. Community and games are sticky. People (especially when they are not filtered) are sticky. My home page is both chunky and sticky.

Open
I want to find space for me in this medium. Content that is open, or unfinished, permits association and participation (see Eno’s article in Wired 3.05, where he talks about unfinished media). There is space for me. I often describe building content in this medium as drawing a 260-degree circle. The arc is sufficient to describe the circle (e.g., provide the context) but is open enough to let the member fill in the remainder. We laugh and cry at movies, we associate with characters in books, they move us. We develop and frame our identity with them and through them — to varying degrees they are all open. Cartoons, comedy, and most forms of humor, theatre, especially improvisational theatre, are all open. A joke isn’t really finished till someone laughs; this is the closing of the circle — they got it. Abstraction, generalities and stereotypes, all these forms are open, they leave room for association, room for me and for you.

So, chunky, sticky and open. Try them out and tell me what you think (john@dci-studio.com). Let’s keep this open: in the first paragraph I said I wanted to discuss the characteristics that make a piece of online content interesting — I did not use the words great or compelling. I don’t think that anything online that has been created to date is great. These are still early days and we still have a lot to learn and a lot to unlearn. No one has produced the Great Train Robbery of online — yet. But when they do, I would bet that pieces of it will be chunky, sticky and open.

Ok enough reminiscing, closing with Jenny Holzer.

firef.ly goes public beta

We are pushing firef.ly into a public beta today. Exciting stuff for us here at betaworks. Firef.ly is a lightweight messaging layer that sits on top of a site — permitting a real-time perspective on who is where on your site, plus basic chat. It’s intentionally lightweight — no sign-in, no install for users — one line of JavaScript for the web site publisher (available here: http://firef.ly/install). You can use firefly on this page — just slide the slider to the left and have fun.

A couple of thoughts here. First, this is another layer application, something I have posted about before; second, this is for me a return to the days when you could just chat on any page — without the encumbrances of today: captchas, sign-in, etc. It’s a layer of the now web that we are experimenting with. Yes, yes, I know it might get some spam — but web site owners have the ability to ban spammers, and our hope is that the lightweight, spontaneous nature of firef.ly may open up some new conversations. As it did a while back when we first trialed it on a Scripting post. Last point — try the Twitter feature — it sends out a message to your followers that you are on a particular page; it’s pretty powerful. Have fun.

Summize acquired by Twitter

As announced this morning, Twitter is acquiring 100% of Summize. Deals between two private companies are easy to consider and hard to close. In this case we had both companies on a tear and teams on both sides interested in a partnership — the hope here is that what makes sense today only makes more sense down the road. Search on Twitter will evolve into more than search — this is starting to happen today (more below), but bringing these teams together will only accelerate the pace of that evolution. The deal started with a conversation with Fred Wilson about how conversational search can evolve into navigation, about how important navigation becomes for UGC as you go mainstream — it concluded with the deal that was announced this morning. Betaworks is now a Twitter shareholder, and excited to be one.

Finding a pain point
The history of most startups is made up of iterations, learning and restarts — Summize was no exception. The Summize team worked hard for a little over a year developing sentiment-based algorithms aimed at crawling reviews and the blogosphere. Late last year they formally launched a web product that let you search reviews for books, movies and music. It worked well — offering summaries of all the reviews for a particular book, structured programmatically so they could be organized and swiftly digested by users or publishers. Yet it was complicated — not in theory or in its presentation, but in practice it was a complicated problem that most end users didn’t know they needed solved. As an old friend would put it, Summize v1 didn’t address a discernible need or pain point.

I remember early this year we took the Summize team over to meet with an executive at News Corp. After the WSJ/Dow Jones acquisition, News Corp. was thinking about data-centric media and how conversational media — the blogosphere — could be mapped and structured in a scalable manner. Jeremy was fascinated by the technology but pushed us hard on whether we knew that people were really looking for programmatic, structured access to sentiment. By March it was clear we couldn’t get the sentiment-focused company funded by VCs — many people were interested but no one was ready to take the risk. I think this is part of the chasm between east and west coast companies — out west, interesting technology can be and often is funded purely on the merits of the technology; out east, not so. At betaworks we decided to work with the Summize team to repoint the technology — and launch Twitter search. Why Twitter? Three reasons: there was a gap in the market for a scaled search/navigation experience of Twitter; Summize technology was very capable of providing and scaling a great search experience across Twitter’s live river of conversations; and finally Twitter, the base data set, was growing like a weed.

Growth
It’s astounding how fast the Summize service took off. The growth is charted in this post. The premise was that there is real-time data distributed across services online that is hard to digest, and that search is a well-known metaphor for aggregating these conversations into something meaningful for people. Twitter was the logical starting point — traffic was exploding and Twitter was quickly becoming a real-time, one-to-many communications platform. Search is so often viewed as a destination experience — get this result and move on. Summize search is different — because it’s conversational and real-time, you keep searches running and open in tabs, you repeat them time and time again to watch the conversation evolve and change — watch that refresh bar on any of the topics linked to above. The approach worked. Traffic exploded, not only on the UI but also on the API. Distributed, live search — very, very different from how search has been done to date on the web.
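The "keep the search running" pattern described above is essentially incremental polling: re-issue the same query, but only ask for results newer than the last one you saw. Here is a minimal sketch of that idea in Python, against a simulated result stream — the data and the `search` function are illustrative assumptions, not the actual Summize API:

```python
# Sketch of incremental real-time search polling. Each result has a
# monotonically increasing id; a poll only returns results newer than
# the last id the client has already seen.

def search(stream, query, since_id=0):
    """Return matching results with id greater than since_id, oldest first."""
    return [r for r in stream if query in r["text"] and r["id"] > since_id]

# Simulated river of conversation, ordered by id.
stream = [
    {"id": 1, "text": "gmail imap is down for me"},
    {"id": 2, "text": "lunch was great"},
    {"id": 3, "text": "anyone else seeing imap errors?"},
]

last_seen = 0
first_poll = search(stream, "imap", since_id=last_seen)  # initial results
last_seen = first_poll[-1]["id"]                         # remember the high-water mark

# New conversation arrives between polls...
stream.append({"id": 4, "text": "imap is back up"})

second_poll = search(stream, "imap", since_id=last_seen)  # only what is new
print([r["id"] for r in first_poll], [r["id"] for r in second_poll])
```

A destination search would re-fetch everything on each visit; tracking the high-water mark is what makes the tab you leave open feel like a live conversation rather than a static result page.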

Now web
There is something new going on here. Somewhere in the past few months the way that I experience the Internet — specifically live information — changed: there is a “now web” emerging out of an ecosystem of loosely coupled products. There has always been an immediate, instant component to the web and web communications — it goes back to mailing lists, IM, email & blog commenting. But it’s taking on a whole new form — the density of the conversations and the speed at which they emerge and evolve is different. I first sensed the shift with the trending topic list on the front page of Summize. This is a feature that the team created right out of the sentiment-based technology of Summize v1. The first night we launched v2, I recall seeing the word IMAP trending — my first thought was that this had to be a mistake, but when I ran the search it turned out that Gmail was having IMAP issues. Then a few weeks later, during a telephone call, one participant, Jay, heard an explosion outside his home. He jumped off the call to see what was happening and came back five minutes later, shaken up but with no idea what the noise was. This post shows the Summize stream of responses to a simple question — there had been a minor earthquake in VA. A few weeks later the earthquake in China also emerged out of the Twitter stream before it hit the MSM.
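The post never describes how Summize's trending list actually worked, but the general shape of such a feature is simple to sketch: compare each term's frequency in a recent window against its longer-running baseline, and surface the terms that spike. The following Python sketch is a hypothetical illustration of that general technique, not the Summize algorithm:

```python
from collections import Counter

def trending(recent_msgs, baseline_msgs, min_count=2):
    """Rank terms whose frequency in a recent window spikes relative
    to a longer-running baseline of messages."""
    recent = Counter(w for m in recent_msgs for w in m.lower().split())
    base = Counter(w for m in baseline_msgs for w in m.lower().split())
    scores = {}
    for term, count in recent.items():
        if count < min_count:
            continue  # too rare in the window to call a trend
        # +1 smoothing so brand-new terms don't divide by zero
        scores[term] = count / (base[term] + 1)
    return sorted(scores, key=scores.get, reverse=True)

baseline = ["checking my gmail", "off to lunch", "gmail is fine today"]
recent = ["gmail imap down?", "imap broken for anyone else?",
          "no imap here either", "off to lunch"]

print(trending(recent, baseline))  # "imap" spikes; everyday chatter does not
```

Even this toy version reproduces the IMAP anecdote: a word that is ordinarily rare suddenly dominating the window is exactly the signal that surfaced Gmail's outage.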

We experienced this again last week — in full force — when we launched the bit.ly product, a deceptively simple URL shortener that we developed with Dave Winer. Six days after its launch bit.ly is on a tear. The launch last week started with a fantastic write-up by Marshall Kirkpatrick — it moved from there into Twitter and Summize, and within minutes we were getting live feedback on the product: how to tune and test it, complaints about the lack of a privacy policy, and a ton of great ideas. I am learning as I go — but it’s a whole new world out there, and thanks to Summize we can converse in a far more direct and organized manner. This should be evident again today — run a search for this or this and watch it evolve.

In summary
Summize is a great example of what we aspire to do at betaworks: working with a great team of technologists who created a wonderful product, one that on the surface is deceptively simple — where the smarts are all under the hood — and one that we helped launch and scale. Many thanks to the Summize team. Jay, Abdur, Greg, Eric and team worked very, very hard to make this happen — they peered into the startup abyss and decided they weren’t going there — you guys are smart and brave. Thank you to the advisors who worked with Summize to make this happen — Gerry Campbell and Josh Auerbach. And thanks to the Twitter team. I have great hopes for the joint team.

Also see Summize post by Jay

Summize growth

Summize organic traffic growth, week over week. It’s astounding to see the Summize business grow from 0 to 14M queries a week over the space of two months (note: I updated the chart with the past week) — traffic over the past two weeks has made the insanity of WWDC hard to see on the chart.

A testament to what a great product and UI can achieve in no time at all. This past week, with the launch of bit.ly, I spent much of my time on Twitter, Summize, FriendFeed and a handful of other services. Google is playing next to no part in the now-web that is emerging out of this ecosystem. Rafer also pointed me to this chart on Compete. More on search and navigation to come; for now some pictures — Summize traffic and a wonderful fireworks display from this evening in Shelter Island.

bit.ly a simple, professional URL shortener

We launched bit.ly yesterday and got an intense amount of buzz and attention. We thought this was an important piece of the puzzle but didn’t fully appreciate the vacuum that we were running into. A crazy day — Summize offers a great interface into the groundswell of activity — Nate, Jay and the team iterated and updated the service throughout the day (you can see the updates here).

On the switchAbit/bitly/twitabit blog we did the official launch post. To save you the jump, here is the summary of what we offer and why it’s different:

1. History — we remember the last 15 shortened URLs you’ve created. They’re displayed on the home page next time you go back. Cookie-based; sign-in will come, but the first rule of the game was to keep it simple.
2. Click/Referrer tracking — Every time someone clicks on a short URL we add 1 to the count of clicks for that page and for the referring page.

3. There’s a simple API for creating short URLs from your web apps.

4. We automatically create three thumbnail images for each page you link through bit.ly, small, medium and large size. You can use these in presenting choices to your users.

5. We automatically mirror each page — you never know when you might need a backup. :-)

6. Most important for professional applications, you can access all the data about each page through a simple XML or JSON interface. Example.

7. All the standard features you expect from serious URL shorteners.
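As a rough illustration of the machinery behind features 2 and 6 above — per-URL click and referrer counts, exposed as structured data — here is a toy shortener in Python. It is a sketch of the general technique only; bit.ly’s actual implementation and API formats are not spelled out in this post, and every name below is invented for the example:

```python
import json
import string

class ToyShortener:
    """Toy URL shortener: sequential base-62 codes plus click/referrer counters."""
    ALPHABET = string.digits + string.ascii_letters  # 62 characters

    def __init__(self):
        self.urls = {}    # code -> long URL
        self.stats = {}   # code -> {"clicks": int, "referrers": {site: count}}
        self._next = 1    # sequential id behind each short code

    def shorten(self, long_url):
        """Assign the next base-62 code to a long URL."""
        n, code = self._next, ""
        while n:
            n, r = divmod(n, 62)
            code = self.ALPHABET[r] + code
        self._next += 1
        self.urls[code] = long_url
        self.stats[code] = {"clicks": 0, "referrers": {}}
        return code

    def click(self, code, referrer="direct"):
        """Resolve a code, counting the click and its referring page."""
        s = self.stats[code]
        s["clicks"] += 1
        s["referrers"][referrer] = s["referrers"].get(referrer, 0) + 1
        return self.urls[code]

    def info(self, code):
        """Per-page stats as JSON, in the spirit of feature 6."""
        return json.dumps({"url": self.urls[code], **self.stats[code]})

b = ToyShortener()
code = b.shorten("http://example.com/a-very-long-path")
b.click(code, referrer="twitter.com")
b.click(code, referrer="twitter.com")
print(b.info(code))
```

The interesting design point is the last method: because every click already passes through the shortener, exposing the counters as JSON costs almost nothing — which is exactly why a "simple" shortener can double as a data platform.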

And it’s just the beginning, we’re tracking lots more data so that as more URLs are shortened by bit.ly we’ll be able to turn on more features.   Marshall talks about some of what we are going to do on the data side in the RWW article below. 

More to come on how this fits with switchabit, twitabit, findings — the cluster of services we are building.    For now some commentary:

ReadWriteWeb

Bit.ly Is a Big Deal URL Shortener

Scripting News

Alley Insider

Summize

NilsR