
news.me

News.me launched this morning as an iPad app and as an email service. Here is some background on why and how we built News.me:

Why News.me? For a while now at bitly and betaworks, we have been thinking about and working on applications that blend socially curated streams with great immersive reading interfaces.

Specifically we have been exploring and testing ways that the bitly data stack can be used to filter and curate social streams.   The launch of the iPad last April changed everything. Finally there was a device that was both intimate and public — a device that could immerse you into a reading experience that wasn’t bound by the user experience constraints naturally embedded in 30 years of personal computing legacy.  So we built News.me.

News.me is a personalized social news reading application for the Apple iPad. It’s an app that lets you browse, discover and read articles that other people are seeing in their Twitter streams. These streams are filtered and ranked using algorithms developed by the bitly team to extract a measure of social relevance from the billions of clicks and shares in the bitly data set. This is fundamentally a different kind of social news experience. I haven’t seen or used anything quite like it before. Rather than me reading what you tweet, I read the stream that you have selected to read — your inbound stream. It’s almost as if I’m leaning over your shoulder — reading what you read, or looking at your book shelves: it allows me to understand how the people I follow construct their world.
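To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of click-weighted ranking described above: score each shared link by its clicks and shares, decay the score with age, and sort the stream. The bit-rank algorithm itself is not public, so the scoring function, field names and decay constant below are illustrative assumptions, not bitly's implementation.

```python
import math
import time

# Hypothetical sketch only: rank links from a Twitter stream by a simple
# "social relevance" score built from click and share counts plus a recency
# decay. This is NOT the bit-rank algorithm, just an illustration of the idea.

HALF_LIFE_HOURS = 6.0  # assumed constant: how quickly attention decays


def social_relevance(clicks, shares, age_hours, half_life=HALF_LIFE_HOURS):
    """Combine engagement signals with an exponential time decay."""
    engagement = math.log1p(clicks) + 2.0 * math.log1p(shares)
    decay = 0.5 ** (age_hours / half_life)
    return engagement * decay


def rank_stream(items, now=None):
    """items: dicts with 'url', 'clicks', 'shares' and 'posted_at' (unix timestamp)."""
    now = now or time.time()
    scored = []
    for item in items:
        age_hours = max(0.0, (now - item["posted_at"]) / 3600.0)
        scored.append((social_relevance(item["clicks"], item["shares"], age_hours), item))
    return [item for score, item in sorted(scored, key=lambda pair: pair[0], reverse=True)]


if __name__ == "__main__":
    demo = [
        {"url": "https://example.com/a", "clicks": 1200, "shares": 40, "posted_at": time.time() - 3600},
        {"url": "https://example.com/b", "clicks": 300, "shares": 90, "posted_at": time.time() - 7200},
    ]
    for item in rank_stream(demo):
        print(item["url"])
```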

As with many innovations, we stumbled upon this idea.  We started developing News.me last August after we acquired the prototype from The New York Times Company. For the first version we wanted to simply take your Twitter stream, filter it using a bitly-based algorithm (bit-rank) and present it as an iPad app. The goal was to make an easy to browse, beautiful reading experience.  Within weeks we had a first version working.  As we sat around the table reviewing it, we started passing our iPads around saying “let me look at your stream.” And that’s how it really started.  We stumbled into a new way of reading Twitter and consuming news — the reverse follow graph wherein I get to read not only what you share, but what you read as well.  I get to read looking over other people’s shoulders.

 

What Others Are Reading…

On News.me you can read your filtered stream and also those of people you follow on Twitter who use news.me.  When you sign into the iPad app it will give you a list of people you are already following. Additionally, we are launching with a group of recommended streams. This is a selection of people whose “reading lists” are particularly interesting.  From Maria Popova (a.k.a. brainpicker), to Nicholas Kristof and Steven Johnson, from Arianna Huffington to Clay Shirky … if you are curious to see what they are reading, if you want to see the world through their eyes, News.me is for you. Many people curate their Twitter experience to reflect their own unique set of interests.   News.me offers a window into their curated view of the world, filtered for realtime social relevance via the bit-rank algorithm.

 

Streamline Your Reading

The second thing we strove to accomplish was to make News.me into a beautiful and beautifully simple reading experience. Whether you are browsing the stream, snacking on an item (you can pinch open an item in the stream to see a bit more) or you have clicked to read a full article, News.me seeks to offer the best possible reading experience.  All content that is one click from the stream is presented within the News.me application.  You can read, browse and “save for later” all within the app. At any given moment, you can click the browser button to see a particular page on the web. News.me has a simple business model to offer this reading experience.

Today we are launching the iPad News.me application and a companion email product. The email service offers a daily, personalized digest of relevant content powered by the bit-rank algorithm, delivered to your inbox at 6 a.m. EST each morning. The app costs $0.99 per week, and we in turn pay publishers for the pages you read. The email product is free.


How was News.me developed? News.me grew out of an innovative relationship between The New York Times Company and bitly. The Times Company was the first in its industry to create a Research & Development group. As part of its mission, the group develops interesting and innovative prototypes based on trends in consumer media. Last May, Martin Nisenholtz and Michael Zimbalist reached out to me about a product in the Times Company’s R&D lab that they wanted to show us at betaworks. A few weeks later they showed us the following video, accompanied by an iPad-based prototype. The video was created in January 2010, a few months prior to the launch of the iPad, and it anticipated many of the device’s gestures and uses, in form and function. Here are some screenshots of the prototype.

On the R&D site there are more screenshots and background.   The Times Company decided it would be best to move this product into bitly and betaworks where it could grow and thrive. We purchased the prototype from the Times Company in exchange for equity in bitly and, as part of the deal, a team of developers from R&D worked at bitly to help bring the product to market.


 

With Thanks … The first thank you goes to the team. I remember the first few product discussions, the dislocation the Times Company’s team felt having been airlifted overnight from The New York Times Building to our offices in the heart of the Meatpacking District. Throughout the transition they remained focused on one thing: building a great product. Michael, Justin, Ted, Alexis — the original four — thank you. And thank you to Tracy, who jumped in midstream to join the team. And thank you to the bitly team, without whom the data, the filtering, the bits, the ranking of stories would never be possible. As the web becomes a connected data platform, bitly and its API are becoming an increasingly important part of that platform. The scale at which bitly is operating today is astounding for what is still a small company: 8bn clicks last month and counting.

I would also like to thank our new partners. We are launching today with over 600 publishers participating. Some of them you can see listed here; most are not. Thank you to all of them; we are excited about building a business with you.

Lastly, I would like to thank The New York Times Company for coming to betaworks and bitly in the first place and for having the audacity to do what most big companies don’t do. I ran a new product development group within a large company, and I would like to dispel the simplistic myth that big companies don’t innovate. There is innovation occurring at many big companies. The thing that big companies really struggle to do is to ship. How to launch a new product within the context of an existing brand, an existing economic structure and an existing organizational structure, how to avoid imposing a strategy tax on the new product, and so on. These are the challenges that usually cause the breakdown, and this is where big company innovation, in my experience, so often comes apart. The Times Company did something different here. New models are required to break this pattern; maybe News.me will help lay the foundation of a new model. I hope it does and I hope we exceed their confidence in us.

http://on.news.me/app-download

And for more information about the product see http://www.news.me/faq

getting to know the iPad


I have been running an experiment for the eleven weeks or so since the iPad launched. Each weekend I spend time going through directories hunting for apps that begin to expose native attributes of the device. My assumption is that the iPad opens up a new form of computing, and we will see apps that are created specifically for this medium. Watching these videos of a two-and-a-half-year-old and a 99-year-old using the device for the first time offers a glimpse of its potential. Ease of introduction and interaction are the key points of distinction. I haven’t seen a full-sized computing device that requires so little context or introduction.

When the iPad first came out, much of what was published fell on either end of a spectrum of opinion. On one end were the bleary-eyed evangelists who considered it game-changing; on the other, people who were uninterested or unimpressed. I think the people who found it wanting were invariably expecting to port their existing workflows to the device. They were asking to do “what I do on my PC” on the iPad. These people were frustrated and disappointed. They assumed this was another form of PC, with some modifications, and that it represented a transition similar to the move from desktop to laptop. Take this post from TechCrunch: “Why I’m Craigslisting My iPads” — three of the four reasons the author lists for dumping his iPad are about his disappointment that the iPad isn’t a replacement for his laptop or desktop. But in the comments section of the post an interesting conversation emerges: what if this device’s potential is different? Just as video has transformed the way our culture interacts with images, what if gesture-based computing has the potential to transform the way we use, create, and express ourselves?

The iPad is the first full-sized computing device with wide-scale adoption that offers:

  • Hardware and software that requires little to no context or learning
  • An input screen large enough to manipulate (touch and type) with both hands
  • A gesture-based interface that is so immersive and personal that it verges on intimate
  • Hardware with battery and heat management that, simply, doesn’t suck
  • An application metaphor that is well suited to immersive, chunky experiences. As @dbennahum says: “The ipad is the first innovation in digital media that has lengthened the basic unit of digital media”
  • A tightly coupled, well developed and highly controlled app development environment

For some people these attributes sum up to the promise that this will be the “consumption” device that re-kindles print and protects IP-based video. That may occur, but for me that isn’t the potential. The iPad is a connected computing device that extends human gestures. If you step back from the noise and hype, after almost 15 years of web experience we know a few things. Connected, networked devices have consistently generated use cases that center around communication and social participation vs. passive consumption. Connecting devices to a network isn’t just a more efficient means of distribution; it opens up new paths of participation and creation. The very term consumption maps to a world and a set of assumptions that I think is antithetical to the medium (for more on this see Jerry Michalski’s quote on the Cluetrain). I believe the combination of the interface on the iPad and the entry-level experience I outlined above is sufficiently intuitive that this device and its applications have the potential to become an extension of us and to transform computing the way the mouse did 45 years ago.


Douglas Engelbart and his mouse changed everything. Like the mouse, the multitouch interface lets you navigate the surface of the computer. But there is a key difference between this gesture-based interface and the mouse. The mouse is separate from the working surface, connected to the body but separate from the actual place of interaction. With the iPad, gestures happen on the surface that you are creating on. I have a general theory that when you narrow the gap between the surface that you “create on” and the surface that you “read on,” you change the ratio of readers to writers: you reduce consumption as we used to know it and increase participation. Some examples. Images — still and video — where the tool you use to capture is increasingly the tool you use to view and edit. Remember the analog experience — shoot a roll of film on one media type (coated celluloid) and then develop and display on another (paper). The gap here was large. Digital cameras started to close the gap by eliminating the development process — by recording on a digital medium that permitted the direct transfer to a display and editing device (the PC). The incorporation of display screens on cameras shrunk the gap further. Now we are closing the gap even further, embedding cheap cameras into every display screen so that what you see is also what you record, and embedding display screens into the front of cameras. With each closing of the gap between production and display, participation increases. Take the web itself. The advent of wikis, blogging, comments and writable sites. Or compare Facebook, Twitter and Tumblr vs. WordPress, Posterous and Typepad. They are all CMSs of one kind or another — but the experience is radically different in the first group. Why? Because they close that gap — specifically, they don’t abstract the publishing into a dashboard. You write on the surface you are reading on.

So, as a rule of thumb, when I see this gap narrow I sit back and think. And it is for this reason that I believe the gesture-based interface on this device has the potential to open up a new form of computing.

Back to that experiment. While it has been less than 12 weeks since the launch, I want to see if there are elements emerging in iPad apps that can tell us what this new medium has to offer and what we are going to be able to create on this device. My process is as follows:

(a) Hunt and peck for native apps. The discovery / search process is imperfect. I spend a fair amount of time using services like Appshopper, Appadvice and Position App. I also spend time in the limited app store that Apple offers (limited in that it sure is one crappy interface to browse, compare and find apps). I do find the “people who liked this also liked this” feature useful. But hunt and peck is the apt term — it’s a tough discovery process. While Apple has done an awful lot to open up new forms of innovation, they are simultaneously compromising others — the web isn’t a good discovery platform for a lot of these apps because many of them aren’t “visible” to basic web tools. Anyway, that is how I find things.

(b) I use the apps for a few days at least. Given how visually seductive this platform is, it’s important for me to use the apps for a bit, let them settle into my workflow and interests, and see if they mature or fade. I then create a summary of the app on the iPad (might as well use the medium). The app that I used to write many of these summaries was OmniGraffle.

Six of the summaries are inserted below aggregated under some broad topic areas. I wanted to lay them out side by side on the table and see what I had learnt thus far.  I have some commentary around most sections and then some conclusions at the end.

1. This is the first post I did, summarizing the goal of the experiment.

2. Extending the iPad

In the early days I was fascinated by the Camera A / Camera B application — it lets you use your iPhone camera on your iPad, over WiFi. It’s one of those wow apps — you show it to people and you can see their eyes open as they think of the possibilities it opens up. I think the possibility set it opens up relates to the device as an extension of other connected devices. There are a small handful of other applications I found that have done interesting things integrating iPads with other devices — e.g. Scrabble, iBrainstorm and Airturn. Airturn is brilliant in its simplicity and well-defined use – using a Bluetooth foot pedal to turn the iPad into a sheet music reader. Apple might well have left the camera off v1 of the iPad for commercial reasons (i.e. an upgrade path), but the business restriction has opened up an opportunity.

Camera A/B is a good example of how those design choices are driving innovation. One of the first pictures I took with it was the requisite recursive image.

3. Take me back …

The only physical navigation on the device is a home button; like the iPhone, there is no back button. I wish there were one. I find myself using the home button time and time again to go back when I’m in an application. I love how conservative Apple is with its hardware controls, but a back button is missing — it’s one of the great navigational tools that the browser brought us, and I really want one on this device.


4. Jump on in …

There are a lot of interesting immersive apps beginning to pop up on the iPad. A roller coaster simulator was one of the first that gave me a sense of the kind of experiences that are emerging.


Another immersive application is the popular Osmos HD. I said at the outset that I avoided gaming apps, and both this and the coaster are games, but it’s the immersive navigation that I want to emphasize — today there aren’t many better ways to explore this than apps like these. Both of them use the high-resolution display, the multitouch interface and the accelerometer to give you a visceral sense of the possibilities.

5. Writing …

I want to write on the iPad, write with my hand. I tried getting a pen, but the experience was disappointing. The multitouch surface is designed for input from a finger, and the pen simulates a finger. If you want to draw with a pen, or have large fingers, then a pen like this works, but it doesn’t work to actually write on the device. There also isn’t an application that lets you scale down words you have written with your finger, or at least I haven’t found one. But you can type!

I have also used a wireless keyboard — I typed most of this post using a keyboard, and it works well.

6. Reading, readers and browsing …

There is a whole collection of reading-related experiences coming out for the iPad; it’s one of the most active areas of development. My journey began with the book apps on the device: iBooks, the Kindle app and then a handful of dedicated reading apps (e.g. comic book apps). I don’t have much to say about any of these experiences since they all pretty much use the device as a display to read on. They all work well, and the display is easier on my eyes than I expected. I liked the Kindle’s e-ink display a lot, but unless you are reading outside, in full sun, the iPad display works very well. My favorite reading app is the Kindle app. The reading surface is clean and immersive. Navigation is simple and I love the “social highlight” feature: while you are reading, there are sections with a light, dotted underline — touch one and it tells you how many other people have highlighted that section as well. I love stuff like this — a meaningful social gesture displayed with minimal UI.


A few weeks after the launch I started using reader apps. I define this category as apps that offer a reading experience into either a social network (Twitter, Facebook), a selection of feeds (RSS), or a scraped version of web sites. Some people are calling these clients; for me a client lets you publish, and these are readers of one kind or another. Skygrid was one of the first I used. Then came Pulse, GoodReader, Apollo and, last week, Flipboard. Most of these readers offer simple, fluid interfaces into the real-time streams. Yet the degree to which we have turned the web into a mess is painfully evident in these applications. Take a look at the web pages displayed in these applications: the pages are a mess. Less than 15% of the pixels on one of the pages I captured were actually written by the author.


It’s remarkable how the human brain can block out a visual experience in one context (the web browser), but when it’s recontextualized into another experience (the iPad) the insanity of the experience is clear. We have slow-boiled so many web sites that we have turned the web into a mass of branding, redundant navigation and advertising. And some wonder why the value of these ads keeps falling. As the number of devices that access the internet increases, the possibility of forking the web, as Doc Searls calls it, increases. Remember PointCast, Sidewiki, Google News, the Digg bar — same questions. Something has to give here. Surfing the web works very well on the iPad; the problem is that it’s the web sites that don’t.

The issues embedded in these readers stretch back to the beginning of the web — all the way back to the moment that HTML, and then RSS, formed a layer, a standard, for the abstraction of underlying data vs. its representation. Regardless of your view of the touch-based interface, it’s undeniable that the iPad represents a meaningful shift in how you can view information. Match that with the insanity of how so many web sites look today and you have a rich opportunity for innovation.

Users, publishers, advertisers, browsers, aggregators, widget makers — pretty much everyone is going to try to address this issue. Some of these reader apps use the criteria that RSS established (excerpt or full text) to determine whether to re-contextualize the entire page or just a snippet of it (see the sketch below). Some of them just scrape the entire web page, and some of them are emerging as potentially powerful middleware tools. PressedPad is installed on this blog — it’s somewhere between a WordPress plugin and a theme (note to users: install it as a plugin). PressedPad gives me some basic controls over how to display and manage the words on this site so that they are optimized for the iPad. Similar to WPtouch, it does a great job of addressing this issue by passing control over to the site creator. This approach makes sense, but it will take time to scale. In the short term we are going to see a lot of false starts here. But ultimately the reading experience will get better because of this tension and evolution, both on the iPad and the web. And so will monetization. Now that the inanity of what we have done has been laid bare, we have to fix it.
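As a rough illustration of the excerpt-vs-full-text criterion mentioned above, here is a minimal sketch of how a reader app might decide whether a feed entry carries enough content to re-display in full or only a snippet that requires fetching the original page. It assumes the widely used feedparser library; the length threshold and the decision rule are my assumptions, not how any of the apps above actually work.

```python
import feedparser

# Sketch: decide whether a feed entry ships full content or just an excerpt.
# Heuristic only; real reader apps layer many more rules on top of this.

SNIPPET_THRESHOLD = 500  # assumed: below this many characters, treat as an excerpt


def entry_mode(entry):
    """Return 'full' if the entry appears to include the whole article, else 'snippet'."""
    # Atom feeds can carry the whole article in <content>; RSS usually puts an
    # excerpt (or sometimes everything) in <description>, exposed here as summary.
    if "content" in entry and entry.content:
        body = entry.content[0].value
    else:
        body = entry.get("summary", "")
    return "full" if len(body) >= SNIPPET_THRESHOLD else "snippet"


def classify_feed(url):
    feed = feedparser.parse(url)
    return [(e.get("link"), entry_mode(e)) for e in feed.entries]


if __name__ == "__main__":
    for link, mode in classify_feed("https://example.com/feed"):
        print(mode, link)
```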

Back to the apps themselves. Of all these reader apps, Flipboard is the most innovative. I’m still getting used to the experience – there is a lot to think about here. There is much that I like about Flipboard – it’s visually arresting for a start, beautifully laid out and stunning. Some apps just stop you in your tracks with their ability to show off the visual capabilities of the device, and Flipboard is certainly one of these.

Visuals aside, the thing that I find interesting is Flipboard’s approach to Twitter and Facebook. It turns Twitter and Facebook into a well-formatted reading experience — it takes a dynamic real-time stream and re-prints it as if it were a magazine. I like the use of Tweets as headlines. I have often thought about Twitter’s 140-character length as headline publishing. Flipboard takes this literally — using the Tweet as the headline with excerpts of the content displayed under it. The Facebook stream works less well. Facebook isn’t a news stream, it’s more of a social stream — and I find that Flipboard randomly drops me into Facebook at a level that I’m not interested in. I flip pages and I find myself browsing personal pictures from someone I barely know — something that I would have skipped by on Facebook.com.

But it is this representation of a stream as a magazine that I struggle with the most. The metaphor is overwrought in my mind. I hear the theoretical arguments that Scoble makes about layout, but they don’t translate for me in practice. The stream of data coming from Twitter and Facebook isn’t a magazine — formatting it as such places it into a context that doesn’t fit particularly well and certainly doesn’t scale well (from a usage perspective). Because it looks like a magazine and feels like one, I tend to read it like one, and this content isn’t meant to be used like a magazine. The presentation feels too finished. I have written before about the need for unfinished media and how it opens the door for participation. This feels like it closes that door – it allows too narrow an entry path for interaction. And finally, what they are trying to do is technically hard. It’s hard to algorithmically determine which text should be large vs. small and where to place emphasis – just as it’s hard to algorithmically de-dup multiple streams, or to reliably display the images that correspond to the title.

These are my initial Flipboard thoughts. I am fascinated by this category and the conversations Pulse, Flipboard and others have started. The innovation here is just getting going and I can’t wait to see what comes next.

Browsers. I’m using Life Browser a lot and liking it. The Queue feature is great — enable the Q button and any links you click on the page get “queued up” behind it in a stack. I’m interested to see things like Firefox’s Tab Candy come to the iPad.

Some conclusions …

1. It’s early days.

There wasn’t a single application that I found that really stood out and remained interesting after a few weeks of use. Many were recast versions of iPhone applications. I did find things that are edging in the direction of the truly native – most of those I outlined above. This conclusion isn’t surprising. It’s very hard to re-conceptualize interfaces and experiences. The launch of the Magic Trackpad demonstrates how committed Apple is to this interface. If this is truly a new form, three months is barely a teaser — we have much to do and much to learn here. And in the past few weeks the pace of launches of interesting applications has started to pick up significantly. I’m spending more time in drawing apps and in some quasi-enterprise apps. I can’t wait to see what the next six months bring.

2. The visual dominates, gesture emerging.

Visually arresting applications are the things that pop today. Many of them are just beautiful to look at. The pond is lovely. Have you been struck by the bookshelf in iBooks? I was. What about the roller coaster? So are many of the games, and so is Flipboard. But I suspect much of what I’m responding to is the quality of the screen and the images being displayed, i.e. the candy, not the sustenance. Many of the apps that had an initial wow factor I have now deleted. Visual graphics need to be part of the quality and essence of the experience, not just eye candy. And the visual needs to be integrated into the gestural. Maybe artists will take it across this threshold — I was sorry that the Seven on Seven event happened right around the launch; I hope that for the next one some artists will opt to produce something on the iPad. Gesture-based interfaces are emerging — slowly, but they are coming. I used PressedPad to “iPad”-ize this blog and the experience works well(ish) — the focus is simply on making the navigation gestures applicable. But note, even here, when I showed this iPad-enabled blog to @wesissman he mailed me “looks amazing – i cant figure out how to actually read the posts – but looks great”. We are in that early part of the experience of a new device where the visual is so astounding that we, in a sense, need to get over it in order to figure out how to make it useful.

3. It’s a social device.

It’s a social device, yet many of the applications are single-user and don’t think through the connected aspects of the device. While the device is highly personal, it’s also social: it caters very well to multiple users and multiple devices. I haven’t figured out why this is so, but for some reason the iPad has a highly personal, intimate feel, yet its social representation is far less personal. Try this out — leave an iPad lying around in a conference room and people will feel very comfortable using it. In the first few weeks it was fair to say that everyone simply wanted to try one, but the behaviour persists. In the same way, I have brought an iPad to meetings and passed it around the table; it’s a very sharable, social device. In this mix of personal and not — single-user and multi-user/multi-device — is, I believe, a trove of opportunity for innovation. And then add connectivity to this mix. This device is designed as a connected device (connected both to other devices and to the network) — it will open up paths of connected innovation we can only imagine today.

4. Enterprise is coming

I have been struck by how popular VPN and other virtualization apps are. It suggests a lot of people are starting to use the iPad in the enterprise. I heard some numbers suggesting that more than 15% of the iPads sold are linked to corporate accounts. The use cases are a little outside of what I know and think about, but I suspect there is a lot that will emerge here. The device requires very little IT overhead — the total cost of ownership of these devices has to be a fraction of that of a normal PC.

So here is an initial set of thoughts about the iPad. I’m interested to hear what you think. One of the other incidental properties of the iPad is its initial lack of focus. The iPhone is in the first instance a phone, the Kindle is a book reader — the iPad is an open tablet, for us to create on. I believe there is much to do here — the tablet has been the next great form factor for a long time now, but I think it’s finally arrived. We now have to build the experiences to suit the device.

lines in the sand …

I had the good fortune of receiving an advance copy of Ken Auletta’s forthcoming book “Googled, The End of the World as We Know It“. It’s a fascinating read, one that raises a whole set of interesting dichotomies related to Google and its business practices. Contrast the fact that Google’s business drives open and free access to data and intellectual property, so that the world becomes part of its corpus of data, with how tightly it guards its own IP in regard to navigating that data. Contrast the fact that the users and publishers who gave Google the insights to filter and search data are the ones who are then taxed to access that data set. Contrast Google’s move into layers beyond web sites (e.g., operating systems, web browsers) with their apparent belief that they won’t have issues stemming from walled gardens and tying. In Google we have a company that believes “Don’t be evil” is a sufficient promise for its users to trust its intentions, yet it is a company that has never articulated what it thinks is evil and what is not (Google.cn, anyone?).

There is a lot to think about in Auletta’s book – it’s a great read. When I began reading, I hoped for a prescriptive approach, a message about what Google should do; instead Auletta provides the corporate history, identifies the challenging issues and leaves it to the reader to form a position on where they lead. In my case, the issue it got me thinking most about was antitrust.

My bet is that in the coming few years Google is going to get hauled into an antitrust episode similar to what Microsoft went through a decade ago. Google’s business has grown to dominate navigation of the Internet. Matched with their incredibly powerful and distributed monetization engine, this power over navigation is going to run headlong into a regulator. I don’t know where (US or elsewhere) or when, but my bet is that it will happen sooner rather than later. And once it does happen, the antitrust process will again raise the thorny issue of whether regulation of some form is an effective tool in the fast-moving technology sector.


I was a witness against Microsoft in the remedy phase of its antitrust trial, and I still think a lot about whether technology regulation works. I now believe the core position I advocated in the Microsoft trial was wrong. I don’t think government has a role in participating in technology design, and I believe the past ten years have adequately illustrated that the pace of innovation and change will outrun any one company’s ability to monopolize a market. There’s no question in my mind that Microsoft still has a de facto monopoly on the market for operating systems. There’s also no question that the US and EU regulatory environments have constrained the company’s actions, mostly for the better. But the primary challenges for Microsoft have come from Google and, to a lesser extent, from Apple. Microsoft feels the heat today, but it is coming from Silicon Valley, not Brussels or Washington, and it would be feeling this heat no matter what had happened in the regulatory sphere. The EU’s decisions to unbundle parts of Windows did little good for RealNetworks or Netscape (which had been harmed by the bundling in the first place), and my guess is that Adobe’s Flash/AIR and Mozilla’s Firefox would be thriving even if the EU had taken no action at all.

But if government isn’t effective at forward-looking technology regulation, what alternatives do we have? We can restrict regulation to instances where there is discernible harm (approach: compensate for past wrongs, don’t design for future ones) or stay out and let the market evolve (approach: accept the voracious appetite of these platforms because they’re temporary). But is there another path? What about a corporate statement of intent like Google’s “Don’t be evil”?

“Don’t be evil” resonated with me because it suggested that Google as a company would respect its users first and foremost and that its management would set boundaries on the naturally voracious appetite of its successful businesses.

In the famous cover letter in Google’s registration statement with the SEC before its IPO, its founders said: “Our goal is to develop services that significantly improve the lives of as many people as possible. In pursuing this goal, we may do things that we believe have a positive impact on the world, even if the near term financial returns are not obvious.” The statement suggests that there is a set of things that Google would not do. Yet as Auletta outlines, “don’t be evil” lacks forward-looking intent, and most importantly it doesn’t outline what good might mean.

Nudge please …

Is there a third way — an alternative that places the company builders in a more active position? After almost two decades of development, I believe many of the properties of the Internet have been documented and discussed, so why not distill these and use them as guideposts? I love reading and rereading works like the Stupid Network, the Cluetrain Manifesto, the Cathedral and the Bazaar, or (something seasonal!) the Halloween Memos. In these works, and others, there is a mindset, an ethos or culture that is philosophically consistent with the medium. When I first heard “Don’t be evil,” my assumption was that it, and by definition good, referred to that very ethos. What if we can unpack these principles, so that builders of the things that make up these internets can make explicit their intent and begin to establish a compact, rather than a loose general statement of “goodness” that is subject to the constraint that “good” can be relative to the appetite of the platform? Regulation in a world of connected data, where the network effect of one platform helps form another, has much broader potential for unintended consequences. How we address these questions is going to affect the pace and direction of technology-based innovation in our society. If forward-looking regulation isn’t the answer, can companies themselves draw some lines in the sand, unpack what “don’t be evil” suggested, and nudge the market towards an architecture in which users, companies, and other participants in the open internet signal the terms and expectations they have?

Below is a draft list of principles. It is incomplete, I’m sure — I’m hoping others will help complete it — but after reading Auletta’s book and after thinking about this for a while I thought it would be worth laying out some thoughts in advance of another regulatory mess.

1. Think users 

When you start to build something online, the first thing you think about is users. You may well think about yourself — user #1 — and use your own workflow to intuit what others might find useful, but you start with users and I think you should end with the users. This is less of a principle and more of a rule of thumb, and a foundation for the other principles. It’s something I try to remind myself of constantly. In my experience with big and small companies this rule of thumb seems to hold constant. If the person who is running the shop you are working for doesn’t think about end users and/or doesn’t use your product, it’s time to move on. As Eric Raymond says, you should treat your users as co-developers. Google is a highly user-centric company for one of its scale; they stated this in the preamble to the IPO filing, and they have managed to stay relatively user-centric, with few exceptions (Google.cn likely the most obvious, maybe the Book deal). Other companies — e.g. Apple, Facebook — are less user-centric. Working on the Internet is like social anthropology: you learn by participant observation — the practice of doing and building is how you learn. In making decisions about services like Google Voice, Beacon, etc., users’ interests need to be where we start and where we end.

2. Respect the layers

In 2004 Richard Whitt, then at MCI, framed the argument for using the layer model to define communication policy. I find this very useful: it is consistent with the architecture of the internet, it articulates a clear separation of content from conduit, and it has the added benefit of being a useful visual representation of something that can be fairly abstract. Whitt’s key principle is that companies should respect the distinction between these layers. Whitt captures in a simple framework what is wrong with the cable companies or the cell carriers wanting to mediate or differentially price bits. It also helps to frame the potential problems that Sidewiki, the iPhone, Google Voice, or Chrome presents (I’m struck by the irony that “respecting the layers” in the case of a browser means that no features from the browser provider should be embedded into the chrome of the browser; calling the browser Chrome is suggestive of exactly what I don’t want, i.e. Google-specific chrome!). All these products have the potential to violate the integrity of the layers by blending the content and application layers. It would be convenient and simple to move on at this point, but it’s not that easy.

There are real user benefits to tight coupling (and the blurring of layers), in particular during the early stages of a product’s development. There were many standalone MP3 players on the market before the iPod. Yet it was the coupling of the iPod to iTunes, and the set of business agreements that Apple embedded into iTunes, that made that market take off (note that this occurred eighteen months after the launch of the iPod). Same for the Kindle — coupling the device to Amazon’s store and to the wireless “Whispernet” service is what distinguishes it from countless other (mostly inferior) ebooks. But roll the movie forward: it’s now six and a half years after the launch of the coupled iTunes/iPod system. The device has evolved into a connected device that is coupled both to iTunes and AT&T, and the store has evolved way beyond music. Somewhere in that evolution Apple started to trip over the layers. The lines between the layers became blurred, and so did the lines between vendors, agents and users. Maybe it started with the DRM issue in iTunes, or maybe the network coupling which in turn resulted in the Google Voice issue. I’m not sure when it happened, but it has happened, and unless something changes it’s going to be more of a problem, not less. Users, developers and companies need to demand clarity around the layers, and transparency into the business terms that bound the layers. As iTunes scales — to become what it is in essence, a media browser — I believe the pressure to clarify these layers will increase. An example of where the layers have blurred without the feature creep or conflict is the search box in, say, the Firefox browser. Google is the default, there is a transparent economic agreement that places them there, and users can adjust and pick another default if they wish. One of the unique attributes of the internet is that the platform on which we build things is the very same as the one we use to “consume” those things (remember the thrill of “view source” in the browser). Given this recursive aspect of the medium, it is especially important to respect the layers. Things built on the Internet can themselves redefine the layers.

3. Transparency of business terms

When a platform like Google, iTunes, Facebook, or Twitter gets to scale, it rapidly forms a basis on which third parties can build businesses. Clarity around the business terms for inclusion in the platform, and around what drives promotion and monetization within the platform, is vital to the long-term sustainability of the underlying platform. It also reduces the cost of inclusion by standardizing the business interface into the platform. AdSense is a remarkable platform for monetization. The Google team did a masterful job of scaling a self-service (read: standardized) interface into their monetization system. The benefits of this have been written about at length, yet aspects of the platform like “smart pricing” aren’t transparent. See this blog post from Google about smart pricing and some of the comments in the thread. They include: “My eCPM has tanked over the last few weeks and my earnings have dropped by more then half, yet my traffic is still steady. I’m lead to believe that I have been smart priced but with no information to tell me where or when”

Back in 2007 I ran a company called Fotolog. The majority of the monetization at Fotolog was via Google. One day our Google revenues fell by half. Our traffic hadn’t fallen, and up to that point our Google revenue had been pretty stable. Something was definitely wrong, but we couldn’t figure out what. We contacted our account rep at Google, who told us that there was a mistake on our revenue dashboard. After four days of revenues running at the same depressed level, we were told we had been “smart priced”. Google would not offer us visibility into how this is measured or what the competitive cluster is against which you are being tested. That opacity made it very hard for Fotolog to know what to do. If you get smart priced you can end up having to reorganize your entire base of inventory, all while groping to understand what is happening in the black box of Google. Google points out that it doesn’t directly benefit from many of these changes in pricing (the advertisers do pay less per click), but Google does benefit from the increased liquidity in the market. As with Windows, there is little transparency in regard to the pricing within the platform and the economics. This in turn leaves a meaningful constituent on the sidelines, unsatisfied or unclear about the terms of their business relationship with the platform. I would argue that smart pricing, and a lack of transparency into how their monetization platform can be applied to social media, is driving advertisers to services like Facebook’s new advertising platform.

Back to Apple. iTunes is, as I outlined above, a media browser — we think about it as an application because we can only access Apple stuff through it, a simple yet profound design decision. Apple created this amazing experience that arguably worked because it was tightly coupled end to end, i.e., the experience stretched from the media through the software to the device. Then when the device became a phone, the coupling extended to the network (here in the US, AT&T). I remember two years ago I almost bricked my iPhone — Apple reset my iPhone to its birth state — because I had enabled installing applications that weren’t “blessed” by Apple. My first thought was, “isn’t this my phone? what right does Apple have to control what I do with it, didn’t I buy it?” A couple of months ago, Apple blocked Google Voice’s iPhone application; two weeks ago Apple rejected someecards’ application from the app store while permitting access to a porn application (both were designated 17+; one was satire, the other wasn’t). The issue here isn’t monopoly control, per se — Apple certainly does not have a monopoly on cell phones, nor AT&T on cell phone networks. The trouble is that there is little to no transparency into *why* these applications weren’t admitted into the app store. (someecards’ application did eventually make it over the bar; you can find it here.) Will Google Voice get accepted? Will Spotify? Rdio? someecards? As with the Microsoft of yesteryear (which, among other ills, forbade disclosure of its relationships with PC makers), there is an opaqueness to the business principles that underlie the iTunes app store. This is a design decision that Apple has made and one that, so far anyway, users and developers have accepted. And, in my opinion, it is flawed. Ditto for Facebook. This past week, the terms for application developers were modified once again. A lot of creativity, effort, and money has been invested in Facebook applications — the platform needs a degree of stability and transparency for developers and users.

4. Data in, data out?

APIs are a cornerstone of the emerging mesh of services that sit on top of and around platforms. The data flows from service providers should, where possible, be two-way: services that consume an API should publish one of their own. The data ownership issues among these services are going to become increasingly complex. I believe that users have the primary rights to their data, and that the applications users select have a proxy right, as do other users who annotate and comment on the data set. If you accept that as a reasonable proposition, then it follows that service providers should have an obligation to let users export that data and also let other service providers “plug into” that data stream. The compact I outline above is meaningfully different from what some platforms offer today. Facebook asserts ownership rights over the data you place in its domain; in most cases the data is not exportable by the user or another service provider (e.g., I cannot export my Facebook pictures to Flickr, nor wire up my feed of pictures from Facebook to Twitter). Furthermore, if I leave Facebook they still assert rights to my images. I know this is technically the easiest answer. Having to delete pictures that are now embedded in other people’s feeds is a complex user experience, but I think that’s what we should expect of these platforms. The problem is far simpler if you just link to things and then promote standards for interconnections. These standards exist today in the form of RSS or Activity Streams — pick your flavor, let users move data from site to site, and let users store and save their data.
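To ground the "data in, data out" idea, here is a minimal sketch of what a user-facing export might look like: a service serializes a user's photos into a loose Activity Streams-style JSON feed that the user, or another service they authorize, could pull. The structure follows Activity Streams 1.0 conventions only loosely; the field names and the shape of the input records are illustrative assumptions, not any platform's actual API.

```python
import json
from datetime import datetime, timezone

# Sketch: a "data out" export in a loose Activity Streams 1.0 style, so users
# (or services they authorize) can pull their own content back out of a platform.


def to_activity(user_id, photo):
    """photo: dict with 'id', 'url', 'title' and 'posted_at' (timezone-aware datetime)."""
    return {
        "verb": "post",
        "published": photo["posted_at"].astimezone(timezone.utc).isoformat(),
        "actor": {"objectType": "person", "id": user_id},
        "object": {
            "objectType": "image",
            "id": photo["id"],
            "url": photo["url"],
            "displayName": photo["title"],
        },
    }


def export_stream(user_id, photos):
    return json.dumps({"items": [to_activity(user_id, p) for p in photos]}, indent=2)


if __name__ == "__main__":
    demo = [{
        "id": "p1",
        "url": "https://example.com/p1.jpg",
        "title": "Sunset",
        "posted_at": datetime.now(timezone.utc),
    }]
    print(export_stream("acct:alice@example.com", demo))
```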

5. Do what you do best, link to the rest

Jeff Jarvis’s motto for newsrooms applies to service providers as well. I believe the next stage of the web is going to be characterized by a set of loosely coupled services — services that share data — offering end users the ability to either opt for an end-to-end solution or roll their own in a specific domain where they have depth of interest, knowledge or data. The first step in this process is that real identity is becoming public and separable from the underlying platform (vs. private in, say, The Facebook, or alias-based in most earlier social networks). In the case of services like Facebook Connect and Twitter OAuth, this not only simplifies the user experience; identity also pre-populates a social graph into the service in question. OAuth flows identity into a user’s web experience, vs. the disjointed efforts of the past. This is the starting point. We are now moving beyond identity into a whole set of services stitched together by users. Companies of yesteryear, as they grew in scale, started to co-opt vertical services of the web into their domain (remember when AOL put a browser inside of its client, with the intention of “super-setting” the web). That was an extreme case — but it is not all that different from Facebook’s “integration” of email: a messaging system with no IMAP access, which sends me an email to my IMAP “email” account to tell me to check that I have a Facebook “email”. This approach won’t scale for users. Kevin Marks, Marc Canter and Jerry Michalski are some of the people who have been talking for years about an open stack. In the latter half of this presentation Kevin outlines the emerging stack. I believe users will opt — over time — for best-in-class services vs. the walled-garden, all-in-one approach.

 


6. Widen my experience – don't narrow it

Google search increasingly serves to narrow my experience on the web rather than expand it. This is driven by a combination of the pressure inherent in their business model to push page views within their domain vs. outside (think Yahoo Finance, Google OneBox etc.) and the evolution of an increasingly personalised search experience, which in turn tends to feed back to me and amplify my existing biases — serving to narrow my perspective vs. broaden it. Auletta talks about this at the end of his book. He quotes Nick Carr: “They (Google) impose homogeneity on the Internet’s wild heterogeneity. As the tools and algorithms become more sophisticated and our online profiles more refined, the Internet will act increasingly as an incredibly sensitive feedback loop, constantly playing back to us, in amplified form, our existing preferences.” Features like social search will only exacerbate this problem. This point is the more subtle side of the point above. I wrote a post a year or two ago about thinking of centres vs. wholes and networks vs. destinations. As the web of pages becomes a web of flow and streams, the experience of the web is going to widen again. You can see this in the data — the charts in the “distribution now” post illustrate the shift that is taking place. As the visible, user-facing part of a web site becomes less important than the APIs and the myriad ways that users access the underlying data, the web, and our experience of it, will widen, again.

Conclusions

I have outlined six broad principles that I believe can be applied as a design methodology for companies building services online today. They are inspired by others, a list that would be very long and that I’m not going to attempt to document since I will surely miss someone. Building companies on today’s internet is by definition an exercise in standing on the shoulders of giants. Internet standards from TCP/IP onward are the strong foundation of an architecture of participation. As users pick and choose which services they want to stitch together into their cloud, can companies build services based on these shared data sets in a manner that is consistent with the expectations we hold for the medium? The web has a grain to it, and after 15 years of innovation we can begin to observe the outlines of that grain. We may not always be able to describe exactly what it is that makes something “web consistent,” but we do know it when we see it.

The Microsoft antitrust trial is a case study in regulators acting as design architects. It didn’t work. Google’s “don’t be evil” mantra represents an alternative approach, one that is admirable in principle but lacking in specificity. I outline a third way here, one in which we as company creators coalesce around a set of principles saying what we aspire to do and not do, principles that will be visible in our words and our deeds. We can then nudge our own markets forward instead of relying on the “helping hand” of government.


Creative destruction … Google slayed by the Notificator?

The web has repeatedly demonstrated its ability to evolve and leave embedded franchises struggling or in the dirt. Prodigy and AOL were early candidates. Today Yahoo and eBay are struggling, and I think Google is tipping down the same path. This cycle of creative destruction — more recently framed as the innovator’s dilemma — is both fascinating and hugely dislocating for businesses. To see these immense franchises melt before your very eyes is hard, to say the least. I saw it up close at AOL. I remember back in 2000, just after the new organizational structure for AOL / Time Warner was announced, there was a three-day HBS training program for 80 or so of us at AOL. I loathe these HR programs — but this one was amazing. I remember Kotter as great (a fascinating set of videos on leadership; I wish I had them recorded), Colin Powell was amazing, and then on the second morning Clay Christensen spoke to the group. He is an imposing figure, tall as heck, and a great speaker — he walked through his theory of the innovator’s dilemma, illustrated it with supporting case studies and then asked us where disruption was going to come from for AOL. Barry Schuler — who was taking over from Pittman as CEO of AOL — jumped to answer. He explained that AOL was a disruptive company by its nature. That AOL had disruption in its DNA, and so AOL would continue to disrupt other businesses, and as the disruptor its fate would be different. It was an interesting argument — heartfelt, and in the early days of the Internet cycle it seemed credible. The Internet leaders would have the creative DNA and organizational fortitude to withstand further cycles of disruption. Christensen didn’t buy it. He said that time and time again disruptive businesses confuse adjacent innovation with disruptive innovation. They think they are still disrupting when they are just innovating on the same theme that they began with. As a consequence they miss the grassroots challenger — the real disruptor to their business. The company that is disrupting their business doesn’t look relevant to the billion-dollar franchise; it’s often scrappy and unpolished, it looks like a sideline business, and often its business model is TBD. With the AOL story now unraveled, I now see search as fragmenting and Twitter search doing to Google what broadband did to AOL.


Video First

Search is fragmenting into verticals. In the past year two meaningful verticals have emerged — one is video, the other is real-time search. Let me play out what happened in video, since it’s indicative of what is happening in the now web. YouTube.com is now the second-largest search site online — YouTube domestically generates close to 3bn searches per month — it’s a bigger search destination than Yahoo. The Google team nailed this one. Lucky or smart, they got it dead right. When they bought YouTube the conventional thinking was that they were moving into media; in hindsight, it’s media, but more importantly to Google, YouTube is search. They figured out that video search was both hard and different, and that owning the asset would give them both a media destination (browse, watch, share) and a search destination (find, watch, share). Video search is different because it alters the line, or distinction, between search, browse and navigation. I remember when Jon Miller and I were in the meetings with Brin and Page back in November of 2006 — I tried to convince them that video was primarily a browse experience and that a partnership with AOL should include a video JV around YouTube. Today this blurring of the line between searching, browsing and navigation is becoming more complex as distribution and access of YouTube grows outside of YouTube.com. 44% of YouTube views happen in the embedded YouTube player (i.e. off YouTube.com), and late last year they added search into the embedded experience. YouTube is clearly a very different search experience from Google.com. A last point here before I move to real-time search. Look at the speed at which YouTube picked up market share. YouTube searches grew 114% year over year from Nov 2007 to Nov 2008. This is amazing — for years the web search share numbers have inched up in Google’s favor, as AOL, Yahoo and others inch down, a percentage point here or there. But this YouTube share shift blows away the more gradual shifts taking place in the established search market. Video search now represents 26% of Google’s total search volume.

[Image: Summize search results around the Falls Church explosion]

The rise of the Notificator

I started thinking about search on the now web in earnest last spring. betaworks had invested in Summize, and the first version of the product (a blog sentiment engine) was not taking off with users. The team had created a tool to mine sentiments in real time from the Twitter stream of data. It was very interesting — a little grid that populated real-time sentiments. We worked with Jay, Abdur, Greg and Gerry Campbell to make the decision to shift the product focus to Twitter search. The Summize Twitter search product was launched in mid April. I remember the evening of the launch — the trending topic was IMAP. I thought "that can't be right, why would IMAP be trending?" I dug into the Tweets and saw that Gmail IMAP was having issues. I sat there looking at the screen, thinking: here was an issue (Gmail IMAP is broken) that had emerged out of the collective Twitter stream — something that an algorithmic search engine based on the relationships between links, applying math to context-less pages, could never identify in real time.
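As a minimal sketch of what surfacing a trend like that involves, here is a toy spike detector that flags terms whose recent frequency in a stream of tweets runs well ahead of their longer-run baseline. The window sizes and thresholds are illustrative assumptions, not Summize's actual algorithm:

```python
from collections import Counter, deque
import time

class TrendDetector:
    """Toy trending-term detector: compares a term's frequency in a short
    recent window against its frequency over a longer baseline window."""

    def __init__(self, recent_secs=300, baseline_secs=3600, min_count=20, ratio=5.0):
        self.recent_secs = recent_secs      # the "what is spiking right now" window
        self.baseline_secs = baseline_secs  # the history we compare against
        self.min_count = min_count          # ignore very rare terms
        self.ratio = ratio                  # how far above baseline counts as a spike
        self.events = deque()               # (timestamp, term) pairs

    def add_tweet(self, text, ts=None):
        ts = ts if ts is not None else time.time()
        for term in text.lower().split():
            self.events.append((ts, term))
        # drop events that have aged out of the baseline window
        cutoff = ts - self.baseline_secs
        while self.events and self.events[0][0] < cutoff:
            self.events.popleft()

    def trending(self, now=None):
        now = now if now is not None else time.time()
        recent, baseline = Counter(), Counter()
        for ts, term in self.events:
            baseline[term] += 1
            if ts >= now - self.recent_secs:
                recent[term] += 1
        spikes = []
        for term, r in recent.items():
            # expected recent count if the term were spread evenly over the baseline
            expected = baseline[term] * self.recent_secs / self.baseline_secs
            if r >= self.min_count and r > self.ratio * max(expected, 1.0):
                spikes.append((term, r))
        return sorted(spikes, key=lambda pair: -pair[1])
```

Fed a stream of tweets on the night Gmail IMAP broke, a detector like this would flag "imap" against its near-zero baseline, which is exactly the kind of signal a link-graph ranker has no way to see in real time.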

A few weeks later I was on a call with Dave Winer and the Switchabit team — one member of the team (Jay) all of a sudden said there was an explosion outside. He jumped off the conference call to figure out what had happened. Dave asked the rest of us where Jay lived — within seconds he had tweeted out "Explosion in Falls Church, VA?" Over the next hour and a half the Tweets flowed in and around the issue (for details see and click on the picture above). What emerged was that a minor earthquake had taken place in Falls Church, Virginia. All of this came out of a blend of Dave's tweet and a real time search platform. The conversations took a while to zero in on the facts — it was messy and rough around the edges, but it all happened hours before mainstream news, the USGS or any "official" body picked up the story. Something new was emerging — was it search, news, or a blend of the two? By the time Twitter acquired Summize in July of '08 it was clear that Now Web Search was an important new development.

Fast forward to today and take a simple example of how Twitter search changes everything. Imagine you are in line waiting for coffee and you hear people chattering about a plane landing on the Hudson. You go back to your desk and search Google for "plane on the Hudson" — today, weeks after the event, Google is replete with results, but the day of the incident there was nothing on the topic to be found on Google. Yet at http://search.twitter.com the conversations were right there in front of you. The same holds for any topical issue — lipstick on a pig? For real time questions, real time branding analysis, tracking a new product launch — on pretty much any subject, if you want to know what's happening now, search.twitter.com will come up with a superior result set.

How is real time search different? History isn't that relevant — relevancy is driven mostly by time. One of the Twitter search engineers said to me a few months ago that his CS professor wouldn't technically regard Twitter search as search. The primary axis for relevancy is time — this is very different to traditional search. Next, similar to video search, real time search melds search, navigation and browsing. Way back in early Twitter land there was a feature called Track. It let you monitor, or track, the use of a word on Twitter. As Twitter scaled up Track didn't, and the feature was shut off. Then came Summize, with the capability to refresh results — to essentially watch the evolution of a search query. Today I use a product called Tweetdeck (note disclosure below) — it offers a simple UX where you can monitor multiple searches, in real time, in unison. This reformulation of search as navigation is, I think, a step into a very new and different future. Google.com has suddenly become the source for pages — not conversations, not the real time web. What comes next? I think context is the next hurdle: social context and page-based context. Gerry Campbell talks about the importance of what happens before the query in a far more articulate way than I can, and in general Abdur, Greg, EJ, Gerry, Jeff Jonas and others have thought a lot more about this than I have. But the question of how much you can squeeze out of a context-less pixel, and how context can be wrapped around data, seems to be the beginning of the next chapter. People have been talking about this for years — it's not that this is new, it's just that the implementation of Twitter and the timing seem to be right — context in Twitter search is social. 74 years later the Notificator is finally reaching scale.
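To make the contrast concrete, here is a minimal sketch of a ranker where recency, not link authority, dominates the score. The field names, half-life and engagement boost are assumptions for illustration, not how Twitter search actually scores results:

```python
import math
import time

def realtime_score(tweet, now=None, half_life_secs=900):
    """Time-decayed score: a 15-minute-old tweet carries half the weight of a
    brand new one. Field names ("created_at", "retweets") are illustrative."""
    now = now if now is not None else time.time()
    age = max(now - tweet["created_at"], 0.0)
    recency = math.exp(-math.log(2) * age / half_life_secs)
    engagement = math.log1p(tweet.get("retweets", 0))
    return recency * (1.0 + 0.1 * engagement)

def realtime_search(tweets, query, limit=10):
    """Filter by query terms, then rank by the time-decayed score. The primary
    axis is time, which is the key difference from link-based web search."""
    terms = query.lower().split()
    hits = [t for t in tweets if all(term in t["text"].lower() for term in terms)]
    return sorted(hits, key=realtime_score, reverse=True)[:limit]
```

Run the same query a minute later and the result set reshuffles as new tweets arrive, which is why monitoring a standing query in something like Tweetdeck starts to feel more like navigation than search.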

A side-bar thought: I do wonder whether Twitter's success is partially based on Google teaching us how to compose search strings. Google has trained us to search against its index by composing concise, intent-driven statements. Twitter, with its 140 character limit, picked right up from the Google search string. The question is different (what are you doing? vs. what are you looking for?) but the compression of meaning required by Twitter is, I think, a behavior that Google helped engender. Maybe Google taught us how to Twitter.

On the subject of inheritance: I also believe Facebook had to come before Twitter. Facebook is the first US-based social network to achieve scale that is based on real identity. Geocities, Tripod, Myspace — you have to dig back in history to BBS's to find social platforms where people used their real names, and none of those got to scale. The Twitter experience is grounded in identity — in knowing who it was who posted what. Facebook laid the groundwork for that.

What would Google do?

I love the fact that Twitter is letting its business plan emerge in a crowd-sourced manner. Search is clearly a very big piece of the puzzle — but what about the incumbents? What would Google do, to quote Jarvis? Let me play out some possible moves on the chess board. As I see it Google faces a handful of challenges to launching a now web search offering. First up — where do they launch it, Google.com or now.Google.com? Given that the now web navigational experience is different to Google.com, the answer would seem to be now.google.com. Ok — so move number one: they need to launch a new search offering, let's call it now.google.com. Where does the data come from for now.google.com? The majority of the public real time data stream exists within Twitter, so any http://now.google.com/ like product will affirm Twitter's dominance in this category and the importance of the Twitter data stream. Back when this started, Summize was branded "Conversational Search," not Twitter search. Yet we did some analysis early on and concluded that the key stream of real time data was within Twitter. Ten months later Twitter is still the dominant, open, now web data stream. See the Google Trends data below — Twitter is lapping its competition; even the sub-category "Twitter Search" is trending way beyond the other services. (Note: I am using Google Trends here because I think it provides the best proxy for inbound attention to the real time microblogging networks. It's a measure of who is looking for these services. It would be preferable to measure actual traffic, but Comscore, Hitwise, Compete, Alexa etc. all fail to account for API traffic — let alone the cross-posting of data; a significant portion of traffic to one service is actually cross-postings from Twitter. The data is messy here, and prone to misinterpretation, so much so that the images may seem blurry.) Also note the caveat re: open — most of the other scaled now web data streams are closed and/or not searchable (Facebook, email etc.).

[Image: Google Trends data on Twitter vs. other real time services]

Google is left with a set of conflicting choices. And there is a huge business model question: does AdSense work well in the conversational sphere? My experience turning Fotolog into a business suggests that it would work, but not as well as it does on Google.com. The intent is different when someone posts on Twitter vs. searching on Google. Yet Twitter, as a venture-backed company, has the resources to figure out exactly how to tune AdSense or any other advertising or payments platform to its stream of data. Lastly, I would say that there is a human obstacle here. As always, the creative destruction is coming from the bottom up — it's scrappy and prone to being written off as NIH. Twitter search today is crude — but so was Google.com once upon a not-so-long time ago. It's hard to keep this perspective, especially given the pace at which these platforms reach scale. It would be fun to play out the chess moves in detail but I will leave that to another post. I'm running out of steam here.

AOL has taken a long time to die. I thought the membership (paid subscribers) and audience would fall off faster than they have. These shifts happen really fast, but business models and organizations are slow to adapt. Maybe it's time for the Notificator to go public and let people vote with their dollars. Google has built an incredible franchise — and a business model with phenomenal scale and operating leverage. Yet once again the internet is proving that cycles turn — the platform is ripe for innovation, and just when you think you know what is going on you get blindsided by the Notificator.

Note:    Gerry Campbell wrote a piece yesterday about the evolution of search and ways to thread social inference into  search.    Very much worth a read — the chart below, from Gerry’s piece, is useful as a construct to outline the opportunity.

[Chart: Gerry Campbell's emerging search landscape]

Disclosure.   I am CEO of betaworks.    betaworks is a Twitter shareholder.  We are also a Tweetdeck shareholder.  betaworks companies are listed on our web site.

Clever interface

My wife got me a Chumby. The interface is surprisingly well done — this is how you authenticate the device. Clever: you can see the pattern across the room, and no typing is necessary:

Summize and Twitter

On Monday Summize ran a test partnership with Twitter to cover WWDC, and the results were fairly extraordinary (a colleague mailed me … "holy fuck"). The raw data is displayed below. Traffic peaked at around 190 queries per second, with momentary spikes well above that number. For context, this is close to the search load that AOL manages today (at its peak AOL was doing several times that number).

People came, they searched … but they also seem to have left their browsers open, watching the WWDC conversations flow by. This is interesting and unusual — search as a browse/monitor experience is different to the way search has been thought of to date. We have also seen this with the trending topics on Summize. Conversational search is a big idea — the Summize team is starting to figure it out.

[Chart: Summize query traffic during WWDC]


Ruler

Billy created a wonderful little ruler for the iPhone. It's interesting, it's dislocating and it's beautifully done; lovely to see wood grain on a device.

F8 and that Telegraph road

The launch last week of Facebook's platform initiative, F8, has generated a lot of talk, much of it in the mainstream press. It's a compelling story: Facebook is becoming a platform, outmaneuvering Myspace, doing to the web what Microsoft did to the PC. It's a story we have heard before; it seems to recur periodically. However, the announcement last week was mostly about distribution — it didn't involve deep or open access to Facebook data, nor open access to its infrastructure. F8 as it stands today is a partnering platform. This is one more small step in a long negotiation taking place between web sites over how data is owned, how it's shared between sites, and how people navigate through services from one site to another. This conversation is still in its infancy.

XML really began the process of lateral data flows between sites, and the vision of the semantic web offers a rich set of possibilities — yet it's early days. Most sites still operate in vacuums and most user data is still stuck in proprietary silos. And while the technology certainly needs to evolve, so do the scope and kind of business arrangements. The web of contracts — contracts between vertical sites, contracts between sites and users — needs to evolve in order for the vision of the semantic web to reach some of its compelling end points. Weaving back to the Facebook announcement: what happens next is more interesting than what happened last week. Facebook has taken a different approach to Myspace, which has opted to control much of its third-party innovation through fairly simplistic interfaces and binary, business-driven rules — more like a traditional media company than a service that lets the community really build on top of it in a meaningful manner. As the Facebook platform evolves there are a handful of things I will be watching:

1. How deep are the APIs that Facebook is going to present to the community? Facebook Markup Language is a proprietary API; the "platform" may be wide in terms of distribution but it's not deep. There is little to no access for third parties to the social data or infrastructure that makes Facebook such an interesting service, and it's not open for developers to just build on — everyone accepted into the platform has to be sanctioned by Facebook. The degree of openness, real openness (vs. marketing gibberish), will dictate the depth and the value of the platform. Amazon has done a great job at developing a set of platform services — starting with the affiliate model, extending it into community and then the Mechanical Turk and the elastic computing cloud services. These web services were built step by step, along with trust and a degree of openness that surprised many. Pretty much every startup I work with today is using EC2/S3 — if Facebook is going to have the same influence over the web application space, they need to open up more than a distribution funnel. iLike's weekend server hunt demonstrates a need on the infrastructure side, but there is also a real need around social data. Offering Facebook users the ability to port their social data — their social network — across applications, and letting application developers innovate on top of that data set, would be really interesting.

2. How will the application metaphor evolve? I see the metaphor Facebook has applied as the most interesting thing in the announcement last week. The web has spawned many interesting platforms for micro application development. Applets, plug-ins — from WordPress to Firefox to Myspace there is an active ecosystem of development around many web sites. But the term application suggests user control beyond a widget or plug-in. Applications are often monolithic, and the management of applications by the underlying OS is usually benign and in service to the application (get me that device driver) — the term application presents a high bar for Facebook to jump over. To me the use of the term suggests a rich set of APIs and a clearly defined layer — a layering of both technical and business terms. It's an exciting challenge to see if they can make this truly an application environment. And if they do, what is Facebook's relationship to these applications? The identity issue below only scratches the surface of this question. It was fascinating to me that in the announcement last week most of the mainstream press looked in the rear-view mirror for metaphors — this was going to be to the web what Windows was to the PC. I hope not — we don't need another OS; what we need are open development platforms and open access to data. I did a lot of work on platforms a long time back — back in 1998 I invested in a company called WebOS that tried to go down the path of applying the desktop metaphor to the web, of duplicating the inadequacies of the desktop on the web. Few people compared last week's announcement to Adobe's Apollo — Apollo is set up to be a more traditional, extensible platform. One of the companies I am working with — I'm In Like With You — is developing much of its service in Apollo. Apollo is truly a web application environment, offering state management outside of the browser; for example, Apollo will let me do my web mail while I am unconnected. But Adobe is building this as a platform service: like Flash, the intent is to proliferate the tool set across the web, developers will adopt it as will end users, and like Flash it will provide revenue from scaled developers paying Adobe a license fee. This is a platform business model that the market understands. A cross-platform runtime isn't as sexy sounding as F8, but it might be more meaningful. And then there is Firefox 3 — another valid comparison that didn't seem to come up in many discussions.

3. How will application providers be promoted in Facebook? This is critical to understanding the underlying business terms between the distributor and the application creator. Last week's announcement was about distribution, and it formalized an approach for Facebook partners — business development in a box, a highly scalable approach to partnering. But what are the underlying economic drivers? At AOL, promotion and positioning was usually governed by dollars spent. At Google it now seems to be about long term strategic value: years ago the services tiled above Google search results were best in class — for finance-related searches (search for a stock ticker) Yahoo Finance was promoted, and Mapquest was the default when you searched for a location. Then, slowly over time, Google services received prominence equal to or better than the others. Today it's pretty much all Google services up front, in default positions — nice to leave some pointers for competitors, but as Google knows well, defaults drive traffic and traffic drives revenue.

Screenshot of Facebook's application directory

Last week the COO at Facebook, Owen Van Natta, said:  "How are we promising not to trump your application? We're going to level the playing field, developers won't be second-class citizens–we're going to compete directly with them."   Accordingly, the Facebook application directory is organized today mostly by popularity — but mostly is different to always. 

See the ringed sections of the screenshot — unlike third parties, Facebook doesn't list the number of users of its own applications (Marketplace is a Facebook application). And note that the application directory (boxed) starts with Facebook's top applications. Finally, as the user expands and contracts the application list (the "more" caret, where the arrow is pointing), Facebook's one advertisement on the page moves down, partially below the fold. Tell me this execution isn't set up to collide with business priorities.

In Japan, on the cell phone, DoCoMo understood that with a limited UI, placement of third-party services needed to be ranked by usage. Is Facebook headed down the same path — and what does the COO really mean? Facebook owns this garden; competing directly with application providers is going to be, well, interesting.

4. How will Facebook manage identity and data across third-party applications? Some sites promoted in F8 seem to be managing identity independently from Facebook; others are doing a one-click install and sign-in (but even in the case of Mosoto, you are signed in for chat, yet to file share you need to sign in again?). Does Facebook become an alternative identity broker on the web? If so, they are going to have to be a lot more open in their approach to data — OpenID is a pretty high standard. Facebook has traditionally had a fairly rough privacy policy — they gather a lot of data about their users and there has been a fair amount of controversy about it. As they manage data across applications this is only going to get more challenging.

5. Lastly, how does Zuckerberg's social graph extend beyond the core college audience and behavior? The feed feature added a whole new dimension to Facebook and significantly extended the time people were spending on the site; Comscore data suggests it went up by over 5 minutes per day. Fotolog has a similar feature that alerts users to new uploads by friends — it's a significant driver of our navigation-based traffic. But how do the audience and the use cases evolve beyond the core? Will people outside of college enter real names into profiles, and will the social dynamics of the broader audience fit with services that were built for a student audience? Over the past year I have started to use LinkedIn more — it's starting to become useful, the network is large enough, and the alerts I get from LinkedIn are useful, not spam. I signed up for Facebook shortly after they opened up, but I didn't go back until friends started inviting me. Over the past six months I have visited the site to confirm friends, but there is nothing useful about Facebook for me as yet — and useful aside, it had better be either personal or entertaining. Like so many other social networks it's about collecting connections, but what are the services that are going to drive usage for me? I don't see it yet.

This is a quote from GigaOm's review of the launch event; it's worth a slow read. "Zuckerberg says you can serve ads on your app pages and keep all the revenue, sell them yourselves or use a network, and process transactions within the site, keeping all the revenue without diverting users off Facebook. This was the opposite to what was stated in the WSJ article earlier this week, and gets by far the biggest reaction from the crowd."

This got the biggest reaction from the crowd?? Maybe a crowd packed with Web 2.0 service and feature developers in need of an audience found it interesting. If a user today opts in to use your site on Firefox, or your application on Windows, or even within the grandfather of walled gardens, AOL, you still get to keep the ad revenue. So why is this a big surprise? Maybe the attention the announcement garnered is also about the proliferation of web-based features searching for a destination to marry themselves to.

Intent and that Telegraph Road

A long time ago came a man on a track
Walking thirty miles with a pack on his back
And he put down his load where he thought it was the best
Made a home in the wilderness

I do think it's worth asking what the intent behind the Facebook announcement is — who is it meant to serve, and what is the need behind the F8 initiative? The Facebook was launched as a service for US college students. It was full of social tools: it let you build out your own network, post events, notes and photos, and most importantly it was all private, so that students could develop a profile that is real, vs. much of the fantasy-based profiling you see on Myspace and other sites. Facebook achieved a lot of its early traction for the same reason Cyworld did — you could enter your college and your year and actually find friends, colleagues, friends-to-be, crushes etc., because people used real names on the service; emails were verified by domain and you could find anyone in your university. This was and is a big idea — few sites have a relationship with their users that maps to real identities. Anyone who has attended a US university or college knows exactly what this is about. Then came the monetization.

Facebook started with advertising and achieved some remarkable successes: by mid-2005 they were profitable, with 2,000+ colleges and 20,000+ high schools on the service. And the audience was rabidly engaged — two-thirds of the active membership came to the site every day. But look at Facebook's reach through 2006 (reach tracked by Alexa) — it is flat, because by 2006 they had tapped into an audience and grown the business about as far as it could go given its natural limitation: students. They were now faced with the question of how to scale the business beyond its base. They could go global — services like FriendsReunited in the UK and Australia are demonstrating, albeit with differences, that the market exists outside of the US for a Facebook-like service. And/or they could opt to extend the scope of the Facebook offering and try to reach a broader audience in the US beyond students. They decided to push on both fronts, but most significantly, in September last year Facebook opened up to users irrespective of whether they were in school or not. In 2007 Facebook's reach more than tripled. Before they opened the doors to the broader audience they were adding 15,000 members a day; today they are adding 100,000 a day (NYT stat; note Fortune says 150,000 a day). They now have 24M active users, posting mostly photos, notes and events.

Then came the churches then came the schools
Then came the lawyers then came the rules
Then came the trains and the trucks with their loads
And the dirty old track was the telegraph road

But now that reach has extended, they need to find ways to get people to spend more time on the site. Here comes the platform initiative. The platform that was released last week is about extending Facebook in a different manner to the other social networking sites. It's about continuing to extend Facebook's features by offering distribution of third-party applications on Facebook. Yet the features being added are contained within the Facebook experience. Out of the gate it's a great opportunity for fledgling sites, particularly sites that are more of a feature than a destination — Facebook is offering one-click installs for applications within Facebook. It's about distribution, and it's about continuing to drive the amount of time people are spending on the site, which in turn drives advertising. Facebook is playing the same game media aggregators have played since the dawn of time. Whether it's Disney, Yahoo or AOL — it's all about getting in front of the distribution firehose — they are selling their audience. Day one it's not set up as a sale. Remember that AOL used to pay service providers to offer content and services within the walled garden — then in 1996, when AOL hit scale, it stopped paying providers and started charging; bit by bit AOL flipped the model. This all seems far less interesting and ambitious than the headlines suggest. Zuckerberg told Kirkpatrick that what Facebook is unveiling would be "the most powerful distribution mechanism that's been created in a generation." I hope it is more than that. If Facebook's F8 is about trying to extend the size and scale of innovation and services in what amounts to another walled-garden experience, it will be another building block in the long history of web hype. The Facebook has a great social platform to build off; I hope they are brave enough to let their users take their data and extend services beyond their control, beyond the walled garden.

A last point worth making is the absence of Microsoft, Yahoo, eBay and AOL in the platform / social networking space. Live.com was meant to be a web development platform — but things hewed back to Windows with the launch of Vista. Microsoft developed much of the thinking behind the web as a platform — with Hailstorm and then Live.com — but IE7 and Live haven't taken the lead. Yahoo made all these great acquisitions, many of which they have left in silos and failed to build upon. eBay has this amazing social / trust network that links merchants and end users. We think of profiles as being specific to social networks, but eBay's profiles, as they relate to trust, commerce and communications (Skype), are a trove of data that could be opened up to users, applications and the web as a whole. And the merchant relationships — what about extending them into advertising? Likewise with AOL — there was a recent comment about the importance of opening up AIM, again… It's amazing to see the leaders of earlier generations of the web MIA — gone from this social networking race.

The semantic web needs to be distributed at its core; another walled garden is too low a bar for a really powerful and interesting social network to aim for. I hope Facebook actually steps beyond the marketing hype and delivers a social platform for the web.

Happy birthday Fotolog

Today is Fotolog's 5th birthday — a few words, and some images, to mark the day. It has been an amazing five years for Fotolog. The history of the site is fairly straightforward. Fotolog was started in mid '02 by Scott Heiferman. Adam Seifer came on board soon after and took over the project as Scott focused on building Meetup.

The vision of the service was to cater to new picture-taking behavior — as people were starting to adopt digital cameras, the use cases around the capture and processing of images were also evolving. Pictures have always been social — but the digital world was giving images a whole new social dimension. Fotolog was created as a social media network — the genesis was photo blogging, the result was a mixture of social networking and user-created media sharing. This is what Scott's original Flog looked like:

First Cyper Picture

The layout of Fotolog was and is intentionally simple. Fotolog has resisted the temptation to add feature after feature — rather it has stuck to offering a handful of features; similar to Craigslist, the focus has been on the content and the conversations. From the early days Scott and Adam had the vision that the pages on Fotolog needed to be social. They needed to include not only your images, but also images from across the network, providing a visual navigation that today drives much of the time our members spend on the site — a self-formed, organic distribution system, letting members see and be seen. Complementing this social network of images, they added comments and guest book entries — making the experience one where media intersects with communications: day in, day out, millions of images collide with billions of conversations. The growth of Fotolog has been steady and consistent — but it took two years to gather real steam, as the chart below illustrates. In early 2005 we hit a million members — amazing to consider, since we are now adding close to a million a month.

Milestones Flog

The phenomenon started in Brazil. Adam will tell you that in those early days he was concerned that Fotolog might get stuck in Brazil; Portuguese isn't a global language. But Brazilians have turned out to be a strong early indicator of global internet phenomena — from ICQ to Hotmail to Orkut to Fotolog, Brazilians seem to have a knack for early adoption of global social platforms. The Fotolog audience started skipping geographies and borders, and today we sign up members from approximately 70 different countries every day. Our audience is still very large in South and Central America, and we have complemented that base with strong European growth. The primary language of Fotolog is images; beyond that, the chatter around the site includes and mixes many different languages.

This is what the home page looked like when we hit a million members. It's not that different to what the home page looks like today — again, simplicity and consistency have mattered to the history of Fotolog.

 1MM Flog'ers

Out of interest I checked how many of the 15 members with images above were still active on Fotolog. A quick check of member names and recent posts indicated that nine of them have updated Flogs in the past six months. Four of them have updated their Flog in the past three weeks — juju15, lepadilha, tabata, mash — it's amazing that after years members are still coming back and using Fotolog to share their world.

Yesterday we had 673,150 uploads to the site — with our regime of one photo a day and 8.3M member accounts, that means that yesterday a little over 8% of the people who have ever signed up to the site uploaded a photo to Fotolog. That doesn't include all the members who just visited friends' Flogs — but to have 8 percent of your membership coming back every day is pretty engaging and pretty amazing. Fotolog also hit #18 on Alexa earlier this week — our highest ranking ever. The traffic on the site continues to surge and our reach continues to grow (see a ranking vs. Facebook). For people who want to relate us to other US photo sites (which I always say is a poor comparison, given that Fotolog is about self-publishing and socializing — photos just happen to be the medium, they aren't the end), see the relative traffic rankings over the past three years vs. other photo sites: Photobucket is picking up share, Flickr seems to be flatlining, and Shutterfly is still a seasonal processing site. Fotolog is a testament to the creativity the internet has unleashed — millions of people sharing moments of their lives through images and conversations.

A thank you from the team in NY to all of the people and all of our members who have made this global collage of conversations possible.  

And read Adam's birthday post here.

The Photobucket Sale and Fotolog

In the wake of Photobucket's sale last week to News Corp., people have asked me two questions: 

(i) How is Fotolog different from Photobucket?
(ii) Why did News Corp. buy Photobucket?

With the week now over, let me take a pass at answering both questions.  

How is Fotolog different from Photobucket?  

Photobucket and Fotolog are both built around media (photos and videos) and they are both related to social networking.  And they are both experiencing rapid growth.    But that’s where the similarity ends. Photobucket is a tool that is agnostic of destination – while Fotolog is a destination. Photobucket stores image-based media, then distributes it to your page on social networking sites such as Myspace, Bebo, Piczo, Friendster, etc. Fotolog is a destination where you post one image a day which then becomes the center of a social interaction/chat with your friends.  It’s intentionally simple – stripped down and focused on the social media experience. 

The Photobucket acquisition affirms the importance of user-generated content of any media type — images, video, etc. — and media's emerging relationship with social networking. I often call Fotolog a social media site because it's all about the intersection of media and communications, two things which were once like oil and water — they traveled on separate pipes and represented distinct experiences. But they are now coming together in fascinating ways. It's early days, but I believe that the combination of media and communications — gifting, sharing and transferring social capital between users/members, via user-generated content or digital assets that represent identity — is more than a trend.

The first generation of social-networking sites stressed self-publishing over connections (from Geocities to Tripod to Blogger). The next generation focused mostly on connections (SixDegrees and Friendster are the classic examples here — tools to gather friends and connections, as social capital accrues, in theory, to the people with the most connections). The third and current generation of sites blends media with connections — each with a different emphasis.

Focusing in on Photobucket and Fotolog, the difference between the two is clear when you look at traffic and usage data. Both sites are on a tear. Alexa (link #1 below) ranks Fotolog as 24th largest in the world — Photobucket is 44th. As CEO of Fotolog, I'm obviously privy to more data, but focusing on proportionate growth, the Alexa link shows rapid growth for both sites. Comscore measures Photobucket with 28M uniques and us with 13M. Comscore is panel-based, and at Fotolog we are working with some other data shops to confirm this data. We recently started dropping Quantcast pixels on our site and they track us at 26M uniques — data sources aside, the point here is that both sites are large and growing fast.

Site usage patterns tell a different story. See the table below with Comscore data from March — the average minutes per day is highlighted. Photobucket averages 7 minutes per day while Fotolog averages 23 minutes per day. Fotolog does 261M total visits, compared to 90M for Photobucket. Media-wise, Photobucket has 2.5BN photos; Fotolog has about a tenth that number at 230M — but in order to maximize user response, Fotolog only permits one upload per day. Photobucket also offers video, which Fotolog is targeting for the future. Socially, the sites couldn't be more different, given Fotolog's status as a destination with an emphasis on conversations. Our site has more than 2BN conversations posted, approximately 10 per image.

Data table

In terms of user profile, Photobucket and Fotolog are both very international. Alexa tracks 29% of Photobucket's audience as US-based (Myspace-related) with a further 5% in the UK (Bebo) and the remainder apparently pretty evenly spread worldwide. I do wonder how accurate this data is, as approximately 60% of Photobucket's traffic is tethered to Myspace, which in turn is mostly US traffic. I know in Fotolog's case the Alexa geographic ratings are different to our Google Analytics numbers. Last month a fifth of our traffic was in Southern Europe (Spain, Italy, Germany), which doesn't come across clearly on Alexa. Fotolog has members signing up every day from 70 different countries, with the bulk of our audience in South/Central America (where the viral growth first took off) and Europe. The site is growing in some European countries, month over month, at a blistering 28%. Numbers like that compound fast. And the growth is 100% organic, with no marketing or member incentives.
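To put "compound fast" into rough numbers, here is a quick back-of-the-envelope sketch; the 28% month-over-month rate is the figure above, and the twelve-month horizon is just an illustration, not a forecast:

```python
def growth_multiplier(monthly_rate, months):
    """Multiplier from compounding a month-over-month growth rate."""
    return (1 + monthly_rate) ** months

# 28% month-over-month, sustained for a year, multiplies traffic roughly 19x
print(round(growth_multiplier(0.28, 12), 1))  # -> 19.3
```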

So why did News Corp. buy Photobucket?

The first reason that is much cited for the transaction is defensive — News Corp. / Myspace bought Photobucket to make sure no one else bought them. News Corp. understands that the media on its social network is vital to the experience, and having a third party manage the bulk of the media on MySpace was a risk. This concern can only have been exacerbated by the rise of YouTube and its purchase by Google. Moreover, Photobucket's push into video must be attractive for News Corp. as a foil for its competition with YouTube — it's no coincidence that since that deal, Myspace has been so aggressively promoting its videos on its homepage and elsewhere. So media matters — but this is more than media or UGC. It's also the most common form of digital personalization. Taken out of their analog construct, photos are a very simple form of digital customization; it's far easier to take a picture of something than to render some customization in Photoshop. On Fotolog we have tens of thousands of pictures of people's computer screens while gaming, or desktops, or pictures of people's sneakers — Fotolog members have posted over 60,000 pictures of Converse Chuck Taylors — or custom images. In other words, this is about personalization, and the camera or a "picture" is just a tool.

Beyond this strategy, my guess is there is a broader opportunity — Photobucket is a photo and video tool that could become a web-wide locker for the storage of digital media. Just as eBay's acquisition of PayPal wasn't meant to serve just eBay, my guess is that News Corp.'s purchase of Photobucket isn't just meant to serve MySpace. The opportunity is to serve the web; I suspect that's the broad strategy. Granted, there are risks to a broader strategy — eBay didn't execute effectively, and while PayPal has recently picked up share, the first few years after that acquisition amounted to treading water at best. The fact that Google is now driving into the payments business is a testament to that failure — eBay had the running room to be the web payments platform. There is also an audience risk — Photobucket users might not pick it as the service of choice for other media types, the audience may move on, and News Corp. could be faced with a whole new dominant parasite on its host in 18 months. Given all of this, the deal once again distinguishes News Corp. as one of the media companies in the world driving headlong into building digital media assets that are indigenous, not extensions of existing franchises.

Lastly, people wonder what the Photobucket deal means in terms of valuation and monetization of social media sites. On this front, the acquisition is good news for Fotolog and our peers. In contrast to YouTube, Photobucket demonstrated that UGC could be effectively monetized, a path that we are following at Fotolog. The market has placed a high value on a popular tool that facilitates social media networking communities. That reflects well on the segment and on both the destinations and the tools associated with it.

Resolution, from Thomas the Tank Engine to the Wii

Thomas the Tank Engine

I was thinking about how the resolution of an experience changes the experience. The thoughts began while playing with my children. My son loves to play with trains — small Thomas trains, small tracks you piece together and trains you push around. For Christmas my brother asked me what my son would like, and I thought that a battery-powered train (see right picture) would be a hit. It was, but it also changed the way my children play with the trains. With the battery-powered train the focus became setting up the tracks in some form of circular shape and then watching the train go round and round. Play with the push trains had been much more imaginative — it was about setting up the tracks, creating narratives and pushing the trains around, speaking the narratives out loud. Thomas the electric tank engine stopped most of that. It was now about just watching him (the train, that is) chug round and round the track, usually pulling cars, sometimes on his own. Less creative, less social, less physical, and shorter time-wise. I was left thinking: how does the resolution of media and experience affect the experience of the media and the play associated with it?

It seems that, as with comics, if media or the experience isn't too polished, too finished, it leaves plenty of room for the human mind to fill in the gaps and engage in the experience vs. observe the experience. This reminded me of a great interview with Brian Eno where he talked about the importance of leaving media and cultural products open and "unfinished" (from 1995; I found the original and posted it to the findin.gs DB). But it also seems like engaging kinesthetically with the play transforms it — as my wife said when I asked her why the play was different with the trains you have to push: because the kids "have to be the motion," not observe it.

The Wii is unfinished: the resolution is low, the characters (in sports, for example) are comic-like, and the physical engagement in the experience manages to trick the human mind, at least mine, into feeling that the experience is pretty much "real". It's amazing what a little bit of sound and a slight vibration in the remote does — it's sophisticated enough to tell my brain that the experience is so close to tennis or boxing that it's real. It's interesting to think about how these somewhat rough, unfinished experiences are open enough to let one become fully immersed. Like WoW vs. Second Life. The WoW environment is unfinished and pretty rough — but the experience is one of total immersion. And medieval narratives are such a dominant underpinning in our culture that the moment you engage in WoW you have a narrative to engage with. Second Life seems more polished, and it doesn't have a narrative overlay; much of it is about events and engaging people in living a "second life". Now it's time to get back to my weekend and leave this post, well — unfinished.

Jim Gray Missing / Amazon Mechanical Turk

Great example of what the Turk can do — a distributed application to search for a missing person in satellite images. It takes five minutes to sign in and search five images; details below:

Amazon Mechanical Turk — Jim Gray Missing: Help find him by searching satellite imagery. Background: On Sunday, January 28th, 2007, Jim Gray, a renowned computer scientist, was reported missing at sea. As of Thursday, Feb. 1st, the US Coast Guard has called off the search, having found no trace of the boat or any of its emergency equipment. Follow the story here. Through the generous efforts of his friends, family, various communities and agencies, detailed satellite imagery has been made available for his last known whereabouts.

Multitouch interface

This is a video of technology under development that allows the user to interact with a screen using more than one finger at a time. It's beautiful to watch and it offers a glimpse of how interaction with people, media and services is going to evolve. It makes me want to touch and play with it; the orange bubbles are great. (Also available on YouTube, see here. Thanks to Jerry's list for the link.)