Google & Facebook & Twitter, oh my!


Silicon Alley Insider is discussing an interesting analysis suggesting that Facebook could be a “Google Killer” thanks to Facebook’s faster rate of growth and the claim that Facebook now accounts for 19% of Google’s incoming unique user traffic, up from 9% a year ago.

My intuitive take on this is that the analysis is misleading and seriously flawed for several reasons:

1) Rates of growth tend to be vastly larger for sites still far from the market saturation levels we see with Google, and which I think we may soon see with Facebook. The new 800-pound gorilla on the social scene is Twitter, which grew at over 1,000% last year. You can’t 10x your current traffic for long without exhausting all the people on earth, so all these rates must slow, and soon. E.g., at 1,000% annual growth starting from 5,000,000 unique users you’d exhaust earth’s population in about 3 years, 2 months.
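The saturation math here is simple enough to check in a few lines. This is a hypothetical sketch of mine; the roughly 6.7 billion world population figure is my assumption, not a number from the post:

```python
import math

# Assumed figures: 5 million uniques growing 10x per year (~1,000% growth),
# capped by a world population of roughly 6.7 billion people.
users = 5_000_000
population = 6_700_000_000
annual_factor = 10  # 10x per year

# Years until the user base would exceed the entire population:
years = math.log(population / users, annual_factor)
print(f"{years:.2f} years")  # about 3.13 years, i.e. roughly 3 years, 2 months
```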

2) Twitter will chip away at Facebook users’ time online, and fast. No major application has grown at the rate we now see at Twitter. For many reasons we’ll see Twitter continue to grow explosively for at least a few years, and I’ll be surprised if it does not rival Facebook within 3 years in terms of use. Most high-tech early adopters are already shifting time from Facebook to Twitter, and major media is showing huge enthusiasm for promoting Twitter feedback on TV to mainstream America. Twitter, not Facebook, is the application with the most disruptive potential.

3) Monetization of social media sucks, and will continue to suck. Google can easily monetize searches for products and services, while Facebook continues to struggle to find ways to turn its vast numbers of page views into big money. Although they are likely to make modest progress, I do not see social networking as all that lucrative, whereas keyword search, almost by definition, remains the highest-value internet monetization framework.

4) The claim that 19% of Google uniques come from Facebook seems very, very dubious. This number appears to be from Comscore and does not even make sense. Facebook searches do not generally direct people to Google, so presumably this is suggesting that a staggering number of people leave Facebook to go do a search at Google? I’m trying to find more detail about this, but it does not pass the sniff test even if they are simply stating that people tend to jump to Google after visiting Facebook, which is correlation and probably not causation.
This suggests that Facebook’s 236m uniques drive (0.19 x 772m) ≈ 147m uniques to Google? Something is Facebook fishy here.
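A quick sanity check on those numbers, using only the uniques figures quoted above:

```python
google_uniques = 772_000_000    # Google monthly uniques cited above
facebook_uniques = 236_000_000  # Facebook monthly uniques cited above

# 19% of Google's uniques supposedly arrive from Facebook:
implied = 0.19 * google_uniques
print(f"{implied / 1e6:.0f}m uniques")  # ~147m

# As a share of Facebook's own audience, that is implausibly large:
share = implied / facebook_uniques
print(f"{share:.0%} of all Facebook uniques")  # ~62%
```

In other words, nearly two thirds of Facebook’s entire audience would have to be jumping over to Google, which is exactly why the figure smells wrong.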

I am confident that all three of these applications will continue to thrive because each is filling a different online need and doing the job well. There is no need to converge online activity more than has already been done. For example, it’s not inconvenient to switch to your banking or travel booking website for those tasks, and many people probably prefer this to having a single “one stop shop” for all online activity. Ironically, Facebook’s attempts to imitate Twitter may actually accelerate the growth of Twitter, which seems to be a better way to communicate quickly, effectively, and superficially with many contacts. Facebook, however, has been making good progress with its “Facebook Connect” efforts that allow users to log in to other sites easily and then post blog comments and other activity to their Facebook account. Facebook will thrive, but as the recent downward revaluations suggest, Facebook is no Google and will never be Google. Search trumps social in terms of making money, and the mother’s milk of internet growth, and to some extent innovation, is money (though I’d say innovation is fueled by the lure of wealth as much as by real wealth).

The Man Who Sued Google – and won $731


The following fun item came up today from Aaron, who managed to sue Google in small claims court over a Google AdWords / AdSense dispute and actually … won the case. Here’s the story.

AdSense expert Jennifer Slegg suggests Aaron may have been violating the terms, and I think most advertisers would agree that we want Google to police AdSense very carefully to avoid the many problems that come when publishers’ material is unlikely to generate business for the advertiser.

However, I’m also very sympathetic to Aaron’s criticisms of Google’s failure to bring enough transparency to the AdSense and ranking processes, despite very noble individual activity by guys like Matt Cutts, Adam Lasnik, Brian White, and pretty much all the engineers I’ve talked to in person. My beef is with Google’s company policy of sharing too little information and providing diagnostics too weak to let webmasters fix common problems or challenge fairly subjective ranking decisions, especially when what Google sees as questionable linking activity is involved.

Google suggests that ranking opacity prevents spam, whereas I’d argue that on balance more openness would help people avoid many common practices that now penalize them without their even knowing. Just last week, for example, Matt pointed to a very expensive Forrester business report on “legitimate” SEO approaches that suggested a “paid blog posting” tactic that could get both the blogger and the referenced site in ranking trouble with Google. Although Matt is one of the last people at Google I’d accuse of being “too secretive”, the overall policy is too opaque to reasonably let legitimate webmasters make the best decisions for their sites and clients. The Webmaster Console has helped, but it’s too little too late in my view. Google owes every webmaster a clear answer to the simple question: why is my site ranked below clearly inferior sites? Usually the answer would involve a downranking from link manipulation, selling links to other sites, or other things Google finds offensive and lists vaguely in the Webmaster Guidelines.

I do compliment Google on the fairly new webmaster forums feature, which can be very helpful in diagnosing problems with websites:
http://www.google.com/support/forum/p/Webmasters?hl=en

Google Social Search Wiki Launches


Today’s tech blogosphere buzz is about Google’s new wiki search feature that allows users to rank their own results. This appears to me to be a splendid idea, although I agree with those who say it won’t get used much.

However, for those who do use it, this may eventually allow a kind of search ranking we have never seen, where user-defined preferences trump the mistakes of the mysterious algorithmic magic, gradually giving the user a great set of results well optimized to their needs.

I’d suggest that “perfect individualized search” may only require two basic steps. The first is a *discovery* step, where you surface content relevant to your particular query and then plow through it manually to determine which sites best fit your needs; Google does a pretty good job of facilitating that right now. The second would allow you to build on those “personally filtered” results in various ways, some as simple as just listing them in rough order of relevance to you, as Google is now doing.
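As a toy illustration of that second step, here is how a personal re-ranking pass might look. This is purely hypothetical code of mine, not how Google’s feature actually works:

```python
def personalized_rank(algorithmic_results, promoted=(), removed=()):
    """Re-rank a result list using the user's saved preferences.

    promoted: URLs the user marked as most relevant, kept in the user's
              chosen order ahead of everything else.
    removed:  URLs the user deleted from their results entirely.
    """
    top = [url for url in promoted if url in algorithmic_results]
    rest = [url for url in algorithmic_results
            if url not in promoted and url not in removed]
    return top + rest

results = ["a.com", "b.com", "c.com", "d.com"]
print(personalized_rank(results, promoted=["c.com"], removed=["b.com"]))
# ['c.com', 'a.com', 'd.com']
```

The algorithmic ordering still does the discovery work; the user’s edits simply override it for that user alone.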

Is this a good Google idea? Yes! Will anybody much use it? Nope, because our habits as humans don’t incline us to be this organized. I had a great conversation a few days ago with the developer of the Reuters Calais semantic search, a brilliant tool designed to surface relevancy and meaning from massive document archives. We were noting how difficult it is to simply break the habit of using Google search, even when it’s not the most appropriate tool for the job at hand.

Funny primates, we!

Google Blog reports on the new search wiki

Google Knol – very good but very failing?


Google Knol, the Googley competition for Wikipedia, was announced with some fanfare and really seemed like a great idea. A ‘knol’ stands for a unit of knowledge, and articles are written by people who verify their identities and presumably have some knowledge of the topic. Community ratings are used to filter good from bad knol posts, presumably leaving the best topical coverage at the top of the knol heap.

However, as with many Google innovations outside of pure keyword search, Knol appears to be gaining little traction with the internet community. I say this because I rarely see the site linked to or referenced by blogs or websites, and also from my own knol page for “Beijing”, which, as the top “Beijing” and “Beijing China” listing, you’d think would have seen fairly big traffic over the past months, a span that included the Beijing Olympics. Yet in about six months that page has seen only 249 total views; that is fewer than many of my blog posts would see in just a few days here at Joe Duck.

So what’s up with the decisions people make about using one resource over another? Like Wikipedia, Google Knol is an excellent resource. Reading my Beijing page, for example, would give you some quick and helpful insights into the “must see” attractions there. It’s no travel guide, but it would prove a lot more helpful than many sites that outrank it at Google for the term “Beijing”. Google appears to have relegated its own Knol listings to obscure rankings, perhaps because linkage is very low given the low use of Knol. Like many Google search innovations, Knol appears bound for the dustbin of obscurity as Wikipedia continues to dominate the rankings for many terms (as it should; it’s generally the best coverage, although very weak for travel because it fails to capture commercial info adequately).

My simple explanation would be that we are prisoners of habit and have trouble managing the plethora of information resources that lie, literally, at our fingertips. We all have yet to understand much about how the internet works, and how inadequate a picture one gets by simply sticking to a keyword search and hoping for the best.

Steve Fossett ID, cash, jacket found near Mammoth Lakes, California. Hoax or real?


Hikers and searchers from the Mammoth Lakes, California area are reporting they have found an FAA ID for Steve Fossett along with a jacket and some crumpled money. I’m confused as to why this is only coming out now, two days after the find, since the Fossett story was international news for many weeks as thousands participated in a huge search for the missing aviator. He was lost after flying from Nevada on September 3, 2007.

Fossett was declared dead February 15, 2008. If this find is not a hoax, it implies Fossett may have been alive after the crash. However, it seems very odd that a survival expert of Fossett’s caliber could have survived the crash and wandered around leaving only a few items, rather than marking a large area with rocks and signals for the aircraft he knew would come looking for him.

The Fossett mystery continues.

More from Mammoth Lakes News

Danger Data: Fossett Flight?

Matt Cutts from Google


Matt Cutts at the Google Dance
Originally uploaded by JoeDuck

It’s always great to get a chance to talk to Matt Cutts at search conferences, though I didn’t have any good complicated search questions to bug him with this year. Matt is one of the early Google folks and arguably the most knowledgeable search expert in the world, since he’s one of the few people who knows the Google algorithm inside out. Matt is actually listed on a key Google search patent.

Today I noticed that Matt’s post about Google Chrome is near the top at Techmeme, after some early reports suggested Google was going to nab all the info people created via use of the Chrome browser. Although I do not worry about Google stealing the content I create using their tools, I was surprised in the discussion at Matt’s blog to see how little people seem to understand about how much of their data from searches, emails, and other online tools is analyzed by search engines, ISPs, and probably at least a few government agencies. I wrote over there:

Well, I’m sure folks like Marshall knew that Google was not out to steal content. What people should be more concerned about is how the Chrome datastream will be processed now and over time, and how open it will be to examination by companies for advertising purposes. Personally I’m OK with that, but I think many people are not, and the lack of transparency in this area bothers me.

Somebody even suggested I was foolish to think they’d use Chrome data to target advertising, to which I replied:

Josh – you are naive to assume Google does so little with the search term data they explicitly say they have the right to collect. In Gmail, for example, some portion of your message header is read by Google (probably just the subject and not the content) so that ads can be targeted to you on those topics. Google Toolbar collects a lot of information, and my understanding is that this helps target PPC advertisements, though I’m not sure about that. As I noted, I’m personally OK with this level of snooping, but I believe Google should make it much clearer what they do with the data they collect, and probably also offer options so users can delete any information they have created – including their search streams – as they see fit.

SES, SEO, Blogs


Blogging the conference has been a great way to test some ideas about blog ranking and to watch Google struggle to bring the most relevant content into the main search (they’ve done pretty well with blog search, not so well with regular search, which will have all the blog content listed in a week or so, basically too late to be all that helpful to users).  More importantly, the stuff from *this year* will probably be ranked above the SES San Jose 2009 information that is likely what people using that term will be searching for next year.   I’d think they could simply increase the value of ‘freshness’ for listings tagged as event-related.
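That freshness idea could be sketched as a simple scoring tweak. The function and every number in it are entirely hypothetical, since Google’s actual ranking is of course opaque:

```python
from datetime import date

def adjusted_score(base_score, published, event_related, today):
    """Boost fresh pages for event-related queries; decay stale ones."""
    if not event_related:
        return base_score
    age_days = (today - published).days
    # Brand-new event pages get close to a 2x boost; pages more than a
    # year old decay toward a floor of half their base score.
    freshness = max(0.5, 2.0 - age_days / 365)
    return base_score * freshness

today = date(2008, 8, 20)
# This year's conference page: fewer links, so a lower base score, but fresh.
this_year = adjusted_score(10, date(2008, 8, 1), True, today)
# Last year's page: more links and a higher base score, but stale.
last_year = adjusted_score(14, date(2007, 8, 1), True, today)
print(this_year > last_year)  # True: freshness outweighs the link gap
```

The point is only that a modest, event-tagged freshness multiplier is enough to let a current page outrank a better-linked stale one.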

I had a nice discussion about this “events” ranking challenge with Jonathan from Google at the party. The problem is that to combat spam Google does not push out blog content immediately, meaning that if you had searched for “SES San Jose”, especially a month or so ago, you would have been likely to get old, dated content rather than the current SES page you’d normally want to find. This appears related to linking issues (newer pages have fewer links), but I also think the regular engine is allergic to new content, which is why you’ll often find the most relevant Google results in blog search if the topic is covered heavily by blogs, as with SES or CES Las Vegas, where I noted the same “stale content” in the main search alongside great, current content in the blog search.

I remain convinced that some of these ranking challenges could be solved by a combination of more algorithmic transparency from Google and greater accountability from publishers, who’d agree to provide a lot more information about their companies so that Google could get a good handle on the credibility of the online landscape. This kind of webmaster identification is happening now in several ways, but I’d think it could be scaled up to include pretty much everybody publishing anything online (i.e., if you don’t register, you’ll be subjected to higher scrutiny).

More Tech Posts moved here