Tapping is way easier than typing 🙂 – excited to see this launch! I will follow up with a detailed post on some of the local features that launched with this release.
These days, you expect information at your fingertips. Starting today, we’re making it easier than ever to stay in the know and get the information you need quickly and easily. Starting today in the U.S., we’re introducing tappable shortcuts on the Google app for Android and iOS and Google.com on the mobile web that give…
via New shortcuts in Search help keep you in the know — The Keyword
There was a big splash yesterday from the allfacebook.com folks about Facebook declaring war on Google. In that post they said:
“While we suggested that the like had just replaced the link, it has now become abundantly clear what Facebook’s intentions are. Facebook wants to launch the social semantic search engine as we alluded to during f8. Now that the search results are officially showing up as Facebook search results, the war has begun.”
Using “Like” as an input into relevance is a fine idea – but Facebook (or Twitter, for that matter) is a long way from building a full-blown search engine purely on that data. Why do I say that? Fundamentally, “Likes” are ambiguous, random, and spam-prone (and that spam is nearly impossible to detect). These are big issues for any search engine to solve before it can label itself a search engine.
Google’s “Link” approach has many merits over Facebook’s “Like” approach; the link-based approach fares better primarily in determining context and detecting spam. To that end, it is technically somewhat easier to detect a spam link, or a link that was “paid” for SEO purposes. However, it is nearly impossible to determine whether a Like was a paid Like or a spurious Like. It’s just not easy.
On top of this, in Google’s ecosystem a link is only as important as its source – the domain it’s coming from. Google has built an authority system for domains, PageRank (PR), based on a very complex but trusted model – this system helps eliminate spam every time we perform a search on Google. If there were to be an equivalent from Facebook, it would have to be built around the people performing the Likes. So even if, for the sake of argument, you assume a Social Rank (SR) for a given individual based on their interests, friends, and related metadata, it is nearly impossible to determine the context around that person’s likes and dislikes. Why? Because modeling a human’s interaction context is extremely complicated and highly error-prone.
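To make the authority idea concrete, here is a minimal, self-contained sketch of a PageRank-style computation on a toy link graph. The graph, page names, and parameter values are illustrative only – the point is just that a link from a low-authority page contributes very little weight:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: links maps each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # every page keeps a small baseline score
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # dangling page: spread its rank evenly over all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # a page passes its authority in equal shares to its outlinks
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy graph: nothing links to "spam", so its link *into* "hub"
# carries almost no authority.
graph = {
    "blog": ["news"],
    "news": ["hub"],
    "hub":  ["blog", "news"],
    "spam": ["hub"],
}
ranks = pagerank(graph)
```

Running this, "hub" ends up with the highest score and "spam" the lowest – the spam page can link all it wants, but with no inbound links its vote is nearly worthless. That asymmetry is exactly what a "Like" count, on its own, lacks.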
Now, if Facebook were to combine this Like data with traditional link-based relevance, it could get interesting – but that is still an incremental advancement, not a game changer that replaces Google.
I’m sure the folks at Facebook are thinking about the potential pitfalls of Like-based relevance and the shady “paid Like” market it could create. I’m also sure that if Facebook were to launch a “search engine” without the proper technology and tools to combat Like spam, we would all just grow to Dislike the Like.
As I said before, this is not an easy problem to solve. So is Facebook ready to be a Google rival? Not a chance; not yet, at least.
The Dealmap is now live.
This is the brand-new local deal search and discovery engine that we have built at Center’d. The goal is very simple: we want you to find the best deals in your neighborhood and start saving money.
The origins of The Dealmap are deeply rooted in the flavors work we did at Center’d last year. When we originally designed the various flavors for a city, we thought “Cheap” was a good candidate – so we rolled out a series of “deal” pages for different cities, like this one: San Francisco Cheap Things To Do. As we started getting feedback on flavors, it was pretty clear that people loved local deals – and the use case was so strong that it deserved its own domain. That’s how The Dealmap was born!
While we use the Center’d data platform heavily in The Dealmap, there is a ton of cool stuff we built from the ground up for fast-changing (deals are time-sensitive) local data. I will be discussing the various technology pieces here on my blog in the coming weeks.
We also have a full-blown API – as a matter of fact, we built the API first and then built The Dealmap on top of it (hey, we eat our own dog food! :). So if you are itching to play with this unique local deal dataset, feel free to join the conversation or just get started with the API docs.
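To give a flavor of what calling a deal-search API might look like, here is a hypothetical sketch that builds a search request URL. To be clear, the base URL, endpoint path, and parameter names below are my own illustrative assumptions, not the actual interface – check the real API docs for that:

```python
from urllib.parse import urlencode

# NOTE: base URL, path, and parameter names here are illustrative
# assumptions for the sketch, not the actual Dealmap API.
API_BASE = "http://api.thedealmap.com"

def build_deal_search_url(query, lat, lon, radius_miles=5, api_key="YOUR_KEY"):
    """Build a deal-search request URL (hypothetical endpoint/parameters)."""
    params = {
        "q": query,           # free-text search, e.g. "pizza"
        "l": f"{lat},{lon}",  # center of the search area
        "d": radius_miles,    # search radius
        "key": api_key,       # per-developer API key
    }
    return f"{API_BASE}/search/deals/?{urlencode(params)}"

# Example: look for pizza deals near downtown San Francisco
url = build_deal_search_url("pizza", 37.7749, -122.4194)
```

Because deals are time-sensitive, any real client would also want to re-query (or respect cache headers) rather than hold on to stale results.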
It’s exciting to see The Dealmap live, but our job of getting you the freshest deals daily is just getting started!
So what do you think? Send me your feedback!