Tuesday, 12 October 2010

Yahoo launches new search interface

Yahoo has announced a revised look for its search engine, with a graphical interface that it hopes will attract more users. The change has been covered widely, including by Advertising Age, which reports that Yahoo's new design allows for more display-like advertising placements. It coincides with Yahoo's move to integrate Bing's search engine into these new-look listings.

The changes will be introduced gradually, so while most searches conducted across the site won't be affected immediately, searches for musical artists, movies, Hollywood personalities or trending news topics will now return results in a new format called the "accordion module." With the new layout, a window of information called an "overview" loads at the top of the page and breaks the results up into regular links, images, videos, events and even results from Twitter.

The idea from Yahoo is to "entertain" the user as well as to provide relevant search results. Google's approach has always been to move searchers off its site as soon as possible by presenting them with the most relevant results, quickly; Yahoo's new search page, in contrast, appears to be designed to keep users on the page.


Friday, 10 September 2010

More about Google Instant

Following the launch of Google Instant earlier this week, the Google blog has published more information from the search engineering perspective on how and why the updated search engine was developed. It outlines how Google changed the search process from a static HTML page into an AJAX application, and the challenges that were faced in doing so.

The post says that the key design challenge was to make sure users would notice relevant results without being distracted, since results that constantly change as you type can take some getting used to. Google tested a series of prototypes and ran usability studies and search experiments to try different interfaces and sets of results as the user typed their search.

For the launch, Google decided on a single search model which shows the query prediction in the search box in gray text, with results for the top prediction updating continuously while the user types. In user studies, people quickly learned to type until the gray text matched their search intention and then move their eyes to the results. The speed of the results changing wasn't found to be a distraction, although this can depend on the user's connection speed and browser.

The mechanics of Google Instant mean that the search engine is serving five to seven times as many results pages for each query performed, compared to the original version of Google. This required increases to Google's server and back-end capacity, but the team also developed other techniques: new caches that can handle high request rates, user-state data to keep track of the results pages already shown to a given user, and optimised page-rendering JavaScript code to help ensure web browsers could keep up with the rest of the system.
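
The post doesn't include code, but two of the serving-side ideas - a cache keyed by the partial query, and per-user state that avoids resending a page the user has already seen - can be pictured with a rough sketch. Everything below (the class, the toy search function, the names) is invented for the example and is not Google's implementation.

```python
# A rough sketch of two of the techniques described above: a cache keyed by
# query prefix to absorb the higher request rate, and per-user state so a
# results page that has already been shown isn't sent again.
# All names are illustrative, not Google's actual implementation.

class InstantBackend:
    def __init__(self, search_fn):
        self.search_fn = search_fn   # the underlying search function
        self.cache = {}              # query prefix -> rendered results page
        self.last_shown = {}         # user id -> last page sent to that user

    def results_for_prefix(self, user_id, prefix):
        # Serve from the prefix cache when possible; otherwise run the search.
        page = self.cache.get(prefix)
        if page is None:
            page = self.search_fn(prefix)
            self.cache[prefix] = page

        # Skip the response entirely if this user already has this page.
        if self.last_shown.get(user_id) == page:
            return None
        self.last_shown[user_id] = page
        return page


# Example usage with a toy search function.
backend = InstantBackend(lambda q: "results page for '%s'" % q)
print(backend.results_for_prefix("user-1", "goog"))   # computed and cached
print(backend.results_for_prefix("user-1", "goog"))   # None: already shown
```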

This is undoubtedly a major step forward in search engine technology and throws down a challenge to Bing to match this search experience. The search process will continue to develop, and it may change the way people search and how far they come to rely on the predictions Google makes.


Thursday, 9 September 2010

Google launches Instant search results

A big announcement from Google this week has been widely covered in the press (such as the BBC) and 'blogosphere'. Google Instant has been tagged by the company as "search at the speed of thought" and represents a change in the way the search engine displays results: the listings now appear as soon as a user starts typing a query, rather than only after the Search button is clicked or Enter is pressed.

The other main change with Google's search results is that the engine now tries to predict the likely query and need of the searcher, so that the search suggestions and the displayed results update as more letters are typed into the search box. Google estimates that the typical user spends 9 seconds entering a query and 15 seconds looking for answers, so the new Google Instant tool could shave between 2 and 5 seconds off a typical web search.
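
To illustrate the prediction element in a very simplified, hypothetical form - the query list and scores below are made up, and Google's real system draws on vast query logs - matching the typed prefix against popular queries might look something like this:

```python
# A minimal sketch of prefix-based query prediction, assuming a precomputed
# table of popular queries with popularity scores (illustrative data only).

POPULAR_QUERIES = {
    "weather sydney": 9500,
    "weather melbourne": 8700,
    "web design": 6200,
    "wedding venues": 4100,
}

def predict(prefix, queries=POPULAR_QUERIES, limit=3):
    """Return the most popular stored queries starting with the typed prefix."""
    matches = [(query, score) for query, score in queries.items()
               if query.startswith(prefix.lower())]
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [query for query, _ in matches[:limit]]

print(predict("we"))   # ['weather sydney', 'weather melbourne', 'web design']
print(predict("wea"))  # ['weather sydney', 'weather melbourne']
```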

It's another move by Google to improve its search performance against Microsoft's Bing engine, as well as to deflect coverage away from Bing now powering Yahoo's search results in the US. The new search results are available in the US first, using a larger search home page and a centred layout for the results, and Instant will be rolled out to other regional versions of Google in the coming weeks.

The launch of Instant has also created a lot of comment in the search engine optimisation (SEO) community, with some saying this changes the whole landscape. However, that seems an over-reaction: the underlying search results are still generated on the same basic principles, and although Instant may start to change search behaviour over time, the ultimate aim of SEO - to present a business in front of relevant searchers and so drive traffic to a website - remains unchanged.


Monday, 19 July 2010

Google announces acquisition of Metaweb

Google has announced that it has acquired Metaweb, a company that maintains "an open database of things in the world". Central to Metaweb's products is Freebase, a free and open database of over 12 million items, including movies, books, TV shows, celebrities, locations, companies and more. Google plans to use this content to improve search beyond words by understanding the relationships between real-world entities, which can help to deliver relevant information more quickly.

With features like rich snippets and search answers, Google says that it is still just beginning to apply an understanding of the web to make search better. By using the technology developed by Metaweb, Google wants to make search more effective by developing semantic search that can provide better answers.
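
The entity-and-relationship idea behind this can be pictured with a tiny, hypothetical example - the data structure and lookup below are invented for illustration and have nothing to do with Freebase's actual schema or API:

```python
# A toy entity graph: entities linked by typed relationships, which a
# semantic search layer could traverse to answer questions that plain
# keyword matching would miss. Purely illustrative data.

ENTITIES = {
    "The Godfather": {"type": "film", "directed_by": "Francis Ford Coppola",
                      "released": 1972},
    "Francis Ford Coppola": {"type": "person", "born": 1939},
}

def answer(entity, relation, graph=ENTITIES):
    """Follow one relationship from an entity, e.g. who directed a film."""
    return graph.get(entity, {}).get(relation)

print(answer("The Godfather", "directed_by"))  # Francis Ford Coppola
```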

Google says that it wants to maintain Freebase as a free and open database, and to develop the tool further with the Metaweb team to make it a richer online resource - one that will also benefit from third-party developers building on the open platform to improve the service.


Tuesday, 8 December 2009

Google launches 'real time' search

As widely reported by the global media, including the BBC, Google has launched a real-time search service as part of its main search results, giving users access to information that has just been published from various sources, such as news, social networks and Twitter (as they announced recently).

Google says that this information will be taken from over a billion pages on the web and reflects the changing nature of content and search on the web. The announcement was made at a special event staged at the Computer History Museum in California, where Google said this was the first time that any search engine has integrated the real-time web into its results page. The new real-time search will also be available on phones and is being rolled out now.

Google's vice-president of search Marissa Mayer was quoted as saying: "This is a technical marvel, getting all these updates in seconds, making them searchable right after they are posted and making them available so that anyone in the world can find them. The updates (on Twitter) are so truthful and so in the moment. That is a really, really powerful part of this. Are you at this event right now? Are you on this ski slope right now? And because of that 'right now' element of it, this is hugely valuable data".


Thursday, 24 September 2009

Google Sidewiki launched

The Google blog has announced the launch of another notable new product, Google Sidewiki. This new feature lets users contribute helpful information about any web page via a browser sidebar displayed alongside the page, where they can read and write entries.

This is an extension of Google's personalised search and the SearchWiki option launched at the end of 2008, which allows users to adjust their own search results and add comments against ranked websites. The new Sidewiki tool takes this a step further by giving users the chance to share knowledge, experience or advice against web content.

In developing Sidewiki, Google says that a priority was for users to see the most relevant entries first, so it has developed a system to rank the comments that are added in the 'best' order. Instead of displaying the most recent entries first, Sidewiki ranks entries using an algorithm that promotes the most useful, high-quality contributions, taking into account feedback from users, previous entries made by the same author, and many other signals Google has developed and tracked.
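
Google hasn't published the formula, but the general shape of that kind of ranking - blending reader feedback with the author's track record - can be sketched as follows. The signals and weights here are entirely invented for illustration:

```python
# A hedged sketch of ranking entries by estimated usefulness rather than
# recency. The signals and weights are invented; Google has not published
# its actual Sidewiki formula.

def entry_score(helpful_votes, unhelpful_votes, author_helpful_history):
    """Combine reader feedback with the author's track record (0-10 scale)."""
    feedback = (helpful_votes + 1) / float(helpful_votes + unhelpful_votes + 2)
    author_quality = min(author_helpful_history / 10.0, 1.0)
    return 0.7 * feedback + 0.3 * author_quality

entries = [
    {"text": "Ships from the UK only", "score": entry_score(12, 1, 8)},
    {"text": "First post!!!",          "score": entry_score(0, 5, 0)},
]
entries.sort(key=lambda entry: entry["score"], reverse=True)
print([entry["text"] for entry in entries])   # the useful entry ranks first
```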

This should help to address the obvious concern of website owners that competitors will post negative comments and reviews against their web content, in much the same way that review-based websites have been trying to deal with competitive 'spam'. There is also the longer-term question of whether the tone of comments will eventually be used by Google as another input into the relevancy of search ranking results.

Another feature of Sidewiki is that the technology will match comments about a web page with other websites where the same content is displayed. This will help to broaden the value of the system and to reduce the need for duplicated comments or posts. Google is also going to surface relevant posts from blogs and other sources that talk about the specific page of content, so that users can discover those insights more easily, right next to the page they refer to.
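
One plausible way to match the same content across different URLs - and this is an assumption about the general technique, not a description of what Google actually does - is to fingerprint the normalised body text of a page:

```python
# A simple content-fingerprinting sketch: normalise whitespace and case,
# then hash the body text, so the same comments can be attached wherever
# identical content appears. Illustrative only.

import hashlib
import re

def content_fingerprint(page_text):
    normalised = re.sub(r"\s+", " ", page_text).strip().lower()
    return hashlib.sha1(normalised.encode("utf-8")).hexdigest()

original   = "Widget X review.\nGreat battery life,   poor screen."
syndicated = "widget x review. great battery life, poor screen."
print(content_fingerprint(original) == content_fingerprint(syndicated))  # True
```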

Google Sidewiki is being made available as a new feature of the Google Toolbar, so you need to download the latest version to access the sidebar and to add or view comments. It's still in a beta stage of development and Google will be improving and enhancing this feature in the coming months. You can also view more information about this tool here: http://www.google.com/sidewiki/intl/en/learnmore.html


Tuesday, 18 August 2009

Research shows loyalty of Google searchers

New figures published by the US research agency comScore show that Google commands greater loyalty amongst its users than Yahoo! and Microsoft. As reported by Reuters, this new data shows that Google not only has a very strong market share, but also retains searchers for longer, with more searches conducted each month.

The research also shows that Yahoo! and Microsoft have a combined search penetration of 73% in the US, which isn't too far behind Google's level of 84%. However, Google searchers conduct an average of 54.5 searches a month, about double the 26.9 monthly searches recorded by users of Yahoo! and Microsoft combined, according to the comScore report.

In terms of loyalty, the research found that Google searchers make nearly 70% of their searches on Google sites, whereas people who use Yahoo! and Microsoft sites search there only about 33% of the time and also use Google heavily. This gives the newly combined force of Yahoo! and Microsoft a challenging target to reach, and the gap is likely to be even wider in Australia and other countries where Google dominates even more than in the US.


Monday, 6 April 2009

The new era of "Search 3.0"

A concise article from Advertising Age (subscription required) outlines the different stages of search over the past 15 years and what the new 'Search 3.0' means for advertisers. The new era is one that combines the traditional search experience with social networks and user-generated content to help refine the results and feedback for the searcher. As the article says, finding the right content is as much about whom it comes from as where you find it. For many companies and brands, this creates a host of new challenges and opportunities beyond the traditional search channel.

Search 1.0 is now seen as the first era of search engines, which focused on pages and the content within them. Results were ranked on the number of times a particular keyword appeared in the page content or meta data, which is how SEO (search engine optimisation) began as a core method of online marketing.
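
As a toy illustration of that first era (the pages and scoring below are made up), ranking by raw keyword frequency looks like this - and it also shows why keyword stuffing worked so well at the time:

```python
# 'Search 1.0' style ranking: score pages purely by how often the keyword
# appears in their content. Illustrative data only.

def keyword_score(page_text, keyword):
    return page_text.lower().split().count(keyword.lower())

pages = {
    "page-a": "cheap flights cheap flights cheap flights book now",
    "page-b": "compare airline fares and find cheap flights to Europe",
}
ranked = sorted(pages, key=lambda name: keyword_score(pages[name], "cheap"),
                reverse=True)
print(ranked)   # ['page-a', 'page-b'] - the keyword-stuffed page wins
```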

Search 2.0 is now seen as the period following the launch of Google, when the focus shifted to the search network because of Google's use of PageRank and the importance of links between sites in establishing authority. During this period quality also became important, with the relevancy of a landing page to a search query becoming a key factor within Google AdWords as well.

With the new era of Search 3.0, relevance is seen as not only what's on a page and what surrounds it (the links to it) but also how that data relates to the searcher's personal network. As more and more people connect to each other through social networks, the resulting social graph of content, links and comments is proving extremely powerful in helping users filter the data coming at them.
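
A minimal sketch of that idea (with invented data and an arbitrary boost weight) is to re-rank ordinary results according to how many of the searcher's own connections have shared them:

```python
# 'Search 3.0' style re-ranking: boost results shared by people in the
# searcher's own network. Data and weights are illustrative.

def social_rank(results, shared_by, my_network, boost=2.0):
    """Re-rank results, boosting those shared by the searcher's connections."""
    def score(item):
        sharers = shared_by.get(item["url"], set())
        return item["relevance"] + boost * len(sharers & my_network)
    return sorted(results, key=score, reverse=True)

results = [{"url": "a.com", "relevance": 5.0}, {"url": "b.com", "relevance": 4.5}]
shared_by = {"b.com": {"alice", "bob"}}
print(social_rank(results, shared_by, my_network={"alice"}))  # b.com now first
```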

An example of this is YouTube, which started as a service that allows people to post videos but has since become the primary source that people turn to when they want to find video content on any subject imaginable. Similarly, Twitter started as a way to communicate short personal status updates to friends, but is now becoming a search engine in its own right that allows users to tap into what's going on right now.

According to the article, the implications for brands, marketers and advertisers have also changed. Whereas 1.0 was about making sure the information within individual pages of your site could be found, and 2.0 was about making sure your site was optimised within a network of related sites, Search 3.0 is going to be about finding ways to reach individuals by using their social graphs.

That means reaching people where they're already sharing, linking, publishing and tagging, and becoming another node on their social networks by interacting with them and adding value to their experiences online. It's potentially a more difficult and time-consuming way to channel a message to a target market, but one that needs to be understood and developed in the new online environment.


Friday, 3 April 2009

Microsoft to advertise new search engine

There have been reports and rumours circulating for some time now about a new search engine being developed by Microsoft. Now Advertising Age reports that Microsoft have briefed their agency, JWT, to develop a new brand-building campaign for the relaunched search engine, which may be called Kumo or may retain the Live Search name.

The report suggests that the advertising push could be valued at US$80-100 million, beginning in June across online, TV, print and radio. Whether this spend will make much of a dent in Google's market dominance and halt the declining usage of Microsoft's search tool remains to be seen. Even if the campaign does get people onto the new search engine to try it out, the experience will need to be something special to break the search habits of many web users.


Tuesday, 24 February 2009

Searching the 'Deep Web'

An article in the New York Times reviews the issue of the 'Deep Web' - sometimes known as the 'Invisible Web' - and the difficulty search engines have in finding this information. Despite Google claiming to now index over one trillion web pages, this is still believed to represent just a fraction of the entire web, since much more content lies beyond the reach of search engines - such as database records, content behind login access, financial information, shopping catalogues, medical research, transport timetables and more.

The report focuses on a number of new search and index technologies that are trying to improve coverage of the web's hidden content, such as Kosmix and DeepPeep. The former, for example, has developed software that matches searches with the databases most likely to yield relevant information, then returns an overview of the topic drawn from multiple sources. If tools such as these do manage to delve deeper into the web's content, the quality and application of search results will be greatly expanded and, as the article claims, this could ultimately reshape the way many companies do business online.
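
The routing idea described there can be pictured with a deliberately simple sketch - the source descriptions and matching method below are invented, and the real systems will be far more sophisticated:

```python
# Route a query to the specialised databases whose descriptions overlap most
# with the query terms, as a stand-in for deep-web source selection.
# Hypothetical sources and a naive term-overlap score, for illustration only.

SOURCES = {
    "flight-timetables": "flights departures arrivals airline airport timetable",
    "medical-journals":  "clinical trial medicine treatment symptoms study",
    "product-catalogue": "price buy shop product catalogue review",
}

def route_query(query, sources=SOURCES, top_n=1):
    query_terms = set(query.lower().split())
    scored = [(len(query_terms & set(description.split())), name)
              for name, description in sources.items()]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_n] if score > 0]

print(route_query("sydney airport departures timetable"))  # ['flight-timetables']
```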


Friday, 22 August 2008

Google's PageRank

One of the most heated debates in the SEO sector surrounds Google's PageRank, and specifically the green PageRank indicator on the Google Toolbar - is this really a useful indicator of how Google views each web page, or should it be completely ignored as a distraction? There is also the question of what purpose this indicator serves for most web users and why Google even bothers to display it.

Google's trademarked 'PageRank' algorithm and its underlying technology form one of the main foundations of the search engine developed by Sergey Brin and Larry Page, and were a core factor in enabling Google's search results quality to stand out from existing search engines when it first launched in the late 1990s. Google's own corporate pages describe PageRank as follows:

PageRank reflects our view of the importance of web pages by considering more than 500 million variables and 2 billion terms. Pages that we believe are important pages receive a higher PageRank and are more likely to appear at the top of the search results. PageRank also considers the importance of each page that casts a vote, as votes from some pages are considered to have greater value, thus giving the linked page greater value. We have always taken a pragmatic approach to help improve search quality and create useful products, and our technology uses the collective intelligence of the web to determine a page's importance.

The underlying PageRank algorithm is a complex mathematical formula, which is simplified into the short indicator bar on the Google Toolbar, where the green colour filling the bar indicates the PageRank 'score' between 0/10 and 10/10. New sites start with a completely grey bar and no score, then develop a higher PageRank as the site gets indexed and starts attracting links from other domains.
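
For anyone curious about the mechanics, the basic published idea - pages 'voting' for the pages they link to, with votes from important pages counting for more - can be implemented on a toy link graph in a few lines. This is only an illustration of the principle; Google's production system involves far more variables, as the quote above makes clear, and the data here is invented.

```python
# A minimal PageRank calculation over a toy web of three pages.
# 'links' maps each page to the pages it links to.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1 - damping) / len(pages) for page in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(toy_web))   # 'c', linked to by both 'a' and 'b', scores highest
```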

The PageRank score on the Toolbar is a snapshot that is only occasionally updated - Google's Matt Cutts recently alerted people on his blog that a new update was being posted, and back in 2006 he provided more information about the Toolbar indicator, with answers to some readers' questions. It would clearly be wrong to place too much emphasis on this Toolbar figure for each website and web page, but it's also short-sighted to dismiss it completely when it does provide some degree of information from Google's perspective.

So the Google Toolbar figure shouldn't be a primary concern, but it is a useful indicator of relative performance and potential development. It gives website marketers a view of their own and competitors' web pages, and of how pages within a site hold different PageRank scores. It shouldn't be a core driver of an SEO strategy, but rather confirmation of how the search marketing support for a site is developing its potential performance on Google.

There's an excellent article on Google PageRank provided by Search Engine Land.


Monday, 18 August 2008

Google discusses search quality

Over the past few months the Official Google Blog has been posting an occasional series of articles about search quality, explaining what the team at Google does and how it develops and maintains the quality of its search rankings. Of course they aren't revealing the inner secrets of Google's algorithm, but there is more openness being shown in explaining to users some of the main issues that Google considers important.

The first post, back in May, provided background on the search quality team at Google and explained what they do. It introduces the posts that will follow, outlining how ranking positions are determined and how a user's search query is matched to the right set of results. It also explains how different parts of the search team work on developing and evaluating the ranking process, adding new features and fighting 'webspam'.

The second post appeared over a month later, at the start of July, and explained more about Google's ranking system. This is based on three basic principles, outlined in some more detail in the post: the best locally relevant results are served globally, the ranking system is kept as simple as possible, and there should be no manual intervention.

The next blog post was a more technical look at information retrieval and how this technology is used to determine results based on an understanding of pages, search queries and user intent. The most recent post, earlier this month, describes developments in the search experience and how Google has tried to enhance the ways that results are presented to users, including spelling corrections, text 'snippets' that let users assess the listings, and query refinements or suggestions.

Google will be continuing this series in the future and although this is very much a PR exercise, there are some useful insights in these articles to explain how the search engine works.


Wednesday, 30 July 2008

Another serious rival for Google?

Every few months the press announces the launch of another new search engine that 'may' challenge the dominance of Google. This week sees the launch of Cuil, which has got the press more excited than usual and is notably different from many other new search engine launches because of the people behind it.

Cuil - pronounced 'cool', from the Gaelic word for knowledge - has been developed by a number of ex-Google employees who worked on the development of Google's search technology. The site already claims to be the web's largest search engine, with "three times as many (pages) as Google and ten times as many as Microsoft".

They are also making a stand on privacy, saying that "we believe that analyzing the Web rather than our users is a more useful approach, so we don’t collect data about you and your habits, lest we are tempted to peek. With Cuil, your search history is always private."

Cuil also says that rather than relying on 'superficial popularity metrics', it searches and ranks pages based on their content and relevance, so that when a result is found it will analyze the rest of that site's content as well as "its concepts, their inter-relationships and the page’s coherency". In doing so, Cuil will offer users more choices and suggestions, giving them enough information to find the page they need and other relevant material.

It certainly sounds like a great concept, and the search engine looks pretty good as well, with a clean black search page (to contrast with Google's clean-ish white page) and results presented in three columns (or two, depending on the user's preference) within a frame, with many sites also displaying small images. There are no sponsored ads showing at the moment, although there is space for these to appear later.

Danny Sullivan has written a typically detailed and insightful assessment of Cuil over at Search Engine Land, and the new launch is certainly attracting attention from the industry media (as well as immediate criticism). The question is whether it can catch on with users who are entrenched as Google searchers and break into the market through the same process that made Google so successful - quality search results and word-of-mouth recommendation.


Wednesday, 11 June 2008

Dealing with duplicate content on Google

The Google Webmaster Blog has posted some useful information on ways to manage duplicate content, which is becoming a common problem for many sites. It may arise from the difficulties of managing large dynamic sites, or very often from syndicated content being shared between sites. The other main example used by Google relates to content duplicated through 'screen scraping' by third-party websites that create mass content from original sources, which can occur for various reasons.

The Google article provides links to previously published advice and tips on how to handle such situations. It also provides guidelines on ways to ensure that original content can be indexed by Google, which should be the main concern for the content creator.


Wednesday, 4 June 2008

Using the Robots Exclusion Protocol

The Google Webmaster blog has just published a succinct summary of the Robots Exclusion Protocol - the standard used by websites and search engines to allow or disallow the indexing of a site, or of particular sections or pages within it. This is one element that is commonly missed by many websites, but it should be used to streamline the way that search engines visit and index a site.

The robots commands can be used either in a robots.txt file hosted at the root of the website's server, or in a robots meta tag at the page level of the site. The standard is now widely accepted by most search engines, although there has not been the same coordinated development between the main search tools as there has been with the Sitemaps protocol. However, this Google post outlines the main implementation requirements for a robots file or meta tag, listing and defining the different directives that can be used.
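
As a quick illustration (the paths and sitemap URL are made up for the example), a simple robots.txt file might look like this - note that Allow and Sitemap are extensions supported by the major engines rather than part of the original standard:

```
# Example robots.txt, placed at the root of the domain
# (e.g. http://www.example.com/robots.txt)

User-agent: *
Disallow: /admin/
Disallow: /search-results/
Allow: /admin/help.html
Sitemap: http://www.example.com/sitemap.xml
```

The page-level alternative is a robots meta tag placed in the head of an individual page that shouldn't be indexed or have its links followed:

```
<meta name="robots" content="noindex, nofollow">
```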


Thursday, 29 May 2008

Google opens up about Search Quality

A recent posting on the Official Google Blog by a senior search engineer reveals some more information about how the Google search engine ranks sites and the ongoing work that goes into improving ranking results. As the article says, the many ranking criteria that drive the search algorithms remain a trade secret, both to protect Google from competitors and to prevent abuse of the system, but more insight is provided here.

It outlines the different factors that make the automatic assessment of a web page a difficult task, and the need to match a short search query with the most relevant results within milliseconds - something that can only have become harder with the introduction of 'universal' search results over the past year. The original PageRank algorithm remains a core part of Google's ranking criteria, but this is combined with other factors, such as different models to cope with language, query usage, recency, personalisation and regional results.

A team of engineers works on evaluating the quality of search results, and many changes or enhancements can be made during the year - in 2007, for example, over 450 new improvements were introduced, ranging from simple tweaks to more complicated changes. The article reveals that significant changes were made to the PageRank algorithm in January, which could have dramatically affected the rankings for some websites.

Other teams work on new features and new user interfaces, with the latter group assisted by a team of usability experts who conduct user studies and evaluate new features with Google users around the world. Then, of course, there is the team of engineers who focus on fighting webspam and other types of search engine abuse, from hidden text and off-topic pages stuffed with irrelevant keywords to other attempts to manipulate ranking positions. The team may spot new spam trends (or have them reported to them) and then works to counter those trends within the ranking algorithms.
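
To make the 'hidden text' example concrete, here is a deliberately naive sketch of one signal a spam detector might look for - text styled in the same colour as its background. Real webspam detection is far more sophisticated, and the function and markup below are invented purely for illustration:

```python
# Flag inline styles whose text colour equals the declared background colour,
# a classic 'hidden text' trick. A naive illustration, not a real detector.

import re

def has_same_colour_text(html):
    pattern = r'style="[^"]*background(?:-color)?:\s*(#?\w+)[^"]*;\s*color:\s*(#?\w+)'
    return any(bg.lower() == fg.lower() for bg, fg in re.findall(pattern, html))

spammy = '<div style="background-color: #ffffff; color: #ffffff">cheap pills cheap pills</div>'
print(has_same_colour_text(spammy))   # True
```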

Overall it's a well-written, clearly explained introduction to what goes on 'behind the scenes' at Google, and it indicates a new willingness to share some of this information with the wider public. It's also a sign of better PR, possibly an attempt to head off some of the more negative press and comment that a company of Google's size and prominence starts to attract.
