Google addresses duplicate content

Filed in SEO, Web Site Content by Matt McGee on December 18, 2006

Good post from Adam Lasnik on the Google Webmaster Central blog addressing “duplicate content.” This is a real hot topic these days, and came up during the Q&A after our “Big Ideas for Small Biz” session at SES Chicago.

In his post, Adam emphasizes a point made by my Chicago co-panelist John Carcutt, namely that the main method search engines use to deal with unintentional duplicate content is a filter, not a penalty. Here’s what Adam has to say on that topic:

During our crawling and when serving search results, we try hard to index and show pages with distinct information. This filtering means, for instance, that if your site has articles in “regular” and “printer” versions and neither set is blocked in robots.txt or via a noindex meta tag, we’ll choose one version to list. In the rare cases in which we perceive that duplicate content may be shown with intent to manipulate our rankings and deceive our users, we’ll also make appropriate adjustments in the indexing and ranking of the sites involved. However, we prefer to focus on filtering rather than ranking adjustments … so in the vast majority of cases, the worst thing that’ll befall webmasters is to see the “less desired” version of a page shown in our index.
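For reference, here’s a minimal sketch of the two blocking options Adam mentions, a robots.txt rule and a noindex meta tag, assuming a hypothetical site that serves its printer-friendly copies under a /print/ directory (the path is made up purely for illustration):

    # robots.txt: keep crawlers out of the printer-friendly copies
    User-agent: *
    Disallow: /print/

    <!-- or, in the <head> of each printer-friendly page -->
    <meta name="robots" content="noindex">

Either one tells Google up front which version you don’t want indexed, instead of leaving it to the filter to pick the “less desired” copy for you.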

Adam also shares a good list of steps webmasters can take to avoid creating duplicate content in the first place, but for that you’ll need to read his full post over at Google….
