Google Panda
is a change to Google's search results ranking algorithm that was first
released in February 2011. The change aimed to lower the rank of
"low-quality sites" or "thin sites", and return
higher-quality sites near the top of the search results. CNET reported a surge
in the rankings of news websites and social networking sites, and a drop in
rankings for sites containing large amounts of advertising. This change
reportedly affected the rankings of almost 12 percent of all search results.
Soon after the Panda rollout, many websites, including Google's webmaster
forum, became filled with complaints of scrapers/copyright infringers getting
better rankings than sites with original content. At one point, Google publicly
asked for data points to help detect scrapers better. Google's Panda has
received several updates since the original rollout in February 2011, and the
effect went global in April 2011. To help affected publishers, Google published
an advisory on its blog giving some direction for self-evaluating a website's
quality. Google also provided a list of 23 bullet points on its blog answering
the question "What counts as a high-quality site?", intended to help webmasters
"step into Google's mindset".
The Panda process:
Google Panda
was built through an algorithm update that used artificial intelligence in a
more sophisticated and scalable way than previously possible. Human quality
testers rated thousands of websites based on measures of quality, including
design, trustworthiness, speed and whether or not they would return to the
website. Google's new Panda machine-learning algorithm was then used to look
for similarities between websites people found to be high quality and low
quality.
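As a rough illustration of that idea (and not Google's actual implementation, which is proprietary), the Python sketch below trains a simple classifier on hypothetical human quality ratings. The features (ad density, load time, share of original content, a design score) and all of the data are made-up assumptions; the point is only to show how patterns in rated sites can be generalized to unseen ones.

# Illustrative sketch only: Panda's classifier is proprietary; this merely
# mimics the idea of learning a notion of "quality" from human ratings.
# Feature names and data are hypothetical placeholders.
from sklearn.linear_model import LogisticRegression
import numpy as np

# Each row: [ad_density, page_load_seconds, pct_original_content, design_score]
X = np.array([
    [0.60, 4.2, 0.20, 2.0],   # ad-heavy, slow, mostly scraped content
    [0.10, 1.1, 0.95, 4.5],   # clean, fast, original content
    [0.45, 3.0, 0.35, 2.5],
    [0.05, 0.9, 0.90, 4.8],
])
# Labels from human raters: 1 = "high quality / would return", 0 = "low quality"
y = np.array([0, 1, 0, 1])

model = LogisticRegression()
model.fit(X, y)

# Score a new, unseen site against the learned notion of quality
new_site = np.array([[0.30, 2.5, 0.50, 3.0]])
print(model.predict_proba(new_site)[0, 1])  # estimated probability of "high quality"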
Many new
ranking factors have been introduced to the Google algorithm as a result, while
older ranking factors like PageRank have been downgraded in importance. Panda
is updated from time to time and run by Google on a regular basis. On April 24,
2012, the Google Penguin update was released, affecting a further 3.1% of all
English-language search queries and highlighting the ongoing volatility of
search rankings.
On September
18, 2012, the company confirmed a Panda update on its official Twitter page,
announcing: "Panda refresh is rolling out—expect some flux over the next few
days. Fewer than 0.7% of queries noticeably affected."
Another
Panda update began rolling out on January 22, 2013, affecting about 1.2% of
English queries.
Significant differences between Panda and previous algorithms:
Google Panda
affects the ranking of an entire site or a specific section rather than just
the individual pages on a site.
In
March 2012, Google updated Panda and stated that it was deploying an
"over-optimization penalty" to level the playing field.
Panda recovery:
Google
says it only takes a few poor-quality or duplicate-content pages to hold down
traffic on an otherwise solid site. Google recommends removing those pages,
blocking them from being indexed by Google, or rewriting them. However,
Matt Cutts, head of webspam at Google, warns that rewriting duplicate content
so that it is original may not be enough to recover from Panda; the rewrites
must be of sufficiently high quality. High-quality content brings
"additional value" to the web. Content that is general, non-specific,
and not substantially different from what is already out there should not be
expected to rank well: "Those other sites are not bringing additional value.
While they're not duplicates they bring nothing new to the table."
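As a purely illustrative aid to that advice (not a Google tool), the Python sketch below flags pages that are very short or near-duplicates of one another as candidates to remove, block from indexing, or rewrite. The word-count and similarity thresholds, along with the example pages, are assumptions chosen for the demonstration.

# Hypothetical helper: flag "thin" pages and near-duplicate pairs as
# candidates to remove, noindex, or rewrite. Thresholds are assumptions.
from difflib import SequenceMatcher
from itertools import combinations

MIN_WORDS = 150   # assumed threshold below which a page counts as "thin"
DUP_RATIO = 0.85  # assumed similarity ratio above which pages count as duplicates

pages = {
    "/widgets-guide": "A detailed, original guide to choosing widgets ..." * 40,
    "/widgets-copy":  "A detailed, original guide to choosing widgets ..." * 40,
    "/thin-page":     "Buy widgets here. Cheap widgets.",
}

# Thin pages: too little text to add much value
for url, text in pages.items():
    if len(text.split()) < MIN_WORDS:
        print(f"thin: {url}")

# Near-duplicate pairs: content that brings "nothing new to the table"
for (u1, t1), (u2, t2) in combinations(pages.items(), 2):
    if SequenceMatcher(None, t1, t2).ratio() >= DUP_RATIO:
        print(f"near-duplicate: {u1} <-> {u2}")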