SEO: Avoid Spider Traps
It is always my goal to increase my knowledge of search engine optimization (SEO). I read trade journal articles daily on various subjects, including SEO, and today I learned of a concept I had never heard of: the spider trap.
Everyone knows that search engines and other entities send small roaming programs called "spiders" to index every website. I now understand that these spiders can get stuck on pages that require a user action or that offer no link back to the rest of the site. Because the spider is unable to continue or return, the dead end registers as a usability issue and can hurt a site's search engine rankings.
Two excellent examples of this are login buttons and printable versions of content:
Login Button
Many sites are created with multiple options for a user to log in in order to access more content, purchase an item, or what have you. This opens a dialog that must be completed by the user in order to continue. Since a spider cannot enter credentials, it gets stuck, creating a usability error.
Printable Content
Likewise, a printable version of content, e.g. a menu, price list, or schedule, may be convenient for the end user, but it can trap a spider since it offers no way back. I often place printable content on my websites in the form of a PDF file. A prime example of this is the Salvatori's Printable Menu page.
For the user it is a great way to keep the various menus as a printed copy. I was even sure to include the target="_blank" attribute so that each menu would open in a new tab (or window) and not detract from the user experience. However, after reading the article Top 10 Tips to Build a Google-Friendly Site, I found that this page alone has ten spider traps! This has undoubtedly hurt my page rankings.
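A link of this kind might look like the following sketch (the filename is a placeholder, not the actual path on the Salvatori's site):

```html
<!-- Opens the printable PDF in a new tab; "menu.pdf" is a hypothetical filename -->
<a href="menu.pdf" target="_blank">Printable Menu</a>
```

Because the PDF contains no links back to the site, each such anchor is a dead end for a spider.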
Solution
Not to fear! The solution is simple. On dead-end links like those mentioned above, one need only add the rel="nofollow" attribute.
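Applied to the printable-menu link from earlier, it is a one-attribute change (again, the filename is a placeholder):

```html
<!-- rel="nofollow" tells spiders not to follow this dead-end link -->
<a href="menu.pdf" target="_blank" rel="nofollow">Printable Menu</a>
```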
I remember learning about it toward the beginning of my education, but at the time I couldn't think of many practical uses for it. I mean, why wouldn't you want search engines to index your whole site? Today I view rel="nofollow" in a whole new light. A dead-end link creates a stumbling block that prevents a site from being properly indexed.
Instead, a web developer should restrict spiders from following the link and use descriptive link text, a title attribute, and possibly an alt attribute (on an image link) to explain the contents of the link. This will eliminate the crawl errors created by spider traps.
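Putting it all together, a fully described dead-end link might look like this sketch (the filename and title text are hypothetical):

```html
<!-- Descriptive text and a title explain the link; rel="nofollow" keeps spiders out -->
<a href="menu.pdf" target="_blank" rel="nofollow"
   title="Printable PDF version of the dinner menu">
  Printable Dinner Menu
</a>

<!-- For an image link, the alt attribute carries the description instead -->
<a href="menu.pdf" target="_blank" rel="nofollow">
  <img src="print-icon.png" alt="Printable PDF version of the dinner menu">
</a>
```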