Friday, August 29, 2014

What Are Search Engine Spiders?

A spider, also known as a robot or a crawler, is simply a program that follows, or "crawls," links throughout the Internet, fetching content from sites and adding it to search engine indexes.


Spiders can only follow links from one page to another and from one site to another. That is the primary reason why links to your site (inbound links) are so important. Links to your website from other websites give the search engine spiders more "food" to chew on. 

The more often they find links to your site, the more often they will stop by and visit. Google especially relies on its spiders to build its vast index of listings. 
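The crawling loop described above can be sketched in a few lines of Python. This is a toy illustration, not how any real search engine works: the "web" here is an in-memory dictionary of URL-to-HTML pages so it runs without a network, where a real spider would fetch each page over HTTP.

```python
# Toy spider: follow links breadth-first, grab each page's content,
# and add it to an index. The "web" is an in-memory dict standing in
# for real HTTP fetches.
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href target of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(web, start_url):
    """Breadth-first crawl from start_url; returns a {url: html} index."""
    index = {}
    queue = deque([start_url])
    while queue:
        url = queue.popleft()
        if url in index or url not in web:
            continue                  # skip pages already seen or unreachable
        html = web[url]               # "fetch" the page
        index[url] = html             # add its content to the index
        parser = LinkExtractor()
        parser.feed(html)             # discover the page's outbound links
        queue.extend(parser.links)    # follow them later
    return index

# A tiny four-page "web". Note that island.com has no inbound links.
web = {
    "a.com": '<a href="b.com">B</a>',
    "b.com": '<a href="a.com">A</a><a href="c.com">C</a>',
    "c.com": "no links here",
    "island.com": "never linked, so never crawled",
}

index = crawl(web, "a.com")
# a.com, b.com, and c.com end up in the index; island.com never does.
```

The unreachable `island.com` page shows exactly why inbound links matter: a page no one links to is invisible to a spider, no matter how good its content is.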

Spiders find Web pages by following links from other Web pages, but you can also submit your Web pages directly to a search engine and request a visit by its spider. In essence, that is search engine submission. Because millions of webmasters have submitted their sites over and over, the search engines have responded by putting more emphasis on sites their spiders find naturally and less on sites submitted directly. So being found is better than submitting directly.