Indexation is the part of SEO concerned with how search engines reach, crawl and evaluate a website. It deals with questions such as: Which information or pages may a search engine index, and which may it not? What can a search engine crawl and evaluate, and what can it not? How do you deal with situations that might be assessed as spam? Indexation depends on many factors. Site speed, for example, is becoming increasingly important: how easily can a search engine reach a page, and how quickly can all the information on that page be crawled and assessed? Google also announced in 2014 that websites using a secure HTTPS connection get a slight edge in the rankings.

Google's focus is on making the internet more relevant, safer and faster to use. The transition to HTTP/2 is also relevant here. In addition, many factors now matter that had no effect on rankings a few years ago. It is now important, for example, that a website is mobile friendly.

Indexation also covers solving problems within a website itself, such as duplicate URLs. There are a number of ways to deal with indexation problems, such as using robots.txt, applying meta robots tags, setting up redirects and/or applying canonical URLs. Many websites face the problem that one page is reachable both with and without a trailing slash, or that URLs can be written in both lowercase and uppercase, and so on. That means two (or more) URLs exist for a single page. Which of them will a search engine like Google index? And if your website uses filters, have you considered how to build them in an SEO-friendly way, so that Google does not end up crawling thousands of filter URLs? The sketches below illustrate two common approaches.
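One common way to resolve duplicate URL variants such as trailing slashes and mixed case is to 301-redirect every variant to a single preferred form, so that search engines consolidate all signals on one URL. The following is a minimal sketch using a hypothetical Node.js/Express middleware; the lowercase, no-trailing-slash convention and the example routes are assumptions for illustration, not a prescription.

```typescript
// Minimal sketch: normalise duplicate URL variants with a 301 redirect.
// Assumes an Express app; the lowercase, no-trailing-slash convention
// chosen here is an illustrative assumption only.
import express, { Request, Response, NextFunction } from "express";

const app = express();

app.use((req: Request, res: Response, next: NextFunction) => {
  const original = req.path;

  // Lowercase the path and strip a trailing slash (but keep "/" itself).
  let normalised = original.toLowerCase();
  if (normalised.length > 1 && normalised.endsWith("/")) {
    normalised = normalised.slice(0, -1);
  }

  if (normalised !== original) {
    // Permanent redirect: search engines treat the target as the real URL.
    const query = req.url.slice(req.path.length); // preserve ?query=string
    res.redirect(301, normalised + query);
    return;
  }
  next();
});

app.get("/products", (_req: Request, res: Response) => {
  res.send("Canonical product page");
});

app.listen(3000);
```

With this in place, requests for /Products/ or /PRODUCTS would all be redirected to /products, leaving one indexable URL per page.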
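For filter (faceted) URLs, one common tactic is to let the filtered pages be crawled but point them all at a single canonical URL. The sketch below shows this in the same hypothetical Express setup; the /shoes route, the example host name and the idea that query parameters represent filters are assumptions for illustration.

```typescript
// Minimal sketch: point filtered listing URLs at one canonical URL.
// Assumes a hypothetical Express app; the /shoes route and
// www.example.com host are illustrative only.
import express, { Request, Response } from "express";

const app = express();

app.get("/shoes", (req: Request, res: Response) => {
  const hasFilters = Object.keys(req.query).length > 0;

  // Always reference the unfiltered category page, so variants like
  // /shoes?color=red&size=42 do not compete with /shoes in the index.
  const canonical = "https://www.example.com/shoes";

  res.send(`<!doctype html>
<html>
  <head>
    <title>Shoes${hasFilters ? " (filtered)" : ""}</title>
    <link rel="canonical" href="${canonical}">
  </head>
  <body>
    <h1>Shoes</h1>
    <!-- the (filtered) product listing would be rendered here -->
  </body>
</html>`);
});

app.listen(3000);
```

If the filter combinations should not be crawled at all, an alternative is to disallow the parameterised paths in robots.txt or to send a noindex signal instead; which option fits best depends on whether the filter pages attract links whose value you want to consolidate.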