Tips On Improving Your Website's Indexability
Webmasters face a difficult task in ensuring their website meets each of the search engine guidelines, which cover everything from unethical linking to on-page best practices. In order to rank within the search engines, it's vital that these guidelines are followed, particularly for online retailers who rely on sales made solely through their online store. So let's look at the areas that can affect a website's ability to rank within search engines, and the scenarios that could cause indexation problems in the future.
An SEO's job is focused on ranking within the search engine result pages for competitive keyphrases. The ability to rank and compete rests on a website that can be crawled and indexed by the search engine crawlers, so consider this work the equivalent of the foundations of a house. The foundations must be built correctly to ensure the quality and safety of the entire structure, and poor foundations can ultimately jeopardize its stability.
The most basic task, therefore, is to review the website's current state to see how it is being crawled and indexed. This can help isolate any problems before making improvements, and it involves checking the search engines' webmaster tools for any issues that have been flagged.
Another area that must be checked regularly is your analytics tools. These provide up-to-date information on traffic, and being able to spot sudden drops helps catch problems at an early stage.
Below are some of the most common issues webmasters face that can impact a website's ability to be crawled and indexed. Webmasters should have a thorough knowledge of all of them, so each can be enabled, disabled or updated when required.
Meta Tags
Meta tags are often overlooked, and that is part of the problem. Robots meta tags can stop the crawlers from indexing pages, so if pages are not being indexed they are a likely culprit; developers may have made changes without your knowledge. Within a page's header the following meta tags can be used:
<META NAME="ROBOTS" CONTENT="NOINDEX">
<META NAME="ROBOTS" CONTENT="NOFOLLOW">
The first meta tag instructs the search engine crawlers not to index any page whose header it appears in, while the second instructs the crawlers not to follow the links on the page. So the first point of action is to ensure that only the intended pages carry the noindex meta tag, and that important pages such as product pages do not have it in their header.
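If you need to audit this across many pages, a small script can fetch each URL and report whether a robots meta tag containing a noindex directive is present. The sketch below is a minimal example using only Python's standard library; the URLs listed are placeholders for your own pages.

import urllib.request
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    # Collects the content attribute of any <meta name="robots"> tag on the page
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if (attrs.get("name") or "").lower() == "robots":
                self.directives.append((attrs.get("content") or "").lower())

def robots_directives(url):
    # Fetch the page and return any robots meta directives found in its HTML
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.directives

# Placeholder URLs -- swap in the pages you want to audit
for url in ["https://www.example.com/", "https://www.example.com/product/blue-widget"]:
    directives = robots_directives(url)
    blocked = any("noindex" in d for d in directives)
    print(url, "-> NOINDEX found" if blocked else "-> no noindex", directives)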
Robots.txt
This is a simple text file that sits at the root of a website's domain. The file gives the search engine crawlers a set of rules to follow when crawling the website.
This is achieved by listing user agents together with rules that allow or disallow the crawling of specific parts of the site, including entire folders within the domain. It's not uncommon for webmasters to enter the wrong information by accident, so if any drastic drops in traffic appear within analytics, this is one of the best places to check first.
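As a simple illustration, a robots.txt file might look something like the example below. The folder names here are hypothetical; the point is that a single misplaced Disallow rule can block crawlers from an entire section of the site.

User-agent: *
Disallow: /checkout/
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml

The Disallow lines tell all crawlers (User-agent: *) not to crawl those folders, while the Sitemap line points them to the XML sitemap.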
Orphan Pages
Another often overlooked cause of indexation issues is orphan pages. These are simply pages that nothing else links to, which means the crawlers have no path by which to reach them.
Therefore it's best practice to ensure there are links pointing to all pages of your site, especially the deepest pages, which are often product pages.
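One rough way to spot candidates is to compare the URLs declared in your XML sitemap with the URLs a crawl of your internal links actually reaches. The sketch below assumes you have already exported both lists (the URLs shown are hypothetical):

# Minimal sketch: URLs declared in the sitemap vs. URLs reached by
# following internal links from the homepage (e.g. exported from a crawler)
sitemap_urls = {
    "https://www.example.com/",
    "https://www.example.com/category/widgets",
    "https://www.example.com/product/blue-widget",  # hypothetical URLs
}
crawled_urls = {
    "https://www.example.com/",
    "https://www.example.com/category/widgets",
}

# Pages listed in the sitemap that the internal-link crawl never reached
# are candidate orphan pages
for url in sorted(sitemap_urls - crawled_urls):
    print("Possible orphan page:", url)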
There are other areas that must also be checked, such as the URL parameter settings and the XML sitemap. Ensure these are up to date and have not recently been changed without your knowledge. Internal and external links pointing to these pages improve the chances of them being indexed, and content on those pages that encourages linking is also important.
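For reference, a bare-bones XML sitemap looks something like the snippet below; the URL and date are placeholders, and the file is normally referenced from robots.txt or submitted through the search engines' webmaster tools.

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/product/blue-widget</loc>
    <lastmod>2024-01-01</lastmod>
  </url>
</urlset>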