A meta robots tag provides instructions to search engines about how to crawl and index your website's pages. You could say it is more of a recommendation than an enforceable command, and badly behaved crawlers may ignore it.
According to Google, these tags are more precise than the rules in robots.txt. They are used to keep pages out of the index, which gives you the chance to choose which pages to show in search results and which to hide.
<meta name="robots" content="noindex,follow" />
This tells all search engines not to index the page, but to follow the links found on it.
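For context, here is a minimal sketch of where the tag sits in a page. The title and surrounding markup are illustrative, not a requirement:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Keep this page out of the index, but let crawlers follow its links -->
  <meta name="robots" content="noindex,follow" />
  <title>Internal search results</title>
</head>
<body>
  ...
</body>
</html>
```

The tag must appear in the `<head>` of the page so crawlers can read it before processing the rest of the document.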
Why meta robots tags?
Meta robots tags matter because they let you control indexing: they give you the capability to manage crawler behavior page by page. This is useful when website owners don't want certain pages, such as sitemap pages, to appear in search results.
It's better to keep pages like these out of the index:
- pages with less content
- pages with confidential information
- admin or thank-you pages
- draft pages
- internal search results
- duplicate content
- pages including upcoming campaigns
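As an illustration, a thin-content or thank-you page can opt out of indexing for all crawlers at once, or for one named crawler; the `googlebot` value targets Google's crawler specifically:

```html
<!-- Applies to all crawlers -->
<meta name="robots" content="noindex" />

<!-- Applies only to Google's crawler -->
<meta name="googlebot" content="noindex" />
```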
It's apparent that as your website grows, you need to manage indexing and crawlability deliberately. To do this, you have to balance the directives in the meta robots tag, the robots.txt file, and the X-Robots-Tag HTTP header.
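The X-Robots-Tag delivers the same directives in the HTTP response header rather than in the page markup, which makes it useful for non-HTML files such as PDFs that cannot carry a meta tag. A sketch of what such a response might look like (the file type and directive combination are illustrative):

```
HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, noarchive
```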
Robots Meta Directives
We have listed a few robots meta directives you can use to signal your indexing preferences to search engines:
- Noindex – Tells the search engine not to index the page (the page can still be crawled)
- Index – Tells the search engine to index a page (this is the default)
- Follow – Even if the page is not indexed, the crawler should follow its links and pass on link authority
- Nofollow – Tells the crawler that links within the page shouldn't be followed and shouldn't pass on link authority
- Noimageindex – Asks the crawler not to index any images on the page
- Noarchive – Tells the engine not to show a cached (archived) link for this page on the SERP
- None – Shorthand for "noindex, nofollow"
- Nocache – The same as Noarchive, but used only by Internet Explorer and Firefox
- Nosnippet – Asks the crawler not to show a snippet for the page on the SERP
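Multiple directives can be combined in a single comma-separated `content` value. For example, to keep a page out of the index, block a cached copy, and suppress the snippet (an illustrative combination):

```html
<meta name="robots" content="noindex, noarchive, nosnippet" />
```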
Best SEO practices
- If you are using 'noindex, follow' on a page, do not also block that page in the robots.txt file: the crawler has to be able to fetch the page in order to see the meta directive.
- Using both the meta robots tag and the X-Robots-Tag header on the same page is redundant; pick one.
- It could be a disaster if a webpage containing private information leaks into search results. So choose a secure approach, such as password protection, to prevent users from reaching confidential pages; don't rely on noindex alone.