Understanding the robots.txt File and Crafting Your Own

A robots.txt file is a simple way to control how search engines crawl your website. In essence, it tells crawlers how they should behave when visiting your content, which makes it highly useful for SEO and overall site administration.

This article will delve into various aspects, including:

  1. The Definition of a robots.txt file
  2. The Necessity of Having a robots.txt file
  3. Steps to Generate a robots.txt file
  4. Illustrative Examples for robots.txt File Inclusions
  5. Important Note: The Use of robots.txt Is Not a Certainty
  6. Insights on robots.txt in the Context of WordPress
  7. Tools and Utilities: robots.txt File Generators

What Is a robots.txt File?


The robots.txt file is a plain text file containing directives that tell web crawlers and bots how they are supposed to behave on your site.

"Supposed to" is the operative phrase, because no crawler or bot is obligated to follow the instructions in a robots.txt file. The major search engines generally respect most of the rules, though not all of them, and some bots ignore the file entirely.

The robots.txt file lives in the root directory of your website (e.g., http://mywebsite.ca/robots.txt). If you use subdomains such as blog.ggexample.com or forum.ggexample.com, each subdomain needs its own robots.txt file.

Crawlers execute a straightforward text match, comparing the content of your robots.txt file with the URLs on your site. When a directive in the robots.txt file aligns with a URL on your site, the crawler adheres to the established rule.

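For example, a rule like the one below (the /downloads/ path is purely illustrative) matches any URL on the site that begins with /downloads/, such as /downloads/guide.pdf, but not a URL like /download-page/:

User-agent: *
Disallow: /downloads/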

Is a robots.txt File Necessary?

Without a robots.txt file, search engine crawlers assume they are free to crawl and index any page they discover on your site. If that's what you want, there's no real need to create one.

However, if there are specific pages or directories you'd rather keep out of search engines, creating a robots.txt file becomes essential. That includes pages of a private, sensitive, proprietary, or administrative nature, as previously discussed, as well as "thank you" pages and duplicate content such as printer-friendly versions or A/B testing pages.
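As a rough sketch, assuming hypothetical paths such as /thank-you/ and /print/ for those kinds of pages, the corresponding entries might look like this:

User-agent: *
Disallow: /thank-you/
Disallow: /print/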

Generating a robots.txt File: A Step-by-Step Guide


Creating a robots.txt file is no different from creating any other text file. Open your preferred text editor, save a document with the filename robots.txt, and upload it to your site's root directory using FTP or a cPanel file manager.

Important considerations:

  1. Ensure the filename is robots.txt, written in all lowercase. Any capitalization in the name may lead to crawlers not reading it.
  2. Entries in the robots.txt file are case-sensitive. For example, /Directory/ is distinct from /directory/.
  3. Utilize a text editor for file creation or editing, as word processors may introduce characters or formatting that could impede crawler readability.
  4. Prior to creating and uploading a new robots.txt file, check your site’s root directory, as one may already exist. This precaution prevents unintentional overwriting of any pre-existing directives.

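If you just need a starting point, a minimal robots.txt that allows everything looks like this (an empty Disallow line means nothing is blocked):

User-agent: *
Disallow: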

Examples of What to Incorporate


A robots.txt file supports a handful of directives and wildcards that can be combined in many ways. In this section, we'll look at some of the most common and useful entries and how to include them.

Before delving into specific examples, let’s provide an overview of the available directives: “User-agent,” “Disallow,” “Allow,” “Crawl-delay,” and “Sitemap.” The primary focus of most robots.txt entries revolves around “User-agent” and “Disallow.”

  1. User-agent: The “User-agent” directive targets the specific web crawler the instructions are meant for. Common examples include Googlebot, Bingbot, Slurp (Yahoo), DuckDuckBot, Baiduspider (Baidu, a Chinese search engine), and YandexBot (Yandex, a Russian search engine). You can include as many user agents as you need to tailor your instructions.
  2. Disallow: “Disallow” is one of the most frequently used directives; it is the primary command telling a user-agent not to crawl a specified URL.
  3. Allow: “Allow” is another common directive, recognized by Googlebot and most other major crawlers. It signals that a page or subfolder may be crawled even though its parent page or subfolder is disallowed.
  4. Crawl-delay: The “Crawl-delay” directive tells a crawler how many seconds to pause between requests. Some crawlers, such as Bingbot and YandexBot, honor it, but Googlebot ignores this directive; Googlebot's crawl rate can instead be adjusted in Google Search Console.
  5. Sitemap: The “Sitemap” entry tells crawlers where to find the XML sitemap(s) for your site, which significantly helps search engines such as Google, Bing, and Yahoo index your content. Having a sitemap is often considered indispensable for search visibility (see the short example after this list).

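Since Crawl-delay and Sitemap won't come up in the step-by-step examples below, here is a quick sketch of how they look; the sitemap URL is only a placeholder for your own:

User-agent: Bingbot
Crawl-delay: 10

Sitemap: http://mywebsite.ca/sitemap.xml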

A robots.txt file typically begins with a User-agent line:

User-agent: *

The asterisk (*) functions as a wildcard, representing “all.” Any subsequent instructions will be applicable to all crawlers.


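Next, let's add a Disallow line for a directory we want to keep crawlers out of, such as /private/:

User-agent: *
Disallow: /private/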
With this “Disallow” directive in place, the robots.txt file tells every crawler not to crawl anything in the /private/ section of the domain.

Should we wish to restrict access for only a particular crawler, we would specify the crawler’s name in the User-agent line:

User-agent: Bingbot
Disallow: /private/


This tells Bingbot not to crawl anything within the /private/ directory.

If the Disallow line contains only a slash, it tells Bingbot (or whichever User-agent is named) not to crawl anything on the entire domain:

User-agent: Bingbot
Disallow: /


You can also instruct crawlers to avoid crawling a particular file:

User-agent: *
Disallow: /private.html

Another wildcard is the “$” symbol, which marks the end of a URL. In the example below, any URL ending in “.pdf” is blocked:

User-agent: *
Disallow: /*.pdf$


This prevents compliant crawlers from crawling any PDF files on the site.


Using Multiple Directives in a robots.txt File

Up until now, we’ve created straightforward robots.txt files consisting of two lines. However, you have the flexibility to include as many entries in the file as you desire.

For instance, if the intention is to permit Google to crawl all content while restricting Bing, Baidu, and Yandex, the following format can be used:

User-agent: Googlebot
Disallow:

User-agent: Bingbot
Disallow: /

User-agent: Baiduspider
Disallow: /

User-agent: YandexBot
Disallow: /


Note that a separate User-agent line was used for each crawler, since each User-agent line can specify only a single crawler.

However, it’s worth mentioning that a single User-agent can accommodate multiple Disallow directives:

User-agent: Baiduspider
Disallow: /press/
Disallow: /shop/
Disallow: /products/

Each Disallow URL needs to occupy a separate line.
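Putting these pieces together, a more complete robots.txt might look something like the sketch below; the paths and sitemap URL are placeholders rather than recommendations:

User-agent: *
Disallow: /private/
Disallow: /thank-you/

User-agent: Bingbot
Disallow: /private/
Disallow: /thank-you/
Crawl-delay: 10

Sitemap: http://mywebsite.ca/sitemap.xml

Notice that the Disallow rules are repeated in the Bingbot group: most crawlers follow only the most specific User-agent group that matches them, so rules in the general (*) group are not automatically inherited.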

You can test your robots.txt file using Google Search Console (formerly Google Webmaster Tools).


Using robots.txt Is Not a Guarantee


Adding a Disallow directive to robots.txt doesn't guarantee that the file or URL won't be indexed by search engines. Reputable search engine crawlers generally follow your robots.txt directives, but not all crawlers do.

It's also important to note that blocking crawling doesn't necessarily prevent indexing. Crawlers follow links, so while they won't crawl a disallowed file, they can still discover and index its URL if another website links to it (e.g., /whitepapers/july.pdf).

Using robots.txt with WordPress


By default, WordPress generates a “virtual” robots.txt file. Its basic directives keep crawlers away from your admin area:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php


The file /wp-admin/admin-ajax.php is allowed because some WordPress themes use AJAX to add content to pages or posts.

For personalized adjustments to the WordPress robots.txt file, follow the steps mentioned earlier to create a robots.txt file and upload it to your website’s root directory.

Be aware that once you upload your own robots.txt, the default WordPress virtual robots.txt will no longer be served; a website can have only one robots.txt file. So if your theme needs the AJAX Allow directive, make sure to include the lines above in your own file.
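For instance, a custom WordPress robots.txt that keeps the default behavior and adds a few rules of your own might look like the sketch below; the /private/ directory and sitemap URL are placeholders:

User-agent: *
Disallow: /wp-admin/
Disallow: /private/
Allow: /wp-admin/admin-ajax.php

Sitemap: http://mywebsite.ca/sitemap.xml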

Additionally, some SEO plugins for WordPress are capable of generating a robots.txt file automatically.

Robots.txt Serves a Purpose


While not every search engine crawler obeys the directives in a robots.txt file, it remains a valuable tool for SEO and site maintenance. This simple file lets you do everything from keeping crawlers out of specific directories and pages to pointing search engines at your sitemap.
