Published on 2025-08-07T06:18:08Z

DotBot

DotBot is the official web crawler for Moz, a leading SEO software company. Its primary function is to scan the web to collect data for Moz's suite of SEO tools. It focuses on discovering the link relationships between websites to build Moz's link index, which is the foundation for its well-known Domain Authority and Page Authority metrics. The data it gathers helps SEO professionals with backlink analysis and competitive research.

What is DotBot?

DotBot is the web crawler for Moz, a prominent SEO software company. It functions as an indexing and discovery bot that systematically visits websites to collect the data that powers Moz's suite of SEO tools and analytics. The crawler identifies itself in server logs with a user-agent string containing the token DotBot. It is designed to analyze website structure, content quality, and link profiles, and it behaves like other legitimate search engine bots by respecting standard web protocols.

Why is DotBot crawling my site?

DotBot is crawling your site to gather data about its structure, content, and, most importantly, its link relationships. It is primarily interested in discovering the links between websites to build the Moz link index, which is used to calculate Moz's proprietary Domain Authority and Page Authority scores. The frequency of its visits depends on factors like your site's size, popularity, and update schedule. The crawling is generally considered authorized, as it is part of a legitimate SEO analysis service and respects robots.txt directives.

What is the purpose of DotBot?

The core purpose of DotBot is to collect the web data that powers Moz's SEO analytics platform. Its main function is to map the web's link structure to calculate metrics like Domain Authority, which helps SEO professionals predict a site's ranking potential. The data also supports analysis of on-page elements and site structure. For website owners using Moz's tools, the bot provides valuable data about their online presence. For others, it contributes to the broader SEO ecosystem that helps professionals understand and improve website performance.

How do I block DotBot?

To prevent DotBot from accessing your website, you can add a rule to your robots.txt file. This is the standard method for instructing web crawlers not to visit your site.

Add the following lines to your robots.txt file to block DotBot:

User-agent: DotBot
Disallow: /
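To sanity-check how a crawler that honors robots.txt will interpret this rule, you can use Python's standard-library robots.txt parser. This is a quick sketch; the example URL is a placeholder:

```python
from urllib import robotparser

# Parse the same two lines shown above.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: DotBot",
    "Disallow: /",
])

# DotBot is disallowed everywhere; other agents are unaffected.
print(rp.can_fetch("DotBot", "https://example.com/page"))     # False
print(rp.can_fetch("Googlebot", "https://example.com/page"))  # True
```

Note that robots.txt is advisory: it only works against crawlers that choose to respect it, which well-behaved bots like DotBot do.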

How do I verify that a DotBot user-agent is really operated by Moz?

Reverse IP lookup technique

To verify a user-agent's authenticity, run the host command (available on Linux and macOS) twice, starting with the IP address of the requester.
  1. > host IPAddressOfRequest
    This performs a reverse DNS (PTR) lookup and returns the hostname registered for that IP address.
  2. > host HostnameFromStepOne
    This performs a forward lookup on the returned hostname.
If the forward lookup resolves back to the original IP address and the hostname belongs to a domain operated by a trusted party (e.g., Moz), the user-agent can be considered legitimate.
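The two-step check above can be automated as a forward-confirmed reverse DNS lookup. The sketch below uses Python's standard socket module; the IP address and domain suffix in the usage example are placeholders, not actual Moz values:

```python
import socket

def verify_crawler_ip(ip: str, trusted_suffix: str) -> bool:
    """Forward-confirmed reverse DNS check.

    Returns True only if the PTR hostname for `ip` ends with
    `trusted_suffix` AND that hostname resolves back to the same IP.
    """
    try:
        # Step 1: reverse (PTR) lookup on the requester's IP.
        hostname, _, _ = socket.gethostbyaddr(ip)
    except OSError:
        return False  # no PTR record (or lookup failed)

    if not hostname.rstrip(".").endswith(trusted_suffix):
        return False  # hostname is not under the trusted domain

    try:
        # Step 2: forward lookup on the hostname.
        forward_ips = socket.gethostbyname_ex(hostname)[2]
    except OSError:
        return False

    return ip in forward_ips

# Placeholder TEST-NET address with no PTR record, so this prints False.
print(verify_crawler_ip("203.0.113.10", "moz.com"))
```

Requiring both steps to succeed is what defeats spoofing: anyone can fake a user-agent header, and an attacker can even set a misleading PTR record for their own IP, but they cannot make the trusted operator's forward DNS point back at an address they control.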

IP list lookup technique

Some operators publish a list of the IP addresses their crawlers use. Requests claiming to be from the crawler can be cross-referenced against this list to verify authenticity. However, operators do not always keep such lists up to date, so use this method with caution and in conjunction with other verification techniques.
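A cross-reference against a published list can be sketched with Python's ipaddress module. The CIDR ranges below are documentation placeholders (TEST-NET blocks), not Moz's actual ranges; substitute the operator's published list if one is available:

```python
import ipaddress

# Hypothetical placeholder ranges -- replace with the operator's
# published crawler IP list.
PUBLISHED_RANGES = [
    ipaddress.ip_network(n)
    for n in ("198.51.100.0/24", "203.0.113.0/24")
]

def ip_in_published_ranges(ip: str) -> bool:
    """Return True if the requester's IP falls inside any published range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in PUBLISHED_RANGES)

print(ip_in_published_ranges("198.51.100.42"))  # True
print(ip_in_published_ranges("192.0.2.1"))      # False
```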