Published on 2025-08-07T06:18:08Z
YandexAdditional
YandexAdditional is a supplementary web crawler from the Russian search engine Yandex. It works alongside the main YandexBot to gather additional information from websites. Its purpose is to support and enhance Yandex's search index by collecting specific types of data, verifying content changes, or performing other specialized crawling tasks. Its presence is part of the normal operation of the Yandex search engine.
What is YandexAdditional?
YandexAdditional is a web crawler from Yandex that serves as a supplementary crawler to the main Yandex indexing systems. It identifies itself in server logs with the user-agent string Mozilla/5.0 (compatible; YandexAdditional/3.0; +http://yandex.com/bots). The 'Additional' in its name indicates its role in working alongside other Yandex bots to build and maintain a comprehensive search index.
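If you want to confirm whether this crawler is visiting your site, you can search your server's access logs for that user-agent string. The following is a minimal Python sketch, assuming a log file named access.log; the path and pattern are placeholders to adapt to your own server setup.

import re

LOG_FILE = "access.log"  # hypothetical path; point this at your server's access log

# Match the documented YandexAdditional user-agent token, any version number.
YANDEX_ADDITIONAL = re.compile(r"YandexAdditional/\d+(\.\d+)?")

with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    hits = [line.rstrip() for line in log if YANDEX_ADDITIONAL.search(line)]

print(f"{len(hits)} requests from YandexAdditional")
for line in hits[:10]:  # show the first few matching entries
    print(line)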
Why is YandexAdditional crawling my site?
YandexAdditional is crawling your site to gather supplementary information that complements what the main Yandex crawler has already indexed. This could include looking for updated content or verifying changes to previously indexed pages. The frequency of visits depends on your site's popularity and relevance to Yandex users, particularly in Russian-speaking regions. This crawling is a standard and authorized activity for a search engine.
What is the purpose of YandexAdditional?
The purpose of YandexAdditional is to support Yandex Search by collecting supplementary information about web pages. While the main YandexBot handles core indexing, YandexAdditional likely focuses on gathering specific types of data or performing specialized tasks that enhance search quality. For website owners, having your content properly indexed by all of Yandex's crawlers can increase your visibility to Yandex users and drive traffic from its search results.
How do I block YandexAdditional?
To prevent YandexAdditional from accessing your website, you can add a specific disallow rule to your robots.txt file. This is the standard method for managing crawler access.
To block this bot, add the following lines to your robots.txt file:
User-agent: YandexAdditional
Disallow: /
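To confirm the rule behaves as intended, you can evaluate it locally with Python's standard-library robots.txt parser. This is a small sketch; the example.com URL is a placeholder, and YandexBot is included only to show that the rule above targets YandexAdditional alone.

from urllib.robotparser import RobotFileParser

# The same rules as in the robots.txt snippet above.
robots_txt = """\
User-agent: YandexAdditional
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# YandexAdditional is blocked from every path...
print(parser.can_fetch("YandexAdditional", "https://example.com/page"))  # False

# ...while other crawlers are unaffected by this specific rule.
print(parser.can_fetch("YandexBot", "https://example.com/page"))  # True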
How to verify the authenticity of the user-agent operated by Yandex?
Reverse IP lookup technique
Run the host Linux command two times, starting with the IP address of the requester:
> host IPAddressOfRequest
This first command returns the reverse lookup hostname for the IP (e.g., 4.4.8.8.in-addr.arpa.). For a genuine Yandex crawler, this hostname belongs to a Yandex domain.
> host ReverseDNSFromTheOutputOfFirstRequest
The second command should resolve that hostname back to the original IP address. If the hostname is not a Yandex domain, or the forward lookup does not return the same IP, the request did not come from a genuine Yandex crawler.
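The same two-step check can be scripted. Below is a minimal Python sketch using only the standard library; the IP address shown is a documentation placeholder, and the accepted suffixes reflect Yandex's guidance that its crawler hostnames end in yandex.ru, yandex.net, or yandex.com.

import socket

def is_yandex_crawler(ip_address: str) -> bool:
    """Verify a crawler IP with a reverse DNS lookup plus forward confirmation."""
    try:
        # Step 1: reverse lookup of the requester's IP to get its PTR hostname.
        hostname, _, _ = socket.gethostbyaddr(ip_address)
    except socket.herror:
        return False

    # Genuine Yandex crawlers resolve to hostnames under Yandex's own domains.
    if not hostname.endswith((".yandex.ru", ".yandex.net", ".yandex.com")):
        return False

    try:
        # Step 2: forward lookup of that hostname must return the original IP.
        return ip_address in socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False

print(is_yandex_crawler("203.0.113.5"))  # placeholder IP; substitute one from your logs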