User-Agent Rules

User-agent: *
Disallow:
This default ruleset applies to all crawlers; because the Disallow value is empty, it permits access to the entire site.
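The same allow/block check can be performed programmatically with Python's standard-library urllib.robotparser. A minimal sketch, assuming a hypothetical rules string and bot name:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content: empty Disallow means allow everything.
rules = """\
User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# With no disallowed paths, any URL is permitted for any user agent.
print(parser.can_fetch("ExampleBot", "https://example.com/admin/"))  # True
```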

Robots.txt Directives

User-agent:

Specifies which search engine crawler the rules apply to. Use * for all bots.

Disallow:

Tells crawlers not to access URLs whose paths begin with the specified value.

Allow:

Overrides a Disallow directive to permit access to a specific URL or subdirectory within a blocked path.

Sitemap:

Specifies the location of your XML sitemap for search engines.

Crawl-delay:

Sets a minimum delay (in seconds) between successive crawler requests. Not all crawlers honor it; Googlebot ignores this directive.

Host:

Specifies the preferred domain for a site with mirrors (Yandex-specific and now deprecated).
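Taken together, the directives above might appear in a robots.txt file like the following sketch (the domain and paths are placeholders):

```txt
User-agent: *
Disallow: /private/
Allow: /private/public-report.html
Crawl-delay: 10

Sitemap: https://example.com/sitemap.xml
```

Here the Allow line carves one page out of an otherwise blocked directory, and the Sitemap line stands outside any user-agent group because it applies to the whole file.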

Wildcard Patterns

*

Matches any sequence of characters, including an empty one.

$

Matches the end of a URL.
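As a sketch, the two pattern characters can be combined, for example to block all PDF files or all URLs containing a query string (the paths are illustrative; wildcard matching is supported by major crawlers such as Googlebot and Bingbot, but not by every bot):

```txt
User-agent: *
# Block any URL ending in .pdf
Disallow: /*.pdf$
# Block any URL containing a question mark (query strings)
Disallow: /*?
```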

Examples

# Block all crawlers from admin area
User-agent: *
Disallow: /admin/

# Allow Googlebot full access but block other crawlers from a private directory
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /private/
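The second example above can be verified with Python's urllib.robotparser. A sketch, assuming illustrative bot names and URLs (note that this stdlib parser applies Allow/Disallow rules in file order rather than by Google's longest-match precedence, which makes no difference for these rules):

```python
from urllib.robotparser import RobotFileParser

# The example rules from above.
rules = """\
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Googlebot matches its own group, which allows everything.
print(parser.can_fetch("Googlebot", "https://example.com/private/page.html"))  # True

# Any other bot falls back to the * group and is blocked from /private/.
print(parser.can_fetch("OtherBot", "https://example.com/private/page.html"))   # False
print(parser.can_fetch("OtherBot", "https://example.com/public/page.html"))    # True
```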