robots.txt
Add or generate a robots.txt file that matches the Robots Exclusion Standard in the root of the app directory to tell search engine crawlers which URLs they can access on your site.
Static robots.txt
app/robots.txt
User-Agent: *
Allow: /
Disallow: /private/
Sitemap: https://acme.com/sitemap.xml
Generate a Robots file
Add a robots.js or robots.ts file that returns a Robots object.
app/robots.ts
import { MetadataRoute } from 'next';

export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      userAgent: '*',
      allow: '/',
      disallow: '/private/',
    },
    sitemap: 'https://acme.com/sitemap.xml',
  };
}
Output:
User-Agent: *
Allow: /
Disallow: /private/
Sitemap: https://acme.com/sitemap.xml
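The rules field also accepts an array (see the Robots object below), which lets you target specific crawlers individually. A minimal sketch, assuming you want a dedicated rule for Googlebot and a catch-all rule for every other user agent:
app/robots.ts
import { MetadataRoute } from 'next';

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      {
        // Rule applied to Google's crawler only
        userAgent: 'Googlebot',
        allow: '/',
        disallow: '/private/',
      },
      {
        // Fallback rule for all other crawlers
        userAgent: '*',
        disallow: '/',
      },
    ],
    sitemap: 'https://acme.com/sitemap.xml',
  };
}
Each entry in the array should be emitted as its own User-Agent block in the generated robots.txt.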
Robots object
type Robots = {
  rules:
    | {
        userAgent?: string | string[];
        allow?: string | string[];
        disallow?: string | string[];
        crawlDelay?: number;
      }
    | Array<{
        userAgent: string | string[];
        allow?: string | string[];
        disallow?: string | string[];
        crawlDelay?: number;
      }>;
  sitemap?: string | string[];
  host?: string;
};
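The optional crawlDelay, host, and multi-value sitemap fields can be combined in the same file. A sketch under assumed values (the second sitemap URL and the preferred host are hypothetical, and Crawl-delay is only honored by crawlers that support it):
app/robots.ts
import { MetadataRoute } from 'next';

export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      userAgent: '*',
      allow: '/',
      // Ask compliant crawlers to wait 10 seconds between requests
      crawlDelay: 10,
    },
    // sitemap accepts string | string[], so multiple sitemaps can be listed
    sitemap: [
      'https://acme.com/sitemap.xml',
      'https://acme.com/blog/sitemap.xml',
    ],
    // Preferred host for crawlers that recognize the Host directive
    host: 'acme.com',
  };
}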