Tabelog Robots.txt
If you’ve ever tried to crawl Tabelog (食べログ), Japan’s most authoritative restaurant review platform, you’ve met its first line of defense. It’s not a CAPTCHA. It’s not an IP ban. It’s a deceptively simple text file: https://tabelog.com/robots.txt.

At first glance, it looks like a standard robots.txt. But look closer. It tells a fascinating story about data protection, competitive moats, and Japan’s unique web culture.

User-agent: *
Disallow: /search/
Disallow: /rgsearch/
Disallow: /kw/
Disallow: /syop/
Disallow: /rr/
Disallow: /list/
Disallow: /rvw/
Disallow: /photo/
Disallow: /map/
Disallow: /guide/
Disallow: /sitemap/
Disallow: /navi/
Disallow: /rank/
Disallow: /shop/%A5%EA%A5%B9%A5%C8
Disallow: /bshop/
Disallow: /rstd/
Disallow: /west/
Disallow: /tokyo/
Disallow: /osaka/
Disallow: /aichi/
Disallow: /kyoto/
Disallow: /hyogo/
Disallow: /hokkaido/
Disallow: /fukuoka/
Disallow: /miyagi/
Disallow: /chiba/
Disallow: /saitama/
Disallow: /kanagawa/
Disallow: /shizuoka/
Disallow: /hiroshima/

What Tabelog is really saying

1. “Search results are off-limits.” The /search/ and /list/ paths are blocked. This is common for large sites to prevent infinite crawl loops, but for Tabelog it’s strategic: search result pages contain ranked restaurant lists, their core IP. Letting search engines index those would let competitors reverse-engineer their ranking algorithm.
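To make the mechanics concrete, here’s a minimal sketch of how a rule-abiding crawler interprets an excerpt of those directives, using Python’s standard urllib.robotparser. The last URL checked is a hypothetical path, not a real Tabelog page; it’s only there to show that anything outside the Disallow prefixes stays fetchable.

```python
# Minimal sketch: how a standards-compliant crawler applies an excerpt of
# Tabelog's Disallow rules. The "allowed" URL at the end is a hypothetical path.
from urllib import robotparser

RULES = """\
User-agent: *
Disallow: /search/
Disallow: /list/
Disallow: /rank/
Disallow: /tokyo/
"""

parser = robotparser.RobotFileParser()
parser.parse(RULES.splitlines())

BOT = "example-bot"  # any agent not named explicitly falls under the * group

# Search and listing scaffolding is off-limits...
print(parser.can_fetch(BOT, "https://tabelog.com/search/"))  # False
print(parser.can_fetch(BOT, "https://tabelog.com/tokyo/"))   # False

# ...but anything outside the Disallow prefixes is still allowed.
print(parser.can_fetch(BOT, "https://tabelog.com/example-page/"))  # True (hypothetical path)
```

Keep in mind that robots.txt is purely advisory: it only stops crawlers that choose to respect it, which is why it’s just the first line of defense.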
A surprising omission

A robots.txt often points to a sitemap.xml. Tabelog’s doesn’t. Either they rely on sitemaps submitted through Google Search Console, or they deliberately avoid publicizing their URL structure. Given the number of blocked paths, the latter feels intentional.

The subtext: Defensive design

Tabelog’s robots.txt is not about politeness. It’s about asymmetry. They want Google to index their restaurant detail pages (the core content users need), but not the scaffolding that makes those pages discoverable in bulk.
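About that missing Sitemap line: if you want to verify it yourself, Python 3.8+ can report any Sitemap directives a robots.txt declares. A rough sketch, assuming the live file is reachable and still has no Sitemap entry when you run it:

```python
# Rough check for Sitemap directives in the live robots.txt (Python 3.8+).
# site_maps() returns the declared sitemap URLs, or None if there are none.
from urllib import robotparser

parser = robotparser.RobotFileParser("https://tabelog.com/robots.txt")
parser.read()  # fetch and parse the live file

print(parser.site_maps())  # None while the file declares no Sitemap URLs
```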