> Your robots.txt file is the very first thing Googlebot looks for. If it can not reach this file, it will stop and won't crawl the rest of your site. Meaning your pages will remain invisible (on Google).
This implication (a stopped crawl means your pages become invisible) directly contradicts Google's own documentation[0], which states:
> If other pages point to your page with descriptive text, Google could still index the URL without visiting the page. If you want to block your page from search results, use another method such as password protection or noindex.
What I take from the article is that the big change is Google now treating an unreachable robots.txt as if it disallowed all crawling. Meaning you can still get indexed, just not crawled (as per the quote above).
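To make the crawl-vs-index split concrete, here's a sketch (the path is made up):

    # robots.txt controls crawling, not indexing. A disallowed URL can
    # still be indexed from descriptive anchor text on other sites.
    User-agent: *
    Disallow: /private/

    # To keep a page out of search results entirely, let it be crawled
    # and serve noindex instead, either as an HTTP header:
    #   X-Robots-Tag: noindex
    # or in the HTML:
    #   <meta name="robots" content="noindex">

If the page is blocked from crawling, Googlebot never sees the noindex directive, which is why the docs recommend one or the other, not both.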
My cynical take: this is preparation for a future AI-related lawsuit. Every site explicitly allowing Google (and/or other crawlers) is proof that the crawling happens with the website's permission.
Oh, you'd want to appear in Google search results without appearing in Gemini? Tough luck, bro.
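(To be fair, Google does publish a separate Google-Extended token for opting out of Gemini model training, something like the sketch below, but as far as I know it doesn't cover AI features baked into Search itself, so the complaint stands for those.)

    # Hypothetical robots.txt: stay in Search, opt out of Gemini training.
    User-agent: Googlebot
    Allow: /

    # Google-Extended governs use of content for Gemini/Vertex AI training;
    # it does not affect Search indexing or, AFAIK, AI Overviews.
    User-agent: Google-Extended
    Disallow: /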
[0] https://developers.google.com/search/docs/crawling-indexing/...