Fix Blocked by robots.txt Issue in Google Search Console – Complete Guide
Are your pages not appearing on Google? You might be seeing the message "Blocked by robots.txt" in Google Search Console.
This means that Googlebot is not allowed to access some parts of your website, which may hurt your SEO.
Don’t worry. In this guide, you’ll learn:
- What causes the "Blocked by robots.txt" issue
- How to check which pages are blocked
- How to fix it for better SEO
🔍 What is the "Blocked by robots.txt" Issue?
Your website has a file called robots.txt. It tells search engines which pages they may crawl and which to skip. If this file blocks important pages, Google can't crawl them, so they usually won't be indexed properly or rank in search.
🚫 Common Causes:
- Blocking label pages: Blogger often blocks /search/label/ pages to avoid duplicate content.
- Blocking feed or admin pages: URLs like /feeds/ or /admin/ are blocked for SEO or privacy.
- Wrong robots.txt settings: mistakenly blocking useful pages.
Example Blocked URL:
https://www.geekscodes.com/search/label/Blogging
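To see why this URL gets flagged, you can reproduce the check locally. Below is a minimal sketch using Python's built-in urllib.robotparser with Blogger's default rules; note that Python applies rules in file order while Googlebot uses the most specific match, so treat it as a rough approximation.

```python
import urllib.robotparser

# Blogger's default rules: /search (and therefore /search/label/...) is disallowed
default_rules = """\
User-agent: *
Disallow: /search
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(default_rules.splitlines())

url = "https://www.geekscodes.com/search/label/Blogging"
print(rp.can_fetch("Googlebot", url))  # False -> "Blocked by robots.txt"
```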
🧪 How to Check Which Pages Are Blocked
✅ Method 1: Google Search Console
- Open Google Search Console.
- Go to Indexing → Pages.
- Find the section "Why pages aren’t indexed".
- Look for "Blocked by robots.txt".
- Click to see which URLs are affected.
🛠 Method 2: URL Inspection Tool
- Open Google Search Console.
- Paste the blocked URL into the URL Inspection bar at the top.
- In the report, check Page indexing → Crawl allowed?.
- If it shows "No: blocked by robots.txt", the page is blocked.
- To review the file itself, go to Settings → robots.txt and view the version Google last fetched.
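If you prefer a scripted check, the sketch below fetches your live robots.txt and tests several URLs at once. It uses the same Python urllib.robotparser approach as above; the post and feed URLs are placeholders, so swap in the URLs Search Console reports for your own site.

```python
import urllib.robotparser

SITE = "https://www.geekscodes.com"  # replace with your own blog address

rp = urllib.robotparser.RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()  # downloads and parses the live robots.txt

# Placeholder URLs -- swap in the ones reported under "Blocked by robots.txt"
urls = [
    f"{SITE}/search/label/Blogging",
    f"{SITE}/2024/01/example-post.html",
    f"{SITE}/feeds/posts/default",
]

for url in urls:
    status = "allowed" if rp.can_fetch("Googlebot", url) else "BLOCKED"
    print(f"{status:7}  {url}")
```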
🛠 How to Fix the "Blocked by robots.txt" Issue
Step 1: Update Your robots.txt in Blogger
- Go to your Blogger Dashboard.
- Click Settings.
- Scroll to Crawlers and Indexing.
- Enable Custom robots.txt.
- Paste the following updated code:
✅ New robots.txt Code (Allows Label Pages)
User-agent: *
Allow: /search
Allow: /
Sitemap: https://www.geekscodes.com/sitemap.xml
❌ Old robots.txt Code (Blocks Label Pages)
User-agent: *
Disallow: /search
Allow: /
Sitemap: https://www.geekscodes.com/sitemap.xml
Tip: Change Disallow: /search to Allow: /search only if you want Google to index your label/tag pages. Simply removing the Disallow line has the same effect, since robots.txt allows everything that isn't explicitly disallowed. The sketch below compares both variants.
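Before publishing, you can sanity-check both variants locally to confirm which one opens up your label pages. A small sketch, again using urllib.robotparser with a sample label URL:

```python
import urllib.robotparser

label_url = "https://www.geekscodes.com/search/label/Blogging"

variants = {
    "old (Disallow: /search)": "User-agent: *\nDisallow: /search\nAllow: /\n",
    "new (Allow: /search)":    "User-agent: *\nAllow: /search\nAllow: /\n",
}

for name, rules in variants.items():
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(rules.splitlines())
    verdict = "crawlable" if rp.can_fetch("Googlebot", label_url) else "blocked"
    print(f"{name}: label pages are {verdict}")
```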
Step 2: Validate the Fix in Google Search Console
- Go back to Google Search Console.
- Click on the "Blocked by robots.txt" error message.
- Click Validate Fix.
- Wait 1–2 weeks for Google to recrawl your site.
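Validation will only pass once Google can fetch the updated file, so it's worth confirming the new robots.txt is actually live before you request validation. A minimal sketch using only the Python standard library; the URL is this site's, so substitute your own:

```python
from urllib.request import urlopen

# Fetch the robots.txt that crawlers will actually see
with urlopen("https://www.geekscodes.com/robots.txt") as resp:
    body = resp.read().decode("utf-8")

print(body)

# Quick check that the old blocking rule is gone before requesting validation
if "Disallow: /search" in body:
    print("Old rule is still live -- wait for Blogger to publish the new file.")
else:
    print("Updated rules are live -- safe to validate the fix.")
```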
📌 Should You Fix This Error?
- Important content blocked? → Yes, fix it now.
- Only label, feed, or admin pages blocked? → It’s okay to ignore.
🧠 Final Thoughts
The "Blocked by robots.txt" warning is common in Google Search Console. Fix it if it affects key blog posts, pages, or product listings. You can ignore it for label, feed, or login pages.
Use the correct robots.txt settings and let Google crawl what matters. This improves visibility and ranking in search results.
❓ FAQs – Fixing robots.txt Issues in Blogger
1. What does 'Blocked by robots.txt' mean?
It means that Google can’t access some pages on your site due to rules in your robots.txt file.
2. Should I allow /search/label/ pages?
If you want those label pages indexed in Google, yes. Otherwise, you can block them to avoid duplicate content.
3. Will editing robots.txt harm my site?
No, if done correctly. Only allow access to pages that you want Google to crawl and rank.
4. How long does it take for Google to reindex after fixing?
Usually within 7 to 14 days, but it may vary.
5. How do I test if my page is still blocked?
Use the URL Inspection tool in Google Search Console (or a quick script like the ones above) to confirm whether the page is crawlable.