How to Crawl a Single Page in Screaming Frog


In your SEO work, sometimes you want to crawl a single page.

Maybe you want to check the HTTP Status of that one page or you have some other reason to crawl a single page.

You can achieve this using a number of tools, but in this post I’m going to show you how to accomplish this using Screaming Frog SEO Spider.

In my experience Screaming Frog makes sense for this: you get excellent flexibility in choosing your User-Agent (Googlebot Smartphone is recommended here, since that's what Google primarily crawls with these days), and you also get a lot of other useful information about the page.
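If you ever want to replicate the core idea outside Screaming Frog, fetching a single page while identifying as a crawler just means setting the `User-Agent` header on the request. Here is a minimal sketch using Python's standard library; the User-Agent string below is a simplified Googlebot-style token for illustration only, not Google's exact current smartphone string:

```python
import urllib.request

# Simplified Googlebot-style User-Agent, for illustration only.
# Google's real smartphone crawler sends a longer string.
GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

def build_request(url: str) -> urllib.request.Request:
    """Build a request for a single page, identifying as a Googlebot-style crawler."""
    return urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})

req = build_request("https://example.com/")
print(req.get_header("User-agent"))  # the crawler User-Agent we set
# To actually fetch the page you would call urllib.request.urlopen(req),
# which needs network access, so it is not executed here.
```

Screaming Frog handles all of this for you via its User-Agent configuration; the snippet just shows what is happening under the hood.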

Here are the steps to crawl a single page using Screaming Frog:

  1. Set the spider to "List" mode (in the top navigation, go "Mode" > "List")
  2. Set the spider to crawl just one page (in the top navigation, go "Configuration" > "Spider" > "Limits" > set "Limit Crawl Depth" to 0)
  3. Now enter your URL (in the bar below the top navigation, click "Upload", then add your single URL by pasting it, typing it in manually, or uploading it from a file)
  4. Click "OK" and Screaming Frog will crawl that single page

Voila – Screaming Frog will provide you with various bits of info about that page such as Content Type, Status Code, Indexability Status, and so on.
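To give a feel for what an "Indexability Status" means, here is a deliberately simplified sketch of how such a status can be derived from a single-page crawl (status code plus any robots directives). These rules are my own rough approximation for illustration, not Screaming Frog's actual logic:

```python
def indexability(status_code: int, robots_directives: str = "") -> str:
    """Simplified indexability check: treat a page as indexable only if it
    returns 200 and carries no noindex directive. This approximates, but
    does not reproduce, how an SEO crawler classifies pages."""
    if status_code != 200:
        return "Non-Indexable"
    if "noindex" in robots_directives.lower():
        return "Non-Indexable"
    return "Indexable"

print(indexability(200))                       # Indexable
print(indexability(200, "NOINDEX, nofollow"))  # Non-Indexable
print(indexability(301))                       # Non-Indexable
```

In practice real crawlers also weigh canonicals, robots.txt, and X-Robots-Tag headers, which is exactly why a tool like Screaming Frog is handy for this check.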

BUT – what if you want to check the HTTP Status of all of the links on that single page?

If you right click on your crawled single URL then choose “Export” > “Outlinks”, you’ll see that there is no HTTP Status listed for the links on the page.

This is because we need to tweak our configuration.

How to crawl a single page in Screaming Frog and get the HTTP Status of the outlinks on that page

You’ll need to make the following tweaks to your configuration:

  • Set the crawl depth to 1 (in the top navigation, go "Configuration" > "Spider" > "Limits" > set "Limit Crawl Depth" to 1)
  • In the top navigation, go "Configuration" > "Spider" > "Crawl" and make sure the "Check Links Outside of Start Folder" box is ticked

Enter your URL to crawl as in the previous steps. When the crawl completes, right-click the URL in question and choose "Export" > "Outlinks": in the export, every link on the page now has its HTTP Status in the "Status Code" and "Status" columns.
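Conceptually, the "Outlinks" export does two things: it extracts every link from the crawled page's HTML, then requests each link to record its status code. The extraction half can be sketched with Python's built-in HTML parser (the status-checking half would need network access, so it is only noted in a comment):

```python
from html.parser import HTMLParser

class OutlinkParser(HTMLParser):
    """Collect the href targets of <a> tags, roughly what an
    'Outlinks' export lists for a single crawled page."""
    def __init__(self):
        super().__init__()
        self.outlinks = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.outlinks.append(value)

html = '<p><a href="https://example.com/a">A</a> <a href="/b">B</a></p>'
parser = OutlinkParser()
parser.feed(html)
print(parser.outlinks)  # ['https://example.com/a', '/b']

# To get each outlink's status code you would then issue one request per URL,
# e.g. with urllib.request.urlopen, which is omitted here (needs network access).
```

This is why the crawl depth has to be 1 rather than 0: the spider must actually request each extracted link to learn its status.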

Lastly, don't be confused if your completed crawl shows more URLs than just the single page you're crawling. To check the links on that page, the Spider has to crawl those links, and the crawled URLs then appear in the UI.

Questions? Comments? Let me know in the comments section below!
