Inoreader Fetcher

Inoreader Fetcher is the engine that collects RSS feeds on behalf of users who choose to subscribe to them in the Inoreader app. The Fetcher periodically connects to RSS feeds to check for new content. You might see those requests in your web server logs.

The Inoreader Fetcher always identifies itself with the following user agent:

Inoreader/1.0 (+http://www.inoreader.com/feed-fetcher; 3 subscribers; )

You can use the “subscribers” information to track how many Inoreader users consume your RSS feeds. Even if you have multiple subscribers, Inoreader needs to fetch your feed only once to distribute the content to all of them. This is a key benefit of cloud-based RSS readers over stand-alone desktop applications, where each user’s application must connect to your website separately to retrieve the feed.

How do I request that Inoreader not retrieve some or all of my site’s feeds?

Inoreader allows users to add any RSS feed to their account if they have its public URL. If your website provides a public RSS feed, Inoreader will not stop users from subscribing to it. Inoreader Fetcher doesn’t read your website’s robots.txt, so you can’t block it with rules in that file. If you want to prevent Inoreader from reading some or all of your feeds, serve a 404 response to requests whose user agent contains Inoreader/1.0.
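In practice you would do this in your web server or application configuration. As a minimal sketch of the decision logic (the blocked path is a hypothetical example):

```python
def response_status(path: str, user_agent: str) -> int:
    """Return the HTTP status to serve for a feed request.

    Serves 404 to Inoreader Fetcher (identified by its user agent)
    for the blocked feed paths, and 200 otherwise.
    """
    blocked_feeds = {"/feed.xml"}  # hypothetical: paths you want to hide
    if path in blocked_feeds and "Inoreader/1.0" in user_agent:
        return 404
    return 200

ua = "Inoreader/1.0 (+http://www.inoreader.com/feed-fetcher; 3 subscribers; )"
print(response_status("/feed.xml", ua))            # 404
print(response_status("/feed.xml", "Mozilla/5.0")) # 200
```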

How often will Inoreader retrieve my feeds?

Inoreader uses an adaptive algorithm that learns the best timing to fetch each feed, so users receive timely updates without Inoreader overloading remote websites. Generally, a feed isn’t polled more often than every 30 minutes, and most feeds are fetched only once an hour. If a feed has had no new content in the past month, you can expect Inoreader to crawl it only once per day.
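The actual algorithm is not public, but the schedule described above can be sketched roughly as follows (the thresholds are assumptions inferred from this description, not Inoreader’s real values):

```python
from datetime import timedelta

def poll_interval(days_since_last_new_item: float) -> timedelta:
    """Illustrative polling schedule: never more often than every 30
    minutes, hourly for typical feeds, and once per day for feeds with
    no new content in the past month."""
    if days_since_last_new_item >= 30:
        return timedelta(days=1)       # stale for a month -> daily
    if days_since_last_new_item < 1:
        return timedelta(minutes=30)   # very active -> 30-minute floor
    return timedelta(hours=1)          # typical feed -> hourly

print(poll_interval(0.1))  # 0:30:00
print(poll_interval(45))   # 1 day, 0:00:00
```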

Why is Inoreader trying to fetch incorrect URLs from my website?

Inoreader keeps trying to fetch feeds as long as users are subscribed to them, even if the feeds return errors. Some websites break occasionally, and their feeds return 404 errors until the owners fix them. Inoreader cannot know if or when a feed will be restored, so it periodically retries feeds that return 404 and other errors to check whether they are working again.

Why isn’t Inoreader obeying my robots.txt file?

Inoreader is not a typical crawler that downloads an index of your website and crawls your links recursively. Instead, Inoreader only fetches direct RSS feed URLs provided by our users. There is no need to disallow Inoreader from certain pages of your website, as it will not try on its own to fetch content that is not an RSS feed.

What IP addresses does Inoreader Fetcher use?

The Inoreader backend uses a few different pools of IP addresses. An up-to-date list of those pools can be found here: inoreader.com/.well-known/ip_list.txt
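If you want to verify that a request claiming to be Inoreader really comes from those pools, you can check the client IP against the list. The sketch below assumes the file contains one IP range per line in CIDR notation (the actual file format may differ, and the ranges shown are illustrative placeholders, not Inoreader’s real addresses):

```python
import ipaddress

def is_in_pools(ip: str, ip_list_text: str) -> bool:
    """Check whether an address falls inside any of the listed CIDR ranges."""
    addr = ipaddress.ip_address(ip)
    for line in ip_list_text.splitlines():
        line = line.strip()
        if not line:
            continue
        if addr in ipaddress.ip_network(line):
            return True
    return False

# Hypothetical pool list for illustration only:
pools = "198.51.100.0/24\n203.0.113.0/24\n"
print(is_in_pools("198.51.100.7", pools))  # True
print(is_in_pools("192.0.2.1", pools))     # False
```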

Do you support push technology?

Yes. Inoreader fully supports WebSub (formerly PubSubHubbub), and all website owners are encouraged to support it. You can add a special <link rel="hub"> element to your RSS feed to indicate that it supports WebSub. When Inoreader detects this element, it subscribes to your WebSub hub and polls your feed much less frequently. If you use WordPress, a free plugin can automatically enable WebSub support for your blog.

Do you support conditional HTTP GET?

Yes, Inoreader Fetcher supports conditional HTTP requests. This means that if your web server is configured correctly, Inoreader Fetcher saves you bandwidth by not downloading your RSS feed’s contents when they haven’t changed since the last fetch.

I need to redirect my feeds to a new website or domain.

Inoreader Fetcher follows HTTP redirects. Temporary (302) redirects are cached for 24 hours, after which the original URL is retried. Permanent (301) redirects are cached forever, and the original URL will never be retried. Use 301 redirects only when you have permanently moved your website to a new host or domain.
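The distinction can be sketched from the fetcher’s point of view: a 301 permanently replaces the stored feed URL, while a 302 is followed without changing it. A minimal illustration of that decision (not Inoreader’s actual code):

```python
def next_fetch_url(stored_url: str, status: int, location: str) -> str:
    """Decide which URL to keep on file for future fetches after a redirect.

    A 301 permanently replaces the stored URL (the old one is never
    retried); a 302 is followed for this fetch only, so the stored
    URL is kept and retried once the temporary cache expires.
    """
    if status == 301:
        return location    # permanent move: adopt the new URL
    return stored_url      # 302 or anything else: keep the original

print(next_fetch_url("http://old.example.com/feed", 301, "https://new.example.com/feed"))
print(next_fetch_url("http://old.example.com/feed", 302, "https://tmp.example.com/feed"))
```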

My question isn’t answered here. Where can I get more help?

If you’re still having trouble, you can contact us at any time.