We are trying to scrape a website. We created a playbook, but it only lets us scrape 30 contacts; after that it crashes with the message “Access Restricted due to exceeded limit”. We have tried adding 10-second delays to the scrape actions, but it hasn't helped.
Thanks so much for your patience, and I apologize for the delay in getting back to you.
I’ve been looking into your issue and wanted to share a few things:
I don’t immediately recognize the error message “Access Restricted due to exceeded limit” — could you clarify whether this is appearing within Bardeen or on the target website itself? That would help me better understand where the block is happening.
I also reviewed the playbook and noticed a conflict in the configuration (see attached screenshot). Both the number of items and the number of pages were set — Bardeen expects you to choose only one. Once I corrected this, I was able to successfully scrape up to 50 items, but couldn’t go beyond that.
Upon deeper inspection, it looks like the pagination button in the scraper isn’t functioning properly. The website you’re scraping seems to require a custom CSS selector for pagination, which makes it incompatible with general-purpose scraping tools. Unfortunately, I’m not able to assist with custom code or selector development directly.
That said, if you’re considering a Teams plan or higher, our engineering team can build a fully functional playbook tailored to your website for you — including the necessary custom logic.
Let me know what plan you’re considering or if you’d like help exploring the options!
This particular error appears on the website itself. It only lets us scrape 40-50 contacts; after that it crashes with this error. Then it goes back to normal automatically after about 5 minutes.
Thanks for clarifying this.
Just to confirm: after subscribing, would the support team be able to build the scrapers we need for these websites?
Thanks for the follow-up and for confirming that the error appears on the website itself.
That kind of restriction sounds like an anti-bot mechanism on the site — especially if it starts working again after a few minutes. You can try adding a delay between fetching each list item directly inside the scraper settings (see screenshot below). This often helps reduce the chances of being blocked.
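To illustrate why a short per-item delay can still trigger this kind of block, here is a toy simulation of one common throttling scheme. The 50-item limit and 5-minute recovery come from this thread; the sliding-window model itself, and all names in the sketch, are assumptions for illustration, not how this particular site necessarily works:

```python
class WindowLimiter:
    """Toy model of a server-side quota: at most `limit` requests per
    sliding `window` of seconds. Values are guessed from this thread
    (~50 items before the block, block clears after ~5 minutes)."""

    def __init__(self, limit=50, window=300):
        self.limit = limit
        self.window = window
        self.timestamps = []  # arrival times of recent requests

    def allow(self, now):
        # Forget requests older than the window, then check the quota.
        self.timestamps = [t for t in self.timestamps if now - t < self.window]
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False


def first_block(delay, total=100, limit=50, window=300):
    """Index of the first blocked request (or None), using a fake clock
    so nothing actually sleeps."""
    limiter = WindowLimiter(limit, window)
    t = 0.0
    for i in range(total):
        if not limiter.allow(t):
            return i
        t += delay
    return None
```

Under this model, a delay only helps once it exceeds window / limit (here 300 / 50 = 6 seconds): `first_block(5)` still hits the block at item 50, while `first_block(10)` never gets blocked, because older requests expire from the window faster than new ones arrive. Real sites may combine several signals, so treat this purely as intuition for tuning the delay setting.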
Regarding your question: yes, custom scraper development is available on the Teams plan or higher. If that's within your budget, I'd actually recommend it: you won't need to spend time troubleshooting or coding anything manually, and our engineers will fully build and test the scrapers tailored to your websites.
That said, I’m still more than happy to assist you here where I can — just note that my support is limited when it comes to writing custom selectors or handling technical pagination logic.
Let me know what you’d like to do next, and I’ll support you however I can!