Setting up a proxy in ScraperAPI

Scraper API is a cloud-based platform designed for automated web data extraction, effectively functioning as a scraping tool. It's a versatile system that supports custom scripts and collaborative project management.

A key feature of Scraper API is its cloud service API, enabling users to craft scripts tailored to specific websites. This makes it suitable for a wide range of services, including Facebook, LinkedIn, Google search results, Amazon marketplace, and others. Users can configure it to extract various types of data such as HTML documents, tables, text, images, and even information from .js files.

One challenge when scraping with Scraper API is the risk of triggering anti-fraud systems: scripts that websites employ to prevent data scraping. These activate when a site detects numerous requests from the same IP or traffic from dubious sources, typically responding with a captcha verification page. To bypass these anti-fraud measures and related restrictions, set up a proxy before using Scraper API for your web scraping needs.

Video tutorial for proxy configuration in ScraperAPI

How to set up a proxy in ScraperAPI for scraping

The Scraper API is compatible with various programming environments, including Bash shell for UNIX systems, JavaScript (Node), Python/Scrapy, PHP, Ruby, and Java. It offers a user-friendly way to configure your proxy, as detailed in these instructions.

  1. Log into your Scraper API account. On the Dashboard, you’ll find essential details such as your API key, a command for connecting to the service via API, and a command for a proxy connection.

  2. Under the “Sample Proxy Code” section, you’ll see a template command:

    curl -x "http://scraperapi:APIKEY@proxy-server.scraperapi.com:8001" -k "http://httpbin.org/ip"

  3. To connect your proxy, copy this template into your script. Replace “scraperapi” with your username and “APIKEY” with your proxy password. After the “@” symbol, enter your proxy server’s IP address and port, separated by a colon. Change "http://httpbin.org/ip" to the URL of the page you are scraping. Your modified command should look something like this:

    curl -x "http://USERNAME:PASS@IP-proxy:Port" -k "http://webscrapingtarget"

Apply this method to integrate the proxy in scripts written in other programming languages as well.

For Python, the adapted request follows the same pattern.
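A minimal sketch using Python's standard urllib module; the USERNAME:PASS@IP-proxy:Port placeholders mirror the curl template and must be replaced with your real proxy credentials:

```python
import urllib.request

# Placeholder proxy URL, mirroring the curl template; replace with real values.
proxy_url = "http://USERNAME:PASS@IP-proxy:Port"

# Route both plain HTTP and HTTPS traffic through the proxy.
handler = urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
opener = urllib.request.build_opener(handler)

# With working credentials, this fetches the target page through the proxy:
# print(opener.open("http://httpbin.org/ip").read().decode())
```

The third-party requests library accepts the same `{"http": ..., "https": ...}` mapping through its `proxies=` parameter, if you prefer it over urllib.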


Likewise, the request can be adjusted in a comparable way for Ruby.
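A comparable sketch with Ruby's built-in Net::HTTP; again, the proxy host, port, and credentials below are placeholders taken from the curl template:

```ruby
require "net/http"
require "uri"

# Placeholder proxy details, mirroring the curl template; replace with real values.
proxy_host = "IP-proxy"
proxy_port = 8001
proxy_user = "USERNAME"
proxy_pass = "PASS"

uri  = URI("http://httpbin.org/ip")
# Net::HTTP accepts the proxy address and credentials directly in its constructor.
http = Net::HTTP.new(uri.host, uri.port, proxy_host, proxy_port, proxy_user, proxy_pass)

# With working credentials, this performs the request through the proxy:
# puts http.get(uri.path).body
```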


Setting up a proxy with the Scraper API is crucial for several reasons. It lets you automate data collection while avoiding the anti-fraud systems that are triggered by multiple requests from a single IP, minimizing the risk of anti-bot blocks and account bans. Additionally, a proxy gives you access to region-restricted data, broadening the scope of your scraping capabilities.