SERP Eagle


What is SERP Eagle?

SERP Eagle is an open-source Google Search Results Scraper written in Python.

It takes a user query, searches it on Google, and then scrapes the results.

What does it scrape?

It scrapes the following fields:

  • Organic Results
  • Paid Results
  • Featured Snippet
  • Questions from the "People also ask" box
  • Related queries/searches

How to install it?

You can clone the repo by using this command:

git clone <repository-url>

How to use it?

  • Install the required dependencies by using this command:

    pip install -r requirements.txt

  • Now, first things first: you have to set the variables in the file.
    The variables are:

    SEARCH_QUERY: The query that you want to search on Google.

    TIMEOUT_FOR_PAGE_LOAD: Sets the timeout for page loading. If the page does not load within this time, Playwright will throw a timeout exception.

    DEFAULT_TIMEOUT: Sets Playwright's default timeout, used for example when selecting elements. If an element does not appear within that time, Playwright will throw an exception. If you are facing timeout errors related to loading elements, you can increase it.

    SLEEP_TIME: The time the scraper sleeps before continuing to scroll the window. If you want to scroll faster, set it to zero; if you want to slow down, increase it.
    HEADLESS: If you want to run the scraper with a headless browser, set it to true; otherwise set it to false.
    PROXIES: If you don't want to use proxies in the scraper, set this variable to None. Otherwise, give the details of your proxy in this variable as a dictionary. Example:

    PROXIES = {
        "server": "",
        "username": "user",
        "password": "password"
    }
  • After setting the variables, run the file. It will start scraping.
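Putting the variables above together, a typical configuration block might look like this (the values are illustrative, not defaults from the project):

```python
# Example configuration (illustrative values; variable names from this README)
SEARCH_QUERY = "best python web scrapers"  # query to search on Google
TIMEOUT_FOR_PAGE_LOAD = 30000  # ms; Playwright throws a timeout error if the page loads slower
DEFAULT_TIMEOUT = 10000        # ms; default wait for elements to appear
SLEEP_TIME = 1.0               # seconds to pause between scrolls (0 = fastest)
HEADLESS = True                # run the browser without a visible window
PROXIES = None                 # or {"server": "...", "username": "user", "password": "password"}
```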

How does it work?

Load page: Once it is started, it will first load the main page.

Load results: Then it will load all the results, continuing to scroll down until there are no more results to load.
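The scroll-until-exhausted step can be sketched as below. This is a simplified illustration, not the project's actual code: `get_page_height` and `scroll_down` are hypothetical callables standing in for the Playwright calls the scraper makes.

```python
import time

def scroll_until_exhausted(get_page_height, scroll_down, sleep_time=1.0):
    """Keep scrolling until the page height stops growing.

    get_page_height: callable returning the current scrollable height
    scroll_down: callable that scrolls the window down one step
    sleep_time: pause between scrolls (the SLEEP_TIME variable above)
    """
    last_height = get_page_height()
    while True:
        scroll_down()
        time.sleep(sleep_time)  # give lazy-loaded results time to appear
        new_height = get_page_height()
        if new_height == last_height:  # nothing new loaded: we are done
            break
        last_height = new_height
```

Setting `sleep_time` to zero makes the loop run as fast as possible, at the risk of breaking out before slow results have rendered.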

Scrape results: Then it will scrape the loaded results. 

Saving results: After scraping the results, it will save them in a JSON file. Note: if a field is not available in the results, it will be null in the JSON file.

Enjoy Scraping 😁
