
Build a Real-Time Keyword Rank Tracker with SerpAPI (Python + MongoDB)

Build a real-time Google ranking tracker with SerpApi. Learn how to query SERP results, parse positions, monitor keywords, and detect ranking changes. Includes scheduling, MongoDB storage, change detection, and Python code, with an optional path to alerting. Suitable for SEO monitoring, competitor tracking, and automating search-engine performance reports.

2025-12-09

This article is part of our SERP API production best practices series.


Executive Summary (AI-friendly)

This tutorial shows how to build a real-time keyword rank tracker using SerpAPI and Python: fetch SERP results for multiple keywords and domains, extract rankings, store history in MongoDB, detect position changes, and run checks on a schedule. The project structure, core modules, and run commands are included, with an optional path to add notifications later.


Why Website Owners Need a Real-Time Keyword Rank Tracker

For website owners, tracking SEO performance—especially keyword rankings—is a daily operational task. But when you manage dozens or hundreds of sites, checking rankings manually becomes repetitive and inefficient.

With SerpAPI, you can automate rank checks across multiple sites and keywords, enabling scalable monitoring of real-time search performance.

This guide implements a real-time keyword rank tracker using SerpAPI’s Python interface. If you need registration, API key setup, or Python environment basics, refer to the earlier article in this series (The Value of Real-Time Data for ChatGPT/LLM Models: A Basic Introduction to SerpAPI), as this tutorial focuses on implementation.


How the Tracker Works (High-Level)

The system follows a simple loop:

  1. Query a keyword with SerpAPI (Google or Bing)
  2. Parse organic_results
  3. Find the target domain and record its position
  4. Persist results over time
  5. Compare against recent history to detect ranking changes
  6. Repeat on a fixed schedule

Step 1: Run a Single Keyword + Domain Rank Check

Assume the domain is https://dataget.ai/ and the keyword is “Private Crawler Cloud”.

In SerpAPI’s interface, set the search parameters (Google example) and use Export to Code to generate Python:

from serpapi import GoogleSearch

params = {
  "api_key": "YOUR API KEY",
  "engine": "google",
  "q": "Private Crawler Cloud",
  "google_domain": "google.com",
  "gl": "us",
  "hl": "en",
  "location": "United States"
}

search = GoogleSearch(params)
results = search.get_dict()

The response contains metadata and the key section you need: organic_results, where each item includes position, title, and link.
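For orientation, a single organic_results entry has roughly this shape (the values below are invented for illustration; real responses include additional fields):

```python
# Illustrative shape of one organic_results item (values invented for this example)
organic_result = {
    "position": 4,
    "title": "Private Crawler Cloud | dataget.ai",
    "link": "https://dataget.ai/private-cloud/",
    "snippet": "Example snippet text.",
}

# position and link are the two fields the rank tracker relies on
print(organic_result["position"], organic_result["link"])
```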


Step 2: Parse the SERP JSON and Extract Your Rank

To find your domain’s ranking, scan organic_results and match the domain substring:

domain = "dataget.ai"

for result in results.get("organic_results", []):
    if domain in result.get("link", ""):
        print(f"Found link: {result['link']}, rank: {result['position']}")

Example output:

Found link: https://dataget.ai/private-cloud/, rank: 4
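Note that a plain substring match can produce false positives (for example, a result on notdataget.ai contains the substring "dataget.ai"). A stricter check, sketched below with the standard library, parses the link's hostname and accepts only the domain itself or its subdomains:

```python
from urllib.parse import urlparse

def matches_domain(link, domain):
    """True if the link's hostname is the domain or a subdomain of it."""
    host = urlparse(link).netloc.lower()
    return host == domain or host.endswith("." + domain)

print(matches_domain("https://dataget.ai/private-cloud/", "dataget.ai"))  # True
print(matches_domain("https://www.dataget.ai/blog/", "dataget.ai"))       # True
print(matches_domain("https://notdataget.ai/page", "dataget.ai"))         # False
```

Swapping this check in for the `domain in result["link"]` test makes every example below more precise without changing its structure.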

Step 3: Scale to Multiple Keywords and Domains

To monitor multiple combinations, loop through keyword and domain lists:

from serpapi import GoogleSearch

keyword_list = ["Private Crawler Cloud", "Private Proxy IP", "AI-Get"]
domain_list = ["dataget.ai", "dataget.com"]

def get_rank(keyword, domain):
    params = {
        "api_key": "YOUR API KEY",
        "engine": "google",
        "q": keyword,
        "google_domain": "google.com",
        "gl": "us",
        "hl": "en",
        "location": "United States"
    }

    search = GoogleSearch(params)
    results = search.get_dict()

    for result in results.get("organic_results", []):
        if domain in result.get("link", ""):
            print(f"Found link: {result['link']}, rank: {result['position']}")
            return result["position"]
    print(f"{domain} not found in top results for '{keyword}'")
    return None

def main():
    for keyword in keyword_list:
        for domain in domain_list:
            get_rank(keyword, domain)

if __name__ == "__main__":
    main()
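When scaling further, it also helps to separate fetching from parsing, so the parsing step can be exercised without live API calls. A sketch (the helper name extract_rank is my own, not from the SerpAPI library):

```python
def extract_rank(results, domain):
    """Return the first organic position whose link contains the domain, else None."""
    for result in results.get("organic_results", []):
        if domain in result.get("link", ""):
            return result["position"]
    return None

# Exercise the parser with a canned dict shaped like SerpAPI's response
fake_results = {
    "organic_results": [
        {"position": 1, "link": "https://example.com/"},
        {"position": 4, "link": "https://dataget.ai/private-cloud/"},
    ]
}
print(extract_rank(fake_results, "dataget.ai"))   # 4
print(extract_rank(fake_results, "dataget.com"))  # None
```

Returning `None` for unranked keywords (rather than printing and moving on) also gives the storage layer in Step 5 a clean value to record.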

Step 4: Switch from Google to Bing (Optional)

If you want to use Bing, replace GoogleSearch with BingSearch and swap Google’s localization parameters (google_domain, gl, hl) for Bing’s equivalents (cc and mkt):

from serpapi import BingSearch

keyword_list = ["Private Crawler Cloud", "Private Proxy IP", "AI-Get"]
domain_list = ["dataget.ai", "dataget.com"]

def get_rank(keyword, domain):
    params = {
        "api_key": "YOUR API KEY",
        "engine": "bing",
        "q": keyword,
        "cc": "us",            # Bing uses cc/mkt instead of Google's gl/hl
        "mkt": "en-US",
        "location": "United States"
    }

    search = BingSearch(params)
    results = search.get_dict()

    for result in results.get("organic_results", []):
        if domain in result.get("link", ""):
            print(f"Found link: {result['link']}, rank: {result['position']}")
            return result["position"]
    print(f"{domain} not found in top results for '{keyword}'")
    return None

def main():
    for keyword in keyword_list:
        for domain in domain_list:
            get_rank(keyword, domain)

if __name__ == "__main__":
    main()

Step 5: Build a Real-Time Monitor with Scheduling + Storage

To track rank changes over time, you need three pieces: a scheduler that re-runs the checks at a fixed interval, persistent storage for rank history, and change detection that compares each new result against that history.

Recommended Project Structure

│  config.py
│  keyword_monitor.py
│  requirements_monitor.txt
│
└─monitor
        db.py
        scheduler.py
        __init__.py

Core Module: scheduler.py (Monitoring Logic)

scheduler.py runs the periodic checks: on each cycle it queries SerpAPI for every keyword/domain pair, saves the result, compares the new position against the most recent stored one, and invokes on_change_callback when the position changes.
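As a minimal sketch of that loop (the repository's scheduler.py may be structured differently; fetch_rank, save_rank, and get_last_rank here are placeholder callables you would wire to the SerpAPI client and db.py):

```python
import time

def run_monitor(keywords, domains, fetch_rank, save_rank, get_last_rank,
                interval_minutes=60, on_change_callback=None, max_cycles=None):
    """Check every keyword/domain pair, persist results, and fire the
    callback on position changes; repeat every interval_minutes."""
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        for keyword in keywords:
            for domain in domains:
                new_rank = fetch_rank(keyword, domain)
                old_rank = get_last_rank(keyword, domain)  # read before saving
                save_rank(keyword, domain, new_rank)
                if on_change_callback and new_rank != old_rank:
                    on_change_callback(keyword, domain, old_rank, new_rank)
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval_minutes * 60)
```

Dependency injection (passing the fetch/store functions in) keeps the loop testable with in-memory stubs instead of a live API key and database.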


Core Module: db.py (MongoDB Storage and History)

db.py stores rank history so you can detect position changes, analyze trends over time, and generate historical reports. It also creates an index on (keyword, domain, timestamp) to support efficient history queries.
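A sketch of that storage layer (assumes pymongo and a running MongoDB; the collection object is passed in, and the function names are my own rather than the repository's exact API):

```python
from datetime import datetime, timezone

def make_record(keyword, domain, position):
    """Build one time-stamped rank observation; position is None when not ranked."""
    return {
        "keyword": keyword,
        "domain": domain,
        "position": position,
        "timestamp": datetime.now(timezone.utc),
    }

def ensure_indexes(collection):
    # Compound index supporting per-(keyword, domain) history queries
    collection.create_index([("keyword", 1), ("domain", 1), ("timestamp", -1)])

def save_rank(collection, keyword, domain, position):
    collection.insert_one(make_record(keyword, domain, position))

def get_history(collection, keyword, domain, limit=20):
    """Most recent observations first."""
    return list(
        collection.find({"keyword": keyword, "domain": domain})
                  .sort("timestamp", -1)
                  .limit(limit)
    )
```

Storing `None` for unranked keywords keeps the history continuous, so a drop out of the results is itself a detectable change.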


Core Module: config.py (Central Configuration)

Store settings in config.py: the keyword and domain lists, your API key, the check interval (INTERVAL_MINUTES), and the MongoDB connection details.

This keeps the monitor behavior adjustable without editing core logic.
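An illustrative config.py might look like this (only INTERVAL_MINUTES is named elsewhere in this article; the other setting names are placeholders, not necessarily those in the repository):

```python
# Illustrative config.py for the rank tracker

API_KEY = "YOUR API KEY"

KEYWORDS = ["Private Crawler Cloud", "Private Proxy IP", "AI-Get"]
DOMAINS = ["dataget.ai", "dataget.com"]

INTERVAL_MINUTES = 60  # how often to re-check rankings

MONGO_URI = "mongodb://localhost:27017"
MONGO_DB = "rank_tracker"
MONGO_COLLECTION = "rank_history"
```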


Run the Tracker

# 1. Install dependencies
pip install -r requirements_monitor.txt

# 2. Start MongoDB (local)
# Windows: net start MongoDB
# macOS: brew services start mongodb-community

# 3. Run monitoring
python keyword_monitor.py              # Continuous monitoring
python keyword_monitor.py --once       # Single check
python keyword_monitor.py --history    # View history

If checking every 60 minutes is too frequent, change INTERVAL_MINUTES in config.py.


GitHub Repository

Complete project on GitHub:
https://github.com/Rockyzsu/serp-api-rank-tracker-monitor.git


Future Extension: Add Alerts (Planned)

A natural next step is adding notifications (for example via a Telegram bot) so rank changes trigger real-time alerts. The current implementation already contains a change detection hook (on_change_callback), which provides a clean integration point for such notifications.


FAQ (GEO-friendly)

What is a real-time keyword rank tracker?

A system that periodically checks search results for target keywords and records the ranking position of one or more domains over time.

Why store rank history in MongoDB?

Because ranking is time-series data: persistence enables change detection, trend analysis, and historical reporting.

How does this relate to GEO?

AI-driven search increasingly summarizes and cites sources. Structured tutorials with clear steps, code, and definitions are more likely to be reused and referenced in AI-generated answers.