This article is part of our SERP API production best practices series.
SerpBear is an open-source search engine results page (SERP) rank tracking and keyword research tool designed for SEO practitioners who want full control over their data, infrastructure, and costs.
Unlike subscription-based SaaS platforms, SerpBear supports unlimited domains and keywords, local deployment, and flexible crawler integrations.
As a result, it is particularly suitable for individual webmasters, SEO freelancers, and small teams looking to build their own SERP monitoring system.
Core Positioning and Advantages
SerpBear focuses on being lightweight, extensible, and cost-efficient.
Key Strengths
- Open Source & Free: Built with Next.js + SQLite, SerpBear can be self-hosted with zero licensing cost. Only external crawlers or proxies may incur fees.
- Unlimited Keyword & Domain Tracking: There are no artificial limits on the number of keywords or domains you monitor.
- Flexible Data Sources: Supports:
  - Google Search Console (real impressions & clicks)
  - Google Ads (search volume & suggestions)
  - Third-party SERP APIs
  - Custom proxy IP pools
- Lightweight & Practical: Includes PWA mobile access, CSV export, and a built-in API for automation.
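The built-in API mentioned above can be scripted against directly. A minimal sketch, assuming the instance runs at its default address; the /api/keywords path and Bearer-token header are assumptions to verify against your SerpBear version's documentation:

```shell
# Hypothetical sketch: fetch tracked keywords for one domain from a running
# SerpBear instance. Endpoint path and auth scheme are assumptions.
APP_URL="http://localhost:3000"   # NEXT_PUBLIC_APP_URL from .env
APIKEY="your_api_key"             # APIKEY from .env

curl -s "$APP_URL/api/keywords?domain=example.com" \
  -H "Authorization: Bearer $APIKEY"
```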
Core Features Overview
| Feature | Description |
|---|---|
| SERP Ranking Tracking | Automatically checks Google rankings and stores historical changes |
| Trend Analysis | Visualizes keyword ranking fluctuations over time |
| Email Notifications | Daily / weekly / monthly alerts with customizable thresholds |
| Keyword Research | Generates keyword ideas based on page content and search volume |
| Data Export | CSV export and internal API for external systems |
| Mobile Access | PWA support for on-the-go ranking checks |
Technical Stack and Deployment Options
Architecture
- Frontend / Backend: Next.js
- Database: SQLite
- Runtime: Node.js
This architecture keeps the system fast, portable, and easy to maintain.
Deployment Methods
- Docker (Recommended): One-click deployment using the official image.
- Free Hosting Platforms: Can run on services like Fly.io or mogenius for lightweight usage.
- Self-Hosted VPS / Cloud Server: Offers maximum flexibility and data ownership.
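For the Docker route, an invocation along these lines is a reasonable starting point. The image name and flags are a sketch based on common SerpBear setups, not a definitive recipe; confirm them against the project's Docker documentation. The environment variables mirror the .env settings covered later in this guide:

```shell
# Sketch of a Docker deployment (verify image name and volume path upstream).
docker run -d --name serpbear -p 3000:3000 \
  -e USER=admin -e PASSWORD=change_me \
  -e SECRET=your_secret_key -e APIKEY=your_api_key \
  -e SESSION_DURATION=24 \
  -e NEXT_PUBLIC_APP_URL=http://localhost:3000 \
  -v serpbear_data:/app/data \
  towfiqi/serpbear:latest
```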
Local Deployment Guide (Step-by-Step)
Prerequisites
- Git
- Node.js
- npm
1. Clone the Repository
git clone git@github.com:towfiqi/serpbear.git
cd serpbear
2. Configure Environment Variables
Copy .env.sample to .env:
cp .env.sample .env
Then set the following values:
USER=admin
PASSWORD=0123456789
SECRET=your_secret_key
APIKEY=your_api_key
SESSION_DURATION=24
NEXT_PUBLIC_APP_URL=http://localhost:3000
Important Notes
- On macOS, use USER_NAME instead of USER
- SECRET encrypts stored credentials
- APIKEY is required for accessing SerpBear’s API
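The sample values above (especially PASSWORD and SECRET) are placeholders and should be replaced before the instance is reachable by anyone else. One way to generate strong random values, assuming openssl is available:

```shell
# Generate random hex strings suitable for the SECRET and APIKEY fields.
SECRET=$(openssl rand -hex 32)   # 64 hex characters
APIKEY=$(openssl rand -hex 16)   # 32 hex characters

printf 'SECRET=%s\nAPIKEY=%s\n' "$SECRET" "$APIKEY"
```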
3. Create Data Directory
mkdir data
4. Install Dependencies and Build
npm install
npm run build
⚠️ Windows Note
If you encounter node-gyp errors in Git Bash, run the commands from CMD or PowerShell instead.
5. Start the Application
npm start
SerpBear listens on port 3000:
http://127.0.0.1:3000
Log in using the username and password defined in .env.
Configuring SERP Crawlers
SerpBear supports multiple SERP data sources, including:
hasdata, proxy, scrapingant, scrapingrobot,
searchapi, serpapi, serper, serply,
spaceserp, valueserp
You may also extend or customize crawler integrations.
Using Proxy Services Instead of SERP APIs
If you prefer full control or lower costs, SerpBear allows you to use your own proxy IPs.
Proxy Format
http://username:password@ip:port
Example:
http://admin:pass123@203.0.113.10:7291
Best Practices
- Proxy count should scale with keyword volume (roughly, 100 keywords → at least 30 proxy IPs)
- Rotating proxy pools perform best
- Datacenter IPs are sufficient (residential IPs are not required)
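Before loading a large proxy list, it can help to sanity-check each entry against the expected http://username:password@ip:port shape. A minimal sketch; the regex is an approximation for illustration, not SerpBear's own validator:

```shell
# Rough shape check for proxy entries (approximate pattern; SerpBear's
# internal validation may differ).
is_valid_proxy() {
  printf '%s' "$1" | grep -Eq '^http://[^:@]+:[^:@]+@[0-9]{1,3}(\.[0-9]{1,3}){3}:[0-9]+$'
}

is_valid_proxy "http://admin:pass123@203.0.113.10:7291" && echo "looks valid"
is_valid_proxy "http://203.0.113.25:7291" || echo "missing credentials"
```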
Adding Keywords and Monitoring Rankings
1. Click Add Keywords
2. Enter target keywords
3. Select:
   - Country / region
   - Device type (desktop or mobile)
4. Save and start tracking
SerpBear automatically schedules SERP checks and updates rankings.
Email Notifications for Ranking Changes
Configure SMTP settings in the Settings Panel to receive ranking alerts.
Supported services include:
- ElasticEmail
- SendPulse
- Other standard SMTP providers
Notifications can be sent daily, weekly, or monthly.
How SerpBear Works Internally
SERP Crawler Logic
SerpBear queries Google search results using:
- Third-party SERP APIs, or
- Your proxy IP pool
It then verifies:
- Whether your domain appears
- The exact ranking position
Cron Jobs Explained
SerpBear runs three background cron jobs:
- Crawler Job: Updates all keyword rankings (default: daily at midnight)
- Retry Job: Re-attempts failed crawls (runs hourly if enabled)
- Email Job: Sends ranking reports at configured intervals
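These jobs run inside the Node process itself, so on hosts that suspend idle apps (free tiers especially) the scheduled crawls may never fire. A common workaround is an external cron ping; the /api/cron path and Bearer header below are assumptions to check against your version's documentation:

```shell
# Hypothetical crontab entry (crontab -e): hit the instance nightly so the
# crawler job fires even if the host suspends the app. Path is an assumption.
0 0 * * * curl -s -H "Authorization: Bearer your_api_key" http://localhost:3000/api/cron
```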
When Should You Use SerpBear?
SerpBear is ideal if you:
- Want unlimited SERP tracking without SaaS fees
- Need local or private deployment
- Prefer API / proxy-based crawling
- Are comfortable with light technical setup
Compared to tools like Ahrefs or SEMrush, SerpBear trades convenience for cost control and ownership.
Conclusion
SerpBear is a powerful open-source SERP tracking tool that enables SEO professionals to build their own ranking monitoring system with full flexibility and minimal cost.
By combining API integrations, proxy support, cron-based automation, and local deployment, it offers a practical alternative to expensive SaaS platforms—especially for long-term keyword tracking and experimentation.