Tracking Keyword Ranking Positions Using Python & Google Colab


Even paid tools like Semrush don’t offer real-time keyword position tracking.

In Semrush, you might see “Updated 4 hrs ago” – but that’s not real-time data.

By using the script I’m sharing here, you can track the ranking positions of multiple keywords in any location you wish – for free.

In our case, we have two websites with similar products on both, which means similar keywords.

We couldn’t afford the time to track them manually.

We had around 25 keywords to track for the US location.

With ChatGPT, I automated this tedious task and let a script fetch the data in real time.

All I need to do is run the Python script, and the code does the rest.

Use Case

Tracking several keyword ranking positions for the US location.

What Do You Need?

Google Colab – that’s it!

Getting Started!

It’s not a big deal.

Just open Google Colab and name your notebook.

!pip install beautifulsoup4
!pip install requests

Copy and paste the code above, then run it. If you don’t know how, click the circular run button to the left of the cell.

After installing the required packages, we need to run the script.

Click “+ Code” and paste the code given below.

import time
import requests
from bs4 import BeautifulSoup

def get_google_ranking_position(keyword, site_url, location='us', max_pages=5):
    # A desktop User-Agent so Google serves the standard results page
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'}

    for current_page in range(1, max_pages + 1):
        # The 'gl' parameter sets the country; it expects a two-letter
        # country code such as 'us', not a full country name
        params = {
            'q': keyword,
            'gl': location,
            'start': (current_page - 1) * 10,
        }
        response = requests.get('https://www.google.com/search', params=params, headers=headers)
        soup = BeautifulSoup(response.text, 'html.parser')

        # 'tF2Cxc' is the class Google currently uses for organic results;
        # if the script stops finding anything, check whether it has changed
        search_results = soup.find_all('div', class_='tF2Cxc')

        # Check each result on the current page for our domain
        for index, result in enumerate(search_results, start=1):
            link = result.find('a')
            if link and site_url in link['href']:
                return (current_page - 1) * 10 + index

        # Small pause between pages to avoid being rate-limited
        time.sleep(2)

    # The site was not found within the specified number of pages
    return None

# Keywords to track
keywords_to_track = ['keyword 1', 'keyword 2', 'keyword 3', 'keyword 4',
                     'keyword 5', 'keyword 6', 'keyword 7', 'keyword 8']

# Your website URL
site_url = 'https://www.example.com/'

# Specify the location as a two-letter country code (optional, default is 'us')
location = 'us'

# Track rankings for each keyword
for keyword in keywords_to_track:
    ranking_position = get_google_ranking_position(keyword, site_url, location, max_pages=5)

    # Print the result
    if ranking_position is not None:
        print(f'"{keyword}": position {ranking_position}')
    else:
        print(f'Site not found in the search results for "{keyword}" within the specified pages.')

Note: Replace ‘https://www.example.com/’ with your actual domain, and don’t forget to add your own keywords.
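If you want to see what the BeautifulSoup step is doing without hitting Google, you can run the same extraction over a tiny hand-made snippet. The HTML below is not a real Google page – it just imitates the ‘tF2Cxc’ result structure the script looks for:

```python
from bs4 import BeautifulSoup

# Hand-made HTML imitating the structure of Google's organic results
html = '''
<div class="tF2Cxc"><a href="https://www.other.com/page">Another site</a></div>
<div class="tF2Cxc"><a href="https://www.example.com/product">Our site</a></div>
'''

soup = BeautifulSoup(html, 'html.parser')

# Walk the results in order and note where our domain first appears
position = None
for index, result in enumerate(soup.find_all('div', class_='tF2Cxc'), start=1):
    url = result.find('a')['href']
    if 'https://www.example.com/' in url:
        position = index
        break

print(position)  # → 2: our site is the second result in the snippet
```

The real script does the same thing, but adds (page − 1) × 10 to the index so the position is absolute across result pages.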

If you run the code, the result will be displayed as an output.
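If you also want a running history of positions instead of just printing them, one option is to append each run to a CSV file in Colab. Here’s a minimal sketch – the `append_results_to_csv` helper, the file name, and the sample positions are all hypothetical, not part of the script above:

```python
import csv
from datetime import datetime, timezone

def append_results_to_csv(results, path='keyword_positions.csv'):
    """Append one timestamped row per keyword to a CSV history file.

    `results` maps each keyword to its ranking position, or None
    if the site was not found within the pages checked.
    """
    timestamp = datetime.now(timezone.utc).isoformat(timespec='seconds')
    with open(path, 'a', newline='') as f:
        writer = csv.writer(f)
        for keyword, position in results.items():
            writer.writerow([timestamp, keyword,
                             position if position is not None else 'not found'])

# Example with made-up positions
append_results_to_csv({'keyword 1': 3, 'keyword 2': None})
```

Running this after each tracking session gives you a dated log you can later open in Sheets or pandas to watch positions move over time.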

That’s how you track keyword positions in real time.