Introduction
In this tutorial, you'll learn how to create a simple web scraper using Python to monitor Amazon product prices. This is a practical skill that can help you track deals like the ones mentioned in the ZDNet article about readers' Amazon purchases. By the end of this tutorial, you'll have a working Python script that can fetch product information from Amazon and monitor price changes.
Prerequisites
To follow this tutorial, you'll need:
- A computer with Python installed (version 3.6 or higher)
- Basic understanding of Python programming concepts
- Internet connection
- Text editor or Python IDE (like VS Code or PyCharm)
Step-by-Step Instructions
Step 1: Set Up Your Python Environment
Install Required Libraries
First, you need to install the necessary Python libraries for web scraping. Open your terminal or command prompt and run:
pip install requests beautifulsoup4
Why this step? The requests library helps us send HTTP requests to Amazon's servers, while beautifulsoup4 allows us to parse and extract data from HTML pages.
Step 2: Create Your Python Script
Initialize Your Project
Create a new file called amazon_scraper.py in your preferred directory. Open it in your text editor and start by importing the required libraries:
import requests
from bs4 import BeautifulSoup
import time
Why this step? These imports give us access to the functionality we need to make HTTP requests, parse HTML, and add delays between requests.
Step 3: Create the Web Scraper Function
Write the Main Scraping Logic
Add the following function to your script:
def get_amazon_product_info(url):
    # Set headers to mimic a real browser
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
    }

    try:
        # Send GET request to the Amazon URL (with a timeout so it can't hang forever)
        response = requests.get(url, headers=headers, timeout=10)
        response.raise_for_status()  # Raise an exception for bad status codes

        # Parse the HTML content
        soup = BeautifulSoup(response.content, 'html.parser')

        # Extract product title
        title = soup.find('span', {'id': 'productTitle'})
        title_text = title.get_text().strip() if title else 'Title not found'

        # Extract price
        price = soup.find('span', {'class': 'a-price-whole'})
        price_text = price.get_text().strip() if price else 'Price not found'

        return {
            'title': title_text,
            'price': price_text,
            'url': url
        }
    except requests.RequestException as e:
        print(f"Error fetching data: {e}")
        return None
Why this step? This function handles the core scraping logic: it sends a request to Amazon, parses the HTML, and extracts the product title and price. The browser-like headers reduce the chance of the request being blocked outright, though Amazon actively detects scrapers, so requests can still fail. Note also that the a-price-whole element contains only the whole-number portion of the price, and Amazon's markup changes frequently, so you may need to adjust these selectors over time.
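The try/except above catches a failure once and gives up. Real scrapes often fail transiently (timeouts, throttling), so a retry wrapper with backoff is a common companion. The helper below is a sketch of my own, not part of the tutorial's script; the function name and delays are illustrative:

```python
import time

def with_retries(func, attempts=3, base_delay=2.0):
    """Call func() up to `attempts` times, doubling the delay after
    each failure. Returns func()'s result, or None if all attempts fail."""
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except Exception as e:
            print(f"Attempt {attempt} failed: {e}")
            if attempt < attempts:
                time.sleep(delay)
                delay *= 2  # exponential backoff between attempts
    return None

# Example with a function that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("temporary failure")
    return "ok"

result = with_retries(flaky, attempts=3, base_delay=0)
print(result)  # ok
```

You could wrap the scraper call as `with_retries(lambda: get_amazon_product_info(url))` so one dropped connection doesn't abort a monitoring run.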
Step 4: Test Your Scraper
Add Test Code to Your Script
Now add this test code at the bottom of your script:
# Example Amazon product URL (replace with actual product URL)
product_url = "https://www.amazon.com/dp/B08N5WRWNW"

# Get product information
product_info = get_amazon_product_info(product_url)

if product_info:
    print(f"Product: {product_info['title']}")
    print(f"Price: {product_info['price']}")
    print(f"URL: {product_info['url']}")
else:
    print("Failed to retrieve product information")
Why this step? This test code allows you to verify that your scraper works correctly with a real Amazon product URL before building more complex features.
Step 5: Create a Price Monitoring Loop
Build a Continuous Monitoring System
Replace the test code with this monitoring loop:
def monitor_price(url, target_price=None, delay=3600):  # Default delay of 1 hour
    print(f"Starting price monitoring for: {url}")

    while True:
        product_info = get_amazon_product_info(url)

        if product_info:
            print("\n--- Current Data ---")
            print(f"Product: {product_info['title']}")
            print(f"Price: {product_info['price']}")

            if target_price and product_info['price'] != 'Price not found':
                # Simple price comparison (you may need to clean the price string)
                current_price = product_info['price'].replace(',', '').replace('$', '')
                try:
                    if float(current_price) <= target_price:
                        print(f"\n🎉 TARGET PRICE ACHIEVED! Price is now ${current_price}")
                except ValueError:
                    print("Could not convert price to number")

        print(f"\nWaiting {delay // 60} minutes before next check...")
        time.sleep(delay)  # Wait before next check
Why this step? This function creates a continuous monitoring system that checks the product price at regular intervals, which is perfect for tracking deals like those mentioned in the ZDNet article.
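The comment in the loop above notes that the price string "may need to be cleaned" before comparison. A small parsing helper makes that step robust against commas, currency symbols, and trailing periods. This helper is my addition, not part of the original script, and it uses only the standard library:

```python
import re

def parse_price(price_text):
    """Extract a float from a price string like '$1,299.99' or '49.'.
    Returns None if no number can be found."""
    match = re.search(r"\d[\d,]*(?:\.\d+)?", price_text)
    if not match:
        return None
    return float(match.group(0).replace(",", ""))

print(parse_price("$1,299.99"))       # 1299.99
print(parse_price("49."))             # 49.0
print(parse_price("Price not found")) # None
```

With this in place, the comparison in monitor_price could become `parse_price(product_info['price'])` checked against `target_price`, with a `None` result replacing the ValueError handling.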
Step 6: Run Your Complete Scraper
Execute Your Monitoring Script
Now add this entry point at the bottom of your script:
# Example usage
if __name__ == "__main__":
    # Replace with actual Amazon product URL
    product_url = "https://www.amazon.com/dp/B08N5WRWNW"

    # Monitor price with a target price of $50 (adjust as needed)
    monitor_price(product_url, target_price=50, delay=1800)  # Check every 30 minutes
Why this step? This final setup runs your complete monitoring system, checking the price every 30 minutes and alerting you when the price drops to your target level.
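If you want a record of price changes rather than just console output, appending each check to a CSV file is a simple extension. The log_price helper and filename below are my additions (a sketch, not part of the tutorial's script), built entirely from the standard library:

```python
import csv
import os
import tempfile
from datetime import datetime, timezone

def log_price(path, title, price):
    """Append one timestamped row to a CSV price-history file,
    writing a header row first if the file is new."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "title", "price"])
        writer.writerow([datetime.now(timezone.utc).isoformat(), title, price])

# Example: log two checks, then read the history back
path = os.path.join(tempfile.mkdtemp(), "price_history.csv")
log_price(path, "Example Product", "49.99")
log_price(path, "Example Product", "44.99")
with open(path) as f:
    rows = list(csv.reader(f))
print(len(rows))  # 3
```

Calling log_price once per loop iteration inside monitor_price would give you a history you can later chart or analyze.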
Summary
In this tutorial, you've learned how to build a simple Amazon price monitoring tool using Python. You've created a web scraper that can fetch product information from Amazon, extract key details like title and price, and monitor price changes over time. This skill allows you to track deals and purchases like those highlighted in the ZDNet article about readers' Amazon purchases during the Spring Sale.
Remember that web scraping should be done responsibly. Always respect website terms of service and implement appropriate delays between requests to avoid overwhelming servers. This tool can be extended with features like email notifications, database storage, or more sophisticated price tracking algorithms.
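One concrete way to implement those "appropriate delays" is to add random jitter so your checks don't hit the server on a perfectly rigid schedule. The helper below is a sketch of my own, not from the tutorial; the 20% jitter figure is an arbitrary illustrative choice:

```python
import random

def jittered_delay(base_seconds, jitter_fraction=0.2, rng=random):
    """Return base_seconds plus or minus up to jitter_fraction of it,
    so repeated requests are not evenly spaced."""
    jitter = base_seconds * jitter_fraction
    return base_seconds + rng.uniform(-jitter, jitter)

# With a 30-minute base and 20% jitter, delays fall in [1440, 2160] seconds
rng = random.Random(42)  # seeded only to make this example reproducible
delays = [jittered_delay(1800, rng=rng) for _ in range(5)]
print(all(1440 <= d <= 2160 for d in delays))  # True
```

In monitor_price you would swap `time.sleep(delay)` for `time.sleep(jittered_delay(delay))`.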