How to Scrape Websites in Python Using BeautifulSoup

A web scraper extracts information from websites automatically, which is useful whenever you need data from the web for your own projects. Python is a popular choice for web scraping because it has mature, easy-to-use libraries for the job. In this post, we will focus on using the popular BeautifulSoup library to scrape websites in Python.

Overview of the Scrape Process

Here is a quick overview of the web scraping process we will walk through in this tutorial:

  1. Find the URL of the page you want to scrape
  2. Inspect the page to identify the data you want to extract
  3. Write the code to:
    • Download the page content using requests
    • Parse the HTML content using BeautifulSoup
    • Find the relevant data using BeautifulSoup
    • Store/export the scraped data

Importing the Libraries

To scrape websites in Python, the two main libraries we need are:

  • requests – used to download the web page content
  • BeautifulSoup – used to parse and extract information from the HTML
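Both libraries are available on PyPI. If they are not already installed, you can add them with pip (note that beautifulsoup4 is the package name for BeautifulSoup):

pip install requests beautifulsoup4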

So we need to import these libraries first:

import requests
from bs4 import BeautifulSoup
import json  # used later to export the scraped data

Downloading the Web Page Content

The first step is to download the HTML content of the web page we want to scrape. We can use the requests library to download the page content and store it in a response object.

For example:

access_token = 'L5vCo54n13B7p1J8fWZYNh'  # get your access token from app.quickscraper.co
url = f"https://api.quickscraper.co/parse?access_token={access_token}&url=https://www.amazon.com/deals"
response = requests.get(url)

This downloads the content from the given URL and stores it in the response variable.
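Before parsing, it is worth confirming that the download actually succeeded. requests provides raise_for_status() for exactly this; a minimal check looks like:

# Raise an exception if the request returned an HTTP error status (4xx/5xx)
response.raise_for_status()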

Parsing the HTML

Next, we need to parse the HTML content to extract useful information from the page. BeautifulSoup allows us to parse HTML easily.

We can create a BeautifulSoup object from the response text like so:

soup = BeautifulSoup(response.text, 'html.parser')

This will parse the HTML content using the built-in HTML parser.
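As a quick illustration of what the parsed object gives you, here is a minimal, self-contained example using a hardcoded HTML string (the markup is made up for demonstration):

from bs4 import BeautifulSoup

html = "<html><head><title>Example</title></head><body><h2>Hello</h2></body></html>"
soup = BeautifulSoup(html, 'html.parser')

print(soup.title.text)  # Example
print(soup.h2.text)     # Hello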

Extracting Data

With the BeautifulSoup object ready, we can now find and extract useful bits of information from the HTML.

BeautifulSoup provides methods like:

  • find() – return the first element matching a given tag name and/or attributes
  • find_all() – return a list of all elements that match

For example, to find all the deal card <div> elements on the Amazon deals page (whose class names contain the pattern DealGridItem-module__), we can use:

# Find all divs containing the desired class pattern
deal_items = soup.find_all('div', class_=lambda x: x and 'DealGridItem-module__' in x)

And then we can loop through the matched elements and print their text:

for item in deal_items:
    print(item.text)

There are many more ways to search for and extract data – like using CSS selectors, attributes, and more. Check BeautifulSoup’s documentation for additional functionality.
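As one example of the CSS selector approach, select() accepts any CSS selector string. The selector below is a hypothetical illustration, not tied to Amazon's actual markup:

# select() takes a CSS selector; this matches <a> tags with class "product-link"
links = soup.select('a.product-link')
for link in links:
    print(link.get('href'))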

Storing Scraped Data

Once you have extracted the information you need, the final step is to store or export that data for further processing and analysis.

Common ways to save scraped data include:

  • Saving to a file (JSON, CSV, TXT)
  • Storing in a database (SQL, NoSQL)
  • Exporting to an Excel sheet

For example, here is how we can quickly save the scraped data into a JSON file:

data = []
# Loop through the deal divs and pull out each title's text
for item in deal_items:
    title_elements = item.select('div[class*=DealContent-module__truncate_]')
    for title_ele in title_elements:
        title = title_ele.text.strip()
        data.append({
            'title': title
        })

print(data)

# Write the scraped data to a JSON file
with open('amazon_product.json', 'w') as f:
    json.dump(data, f)

The scraped data can then be accessed for future use.
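If you prefer a spreadsheet-friendly format, the same data can be written to CSV using Python's built-in csv module. A minimal sketch, assuming the data list of dicts built above:

import csv

# Write the list of {'title': ...} dicts to a CSV file with a header row
with open('amazon_product.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['title'])
    writer.writeheader()
    writer.writerows(data)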

Summary

That covers the basics of how to effectively scrape websites using Python and BeautifulSoup. The key steps are:

  1. Downloading page content with requests
  2. Parsing HTML with BeautifulSoup
  3. Finding relevant data
  4. Extracting and storing scraped data

Web scraping can save huge amounts of time versus manually copying data. Follow the process outlined above, and you’ll be able to scrape data from just about any site.

Let us know in the comments if you have any other questions!
