last modified July 27, 2020
Python BeautifulSoup tutorial is an introductory tutorial to the BeautifulSoup Python library. The examples find tags, traverse the document tree, modify the document, and scrape web pages.
BeautifulSoup
BeautifulSoup is a Python library for parsing HTML and XML documents. It is often used for web scraping. BeautifulSoup transforms a complex HTML document into a complex tree of Python objects, such as tags, navigable strings, or comments.
Installing BeautifulSoup
We use the pip3 command to install the necessary modules. We need to install the lxml module, which is used by BeautifulSoup.
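A typical installation with pip3 (a sudo prefix may be needed depending on your setup):

```
$ pip3 install lxml
$ pip3 install bs4
```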
BeautifulSoup is installed with the second command.
The HTML file
In the examples, we will use the following HTML file:
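Below is a minimal index.html consistent with all the examples that follow (the exact original file is assumed):

```html
<!DOCTYPE html>
<html>
<head>
<title>Header</title>
<meta charset="utf-8">
</head>
<body>

<h2>Operating systems</h2>

<ul id="mylist" style="width:150px">
<li>Solaris</li>
<li>FreeBSD</li>
<li>Debian</li>
<li>NetBSD</li>
<li>Windows</li>
</ul>

<p>
FreeBSD is an advanced computer operating system used to
power modern servers, desktops, and embedded platforms.
</p>

<p>
Debian is a Unix-like computer operating system that is
composed entirely of free software.
</p>

</body>
</html>
```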
Python BeautifulSoup simple example
In the first example, we use the BeautifulSoup module to get three tags. The code example prints the HTML code of three tags.
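A minimal sketch of such a script:

```python
#!/usr/bin/python

from bs4 import BeautifulSoup

with open('index.html', 'r') as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'lxml')

# print the HTML code of three tags
print(soup.h2)
print(soup.head)
print(soup.li)
```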
We import the BeautifulSoup class from the bs4 module. BeautifulSoup is the main class for doing work.
We open the index.html file and read its contents with the read method.
A BeautifulSoup object is created; the HTML data is passed to the constructor. The second option specifies the parser.
Here we print the HTML code of two tags: h2 and head.
There are multiple li elements; the line prints the first one.
This is the output (with the sample index.html above):
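```
<h2>Operating systems</h2>
<head>
<title>Header</title>
<meta charset="utf-8"/>
</head>
<li>Solaris</li>
```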
BeautifulSoup tags, name, text
The name attribute of a tag gives its name and the text attribute its text content.
The code example prints HTML code, name, and text of the h2 tag.
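A sketch, reusing the sample index.html:

```python
#!/usr/bin/python

from bs4 import BeautifulSoup

with open('index.html', 'r') as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'lxml')

print(f'HTML: {soup.h2}, name: {soup.h2.name}, text: {soup.h2.text}')
```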
This is the output:
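```
HTML: <h2>Operating systems</h2>, name: h2, text: Operating systems
```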
BeautifulSoup traverse tags
With the recursiveChildGenerator method we traverse the HTML document.
The example goes through the document tree and prints the names of all HTML tags.
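A sketch of the traversal:

```python
#!/usr/bin/python

from bs4 import BeautifulSoup

with open('index.html', 'r') as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'lxml')

# text nodes have no name, so only tags are printed
for child in soup.recursiveChildGenerator():
    if child.name:
        print(child.name)
```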
In the HTML document we have these tags:
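```
html
head
title
meta
body
h2
ul
li
li
li
li
li
p
p
```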
BeautifulSoup element children
With the children attribute, we can get the children of a tag.
The example retrieves children of the html tag, places them into a Python list and prints them to the console. Since the children attribute also returns spaces between the tags, we add a condition to include only the tag names.
The html tag has two children: head and body.
BeautifulSoup element descendants
With the descendants attribute we get all descendants (children of all levels) of a tag.
The example retrieves all descendants of the body tag.
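The previous sketch needs only two small changes:

```python
#!/usr/bin/python

from bs4 import BeautifulSoup

with open('index.html', 'r') as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'lxml')

root = soup.body

# descendants yields children of all levels; filter out text nodes
root_childs = [e.name for e in root.descendants if e.name is not None]
print(root_childs)
```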
These are all the descendants of the body tag:
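```
['h2', 'ul', 'li', 'li', 'li', 'li', 'li', 'p', 'p']
```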
BeautifulSoup web scraping
Requests is a simple Python HTTP library. It provides methods for accessing Web resources via HTTP.
The example retrieves the title of a simple web page. It also prints its parent.
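A sketch, using webcode.me as the simple test page (any mostly static page would do):

```python
#!/usr/bin/python

import requests
from bs4 import BeautifulSoup

resp = requests.get('http://webcode.me')
soup = BeautifulSoup(resp.text, 'lxml')

print(soup.title)
print(soup.title.text)
print(soup.title.parent)
```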
We get the HTML data of the page.
We retrieve the HTML code of the title, its text, and the HTML code of its parent.
The output shows the HTML code of the title, its text, and its head parent.
BeautifulSoup prettify code
With the prettify method, we can make the HTML code look better.
We prettify the HTML code of a simple web page.
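A sketch:

```python
#!/usr/bin/python

import requests
from bs4 import BeautifulSoup

resp = requests.get('http://webcode.me')
soup = BeautifulSoup(resp.text, 'lxml')

print(soup.prettify())
```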
The output is the neatly indented HTML code of the web page.
BeautifulSoup scraping with built-in web server
We can also serve HTML pages with a simple built-in HTTP server.
We create a public directory and copy the index.html there.
Then we start the Python HTTP server.
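One way to do that (the --directory option requires Python 3.7+):

```
$ python3 -m http.server --directory public
```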
Now we get the document from the locally running server.
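A sketch of the client script (the built-in server listens on port 8000 by default):

```python
#!/usr/bin/python

import requests
from bs4 import BeautifulSoup

resp = requests.get('http://localhost:8000/index.html')
soup = BeautifulSoup(resp.text, 'lxml')

print(soup.title.text)
```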
BeautifulSoup find elements by Id
With the find method we can find elements by various means, including element id.
The code example finds the ul tag that has the mylist id. The commented line is an alternative way of doing the same task.
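A sketch; the commented line shows the alternative with the attrs dictionary:

```python
#!/usr/bin/python

from bs4 import BeautifulSoup

with open('index.html', 'r') as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'lxml')

# print(soup.find('ul', attrs={'id': 'mylist'}))
print(soup.find('ul', id='mylist'))
```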
BeautifulSoup find all tags
With the find_all method we can find all elements that meet some criteria.
The code example finds and prints all li tags.
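A sketch:

```python
#!/usr/bin/python

from bs4 import BeautifulSoup

with open('index.html', 'r') as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'lxml')

for tag in soup.find_all('li'):
    print(f'{tag} {tag.text}')
```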
This is the output:
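```
<li>Solaris</li> Solaris
<li>FreeBSD</li> FreeBSD
<li>Debian</li> Debian
<li>NetBSD</li> NetBSD
<li>Windows</li> Windows
```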
The find_all method can take a list of elements to search for.
The example finds all h2 and p elements and prints their text.
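A sketch; the join/split combination normalizes the whitespace inside the paragraphs:

```python
#!/usr/bin/python

from bs4 import BeautifulSoup

with open('index.html', 'r') as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'lxml')

tags = soup.find_all(['h2', 'p'])

for tag in tags:
    print(' '.join(tag.text.split()))
```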
The find_all method can also take a function which determines what elements should be returned.
The example prints empty elements.
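A sketch, using the is_empty_element property of a tag:

```python
#!/usr/bin/python

from bs4 import BeautifulSoup

def is_empty(tag):
    # true for elements such as <meta/>, which cannot have content
    return tag.is_empty_element

with open('index.html', 'r') as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'lxml')

print(soup.find_all(is_empty))
```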
The only empty element in the document is meta.
It is also possible to find elements by using regular expressions.
The example prints the content of elements that contain the 'BSD' string.
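A sketch:

```python
#!/usr/bin/python

import re
from bs4 import BeautifulSoup

with open('index.html', 'r') as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'lxml')

# find all text nodes that contain the 'BSD' string
strings = soup.find_all(string=re.compile('BSD'))

for txt in strings:
    print(' '.join(txt.split()))
```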
This is the output:
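```
FreeBSD
NetBSD
FreeBSD is an advanced computer operating system used to power modern servers, desktops, and embedded platforms.
```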
BeautifulSoup CSS selectors
With the select and select_one methods, we can use some CSS selectors to find elements.
This example uses a CSS selector to print the HTML code of the third li element.
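A sketch:

```python
#!/usr/bin/python

from bs4 import BeautifulSoup

with open('index.html', 'r') as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'lxml')

print(soup.select_one('li:nth-of-type(3)'))
```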
This is the third li element:
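```
<li>Debian</li>
```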
The # character is used in CSS to select tags by their id attributes.
The example prints the element that has the mylist id.
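A sketch:

```python
#!/usr/bin/python

from bs4 import BeautifulSoup

with open('index.html', 'r') as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'lxml')

print(soup.select_one('#mylist'))
```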
BeautifulSoup append element
The append method appends a new tag to the HTML document.
The example appends a new li tag.
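A sketch:

```python
#!/usr/bin/python

from bs4 import BeautifulSoup

with open('index.html', 'r') as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'lxml')

newtag = soup.new_tag('li')
newtag.string = 'OpenBSD'

ultag = soup.ul
ultag.append(newtag)

print(ultag.prettify())
```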
First, we create a new tag with the new_tag method.
We get the reference to the ul tag.
We append the newly created tag to the ul tag.
We print the ul tag in a neat format.
BeautifulSoup insert element
The insert method inserts a tag at the specified location.
The example inserts a li tag at the third position into the ul tag.
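A sketch:

```python
#!/usr/bin/python

from bs4 import BeautifulSoup

with open('index.html', 'r') as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'lxml')

newtag = soup.new_tag('li')
newtag.string = 'OpenBSD'

ultag = soup.ul

# insert at the third position
ultag.insert(2, newtag)

print(ultag.prettify())
```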
BeautifulSoup replace text
The replace_with method replaces the text of an element.
The example finds a specific element with the find method and replaces its content with the replace_with method.
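A sketch, replacing the text of the last li element:

```python
#!/usr/bin/python

from bs4 import BeautifulSoup

with open('index.html', 'r') as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'lxml')

# find the 'Windows' text node and replace its content
tag = soup.find(string='Windows')
tag.replace_with('OpenBSD')

print(soup.ul.prettify())
```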
BeautifulSoup remove element
The decompose method removes a tag from the tree and destroys it.
The example removes the second p element.
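A sketch:

```python
#!/usr/bin/python

from bs4 import BeautifulSoup

with open('index.html', 'r') as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'lxml')

ptag2 = soup.select_one('p:nth-of-type(2)')
ptag2.decompose()

print(soup.body.prettify())
```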
In this tutorial, we have worked with the Python BeautifulSoup library.
The Internet evolves fast, and modern websites quite often use dynamic content loading mechanisms to provide the best user experience. On the other hand, it becomes harder to extract data from such web pages, as it requires executing the internal JavaScript in the page context while scraping. Let's review several conventional techniques that allow data extraction from dynamic websites using Python.
What is a dynamic website?
A dynamic website is a type of website that can update or load content after the initial HTML load. The browser receives basic HTML with JS and then loads content using the received JavaScript code. Such an approach allows increasing page load speed and prevents reloading the same layout each time you'd like to open a new page.
Usually, dynamic websites use AJAX to load content dynamically, or even the whole site is based on a Single-Page Application (SPA) technology.
In contrast to dynamic websites, static websites contain all the requested content at page load.
A great example of a static website is example.com:
The whole content of this website is loaded as plain HTML during the initial page load.
To demonstrate the basic idea of a dynamic website, we can create a web page that contains dynamically rendered text. It will not include any request to fetch information, just a render of different HTML after the page load:
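A minimal sketch of such a page (the exact markup of the original sample is assumed):

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <title>Dynamic Web Page Example</title>
    <script>
        // replace the div's text once the page has loaded
        window.addEventListener('DOMContentLoaded', function () {
            document.getElementById('test').innerHTML = 'I ❤️ ScrapingAnt';
        });
    </script>
</head>
<body>
    <div id="test">Web Scraping is hard</div>
</body>
</html>
```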
All we have here is an HTML file with a single <div> in the body that contains the text Web Scraping is hard, but after the page load, that text is replaced by the text generated by the JavaScript in the script tag above.
To prove this, let's open this page in the browser and observe the dynamically replaced text.
Alright, so the browser displays the text, and HTML tags wrap it.
Can't we use BeautifulSoup or LXML to parse it? Let's find out.
Extract data from a dynamic web page
BeautifulSoup is one of the most popular Python libraries across the Internet for HTML parsing. Almost 80% of web scraping Python tutorials use this library to extract required content from the HTML.
Let's use BeautifulSoup for extracting the text inside <div> from our sample above.
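A sketch of the snippet (the file name test.html is assumed):

```python
import os

from bs4 import BeautifulSoup

# open the test file from the local directory
base_dir = os.path.dirname(os.path.abspath(__file__))
with open(os.path.join(base_dir, 'test.html'), 'r') as f:
    soup = BeautifulSoup(f.read(), 'html.parser')

# find the tag with id "test" and extract its text
print(soup.find(id='test').text)
```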
This code snippet uses the os library to open our test HTML file (test.html) from the local directory and creates an instance of the BeautifulSoup library stored in the soup variable. Using the soup variable, we find the tag with the id test and extract the text from it.
In the screenshot from the first article part, we've seen that the content of the test page is I ❤️ ScrapingAnt, but the code snippet output is the following:
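```
Web Scraping is hard
```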
And the result is different from our expectation (unless you've already figured out what is going on). Everything is correct from the BeautifulSoup perspective - it parsed the data from the provided HTML file, but we want to get the same result as the browser renders. The reason is the dynamic JavaScript that has not been executed during HTML parsing.
We need the HTML to be run in a browser to see the correct values and then be able to capture those values programmatically.
Below you can find four different ways to execute dynamic website's Javascript and provide valid data for an HTML parser: Selenium, Pyppeteer, Playwright, and Web Scraping API.
Selenium: web scraping with a webdriver
Selenium is one of the most popular web browser automation tools for Python. It allows communication with different web browsers by using a special connector - a webdriver.
To use Selenium with Chrome/Chromium, we'll need to download the webdriver from the repository and place it into the project folder. Don't forget to install Selenium itself by executing:
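```
$ pip3 install selenium
```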
Selenium instantiating and scraping flow is the following:
- define and set up the Chrome path variable
- define and set up the Chrome webdriver path variable
- define browser launch arguments (to use headless mode, proxy, etc.)
- instantiate a webdriver with the options defined above
- load a webpage via instantiated webdriver
From the code perspective, it looks as follows:
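A sketch of that flow (paths are placeholders; a Selenium 3 style setup is assumed):

```python
import os

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# path to the downloaded webdriver binary placed in the project folder
CHROMEDRIVER_PATH = './chromedriver'

# browser launch arguments: headless mode, proxy, etc.
options = Options()
options.add_argument('--headless')

# instantiate a webdriver with the options defined above
driver = webdriver.Chrome(executable_path=CHROMEDRIVER_PATH, options=options)

# load the test page so the browser executes its JavaScript
driver.get('file://' + os.path.abspath('test.html'))

# pass the rendered HTML to BeautifulSoup
soup = BeautifulSoup(driver.page_source, 'html.parser')
print(soup.find(id='test').text)

driver.quit()
```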
And finally, we'll receive the required result:
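```
I ❤️ ScrapingAnt
```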
Selenium usage for dynamic website scraping with Python is not complicated and allows you to choose a specific browser with its version, but it consists of several moving components that should be maintained. The code itself contains some boilerplate parts like the setup of the browser, webdriver, etc.
I like to use Selenium for my web scraping projects, but you can find easier ways to extract data from dynamic web pages below.
Pyppeteer: Python headless Chrome
Pyppeteer is an unofficial Python port of Puppeteer, the JavaScript (headless) Chrome/Chromium browser automation library. It is capable of doing mostly the same things Puppeteer can, but using Python instead of NodeJS.
Puppeteer is a high-level API to control headless Chrome, so it allows you to automate actions you're doing manually with the browser: copy page's text, download images, save page as HTML, PDF, etc.
To install Pyppeteer you can execute the following command:
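```
$ pip3 install pyppeteer
```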
The usage of Pyppeteer for our needs is much simpler than Selenium:
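A sketch (on the first run, Pyppeteer downloads its own bundled Chromium):

```python
import asyncio
import os

from bs4 import BeautifulSoup
from pyppeteer import launch

async def main():
    # launch a headless browser and open a new page
    browser = await launch()
    page = await browser.newPage()

    # load the test page so its JavaScript gets executed
    await page.goto('file://' + os.path.abspath('test.html'))

    # extract the final rendered HTML for BeautifulSoup processing
    content = await page.content()
    soup = BeautifulSoup(content, 'html.parser')
    print(soup.find(id='test').text)

    await browser.close()

asyncio.run(main())
```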
I've tried to comment on every atomic part of the code for a better understanding. However, generally, we've just opened a browser page, loaded a local HTML file into it, and extracted the final rendered HTML for further BeautifulSoup processing.
As we can expect, the result is the following:
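```
I ❤️ ScrapingAnt
```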
We did it again, without worrying about finding, downloading, and connecting a webdriver to a browser. Though, Pyppeteer looks abandoned and not properly maintained. This situation may change in the near future, but I'd suggest looking at a more powerful library.
Playwright: Chromium, Firefox and Webkit browser automation
Playwright can be considered an extended Puppeteer, as it allows using more browser types (Chromium, Firefox, and Webkit) to automate modern web app testing and scraping. You can use the Playwright API in JavaScript & TypeScript, Python, C#, and Java. And it's excellent, as the original Playwright maintainers support Python.
The API is almost the same as Pyppeteer's, but it has both sync and async versions.
Installation is as simple as always (the second command downloads the browser binaries):
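```
$ pip3 install playwright
$ playwright install chromium
```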
Let's rewrite the previous example using Playwright.
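A sketch using the sync API:

```python
import os

from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # launch a headless Chromium browser and open a new page
    browser = p.chromium.launch()
    page = browser.new_page()

    # load the test page so its JavaScript gets executed
    page.goto('file://' + os.path.abspath('test.html'))

    # extract the final rendered HTML for BeautifulSoup processing
    soup = BeautifulSoup(page.content(), 'html.parser')
    print(soup.find(id='test').text)

    browser.close()
```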
As a good tradition, we can observe our beloved output:
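```
I ❤️ ScrapingAnt
```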
We've gone through several different data extraction methods with Python, but is there any more straightforward way to implement this job? How can we scale our solution and scrape data with several threads?
Meet the web scraping API!
Web Scraping API
ScrapingAnt web scraping API provides the ability to scrape dynamic websites with only a single API call. It already handles headless Chrome and rotating proxies, so the response provided will already consist of JavaScript-rendered content. ScrapingAnt's proxy pool prevents blocking and provides a constant and high data extraction success rate.
Usage of a web scraping API is the simplest option and requires only basic programming skills.
You do not need to maintain the browser, library, proxies, webdrivers, or any other aspect of the web scraper, and you can focus on the most exciting part of the work - data analysis.
As the web scraping API runs on the cloud servers, we have to serve our file somewhere to test it. I've created a repository with a single file: https://github.com/kami4ka/dynamic-website-example/blob/main/index.html
To check it out as HTML, we can use another great tool: HTMLPreview
The final test URL to scrape dynamic web data looks as follows: http://htmlpreview.github.io/?https://github.com/kami4ka/dynamic-website-example/blob/main/index.html
The scraping code itself is the simplest one across all four described libraries. We'll use the ScrapingAnt client library to access the web scraping API.
Let's install it first:
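```
$ pip3 install scrapingant-client
```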
And use the installed library:
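A sketch (the API token is a placeholder you fill in yourself):

```python
from bs4 import BeautifulSoup
from scrapingant_client import ScrapingAntClient

# create the client with your API token from the ScrapingAnt user panel
client = ScrapingAntClient(token='<YOUR-SCRAPINGANT-API-TOKEN>')

URL = 'http://htmlpreview.github.io/?https://github.com/kami4ka/dynamic-website-example/blob/main/index.html'

# the page is rendered by a headless browser in the cloud
result = client.general_request(URL)

soup = BeautifulSoup(result.content, 'html.parser')
print(soup.find(id='test').text)
```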
To get your API token, please visit the Login page to authorize in the ScrapingAnt User panel. It's free.
And the result is still the required one:
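```
I ❤️ ScrapingAnt
```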
All the headless browser magic happens in the cloud, so you only need to make an API call to get the result.
Check out the documentation for more info about ScrapingAnt API.
Summary
Today we've checked four free tools that allow scraping dynamic websites with Python. All these libraries use a headless browser (or an API with a headless browser) under the hood to correctly render the internal JavaScript inside an HTML page. Check out each tool's documentation to find out more and choose the handiest one.
Happy web scraping, and don't forget to use proxies to avoid blocking 🚀