
Implementing web scraping using lxml in Python

Web scraping refers to programmatically extracting specific pieces of information from one or more websites. Every website has a recognizable structure/pattern of HTML elements, which makes it possible to target exactly the data you need.

Steps to perform web scraping:
1. Send a request to the URL and get the response.
2. Convert the response object to a byte string.
3. Pass the byte string to the fromstring() function in the lxml.html module.
4. Navigate to a particular element using its XPath.
5. Use the content according to your need.
 

To accomplish this task, a couple of third-party packages need to be installed. Use pip to install them (pre-built wheel (.whl) files are handled automatically):

pip install requests
pip install lxml
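
Once both packages are installed, a quick import check confirms the setup (a minimal sanity check; the version numbers printed will of course differ on your machine):

Python3

# verify that requests and lxml are importable and print their versions
import requests
from lxml import etree

print(requests.__version__)   # e.g. 2.31.0
print(etree.LXML_VERSION)     # e.g. (4, 9, 3, 0)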

The XPath of the element from which data will be scraped is also needed. An easy way to get it is:

1. Right-click the element on the page that has to be scraped and go to “Inspect”.

2. In the source-code panel that opens, right-click the highlighted element.

3. Choose Copy → Copy XPath.

Here is a simple implementation using the neveropen homepage:

Python3
# Python3 code implementing web scraping using lxml
 
import requests
 
# import only the html module
from lxml import html
 
# URL to scrape data from
# (placeholder: the original snippet omitted the literal URL;
# replace with the neveropen homepage address before running)
url = 'https://www.example.com/'
 
# XPath to the particular element
path = '//*[@id="post-183376"]/div/p'
 
# get response object
response = requests.get(url)
 
# get byte string
byte_data = response.content
 
# get filtered source code
source_code = html.fromstring(byte_data)
 
# jump to preferred html element
tree = source_code.xpath(path)
 
# print texts in first element in list
print(tree[0].text_content())


The above code scrapes the paragraph of the first article on the neveropen homepage.
Here is a sample output. The output may not be the same for everyone, as the featured article will have changed since.

Output:

"Consider the following C/C++ programs and try to guess the output?
Output of all of the above programs is unpredictable (or undefined).
The compilers (implementing… Read More »"
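
In practice it helps to check that the request actually succeeded and to remember that xpath() returns a list of every matching element, not just one. Below is a slightly more defensive sketch of the same scrape; the placeholder URL and the post ID in the XPath are assumptions carried over from the snippet above and should be replaced with real values:

Python3

import requests
from lxml import html

url = 'https://www.example.com/'        # placeholder homepage URL
path = '//*[@id="post-183376"]/div/p'   # XPath copied from the browser

# fail fast on network problems and HTTP errors (4xx / 5xx)
response = requests.get(url, timeout=10)
response.raise_for_status()

# parse the byte string into an HTML element tree
source_code = html.fromstring(response.content)

# xpath() returns a list; print the text of every match
for element in source_code.xpath(path):
    print(element.text_content().strip())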

Here’s another example, scraping data from the Wikipedia article on Web scraping:

Python3
import requests
from lxml import html
 
# URL to scrape data from
link = 'https://en.wikipedia.org/wiki/Web_scraping'
 
# XPath to the particular element
path = '//*[@id="mw-content-text"]/div/p[1]'
 
response = requests.get(link)
byte_string = response.content
 
# get filtered source code
source_code = html.fromstring(byte_string)
 
# jump to preferred html element
tree = source_code.xpath(path)
 
# print texts in first element in list
print(tree[0].text_content())


Output:

Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites.[1] Web scraping software may access the World Wide Web directly using the Hypertext Transfer Protocol, or through a web browser. While web scraping can be done manually by a software user, the term typically refers to automated processes implemented using a bot or web crawler. It is a form of copying, in which specific data is gathered and copied from the web, typically into a central local database or spreadsheet, for later retrieval or analysis.
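
Since xpath() returns a list, the same approach extends naturally to several elements at once. For example, dropping the [1] index from the path above selects every top-level paragraph of the Wikipedia article; the sketch below prints only the first three to keep the output short:

Python3

import requests
from lxml import html

link = 'https://en.wikipedia.org/wiki/Web_scraping'

# same path as before, but without the [1] index: select all paragraphs
path = '//*[@id="mw-content-text"]/div/p'

response = requests.get(link)
source_code = html.fromstring(response.content)

# print the first three paragraphs, stripped of surrounding whitespace
for paragraph in source_code.xpath(path)[:3]:
    print(paragraph.text_content().strip())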
 

 
