In this article, we are going to see how to scrape Google reviews and ratings using Python.
Modules needed:
- Beautiful Soup: A library for parsing HTML and XML documents. The scraping mechanism here is DOM parsing: the page is loaded as a tree, and the data is extracted from its elements.
# Installing with pip
pip install beautifulsoup4

# Installing with conda
conda install -c anaconda beautifulsoup4
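To illustrate the parsing model Beautiful Soup offers, here is a minimal sketch. The HTML snippet and its class names (`author`, `rating`, `text`) are made up for the example, not taken from any real page:

```python
from bs4 import BeautifulSoup

# A small, hypothetical HTML snippet standing in for a fetched page
html = """
<div class="review">
  <span class="author">Asha</span>
  <span class="rating">4.5</span>
  <p class="text">Beautiful place, worth a visit.</p>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
author = soup.find("span", class_="author").get_text()
rating = float(soup.find("span", class_="rating").get_text())
print(author, rating)  # Asha 4.5
```

The same `find`/`find_all` calls work on any HTML source, including the page source Selenium hands back later in this article.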
- Scrapy: An open-source framework meant for scraping larger datasets; being open source, it is also widely and effectively used.
- Selenium: Usually used to automate testing, Selenium can be used for scraping as well, since browser automation lets us interact with JavaScript-driven pages: clicks, scrolls, movement of data between multiple frames, etc.
# Installing with pip
pip install selenium

# Installing with conda
conda install -c conda-forge selenium
Chrome driver manager:
# The below installation is needed because the driver
# must match the installed Chrome browser version,
# and browser versions keep changing
pip install webdriver-manager
Initialization of Web driver:
Python3
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager

# As the installed Chrome version can differ from machine
# to machine, let webdriver-manager download a matching
# chromedriver automatically instead of hard-coding a path
driver = webdriver.Chrome(ChromeDriverManager().install())
Output:
[WDM] – ====== WebDriver manager ======
[WDM] – Current google-chrome version is 99.0.4844
[WDM] – Get LATEST driver version for 99.0.4844
[WDM] – Driver [C:\Users\ksaty\.wdm\drivers\chromedriver\win32\99.0.4844.51\chromedriver.exe] found in cache
Let us try to locate “Rashtrapati Bhavan” and then proceed further. The first time the page is opened, the browser may ask for permission to access it; if such a permission prompt appears, just accept it and move on.
Python3
# 'url' holds the Google Maps link of the place to open
url = "https://www.google.com/maps/place/Rashtrapati+Bhavan"
driver.get(url)
Output:
https://www.google.com/maps/place/Rashtrapati+Bhavan/@28.6143478,77.1972413,17z/data=!3m1!4b1!4m5!3m4!1s0x390ce2a99b6f9fa7:0x83a25e55f0af1c82!8m2!3d28.6143478!4d77.19943
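Once Selenium has loaded a place page, its HTML can be handed to Beautiful Soup for extraction. The sketch below runs on a made-up snippet rather than a live page: Google Maps' real class names are obfuscated and change often, so matching on comparatively stable attributes such as `aria-label` tends to be more robust than matching on class names. The star and review figures here are invented for the example:

```python
from bs4 import BeautifulSoup

# Hypothetical markup mimicking the shape of a place page;
# in practice this would be driver.page_source
page = """
<div>
  <span aria-label="4.6 stars">4.6</span>
  <span aria-label="41,111 reviews">(41,111)</span>
</div>
"""

soup = BeautifulSoup(page, "html.parser")
# bs4 accepts a callable as an attribute filter
stars = soup.find("span", attrs={"aria-label": lambda v: v and "stars" in v})
reviews = soup.find("span", attrs={"aria-label": lambda v: v and "reviews" in v})
print(stars.get_text(), reviews.get_text())
```

If Google changes the markup, only the two filter lambdas need updating.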
Scrape Google Reviews and Ratings
Here we will fetch three kinds of places from Google Maps: book shops, food, and temples. For each, we build a specific search query and merge it with the location.
Python3
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

driver = webdriver.Chrome(ChromeDriverManager().install())
driver.maximize_window()
driver.implicitly_wait(30)

# Either hard-code the location or take it via input.
# The given input should be a valid one.
location = "600028"
print("Search By ")
print("1.Book shops")
print("2.Food")
print("3.Temples")
print("4.Exit")
ch = "Y"
while ch.upper() == 'Y':
    choice = input("Enter choice(1/2/3/4):")
    if choice == '4':
        break
    if choice == '1':
        query = "book shops near " + location
    if choice == '2':
        query = "food near " + location
    if choice == '3':
        query = "temples near " + location

    # Load the Google search results for the chosen query
    driver.get("https://www.google.com/search?q=" + query)

    wait = WebDriverWait(driver, 10)
    # Hover over and click the local-results ("More places") link
    ActionChains(driver).move_to_element(
        wait.until(EC.element_to_be_clickable(
            (By.XPATH, "//a[contains(@href, '/search?tbs')]")))).perform()
    wait.until(EC.element_to_be_clickable(
        (By.XPATH, "//a[contains(@href, '/search?tbs')]"))).click()

    # Collect the place names listed on the results page
    names = []
    for name in driver.find_elements(By.XPATH, "//div[@aria-level='3']"):
        names.append(name.text)
    print(names)

    ch = input("Do you want to continue (Y/N): ")
Output:
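As a side note on the menu logic above, the if-chain that builds `query` can also be written as a dictionary lookup, which makes adding a new category a one-line change. A minimal sketch, with the choice hard-coded where the script would call `input()`:

```python
location = "600028"
queries = {
    '1': "book shops near " + location,
    '2': "food near " + location,
    '3': "temples near " + location,
}
choice = '2'  # would normally come from input()
query = queries.get(choice)
print(query)  # food near 600028
```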