Using Ruby and Capybara to Scrape
Use everyday Rubyist tools to scrape websites.
TL;DR
- Create a wrapper class
- include Capybara::DSL
- visit 'http://any.website.you.want.com'
The Copy-Pasteable Bits
The Setup
require 'csv'
require 'nokogiri'
require 'capybara'
require 'capybara/poltergeist'

class WebScraper
  include Capybara::DSL

  # Register Poltergeist (headless, JavaScript-capable) and silence
  # JS errors coming from pages we don't control.
  Capybara.register_driver :poltergeist do |app|
    options = { js_errors: false }
    Capybara::Poltergeist::Driver.new(app, options)
  end
  Capybara.default_driver = :poltergeist

  # Hand the Capybara page to the caller's block.
  def scrape
    yield page
  end

  # Convenience class method so callers never instantiate directly.
  def self.scrape(&block)
    new.scrape(&block)
  end
end
The Crawling
WebScraper.scrape do |page|
  page.visit 'http://google.com/search?q=foo'

  10.times do |i|
    begin
      File.write("search_#{i}.html", page.html)
      page.within '#foot' do
        page.has_content? 'Next' # implicit wait for the pager to render
        page.find_link('Next').trigger('click')
      end
      sleep 5 # wait so that we don't hit their server too hard
    rescue Capybara::ElementNotFound
      # require 'byebug'; byebug # use this to start investigating why the 'Next' link is missing
      puts 'could not find link'
    end
  end
end
The Parsing
csv_rows = []
Dir.glob('search_*.html').each do |file_path|
  html = File.read(file_path) # read the whole file; don't leave an open handle
  rows = Nokogiri::HTML(html).css('div#search div#ires li.g') # grab each result
  rows.each do |row|
    link_text   = row.css('h3.r').text
    green_url   = row.css('div.kv cite').text
    description = row.css('span.st').text

    csv_rows << [link_text, green_url, description]
  end
end

CSV.open('searches.csv', 'wb') do |csv|
  csv_rows.each do |csv_row|
    csv << csv_row
  end
end
Random Thoughts That May Be Helpful
We decided to use Capybara because its headless browser executes JavaScript, which Mechanize does not. And since we already use Capybara in our test suite, it's a much more familiar tool for programmatically navigating web pages. The code above is a simplified example, but the structure can be extended to handle arbitrary page navigation and data grabbing.
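As a sketch of that extension, here is the same wrapper driving a form-based search. The URL, the 'q' field name, and the button and link labels are hypothetical stand-ins, not any real site's markup:

# Hypothetical example: the field name and labels below are assumptions.
WebScraper.scrape do |page|
  page.visit 'http://example.com'
  page.fill_in 'q', with: 'capybara'  # standard Capybara form helper
  page.click_button 'Search'
  page.click_link 'More results'
  File.write('example_results.html', page.html)
end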
- Doing the page grabbing and the data parsing separately lets us save raw pages as we go, so the data isn't lost if our scraper runs into problems
- You will likely have to clean up your CSV after you finish. Pages will have edge cases you didn't plan for when scraping. If you've saved your files while scraping, you'll save yourself the trouble of re-running the entire web crawling part (a cleanup sketch follows this list)
- Scope your CSS or XPath selectors as specifically as you can to make sure you get only the right elements
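That cleanup pass might look something like the sketch below. It assumes the searches.csv produced above; the particular normalizations (squeezing whitespace, dropping rows with no link text) are examples, since your edge cases will differ:

require 'csv'

cleaned = CSV.read('searches.csv').map do |row|
  # Collapse runs of whitespace and trim each field.
  row.map { |field| field.to_s.gsub(/\s+/, ' ').strip }
end
cleaned.reject! { |link_text, _url, _description| link_text.empty? }

CSV.open('searches_cleaned.csv', 'wb') do |csv|
  cleaned.each { |row| csv << row }
end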
Code Tips
- sleep x timers are good to keep from getting rate limited or blocked; it's also being nice to their servers :-)
- Extra puts x statements can be helpful in keeping track of what page you're on in a long and slow scrape.
- Use byebug and other debugging tools to figure out why links aren't working or pages are parsing wrong.
- Put all your scraping stuff into its own dir so that you don't mix files up: FileUtils.mkdir_p "page_dumps/#{Date.today.iso8601}"
- Useful trick for zero-padded file naming: 1.to_s.rjust(3, '0') # => "001"
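Putting those last two tips together, a sketch of how the crawl loop above could name its dumps. The page_dumps directory is just the name from the tip, not anything special:

require 'fileutils'
require 'date'

dir = "page_dumps/#{Date.today.iso8601}"
FileUtils.mkdir_p dir

10.times do |i|
  path = File.join(dir, "search_#{i.to_s.rjust(3, '0')}.html")
  puts path # e.g. page_dumps/<today>/search_000.html
  # Inside the crawl loop you would write the page here:
  # File.write(path, page.html)
end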
Other useful tools for scraping
- Nokogiri on its own, for parsing HTML you've already saved to disk (as in The Parsing above)
- Mechanize, if the pages you're scraping don't need JavaScript
:wq