Beautiful Soup vs. lxml – Speed

When comparing Python parsing libraries, you often hear complaints that Beautiful Soup is considerably slower than lxml, leading some people to conclude that lxml should be used in any performance-critical project. Having used Beautiful Soup in a large number of web scraping projects without ever running into real performance trouble, I wanted to measure the popular library's performance properly.

The Test

To test the two libraries, I wrote a simple single-threaded crawler that crawls a total of 100 URLs and then extracts the links and page title from each page. I implemented two different parser methods, one using lxml and one using Beautiful Soup, and also tested Beautiful Soup's speed with various non-default underlying parsers.
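For illustration, here is a minimal sketch of what the two parser methods might look like. The function names, the requests-based fetch, and the example.com URL are my own placeholders, not the benchmark's actual code:

```python
import requests
from bs4 import BeautifulSoup
from lxml import html


def parse_with_lxml(page_source):
    # Pure lxml: XPath for the title text and all link hrefs.
    tree = html.fromstring(page_source)
    title = tree.findtext(".//title")
    links = tree.xpath("//a/@href")
    return title, links


def parse_with_bs4(page_source, parser="html.parser"):
    # Beautiful Soup: the second argument selects the underlying
    # parser ("html.parser", "lxml", or "html5lib").
    soup = BeautifulSoup(page_source, parser)
    title = soup.title.string if soup.title else None
    links = [a["href"] for a in soup.find_all("a", href=True)]
    return title, links


page = requests.get("https://example.com").text  # placeholder URL
print(parse_with_lxml(page))
print(parse_with_bs4(page, parser="lxml"))
```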

Each setup was tested a total of five times to account for varying internet and server response times; the results below show how performance differs by library and underlying parser.
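A hypothetical timing wrapper matching that methodology (the `crawl` callable and URL list are stand-ins for the actual crawler, which the post does not reproduce):

```python
import time


def average_runtime(crawl, urls, runs=5):
    # Time the full crawl `runs` times and return the mean
    # wall-clock duration in seconds.
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        crawl(urls)
        timings.append(time.perf_counter() - start)
    return sum(timings) / runs
```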

The Results

Setup               Run #1   Run #2   Run #3   Run #4   Run #5   Avg. Speed (s)   Overhead Per Page (s)
lxml                38.49    36.82    39.63    39.02    39.84    38.76            N/A
Bs4 (html.parser)   49.17    52.10    45.35    47.38    47.07    48.21            0.09
Bs4 (html5lib)      54.61    53.17    53.37    56.35    54.12    54.32            0.16
Bs4 (lxml)          42.97    43.71    46.65    44.51    47.90    45.15            0.06

(All run times are seconds to crawl and parse the 100 URLs.)

As you can see, lxml is significantly faster than Beautiful Soup. A pure lxml solution is several seconds faster than Beautiful Soup with lxml as the underlying parser, the built-in Python parser (html.parser) is around 10 seconds slower, and the extremely lenient html5lib is slower still. The overhead per page parsed is nevertheless relatively small, with both Bs4 (html.parser) and Bs4 (lxml) adding less than 0.1 seconds per page.

Setup               Overhead Per Page (s)   100,000 URLs (Extra Hours)   500,000 URLs (Extra Hours)
Bs4 (html.parser)   0.09454                 2.6                          13.1
Bs4 (html5lib)      0.15564                 4.3                          21.6
Bs4 (lxml)          0.06388                 1.8                          8.9
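The post does not spell out the arithmetic behind these figures, but they are consistent with the following reconstruction: average each setup's five runs, subtract lxml's average, divide by the 100 pages crawled, then scale up to the larger URL counts:

```python
# Raw run times (seconds) from the results table above.
runs = {
    "lxml":              [38.49, 36.82, 39.63, 39.02, 39.84],
    "Bs4 (html.parser)": [49.17, 52.10, 45.35, 47.38, 47.07],
    "Bs4 (html5lib)":    [54.61, 53.17, 53.37, 56.35, 54.12],
    "Bs4 (lxml)":        [42.97, 43.71, 46.65, 44.51, 47.90],
}

lxml_avg = sum(runs["lxml"]) / len(runs["lxml"])

for name, times in runs.items():
    if name == "lxml":
        continue
    # Extra seconds per page relative to pure lxml, over 100 pages.
    per_page = (sum(times) / len(times) - lxml_avg) / 100
    for n_urls in (100_000, 500_000):
        extra_hours = per_page * n_urls / 3600
        print(f"{name}: {per_page:.5f} s/page -> "
              f"{extra_hours:.1f} extra hours over {n_urls:,} URLs")
```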

While the overhead per page seems very low, it becomes significant once you try to scale a crawler: even Beautiful Soup backed by lxml adds hours of extra runtime when you are crawling hundreds of thousands of URLs. It should be noted that the table above assumes a crawler running in a single thread. Anyone looking to crawl more than 100,000 URLs would be well advised to build a concurrent crawler using a library such as Twisted, asyncio, or concurrent.futures, as in the sketch below.
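A minimal sketch of that approach using concurrent.futures, so that parsing overhead is hidden behind the network wait of other workers (the URL list and worker count are illustrative placeholders):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests
from bs4 import BeautifulSoup


def fetch_and_parse(url):
    # Fetch one page and pull out the title and links.
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "lxml")
    title = soup.title.string if soup.title else None
    links = [a["href"] for a in soup.find_all("a", href=True)]
    return url, title, links


urls = [f"https://example.com/page/{i}" for i in range(100)]  # placeholders

with ThreadPoolExecutor(max_workers=20) as executor:
    futures = [executor.submit(fetch_and_parse, u) for u in urls]
    for future in as_completed(futures):
        url, title, links = future.result()
        print(url, title, len(links))
```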

So, whether Beautiful Soup is suitable really depends on the scale and nature of your project. Replacing Beautiful Soup with pure lxml is likely to yield a small (but considerable at scale) performance improvement. This does, however, come at the cost of losing the Beautiful Soup API, which makes selecting on-page elements a breeze.
