Automated web scraping spider generation using Browser Use and LLMs.
Generate Playwright spiders with minimal technical expertise.
Extracting data with LLMs is expensive. Spider Creator offers an alternative where LLMs are only used for the spider creation process, and the spiders themselves can then run using traditional methods, which are very cheap. This tradeoff makes it ideal for users seeking affordable, recurring scraping tasks.
We don't have benchmarks yet. However, in internal tests using GPT, initial figures show a cost of around $2.50 per page type. If you need a spider that handles two types of pages (for example, product listings and product details), creating it costs around $5.
DeepWiki Docs: https://deepwiki.com/carlosplanchon/spidercreator
git clone https://github.com/carlosplanchon/spidercreator.git
cd spidercreator
# If you are using uv, just run:
uv sync
# If you use pyenv (or a plain virtualenv), install with pip:
pip install -r requirements.txt
Install the Chromium browser for Playwright:
playwright install chromium
Export your OpenAI API key:
export OPENAI_API_KEY=...
Generate your Playwright spider:
from spidercreator import create_spider
from prompts.listings import PRODUCT_LISTING_TASK_PROMPT
# Uruguayan supermarket with product listings:
url = "https://tiendainglesa.com.uy/"
# PRODUCT_LISTING_TASK_PROMPT is a prompt to guide the extraction
# of sample product data from a web homepage,
# optionally following links to product detail pages.
browser_use_task = PRODUCT_LISTING_TASK_PROMPT.format(url=url)
# This function generates a spider
# and saves it to results/<task_id>/spider_code.py
task_id: str = create_spider(browser_use_task=browser_use_task)
Result:
from playwright.sync_api import sync_playwright
from parsel import Selector

import prettyprinter
prettyprinter.install_extras()


class TiendaInglesaScraper:
    def __init__(self, base_url):
        self.base_url = base_url

    def fetch(self, page, url):
        page.goto(url)
        page.wait_for_load_state('networkidle')
        return page.content()

    def parse_homepage(self, html_content, page):
        selector = Selector(text=html_content)
        product_containers = selector.xpath("//div[contains(@class,'card-product-container')]")
        products = []
        for container in product_containers:
            product = {}
            product['name'] = container.xpath(".//span[contains(@class,'card-product-name')]/text()").get('').strip()
            relative_link = container.xpath(".//a/@href").get()
            product['link'] = self.base_url.rstrip('/') + relative_link if relative_link else None
            product['discount'] = container.xpath(".//ul[contains(@class,'card-product-promo')]//li[contains(@class,'card-product-badge')]/text()").get('').strip()
            product['price_before'] = container.xpath(".//span[contains(@class,'wTxtProductPriceBefore')]/text()").get('').strip()
            product['price_after'] = container.xpath(".//span[contains(@class,'ProductPrice')]/text()").get('').strip()
            if product['link']:
                detailed_html = self.fetch(page, product['link'])
                detailed_attrs = self.parse_product_page(detailed_html)
                product.update(detailed_attrs)
            print("--- PRODUCT ---")
            prettyprinter.cpprint(product)
            products.append(product)
        return products

    def parse_product_page(self, html_content):
        selector = Selector(text=html_content)
        detailed_attributes = {}
        detailed_attributes['description'] = selector.xpath("//span[contains(@class, 'ProductDescription')]/text()").get('').strip()
        return detailed_attributes


if __name__ == "__main__":
    base_url = 'https://www.tiendainglesa.com.uy/'
    scraper = TiendaInglesaScraper(base_url)

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=False)
        page = browser.new_page()
        homepage_html = scraper.fetch(page, base_url)
        products_data = scraper.parse_homepage(homepage_html, page)
        browser.close()

    for idx, product in enumerate(products_data, 1):
        print(f"\nProduct {idx}:")
        for key, value in product.items():
            print(f"{key.title().replace('_', ' ')}: {value}")
Find practical usage examples in the examples folder.
- The user provides a prompt describing the desired task or data.
- The system, leveraging Browser Use, opens a browser and performs the task described in the prompt. The browser activity is recorded.
- The Spider Creator module generates a web scraper (spider) from the recorded actions.
Steps:
- Load Browser Use recordings.
- Generate a mindmap to visualize the web navigation process.
- Make a multi-stage plan for how to generate XPaths.
- For each stage of the plan:
  - Given the website HTML, generate a compressed & chunked DOM representation.
  - Traverse the compressed DOM chunk by chunk, select Regions of Interest, and generate candidate spiders based on the intention described in the planning stage.
  - Execute candidate spiders in a virtual execution environment (ctxexec).
  - Verify execution results and select the best candidate spider for this stage (a sketch of this generate-execute-select loop follows this list).
- Combine spiders from each stage into a final spider.
- Save spider code and return task_id.
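To make the per-stage loop above concrete, here is a minimal sketch of the generate-execute-select step. It is not the actual implementation: compress_dom, propose_candidate, and score_result are hypothetical stand-ins for the real chunking, LLM generation, and verification logic, and a plain exec() stands in for the ctxexec virtual execution environment.

from typing import Callable, Iterable


def run_stage(
    html: str,
    stage_intent: str,
    compress_dom: Callable[[str], Iterable[str]],   # hypothetical: compressed & chunked DOM
    propose_candidate: Callable[[str, str], str],   # hypothetical: LLM-generated candidate code
    score_result: Callable[[object, str], float],   # hypothetical: verification / scoring
) -> str | None:
    """Generate candidate spiders per DOM chunk, execute them, and keep the best one."""
    best_code, best_score = None, float("-inf")
    for chunk in compress_dom(html):
        candidate_code = propose_candidate(chunk, stage_intent)
        namespace: dict = {}
        try:
            exec(candidate_code, namespace)          # stand-in for the ctxexec sandbox
            result = namespace["extract"](html)      # assumed candidate entry point
        except Exception:
            continue                                 # discard candidates that fail to run
        score = score_result(result, stage_intent)
        if score > best_score:
            best_code, best_score = candidate_code, score
    return best_code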
We love contributions! Feel free to open issues for bugs or feature requests.