Hey, wxpath author here. It's pretty cool seeing this project reach the front page a week after I posted it.
Just wanted to mention a few things.
wxpath is the result of a decade of working on and thinking about web crawling and scraping. I created two somewhat popular Python web-extraction projects a decade ago (eatiht and libextract), and even helped publish a meta-analysis of scrapers, all heavily relying on lxml/XPath.
After finding some time on my hands, and after a hiatus from actually writing web scrapers, I decided to return to this little problem domain.
Obviously, LLMs have proven to be quite formidable at web content extraction, but they encounter the now-familiar issues of token limits and cost.
Besides LLMs, there have been some great projects making real progress on web data extraction: the Scrapy and Crawlee frameworks, Ferret (https://www.montferret.dev/docs/introduction/) - another declarative web crawling framework - and Xidel (https://github.com/benibela/xidel), among others.
The common abstraction across most web-scraping frameworks and tools is the "node selector" - the syntax and engine for extracting nodes and their data.
XPath has proven resilient and remains a popular node-selection and processing language. What it lacks, and what those frameworks provide, is crawling.
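To make that gap concrete, here's a rough sketch of the boilerplate you end up hand-writing with plain lxml/XPath plus requests (https://example.com is just a placeholder seed). The extraction is a single XPath expression; the crawl loop - fetching, deduplication, link-following - lives entirely outside the selector language:

    import requests
    from lxml import html

    # XPath only selects nodes within a single document; the crawl
    # loop around it has to be written by hand.
    seen, queue = set(), ["https://example.com"]
    while queue:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        doc = html.fromstring(requests.get(url).content)
        doc.make_links_absolute(url)
        print(url, doc.xpath("//title/text()"))  # extraction: one XPath
        queue.extend(doc.xpath("//a/@href"))     # crawling: bolted on by hand
        # (unbounded sketch - a real crawler would scope and throttle this)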
wxpath is an attempt to fill that gap.
Hope people find it useful!
eatiht: https://github.com/rodricios/eatiht
libextract: https://github.com/datalib/libextract