I'm sorry, but is there anything new here? This seems like a step backwards, if anything.

Usually when web scraping, I can just load in HtmlAgilityPack (C#), point it at a URL, then write some functional code to extract the necessary data.
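For a static page that's only a handful of lines, roughly like this (the URL and XPath are placeholders):

    using System;
    using HtmlAgilityPack;

    // Fetch the page over HTTP and query it with XPath.
    var web = new HtmlWeb();
    HtmlDocument doc = web.Load("https://example.com/products"); // placeholder URL

    // Hypothetical selector for the data of interest.
    var nodes = doc.DocumentNode.SelectNodes("//div[@class='product']/h2");
    if (nodes != null)
    {
        foreach (var node in nodes)
            Console.WriteLine(node.InnerText.Trim());
    }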

Even better, I'll examine the website in Fiddler and hope they have data-view separation going on, so I can just intercept the JSON they load instead.
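When such an endpoint exists, the scrape collapses into a plain HTTP call plus JSON parsing; the endpoint and field names below are made up:

    using System;
    using System.Net.Http;
    using System.Text.Json;

    // Hypothetical JSON endpoint spotted in Fiddler while browsing the site.
    using var client = new HttpClient();
    string json = await client.GetStringAsync("https://example.com/api/products?page=1");

    using JsonDocument parsed = JsonDocument.Parse(json);
    foreach (JsonElement item in parsed.RootElement.GetProperty("items").EnumerateArray())
        Console.WriteLine(item.GetProperty("name").GetString());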

Worst-case scenario, I need to dynamically click on buttons etc., but this can usually be handled by Selenium, or, if they detect that, by rolling a custom implementation with CefSharp (again, not hard: just download the NuGet package, and it lets you run your own custom JavaScript).
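With Selenium (C# bindings) the dynamic case looks roughly like this; the selectors are invented, and the CefSharp route is similar except you host the browser yourself and inject your own JavaScript:

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;
    using OpenQA.Selenium.Support.UI;

    using var driver = new ChromeDriver();
    driver.Navigate().GoToUrl("https://example.com/products"); // placeholder URL

    // Wait for a hypothetical "load more" button and click it so the
    // JS-rendered content is present before reading it.
    var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
    wait.Until(d => d.FindElement(By.CssSelector("button.load-more"))).Click();

    foreach (var el in driver.FindElements(By.CssSelector("div.product h2")))
        Console.WriteLine(el.Text);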

A new, more limited language (with no IDE tooling) is not the way to go. If anything, a better web scraper would just make the processes I mentioned above more seamless, for example by combining element finding/selection in Chrome with code generation.

The main advantage of this over your approach with HtmlAgilityPack is that Ferret can handle dynamic web pages, i.e. those rendered with JS. It can also emulate user interactions. But anyway, thanks for your feedback :)

The code for doing this isn't too difficult with https://github.com/chromedp/chromedp. Is this just some helpers around that? I haven't used it or Puppeteer on the Node side that heavily, but what have you found difficult enough to warrant this kind of wrapper/abstraction instead of direct library use?