(After scraping the site, that script does a Mac-specific thing, but you can do whatever you want. Sending a text message to your cell phone is a fun and effective thing to do, for example.)
You can do this just as easily on Windows by using the 'wget' program instead of 'curl' to hit the page, or better yet, by using a scripting language such as Ruby.
If you need to drill into pages, and HTTP GETs with URLs won't get you there for some reason, you can use something like HttpUnit. Here's a simple method that uses the HttpUnit API to log into a web application using a login form, which then redirects the user to a product catalog page. Then it uses the resulting 'response' object to scrape an HTML table's contents:
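A sketch of such a method might look like the following. The URL, the form name 'login', and the 'username'/'password' field names are all hypothetical; adapt them to your application, and put httpunit.jar on the classpath:

```java
import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebForm;
import com.meterware.httpunit.WebResponse;
import com.meterware.httpunit.WebTable;

public class CatalogScraper {

    // Log in through the login form, follow the redirect to the
    // catalog page, and print the contents of its first HTML table.
    public static void scrapeCatalog() throws Exception {
        WebConversation conversation = new WebConversation();

        // Hypothetical login URL -- point this at your own app.
        WebResponse response =
            conversation.getResponse("http://localhost:8080/store/login");

        // Fill in and submit the login form; HttpUnit follows the
        // redirect to the catalog page automatically.
        WebForm loginForm = response.getFormWithName("login");
        loginForm.setParameter("username", "mike");
        loginForm.setParameter("password", "secret");
        WebResponse catalog = loginForm.submit();

        // Scrape the first table on the catalog page, cell by cell.
        WebTable table = catalog.getTables()[0];
        for (int row = 0; row < table.getRowCount(); row++) {
            for (int col = 0; col < table.getColumnCount(); col++) {
                System.out.print(table.getCellAsText(row, col) + "\t");
            }
            System.out.println();
        }
    }
}
```

This needs a live server to run against, so treat it as a starting point rather than something you can execute as-is.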
You can also get the DOM tree back from an HttpUnit 'response' object and traverse it using XPath, for example.
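For instance, the DOM you get back is a standard org.w3c.dom tree, so you can query it with the JDK's built-in XPath support. A minimal sketch, using a canned HTML fragment in place of a live response (the table contents here are made up):

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class TableScraper {

    // Stand-in for the markup behind the DOM tree a response gives you.
    static final String SAMPLE =
        "<html><body><table>" +
        "<tr><td>Widget</td><td>9.99</td></tr>" +
        "<tr><td>Gadget</td><td>19.99</td></tr>" +
        "</table></body></html>";

    // Pull the text of every table cell with one XPath expression.
    static List<String> extractCells(String html) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(html.getBytes("UTF-8")));
            XPath xpath = XPathFactory.newInstance().newXPath();
            NodeList cells = (NodeList) xpath.evaluate(
                "//table/tr/td", doc, XPathConstants.NODESET);
            List<String> texts = new ArrayList<String>();
            for (int i = 0; i < cells.getLength(); i++) {
                texts.add(cells.item(i).getTextContent());
            }
            return texts;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(extractCells(SAMPLE));
    }
}
```

With a real page you'd swap the canned string for the document the response hands back and keep the XPath traversal the same.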
Using HttpUnit is more heavy-handed than a scripting approach, but it demonstrates another way of going about screen-scraping deep into a web application.
Mike [ September 23, 2004: Message edited by: Mike Clark ]
Mike Clark<br />Author of <a href="http://www.amazon.com/exec/obidos/ASIN/0974514039/ref=jranch-20" target="_blank" rel="nofollow">Pragmatic Project Automation</a>