web scraper
Kurt Wall
kwall
Tue Nov 21 20:27:54 PST 2006
On Tue, Nov 21, 2006 at 10:43:11PM -0500, Kurt Wall wrote:
[le snip]
> A poor man's scraper might involve running "w3m --dump" on each URL of interest if all
> you want is content, or "w3m --source" on each interesting URL if you want the entire
> page. Perl has modules for scraping HTML screens, too.
Well, make that "w3m -dump" and "w3m -dump_source", respectively.
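A minimal sketch of that loop, assuming a file named urls.txt (a hypothetical name) with one URL per line:

```shell
#!/bin/sh
# Dump the rendered text of each URL listed in urls.txt.
# Swap -dump for -dump_source to save the raw HTML instead.
while read -r url; do
    # Hypothetical output naming: strip the scheme, turn slashes into underscores.
    out=$(printf '%s' "$url" | sed 's|^[a-z]*://||; s|/|_|g')
    w3m -dump "$url" > "$out.txt"
done < urls.txt
```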
Kurt
--
Let us remember that ours is a nation of lawyers and order.
More information about the Linux-users mailing list