Linux Script to download web pages/RSS feeds from database of numbers...

Discussion in 'Software' started by PaulW, 24 May 2006.

  1. PaulW

    PaulW What's a Dremel?

    Joined:
    2 Feb 2004
    Posts:
    458
    Likes Received:
    0
    I have a fairly large database of numbers, and I'd like to write a script that will go through the list, download the corresponding page from an internal HTTP server, and then either save the output to a file or grep out the data I actually need, and store it in the database along with the number.

    It's a bit of a hard one to explain, really, I guess...

    I remember once, in an old eCommerce reviewing job, a lad I was good mates with (who has since moved to New Zealand, and I've lost contact with him) wrote something, I believe in Perl, which queried the Google search engine for keywords and then built a file containing just the URLs themselves, without any descriptions or extra data associated with the results. In a way, this is kind of similar.

    I'll be looking at running this script on a Debian server which is running MySQL (the database where all the IDs and such are stored, and where I also need to store the filtered data retrieved from the queries).
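    The loop described above could be sketched roughly like this. This is a minimal illustration in Python (the poster mentioned Perl; either would work on Debian): the table/column names ("numbers", "id", "data"), the internal URL pattern, and the extraction regex are all hypothetical placeholders, and sqlite3 stands in here for the real MySQL connection.

    ```python
    import re
    import sqlite3                 # stand-in; use a MySQL driver against the real server
    import urllib.request

    URL = "http://intranet/lookup?id={}"   # hypothetical internal URL pattern

    def fetch_page(number):
        """Download one page from the internal HTTP server."""
        with urllib.request.urlopen(URL.format(number)) as resp:
            return resp.read().decode("utf-8", "replace")

    def extract(html):
        """'Grep out' the bit we actually need -- placeholder pattern."""
        m = re.search(r"<title>([^<]*)</title>", html, re.IGNORECASE)
        return m.group(1) if m else None

    def process_all(conn, fetch=fetch_page):
        """Walk the list of numbers, storing the filtered data beside each one."""
        for (num,) in conn.execute("SELECT id FROM numbers").fetchall():
            conn.execute("UPDATE numbers SET data = ? WHERE id = ?",
                         (extract(fetch(num)), num))
        conn.commit()
    ```

    The fetch function is passed in as a parameter so the loop can be exercised without a live server; against the real box you'd just call process_all with the default.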

    Cheers!
     
  2. PaulW

    PaulW What's a Dremel?

    Sorry, forgot to mention: the data is actually contained within the META Keywords tag, but it's all comma-separated, and I only need to extract the data for a specific keyword...
     