I need a crawler that runs searches for specific keywords across a pre-determined set of sites and returns links to the original articles.

Ideally the crawler should run 24/7 on a Linux server, storing the results in a database, with the ability to add and delete keywords as needed. I would, however, accept a desktop robot built with OpenKapow RoboMaker or a similar tool.
## Deliverables
1) Complete and fully-functional working program(s) in executable form as well as complete source code of all work done.
2) Deliverables must be in ready-to-run condition, as follows (depending on the nature of the deliverables):
a) For web sites or other server-side deliverables intended to only ever exist in one place in the Buyer's environment: deliverables must be installed by the Seller in ready-to-run condition in the Buyer's environment.
b) For all others including desktop software or software the buyer intends to distribute: A software installation package that will install the software in ready-to-run condition on the platform(s) specified in this bid request.
3) All deliverables will be considered "work made for hire" under U.S. Copyright law. Buyer will receive exclusive and complete copyrights to all work purchased. (No GPL, GNU, 3rd party components, etc. unless all copyright ramifications are explained AND AGREED TO by the buyer on the site per the coder's Seller Legal Agreement).
* * *

*This broadcast message was sent to all bidders on Sunday, Mar 23, 2008, 3:58:15 PM:*
Update to original requirements: instead of running the crawler against a number of websites, I've decided to start with a version that checks for pre-defined keywords on Google News only and publishes a feed of the results, with the ability to exclude specific websites from the results.
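The updated requirement could be sketched roughly as below: pull an RSS feed of Google News results for each keyword, then drop items whose link points at an excluded site before republishing. The `SEARCH_FEED` URL reflects Google News's RSS search endpoint as I understand it, but the exact URL format should be verified; `fetch_feed` and `filter_items` are illustrative names of my own.

```python
import xml.etree.ElementTree as ET
from urllib.parse import quote, urlparse
from urllib.request import urlopen

# Assumed Google News RSS search endpoint; verify before relying on it.
SEARCH_FEED = "https://news.google.com/rss/search?q={query}"


def fetch_feed(keyword):
    """Download the raw RSS results for one keyword (network required)."""
    with urlopen(SEARCH_FEED.format(query=quote(keyword))) as resp:
        return resp.read().decode("utf-8")


def filter_items(rss_xml, excluded_domains):
    """Remove <item> entries whose link host is on the exclusion list.

    Returns the kept (title, link) pairs and the filtered RSS document,
    ready to be republished as the results feed.
    """
    root = ET.fromstring(rss_xml)
    channel = root.find("channel")
    kept = []
    for item in channel.findall("item"):  # findall returns a list, so removal is safe
        link = item.findtext("link") or ""
        host = urlparse(link).hostname or ""
        if any(host == d or host.endswith("." + d) for d in excluded_domains):
            channel.remove(item)
        else:
            kept.append((item.findtext("title"), link))
    return kept, ET.tostring(root, encoding="unicode")
```

A small loop over the keyword list, calling `fetch_feed` and then `filter_items` with the current exclusion list, would produce the published feed on each run.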
## Platform
Windows, Linux