Create a Python application that will walk web sites containing bibliographic data, gathering author names. These names and their papers' titles will be stored in a pair of DBMS tables. The application will keep track of how often it has crawled and extracted data from these web sites. When a web site changes (detected by date or diff), it will be crawled again and the new information added to the database tables, with a timestamp noted in the table entries.

This system will be used by a researcher to perform a continuous prior art search. The researcher will typically track the home pages of 20-30 other researchers. These home pages usually contain a listing of papers. The format of these listings varies, so the information cannot be parsed definitively; the researcher therefore needs to perform the "mapping" manually. So, if you initially present the listing to the researcher in HTML format, the researcher can cut and paste the relevant paper titles into entry fields. You can then persist these titles in a "paper" table. You can also cache a copy of the listing web page for future crawls. On subsequent crawls, fetch the page and present the HTML diff text to the researcher; this will most likely contain the text of new papers.

All code is to be written in Python in the Plone framework, and all database operations should use SQLAlchemy.
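As a rough illustration of the intended data model and diff step, here is a minimal sketch using SQLAlchemy's declarative mapping (1.4+ style) and the standard-library `difflib`. All table, column, and function names here are illustrative assumptions, not part of the bid's requirements; the actual schema would be agreed on during implementation.

```python
import difflib
from datetime import datetime, timezone

from sqlalchemy import (Column, DateTime, ForeignKey, Integer, String, Text,
                        create_engine)
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()


class Author(Base):
    # One tracked researcher whose home page is crawled.
    __tablename__ = "author"
    id = Column(Integer, primary_key=True)
    name = Column(String(200), nullable=False)
    page_url = Column(String(500))
    cached_html = Column(Text)        # last fetched copy of the listing page
    last_crawled = Column(DateTime)   # when the page was last fetched
    papers = relationship("Paper", back_populates="author")


class Paper(Base):
    # A title the researcher pasted in after reviewing the page or its diff.
    __tablename__ = "paper"
    id = Column(Integer, primary_key=True)
    title = Column(String(500), nullable=False)
    author_id = Column(Integer, ForeignKey("author.id"), nullable=False)
    added_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
    author = relationship("Author", back_populates="papers")


def page_diff(cached_html: str, fresh_html: str) -> str:
    """Unified diff of the cached listing page against a fresh crawl.

    Lines prefixed with '+' are what gets shown to the researcher,
    since they most likely contain the titles of new papers.
    """
    return "\n".join(difflib.unified_diff(
        cached_html.splitlines(), fresh_html.splitlines(),
        fromfile="cached", tofile="fresh", lineterm=""))
```

In this sketch the "mapping" stays manual, as the spec requires: the diff only narrows down what the researcher looks at, and rows are inserted into `paper` only from the titles the researcher pastes in.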
2) Deliverables must be in ready-to-run condition, as follows (depending on the nature of the deliverables):
a) For web sites or other server-side deliverables intended to exist only in one place in the Buyer's environment: Deliverables must be installed by the Seller in ready-to-run condition in the Buyer's environment.
b) For all others, including desktop software or software the Buyer intends to distribute: a software installation package that will install the software in ready-to-run condition on the platform(s) specified in this bid request.