During this project you will implement a Haskell program for harvesting information from the Web. Two examples of such programs are provided together with this project [url removed, login to view].

The first example provided is a basic Web crawler: a program that recursively downloads webpages, collects web links from these pages, and follows those links to collect more pages. That is the basic principle of the Googlebot, the Google web crawler that “travels” the Internet collecting information from webpages.

The second example is an application that visits webpages looking for podcasts. The application then saves these podcasts into a [url removed, login to view].

In both cases a “parsing” component is needed to obtain information from the webpage, similar to the parsing you have done in the mini-project to obtain information from a JSON file. In this project the parsing component can be simpler, as the focus is on the HTTP requests and database [url removed, login to view]. Web programs commonly follow exactly this architecture: one component for web requests, one component for parsing the information received, and one component for saving/accessing this information on/from a database.

I have also put up example files for the project. Please refer to them before asking any further questions regarding this project.
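As a rough illustration of the parsing component described above, the sketch below extracts `href` link targets from a raw HTML string using only the standard library. This is an assumption-laden simplification, not the code from the provided example files: the function name `extractLinks` is made up for illustration, real HTML needs a more robust parser, and a full crawler would pair this with an HTTP-request component and a database component.

```haskell
module Main where

import Data.List (isPrefixOf)

-- Collect every href="..." target from an HTML string by scanning for
-- the literal marker href=" and taking characters up to the closing quote.
-- A hypothetical helper for illustration only; a real crawler would use a
-- proper HTML parsing library.
extractLinks :: String -> [String]
extractLinks [] = []
extractLinks html@(_:rest)
  | marker `isPrefixOf` html =
      let afterMarker = drop (length marker) html
          target      = takeWhile (/= '"') afterMarker
      in target : extractLinks afterMarker
  | otherwise = extractLinks rest
  where
    marker = "href=\""

main :: IO ()
main = mapM_ putStrLn
  (extractLinks "<a href=\"http://a.example\">A</a> <a href=\"http://b.example\">B</a>")
```

A crawler built on this would feed each extracted link back into the request component, exactly the recursive download-parse-follow loop the first example demonstrates.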