Cancelled

Very simple web page scraper

Hello PHP coders, this is what I am looking for. I have a couple of URLs like these:

- [url removed, login to view] - this is blog content
- [url removed, login to view] - another blog
- [url removed, login to view] - this is an article

Here are a couple more:

- [url removed, login to view] - another one
- [url removed, login to view]

There are also some wiki pages and so on. I would very much like someone to go to those pages, get the main content of each page, and save it in a database. There are blogs, wikis, article/news pages, all kinds of web pages. Those pages contain some main text (the actual content) along with menus, pictures, and so on. Can someone extract the content of the page? If a page contains an article and that article has images, movies, or Flash, I need those too. This means the extracted content needs to be a plain HTML section. This is not a strpos job but rather a regexp or something similar. So: go to the web pages, get the content, remove all the unnecessary menus, ads, etc., and put the content in a database or display it on screen. I need to pass the URL via the GET method because I would like to test it in the browser. Please note that the page content is dynamic.
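As a rough starting point, here is a minimal PHP sketch of the kind of scraper described above. It assumes the URL arrives as `?url=...` via GET (so it can be tested in a browser, as requested) and that the main content sits in a common container such as `<article>` or a `div` whose id/class mentions post, entry, article, or content. The file name `scrape.php` and the patterns are illustrative assumptions, not a definitive extractor.

```php
<?php
// Minimal sketch: the URL is passed via GET so it can be tested in a browser,
// e.g. scrape.php?url=http://example.com/some-post (file name is hypothetical).

$url = isset($_GET['url']) ? $_GET['url'] : '';
if (!filter_var($url, FILTER_VALIDATE_URL)) {
    die('Usage: scrape.php?url=http://example.com/page');
}

// Fetch the raw HTML with cURL (follows redirects, browser-like user agent).
$ch = curl_init($url);
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_USERAGENT      => 'Mozilla/5.0 (compatible; SimpleScraper/1.0)',
    CURLOPT_TIMEOUT        => 20,
));
$html = curl_exec($ch);
curl_close($ch);
if ($html === false) {
    die('Could not fetch the page.');
}

// Guessed patterns for typical main-content containers; these are assumptions
// and real sites will usually need their own patterns.
$patterns = array(
    '~<article\b[^>]*>(.*?)</article>~is',
    // Naive: stops at the first </div>, so deeply nested markup may be truncated.
    '~<div[^>]+(?:id|class)="[^"]*(?:post|entry|article|content)[^"]*"[^>]*>(.*?)</div>~is',
);

$content = '';
foreach ($patterns as $pattern) {
    if (preg_match($pattern, $html, $m)) {
        $content = $m[1];
        break;
    }
}

// Strip obvious non-content blocks but keep images, movie/Flash embeds and
// the rest of the plain HTML section.
$content = preg_replace('~<(script|style|nav|aside)\b.*?</\1>~is', '', $content);

// Display the result; saving to a database would be a single INSERT here instead.
echo $content !== '' ? $content : 'No main content container matched.';
```

Note that a pure regexp approach like this is fragile on arbitrary pages; in practice each site category tends to need its own patterns, or a DOM parser instead.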

## Deliverables

1) Complete and fully-functional working program(s) in executable form as well as complete source code of all work done.

2) Deliverables must be in ready-to-run condition, as follows (depending on the nature of the deliverables):

a) For web sites or other server-side deliverables intended to only ever exist in one place in the Buyer's environment--Deliverables must be installed by the Seller in ready-to-run condition in the Buyer's environment.

b) For all others including desktop software or software the buyer intends to distribute: A software installation package that will install the software in ready-to-run condition on the platform(s) specified in this bid request.

3) All deliverables will be considered "work made for hire" under U.S. Copyright law. Buyer will receive exclusive and complete copyrights to all work purchased. (No GPL, GNU, 3rd party components, etc. unless all copyright ramifications are explained AND AGREED TO by the buyer on the site per the coder's Seller Legal Agreement).

* * * This broadcast message was sent to all bidders on Friday, Jul 11, 2008, 2:54:48 AM:

Hello, many of you requested a list of URLs, but I do not have one because those URLs will be gathered from another location, and the content of the web pages is dynamic. All I can say is that there are 3 or 4 major categories:

1. Blogs (WordPress and other blog scripts) have a fixed format (here the RSS feed can be used).
2. Wiki pages have a fixed format.
3. Article pages: HubPages, Squidoo, Ezine.
4. News pages from Yahoo, Google, ZDNet, etc. (the post begins with something like "Posted by Matthew Miller @ 7:57 am" and ends with something similar).

I am willing to buy multiple solutions: if someone can do blogs, I will pay for the blogs. If someone can do 2 or 3 of the categories, I can pay for those only. My budget is very limited. I am willing to pay $10 each.
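For category 1 (blogs with a feed), a sketch of the RSS route mentioned above could look like the following. The feed URL is a placeholder (WordPress blogs usually expose it at `<blog-url>/feed`), and the `<content:encoded>` / `<description>` handling follows the usual RSS conventions rather than anything specified in this project.

```php
<?php
// Sketch for category 1 (blogs with an RSS feed). The feed URL below is a
// placeholder; for WordPress blogs it is usually "<blog-url>/feed".
$feedUrl = 'http://example-blog.com/feed';

$xml = simplexml_load_file($feedUrl);
if ($xml === false) {
    die('Could not load the RSS feed.');
}

foreach ($xml->channel->item as $item) {
    $title = (string) $item->title;
    $link  = (string) $item->link;

    // WordPress puts the full post HTML in <content:encoded>; fall back to
    // <description> when that element is missing.
    $body = (string) $item->children('content', true)->encoded;
    if ($body === '') {
        $body = (string) $item->description;
    }

    // Display the post; saving to a database would replace these echoes.
    echo '<h2>' . htmlspecialchars($title) . '</h2>';
    echo '<p><a href="' . htmlspecialchars($link) . '">' . htmlspecialchars($link) . '</a></p>';
    echo $body; // full HTML body, images and embeds included
    echo '<hr>';
}
```

Using the feed sidesteps menu/ad stripping entirely for blogs, which is why this category is the easiest of the four to deliver separately.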

## Platform

PHP with regexp (no strpos, etc.)

Skills: PHP


About the Employer:
( 4 reviews ) Iasi, Germany

Project ID: #3044569

4 freelancers are bidding on average $282 for this job

| Freelancer | Proposal | Bid | Reviews | Rating |
|------------|----------|-----|---------|--------|
| calitek | See private message. | $850 USD in 10 days | 28 | 5.0 |
| webdevdad | See private message. | $85 USD in 10 days | 29 | 4.3 |
| reza56 | See private message. | $153 USD in 10 days | 28 | 4.0 |
| f3mitch | See private message. | $38.25 USD in 10 days | 1 | 0.7 |