Web Scraping is the process of extracting data or information from an online source such as a website, database, application, etc. Web Scraping Specialists have the skills to help people collect valuable digital data and quickly find the useful information they need from websites, mobile apps, and APIs. These experts usually use web scraping tools and advanced technologies to collect large amounts of targeted data without any manual work for the client.
With web scraping, tasks that otherwise may require a lot of time can be automated and done faster. Our experienced Web Scraping Specialists use their expertise to develop scripts that continuously target structured and unstructured data sources.
Here are some projects our expert Web Scraping Specialists have made real:
Web Scraping Specialists are skilled professionals who know how to help businesses optimize processes while collecting the rich, structured data they need for their specific purposes. Our experts speed up the process and return accurate results in less time, so that the customer can make better decisions more quickly without any manual labour. If you are looking for a talented professional to take on a web scraping project for you, you have come to the right place. Here on Freelancer.com you can find talented professionals who will get the job done with top-quality results! Post your project now and see what our Web Scraping professionals can do for you!
Based on 363,775 reviews, clients rate our Web Scraping Specialists 4.9 out of 5 stars.
Hello, I can start working on this immediately. I am quite sure we will meet your expectations and give you 100% satisfaction and quality work. I hope you will give me this opportunity to serve you better. Thanks and regards, Vishal
I have a collection of web pages that I need turned into clean, original copy and then loaded into my system. The raw material is plain-text extracted directly from those pages—no numerical or mixed data involved—so the entire job revolves around handling text content only. Here is the workflow I have in mind: first you’ll grab the plain text from each specified URL, strip away anything that isn’t core content, and feed that text into your preferred rewriting engine (OpenAI, GPT-based, or another high-quality NLP tool). The goal is a fluent, human-sounding rewrite that preserves meaning while clearing any potential plagiarism checks. Once the rewrite is approved, you will insert the new text back into the destination I provide (CSV template or the web form in my CM...
I need a sharp, Excel-savvy researcher to turn scattered developer brochures and website data into one clean, filter-ready spreadsheet. Your task is to compile every pre-selling or RFO project you can find from the major developers that operate in my target markets—primarily Metro Manila (with emphasis on Quezon City, Manila, Pasig and Valenzuela) plus key growth hubs in North Luzon. For each project, capture the essentials I use when pitching to buyers: • Developer name and exact project name • Precise location/address • Project type (condo, house-and-lot, lot only) • Highlight amenities offered • Complete payment terms and a sample computation straight from the developer’s price sheet • Contract price range and reservation fee Pleas...
I have a predefined list of topics and I need a methodical web-researcher to comb through the internet, identify credible organization sites related to each item, and capture every relevant online asset they host. My end goal is a clean, well-structured spreadsheet that I can tap for future research. Here’s what I expect: • For every topic on my list, locate organization websites that speak directly to it. • Record the full site URL, the specific page URL where the asset lives, the page title, and a one-line summary of why that page is useful. • If the page offers downloadable material (reports, documents, images, videos, or any other internet asset), note the direct download link. No need to download the files yourself—just give me reliable links. •...
I want a comprehensive, ready-to-use database of transport-related businesses that operate anywhere in New South Wales. The goal is roughly 15,000 unique, verified email contacts pulled from Google Maps, company websites or any other reliable public source you normally trust for web-scraping. The scope covers every organisation in these six categories: • Freight & Transport / Haulage / Trucking Companies • Courier Services & Delivery Providers • Bus & Coach Operators, Charters, Hire and School Bus Services • Logistics Services / 3PL operators • Taxi, Hire-Car, Rideshare Fleet Operators plus Car Rental & Hire Firms • Removalists and Furniture Movers For each listing I need the following fields, all separated into their ...
Project Description: Find school districts and charter schools who use a specific vendor for a large list of domains. I am seeking an experienced web scraping specialist to improve our Python script to analyze a large list of school district websites (approximately 4000+ URLs) and identify the ones who show a specific link on any page found in their sitemap. The primary method of identification must be to scan the website's for specific, known vendor links. Deliverables Required 1. A Production-Ready Python Script (.py file): The script must be commented, easily configurable, and capable of reading the provided CSV list, performing the scan, and generating the output CSV. It should handle timeouts and basic error handling gracefully. 2. The Final Results (CSV/Excel File): A c...
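A scan loop like the one this posting describes could be sketched as follows. The vendor domain, sitemap layout, and function names here are placeholders for illustration, not details from the project; the real script would read the 4000+ URLs from the client's CSV and write a results CSV.

```python
import re
import urllib.request
from urllib.parse import urlparse

# Hypothetical vendor domain to look for; the real list comes from the client.
VENDOR_DOMAINS = {"examplevendor.com"}

def extract_links(html: str) -> list:
    """Pull every href value out of a page (a regex is enough for a scan pass)."""
    return re.findall(r'href=["\']([^"\']+)["\']', html, flags=re.IGNORECASE)

def page_mentions_vendor(html: str, vendor_domains: set) -> bool:
    """True if any link on the page points at one of the vendor domains."""
    for link in extract_links(html):
        host = urlparse(link).netloc.lower()
        if any(host == d or host.endswith("." + d) for d in vendor_domains):
            return True
    return False

def scan_district(sitemap_url: str, timeout: int = 15) -> bool:
    """Fetch a district's sitemap, then scan each listed page for vendor links."""
    with urllib.request.urlopen(sitemap_url, timeout=timeout) as resp:
        sitemap = resp.read().decode("utf-8", errors="replace")
    page_urls = re.findall(r"<loc>(.*?)</loc>", sitemap)
    for url in page_urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                html = resp.read().decode("utf-8", errors="replace")
            if page_mentions_vendor(html, VENDOR_DOMAINS):
                return True
        except Exception:
            continue  # basic error handling: skip unreachable pages
    return False
```

A production version would add per-request throttling, retries, and a configurable timeout, as the deliverables require.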
I need an .xlsm workbook whose VBA macro fetches product data from both and lowes.com. When I type a valid item or model number into a row, the code should automatically pull back: product name, full description, regular price, sale price (if available), brand, product type/category, and the main image (inserted into the sheet or stored in an Image column). I work comfortably with VBA, so a concise, well-commented routine is all I need—no step-by-step user guide. The workbook must stay self-contained, relying only on standard references such as Microsoft XML, HTML, or WinHTTP libraries; please avoid external add-ins or Python bridges. Deliverables: • Finished macro-enabled Excel file (.xlsm) ready to test with my own SKU list • Clearly commented VBA code so I can...
Project: Comprehensive Management System. Sector: Stationery, Toys, and Computer Supplies (5,000 SKUs). A. Main Objectives: Develop or implement a management platform (ERP/POS) that speeds up high-volume counter sales, keeps stock under control (currently there is none), and automates cost comparison with suppliers for smart, fast purchasing, or at least eases the purchasing process with an alert system in case the scraping cannot be done. B. Specific Modules Required 1. Purchasing and Price-Intelligence Module: • Web Scraping: the system must connect to the main suppliers' URLs to extract cost prices in real time. • Comparator...
I need assistance merging my current football dataset with a new one. This new dataset will be sourced from online scraping of weather and expected goals (Xg) data. Requirements: - Scrape data from official weather and football statistics websites. - Integrate the following weather data: temperature, humidity, and precipitation. - Work with datasets in Excel format. - Correlate this new data with historical football match data in my existing dataset. Ideal Skills and Experience: - Proficiency in data scraping and data manipulation. - Experience with Excel and handling large datasets. - Familiarity with weather and football data. - Strong analytical skills to ensure accurate correlation of datasets. Looking forward to your proposals!
I need a Selenium-based solution that runs reliably on Windows and opens Google Chrome to simulate human visits to LinkedIn (and occasionally other) profile URLs listed in a Google Sheet. For each URL the program should: • Pull the next unused link from the sheet • Load the page in Chrome, wait a random time between 20 seconds and 3 minutes • Apply truly randomized scrolling patterns while the profile is open so behaviour looks organic • Fire a webhook the moment the visit completes, passing back any ID or payload I define so our CRM reflects the touch instantly Configuration items such as Google Sheet ID, webhook endpoint, minimum/maximum dwell time, and daily visit caps should live in a simple file I can edit without touching code. A short README on installi...
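The randomized-dwell and scrolling requirement above can be separated from the browser driving itself. A minimal sketch of the randomization logic (all parameter names and ranges beyond the stated 20 s to 3 min dwell are assumptions):

```python
import random

def plan_visit(min_dwell: int = 20, max_dwell: int = 180, rng=None) -> dict:
    """Build one randomized visit plan: total dwell time plus a scroll schedule.

    Returns {"dwell": seconds, "scrolls": [(at_second, scroll_px), ...]} where
    the scroll times and distances are jittered so no two visits look alike.
    """
    rng = rng or random.Random()
    dwell = rng.uniform(min_dwell, max_dwell)
    n_scrolls = rng.randint(3, 12)  # assumed range, not from the brief
    times = sorted(rng.uniform(1, dwell - 1) for _ in range(n_scrolls))
    scrolls = []
    for t in times:
        # Mostly scroll down, occasionally back up, with variable distance.
        direction = 1 if rng.random() > 0.2 else -1
        scrolls.append((round(t, 1), direction * rng.randint(120, 900)))
    return {"dwell": dwell, "scrolls": scrolls}
```

The Selenium loop would then sleep until each scheduled second and call `driver.execute_script("window.scrollBy(0, px)")`, firing the webhook once the dwell time elapses.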
I need a reliable solution that will pull every public post mentioning a set of keywords I will share with you and do so for all the data on X, Instagram and Facebook. The scrape must cover three primary markets I am focused on right now—Thailand, Philippines and India, etc —so geo-filtering or language filtering needs to be baked in from the start. For every matching post I want the full engagement picture captured: the comment text, number of comments, likes, reposts/shares, the post date and any other readily available metadata (author handle, follower count, post URL, media links, etc.). Accuracy is critical because the data will feed a trend-analysis dashboard later. Please build the workflow in a way that respects rate limits and login requirements: if you intend to use...
I need a freelancer outside the USA to gather some data and provide me with a code snippet. Ideal skills and experience: - Experience in data gathering - Familiarity with coding in Python, JavaScript, or Ruby - Ability to work independently and deliver accurate results Please provide details on your data collection methods and coding expertise in your bids.
We are building a full internal marketplace analytics web system, not just a reporting script. The system is designed to combine competitive intelligence with internal sales and stock analytics in a single interface. Functional Requirements The system must provide the following capabilities: 1. Product and SKU structure - Each product must be split into individual SKUs based on flavor and volume. - All analytics and reports are built at the SKU level. 2. Our product analytics (primary focus) - Current stock levels (total and per SKU). - Sales volume for selected periods (daily / weekly / monthly). - Reorder recommendations based on stock thresholds and sales dynamics. - Revenue calculations per product and per SKU with period filtering. 3. Competitive analytics - Automated collection o...
I need a lightweight Windows-based application that can interact with a specific website entirely in the background—no browser window or UI should ever be visible. The software must: • Log in with a stored username and password • Navigate through the site, click the necessary elements, submit forms, and collect the returned data • Solve any CAPTCHA the site presents automatically (an API such as 2Captcha, Anti-Captcha, or a comparable service is acceptable) • Return the scraped information in JSON or CSV so it can be consumed by another process A simple tray icon, CLI, or service is fine; the key requirement is headless operation with reliable error handling. Source code and a compiled executable are both expected so I can run the tool on multiple machines...
1. CONTEXT AND THE REAL CHALLENGE A project in the wire-drawing and galvanizing sector with more than 40 active production lines. The challenge is not a lack of information, but that critical knowledge is volatile: it lives in the experience of veteran supervisors and operators and is passed on verbally. When a technical solution emerges on the plant floor, it is not documented and is lost to the next shift. We want to develop an AI ecosystem that not only answers questions but also captures, validates, structures, and democratizes the technical knowledge that emerges day to day, creating a sustainable, long-term industrial-intelligence infrastructure. 2. THE SOLUTION: "THE ...
I have a sizable dump of customer records—names, contact numbers, email addresses, and a few extra fields—that must be transferred into a single, well-organized Excel workbook. I will send you the exact header template, so every column you create must match it precisely. Your task involves: • Importing the raw files into Excel (or Power Query, if you prefer) and mapping each entry to the columns I supply: Names, Contact Numbers, Email Addresses, and the additional fields. • Removing every duplicate without losing valid information. • Applying basic data-validation rules (drop-downs, text length limits, email format checks, etc.) so the sheet remains clean long after this project ends. • Consistently formatting phone numbers and email addresses, fixing...
I need an interactive dashboard built in Streamlit that lets end-users explore time-series data coming from three different sources—raw CSV uploads, existing relational databases, and live API endpoints. The app should read, clean, and merge these feeds on the fly, then offer clear visual insights through line charts, area charts, and any other plots that make trends, seasonality, and anomalies obvious. Under the hood I expect well-structured, reusable Python code that leans on pandas for manipulation, SQLAlchemy (or similar) for database access, and a lightweight requests layer for the APIs. Caching, session-state handling, and responsive layout controls are important so the interface feels fast even as data volumes grow. Deliverables • Streamlit app folder with modular...
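In the real app, pandas would carry the heavy lifting; the merge-and-deduplicate step at the heart of the brief can be sketched in plain Python (the key names `ts`, `value`, etc. are placeholders for whatever each feed actually uses):

```python
from datetime import datetime

def normalize(rows, ts_key, val_key, source):
    """Turn one feed's dict rows into uniform (datetime, value, source) tuples."""
    out = []
    for r in rows:
        ts = datetime.fromisoformat(str(r[ts_key]))
        out.append((ts, float(r[val_key]), source))
    return out

def merge_feeds(*feeds):
    """Merge any number of normalized feeds into one time-sorted series,
    keeping at most one reading per timestamp per source."""
    seen, merged = set(), []
    for row in sorted(r for feed in feeds for r in feed):
        key = (row[0], row[2])
        if key not in seen:
            seen.add(key)
            merged.append(row)
    return merged
```

Streamlit would then hand `merged` to `st.line_chart` or `st.area_chart`, with `st.cache_data` wrapped around the loaders so reruns stay fast.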
I have about 500 genuine customer testimonials sitting on another well-known review platform, and they belong on my Google Business profile instead. Every word has already been approved by the original authors, so no rewriting or polishing is required—I want them posted exactly as they appear now. Here is what I need from you: pull each review from the source link I’ll provide, publish it to my Google Business page without altering a single character, then give me proof that every post has gone live (a simple spreadsheet with the review text and a direct Google URL or timestamped screenshot is fine). Accuracy is crucial; I will cross-check that nothing has been omitted or modified. If you already manage multiple Google accounts or have an efficient, policy-compliant workflow ...
Complete Lottery Prediction and Betting Automation System (Focused on Loterías y Apuestas del Estado - Spain) 2. System Features 2.1. Historical Data Collection and Update The system must automatically download complete historical results (drawn numbers, draw dates, prize breakdowns by category, accumulated jackpots) from the first draw of each lottery, directly from or reliable associated sources. Specific sources: Euromillones: (since Feb 13, 2004) La Primitiva: (since Oct 17, 1985 – modern version) El Gordo de la Primitiva: (since Oct 31, 1993) Updates automatic at exactly 00:02 the day after each draw, using ethical scraping (BeautifulSoup/Scrapy) with proper user-agent headers to mimic human behavior. Store data in PostgreSQL (structured) or MongoDB (flex...
I'm trying to run the attached Jupyter notebook (.ipynb) script to get info from a website, but I can't understand why it doesn't work. I need this script to be fixed, plus pagination added to fetch around 2,400 records from YellowPages. I only use Jupyter.
I’m looking for a data engineer who can take full ownership of a daily web-scraping workflow aimed at ongoing market research. The job centers on extracting selected data points from public web pages, transforming them into a clean, structured format, and making them available for analysis every 24 hours. Here’s what I need you to handle from end to end: • Source acquisition – fetch HTML from the URLs I provide, even when content is hidden behind JavaScript (a headless browser such as Playwright or Selenium is fine). • Parsing & cleansing – pull the specific fields I’ll list (product name, price, SKU, availability, and a time-stamp), remove duplicates, and standardize values. • Storage & delivery – load the daily output into ...
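The parsing-and-cleansing stage of a daily workflow like this could look roughly as follows; the field names match the ones listed in the posting, while the price-cleaning rules and SQLite schema are assumptions for illustration:

```python
import sqlite3
from datetime import datetime, timezone

def cleanse(raw_rows):
    """Standardize values and drop duplicate SKUs, keeping the first occurrence."""
    seen, clean = set(), []
    for r in raw_rows:
        sku = str(r.get("sku", "")).strip().upper()
        if not sku or sku in seen:
            continue
        seen.add(sku)
        clean.append({
            "name": str(r.get("name", "")).strip(),
            "price": float(str(r.get("price", "0")).replace("$", "").replace(",", "")),
            "sku": sku,
            "availability": str(r.get("availability", "")).strip().lower(),
            "scraped_at": datetime.now(timezone.utc).isoformat(),  # the time-stamp field
        })
    return clean

def store(rows, db_path=":memory:"):
    """Load the daily batch into SQLite so the analysis layer can query it."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS products
                   (name TEXT, price REAL, sku TEXT, availability TEXT, scraped_at TEXT)""")
    con.executemany(
        "INSERT INTO products VALUES (:name, :price, :sku, :availability, :scraped_at)",
        rows)
    con.commit()
    return con
```

The fetch stage (Playwright or Selenium for JavaScript-rendered pages) would feed `cleanse` once per 24-hour cycle, and `store` could just as easily target Postgres or a CSV drop.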
I have a working Python script that talks to the Kalshi prediction-market API, pulls live data, and fires off trades automatically through simple web-request helpers. Functionally it looks solid from my end, but I’m not a developer and would like an expert eye on it before I trust it with larger positions. The review should cover every critical angle—accuracy of the trading logic, efficiency of each call or loop, and robust error-handling so a bad response or network hiccup never leaves an order hanging. Because the script relies heavily on APIs and a small amount of web-scraping, please verify that authentication, rate-limit handling, and data parsing follow best practices and won’t put the account at risk. Deliverables • A line-by-line code review (commented or...
I'm seeking a versatile virtual assistant to join my team for 15+ hours per week. The role involves a mix of marketing and admin-related support tasks. The ideal candidate should be skilled in creating pitch decks and PowerPoint presentations, branding and design using Figma, and video editing. Additionally, the role includes web scraping, bookkeeping specific to Australia, and tasks requiring excellent written English. Key Requirements: - Proficiency in Figma for branding and design - Experience in creating engaging pitch decks and PowerPoint presentations - Video editing skills - Ability to perform web scraping tasks efficiently - Knowledge of Australian bookkeeping practices - Strong written English for various tasks Ideal Skills and Experience: - Previous experience as a virtual...
I have three specific school-website links that list all current teachers and administrators. From each page I need a clean scrape of every staff member’s name, role, email address, plus the city/town and the school name, compiled into a single Excel workbook. Alongside that, I already hold an Excel sheet that contains a roster of tow and roadside drivers. The sheet has their names and the URLs of the companies they work for, but no contact details. Please crawl those company sites, locate each driver’s email address, and append the results to the same workbook, using matching columns so everything stays consistent. Key points to keep in mind: • Final deliverable: one Excel file ready for copy-and-paste outreach. • Source material: my three school websites and...
I am looking for a Python developer to create a simple and focused scraper script for Facebook Marketplace. Project Idea: The script will open a single Facebook Marketplace seller page and: • Extract all product links belonging to that seller only • Ignore any other data (no names, no prices, no images) • The final output should be a list of links only • Each product link on a separate line (link under link) Exact Requirements: • Input: Facebook Marketplace seller page URL • Output: • A file containing all product URLs for that seller • File format: TXT or CSV • Handle infinite scrolling to load all products Technical Requirements: • Python • Selenium or Playwright • Experience with dynamic websites • Clean, ...
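Once Selenium or Playwright has scrolled the seller page to the bottom, the link extraction itself is simple. A sketch under one loud assumption: that product links contain a `/marketplace/item/<id>` path, which must be verified against the live page markup before relying on it:

```python
import re
from urllib.parse import urljoin

# Assumed URL pattern for Marketplace product links; check against the live page.
ITEM_RE = re.compile(r'href=["\'](/marketplace/item/\d+[^"\']*)["\']')

def extract_item_links(html: str, base="https://www.facebook.com"):
    """Return each product URL once, in page order."""
    seen, links = set(), []
    for path in ITEM_RE.findall(html):
        url = urljoin(base, path.split("?")[0])  # drop tracking query params
        if url not in seen:
            seen.add(url)
            links.append(url)
    return links

def save_links(links, path="seller_links.txt"):
    """Write the output file: one product link per line, as requested."""
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(links))
```

The browser side would repeat `window.scrollTo(0, document.body.scrollHeight)` until the page height stops growing, then pass `driver.page_source` into `extract_item_links`.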
I have a set of voter-list PDFs released by the election commission. The layout across all files is identical, so positional parsing is reliable. Right now I simply need the current batch converted, but long-term I want a reusable Python utility that pulls the following six columns straight into Excel: • Name • FathersName • Age • Gender • VoterID • SerialNumber • Section Name • Polling Station Name, etc. Scope of work 1. Run the first extraction and hand me the .xlsx file so I can verify accuracy. 2. Package the underlying code (Python 3.x) with clear instructions and any so I can repeat the conversion on future lists without further help. Technical notes – Consistent layout means you can lean on libraries like pdfplumber, camelo...
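Because the layout is identical across files, the text that pdfplumber extracts can be parsed line by line. The record format below is purely hypothetical, for illustration; the real pattern comes from inspecting an actual voter-list page:

```python
import re

# Hypothetical record layout (serial, name, "S/O" father's name, age, gender,
# voter ID). The real column positions come from the actual PDF text output.
RECORD_RE = re.compile(
    r"^(?P<SerialNumber>\d+)\s+"
    r"(?P<Name>.+?)\s+S/O\s+(?P<FathersName>.+?)\s+"
    r"(?P<Age>\d{1,3})\s+(?P<Gender>[MF])\s+"
    r"(?P<VoterID>[A-Z]{3}\d{7})$"
)

def parse_record(line: str):
    """Parse one text line into the target columns, or None if it doesn't match."""
    m = RECORD_RE.match(line.strip())
    return m.groupdict() if m else None
```

The parsed dicts would then be written out with openpyxl, one worksheet per polling station or one flat sheet, whichever the client prefers.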
AI Automation for Finance Analytics AI / Machine Learning DO NOT BID IF BIDDING FOR 40-HOUR WORK WEEK WE ARE LOOKING FOR A CONSULTANT / BUILDER / TUTOR TO WORK WITH OUR TEAM 3-10 HOURS A WEEK TO BUILD THE SYSTEM JOINTLY DO NOT BID FOR LONGER THAN THOSE HOURS. DO NOT BID FOR FULL-TIME WORK DETAILS OF WHAT I NEED HELP WITH I run a real estate private equity and hotel development platform. We want to replace manual analysis and reporting with a practical AI workflow. This is about extracting, comparing, and interpreting data. Excel and PowerPoint remain the source of truth. What we need: -Compare PowerPoint vs Excel and flag mismatches - Explain underwriting models and trace outputs - Compare legal/term sheets vs financial assumptions - Track document versions and changes - Summarize deal...
I am currently using Apify at $1.50 per 1,000 leads. I need this at scale (around 50k emails), so it needs a cost-effective solution. Bid on this proposal and I shall DM you; I need to know the cost for: 1. Apollo emails 2. LinkedIn emails
Hindi and Indonesian Safety Hardening and Safety Dataset - Annotation 1. Annotation Requirement Description This annotation task aims to construct safety datasets for Hindi and Indonesian through manual annotation. 1.1 Basic Task Information Task Summary: Annotate five types of raw data (sensitive words, text samples, image samples, "image-text" pairs, "video-text" pairs) in Hindi and Indonesian according to requirements. Deliverable Types and Formats: a. Sensitive Words: Words, phrases. Delivered in Excel and JSONL formats only. b. Text Samples: Sentences, paragraphs. Delivered in Excel and JSONL formats only. c. Image Samples: Images in JPG or PNG format, stored in folders. Deliver Excel, JSONL, and corresponding attachment folders. d. "Image-Text" Pairs...
I need a reliable scraper that monitors every basketball league listed on Bet365 (); if accessing that is an issue, you can use . The script must do two separate pulls for each game: Objective 1 • Run #1 – as soon as Bet365 publishes the starting lineup. • Run #2 – again on game day, no later than one hour before tip-off. For each run, capture Teams and scores, all published lineups and odds, plus the Q1 Total, full Quarter and Half statistics as soon as they appear. The goal is to analyse how the line and odds move between the first and second snapshot, feeding a broader betting-strategy model, so accuracy and time-stamping are essential. Store everything in a structured database of your choice (PostgreSQL or MySQL are fine). The tables must let me query: • ...
I need a one-time, UK-wide scrape that captures every wedding-related business you can find across England, Wales, Scotland and Northern Ireland—no single directory limitations, so feel free to pull from any public site that meets the brief. Deliverable • A single Excel file containing the following columns: URL, Business Name, Full Address, Post Code, Telephone, and every email address that appears on the site (not just the first one you find). • The sheet should be neatly de-duplicated and ready for filter/sort. Business types to include • Wedding & Bridal Wear • Wedding Planners / Services • Wedding Cars, Horse & Carriages • Wedding Venues • Photographers & Videographers • Florists & Wedding Flowers •...
I need a small automation script that periodically checks item availability on the Bigbasket website and pings me on Telegram the moment any of the tracked products come back in stock. You are free to choose the underlying tech stack (Python + Requests/BeautifulSoup, Selenium, Playwright, or a headless browser of your choice) as long as it works reliably with Bigbasket’s current site layout and protects my account from rate-limit blocks or captchas. The flow I have in mind is straightforward: I feed the bot a list of product URLs (or SKUs). It runs on a schedule I can change—every few minutes during peak shortages, maybe every hour otherwise—grabs the stock status, and fires a concise Telegram message whenever the status flips from “Out of Stock” to “Av...
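The core of such a bot is the status-flip detection plus the Telegram ping. A minimal sketch, where the fetch layer (and how Bigbasket exposes stock status) is left out because it depends on the site's current markup:

```python
import json
import urllib.parse
import urllib.request

def detect_flips(previous: dict, current: dict) -> list:
    """Return product URLs whose status flipped from 'out_of_stock' to 'in_stock'.

    Both dicts map product URL (or SKU) -> status string from the last scrape.
    """
    return [url for url, status in current.items()
            if status == "in_stock" and previous.get(url) == "out_of_stock"]

def notify_telegram(bot_token: str, chat_id: str, text: str):
    """Send one message via the Telegram Bot API sendMessage method."""
    url = f"https://api.telegram.org/bot{bot_token}/sendMessage"
    data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    with urllib.request.urlopen(url, data=data, timeout=10) as resp:
        return json.load(resp)
```

A scheduler (cron, or a simple loop with a configurable sleep) would scrape the tracked URLs, call `detect_flips` against the previous snapshot, and fire `notify_telegram` for each flip.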
I need every public phone number that appears on gathered into a single, well-structured Excel workbook. Please crawl the entire site, not just a few sections, and return each number alongside the key profile details that make the data usable at a glance—name, profile URL, and any other easily captured identifiers shown next to the number. A clean .xlsx with one row per profile, no duplicates, and clearly labelled columns is the only deliverable I’m expecting. If you prefer Python, Scrapy, Selenium, Beautiful Soup or a comparable stack, go ahead; I’m interested in results, not the specific toolset, as long as the script can be rerun later should the site content change. Before delivery, double-check that: • every row contains a valid phone number and url • n...
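The pre-delivery checks listed above can be automated with a small validation pass over the scraped rows. The column names and the phone heuristics here are assumptions for illustration:

```python
import re

# Loose phone shape: optional +, then 7-20 digits/spaces/punctuation.
PHONE_RE = re.compile(r"^\+?[\d\s().-]{7,20}$")

def valid_row(row: dict) -> bool:
    """Check one output row: a plausible phone number and a well-formed URL."""
    phone = str(row.get("phone", "")).strip()
    url = str(row.get("profile_url", "")).strip()
    digits = re.sub(r"\D", "", phone)
    return (bool(PHONE_RE.match(phone)) and 7 <= len(digits) <= 15
            and url.startswith(("http://", "https://")))

def dedupe(rows):
    """Drop repeat profiles, keyed on normalized phone digits plus URL."""
    seen, out = set(), []
    for r in rows:
        key = (re.sub(r"\D", "", str(r.get("phone", ""))), r.get("profile_url"))
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out
```

Running `valid_row` over every row and `dedupe` over the full set before the openpyxl export would satisfy both delivery checks mechanically.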
I need a reliable scraper that monitors every basketball league listed on Bet365 (). The script must do two separate pulls for each game: Objective 1 • Run #1 – as soon as Bet365 publishes the starting lineup. • Run #2 – again on game day, no later than one hour before tip-off. For each run, capture Teams and scores, all published lineups and odds, plus the Q1 Total, full Quarter and Half statistics as soon as they appear. The goal is to analyse how the line and odds move between the first and second snapshot, feeding a broader betting-strategy model, so accuracy and time-stamping are essential. Store everything in a structured database of your choice (PostgreSQL or MySQL are fine). The tables must let me query: • first-pull values • second-pull val...
I need help streamlining a small questionnaire that captures only open-ended answers. Respondents will be typing directly into a web form, and I simply want each answer stored and exported as clean, plain-text strings—no JSON, CSV, or additional metadata layers. Your task is to: • Set up the formatting logic so every submission is saved exactly as entered, preserving paragraph breaks but stripping any extra HTML or special characters the form might inject. • Provide a straightforward way for me to download or copy that text in bulk once the survey closes. If you prefer, a lightweight script or form-handler (PHP, Python, or JavaScript are all fine) that writes the responses into a flat .txt file or an equivalent plain-text store will meet the requirement. Please keep th...
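A form handler of this kind reduces to a cleaning function plus an append to a flat file. A minimal sketch (the cleaning rules beyond "strip tags, keep paragraph breaks" are assumptions):

```python
import html
import re

def to_plain_text(submission: str) -> str:
    """Strip injected HTML while preserving paragraph breaks."""
    text = re.sub(r"(?i)<\s*(br|/p)\s*/?>", "\n", submission)  # keep breaks
    text = re.sub(r"<[^>]+>", "", text)                        # drop other tags
    text = html.unescape(text)                                 # &amp; -> & etc.
    text = text.replace("\xa0", " ")                           # non-breaking spaces
    text = re.sub(r"[ \t]+", " ", text)                        # collapse spaces
    text = re.sub(r"\n{3,}", "\n\n", text)                     # max one blank line
    return text.strip()

def append_response(text: str, path: str = "responses.txt"):
    """Append one cleaned answer to the flat-text store, with a delimiter line."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(to_plain_text(text) + "\n---\n")
```

Bulk download is then just handing over `responses.txt`; no JSON, CSV, or metadata layer is involved.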
I need a seasoned backend developer to design and implement a secure REST API that lets my users check award-seat availability (Avios) directly from Iberia.com. The core of the job is to automate the full search flow — login, query, filter, and return the results — while keeping the service fast and reliable. Authentication & security The service must issue and validate JWT tokens for every request beyond the public health-check route. Token refresh, revocation, and a simple role model (“user” vs. “admin”) should be built in from the start. Flight data extraction I do not have official Iberia developer access, so we will need to pull the data ourselves. I’m open to whichever tooling you are most comfortable with — BeautifulSoup, Sel...
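The JWT issue/verify requirement can be illustrated in a few lines. In production a vetted library such as PyJWT is the right choice; this stdlib sketch just shows the HS256 mechanics, with a placeholder secret that would live in config:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"change-me"  # placeholder; load from environment/config in production

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(user: str, role: str = "user", ttl: int = 3600) -> str:
    """Create a signed HS256 JWT carrying the user, role, and expiry."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({"sub": user, "role": role,
                               "exp": int(time.time()) + ttl}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str):
    """Return the claims if the signature is valid and unexpired, else None."""
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header}.{payload}".encode()
    expected = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    return claims if claims["exp"] > time.time() else None
```

The "user" vs. "admin" role model falls out of the `role` claim: each protected route checks `verify_token(...)` and compares the role before running the Iberia search flow.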
I'm looking for an experienced freelancer to build a complete, low-maintenance web-based educational app that uses AI to suggest peptides for anti-aging and health issues (e.g., recovery, inflammation) based on public research. The app will include study-based dosage, cycle, and usage suggestions, plus an integrated cost-comparison tool similar to (aggregating prices from legal suppliers via affiliates or scraping). This is strictly for educational purposes—**no medical advice or promotion of unapproved substances**. The app must include strong disclaimers everywhere to comply with FDA regulations. **Project Goals:** - Create a freemium SaaS web app (MVP first, then scale). - Low overhead: Use no/low-code tools where possible. - Monetization: Subscriptions ($9–$29/...
I need a senior-level specialist to harvest product data from several e-commerce sites and deliver it in a single, well-structured CSV file. The task demands production-ready techniques—think Scrapy spiders hardened with rotating proxies, Selenium or Playwright for dynamic content, and solid anti-bot countermeasures. The information I’m after is very specific: product names, prices, pictures, and SKU. Nothing less, nothing more. Your solution must run reliably at scale, cope with frequent layout changes, and leave no trace that could trigger blocks. Python is the preferred stack, but if you have a proven alternative that meets the same bar, I’m open to hearing it. To be considered, include in your proposal: • At least one example of a comparable e-commerce scrapi...
Please sign up or log in to see the details.
I need a small script or micro-service that calls an odds API once per day and extracts NBA player-prop markets—specifically all categories—for every NBA game on the board. The job is only about player props; spreads, moneylines, and totals can be ignored. Here is what I expect: • Code (Python or Node preferred, but I’m flexible) that hits a public or paid odds endpoint, parses the daily response, and saves the three prop categories in a tidy JSON or CSV file. Excel preferably • A clear spot in the code where I can drop my own API key and set the run time (cron, Cloud Function, Lambda, etc.). • Basic logging so I can confirm the call succeeded and see any errors. • Quick README explaining setup and the output format. If the script runs co...
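The parse-and-save step could be sketched as below. The response shape and market keys are hypothetical, modeled loosely on common odds-API layouts, and would need adjusting to the chosen provider:

```python
import csv

# Assumed market keys for the three prop categories; provider-specific in reality.
PROP_MARKETS = {"player_points", "player_rebounds", "player_assists"}

def extract_props(payload: dict) -> list:
    """Flatten a (hypothetical) odds-API response into one row per player prop,
    ignoring spreads, moneylines, and totals."""
    rows = []
    for game in payload.get("games", []):
        for market in game.get("markets", []):
            if market.get("key") not in PROP_MARKETS:
                continue
            for outcome in market.get("outcomes", []):
                rows.append({
                    "game": game.get("id"),
                    "market": market["key"],
                    "player": outcome.get("name"),
                    "line": outcome.get("point"),
                    "price": outcome.get("price"),
                })
    return rows

def save_csv(rows, path="props.csv"):
    """Write the tidy per-prop CSV (Excel opens it directly)."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        w = csv.DictWriter(f, fieldnames=["game", "market", "player", "line", "price"])
        w.writeheader()
        w.writerows(rows)
```

The daily trigger (cron or a Cloud Function schedule), the API-key slot, and logging would wrap around these two functions.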
PDF to Excel Data Scraper Needed
Job Title: Data Scraper Needed: Convert 24 PDF Factsheets to Clean Excel (Mutual Fund Portfolios)
Project Overview: I need a freelancer to extract detailed stock portfolio data from ~24 Mutual Fund Monthly Factsheets (PDFs). I will provide the URLs/files. Your job is to extract the full stock holdings table for specific funds and deliver a consolidated, clean Excel/CSV file.
The Goal: I need the complete list of stocks (100% of the portfolio), NOT just the Top 10. The data is used for financial backtesting, so accuracy is critical. Even top 85-90% data works.
Scope of Work:
Input: ~24 PDF files (Monthly Factsheets).
Target Funds: For each month, extract data for the Top 10 Equity Funds (e.g., Bluechip, Midcap, Smallcap, Value Discovery, etc. - list wi...
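The consolidation step after PDF extraction can be sketched as follows; the input shape is illustrative (per-fund tables as `{(month, fund): [(stock, weight_pct), ...]}`, as a tool like pdfplumber or camelot might yield), and the 85% threshold mirrors the brief's accuracy floor.

```python
import csv
import io

def consolidate_holdings(tables: dict) -> str:
    """Merge per-fund holdings tables into one tidy CSV string, flagging
    funds whose extracted weights cover less than ~85% of the portfolio."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["month", "fund", "stock", "weight_pct"])
    for (month, fund), rows in sorted(tables.items()):
        total = sum(weight for _, weight in rows)
        if total < 85:
            # Sanity check for backtesting: incomplete extraction is flagged,
            # not silently passed through.
            print(f"warning: {fund} {month} covers only {total:.1f}%")
        for stock, weight in rows:
            writer.writerow([month, fund, stock, weight])
    return buf.getvalue()
```

One long-format file like this backtests far more easily than 24 separate sheets, since every row carries its month and fund.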
I’m expanding our Florida outreach list and need a reliable web-scraped data set of school, college, and university administrators who oversee Nursing or other Healthcare programs. You’ll pull the information directly from two source types only—official institution websites and reputable educational directories—so every entry must be traceable back to one of those pages. Here’s exactly what must land in the spreadsheet:
• Institution name
• Contact’s first and last name
• Job title (Administrator, Director of Nursing, CTE Healthcare lead, etc.)
• Verified email address
• State (always Florida)
Format & delivery
– Send the file in Excel (.xlsx).
– First progress drop: within 5 days so I can spot-c...
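The "verified email" requirement starts with a cheap format-level pass before any deliverability checks; this is a minimal sketch (regex and row shape are assumptions), with real verification (MX lookup, SMTP probing) left to a later stage.

```python
import re

EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+(\.[\w-]+)+$")

def clean_contacts(rows: list[dict]) -> list[dict]:
    """Drop malformed emails and de-duplicate on the email column.
    This is format validation only, not a deliverability check."""
    seen, cleaned = set(), []
    for row in rows:
        email = row.get("email", "").strip().lower()
        if not EMAIL_RE.match(email) or email in seen:
            continue
        seen.add(email)
        cleaned.append({**row, "email": email})
    return cleaned
```

Lower-casing before de-duplication matters here, since directories and institution sites often disagree on capitalization of the same address.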
We want to do this in a consulting / facilitators / builders format in which we work with the facilitator / consultant / trainer for 3-6 hours a week for 3-6 months to help us collaboratively create various agents for our private equity business. The only billed time will be the time spent on the video call with our team, unless specifically approved otherwise. We want to be able to create a screen-scrape tool to average certain cost items of specific real estate projects. We also want to compare legal documents against term sheets and Excel spreadsheets.
Data sources
• Company databases (SQL, flat files, Excel exports) - all our files are in Dropbox
• Extensive web scraping for competitor benchmarks and investment-market signals
If you have ideas for safely add...
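The cost-averaging piece of the screen-scrape tool reduces to a small aggregation; the record shape here (`project`, `item`, `cost`) is a hypothetical stand-in for whatever the scraper actually yields.

```python
from collections import defaultdict
from statistics import mean

def average_cost_items(records: list[dict]) -> dict:
    """Average each cost item across scraped real-estate project records."""
    buckets = defaultdict(list)
    for rec in records:
        buckets[rec["item"]].append(rec["cost"])
    return {item: round(mean(costs), 2) for item, costs in buckets.items()}
```

Grouping by cost item rather than by project gives the cross-project benchmark the brief describes; swapping the bucket key flips the view.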
I need a single WebExtension that runs in both Chrome and Firefox and turns our current manual workflow into a one-click process. Its core job is data collection—capturing information from pages we specify—while also handling the little chores my team repeats every day: filling forms, scraping targeted fields, and kicking off routine browser actions such as page refreshes or button clicks once certain conditions are met. The add-on must connect cleanly to three parts of our internal stack:
• our CRM system (REST APIs already documented)
• the project-management tool we use (webhook support available)
• a central database for long-term storage (PostgreSQL)
Please build with the standard WebExtension/Manifest V3 approach so we can maintain a single code...
I need a web-scraping expert to scrape data from Indiegogo and export it to Excel. The details I need for each project are:
Title: Project title.
Category: The category of the project based on the Indiegogo categorization system.
Sub-Category: The sub-category of the project based on the Indiegogo categorization system.
Close Date: Close date of the campaign.
Open Date: Open date of the campaign.
Currency: Currency used for collected funds.
Funds Raised: The amount of funds raised.
Funds Raised Percent: The percent of funds raised out of the funding target.
Funding Target: The amount of funds the campaign initiator aims to collect.
Country: Country in which the project is based.
Publisher: The name of the campaign initiator.
Backers: The number of people who decided to fund the campaign.
Updates: ...
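Once the campaign pages are scraped, assembling the deliverable is a matter of mapping records onto the requested columns; this sketch derives Funds Raised Percent when a page does not expose it directly (the column list follows the brief, but the input dict shape is an assumption).

```python
import csv
import io

FIELDS = ["Title", "Category", "Sub-Category", "Close Date", "Open Date",
          "Currency", "Funds Raised", "Funds Raised Percent",
          "Funding Target", "Country", "Publisher", "Backers"]

def to_excel_ready_csv(campaigns: list[dict]) -> str:
    """Write scraped campaign dicts to a CSV with the requested columns."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for c in campaigns:
        row = {f: c.get(f, "") for f in FIELDS}
        # Derive the percentage if it was missing from the page.
        if not row["Funds Raised Percent"] and c.get("Funding Target"):
            row["Funds Raised Percent"] = round(
                100 * c["Funds Raised"] / c["Funding Target"], 2)
        writer.writerow(row)
    return buf.getvalue()
```

A CSV with these headers opens directly in Excel, which satisfies the "export to Excel" requirement without any Excel-specific tooling.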
I’m looking for a well-structured Python solution, built around BeautifulSoup (BS4) and any supportive libraries you deem essential, that reliably pulls both product details and customer reviews from Lazada on a daily schedule. The data will fuel ongoing competitor research, so consistency and clarity of the output are critical. Specifically, I’m looking to get the data using BS4 while working around the CAPTCHA. Here’s how I picture the flow:
• Input: category URL(s) or product list I supply in a CSV/JSON.
• Scrape: title, price, promos, specs, images, ratings, full review texts, review dates, and reviewer scores.
• Output: clean CSV or JSON dropped into a dated folder after each run.
Make the script easy to tweak if Lazada changes its markup.
Acceptance criteria
1. S...
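The parsing half of this can be sketched against a stand-in snippet; note that BS4 only parses HTML you have already fetched, so the CAPTCHA concern lives in the fetching layer, not here. The selectors and sample markup below are illustrative, since Lazada's real markup differs.

```python
from bs4 import BeautifulSoup

SAMPLE = """
<div class="product">
  <h2 class="title">Wireless Mouse</h2>
  <span class="price">$12.99</span>
  <div class="review"><span class="stars">5</span><p>Great value.</p></div>
</div>
"""

def parse_product(html: str) -> dict:
    """Pull title, price, and reviews out of a product-page snippet.
    Keeping all selectors in one place makes markup changes a one-spot fix."""
    soup = BeautifulSoup(html, "html.parser")
    return {
        "title": soup.select_one(".title").get_text(strip=True),
        "price": soup.select_one(".price").get_text(strip=True),
        "reviews": [
            {"stars": int(r.select_one(".stars").text),
             "text": r.select_one("p").get_text(strip=True)}
            for r in soup.select(".review")
        ],
    }
```

Concentrating the CSS selectors in a single function (or a config dict) is what makes the script "easy to tweak if Lazada changes its markup".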
I need a seasoned Python developer to build a robust scraper that collects the required data and writes it straight to JSON—no additional cleaning or processing necessary. Once we begin I’ll provide the target URL(s) and any access details; for now, assume a standard public site with pagination and occasional anti-bot checks.
Core expectations
• Written in Python 3 using requests/BeautifulSoup or Scrapy; resort to Selenium only if there’s no lighter workaround.
• Handles pagination, retries, and polite delays gracefully so the run can complete unattended.
• Config file or clear constants for headers, cookies, and start URLs, letting me tweak targets without editing core logic.
• Produces a single JSON file (or one file per page if that’s...
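The retry-with-polite-delay expectation can be sketched as a small wrapper; the fetcher is injected (e.g. `requests.get` in practice) so the backoff logic stays testable without network access. Parameter names here are illustrative.

```python
import time

def fetch_with_retries(url, fetcher, max_retries=3, delay=1.0, backoff=2.0):
    """Call fetcher(url) with exponential backoff between attempts,
    re-raising the error only after the final attempt fails."""
    for attempt in range(max_retries):
        try:
            return fetcher(url)
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(delay)  # polite pause before retrying
            delay *= backoff
```

Wrapping every page request in this helper is what lets a paginated run complete unattended: transient failures cost a pause, not the whole crawl.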
I need to build a reliable, well-structured lead list and I already know exactly what it should contain. The task is to extract contact information—email addresses, phone numbers and full mailing addresses—from three sources: company and organisation websites, their public social-media profiles, and well-known online directories. I expect the data to be gathered with a solid scraping workflow (Python, Scrapy, BeautifulSoup, Selenium or an equivalent stack is fine) and then verified so that bounced emails and dead numbers are kept to an absolute minimum.
Deliverables
• One CSV or Excel file with separate columns for name, company, job title, email, phone, street address, city, state, ZIP/postcode, country, source URL and date collected.
• No duplicates; every...
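The deliverable side of this brief can be sketched directly from the column list; de-duplication on an (email, phone) key and the date stamp are modest assumptions about how "no duplicates" and "date collected" should be enforced.

```python
import csv
import io
from datetime import date

COLUMNS = ["name", "company", "job_title", "email", "phone", "street_address",
           "city", "state", "zip_postcode", "country", "source_url",
           "date_collected"]

def build_lead_csv(leads: list[dict]) -> str:
    """Write leads to a CSV with the brief's column set, de-duplicating on
    (email, phone) and stamping missing collection dates with today."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    seen = set()
    for lead in leads:
        key = (lead.get("email", "").lower(), lead.get("phone", ""))
        if key in seen:
            continue
        seen.add(key)
        row = {c: lead.get(c, "") for c in COLUMNS}
        row["date_collected"] = row["date_collected"] or date.today().isoformat()
        writer.writerow(row)
    return buf.getvalue()
```

Keeping `source_url` mandatory in the schema is what makes every row traceable back to the website, profile, or directory it came from.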