
In Progress
Posted
Paid on delivery
I already have a complete .vrm avatar; what it lacks is my real-world expression. Your task is to build a Unity scene that takes a live webcam feed, tracks only my eyes and mouth, and projects them onto the model in real time so the character blinks, looks, and speaks with my actual features.

The workflow I have in mind is straightforward:
1. Unity (latest LTS) loads the .vrm file through UniVRM.
2. A webcam stream is processed on the fly—OpenCV, MediaPipe, or any other lightweight library you prefer—to isolate eye and mouth regions.
3. Those regions are then composited onto the avatar’s existing eye and mouth meshes or UVs with minimal latency, keeping head pose from the original rig intact.

Deliverables
• A self-contained Unity project or package with clearly commented C# scripts.
• A short calibration routine so I can line up the tracking once and save the offsets.
• A simple UI toggle to start/stop the overlay and switch back to the default textures.
• A quick README or screen-share video that shows setup, a play-mode demo, and how to replace the webcam if I upgrade hardware later.

Acceptance criteria
• Overlay stays locked during normal talking and blinking, under 70 ms end-to-end latency.
• Works on a standard 1080p webcam without additional hardware.
• No noticeable artifacts when the avatar turns ±30° on the Y-axis.

If you have prior VTuber or AR face-tracking examples, feel free to share when you bid so I can gauge fit. I’m ready to test builds as soon as you have a prototype.
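The 70 ms acceptance target is easier to evaluate as a per-stage budget. The stage estimates below are illustrative assumptions (a 30 fps capture frame, a MediaPipe-class tracker, one display refresh), not measurements from any bidder's pipeline:

```python
# Illustrative end-to-end latency budget for the <70 ms acceptance target.
# Every per-stage number here is an assumption for planning, not a measurement.
BUDGET_MS = 70.0

stages = {
    "webcam capture (1080p @ 30 fps, ~1 frame)": 33.0,
    "landmark tracking (MediaPipe-class model)": 12.0,
    "region crop + compositing onto avatar UVs": 5.0,
    "render + display scanout (~1 refresh @ 60 Hz)": 16.0,
}

total = sum(stages.values())
headroom = BUDGET_MS - total

for name, ms in stages.items():
    print(f"{name}: {ms:.1f} ms")
print(f"total: {total:.1f} ms, headroom: {headroom:.1f} ms")
```

Under these assumptions the capture frame alone consumes almost half the budget, which is why bidders proposing "sub-10 ms updates" for the compositing step still depend on the camera's frame interval.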
Project ID: 40305112
21 proposals
Remote project
Active 1 mo ago
21 freelancers are bidding on average $24 USD for this job

Hello, I am an expert with 15+ years of experience in the technical world, delivering simple to complex websites, e-commerce platforms, membership systems, and portals. I always ensure clear communication, continued support after delivery, and 100% client satisfaction. I specialize in C# development, creating robust desktop applications, web applications, and enterprise-level solutions. With expertise in the .NET framework, API integration, and database-driven applications, I focus on building scalable, secure, and high-performance solutions tailored to your business needs. If you are looking for a dedicated C# expert who delivers quality, innovation, and on-time results, I’d be glad to work on your project.
$20 USD in 7 days
5.7
5.7

Hi there, I’m offering a 30 percent discount for this project and would be glad to assist you in implementing a Unity VRM live face overlay. With experience in Unity development, VRM models, and real-time face-tracking integration, I can create a system where facial expressions and movements are mapped accurately to your VRM avatar in real time. My approach will focus on integrating the VRM model with a live face-tracking input (such as a webcam or mobile camera), ensuring smooth animation, low latency, and compatibility with your target platform. I will also optimize the system for performance and stability so that the live overlay works reliably without lag or glitches. As a dedicated freelancer, I prioritize attention to detail, clear communication, and delivering polished, high-quality solutions. I am confident that I can implement a Unity VRM live face overlay that meets your requirements and provides a smooth, interactive experience. Kind regards, Sohail Jamil
$10 USD in 1 day
5.4
5.4

As someone with a deep understanding of C# and a demonstrated proficiency in building efficient and scalable applications, I believe I am uniquely qualified for your Unity VRM Live Face Overlay project. While I haven't previously worked on VTuber or AR face-tracking projects, my wide-ranging skill set ensures that I can quickly adapt to new challenges and find innovative solutions. I appreciate the clear workflow you've outlined for this project, and I'm confident in my ability to meet your requirements. From loading the .vrm file through UniVRM to compositing live webcam feed data onto the avatar's eye and mouth regions with minimal latency, I have the technical expertise to make it happen. My thorough code-documentation practices will also ensure that you have no difficulty navigating and maintaining the project post-delivery. In addition, my commitment to delivering high-quality code, meeting deadlines, and exceeding client expectations matches your value system perfectly. I understand how crucial it is for the overlay to stay locked during normal talking and blinking, and I assure you that every action will be taken to keep end-to-end latency under 70 ms. Rest assured, choosing me for your Unity VRM Live Face Overlay project means choosing excellence, precision, and a pragmatic approach.
$30 USD in 1 day
2.7
2.7

Hello, I’m a Unity and C# developer with experience in game development, augmented reality applications, and computer vision integration using OpenCV. I can help design and develop interactive 3D environments, implement gameplay or simulation logic, and integrate vision-based features where required. I focus on clean architecture, optimized performance, and smooth user experience across platforms. I am also comfortable working with C++ components or plugins when low-level optimization or external library integration is needed. I provide well-structured code, clear documentation, and regular progress updates. Available to start immediately and happy to discuss your project scope, technical requirements, and timeline.
$10 USD in 7 days
2.8
2.8

Dear Client, Greetings! I have reviewed your project description and it aligns well with my experience in Python, AI/ML, Computer Vision, and software development. I have over 7 years of experience working with technologies such as OpenCV, real-time data processing, and machine learning, along with building practical software solutions. I have also worked with several tech companies and completed freelance projects on Upwork, Fiverr, and Freelancer, delivering reliable and well-structured systems. With this background, I can develop a clean and efficient solution for your project while ensuring the implementation is stable, well documented, and easy to maintain. I focus on writing clear code and delivering practical results that meet the required performance and usability standards. Hope to hear from you soon. Regards, Rojan.U
$20 USD in 7 days
2.6
2.6

Hello, I hope you're doing well. I understand you're looking for a skilled freelancer to create a live face overlay for your .vrm avatar using Unity. This integration is critical for achieving realistic expression alignment with your avatar, ensuring it responds to your actual facial movements in real time. My experience in Unity, augmented reality, and computer vision equips me to handle this task effectively. I will develop a Unity scene that uses UniVRM to load your avatar, and process webcam feeds with OpenCV or MediaPipe to track your eye and mouth movements accurately while preserving the head pose. I will ensure that the overlay maintains low latency and high fidelity, adhering to your quality criteria. The project will include a self-contained package with well-commented scripts, a calibration routine, and a simple UI for toggling the overlay. A quick setup guide will also be provided. I'd like to have a quick chat so I can demonstrate my abilities and prove that I'm the best fit for this project. Warm regards, Natan.
$25 USD in 1 day
0.0
0.0

Hi there. I read your VRM Live Face Overlay spec and I’m on the same page: your .vrm avatar, live webcam, eye/mouth tracking, and minimal-latency projection into Unity via UniVRM. A common pitfall is misalignment and drift of the eye/mouth regions under head movement, which I’ll guard against with robust calibration and per-frame region mapping.

Proposed approach:
• Load the .vrm with UniVRM and expose eye/mouth UVs for live input
• Lightweight tracking (OpenCV/MediaPipe) to isolate eye and mouth regions
• Map regions to avatar meshes with sub-10 ms updates and a 70 ms end-to-end target
• Include a calibration routine and a UI toggle to reset textures

I’ve built similar VTuber overlays for real-time expression with stable results. Thando
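The "per-frame region mapping" step in plans like this one amounts to rescaling a normalized bounding box from the webcam frame into the avatar's UV rectangle. A minimal sketch, in which all coordinates, the function name, and the sample rectangles are hypothetical:

```python
# Sketch: map a normalized eye/mouth bounding box from the webcam frame
# into a target UV rectangle on the avatar texture. The rectangles below
# are made-up examples; real ones come from the tracker and the model's UVs.

def map_region_to_uv(bbox, uv_rect):
    """bbox and uv_rect are (x, y, w, h) in [0, 1] normalized space.
    Returns a function mapping a point inside bbox into uv_rect."""
    bx, by, bw, bh = bbox
    ux, uy, uw, uh = uv_rect

    def to_uv(px, py):
        # Normalize the point within the source bbox, then rescale
        # into the destination UV rectangle.
        nx = (px - bx) / bw
        ny = (py - by) / bh
        return (ux + nx * uw, uy + ny * uh)

    return to_uv

# Example: a left-eye crop occupying a 10% square of the frame,
# mapped into a hypothetical eye UV island on the avatar texture.
to_uv = map_region_to_uv((0.30, 0.40, 0.10, 0.10), (0.05, 0.70, 0.20, 0.15))
u, v = to_uv(0.35, 0.45)  # center of the bbox -> center of the UV rect
print(u, v)
```

The calibration offsets the brief asks for would simply be added to `uv_rect` once and saved, so this mapping stays stable between sessions.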
$25 USD in 1 day
0.0
0.0

Hello, I'm Lâm. I specialize in Unity AR/VR pipelines and real-time CV-driven overlays, and I’ve built lightweight eye/mouth tracking overlays for live avatars. Here’s a concise plan tailored to your Unity VRM project:
✔ Load your .vrm with UniVRM in the latest Unity LTS, preserving your rig.
✔ Implement a lean webcam pipeline (OpenCV/MediaPipe) that isolates eye and mouth regions with minimal latency.
✔ Map those regions onto the avatar’s eye/mouth meshes or UVs, keeping head pose untouched, to preserve natural motion.
✔ Add a calibration routine to capture offsets once and save them for quick future sessions.
✔ Build a self-contained Unity package with well-commented C# scripts and a simple UI to start/stop the overlay and revert to default textures.
✔ Provide a short README and a screen-share demo video showing setup, play mode, and how to swap webcams later.

Sample work (imaginary):
- VTuber-style real-time eye/mouth overlay on a 3D avatar (Unity, OpenCV, C#).
- Prolive Mirror-like latency tests showing under 70 ms end-to-end.

What is your preferred balance between CPU load and latency (e.g., aiming for sub-70 ms total latency vs. slightly higher latency with more robust tracking)? I’m available to discuss cadence and milestones, and happy to share a quick prototype build within a week. Best regards, Lâm
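The "capture offsets once and save them" step could persist calibration to a small JSON file that both the tracker and the Unity C# side read. A sketch under assumed field names (none of these keys are from the brief):

```python
# Sketch: persist one-time calibration offsets so future sessions skip
# recalibration. All field names are assumptions for illustration.
import json
import os
import tempfile

def save_calibration(path, offsets):
    with open(path, "w") as f:
        json.dump(offsets, f, indent=2)

def load_calibration(path):
    if not os.path.exists(path):
        return None  # caller should run the calibration routine instead
    with open(path) as f:
        return json.load(f)

offsets = {
    "left_eye_uv_offset": [0.01, -0.02],
    "right_eye_uv_offset": [-0.01, -0.02],
    "mouth_uv_offset": [0.0, 0.03],
    "scale": 1.05,
}
path = os.path.join(tempfile.gettempdir(), "vrm_overlay_calibration.json")
save_calibration(path, offsets)
print(load_calibration(path))
```

JSON keeps the format trivially readable from Unity (e.g., `JsonUtility` on the C# side), which matters if the tracker and renderer end up as separate processes.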
$100 USD in 1 day
0.0
0.0

I can create a Unity scene that maps your live webcam feed onto your .vrm avatar, tracking eyes and mouth in real time with minimal latency. Do you prefer OpenCV or MediaPipe for the facial tracking, and should the solution support multiple webcam resolutions beyond 1080p? I will implement a calibration routine, a start/stop UI toggle, and fully commented C# scripts so you can maintain and upgrade the setup easily. The avatar will blink, look, and speak naturally while keeping head pose intact, with end-to-end latency under 70 ms. I will provide a README and demo video showing setup and replacement of webcam hardware. With my Unity and AR experience, I can deliver a robust, production-ready face-tracking prototype that meets all your specifications. Best,
$20 USD in 7 days
0.0
0.0

Hi, I can build a Unity LTS scene using UniVRM that loads your VRM avatar and overlays live webcam-tracked eye and mouth regions with low latency. I’ll implement tracking via MediaPipe or OpenCV, provide a calibration routine and a toggle UI, and deliver a clean Unity package with documented C# scripts and a setup guide. Kindest wishes, Heorh
$20 USD in 7 days
0.0
0.0

Hello, I am a Unity developer with deep expertise in real-time facial tracking and VTuber systems, having built similar projects combining UniVRM, MediaPipe, and OpenCV. I will create a Unity scene that loads your .vrm file via UniVRM, processes a live webcam feed using MediaPipe for precise eye and mouth region detection, and projects those features onto your avatar's existing meshes with minimal latency while preserving the original head pose. Using a Python/OpenCV pipeline with MediaPipe's facial landmark detection, I will send normalized coordinate data via UDP socket communication to Unity, where custom C# scripts will map eye opening/closing and mouth movements to your avatar's BlendShapes. The system will include a simple calibration routine to save positional offsets, a UI toggle to switch between live overlay and default textures, and will be optimized to stay locked under 70 ms end-to-end latency with no noticeable artifacts during head rotation. Deliverables include a self-contained Unity project with clearly commented C# scripts, calibration tools, and a README/video walkthrough for setup and hardware upgrades.
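The Python-to-Unity UDP handoff this bid describes can be sketched with a tiny fixed wire format. The four-float layout and field names below are assumptions, not the bidder's actual protocol; on the C# side the same 16 bytes could be read with `BitConverter.ToSingle`:

```python
# Sketch: UDP wire format for tracker -> Unity blendshape values.
# Layout is an assumption: four little-endian float32 fields.
import socket
import struct

PACKET_FMT = "<4f"  # 16 bytes: leftEye, rightEye, mouthOpen, mouthWide

def pack_frame(left_eye, right_eye, mouth_open, mouth_wide):
    return struct.pack(PACKET_FMT, left_eye, right_eye, mouth_open, mouth_wide)

def unpack_frame(payload):
    return struct.unpack(PACKET_FMT, payload)

# Loopback round trip: the receiver stands in for the Unity listener.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))            # OS-assigned port
port = rx.getsockname()[1]
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(pack_frame(1.0, 1.0, 0.25, 0.5), ("127.0.0.1", port))
data, _ = rx.recvfrom(64)
print(unpack_frame(data))  # (1.0, 1.0, 0.25, 0.5)
tx.close()
rx.close()
```

UDP fits here because a late tracking frame is worthless: dropping it and using the next one is better for latency than TCP's retransmission.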
$20 USD in 2 days
0.0
0.0

Dear Client, I hope this proposal finds you well. I have carefully gone through your project description and clearly understood all the requirements, and I am confident I can deliver exactly as desired. I hold the required skills and certifications in this field and have developed substantial experience in it, so I would ask you to consider my bid for professional, quality, and affordable services that meet all your requirements. I guarantee timely delivery and unlimited revisions where necessary, so you are assured of satisfaction when working with me. Please send me a message so that we can discuss further and finalize the project.
$30 USD in 1 day
0.0
0.0

Hi, this is a really interesting VTuber/AR-style project. Projecting real eye and mouth regions onto a VRM avatar while keeping the original rig intact is a smart approach for natural expressions. I’ve worked with real-time processing, APIs, and structured systems, so building a clean Unity pipeline for this is something I’d enjoy. The key will be lightweight face-landmark tracking and efficient texture compositing to keep latency very low.

My development plan:
• Set up the Unity project with UniVRM and load your .vrm avatar
• Integrate a MediaPipe/OpenCV pipeline for real-time eye and mouth landmark tracking
• Isolate webcam regions and map them to the avatar's eye and mouth UVs
• Implement a calibration step to align offsets and save the configuration
• Optimize the processing pipeline to keep latency under the target limits
• Add a simple UI toggle to enable/disable the overlay and fall back to default textures
• Prepare commented C# scripts, a project package, and a quick setup guide

I am willing to meet your estimated time and cost expectations. I would love to help bring your avatar to life with your real expressions.
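The "eye and mouth landmark tracking" step in plans like the one above typically reduces to a ratio test on a handful of landmarks; the standard lightweight blink test is the eye aspect ratio (EAR) of Soukupova and Cech. A sketch with made-up sample landmark coordinates (a real tracker supplies these per frame):

```python
# Sketch: eye aspect ratio (EAR) blink detection from six eye landmarks.
# The sample coordinates below are invented for illustration.
import math

def ear(pts):
    """pts: six (x, y) eye landmarks ordered corner, top1, top2,
    opposite corner, bottom2, bottom1 (Soukupova & Cech ordering)."""
    p1, p2, p3, p4, p5, p6 = pts
    d = math.dist
    # Ratio of the two vertical eyelid gaps to the horizontal eye width.
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))

BLINK_THRESHOLD = 0.2  # a commonly used starting value; tune per user

open_eye   = [(0.0, 0.0), (0.3, 0.12), (0.7, 0.12),
              (1.0, 0.0), (0.7, -0.12), (0.3, -0.12)]
closed_eye = [(0.0, 0.0), (0.3, 0.02), (0.7, 0.02),
              (1.0, 0.0), (0.7, -0.02), (0.3, -0.02)]

print(round(ear(open_eye), 3), ear(open_eye) > BLINK_THRESHOLD)    # open
print(round(ear(closed_eye), 3), ear(closed_eye) > BLINK_THRESHOLD)  # blink
```

Because EAR is a ratio, it is largely invariant to the face's distance from the camera, which is what keeps a single calibrated threshold usable across sessions.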
$20 USD in 7 days
0.0
0.0

Hello, I have carefully analyzed your project requirements. Recently I developed a Unity-based VTuber prototype where a VRM avatar was controlled by live webcam tracking using MediaPipe and OpenCV, mapping facial features to the model in real time with low latency and stable tracking. For your project, I will load the .vrm avatar via UniVRM in Unity LTS, capture the webcam stream, and track eye and mouth regions using MediaPipe or OpenCV. These regions will be composited onto the avatar’s meshes or UVs to reflect real blinking, gaze, and speaking while keeping the original rig head pose. I will also add a calibration routine, UI toggle for overlay control, and optimize latency to meet the <70 ms requirement. I am available to begin immediately and committed to delivering a clean Unity project with documented C# scripts and clear setup instructions within the shortest possible timeframe. Best regards, Viktor
$20 USD in 3 days
0.0
0.0

Hi, I am excited to help bring your .vrm avatar to life by implementing real-time eye and mouth tracking in Unity. I can build a self-contained project using UniVRM and a lightweight computer vision library like MediaPipe or OpenCV to map your expressions onto the avatar with low latency and minimal artifacts. The workflow will include a calibration routine, a start/stop UI toggle, and clear, commented C# scripts for easy future updates. I look forward to delivering a responsive, expressive avatar that mirrors your real-world movements seamlessly. Kind regards, Adebayo
$20 USD in 7 days
0.0
0.0

Hello, I can build a Unity scene that maps your real-world eye and mouth movements onto your .vrm avatar in real time. Using UniVRM to load the model, I’ll process the webcam feed with MediaPipe or OpenCV to track eyes and mouth, then project them onto the avatar’s meshes/UVs with minimal latency while preserving head pose.

Deliverables:
• Self-contained Unity project or package with well-commented C# scripts
• Calibration routine to save tracking offsets
• UI toggle to start/stop the overlay and revert to default textures
• Quick README or screen-share demo showing setup, play mode, and webcam replacement

Acceptance criteria:
• Overlay stays locked during talking/blinking (<70 ms end-to-end latency)
• Works on a standard 1080p webcam, no extra hardware
• No artifacts for ±30° Y-axis rotations

I have prior experience with VTuber setups and AR face-tracking, including real-time expression mapping, and can deliver a working prototype quickly. I’m ready to share test builds and iterate based on your feedback.
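Keeping the overlay "locked during talking/blinking" usually means filtering landmark jitter before it reaches the avatar. A minimal sketch using exponential smoothing; the `alpha` value and the sample readings are assumptions to be tuned against the latency budget (heavier smoothing adds perceived lag):

```python
# Sketch: exponential moving average on per-frame tracking values so the
# overlay stays locked instead of jittering with landmark noise.
# alpha is an assumed starting point, not a measured optimum.

class EmaFilter:
    def __init__(self, alpha=0.5):
        self.alpha = alpha   # higher = snappier response, lower = smoother
        self.state = None

    def update(self, value):
        if self.state is None:
            self.state = value  # seed with the first observation
        else:
            self.state = self.alpha * value + (1 - self.alpha) * self.state
        return self.state

f = EmaFilter(alpha=0.5)
noisy = [0.50, 0.54, 0.46, 0.52, 0.48]   # jittery mouth-open readings
smoothed = [f.update(v) for v in noisy]
print([round(s, 3) for s in smoothed])
```

For production, an adaptive filter such as the One Euro filter is a common upgrade: it smooths aggressively when the signal is still and loosens up during fast motion, which suits blinks well.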
$20 USD in 7 days
0.0
0.0

Hello, this is a very interesting project—combining VRM avatars with real-time facial feature projection is exactly the kind of system I enjoy building. I have experience with Unity, real-time video processing, and integrating tracking libraries like MediaPipe/OpenCV for low-latency pipelines. Your approach is solid, and I can implement it cleanly while keeping performance under control.

My approach:
• Load your .vrm via UniVRM and preserve the existing rig/pose
• Capture the webcam feed and extract eye and mouth regions using MediaPipe (fast and stable)
• Map these regions onto the avatar via UV/texture overlay or render textures
• Optimize the pipeline to keep latency under ~70 ms
• Handle blending to avoid visual artifacts during blinking/talking

Deliverables will include:
• A clean Unity project with well-structured C# scripts
• A calibration tool to align face regions once and save offsets
• A simple UI (start/stop overlay, toggle default textures)
• A README and a short demo video for setup and usage

I’ve worked on similar real-time/interactive systems, so I understand the importance of stability, tracking accuracy, and smooth rendering.

Quick questions:
1. Do you prefer MediaPipe (recommended), or are you already using another tracking tool?
2. Target platform: Windows only, or cross-platform?

I’d be glad to build a working prototype quickly. Best regards, Dimitar
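The "blending to avoid visual artifacts" step amounts to feathering the alpha at the edges of the composited region so the webcam crop fades into the avatar's base texture instead of ending in a hard seam. A one-dimensional sketch of the same math a GPU shader would run per pixel (values and feather width are illustrative):

```python
# Sketch: feathered alpha blend of a webcam crop over the avatar's base
# texture, shown on a single row of pixel intensities. In Unity this
# would be a fragment shader; the math is identical.

def feather_alpha(i, width, feather):
    """Alpha ramps from 0 toward 1 over `feather` pixels at each edge."""
    edge = min(i, width - 1 - i)
    return min(1.0, (edge + 1) / feather) if feather > 0 else 1.0

def composite_row(base, overlay, feather=2):
    assert len(base) == len(overlay)
    out = []
    for i, (b, o) in enumerate(zip(base, overlay)):
        a = feather_alpha(i, len(base), feather)
        out.append(a * o + (1 - a) * b)  # standard alpha blend
    return out

base    = [10.0] * 8   # avatar texture row (dark)
overlay = [200.0] * 8  # webcam eye-region row (bright)
row = composite_row(base, overlay)
print(row)  # edges are blended, interior is pure overlay
```

The edge pixels land halfway between the two sources while the interior is pure overlay, which is exactly the soft border that hides seams during blinking and talking.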
$20 USD in 7 days
0.0
0.0

Hello, I’d be excited to help you bring your .vrm avatar to life with real-time facial expression tracking. I understand you need a Unity scene where a live webcam feed tracks only your eyes and mouth, projecting them onto your avatar so it blinks, looks, and speaks with your actual expressions. The system should be low-latency, lightweight, and maintain the original head pose while working reliably with a standard 1080p webcam. You also need a calibration routine, a toggle UI for overlay control, and clear instructions for setup and future hardware upgrades. My approach would be to import your .vrm avatar with UniVRM, then process the webcam feed using MediaPipe or OpenCV to extract eye and mouth regions. These regions will be mapped to the avatar’s existing eye and mouth meshes or UVs with minimal latency, ensuring smooth performance under 70 ms end-to-end. I will implement a short calibration step to align tracking offsets, add a simple UI to control the overlay, and ensure the overlay remains artifact-free even when the avatar turns ±30° on the Y-axis. All C# scripts will be cleanly commented, and I will provide a self-contained Unity project with a README or demo video for easy testing and future hardware replacement. I’m ready to start prototyping immediately and can provide prior VTuber/face-tracking examples to demonstrate experience and fit.
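The brief's ±30° Y-axis requirement suggests gating or cross-fading the overlay once head yaw leaves the safe range. One crude, purely illustrative yaw proxy is how far the nose sits off-center between the eye corners; the function, threshold handling, and sample coordinates below are all assumptions, not anyone's proposed method:

```python
# Sketch: gate the overlay when estimated head yaw exceeds the +/-30 deg
# spec. The nose-between-eye-corners proxy is a rough illustration only;
# a real pipeline would use the tracker's head-pose output.
import math

def estimate_yaw_deg(left_eye_x, right_eye_x, nose_x):
    """Yaw from where the nose sits between the eye corners:
    centered nose ~ 0 deg, nose at an eye corner ~ +/-90 deg."""
    span = right_eye_x - left_eye_x
    offset = (nose_x - (left_eye_x + right_eye_x) / 2) / (span / 2)
    return math.degrees(math.asin(max(-1.0, min(1.0, offset))))

def overlay_enabled(yaw_deg, limit_deg=30.0):
    return abs(yaw_deg) <= limit_deg

print(estimate_yaw_deg(0.40, 0.60, 0.50))  # centered nose -> ~0 deg
print(overlay_enabled(estimate_yaw_deg(0.40, 0.60, 0.58)))  # well off-center
```

Beyond the limit, fading back to the avatar's default eye/mouth textures is usually less jarring than letting a stretched webcam crop slide across the UVs.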
$20 USD in 7 days
0.0
0.0

București, Romania
Payment method verified
Member since Feb 20, 2026