<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Pose Estimation | Yang Gao</title><link>https://ygao36buffalo.github.io/tags/pose-estimation/</link><atom:link href="https://ygao36buffalo.github.io/tags/pose-estimation/index.xml" rel="self" type="application/rss+xml"/><description>Pose Estimation</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Thu, 21 Nov 2024 00:00:00 +0000</lastBuildDate><image><url>https://ygao36buffalo.github.io/media/icon_hu7729264130191091259.png</url><title>Pose Estimation</title><link>https://ygao36buffalo.github.io/tags/pose-estimation/</link></image><item><title>PressInPose: Integrating Pressure and Inertial Sensors for Full-Body Pose Estimation in Activities</title><link>https://ygao36buffalo.github.io/project/pressinpose/</link><pubDate>Thu, 21 Nov 2024 00:00:00 +0000</pubDate><guid>https://ygao36buffalo.github.io/project/pressinpose/</guid><description>&lt;h2 id="project-overview">Project Overview&lt;/h2>
&lt;p>Accurate human body posture assessment through wearable technology has significant implications across various fields, including sports science, clinical diagnostics, rehabilitation, and VR interaction. Traditional methods often face limitations due to complex setups or environmental constraints. To address these challenges, we developed &lt;strong>PressInPose&lt;/strong>, an innovative system that integrates pressure and inertial sensors for precise full-body pose estimation in dynamic activities. This work was published in &lt;strong>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)&lt;/strong> and will be presented at &lt;strong>UbiComp 2025&lt;/strong>.&lt;/p>
&lt;h2 id="key-innovations--contributions">Key Innovations &amp;amp; Contributions:&lt;/h2>
&lt;h3 id="1-novel-multi-sensor-fusion">1. Novel Multi-Sensor Fusion&lt;/h3>
&lt;p>PressInPose employs an advanced shoe insole embedded with pressure sensors and an Inertial Measurement Unit (IMU), coupled with a single wrist-mounted IMU. This unique multi-modal sensor fusion approach allows for a comprehensive analysis of human biomechanics, capturing intricate body dynamics that traditional single-sensor systems often miss.&lt;/p>
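&lt;p>As a rough illustration of this fusion, a synchronized frame from the three sensor streams can be concatenated into one feature vector for downstream pose estimation. The channel counts below are placeholders, not the published hardware specification:&lt;/p>

```python
import numpy as np

# Hypothetical channel counts; the actual insole and IMU
# configurations in the paper may differ.
PRESSURE_CHANNELS = 16   # pressure cells per insole
IMU_CHANNELS = 6         # 3-axis accelerometer + 3-axis gyroscope

def fuse_frame(insole_pressure, insole_imu, wrist_imu):
    """Concatenate one synchronized frame from all sensor streams
    into a single feature vector for the pose-estimation network."""
    return np.concatenate([insole_pressure, insole_imu, wrist_imu])

frame = fuse_frame(
    np.random.rand(PRESSURE_CHANNELS),
    np.random.rand(IMU_CHANNELS),
    np.random.rand(IMU_CHANNELS),
)
```
&lt;p>Simple concatenation is only the most basic fusion strategy; the point is that pressure and inertial channels enter the model as complementary views of the same motion.&lt;/p>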
&lt;h3 id="2-llm-powered-virtual-data-augmentation">2. LLM-Powered Virtual Data Augmentation&lt;/h3>
&lt;p>To enhance the robustness and generalization of our system, we leveraged large language models (LLMs) to generate virtual human motion sequences. These sequences were utilized to create synthetic IMU data for data augmentation, effectively addressing the challenge of limited real-world data availability and variability, especially for complex and dynamic movements.&lt;/p>
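&lt;p>A minimal sketch of how synthetic IMU data can be derived from a virtual motion sequence, assuming joint position trajectories are available: double-differentiate positions to approximate acceleration, then add gravity and noise to mimic a real accelerometer. The noise level here is an illustrative guess, not the paper's value:&lt;/p>

```python
import numpy as np

def synthesize_accel(positions, dt):
    """Approximate accelerometer readings from a virtual trajectory
    (an N x 3 array of positions sampled at interval dt) using
    second-order finite differences, plus gravity and toy noise."""
    accel = np.gradient(np.gradient(positions, dt, axis=0), dt, axis=0)
    gravity = np.array([0.0, 0.0, 9.81])
    noise = np.random.normal(0.0, 0.05, accel.shape)
    return accel + gravity + noise

t = np.linspace(0.0, 1.0, 100)
traj = np.stack([np.sin(t), np.cos(t), t], axis=1)  # toy wrist path
imu = synthesize_accel(traj, dt=t[1] - t[0])
```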
&lt;h3 id="3-physical-kinematics-modeling--deep-learning-network">3. Physical Kinematics Modeling &amp;amp; Deep Learning Network&lt;/h3>
&lt;p>Our approach uniquely combines physical kinematics modeling based on pressure data with a multi-region human posture estimation network. This integration allows PressInPose to accurately capture interactions and dependencies between different body parts, leading to superior accuracy in pose reconstruction.&lt;/p>
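&lt;p>A toy sketch of the multi-region idea: each body region gets its own encoder, and a shared fusion layer mixes region features so cross-region dependencies can be captured. The region names, layer sizes, and linear layers are illustrative placeholders, not the published network:&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(0)

REGIONS = ["lower_body", "torso", "upper_body"]  # placeholder split
FEAT_IN, FEAT_HID = 28, 16                       # placeholder sizes

# One encoder per region, plus a shared fusion layer mapping the
# mixed region features to 24 joints x 3 rotation parameters.
encoders = {r: rng.standard_normal((FEAT_IN, FEAT_HID)) for r in REGIONS}
fusion = rng.standard_normal((FEAT_HID * len(REGIONS), 24 * 3))

def estimate_pose(frame):
    """Encode a fused sensor frame per region, then mix regions."""
    region_feats = [np.tanh(frame @ encoders[r]) for r in REGIONS]
    return np.concatenate(region_feats) @ fusion  # flattened pose

pose = estimate_pose(rng.standard_normal(FEAT_IN))
```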
&lt;h2 id="media--resources">Media &amp;amp; Resources:&lt;/h2>
&lt;ul>
&lt;li>&lt;strong>Paper:&lt;/strong> &lt;a href="https://doi.org/10.1145/3699773">Full Paper (ACM DL)&lt;/a>&lt;/li>
&lt;/ul>
</description></item></channel></rss>