<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI | Yang Gao</title><link>https://ygao36buffalo.github.io/tags/ai/</link><atom:link href="https://ygao36buffalo.github.io/tags/ai/index.xml" rel="self" type="application/rss+xml"/><description>AI</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Fri, 25 Apr 2025 00:00:00 +0000</lastBuildDate><image><url>https://ygao36buffalo.github.io/media/icon_hu7729264130191091259.png</url><title>AI</title><link>https://ygao36buffalo.github.io/tags/ai/</link></image><item><title>SandTouch: Empowering Virtual Sand Art in VR with AI Guidance and Emotional Relief</title><link>https://ygao36buffalo.github.io/project/sandtouch/</link><pubDate>Fri, 25 Apr 2025 00:00:00 +0000</pubDate><guid>https://ygao36buffalo.github.io/project/sandtouch/</guid><description>&lt;h2 id="project-overview">Project Overview&lt;/h2>
&lt;p>Sand painting is a unique and valuable art form, but it is often held back by the need for physical equipment and by a steep learning curve. To address these challenges, we developed &lt;strong>SandTouch&lt;/strong>, a novel VR sand painting system that offers an immersive, intuitive experience closely mirroring real-world sand manipulation. This project was presented at &lt;strong>CHI 2025&lt;/strong>, the premier conference in Human-Computer Interaction.&lt;/p>
&lt;h2 id="key-features--contributions">Key Features &amp;amp; Contributions:&lt;/h2>
&lt;h3 id="1-realistic-hand-sand-interaction">1. Realistic Hand-Sand Interaction&lt;/h3>
&lt;p>We designed SandTouch to support highly realistic, natural interaction between the user&amp;rsquo;s hands and virtual sand: users manipulate the sand directly, with no external devices or controllers. This device-free interface keeps the interaction intuitive and authentic, reproducing the fine sensations of real sand manipulation and pairing them with realistic sound feedback.&lt;/p>
&lt;h3 id="2-ai-guidance-for-creative-flow">2. AI Guidance for Creative Flow&lt;/h3>
&lt;p>A pioneering aspect of SandTouch is the integration of an AI agent, powered by a large language model (LLM). This AI intelligently interprets users&amp;rsquo; creative intentions in real-time, offering contextually relevant artistic suggestions. This feature simplifies the creative process, enhances interactivity, and helps users (especially beginners) refine their artwork through technique recommendations and composition analysis.&lt;/p>
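&lt;p>&lt;em>Illustrative sketch:&lt;/em> this page does not describe SandTouch&amp;rsquo;s actual prompt design, but the general idea of turning canvas state into an LLM request can be shown in a few lines. Everything below (function name, state summary, wording) is a hypothetical illustration, not the published implementation.&lt;/p>

```python
def build_guidance_prompt(strokes, theme):
    """Compose a text prompt asking an LLM tutor for one next-step suggestion.

    'strokes' and 'theme' are hypothetical summaries of the canvas state;
    the real SandTouch prompt format is not described in this post.
    """
    stroke_list = "; ".join(strokes)
    return (
        f"You are a sand-painting tutor. The user is working on '{theme}'. "
        f"Strokes so far: {stroke_list}. "
        "Suggest one concrete technique or composition improvement."
    )

# Example canvas state, invented for illustration.
prompt = build_guidance_prompt(
    ["broad sweep across the lower third", "finger-drawn sun, upper right"],
    "desert dawn",
)
```

&lt;p>In a full system, the resulting string would be sent to the LLM backend, and the reply surfaced as an in-VR suggestion.&lt;/p>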
&lt;h3 id="3-emotional-relief--immersion">3. Emotional Relief &amp;amp; Immersion&lt;/h3>
&lt;p>Beyond artistic creation, SandTouch prioritizes user well-being. It incorporates calming and responsive soundscapes that react to user gestures, reinforcing a relaxing atmosphere. Comprehensive evaluations demonstrated a significant increase in user engagement and immersion, with the realistic sound feedback enhancing emotional relief and deepening the painting experience. This highlights its potential for therapeutic applications.&lt;/p>
&lt;h2 id="media--resources">Media &amp;amp; Resources:&lt;/h2>
&lt;ul>
&lt;li>&lt;strong>Paper:&lt;/strong> &lt;a href="https://doi.org/10.1145/3706598.3714275" target="_blank" rel="noopener">Full Paper (ACM DL)&lt;/a>&lt;/li>
&lt;li>&lt;strong>Video:&lt;/strong> &lt;a href="https://www.youtube.com/watch?v=6FYOCeU0liw" target="_blank" rel="noopener">Project Video&lt;/a>&lt;/li>
&lt;/ul>
</description></item><item><title>PressInPose: Integrating Pressure and Inertial Sensors for Full-Body Pose Estimation in Activities</title><link>https://ygao36buffalo.github.io/project/pressinpose/</link><pubDate>Thu, 21 Nov 2024 00:00:00 +0000</pubDate><guid>https://ygao36buffalo.github.io/project/pressinpose/</guid><description>&lt;h2 id="project-overview">Project Overview&lt;/h2>
&lt;p>Accurate human body posture assessment through wearable technology has significant implications across various fields, including sports science, clinical diagnostics, rehabilitation, and VR interaction. Traditional methods often face limitations due to complex setups or environmental constraints. To address these challenges, we developed &lt;strong>PressInPose&lt;/strong>, an innovative system that integrates pressure and inertial sensors for precise full-body pose estimation in dynamic activities. This work was published in &lt;strong>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)&lt;/strong> and will be presented at &lt;strong>UbiComp 2025&lt;/strong>.&lt;/p>
&lt;h2 id="key-innovations--contributions">Key Innovations &amp;amp; Contributions:&lt;/h2>
&lt;h3 id="1-novel-multi-sensor-fusion">1. Novel Multi-Sensor Fusion&lt;/h3>
&lt;p>PressInPose employs an advanced shoe insole embedded with pressure sensors and an Inertial Measurement Unit (IMU), coupled with a single wrist-mounted IMU. This unique multi-modal sensor fusion approach allows for a comprehensive analysis of human biomechanics, capturing intricate body dynamics that traditional single-sensor systems often miss.&lt;/p>
&lt;h3 id="2-llm-powered-virtual-data-augmentation">2. LLM-Powered Virtual Data Augmentation&lt;/h3>
&lt;p>To improve the robustness and generalization of our system, we leveraged large language models (LLMs) to generate virtual human motion sequences. From these sequences we synthesized IMU data for augmentation, addressing the scarcity and limited variability of real-world training data, especially for complex, dynamic movements.&lt;/p>
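&lt;p>&lt;em>Illustrative sketch:&lt;/em> the exact augmentation pipeline is in the paper, but the core of deriving accelerometer-like signals from a generated motion sequence is twice-differentiating joint positions and adding gravity. The sketch below is a generic version of that idea; the function name, sampling rate, and world-frame gravity handling are assumptions, not the published method.&lt;/p>

```python
import numpy as np

def synthesize_imu_accel(positions, fs=60.0, gravity=(0.0, -9.81, 0.0)):
    """Derive accelerometer-like readings from a (T, 3) joint trajectory.

    positions: (T, 3) world-frame positions in meters.
    fs: sampling rate in Hz (an assumption, not from the paper).
    Returns (T, 3): linear acceleration plus gravity, in the world frame.
    A full pipeline would also rotate readings into the sensor frame.
    """
    dt = 1.0 / fs
    # Second derivative of position via two applications of np.gradient.
    accel = np.gradient(
        np.gradient(np.asarray(positions, dtype=float), dt, axis=0), dt, axis=0
    )
    return accel + np.asarray(gravity)

# Toy check: constant-velocity motion has zero linear acceleration,
# so the synthetic signal reduces to the gravity term alone.
traj = np.linspace([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], num=120)  # 2 s at 60 Hz
imu = synthesize_imu_accel(traj)
```

&lt;p>Real augmentation would apply this per joint across many generated sequences, then add sensor noise and bias models before training.&lt;/p>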
&lt;h3 id="3-physical-kinematics-modeling--deep-learning-network">3. Physical Kinematics Modeling &amp;amp; Deep Learning Network&lt;/h3>
&lt;p>Our approach uniquely combines physical kinematics modeling based on pressure data with a multi-region human posture estimation network. This integration allows PressInPose to accurately capture interactions and dependencies between different body parts, leading to superior accuracy in pose reconstruction.&lt;/p>
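&lt;p>&lt;em>Illustrative sketch:&lt;/em> one standard biomechanical quantity that insole pressure data provides for kinematics modeling is the center of pressure (CoP), the pressure-weighted centroid of the sensor positions. The snippet below shows only that generic computation; it is not PressInPose&amp;rsquo;s model, and the sensor layout is invented for the example.&lt;/p>

```python
import numpy as np

def center_of_pressure(pressures, sensor_xy):
    """Pressure-weighted centroid of insole sensor positions.

    pressures: (N,) non-negative readings from N insole pressure cells.
    sensor_xy: (N, 2) cell coordinates in the insole plane, in meters.
    Returns the (2,) CoP, or None when there is no ground contact.
    """
    p = np.asarray(pressures, dtype=float)
    total = p.sum()
    if total == 0.0:
        return None  # swing phase: foot is off the ground
    return (p[:, None] * np.asarray(sensor_xy, dtype=float)).sum(axis=0) / total

# Toy 4-cell layout (heel pair at y=0, toe pair at y=0.2):
xy = np.array([[0.00, 0.00], [0.05, 0.00], [0.00, 0.20], [0.05, 0.20]])
cop = center_of_pressure([1.0, 1.0, 1.0, 1.0], xy)  # equal loading, mid-foot CoP
```

&lt;p>A trajectory of such CoP estimates over a gait cycle is the kind of physically grounded feature a posture-estimation network can combine with IMU streams.&lt;/p>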
&lt;h2 id="media--resources">Media &amp;amp; Resources:&lt;/h2>
&lt;ul>
&lt;li>&lt;strong>Paper:&lt;/strong> &lt;a href="https://doi.org/10.1145/3699773" target="_blank" rel="noopener">Full Paper (ACM DL)&lt;/a>&lt;/li>
&lt;/ul>
</description></item></channel></rss>