# UCSD Jacobs School of Engineering

## Masters in Data Science and Engineering

### Capstone Project: PhysioAI Companion

Contributors:
- Laben Fisher
- Sagar Jogadhenu
- Prakhar Shukla
- Vaaruni Desai
- Zufeshan Imran

Digital Object Identifier: https://doi.org/10.6075/J0HM58PG

# Project Abstract

*Physical Therapists (PTs) and kinesiologists recommend series of exercises but often face challenges in continuously monitoring individuals performing them to ensure correct posture and prevent injury aggravation. This research addresses the issue by building a product designed to automate the detection of incorrectly performed exercises and provide users with timely feedback. The effort began with a set of curated exercise videos and a set of biomechanical standards, along with a core model to analyze a single exercise: the overhead squat. The core model consists of three main steps: preprocessing to standardize the videos, 3D pose estimation, and calculating an incorrectness score for each repetition, aggregated from the measured joint angles. The work uses state-of-the-art computer vision models and computational algorithms in a customized solution. The core model's results provide feedback to both practitioners and users through visual overlays on the exercise video and a graphical presentation of the biomechanical measures captured during the exercise.*

# Using this Repository

This repository is organized so that the `model` folder contains the files to run to start the model on the backend, and the `webserver` folder contains the files to run for the frontend.

```
├── model
│   ├── model.ipynb
│   ├── demo.ipynb
│   ├── demo-batch.ipynb
│   └── demo-desktop.ipynb
├── webserver
│   ├── app.py - Flask routing file
│   ├── static
│   │   ├── default.css
│   │   ├── images
│   │   ├── js
│   │   └── output
│   └── templates - HTML pages
├── example
│   └── Guy01_2024-01-16.mp4
```

# Installation instructions on AWS

1. Follow all the steps mentioned in `README.md` under the `model` folder.
2. Once the kernel is created, pick a file from the `model` folder:
   - `demo.ipynb` to process a video captured through the website
   - `demo-batch.ipynb` to batch-process videos in an AWS S3 bucket
3. Steps to follow if `demo.ipynb` was picked:
   - Place the Jupyter notebook file under the `WHAM` folder
   - Start the file and run all cells
   - Open the website (35.95.37.192/capture)
   - Enter a name and start the capture
   - Results will be available on the website under the name entered
4. Steps to follow if `demo-batch.ipynb` was picked:
   - Place the Jupyter notebook file under the `WHAM` folder
   - Place all video files under the `physioai-data` S3 bucket
   - Start the file and run all cells
   - Results will be available on the website under the name entered

*When you launch a notebook, make sure to select the `WHAM` kernel.*

# Running locally on PC

1. SMPL body models
   - To download the SMPL body models (Neutral, Female, and Male), you need to register for SMPL (https://smpl.is.tue.mpg.de/) and SMPLify (https://smplify.is.tue.mpg.de/).
2. WHAM installation
   - We use the conda environment provided by WHAM and extend it. To install WHAM, please visit https://github.com/yohanshin/WHAM/blob/main/docs/INSTALL.md
3. Once the WHAM installation is done, pick the `demo-desktop.ipynb` file from the `model` folder.
4. Place the Jupyter notebook file under the `WHAM` folder.
5. Place the video files under `physioai-videos`.
6. Start the file and run all cells.
7. Results will be available under the `output` folder.

## Citing the work

- The Shin et al. (2023) WHAM model report: https://arxiv.org/abs/2312.07531
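The abstract's per-repetition scoring step can be sketched as follows. The joint names, the acceptable angle ranges, and the mean-deviation aggregation are all illustrative assumptions, not the project's actual biomechanical standards:

```python
# Hypothetical sketch of per-repetition incorrectness scoring.
# Joint names, acceptable ranges (degrees), and the aggregation rule
# are assumptions for illustration only.

ACCEPTABLE_RANGES = {
    "knee_flexion": (70.0, 120.0),  # assumed range at squat depth
    "hip_flexion": (60.0, 110.0),
    "trunk_lean": (0.0, 30.0),
}

def incorrectness_score(measured_angles: dict) -> float:
    """Mean degrees outside the acceptable range, averaged over joints."""
    deviations = []
    for joint, angle in measured_angles.items():
        lo, hi = ACCEPTABLE_RANGES[joint]
        if angle < lo:
            deviations.append(lo - angle)
        elif angle > hi:
            deviations.append(angle - hi)
        else:
            deviations.append(0.0)
    return sum(deviations) / len(deviations)

def aggregate_session(rep_scores: list) -> float:
    """Aggregate per-repetition scores into one session-level score."""
    return sum(rep_scores) / len(rep_scores)
```

A repetition whose measured angles all fall inside the assumed ranges scores 0; larger scores mean larger deviations from the standard.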
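After a local run (step 7 above), results land under the `output` folder. A small helper like the one below could group the result files for display, assuming each processed video gets its own subfolder under `output` — that layout is an assumption, not documented WHAM behavior:

```python
from pathlib import Path

def collect_results(output_dir: str) -> dict:
    """Group files under the output folder by their top-level subfolder.

    Assumes (hypothetically) that results for each processed video are
    written into a subfolder named after the video/capture.
    """
    results = {}
    root = Path(output_dir)
    for path in sorted(root.rglob("*")):
        if path.is_file():
            # First path component relative to the root names the capture.
            name = path.relative_to(root).parts[0]
            results.setdefault(name, []).append(path.name)
    return results
```

For example, `collect_results("output")` would map a capture name such as `Guy01` to the list of files produced for it.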