This repository contains a small, script-first pipeline to prepare data, extract pose landmarks with MediaPipe, train machine‑learning pose classifiers, and run a real‑time webcam demo.
The sections below explain what each Python script in the project root does and how to use it on macOS (zsh). For dependencies, see `requirements.txt`.
Optional but recommended: create and activate a virtual environment before installing.
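In zsh that might look like the following (the `.venv` directory name is just a common convention, not something this repo requires):

```shell
# Create and activate a virtual environment, then install the pinned dependencies
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```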
1. (Optional) Extract raw images from the included Parquet dataset into train/test folders using `extractimages.py`.
2. Run `posedetection.py` to generate per-image pose landmark JSON files under `PoseData/<label>/`.
3. Train and evaluate a classifier with `mlposeclassifier.py`. Optionally export to ONNX or TFLite.
4. Run the webcam demo with `realtimeposeclassifier.py` using your saved model.
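Concretely, a full run might look like this (the training and demo flags come from the per-script option lists in this README; the first two scripts are shown with their defaults — check each script's `--help` for the exact flags):

```shell
# 1. Extract images from the Parquet dataset into TrainData/
python extractimages.py

# 2. Run MediaPipe Pose over the training images, writing JSONs into PoseData/
python posedetection.py

# 3. Train a Random Forest and save it
python mlposeclassifier.py --model randomforest --save-model poseclassifierrandomforest.pkl

# 4. Launch the webcam demo with the saved model
python realtimeposeclassifier.py --model poseclassifierrandomforest.pkl
```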
## extractimages.py

**Purpose**
- Extract images and labels from the provided Parquet files (in `YogaDataSet/data/`) and save them into folders by label for training and testing.
**Inputs/Outputs**
- Input: `YogaDataSet/data/train-00000-of-00001.parquet`, `YogaDataSet/data/test-00000-of-00001.parquet`
- Output: images under `TrainData/train/<label>/` and/or `TrainData/test/<label>/`
**Notes**
- The script creates `label0`, `label1`, … subfolders and writes image files with their original extensions.
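The core of the extraction can be sketched as below. The column names (`image` holding a `{'bytes': …, 'path': …}` dict, plus an integer `label`) follow the usual Hugging Face image-dataset Parquet layout and are assumptions about this dataset, not verified against it:

```python
import os
import pandas as pd

def save_images(df: pd.DataFrame, out_root: str) -> None:
    """Write each row's image bytes into out_root/label<N>/, keeping the original extension."""
    for i, row in df.iterrows():
        img = row["image"]  # assumed shape: {'bytes': b'...', 'path': 'original_name.jpg'}
        label_dir = os.path.join(out_root, f"label{row['label']}")
        os.makedirs(label_dir, exist_ok=True)
        ext = os.path.splitext(img.get("path") or "")[1] or ".jpg"  # fall back to .jpg
        with open(os.path.join(label_dir, f"{i}{ext}"), "wb") as f:
            f.write(img["bytes"])

# Reading one of the provided Parquet files would then look like:
# df = pd.read_parquet("YogaDataSet/data/train-00000-of-00001.parquet")
# save_images(df, "TrainData/train")
```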
## posedetection.py

**Purpose**
- Run MediaPipe Pose on your labeled image folders and save normalized landmark coordinates to JSON files for training.
**Preprocessing**
- Uses the nose as the head reference point and applies: position = (pos − headPos) × 100, rounded to 2 decimals. This matches the training pipeline.
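That normalization can be sketched as follows (MediaPipe Pose reports the nose as landmark 0; the list-of-triples input layout here is an assumption about this repo's internal representation):

```python
def normalize_landmarks(landmarks, head_index=0):
    """Make coordinates nose-relative, scale by 100, round to 2 decimals."""
    hx, hy, hz = landmarks[head_index]
    return [
        [round((x - hx) * 100, 2), round((y - hy) * 100, 2), round((z - hz) * 100, 2)]
        for x, y, z in landmarks
    ]
```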
**Inputs/Outputs**
- Input: an images root (default `TrainData/train`) organized as `<label>/*.jpg|png|…`
- Output: JSON files under `PoseData/<label>/*.json`
**Tips**
- Supported image extensions: `.jpg`, `.jpeg`, `.png`, `.bmp`, `.tiff`
- Requires a working OpenCV + MediaPipe install (see `requirements.txt`).
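Gathering candidate images with that extension filter is straightforward; a minimal sketch (the helper name is illustrative, not the script's actual API):

```python
from pathlib import Path

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp", ".tiff"}

def list_images(root):
    """Collect all supported image files under root (any depth), sorted for stable ordering."""
    return sorted(p for p in Path(root).rglob("*") if p.suffix.lower() in IMAGE_EXTS)
```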
## mlposeclassifier.py

**Purpose**
- Train, evaluate, and export pose classifiers from landmark JSONs. Supports Random Forest, SVM, Gradient Boosting, Logistic Regression, and a knowledge-distilled RF→MLP variant.
**Data expectation**
- Directory structure like:
  - `PoseData/label0/*.json`
  - `PoseData/label1/*.json`
  - …
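Loading that layout into a feature matrix can be sketched as below. The per-file JSON shape (a list of `[x, y, z]` triples, flattened into one feature vector) is an assumption, not the script's verified format:

```python
import json
import os
import numpy as np

def load_dataset(root="PoseData"):
    """Load every landmark JSON under root/<label>/ into (X, y) arrays."""
    X, y = [], []
    for label in sorted(os.listdir(root)):
        label_dir = os.path.join(root, label)
        if not os.path.isdir(label_dir):
            continue
        for name in sorted(os.listdir(label_dir)):
            if not name.endswith(".json"):
                continue
            with open(os.path.join(label_dir, name)) as f:
                X.append(np.asarray(json.load(f), dtype=np.float32).ravel())
            y.append(label)  # folder name doubles as the class label
    return np.stack(X), np.array(y)
```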
**Common options**
- `--data/-d` Pose JSON root (default: `PoseData`)
- `--model/-m` Model type: `randomforest` (default), `svm`, `gradientboost`, `logistic`, `distilledrf`
- `--test-size/-t` Test split ratio (default: 0.2)
- `--save-model/-s` Path to save the trained model (`.pkl` via joblib)
- `--load-model/-l` Path to load an existing model
- `--predict/-p` Predict a single JSON file
- `--evaluate/-e` Evaluate a folder of JSON files
- `--export-onnx` Export the trained model to ONNX (tree models or distilled MLP)
- `--export-model-type` Controls which model flavor to export
- `--export-tflite` Export the distilled student MLP to TFLite (requires extra deps)
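A few example invocations built from the options above (the model filename and the `example.json` path are placeholders, not files shipped with the repo):

```shell
# Train a Random Forest on PoseData with a 20% test split and save it
python mlposeclassifier.py --data PoseData --model randomforest --test-size 0.2 \
  --save-model poseclassifierrandomforest.pkl

# Reload the saved model and classify a single landmark JSON
python mlposeclassifier.py --load-model poseclassifierrandomforest.pkl \
  --predict PoseData/label0/example.json

# Export the trained tree model to ONNX
python mlposeclassifier.py --load-model poseclassifierrandomforest.pkl --export-onnx
```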
**Notes**
- ONNX export depends on `skl2onnx` and `onnx`. TFLite export additionally needs `onnx-tf` and `tensorflow`.
- Linear classifiers (`svm`, `logistic`) are not supported by Unity Barracuda. Prefer `randomforest` or the distilled MLP for deployment.
## realtimeposeclassifier.py

**Purpose**
- Run live pose classification from your webcam using a previously trained model. Draws the skeleton, highlights the joints the model uses, and overlays the prediction and its confidence.
**Model loading**
- If `--model` is not provided, the script auto-searches common filenames in the project root:
  - `poseclassifierrandomforest.pkl`
  - `poseclassifierlogistic.pkl`
  - `poseclassifierdistilledrf.pkl`
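Passing the model explicitly skips the auto-search; for example (camera index 0 is typically the built-in webcam, but this varies by machine):

```shell
python realtimeposeclassifier.py --model poseclassifierrandomforest.pkl --camera 0
```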
**Keyboard controls**
- Q: Quit
- L: Toggle landmark keypoints
- C: Toggle pose connections
- R: Reset prediction history (smoothing window)
**Notes**
- Uses the same preprocessing as training (nose-relative coordinates ×100, 2-decimal rounding, StandardScaler).
- For smoother predictions, a small history window is used to compute a stable label and an average confidence.
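That smoothing can be sketched roughly like this (the window size and tie-breaking behavior are assumptions, not the script's exact implementation):

```python
from collections import Counter, deque

class PredictionSmoother:
    """Majority-vote label over a sliding window, with that label's average confidence."""

    def __init__(self, window=10):
        self.history = deque(maxlen=window)  # (label, confidence) pairs

    def update(self, label, confidence):
        self.history.append((label, confidence))
        stable, _ = Counter(l for l, _ in self.history).most_common(1)[0]
        confs = [c for l, c in self.history if l == stable]
        return stable, sum(confs) / len(confs)
```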
## Project layout

- `YogaDataSet/data/` — Parquet files used by `extractimages.py`.
- `TrainData/train|test/<label>/` — Image folders produced by extraction.
- `PoseData/<label>/` — Landmark JSONs generated by `posedetection.py`.
- `models/` — Example trained/exported models and label mappings.
- `confusionmatrix.png` — Saved confusion matrix plots (when enabled in the training script).
## Troubleshooting

- MediaPipe install issues on macOS: ensure you're using a supported Python version and the latest pip; try reinstalling `mediapipe` and `opencv-python`.
- Camera cannot open: try a different `--camera` index, close other apps using the camera, or allow camera permissions for Python in macOS Privacy settings.
- Model not found in the real-time script: pass `--model` with an explicit path to your `.pkl` file.
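For the reinstall case, one sequence that often helps (run inside your activated virtual environment):

```shell
# Upgrade pip first, then force a clean reinstall of the vision dependencies
python3 -m pip install --upgrade pip
python3 -m pip install --force-reinstall mediapipe opencv-python
```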