Source Paper
Using AI Algorithms and Machine Learning in the Analysis of a Bio-Purification Method (Therapeutic Emesis, Known as “Vamana Karma”): Protocol for a Mixed Methods Study
Rani P, Kalra S, Singh S, David R, Gupta AR et al.
JMIR Res Protoc • 2026
Physical Gesture Analysis
Objective: AI-based framework for objective assessment of physical gestures, facial expressions, and hand/eye movements during vamana karma (therapeutic emesis) in Ayurveda
This is a physical gesture analysis protocol conducted in human participants. The procedure involves 6 procedural steps, 3 equipment items, and 4 materials. Extracted from a 2026 paper published in JMIR Res Protoc.
Model and subjects
Species: Human • Strain: N/A • Sex: Not specified • Age: 18-65 years • Sample size: 50
Study window
~1.2 hours hands-on
Core workflow
Camera positioning and setup • Video data capture • Frame extraction and annotation
Primary readouts
- Facial expression detection accuracy (disgust, sadness, tiredness, anger)
- Physical gesture analysis (hand and eye movements)
- Temporal correlation of emesis patterns and patient responses
- Frame-level accuracy for gesture identification
Key equipment and reagents
- High-definition cameras (minimum 1080p resolution)
- Standardized artificial lighting
- Monochromatic green backdrop

Use this page as an execution guide, then fall back to the source paper whenever you need exact exclusions, dosing details, or assay-specific caveats.
Confirm first
- Verify the study population, intervention setup, and collection timepoints against the source paper.
- Check that every direct vendor link matches the exact specification your lab plans to run.
Use the page like this
- Work through the protocol steps in order and use the inline vendor chips only when you need to source or verify an item.
- Jump to Experimental Context for readouts, data shape, and analysis flow before planning downstream analysis.
Protocol Steps
Start here. The step list is optimized for running the experiment, with direct vendor links available inline when you need to source a cited item.
Camera positioning and setup
Position high-definition cameras at forehead level facing the participant with standardized artificial lighting and monochromatic green backdrop
View evidence from paper
“High-definition cameras (minimum 1080p resolution) will be positioned at forehead level facing the participant”
Video data capture
Record video throughout the vamana karma procedure to capture vomitus events, facial expressions, and physical gestures
View evidence from paper
“Video data will be captured throughout the procedure to record vomitus events, facial expressions, and physical gestures”
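Continuous recording means every downstream event must be anchored to procedure time via its frame index. A minimal sketch of that bookkeeping (the 30 fps frame rate is an assumption for illustration; the paper specifies only 1080p resolution):

```python
def frame_timestamp(frame_index: int, fps: float = 30.0) -> float:
    """Elapsed procedure time in seconds for a given frame index."""
    if fps <= 0:
        raise ValueError("fps must be positive")
    return frame_index / fps

def total_frames(duration_s: float, fps: float = 30.0) -> int:
    """Approximate frame count for a recording of the given duration."""
    return int(duration_s * fps)

# At an assumed 30 fps, frame 900 falls 30 s into the procedure,
# and the ~1.2 h session yields on the order of 130,000 frames.
print(frame_timestamp(900))
print(total_frames(1.2 * 3600))
```

Keeping this index-to-time mapping explicit is what later allows detected events to be placed on a common procedure timeline.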
Frame extraction and annotation
Process video data through frame extraction and manual annotation to create labeled datasets
View evidence from paper
“Video data will be processed through frame extraction and manual annotation”
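The extraction-and-annotation step can be sketched as subsampling frames and pairing each kept frame with a label record. The sampling step and label vocabulary below are illustrative assumptions, not values from the paper:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    frame_index: int
    label: str  # e.g. "vomitus_event", "disgust", "neutral" (hypothetical vocabulary)

def sample_frames(total_frames: int, step: int) -> list[int]:
    """Indices of frames retained for manual annotation (every `step`-th frame)."""
    if step < 1:
        raise ValueError("step must be >= 1")
    return list(range(0, total_frames, step))

# Keep every 15th frame of a 150-frame clip for the labeling pass.
indices = sample_frames(150, 15)
annotations = [Annotation(i, "unlabeled") for i in indices]
print(len(annotations))
```

Subsampling keeps the manual-annotation workload tractable while preserving the frame indices needed to map labels back onto the full video.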
Facial expression analysis
Analyze patients' facial gestures by mapping various features of the face such as eyebrows, eyes, and mouth to emotions of anger, fear, surprise, sadness, and happiness
View evidence from paper
“The patients' facial gestures will be analyzed by mapping various features of the face, such as the eyebrows, eyes, and mouth, to the emotions of anger, fear, surprise, sadness, and happiness”
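To make the feature-to-emotion mapping concrete, here is a toy rule-based classifier over normalized facial measurements. The feature names and thresholds are placeholders invented for illustration; the protocol would learn such a mapping from annotated frames rather than hand-code it:

```python
def classify_expression(brow_lower: float, mouth_open: float, lip_corner_pull: float) -> str:
    """Map normalized facial features (0-1) to an emotion label.

    Illustrative rules only: brow lowering suggests anger, a wide-open
    mouth suggests surprise, pulled lip corners suggest happiness.
    """
    if lip_corner_pull > 0.6:
        return "happiness"
    if brow_lower > 0.6 and mouth_open < 0.3:
        return "anger"
    if mouth_open > 0.7:
        return "surprise"
    if brow_lower < 0.3 and lip_corner_pull < 0.2:
        return "sadness"
    return "neutral"

# Lowered brows with a closed mouth and flat lip corners reads as anger.
print(classify_expression(brow_lower=0.8, mouth_open=0.1, lip_corner_pull=0.0))
```

The point of the sketch is the structure of the mapping (face regions in, discrete emotion out), not the specific rules.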
Physical gesture analysis
Analyze physical gestures including hand and eye movements using pose estimation models
View evidence from paper
“Physical gestures, including hand and eye movements, will also be analyzed using pose estimation models”
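Pose-estimation models emit per-frame keypoint coordinates; gesture analysis then reduces those trajectories to motion signals. A minimal sketch of that post-processing step, assuming a single tracked keypoint such as a wrist landmark:

```python
import math

Point = tuple[float, float]

def movement_per_frame(trajectory: list[Point]) -> list[float]:
    """Euclidean displacement of a tracked keypoint (e.g. a wrist
    landmark from a pose-estimation model) between consecutive frames."""
    return [
        math.dist(trajectory[i], trajectory[i + 1])
        for i in range(len(trajectory) - 1)
    ]

# Wrist keypoint drifting right, then jumping upward (a sudden gesture).
wrist = [(0.0, 0.0), (1.0, 0.0), (1.0, 3.0)]
print(movement_per_frame(wrist))
```

Spikes in such a displacement signal are one simple cue for flagging candidate gesture events for review against the annotated frames.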
Temporal mapping
Time-stamp and map detected events, classified vomitus types, and facial expressions across the procedure timeline
View evidence from paper
“Detected events, classified vomitus types, and facial expressions will be time-stamped and mapped across the procedure timeline”
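The temporal-mapping step amounts to a windowed join between time-stamped event streams. A minimal sketch, with the 5-second window chosen arbitrarily for illustration:

```python
def expressions_near_event(
    event_time: float,
    expressions: list[tuple[float, str]],  # (timestamp_s, expression_label)
    window_s: float = 5.0,
) -> list[str]:
    """Expression labels recorded within +/- window_s seconds of an event."""
    return [
        label
        for t, label in expressions
        if abs(t - event_time) <= window_s
    ]

# Which expressions surround a vomitus event time-stamped at 30 s?
timeline = [(10.0, "neutral"), (29.0, "disgust"), (31.0, "tiredness"), (60.0, "neutral")]
print(expressions_near_event(30.0, timeline))
```

Repeating this join for every detected event yields the temporal correlation of emesis patterns and patient responses listed under the primary readouts.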