Source Paper
Using AI Algorithms and Machine Learning in the Analysis of a Bio-Purification Method (Therapeutic Emesis, Known as “Vamana Karma”): Protocol for a Mixed Methods Study
Rani P, Kalra S, Singh S, David R, Gupta AR et al.
JMIR Res Protoc • 2026
Use this page as an execution guide, then fall back to the source paper whenever you need exact exclusions, dosing details, or assay-specific caveats.
Confirm first
- Verify the participant cohort, intervention setup, and collection timepoints against the source paper.
- Check that every direct vendor link matches the exact specification your lab plans to run.
Use the page like this
- Work through the protocol steps in order and use the inline vendor chips only when you need to source or verify an item.
- Jump to Experimental Context for readouts, data shape, and analysis flow before planning downstream analysis.
Protocol Steps
Start here. The step list is optimized for running the experiment, with direct vendor links available inline when you need to source a cited item.
Camera positioning and setup
Position high-definition cameras (minimum 1080p) at forehead level, facing the participant, to record both vomitus events and facial expressions. Set up a monochromatic green backdrop and standardized artificial lighting.
Evidence from paper:
“High-definition cameras (minimum 1080p resolution) will be positioned at forehead level facing the participant”
Participant preparation
Dress participants in green gowns for background segmentation optimization during video capture.
Evidence from paper:
“Participants who undergo TE will be dressed in a green gown for the same purpose”
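The green gown and backdrop exist so the subject can be separated from the background by simple chroma-keying. A minimal sketch of the underlying green-dominance test, with an illustrative threshold not taken from the paper:

```python
def is_chroma_green(r, g, b, margin=40):
    """Return True if an RGB pixel is dominated by green, the basic
    test behind chroma-key (green-screen) background segmentation.
    The `margin` threshold is an illustrative assumption."""
    return g > r + margin and g > b + margin

# Saturated backdrop green vs. a typical skin tone:
backdrop_pixel = is_chroma_green(30, 200, 40)   # True
skin_pixel = is_chroma_green(210, 170, 140)     # False
```

In practice this per-pixel rule would be applied in a vectorized image library, but the threshold logic is the same.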
Video data capture
Record video data throughout the vamana karma procedure to capture vomitus events, facial expressions, and physical gestures.
Evidence from paper:
“Video data will be captured throughout the procedure to record vomitus events, facial expressions”
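Continuous recording produces far more frames than need annotating, so a fixed sampling interval is typically used to pick frames for downstream analysis. A sketch of that index arithmetic, with fps and sampling interval as illustrative assumptions (the paper does not specify them):

```python
def frame_indices(duration_s, fps=30, sample_every_s=1.0):
    """Indices of frames to extract when sampling a recording of
    `duration_s` seconds, captured at `fps`, once per `sample_every_s`
    seconds. fps and interval are illustrative assumptions."""
    step = int(round(fps * sample_every_s))
    return list(range(0, int(duration_s * fps), step))

idx = frame_indices(duration_s=5, fps=30, sample_every_s=1.0)
# frames 0, 30, 60, 90, 120 for a 5 s clip at 30 fps
```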
Facial expression analysis using DeepFace
Analyze facial expressions using DeepFace framework to detect emotions such as disgust, sadness, tiredness, and anger by mapping facial features including eyebrows, eyes, and mouth.
Evidence from paper:
“Facial expression analysis will be conducted using DeepFace (Meta Platforms), an open-source facial recognition”
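Per-frame emotion calls then need to be summarized across the procedure. A minimal sketch of that aggregation, assuming per-frame dicts shaped like the `dominant_emotion` field that the open-source deepface package's `DeepFace.analyze` returns (the sample values here are hypothetical, and the real per-frame results would come from calls such as `DeepFace.analyze(frame, actions=["emotion"])`):

```python
from collections import Counter

# Hypothetical per-frame results, shaped like deepface analyze output
# (only the `dominant_emotion` field is used here).
frame_results = [
    {"dominant_emotion": "disgust"},
    {"dominant_emotion": "disgust"},
    {"dominant_emotion": "sad"},
    {"dominant_emotion": "angry"},
]

def emotion_profile(results):
    """Fraction of frames assigned to each dominant emotion."""
    counts = Counter(r["dominant_emotion"] for r in results)
    total = sum(counts.values())
    return {emotion: n / total for emotion, n in counts.items()}

profile = emotion_profile(frame_results)
# disgust 0.5, sad 0.25, angry 0.25
```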
Frame extraction and annotation
Process video data through frame extraction and manual annotation to create labeled datasets. Annotate approximately 700-800 cropped images for facial expression classification.
Evidence from paper:
“approximately 700 to 800 cropped images will be classified as per classic parameters”
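Manual annotation is usually tracked as a manifest mapping each cropped image to its label, which then feeds the classifier. A stdlib-only sketch; the filenames and labels are hypothetical, standing in for the ~700 to 800 annotated crops the paper describes:

```python
import csv
import io

# Hypothetical annotations for a few cropped face images.
annotations = [
    ("frame_0001_face.png", "disgust"),
    ("frame_0002_face.png", "sad"),
    ("frame_0003_face.png", "neutral"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["image", "label"])  # header row
writer.writerows(annotations)
manifest = buf.getvalue()
```

In a real pipeline the manifest would be written to disk and versioned alongside the cropped images.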
Temporal mapping
Time-stamp and map detected facial expressions across the procedure timeline to correlate with emesis patterns and patient responses.
Evidence from paper:
“classified vomitus types, and facial expressions will be time-stamped and mapped across the procedure timeline”
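Correlating expressions with emesis patterns comes down to comparing two timestamped series. A minimal sketch of one way to do that, computing the signed lag from each detected expression to its nearest emesis event; all timestamps are hypothetical:

```python
def nearest_event_lag(expression_ts, emesis_ts):
    """For each expression timestamp (seconds into the procedure),
    return the signed lag to the closest emesis event: negative means
    the expression preceded the event."""
    lags = []
    for t in expression_ts:
        closest = min(emesis_ts, key=lambda e: abs(e - t))
        lags.append(t - closest)
    return lags

# Hypothetical timeline: expressions at 58 s and 130 s,
# emesis events at 60 s and 125 s.
lags = nearest_event_lag([58, 130], [60, 125])
# [-2, 5]: one expression 2 s before an event, one 5 s after
```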