Source Paper
Using AI Algorithms and Machine Learning in the Analysis of a Bio-Purification Method (Therapeutic Emesis, Known as “Vamana Karma”): Protocol for a Mixed Methods Study
Rani P, Kalra S, Singh S, David R, Gupta AR et al.
JMIR Res Protoc • 2026
Facial Expression Analysis
Objective: to develop an AI-based framework for the objective assessment of facial expressions during vamana karma (therapeutic emesis) in Ayurveda, analyzing the emotions of anger, fear, surprise, sadness, and happiness through facial feature mapping.
Equipment
Gather these items before starting the experiment.
Protocol Steps
Camera positioning and setup
Position high-definition cameras (minimum 1080p resolution) at forehead level facing the participant to record both vomitus events and facial expressions. Set up a monochromatic green backdrop and standardized artificial lighting.
Evidence from paper:
“High-definition cameras (minimum 1080p resolution) will be positioned at forehead level facing the participant”
Participant preparation
Dress participants in green gowns so that, like the green backdrop, they can be segmented out of the video frames during processing.
Evidence from paper:
“Participants who undergo TE will be dressed in a green gown for the same purpose”
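The green gown and backdrop enable simple chroma-key segmentation. Below is a minimal sketch of the idea: a pixel is treated as gown/backdrop when its green channel clearly dominates red and blue. The margin threshold and pixel values are illustrative assumptions, not parameters from the paper.

```python
# Chroma-key sketch: drop dominantly green pixels (gown/backdrop),
# keeping foreground pixels such as the participant's face.
# The margin of 40 is an assumed, illustrative threshold.

def is_green(pixel, margin=40):
    """Return True if an (R, G, B) pixel is dominantly green."""
    r, g, b = pixel
    return g > r + margin and g > b + margin

def segment_foreground(pixels, margin=40):
    """Keep only non-green pixels from a list of (R, G, B) tuples."""
    return [p for p in pixels if not is_green(p, margin)]

# Three illustrative pixels: red-ish, bright green, skin-toned.
frame = [(200, 30, 40), (20, 220, 30), (180, 170, 160)]
print(segment_foreground(frame))  # the green pixel is removed
```

A production pipeline would typically apply this per-pixel test vectorized over whole frames (e.g., with NumPy or OpenCV) rather than over Python lists.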
Video data capture
Record video data throughout the vamana karma procedure to capture vomitus events, facial expressions, and physical gestures.
Evidence from paper:
“Video data will be captured throughout the procedure to record vomitus events, facial expressions”
Facial expression analysis using DeepFace
Analyze facial expressions using DeepFace framework to detect emotions such as disgust, sadness, tiredness, and anger by mapping facial features including eyebrows, eyes, and mouth.
Evidence from paper:
“Facial expression analysis will be conducted using DeepFace (Meta Platforms), an open-source facial recognition”
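DeepFace's emotion model scores each detected face over seven classes (angry, disgust, fear, happy, sad, surprise, neutral). The sketch below shows how a dominant label could be selected from such a score dict; the score values are fabricated for illustration, and the commented-out `DeepFace.analyze` call assumes the `deepface` package is installed.

```python
# Sketch of per-frame emotion labeling in the style of DeepFace output.
# In the real pipeline one would call (assuming `deepface` is installed):
#   from deepface import DeepFace
#   results = DeepFace.analyze(img_path="frame.jpg", actions=["emotion"])
# which yields, per detected face, a score dict over seven emotion classes.

DEEPFACE_EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def dominant_emotion(scores):
    """Return the highest-scoring label from a DeepFace-style score dict."""
    return max(scores, key=scores.get)

# Illustrative score dict for one frame (values are made up):
scores = {"angry": 5.0, "disgust": 62.0, "fear": 3.0, "happy": 0.5,
          "sad": 20.0, "surprise": 1.5, "neutral": 8.0}
print(dominant_emotion(scores))  # disgust
```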
Frame extraction and annotation
Process video data through frame extraction and manual annotation to create labeled datasets. Annotate approximately 700-800 cropped images for facial expression classification.
Evidence from paper:
“approximately 700 to 800 cropped images will be classified as per classic parameters”
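Manual annotation is usually recorded as a manifest pairing each cropped image with its label. A minimal stdlib-only sketch is shown below; the filenames, labels, and CSV schema are illustrative assumptions, since the paper does not specify the annotation format.

```python
# Sketch of a labeled-dataset manifest for the ~700-800 cropped face images.
# Filenames, labels, and the two-column CSV schema are illustrative.
import csv
import io

annotations = [
    ("frame_0001_face.jpg", "disgust"),
    ("frame_0002_face.jpg", "sadness"),
    ("frame_0003_face.jpg", "tiredness"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["image", "label"])  # header row
writer.writerows(annotations)        # one row per annotated crop
manifest = buf.getvalue()
print(manifest)
```

In practice the manifest would be written to disk and fed to a training or evaluation loop as the ground-truth labels for the classifier.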
Temporal mapping
Time-stamp and map detected facial expressions across the procedure timeline to correlate with emesis patterns and patient responses.
Evidence from paper:
“classified vomitus types, and facial expressions will be time-stamped and mapped across the procedure timeline”
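The temporal mapping step amounts to converting frame indices to timestamps (frame index divided by frame rate) and intersecting detected expressions with emesis event windows. The sketch below assumes a 30 fps recording; the event times and detections are illustrative, not data from the study.

```python
# Sketch of temporal mapping: convert frame indices to seconds and
# list which detected expressions fall inside each emesis event window.
# The 30 fps rate and all times/labels below are illustrative assumptions.

FPS = 30.0

def timestamp(frame_index, fps=FPS):
    """Seconds from procedure start for a given frame index."""
    return frame_index / fps

def expressions_during(events, detections, fps=FPS):
    """Map each (start_s, end_s) event to expression labels detected inside it."""
    out = {}
    for start, end in events:
        out[(start, end)] = [label for frame, label in detections
                             if start <= timestamp(frame, fps) <= end]
    return out

detections = [(300, "disgust"), (450, "sadness"), (900, "neutral")]  # (frame, label)
events = [(9.0, 16.0)]  # one emesis episode, t = 9 s to 16 s
print(expressions_during(events, detections))
# {(9.0, 16.0): ['disgust', 'sadness']}
```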