EMPATHY at scale
Respond to users' expressions and speech in real time with supercharged dynamic content
Before the end of this century, humans will talk more to sentient machines than to other humans.
We need a better understanding of human emotions.

Multimodal emotion detection
Our new approach combines cues from multiple co-occurring modalities (such as face, text, and speech) and is more robust than other methods to sensor noise in any individual modality. A data-driven multiplicative fusion method combines the modalities, learning to emphasize the more reliable cues and suppress the others on a per-sample basis.
Because we operate in real time, we can directly feed the results back into your product or service, so it can respond immediately.
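The idea of per-sample multiplicative fusion can be sketched as follows. This is an illustrative toy, not the SDK's actual API: it assumes each modality emits a class-probability vector, and the damping exponent `alpha` is a hypothetical parameter that softens overconfident modalities so near-uniform (unreliable) ones contribute little.

```python
import numpy as np

def multiplicative_fusion(probs, alpha=0.1):
    """Combine per-modality class probabilities multiplicatively.

    probs: list of 1-D arrays (e.g. face, text, speech), each summing to 1.
    A near-uniform vector from a noisy modality barely changes the product,
    so unreliable cues are suppressed on a per-sample basis.
    """
    fused = np.ones_like(probs[0])
    for p in probs:
        # Exponent alpha < 1 damps overconfident modalities;
        # the epsilon avoids zeros annihilating the product.
        fused *= np.power(p + 1e-8, alpha)
    return fused / fused.sum()

# Example: the face cue is confident, the speech cue is noisy/uniform.
face = np.array([0.7, 0.2, 0.1])        # e.g. happy, sad, angry
text = np.array([0.5, 0.3, 0.2])
speech = np.array([0.34, 0.33, 0.33])   # near-uniform: contributes little
fused = multiplicative_fusion([face, text, speech])
```

Here the fused distribution still favors the class that the confident modalities agree on, while the noisy speech channel has almost no effect.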
[Pipeline diagram: Face location and detection → Expression feature extraction; Speech signal preprocessing → Prosodic + lexical extraction; the two feature streams feed Classifier 1 and Classifier 2, whose outputs pass through Integration to produce the Emotional Detection Results]
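The pipeline stages above can be sketched as a minimal skeleton. Every function here is a named placeholder for the corresponding diagram stage, not the SDK's real interface:

```python
# Illustrative skeleton of the two-stream emotion pipeline.
# Each function is a placeholder for the real detector/extractor/classifier.

def detect_face(frame):
    return frame  # placeholder: face location and detection

def extract_expression_features(face):
    return [1.0, 0.0]  # placeholder: expression features

def preprocess_speech(signal):
    return signal  # placeholder: speech signal preprocessing

def extract_prosodic_lexical(signal):
    return [0.0, 1.0]  # placeholder: prosodic + lexical features

def classify(features):
    # placeholder classifier: normalize features into "probabilities"
    total = sum(features) or 1.0
    return [f / total for f in features]

def integrate(p1, p2):
    # placeholder integration step: average the two classifiers' outputs
    return [(a + b) / 2 for a, b in zip(p1, p2)]

def detect_emotion(frame, audio):
    visual = classify(extract_expression_features(detect_face(frame)))
    acoustic = classify(extract_prosodic_lexical(preprocess_speech(audio)))
    return integrate(visual, acoustic)  # emotional detection results
```

The skeleton only fixes the data flow: two independent feature streams, one classifier each, and a final integration step that fuses their outputs.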
The fastest, lightest, and most complete real-time
multimodal analysis SDK
Easy to integrate
Comes with all the samples, guides and documentation you'll need to make the most of our technology in the least time
Fast & lightweight
Great results, with speed, accuracy, and precision as yet unmatched on the market
Fully GDPR Compliant
All personal data is processed directly and destroyed immediately
A few use cases of our SDK
Get real emotional feedback from your audience, far beyond any chat interaction.
Live Interaction & Feedback
Develop a completely unique digital personality shaped by its interactions with users
Human-Robot interaction systems
Adjust your game in real time to your users' reactions and emotional feedback
Change dynamic game content
Virtual caregivers capable of feeling and understanding to create empathy at scale
Emotional healthcare
Request For Early Access
MYNT GmbH
Hohe Bleichen 28
20354 Hamburg
© 2021 MYNT