Multimodal Conversation (MPAI-MMC) is an MPAI Standard comprising five Use Cases, all sharing the use of artificial intelligence (AI) to enable a form of human-machine conversation that emulates human-human conversation in completeness and intensity:

1. "Conversation with Emotion" (CWE), supporting audio-visual conversation with a machine impersonated by a synthetic voice and an animated face.
2. "Multimodal Question Answering" (MQA), supporting requests for information about a displayed object.
3. Three Use Cases supporting conversational translation applications. In each Use Case, users can specify whether speech or text is used as input and, if speech is used, whether their speech features are preserved in the interpreted speech:
   a. "Unidirectional Speech Translation" (UST)
   b. "Bidirectional Speech Translation" (BST)
   c. "One-to-Many Speech Translation" (MST)