Dia is a 1.6-billion-parameter text-to-speech (TTS) model developed by Nari Labs that generates highly realistic dialogue directly from a text transcript. The model supports control of emotion and intonation and can produce nonverbal sounds such as laughter and coughing. Its pretrained weights are hosted on Hugging Face and currently support English generation. The model is aimed at research and educational use and can help drive the development of dialogue generation technology.
Target users:
The product is suitable for researchers, developers, and educators: it provides a powerful platform for exploring and developing dialogue generation technology, producing high-quality speech for application scenarios such as virtual assistants, game development, and multimedia content creation.
Example usage scenarios:
Generate dialogue content for a virtual assistant.
Create distinctive voices for game characters.
Produce voice-over narration for educational videos.
Product Features:
Generates dialogue with speakers distinguished by [S1] and [S2] tags.
Produces nonverbal cues such as (laughs), (coughs), etc.
Voice cloning: upload an audio sample to clone a voice.
Includes a Gradio UI for interactive use.
Provides pretrained model weights and inference code to support research.
Supports audio conditioning to control emotion and intonation.
Maintains speaker consistency when generating multiple voices.
Generates audio in real time on enterprise-grade GPUs.
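To make the input format implied by the feature list above concrete, the sketch below assembles a transcript with [S1]/[S2] speaker tags and parenthesized nonverbal cues. The helper function name and the exact cue spellings are assumptions for illustration, not part of Dia's documented API.

```python
# Hypothetical helper (not part of the Dia repo): build a Dia-style
# transcript with [S1]/[S2] speaker tags and parenthesized nonverbal
# cues such as (laughs); exact spellings may differ in the real model.

def build_transcript(turns):
    """turns: list of (speaker, text) pairs, where speaker is 1 or 2."""
    parts = []
    for speaker, text in turns:
        if speaker not in (1, 2):
            raise ValueError("Dia distinguishes two speakers: 1 or 2")
        parts.append(f"[S{speaker}] {text.strip()}")
    return " ".join(parts)

script = build_transcript([
    (1, "Hi, have you tried the new TTS model?"),
    (2, "I have! (laughs) It even handles nonverbal sounds."),
])
print(script)
# -> [S1] Hi, have you tried the new TTS model? [S2] I have! (laughs) It even handles nonverbal sounds.
```

The resulting string is what would be pasted into the text field of the Gradio UI or passed to the inference code.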
Usage tutorial:
1. Clone the repository from GitHub: git clone https://github.com/nari-labs/dia.git
2. Change into the directory: cd dia
3. Install the dependencies: pip install -e .
4. Start the Gradio UI: python app.py
5. Enter text in the UI and generate audio.
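Before submitting a script in the UI, it can help to sanity-check that the text actually contains speaker tags. The check below is an illustrative pre-flight validation written for this article, not part of the Dia codebase:

```python
import re

# Illustrative check (not part of the Dia repo): verify that a script
# contains [S1]/[S2] speaker tags before submitting it in the Gradio UI.
TURN = re.compile(r"\[S([12])\]")

def validate_script(script):
    """Return the sequence of speaker numbers, or raise if no tags are found."""
    speakers = [int(m) for m in TURN.findall(script)]
    if not speakers:
        raise ValueError("script must contain at least one [S1]/[S2] tag")
    return speakers

print(validate_script("[S1] Hello there. [S2] Hi! (laughs) [S1] Good to see you."))
# -> [1, 2, 1]
```

A quick check like this catches untagged scripts before spending GPU time on generation.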