Generates speech from text and returns a JSON object that contains a base64-encoded audio string and optionally word-level durations (timestamps). This endpoint waits for the entire synthesis before responding, so it is not ideal for latency-sensitive applications.
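A request to this endpoint can be sketched as follows. The endpoint URL, header name, and JSON field names below are assumptions for illustration; check them against LMNT's published reference before use.

```python
# Sketch of assembling a synthesis request for this endpoint.
# URL, header name, and field names are assumptions, not verified values.
API_URL = "https://api.lmnt.com/v1/ai/speech"  # assumed endpoint path

def build_speech_request(api_key, text, voice, fmt="mp3", language="en",
                         return_durations=False, seed=None):
    """Assemble the URL, headers, and JSON body for a synthesis call."""
    payload = {
        "text": text,                       # max 5000 characters per request
        "voice": voice,                     # voice id from List voices / Voice info
        "format": fmt,                      # streamable formats encode faster on streaming endpoints
        "language": language,               # two-letter ISO 639-1 code
        "return_durations": return_durations,
    }
    if seed is not None:
        payload["seed"] = seed              # omit for a random take
    headers = {"X-API-Key": api_key}        # assumed auth header name
    return API_URL, headers, payload
```

The returned pieces would then be sent with any HTTP client, e.g. `requests.post(url, headers=headers, json=payload)`.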
Authorizations
Your API key; get it from your LMNT settings.
Parameters
The text to synthesize; max 5000 characters per request (including spaces).
The voice id of the voice to use; voice ids can be retrieved by calls to List voices or Voice info.
When set to true, the generated speech will also be saved to your clip library in the LMNT playground.
The desired output format of the audio. If you are using a streaming endpoint, you'll generate audio faster by selecting a streamable format since chunks are encoded and returned as they're generated. For non-streamable formats, the entire audio will be synthesized before encoding.
The desired language, as a two-letter ISO 639-1 code. Defaults to automatic language detection; specifying the language is recommended for faster generation.
The model to use for synthesis.
If set to true, the response will contain a durations object.
The desired output sample rate in Hz. Defaults to 24000 for all formats except mulaw which defaults to 8000.
Seed used to specify a different take; defaults to a random value.
Influences how expressive and emotionally varied the speech becomes. Lower values (like 0.3) create more neutral, consistent speaking styles. Higher values (like 1.0) allow for more dynamic emotional range and speaking styles.
Controls the stability of the generated speech. A lower value (like 0.3) produces more consistent, reliable speech. A higher value (like 0.9) gives more flexibility in how words are spoken, but might occasionally produce unusual intonations or speech patterns.
Returns
An object where each SpeechGenerateDetailedResponse contains:
The base64-encoded audio file; the format is determined by the format parameter.
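Since the audio field is a base64 string rather than raw bytes, it must be decoded before playback or saving. The response shape below is illustrative:

```python
import base64

# Illustrative response: the "audio" field is a base64-encoded string.
response_json = {"audio": base64.b64encode(b"ID3...").decode("ascii")}

# Decode to raw bytes before writing to disk; the file extension should
# match the format requested in the original request.
audio_bytes = base64.b64decode(response_json["audio"])
with open("speech.mp3", "wb") as f:
    f.write(audio_bytes)
```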
A JSON object giving the spoken duration of each synthesized input element (words and non-words such as spaces and punctuation).
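For the input string "Hello world!", the durations object might look like the following. The field names (`text`, `start`, `duration`) and the millisecond values are illustrative assumptions about the response shape, not actual output:

```python
# Hypothetical durations object for the input "Hello world!".
# Field names and millisecond values are illustrative assumptions.
durations = [
    {"text": "Hello", "start": 0,   "duration": 420},
    {"text": " ",     "start": 420, "duration": 60},
    {"text": "world", "start": 480, "duration": 500},
    {"text": "!",     "start": 980, "duration": 120},
]

# Each element covers one input element, including spaces and punctuation;
# summing the durations gives the total spoken length.
total_ms = sum(d["duration"] for d in durations)
```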
The seed used to generate this speech; can be used to replicate this output take by resynthesizing the same text with this seed.
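Replicating a take amounts to copying the seed from a previous response into the next request body alongside the same text and voice. The payload field names and values here are hypothetical:

```python
# Hypothetical previous response carrying the seed of a take we liked.
previous_response = {"seed": 1234567}  # illustrative value

# Reuse that seed with the same text and voice to reproduce the take
# (field names assumed from this reference page).
replay_payload = {
    "text": "Hello world!",
    "voice": "ava",                      # hypothetical voice id
    "seed": previous_response["seed"],
}
```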