The assistant's speech rate.
The number of seconds the assistant waits before speaking (default 0.4 s).
Determines whether the customer's speech is considered complete (endpointing). Useful for detecting mid-thought pauses. (default False = 0). Experimental.
Set to True to enable smart endpointing.
The minimum number of seconds to wait after transcription ending with punctuation before sending a request to the model. (default 0.1 s).
The minimum number of seconds to wait after transcription ending without punctuation before sending a request to the model. (default 1.5 s).
The minimum number of seconds to wait after transcription ending with a number before sending a request to the model. (default 0.4 s).
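The three transcription-based delays above can be sketched as a single selection rule. This is a minimal illustration using the defaults stated above; the constant names and the exact punctuation set are assumptions, not a real API.

```python
import re

# Illustrative defaults from the descriptions above.
# These names are hypothetical, chosen for readability.
ON_PUNCTUATION_SECONDS = 0.1     # transcription ends with punctuation
ON_NO_PUNCTUATION_SECONDS = 1.5  # transcription ends without punctuation
ON_NUMBER_SECONDS = 0.4          # transcription ends with a number

def endpointing_wait(transcript: str) -> float:
    """Pick how long to wait before sending a request to the model."""
    stripped = transcript.rstrip()
    if re.search(r"\d$", stripped):
        return ON_NUMBER_SECONDS
    if re.search(r"[.!?,;:]$", stripped):
        return ON_PUNCTUATION_SECONDS
    return ON_NO_PUNCTUATION_SECONDS
```

The number check runs first, since a trailing digit (e.g. a phone number being read out) often means the user is still mid-utterance and a moderate wait is safer than the short punctuation delay.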
Deprecated.
The time the assistant waits after the user stops talking before responding (default 0.1 s).
The number of seconds to wait after punctuation in the user's speech before sending the request to the assistant. Setting this too low can result in multiple requests being sent for a single user interaction.
The number of words the user must say before the assistant stops talking.
The number of seconds the user must speak before the assistant stops talking. Uses the VAD (Voice Activity Detection) spike to determine when the user has started speaking.
The number of seconds to wait before the assistant starts talking again after being interrupted.
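The interruption settings above can be sketched as two independent triggers: a word-count threshold and a VAD speech-duration threshold, either of which stops the assistant. Treating them as independent OR-ed triggers is an assumption for illustration, and the field names below are hypothetical.

```python
# Hypothetical interruption settings mirroring the descriptions above.
interruption_config = {
    "num_words_to_interrupt": 2,  # words the user must say to stop the assistant
    "voice_seconds": 0.2,         # seconds of VAD-detected speech to stop the assistant
    "backoff_seconds": 1.0,       # wait before the assistant resumes after being interrupted
}

def should_interrupt(user_words: int, vad_speech_seconds: float, cfg: dict) -> bool:
    """Assumed behavior: the assistant stops once either threshold is crossed."""
    return (user_words >= cfg["num_words_to_interrupt"]
            or vad_speech_seconds >= cfg["voice_seconds"])
```

With this sketch, a brief filler ("uh") below both thresholds would not interrupt the assistant, while two words or a sustained VAD spike would.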
If left empty, the model will generate the message.
This message will be spoken by the assistant at the beginning of the conversation and cannot be interrupted.
More choices soon.