What’s new in v1.1.0
🚀 Updates & Improvements
📞 Caller ID Fix for Outbound Forwarded Calls
When an assistant-initiated outbound call is forwarded to another recipient, the user’s caller ID (the user’s phone number) is now preserved. The receiving party no longer sees the forwarding assistant’s caller ID.
🎨 UI Enhancements
- Call Logs: Error Commands/Utterances are now visually highlighted in red for quick identification.
- Customer Logs: Updated layout and improved readability for streamlined troubleshooting.
- Assistant STT Configuration: UI support added for Deepgram Flux model configuration.
🗣️ Assistant Behavior Enhancements
- Added support for the <nonInterruptible> tag to ensure uninterrupted prompt/audio playback by the assistant.
- Webhook listeners now emit LLM error events, allowing better error tracking and observability (see the sketch below).
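As a rough illustration of consuming these events, here is a minimal listener sketch in Python using Flask. The endpoint path, the event type name (llm.error), and the payload fields are assumptions made for the example, not the documented webhook schema.

```python
# Minimal sketch of a webhook listener that records LLM error events.
# The event type name ("llm.error") and payload fields are illustrative
# assumptions; confirm the actual schema from your webhook payloads.
import logging

from flask import Flask, request, jsonify

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)


@app.route("/interactly/webhooks", methods=["POST"])
def handle_event():
    event = request.get_json(force=True) or {}

    if event.get("type") == "llm.error":  # assumed event type name
        logging.error(
            "LLM error on call %s: %s",
            event.get("callId", "unknown"),  # assumed field names
            event.get("error"),
        )
        # Forward to your alerting / observability pipeline here.

    return jsonify({"received": True}), 200


if __name__ == "__main__":
    app.run(port=8080)
```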
📚 Knowledge Base
- Pagination and search capabilities added within the assistant’s Knowledge Base for faster navigation and scaling with larger datasets.
🔄 Integration & Automation
- Added support for running cadence flows using the Microsoft integration.
📘 API Docs Optimizations
- Improved API documentation layout and navigation for easier reference.
- Certain API URLs have been updated to new paths. If you have bookmarked API endpoints, please review and update them accordingly.
🛠️ General
- Multiple bug fixes and performance improvements across the platform.
What’s new in v1.0.0
1. Enable Debug mode (see Enable Debug mode section below)
Quickly turn on verbose developer logs from your profile so you can inspect detailed runtime information when diagnosing issues. When Debug mode is enabled, the dashboard surfaces expanded logs and error traces that are useful during troubleshooting. See the Enable Debug mode section below for more details.
2. TTS — new audio cache scopes (assistant & team) + clip deletion
We added two new cache scope levels for TTS audio clips — assistant and team — so generated audio can be cached at the most appropriate scope for reuse and cost savings. The dashboard now also lets you delete existing audio clips so you can manage storage and refresh voices or content when needed.
3. Campaign Webhooks
You can now register webhooks to receive real-time notifications for important campaign lifecycle events (for example: campaign completion) and per-call status updates (for example: call completed, failed, or dropped). Webhooks can be configured when creating or updating a campaign, enabling easy integration with downstream systems and automation pipelines.
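To show how such notifications might be consumed downstream, here is a small dispatch sketch in Python. The event type names (campaign.completed, call.status) and payload fields are illustrative assumptions rather than the documented schema; the handlers could sit behind the same kind of HTTP endpoint shown earlier.

```python
# Sketch of routing campaign webhook events to handlers.
# Event type names and payload fields are assumptions for illustration.
from typing import Any, Callable, Dict


def on_campaign_completed(event: Dict[str, Any]) -> None:
    print(f"Campaign {event.get('campaignId')} completed")


def on_call_status(event: Dict[str, Any]) -> None:
    # Status might be "completed", "failed", or "dropped".
    print(f"Call {event.get('callId')} status: {event.get('status')}")


HANDLERS: Dict[str, Callable[[Dict[str, Any]], None]] = {
    "campaign.completed": on_campaign_completed,  # assumed event name
    "call.status": on_call_status,                # assumed event name
}


def dispatch(event: Dict[str, Any]) -> None:
    handler = HANDLERS.get(event.get("type", ""))
    if handler is None:
        print(f"Unhandled event type: {event.get('type')}")
        return
    handler(event)


if __name__ == "__main__":
    dispatch({"type": "campaign.completed", "campaignId": "c-123"})
    dispatch({"type": "call.status", "callId": "call-9", "status": "failed"})
```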
4. Twilio SMS — Inbound & Outbound
A new dashboard toggle enables Twilio SMS inbound and outbound functionality. This makes two-way SMS possible, so your flows can receive replies and send messages via Twilio directly from the dashboard — useful for bi-directional support and conversational workflows.
5. Vonage Number management
Buy or import Vonage phone numbers directly from the dashboard. Note: you will need to provide your Vonage credentials when adding or importing numbers.
Resolved a bug where tool calls and messages at the end of conversations could be missed or dropped. This fix improves reliability for trailing-message flows and integrations that depend on final tool outputs.
General reliability and performance upgrades across the platform — faster page loads, reduced error rates, and smoother dashboard interactions.
Enable Debug mode
Debug mode helps developers view detailed diagnostic logs and metadata for conversations, making it easier to analyze performance, latency, and behavior of both Assistant and User utterances.
How to enable Debug mode
You can enable Debug mode in two ways:
- From the logo — Click the Interactly.ai logo at the top-left corner.
- From your profile — Click the Profile icon at the top-right and toggle Debug mode ON.
Once enabled, additional debug details will appear inside the Call Logs and Conversation view.
Debug info in Call Logs
Open any conversation from Call Logs after enabling Debug mode.
You’ll notice new metrics displayed at each Assistant and User utterance.
Assistant utterance metrics
- AI: 1093 ms — Time taken by the LLM to generate the response.
- TTS: 356 ms — Time taken by the TTS engine to generate the audio clip.
- AD — Audio Duration of that particular clip.
User utterance metrics
- AD — Audio Duration of the user’s spoken input.
- VAD: 6 ms (Voice Activity Detection) — Time the STT vendor waited after receiving the final word.
- AVAD: 800 ms (Additional Voice Activity Detection) — Extra waiting time configured at the Assistant level under Advanced → Start Speaking Plan → Smart Endpointing OFF.
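Taken together, these metrics give a rough sense of the silence a caller hears between finishing a sentence and hearing the assistant reply. Below is a back-of-the-envelope calculation using the illustrative values above; it assumes the stages run strictly in sequence, which is a simplification.

```python
# Rough estimate of turn-around latency from the debug metrics shown above.
# Assumes the stages run sequentially, which is a simplification.
vad_ms = 6     # VAD: STT wait after the final word
avad_ms = 800  # AVAD: extra wait from the Start Speaking Plan
ai_ms = 1093   # AI: LLM response generation
tts_ms = 356   # TTS: audio synthesis

estimated_gap_ms = vad_ms + avad_ms + ai_ms + tts_ms
print(f"Estimated silence before the assistant speaks: ~{estimated_gap_ms} ms")
# ~2255 ms with the example values
```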
To view detailed metadata, click on the Assistant key in the utterance view.
- command — Indicates whether the action is Play or End. End signals call termination.
- messageId — Unique ID for the Assistant’s utterance.
- userMessageId — The ID of the corresponding user message that triggered this response.
- finish_reason — Can be stop, tool_calls, or length.
  - stop: Response finished naturally.
  - length: Token limit reached.
  - tool_calls: Model triggered a tool/function.
- queue_latency — Time the request waited before model processing started.
- response_latency — Time the model took to generate the response.
- trailing_messages — Last 6 utterances passed to the LLM for context.
Other useful fields
- audioDuration — Duration of the generated audio clip.
- isBargein — Whether the utterance was interrupted.
- isCachePlaying — true/false; indicates whether the clip was played from cache or generated fresh.
- model / vendor — The TTS model and vendor used.
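To make the field list concrete, here is a sketch of what an Assistant utterance’s metadata might look like, written as a Python dict. The values are placeholders, and the exact key spellings and formats should be taken from the real Call Logs payload.

```python
# Illustrative Assistant utterance metadata built from the fields described
# above. All values are made up; the Call Logs payload is authoritative.
assistant_metadata = {
    "command": "Play",               # "Play" or "End" (End terminates the call)
    "messageId": "asst-msg-001",
    "userMessageId": "user-msg-001",
    "finish_reason": "stop",         # "stop", "tool_calls", or "length"
    "queue_latency": 42,             # ms the request waited before processing
    "response_latency": 1093,        # ms the model took to respond
    "trailing_messages": [],         # last 6 utterances passed for context
    "audioDuration": 3.2,            # duration of the generated clip
    "isBargein": False,              # whether the utterance was interrupted
    "isCachePlaying": True,          # played from cache vs. freshly generated
    "model": "example-tts-model",    # illustrative TTS model name
    "vendor": "example-tts-vendor",  # illustrative TTS vendor name
}
```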
Click on the User key in the utterance to view metadata for the user’s input.
Important fields
- messageId — ID of the user message.
- previousBotMessageId — ID of the last Assistant response before this message.
- skippedBotMessages — List of interrupted Assistant messages skipped due to this utterance.
- confidence — Confidence score returned by the STT engine for this transcript.
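Similarly, a sketch of the User utterance metadata using the fields above; the values are placeholders only.

```python
# Illustrative User utterance metadata; all values are made up.
user_metadata = {
    "messageId": "user-msg-002",
    "previousBotMessageId": "asst-msg-001",
    "skippedBotMessages": ["asst-msg-000"],  # interrupted Assistant messages
    "confidence": 0.93,                      # STT confidence for this transcript
}
```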