Before the end of the year, we are shipping one of our most complete updates so far. This release brings new AI model options, better WhatsApp controls, major performance improvements, and several enhancements that make day‑to‑day operations smoother and more reliable.
Ask Knowledge Base Now Supports More LLMs
We’ve expanded support for the Ask Knowledge Base action with a wide range of the most advanced language models available today. You now have full control over which LLM powers your AI Assistant, allowing you to optimize for speed, reasoning depth, or cost depending on your use case.
Available models include:
- OpenAI: GPT‑5.2, GPT‑5.1
- Anthropic: Claude 4.5 (Opus, Sonnet, Haiku), Claude Opus 4.1
- Google: Gemini 2.5 Flash‑Lite, Gemini 3 Pro
- DeepSeek
- Cohere: Command R7B, Command R+
- Groq: Qwen 3
This flexibility lets you fine‑tune how answers are retrieved and prioritized, from fast and lightweight responses to deep and highly contextual ones. Model selection is available directly from the AI Settings panel.
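
To make that trade‑off concrete, here is a minimal TypeScript sketch of what a per‑action model choice can look like. The config shape shown here (provider, model, temperature, maxTokens) is purely illustrative and is not Tiledesk's actual action schema; in the product, the selection happens in the AI Settings panel.

```typescript
// Illustrative only: a hypothetical shape for an Ask Knowledge Base action
// with an explicit LLM choice. Field names are assumptions, not Tiledesk's API.
type LlmProvider = 'openai' | 'anthropic' | 'google' | 'deepseek' | 'cohere' | 'groq';

interface AskKnowledgeBaseAction {
  question: string;        // user query forwarded to the knowledge base
  namespace: string;       // which knowledge base to search
  llm: {
    provider: LlmProvider;
    model: string;         // e.g. 'gpt-5.1' or 'claude-4.5-sonnet'
    temperature?: number;  // lower values give more deterministic answers
    maxTokens?: number;    // caps response length (and cost)
  };
}

// A fast, lightweight configuration for simple FAQ-style lookups.
const quickAnswers: AskKnowledgeBaseAction = {
  question: '{{last_user_message}}',
  namespace: 'support-docs',
  llm: { provider: 'google', model: 'gemini-2.5-flash-lite', temperature: 0.2 },
};

// A deeper, more contextual configuration for complex questions.
const deepAnswers: AskKnowledgeBaseAction = {
  question: '{{last_user_message}}',
  namespace: 'support-docs',
  llm: { provider: 'anthropic', model: 'claude-opus-4.1', maxTokens: 2048 },
};
```

The same question and knowledge base can be paired with different models, so you can reserve the heavier, more expensive models for the flows that actually need them.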

WhatsApp Gets More Practical for Daily Operations
WhatsApp workflows continue to improve with features focused on real operational needs:
- You can now send a WhatsApp message to a single contact directly from the Contact List page, without creating a broadcast or campaign
- The Sent Messages list now shows a preview of the message template, making it easier to review what was actually sent
- Fixed an issue affecting PDF template selection in WhatsApp send actions
These improvements make WhatsApp interactions faster, clearer, and easier to manage.
Faster Platform with Smart Caching
We introduced a new caching strategy to improve performance across the platform:
- Project data, project users, and departments are now cached to reduce redundant HTTP requests
- Cache invalidation is automatically triggered when data changes, ensuring consistency while keeping performance high (see the sketch after this list)
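
The idea behind the new strategy is simple: serve repeat reads from an in‑memory cache and drop the cached entry as soon as the underlying data changes. Below is a minimal TypeScript sketch of that pattern; `fetchProjectUsers` and `updateProjectUser` are hypothetical helpers used only for illustration, not Tiledesk's actual client code.

```typescript
// Minimal sketch of read-through caching with invalidation on write.
// Illustration only; this is not Tiledesk's implementation.
class SimpleCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number = 5 * 60_000) {}

  async getOrLoad(key: string, loader: () => Promise<T>): Promise<T> {
    const hit = this.store.get(key);
    if (hit && hit.expiresAt > Date.now()) {
      return hit.value;                      // cache hit: no HTTP request
    }
    const value = await loader();            // cache miss: one HTTP request
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }

  invalidate(key: string): void {
    this.store.delete(key);                  // called whenever the data changes
  }
}

// Hypothetical API helpers, assumed here for the sake of the example.
declare function fetchProjectUsers(projectId: string): Promise<unknown[]>;
declare function updateProjectUser(projectId: string, userId: string, patch: object): Promise<void>;

const projectUsersCache = new SimpleCache<unknown[]>();

// Repeat reads are served from the cache instead of issuing new requests.
async function getProjectUsers(projectId: string) {
  return projectUsersCache.getOrLoad(`project-users:${projectId}`, () =>
    fetchProjectUsers(projectId),
  );
}

// Writes invalidate the cached entry so the next read sees fresh data.
async function renameProjectUser(projectId: string, userId: string, name: string) {
  await updateProjectUser(projectId, userId, { name });
  projectUsersCache.invalidate(`project-users:${projectId}`);
}
```

A short TTL plus explicit invalidation on writes keeps reads fast without risking stale data.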
In addition, we fixed a REST call that could time out with a 504 error when loading conversation details, improving stability in high‑traffic environments.
These improvements move Tiledesk even closer to prompt‑driven automation, where AI is not just answering questions but operating reliably at scale. With better model choice, faster performance, and stronger operational controls, building intelligent workflows becomes simpler and more powerful with every release.