Congratulations on the launch! This looks like a really powerful framework.
One trend I’ve noticed is a really heavy focus on pre-deploy tests, which makes a lot of sense. But one big gap is the inability to surface the scenarios you don’t even know are occurring after deployment. These are the scenarios human agents are great at handling and AI agents often fall flat on; in a sales context, that can have a direct impact on a customer’s bottom line.
I think rather than attempting to deploy a perfect agent, having a mechanism to surface issues would lend much more peace of mind when launching AI voice agents. Happy to chat more if additional context or real-world examples would be helpful. Congratulations again on the launch!
Background: I’ve worked on contact center unstructured analytics for several years.
Yes, would love to chat. You can block my calendar here: cal.com/kabrasidhant.
Agree with everything you said. That is why we have our observability platform, which allows your live calls to be monitored. The idea is to use the observability platform to run real-life simulations, so that as you make fixes, you can test them in the simulation environment.