
Model Mania: What You Missed

If you couldn’t make it to our Model Mania event, here are some of the highlights from the demonstrations. Be sure to check out the podcast we recorded right after the event, and subscribe to our newsletters to get a heads-up on our next event and demo show.

Prudential Financial

Prudential Financial vice president of robotic process automation Jordan Wahba took the stage first for a conversation with Voicebot’s Bret Kinsella. He shared how there is plenty of interest and excitement around generative AI in the insurance space as a tool for improving and speeding up work, tempered by caution about possible pitfalls. He also described what he hopes to see built with generative AI for the insurance industry: a way to collect and organize the data needed for claims, especially after traumatic events like car accidents. The idea is to use the AI to speed up claims processing by answering questions and fetching photos without making the customer wade through the back-and-forth of the process.

“How do we blend the machine and the person to be hyper-productive and to get to outcomes that we otherwise wouldn’t? Sometimes it’s just hard to get started on the new idea, the new project, or the deliverable. So, if generative AI can help us, [where] we give it the parameters, and it helps us draft the contract, it helps us create a script for the contact center. [It would be] really great if it could help me organize my thoughts on a presentation,” Wahba said. “That’s sort of where I think could be a real practical application from day one. Help me get that much more effective, faster, and better at what I do so I can do more of it and do it better potentially.”

HumanFirst

HumanFirst CEO Gregory Whiteside spoke next, discussing his company’s no-code tool for building accurate NLU designs more quickly. The new version of the platform, set to launch soon, will incorporate large language models into its workflows, as the company sees value in combining NLUs and LLMs for better and more useful analyses of voice and text data. Call centers, in particular, stand to benefit because they have enormous amounts of unstructured voice data that need the high-quality transcription and summarization LLM-powered AI can provide. HumanFirst can then offer a more dynamic automated resolution system for customer service backed by data, meaning both quantitative and qualitative improvements in call center service.

“Where we see a lot of the value today is really upstream, before you even decide what you’re doing. The ability to go and to make data-centric decisions because you have the data, because you’re able to explore it really, really efficiently, is going to drive better kind of roadmaps and prioritization in terms of what is the actual thing that we should be doing today that’s going to drive the most value. And this is why we’re really excited about this call center analytics space, because we see ultimately still the biggest part of the volume of conversations between enterprise and customers, being through voice and call centers,” Whiteside said. “But all that data has not been leveraged in really any capacity because of those core technical blockers… which were speech-to-text and, now, summarization. Unblock that, and all of a sudden, you have a massive trove of data that I think organizations want to put their hands on, and it’s the right time to do so.”
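For readers curious what that transcription-plus-summarization step looks like in practice, here is a minimal sketch built on openly available models from the Hugging Face transformers library. It is not HumanFirst’s platform or API, just an illustration of the pipeline Whiteside describes; the specific models and the audio file path are assumptions.

```python
# Minimal sketch of the speech-to-text + summarization step described above.
# Not HumanFirst's product; uses open models purely for illustration.
from transformers import pipeline

# Speech-to-text for raw call audio (Whisper, one publicly available option)
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# Summarization of the resulting transcript
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_call(audio_path: str) -> dict:
    """Turn one call recording into a transcript plus a short summary."""
    transcript = asr(audio_path)["text"]
    summary = summarizer(transcript, max_length=60, min_length=20, do_sample=False)
    return {"transcript": transcript, "summary": summary[0]["summary_text"]}

# Example usage (the recording path is hypothetical):
# result = summarize_call("calls/support_0031.wav")
# print(result["summary"])
```

In a real call center deployment, the interesting work happens after this step, when thousands of such transcripts and summaries are explored and clustered to decide which automations are worth building.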

Got-It AI

Got-It AI co-founder Chandra Khatri came on stage next to showcase the Enterprise Language Model ARchitecture (ELMAR), a streamlined generative AI model built for the enterprise that incorporates an internal filter for potentially hallucinatory responses called TruthChecker. He demonstrated how Elmar cuts down on erroneous responses, a necessary component if businesses are to trust the technology. Elmar also ends up smaller and less expensive thanks to its enterprise-specific fine-tuning, which means it can be deployed on a company’s premises, avoiding the security issues around sending data to the cloud.

“The idea of Elmar is that when the input comes, right, we have to make sure that any input goes with the policy of the enterprise… Sometimes these models generate incorrect information confidently. So you have to block that kind of content. You want to block hallucinations and false information going out, and, for that, we have something called hallucination detection and other policies to control that,” Khatri said. “Elmar’s set of models are one to two orders smaller [than GPT-3 or GPT-4]. Therefore it’s easy to deploy on premises. It’s easy to fine-tune and update them per customer.”
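As a rough illustration of the “generate, then check” pattern Khatri describes, here is a minimal Python sketch. It is not Got-It AI’s TruthChecker; generate_answer and is_supported are stand-ins for the enterprise-tuned generator and hallucination detector, and the keyword-overlap check is only a placeholder for a real verifier model.

```python
# Sketch of a post-generation hallucination filter. This is an illustration of
# the general pattern, not Got-It AI's implementation.

FALLBACK = "I'm not able to answer that reliably. Let me connect you with an agent."

def is_supported(answer: str, sources: list[str]) -> bool:
    """Hypothetical hallucination check: does a retrieved passage back the answer?

    A production system would use a trained verifier model; crude keyword
    overlap is used here only as a stand-in.
    """
    answer_terms = set(answer.lower().split())
    return any(len(answer_terms & set(s.lower().split())) >= 5 for s in sources)

def answer_with_truth_check(question: str, sources: list[str], generate_answer) -> str:
    """Release an answer only if the hallucination check accepts it."""
    draft = generate_answer(question, sources)  # enterprise-tuned LLM call (assumed)
    return draft if is_supported(draft, sources) else FALLBACK
```

The design point is that the filter sits between the model and the customer, so a confident but unsupported answer is blocked rather than delivered.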

Journey

Creative agency Journey’s chief innovation officer Brandon Kaplan came on stage next. Formerly CEO of Skilled Creative before Journey acquired it, Kaplan shared a wider perspective on generative AI and its uses in business. He gave an overview of the kind of “educational discovery” programs Journey frequently runs for its clients to showcase the technology, and he highlighted how the right suite of generative AI tools can enhance marketing, commerce, product development, and other enterprise projects, as long as companies have experts like Journey to pick out which tools are worth experimenting with and how best to use them.

“We consider ourselves curators of technologies and curators of platforms, and so as this generative AI space has continued to evolve, we’ve been scouting the entire landscape. We generally, as an agency, work with Fortune 500 brands, and they’ll come to us and say, this is such a new, exciting space. What do we do? Like, where do we start? What’s important to us? Help us figure it out,” Kaplan said. “We’re kind of scraping the universe. We’re cherry-picking platforms that we think are really meaningful, that have good teams, that have quality APIs, that have good support and can be used in [business] workflows.”

Voiceflow

Denys Linkov, machine learning team lead at conversational AI developer Voiceflow, presented next on why expertise is crucial for linking LLMs and conversational design. That need became especially apparent in the wake of ChatGPT’s release, when Voiceflow quickly introduced new features to capitalize on generative AI’s capabilities. One recurring lesson was the need for domain experts to set up and manage the tools and data sources.

“The reason we need experts is that there are certain points in time where you need to divert the conversation down a path that you can control. So these large language models give us some really, really cool stuff,” Linkov said. “[A]s we’re transitioning sort of from a creation workflow for conversational design to curation workflow, it becomes even more important to have domain experts who can build these assistants and that we have subject matter expertise.”

NLX

Andrei Papancea, the CEO of NLX, a customer service AI developer, shared how his team developed practical principles for using generative AI effectively based on their own experiments with the technology. He demonstrated these principles with a Model Mania Bot built for the occasion to showcase how NLX can make building conversational AI bots more efficient while also deepening their complexity and range of responses. NLX’s approach is designed to retrieve responses from knowledge bases with large language models without the need for exhaustive lists of intents. At the same time, the platform is designed to filter out hallucinations or inappropriate responses that a business wouldn’t want its AI representative to give a customer.

“Any type of AI, in particular, any type of machine learning-driven system, is statistical in nature. And if you break it down to that, it’s virtually guaranteed to not be 100% accurate all the time. And if you think about the importance of business processes through the lens of an enterprise, you don’t want an AI to randomly skip steps,” Papancea said. “So, for that reason, the structured world of intents, while still natural language-driven… can be blended with the generative capabilities, or the large language model-based capabilities, to answer the more ad hoc questions because you can’t foresee all the different things that people might say. And you can then use the best of both worlds to drive your business processes through more structured logic, and then combine that with the flexibility that generative AI produces.”
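A minimal sketch of the blend Papancea describes, with structured intents driving the business process and a generative fallback handling ad hoc questions, might look like the following. None of this is NLX’s API; classify_intent and generate_from_kb are assumed components, and the flows and confidence threshold are purely illustrative.

```python
# Sketch of blending structured intents with a generative, knowledge-base-backed
# fallback. Illustrative only; not NLX's platform or API.

INTENT_FLOWS = {
    "book_flight": lambda slots: f"Starting the booking flow for {slots.get('destination', 'your trip')}.",
    "cancel_booking": lambda slots: "Starting the cancellation flow.",
}
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for trusting the intent classifier

def handle_turn(utterance: str, classify_intent, generate_from_kb) -> str:
    """Route to a structured flow when an intent is confident, else to generative Q&A."""
    intent, confidence, slots = classify_intent(utterance)  # assumed NLU component
    if intent in INTENT_FLOWS and confidence >= CONFIDENCE_THRESHOLD:
        # Structured, controllable path: the bot cannot skip business-process steps.
        return INTENT_FLOWS[intent](slots)
    # Ad hoc question: answer from the knowledge base with a generative model,
    # behind whatever hallucination and appropriateness filters the platform applies.
    return generate_from_kb(utterance)
```

The structured branch keeps critical processes deterministic, while the generative branch absorbs the long tail of phrasings that an exhaustive intent list could never cover.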
