Inside JAM

Why You Talk To Robots With Us

Introducing our support chatbot Jamie

Jens

Customer Support
JAM Software AI Chatbot
Published on 31.01.2024

Digitalization is advancing at a rapid pace and we at JAM are determined to keep up with the times. That's why we've been experimenting with ChatGPT and the like for several months now, and we're now ready to show you how we can drastically reduce the time you spend waiting for support.

Introducing: Jamie!

Jamie is now your digital assistant for all your questions about our products. Jamie is a chatbot that accesses our extensive FAQs in the background and answers your support questions with virtually no waiting time.

But how does our AI chatbot work? Let's take a look at its innovative inner workings!

Vector databases and the power of embeddings

First of all, our collected product knowledge from the online help and the FAQs has to be converted into so-called "embeddings". An embedding transforms text into a vector, i.e. an array of numbers that represents the text as a point in a high-dimensional space.

The vectors ultimately represent the meaning of the text content and are therefore the decisive interface between human language and machine comprehension.
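
For the technically curious, here is a minimal sketch of how a single FAQ entry could be turned into an embedding, assuming the official OpenAI Python client. The example text is made up and this is not our actual pipeline:

```python
# Minimal sketch (not our production code): turn one FAQ entry into an embedding
# using the official OpenAI Python client. The example text is made up.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

faq_text = "How do I move my TreeSize license to a new computer?"

response = client.embeddings.create(
    model="text-embedding-3-large",
    input=faq_text,
)

vector = response.data[0].embedding  # a plain list of floats
print(len(vector), vector[:5])       # e.g. 3072 dimensions
```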

Simplified graphical representation of the vector space (created with Dall-E)

 

Embeddings depict the complexity and nuances of human expression in a machine-interpretable format.

This allows the context and intention behind a query to be captured in order to search for matching content in the database. Compared to a keyword search, the probability of finding the right answer is therefore significantly higher.

So what does it look like in practice?

When a customer inquiry arrives, it is converted into a vector as well. Our vector database then compares this query vector with the stored information.

It searches for the vectors most similar to the query and can deliver the matching content within a few seconds.
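
Under the hood, a dedicated vector database and its index structures do this work for us, but the core idea can be sketched in a few lines of NumPy. The helper names and the top-k value below are illustrative assumptions:

```python
# Illustrative nearest-neighbour lookup with NumPy; a real vector database
# replaces this brute-force loop in production.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_best_matches(query_vector, faq_vectors, faq_texts, top_k=3):
    scores = [cosine_similarity(np.asarray(query_vector), np.asarray(v))
              for v in faq_vectors]
    best = np.argsort(scores)[::-1][:top_k]   # indices of the highest scores
    return [(faq_texts[i], scores[i]) for i in best]
```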

JAM chatbot at the cutting edge of AI technology

To continuously improve our AI chatbot, we use the latest technologies and findings from the field of artificial intelligence:

We integrated OpenAI's latest embeddings in the very week they were released. They help us to increase the quality and diversity of our database, which forms the basis for the AI-supported answers to our customers' questions.

The new embeddings from OpenAI are based on the "text-embedding-3-large" model, which produces vectors with up to 3,072 dimensions and delivers noticeably stronger multilingual retrieval. It is currently OpenAI's most capable embedding model.

By comparison, the previous embeddings based on text-embedding-ada-002 are limited to 1,536 dimensions and perform weaker on cross-language queries.

With text-embedding-3-large, we can process both monolingual and multilingual queries and combine content from different sources and languages.
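
To give you a feel for the multilingual behaviour, here is a small sketch that embeds the same support question in German and in English, again assuming the OpenAI Python client; the example questions are made up:

```python
# Sketch: embed the same support question in German and English with
# text-embedding-3-large and compare the two vectors. The optional
# "dimensions" parameter of the 3rd-generation models shortens the vectors.
from openai import OpenAI
import numpy as np

client = OpenAI()

questions = [
    "Wie aktiviere ich meine Lizenz auf einem neuen Rechner?",
    "How do I activate my license on a new computer?",
]

resp = client.embeddings.create(
    model="text-embedding-3-large",
    input=questions,
    dimensions=1024,
)

de, en = (np.array(d.embedding) for d in resp.data)
similarity = float(np.dot(de, en) / (np.linalg.norm(de) * np.linalg.norm(en)))
print(f"Cross-language similarity: {similarity:.2f}")  # close questions score high
```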

In addition to semantic similarities, the model is also able to capture stylistic and pragmatic similarities between texts.

As the embeddings are constantly updated, our chatbot is always up to date and can answer questions about the latest versions of our products.

Since we can only ever provide support for the latest version of our software and the chatbot is always up to date, it is advisable for you as a customer to always have the latest version of your software installed as well.

GPT-4 Turbo takes over and formulates an answer

The vectors have now been matched and our database has found the best content. So what happens next? ChatGPT enters the scene!

The content found in the database is now sent to OpenAI in German or English and processed further by GPT-4 Turbo.

ChatGPT then creates a precise response to your support request. To do this, it combines the various pieces of information from the database into a coherent answer.
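
Put into code, this generation step could look roughly like the following sketch. The prompt wording and the exact model name are assumptions for illustration, not our production setup:

```python
# Sketch of the generation step: the passages found by the vector search are
# handed to GPT-4 Turbo as context so it can formulate one coherent answer.
from openai import OpenAI

client = OpenAI()

def answer_support_request(question: str, retrieved_passages: list[str]) -> str:
    context = "\n\n".join(retrieved_passages)
    completion = client.chat.completions.create(
        model="gpt-4-turbo-preview",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You are Jamie, the JAM Software support assistant. "
                        "Answer only on the basis of the provided FAQ excerpts."},
            {"role": "user",
             "content": f"FAQ excerpts:\n{context}\n\nCustomer question: {question}"},
        ],
    )
    return completion.choices[0].message.content
```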

Symbolic interaction between man and machine (created with Dall-E)

Faster responses thanks to caching

To further reduce the response time of our AI chatbot, we have implemented a caching system that stores frequently asked questions and answers.

This means that ChatGPT does not have to query the database every time and formulate a new response, but can instead access the cache and deliver a suitable response immediately.

The caching system is based on a language model that measures the similarity between new questions and the questions stored in the cache. If a high similarity is detected, the corresponding answer is retrieved from the cache, otherwise a new answer is generated.
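
Conceptually, such a semantic cache lookup can be sketched in a few lines; the similarity threshold and the data structure below are illustrative assumptions:

```python
# Sketch of a semantic cache: compare the new question's embedding with the
# cached ones and reuse the stored answer above a similarity threshold.
import numpy as np

CACHE: list[tuple[np.ndarray, str]] = []  # (question embedding, cached answer)
SIMILARITY_THRESHOLD = 0.92               # illustrative value

def cached_answer(question_vector: np.ndarray) -> str | None:
    for stored_vector, answer in CACHE:
        score = float(np.dot(question_vector, stored_vector)
                      / (np.linalg.norm(question_vector) * np.linalg.norm(stored_vector)))
        if score >= SIMILARITY_THRESHOLD:
            return answer   # close enough: serve the cached answer immediately
    return None             # no match: generate a new answer and add it to the cache
```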

The caching system is dynamic and is constantly updated by us so that we can guarantee and further improve the quality and relevance of our answers.

In a nutshell

Finally, let's take another look at the most important points:

Jamie, our AI-based assistance system, has a number of benefits in store for you as a customer!

Not only will you now receive even faster support, but our customer support team will also have more time for personal advice where it is needed most.

Have we piqued your curiosity? Give Jamie a try and keep an eye on further AI developments at JAM. We will keep you up to date in our new series "AI at JAM"!

Do you have any final questions? Then please contact us directly!