Why You Will Soon Be Talking To Robots
Introducing our support chatbot
Digitalization is progressing at a rapid pace, and we at JAM are determined to keep our finger on the pulse. That's why we've been experimenting with ChatGPT, Microsoft Copilot and the like for several months now to take our service to the next level for you.
Our aim is to drastically reduce your waiting time for support requests: whereas today it can take up to one working day for your support request to be answered, in the future you will receive an initial response within a few seconds.
In recent months, we have developed a chatbot that can access our extensive online help and FAQs. From now on, we can use it to access our accumulated knowledge and answer your customer queries even faster.
This digital assistant is currently undergoing internal testing and will soon be your first point of contact for product questions.
But how exactly will our AI chatbot work? Let's take a look at the innovative inner workings of the system!
First of all, our collected product knowledge from the online help and FAQs has to be converted into so-called "embeddings". An embedding transforms a text into a vector, i.e. an array of numbers that represents a point in a high-dimensional space.
The vectors ultimately represent the meaning of the text content and are therefore the decisive interface between human language and machine comprehension.
Simplified graphical representation of the vector space (created with Dall-E)
Embeddings depict the complexity and nuances of human expression in a machine-interpretable format.
This allows the context and intention behind a query to be captured in order to search for corresponding content in the database. Compared to a keyword search, the probability of finding the right answer is therefore significantly higher.
So what does it look like in practice?
When a customer inquiry is received, it too is converted into a vector. Our vector database then compares this query vector with the stored information.
Next, the vector database searches for the most similar vectors to the query and can then deliver the matching content within a few seconds.
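The retrieval step above can be sketched in a few lines. The following is a minimal illustration, not our production code: the toy four-dimensional vectors and the example questions are made up for demonstration (real embedding models produce vectors with thousands of dimensions), and similarity is measured with the commonly used cosine similarity.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: 1.0 means same direction (same meaning), 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional embeddings standing in for a real vector database.
knowledge_base = {
    "How do I reset my password?": [0.9, 0.1, 0.0, 0.2],
    "Which file formats are supported?": [0.1, 0.8, 0.3, 0.0],
    "How do I cancel my subscription?": [0.0, 0.2, 0.9, 0.1],
}

def find_best_match(query_vector):
    """Return the stored question whose vector lies closest to the query vector."""
    return max(knowledge_base,
               key=lambda q: cosine_similarity(query_vector, knowledge_base[q]))

# A query about forgotten login details lands nearest the password entry.
print(find_best_match([0.85, 0.15, 0.05, 0.1]))  # How do I reset my password?
```

The same nearest-neighbor idea scales to millions of entries; dedicated vector databases simply use index structures so the search stays fast.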
To continuously improve our AI chatbot, we use the latest technologies and findings from the field of artificial intelligence:
We implemented OpenAI's latest embeddings in the very week they were released. They will help us to increase the quality and diversity of our database, which forms the basis for AI-supported answers to our customers' questions.
The new embeddings are based on OpenAI's "text-embedding-3-large" model, which produces vectors with up to 3,072 dimensions and is currently OpenAI's most capable embedding model, particularly for multilingual retrieval.
By comparison, the previous embeddings based on text-embedding-ada-002 were limited to 1,536 dimensions and performed considerably worse on cross-language retrieval benchmarks.
With text-embedding-3-large, we can process both monolingual and multilingual queries and combine content from different sources and languages.
In addition to semantic similarities, the model is also able to capture stylistic and pragmatic similarities between texts.
As the embeddings are constantly updated, our chatbot is always up to date and can answer questions about the latest versions of our products.
As we can only ever provide support for the latest version of our software and the chatbot is always up to date, it is also advisable for you as a customer to always have the latest version of your software installed.
The vectors have been matched and our database has found the best content. So what happens next? ChatGPT enters the scene!
The content found in the database is now sent to OpenAI in German or English and processed further by GPT-4 Turbo.
ChatGPT then creates a precise response to your support request. To do this, it combines the various pieces of information from the database into a coherent answer.
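The step of combining retrieved passages into one coherent, grounded answer typically works by assembling them into a prompt for the language model. Here is a hedged sketch of what such prompt assembly might look like; the function name, wording, and source labels are illustrative assumptions, not our actual prompt.

```python
def build_support_prompt(question, passages):
    """Assemble a grounded prompt from retrieved help passages.

    The instruction ties the model to our documentation instead of letting
    it answer from general knowledge (illustrative wording, not production).
    """
    context = "\n\n".join(f"[Source {i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "You are a support assistant. Answer the customer's question using "
        "only the sources below. If the sources do not contain the answer, "
        "say so.\n\n"
        f"{context}\n\n"
        f"Customer question: {question}"
    )

prompt = build_support_prompt(
    "How do I reset my password?",
    ["Open Settings > Account and click 'Reset password'.",
     "A reset link is valid for 24 hours."],
)
print(prompt)
```

The model then sees both passages side by side and can merge them into a single answer, e.g. explaining the reset steps and mentioning the link's validity period in one reply.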
Symbolic interaction between man and machine (created with Dall-E)
To further reduce the response time of our AI chatbot, we have implemented a caching system that stores frequently asked questions and answers.
This means that ChatGPT does not have to query the database every time and formulate a new response, but can instead access the cache and deliver a suitable response immediately.
The caching system is based on a language model that measures the similarity between new questions and the questions stored in the cache. If a high similarity is detected, the corresponding answer is retrieved from the cache, otherwise a new answer is generated.
The caching system is dynamic and is constantly updated by us so that we can guarantee and further improve the quality and relevance of our answers.
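A semantic cache of this kind can be sketched as follows. This is a simplified model under assumed names and an assumed similarity threshold, not our production system: cached answers are keyed by question embeddings, and a lookup returns a stored answer only when the new question's vector is similar enough.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

class SemanticCache:
    """Cache answers keyed by question embeddings; serve hits above a threshold."""

    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.entries = []  # list of (question_vector, answer) pairs

    def store(self, query_vector, answer):
        self.entries.append((query_vector, answer))

    def lookup(self, query_vector):
        """Return a cached answer if a sufficiently similar question exists, else None."""
        best_score, best_answer = 0.0, None
        for vector, answer in self.entries:
            score = cosine_similarity(vector, query_vector)
            if score > best_score:
                best_score, best_answer = score, answer
        return best_answer if best_score >= self.threshold else None

cache = SemanticCache(threshold=0.95)
cache.store([1.0, 0.0, 0.1], "You can reset your password under Settings > Account.")

# A near-identical question is served straight from the cache...
print(cache.lookup([0.99, 0.02, 0.12]))
# ...while an unrelated question returns None, triggering a fresh generation.
print(cache.lookup([0.0, 1.0, 0.0]))
```

Tuning the threshold is the key design choice: too low, and customers get answers to the wrong question; too high, and the cache rarely hits.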
Finally, let's take another look at the most important points:
Our AI-based assistance system has several advantages in store for you as a customer!
Not only will you receive even faster support in future, but our customer support team will also have more time for personal advice where it is needed most.
Have we managed to arouse your curiosity? Then keep an eye on further AI developments at JAM. We'll keep you up to date in our new series "AI at JAM"!
Do you have any final questions? Then please contact us directly!