The first set of analytics that is important to admins is generic usage statistics: is the bot being used, on what devices, how often, how many repeat users, how many interactions run to successful completion and how many are abandoned, and so on. These are the kind of standard analytics you would get from Google Analytics for a website.
Other generic statistics that every bot needs when natural language processing is used include misunderstood phrases, the most frequent words used, the number of human-escalation incidents, and so on.
When it comes to the above statistics, it is important that they are not only measured but also fed back into the software in ways that make the bot better. For example, misunderstood phrases can be added automatically to the list of phrases associated with a given intent in the NLP setup so that the bot improves over time. It is important that admins, or more likely support agents, have easy ways of adding and validating these types of phrases so that the bot improves quickly.
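One way this feedback loop could work is a review queue: the bot records every misunderstood phrase, and an agent later attaches the phrase to the right intent, growing the NLP training data. The sketch below assumes a simple in-memory store; the names (ReviewQueue, record_misunderstood, approve) are illustrative, not a real platform API.

```python
from collections import defaultdict

class ReviewQueue:
    """Collects misunderstood phrases so agents can validate them
    and attach them to intents."""

    def __init__(self):
        self.pending = []                       # phrases awaiting validation
        self.intent_phrases = defaultdict(set)  # intent -> training phrases

    def record_misunderstood(self, phrase):
        """Called by the bot whenever the NLP fails to match an intent."""
        self.pending.append(phrase)

    def approve(self, phrase, intent):
        """An agent confirms the phrase belongs to an intent; the NLP
        training data grows, so the bot improves over time."""
        self.pending.remove(phrase)
        self.intent_phrases[intent].add(phrase)

queue = ReviewQueue()
queue.record_misunderstood("wanna check my order pls")
queue.approve("wanna check my order pls", "order_status")
```

In a real platform the approved phrases would be pushed to the NLP engine's intent configuration rather than held in memory.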
While all the above generic analytics are important, in many cases custom chatbot analytics turn out to be even more important. This is particularly true when the chatbot is being rolled out and piloted, because at the beginning of a bot project sponsors are eager to show adoption and usage. They will therefore try to make sure that the bot is adequately marketed to the pilot users, and if they have done their job correctly the statistics will show good usage. Partly, too, because the bot is a novel product, users may be curious to try it at first, which can artificially inflate the usage statistics.
What interests bot admins, however, are signs that usage may not be as robust as the initial statistics indicate. And even when the statistics clearly show a usage problem, the sponsors want to know why it is happening.
The problem may be easily identified and fixed with generic analytics. At the start, for example, it is very often the case that the NLP setup is not as comprehensive as it should be, so the bot misunderstands more than it should. This is normally rectified quickly by adding more phrases to the relevant intent in the NLP setup.
Often, however, custom analytics are needed to diagnose a problem. It may be, for example, that explicit feedback needs to be built into the conversation flow to identify issues.
Custom analytics are also of particular interest when the bot itself is more customized.
If a chatbot is simple, for example one that takes the user through some sort of decision tree, and there is a usage problem, it may be easy to identify the problem from the generic usage statistics alone. The analytics could indicate the point in the conversation where users lost interest and abandoned it, or how much time users spent on the bot before abandoning it; either can point to a problem with the flow or with the bot's use case overall.
If the bot is more complicated, i.e. it has custom logic, the generic statistics will not tell the full story. They might tell you the point at which a user abandons, but not why.
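Finding the point of abandonment from generic statistics can be as simple as counting, for each node, how often it was the last node a user visited. A minimal sketch, assuming each session is recorded as the ordered list of nodes traversed (the node names are made up for illustration):

```python
from collections import Counter

def abandonment_points(sessions):
    """Given each session as the ordered list of nodes visited,
    count how often each node was the last one before abandonment."""
    return Counter(session[-1] for session in sessions if session)

sessions = [
    ["welcome", "choose_table", "q1", "q2"],
    ["welcome", "choose_table", "q1"],
    ["welcome", "choose_table", "q1"],
]
print(abandonment_points(sessions).most_common(1))  # → [('q1', 2)]
```

Here the generic statistics tell us users drop out at "q1", but as the text notes, they cannot tell us why.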
Take a simple example: you build a bot to help children learn their times tables. A basic approach is to let the children choose a times table and have the bot randomize questions from it. The problem is that the children are at different levels, and may abandon the bot if they find the questions too difficult at the beginning, or even at a later point in the interaction.
To pick this up, the analytics need to also reflect the difficulty of the questions, among other things (and ideally the bot should automatically adjust the level). This has to be built into the custom analytics, and it can only be done if the bot-building platform supports custom analytics (or, more to the point, makes it easy to add them).
Once the custom analytics are available, it becomes possible to implement a more sophisticated approach, such as an algorithm that matches the child's level to the questions asked in order to maximize their time in the game.
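Such a level-matching algorithm need not be elaborate. A minimal sketch is a staircase rule: step the difficulty up after a correct answer and down after a wrong one, so each child settles near their own level. The class and level bounds here are assumptions for illustration, not a prescribed design.

```python
import random

class AdaptiveQuiz:
    """Staircase rule: step difficulty up after a correct answer,
    down after a wrong one, clamped to [min_level, max_level]."""

    def __init__(self, level=2, min_level=1, max_level=12):
        self.level = level
        self.min_level, self.max_level = min_level, max_level

    def next_question(self):
        """Ask 'level x b = ?' where b is randomized, as in the basic bot."""
        return self.level, random.randint(1, 12)

    def record_answer(self, correct):
        """Also the point where a custom-analytics event (level, outcome)
        would be logged for later analysis."""
        step = 1 if correct else -1
        self.level = max(self.min_level, min(self.max_level, self.level + step))
```

Each call to record_answer is a natural place to emit a custom-analytics event carrying the level, so engagement can later be analyzed against difficulty.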
This brings us to a critical subject related to customized analytics: A/B testing. With any software it is difficult to know at the start what will work best in terms of functionality, graphics and content, and the only way to know definitively is to A/B test the alternatives.
This is true for bots as well. The custom analytics need to be linked to an A/B testing engine inside the bot-building platform. Within the platform it is important not only to be able to generate and tag custom analytics, but also to define A/B tests within the conversation flow.
The sponsor, manager and developer of the bot are all responsible for helping define the analytics required. As mentioned, the custom analytics at least depend on the use cases addressed by the bot.
The sponsor is clearly interested in adoption of the bot and in identifying any impediments to adoption or other issues that might create a negative impression with users.
From a generic analytics point of view they would be interested in analytics such as:
Number of interactions (nodes)
Number of flows
Percentage of misunderstood / understood statements
Word map of statements
Word map of misunderstood statements
List of misunderstood intents
Number of flows initiated
Number of repeat flows
Ranking of flows by usage
Ranking of flows by misunderstood statements
Number of logins
Number of successful completions
Number of repeat users
Number of active users per period
Percentage of active to named users
They may be interested in looking at the above statistics in specific periods or using other filters of course.
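Several of the statistics above fall out of a raw event log with a few lines of aggregation. A minimal sketch, assuming an event schema (user, type, understood) that is invented here for illustration:

```python
from collections import Counter

def generic_stats(events):
    """Compute a handful of the generic statistics listed above
    from a raw event log."""
    statements = [e for e in events if e["type"] == "statement"]
    misunderstood = sum(1 for e in statements if not e["understood"])
    per_user = Counter(e["user"] for e in events)
    return {
        "logins": sum(1 for e in events if e["type"] == "login"),
        "completions": sum(1 for e in events if e["type"] == "completion"),
        "repeat_users": sum(1 for n in per_user.values() if n > 1),
        "pct_misunderstood": round(
            100 * misunderstood / max(len(statements), 1), 1
        ),
    }

log = [
    {"user": "a", "type": "login", "understood": None},
    {"user": "a", "type": "statement", "understood": True},
    {"user": "a", "type": "statement", "understood": False},
    {"user": "b", "type": "login", "understood": None},
    {"user": "b", "type": "completion", "understood": None},
]
```

Period filters would simply be an extra timestamp predicate applied to the event list before aggregation.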
In terms of custom analytics, they might be interested in feedback, which is normally captured via a node added manually to the conversation flow, especially at the end points of each flow (whether the outcome was successful or not). They could also be interested in ranking the flows by feedback rating.
For the times table bot example, they may be interested in seeing whether there is any correlation between the level of difficulty and engagement (the number of nodes traversed).
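That correlation is a straightforward computation once the custom analytics record a difficulty level per session. A minimal sketch with a plain Pearson correlation and made-up sample numbers:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative per-session data: difficulty level vs nodes traversed
# before the child left the bot.
difficulty = [1, 2, 3, 4, 5]
nodes_traversed = [20, 18, 15, 9, 4]
r = pearson(difficulty, nodes_traversed)
```

A strongly negative r on data like this would suggest harder questions are driving children away, exactly the signal the sponsor is looking for.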
The bot managers are of course interested in the above statistics, but they are also interested in ensuring the smooth operation of the bot. They may require analytics on the bot's performance across different devices, and statistics on its availability. Have there been any infrastructure or security issues?
They might be interested not only in the behaviour of end users but also in that of super users: how often do they update content or modify flows? This kind of information may also be mandatory for security reasons.
They would of course also be interested in information on the progression of bots from development to staging to production environments, statistics on developer releases, and so on.
The developers are interested in all of the above to the extent that they can use the information to make their enterprise chatbots better. Of course, they would be interested in statistics that identify bugs, such as those coming out of the testing process, which includes bot-specific tests such as testing for NLP success. In practice, however, developers and super users are more involved in implementing custom analytics than in monitoring them.
I mentioned briefly that integrating analytics into the bot's functionality is critical for successful bot building. A/B testing needs to integrate custom analytics and can then use a simple algorithm to optimize the conversation. More complex integration can be used to optimize the performance of the bot, such as the optimization mentioned previously to ensure that the difficulty of the times table bot (or, more realistically, a more complex game) is tuned to each user.
Many large software companies, such as Google, Microsoft and IBM, offer chatbot analytics services. Although these services can easily provide generic analytics, I hope I have made a clear case that to get the full benefit of analytics they need to be customized and tightly coupled to the functionality of the bot, in a way that differs from non-conversational software such as websites. It is therefore essential that the bot framework used allows developers to customize the admin panel.