RPA Monitoring Dashboard in Power BI
Running and maintaining an RPA project is as challenging as any other IT project. Once it gains speed and the first releases reach a production environment, we must ensure the continuous, error-free execution of the automated business processes. This calls for a solid monitoring concept that provides near real-time reporting as well as drill-down capabilities, so that the root cause of any error can be located and identified with little effort. Moreover, this concept should be platform-agnostic: it should not depend on the vendor that provides the RPA technology. That is no small task, since there are more than 50 RPA vendors on the market.
Another point to consider is the lack of options for tracking the KPIs that business users define for each bot. There is simply no built-in way to keep a record over time of how often a certain event occurred during execution, or of which input/output data items are processed most frequently.
Power BI Solution
We decided to build a solution that addresses all of the aforementioned problems without introducing issues around scalability, system resource availability, platform dependencies, or custom coding styles. Its greatest added value comes from its simplicity. The solution requires only the definition of a simple logging framework that every developer implements and follows. As long as developers obey this small set of rules, we always get an up-to-date picture of all our productive bots. Thanks to this approach, new custom metrics can be added as needed, without heavy code changes.
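As an illustration, such a logging framework can boil down to a small helper that every bot calls at fixed points in its run. This is a minimal sketch, not the actual framework: the field names and event types below are illustrative assumptions.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("rpa")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_event(bot: str, task: str, event: str, **details) -> str:
    """Emit one structured log line.

    'event' might be e.g. START, END, ERROR, or KPI (hypothetical values);
    extra keyword arguments carry custom, developer-defined metrics.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "bot": bot,
        "task": task,
        "event": event,
        **details,
    }
    line = json.dumps(record)
    logger.info(line)
    return line

# Example: a bot reports a custom business KPI during its run.
log_event("InvoiceBot", "ExtractPDF", "KPI",
          metric="invoices_processed", value=42)
```

Because every record shares the same envelope (timestamp, bot, task, event), new custom metrics arrive as extra fields and need no changes to the downstream pipeline.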
The logging files are processed in real time using a traditional stream-processing platform and fed into a database. The data is then imported into Power BI.
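The ingestion step can be sketched as follows. In production a stream-processing platform would feed a relational database that Power BI imports from; here an in-memory SQLite table stands in for that database, and the table and column names are illustrative assumptions.

```python
import json
import sqlite3

# Stand-in for the database that Power BI imports from.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE bot_events (
           timestamp TEXT, bot TEXT, task TEXT, event TEXT, details TEXT)"""
)

def ingest(raw_line: str) -> None:
    """Parse one JSON log line and persist it as a row."""
    rec = json.loads(raw_line)
    # Keep the common envelope as columns; stash custom fields as JSON.
    extras = {k: v for k, v in rec.items()
              if k not in ("timestamp", "bot", "task", "event")}
    conn.execute(
        "INSERT INTO bot_events VALUES (?, ?, ?, ?, ?)",
        (rec["timestamp"], rec["bot"], rec["task"], rec["event"],
         json.dumps(extras)),
    )
    conn.commit()

ingest('{"timestamp": "2023-01-01T00:00:00Z", "bot": "InvoiceBot", '
       '"task": "ExtractPDF", "event": "ERROR", "message": "timeout"}')
```

Storing the custom fields as a JSON column keeps the schema stable while still letting new developer-defined metrics flow through unchanged.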
By using Power BI, Adastra built a semantic model from the logging data of the bots and their tasks. On top of that semantic model, an interactive report was built to analyze the historical performance of all running bots in the organization. From the main overview page, the user can navigate to a bot or task overview page, where key performance metrics are monitored both overall and for a specific bot or task. Any bot can then be analyzed further through drill-down, examining the performance of each of its tasks and any deviations.
- Track the overall number of bots running in the organization
- Find trends in bot failures over time
- Understand where bots fail most often
- Monitor any discrepancies in run times
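Assuming each run produces log records with a bot, task, event type, and timestamp, metrics like those above reduce to simple aggregations over the event table. A pandas sketch with illustrative column names and toy data (the real report computes these in the Power BI semantic model):

```python
import pandas as pd

# Illustrative log records; in practice these come from the database.
events = pd.DataFrame(
    {
        "timestamp": pd.to_datetime(
            ["2023-01-01", "2023-01-01", "2023-01-02", "2023-01-02"]
        ),
        "bot": ["InvoiceBot", "InvoiceBot", "HRBot", "InvoiceBot"],
        "task": ["ExtractPDF", "PostSAP", "Onboard", "ExtractPDF"],
        "event": ["SUCCESS", "ERROR", "SUCCESS", "ERROR"],
    }
)

# Overall number of distinct bots running.
bot_count = events["bot"].nunique()

err = events[events["event"] == "ERROR"]

# Failure trend over time: errors per day.
errors_per_day = err.groupby(err["timestamp"].dt.date).size()

# Where bots fail most often: errors per bot and task.
failures_by_task = (
    err.groupby(["bot", "task"]).size().sort_values(ascending=False)
)
```

The same grouping logic, expressed as measures over the semantic model, drives the overview and drill-down pages of the report.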