According to the article “Exploring the application of process mining to support self-regulated learning,” process mining comprises an extensive set of algorithms and techniques for the analysis of sequence and temporal data. Because it is a fairly recent discipline, new algorithms and techniques are continually being developed.
When people are involved, processes are often not fixed or deterministic. However, seeing what happens in the real world can help us understand how reality differs from what we plan or expect. Process mining can therefore be a very useful tool and play a key role in supporting human activities. This interview with Manuel Lama, a process mining professor at the University of Santiago de Compostela, offers an in-depth analysis of process mining, its current state, natural language generation techniques, opportunities for improvement, and its promising future.
What do you think are the 3 most important challenges in the coming years?
Manuel Lama: The most important challenges have to do with the qualitative leap that process mining has taken in the last 3 or 4 years, when it moved from developing techniques and strategies aimed at describing what had already happened in a process to focusing on predictive techniques.
In other words, we not only do the post-mortem but also analyze the predictions and how to take action to correct the possible deviations that may occur between what is predicted and what happens when the process is executed. So that’s where techniques for prediction and optimization, causality, and simulation come in. All of them, especially causality, optimization, and simulation, are still in a very early stage of development.
Attention has been focused on prediction techniques for about 6 or 7 years, and there are already interesting developments, techniques that have proven their performance, especially at an academic level and especially those based on deep learning architectures. The challenge for prediction techniques now is to apply them to real problems, problems in industry, to find out whether they are effective, whether the information they use is sufficient, or whether more information should be provided.
We also have causality techniques, which are tremendously important in understanding not what was happening but why it was happening. Here the important thing is the why: these techniques focus on extracting the reasons a process is executed in a certain way. For example, in a process whose objective is to accept or reject a request, why has a given request been accepted or rejected?
Likewise, when we have, for example, a process of navigating a website, these techniques let us know why a certain number of users visit some pages more than others, understanding that navigation as a process.
In a medical process, it is necessary to decide whether or not to operate on a certain patient, and causality techniques help explain why the operation happens or not. This matters because the process mining techniques we have today work with temporal relationships between activities. That is, if an activity A is connected to another activity B, that means activity A occurs before activity B, not that activity A causes activity B to execute.
So what we have to do is move from a model of temporal relations to a causal model, in which the relationships between activities identify the causes of those activities, and which connects not only the causes but also other elements of interest in the process, such as its attributes.
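To make the distinction concrete, here is a minimal sketch in Python, over an invented event log, of the directly-follows relation that many discovery techniques compute. It records only that one activity was observed immediately before another, which is exactly the temporal, non-causal information described above.

```python
from collections import Counter

# Hypothetical event log: one trace of activities per case, in execution order.
traces = {
    "case-1": ["Receive request", "Check documents", "Accept"],
    "case-2": ["Receive request", "Check documents", "Reject"],
    "case-3": ["Receive request", "Accept"],
}

# Directly-follows counts: (A, B) means A was observed immediately before B.
# This captures temporal order only; it does NOT show that A causes B.
dfg = Counter()
for activities in traces.values():
    for a, b in zip(activities, activities[1:]):
        dfg[(a, b)] += 1

for (a, b), n in dfg.most_common():
    print(f"{a} -> {b}: observed {n} time(s)")
```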
Continuing with these challenges, once we are clear about the causes, we can optimize, which in the end is the ultimate goal of process mining. Process mining aims to optimize processes: what has happened in their execution is studied, and what is going to happen can be predicted, but ultimately the objective is to facilitate decision-making by users to improve the processes.
Of course, if we don't know the causes of what has happened, we will hardly be able to optimize. We can assume them or intuit them from the information we have: we can see that a pattern of activities is executed very frequently and infer that an attribute has a higher value because that pattern has been executed many times, but that is still a user's inference. Here we are talking about something else.
If we know the causes, of course, we can take better measures, that is, optimize the process.
And finally, what completes the current picture are the simulation tools, which can be understood as the definitive process mining tools. If you can simulate a process based on what has happened, that is, on the actual execution of the process rather than on a model you may have; if you can eliminate activities to see what happens, introduce new activities, establish new participants that execute the activities, and play with a certain process configuration while simulating the result, you will greatly facilitate decision-making.
In short, one of the challenges that process mining faces today is to move beyond, while building on, the post-mortem analyses that have been done over the years.
Another challenge that is also very important is transmitting the information, the analytics resulting from process mining techniques, to the decision-makers. In most cases we have what is called a spaghetti process, in which there are so many relationships between activities that it is impossible for a user to visualize the structure of what is being shown. Therefore, we need to know how to explain and convey to the user what is happening. Historically, that has been done through advanced graphics, but in recent years one of the hottest lines in process engineering has been to use natural language generation techniques to explain the process. In short, instead of showing you the process as a graph, I explain it to you through text, but I don't tell you everything, because if I tell you everything it is very difficult to see what is most relevant. You lose information, but you may not need all the information.
One of the important issues in process engineering that concerns post-mortem analysis is to make users understand what has happened on a daily basis. One of the ways to do that is through natural language generation techniques.
In what environments will we see the application of process mining in the immediate future?
Manuel Lama: Currently there are a number of environments or application domains in which process mining techniques have been widely explored and in which they make perfect sense—for example, in the medical domain because there are a large number of exceptions to the established protocols and because in many cases the protocols have to be adapted to situations that are not foreseen.
So in these areas of health, it is very important to know what has happened and to be able to foresee what may happen, because processes with many exceptions tend to become spaghetti processes.
Process mining began to be applied in this area from the beginning, but there are other areas where systems capture this data and therefore lend themselves to such analyses. For example, in banking, all interactions between the bank and its customers are recorded. Because they are recorded, process mining techniques can be used to optimize and to identify anomalies that may exist within the processes.
We also have public administrations that are candidates to continue applying process engineering techniques since they have the obligation to optimize processes to provide users with a better service. Legally, at least in Spain, they have that obligation, and they have the data.
And, of course, industry is perhaps the most classic candidate: there are process management systems capturing information about the activities carried out within the framework of the process, and there too the objective must be optimization. The better optimized the process, the better its indicators will be.
Process mining can be understood as a fairly general tool for optimizing a process, because a process is a sequence of activities, and what counts as an activity can be defined specifically for each case. An activity can be, for example, passing through a space in a shopping center: if a user passes in front of a cafeteria, that can be recorded as the activity of passing in front of it, and through such activities you can trace the user's path through the shopping center. If there is not one user but thousands, you can do pattern analysis to identify the sites that users visit most frequently.
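A minimal sketch of that kind of pattern analysis, with invented location traces, simply treats each visit as a process execution and counts how often each space appears:

```python
from collections import Counter

# Hypothetical location traces: the spaces each visitor passed through.
visits = [
    ["Entrance", "Cafeteria", "Bookstore", "Exit"],
    ["Entrance", "Cafeteria", "Exit"],
    ["Entrance", "Bookstore", "Cafeteria", "Exit"],
]

# Count how often each space occurs across all traces, surfacing
# the most frequently visited sites.
frequency = Counter(space for trace in visits for space in trace)
print(frequency.most_common(3))  # [('Entrance', 3), ('Cafeteria', 3), ('Exit', 3)]
```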
A lot of work has also been done on e-learning. In my opinion, more work needs to be done on the use of process mining techniques within what is called learning analytics, that is, performing analytics on what students do on a platform.
With process mining, it is possible to study the phenomenon of why students do not finish a course, approaching it from different perspectives: analyzing the data in general or from the list of activities engaged in by users.
Process mining can be applied in very diverse fields. In virtual worlds, when you have an app in which the user carries out a series of activities, you can reconstruct the process execution that the user has carried out.
Both process simulation and process prediction are relevant to the industrial sectors (and many other sectors). What are the differences and relationships between them?
Manuel Lama: A prediction does not imply that there has to be a simulation. With a prediction, specifically in what is called predictive monitoring in process engineering, you predict the elements that are part of the process: when the next activity will take place, when the process execution will finish, the set of activities that can occur in the process, and the business indicators, that is, the results of the process execution.
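As an illustration only (the interview mentions deep learning architectures, but a first-order Markov model over invented traces is enough to show the idea), next-activity prediction for a running case could look like this:

```python
from collections import Counter, defaultdict

# Hypothetical completed traces used as training data.
history = [
    ["Submit", "Validate docs", "Approve"],
    ["Submit", "Validate docs", "Approve"],
    ["Submit", "Validate docs", "Reject"],
    ["Submit", "Reject"],
]

# First-order Markov model: count which activity follows which.
transitions = defaultdict(Counter)
for trace in history:
    for a, b in zip(trace, trace[1:]):
        transitions[a][b] += 1

def predict_next(prefix):
    """Return the most frequently observed continuation of a running case."""
    followers = transitions[prefix[-1]]
    return followers.most_common(1)[0][0] if followers else None

# Predictive monitoring of a running case: anticipate the next activity.
print(predict_next(["Submit", "Validate docs"]))  # -> "Approve"
```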
The prediction can be made; it can be presented to the user. Ideally, that prediction would require an explanation of why the system predicts what is going to happen, for example, a request being rejected. Why is an explanation necessary? Because that helps corporate decision-making.
You give that information and leave the decision-making to the user. Since I know that this request is going to be rejected, for example, because the documentation has not been correctly evaluated, I can take action to tell the user to perform the validation better and not wait for the completion of the process.
In the case of simulation, to a certain extent we also predict. If we are at a certain point of an execution, to simulate we have to find out which activity is going to be executed next. Ultimately, we have to indicate the activities that are going to be executed, the time at which each one will be executed, and how the execution will modify the business indicators.
Currently, there is a line of research in which prediction is linked with simulation; in it, to a certain extent, if we can predict, we can simulate.
Simulation, however, has a second component. As long as we do not modify anything in the process, prediction techniques suffice; but if we say we are going to eliminate this activity and simulate what happens, then we need more than prediction techniques.
Prediction would be enough if what we need to know is how the process is evolving based on what has happened before. From my point of view, simulation can be understood as the definitive tool. You have prediction; you have causality [and] optimization; you need causality to be able to prescribe actions.
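A toy what-if simulation can illustrate the difference, under heavy assumptions: the hand-written transition-probability model below stands in for one mined from a real log. Prediction replays the model as-is; simulation also lets you modify it, for example removing an activity, and observe the effect.

```python
import random
from collections import Counter

random.seed(42)

# Hypothetical transition probabilities, standing in for a model mined
# from historical executions. "END" marks process completion.
model = {
    "Start":     {"Exam": 0.7, "Exercises": 0.3},
    "Exam":      {"Debate": 1.0},
    "Debate":    {"END": 1.0},
    "Exercises": {"END": 1.0},
}

def simulate(model, runs=1000):
    """Replay the model many times and count the variants produced."""
    variants = Counter()
    for _ in range(runs):
        trace, current = [], "Start"
        while current != "END":
            options = model[current]
            current = random.choices(list(options), weights=list(options.values()))[0]
            if current != "END":
                trace.append(current)
        variants[" -> ".join(trace)] += 1
    return variants

print(simulate(model))  # roughly a 70/30 split between the two variants

# What-if: eliminate "Exercises" and route every case through the exam path.
modified = {**model, "Start": {"Exam": 1.0}}
print(simulate(modified))  # a single variant remains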
When we talk about the “mutation and change” of processes, what exactly do we mean? What is the difference?
Manuel Lama: Mutation and process change can be understood as synonyms. But what does process change mean, given that a process can be understood as something stable over time, that is, as an execution that follows a fixed set of activities? In reality, a process is something very much alive, because its fundamental characteristic is that many of the activities are carried out by people, and people have their own situations and make decisions, in many cases corrective measures taken so that the process is executed in the best way. So what does process change mean? It means that the process is not always executed in the same way. Let's imagine an e-learning process in which, at a certain point, the student can either take an exam and then participate in a debate, or do a set of exercises. You have those two possibilities to obtain a certain result.
It is possible that students begin to execute the process with a 70-30 split between the two options, but while those students are finishing and the teacher is reviewing the results, they can interact with other students and tell them that the exercises take more time than taking the exam and participating in the debate. So that 70-30 proportion changes; there may even come a time when no one does the exercises, and if no one does them, that activity is no longer carried out and the process is not the same. The process model has changed: its structure and the way it is executed. And it can change due to decisions made at a given moment, in this case because the teachers eliminate that exercise, or because the dynamics of execution change when a set of activities is no longer performed.
That said, it is important to have tools that detect this change in the process because otherwise the decisions are being made assuming that the process has a certain structure when that is no longer the case.
Currently, process mining tools do not take into account that the process can change, so decisions are made about something that is no longer happening. You can say, hey, in the last year 35% of the students have done the exercises, and yes, it's true, but what you don't say is that in the last 4 months only 0.5% or 1% did, because the process has changed. What fails there is the identification of that process change: the recognition that, from a given point on, new analytics must be extracted under the assumption that a different process exists.
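A minimal drift check, reusing the interview's own made-up figures, could compare how often the activity occurs in a recent window against the historical rate. Real concept-drift detectors use statistical tests over sliding windows, but the idea is the same:

```python
# Hypothetical monthly figures: (cases observed, cases that did "Exercises").
monthly = [
    (100, 35), (100, 34), (100, 36), (100, 33),  # older months
    (100, 1), (100, 0), (100, 1), (100, 2),      # last 4 months
]

def rate(rows):
    """Fraction of cases that performed the activity."""
    return sum(done for _, done in rows) / sum(cases for cases, _ in rows)

overall, recent = rate(monthly), rate(monthly[-4:])

# Crude drift signal: flag a large gap between recent and overall behavior.
if abs(recent - overall) > 0.10:
    print(f"Possible process change: {overall:.0%} overall vs {recent:.0%} recently")
```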
Natural language and the evolution of AI in this field are generating a lot of discussion. How can we use natural language processing to capture processes and to help explain their behavior? Can you tell us about your research on the subject?
Manuel Lama: My research is related not so much to natural language processing as to natural language generation; they are different things. Natural language generation is related to what I discussed before: if you have a process with many activities and many relationships between those activities, you can use visual tools as a foundation to see what occurs.
Basically, what you are doing is exploring what is happening through these visual tools. You can select variants by frequency and execution time; you can eliminate or keep the variants that lack a given activity, or those in which the time between two activities is below a certain threshold, and so on. Ultimately, what the process analyst does is ask questions and explore through visual tools.
The problem with these visual tools is that in many cases we are oversaturated with information, so this deep exploration takes a long time, and drawing conclusions is not easy, because the analyst also has to know the domain or, at the very least, has to answer a set of questions posed by the domain experts about the process.
You can address these questions in two ways. You can do that exploration, obtain some results, and deliver those results through graphs. Then who interprets these results? The business analyst or the domain expert, looking at the graphs, seeing the trend in activity frequency, the trend in a variable, and so on; the interpretation must necessarily be done by the user of the tool. In natural language generation, the objective is not to replace these visual tools, because they provide a large amount of information and are very useful; the objective is to complement them so that the most relevant information about the process is described to the user.

For example, with a natural language description I could tell you that, in the last 3 months, most of the patients who were admitted from the ER and had a CT scan and an ultrasound were operated on within 6 months. I have condensed it into a sentence, but extracting it through visual tools is quite complicated. First, the process model is not going to tell you; that model is a spaghetti of relationships and activities. What you have to do is a frequency and time analysis, looking for what I just described: the relationships in the last 3 months between the admissions that occurred, the treatment the patients received, and the result of the processes, that is, whether an operation took place and, if so, how long it took.
You have to combine a great deal of information to obtain those results. If I give them to you in natural language, you can read them and then, when you see the graphs, draw conclusions about what has happened. That is one of the areas of natural language generation in which we are working. We have applied the technique in the medical domain with success: we presented health professionals with information graphically and textually, and they preferred the textual form. Obviously, we cannot say that they will prefer the textual form in all cases (more information is needed in this regard), but the results are very promising.
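As a sketch of the idea, a simple template-based generator over pre-aggregated figures could produce the emergency-room sentence above; all names and numbers here are invented, and real systems select and rank the content far more carefully:

```python
# Hypothetical figures pre-aggregated from the event log.
stats = {
    "window_months": 3,
    "cohort": "patients admitted from the ER who had a CT scan and an ultrasound",
    "share": 0.82,
    "outcome_months": 6,
}

# Template-based generation: fill a fixed sentence skeleton with the
# aggregated figures instead of showing the user a spaghetti model.
template = (
    "In the last {window_months} months, {share:.0%} of the {cohort} "
    "were operated on within {outcome_months} months."
)

print(template.format(**stats))
```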
There is another area that is also very relevant: the extraction of the event log from unstructured information. The vast majority of process engineering techniques assume that the data is in a database with the information in columns: which case it was, what activities took place, when they started, when they finished, who performed them, and a series of business indicators.
The data is supposed to be in a CSV file or a database. But what if it is not in those structured formats? What if the data, for example, is in a set of emails that users have exchanged? What if it is in a document that describes what has been done? There we do need natural language processing techniques to extract those events automatically. To date, as far as I know, there are still very few techniques for this kind of analysis and little experience with them. In fact, it is one of the open challenges in this marriage between process engineering and natural language processing and generation.
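As a toy illustration of the target (every name and message below is invented, and real emails would need genuine NLP rather than a regular expression), the goal is to turn free text into event-log rows with a case id, activity, timestamp, and resource:

```python
import re

# Hypothetical email summaries; real messages are far messier than this.
emails = [
    "2023-05-02 09:14 | From: ana@example.com | order 1042: documents validated",
    "2023-05-03 16:40 | From: luis@example.com | order 1042: request approved",
]

# One event-log row per email: case id, activity, timestamp, resource.
PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}) \| "
    r"From: (?P<resource>\S+) \| order (?P<case>\d+): (?P<activity>.+)"
)

for mail in emails:
    match = PATTERN.match(mail)
    if match:
        print(match.groupdict())
```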
In closing, what would you like to see become a reality in the world of process mining 5 years from now?
Manuel Lama: What I would like is for it to be applied more widely. Today the real applicability of process mining techniques is still limited, and I would like them to be used in the same way as other techniques, tools, and technologies that have already been adopted. For example, today nobody argues about whether a company should have a database, and very few companies dispute that they need an ERP to manage their processes. The adoption of process mining techniques still requires what in other realms is called a killer application. What was the point of no return for the adoption of the Internet, for example? Email: as the tool that made using the internet for interaction between people indispensable, it was the killer application, the central tool that made people see the need to adopt the internet.
In my opinion, process mining still lacks a killer application that, applicable to almost any company, would drive its adoption. Quite possibly this is conditioned by limited data availability: although nobody disputes the value of data, it is one thing to collect data and another to make quality data available in a form other tools can use. I believe these two aspects are linked: the availability of data and the availability of a tool that goes beyond exploration and beyond visual tools, which are very useful but require great effort; an application that, making use of well-structured, curated, and available data, allows companies to make decisions easily. The killer application that will make the market really see the need for that adoption, the need to have process mining in general and that tool in particular, is still missing.
Steps are being taken; techniques keep improving and the needs of companies are better understood. But, from my perspective, there is still a long way to go before a company that sells process mining can go to another company that needs this tool and find the necessity already clear, because that company has seen it applied elsewhere, because it has already worked for others, and because adopting it is simple.
And that's the third element: as long as adopting this technology is not simple, we will have a barrier. I believe the three elements are related, and that is what I would like as a researcher in process mining. It would allow us to continue developing new ideas [and meeting] new challenges, things that are difficult to do today because of the limited availability of data. In fact, in certain areas and lines of research, synthetic data is being used because obtaining real data is very difficult. This is something that is being worked on in the "living lab" that Inverbis has set up with Queizuar.
I would like to go to companies that have real data and see to what extent we could make predictions. A great example is the Inverbis living lab with Queizuar.
And of course, one of the things that I really value about research in process mining is that it has applicability; it does not remain in theoretical models but solves real problems in the short to medium term, while the research still gets the time it requires to combine ideas and foster a certain creativity.
Knowing that the objective is to solve a real problem is for me, personally, one of the things that I like most about process mining research.
If you want to know more about our solution and how we apply process mining to process improvement, you can click on this link to register and request your demo.