Reverse-Time Event Sequence Prediction Using Summary Markov Models and Denoising Diffusion Processes
Abstract
Choosing a suitable subset of the event information is the first step in applying the framework. We cannot offer general assistance with this step, since it is highly application-specific and requires creativity. Once a dataset has been constructed, the event data are represented by a probabilistic model fitted with a learning algorithm; the algorithm's output is a representation suitable for prediction. This requires feeding the framework event data from executed process instances. Given the current sequence of events, the probabilistic model lets us assess the likelihood of different continuations of the process. In particular, it can be used to predict the event type most likely to continue the process. A framework user may also wish to verify that the prediction model does not contradict common sense. To support this, the first step of this study applies a transformation algorithm that derives a conceptual model from the probabilistic model. This conceptual model can be visualized and interpreted, so a user can check how well it matches their expectations. Depending on those expectations, the user may find contradicting behavior or conclude that the probabilistic model is adequate. In the former case, a problem analysis can be conducted to determine whether the expectations were incorrect or the probabilistic model is insufficient. When the transformation algorithm produces a conceptual model too complex for direct human inspection, the framework offers algorithmic assistance: model query techniques can check for specific patterns in the model, where the patterns encode the user's expectations about the presence or absence of model structure.
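As a minimal sketch of the prediction step described above, the following code fits a first-order transition model from event sequences and returns the event type most likely to continue the process. The event names and the `fit_transitions` / `most_likely_next` helpers are invented for illustration; they are not the framework's actual learning algorithm, which the abstract does not specify.

```python
from collections import defaultdict

def fit_transitions(sequences):
    """Estimate first-order transition probabilities from event sequences
    (an illustrative stand-in for the fitted probabilistic model)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    probs = {}
    for prev, nxts in counts.items():
        total = sum(nxts.values())
        probs[prev] = {event: c / total for event, c in nxts.items()}
    return probs

def most_likely_next(probs, current_event):
    """Return the event type with the highest continuation probability."""
    dist = probs.get(current_event, {})
    return max(dist, key=dist.get) if dist else None

# Hypothetical process instances
sequences = [["start", "review", "approve", "end"],
             ["start", "review", "reject", "end"],
             ["start", "review", "approve", "end"]]
model = fit_transitions(sequences)
print(most_likely_next(model, "review"))  # "approve" (2 of 3 continuations)
```

The learned transition dictionary is also directly inspectable, which mirrors the abstract's point that a user can check the model against their expectations.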
Any model, no matter how complex, can be tested by comparing these expectations to the query results. Markov chain models are well suited to reverse-time prediction because of their capacity to handle sequential data, that is, to "remember" information from earlier events in the sequence as they move backwards through time. Finally, we exploit this property to estimate the model order (memory) of time series derived from electrocorticographic (ECoG) recordings of epileptic episodes. A novel method that merges estimates from forward and backward predictors yields improved prediction accuracy and correlation efficiency. By analyzing dynamic graphs built from time-series data, we show that events can be informed by changes in Markov chain systems, i.e., by time-series fluctuations. In a Markov chain, a stochastic model, the next state is determined solely by the present state; earlier states are irrelevant. This property inspired the proposed encoding technique, which aims to provide accurate and interpretable predictions of time-series events. In a process that inverts conventional Markov chain prediction, the conditional likelihood of the prior state is computed given the present state. Experimental results on five real-world datasets demonstrate that our method outperforms the baselines and offers additional explanations for event prediction outcomes.
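The inverted prediction step, computing the likelihood of the prior state given the present one, can be sketched with Bayes' rule: P(prev = i | cur = j) = P(cur = j | prev = i) · π(i) / Σₖ P(j | k) · π(k). The transition matrix and stationary distribution below are invented for illustration and are not taken from the paper's datasets.

```python
def reverse_transitions(P, pi):
    """Reverse-time transition probabilities via Bayes' rule:
    R[j][i] = P(prev=i | cur=j) = P[i][j] * pi[i] / sum_k P[k][j] * pi[k].
    P is the forward transition matrix (rows sum to 1); pi is the
    stationary state distribution."""
    n = len(P)
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        norm = sum(P[k][j] * pi[k] for k in range(n))
        for i in range(n):
            R[j][i] = P[i][j] * pi[i] / norm
    return R

# Hypothetical 3-state chain biased toward the cycle 0 -> 1 -> 2 -> 0.
# Its columns each sum to 1, so the uniform distribution is stationary.
P = [[0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8],
     [0.8, 0.1, 0.1]]
pi = [1 / 3, 1 / 3, 1 / 3]
R = reverse_transitions(P, pi)
# Each row of R is a distribution over the *previous* state given the
# current one; here, being in state 1 most likely follows state 0.
```

For a non-reversible chain like this cycle, R differs from P, which is precisely what makes the backward predictor a distinct source of information to merge with the forward one.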
DOI: https://doi.org/10.31449/inf.v49i15.8654
License
Authors retain copyright in their work. By submitting to and publishing with Informatica, authors grant the publisher (Slovene Society Informatika) the non-exclusive right to publish, reproduce, and distribute the article and to identify itself as the original publisher.
All articles are published under the Creative Commons Attribution license CC BY 3.0. Under this license, others may share and adapt the work for any purpose, provided appropriate credit is given and changes (if any) are indicated.
Authors may deposit and share the submitted version, accepted manuscript, and published version, provided the original publication in Informatica is properly cited.