Despite significant advances in deep learning for sequence forecasting, neural models are typically trained on data alone, and incorporating high-level prior logical knowledge into their training remains a hard challenge. This limitation hinders the exploitation of background knowledge, such as common sense or domain-specific information, in predictive tasks performed by neural networks. In this work, we propose a principled approach to integrating prior knowledge expressed in Linear Temporal Logic over finite traces (LTLf) into deep autoregressive models for multi-step symbolic sequence generation (i.e., suffix prediction) at training time. Our method represents the logical knowledge through continuous probabilistic relaxations and employs a differentiable schedule for sampling the next symbol from the network. We test our approach on synthetic datasets based on background knowledge in Declare, inspired by Business Process Management (BPM) applications. The results demonstrate that our method consistently improves the performance of the neural predictor, achieving lower Damerau-Levenshtein (DL) distances from the target sequences and higher satisfaction rates of the logical knowledge compared to models trained solely on data.
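As a rough illustration of the kind of differentiable next-symbol sampling the abstract refers to, the sketch below uses a straight-through Gumbel-Softmax relaxation in PyTorch. This is an assumption about one plausible realization, not the authors' implementation; the function name sample_next_symbol and the usage shown are hypothetical.

```python
import torch
import torch.nn.functional as F

def sample_next_symbol(logits: torch.Tensor, tau: float = 1.0, hard: bool = True) -> torch.Tensor:
    """Draw a differentiable (relaxed) one-hot sample for the next symbol.

    logits: (batch, vocab) unnormalized scores from the autoregressive decoder.
    tau:    relaxation temperature; lower values push samples closer to one-hot.
    hard:   if True, use the straight-through estimator (discrete forward pass,
            relaxed gradients in the backward pass).
    """
    return F.gumbel_softmax(logits, tau=tau, hard=hard)

# Hypothetical usage inside a training loop: the relaxed one-hot vector is fed
# back as the next decoder input, so gradients from a downstream relaxed
# LTLf-satisfaction loss can flow through the sampling step.
logits = torch.randn(4, 10)               # batch of 4, vocabulary of 10 symbols
next_symbol = sample_next_symbol(logits)  # (4, 10) one-hot-like tensor
```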
Publication details
2024, Proceedings of the 3rd International Workshop on Process Management in the AI Era (PMAI 2024) co-located with 27th European Conference on Artificial Intelligence (ECAI 2024), Santiago de Compostela, Spain, October 19, 2024, Pages 23-34 (volume: 3779)
Enhancing Deep Sequence Generation with Logical Temporal Knowledge (04b Conference paper in proceedings)
Umili E., Paludo Licks G., Patrizi F.