Research focus
We conduct research in the field of Online Machine Learning (OML). This subfield of artificial intelligence focuses on methods and systems that can adapt to and track changes during live operation. This key capability sets them apart from the AI systems widely deployed today, which are trained on large datasets and no longer learn while in operation.
OML methods are applied in the real-time analysis of data streams, for example in sensor networks. A typical application is the detection of anomalies and drift in the passing data, which in an industrial context can be used for the early detection and correction of technical faults (predictive maintenance). OML also opens up new possibilities for early detection in the analysis of daily financial data and, in the context of corporate management, in business analytics/business intelligence. Beyond that, OML methods are applied in adaptive control and adaptive simulation.
blueAIC contributes to the advancement of this field through its own research and publications. Our agenda covers the following focus areas:
- Novel adaptation mechanisms for data stream applications
- Efficient online-adaptive algorithms
- Online-adaptive control
- Online-adaptive simulation
For the structured, production-oriented implementation of selected methods, we use the open-source framework MLPro.
The contributions to date originated at Fachhochschule Südwestfalen, in the Department of Electrical Power Engineering, within the group for Automation Technology and Learning Systems.
Extended adaptation in data streams
This research strand addresses new adaptation mechanisms in data streams that go beyond purely forward-directed incremental model updates. At its center is the OML application as a stream workflow composed of adaptive stream tasks, together with their interaction and cooperative adaptation. To this end, we have proposed three new adaptation mechanisms and implemented them in MLPro:
- Event-oriented adaptation
- Task-spanning adaptation cascades
- Reverse adaptation
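The interplay of the first two mechanisms can be illustrated with a minimal, framework-independent sketch. All class and method names below are our own illustrative choices and are not taken from the MLPro API:

```python
# Illustrative sketch (not the MLPro API): stream tasks that raise
# adaptation events, which propagate to dependent tasks in the workflow.

class StreamTask:
    """A minimal adaptive stream task with event-based adaptation."""

    def __init__(self, name):
        self.name = name
        self._listeners = []   # dependent tasks notified on adaptation

    def register_listener(self, task):
        self._listeners.append(task)

    def _raise_adaptation_event(self, event):
        # Cascaded adaptation: dependent tasks adapt in response,
        # decoupled from the forward-facing stream processing.
        for task in self._listeners:
            task.on_adaptation_event(self, event)

    def on_adaptation_event(self, sender, event):
        pass  # default: ignore


class Normalizer(StreamTask):
    """Rescales instances; adapts its bounds when a value leaves the range."""

    def __init__(self):
        super().__init__('normalizer')
        self.lo, self.hi = 0.0, 1.0

    def process(self, x):
        if x < self.lo or x > self.hi:
            # Event-oriented adaptation: a local, event-driven update.
            self.lo, self.hi = min(self.lo, x), max(self.hi, x)
            self._raise_adaptation_event(('bounds', self.lo, self.hi))
        return (x - self.lo) / (self.hi - self.lo)


class Detector(StreamTask):
    """Downstream task that must re-parameterize when the bounds change."""

    def __init__(self):
        super().__init__('detector')
        self.bounds = (0.0, 1.0)

    def process(self, x):
        return x

    def on_adaptation_event(self, sender, event):
        kind, lo, hi = event
        self.bounds = (lo, hi)   # cooperative re-adaptation


norm, det = Normalizer(), Detector()
norm.register_listener(det)
for x in [0.5, 2.0, -1.0]:
    det.process(norm.process(x))
print(det.bounds)   # (-1.0, 2.0): the bounds followed the normalizer
```

Reverse adaptation would add a second, backward-facing path through which obsolete instances leaving a sliding window are handed back to earlier tasks so they can revise prior adjustments; it is omitted here for brevity.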
Publications
Authors
Detlef Arend, Laxmikant Shrikant Baheti, Steve Yuwono, Syamraj Purushamparambil Satheesh Kumar, Andreas Schwung
Journal
Machine Learning with Applications, Vol. 21, September 2025, Article 100715
Abstract
In this paper, we present version 2.0 of the open-source middleware MLPro for applied machine learning in Python. Notably, it introduces the new sub-framework MLPro-OA for online machine learning, focusing on standards and templates for classic and online-adaptive data stream processing (DSP/OADSP). As part of this, we provide three novel adaptation mechanisms: The first, event-oriented adaptation, enables localized, event-driven parameter updates within individual tasks. The second, cascaded adaptation, allows adaptation events to propagate across multiple dependent tasks, creating task-spanning adjustment cascades decoupled from the forward-facing DSP. The third, reverse adaptation, allows tasks to revise prior adjustments by explicitly processing obsolete instances discarded from a preceding sliding window. Furthermore, we provide insights into the underlying design criteria of MLPro-OA, which were developed through extensive requirements engineering. In the practical part of this work, we demonstrate the essential functionalities of MLPro-OA using reproducible examples.
DOI
Supplementary materials
- GitHub - fhswf/paper-mlwa-mlpro-2.0: Paper ScienceDirect MLWA - Arend e.a. - "MLPro 2.0 - Online machine learning in Python"
- MLPro Online documentation
- GitHub - fhswf/MLPro: MLPro - The Integrative Middleware Framework for Standardized Machine Learning in Python
- GitHub - fhswf/MLPro-Int-River: MLPro: Integration River
- GitHub - fhswf/MLPro-Int-scikit-learn: MLPro: Integration scikit-learn
- GitHub - fhswf/MLPro-Int-OpenML: MLPro: Integration OpenML
- GitHub - fhswf/MLPro-Int-SB3: MLPro: Integration StableBaselines3
- GitHub - fhswf/MLPro-Int-Gymnasium: MLPro: Integration Gymnasium
- GitHub - fhswf/MLPro-Int-PettingZoo: MLPro: Integration PettingZoo
- GitHub - fhswf/MLPro-Int-Optuna: MLPro: Integration Optuna
- GitHub - fhswf/MLPro-Int-Hyperopt: MLPro: Integration Hyperopt
Authors
Detlef Arend, Andreas Schwung
Conference
2025 IEEE 8th International Conference on Industrial Cyber-Physical Systems (ICPS)
Abstract
There are essentially two global trends that are currently heating up the AI sector. An increasing number of large language models (LLM) creates enormous resource requirements due to extensive data storage and complex training runs. Furthermore, advancing digitalization leads to an increased volume of data and the associated need for real-time analysis. In both areas, it is becoming apparent that the established AI models are reaching their limits despite all optimizations. This is where online machine learning comes in, with new models optimized to process incoming data in real-time and continuously adapt to it. This brings special requirements for the algorithms and their orchestration in extensive stream workflows. In this article, we focus on the latter aspect and highlight the two concepts of cascaded and reverse adaptation, which enable significant functional improvements individually and in combination. A corresponding extension of our open-source framework MLPro also enables us to demonstrate the adaptation mechanisms mentioned practically.
DOI
10.1109/ICPS65515.2025.11087907
Supplementary materials
Online-adaptive cluster analysis
A current focus is the development of a framework for online-adaptive cluster analysis. The architecture supports a variable number of clusters as well as an extended statistical data representation per cluster. The design centers on efficient update mechanisms and on the interchangeability and extensibility of central model components.
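The kind of efficient per-cluster update the design aims at can be sketched with Welford's online algorithm, which maintains mean and variance in O(1) per instance without storing the data. The class below is an illustrative simplification, not a component of the framework under development:

```python
# Illustrative sketch (names are our own, not taken from MLPro): a cluster
# that maintains mean and variance incrementally via Welford's algorithm.

class OnlineCluster:
    def __init__(self, dim):
        self.n = 0
        self.mean = [0.0] * dim
        self._m2 = [0.0] * dim   # running sum of squared deviations

    def update(self, x):
        """Incorporate one instance without storing it."""
        self.n += 1
        for i, xi in enumerate(x):
            delta = xi - self.mean[i]
            self.mean[i] += delta / self.n
            self._m2[i] += delta * (xi - self.mean[i])

    def variance(self):
        """Sample variance per feature (zero until two instances are seen)."""
        if self.n < 2:
            return [0.0] * len(self.mean)
        return [m2 / (self.n - 1) for m2 in self._m2]


c = OnlineCluster(dim=2)
for x in [(1.0, 10.0), (2.0, 12.0), (3.0, 14.0)]:
    c.update(x)
print(c.mean, c.variance())   # [2.0, 12.0] [1.0, 4.0]
```

Because each update touches only a handful of scalars per feature, the cost is independent of how many instances the cluster has absorbed, which is what makes such statistics viable in unbounded streams.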
A scientific publication on this topic is in preparation; more details will follow after its release.
Cluster-based anomaly and drift detection
Building on our research on online-adaptive cluster analysis, we investigate efficient algorithms for detecting anomalies and drift in evolving cluster sets. These enable a wide range of concrete applications in the automatic early detection of changes and faults in numerical data streams.
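As an illustration of the general idea only, and not of the unpublished algorithms themselves, anomaly and drift checks over a cluster set can be reduced to distance tests against the current centroids:

```python
# Hedged sketch of the general principle: an instance is anomalous if it is
# far from every cluster centroid; drift is flagged if a centroid has moved
# noticeably relative to a reference snapshot.

import math

def dist(a, b):
    """Euclidean distance between two points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def is_anomaly(x, centroids, radius):
    """Anomaly: the instance lies outside the radius of every cluster."""
    return all(dist(x, c) > radius for c in centroids)

def drifted(centroids, reference, threshold):
    """Drift: any centroid moved more than `threshold` since `reference`."""
    return any(dist(c, r) > threshold for c, r in zip(centroids, reference))


reference = [(0.0, 0.0), (5.0, 5.0)]
current   = [(0.1, 0.0), (6.5, 5.0)]    # the second cluster has moved

print(is_anomaly((10.0, 10.0), current, radius=2.0))   # True
print(drifted(current, reference, threshold=1.0))      # True
```

Practical algorithms must additionally cope with clusters appearing, merging, and disappearing, which is precisely what makes detection in evolving cluster sets a research topic in its own right.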
A scientific publication on this topic is in preparation; more details will follow after its release.
Online-adaptive control
Complementing the analysis of continuous data streams, we investigate online-adaptive control methods at the interface of classical control engineering and reinforcement learning.
Learning models are integrated directly into control loops in order to adapt and optimize controllers in a data-driven manner under changing conditions.
For the methodological implementation, a reusable sub-framework for online-adaptive control and learning methods has been implemented in MLPro, allowing classical control algorithms to be combined with learning components in a structured way. In particular, it includes a PID controller whose parameters are adapted to evolving plants by means of reinforcement learning.
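The principle behind such a hybrid controller can be sketched independently of MLPro: a discrete PID law whose gains are exposed so that an outer learning loop, for example an RL agent, can retune them while the control loop keeps running. The code below is a deliberately simplified illustration, not the published RLPID implementation:

```python
# Simplified sketch (our own, not the MLPro/RLPID code): a discrete PID
# controller with externally adjustable gains, driving a first-order plant.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt
        self._integral = 0.0
        self._prev_error = 0.0

    def set_gains(self, kp, ki, kd):
        """Called by the tuning agent between control steps."""
        self.kp, self.ki, self.kd = kp, ki, kd

    def step(self, setpoint, measurement):
        """One discrete PID update: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
        error = setpoint - measurement
        self._integral += error * self.dt
        derivative = (error - self._prev_error) / self.dt
        self._prev_error = error
        return self.kp * error + self.ki * self._integral + self.kd * derivative


pid = PID(kp=2.0, ki=0.5, kd=0.0, dt=0.1)
y = 0.0
for k in range(100):
    if k == 50:
        # Online retuning mid-operation, as an RL agent might do based on
        # a reward built from control criteria (overshoot, settling time, ...).
        pid.set_gains(3.0, 0.5, 0.0)
    u = pid.step(setpoint=1.0, measurement=y)
    y += (u - y) * 0.1        # first-order plant: y' = u - y, Euler step
print(round(y, 2))            # converges toward the setpoint 1.0
```

The essential point is the separation of time scales: the PID law acts at every control step, while the learning component adjusts the gains more slowly, so the loop remains a conventional controller between adaptations.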
Publications
Authors
Detlef Arend, Amerik Toni Singh Padda, Andreas Schwung, Dorothea Schwung
Conference
IEEE, 2025 11th International Conference on Control, Decision and Information Technologies (CoDIT)
Abstract
This paper presents the novel RLPID architecture for online-adaptive control, which combines classical proportional-integral-derivative (PID) control with reinforcement learning (RL). This hybrid approach enables dynamic online adjustment of PID parameters during control operation. Specifically, we propose a multi-objective reward structure that integrates established control criteria and analyze suitable configurations for different system dynamics. The RLPID controller has been implemented within the open-source middleware MLPro, where it is embedded in newly developed sub-frameworks for classical and online-adaptive control. Owing to its hybrid nature, the architecture can be used both in traditional control loops and within the Markov decision process of RL. Its effectiveness and practical applicability are demonstrated in a cascade control scenario.
DOI
10.1109/CoDIT66093.2025.11321229
Supplementary materials
MLPro - Open-source framework and reference architecture
MLPro is an open-source engineering middleware that has been developed since 2021 in the scientific environment of Fachhochschule Südwestfalen, together with other researchers and students.
The architecture is modular and comprises several sub-frameworks, including Online Machine Learning (MLPro-OA), Reinforcement Learning (MLPro-RL), Game Theory (MLPro-GT), and Supervised Learning (MLPro-SL). MLPro is therefore not a pure OML framework but an integrative software foundation for a range of learning and decision-making methods.
It provides a consistent, extensible structure in which selected adaptation, analysis, learning, and control methods can be implemented, combined, and further developed.
MLPro serves both as a technical reference architecture and as a sustainable implementation basis for published and future work. Accordingly, experiments with and references to MLPro can be found in all of our publications. The framework itself has been described in detail in several publications.
Publications
Authors
Steve Yuwono, Detlef Arend, Andreas Schwung
Journal
SoftwareX, Vol. 28, December 2024, Article 101956
Abstract
Game theory, a fundamental aspect of mathematical economics and strategic decision-making, has been increasingly applied to various fields, including economics, biology, computer science, and engineering. Despite its growing importance, there is a significant lack of flexible and user-friendly tools for standardized modeling of them, particularly for real-world applications. Hence, we developed MLPro-GT as part of our open-source MLPro project, which offers modular and standardized yet flexible components, extensive documentation, and a variety of examples. MLPro-GT allows researchers and practitioners to easily incorporate game theory into their applications while lowering the entry barrier for students. This makes individual work more reproducible, shareable, and reusable.
DOI
Authors
Detlef Arend, Mochammad Rizky Diprasetya, Steve Yuwono, Andreas Schwung
Journal
Software Impacts, Vol. 14, December 2022, Article 100421
Abstract
In recent years, many powerful software packages have been released on various aspects of machine learning (ML). However, there is still a lack of holistic development environments for the standardized creation of ML applications. The current practice is that researchers, developers, engineers and students have to piece together functionalities from several packages in their own applications. This prompted us to develop the integrative middleware framework MLPro that embeds flexible and recombinable ML models into standardized processes for training and real operations. In addition, it integrates numerous common open source frameworks and thus standardizes their use. A meticulously designed architecture combined with a powerful foundation of overarching basic functionalities ensures maximum recombinability and extensibility. In the first version of MLPro, we provide sub-frameworks for reinforcement learning (RL) and game theory (GT).
DOI
Authors
Detlef Arend, Steve Yuwono, Mochammad Rizky Diprasetya, Andreas Schwung
Journal
Machine Learning with Applications, Vol. 9, September 2022, Article 100341
Abstract
Nowadays there are numerous powerful software packages available for most areas of machine learning (ML). These can be roughly divided into frameworks that solve detailed aspects of ML and those that pursue holistic approaches for one or two learning paradigms. For the implementation of own ML applications, several packages often have to be involved and integrated through individual coding. The latter aspect in particular makes it difficult for newcomers to get started. It also makes a comparison with other works difficult, if not impossible. Especially in the area of reinforcement learning (RL), there is a lack of frameworks that fully implement the current concepts up to multi-agents (MARL) and model-based agents (MBRL). For the related field of game theory (GT), there are hardly any packages available that aim to solve real-world applications. Here we would like to make a contribution and propose the new framework MLPro, which is designed for the holistic realization of hybrid ML applications across all learning paradigms. This is made possible by an additional base layer in which the fundamentals of ML (interaction, adaptation, training, hyperparameter optimization) are defined on an abstract level. In contrast, concrete learning paradigms are implemented in higher sub-frameworks that build on the conventions of this additional base layer. This ensures a high degree of standardization and functional recombinability. Proven concepts and algorithms of existing frameworks can still be used. The first version of MLPro includes sub-frameworks for RL and cooperative GT.
DOI
Supplementary materials
Further publications
Authors
Steve Yuwono, Marlon Löppenberg, Detlef Arend, Mochammad Rizky Diprasetya, Andreas Schwung
Journal
Software Impacts, Vol. 16, May 2023, Article 100509
Abstract
Although the integration of machine learning into production systems has demonstrated significant potential, its real-life applications remain challenging. It is often necessary to rely on digital representations of the actual systems to design and test machine learning algorithms for controlling the systems because conducting such processes directly in real systems is expensive and risky. Hence, in this paper, we present a standardized, multi-purpose and flexible production systems simulator in Python for scientific and industrial activities, namely MLPro-MPPS, that is integrated with the MLPro package. Consequently, this allows the simulations by MLPro-MPPS to be compatible with machine learning tasks.
DOI
Supplementary materials
Authors
Steve Yuwono, Marlon Löppenberg, Detlef Arend, Mochammad Rizky Diprasetya, Andreas Schwung
Conference
2023 IEEE 2nd Industrial Electronics Society Annual On-Line Conference (ONCON)
Abstract
In the current era of automation, there has been a growing interest in integrating artificial intelligence into manufacturing systems. A digital representation of the systems is often important to develop, test, and implement learning algorithms. However, developing a digital representation that seamlessly integrates with artificial intelligence remains challenging. The majority of existing simulators lack the capability of such integration or are not available as open-source platforms. To overcome these challenges, we propose MLPro-MPPS, a versatile and configurable Python-based production systems simulations framework integrated with the established machine learning framework, namely MLPro. MLPro offers advanced templates for machine learning in both scientific and industrial contexts, where both MLPro and MLPro-MPPS are open-source frameworks. In this paper, we present the design and fundamentals of MLPro-MPPS, which accurately models the behaviour of real hardware. We demonstrate the potential of MLPro-MPPS in facilitating standardized machine learning within production system simulations and conduct two sample applications of production systems within multiple machine learning methods. The results show that MLPro-MPPS offers a clean and reproducible simulation framework.
DOI