Research

Currently, my research focuses on synthetic data and virtual sensors for preparing ML applications for the real world in the lab. For modeling and data generation I use Blender together with my own plugins, which I combine with ML algorithms such as GANs and autoencoders. I mainly code in Python and C++. For ML applications I use TensorFlow, scikit-learn, PyTorch and YOLO. I regularly work with the game engines UE4, Unity and Godot and use Blender to design and simulate virtual worlds.
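As a minimal illustration of the virtual-sensor idea, the sketch below simulates a noisy distance sensor and generates labeled training pairs in the lab before any real hardware is involved. The function names, noise model and all parameters are illustrative assumptions, not part of the actual pipeline:

```python
import random

def virtual_distance_sensor(true_distance, noise_std=0.05, dropout_prob=0.02, rng=random):
    """Simulate one reading of a noisy distance sensor (illustrative model).

    Returns None to model sensor dropout, otherwise the ground-truth
    distance perturbed by Gaussian noise.
    """
    if rng.random() < dropout_prob:
        return None  # simulated measurement dropout
    return true_distance + rng.gauss(0.0, noise_std)

def generate_dataset(n=1000, seed=42):
    """Generate (reading, ground-truth) pairs for supervised training."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        d = rng.uniform(0.2, 10.0)  # ground-truth distance in metres
        r = virtual_distance_sensor(d, rng=rng)
        if r is not None:
            data.append((r, d))
    return data
```

Such synthetic pairs can then be fed to any of the ML frameworks mentioned above; the same pattern scales to richer virtual sensors rendered in Blender or a game engine.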

AI for mobile robots in virtual worlds

To train virtual agents (especially mobile robots) in artificial worlds, I develop concepts for placing them into suitable environments and for transferring the AI trained there to the real world. For this purpose, several game engines (e.g. Unity) are used. Within my research I develop frameworks for these engines that provide an inference engine for AI. Furthermore, agents are trained by reinforcement and supervised learning. I create the virtual environments with Blender and incorporate models from my colleagues, who provide very realistic imagery via photogrammetry.
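Independent of the engine, the reinforcement-learning loop behind such agents can be sketched with tabular Q-learning on a toy environment. The `GridWorld` class is an illustrative stand-in for a game-engine environment; all names and hyperparameters here are assumptions for the sketch, not the project's actual frameworks:

```python
import random

class GridWorld:
    """Toy 1-D corridor: agent starts at cell 0, goal at cell size-1
    (illustrative stand-in for a game-engine environment)."""
    def __init__(self, size=5):
        self.size = size
        self.pos = 0
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        # action 0 = move left, 1 = move right
        self.pos = max(0, min(self.size - 1, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.size - 1
        reward = 1.0 if done else -0.01  # small step cost, goal bonus
        return self.pos, reward, done

def train_q(env, episodes=300, alpha=0.5, gamma=0.9, eps=0.1, max_steps=100, seed=0):
    """Tabular Q-learning: update state-action values via temporal differences."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(env.size)]  # Q-table: one row per state
    for _ in range(episodes):
        s = env.reset()
        for _ in range(max_steps):
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2, r, done = env.step(a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if done:
                break
    return q
```

In practice the toy environment is replaced by the engine-side simulation, and the tabular values by a neural network, but the interaction loop (observe, act, update) stays the same.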


Machine learning in computer vision

In my habilitation thesis I apply machine learning methods to complex geophysical data sets. Scientific and technical goals include the development of AI-based prediction methods for the detection of disturbances and boundary layers. These predictions will form the basis for significantly more efficient simulations, e.g. for solving inverse problems in electromagnetic geophysics. Further work addresses automated data processing workflows to establish model-driven machine learning pipelines using heterogeneous geological data repositories.
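A processing pipeline of the kind mentioned can be sketched as a chain of named steps over heterogeneous records. This is a minimal stdlib sketch; the `Pipeline` class, the step names and the record fields are illustrative assumptions, not the project's actual code:

```python
from typing import Callable, List, Tuple

class Pipeline:
    """Run a sequence of named processing steps over a list of records."""
    def __init__(self, steps: List[Tuple[str, Callable]]):
        self.steps = steps
    def run(self, records, verbose=False):
        for name, step in self.steps:
            records = step(records)
            if verbose:
                print(f"after {name}: {len(records)} records")
        return records

# Hypothetical steps for heterogeneous geophysical records:
def drop_invalid(records):
    """Discard records without a measured value."""
    return [r for r in records if r.get("value") is not None]

def normalise_units(records):
    """Convert millivolt readings to volts (illustrative unit harmonisation)."""
    return [{**r, "value": r["value"] / 1000.0 if r.get("unit") == "mV" else r["value"],
             "unit": "V"} for r in records]

pipeline = Pipeline([("drop_invalid", drop_invalid), ("normalise_units", normalise_units)])
```

Keeping each step small and named makes the workflow reproducible and lets a model-driven layer assemble pipelines per data repository.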

You can find more information about our project AIRGEMM (AI and Robotics for GeoEnvironmental Modeling and Monitoring) here.

Figures: mesh, point cloud, labeled point cloud.

Machine learning in time series prediction

In my doctoral thesis I dealt with the application of neural networks (MLP, RNN/LSTM, hybrids) to time series prediction in air traffic. Besides multidimensional modeling, I extracted knowledge from the trained neural networks using genetic algorithms.
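The common first step in neural time-series prediction is turning a raw series into supervised (window, target) pairs. A generic sketch of that step (not the thesis implementation; names and defaults are illustrative):

```python
def make_windows(series, n_in, n_out=1):
    """Turn a univariate series into (input window, target) pairs
    for supervised time-series prediction.

    n_in:  number of past values fed to the model
    n_out: number of future values to predict
    """
    pairs = []
    for i in range(len(series) - n_in - n_out + 1):
        pairs.append((series[i:i + n_in], series[i + n_in:i + n_in + n_out]))
    return pairs
```

For example, `make_windows([1, 2, 3, 4, 5], 3)` yields the pairs `([1, 2, 3], [4])` and `([2, 3, 4], [5])`; an MLP or LSTM is then trained to map each window to its target.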

You can find my PhD thesis here.

Figures: mesh, point cloud.

VR/AR

To better understand and visualize the measurements of our robotic boat, we are developing an AR application. Measured point clouds are displayed at the correct position and scale when viewed from the bank of the target water body and are registered into the environment. This covers, on the one hand, a live view of a measurement run and, on the other hand, the display of an already measured point set in its entirety.

A Microsoft HoloLens 2 (HL2) serves as AR hardware and is used as a stand-alone solution, i.e. without an external computing unit. This concept allows mobile use at the water body, but limits the number of points that can be displayed (empirically, a maximum of about 300,000 points). The application is implemented in Unity and uses the Pcx point cloud importer (https://github.com/keijiro/Pcx) to display the point clouds.
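Staying under such a display budget typically means subsampling the cloud before it reaches the device. The actual application is a Unity/C# app; the sketch below only illustrates the preprocessing idea in Python, with uniform random subsampling as an assumed (simplest possible) strategy:

```python
import random

def downsample(points, budget=300_000, seed=0):
    """Uniformly subsample a point cloud to at most `budget` points.

    `points` is assumed to be a list of (x, y, z) or (x, y, z, colour)
    tuples; a fixed seed keeps the selection reproducible.
    """
    if len(points) <= budget:
        return list(points)
    return random.Random(seed).sample(points, budget)
```

Uniform subsampling preserves the overall shape of the cloud; density-aware schemes (e.g. voxel-grid filtering) would keep detail more evenly but cost more preprocessing time.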