Research

The Software Engineering for Cognitive Robots and Systems (SECORO) group at the University of Bremen conducts basic and applied research on robotic agents capable of physically interacting with their environment, collaborating with humans and other agents, and performing a variety of tasks autonomously in open-ended environments. This includes, among others, the following research topics:

Semantic Foundations of Robotic System Composition

Our research investigates the foundations of composable modeling and algorithmic reasoning in robotics, focusing on structural and semantic representations of kinematic systems and solver behaviors. Robotic systems often involve a heterogeneous mix of mechanical, computational, and algorithmic components that must interact in complex yet coherent ways. However, current modeling practices frequently conflate concerns, obscure semantics, and limit reuse, making it difficult to build, extend, or verify systems in a principled manner. We address these challenges by studying how composability and compositionality can be enforced at the model level, not only in the implementation. This includes the development of explicit graph-based representations of kinematic structures and solver algorithms that maintain clean abstraction boundaries and allow symbolic reasoning about spatial relations, motion constraints, and computational flows. Our goal is to enable robotic systems whose structure and behavior are modular, interpretable, and mathematically grounded, supporting systematic reuse, verification, and even autonomous reconfiguration. This work contributes to a deeper understanding of how robotic capabilities emerge from structured composition and how that structure can be made explicit and exploitable.
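
To make the idea of a graph-based kinematic representation concrete, the sketch below shows a toy planar chain in Python, where links are vertices, joints are attributed edges, and a forward-kinematics query is a traversal over the graph. All class and function names here are illustrative assumptions for this page, not the group's actual models or tools.

import math
from dataclasses import dataclass, field

@dataclass
class Joint:
    """A revolute joint: an attributed edge between two link vertices."""
    parent: str            # name of the parent link
    child: str             # name of the child link
    length: float          # length of the child link (m)
    angle: float = 0.0     # current joint position (rad)

@dataclass
class KinematicGraph:
    """Links as vertices, joints as edges; queries are graph traversals."""
    root: str
    joints: dict = field(default_factory=dict)  # child link name -> Joint

    def add_joint(self, joint: Joint) -> None:
        self.joints[joint.child] = joint

    def tip_pose(self, link: str) -> tuple:
        """Pose (x, y, theta) of the tip of `link`, composed from the root."""
        chain = []
        while link != self.root:
            joint = self.joints[link]
            chain.append(joint)
            link = joint.parent
        x = y = theta = 0.0
        for joint in reversed(chain):      # compose transforms root -> link
            theta += joint.angle
            x += joint.length * math.cos(theta)
            y += joint.length * math.sin(theta)
        return x, y, theta

# Example: a two-link planar arm with both joints at 45 degrees.
arm = KinematicGraph(root="base")
arm.add_joint(Joint("base", "upper_arm", length=0.5, angle=math.pi / 4))
arm.add_joint(Joint("upper_arm", "forearm", length=0.4, angle=math.pi / 4))
print(arm.tip_pose("forearm"))  # end-effector pose relative to the base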

References: ICRA-23, Frontiers-25

 

Automated Validation and Performance Evaluation of Cognitive Robots

Our research investigates the foundations and tool-supported methodologies for the automated validation and evaluation of robotic systems, with a particular emphasis on system-level correctness, safety, and standard conformance in dynamic and variable environments. As robotic systems become increasingly integrated into everyday and safety-critical applications, ensuring compliance with standards and with end users' acceptance criteria presents significant challenges. These include the formalization of unambiguous specifications (e.g., high-level functional requirements), the variability of real-world interactions, and the complexity of executing repeatable, scalable tests across heterogeneous platforms. We address these challenges by developing domain-specific languages, simulation-based testing infrastructures, and workflows that bridge the gap between high-level specifications, such as behavioral expectations or regulatory standards, and executable, automated test processes. Our work contributes to a more rigorous, transparent, and efficient engineering process for robotic systems, advancing the state of the art in system-level validation and verification.
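
As a rough illustration of bridging a declarative specification to an executable check, the Python sketch below encodes a toy scenario with acceptance criteria and evaluates them against a stubbed simulation trace. The scenario schema and all names are hypothetical placeholders for a real specification language and simulation back end.

from dataclasses import dataclass

@dataclass
class Scenario:
    """A toy declarative test scenario with acceptance criteria."""
    name: str
    goal_region: tuple           # (x_min, x_max, y_min, y_max) the robot must reach
    max_duration_s: float        # acceptance criterion on task time
    min_obstacle_clearance_m: float

def run_in_simulation(scenario: Scenario) -> dict:
    """Stub for a simulation run; a real setup would launch the simulator,
    execute the robot's behavior, and record a trace."""
    return {
        "final_position": (1.2, 0.8),
        "duration_s": 34.0,
        "min_clearance_m": 0.31,
    }

def evaluate(scenario: Scenario, trace: dict) -> dict:
    """Map the declarative acceptance criteria onto concrete checks."""
    x, y = trace["final_position"]
    x_min, x_max, y_min, y_max = scenario.goal_region
    return {
        "reached_goal": x_min <= x <= x_max and y_min <= y <= y_max,
        "in_time": trace["duration_s"] <= scenario.max_duration_s,
        "kept_clearance": trace["min_clearance_m"] >= scenario.min_obstacle_clearance_m,
    }

scenario = Scenario(
    name="kitchen_delivery",
    goal_region=(1.0, 1.5, 0.5, 1.0),
    max_duration_s=60.0,
    min_obstacle_clearance_m=0.25,
)
print(evaluate(scenario, run_in_simulation(scenario)))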

References: CASE-24, IROS-23

 

Introspection and Self-Assessment of Cognitive Robots

Our research focuses on uncertainty-aware perception and monitoring for learning-enabled robotic systems and components, addressing a central challenge in deploying machine learning models in real-world, safety-critical applications: the need for robust, interpretable, and context-aware decision making under uncertainty. Although deep neural networks offer powerful perception capabilities, their outputs are often brittle in the face of out-of-distribution inputs, noisy data, or adversarial conditions, all of which are common in robotics. We investigate how uncertainty estimation can be made robust to such perturbations and how these estimates can be operationalized at runtime to enable more dependable robot behavior. Our work further encompasses iterative, learning-based self-assessment, enabling robots to predict both task success and failure while enhancing their ability to collaborate effectively. A key dimension of our work is the integration of embodiment, i.e., spatial and contextual information specific to the robot's configuration, into runtime monitors, allowing them to assess more accurately when predictions should be trusted. By leveraging synthetic data, sim-to-real transfer techniques, and heavy-tailed modeling approaches, we develop scalable methods for monitoring and reasoning about the reliability of perception components in autonomous systems.
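
The Python sketch below illustrates, in simplified form, how an uncertainty estimate can be operationalized as a runtime monitor and how embodiment context can modulate the decision. The entropy measure, the distance-dependent threshold, and the function names are illustrative assumptions rather than a specific published method.

import math

def predictive_entropy(class_probs: list) -> float:
    """Shannon entropy of a softmax output; higher means more uncertain."""
    return -sum(p * math.log(p) for p in class_probs if p > 0.0)

def monitor_detection(class_probs: list, distance_m: float,
                      base_threshold: float = 0.6) -> bool:
    """Accept a detection only if its uncertainty is low enough.

    Embodiment context enters through `distance_m`: far-away detections
    are held to a stricter (lower) entropy threshold because the robot's
    sensing is less reliable there.
    """
    threshold = base_threshold / (1.0 + 0.5 * distance_m)
    return predictive_entropy(class_probs) <= threshold

# Example: the same softmax output is trusted up close but flagged far away.
probs = [0.9, 0.07, 0.03]
print(monitor_detection(probs, distance_m=0.5))   # True: passes the relaxed near-range threshold
print(monitor_detection(probs, distance_m=4.0))   # False: fails the stricter far-range threshold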

Reference: CASE-24

 

Empirical Robot Software Engineering

Our research examines the real-world practices and challenges of engineering robotic software systems, with a focus on how robots transition from controlled prototypes to reliable field-deployed products. While robotics research often emphasizes novel algorithms or system performance, developers face substantial challenges in deploying these new approaches in real systems and real-world environments. To improve both the state of the art and the state of the practice of such deployments, we study the processes, tools, and human factors that shape the development and validation of robotic systems in practice. This includes exploring how testing is conducted in industrial and startup settings, what constraints engineers face, and how system complexity, uncertainty, and resource limitations affect design decisions. Through empirical methods such as field studies, artifact analysis, and developer interviews, we aim to uncover patterns, challenges, and best practices that are otherwise underreported. Our goal is to build an evidence-based understanding of robotic software engineering that can guide the development of more scalable, maintainable, and trustworthy robotic systems. This line of work is essential for informing both engineering practice and future tool support, particularly as robotics moves increasingly into safety-critical real-world applications.

Reference: IROS-22