Dr.-Ing. Jürgen Kleinöder

Department of Computer Science
Chair of Computer Science 4 (Systems Software)

Room: 0.043
Martensstr. 1
91058 Erlangen
CIO of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and Senior Academic Director at the Department of Computer Science 4 (Distributed Systems and Operating Systems Group)

Summer 2022: Systemprogrammierung 1

  • Micro Replication for Dependable Network-based Services

    (Third Party Funds Single)

    Term: 1. November 2024 - 31. October 2027
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)

    Network-based services such as distributed databases, file systems, or blockchains are essential parts of today's computing infrastructures and must therefore be able to withstand a wide spectrum of fault scenarios, including hardware crashes, software failures, and attacks. Although a variety of state-machine replication protocols exist that provide fault and intrusion tolerance, it is inherently difficult to build dependable systems based on their complex and often incomplete specifications. Unfortunately, this commonly leads to systems that are vulnerable to correlated failures or attacks, for example in cases where, to save development and maintenance costs, all replicas in a system share the same homogeneous implementation.

    In the Mirador project, we seek to eliminate this gap between theory and practice by proposing a novel paradigm for the specification and implementation of dependable systems: micro replication. In contrast to existing systems, micro-replication architectures do not consist of a few monolithic and complex replicas, but instead are organized as dedicated, loosely coupled micro-replica clusters that are each responsible for a different protocol step or mechanism. As a key benefit of providing only a small subset of the overall protocol functionality, micro replicas make it significantly easier to reason about the completeness and correctness of both specifications and implementations. To further reduce complexity, all micro replicas follow a standardized internal workflow, thereby greatly facilitating the task of introducing heterogeneity at the replica, communication, and authentication level.

    Starting from this basic concept, the Mirador project explores micro replication as a means to build dependable replicated systems and examines its flexibility by developing micro-replication architectures for different fault models (i.e., crashes and Byzantine faults). In particular, our research focuses on two problem areas: First, we aim at increasing the resilience of micro-replicated systems by enabling them to recover from replica failures. Among other things, this requires mechanisms for rejuvenating micro replicas from a clean state and integrating replacement replicas at runtime. Second, our goal is to improve the performance and efficiency of micro-replicated systems and the applications running on top of them. Specifically, this includes the design of techniques that reduce overheads by exploiting optimistic approaches that save processor and network resources in the absence of faults. Furthermore, we investigate ways to restructure the service logic, for example by outsourcing preprocessing steps to upstream micro-replica clusters. To evaluate the concepts, protocols, and mechanisms developed in the Mirador project, we build a heterogeneous micro-replicated platform that allows us to conduct experiments for a wide range of settings and with a variety of applications.
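
    As an illustration of the standardized internal workflow mentioned above, the following hedged C sketch models a micro replica as a validate/step/forward pipeline stage; all identifiers are illustrative assumptions, not code from the Mirador platform.

        /* Hypothetical micro-replica skeleton: every replica validates its
         * input, applies exactly one protocol step, and forwards the result
         * to the next micro-replica cluster. Illustrative names only. */
        #include <stddef.h>

        struct msg { const void *payload; size_t len; };

        struct micro_replica {
            int  (*validate)(const struct msg *in);              /* returns 1 if authentic  */
            int  (*step)(const struct msg *in, struct msg *out); /* one protocol step       */
            void (*forward)(const struct msg *out);              /* to downstream cluster   */
        };

        static int run_replica(const struct micro_replica *r, const struct msg *in)
        {
            struct msg out = { 0 };
            if (!r->validate(in))          /* drop unauthenticated or replayed input    */
                return -1;
            if (r->step(in, &out) < 0)     /* e.g., ordering, execution, checkpointing  */
                return -1;
            r->forward(&out);              /* hand over to the next cluster             */
            return 0;
        }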

  • Whole-System Optimality Analysis and Tailoring of Worst-Case–Constrained Applications

    (Third Party Funds Single)

    Term: since 1. November 2022
    Funding source: Deutsche Forschungsgemeinschaft (DFG)
    Energy-constrained real-time systems, such as implantable medical devices, are prevalent in modern life. These systems demand that their software execute tasks both safely and energy-efficiently. Regarding safety, these systems must execute their tasks within execution-time and energy bounds, since resource-budget violations potentially endanger life. To guarantee the system's safe execution with the given time and energy resources, static program-code analysis tools can automatically determine the system's worst-case resource-consumption behavior. However, existing static analyses cannot yet tackle the problem of resource-efficient execution while maintaining safe execution under the given resource constraints. Achieving efficient execution through manual tailoring would involve an unrealistically high effort, especially considering the large number of energy-saving features in modern system-on-chip (SoC) platforms. To eventually yield resource-optimal execution and likewise allow the operating system to safely schedule tasks, a whole-system view on software tasks, their resource constraints, and hardware features is essential, which goes beyond the current state of the art.

    The research proposal Watwa describes an approach for whole-system optimality analysis and automatic tailoring of worst-case-constrained applications. The core idea is the automatic generation of variants of the analyzed application that are equivalent from a functional point of view. The variant generation accounts for the multitude of modern energy-saving features, which, in turn, allows subsequent optimizing analyses to tailor the application by aggressively switching off unneeded, power-consuming components in the SoC. The temporal and energetic behavior of these variants is examined by means of worst-case analysis tools that yield bounds on the respective resource demands, which eventually ensures safe execution at run time. This novel combination of variant generation and worst-case analysis allows Watwa to systematically determine hints for safe, resource-optimal scheduling sequences. To exploit these hints at run time, the project proposes an operating system along with a scheduler that dynamically reacts to changes in the environment and uses the determined scheduling hints for efficiency while enforcing safe operation within resource budgets. In summary, the goal of this project is to answer the following two questions: (1) How can program-code analyses determine resource-optimal task variants by exploiting modern hardware features while accounting for worst-case bounds? (2) How can operating systems exploit analytically determined scheduling hints on possible task-execution sequences and hardware activations to guarantee safe task executions while increasing the system's efficiency?
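
    To make the scheduling idea concrete, the following C sketch picks, among functionally equivalent task variants, the most energy-efficient one whose worst-case execution time still fits the remaining time budget. The data layout and the greedy policy are assumptions for illustration, not the Watwa tool chain.

        /* Selecting a resource-optimal task variant, assuming the static
         * analysis has annotated each functionally equivalent variant with
         * worst-case execution-time (WCET) and worst-case energy bounds. */
        #include <stdint.h>
        #include <stddef.h>

        struct task_variant {
            uint64_t wcet_us;   /* analyzed worst-case execution time */
            uint64_t wcec_uj;   /* analyzed worst-case energy demand  */
        };

        /* Pick the most energy-efficient variant that still meets the deadline. */
        static const struct task_variant *
        pick_variant(const struct task_variant v[], size_t n, uint64_t budget_us)
        {
            const struct task_variant *best = NULL;
            for (size_t i = 0; i < n; i++) {
                if (v[i].wcet_us > budget_us)
                    continue;                     /* would violate the time budget */
                if (!best || v[i].wcec_uj < best->wcec_uj)
                    best = &v[i];
            }
            return best;                          /* NULL: no safe variant exists  */
        }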

  • Design and validation of scalable, Byzantine fault tolerant consensus algorithms for blockchains

    (Third Party Funds Single)

    Term: 1. September 2022 - 1. September 2025
    Funding source: Deutsche Forschungsgemeinschaft (DFG)

    Distributed ledger technologies (DLTs), often referred to as blockchains, enable the realisation of reliable and attack-resilient services without a central infrastructure. However, the widely used proof-of-work mechanisms for DLTs suffer from high operation latencies and enormous energy costs. Byzantine fault-tolerant (BFT) consensus protocols are a potentially energy-efficient alternative to proof-of-work, but current BFT protocols also present challenges that still limit their practical use in production systems. This research project addresses these challenges by (1) improving the scalability of BFT consensus protocols without reducing their resilience, (2) applying modelling approaches that make the expected performance and timing behaviour of these protocols more predictable, even under attacks and taking environmental conditions into consideration, and (3) supporting the design process for valid, automatically testable BFT systems from specification to deployment in a blockchain infrastructure.

    The work on scalability aims at practical solutions that take into account challenges such as recovery from major outages or upgrades, as well as reconfiguration at runtime. We also want to design a resilient communication layer that decouples the choice of a suitable communication topology from the actual BFT consensus protocol and thus reduces its complexity; this is to be supported by the use of trusted hardware components. In addition, we want to investigate combinations of these concepts with suitable cryptographic primitives to further improve scalability.

    Using systematic modelling techniques, we want to be able to analyse the efficiency of scalable, complex BFT protocols (for example, in terms of throughput and operation latency) even before deploying them in a real environment, based on knowledge of system size, computational power of nodes, and basic characteristics of the communication links. We also want to investigate robust countermeasures that help defend against targeted attacks in large-scale blockchain systems.

    The third objective is to support systematic and valid implementation in a practical system. This is structured into a constructive, modular approach, in which a validatable BFT protocol is assembled from smaller, validatable building blocks; automated test procedures based on a heuristic algorithm that makes the complex search space of misbehaviour in BFT systems more manageable; and a tool for automated deployment with accompanying benchmarking and stress testing in large-scale DLTs.
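
    For context, the resilience bound such protocols build on is standard: tolerating f Byzantine replicas requires n >= 3f + 1 replicas in total, with protocol steps waiting for quorums of 2f + 1 matching messages, as the small example below computes.

        /* Standard resilience bound for Byzantine fault-tolerant replication. */
        #include <stdio.h>

        static int min_replicas(int f) { return 3 * f + 1; }  /* total replicas */
        static int quorum_size(int f)  { return 2 * f + 1; }  /* matching msgs  */

        int main(void)
        {
            for (int f = 1; f <= 3; f++)
                printf("f=%d: n>=%d, quorum=%d\n",
                       f, min_replicas(f), quorum_size(f));
            return 0; /* prints f=1: n>=4, quorum=3, and so on */
        }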

  • Blockchain applications in aviation

    (Third Party Funds Single)

    Term: 1. September 2022 - 31. August 2024
    Funding source: Bundesministerium für Wirtschaft und Klimaschutz (BMWK)

    The concept of distributed ledger systems (blockchain) is a fundamentally new base technology that is currently receiving increased public attention and promises great potential for solving problems in a multitude of application areas.

    At the same time, the air-traffic landscape is foreseeably changing, with a massive increase in airspace participants and additional kinds of air traffic such as autonomous miniature systems. Furthermore, there is a need for digitalisation in the administrative and operational areas of air traffic, which fits into a global digitalisation strategy. In air traffic, maintaining the high safety standard, or even raising it, should remain an overriding goal.

    This project investigates the applicability of distributed ledger technology as a base technology for aviation, identifies the potentials it can offer aviation, and puts them into practice.

    To maximise the project's success, the following approach is taken: In a divergent exploration phase, requirements and potentials in aviation are worked out while, in parallel, the blockchain base technology is analysed technically and placed into the aviation context. Subsequently, concepts for applying blockchain technology are developed in two application areas that cover a representative range of uses in aviation and offer particularly high potential. First, a concept is developed that enables a complete, audit-proof, and trustworthy digital documentation of all components of an aircraft over its entire life cycle, shared among all parties, on the basis of distributed ledger technology. This promises benefits in terms of reduced administrative effort, sustainability, and increased trust, and represents an administrative use case. Second, a concept for a decentralised flight-data recorder is developed in order to securely document clearances and state data of aircraft for the analysis of potential failure cases, with a view to new airspace structures and participants. Blockchain technology allows flight-state data to be recorded in a decentralised fashion by multiple airspace participants and ground stations, which yields high availability of the recording. At the same time, manipulation of the data by individual participants is ruled out, and the recordings remain accessible even after a possible destruction of the UAV. This concept is then implemented as a prototype, demonstrating the practical applicability of blockchain technology in the intended application context. Results will be made available to the expert community and to European industry, so that the project directly secures the technological lead of the aviation sector and enables follow-up use in subsequent projects with industrial partners.

    The project is carried out in cooperation between Technische Universität Braunschweig and Friedrich-Alexander-Universität Erlangen-Nürnberg. Through the participation of the Institute of Flight Guidance (IFF, TUBS) and the Chair of Computer Science 16 for Systems Software (I16, FAU), the required competencies in information technology and aviation are brought together across disciplines; the consortium thus has a unique profile and favourable prerequisites for achieving the project goals.

  • Dynamic Operating-System Specialisation

    (Third Party Funds Single)

    Term: since 1. May 2022
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
    URL: https://sys.cs.fau.de/research/doss

    An operating system is located between two fronts: on the one hand ("above") the machine programs of the applications, with their sometimes very different functional and non-functional requirements, and on the other hand ("below") the computer hardware, whose features and equipment should ideally be made available "unfiltered" and "noise-free" to the applications. However, a general-purpose system cannot be as efficient in any of its functions as a system designed for one specific purpose, and less demanding applications should not be forced to pay for the resources consumed by functions they do not need. So it is not uncommon for large systems, once put into operation, to be subject to frequent changes, precisely in order to achieve a better fit to changing application requirements.

    The ideal operating system offers exactly what is required for the respective application, no more and no less, but also depending on the characteristics of the hardware platform. However, such an ideal is only realistic, if at all, for a uni-programming mode of operation. In the case of multi-programming, the various applications would have to have "sufficiently the same" functional and non-functional requirement characteristics in order not to burden any of the applications with the overhead that unneeded functions entail. An operating system with these characteristics falls into the category of special-purpose operating systems: it is tailored to the needs of applications of a certain type.

    This is in contrast to the general-purpose operating system, where the ultimate hope is that an application will not be burdened with excessive overhead from unneeded functions. At the least, one can try to minimise the "background noise" in the operating system where necessary, ideally with a different "discount" depending on the program type. The operating system would then not only have to be dynamically freed from unnecessary ballast and shrink for less demanding applications, but also be able to grow again, with the necessary additional functions, for more demanding applications. Specialising an operating system according to the respective application ultimately means functional reduction and enrichment, for which a suitable system-software design is desirable but often can no longer be retrofitted, especially in legacy systems.

    One trigger for the specialisation of an operating system concerns measures explicitly initiated "from outside". On the one hand, this affects selected system calls and, on the other hand, tasks such as bootstrapping and the loading of machine programs, operating-system kernel modules, or programs that are to be executed in sandbox-like virtual machines within the operating-system kernel. This form of specialisation also enables the dynamic generation of targeted protective measures for particularly vulnerable operating-system operations, such as loading external modules of the operating-system kernel. The other trigger for the specialisation of an operating system concerns measures initiated implicitly "from within". This covers possible reactions of an operating system to changes in its own runtime behavior that only become noticeable during operation, in order to then adapt the strategies of resource management to the respective workload and to seamlessly integrate the corresponding software components into the existing system.

    The project focus is dynamic operating-system specialisation triggered by extrinsic and intrinsic events. The emphasis is on concepts and techniques that (a) are independent of a specific programming paradigm or hardware approach and (b) are based on just-in-time (JIT) compilation of parts of the operating system (kernel), so that these parts can be loaded on demand or replaced in anticipation of the respective conditions on the "operating-system fronts". Existing general-purpose systems such as Linux are the subject of investigation.
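
    The following minimal C sketch shows one way such dynamic specialisation can be wired in: an indirection point whose implementation is swapped once a JIT-compiled variant becomes available. Linux offers comparable patching mechanisms (e.g., static calls); all names here are illustrative assumptions, not project code.

        /* Dynamic specialisation through an atomically updated indirection
         * point; a JIT compiler delivers the specialised variant at run time. */
        #include <stdatomic.h>

        typedef long (*syscall_impl_t)(long arg);

        static long generic_read(long arg) { return arg; } /* full-featured path */

        /* The active implementation; starts out generic. */
        static _Atomic(syscall_impl_t) read_impl = generic_read;

        /* Called by the specialiser once a JIT-compiled variant is ready. */
        void specialise_read(syscall_impl_t jitted)
        {
            atomic_store_explicit(&read_impl, jitted, memory_order_release);
        }

        long sys_read(long arg)
        {
            syscall_impl_t f =
                atomic_load_explicit(&read_impl, memory_order_acquire);
            return f(arg);  /* dispatch to whichever variant is installed */
        }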

  • Non-volatility in energy-aware operating systems

    (Third Party Funds Single)

    Term: since 1. January 2022
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
    URL: https://sys.cs.fau.de/en/research/neon-note

    The current trend toward fast, byte-addressable non-volatile memory (NVM), with latencies and write endurance closer to SRAM and DRAM than to flash, positions NVM as a possible replacement for established volatile technologies. While its non-volatility and low leakage power, in addition to other advantageous features, make NVM an attractive candidate for new system designs, there are also major challenges, especially for the programming of such systems. For example, using NVM to preserve the computation state across power failures results in control flows that can unexpectedly transform a sequential process into a non-sequential one: a program has to deal with its own state from earlier, interrupted runs.

    If programs can be executed directly in NVM, normal volatile main memory becomes functionally superfluous. Volatile memory then remains only in the caches and in device/processor registers ("NVM-pure"). An operating system designed for this can dispense with many, if not all, persistence measures that would otherwise have to be implemented and can thereby reduce its level of background noise. In detail, this makes it possible to reduce energy demand, increase computing power, and lower latencies. In addition, the elimination of these persistence measures means that an "NVM-pure" operating system is leaner than its functionally identical twin of conventional design. On the one hand, this contributes to better analysability of non-functional properties of the operating system; on the other hand, it results in a smaller attack surface and trusted computing base.

    The project follows an "NVM-pure" approach. An imminent power failure leads to an interrupt request (power-failure interrupt, PFI), whereupon a checkpoint of the unavoidably volatile system state is created. In addition, to tolerate possible PFI losses, sensitive operating-system data structures are secured transactionally, analogous to methods of non-blocking synchronisation. Furthermore, methods of static program analysis are applied to (1) cleanse the operating system of superfluous persistence measures, which otherwise only generate background noise, (2) break up uninterruptible instruction sequences with excessive interruption latencies, which could cause the PFI-based checkpointing to fail, and (3) define the work areas of the dynamic energy-demand analysis. To demonstrate that an "NVM-pure" operating system can operate more efficiently than its functionally identical conventional twin, in terms of both time and energy, the work is carried out with Linux as an example.
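
    A hedged sketch of the PFI path described above: on an imminent power failure, the handler saves the remaining volatile state into NVM before the energy reserve runs out. The primitives (nvm_checkpoint, cpu_save_context, flush_dirty_cachelines) are assumed for illustration and are not an existing kernel API.

        /* Power-failure interrupt (PFI) handler: persist the volatile rest. */
        struct cpu_context { unsigned long regs[32]; };

        extern struct cpu_context *nvm_checkpoint;    /* lives in NVM       */
        extern void cpu_save_context(struct cpu_context *c);
        extern void flush_dirty_cachelines(void);     /* e.g., via clflush  */

        void pfi_handler(void)
        {
            cpu_save_context(nvm_checkpoint);  /* registers are volatile state */
            flush_dirty_cachelines();          /* push cached writes into NVM  */
            /* The residual energy must cover these steps; hence the project's
             * interest in bounding uninterruptible sections and PFI latency. */
        }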

  • Power-fail aware byte-addressable virtual non-volatile memory

    (Third Party Funds Group – Sub project)

    Overall project: SPP 2377: Disruptive Memory Technologies
    Term: 5. April 2021 - 14. May 2026
    Funding source: DFG / Schwerpunktprogramm (SPP)
    URL: https://sys.cs.fau.de/en/research/pave-note

    Virtual memory (VM) subsystems blur the distinction between storage and memory such that both volatile and non-volatile data can be accessed transparently via CPU instructions. Every VM subsystem tries hard to keep highly contended data in fast volatile main memory to mitigate the high access latency of secondary storage, irrespective of whether the data is considered volatile or not. The recent advent of byte-addressable NVRAM does not change this scheme in principle, because the current technology can replace neither DRAM as fast main memory, due to its significantly higher access latencies, nor secondary storage, due to its significantly higher price and lower capacity. Thus, ideally, VM subsystems should be NVRAM-aware and be extended in such a way that all available byte-addressable memory technologies can be employed to their respective advantages. By means of an abstraction anchored in the VM management of the operating system, legacy software should then be able to benefit, unchanged and efficiently, from byte-addressable non-volatile main memory.

    Because most VM subsystems are complex, highly tuned software systems that have evolved over decades of development, we follow a minimally invasive approach that integrates NVRAM-awareness into an existing VM subsystem instead of developing an entirely new system from scratch. NVRAM will serve as an immediate DRAM substitute in case of main-memory shortage and inherently support processes with large working sets. However, due to the high access latencies of NVRAM, non-volatile data also needs to be kept at least temporarily in fast volatile main memory and the volatile CPU caches anyway. Our new VM subsystem (we want to adapt FreeBSD accordingly) then takes care of migrating pages between DRAM and NVRAM if the available resources allow. The DRAM is thus effectively managed as a large software-controlled volatile page cache for NVRAM.

    Consequently, this raises the question of data losses caused by power outages. The VM subsystem therefore needs to keep its own metadata in a consistent and recoverable state, and modified pages in volatile memory need to be copied to NVRAM to avoid losses. The former requires an extremely efficient transactional mechanism for modifying complex, highly contended VM metadata, while the latter must cope with potentially large amounts of modified pages under limited energy reserves.
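
    As a toy illustration of the page-cache view sketched above, the classifier below promotes hot NVRAM pages into DRAM and demotes cold DRAM pages; the counters and thresholds are assumptions for the sketch, not FreeBSD code.

        /* Illustrative migration policy: DRAM as a software-controlled
         * page cache in front of slower NVRAM. */
        #include <stdbool.h>
        #include <stdint.h>

        struct page_stats {
            uint32_t accesses;   /* recent access count, e.g., from A-bit scans */
            bool     in_dram;
        };

        enum migration { KEEP, PROMOTE_TO_DRAM, DEMOTE_TO_NVRAM };

        static enum migration classify(const struct page_stats *p,
                                       uint32_t hot, uint32_t cold,
                                       bool dram_free)
        {
            if (!p->in_dram && p->accesses >= hot && dram_free)
                return PROMOTE_TO_DRAM;   /* contended page: hide NVRAM latency   */
            if (p->in_dram && p->accesses <= cold)
                return DEMOTE_TO_NVRAM;   /* reclaim scarce DRAM for hotter pages */
            return KEEP;
        }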

  • Resilient Power-Constrained Embedded Communication Terminals

    (Third Party Funds Group – Sub project)

    Overall project: SPP 2378 Resilient Worlds
    Term: since 26. March 2021
    Funding source: Deutsche Forschungsgemeinschaft (DFG)

    Within the wide subject of resilience in networked worlds, ResPECT focuses on a core element of all networked systems: sensor and actuator nodes in cyber-physical systems. To date, communication is understood and implemented as an auxiliary functionality of embedded systems. The system itself is disruption-tolerant and able to handle power failures or, to a limited extent, even hardware problems, but the communication is not part of the overall design; in the best case, it can make use of the underlying system resilience. ResPECT develops a holistic operating-system and communication protocol stack, assuming that conveying information (receiving control data for actuators or sending sensor data) is a core task of all networked components. Consequently, it must become part of the operating system's management functionality.

    ResPECT builds on two pillars: non-volatile memory and transactional operation. Non-volatile memory has in recent years evolved into a serious element of the storage hierarchy; even embedded platforms with exclusively non-volatile memory become conceivable. Network communication, unlike social communication, is transactional by design: data is collected and transmitted between the communication partners under channel constraints such as latency, error resilience, and energy consumption, and under content constraints such as the age, and therewith the value, of information. Unlike an operating system, however, this communication faces many external disruptions and impacts. In addition, the duration of a disruption can have severe implications for the validity of already completed transactions, such as the persistence of the physical connection; on resumption, all of this has to be considered.

    ResPECT will consequently, through interdisciplinary research by operating-system and communication experts, develop a model based on transactions and will apply non-volatile memory to ensure that the states along the flow of transactions are known at any point in time and can and will be stored persistently. This monitoring and storing functionality must be very efficient (with respect to energy consumption as well as to the amount of data to be stored in non-volatile memory) and hence be implemented as a core functionality of the operating system. To ensure generalizability and to have the model available for a variety of future platforms, ResPECT will focus on IP networks and use communication networks that are typically operated as WAN, LAN, or PAN (wide, local, or personal area networks).

  • Migration-Aware Multi-Core Real-Time Executive

    (Own Funds)

    Term: since 11. August 2020

    This research proposal investigates the predictability of task migration in multi-core real-time systems. We propose a migration-aware real-time executive in which migration decisions are no longer based on generic performance parameters but systematically deduced from application-specific knowledge of the real-time tasks. These so-called migration hints relate to temporal and spatial aspects of real-time tasks; they mark potential migration points in their non-sequential (multi-threaded) machine programs. Migration hints enable the operating system to reach decisions that have the most favorable impact on overall predictability and system performance. The proposal assumes that application-specific hints on admissible and particularly favorable program points for migration represent a cost-effective way to leverage multi-core platforms with existing real-time systems and scheduling techniques. The object of investigation is multi-core platforms with heterogeneous memory architectures. The focus is on the worst-case overhead caused by migration in such systems, which mainly depends on the current size and location of the real-time tasks' resident core-local data. This data set, which varies in size over execution time, is determined using tool-based static analysis techniques that derive usable migration hints at design time. In addition, the proposal develops migration-aware variants of standard real-time operating systems, which provide specialized interfaces and mechanisms to utilize these migration hints as favorable migration points at runtime, to provide predictable migrations and optimize the overall schedulability and performance of the system.
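
    A sketch of the migration-hint idea in C: the static analysis marks program points at which the resident core-local data set is small, and the task reports them to the executive, which may then migrate at minimal cost. All identifiers are hypothetical.

        /* Analysis-derived migration points reported to the executive. */
        #include <stddef.h>

        extern void rtos_migration_point(size_t resident_bytes); /* assumed OS call */

        void control_task(void)
        {
            /* ... phase 1: large local working set, migration would be costly ... */

            rtos_migration_point(64);  /* hint: only ~64 bytes of core-local
                                          state are live here, so migrating at
                                          this point has a small, bounded cost */

            /* ... phase 2 continues, possibly on another core ... */
        }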

  • Suresoft: Sustainable Research Software Development and Deployment

    (Third Party Funds Single)

    Term: 23. March 2020 - 23. March 2023
    Funding source: Deutsche Forschungsgemeinschaft (DFG)
    URL: https://www.tu-braunschweig.de/suresoft

    Research software is of fundamental importance for many disciplines to achieve scientific progress. The software is commonly developed by scientists with a short-term perspective to obtain specific results. Since scientists are typically self-taught developers, the software often ends up unstable. As a result, widespread and long-term usage is inhibited, which in turn hampers the quality and pace of scientific research.

    The SURESOFT project aims to establish a commonly usable methodology and infrastructure based on the concepts of continuous integration for all research software projects. Continuous integration is the enabler for improving the quality of research software, easing software delivery, and ensuring long-term sustainability and availability. Furthermore, SURESOFT will use its technical basis of continuous integration to enable long-term archival and the reproducibility of results.

  • Energy-, Latency- and Resilience-aware Networking

    (Third Party Funds Group – Sub project)

    Overall project: SPP 1914 Cyber-Physical Networking (CPN)
    Term: since 1. January 2020
    Funding source: DFG / Schwerpunktprogramm (SPP)
    URL: https://www.nt.uni-saarland.de/project/latency-and-resilience-aware-networking-larn/
  • Invasive Run-Time Support System (SFB/TRR 89, Project C1: Phase 3)

    (Third Party Funds Group – Sub project)

    Overall project: Invasive Computing
    Term: 1. July 2018 - 30. June 2022
    Funding source: DFG / Sonderforschungsbereich / Transregio (SFB / TRR)
    URL: https://invasic.informatik.uni-erlangen.de/en/tp_c1_PhIII.php

    As part of the SFB/TRR 89 "Invasive Computing", subproject C1 investigates operating-system support for invasive applications. We provide methods, principles, and abstractions for the application-aware extension, configuration, and adaptation of the invasive platform by a novel and flexible operating-system infrastructure (OctoPOS) that will be integrated into standard Unix-like operating systems.
    The general focus is on the enforcement of required quality criteria of mixed criticality with respect to timing and energy consumption, by means of application-oriented resource-allocation strategies as well as mechanisms at the iRTSS level. This includes a worst-case execution time (WCET) analysis for the Agent System to identify the performance corridors of resource allocation for given use cases. Emphasis is also on the control of background noise (i.e., indirect overhead) of OctoPOS functions that provide transparent access (virtual shared memory, VSM) to the different main-memory subsystems of the PGAS (partitioned global address space) model as defined for invasive computing.

  • Energy-aware Execution Environments

    (Own Funds)

    Term: 1. January 2018 - 31. December 2026

    The processing of large amounts of data on distributed execution platforms such as MapReduce or Heron contributes significantly to the energy consumption of today's data centers. The E³ project aims at minimizing the power consumption of such execution environments without sacrificing performance. For this purpose, the project develops means to make execution environments and data-processing platforms energy aware and to enable them to exploit knowledge about applications to dynamically adapt the power consumption of the underlying hardware. To measure and control the energy consumption of hardware units, E³'s energy-aware platforms rely on hardware features provided by modern processors that allow the system software of a server to regulate the server's power usage at runtime by enforcing configurable upper limits. As a key benefit, this approach makes it possible to build data-processing and execution platforms that, on the one hand, save energy during phases in which only low and medium workloads need to be handled and, on the other hand, are still able to offer full processing power during periods of high workloads.
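
    As a concrete example of the hardware features referred to above, modern Intel processors expose RAPL power limits, which Linux makes writable through the powercap sysfs interface. The sketch below caps the package power at a configurable limit; the exact path and domain numbering vary per machine, and root privileges are required.

        /* Enforce a configurable upper power limit via Linux powercap/RAPL. */
        #include <stdio.h>

        int set_package_power_limit(long microwatts)
        {
            const char *path =
                "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw";
            FILE *f = fopen(path, "w");
            if (!f)
                return -1;                    /* no RAPL domain or no permission */
            int ok = fprintf(f, "%ld\n", microwatts) > 0;
            fclose(f);
            return ok ? 0 : -1;
        }

        /* e.g., set_package_power_limit(50 * 1000 * 1000) caps the package at
         * 50 W during low-load phases; the limit can be raised under load.   */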

  • PRIMaTE: PRIvacy preserving Multi-compartment Trusted Execution

    (Third Party Funds Single)

    Term: 16. October 2017 - 31. August 2023
    Funding source: Deutsche Forschungsgemeinschaft (DFG)

    Nowadays, a wide variety of online services (e.g., web search engines, location-based services, recommender systems) are used by billions of users on a daily basis. Key to the success of these services is the personalisation of their results, that is, returning to each user the results that are closest to their interests. For instance, given a web search query sent by two different users, search engines generally rank the results differently to best fit each user's preferences. However, depending on the underlying application, user profiles may contain sensitive information about end users. In this context, it becomes urgent to devise mechanisms that allow users to securely access online services without fearing that their data will be leaked from the cloud platforms where it is stored and processed.

    The proposed PRIMaTE project addresses privacy preservation in online services. We propose a system that reduces and precisely specifies trust assumptions while still providing improved performance compared to the state of the art. Our key contribution will be to systematically decompose these services into strongly hardware-secured compartments, each of which has access only to the data essential for performing its assigned task. In case of security breaches, for example due to attackers exploiting a weakness in the code of one or even multiple compartments, the impact of the leaked data is kept within bounds and its effect can be precisely quantified. Thus, an attacker might only learn certain aspects of a profile but cannot link it to a user. PRIMaTE achieves this goal by utilizing the novel trusted-execution support offered by recent commodity processors, such as Intel's Skylake generation introduced in 2016. Trusted execution as offered by Intel Software Guard Extensions (SGX) is a disruptive technology that will shape how code and data are protected in the future.

    PRIMaTE will utilize trusted execution to devise novel privacy-preserving online services. While current research on trusted execution has focused either on deploying whole legacy applications, such as databases, in a single Trusted Execution Environment (TEE) or on ad-hoc solutions that split existing applications into two parts, a trusted and an untrusted one, PRIMaTE aims for a more systematic and fine-grained approach. It aims to develop a methodology for splitting privacy-preserving online services into multiple interacting compartments, each implemented by a TEE. Thereby, each TEE should handle as little data as possible and have a tailored and therefore minimal trusted computing base. While the latter makes it hard to exploit a PRIMaTE TEE, the former limits the exposed information if an attacker manages to break into a TEE.

  • Aspect-Oriented Real-Time Architectures (Phase 2)

    (Third Party Funds Single)

    Term: 1. August 2017 - 30. September 2020
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
    URL: https://www4.cs.fau.de/Research/AORTA/
    The goal of the AORTA project is to enhance the predictability of dynamic mixed-criticality real-time systems by extracting critical paths. These paths are transformed into their static equivalents and executed in a time-triggered fashion at run time. Since time-triggered execution tends to underuse resources in comparison to event-triggered processing, the optimistic execution model of mixed-criticality real-time systems is retained: only in an emergency is the real-time system executed according to the static schedule. At the same time, the results of the first funding phase will be generalized to dynamic real-time architectures. In particular, the focus will be on mixed-criticality systems with complex dependency patterns. The research project will investigate several variants of real-time Linux, as well as applications from the domain of control engineering.

    The main focus of the second funding phase is on dependencies between critical and non-critical paths of execution. These potentially problematic dependencies can be found on all levels of the system: for example, application software may combine non-critical comfort functions with critical control functionality, leading to coupled components, and in the operating system, buffers may be used for shared communication stacks. Often such coupling may be desirable; however, in dynamic systems the host of possible execution paths at run time may lead to dramatically overprovisioned system designs w.r.t. WCET and WCRT. Guaranteed execution times therefore often forfeit the efficiency gained from the dynamic real-time system design. Three key activities of this project will provide hard guarantees at run time for the critical application core: analysis, tailoring, and mechanisms.

    The basis for this project are existing techniques for designing mixed-criticality systems under hard real-time constraints. For AORTA, it is assumed that critical paths generally have a deterministic structure, so their coupling with non-critical paths can be mapped to static equivalents. In the course of the project, the applicability of the simple communication patterns provided by different variants of real-time Linux will be scrutinized to determine whether they can guarantee the hard deadlines of safety-critical control applications, and whether the concepts and techniques for static analysis, tailoring, and scheduling developed in the first funding phase are suitable for this purpose. In addition, the necessity of coupling the real-time architecture, scheduling, and dependencies will be investigated in the context of mixed-criticality real-time systems, to determine the general fitness of real-time Linux's design concepts for switching real-time paradigms at run time.

  • Latency and Resilience-aware Networking

    (Third Party Funds Group – Sub project)

    Overall project: Cyber-Physical Networking (CPN)
    Term: 1. January 2016 - 31. December 2019
    Funding source: DFG / Schwerpunktprogramm (SPP)
    The project develops transport channels for cyber-physical networks. Such channels need to be latency- and resilience-aware; i.e., the latency as seen by the application must be predictable and, within certain limits, guaranteed, e.g., by balancing latency and resilience. This is only possible with an innovative transport protocol stack and an appropriate foundation of operating-system and low-level networking support. To this end, the proposal unites the disciplines of operating systems / real-time processing and telecommunications / information theory.

    The project target is the evolution of the PRRT (predictably reliable real-time transport) protocol stack into a highly efficient multi-hop protocol with loss-domain separation. This is enabled by interdisciplinary co-development with a latency-aware operating-system kernel, including wait-free synchronisation, and the corresponding low-level networking components (POSE, "predictable operating system executive"). The statistical properties of the entire system (RNA, "reliable networking atom") shall be optimised and documented.

    A software-defined networking testbed for validating the system in a real-world wide-area network scenario is available. The developed components will be introduced during the workshops organised by the priority programme Cyber-Physical Networking and will be made available to other projects during the entire run time of the priority programme.

  • Quality-aware Co-Design of Responsive Real-Time Control Systems

    (Own Funds)

    Term: 1. September 2015 - 30. September 2021
    URL: https://www4.cs.fau.de/Research/qronOS/

    A key design goal of safety-critical control systems is verifiable compliance with a specific quality objective in the sense of quality of control. Corresponding to these requirements, the underlying real-time operating system has to provide resources and a certain quality of service. However, the relationship between real-time performance and quality of control is nontrivial: first of all, the execution load varies considerably with the environmental situation and disturbance. Vice versa, the actual execution conditions also have a qualitative influence on the control performance. Typically, substantial overestimations, in particular of the worst-case execution times, have to be made to ensure compliance with the aspired quality of control. This ultimately leads to a significant over-dimensioning of resources, with the degree increasing disproportionately with the complexity and dynamics of the control system under consideration. Consequently, it is to be expected that the pessimistic design patterns and analysis techniques commonly used to date will no longer be viable in the future. Examples are complex, adaptive, and mixed-criticality assistance and autopilot functions in vehicles, where universal guarantees for all driving and environmental conditions are neither useful nor realistic.

    The issues outlined above can only be solved by an interdisciplinary approach to real-time control systems. This research project emanates from existing knowledge about the design of real-time control systems with soft, firm, and hard timing guarantees. The basic assumption is that the control application's performance requirement varies significantly between typical and maximum disturbance and leads to situation-dependent reserves, correspondingly. Consequently, the commonly used pessimistic design and analysis of real-time systems, which disregards quality-of-control dynamics, is scrutinized. The research objective is the avoidance of pessimism in the design of hard real-time systems for control applications with strict guarantees, and thus the resolution of the trade-off between quality-of-control guarantees and good average performance. The proposal pursues a co-design of control application and real-time executive and consists of three key aspects: model-based quality-of-control assessment, adaptive and predictive scheduling of control activities, and a hybrid execution model to regain guarantees.

  • Software Infrastructure for Resource-Constrained Networked Systems (Phase 2)

    (Third Party Funds Group – Sub project)

    Overall project: FOR 1508: Dynamically Adaptable Applications for Bat Localisation by Means of Embedded Communicating Sensor Systems
    Term: 1. August 2015 - 31. July 2018
    Funding source: DFG / Forschungsgruppe (FOR)

    In the context of the overall vision of the research unit BATS, the goal of the subproject ARTE (adaptive run-time environment, TP 2) is to develop flexible system-software support that makes it possible to establish distributed data-stream queries (TP 3) for the behavioural observation of bats (TP 1) on a heterogeneous sensor network (TP 4) consisting of stationary (TP 5) and mobile (TP 7) sensor nodes. Particular challenges are the scarce resources, especially memory and energy, as well as the intermittent connectivity of the mobile nodes, which weigh only 2 g. In view of these manifold and partly conflicting requirements, ARTE is to be realised as a highly configurable software product line. The aim is to support both the differing functional requirements of mobile and stationary nodes and important non-functional properties such as low memory consumption and energy efficiency. Accordingly, already during the development of ARTE, the configuration space is to be explored with tool support and specifically with regard to non-functional properties, so that later, in deployment, an optimised selection of implementation artifacts can be offered in accordance with the project requirements. Dynamic adaptability of application as well as system functions must be considered explicitly. At the functional level, ARTE will provide system services in the form of a middleware that supports adaptation and extension at run time and is tailored to data-stream processing, enabling resource-efficient and flexible execution of data-stream queries.

  • Power-Aware Critical Sections

    (Third Party Funds Single)

    Term: 1. January 2015 - 30. September 2022
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
    Race conditions between concurrent processes within a computing system may cause partly inexplicable phenomena or even defective run-time behaviour. The reason is critical sections in non-sequential programs. Solutions for protecting critical sections generally face a multi-dimensional problem space: (1) processor-local interrupts, (2) shared-memory multi-/many-core multiprocessors with (2a) coherent or (2b) incoherent caches, (3) distributed-memory systems with a global address space, and (4) interference with the process management of the operating system. The protection method thereby makes pessimistic or optimistic assumptions about the occurrence of access contention. The number of contending processes depends on the use case and has a large impact on the effectiveness of their coordination at all levels of a computing system.

    Overhead, scalability, and dedication of the protective function constitute decisive performance-affecting factors. This influencing quantity accounts not only for varying process run-times but also for different energy use. The former results in noise or jitter in the program flow: non-functional properties that are especially problematic for highly parallel or real-time-dependent processes. The latter has economic importance as well as ecological consequences on the one hand and touches the scalability limit of many-core processors (dark silicon) on the other.

    Subject to the structural complexity of a critical section and its sensitivity to contention, a trade-off becomes apparent that the project tackles by means of analytical and constructive measures. Objects of investigation are the group's own special-purpose operating systems, designed primarily to support parallel and partly also real-time-dependent data processing, and Linux. The goal is to provide (a) a software infrastructure for load-dependent and, by the program sections, self-organised change of the protection against critical race conditions of concurrent processes, as well as (b) tools for the preparation, characterisation, and capturing of those sections. Hotspots caused by increased process activity, manifesting in energy use and temperature rise, shall be avoided or attenuated on demand or in anticipation by a section-specific dispatch policy. The overhead induced by the particular dispatch policy enters the weighting for dynamic reconfiguration of a critical section, so that a change is only undertaken if a real practical gain over the original solution can be expected. Before-after comparisons based on the investigated operating systems shall demonstrate the effectiveness of the approach developed.
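
    As a sketch of such a load-dependent change of protection, the C example below spins optimistically under low contention and falls back to a pessimistic blocking path once waiters accumulate. SPIN_LIMIT and block_on() are assumptions for illustration, not project code.

        /* Contention-adaptive guard for a critical section. */
        #include <stdatomic.h>

        #define SPIN_LIMIT 100                 /* reconfigurable per section */

        static atomic_flag lock    = ATOMIC_FLAG_INIT;
        static atomic_int  waiters = 0;

        extern void block_on(atomic_flag *l); /* sleeps until it acquires l,
                                                 e.g., futex-based; assumed */

        void section_enter(void)
        {
            for (int spins = 0; spins < SPIN_LIMIT; spins++)
                if (!atomic_flag_test_and_set_explicit(&lock,
                                                       memory_order_acquire))
                    return;                   /* optimistic path succeeded */
            atomic_fetch_add(&waiters, 1);    /* contention: the count can
                                                 feed the dispatch policy  */
            block_on(&lock);                  /* pessimistic, energy-friendly */
            atomic_fetch_sub(&waiters, 1);
        }

        void section_exit(void)
        {
            atomic_flag_clear_explicit(&lock, memory_order_release);
        }
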
  • Configurability Aware Development of Operating Systems

    (Third Party Funds Single)

    Term: since 1. May 2014
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)

    Today's operating systems (as well as other system software) offer a great deal of static configurability to tailor them with respect to a specific application or hardware platform. Linux 4.2, for instance, provides (via its Kconfig models and tools) more than fifteen thousand configurable features for this purpose. Technically, the implementation of all these features is spread over multiple levels of the software generation process, including the configuration system, build system, C preprocessor, compiler, linker, and more. This enormous variability has become unmanageable in practice; in the case of Linux, it has already led to thousands of variability defects over the system's lifetime. With this term, we denote bugs and other quality issues related to the implementation of variable features. Variability defects manifest as configuration-consistency and configuration-coverage issues.

    In the CADOS project, we investigate scalable methods and tools to grasp the variability on every layer within the configuration and implementation space, visualize and analyze it and, if possible, adjust it while maintaining a holistic view on variability.
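
    A synthetic example of the kind of configuration-consistency defect meant above: a code block guarded by a feature macro that no longer exists in the Kconfig model is dead under every one of the thousands of possible configurations. (CONFIG_FOO_LEGACY is made up for illustration.)

        /* CONFIG_FOO_LEGACY was removed from Kconfig, but the #ifdef
         * survived in the code: the block below is unreachable in all
         * configurations, a classic "dead" variability defect. */
        #ifdef CONFIG_FOO_LEGACY
        void foo_legacy_init(void)
        {
            /* never compiled in, under any feature combination */
        }
        #endif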

  • System Software Infrastructure of Heterogeneous Image Systems (RTG 1773, HBS, Subproject B.2)

    (Third Party Funds Group – Sub project)

    Overall project: GRK 1773: Heterogeneous Image Systems
    Term: 1. October 2012 - 31. December 2018
    Funding source: Deutsche Forschungsgemeinschaft (DFG)
    URL: http://hbs.fau.de/research/area-b-methods-and-tools/project-b-1/

    The subproject investigates architectural concerns of the system software for image systems and develops an infrastructure for building application-aware system solutions in this domain. The focus is on:

    1. Tailoring and optimising the interlocking of application and system software with regard to the application-specific interactions between host (multi-core PC) and special-purpose hardware (GPU/DSP).
    2. Mastering the variability in the system software that results from this approach.

    As a bridging project, B2 addresses the reduction of the interaction costs between applications (project area C) and hardware (project area A) with regard to non-functional properties. This is demanded, for example, by subproject C1 with respect to latency and energy consumption; C4 profits from the application-aware optimisation of the system software for interleaved CPU/GPU algorithms. The domain-specific programming methodology developed in B3 uses the system abstractions developed in B2 to efficiently hide the heterogeneity.

    The focus of the subproject is the development of a problem-oriented run-time executive for heterogeneous many-core processors, centred on resource-aware multi-processing (RAMP) of applications of heterogeneous image systems. The resulting minimal kernel is initially designed GPU- and RAM-centric.

  • Software-controlled consistency and coherence for many-core processor architectures

    (Third Party Funds Single)

    Term: 1. September 2012 - 31. March 2021
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)

    The achievable computing capacity of individual processors is currently hitting its technological limit. Further improvement in performance is attainable only by the use of many computation cores. While processor architectures with up to 16 cores mark the current state of the art in commercially available products, systems with 100 computing cores are already obtainable in places, and architectures exceeding a thousand computation cores are to be expected in the future. For better scalability, extremely powerful communication networks are integrated into these processors (network on chip, NoC), so that they de facto combine properties of a distributed system with those of a NUMA system. The extremely low latency and high bandwidth of those networks open up the possibility of migrating methods of replication and consistency preservation from hardware into operating and run-time systems and thus of flexibly counteracting notorious problems such as false sharing of memory cells, cache-line thrashing, and bottlenecks in memory bandwidth.

    The goal of the project is therefore to first design a minimal, event-driven consistency kernel (COKE) for such many-core processors that provides the relevant elementary operations for software-controlled consistency-preservation protocols at higher levels. On the basis of this kernel, diverse "consistency machines" will be designed that facilitate different memory semantics for software- and page-based shared memory.
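
    A hedged sketch of the kind of elementary operation such a kernel might export: publishing a locally modified replica so that a consumer core observes the latest writes. Both primitives are hypothetical stand-ins, not the COKE interface.

        /* Release-style publish of a replicated page under software-controlled
         * consistency: write back local modifications, then notify the
         * consumer core via the on-chip network so it refetches its copy. */
        #include <stddef.h>

        extern void cache_writeback(void *addr, size_t len); /* push local writes */
        extern void noc_notify(int core, void *addr);        /* message via NoC   */

        void coke_publish(void *page, size_t len, int consumer_core)
        {
            cache_writeback(page, len);          /* make writes visible in memory */
            noc_notify(consumer_core, page);     /* trigger invalidate/refetch    */
        }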

  • Software Infrastructure for Resource-Constrained Networked Systems (Phase 1)

    (Third Party Funds Group – Sub project)

    Overall project: FOR 1508: Dynamically Adaptable Applications for Bat Localisation by Means of Embedded Communicating Sensor Systems
    Term: 1. August 2012 - 31. July 2015
    Funding source: DFG / Forschungsgruppe (FOR)
  • Efficient Distributed Coordination

    (Own Funds)

    Term: 1. January 2012 - 31. December 2026
    URL: https://www4.cs.fau.de/Research/EDC/

    Coordination services such as ZooKeeper are essential building blocks of today's data-center infrastructures, as they provide the processes of distributed applications with means to exchange messages, perform leader election, detect machine or process crashes, and reliably store configuration data. Providing an anchor of trust for their client applications, coordination services have to meet strong requirements regarding stability and performance. Only in this way is it possible to ensure that a coordination service neither is a single point of failure nor becomes the bottleneck of the entire system.

    To address drawbacks of state-of-the-art systems, the EDC project develops approaches that enable coordination services to meet these stability and performance demands. Among other things, this includes making these services resilient against both benign and malicious faults, integrating mechanisms for extending the service functionality at runtime in order to minimize communication and synchronization overhead, and designing system architectures that effectively and efficiently exploit the potential of multi-core servers. Although the project focuses on coordination services, the developed concepts and techniques are expected to also be applicable to other domains, for example replicated data stores.
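
    To make one of the named primitives concrete, the miniature C sketch below reduces leader election to a single compare-and-swap over shared state; a real coordination service such as ZooKeeper implements the same idea on top of replicated, crash-tolerant state rather than a process-local variable.

        /* Toy leader election: the first process to claim the empty slot wins. */
        #include <stdatomic.h>

        static _Atomic int leader_id = 0;       /* 0 = no leader elected yet */

        /* Try to become leader; returns the id of the current leader,
         * which may be the caller itself. */
        int try_elect(int my_id)
        {
            int expected = 0;
            atomic_compare_exchange_strong(&leader_id, &expected, my_id);
            return atomic_load(&leader_id);
        }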

  • Aspect-Oriented Real-Time Architecture (Phase 1)

    (Third Party Funds Single)

    Term: 1. August 2011 - 31. August 2016
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
    URL: https://www4.cs.fau.de/Research/AORTA/

    The real-time system architecture plays a central role in the development of real-time systems, as it determines the mechanisms used to implement causal and temporal dependencies between the different, concurrent tasks of a real-time system. Time-triggered and event-triggered systems represent two opposite poles of such architectures. In the former, dependencies are preferably mapped onto temporal mechanisms: task fragments are scheduled in time such that, for example, mutual exclusion or producer-consumer dependencies are respected. In the latter, such dependencies are coordinated explicitly by means of synchronization constructs such as semaphores or locks. The real-time system architecture thus influences the development of a real-time system at the application level, where it can be regarded as a strongly crosscutting, non-functional property. This property in turn affects the implementation of further important non-functional properties of real-time systems, such as redundancy or memory consumption. Based on a suitable representation of the causal and temporal dependencies at the application level, the project develops mechanisms for deliberately influencing the real-time system architecture and, with it, further non-functional properties of real-time systems.
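
    The two poles can be illustrated with a producer-consumer dependency: an event-triggered system coordinates it explicitly with a semaphore, while a time-triggered system encodes the same dependency implicitly in its static schedule. A minimal C sketch, assuming POSIX semaphores and an invented wait_for_next_slot() timer primitive:

        /* Event-triggered: the dependency is an explicit semaphore
         * (assume sem_init(&items, 0, 0) was called during startup). */
        #include <semaphore.h>

        void produce(void);            /* application work, assumed      */
        void consume(void);
        void wait_for_next_slot(void); /* assumed cyclic timer primitive */

        sem_t items;                   /* counts produced, unconsumed items */

        void producer_et(void) { produce(); sem_post(&items); }
        void consumer_et(void) { sem_wait(&items); consume(); }

        /* Time-triggered: the same dependency is encoded purely in the
         * schedule; the consumer slot follows the producer slot, so no
         * synchronization object is needed at all. */
        void cyclic_executive(void)
        {
            for (;;) {
                wait_for_next_slot();
                produce();             /* slot 0                    */
                wait_for_next_slot();
                consume();             /* slot 1: ordering by time  */
            }
        }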

  • Latency Awareness in Operating Systems

    (Third Party Funds Single)

    Term: 1. May 2011 - 30. April 2020
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)

    The goal of the LAOS project is to investigate the efficient use of modern many-core processors at the operating-system level, thereby providing low-latency operating-system services even under high contention.
    Purpose-built minimal kernels providing thread and interrupt management as well as synchronization primitives are analyzed with respect to their performance and scaling characteristics. These kernels span different architectural designs and alternative implementations. A strong focus lies on non-blocking implementations of parts of the kernel or, where possible, of the entire operating-system kernel. Standard Intel x86-64-compatible processors are the main target hardware because of their popularity in high-performance parallel computing, server, and desktop systems. After careful analysis, modifications to existing kernels such as Linux may be possible that increase their performance in highly parallel systems.
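
    As an example of the non-blocking building blocks such kernels rest on, here is a sketch of a Treiber stack using C11 atomics. It is deliberately simplified: it ignores the ABA problem and safe memory reclamation, both of which a real kernel implementation would have to address.

        /* A classic non-blocking primitive: a Treiber stack built on
         * compare-and-swap. No thread ever blocks; a failed CAS simply
         * retries with the refreshed top pointer. */
        #include <stdatomic.h>
        #include <stddef.h>

        struct node { struct node *next; void *item; };

        static _Atomic(struct node *) top = NULL;

        void push(struct node *n)
        {
            struct node *old = atomic_load(&top);
            do {
                n->next = old;     /* old is refreshed on CAS failure */
            } while (!atomic_compare_exchange_weak(&top, &old, n));
        }

        struct node *pop(void)
        {
            struct node *old = atomic_load(&top);
            while (old &&
                   !atomic_compare_exchange_weak(&top, &old, old->next))
                ;                  /* CAS failed: old was refreshed    */
            return old;            /* NULL if the stack was empty      */
        }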

  • Trustworthy Clouds - Privacy and Resilience for Internet-scale Critical Infrastructure

    (Third Party Funds Group – Sub project)

    Overall project: Trustworthy Clouds - Privacy and Resilience for Internet-scale Critical Infrastructure
    Term: 1. October 2010 - 1. October 2013
    Funding source: EU - 7. RP / Cooperation / Verbundprojekt (CP)
  • Dependability Aspects in Configurable Embedded Operating Systems

    (Third Party Funds Group – Sub project)

    Overall project: SPP 1500: Design and Architectures of Dependable Embedded Systems
    Term: 1. October 2010 - 30. September 2017
    Funding source: DFG / Schwerpunktprogramm (SPP)
    Future hardware designs for embedded systems will exhibit more parallelism at the price of being less reliable. This bears new challenges for system software, especially the operating system, which has to use and provide software measures to compensate for unreliable hardware. However, dependability in this respect is a nonfunctional concern that affects and depends on all parts of the system. Tackling it in a problem-oriented way by the operating system is an open challenge: (1) It is still unclear, which combination of software measures is most beneficial to compensate certain hardware failures – ideally these measures should be understood as a matter of configuration and adaptation. (2) To achieve overall dependability, the implementation of these measures, even though provided by the operating system, cannot be scoped just to the operating-system layer – it inherently crosscuts the whole software stack. (3) To achieve cost-efficiency with respect to hardware and energy, the measures have, furthermore, to be tailored with respect to the actual hardware properties and reliability requirements of the application. We address these challenges for operating-system design by a novel combination of (1) speculative and resource-efficient fault-tolerance techniques, which can (2) flexibly be applied to the operating system and the application by means of aspect-oriented programming, driven by (3) a tool-based (semi-)automatic analysis of the application and operating-system code, resulting in a strictly problem-oriented tailoring of the latter with respect to hardware-fault tolerance.
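
    As a minimal illustration of one such software measure, the C sketch below triplicates a critical variable and repairs single bit flips by majority voting on each read. In the project's setting, code of this kind would be selected by configuration and woven into application and operating-system code by aspect-oriented tooling rather than written by hand.

        /* Sketch of a configurable software measure against transient
         * hardware faults: a triplicated variable with majority voting. */
        #include <stdint.h>

        struct tmr_u32 { uint32_t a, b, c; };

        void tmr_store(struct tmr_u32 *v, uint32_t x)
        {
            v->a = v->b = v->c = x;
        }

        /* Bitwise majority vote: a single corrupted replica is outvoted
         * and transparently repaired on read. */
        uint32_t tmr_load(struct tmr_u32 *v)
        {
            uint32_t x = (v->a & v->b) | (v->a & v->c) | (v->b & v->c);
            tmr_store(v, x);       /* scrub back the voted value */
            return x;
        }
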
  • Security in Invasive Computing Systems (C05)

    (Third Party Funds Group – Sub project)

    Overall project: TRR 89: Invasive Computing
    Term: 1. July 2010 - 30. June 2022
    Funding source: DFG / Sonderforschungsbereich / Transregio (SFB / TRR)
    This subproject investigates requirements and mechanisms for protecting resource-aware reconfigurable hardware/software architectures against malicious attackers. The focus lies on enforcing information-flow control by means of isolation mechanisms at the application, operating-system, and hardware levels. The investigations aim at insights into the interactions between security and the predictability of critical properties of an invasive computing system.
  • Invasive Run-Time Support System (iRTSS) (C01)

    (Third Party Funds Group – Sub project)

    Overall project: TRR 89: Invasive Computing
    Term: 1. July 2010 - 30. June 2022
    Funding source: DFG / Sonderforschungsbereich / Transregio (SFB / TRR)

    Subproject C1 investigates system software for invasive-parallel applications. It provides methods, principles, and abstractions for the application-aware extension, configuration, and adaptation of invasive computing systems through a novel, highly flexible operating-system infrastructure, which is integrated into a Unix host system for practical use. The investigations cover (1) new design and implementation approaches for concurrency-aware operating systems, (2) novel AOP-like methods for the static and dynamic (re)configuration of operating systems, and (3) agent-based approaches for scalable and flexible resource management.
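
    The resource-aware programming model underlying invasive computing revolves around the invade/infect/retreat primitives; the following sketch shows their typical usage pattern with hypothetical C signatures (the actual iRTSS interface differs).

        /* Usage pattern of the invade/infect/retreat model; the types
         * and signatures here are illustrative, not the iRTSS API. */
        typedef struct claim claim_t;        /* a set of claimed cores */
        typedef void (*ilet_fn)(void *arg);  /* work unit ("i-let")    */

        claim_t *invade(int min_cores, int max_cores); /* acquire      */
        void     infect(claim_t *c, ilet_fn f, void *arg); /* run      */
        void     retreat(claim_t *c);                  /* release      */

        void compute(void *arg) { /* ... parallel phase ... */ }

        void phase(void)
        {
            claim_t *c = invade(2, 8); /* negotiate 2..8 cores         */
            if (c) {
                infect(c, compute, 0); /* spread work onto the claim   */
                retreat(c);            /* hand resources back          */
            }
        }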

  • Adaptive Responsive Embedded Systems (ESI 2)

    (Third Party Funds Group – Sub project)

    Overall project: ESI-Anwendungszentrum für die digitale Automatisierung, den digitalen Sport und die Automobilsensorik der Zukunft
    Term: 1. January 2010 - 31. December 2018
    Funding source: Bayerisches Staatsministerium für Wirtschaft und Medien, Energie und Technologie (StMWIVT) (ab 10/2013)
    URL: https://www4.cs.fau.de/Research/ARES/
  • Resource-Efficient Fault and Intrusion Tolerance

    (Third Party Funds Single)

    Term: 1. October 2009 - 31. October 2024
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
    URL: https://www4.cs.fau.de/Research/REFIT/

    Internet-based services play a central role in today's society. With such services progressively taking over from traditional infrastructures, their complexity steadily increases; on the downside, this leads to more and more faults occurring. As improved software-engineering techniques alone will not suffice, systems have to be prepared to tolerate faults and intrusions.

    REFIT investigates how systems can provide fault and intrusion tolerance in a resource-efficient manner. The key technology to achieve this goal is virtualization, as it enables multiple service instances to run in isolation on the same physical host. Server consolidation through virtualization not only saves resources in comparison to traditional replication, but also opens up new possibilities to apply optimizations (e.g., deterministic multi-threading).

    Resource efficiency and performance of the REFIT prototype are evaluated using a web-based multi-tier architecture, and the results are compared to non-replicated and traditionally-replicated scenarios. Furthermore, REFIT develops an infrastructure that supports the practical integration and operation of fault and intrusion-tolerant services; for example, in the context of cloud computing.
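
    One detail that such replication makes concrete is reply voting at the client: with at most f Byzantine replicas, a result can be trusted once f+1 identical replies have arrived. A sketch in C, with an invented reply structure:

        /* Client-side reply voting for a Byzantine fault-tolerant
         * service: with at most F faulty replicas, F+1 matching
         * replies prove a correct result. Structures are invented
         * for illustration. */
        #include <string.h>
        #include <stdbool.h>

        #define F        1            /* tolerated faulty replicas */
        #define REPLICAS (3 * F + 1)  /* typical BFT group size    */

        struct reply { unsigned char digest[32]; bool received; };

        bool voted_result(const struct reply r[REPLICAS], int *winner)
        {
            for (int i = 0; i < REPLICAS; i++) {
                if (!r[i].received) continue;
                int matches = 0;
                for (int j = 0; j < REPLICAS; j++)
                    if (r[j].received &&
                        !memcmp(r[i].digest, r[j].digest, 32))
                        matches++;
                if (matches >= F + 1) { *winner = i; return true; }
            }
            return false;      /* keep waiting for more replies */
        }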

  • Variability Management in Operating Systems

    (Third Party Funds Single)

    Term: 1. November 2008 - 31. October 2011
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)

    The subject of this project is the variability of system software caused by non-functional properties of operating-system functions: (a) variability caused by different implementations of the same system function in order to bring out certain non-functional properties, and (b) variability appearing at the call sites of these implementations in order to compensate for the effects of certain non-functional properties. Program sequences for the case-dependent compensation of effects at such impact points in the system software are described by problem-specifically designed "fittings" in a domain-specific language (DSL): a fitting resembles an aspect (AOP), but it can be applied at fine granularity to arbitrary, explicitly designated program points in the system software. A tool (fitting compiler) merges the implementations and impact points of selected non-functional properties, processing the fittings case by case. Among the concerns addressed are architectural issues of an operating system regarding synchronization, preemption (of execution threads), and the machine's mode of operation. The approach is validated on in-house developments (CiAO) and third-party systems (eCos, Linux). To reduce the risk of wrong decisions in the composition process, functionally identical operating-system products that differ in their non-functional properties are assessed against multiple criteria.

  • Variability Management in Operating Systems

    (Third Party Funds Single)

    Term: 1. October 2008 - 31. October 2011
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)

    Today's operating systems (as well as other system software) offer a great deal of static configurability to tailor them to a specific application or hardware platform. Linux 3.2, for instance, provides (via its Kconfig models and tools) more than twelve thousand configurable features for this purpose. Technically, the implementation of all these features is spread over multiple levels of the software-generation process, including the configuration system, build system, C preprocessor, compiler, linker, and more. This enormous variability has become unmanageable in practice; in the case of Linux, it has already led to thousands of variability defects. By this term, we denote bugs and other quality issues related to the implementation of variable features. Variability defects manifest as configuration-consistency and configuration-coverage issues.

    In the VAMOS project, we investigate methods and tools to mitigate this situation by taking a holistic view on variability. Our findings have already led to more than 100 accepted patches in the Linux mainline kernel (see our EuroSys '11 and SPLC '12 papers) and to an approach for the automatic tailoring of Linux server systems in order to reduce the exploitable code base (see our HotDep '12 paper). Currently, we are working on the issue of configuration coverage (see our PLOS '12 paper).
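
    A typical configuration-consistency defect of the kind these tools detect, shown here with hypothetical feature names: the code tests a symbol that no Kconfig model defines, so the guarded block is dead in every derivable configuration, which no single compiler run can reveal.

        /* Hypothetical example: Kconfig defines CONFIG_FOO_CACHE, but
         * the code tests a misspelled symbol, so this block is dead in
         * every configuration. The defect is invisible to any single
         * build; only a variability-aware analysis can find it. */
        #include <stdio.h>

        void flush_caches(void)
        {
        #ifdef CONFIG_FOO_CACHES   /* misspelled: never defined */
            printf("flushing feature cache\n");
        #endif
        }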

  • Platform for evaluation and education of embedded and safety-critical system software

    (Third Party Funds Single)

    Term: 1. October 2007 - 31. July 2014
    Funding source: Siemens AG

    The project originally started in the context of the CoSa project, where the I4Copter was intended to serve as a credible demonstrator for safety-critical mission scenarios. During the development of the prototype, it turned out to be more of a challenge than initially expected, both in terms of complexity and applicability. The software required for flight control, navigation, and communication is a comprehensive and demanding application for the underlying system software. The I4Copter has therefore emerged as a demonstrative showcase addressing various aspects of system-software research, from which other research projects, such as CiAO, also benefit. Beyond the domain of computer science, the development of a quadrotor helicopter also involves challenges in engineering, manufacturing, and automatic control, which has turned I4Copter into an interdisciplinary project with partners in other sciences. It is thus an ideal platform for students, especially those in combined study programs (e.g., Mechatronics or Computational Engineering), demonstrating the need for cross-domain education.

Jürgen Kleinöder is CIO of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany, and Senior Academic Director at the Department of Computer Science 4 (Chair for Distributed Systems and Operating Systems). He is Deputy Managing Director of the Department of Computer Science and Managing Director of the Transregional Collaborative Research Center “Invasive Computing” (SFB/TRR 89).

He completed his Master’s degree (Diplom-Informatiker) in 1987 and his Ph.D. (Dr.-Ing.) in 1992 at the University of Erlangen. Between 1986 and 1989 he worked on UNIX operating-system support for multiprocessor architectures. From 1988 to 1991 he was a member of the project groups for the foundation of a Bavarian university network and the German IP network.

He is currently interested in all aspects of distributed object-oriented operating-system architectures, particularly in concepts for application-specific adaptable operating-system and run-time-system software (middleware).

He is a member of the ACM, EuroSys, and the German GI. From 2001 to 2008 he chaired the GI special interest group on operating systems.