Research Projects
Blockchain applications in aviation
(Third Party Funds Single)
Term: 1 September 2022 - 31 August 2024
Funding source: Bundesministerium für Wirtschaft und Klimaschutz (BMWK)
The concept of distributed ledger systems (blockchain) is a fundamentally new base technology that currently attracts strong public attention and promises considerable potential for solving problems in a wide range of application areas.
In addition, the air traffic landscape is foreseeably changing, with a massive increase in the number of airspace participants and new kinds of air traffic such as small autonomous systems. Furthermore, there is a need for digitalisation in the administrative and operational areas of aviation, which fits into a global digitalisation strategy. In aviation, maintaining the high safety standard, or even raising it, must remain an overarching goal.
This project investigates the applicability of distributed ledger technology to aviation, identifies its exploitable potential for aviation, and brings it into application.
To maximise the project's success, the following approach is pursued: First, in a divergent exploration phase, requirements and potentials in aviation are elaborated in parallel, while the blockchain base technology is analysed technically and placed in the aviation context. Subsequently, concepts for applying blockchain technology are developed in two application areas. These cover a representative range of use cases in aviation and at the same time offer particularly high usage potential. On the one hand, a concept is developed that enables complete, audit-proof, and trustworthy digital documentation of all components of an aircraft over its entire life cycle, shared among all involved parties on the basis of distributed ledger technology. This promises advantages in terms of reduced administrative effort, sustainability, and increased trust, and represents an administrative use case. On the other hand, a concept for a decentralised flight data recorder is developed in order to securely document clearances and state data of aircraft for the analysis of potential failure cases, with a view to new airspace structures and participants. Blockchain technology allows the recording of flight state data to be distributed across several airspace participants and ground stations, which achieves high availability of the recording. At the same time, manipulation of the data by individual participants is ruled out, and access to the recordings remains possible even after a potential destruction of the UAV. This concept is then implemented as a prototype and thus serves as proof of the practical applicability of blockchain technology in the intended application context. Results will be made available to the expert community and to European industry, so that this project directly secures the technological lead of the aviation sector and enables follow-up use in subsequent projects with industry partners.
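To illustrate one core property of such a decentralised flight data recorder --- tamper evidence through hash chaining --- the following minimal C sketch links each record to the hash of its predecessor. It is purely illustrative and not part of the project; the record layout is hypothetical, and FNV-1a merely stands in for a cryptographic hash.

```c
#include <stdint.h>
#include <stdio.h>

/* One flight-state record; the fields are illustrative only. */
struct fdr_record {
    uint64_t prev_hash;   /* hash of the preceding record (chain link) */
    uint64_t timestamp;   /* e.g. milliseconds since start of flight */
    double   lat, lon, alt;
};

/* FNV-1a over a byte buffer -- a stand-in for a cryptographic hash. */
static uint64_t fnv1a(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint64_t h = 14695981039346656037ULL;
    while (len--) {
        h ^= *p++;
        h *= 1099511628211ULL;
    }
    return h;
}

/* Append a record by linking it to the current tail hash, which every
 * replica keeps. Any later modification of an earlier record changes
 * its hash and breaks the chain, which other participants can detect. */
static uint64_t fdr_append(struct fdr_record *r, uint64_t tail_hash)
{
    r->prev_hash = tail_hash;
    return fnv1a(r, sizeof *r);
}

int main(void)
{
    struct fdr_record r1 = { 0, 1000, 52.27, 10.53, 120.0 };
    struct fdr_record r2 = { 0, 2000, 52.28, 10.54, 135.5 };
    uint64_t tail = fdr_append(&r1, 0);   /* genesis record */
    tail = fdr_append(&r2, tail);
    printf("log tail hash: %016llx\n", (unsigned long long)tail);
    return 0;
}
```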
The project is carried out in cooperation between Technische Universität Braunschweig and Friedrich-Alexander-Universität Erlangen-Nürnberg. Through the participation of the Institute of Flight Guidance (IFF, TUBS) and the Chair of Computer Science 16 (Systems Software) (I16, FAU), the required competences in information technology and aviation are combined in an interdisciplinary manner; this unique constellation provides favourable conditions for achieving the project goals.
Design and validation of scalable, Byzantine fault tolerant consensus algorithms for blockchains
(Third Party Funds Single)
Term: 1 September 2022 - 1 September 2025
Funding source: Deutsche Forschungsgemeinschaft (DFG)
Distributed Ledger Technologies (DLTs), often referred to as blockchains, enable the realisation of reliable and attack-resilient services without a central infrastructure. However, the widely used proof-of-work mechanisms for DLTs suffer from high operation latencies and enormous energy costs. Byzantine fault-tolerant (BFT) consensus protocols are a potentially energy-efficient alternative to proof-of-work. However, current BFT protocols also present challenges that still limit their practical use in production systems. This research project addresses these challenges by (1) improving the scalability of BFT consensus protocols without reducing their resilience, (2) applying modelling approaches that make the expected performance and timing behaviour of these protocols more predictable, even under attacks and taking environmental conditions into consideration, and (3) supporting the design process for valid, automatically testable BFT systems from specification to deployment in a blockchain infrastructure. The work on scalability aims at practical solutions that take into account challenges such as recovery from major outages or upgrades, as well as reconfiguration at runtime. We also want to design a resilient communication layer that decouples the choice of a suitable communication topology from the actual BFT consensus protocol and thus reduces its complexity; this should be supported by the use of trusted hardware components. In addition, we want to investigate combinations of these concepts with suitable cryptographic primitives to further improve scalability. Using systematic modelling techniques, we want to be able to analyse the efficiency of scalable, complex BFT protocols (for example, in terms of throughput and operation latency) even before deploying them in a real environment, based on knowledge of the system size, the computational power of nodes, and basic characteristics of the communication links. We also want to investigate robust countermeasures that help defend against targeted attacks in large-scale blockchain systems. The third objective is to support systematic and valid implementation in a practical system, structured into a constructive, modular approach in which a validatable BFT protocol is assembled from smaller, validatable building blocks; automated test procedures based on a heuristic algorithm that makes the complex search space of misbehaviour in BFT systems more manageable; and a tool for automated deployment with accompanying benchmarking and stress testing in large-scale DLTs.
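As a brief illustration of why scalability is challenging: classic BFT consensus needs n >= 3f + 1 replicas to tolerate f Byzantine faults, with quorums of ceil((n + f + 1) / 2) replicas so that any two quorums intersect in at least one correct replica. The following self-contained C sketch (illustrative only, not project code) computes these textbook bounds:

```c
#include <stdio.h>

/* Classic BFT bounds: n >= 3f + 1 replicas tolerate f Byzantine
 * faults; a quorum must contain at least ceil((n + f + 1) / 2)
 * replicas so that any two quorums intersect in a correct one. */
static int min_replicas(int f) { return 3 * f + 1; }

static int quorum_size(int n, int f)
{
    return (n + f + 1 + 1) / 2;   /* integer ceiling of (n+f+1)/2 */
}

int main(void)
{
    for (int f = 1; f <= 5; f++) {
        int n = min_replicas(f);
        printf("f=%d: n=%d, quorum=%d\n", f, n, quorum_size(n, f));
    }
    return 0;
}
```

For n = 3f + 1 this yields the familiar quorum size 2f + 1; the quadratic growth of pairwise quorum communication with n is one reason why scaling BFT protocols without losing resilience is hard.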
Dynamic Operating-System Specialisation
(Third Party Funds Single)
Term: since 1 May 2022
Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
URL: https://sys.cs.fau.de/research/doss
An operating system is located between two fronts: on the one hand ("above") the machine programs of the applications, with their sometimes very different functional and non-functional requirements, and on the other hand ("below") the computer hardware, whose features and capabilities ideally should be made available "unfiltered" and "noise-free" to the applications. However, a general-purpose system cannot be as efficient in any of its functions as a system designed for one specific purpose, and less demanding applications may insist on not paying for the resources consumed by functions they do not need. So it is not uncommon for large systems, once put into operation, to be subject to frequent changes --- precisely in order to achieve a better fit to changing application requirements.
The ideal operating system offers exactly what the respective application requires --- no more and no less, yet also depending on the characteristics of the hardware platform. However, such an ideal is only realistic, if at all, for a uni-programming mode of operation. In the case of multi-programming, the various applications would have to have "sufficiently equal" functional and non-functional requirement profiles in order not to burden any of them with the overhead that unneeded functions entail. An operating system with these characteristics falls into the category of special-purpose operating systems: it is tailored to the needs of applications of a certain type.
This is in contrast to the general-purpose operating system, where the ultimate hope is that no application will be burdened with excessive overhead from unneeded functions. At the very least, one can try to minimise the "background noise" in the operating system where necessary --- ideally with a different "discount" depending on the type of program. The operating system would then not only have to be dynamically freed from unnecessary ballast and shrink for less demanding applications, but also be able to grow again, with the necessary additional functions, for more demanding ones. Specialising an operating system to the respective application ultimately means functional reduction and enrichment, for which a suitable system-software design is desirable but, especially with legacy systems, often can no longer be implemented.
One determinant of operating-system specialisation concerns measures explicitly initiated "from outside". On the one hand, this affects selected system calls and, on the other hand, tasks such as bootstrapping and the loading of machine programs, operating-system kernel modules, or programs that are to be executed in sandbox-like virtual machines within the operating-system kernel. This form of specialisation also enables the dynamic generation of targeted protective measures for particularly vulnerable operating-system operations, such as loading external modules into the operating-system kernel. The other determinant concerns measures initiated implicitly "from within". This covers possible reactions of an operating system to changes in its own runtime behaviour that only become noticeable during operation, in order to then adapt the resource-management strategies to the respective workload and to seamlessly integrate the corresponding software components into the running system.
The project focuses on dynamic operating-system specialisation triggered by extrinsic and intrinsic events. Of particular interest are concepts and techniques that (a) are independent of a specific programming paradigm or hardware approach and (b) are based on just-in-time (JIT) compilation of parts of the operating system (kernel), so that these parts can be loaded on demand or replaced in anticipation of the respective conditions at the "operating-system fronts". Existing general-purpose systems such as Linux are the subject of investigation.
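The following C sketch illustrates, in deliberately simplified form, the kind of runtime replacement such specialisation relies on: callers reach a subsystem operation through an atomically updated indirection point, so a specialised (for instance, JIT-compiled) variant can be hot-swapped during operation. All names are hypothetical; this is not Linux kernel code.

```c
#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical read-path operation of some kernel subsystem. */
typedef long (*read_op_t)(int obj, void *buf, unsigned long len);

static long generic_read(int obj, void *buf, unsigned long len)
{
    (void)obj; (void)buf;
    /* full-featured path: checks, accounting, ... */
    return (long)len;
}

static long specialised_read(int obj, void *buf, unsigned long len)
{
    (void)obj; (void)buf;
    /* variant with checks stripped for the current workload;
     * functionally equivalent for exactly this workload */
    return (long)len;
}

/* Indirection point: all callers go through this atomic pointer. */
static _Atomic read_op_t read_op = generic_read;

/* Extrinsic or intrinsic trigger: swap the implementation at runtime. */
static void specialise(read_op_t variant)
{
    atomic_store_explicit(&read_op, variant, memory_order_release);
}

int main(void)
{
    char buf[16];
    read_op_t op = atomic_load_explicit(&read_op, memory_order_acquire);
    printf("generic: %ld\n", op(0, buf, sizeof buf));

    specialise(specialised_read);   /* e.g. after workload analysis */
    op = atomic_load_explicit(&read_op, memory_order_acquire);
    printf("specialised: %ld\n", op(0, buf, sizeof buf));
    return 0;
}
```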
Non-volatility in energy-aware operating systems
(Third Party Funds Single)
Term: since 1 January 2022
Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
URL: https://sys.cs.fau.de/en/research/neon-note
The current trend toward fast, byte-addressable non-volatile memory (NVM), with latencies and write endurance closer to SRAM and DRAM than to flash, positions NVM as a possible replacement for established volatile technologies. While the non-volatility and low leakage power, among other advantageous features, make NVM an attractive candidate for new system designs, there are also major challenges, especially for programming such systems. For example, power failures in combination with NVM used to protect the computation state result in control flows that can unexpectedly turn a sequential program into a non-sequential one: a program has to deal with its own state from earlier, interrupted runs.
If programs can be executed directly in NVM, ordinary volatile main memory becomes functionally superfluous. Volatile memory then remains only in the caches and in device/processor registers ("NVM-pure"). An operating system designed for this can dispense with many, if not all, persistence measures that would otherwise have to be implemented, and can thereby reduce its level of background noise. In detail, this reduces energy demand, increases computing power, and lowers latencies. In addition, by eliminating these persistence measures, an "NVM-pure" operating system is leaner than its functionally identical twin of conventional design. On the one hand, this contributes to better analysability of the operating system's non-functional properties; on the other hand, it results in a smaller attack surface and a smaller trusted computing base.
The project follows an "NVM-pure" approach. An imminent power failure leads to an interrupt request (power-failure interrupt, PFI), whereupon a checkpoint of the unavoidably volatile system state is created. In addition, to tolerate possible PFI losses, sensitive operating-system data structures are secured transactionally, analogous to methods of non-blocking synchronisation. Furthermore, methods of static program analysis are applied to (1) cleanse the operating system of superfluous persistence measures, which otherwise only generate background noise, (2) break up uninterruptible instruction sequences whose excessive interrupt latencies could cause the PFI-based checkpointing to fail, and (3) define the working areas for the dynamic energy-demand analysis. To demonstrate that an "NVM-pure" operating system can operate more efficiently than its functionally identical conventional twin, in terms of both time and energy, the work is carried out with Linux as an example.
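A minimal sketch of the checkpointing pattern behind such a PFI handler, assuming x86 cache-line flush intrinsics and a hypothetical checkpoint region in NVM: the volatile state is persisted first, then a commit flag, so that recovery can distinguish a complete checkpoint from a torn one. The PFI wiring itself is omitted; this is not project code.

```c
#include <immintrin.h>  /* _mm_clflush, _mm_sfence (x86) */
#include <stdint.h>
#include <string.h>

/* Hypothetical checkpoint area assumed to lie in NVM. */
struct checkpoint {
    uint64_t regs[32];     /* volatile CPU state to be saved */
    uint64_t valid;        /* commit flag, written last */
} cp __attribute__((aligned(64)));

static void flush_range(const void *p, size_t len)
{
    const char *c = p;
    for (size_t off = 0; off < len; off += 64)
        _mm_clflush(c + off);   /* write cache lines back to NVM */
    _mm_sfence();               /* order flushes before what follows */
}

/* Sketch of a power-failure-interrupt handler: persist the volatile
 * state first, then set and persist the commit flag. */
void pfi_handler(const uint64_t *live_regs)
{
    memcpy(cp.regs, live_regs, sizeof cp.regs);
    flush_range(cp.regs, sizeof cp.regs);

    cp.valid = 1;               /* commit: durable only after the data */
    flush_range(&cp.valid, sizeof cp.valid);
}

int main(void)
{
    uint64_t regs[32] = { 0 };
    pfi_handler(regs);          /* simulate an imminent power failure */
    return cp.valid == 1 ? 0 : 1;
}
```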
Power-fail aware byte-addressable virtual non-volatile memory
(Third Party Funds Group – Sub project)
Overall project: SPP 2377: Disruptive Memory Technologies
Term: 5 April 2021 - 14 May 2026
Funding source: DFG / Schwerpunktprogramm (SPP)
URL: https://sys.cs.fau.de/en/research/pave-note
Virtual memory (VM) subsystems blur the distinction between storage and memory such that both volatile and non-volatile data can be accessed transparently via CPU instructions. Every VM subsystem tries hard to keep highly contended data in fast volatile main memory to mitigate the high access latency of secondary storage, irrespective of whether the data is considered volatile or not. The recent advent of byte-addressable NVRAM does not change this scheme in principle, because the current technology can neither replace DRAM as fast main memory, due to its significantly higher access latencies, nor secondary storage, due to its significantly higher price and lower capacity. Thus, ideally, VM subsystems should be NVRAM-aware and be extended in such a way that all available byte-addressable memory technologies can be employed to their respective advantages. By means of an abstraction anchored in the VM management of the operating system, legacy software should then be able to benefit, unchanged and efficiently, from byte-addressable non-volatile main memory. Because most VM subsystems are complex, highly tuned software systems that have evolved over decades of development, we follow a minimally invasive approach and integrate NVRAM-awareness into an existing VM subsystem instead of developing an entirely new system from scratch. NVRAM will serve as an immediate DRAM substitute in case of main-memory shortage and inherently support processes with large working sets. However, due to the high access latencies of NVRAM, non-volatile data also needs to be kept at least temporarily in fast volatile main memory and the volatile CPU caches anyway. Our new VM subsystem - we want to adapt FreeBSD accordingly - then takes care of migrating pages between DRAM and NVRAM as the available resources allow. The DRAM is thus effectively managed as a large, software-controlled volatile page cache for NVRAM. Consequently, this raises the question of data loss caused by power outages. The VM subsystem therefore needs to keep its own metadata in a consistent and recoverable state, and modified pages in volatile memory need to be copied to NVRAM to avoid losses. The former requires an extremely efficient transactional mechanism for modifying complex, highly contended VM metadata, while the latter must cope with potentially large amounts of modified pages under limited energy reserves.
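The following C fragment sketches the migration policy described above in schematic form; it is a toy model (all types and thresholds are hypothetical), not FreeBSD code:

```c
#include <stdbool.h>
#include <stdio.h>

/* Schematic two-tier page descriptor; the fields are hypothetical. */
enum tier { TIER_DRAM, TIER_NVRAM };

struct page {
    enum tier tier;
    unsigned  access_count;   /* hotness approximation */
    bool      dirty;
};

/* Policy sketch: under DRAM pressure, demote cold pages to NVRAM;
 * promote hot NVRAM pages to DRAM when space is available. Dirty
 * pages must be written back to NVRAM before demotion so that a
 * power failure cannot lose their contents. */
static void balance(struct page *p, bool dram_pressure, unsigned hot)
{
    if (p->tier == TIER_DRAM && dram_pressure && p->access_count < hot) {
        if (p->dirty) {
            /* copy page contents to NVRAM, flush, then mark clean */
            p->dirty = false;
        }
        p->tier = TIER_NVRAM;           /* demote cold page */
    } else if (p->tier == TIER_NVRAM && !dram_pressure &&
               p->access_count >= hot) {
        p->tier = TIER_DRAM;            /* promote hot page */
    }
}

int main(void)
{
    struct page p = { TIER_DRAM, 1, true };
    balance(&p, true, 4);               /* simulate memory pressure */
    printf("tier after balancing: %s\n",
           p.tier == TIER_DRAM ? "DRAM" : "NVRAM");
    return 0;
}
```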
Micro Replication for Dependable Network-based Services
(Third Party Funds Single)
Term: 1 November 2024 - 31 October 2027
Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
Network-based services such as distributed databases, file systems, or blockchains are essential parts of today's computing infrastructures and therefore must be able to withstand a wide spectrum of fault scenarios, including hardware crashes, software failures, and attacks. Although a variety of state-machine replication protocols exist that provide fault and intrusion tolerance, it is inherently difficult to build dependable systems based on their complex and often incomplete specifications. Unfortunately, this commonly leaves systems vulnerable to correlated failures or attacks, for example when, to save development and maintenance costs, all replicas in a system share the same homogeneous implementation.
In the Mirador project, we seek to eliminate this gap between theory and practice by proposing a novel paradigm for the specification and implementation of dependable systems: micro replication. In contrast to existing systems, micro-replication architectures do not consist of a few monolithic and complex replicas, but are instead organized as dedicated, loosely coupled micro-replica clusters that are each responsible for a different protocol step or mechanism. As a key benefit of providing only a small subset of the overall protocol functionality, micro replicas make it significantly easier to reason about the completeness and correctness of both specifications and implementations. To further reduce complexity, all micro replicas follow a standardized internal workflow, thereby greatly facilitating the task of introducing heterogeneity at the replica, communication, and authentication levels.
Starting from this basic concept, in the Mirador project we explore micro replication as a means to build dependable replicated systems and examine its flexibility by developing micro-replication architectures for different fault models (i.e., crashes and Byzantine faults). In particular, our research focuses on two problem areas: First, we aim at increasing the resilience of micro-replicated systems by enabling them to recover from replica failures. Among other things, this requires mechanisms for rejuvenating micro replicas from a clean state and for integrating replacement replicas at runtime. Second, our goal is to improve the performance and efficiency of micro-replicated systems and of the applications running on top of them. Specifically, this includes the design of techniques that reduce overheads by exploiting optimistic approaches, saving processor and network resources in the absence of faults. Furthermore, we investigate ways to restructure the service logic, for example by outsourcing preprocessing steps to upstream micro-replica clusters. To evaluate the concepts, protocols, and mechanisms developed in the Mirador project, we are building a heterogeneous micro-replicated platform that allows us to conduct experiments for a wide range of settings and with a variety of applications.
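To make the idea of a standardized internal workflow more concrete, the following C sketch models a micro replica as a verify/process/forward pipeline stage; all types and names are hypothetical and merely illustrate the concept, not the Mirador implementation:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch of the standardized micro-replica workflow:
 * every micro replica verifies its input, applies one protocol step,
 * and forwards the result to the next micro-replica cluster. */
struct msg { int sender; long seq; const char *payload; };

struct micro_replica {
    const char *step;                    /* e.g. "order", "execute" */
    bool (*verify)(const struct msg *);  /* authentication check */
    struct msg (*process)(struct msg);   /* one protocol step */
    void (*forward)(struct msg);         /* hand over to next cluster */
};

static void run(const struct micro_replica *r, struct msg in)
{
    if (!r->verify(&in))         /* drop unauthenticated input */
        return;
    r->forward(r->process(in));  /* apply step, forward result */
}

/* Toy implementations for demonstration. */
static bool verify_ok(const struct msg *m) { return m->sender >= 0; }
static struct msg assign_seq(struct msg m) { m.seq = 42; return m; }
static void print_out(struct msg m)
{
    printf("step done: seq=%ld payload=%s\n", m.seq, m.payload);
}

int main(void)
{
    struct micro_replica order = { "order", verify_ok, assign_seq, print_out };
    struct msg req = { 1, 0, "client request" };
    run(&order, req);
    return 0;
}
```

Because each stage exposes only this small interface, heterogeneous implementations of `verify`, `process`, and `forward` can be combined freely across clusters.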
Resilient Power-Constrained Embedded Communication Terminals
(Third Party Funds Group – Sub project)
Overall project: SPP 2378 Resilient Worlds
Term: since 26 March 2021
Funding source: Deutsche Forschungsgemeinschaft (DFG)
Within the wide subject of resilience in networked worlds, ResPECT focuses on a core element of all networked systems: sensor and actuator nodes in cyber-physical systems. To date, communication has been understood and implemented as an auxiliary functionality of embedded systems. The system itself is disruption-tolerant and able to handle power failures, or to a limited extent even hardware problems, but the communication is not part of the overall design; in the best case it can make use of the underlying system resilience. ResPECT develops a holistic operating-system and communication-protocol stack, on the assumption that conveying information (the receipt of control data for actuators or the sending of sensor data) is a core task of all networked components. Consequently, it must become part of the operating system's management functionality. ResPECT builds on two pillars: non-volatile memory and transactional operation. In recent years, non-volatile memory has evolved into a serious element of the storage hierarchy; even embedded platforms with exclusively non-volatile memory become conceivable. Network communication, unlike social communication, is transactional by design: data is collected and transmitted between the communication partners under channel constraints such as latency, error resilience, and energy consumption, and under content constraints such as the age, and therewith the value, of information. Unlike an operating system, however, this communication faces many external disruptions and influences. In addition, the duration of a disruption can have severe implications for the validity of already completed transactions, such as the persistence of the physical connection; all of this has to be considered upon resumption. ResPECT will therefore, through interdisciplinary research by operating-system and communication experts, develop a transaction-based model and will apply non-volatile memory to ensure that the states in the flow of transactions are known at any point in time and can and will be stored persistently. This monitoring and storing functionality must be very efficient (with respect to energy consumption as well as to the amount of data to be stored in non-volatile memory) and hence be implemented as a core functionality of the operating system. To ensure generalizability and to make the model available for a variety of future platforms, ResPECT will focus on IP networks and use communication networks that are typically operated as WANs, LANs, or PANs (wide, local, or personal area networks).
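The following C sketch illustrates the transactional pattern such a design could build on, under the assumption of a transaction descriptor residing in NVM: the transaction state is persisted at each step, so that after a power failure the node knows exactly which transmissions completed. The descriptor layout and the persist stub are hypothetical, not ResPECT code.

```c
#include <stdio.h>

/* Hypothetical transaction descriptor assumed to reside in NVM.
 * The state is persisted at every step, so after a power failure
 * the node knows which transmissions completed. */
enum tx_state { TX_IDLE, TX_PREPARED, TX_SENT, TX_ACKED };

struct tx {
    enum tx_state state;
    long seq;
};

/* Stand-in for a write-back/flush of the descriptor to NVM. */
static void persist(struct tx *t) { (void)t; /* clflush + fence here */ }

static void send_with_tx(struct tx *t, long seq)
{
    t->seq = seq;
    t->state = TX_PREPARED; persist(t);   /* before touching the radio */
    /* ... transmit packet over the IP network ... */
    t->state = TX_SENT;     persist(t);
    /* ... wait for acknowledgement ... */
    t->state = TX_ACKED;    persist(t);   /* transaction complete */
}

/* On resumption after a power failure, the persisted state tells the
 * node whether to retransmit (TX_PREPARED/TX_SENT) or move on (TX_ACKED). */
int main(void)
{
    struct tx t = { TX_IDLE, 0 };
    send_with_tx(&t, 7);
    printf("tx %ld finished in state %d\n", t.seq, (int)t.state);
    return 0;
}
```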
Whole-System Optimality Analysis and Tailoring of Worst-Case–Constrained Applications
(Third Party Funds Single)
Term: since 1 November 2022
Funding source: Deutsche Forschungsgemeinschaft (DFG)
Energy-constrained real-time systems, such as implantable medical devices, are prevalent in modern life. These systems demand that their software provide both safe and energy-efficient task execution. Regarding safety, such systems must execute their tasks within execution-time and energy bounds, since resource-budget violations potentially endanger life. To guarantee the system's safe execution within the given time and energy resources, static program-code analysis tools can automatically determine the system's worst-case resource-consumption behaviour. However, existing static analyses are so far unable to tackle the problem of resource-efficient execution while maintaining safe execution under the given resource constraints. Achieving efficient execution through manual tailoring would involve an unrealistically high effort, especially considering the large number of energy-saving features in modern system-on-chip (SoC) platforms. To eventually achieve resource-optimal execution and likewise allow the operating system to safely schedule tasks, a whole-system view on software tasks, their resource constraints, and hardware features is essential, which goes beyond the current state of the art.
The research proposal Watwa describes an approach for whole-system optimality analysis and automatic tailoring of worst-case-constrained applications. The core idea is the automatic generation of variants of the analyzed application that are equivalent from a functional point of view. The variant generation accounts for the multitude of modern energy-saving features, which, in turn, allows subsequent optimizing analyses to tailor the application by aggressively switching off unneeded, power-consuming components in the SoC. The temporal and energetic behaviour of these variants is examined by means of worst-case analysis tools that yield bounds on the respective resource demands, which ultimately ensures safe execution at runtime. This novel combination of variant generation and worst-case analysis allows Watwa to systematically determine hints for safe, resource-optimal scheduling sequences. To exploit these hints at runtime, the project proposes an operating system, along with its scheduler, that dynamically reacts to changes in the environment and exploits the determined scheduling hints for efficiency while enforcing safe operation within resource budgets. In summary, the goal of this project is to answer the following two questions: (1) How can program-code analyses determine resource-optimal task variants by exploiting modern hardware features while accounting for worst-case bounds? (2) How can operating systems exploit analytically determined scheduling hints on possible task-execution sequences and hardware activations to guarantee safe task executions while increasing the system's efficiency?
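As a schematic illustration of how a scheduler might exploit such analytically determined hints, the following C sketch selects, among functionally equivalent task variants, the fastest one whose worst-case time and energy bounds still fit the remaining budgets. All numbers and names are hypothetical; this is not the Watwa scheduler.

```c
#include <stdio.h>

/* Hypothetical task variant with analysed worst-case bounds. */
struct variant {
    const char *name;     /* e.g. which SoC components stay powered */
    unsigned wcet_us;     /* worst-case execution-time bound */
    unsigned wcec_uj;     /* worst-case energy-consumption bound */
};

/* Pick the fastest functionally equivalent variant whose worst-case
 * bounds still fit the remaining time and energy budgets; return -1
 * if no variant is admissible. */
static int pick(const struct variant *v, int n,
                unsigned time_budget_us, unsigned energy_budget_uj)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (v[i].wcet_us > time_budget_us || v[i].wcec_uj > energy_budget_uj)
            continue;  /* would violate a resource budget */
        if (best < 0 || v[i].wcet_us < v[best].wcet_us)
            best = i;
    }
    return best;
}

int main(void)
{
    struct variant v[] = {
        { "all-cores+accel", 100, 900 },   /* fast but power-hungry */
        { "single-core",     400, 300 },   /* slow but frugal       */
    };
    int i = pick(v, 2, 500, 400);          /* tight energy budget   */
    printf("chosen variant: %s\n", i >= 0 ? v[i].name : "none");
    return 0;
}
```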