Friedrich-Alexander-Universität Lehrstuhl für Informatik 4 (Systemsoftware)

AIMBOS

AIMBOS: Abstract Interpretation for Embedded AI Code Safety

(Third Party Funds Single)

Project leaders: Dr. Peter Wägemann (FAU), Dr. Dominik Penk (Schaeffler), Dr. Dominik Riedelbauch (Schaeffler)
Project members: Tobias Häberlein (FAU), Dr. Isabella Stilkerich (Schaeffler)
Start date: January 1, 2025
End date: December 31, 2027
Acronym: AIMBOS
Funding source: Schaeffler Technologies AG & Co. KG

Abstract

Artificial intelligence (AI), particularly in the form of machine learning (ML) models, is steadily gaining importance in industrial applications. For Schaeffler, the use of such methods on edge and embedded devices is of particular interest. Developing and deploying AI solutions in these environments presents a unique array of challenges. This project focuses specifically on proving the functional safety and correctness of deployed code. To this end, we will investigate how abstract interpretation, a well-known mathematical method for proving a wide range of program properties, can be applied and extended to ML-based applications containing (deep) neural networks.
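To illustrate the general idea (this is a generic sketch, not the project's actual tooling): a simple abstract domain for neural networks is the interval domain, where each input is replaced by a lower/upper bound and the bounds are propagated soundly through the layers. The resulting output intervals over-approximate all reachable outputs, which can prove properties such as "the output never exceeds a threshold". The weights below are illustrative values only.

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    # Sound interval propagation through y = W @ x + b:
    # positive weights take the matching bound, negative weights the opposite one.
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    # ReLU is monotone, so applying it to both bounds is sound.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Two-layer toy network with fixed (hypothetical) weights.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.5])

# Input region: each input in [-1, 1].
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lo, hi = interval_relu(*interval_linear(lo, hi, W1, b1))
lo, hi = interval_linear(lo, hi, W2, b2)

print(lo, hi)  # sound lower/upper bounds on the network's output
```

Interval analysis is only the coarsest such domain; research tools typically refine it with relational domains (e.g. zonotopes or polyhedra) to reduce over-approximation.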

Friedrich-Alexander-Universität
Erlangen-Nürnberg

Schlossplatz 4
91054 Erlangen