INDUX-R: Transforming European INDUstrial Ecosystems through eXtended Reality enhanced by human-centric AI and secure, 5G-enabled IoT (#101135556)
EU Horizon Europe Research & Innovation Action (RIA)
Runtime: 36 months
Role: Principal Investigator, Co-Author Proposal
Partners: CERTH, FORTH, CWI, University of Augsburg, TUM, University of Barcelona, Fundacio Eurecat, FINT, NOVA, ORAMA, INOVA, RINA-CSM, IDECO, Crealsa, Inventics, University of Geneva, EKTACOM, University of Jena
INDUX-R will create an XR ecosystem with concrete technological advances over existing offerings, validated in scenarios across the Industry 5.0 spectrum. Starting from the virtualization of the real world, INDUX-R will enable users to seamlessly create ad-hoc, realistic digital representations of their surroundings using commodity hardware, providing an immersive background for INDUX-R applications, by further researching Neural Radiance Fields (NeRF), 3D scanning and audio-reconstruction methodologies. This work will be enriched with an XR toolkit for (i) the synthesis of speech-driven, lifelike face animations utilising Transformers and Generative Adversarial Networks, and (ii) the generation of photo-realistic human avatars driven by 3D human pose estimation and local radiance fields for accurately replicating human motion, modelling deformation phenomena and reproducing natural texture. INDUX-R will research real-time, egocentric perception algorithms, integrated in XR wearables, to provide contextual analysis of the users' surroundings and enable new ways of XR interaction using visual, auditory and haptic cues. Egocentric perception will be combined with virtual elastic objects that the user can manipulate and deform in XR according to material properties, receiving multi-sensorial feedback in real time. By exploiting this closed feedback loop, INDUX-R will develop a dynamic and pervasive user interface environment that adapts to the user's profile, abilities and task at hand. This adaptation process will be controlled by Reinforcement Learning algorithms that adjust the presented XR content in an online, human-centric manner that improves accessibility. Through these interfaces, human-in-the-loop pipelines based on Active Learning will be implemented, in which user feedback is utilised to improve the quality of the services and applications offered.
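The Reinforcement-Learning-driven adaptation loop described above can be illustrated with a minimal contextual-bandit sketch. This is only a hedged illustration of online, human-centric UI adaptation; the variant names, context encoding and epsilon-greedy policy below are hypothetical and not part of INDUX-R.

```python
import numpy as np

# Hypothetical illustration of online UI adaptation: an epsilon-greedy
# contextual bandit chooses one of several XR presentation variants and
# learns from user feedback (reward).

VARIANTS = ["dense_hud", "minimal_hud", "audio_first", "haptic_first"]

class UIAdaptationPolicy:
    def __init__(self, n_contexts, n_variants, epsilon=0.1):
        self.q = np.zeros((n_contexts, n_variants))   # value estimates
        self.n = np.zeros((n_contexts, n_variants))   # visit counts
        self.epsilon = epsilon

    def select(self, context_id):
        # Explore with probability epsilon, otherwise exploit.
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.q.shape[1])
        return int(np.argmax(self.q[context_id]))

    def update(self, context_id, variant_id, reward):
        # Incremental mean update of the value estimate.
        self.n[context_id, variant_id] += 1
        lr = 1.0 / self.n[context_id, variant_id]
        self.q[context_id, variant_id] += lr * (reward - self.q[context_id, variant_id])

# Example: a context could encode user profile, abilities and task at hand.
policy = UIAdaptationPolicy(n_contexts=3, n_variants=len(VARIANTS))
context = 1                    # e.g. "novice user, assembly task" (hypothetical)
variant = policy.select(context)
reward = 0.8                   # e.g. implicit or explicit user feedback
policy.update(context, variant, reward)
print(VARIANTS[variant])
```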
Wiss-KKI: Wissenschaftskommunikation über und mit kommunikativer künstlicher Intelligenz: Emotionen, Engagement, Effekte (Science Communication about and with Communicative Artificial Intelligence: Emotions, Engagement, Effects)
BMBF (Förderrichtlinie Wissenschaftskommunikationsforschung, 7.9% Acceptance Rate in the Call)
Runtime: 01/2024-12/2026
Role: Principal Investigator, Co-author Proposal
Partners: University of Augsburg, TUM, TU Braunschweig
tba
Noise Embeddings with a Hearing-Aid-Tailored Deep Learning Noise Suppression Framework
Industry Cooperation with Sivantos GmbH
https://www.uni-augsburg.de/de/fakultaet/fai/informatik/prof/eihw/forschung/projekte/
COHYPERA: Computed hyperspectral perfusion assessment
Seed Funding UAU Project
Runtime: 24 months
Role: Principal Investigator, Co-author Proposal
Partners: University of Augsburg
Over recent years, imaging photoplethysmography (iPPG) has attracted immense interest. iPPG assesses cutaneous perfusion by exploiting subtle color variations in videos. Common procedures use RGB cameras and employ the green channel, or rely on a linear combination of RGB channels, to extract physiological information. iPPG can capture multiple parameters such as heart rate (HR), heart rate variability (HRV), oxygen saturation, blood pressure, venous pulsation and strength, as well as the spatial distribution of cutaneous perfusion. Its highly convenient usage and a wide range of possible applications, e.g. patient monitoring, using skin perfusion as an early risk score, and the assessment of lesions, make iPPG a diagnostic means with immense potential. Under real-world conditions, however, iPPG is prone to errors. Particularly for analyses beyond HR, the number of published works is limited, proposed algorithms are immature, basic mechanisms are not completely understood, and iPPG's potential is far from being exploited. We hypothesize that hyperspectral (HS) reconstruction by artificial intelligence (AI) methods can fundamentally improve iPPG and extend its applicability. HS reconstruction refers to the estimation of HS images from RGB images. The technique has recently gained much attention but is not common in iPPG. COHYPERA aims to prove the potential of HS reconstruction as a universal processing step for iPPG. The pursued approach takes advantage of the fact that HS reconstruction can incorporate knowledge and training data to yield a high-dimensional data representation, which enables various analyses.
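The conventional baseline mentioned above (green-channel extraction from a skin region, followed by frequency analysis for heart rate) can be sketched as follows. This is a hedged, generic illustration only; function and parameter names are hypothetical, and COHYPERA's hyperspectral-reconstruction step is not shown.

```python
import numpy as np

def estimate_heart_rate(frames, fps, roi):
    """Classical green-channel iPPG baseline (illustrative only).

    frames: array of shape (T, H, W, 3), RGB video
    fps:    video sampling rate in Hz
    roi:    (y0, y1, x0, x1) skin region of interest
    """
    y0, y1, x0, x1 = roi
    # Spatially average the green channel over the skin ROI per frame.
    signal = frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()

    # Keep only the physiologically plausible heart-rate band
    # (~0.7-3.0 Hz, i.e. 42-180 bpm) and pick the dominant frequency.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    hr_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * hr_hz   # beats per minute

# Example with synthetic stand-in data: 10 s of video at 30 fps.
rng = np.random.default_rng(0)
frames = rng.random((300, 64, 64, 3))
print(estimate_heart_rate(frames, fps=30, roi=(16, 48, 16, 48)))
```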
Silent Paralinguistics (#SCHU2508/15-1)
DFG (German Research Foundation) Project
Runtime: 01.09.2023 – 31.08.2026
Role: Principal Investigator, Co-author Proposal
Partners: TUM, University of Bremen
We propose to combine Silent Speech Interfaces with Computational Paralinguistics to form Silent Paralinguistics (SP). To reach the envisioned project goal of inferring paralinguistic information from silently produced speech for natural spoken communication, we will investigate three major questions: (1) How well can speaker states and traits be predicted from EMG signals of silently produced speech, using the direct and the indirect silent paralinguistics approach? (2) How can the paralinguistic predictions be integrated into the Silent Speech Interface to generate appropriate acoustic speech from EMG signals (EMG-to-speech)? (3) Does the resulting paralinguistically enriched acoustic speech signal improve the usability of spoken communication with regard to naturalness and user acceptance?
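Question (1), the direct approach, essentially amounts to training a classifier that maps features of the EMG signals of silently produced speech to speaker states or traits. A minimal, hedged sketch with synthetic stand-in data follows; the feature set, channel count and labels are hypothetical, not the project's actual setup.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def emg_features(emg):
    """Simple per-channel summary features of a multi-channel EMG recording.

    emg: array of shape (n_channels, n_samples)
    """
    return np.concatenate([
        emg.mean(axis=1),                           # mean amplitude
        emg.std(axis=1),                            # activity level
        np.abs(np.diff(emg, axis=1)).mean(axis=1),  # waveform-length proxy
    ])

# Synthetic stand-in data: 40 silently spoken utterances, 6 EMG channels,
# binary speaker-state label (e.g. low vs. high arousal) - purely illustrative.
rng = np.random.default_rng(1)
X = np.stack([emg_features(rng.standard_normal((6, 2000))) for _ in range(40)])
y = rng.integers(0, 2, size=40)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())
```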
HearTheSpecies: Using computer audition to understand the drivers of soundscape composition, and to predict parasitation rates based on vocalisations of bird species (#SCHU2508/14-1)
(“Einsatz von Computer-Audition zur Erforschung der Auswirkungen von Landnutzung auf Klanglandschaften, sowie der Parasitierung anhand von Vogelstimmen“)
DFG (German Research Foundation) Project, Schwerpunktprogramm „Biodiversitäts-Exploratorien“ (Priority Programme “Biodiversity Exploratories”)
Runtime: 01.03.2023 – 28.02.2026
Role: Principal Investigator, Co-author Proposal
Partners: University of Augsburg, TUM, University of Freiburg
The ongoing biodiversity crisis has endangered thousands of species around the world and its urgency is being increasingly acknowledged by several institutions – as signified, for example, by the upcoming UN Biodiversity Conference. Recently, biodiversity monitoring has also attracted the attention of the computer science community due to the potential of disciplines like machine learning (ML) to revolutionise biodiversity research by providing monitoring capabilities of unprecedented scale and detail. To that end, HearTheSpecies aims to exploit the potential of a heretofore underexplored data stream: audio. As land use is one of the main drivers of current biodiversity loss, understanding and monitoring the impact of land use on biodiversity is crucial to mitigate and halt the ongoing trend. This project aspires to bridge the gap between existing data and infrastructure in the Exploratories framework and state-of-the-art computer audition algorithms. The developed tools for coarse and fine scale sound source separation and species identification can be used to analyse the interaction among environmental variables, local and regional land-use, vegetation cover and the different soundscape components: biophony (biotic sounds), geophony (abiotic sounds) and anthropophony (human-related sounds).
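The coarse-scale decomposition named above, separating biophony, geophony and anthropophony, can be illustrated by a small classifier over log-mel spectrogram patches. The class names come from the project description; the network architecture and all parameters below are hypothetical placeholders rather than the project's actual models.

```python
import torch
import torch.nn as nn

CLASSES = ["biophony", "geophony", "anthropophony"]

class SoundscapeCNN(nn.Module):
    """Tiny CNN over log-mel spectrogram patches (illustrative only)."""
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global pooling -> (B, 32, 1, 1)
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (B, 1, n_mels, n_frames)
        h = self.features(x).flatten(1)
        return self.classifier(h)

# Example: one batch of 8 spectrogram patches with 64 mel bands, 128 frames.
model = SoundscapeCNN()
logits = model(torch.randn(8, 1, 64, 128))
pred = logits.argmax(dim=1)
print([CLASSES[i] for i in pred.tolist()])
```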
SHIFT: MetamorphoSis of cultural Heritage Into augmented hypermedia assets For enhanced accessibiliTy and inclusion (#101060660)
EU Horizon Europe Research & Innovation Action (RIA)
Runtime: 01.10.2022 – 30.09.2025
Role: Principal Investigator, Workpackage Leader, Co-Author Proposal
Partners: Software Imagination & Vision, Foundation for Research and Technology, Massive Dynamic, Audeering, University of Augsburg, TUM, Queen Mary University of London, Magyar Nemzeti Múzeum – Semmelweis Orvostörténeti Múzeum, The National Association of Public Librarians and Libraries in Romania, Staatliche Museen zu Berlin – Preußischer Kulturbesitz, The Balkan Museum Network, Initiative For Heritage Conservation, Eticas Research and Consulting, German Federation of the Blind and Partially Sighted
The SHIFT project is strategically conceived to deliver a set of loosely coupled technological tools that offer cultural heritage institutions the necessary impetus to stimulate growth and embrace the latest innovations in artificial intelligence, machine learning, multi-modal data processing, digital content transformation methodologies, semantic representation, linguistic analysis of historical records, and the use of haptic interfaces to effectively and efficiently communicate new experiences to all citizens (including people with disabilities).
Machine Learning für Kameradaten mit unvollständiger Annotation (Machine Learning for Camera Data with Incomplete Annotation)
Industry Cooperation with BMW AG
Runtime: 01.01.2022 – 31.12.2023
Role: Principal Investigator
Partners: University of Augsburg, BMW AG
The project aims at self-supervised and reinforcement learning for the analysis of camera data with incomplete annotation.
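One generic way to exploit camera data with incomplete annotation is confidence-based pseudo-labelling: a model trained on the labelled subset assigns provisional labels to unlabelled frames, and only confident predictions are added for retraining. The sketch below illustrates that idea under these assumptions and is not the project's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_round(model, X_labeled, y_labeled, X_unlabeled, threshold=0.95):
    """One round of confidence-based pseudo-labelling (illustrative only)."""
    model.fit(X_labeled, y_labeled)
    proba = model.predict_proba(X_unlabeled)
    confident = proba.max(axis=1) >= threshold         # keep confident frames only
    X_new = np.vstack([X_labeled, X_unlabeled[confident]])
    y_new = np.concatenate([y_labeled, proba[confident].argmax(axis=1)])
    return model.fit(X_new, y_new), int(confident.sum())

# Synthetic stand-in for image features extracted from camera frames.
rng = np.random.default_rng(2)
X_lab, y_lab = rng.standard_normal((100, 16)), rng.integers(0, 2, 100)
X_unl = rng.standard_normal((400, 16))
model, n_added = pseudo_label_round(LogisticRegression(max_iter=1000), X_lab, y_lab, X_unl)
print("pseudo-labelled frames added:", n_added)
```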
causAI: AI Interaktionsoptimierung bei Videoanrufen im Vertrieb (AI-Based Interaction Optimisation for Sales Video Calls) (#03EGSBY853)
BMWi (Federal Ministry for Economic Affairs and Energy) EXIST Business Start-up Grant
Runtime: tba
Role: Mentor
Partners: University of Augsburg
causAI uses artificial intelligence to analyse the speech, gestures and facial expressions in sales video calls in order to improve digital sales competence. The goal is to establish causAI as an innovative software product for supporting and training sales conversations.
AUDI0NOMOUS: Agentenbasierte, Interaktive, Tiefe 0-shot-learning-Netzwerke zur Optimierung von Ontologischem Klangverständnis in Maschinen
(Agent-based Unsupervised Deep Interactive 0-shot-learning Networks Optimising Machines’ Ontological Understanding of Sound) (#442218748)
DFG (German Research Foundation) Reinhart Koselleck-Projekt
Runtime: 01.01.2021 – 31.12.2025
Role: Principal Investigator, Co-author Proposal
Partners: University of Augsburg, TUM
Soundscapes are a component of our everyday acoustic environment; we are always surrounded by sounds, we react to them, and we create them. While computer audition, the understanding of audio by machines, has primarily been driven by the analysis of speech, the understanding of soundscapes has received comparatively little attention. AUDI0NOMOUS, a long-term project based on artificially intelligent systems, aims to achieve major breakthroughs in the analysis, categorisation, and understanding of real-life soundscapes. A novel approach, based around the development of four highly cooperative and interactive intelligent agents, is proposed herein to achieve this highly ambitious goal. Each agent will autonomously infer a deep and holistic comprehension of sound. A Curious Agent will collect unique data from web sources and social media; an Audio Decomposition Agent will decompose overlapping sounds; a Learning Agent will recognise an unlimited number of unlabelled sounds; and an Ontology Agent will translate the soundscapes into verbal ontologies. AUDI0NOMOUS will open up an entirely new dimension of comprehensive audio understanding; such knowledge will have a high and broad impact in disciplines of both the sciences and humanities, promoting advancements in health care, robotics, and smart devices and cities, amongst many others.
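The Learning Agent's ability to recognise an unlimited number of unlabelled sounds can be illustrated, in strongly simplified form, as zero-shot tagging: an audio embedding is matched against text embeddings of ontology labels in a shared space. The embeddings and labels below are hypothetical stand-ins; the project does not prescribe this particular implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def zero_shot_tag(audio_embedding, label_embeddings):
    """Assign the ontology label whose text embedding lies closest to the
    audio embedding - a strongly simplified zero-shot classifier."""
    scores = {label: cosine_similarity(audio_embedding, emb)
              for label, emb in label_embeddings.items()}
    return max(scores, key=scores.get), scores

# Hypothetical embeddings; in practice these would come from jointly trained
# audio and text encoders, which are not specified here.
rng = np.random.default_rng(3)
labels = ["church bell", "dog bark", "street traffic"]
label_embeddings = {l: rng.standard_normal(128) for l in labels}
audio_embedding = label_embeddings["dog bark"] + 0.1 * rng.standard_normal(128)

best, _ = zero_shot_tag(audio_embedding, label_embeddings)
print(best)   # -> "dog bark"
```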
EASIER: Intelligent Automatic Sign Language Translation (#101016982)
EU Horizon 2020 Research & Innovation Action (RIA)
Runtime: 01.01.2021 – 31.12.2023
Role: Principal Investigator
Partners: Martel GmbH, Athena Research & Innovation Center in Information Communication & Knowledge Technologies, Universität Hamburg, Radboud University, University of Surrey, University of Zurich, CNRS, DFKI, audEERING GmbH, nuromedia GmbH, Swiss TXT AG, European Union of the Deaf iVZW, SCOP Interpretis, University College London
EASIER aims to create a framework for barrier-free communication among deaf and hearing citizens across the EU by enabling users of European SLs to use their preferred language to interact with hearing individuals, via the incorporation of state-of-the-art NMT technology capable of dealing with a wide range of languages and communication scenarios. To this end, it exploits a robust data-driven SL (video) recognition engine and utilizes a signing avatar engine that not only produces signing that is easy to comprehend by the deaf community but also integrates information on affective expressions and coherent prosody. The envisaged ecosystem will incorporate a robust translation service surrounded by numerous tools and services which will support equal participation of deaf individuals in the whole range of everyday-life activities within an inclusive community, accelerate the incorporation of less-resourced SLs into SL technologies, and leverage the SL content creation industry. The deaf community is heavily involved in all project processes, and deaf researchers are among the staff members of all SL expert partners.
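The translation chain described above (SL video recognition, machine translation, and a signing avatar enriched with affective information) can be summarised as a type-level sketch. All interfaces and names are hypothetical illustrations, not EASIER's actual APIs.

```python
from dataclasses import dataclass
from typing import Protocol

# Type-level sketch of the translation chain: SL video -> recognition ->
# machine translation -> signing avatar with affective prosody.
# All names are hypothetical illustrations, not EASIER project APIs.

@dataclass
class Gloss:
    text: str          # symbolic representation of a recognised sign
    affect: str        # affective annotation, e.g. "neutral", "surprised"

class SLRecognizer(Protocol):
    def recognize(self, video_path: str) -> list[Gloss]: ...

class Translator(Protocol):
    def translate(self, glosses: list[Gloss], target_lang: str) -> str: ...

class SigningAvatar(Protocol):
    def render(self, glosses: list[Gloss]) -> bytes: ...

def deaf_to_hearing(video_path: str, recognizer: SLRecognizer,
                    translator: Translator, target_lang: str) -> str:
    """Recognise signed input and translate it into a spoken/written language."""
    return translator.translate(recognizer.recognize(video_path), target_lang)

def hearing_to_deaf(glosses: list[Gloss], avatar: SigningAvatar) -> bytes:
    """Render translated content as an avatar animation, keeping affect cues."""
    return avatar.render(glosses)
```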
MARVEL: Multimodal Extreme Scale Data Analytics for Smart Cities Environments (#957337)
EU Horizon 2020 Research & Innovation Action (RIA)
Runtime: 01.01.2021 – 31.12.2023
Role: Principal Investigator
Partners: Idryma Technologies, Infineon, Aarhus University, Atos Spain, Consiglio Nazionale delle Ricerche, Intrasoft, FBK, audEERING GmbH, Tampereen Korkeakoulusaatio, Privanova, Sphynx Technology Solutions, Comune di Trento, Univerzitet u Novom Sadu Fakultet Tehnickih Nauka, Information Technology for Market Leadership, Greenroads Limited, Zelus Ike, Instytut Chemii Bioorganicznej Polskiej Akademii Nauk
The “Smart City” paradigm aims to support new forms of monitoring and managing resources and to provide situational awareness for decision-making, fulfilling the objective of serving the citizen while ensuring that the needs of present and future generations are met with respect to economic, social and environmental aspects. The city can be considered a complex and dynamic system involving different interconnected spatial, social, economic, and physical processes that are subject to temporal changes and continually modified by human actions. Big Data, fog, and edge computing technologies have significant potential in various scenarios, considering each city's individual tactical strategy. However, one critical aspect is to encapsulate the complexity of a city and support accurate, cross-scale and timely predictions based on ubiquitous spatio-temporal data of high volume, high velocity and high variety.
To address this challenge, MARVEL delivers a disruptive Edge-to-Fog-to-Cloud (E2F2C) ubiquitous computing framework that enables multi-modal perception and intelligence for audio-visual scene recognition and event detection in a smart city environment. MARVEL aims to collect, analyse and mine multi-modal audio-visual data streams of a Smart City and to help decision makers improve the quality of life and services to the citizens without violating ethical and privacy limits, in an AI-responsible manner. This is achieved via: (i) fusing large-scale distributed multi-modal audio-visual data in real time; (ii) achieving fast time-to-insights; (iii) supporting automated decision making at all levels of the E2F2C stack; and (iv) delivering a personalized federated learning approach, in which joint multi-modal representations and models are co-designed and improved continuously through privacy-aware sharing of personalized fog and edge models of all interested parties.
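Point (iv), the personalized federated learning approach, can be illustrated by a minimal federated-averaging round: each edge or fog node updates a local model on its private data, and only model weights are aggregated. The sketch below is a generic, hedged illustration and not MARVEL's E2F2C implementation.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """A few steps of logistic-regression SGD on one edge/fog node's data."""
    w = weights.copy()
    for _ in range(epochs):
        logits = data @ w
        probs = 1.0 / (1.0 + np.exp(-logits))
        grad = data.T @ (probs - labels) / len(labels)
        w -= lr * grad
    return w

def federated_average(node_weights, node_sizes):
    """Aggregate node models weighted by local dataset size (FedAvg-style)."""
    total = sum(node_sizes)
    return sum(w * (n / total) for w, n in zip(node_weights, node_sizes))

# Three nodes with private data; only model weights are shared, not raw data.
rng = np.random.default_rng(4)
global_w = np.zeros(8)
nodes = [(rng.standard_normal((50, 8)), rng.integers(0, 2, 50)) for _ in range(3)]
for _ in range(10):                                    # communication rounds
    local_models = [local_update(global_w, X, y) for X, y in nodes]
    global_w = federated_average(local_models, [len(y) for _, y in nodes])
print(global_w.round(2))
```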
ForDigitHealth: Bayerischer Forschungsverbund zum gesunden Umgang mit digitalen Technologien und Medien (Bavarian Research Association on Healthy Use of Digital Technologies and Media)
BayFOR (Bayerisches Staatsministerium für Wissenschaft und Kunst) Project
Runtime: 01.06.2019 – 31.05.2023 (48 months)
Role: Principal Investigator, Co-author Proposal
Partners: University of Augsburg, Otto-Friedrichs-University Bamberg, FAU Erlangen-Nuremberg, LMU Munich, JMU Würzburg
Digitalisation is leading to fundamental changes in our society and in our individual lives. This entails both opportunities and risks for our health. In part, our use of digital technologies and media leads to negative stress (distress), burnout, depression and other health impairments. At the same time, stress can also have a positive, stimulating effect (eustress), which should be fostered. Technology design has advanced to the point where digital technologies and media, thanks to increasing artificial intelligence, adaptivity and interactivity, can preserve and promote the health of their human users. The aim of the ForDigitHealth research association is to scientifically investigate the health effects of the growing presence and intensified use of digital technologies and media in all their diversity, specifically with regard to the emergence of digital distress and eustress and their consequences, and to develop and evaluate prevention and intervention options. In doing so, the research association aims to contribute to an appropriate, conscious and health-promoting individual and collective use of digital technologies and media.
Improving the specificity of affective computing via multimodal analysis
ARC Discovery Project (22% Acceptance Rate in 2nd Round of Call)
Runtime: 01.01.2020 – 31.12.2023
Role: Principal Investigator, Co-author Proposal
Partners: University of Canberra, University of Pittsburgh, CMU, Imperial College London
Computational models and approaches for sensing and understanding a person's emotion or mood are a core component of affective computing. While much research over the last two decades has addressed the question of sensitivity, the correct recognition of affect classes, the equally important issue of specificity, the correct recognition of true negatives, has been neglected. This highly interdisciplinary project aims to address this issue and to solve the fundamental affective computing problem of developing robust, non-invasive, multimodal approaches for accurately sensing a person's affective state. Of course, neither sensitivity nor specificity should be seen in isolation. The underlying issue is one of conceptualising affective states as areas within a continuous space, of determining affect intensity on a continuous scale, and of being able to analyse very subtle expressions of affect.
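For reference, sensitivity and specificity in the binary case, as used above: sensitivity is the rate of correctly recognised positives, specificity the rate of correctly recognised true negatives. A short worked example with hypothetical confusion-matrix counts:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical affect detector evaluated on 200 samples: it finds most true
# "stressed" cases (high sensitivity) but raises many false alarms on
# neutral samples (low specificity).
sens, spec = sensitivity_specificity(tp=45, fn=5, tn=90, fp=60)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.90, 0.60
```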