Current Projects

SHIFT: MetamorphoSis of cultural Heritage Into augmented hypermedia assets For enhanced accessibiliTy and inclusion (#101060660)

EU Horizon Europe Research & Innovation Action (RIA)
Runtime: 01.10.2022 – 30.09.2025
Role: Principal Investigator, Work Package Leader, Co-author Proposal
Partners:  Software Imagination & Vision, Foundation for Research and Technology, Massive Dynamic, Audeering, University of Augsburg, Queen Mary University of London, Magyar Nemzeti Múzeum – Semmelweis Orvostörténeti Múzeum, The National Association of Public Librarians and Libraries in Romania, Staatliche Museen zu Berlin – Preußischer Kulturbesitz, The Balkan Museum Network, Initiative For Heritage Conservation, Eticas Research and Consulting, German Federation of the Blind and Partially Sighted

The SHIFT project is strategically conceived to deliver a set of loosely coupled technological tools that offer cultural heritage institutions the necessary impetus to stimulate growth and to embrace the latest innovations in artificial intelligence, machine learning, multi-modal data processing, digital content transformation methodologies, semantic representation, linguistic analysis of historical records, and haptic interfaces, in order to communicate new experiences effectively and efficiently to all citizens (including people with disabilities).


Machine Learning für Kameradaten mit unvollständiger Annotation
(“Machine Learning for Camera Data with Incomplete Annotation”)

Industry Cooperation with BMW AG
Runtime: 01.01.2022 – 31.12.2023
Role: Principal Investigator
Partners: University of Augsburg, BMW AG

The project targets self-supervised and reinforcement learning for the analysis of camera data with incomplete annotation.
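
As a rough illustration of this learning setting, the sketch below pre-trains an image encoder with a rotation-prediction pretext task on unlabelled frames and only afterwards attaches a task head for the few annotated classes; the pretext task, the tiny network, and all names are illustrative assumptions, not the project's actual pipeline.

```python
# Minimal sketch (assumed, not the project's method): self-supervised
# pre-training on unlabelled camera frames via rotation prediction,
# followed by fine-tuning a task head on the small annotated subset.
import torch
import torch.nn as nn

encoder = nn.Sequential(                       # tiny CNN backbone (illustrative)
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
pretext_head = nn.Linear(32, 4)                # predict one of 4 rotations
optim = torch.optim.Adam(
    list(encoder.parameters()) + list(pretext_head.parameters()), lr=1e-3)

def pretext_step(frames):
    """One self-supervised step on a batch of unlabelled, square frames."""
    k = torch.randint(0, 4, (frames.size(0),))                     # random rotation labels
    rotated = torch.stack(
        [torch.rot90(f, int(r), dims=(1, 2)) for f, r in zip(frames, k)])
    loss = nn.functional.cross_entropy(pretext_head(encoder(rotated)), k)
    optim.zero_grad(); loss.backward(); optim.step()
    return loss.item()

pretext_step(torch.randn(8, 3, 64, 64))        # toy batch just to show the call

# After pre-training, only a small task head needs the annotated frames:
task_head = nn.Linear(32, 10)                  # e.g. 10 annotated object classes
```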


causAI: AI Interaktionsoptimierung bei Videoanrufen im Vertrieb (#03EGSBY853)
(“AI-based Interaction Optimisation for Sales Video Calls”)

BMWi (Federal Ministry for Economic Affairs and Energy) EXIST Business Start-up Grant
Runtime: tba
Role: Mentor
Partners: University of Augsburg

causAI uses artificial intelligence to analyse the speech, gestures, and facial expressions in sales video calls in order to improve digital sales competence. The goal is to establish causAI as an innovative software product for supporting and training sales conversations.


Improving asthma care through personalised risk assessment and support from a conversational agent (#EP/W002477/1)

EPSRC UK Research and Innovation fEC Grants
Runtime: 01.09.2021 – 31.08.2023
Role: Principal Investigator, Co-author Proposal
Partners: Imperial College London

Over 5.4 million people in the UK have asthma, and despite £1 billion a year in NHS spending on asthma treatment, the national mortality rate is the highest in Europe. One of the reasons for this statistic is that many people with asthma dramatically underestimate their risk. This leads to neglect of early care, poor control, and eventually hospitalisation. Improving accurate risk assessment, and reducing risk through relevant behaviour change among people with asthma, could therefore save lives and dramatically reduce health care costs. We aim to address this early-care gap by investigating a new type of low-cost, scalable, personalised risk assessment, combined with follow-up automated support for risk reduction. The technology will leverage artificial intelligence to calculate a personalised asthma risk score based on voice features and self-reported data. It will then provide personalised advice on actions that can be taken to lower risk, followed by customised conversational guidance to support the process of healthy change. We envision our work will ultimately lead to a safe and engaging system in which patients can see their current risk of an asthma attack after answering a series of questions, akin to clinical history taking, and recording their voice. They then receive ongoing customised support from an automated coach on how to reduce that risk. Any progress they make will visibly lower their risk (presented, for example, as “strengthening their shield”), making their state of asthma control more tangible and motivating. The technology will be developed collaboratively, with direct involvement from people with asthma and clinicians through co-design methods and regular feedback, to ensure that risk assessment, feedback, and guidance are clinically sound and delivered in a way that is autonomy-supportive, clear, useful, and engaging to patients.
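
The following toy sketch illustrates the general shape of such a risk score: a simple model fuses voice features with self-reported answers and returns a 0-100 value. The feature names, synthetic data, and model are placeholders for illustration only, not the clinically validated system described above.

```python
# Illustrative only (assumed features and weights, no clinical validity):
# a logistic-regression risk score combining voice and self-report inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# toy training matrix: [jitter, shimmer, speech_rate, night_symptoms, reliever_use]
X = rng.normal(size=(200, 5))
y = (X @ np.array([0.8, 0.6, -0.3, 1.2, 1.0]) + rng.normal(scale=0.5, size=200)) > 0

model = LogisticRegression().fit(X, y)          # trained on synthetic data only

def risk_score(voice_features, self_report):
    """Return a 0-100 'attack risk' score for one user."""
    x = np.concatenate([voice_features, self_report]).reshape(1, -1)
    return float(model.predict_proba(x)[0, 1] * 100)

print(risk_score(np.array([0.2, 0.1, -0.4]), np.array([1.0, 0.5])))
```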


Leader Humor: A Multimodal Approach to Humor Recognition and an Analysis of the Influence of Leader Humor on Team Performance in Major European Soccer Leagues (#SCHU2508/12-1)

(“Ein multimodaler Ansatz zur Erkennung und Messung von Humor und eine Analyse des Einflusses des Humors von Führungskräften auf die Teamleistung in europäischen Profifußball-Ligen”)
DFG (German Research Foundation) Project
Runtime: 36 Months
Role: Principal Investigator, Co-author Proposal
Partners: University of Passau, University of Augsburg

In this project, scholars active in the fields of management and computerized psychometry take the unique opportunity to join their respective perspectives and complementary capabilities to address the overarching question: “How, why, and under which circumstances does leader humor affect team processes and team performance, and how can (leader) humor be measured on a large scale by applying automatic multimodal recognition approaches?” Trait humor, which is one of the most fundamental and complex phenomena in social psychology, has garnered increasing attention in management research. However, scholarly understanding of humor in organizations is still substantially limited, largely because research in this domain has primarily been qualitative, survey-based, and small-scale. Notably, recent advances in computerized psychometry promise to provide unique tools to deliver unobtrusive, multi-faceted, ad hoc measures of humor that are free from the substantial limitations associated with traditional humor measures. Computerized psychometry scholars have long noted that a computerized understanding of humor is essential for the humanization of artificial intelligence. Yet, they have struggled to automatically identify, categorize, and reproduce humor. In particular, computerized approaches have suffered not only from a lack of theoretical foundations but also from a lack of complex, annotated, real-life data sets and multimodal measures that consider the multi-faceted, contextual nature of humor. We combine our areas of expertise to address these research gaps and complementary needs in our fields. Specifically, we substantially advance computerized measures of humor and provide a unique view into the contextualized implications of leader humor, drawing on the empirical context of professional soccer. Despite initial attempts to join computerized psychometry and management research, these two fields have not yet been successfully combined to address our overall research question. We aspire to fill this void as equal partners, united by our keen interest in humor, computerized psychometry, leader rhetoric, social evaluations, and team performance.
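
To show what an automatic multimodal measure can look like in its simplest form, the sketch below fuses per-modality humor probabilities from text, audio, and video by weighted late fusion; the weights and classifier outputs are assumed values, not the project's measurement instrument.

```python
# A minimal late-fusion sketch (illustrative assumption, not the project's method):
# per-modality humor scores are combined into one estimate with modality weights.
import numpy as np

def fuse_humor_scores(text_p, audio_p, video_p, weights=(0.4, 0.3, 0.3)):
    """Weighted late fusion of per-modality humor probabilities in [0, 1]."""
    scores = np.array([text_p, audio_p, video_p])
    return float(np.dot(np.array(weights), scores))

# e.g. one press-conference segment scored by three hypothetical classifiers
print(fuse_humor_scores(text_p=0.7, audio_p=0.4, video_p=0.6))
```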


AUDI0NOMOUS: Agentenbasierte, Interaktive, Tiefe 0-shot-learning-Netzwerke zur Optimierung von Ontologischem Klangverständnis in Maschinen
(Agent-based Unsupervised Deep Interactive 0-shot-learning Networks Optimising Machines’ Ontological Understanding of Sound) (#442218748)
DFG (German Research Foundation) Reinhart Koselleck-Projekt
Runtime: 01.01.2021 – 31.12.2025
Role: Principal Investigator, Co-author Proposal
Partners: University of Augsburg

Soundscapes are a component of our everyday acoustic environment; we are always surrounded by sounds, we react to them, and we create them. While computer audition, the understanding of audio by machines, has primarily been driven by the analysis of speech, the understanding of soundscapes has received comparatively little attention. AUDI0NOMOUS, a long-term project based on artificially intelligent systems, aims to achieve major breakthroughs in the analysis, categorisation, and understanding of real-life soundscapes. A novel approach, based on the development of four highly cooperative and interactive intelligent agents, is proposed to achieve this highly ambitious goal. Each agent will autonomously infer a deep and holistic comprehension of sound. A Curious Agent will collect unique data from web sources and social media; an Audio Decomposition Agent will decompose overlapping sounds; a Learning Agent will recognise an unlimited number of unlabelled sounds; and an Ontology Agent will translate the soundscapes into verbal ontologies. AUDI0NOMOUS will open up an entirely new dimension of comprehensive audio understanding; such knowledge will have a high and broad impact in disciplines of both the sciences and humanities, promoting advancements in health care, robotics, and smart devices and cities, amongst many others.
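
The intended data flow between the four agents can be pictured with the following placeholder stubs; none of the class or method names originate from AUDI0NOMOUS itself.

```python
# Illustrative sketch only: the four cooperating agents reduced to stubs
# so that the intended pipeline (collect -> decompose -> recognise -> describe)
# is visible. All names and return values are assumptions.
class CuriousAgent:
    def collect(self):
        return ["street_recording.wav"]              # would crawl web/social media

class AudioDecompositionAgent:
    def decompose(self, clip):
        return ["traffic", "voices"]                 # would separate overlapping sources

class LearningAgent:
    def recognise(self, source):
        return {"label": source, "confidence": 0.5}  # zero-shot recognition stub

class OntologyAgent:
    def describe(self, events):
        return "A busy street with " + " and ".join(e["label"] for e in events)

def soundscape_pipeline():
    for clip in CuriousAgent().collect():
        sources = AudioDecompositionAgent().decompose(clip)
        events = [LearningAgent().recognise(s) for s in sources]
        print(OntologyAgent().describe(events))

soundscape_pipeline()
```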


EASIER: Intelligent Automatic Sign Language Translation (#101016982)
EU Horizon 2020 Research & Innovation Action (RIA)

Runtime: 01.01.2021 – 31.12.2023
Role: Principal Investigator
Partners: Martel GmbH, Athena Research & Innovation Center in Information Communication & Knowledge Technologies, Universität Hamburg, Radboud University, University of Surrey, University of Zurich, CNRS, DFKI, audEERING GmbH, nuromedia GmbH, Swiss TXT AG, European Union of the Deaf iVZW, SCOP Interpretis, University College London

EASIER aims to create a framework for barrier-free communication among deaf and hearing citizens across the EU by enabling users of European sign languages (SLs) to use their preferred language to interact with hearing individuals, via the incorporation of state-of-the-art neural machine translation (NMT) technology capable of dealing with a wide range of languages and communication scenarios. To this end, it exploits a robust data-driven SL (video) recognition engine and a signing avatar engine that not only produces signing that is easy for the deaf community to comprehend, but also integrates information on affective expressions and coherent prosody. The envisaged ecosystem will incorporate a robust translation service surrounded by numerous tools and services that support the equal participation of deaf individuals in the whole range of everyday-life activities within an inclusive community, accelerate the incorporation of less-resourced SLs into SL technologies, and leverage the SL content creation industry. The deaf community is heavily involved in all project processes, and deaf researchers are among the staff members of all SL expert partners.
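
A purely illustrative stub of the envisaged chain, from SL video recognition through translation to an affect-aware signing avatar, is sketched below; the function names and outputs are assumptions, not EASIER components.

```python
# Placeholder pipeline stubs (assumed for illustration; not EASIER's APIs):
# sign language video -> glosses + affect cues -> NMT -> affect-aware avatar.
def recognise_signs(video_path):
    """Data-driven SL recognition: video -> gloss sequence plus affect cues (stub)."""
    return {"glosses": ["HELLO", "HELP", "WHERE"], "affect": "urgent"}

def translate_glosses(glosses, target_language="en"):
    """NMT step: gloss sequence -> spoken-language text (stub)."""
    return "Hello, where can I get help?"

def render_avatar(glosses, affect):
    """Signing avatar with coherent prosody and affective expression (stub)."""
    return f"[avatar signs: {glosses!r} | affect: {affect}]"

recognition = recognise_signs("clip.mp4")
print(translate_glosses(recognition["glosses"]))
print(render_avatar(recognition["glosses"], recognition["affect"]))
```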


MARVEL: Multimodal Extreme Scale Data Analytics for Smart Cities Environments (#957337)
EU Horizon 2020 Research & Innovation Action (RIA)

Runtime: 01.01.2021 – 31.12.2023
Role: Principal Investigator
Partners: Idryma Technologies, Infineon, Aarhus University, Atos Spain, Consiglio Nazionale delle Ricerche, Intrasoft, FBK, audEERING GmbH, Tampereen Korkeakoulusaatio, Privanova, Sphynx Technology Solutions, Comune di Trento, Univerzitet u Novom Sadu Fakultet Tehnickih Nauka, Information Technology for Market Leadership, Greenroads Limited, Zelus Ike, Instytut Chemii Bioorganicznej Polskiej Akademii Nauk

The “Smart City” paradigm aims to support new forms of monitoring and managing resources and to provide situational awareness for decision-making that serves the citizen, while ensuring that the needs of present and future generations are met with respect to economic, social, and environmental aspects. The city can be considered a complex and dynamic system involving different interconnected spatial, social, economic, and physical processes, subject to temporal changes and continually modified by human actions. Big Data, fog, and edge computing technologies have significant potential in various scenarios, taking each city's individual tactical strategy into account. One critical aspect, however, is to encapsulate the complexity of a city and to support accurate, cross-scale, and in-time predictions based on ubiquitous spatio-temporal data of high volume, high velocity, and high variety.
To address this challenge, MARVEL delivers a disruptive Edge-to-Fog-to-Cloud ubiquitous computing framework that enables multi-modal perception and intelligence for audio-visual scene recognition and event detection in a smart city environment. MARVEL aims to collect, analyse, and mine multi-modal audio-visual data streams of a Smart City and to help decision makers improve the quality of life and of services to the citizens, without violating ethical and privacy limits, in an AI-responsible manner. This is achieved via: (i) fusing large-scale distributed multi-modal audio-visual data in real time; (ii) achieving fast time-to-insights; (iii) supporting automated decision making at all levels of the E2F2C stack; and (iv) delivering a personalized federated learning approach, where joint multi-modal representations and models are co-designed and continuously improved through privacy-aware sharing of personalized fog and edge models of all interested parties.
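
As a minimal illustration of point (iv), the sketch below runs FedAvg-style rounds in which edge nodes update a linear model on their local data and only the model weights, never the raw audio-visual streams, are aggregated; it is a generic federated-averaging toy, not MARVEL's personalized federated learning framework.

```python
# Generic federated-averaging sketch (an assumption for illustration):
# each node trains locally, the server averages the resulting weights.
import numpy as np

def local_update(weights, features, labels, lr=0.1):
    """One gradient step of a linear model on a node's private data."""
    residual = features @ weights - labels
    grad = features.T @ residual / len(labels)
    return weights - lr * grad

def federated_round(global_weights, nodes):
    """Aggregate locally updated weights (FedAvg-style mean)."""
    local_weights = [local_update(global_weights.copy(), X, y) for X, y in nodes]
    return np.mean(local_weights, axis=0)

rng = np.random.default_rng(1)
nodes = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]  # 3 toy edge nodes
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, nodes)
print(w)
```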


HUAWEI Joint Lab: Human-centered Empathetic Interaction
HUAWEI Joint Lab
Runtime: 01.01.2020 – 31.12.2022
Role: Lab Leader
Partners: HUAWEI, University of Augsburg

The HUAWEI–University of Augsburg Joint Lab brings together affective computing and human-centered intelligence for human-centered empathetic interaction.


KIrun: Einsatz Künstlicher Intelligenz in der Laufsportanalytik mit Audioanalyse/-auswertung zur Motivation, Leistungssteigerung und Verletzungsprävention (FKZ: 16KN069402)
(“Use of Artificial Intelligence in Running Analytics with Audio Analysis/Evaluation for Motivation, Performance Enhancement, and Injury Prevention”)
BMWi Zentrales Innovationsprogramm Mittelstand (ZIM) Projekt
Runtime: 01.12.2019 – 31.08.2022
Role: Principal Investigator, Co-author Proposal
Partners: Universitätsklinikum Tübingen, HB Technologies AG (HBT), University of Augsburg

The KIRun cooperation project pursues the development of a measurement system and a self-learning algorithm that autonomously determines well-being and exertion from auditive, biomechanical, and physiological measurement data. The innovative core lies in deriving well-being and exertion from objective measurement data: in this system, audio signals (e.g., breathing sounds) are not used for voice control but are captured continuously in order to draw conclusions about well-being autonomously. An autonomous measurement method that objectifies well-being and exertion, captures them time-synchronously while running, and integrates them into training control does not yet exist. This constitutes a unique selling point of the technology and a considerable improvement over the state of the art in running analytics. An app is intended to nudge the runner specifically towards well-being, so that the runner's motivation for training can be maximised. The target variable of the training is maximal well-being rather than, as is common today, maximal speed or the largest training volume. Many beginners and casual runners lose motivation early or drop out injured because of poorly designed training. KIRun instead puts a positive training experience for the runner at the centre. Increasing well-being with the help of the KIRun technology thus becomes the effective incentive for athletes to exercise regularly.
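
A toy sketch of the general idea, deriving a crude exertion indicator from breathing-sound energy, an estimated breath rate, and a heart-rate reading, is given below; the features and weights are illustrative assumptions, not the KIRun algorithm.

```python
# Illustrative only (assumed feature set and weights, not the KIRun method):
# a rough exertion indicator from breathing audio plus a heart-rate reading.
import numpy as np

def breath_features(audio, sr=16000):
    """Short-time energy envelope and a rough breaths-per-minute estimate."""
    frame = sr // 10
    energy = np.array([np.mean(audio[i:i + frame] ** 2)
                       for i in range(0, len(audio) - frame, frame)])
    peaks = np.sum((energy[1:-1] > energy[:-2]) & (energy[1:-1] > energy[2:]))
    bpm = peaks / (len(audio) / sr) * 60
    return float(energy.mean()), float(bpm)

def exertion_score(audio, heart_rate):
    """Combine breath rate, heart rate, and breathing energy into a 0..1 score."""
    energy, bpm = breath_features(audio)
    return (0.4 * min(bpm / 60, 1.0)
            + 0.4 * min(heart_rate / 180, 1.0)
            + 0.2 * min(energy * 1e3, 1.0))

audio = np.random.default_rng(2).normal(scale=0.01, size=16000 * 5)  # 5 s of toy audio
print(exertion_score(audio, heart_rate=150))
```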


EMBOA: Affective loop in Socially Assistive Robotics as an intervention tool for children with autism
ERASMUS+ project

Runtime: 01.09.2019 – 31.08.2022
Role: Principal Investigator, Co-author Proposal
Partners: Politechnika Gdanska, University of Hertfordshire, Istanbul Teknik Universitesi, Yeditepe University Vakif, Macedonian association for applied psychology, University of Augsburg

The EMBOA project (Affective loop in Socially Assistive Robotics as an intervention tool for children with autism) aims at the development of guidelines for, and the practical evaluation of, applying emotion recognition technologies in robot-supported interventions for children with autism. Children with autism spectrum disorder (ASD) face multiple deficits, and limited social and emotional skills are among those that affect their ability to engage in interaction and communication. Limited communication occurs in human-human interaction and affects relations with family members, peers, and therapists. There are promising results in the use of robots for supporting the social and emotional development of children with autism. We do not know why children with autism are eager to interact with human-like robots but not with humans. Regardless of the reason, social robots have proven to be a way to get through the social obstacles of a child and involve him or her in the interaction. Once the interaction happens, we have a unique opportunity to engage a child in gradually building and practicing social and emotional skills. In the project, we combine social robots that are already used in therapy for children with autism with algorithms for automatic emotion recognition. The EMBOA project goal is to confirm the feasibility of this combination, and in particular, we aim to identify the best practices and obstacles in using the two technologies together. What we hope to obtain is a novel approach for creating an affective loop in child-robot interaction that would enhance interventions aimed at building emotional intelligence in children with autism, as sketched below. The lessons learned, summarized in the form of guidelines, might be used in higher education in robotics, computer science, and special pedagogy in all involved countries. The results will be disseminated in the form of training events, multiplier events, and to the general public through scientific papers and published reports. The project consortium is multidisciplinary and combines partners with competence in autism interventions, robotics, and automatic emotion recognition from Poland, the UK, Germany, North Macedonia, and Turkey. The methodological approach includes systematic literature reviews and meta-analyses, data analysis based on statistical and machine learning approaches, as well as observational studies. We have planned a double loop of observational studies. The first round is to analyze the application of emotion recognition methods in robot-based interaction in autism, and especially to compare diverse channels for observing emotion symptoms. The lessons learned will be formulated in the form of guidelines. The guidelines will be evaluated with the AGREE (Appraisal of Guidelines, Research, and Evaluation) instrument and confirmed in the second round of observational studies. The objectives of our project match the Social Inclusion horizontal priority with regard to supporting actions for improving the learning performance of disadvantaged learners (testing a novel approach for improving the learning performance of children with autism).
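
The affective loop itself can be illustrated with a minimal stub in which an emotion estimate fused from several observation channels selects the robot's next action; the labels and rules below are toy assumptions, not EMBOA guidelines.

```python
# Minimal affective-loop sketch (illustrative only): an estimated emotion from
# multiple observation channels drives the robot's next action, closing the loop.
import random

def estimate_emotion(face, voice, physiology):
    """Toy fusion of observation channels into one emotion label (majority vote)."""
    votes = [face, voice, physiology]
    return max(set(votes), key=votes.count)

def choose_robot_action(emotion):
    """Adapt the robot's behaviour to the child's estimated state."""
    return {"distressed": "pause and play calming sounds",
            "neutral": "invite the child to imitate a gesture",
            "happy": "praise and continue the exercise"}.get(emotion, "wait")

for _ in range(3):  # three iterations of the loop with toy observations
    channels = [random.choice(["happy", "neutral", "distressed"]) for _ in range(3)]
    emotion = estimate_emotion(*channels)
    print(emotion, "->", choose_robot_action(emotion))
```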


ForDigitHealth: Bayerischer Forschungsverbund zum gesunden Umgang mit digitalen Technologien und Medien
(“Bavarian Research Network on the Healthy Use of Digital Technologies and Media”)
BayFOR (Bayerisches Staatsministerium für Wissenschaft und Kunst) Project

Runtime: 48 months (2019 – 31.05.2023)
Role: Principal Investigator, Co-author Proposal
Partners: University of Augsburg, Otto-Friedrichs-University Bamberg, FAU Erlangen-Nuremberg, LMU Munich, JMU Würzburg

Digitalisation is fundamentally changing our society and our individual lives. This brings both opportunities and risks for our health. In part, our use of digital technologies and media leads to negative stress (distress), burnout, depression, and other health impairments. At the same time, stress can also have a positive, stimulating effect (eustress), which should be fostered. Technology design has advanced to the point where digital technologies and media, thanks to increasing artificial intelligence, adaptivity, and interactivity, can preserve and promote the health of their human users. The goal of the ForDigitHealth research network is to gain a thorough scientific understanding of the health effects of the growing presence and intensified use of digital technologies and media, specifically with regard to the emergence of digital distress and eustress and their consequences, and to develop and evaluate prevention and intervention options. In this way, the research network aims to contribute to an appropriate, conscious, and health-promoting individual and collective use of digital technologies and media.


ParaStiChaD: Paralinguistic Speech Characteristics in Major Depressive Disorder (#SCHU2508/8-1)
(“Paralinguistische Stimmmerkmale in Major Depression”)
DFG (German Research Foundation) Project
Runtime: 01.01.2020 – 31.12.2022
Role: Principal Investigator, Co-author Proposal
Partners: FAU Erlangen-Nuremberg, University of Augsburg, Rheinische Fachhochschule Köln

More needs to be done to improve the validity of current methods to detect depression, to improve the validity of ways to predict its future course, and to enhance the efficacy and availability of evidence-based treatments for depression. The work proposed in Paralinguistic Speech Characteristics in Major Depressive Disorder (ParaSpeChaD) aims to address these needs by clarifying the extent to which paralinguistic speech characteristics (PSCs; i.e., the vocal phenomena that occur alongside the linguistic information in speech) can be used to detect depression and predict its future course, and how recent progress in mobile sensor technology can be used to improve the detection, prediction, and potentially even the treatment of depression.
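
As a small illustration of what paralinguistic speech characteristics can be, the sketch below computes two cues frequently discussed for depressed speech, pause ratio and loudness variability, from a raw waveform; the features are generic examples, not the ParaSpeChaD feature set.

```python
# Illustrative paralinguistic cues (assumed, generic examples): share of
# near-silent frames (pause ratio) and loudness variability over a waveform.
import numpy as np

def paralinguistic_features(audio, sr=16000, frame_s=0.025):
    frame = int(sr * frame_s)
    frames = [audio[i:i + frame] for i in range(0, len(audio) - frame, frame)]
    energy = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])   # RMS per frame
    pause_ratio = float(np.mean(energy < 0.1 * energy.max()))       # near-silent share
    loudness_variability = float(np.std(energy))
    return {"pause_ratio": pause_ratio, "loudness_variability": loudness_variability}

audio = np.random.default_rng(3).normal(scale=0.05, size=16000 * 3)  # 3 s of toy audio
print(paralinguistic_features(audio))
```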


Improving the specificity of affective computing via multimodal analysis
ARC Discovery Project (22% Acceptance Rate in 2nd Round of Call)
Runtime: 01.01.2020 – 31.12.2023
Role: Principal Investigator, Co-author Proposal
Partners: University of Canberra, University of Pittsburgh, CMU, Imperial College London

Computational models and approaches that can sense and understand a person’s emotion or mood are a core component of affective computing. While much research over the last two decades has tried to address the question of sensitivity – the correct recognition of affect classes – the equally important issue of specificity – the correct recognition of true negatives – has been neglected. This highly interdisciplinary project aims to address this issue and to solve the fundamental affective computing problem of developing robust, non-invasive, multimodal approaches for accurately sensing a person’s affective state. Of course, neither sensitivity nor specificity should be seen in isolation. The underlying issue is one of conceptualising affective states as areas within a continuous space, of determining affect intensity on a continuous scale, and of being able to analyse very subtle expressions of affect.
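
The sensitivity/specificity distinction can be made concrete with a short worked example: a detector that labels nearly everything as “affect present” reaches high sensitivity while its specificity collapses. The counts below are made up purely for illustration.

```python
# Worked toy example of sensitivity vs. specificity (counts are invented).
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # share of true affect instances that are caught
    specificity = tn / (tn + fp)   # share of true negatives that stay negative
    return sensitivity, specificity

# An over-eager detector: almost everything is flagged as "affect present".
print(sensitivity_specificity(tp=95, fn=5, tn=20, fp=80))   # -> (0.95, 0.20)
```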

