{"id":192,"date":"2015-12-07T01:32:11","date_gmt":"2015-12-07T01:32:11","guid":{"rendered":"http:\/\/www.schuller.it\/bws\/?page_id=192"},"modified":"2026-04-08T22:08:44","modified_gmt":"2026-04-08T22:08:44","slug":"current-projects","status":"publish","type":"page","link":"http:\/\/www.schuller.it\/bws\/?page_id=192","title":{"rendered":"Current Projects"},"content":{"rendered":"<div class=\"page\" title=\"Page 1\">\n<div class=\"section\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div 
class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 3\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p><strong>UNIty &#8211; Uniting Networks, Interactions and Technology for Youth Mental Health<br \/>\n<\/strong><strong>DZPG \/ DLR: VISIONS26<\/strong><\/p>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p><em>Runtime<\/em>: 01.07.2026 &#8211; 30.06.2029<br \/>\n<em>Role<\/em>: Principal Investigator, Co-author Proposal<br \/>\n<em>Partners<\/em>: LMU, Eberhard Karls Universit\u00e4t T\u00fcbingen, Universit\u00e4tsklinikum T\u00fcbingen, <strong>TUM Universit\u00e4tsklinikum<\/strong>, Philipps-Universit\u00e4t Marburg, Friedrich-Schiller-Universit\u00e4t Jena, Freie Universit\u00e4t Berlin<\/p>\n<p>Mental health problems among young adults are increasing sharply, and the start of university studies in particular is considered a vulnerable phase. Despite the high prevalence, access to early support remains limited. Given overburdened care systems, there is an urgent need for innovative, low-threshold prevention strategies. UNIty aims to detect mental distress in students early and to counter it preventively. To this end, two central components are developed and evaluated: (1) a monitoring and feedback system for the early detection of mental distress and (2) a needs-specific intervention programme. 
Methods: Via a monitoring app, digital phenotyping (EMA, voice recordings, movement data) is used to capture individual distress profiles, which are analysed by machine learning and fed back to participants. Students are recruited throughout Germany. In cases of clinically relevant distress, a dynamic RCT assigns participants to one of two intervention conditions \u2013 an LLM-based chatbot intervention or a buddy programme fostering social support \u2013 or to the monitoring control group. Networking five DZPG sites creates a scalable model for research, prevention, and care. UNIty thus contributes substantially to the development of evidence-based digital tools and to the establishment of preventive approaches in the mental health care of young people.<\/p>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<hr \/>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><strong>LLM Cooperation through Mutual Feedback and Reward Shaping<\/strong><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p><strong>TUM Global Incentive Fund<\/strong><\/p>\n<p><em>Runtime<\/em>: 
30.09.2025 &#8211; 30.09.2026<br \/>\n<em>Role<\/em>: Principal Investigator, Co-author Proposal<br \/>\n<em>Partners<\/em>: <b>TUM<\/b>, Imperial College London<\/p>\n<p>Our project introduces a novel paradigm for training large language models (LLMs) that moves beyond static fine-tuning or isolated inference. We enable LLMs to instruct and provide feedback to one another within a reinforcement learning (RL) framework. Previous work uses LLMs only as judges, not for mutual learning. Traditional RL depends heavily on manually designed reward functions or costly human feedback; our approach instead utilises the pretrained knowledge of LLMs to generate more context-aware guidance signals. We will provide a strategy to integrate human preferences, not as static labels but as part of a dynamic learning loop that enables continual refinement. We hope this research can reduce reliance on human supervision while enabling AI systems to evolve collaboratively.<\/p>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<hr \/>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><strong>Development and Deployment of AI models in Predictive Maintenance Scenarios<br \/>\n<\/strong><strong>Industry Cooperation with MAN Truck &amp; Bus SE<\/strong><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><em>Runtime<\/em>: 36 months<br \/>\n<em>Role<\/em>: Principal Investigator, Co-author Proposal<br \/>\n<em>Partners<\/em>: <b>TUM<\/b>, MAN Truck &amp; Bus 
SE<\/p>\n<p>Warning systems that detect signs of future breakdowns ahead of time are a promising way to mitigate the corresponding risks. Concretely, these systems can be based on remotely sent measurements that encode patterns indicative of the state of the components involved. This includes, for instance, the continuously captured battery voltage curve during a motor start cycle, which is expected to change in character over time due to wear and tear. While these changes are hard to model mathematically a priori and may be strongly affected by external variables such as ambient or engine temperature, machine learning &#8211; and in particular deep learning &#8211; techniques are promising candidates to effectively learn signal patterns indicative of breakdown risks. The full potential of these algorithms is likely to be unlocked when they are integrated into a deployed software framework that provides large amounts of data for training and testing. This allows real-world impact to be measured directly and, in return, the applied concepts to be improved.<\/p>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<hr \/>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><strong>AI-supported failure prevention for commercial vehicles<\/strong><br \/>\n<strong>Industry Cooperation with MAN Truck &amp; Bus SE<\/strong><\/p>\n<p><em>Runtime<\/em>: 36 months<br \/>\n<em>Role<\/em>: Principal Investigator, Co-author Proposal<br \/>\n<em>Partners<\/em>: <b>TUM<\/b>, MAN Truck &amp; Bus SE<\/p>\n<p>A common problem in the everyday use of trucks in logistical chains is the 
risk of a breakdown en route, leading to financial losses. While the frequency of such breakdowns can be reduced to some degree through regular maintenance, estimating the exact time at which breakdowns occur due to worn-down or broken components remains close to guesswork. Warning systems that detect signs of future breakdowns ahead of time are thus promising to further mitigate the corresponding risks. Concretely, these systems can be based on onboard measurements which are transmitted over the air (OTA) to a database with large computing capacity for post-processing. There, patterns indicating damage or the approaching end of a component\u2019s lifetime can be computed. These predictive maintenance systems need to be as reliable and accurate as possible, given the large economic impact on both the customer and, ultimately, the manufacturer. The following use case scenarios underline how an accurate and timely prediction of the decay of certain truck components can effectively lead to (economic) benefits in the larger transportation and maintenance framework.<\/p>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<hr \/>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><strong>Emotion Computing in Speech &#8211; Perception with LLM<br \/>\n<\/strong><strong>Industry Cooperation with HUAWEI TECHNOLOGIES<\/strong><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><em>Runtime<\/em>: 15.05.2025 \u2013 14.09.2026<br \/>\n<em>Role<\/em>: 
Principal Investigator, Co-author Proposal<br \/>\n<em>Partners<\/em>: <b>TUM<\/b>, HUAWEI TECHNOLOGIES<\/p>\n<p>This project presents a research initiative focused on developing a comprehensive model capable of predicting a wide array of speaker states, traits, and acoustic\/prosodic descriptions from a given speech utterance. The proposed model will leverage advanced deep learning techniques to generate clear, concise, and contextually accurate descriptions, exemplified by statements like &#8220;this is an utterance of a happy man, characterized by a rising pitch contour and a vibrant tone.&#8221; Through this initiative, we aim to enhance the accuracy and applicability of speech emotion recognition (SER) systems across diverse fields.<\/p>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<hr \/>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><strong>MENOSTIK: <\/strong>KI-gest\u00fctzte Diagnostik der Wechseljahre durch Wearables und digitale Biomarker<br \/>\n<strong>BMBF &#8220;Start-interaktiv&#8221;<\/strong><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><em>Runtime<\/em>: 01.01.2026 &#8211; 31.12.2028<br \/>\n<em>Role<\/em>: Principal Investigator, Co-author Proposal<br \/>\n<em>Partners<\/em>: <strong>TUM<\/strong><\/p>\n<p>Menostik revolutionises the approach to menopause detection by developing an artificial intelligence (AI)-supported diagnostics platform. 
Analysing data from commercially available wearable sensors as well as voice recordings enables precise, non-invasive early detection of the perimenopause.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<hr \/>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><strong>TE(A)CHADOPT<\/strong>: Teaching students how children with neurodevelopmental disorders adopt and interact with technologies (#)<br \/>\n<strong>EU Horizon 2020 ERASMUS+<\/strong><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><em>Runtime<\/em>: 3 years<br \/>\n<em>Role<\/em>: Principal Investigator, Co-author Proposal<br \/>\n<em>Partners<\/em>: Medical University of Graz, Politechnika Gdanska, Yeditepe University, Istanbul Technical University, Beit Issie Shapiro &#8211; Amutat Avi, Alliance for applied psychology, <strong>TUM<\/strong><\/p>\n<p>We aim to advance technology adaptation to the needs of children with neurodevelopmental disorders. We search for concrete methods to evaluate how they interact with technologies and test these strategies in observational studies.<br \/>\nOur findings shall result in guidelines that support people involved in technology development in adapting their products to this user group. The guidelines will be disseminated to students, technology providers, therapists, researchers, families, and others. 
We will perform systematic literature reviews on technology adoption models and on methods to evaluate how children with neurodevelopmental disorders interact with technologies. In 4 countries, we will perform observational studies with at least 25 children in total. We will develop guidelines on how to evaluate child-technology interaction, then evaluate, revise, and translate them. We will publish our findings, and organise a student workshop as well as promotion events in the partner countries. TE(A)CHADOPT shall help to shift the focus of technology providers from producing pure learning applications to joyful and custom-tailored games for children with neurodevelopmental disorders. This will advance the inclusion of children with neurodevelopmental disorders and increase their quality of life. Our guidelines will help to identify the challenges that children with neurodevelopmental disorders face when using technologies and provide support on how to adapt products accordingly.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<hr \/>\n<\/div>\n<\/div>\n<\/div>\n<p><strong>Silent Speech: Enabling Quiet Communication through EMG <\/strong>(#15 [2024-1])<br \/>\n<strong>Bavaria California Technology Center (BaCaTeC)\u00a0<\/strong><\/p>\n<\/div>\n<\/div>\n<\/div>\n<p><em>Runtime<\/em>: 07\/2024 &#8211; 12\/2025<br \/>\n<em>Role<\/em>: Principal Investigator, Co-author Proposal<br \/>\n<em>Partners<\/em>: <strong>TUM<\/strong>, University of Southern California<\/p>\n<p>Silent Computational Paralinguistics (SCP) focuses on recognizing speaker states as well as traits during non-audible speech from sources such as facial ElectroMyoGraphy (EMG) signals. SCP can enable private interaction with next-generation socio-emotionally competent speech technology and can open such technology up to users who cannot vocalise. 
The cooperation aims to significantly advance the field of SCP by collecting a larger EMG-speech corpus and developing improved machine learning models. The project will advance research into SCP in the following directions: 1) Collecting a larger, more diverse, more expressive EMG-Silent Speech dataset, with sessions recorded from a broader speaker set consisting of project participants from both partner institutions, and with the participants performing more varied communication expressions. 2) Establishing baseline metrics for modeling the collected dataset by applying more traditional machine learning approaches. 3) Investigating advanced deep learning approaches for SCP modeling, including transfer learning from the speech modality to the EMG modality and representation learning, with EMG-to-speech synthesis being a high priority for investigation.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<hr \/>\n<p><strong>VoCS<\/strong>: Voice Communication Sciences (#101168998)<br \/>\n<strong>EU Horizon 2020 Marie Sklodowska-Curie Innovative Training Networks European Training Networks\u00a0<\/strong>(MSCA-2023-DN-01-01)<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><em>Runtime<\/em>: 4 years<br \/>\n<em>Role<\/em>: Principal Investigator, Co-author Proposal<br \/>\n<em>Partners<\/em>: Universit\u00e9 d&#8217;Aix Marseille, Friedrich Schiller University Jena, University of Maastricht, University of Oslo, University Jean Monnet Saint-Etienne, Eotvos Lorand Tudomanyegyetem, Universidad Pompeu Fabra, Univerzita Karlova, Ita-Suomen Yliopisto, University of Twente, Queen Mary University of London,\u00a0<strong>Audeering GmbH<\/strong>, Oticon A\/S, University of Zurich,\u00a0<strong>University of Augsburg<\/strong>,\u00a0<strong>TUM<\/strong>, Oxford Wave Research Ltd, National 
Institute of Informatics, National Bureau of Investigation, Odia, Oticon Medical<\/p>\n<p>With AI-driven advances, the rapidly developing field of voice technology (VT) has transformed European life through voice assistants, text-to-speech systems, and cochlear implants. However, severe challenges remain in processing paralinguistic information such as identity, emotional state or health in voices. The Voice Communication Sciences (VoCS) project&#8217;s innovative aspects lie in its comprehensive approach to voice processing, bridging disciplines from neuroscience to engineering. The VoCS research program is structured around three scientific objectives: (1) advancing basic knowledge of natural voice processing, exploring paralinguistic information in voices; (2) building on these insights to design more natural and flexible synthetic voices; (3) transferring this knowledge into user-oriented applications in health and forensics, including the improvement of voice perception for hearing-impaired individuals, advancements in forensic speaker comparison methods, and the development of tools to combat deepfake speech. 
VoCS aims to contribute not only to scientific knowledge but also to the exponential growth of the VT industry by creating a network of skilled experts shaping the future of VT in Europe.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<hr \/>\n<p><strong>INDUX-R<\/strong>: Transforming European INDUstrial Ecosystems through eXtended Reality enhanced by human-centric AI and secure, 5G-enabled IoT (#101135556)<br \/>\n<strong style=\"font-size: inherit;\">EU Horizon 2020 Research &amp; Innovation Action<\/strong><span style=\"font-size: inherit;\"> (RIA)<\/span><\/p>\n<\/div>\n<\/div>\n<\/div>\n<p><em style=\"font-size: inherit;\">Runtime<\/em><span style=\"font-size: inherit;\">: 36 months<br \/>\n<\/span><em style=\"font-size: inherit;\">Role<\/em><span style=\"font-size: inherit;\">: Principal Investigator, Co-Author Proposal<br \/>\n<\/span><em style=\"font-size: inherit;\">Partners<\/em><span style=\"font-size: inherit;\">: CERTH, FORTH, CWI, <strong>University of Augsburg<\/strong>,\u00a0 <strong>TUM<\/strong>, University of Barcelona,\u00a0 Fundacio Eurecat, FINT, NOVA, ORAMA, INOVA, RINA-CSM, IDECO, Crealsa, Inventics, University of Geneva, EKTACOM, University of Jena<\/span><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 2\">\n<div class=\"section\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 2\">\n<div class=\"section\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>INDUX-R will create an XR ecosystem with concrete technological advances over existing offerings, validated in scenarios across the Industry 5.0 spectrum. 
Starting from the virtualization of the real world, INDUX-R will enable users to seamlessly create ad-hoc, realistic digital representations of their surroundings using commodity hardware, providing an immersive background for INDUX-R applications, by further researching Neural Radiance Fields (NeRF), 3D scanning and audio-reconstruction methodologies. This work will be enriched with an XR toolkit for: i) the synthesis of speech-driven, lifelike face animations utilising Transformers and Generative Adversarial Networks; and ii) the generation of photo-realistic human avatars driven by 3D human pose estimation and local radiance fields for accurately replicating human motion, modelling deformation phenomena and reproducing natural texture. INDUX-R will research real-time, egocentric perception algorithms, integrated in XR wearables, to provide contextual analysis of the users\u2019 surroundings and enable new ways of XR interaction using visual, auditory and haptic cues. Egocentric perception will be combined with virtual elastic objects that the user can manipulate and deform in XR according to material properties, getting multi-sensorial feedback in real time. By exploiting this closed feedback loop, INDUX-R will develop a dynamic and pervasive user interface environment that can adapt to the user\u2019s profile, abilities and task at hand. This adaptation process will be controlled by Reinforcement Learning algorithms that will adjust the presented XR content in an online, human-centric manner that improves accessibility. 
Through these interfaces, human-in-the-loop pipelines based on Active Learning will be implemented, in which user feedback is utilised to improve the quality of the services and applications offered.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<hr \/>\n<p><strong>Wiss-KKI<\/strong>: Wissenschaftskommunikation \u00fcber und mit kommunikativer k\u00fcnstlicher Intelligenz: Emotionen, Engagement, Effekte<br \/>\n<b>BMBF <\/b>(F\u00f6rderrichtlinie Wissenschaftskommunikationsforschung, 7.9% Acceptance Rate in the Call)<\/p>\n<\/div>\n<\/div>\n<\/div>\n<p><em style=\"font-size: inherit;\">Runtime<\/em><span style=\"font-size: inherit;\">: 01.01.2024 &#8211; 31.12.2026<br \/>\n<\/span><em style=\"font-size: inherit;\">Role<\/em><span style=\"font-size: inherit;\">: Principal Investigator, Co-author Proposal<br \/>\n<\/span><em style=\"font-size: inherit;\">Partners<\/em><span style=\"font-size: inherit;\">: <\/span>University of Augsburg, <strong>TUM<\/strong>, TU Braunschweig<\/p>\n<p>This project is devoted to the role of communicative artificial intelligence (KKI) in science communication. This technology performs tasks in communication processes that were formerly perceived as genuinely human activities (e.g. ChatGPT). KKI plays a double role: as a mediator and communicator about socio-scientific topics, and as a subject of science communication itself, for instance in media coverage.<br \/>\nThe project aims to systematically investigate the potential of KKI for science communication in this double role, in an interdisciplinary collaboration between communication science and computer science. 
In a conceptual phase, target measures for science communication about and with KKI will first be determined. In a subsequent empirical phase, (1) the discourse in traditional and social media is analysed by interleaving manual and automated methods, (2) the effect of the media discourse on emotions towards and evaluations of the technology is examined in experimental designs, and (3) engagement (the extent and quality of users\u2019 interaction with KKI tools for science communication) is explored with a combination of qualitative and quantitative methods. It is assumed that discourse, practices, and effects have an emotional component that matters for perception and use. Finally, in a technical part, a requirements profile for a KKI tool for science communication is compiled, and a KKI-based tool for direct communication between science and the public is developed. This tool enables scientists to create easily understandable, target-group-specific press releases and social media posts from their publications. At the same time, the tool shall also be usable by laypeople to engage with scientific topics. 
The technical development is accompanied by a formative evaluation.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<hr \/>\n<p><strong>COHYPERA<\/strong>: Computed hyperspectral perfusion assessment<br \/>\n<strong>Seed Funding UAU <\/strong><strong style=\"font-size: inherit;\">Project<\/strong><span style=\"font-size: revert; color: initial; font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen-Sans, Ubuntu, Cantarell, 'Helvetica Neue', sans-serif;\">\u00a0<\/span><\/p>\n<\/div>\n<\/div>\n<\/div>\n<p><em style=\"font-size: inherit;\">Runtime<\/em><span style=\"font-size: inherit;\">: 24 months<br \/>\n<\/span><em style=\"font-size: inherit;\">Role<\/em><span style=\"font-size: inherit;\">: Principal Investigator, Co-author Proposal<br \/>\n<\/span><em style=\"font-size: inherit;\">Partners<\/em><span style=\"font-size: inherit;\">: <\/span><strong style=\"font-size: inherit;\">University of Augsburg<\/strong><\/p>\n<p>Over the last years, imaging photoplethysmography (iPPG) has been attracting immense interest. iPPG assesses cutaneous perfusion by exploiting subtle color variations from videos. Common procedures use RGB cameras and employ the green channel or rely on a linear combination of the RGB channels to extract physiological information. iPPG can capture multiple parameters such as heart rate (HR), heart rate variability (HRV), oxygen saturation, blood pressure, venous pulsation, and the strength as well as spatial distribution of cutaneous perfusion. Its highly convenient usage and a wide range of possible applications, e.g. patient monitoring, using skin perfusion as an early risk score, and the assessment of lesions, make iPPG a diagnostic tool with immense potential. Under real-world conditions, however, iPPG is prone to errors. 
Particularly regarding analyses beyond HR, the number of published works is limited, proposed algorithms are immature, basic mechanisms are not completely understood and iPPG\u2019s potential is far from being exploited. We hypothesize that hyperspectral (HS) reconstruction by artificial intelligence (AI) methods can fundamentally improve iPPG and extend its applicability. HS reconstruction refers to the estimation of HS images from RGB images. The technique has recently gained much attention but is not common to iPPG. COHYPERA aims to prove the potential of HS reconstruction as universal processing step for iPPG. The pursued approach takes advantage of the fact that the HS reconstruction can incorporate knowledge and training data to yield a high dimensional data representation, which enables various analyses.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<hr \/>\n<p><span style=\"font-size: inherit;\"><strong>Silent Paralinguistics<\/strong> (#SCHU2508\/15-1)<br \/>\n<\/span><strong style=\"font-size: inherit;\">DFG (German Research Foundation) Project<\/strong><span style=\"font-size: revert; color: initial; font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen-Sans, Ubuntu, Cantarell, 'Helvetica Neue', sans-serif;\">\u00a0<\/span><\/p>\n<\/div>\n<\/div>\n<\/div>\n<p><em style=\"font-size: inherit;\">Runtime<\/em><span style=\"font-size: inherit;\">: 01.09.2023 &#8211; 31.12.2027<br \/>\n<\/span><em style=\"font-size: inherit;\">Role<\/em><span style=\"font-size: inherit;\">: Principal Investigator, Co-author Proposal<br \/>\n<\/span><em style=\"font-size: inherit;\">Partners<\/em><span style=\"font-size: inherit;\">: <\/span><strong style=\"font-size: inherit;\">TUM<\/strong>, University of Bremen<\/p>\n<div class=\"page\" title=\"Page 7\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 2\">\n<div class=\"layoutArea\">\n<div 
class=\"column\">\n<p>We propose to combine Silent Speech Interfaces with Computational Paralinguistics to form Silent Paralinguistics (SP). To reach the envisioned project goal of inferring paralinguistic information from silently produced speech for natural spoken communication, we will investigate three major questions: (1) How well can speaker states and traits be predicted from EMG signals of silently produced speech, using the direct and indirect silent paralinguistics approach? (2) How to integrate the paralinguistic predictions into the Silent Speech Interface to generate appropriate acoustic speech from EMG signals (EMG-to-speech)? and (3) Does the resulting paralinguistically enriched acoustic speech signal improve the usability of spoken communication with regards to naturalness and user acceptance?<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<hr \/>\n<p><strong>HearTheSpecies<\/strong>: Using computer audition to understand the drivers of soundscape composition, and to predict parasitation rates based on vocalisations of bird species (#SCHU2508\/14-1) (&#8220;Einsatz von Computer-Audition zur Erforschung der Auswirkungen von Landnutzung auf Klanglandschaften, sowie der Parasitierung anhand von Vogelstimmen<span style=\"font-size: inherit; font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen-Sans, Ubuntu, Cantarell, 'Helvetica Neue', sans-serif;\">&#8220;)<br \/>\n<\/span><strong style=\"font-size: inherit;\">DFG (German Research Foundation) Project<\/strong><span style=\"font-size: inherit; font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen-Sans, Ubuntu, Cantarell, 'Helvetica Neue', sans-serif;\">, Schwerpunktprogramm \u201eBiodiversit\u00e4ts-Exploratorien\u201c<\/span><\/p>\n<\/div>\n<\/div>\n<\/div>\n<p><em style=\"font-size: inherit;\">Runtime<\/em><span style=\"font-size: 
inherit;\">: 01.03.2023 &#8211; 31.08.2026<br \/>\n<\/span><em style=\"font-size: inherit;\">Role<\/em><span style=\"font-size: inherit;\">: Principal Investigator, Co-author Proposal<br \/>\n<\/span><em style=\"font-size: inherit;\">Partners<\/em><span style=\"font-size: inherit;\">: <\/span><strong style=\"font-size: inherit;\">University of Augsburg<\/strong>, <strong>TUM<\/strong>, University of Freiburg<\/p>\n<p>The ongoing biodiversity crisis has endangered thousands of species around the world, and its urgency is increasingly acknowledged by several institutions \u2013 as signified, for example, by the upcoming UN Biodiversity Conference. Recently, biodiversity monitoring has also attracted the attention of the computer science community, owing to the potential of disciplines such as machine learning (ML) to revolutionise biodiversity research by providing monitoring capabilities of unprecedented scale and detail. To that end, HearTheSpecies aims to exploit the potential of a heretofore underexplored data stream: audio. As land use is one of the main drivers of current biodiversity loss, understanding and monitoring its impact on biodiversity is crucial to mitigating and halting the ongoing trend. This project aspires to bridge the gap between the existing data and infrastructure of the Exploratories framework and state-of-the-art computer audition algorithms. 
The developed tools for coarse- and fine-scale sound source separation and species identification can be used to analyse the interactions among environmental variables, local and regional land use, vegetation cover, and the different soundscape components: biophony (biotic sounds), geophony (abiotic sounds), and anthropophony (human-related sounds).<\/p>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<hr \/>\n<p><strong>SHIFT<\/strong>: MetamorphoSis of cultural Heritage Into augmented hypermedia assets For enhanced accessibiliTy and inclusion (#101060660)<br \/>\n<strong style=\"font-size: inherit;\">EU Horizon 2020 Research &amp; Innovation Action<\/strong><span style=\"font-size: inherit;\"> (RIA)<\/span><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p><span style=\"font-size: inherit;\"><a href=\"http:\/\/www.schuller.it\/bws\/wp-content\/uploads\/2023\/05\/Unbenannt.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6751\" src=\"http:\/\/www.schuller.it\/bws\/wp-content\/uploads\/2023\/05\/Unbenannt.png\" alt=\"\" width=\"258\" height=\"82\" srcset=\"http:\/\/www.schuller.it\/bws\/wp-content\/uploads\/2023\/05\/Unbenannt.png 258w, http:\/\/www.schuller.it\/bws\/wp-content\/uploads\/2023\/05\/Unbenannt-150x48.png 150w\" sizes=\"auto, (max-width: 258px) 100vw, 258px\" \/><\/a><\/span><\/p>\n<p><em style=\"font-size: inherit;\">Runtime<\/em><span style=\"font-size: inherit;\">: 01.10.2022 &#8211; 30.09.2025<br \/>\n<\/span><em style=\"font-size: inherit;\">Role<\/em><span style=\"font-size: inherit;\">: Principal Investigator, Workpackage Leader, Co-Author Proposal<br \/>\n<\/span><em style=\"font-size: inherit;\">Partners<\/em><span style=\"font-size: inherit;\">: Software Imagination 
&amp; Vision, Foundation for Research and Technology, Massive Dynamic, <strong>Audeering<\/strong>, <strong>University of Augsburg<\/strong>, <strong>TUM<\/strong>, Queen Mary University of London, Magyar Nemzeti Mu\u0301zeum \u2013 Semmelweis Orvosto\u0308rte\u0301neti Mu\u0301zeum, The National Association of Public Librarians and Libraries in Romania, Staatliche Museen zu Berlin &#8211; Preu\u00dfischer Kulturbesitz, The Balkan Museum Network, Initiative For Heritage Conservation, Eticas Research and Consulting, German Federation of the Blind and Partially Sighted<\/span><\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 2\">\n<div class=\"section\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<div class=\"page\" title=\"Page 2\">\n<div class=\"section\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>The SHIFT project is strategically conceived to deliver a loosely coupled set of technological tools that offers cultural heritage institutions the necessary impetus to stimulate growth and to embrace the latest innovations in artificial intelligence, machine learning, multi-modal data processing, digital content transformation methodologies, semantic representation, linguistic analysis of historical records, and haptic interfaces, so as to effectively and efficiently communicate new experiences to all citizens, including people with disabilities.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<hr \/>\n<p><strong>causAI<\/strong>: AI Interaktionsoptimierung bei Videoanrufen im Vertrieb (AI-based interaction optimisation for video calls in sales) (#03EGSBY853)<br \/>\n<strong>BMWi (Federal Ministry for Economic Affairs and Energy) EXIST Business Start-up Grant<\/strong><\/p>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p><em style=\"font-size: inherit;\">Runtime<\/em><span style=\"font-size: inherit;\">: tba<br \/>\n<\/span><em style=\"font-size: inherit;\">Role<\/em><span style=\"font-size: inherit;\">: Mentor<br \/>\n<\/span><em style=\"font-size: inherit;\">Partners<\/em><span style=\"font-size: inherit;\">: <\/span><strong style=\"font-size: inherit;\">University of Augsburg<\/strong><\/p>\n<div class=\"page\" title=\"Page 3\">\n<div class=\"section\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>causAI uses artificial intelligence to analyse the speech, gestures, and facial expressions in sales video calls in order to strengthen digital sales competence. The goal is to establish causAI as an innovative software product for supporting and training sales conversations.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<hr \/>\n<p><strong>AUDI0NOMOUS<\/strong>: Agentenbasierte, Interaktive, Tiefe 0-shot-learning-Netzwerke zur Optimierung von Ontologischem Klangversta\u0308ndnis in Maschinen<br \/>\n(Agent-based Unsupervised Deep Interactive 0-shot-learning Networks Optimising Machines&#8217; Ontological Understanding of Sound) (# <span style=\"font-size: inherit;\">442218748)<br \/>\n<\/span><strong style=\"font-size: inherit;\">DFG (German Research Foundation) <\/strong><span style=\"font-size: inherit;\"><strong>Reinhart Koselleck-Projekt<\/strong><br \/>\n<\/span><em style=\"font-size: inherit;\">Runtime<\/em><span style=\"font-size: inherit;\">: 01.01.2021 &#8211; 30.06.2026<br \/>\n<\/span><em style=\"font-size: inherit;\">Role<\/em><span style=\"font-size: inherit;\">: Principal Investigator, Co-author Proposal<br \/>\n<\/span><em style=\"font-size: inherit;\">Partners<\/em><span style=\"font-size: inherit;\">: <\/span><strong style=\"font-size: inherit;\">University of Augsburg<\/strong>, <strong>TUM<\/strong><\/p>\n<p>Soundscapes are a component of our everyday acoustic environment; we are always surrounded by sounds: we react to them, and we create them. 
While computer audition, the understanding of audio by machines, has primarily been driven by the analysis of speech, the understanding of soundscapes has received comparatively little attention. AUDI0NOMOUS, a long-term project based on artificially intelligent systems, aims to achieve major breakthroughs in the analysis, categorisation, and understanding of real-life soundscapes. A novel approach, based around the development of four highly cooperative and interactive intelligent agents, is proposed herein to achieve this highly ambitious goal. Each agent will autonomously infer a deep and holistic comprehension of sound. A Curious Agent will collect unique data from web sources and social media; an Audio Decomposition Agent will decompose overlapping sounds; a Learning Agent will recognise an unlimited number of unlabelled sounds; and an Ontology Agent will translate the soundscapes into verbal ontologies. AUDI0NOMOUS will open up an entirely new dimension of comprehensive audio understanding; such knowledge will have a high and broad impact in disciplines of both the sciences and humanities, promoting advancements in health care, robotics, and smart devices and cities, amongst many others.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<hr \/>\n<p><img decoding=\"async\" src=\"http:\/\/schuller.it\/index-Dateien\/C64ready_3.gif\" alt=\"Ready.\" \/><\/p>\n","protected":false},"excerpt":{"rendered":"<p>UNIty &#8211; Uniting Networks, Interactions and Technology for Youth Mental Health DZPG \/ DLR: VISIONS26 Runtime: 01.07.2026 &#8211; 30.06.2029 Role: Principal Investigator, Co-author Proposal Partners: LMU, Eberhard Karls Universit\u00e4t T\u00fcbingen, Universit\u00e4tsklinikum T\u00fcbingen, TUM Universit\u00e4tsklinikum, Philipps-Universit\u00e4t Marburg, Friedrich-Schiller-Universit\u00e4t Jena, Freie Universit\u00e4t Berlin Psychische Belastungen junger Erwachsener nehmen stark zu, 
insbesondere der\u2026<\/p>\n<p> <a class=\"continue-reading-link\" href=\"http:\/\/www.schuller.it\/bws\/?page_id=192\"><span>Continue reading<\/span><i class=\"crycon-right-dir\"><\/i><\/a> <\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-192","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"http:\/\/www.schuller.it\/bws\/index.php?rest_route=\/wp\/v2\/pages\/192","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.schuller.it\/bws\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"http:\/\/www.schuller.it\/bws\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"http:\/\/www.schuller.it\/bws\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.schuller.it\/bws\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=192"}],"version-history":[{"count":330,"href":"http:\/\/www.schuller.it\/bws\/index.php?rest_route=\/wp\/v2\/pages\/192\/revisions"}],"predecessor-version":[{"id":8326,"href":"http:\/\/www.schuller.it\/bws\/index.php?rest_route=\/wp\/v2\/pages\/192\/revisions\/8326"}],"wp:attachment":[{"href":"http:\/\/www.schuller.it\/bws\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=192"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}