Challenge 1. Develop a Federated-Learning platform that enables financial institutions to collaboratively improve their Models for money laundering prevention without exchanging confidential data.

PostFinance AG is obliged to monitor transactions in order to identify and uncover possible cases of money laundering. Modern Machine-Learning (ML) techniques allow suspicious transactions to be tracked down automatically and efficiently. One difficulty, however, is that the vast majority of transactions are harmless, and only a small proportion of the cash flow actually involves money laundering.
Therefore, the individual training sets of different financial institutions contain only a comparatively small number of money laundering cases, which impairs ML model performance. Hence, there is an interest in sharing training data among different banks. However, due to banking secrecy and privacy protection rules, training data cannot easily be exchanged between individual institutions.

Federated Learning offers an elegant way to train an ML model iteratively and in a decentralized fashion on the servers of each individual financial institution, without the need to exchange sensitive data.

In this challenge, you will develop a platform based on Federated Learning that enables a number of banks to iteratively improve a shared ML model for the money laundering prevention of tomorrow.
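The iterative, decentralized training loop described above can be sketched as one Federated Averaging (FedAvg) round. In this minimal sketch, plain Python lists stand in for real bank models and data, and all names (local_update, fedavg_round) are illustrative, not a prescribed API; the key property is that only model weights, never transaction data, leave a bank's server.

```python
# Minimal FedAvg sketch: each "bank" trains locally on its private data,
# and the server averages the returned weights, weighted by dataset size.

def local_update(weights, local_data, lr=0.1):
    """One gradient-descent step on a toy squared-error objective,
    standing in for a bank's private training run."""
    grads = [0.0] * len(weights)
    for x, y in local_data:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        for i, xi in enumerate(x):
            grads[i] += 2 * err * xi
    n = max(len(local_data), 1)
    return [w - lr * g / n for w, g in zip(weights, grads)]

def fedavg_round(global_weights, bank_datasets):
    """Each bank trains on its own data; only the updated weights are
    sent back and averaged, weighted by local dataset size."""
    updates = [local_update(list(global_weights), d) for d in bank_datasets]
    total = sum(len(d) for d in bank_datasets)
    return [
        sum(len(d) * u[i] for u, d in zip(updates, bank_datasets)) / total
        for i in range(len(global_weights))
    ]

# Two banks with private data drawn from the same linear rule y = 2*x.
bank_a = [([1.0], 2.0), ([2.0], 4.0)]
bank_b = [([3.0], 6.0)]
w = [0.0]
for _ in range(50):
    w = fedavg_round(w, [bank_a, bank_b])
```

After enough rounds, the shared weight approaches the rule underlying both banks' data, even though neither dataset ever left its owner.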
PostFinance
Challenge 2. Predicting high potential researchers.

The impact of hiring a top talent on an organization’s performance is very significant, especially for highly complex jobs. Approaches to detecting top talents for a particular position use so-called “success patterns”, i.e., details on where a researcher did their PhD, who their collaborators are, etc. As a challenge task, participants are asked to develop an algorithm for identifying high-potential researchers by inferring success patterns from the Dimensions Knowledge Graph. The Dimensions Knowledge Graph contains information about authors, publications, patents, and topics for multiple science domains.
Metaphacts
Challenge 3. Grounding LLM answers in a scientific knowledge graph.

Validating large language model answers for question answering over scientific data.
Interaction between large language models (LLMs) and knowledge graphs (KGs) is expected to be beneficial. LLMs, being black-box models, and KGs, which explicitly store rich factual knowledge, are both incomplete, but in different ways. LLMs contain commonsense knowledge but might be based on outdated training data; they can generate an unlimited number of coherent and fluent texts, but these are not always factually correct. KGs, on the other hand, can be viewed as sources of reliable and trustworthy information. In this challenge, participants are encouraged to develop methods that combine the strengths of both. The challenge task is framed as question answering over scientific data in a specific usage context. For example, a researcher would like to find recent methods used for a specific task, e.g., thermal modeling for power-electronic systems or circuit simulator design, and asks an LLM about it. The LLM response can be partially correct and partially hallucinated. The challenge is therefore to develop an algorithm that detects and labels correct and hallucinated fragments in LLM-generated answers. The Dimensions Knowledge Graph is intended to serve as the source of trustworthy knowledge for this task.
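As a toy illustration of the labeling task, the sketch below splits an LLM answer into sentence fragments and marks each as supported or unverified by exact lookup in a small in-memory fact set. The fact set and answer text are invented examples; in the challenge itself, the facts would come from the Dimensions Knowledge Graph, and matching would need to be far more robust than exact string lookup.

```python
# Toy fragment labeler: a tiny fact set stands in for the knowledge graph.
KG_FACTS = {
    "method x is used for thermal modeling",
}

def label_fragments(answer):
    """Split an answer on sentence boundaries and label each fragment
    as 'supported' (found in the fact set) or 'unverified'."""
    frags = [f.strip() for f in answer.split(".") if f.strip()]
    return [
        (f, "supported" if f.lower() in KG_FACTS else "unverified")
        for f in frags
    ]

labels = label_fragments(
    "Method X is used for thermal modeling. Method Y won a Turing Award."
)
```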
Metaphacts
Challenge 4. From pseudonymized clinical data to targeted noise.

Introduce dedicated noise into clinical trial data to achieve an intermediate state between anonymized and pseudonymized data. With that, the data do not fall under GDPR regulation but retain enough granularity for scientifically accurate outcomes.
Potential Outputs:
✓ Draw business and data flows with continuously automated data integration in mind
✓ Introduce different noise categories, develop machine learning models, and benchmark them
✓ Develop a pipeline evaluation metric and process-related KPIs
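One simple noise category that the outputs above could start from is additive Laplace noise on numeric fields, sketched below with standard-library sampling. The record layout, field name, and scale are illustrative assumptions; in practice the scale would have to be calibrated against an actual privacy budget and the intended analyses.

```python
# Sketch of one noise category: Laplace noise on a numeric field of
# pseudonymized records, leaving all other fields untouched.
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def perturb_records(records, field, scale, seed=0):
    """Return noisy copies of the records; originals are not modified."""
    rng = random.Random(seed)
    out = []
    for rec in records:
        noisy = dict(rec)
        noisy[field] = rec[field] + laplace_noise(scale, rng)
        out.append(noisy)
    return out

patients = [{"id": "P001", "age": 54}, {"id": "P002", "age": 61}]
noisy = perturb_records(patients, "age", scale=2.0)
```

A benchmark in the spirit of the second output could then compare model performance on the raw versus the perturbed table across several scales.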
Wega
Challenge 5. There is never enough data – create synthetic clinical data.

Goal: Biomedical and clinical data are expensive to generate. Recently, generative models were introduced to produce synthetic data, e.g., for machine-learning training purposes. Therefore, apply generative models to produce more data.
Potential Outputs:
✓ Draw business and data flows with continuously automated data integration, generation, and evaluation in mind
✓ Apply scientific-domain-specific generative models for synthetic data generation in the field of clinical data, and benchmark the different models for their applicability
✓ Develop evaluation metrics and visualization tools to compare real and synthetic data, and process-related KPIs
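A deliberately simple generative baseline for the fit → sample → compare pipeline described above is to fit an independent Gaussian to each numeric column of a table and sample synthetic rows from it. The toy "clinical" columns below are invented; real submissions would use richer models (GANs, VAEs, copulas), but even this baseline gives the evaluation metrics something to compare against.

```python
# Baseline synthetic-data generator: per-column Gaussians.
import random
import statistics

def fit_gaussians(rows):
    """Per-column mean and standard deviation of a numeric table."""
    cols = list(zip(*rows))
    return [(statistics.mean(c), statistics.stdev(c)) for c in cols]

def sample_synthetic(params, n, seed=0):
    """Sample n synthetic rows, one Gaussian draw per column."""
    rng = random.Random(seed)
    return [[rng.gauss(mu, sd) for mu, sd in params] for _ in range(n)]

# Toy "real" data: systolic blood pressure and heart rate.
real = [[120, 72], [135, 80], [128, 76], [142, 88]]
params = fit_gaussians(real)
synthetic = sample_synthetic(params, n=100)
```

Note that this baseline ignores correlations between columns, which is exactly the kind of gap a real-versus-synthetic comparison metric should expose.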
Wega
Challenge 6. Enhancing Real-Time Emotion Detection.

Develop an AI-based REST API that provides real-time emotion detection capabilities for our safety app (Limu). The goal is to achieve higher accuracy and to handle various challenging scenarios.

Dataset: you will be provided with a labeled dataset containing audio recordings of individuals expressing various emotions such as happiness, sadness, anger, fear, surprise, and neutral. Each recording will be accompanied by an annotation indicating the correct emotion.

Evaluation Criteria: Participants will be evaluated based on the accuracy of their emotion detection algorithm. The evaluation will be conducted on a separate test dataset with different recordings.

Requirements and Considerations:

✓ Participants should utilize third-party AI services for voice recognition and leverage them to extract relevant features from the audio recordings.
✓ The developed model/algorithm should be able to handle real-time processing, providing results within a specified time frame (e.g., 1 second).
✓ The algorithm should be scalable and adaptable to different environments and user inputs.
✓ Participants are encouraged to document and explain the methodologies used, including any preprocessing steps, feature extraction techniques, and the choice of AI services employed.
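The real-time requirement can be made concrete with a latency guard around the classification step, as in the sketch below. The feature vector and the scoring rule are placeholders for whatever third-party voice-recognition service and model participants choose; only the timing and response structure are the point, and in a submission this function would sit behind the REST endpoint.

```python
# Sketch of the 1-second real-time constraint: a placeholder classifier
# wrapped in a latency check.
import time

EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "neutral"]

def classify_features(features):
    """Dummy scoring: the emotion at the argmax of the feature vector.
    A real model replaces this entirely."""
    best = max(range(len(features)), key=lambda i: features[i])
    return EMOTIONS[best % len(EMOTIONS)]

def detect_emotion(features, budget_s=1.0):
    """Return the label plus elapsed time, flagging violations of the
    challenge's response-time budget."""
    start = time.perf_counter()
    label = classify_features(features)
    elapsed = time.perf_counter() - start
    return {
        "emotion": label,
        "elapsed_s": elapsed,
        "within_budget": elapsed <= budget_s,
    }

result = detect_emotion([0.1, 0.7, 0.05, 0.05, 0.05, 0.05])
```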
Note: through this challenge, students can develop AI algorithms that contribute to the improvement of Limu’s emotion detection service.
Limu
Challenge 7. Ensuring gender-neutral language.

While gender-biased language is not a big problem in English, there is an increasing demand for detecting it in other languages, e.g. German. For automated identification of the corresponding mistakes, one needs to identify which words should be transformed into a gender-neutral version (e.g. “Dozenten”). The automated correction is challenging, especially since it is desirable to do it in a “smart” way: instead of correcting to “zufriedene Kund*innen”, it may feel more natural to correct to “zufriedene Kundschaft”. In this challenge, both tasks (identification and “smart” correction) can be taken on: there are slightly more than 4,000 training sentences that you can use for the identification of gender-biased language, and around 800 sentences in which “smart” corrections have been applied, i.e. that can be used to train a smart correction model. In this context, some linguistic knowledge (see also https://geschicktgendern.de/) may be combined with your machine learning approach. It is up to you whether you want to tackle one task, the other, or both!
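A minimal lexicon-based baseline for the identification task is sketched below: flag tokens that match a list of masculine generics and suggest a neutral alternative, in the spirit of geschicktgendern.de. The three lexicon entries are illustrative examples, not the challenge's training data, and a real system would need morphology-aware matching rather than plain string lookup.

```python
# Tiny illustrative lexicon: masculine generic -> neutral alternative.
NEUTRAL_LEXICON = {
    "dozenten": "Dozierende",
    "kunden": "Kundschaft",
    "mitarbeiter": "Mitarbeitende",
}

def flag_gendered_tokens(sentence):
    """Return (token, suggestion) pairs for flagged words."""
    hits = []
    for token in sentence.split():
        key = token.strip(".,;:!?").lower()
        if key in NEUTRAL_LEXICON:
            hits.append((token, NEUTRAL_LEXICON[key]))
    return hits

hits = flag_gendered_tokens("Unsere Dozenten betreuen zufriedene Kunden.")
```

Such a baseline gives the ML approach something to beat, and its lexicon can double as features for a learned identification model.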
IIS Group
Challenge 8. Summarizing tender documents
 
Companies bidding on tenders face the challenge of identifying the relevant information from extensive tender documents in a short time. Manual review and extraction are time-consuming and prone to errors.
 
You are asked to provide a user-friendly tool that identifies and extracts the key information from PDF files (~100 pages). The user should be given a neat overview of the file, in which the key information is structured correctly by context. The extraction should be fast and able to handle different kinds of document layouts.
AXIANS
Challenge 9. The non-HIV Chitchat Challenge.

The SNF research project on “Researching Intelligent Chatbots as Healthcare Coaches” is hosting a MAKEathon challenge called the “non-HIV Chitchat Challenge.” Participants in this challenge are tasked with creating a chatbot that engages in light-hearted, non-serious conversations while avoiding discussions related to HIV and medical topics. The chatbot should be trainable on chitchat dialogues and provide varying, motivational, and friendly responses.

The primary goal is to prevent the chatbot from engaging in conversations about HIV, medical, and sensitive topics. Ideally, the chatbot should be implemented in Python with a FastAPI interface for ease of use. The challenge provides a dataset of HIV-related FAQs that the chatbot should avoid discussing. Additionally, participants can use the BioPortal Ontology on HIV or Linked Open Data as “stop-topics” to guide the chatbot’s responses. Dialogues for training can be sourced from open-domain conversational datasets like Kaggle.

Participants are encouraged to explore Language Model-based approaches such as Llama 2, GPT, and potentially BERT, as well as technologies like LangChain (SPARQL) and Chainlit (chat interface) to create their chitchat chatbot. The challenge presents an exciting opportunity to develop an intelligent and engaging chatbot that respects the specified semantic boundaries in a healthcare context.
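The “stop-topics” idea can be sketched as a guard that screens each user message before the chitchat model answers, as below. The stop terms and the canned deflection are illustrative placeholders; in a submission, the terms could be harvested from the BioPortal HIV ontology, and the matching would need to handle paraphrases, not just keywords.

```python
# Illustrative stop-topic guard in front of any chitchat backend.
STOP_TERMS = {"hiv", "aids", "antiretroviral", "diagnosis", "medication"}

def guarded_reply(user_message, chitchat_fn):
    """Deflect messages that touch a stop topic; otherwise delegate to
    the chitchat backend (any callable taking the message)."""
    words = {w.strip(".,!?").lower() for w in user_message.split()}
    if words & STOP_TERMS:
        return ("I'd rather keep things light! "
                "How about we chat about music or travel?")
    return chitchat_fn(user_message)

reply = guarded_reply("Tell me about HIV medication", lambda m: "chitchat: " + m)
safe = guarded_reply("What's your favourite song?", lambda m: "chitchat: " + m)
```

In a FastAPI deployment, this guard would sit in the request handler, in front of whichever LLM (Llama 2, GPT, etc.) generates the actual chitchat.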
IIS Group