Challenge 1. Explainable AI by PostFinance As AI methods become more established in business processes, both regulators and business users are taking an increasing interest in model explainability. A few standard technical metrics, such as Shapley values, are commonly used in this context; we already apply them in several product groups, including customer personalization, fraud prevention, and money laundering detection. However, these metrics can still be difficult to interpret quickly in a business setting far removed from the technical model implementation. We postulate that an LLM-assisted product that combines the Shapley values for a given model score, the technical documentation of the model (including feature and target definitions), and the power of a modern generative AI model could provide relevant explainability interpretations at the level of detail required by a given application. We will provide a set of example data corresponding to a specific business application. Your task will be to build a model, extract relevant explainability metrics, and build a product that summarizes those metrics in German or English for the business user. | PostFinance
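A minimal sketch of the kind of pipeline this challenge describes, assuming a tree-based model trained on the provided example data: Shapley values are extracted with the `shap` library and combined with feature documentation into an LLM prompt. The feature names in `FEATURE_DOCS` and the `ask_llm` helper are hypothetical placeholders, not part of the challenge material.

```python
# Sketch only: extract Shapley values for one scored case and build an LLM prompt
# that ties each contribution to its documented business meaning.
import shap
import xgboost as xgb
import pandas as pd

# Hypothetical feature documentation, as it might appear in the model's technical docs.
FEATURE_DOCS = {
    "txn_count_30d": "Number of transactions in the last 30 days",
    "avg_txn_amount": "Average transaction amount in CHF",
    "account_age_days": "Age of the account in days",
}

def explain_score(model: xgb.XGBClassifier, X: pd.DataFrame, row_idx: int, language: str = "de") -> str:
    """Translate Shapley values for one prediction into a business-level explanation prompt."""
    explainer = shap.TreeExplainer(model)
    shap_values = explainer(X.iloc[[row_idx]])            # Shapley values for one scored case
    contributions = dict(zip(X.columns, shap_values.values[0]))

    # Rank features by absolute contribution to the score and keep the top five.
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:5]
    lines = [
        f"- {name} ({FEATURE_DOCS.get(name, 'no documentation available')}): SHAP contribution {value:+.3f}"
        for name, value in top
    ]
    prompt = (
        f"You are explaining a model score to a business user in {'German' if language == 'de' else 'English'}.\n"
        f"Model score: {model.predict_proba(X.iloc[[row_idx]])[0, 1]:.2f}\n"
        "Top feature contributions:\n" + "\n".join(lines) +
        "\nSummarize in plain language why the model produced this score."
    )
    return ask_llm(prompt)

def ask_llm(prompt: str) -> str:
    # Placeholder: replace with a call to whichever generative AI provider the team chooses.
    return prompt
```

The prompt grounds the LLM in both the numeric contributions and the documented feature meanings, which is the combination the challenge calls for; the same structure could be extended with target definitions or other model documentation.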
Challenge 2. AI Assistant for Research Reproducibility Scientific papers often introduce or reference complex software, datasets, or experimental procedures that are crucial for reproducibility, but finding, understanding, and running these resources can be time-consuming and technically challenging. With this challenge, we invite you to design and develop an AI-powered assistant that helps researchers navigate this complexity. Using scientific knowledge graphs such as SemOpenAlex, the AI assistant should help users discover relevant software, datasets, and resources from academic papers. The system should not only locate these resources but also assist in setting them up, explain how they work, and, ideally, automate the process of running experiments. The complexity of the assistant could range from an exploratory tool that maps relationships between the resources in a paper (e.g., software and datasets) to an advanced agent that uses reasoning and automation to fully set up and reproduce experiments. For example, given a software-related research topic, the assistant could retrieve relevant papers from SemOpenAlex and provide information and setup instructions for the artifacts produced in these papers. External links to the artifacts (e.g., on GitHub or Zenodo) can be obtained by analyzing metadata in SemOpenAlex and connected knowledge graphs such as Linked Papers with Code or Wikidata. Depending on the nature of the artifacts, the level of guidance from the (LLM-based) assistant may vary. For a dataset, the assistant could provide a summary, an overview of connected datasets, and a data sample; for a software library, it could describe the library's capabilities and how to apply it in the context of the research topic. Ideally, the assistant will provide additional guidance on how to apply these artifacts in an integrated setup. | Metaphacts
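A minimal sketch of the paper-retrieval step, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql; the class and property IRIs (soa:Work, dct:title) are illustrative and should be checked against the SemOpenAlex ontology documentation.

```python
# Sketch only: retrieve works from SemOpenAlex whose titles mention a research topic.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://semopenalex.org/sparql"

def find_works(topic: str, limit: int = 10) -> list[dict]:
    """Return works whose title contains the topic string (case-insensitive)."""
    query = f"""
    PREFIX dct: <http://purl.org/dc/terms/>
    PREFIX soa: <https://semopenalex.org/ontology/>

    SELECT ?work ?title WHERE {{
        ?work a soa:Work ;
              dct:title ?title .
        FILTER(CONTAINS(LCASE(?title), LCASE("{topic}")))
    }}
    LIMIT {limit}
    """
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [
        {"work": b["work"]["value"], "title": b["title"]["value"]}
        for b in results["results"]["bindings"]
    ]

if __name__ == "__main__":
    for hit in find_works("knowledge graph embedding"):
        print(hit["title"], "->", hit["work"])
```

A title filter over the full graph is only a starting point; in practice, the assistant would follow identifiers from the retrieved works into Linked Papers with Code or Wikidata to resolve GitHub or Zenodo links, and pass the artifact metadata to the LLM to generate setup instructions.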
Challenge 3. TBD COMING SOON! | Chedventure |
Challenge 4. TBD COMING SOON! | Swiss Airlines |
Challenge 5. TBD COMING SOON! | Roche |
Are you a company or a research institute with an AI challenge? Submit your challenge using the form below!
We will review your challenge and get back to you to inform you of its selection or rejection, or to request further information.