Our Objectives

Artificial Intelligence • GenAI • Explainable AI • Multi-Agent Systems • Explainable Multimodal Large Language Models • Context-Aware Systems

1. Build an adaptable, explainable AI-agentic platform

We are developing a flexible, situation-aware AI-agentic platform powered by Generative AI (GenAI) and multimodal foundation models. The platform will decompose complex AI decisions into interpretable subprocesses and provide users with transparent, context-sensitive explanations. By integrating Knowledge Graphs and Retrieval-Augmented Generation (RAG), it will make AI models explainable without compromising performance. An embedded Explainability Assistant will guide developers in adopting best practices and monitoring system behaviour.
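
As a purely illustrative aid, and not AIXPERT's actual implementation, the following Python sketch shows one simple way a knowledge-graph lookup could be combined with retrieved context passages to ground a human-readable explanation. All data, names, and the retrieval logic are hypothetical placeholders.

```python
# Illustrative sketch only: a toy knowledge graph plus passage retrieval used to
# ground an explanation. All names, data, and logic are hypothetical placeholders.

# A tiny in-memory "knowledge graph" as (subject, relation, object) triples.
KNOWLEDGE_GRAPH = [
    ("loan_decision", "depends_on", "credit_history"),
    ("loan_decision", "depends_on", "income_stability"),
    ("credit_history", "derived_from", "repayment_records"),
]

# A handful of retrievable text passages standing in for a document store.
PASSAGES = {
    "credit_history": "Credit history summarises past repayment behaviour.",
    "income_stability": "Income stability reflects variance in monthly income.",
}

def retrieve_facts(decision: str):
    """Return the KG triples whose subject matches the decision being explained."""
    return [triple for triple in KNOWLEDGE_GRAPH if triple[0] == decision]

def build_explanation(decision: str, outcome: str) -> str:
    """Assemble a grounded, human-readable explanation from KG facts and passages."""
    lines = [f"The system produced the outcome '{outcome}' for '{decision}'."]
    for subj, rel, obj in retrieve_facts(decision):
        lines.append(f"- {subj} {rel.replace('_', ' ')} {obj.replace('_', ' ')}.")
        if obj in PASSAGES:
            lines.append(f"  Context: {PASSAGES[obj]}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_explanation("loan_decision", "approved"))
```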


2. Define and assess AI trustworthiness

AIXPERT introduces a standardised, multi-dimensional framework for assessing AI trustworthiness, covering transparency, accountability, autonomy, and robustness across varied operating conditions. Drawing on real-world datasets and human feedback, we benchmark AI systems against clearly defined metrics and share evaluation handbooks to promote replicability, governance, and informed human-AI collaboration. This framework provides a foundation for regulatory alignment and future certification schemes.
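
To illustrate the idea of multi-dimensional benchmarking in the simplest possible terms, the sketch below aggregates per-dimension scores into a short report. The dimension names follow the text above, but the score values, score range, and equal-weight aggregation rule are hypothetical assumptions, not AIXPERT's defined metrics.

```python
# Illustrative sketch only: aggregating per-dimension trustworthiness scores
# into a report. Scores, their [0, 1] range, and the equal weighting are
# hypothetical assumptions; a real framework would define these explicitly.

from statistics import mean

# Placeholder scores for one evaluated system.
scores = {
    "transparency": 0.82,
    "accountability": 0.74,
    "autonomy": 0.65,
    "robustness": 0.71,
}

def overall_score(dimension_scores: dict[str, float]) -> float:
    """Combine dimension scores with a simple (hypothetical) equal weighting."""
    return mean(dimension_scores.values())

def report(dimension_scores: dict[str, float]) -> str:
    """Format per-dimension scores and the aggregate as a plain-text report."""
    lines = [f"{dim}: {value:.2f}" for dim, value in dimension_scores.items()]
    lines.append(f"overall: {overall_score(dimension_scores):.2f}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(report(scores))
```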


3. Advance explainable multimodal foundation models

To ensure equity and inclusivity, we are developing next-generation explainable multimodal foundation models capable of handling diverse data types: text, images, audio, speech, and tabular data. These models incorporate ethical design, bias-mitigation strategies, and cultural sensitivity, setting a new standard for responsible AI development. Our model design prioritises explainability and social equity to support broad usability and minimise harmful outcomes.


4. Demonstrate real-world impact through pilot use cases

We will validate AIXPERT’s framework across five high-impact pilot domains: healthcare, recruitment, educational robotics, manufacturing, and the creative arts. You can find out more about the use cases in the dedicated section of this website.
