AI gives answers.
We give reasons.
AI solutions that are transparent by design.
Our Solution
Modern AI systems are often large, highly complex black boxes that provide little to no transparency into how their predictions are made. This makes them difficult or impossible to deploy in highly regulated and safety-critical use cases.
To solve this problem, we developed a novel method in the Academy of Finland project XAILOG for learning inherently interpretable AI models directly from data. Our solution provides a cost-efficient and transparent alternative to black-box techniques such as Random Forests and XGBoost, while maintaining competitive predictive performance.
Training Data
Historical labeled data from your domain
ExplainedAI
Learns transparent rules directly from data
IF condition OR condition THEN class A ELSE class B
Interpretable Rules
Human-readable IF/THEN classifier
Predictions
Apply rules to new, unseen data
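To make the pipeline above concrete, here is a minimal runnable sketch of the same workflow shape. Our rule learner itself is not shown; a shallow scikit-learn decision tree stands in for it, and scikit-learn's bundled breast-cancer dataset stands in for your domain data.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Historical labeled data from your domain (a bundled dataset as a stand-in).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn a small, transparent model directly from the labeled data
# (a shallow decision tree as a stand-in for our rule learner).
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# The entire classifier is human-readable.
print(export_text(clf, feature_names=list(X.columns)))

# Apply the learned rules to new, unseen data.
print("held-out accuracy:", clf.score(X_test, y_test))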
Examples
IF (Uniformity of cell size ≥ 3.5) OR (Bare nuclei ≥ 3.5 AND 1.5 ≤ Uniformity of cell size < 3.5) OR (Single epithelial cell size ≥ 2.5 AND Uniformity of cell shape ≥ 3.5) THEN Malignant ELSE Benign
Insights From the Data
Trained on the Wisconsin Breast Cancer dataset, this 3-rule classifier flags a tumor as malignant when cell sizes vary widely, when bare nuclei are prevalent alongside moderate cell-size variation, or when epithelial cells are large and cell shapes are irregular. Each rule captures a distinct hallmark of malignancy.
Accuracy: Our solution scores 96.4%, on par with Random Forest (97%) and XGBoost (97%) while remaining fully interpretable.
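Because the model is just three rules, it can be transcribed directly into ordinary code. Below is that transcription as a dependency-free Python function; the function and argument names are shortened by us, and the sample record is made up for illustration.

def classify_tumor(cell_size_uniformity, bare_nuclei, epithelial_size, cell_shape_uniformity):
    # Direct transcription of the 3-rule classifier above.
    if (cell_size_uniformity >= 3.5
            or (bare_nuclei >= 3.5 and 1.5 <= cell_size_uniformity < 3.5)
            or (epithelial_size >= 2.5 and cell_shape_uniformity >= 3.5)):
        return "Malignant"
    return "Benign"

# Made-up record for illustration only.
print(classify_tumor(cell_size_uniformity=4.0, bare_nuclei=1.0,
                     epithelial_size=2.0, cell_shape_uniformity=3.0))  # -> Malignant

The credit-scoring rule below translates just as mechanically, which is why every prediction can be audited line by line.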
IF (Duration ≥ 15.5 months AND No established credit history AND Unemployed) OR (Duration ≥ 15.5 months AND Has checking account) OR (Has checking account AND No established credit history AND No guarantor) THEN Bad Credit ELSE Good Credit
Transparent Decisions
Trained on the German Credit dataset, this 3-rule classifier flags bad credit when long loan durations combine with no established credit history and unemployment, when longer loans are paired with an active checking account, or when a checking account holder has no established credit history and no guarantor backing the loan.
F1 score: Our solution scores 60.7%, outperforming Random Forest (59.4%) and XGBoost (58.9%) by catching more bad-credit cases.
Black Box AI vs. Our Approach
Black Box AI
- Predictions hard to explain or audit
- Complex and large
- Requires heavy computing infrastructure
- Compliance risk under EU AI Act / GDPR
Our Solution
IF (Income < 40k AND Employment < 2y) OR Debt-to-income ≥ 45% THEN Deny Loan ELSE Approve Loan
- Every prediction has an explanation
- Short and human-readable
- Cheap to deploy and maintain
- Regulatory-compliant by design
Already Have a Model?
You don't have to replace your existing AI system. Feed your model's predictions into ours and get human-readable rules that explain its decision-making. Meet EU AI Act and GDPR explainability requirements while keeping the model you already have. Uncover what your model actually learned, catch hidden biases, and build trust before deploying.
Your Black Box
Your existing AI model
Labeled Predictions
Data with your model's predictions
ExplainedAI
Learns rules that mimic your model
IF Risk score ≥ 0.7 OR (Age < 25 AND Income < 30k) THEN High risk ELSE Low risk
Human-Readable Explanation
Human-readable rules that explain your model
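The sketch below shows the general distillation idea behind this workflow under stand-in assumptions: a scikit-learn random forest plays your black box and a shallow decision tree plays our rule learner, so the code is runnable but illustrates the concept rather than our actual method.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# 1. Your existing black box (a random forest as a stand-in).
black_box = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# 2. Label the data with the black box's own predictions.
y_mimic = black_box.predict(X)

# 3. Learn transparent rules that mimic the black box
#    (a shallow decision tree as a stand-in for our rule learner).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_mimic)

# 4. Human-readable rules that explain the black box, plus how
#    faithfully the rules reproduce its decisions.
print(export_text(surrogate, feature_names=list(X.columns)))
print("fidelity:", surrogate.score(X, y_mimic))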
Benefits
Rivals Black-Box Accuracy
Approaches state-of-the-art performance on structured data.
Compact Results
Far smaller and more readable than decision trees.
Mathematical Guarantees
Backed by rigorous proofs of convergence and accuracy.
Runs on a Laptop
No GPU required. Fast training, instant inference.
What You Can Do With It
Predictions Stakeholders Trust
Loan approvals, insurance claims, medical triage — when an AI makes a decision that affects someone's life, regulations demand an explanation.
EU AI Act & GDPR compliant by design
Discoveries Anyone Understands
Point our solution at your data and it gives you patterns your domain experts immediately understand. No data science degree required.
Turn data into actionable knowledge
Runs Anywhere
Our models are plain if/then rules. No cloud, no GPU, no ML framework required. Deploy them on edge devices, embedded systems, or microcontrollers.
Predictions in any environment
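As a concrete illustration, the example loan rule from the comparison above becomes a few lines of dependency-free code. The function name and the sample applicant are ours, and debt-to-income is taken as a fraction.

# The example loan rule as plain code: no ML framework, no model file,
# nothing an edge device or microcontroller couldn't run.
def loan_decision(income, employment_years, debt_to_income):
    if (income < 40_000 and employment_years < 2) or debt_to_income >= 0.45:
        return "Deny Loan"
    return "Approve Loan"

# Made-up applicant for illustration only.
print(loan_decision(income=55_000, employment_years=5, debt_to_income=0.30))  # -> Approve Loan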
Our Team
A world-class team from Tampere University, advancing transparent and understandable AI through our research in logic and AI.
Antti Kuusisto
- Led the Academy of Finland–funded XAILOG project
- 20+ years of research in logic and AI
- Publications in NeurIPS, AAAI, and JAIR
Tomi Janhunen
- 25+ years of research in logic and AI
- Chairman of the Finnish AI Society (2022–2023)
- Multiple Best Paper awards
- Publications in IJCAI, AAAI, JAIR, NeurIPS
Jussi Lemiläinen
- 25+ years as an executive and entrepreneur
- Product development & productization
- Financing and international sales
Reijo Jaakkola
- Publications in logic and explainable AI
- Ernst Lindelöf Award for Master's thesis
- Two-time AI hackathon winner
Veeti Ahvonen
- Logic and modern AI models (neural networks, transformers)
- Work featured at NeurIPS and AAAI
- 7+ years of industry collaboration
Selected Publications
Peer-reviewed research underpinning the technology, conducted within the Academy of Finland project XAILOG.
- Expressive Power of Graph Transformers via Logic
- Logical Characterizations of Recurrent Graph Neural Networks with Reals and Floats
- Why This and Not That? A Logic-based Framework for Contrastive Explanations
- Explainability via Short Formulas: the Case of Propositional Logic with Implementation
- Interpretable Classifiers for Tabular Data via Feature Selection and Discretization
- Short Boolean Formulas as Explanations in Practice
Become a Pilot Partner
We are looking for organizations that want to test our solution.
Free
No fees, no commitment. We are funded by Business Finland.
Tailored
We work directly with your team to adapt our solution to your domain, data, and requirements.
Easy to Integrate
Delivered as a library, plugin, or API. Fits into any existing platform or workflow. No infrastructure changes needed.

