Using Large Language Models and Knowledge Graphs to Improve the Interpretability of Machine Learning Models in Manufacturing
Researchers present a novel method combining Large Language Models (LLMs) and Knowledge Graphs (KGs) to enhance the interpretability of Machine Learning (ML) models in manufacturing environments. The approach stores domain-specific data and ML results in a structured knowledge graph, then uses an LLM to generate user-friendly explanations of ML predictions, demonstrating practical applicability in real-world manufacturing decision-making.
This research addresses a central challenge in Explainable Artificial Intelligence (XAI): bridging the communication gap between complex ML models and the end-users who need to understand and trust their outputs. The authors developed a system in which domain knowledge and ML insights are organized within a Knowledge Graph, allowing selective retrieval of relevant information that is then processed by an LLM to generate contextually appropriate explanations. This approach is particularly valuable in manufacturing, where operational decisions hinge on understanding why a model makes specific recommendations.
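The retrieve-then-explain pipeline described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's implementation: the triples, entity names, and hop-based retrieval are assumptions standing in for whatever graph store and query mechanism the authors used, and the final LLM call is represented only by the assembled prompt.

```python
# Hypothetical sketch: domain knowledge and ML results stored as
# (subject, predicate, object) triples, with relevant facts retrieved
# and packed into an LLM prompt. All names/values are illustrative.
KG = [
    ("milling_machine_3", "has_sensor", "spindle_vibration"),
    ("spindle_vibration", "reading", "4.2 mm/s"),
    ("spindle_vibration", "normal_range", "0.5-2.8 mm/s"),
    ("ml_model", "predicts", "bearing_failure_within_72h"),
    ("ml_model", "top_feature", "spindle_vibration"),
]

def retrieve(kg, entity, depth=2):
    """Collect triples reachable from `entity` within `depth` hops."""
    frontier, facts = {entity}, []
    for _ in range(depth):
        nxt = set()
        for s, p, o in kg:
            if s in frontier and (s, p, o) not in facts:
                facts.append((s, p, o))
                nxt.add(o)
        frontier = nxt
    return facts

def build_prompt(prediction, facts):
    """Verbalize the retrieved triples as context for the LLM."""
    lines = [f"{s} {p.replace('_', ' ')} {o}." for s, p, o in facts]
    return (
        f"The ML model predicts: {prediction}.\n"
        "Relevant domain facts:\n" + "\n".join(lines) + "\n"
        "Explain this prediction to a factory operator in plain language."
    )

# Gather facts around both the prediction and the machine, deduplicated.
facts = retrieve(KG, "ml_model")
for fact in retrieve(KG, "milling_machine_3"):
    if fact not in facts:
        facts.append(fact)

prompt = build_prompt("bearing failure within 72 hours", facts)
print(prompt)  # this string would be sent to the LLM
```

The key design point the paper argues for is visible here: the LLM never sees the whole graph, only the selectively retrieved facts relevant to the current prediction, which grounds the generated explanation in domain-specific context.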
The intersection of Knowledge Graphs and LLMs represents a meaningful advancement in AI interpretability. Knowledge Graphs provide structured, domain-specific context while LLMs excel at natural language generation, creating a synergistic relationship that produces more accurate and useful explanations than either technology alone. The research validates this through comprehensive evaluation using both standard XAI benchmarks and specialized manufacturing-domain questions, measuring not just technical accuracy but also clarity and practical usefulness.
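The evaluation scores explanations along multiple criteria (accuracy, consistency, clarity, usefulness) across a question set. A minimal sketch of aggregating such rubric scores, assuming a 1–5 rating scale and sample ratings that are illustrative, not the paper's data:

```python
from statistics import mean

# Hypothetical aggregation over the four evaluation criteria named in
# the study. The 1-5 scale and sample ratings below are assumptions.
CRITERIA = ("accuracy", "consistency", "clarity", "usefulness")

def aggregate(ratings):
    """ratings: one dict per evaluated question, mapping each
    criterion to a 1-5 score; returns the per-criterion mean."""
    return {c: round(mean(r[c] for r in ratings), 2) for c in CRITERIA}

sample = [
    {"accuracy": 5, "consistency": 4, "clarity": 5, "usefulness": 4},
    {"accuracy": 4, "consistency": 4, "clarity": 3, "usefulness": 5},
    {"accuracy": 5, "consistency": 5, "clarity": 4, "usefulness": 4},
]
print(aggregate(sample))
```

Reporting per-criterion means rather than a single combined score keeps the trade-offs visible, e.g. an explanation set can be technically accurate yet score poorly on clarity.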
For manufacturing operations, this capability directly impacts decision-making quality and regulatory compliance. Factory managers and engineers can now understand ML model recommendations in human-readable terms, reducing the "black box" problem that traditionally hampers ML adoption in industrial settings. The empirical evidence of real-world applicability suggests this methodology could accelerate ML integration across manufacturing sectors.
Future development should focus on scaling this architecture to other industries facing similar interpretability challenges, optimizing the knowledge graph construction process for different domain contexts, and examining how explanation quality varies with model complexity and data characteristics.
- Knowledge Graphs combined with LLMs significantly improve the explainability of ML models by providing structured, domain-specific context for generating user-friendly explanations.
- The method was validated in manufacturing environments using 33 questions measuring accuracy, consistency, clarity, and usefulness of ML result explanations.
- This approach addresses the critical XAI challenge of making complex ML predictions understandable and actionable for non-technical stakeholders in industrial settings.
- The research demonstrates that LLMs can dynamically access and leverage structured knowledge to produce contextually appropriate explanations tailored to specific domains.
- Real-world applicability in manufacturing suggests this methodology could reduce adoption barriers for ML systems in industries requiring high decision transparency.