🧠 AI · 🟢 Bullish · Importance 7/10

From Business Events to Auditable Decisions: Ontology-Governed Graph Simulation for Enterprise AI

arXiv – CS AI | Hongyin Zhu, Jinming Liang, Mengjun Hou, Ruifan Tang, Xianbin Zhu, Jingyuan Yang, Yuanman Mao, Feng Wu
🤖 AI Summary

Researchers introduce LOM-action, an enterprise AI system that grounds LLM-based decisions in business ontologies and event-driven simulations rather than an unrestricted knowledge space. The approach achieves 93.82% accuracy and a 98.74% F1 score on decision chains, vastly outperforming larger models like DeepSeek-V3.2, while maintaining complete audit trails for enterprise compliance.

Analysis

The architecture of current LLM-based enterprise systems exhibits a fundamental design flaw: they generate fluent but ungrounded decisions without simulating how specific business contexts constrain the valid decision space. LOM-action addresses this by implementing event-driven ontology simulation, where business events trigger predefined scenario conditions encoded in an enterprise ontology, which then deterministically mutate a graph representation of business state in an isolated sandbox. This evolved simulation graph becomes the exclusive source for decision derivation, creating decisions that are both contextually valid and fully auditable.
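The flow described above can be sketched as follows. This is a minimal illustration of the idea, not the paper's actual implementation: the ontology entries, event types, and helper names are all assumptions.

```python
from copy import deepcopy

# Illustrative sketch: a business event matches scenario conditions encoded
# in an ontology, which deterministically mutate a sandboxed copy of the
# business-state graph. Decisions then derive only from the evolved graph,
# and the applied scenarios form the audit trail. All names are hypothetical.

ONTOLOGY = {
    # scenario name -> (trigger event type, condition on state, mutation)
    "late_shipment": (
        "shipment_delayed",
        lambda g: g["order"]["status"] == "in_transit",
        lambda g: g["order"].update(status="delayed", escalated=True),
    ),
}

def simulate(event, state):
    """Apply matching scenario mutations inside an isolated sandbox copy."""
    sandbox = deepcopy(state)   # isolation: the original state is untouched
    trail = []                  # audit trail of scenarios that fired
    for name, (etype, cond, mutate) in ONTOLOGY.items():
        if event["type"] == etype and cond(sandbox):
            mutate(sandbox)
            trail.append(name)
    return sandbox, trail

state = {"order": {"status": "in_transit"}}
evolved, trail = simulate({"type": "shipment_delayed"}, state)
# downstream decision logic reads only `evolved`; `trail` is the audit record
```

The key property the sketch preserves is that the mutation is deterministic and sandboxed: the original state is never modified, so every decision can be replayed from the event and the ontology alone.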

The research exposes what the authors term "illusive accuracy"—the phenomenon where larger models like DeepSeek-V3.2 achieve 80% accuracy but only 24-36% F1 scores on tool-chain execution. This reveals that accuracy alone masks systematic failures in decision quality when applied to multi-step enterprise workflows. LOM-action's four-fold F1 advantage demonstrates that architectural soundness, not model scale, determines trustworthiness in enterprise decision systems.
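The accuracy/F1 gap is easy to reproduce on toy data. The sketch below uses invented numbers and tool names purely to show the mechanism: correct final answers can coexist with predicted tool chains that barely overlap the gold chains.

```python
# Toy illustration of "illusive accuracy": final-answer accuracy can be
# high while tool-chain F1 (overlap of predicted vs. gold tool calls)
# stays low. All data here is made up for illustration.

def chain_f1(pred, gold):
    """F1 over the sets of predicted and gold tool calls in a chain."""
    tp = len(set(pred) & set(gold))
    if tp == 0:
        return 0.0
    precision = tp / len(set(pred))
    recall = tp / len(set(gold))
    return 2 * precision * recall / (precision + recall)

# 4 of 5 final answers correct -> 80% accuracy
final_correct = [True, True, True, True, False]
accuracy = sum(final_correct) / len(final_correct)   # 0.8

# ...but the predicted chain shares only one call with the gold chain
pred_chain = ["lookup_order", "guess_answer", "respond"]
gold_chain = ["lookup_order", "check_inventory", "notify_customer", "close_ticket"]
f1 = chain_f1(pred_chain, gold_chain)   # 2/7 ≈ 0.29, far below the accuracy
```

Accuracy only scores the endpoint; chain F1 scores every intermediate call, which is exactly where multi-step enterprise workflows fail.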

For the enterprise AI market, this work establishes ontology-governed simulation as the prerequisite for production-grade systems handling regulated domains. Organizations managing compliance-critical workflows face mounting pressure to deploy AI that produces not just correct answers but verifiable decision reasoning. The complete audit trail capability directly addresses GDPR, SOX, and industry-specific regulatory requirements that demand explainable AI.

Future development should focus on scaling ontology frameworks across heterogeneous enterprise environments and integrating with existing knowledge graphs. The dual-mode architecture—skill mode for routine decisions and reasoning mode for complex scenarios—suggests a pathway toward human-AI collaboration models that preserve accountability while improving operational efficiency.
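A dual-mode dispatcher of the kind described can be sketched in a few lines. The skill table, event types, and escalation behavior below are assumptions for illustration, not the paper's design.

```python
# Hedged sketch of dual-mode routing: routine event types hit a
# deterministic "skill" handler; anything unrecognized escalates to a
# "reasoning" mode (e.g. an LLM-backed planner). Names are hypothetical.

SKILLS = {
    "invoice_received": lambda e: {"action": "schedule_payment", "mode": "skill"},
    "stock_low":        lambda e: {"action": "reorder",          "mode": "skill"},
}

def route(event):
    handler = SKILLS.get(event["type"])
    if handler is not None:
        return handler(event)   # routine decision: deterministic skill mode
    return {"action": "escalate_to_reasoning", "mode": "reasoning"}

routine = route({"type": "stock_low"})        # handled by skill mode
complex_case = route({"type": "merger_bid"})  # falls through to reasoning mode
```

Keeping the skill path deterministic is what preserves accountability: only the escalated minority of cases needs the heavier, harder-to-audit reasoning machinery.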

Key Takeaways
  • Event-driven ontology simulation produces 98.74% F1 scores versus 24-36% for larger models, revealing architectural design matters more than model scale
  • Complete audit trails generated by LOM-action address enterprise compliance requirements in regulated industries
  • The "illusive accuracy" phenomenon shows that 80% accuracy masks systematic failures in multi-step enterprise decision chains
  • Dual-mode architecture enables skill-based execution for routine decisions and reasoning mode for complex scenarios
  • Isolated sandbox graph mutations ensure decisions derive only from scenario-valid business state, eliminating ungrounded reasoning