10 articles tagged with #knowledge-representation. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI Bullish · arXiv – CS AI · Mar 3 · 7/10
Researchers have identified that the 'reversal curse' in language models - their inability to infer 'B is A' from 'A is B' - can be overcome through bilinear representation structures. Training models on synthetic relational knowledge graphs creates internal geometries that enable consistent model editing and logical inference of reverse facts.
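The core of the bilinear idea can be sketched in a few lines (a toy illustration, not the paper's training setup): if a relation is a matrix, its reverse is the transpose, so forward and reverse facts share the same parameters and any edit moves both directions together.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
a, b = rng.normal(size=d), rng.normal(size=d)   # entity embeddings for A and B
W = rng.normal(size=(d, d))                     # relation matrix for "is"

def score(head, rel, tail):
    """Bilinear plausibility of the fact (head, rel, tail)."""
    return head @ rel @ tail

forward = score(a, W, b)      # "A is B"
reverse = score(b, W.T, a)    # "B is A", via the transposed relation
assert np.isclose(forward, reverse)   # editing W changes both facts at once
```

Because b·Wᵀ·a is the transpose of the scalar a·W·b, the two scores are identical by construction, which is the kind of built-in symmetry the reversal curse is said to lack.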
AI Neutral · arXiv – CS AI · 1d ago · 6/10
Researchers propose DALM, a Domain-Algebraic Language Model that constrains token generation through structured denoising across domain lattices rather than unconstrained decoding. The framework uses algebraic constraints across three phases (domain, relation, and concept resolution) to prevent cross-domain knowledge interference and improve factual accuracy in specialized domains.
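Constrained decoding of this general kind can be sketched as masking the output distribution (the domain lattice and all names here are hypothetical, not DALM's actual interface): tokens outside the active domain get zero probability before sampling.

```python
import math

# Hypothetical domain vocabulary sets, standing in for a domain lattice.
DOMAIN_VOCAB = {
    "medicine": {"dose", "patient", "mg"},
    "finance": {"bond", "yield", "rate"},
}

def constrained_dist(logits, vocab, domain):
    """Renormalize a softmax over only the tokens the domain allows."""
    allowed = DOMAIN_VOCAB[domain]
    masked = {t: l for t, l in zip(vocab, logits) if t in allowed}
    z = sum(math.exp(l) for l in masked.values())
    return {t: math.exp(l) / z for t, l in masked.items()}

vocab = ["dose", "bond", "mg", "rate"]
probs = constrained_dist([1.0, 2.0, 0.5, 0.1], vocab, "medicine")
# "bond" and "rate" are excluded; "dose" and "mg" share all the mass
```

Even though "bond" had the highest raw logit, it receives no probability once the medicine domain is active, which is the mechanism by which cross-domain interference is blocked.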
AI Neutral · arXiv – CS AI · 6d ago · 6/10
A new research paper proposes a governance framework for personal AI memory systems designed to function as 'companion' knowledge wikis that mirror user knowledge while compensating for epistemic failures like entrenchment and evidence suppression. The work addresses an emerging 2026 landscape of memory architectures for large language models through five operational mechanisms (TRIAGE, DECAY, CONTEXTUALIZE, CONSOLIDATE, AUDIT) aimed at preventing user-coupled drift in single-user knowledge systems.
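One of the five mechanisms, DECAY, can be sketched as simple exponential down-weighting (the formula and half-life are assumptions for illustration, not the paper's specification): memories fade unless reinforced, which counteracts entrenchment.

```python
def decayed_weight(weight, seconds_since_reinforced, half_life=30 * 86400):
    """Halve a memory's retrieval weight every `half_life` seconds
    (default: 30 days) since it was last reinforced."""
    return weight * 0.5 ** (seconds_since_reinforced / half_life)

fresh = decayed_weight(1.0, 0)               # full weight
month_old = decayed_weight(1.0, 30 * 86400)  # weight halved after one half-life
```

A retrieval layer ranking by `decayed_weight` would then surface recently reinforced knowledge over stale claims, rather than letting early entries dominate forever.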
AI Neutral · arXiv – CS AI · Apr 14 · 6/10
Researchers identify a critical architectural gap in leading AI agent frameworks (CoALA and JEPA), which lack an explicit Knowledge layer with distinct persistence semantics. The paper proposes a four-layer decomposition model with fundamentally different update mechanics for knowledge, memory, wisdom, and intelligence, with working implementations demonstrating feasibility.
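"Distinct persistence semantics" can be made concrete with a toy store (the layer names come from the summary; the update mechanics below are assumptions): knowledge keeps its full history, while memory is a bounded window that evicts old entries.

```python
from dataclasses import dataclass, field

@dataclass
class LayeredStore:
    knowledge: dict = field(default_factory=dict)  # versioned, append-only
    memory: list = field(default_factory=list)     # sliding window

    def assert_fact(self, key, value):
        # Knowledge updates append a new version rather than overwrite.
        self.knowledge.setdefault(key, []).append(value)

    def remember(self, event, cap=3):
        # Memory overwrites: only the most recent `cap` events survive.
        self.memory.append(event)
        del self.memory[:-cap]
```

The point of the decomposition is exactly this contrast: a correction to knowledge is auditable (both versions persist), while episodic memory is lossy by design.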
AI Neutral · arXiv – CS AI · Apr 14 · 6/10
Researchers present OIDA, a framework that adds epistemic structure to organizational knowledge systems by tracking commitment strength, contradiction status, and gaps in understanding. The framework introduces a QUESTION primitive that surfaces organizational ignorance with increasing urgency, addressing a capability absent from current retrieval-augmented generation (RAG) systems.
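A QUESTION primitive with escalating urgency might look like the following (the interface is invented for illustration; OIDA's actual design is not reproduced here): open questions gain urgency at each review cycle until someone answers them.

```python
class QuestionLog:
    """Hypothetical sketch: track open questions and escalate them."""

    def __init__(self):
        self._open = {}  # question text -> urgency score

    def ask(self, question):
        self._open.setdefault(question, 1)

    def answer(self, question):
        self._open.pop(question, None)  # answered questions stop escalating

    def review(self):
        for q in self._open:            # every unanswered question escalates
            self._open[q] += 1

    def most_urgent(self):
        return max(self._open, key=self._open.get)
```

The inversion relative to RAG is the key design choice: instead of only retrieving what is known, the system actively and increasingly loudly surfaces what is not.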
AI Neutral · arXiv – CS AI · Apr 14 · 6/10
This academic paper proposes a neuro-symbolic approach for AGI robots combining neural networks with formal logic reasoning using Belnap's 4-valued logic system. The framework enables robots to handle unknown information, inconsistencies, and paradoxes while maintaining security through controlled, axiom-based logical inference.
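Belnap's four values have a standard encoding as pairs of (supported, refuted) evidence bits; the connectives below follow that standard semantics, though the paper's inference engine itself is not shown here. B ("both") marks contradictions and N ("neither") marks unknowns, and crucially a contradiction does not classically explode.

```python
# Belnap's four truth values as (supported, refuted) evidence pairs.
T, F = (True, False), (False, True)
B, N = (True, True), (False, False)   # Both (contradiction), Neither (unknown)

def NOT(a):    return (a[1], a[0])               # swap support and refutation
def AND(a, b): return (a[0] and b[0], a[1] or b[1])
def OR(a, b):  return (a[0] or b[0], a[1] and b[1])

assert AND(B, T) == B   # conflicting evidence survives conjunction
assert OR(N, F) == N    # unknowns propagate through disjunction
assert NOT(B) == B      # negating a contradiction yields a contradiction
```

This containment of inconsistency is what makes the logic attractive for robots that must keep acting safely on partially contradictory sensor and knowledge inputs.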
AI Bearish · arXiv – CS AI · Mar 26 · 6/10
A research paper argues that Large Language Models lack true intelligence and understanding compared to humans, as they rely on written discourse rather than tacit knowledge built through social interaction. The authors demonstrate this through examples like the Monty Hall problem, showing that LLM improvements come from changes in training data rather than enhanced reasoning abilities.
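For readers unfamiliar with the Monty Hall problem invoked above, a quick standard simulation (the simulation is generic, not taken from the paper) confirms the counterintuitive answer: switching doors wins about 2/3 of the time, staying only about 1/3.

```python
import random

def monty_hall(switch, trials=100_000, seed=42):
    """Simulate the Monty Hall game and return the win rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car, pick = rng.randrange(3), rng.randrange(3)
        # The host opens a door hiding a goat that the player did not pick.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials
```

Switching wins exactly when the initial pick was wrong (probability 2/3), which is the step human solvers and, per the paper, LLMs alike tend to get wrong without exposure to the worked argument.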
AI Neutral · arXiv – CS AI · 1d ago · 5/10
Researchers conducted a systematic cross-domain study evaluating how large language models generate Competency Questions (CQs), natural-language requirements used in ontology engineering. Using both open-source models (Llama, KimiK2) and proprietary systems (GPT-4, Gemini 2.5), they identified measurable differences in readability, relevance, and structural complexity, revealing that LLM performance varies significantly by use case.
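A crude proxy for this kind of readability/complexity comparison (the paper's actual metrics are not reproduced here; the example CQs are invented) is to rank generated competency questions by word count and average word length.

```python
def complexity(cq):
    """Return (word count, mean word length) as a simple complexity key."""
    words = cq.split()
    return (len(words), sum(len(w) for w in words) / len(words))

# Hypothetical CQs produced by two models for the same requirement.
cqs = {
    "model_a": "What diseases does drug X treat?",
    "model_b": "Which pathophysiological conditions constitute "
               "therapeutic indications of compound X?",
}
simplest_first = sorted(cqs, key=lambda m: complexity(cqs[m]))
```

Even this naive key separates the terse phrasing from the jargon-heavy one; the study's point is that such differences are systematic across models and use cases.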
AI Neutral · arXiv – CS AI · Feb 27 · 4/10
A new academic paper demonstrates that AGM belief revision logic contains KM belief update logic, so KM belief update can be viewed as a special case of AGM belief revision. The research uses modal logic with three operators to prove this theoretical relationship between two foundational frameworks in artificial intelligence reasoning.
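The classical bridge between the two frameworks, due to Katsuno and Mendelzon (shown here for context; it is not necessarily this paper's modal construction), expresses update as pointwise revision over the models of the current belief state:

```latex
\psi \diamond \mu \;\equiv\; \bigvee_{w \,\models\, \psi} \left( w \ast \mu \right)
```

Here $\diamond$ is KM update, $\ast$ is AGM revision, and $w$ ranges over the models of $\psi$, each identified with its complete description; since update is thus definable from revision, it is natural to view it as a special case of the revision framework.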
AI Neutral · arXiv – CS AI · Feb 27 · 4/10
Researchers propose using category theory to formalize knowledge domains and construct analogies between different fields. The paper demonstrates this approach using the classic analogy between the solar system and hydrogen atom, showing how mathematical structures like functors and pullbacks can define analogical relationships.
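The functor idea can be decategorified into a toy structure-preserving map (a sketch of the concept, not the paper's formalism): objects and typed relations of the source domain must map onto objects and relations of the target domain.

```python
# Domains as sets of (subject, relation, object) triples.
SOLAR = {("planet", "orbits", "sun"), ("sun", "attracts", "planet")}
ATOM = {("electron", "orbits", "nucleus"), ("nucleus", "attracts", "electron")}

# The analogy: an object map and a relation map, as a functor would provide.
F_obj = {"planet": "electron", "sun": "nucleus"}
F_rel = {"orbits": "orbits", "attracts": "attracts"}

def preserves_structure(source, target, f_obj, f_rel):
    """Every source relation must land on a relation of the target."""
    return all(
        (f_obj[h], f_rel[r], f_obj[t]) in target for h, r, t in source
    )

assert preserves_structure(SOLAR, ATOM, F_obj, F_rel)
```

A mapping that sent "planet" to "nucleus" instead would fail this check, which is the categorical intuition: an analogy is valid only if it preserves the relational structure, not just the objects.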