y0news

#constraint-satisfaction News & Analysis

7 articles tagged with #constraint-satisfaction. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · 2d ago · 7/10
🧠

CircuitSynth: Reliable Synthetic Data Generation

CircuitSynth is a neuro-symbolic framework that addresses hallucinations and logical inconsistencies in LLM-generated synthetic data by combining probabilistic decision diagrams with optimization mechanisms to enforce hard constraints and distributional guarantees. The approach achieves 100% schema validity across complex benchmarks while outperforming existing methods in coverage, representing a significant advancement in reliable synthetic data generation for machine learning applications.
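CircuitSynth's actual mechanism (probabilistic decision diagrams with optimization) is not reproduced here; as a hedged illustration of the goal it targets, the sketch below shows the simplest possible way to obtain 100% schema validity from an unreliable generator: validate-and-reject against hard constraints. All schema fields and the toy generator are invented for illustration.

```python
import random

# Hard constraints a valid record must satisfy (invented example schema).
SCHEMA = {
    "age": lambda v: isinstance(v, int) and 0 <= v <= 120,
    "status": lambda v: v in {"active", "inactive"},
}

def generate_candidate():
    # Stand-in for an LLM-based sampler that sometimes violates the schema.
    return {
        "age": random.randint(-5, 130),
        "status": random.choice(["active", "inactive", "unknow"]),
    }

def is_valid(record):
    return all(check(record[key]) for key, check in SCHEMA.items())

def sample_valid(n, max_tries=10_000):
    """Rejection-sample until n schema-valid records are collected."""
    out, tries = [], 0
    while len(out) < n and tries < max_tries:
        tries += 1
        candidate = generate_candidate()
        if is_valid(candidate):
            out.append(candidate)
    return out

records = sample_valid(5)
```

Rejection sampling guarantees validity but wastes samples and says nothing about distributional coverage, which is exactly the gap constraint-aware generators like CircuitSynth claim to close.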

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

Localizing and Correcting Errors for LLM-based Planners

Researchers developed Localized In-Context Learning (L-ICL), a technique that significantly improves large language model performance on symbolic planning tasks by targeting specific constraint violations with minimal corrections. The method achieves 89% valid plan generation compared to 59% for best baselines, representing a major advancement in LLM reasoning capabilities.
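The core idea behind localized correction can be sketched without the LLM machinery: rather than regenerating a whole plan when it fails, find the first step whose precondition is violated and target only that step for repair. The precondition table and toy state model below are invented for illustration; L-ICL's actual in-context correction mechanism is not shown.

```python
def first_violation(plan, preconditions, state):
    """Return (index, action) of the first action whose precondition fails,
    or None if the plan is valid. Toy effect model: each executed action
    adds its own name to the state."""
    for i, action in enumerate(plan):
        needed = preconditions.get(action, set())
        if not needed <= state:
            return i, action
        state = state | {action}
    return None

# Invented mini-domain: "move" requires "pick", "place" requires "move".
precond = {"pick": set(), "move": {"pick"}, "place": {"move"}}

bad_plan = ["pick", "place", "move"]        # "place" comes too early
print(first_violation(bad_plan, precond, set()))   # -> (1, 'place')
```

Localizing the violation to step 1 lets a corrector patch a single action instead of discarding the two valid steps around it, which is the efficiency the paper's 89%-vs-59% result attributes to targeted repair.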

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10
🧠

JANUS: Structured Bidirectional Generation for Guaranteed Constraints and Analytical Uncertainty

Researchers introduce JANUS, a new AI framework that solves the 'Quadrilemma' in synthetic data generation by achieving high fidelity, logical constraint control, reliable uncertainty estimation, and computational efficiency simultaneously. The system uses Bayesian Decision Trees and a novel Reverse-Topological Back-filling algorithm to guarantee 100% constraint satisfaction while being 128x faster than existing methods.

AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

CONDESION-BENCH: Conditional Decision-Making of Large Language Models in Compositional Action Space

Researchers introduce CONDESION-BENCH, a new benchmark for evaluating how large language models make decisions in complex, real-world scenarios with compositional actions and conditional constraints. The benchmark addresses limitations in existing decision-making frameworks by incorporating variable-level, contextual, and allocation-level restrictions that better reflect actual decision-making environments.

AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

Reasoning in a Combinatorial and Constrained World: Benchmarking LLMs on Natural-Language Combinatorial Optimization

Researchers introduced NLCO, a benchmark for evaluating large language models on natural-language combinatorial optimization problems without external solvers or code generation. Testing across modern LLMs reveals that while high-performing models handle small instances well, performance degrades significantly as problem complexity increases, with graph-structured and bottleneck-objective problems proving particularly challenging.
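A toy instance in the spirit of the benchmark (all names and costs invented, not taken from NLCO) makes the scaling result concrete: a bottleneck-objective assignment stated in natural language is trivial to brute-force at n = 3 but has n! candidate solutions, which is why an LLM reasoning without a solver degrades sharply as instances grow.

```python
from itertools import permutations

# "Assign 3 workers to 3 tasks so that the LARGEST single assignment cost
# is as small as possible" -- a bottleneck objective of the kind the
# benchmark found hardest. cost[worker][task] is invented data.
cost = [
    [4, 2, 8],
    [6, 3, 7],
    [5, 9, 1],
]

def bottleneck_assignment(cost):
    n = len(cost)
    best = None
    for perm in permutations(range(n)):            # n! candidates: fine at
        worst = max(cost[i][perm[i]] for i in range(n))  # n=3, hopeless at n=20
        if best is None or worst < best[0]:
            best = (worst, perm)
    return best

print(bottleneck_assignment(cost))   # -> (4, (0, 1, 2))
```

Here the optimum assigns each worker i to task i, with a worst single cost of 4; verifying that by hand already requires checking all six permutations, and the search space grows factorially from there.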

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠

Pencil Puzzle Bench: A Benchmark for Multi-Step Verifiable Reasoning

Researchers introduced Pencil Puzzle Bench, a new framework for evaluating large language model reasoning capabilities using constraint-satisfaction problems. The benchmark tested 51 models across 300 puzzles, revealing significant performance improvements through increased reasoning effort and iterative verification processes.

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10
🧠

Provably Safe Generative Sampling with Constricting Barrier Functions

Researchers have developed a safety filtering framework that ensures AI generative models like diffusion models produce outputs that satisfy hard constraints without requiring model retraining. The approach uses Control Barrier Functions to create a 'constricting safety tube' that progressively tightens constraints during the generation process, achieving 100% constraint satisfaction across image generation, trajectory sampling, and robotic manipulation tasks.
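The "constricting safety tube" idea can be sketched in one dimension without the paper's Control Barrier Function formulation (which is not reproduced here): during iterative generation, each intermediate sample is projected into a feasible region whose slack shrinks to zero by the final step, so the last output satisfies the hard constraint by construction. The noise model, slack schedule, and interval constraint below are all invented for illustration.

```python
import random

LOW, HIGH = 0.0, 1.0     # final hard constraint: output must lie in [0, 1]
SLACK0 = 2.0             # initial slack around the constraint
T = 10                   # number of generation/refinement steps

def constricting_clip(x, t):
    """Project x into a tube around [LOW, HIGH] that tightens linearly,
    reaching zero slack at the last step t = T - 1."""
    slack = SLACK0 * (1 - t / (T - 1))
    return min(max(x, LOW - slack), HIGH + slack)

x = 5.0                                  # arbitrary far-from-feasible start
for t in range(T):
    x = x + random.gauss(0.0, 0.5)       # stand-in for a denoising update
    x = constricting_clip(x, t)          # project into the current tube

assert LOW <= x <= HIGH                  # guaranteed at the final step
```

Because the slack is exactly zero at the last projection, constraint satisfaction is 100% regardless of what the generative update does, mirroring the guarantee the paper claims without retraining the model.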