y0news

#software-quality News & Analysis

7 articles tagged with #software-quality. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · arXiv – CS AI · Apr 13 · 7/10

Demystifying the Silence of Correctness Bugs in PyTorch Compiler

Researchers have identified and systematically studied correctness bugs in PyTorch's compiler (torch.compile) that silently produce incorrect outputs without crashing or warning users. A new testing technique called AlignGuard has detected 23 previously unknown bugs, with over 60% classified as high-priority by the PyTorch team, highlighting a critical reliability gap in a core tool for AI infrastructure optimization.
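The summary doesn't describe AlignGuard's internals, but the core idea behind catching silent correctness bugs is differential testing: run a reference implementation and an "optimized" one (here, standing in for torch.compile) on the same inputs and flag any divergence. A minimal stdlib-only sketch; the function names and the toy rounding bug are illustrative, not from the paper:

```python
import random

def differential_test(reference, optimized, gen_input, trials=100, tol=1e-9):
    """Run both implementations on random inputs and collect silent
    mismatches: cases where outputs diverge without any exception."""
    mismatches = []
    for _ in range(trials):
        x = gen_input()
        ref, opt = reference(x), optimized(x)
        if abs(ref - opt) > tol:
            mismatches.append((x, ref, opt))
    return mismatches

# Toy stand-ins: the "optimized" path silently rounds its result --
# a wrong-answer bug that never crashes, exactly the failure mode
# the paper is concerned with.
reference = lambda xs: sum(xs)
buggy_optimized = lambda xs: round(sum(xs), 2)

gen = lambda: [random.uniform(0.0, 1.0) for _ in range(5)]
found = differential_test(reference, buggy_optimized, gen)
```

Because the buggy path raises no error, only the output comparison reveals it, which is why such bugs stay "silent" in normal use.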

AI · Bearish · arXiv – CS AI · Apr 10 · 7/10

Beyond Functional Correctness: Design Issues in AI IDE-Generated Large-Scale Projects

Researchers evaluated Cursor, an AI-powered IDE, on its ability to generate large-scale software projects and found that it achieves 91% functional correctness but produces significant design issues, including code duplication, complexity violations, and breaches of framework best practices, that threaten long-term maintainability.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

WebTestBench: Evaluating Computer-Use Agents towards End-to-End Automated Web Testing

Researchers introduced WebTestBench, a new benchmark for evaluating automated web testing using AI agents and large language models. The study reveals significant gaps between current AI capabilities and industrial deployment needs, with LLMs struggling with test completeness, defect detection, and long-term interaction reliability.

AI · Bearish · arXiv – CS AI · Apr 10 · 6/10

A Study of LLMs' Preferences for Libraries and Programming Languages

A new empirical study reveals that eight major LLMs exhibit systematic biases in code generation, overusing popular libraries like NumPy in 45% of cases and defaulting to Python even when unsuitable, prioritizing familiarity over task-specific optimality. The findings highlight gaps in current LLM evaluation methodologies and underscore the need for targeted improvements in training data diversity and benchmarking standards.
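The study's exact methodology isn't given in this summary, but the basic probe for library-preference bias, tallying which libraries LLM-generated code actually imports, can be sketched with the stdlib `ast` module. The sample snippets below are hypothetical stand-ins for real model outputs:

```python
import ast
from collections import Counter

def import_counts(samples):
    """Tally top-level libraries imported across generated code samples --
    a crude probe for the kind of library bias the study measures."""
    counts = Counter()
    for src in samples:
        for node in ast.walk(ast.parse(src)):
            if isinstance(node, ast.Import):
                counts.update(a.name.split(".")[0] for a in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                counts[node.module.split(".")[0]] += 1
    return counts

# Hypothetical generated samples standing in for real LLM outputs.
samples = [
    "import numpy as np\nnp.zeros(3)",
    "from numpy import array",
    "import pandas as pd",
]
counts = import_counts(samples)
```

Parsing with `ast` rather than string matching avoids counting the word "numpy" in comments or strings as an import.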

AI · Neutral · arXiv – CS AI · Mar 27 · 6/10

Factors Influencing the Quality of AI-Generated Code: A Synthesis of Empirical Evidence

A systematic literature review of 24 studies reveals that AI-generated code quality depends on multiple factors including prompt design, task specification, and developer expertise. The research shows variable outcomes for code correctness, security, and maintainability, indicating that AI-assisted development requires careful human oversight and validation.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

LLMLOOP: Improving LLM-Generated Code and Tests through Automated Iterative Feedback Loops

Researchers have developed LLMLOOP, a framework that automatically refines LLM-generated code and test cases through five iterative loops addressing compilation errors, static analysis issues, test failures, and quality improvements. The tool was evaluated on the HumanEval-X benchmark, where it improved the quality of AI-generated code outputs.
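LLMLOOP's five loops aren't spelled out in this summary, but the general shape of such an iterative feedback loop, generate, run checks, feed errors back, regenerate, can be sketched in a few lines. The `fake_llm` stub and the single compile check below are illustrative, not the paper's implementation:

```python
def refine(generate, checks, max_loops=5):
    """Iteratively regenerate code until all checks pass or the loop
    budget runs out. `generate(feedback)` stands in for an LLM call;
    `checks` are (name, fn) pairs returning an error message or None."""
    feedback = None
    for loop in range(max_loops):
        code = generate(feedback)
        errors = [f"{name}: {msg}" for name, check in checks
                  if (msg := check(code))]
        if not errors:
            return code, loop + 1
        feedback = "\n".join(errors)  # fed back into the next generation
    return code, max_loops

def compiles(code):
    """Check loop stage: report a syntax error, or None if clean."""
    try:
        compile(code, "<gen>", "exec")
        return None
    except SyntaxError as e:
        return str(e)

# Stub "LLM": first attempt is broken, second (after feedback) is fixed.
attempts = iter(["def f(:\n    pass", "def f():\n    return 42"])
fake_llm = lambda feedback: next(attempts)

code, loops = refine(fake_llm, [("compile", compiles)])
```

A real pipeline would add further check stages (static analysis, test execution) in the same `(name, fn)` shape, which is how the framework's multiple loops compose.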

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10

OBsmith: LLM-Powered JavaScript Obfuscator Testing

Researchers introduce OBsmith, an LLM-powered framework that tests JavaScript obfuscators for correctness bugs that can silently alter program functionality. The tool discovered 11 previously unknown bugs that existing JavaScript fuzzers failed to detect, highlighting critical gaps in obfuscation quality assurance.