y0news

#zero-shot News & Analysis

31 articles tagged with #zero-shot. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

🧠 AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Zero-Shot Quantization via Weight-Space Arithmetic

Researchers have developed a zero-shot quantization method that transfers robustness between AI models through weight-space arithmetic, improving post-training quantization performance by up to 60% without requiring additional training. This breakthrough enables low-cost deployment of extremely low-bit models by extracting 'quantization vectors' from donor models to patch receiver models.
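As a toy illustration of the weight-space idea (all weights and names below are invented; the paper's actual procedure operates on full model checkpoints):

```python
def uniform_quantize(ws, bits=4):
    """Symmetric uniform post-training quantization of a weight list."""
    scale = max(abs(w) for w in ws) / (2 ** (bits - 1) - 1)
    return [round(w / scale) * scale for w in ws]

# Invented toy weights: a donor model in its base and quantization-robust
# forms, and a receiver model to patch.
w_donor_base   = [0.80, -0.31, 0.05, 0.44]
w_donor_robust = [0.70, -0.28, 0.02, 0.40]
w_receiver     = [0.55, -0.62, 0.11, 0.37]

# "Quantization vector": the weight-space delta that made the donor robust
q_vec = [r - b for r, b in zip(w_donor_robust, w_donor_base)]

# Patch the receiver in weight space, then quantize post-training —
# zero-shot in the sense that no extra training happens on the receiver
w_patched = uniform_quantize([w + q for w, q in zip(w_receiver, q_vec)], bits=4)
```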

🤖 AI × Crypto · Bullish · arXiv – CS AI · Mar 17 · 7/10

Benchmarking Zero-Shot Reasoning Approaches for Error Detection in Solidity Smart Contracts

Researchers benchmarked state-of-the-art LLMs for detecting vulnerabilities in Solidity smart contracts using zero-shot prompting strategies. The study found that Chain-of-Thought and Tree-of-Thought approaches significantly improved recall (95-99%) but reduced precision, while Claude 3 Opus achieved the best performance with an F1 score of 90.8 in vulnerability classification.

🧠 Claude
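A minimal sketch of what a zero-shot Chain-of-Thought prompt for this task could look like (the class list and wording are invented, not the benchmark's taxonomy; the model call itself is omitted):

```python
# Illustrative vulnerability classes, not the paper's actual label set.
VULN_CLASSES = ["reentrancy", "integer overflow", "unchecked call", "access control"]

def build_cot_prompt(source: str) -> str:
    """Zero-shot CoT: no labeled examples, just an instruction to reason stepwise."""
    return (
        "You are a Solidity security auditor. Think step by step about how an\n"
        "attacker could exploit the contract below, then answer with exactly one\n"
        f"of these classes or 'none': {', '.join(VULN_CLASSES)}.\n\n"
        f"Contract:\n{source}"
    )

prompt = build_cot_prompt("contract Vault { function withdraw() public { /* ... */ } }")
```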
🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

PrototypeNAS: Rapid Design of Deep Neural Networks for Microcontroller Units

PrototypeNAS is a new zero-shot neural architecture search method that rapidly designs and optimizes deep neural networks for microcontroller units without requiring extensive training. The system uses a three-step approach combining structural optimization, ensemble zero-shot proxies, and Hypervolume subset selection to identify efficient models within minutes that can run on resource-constrained edge devices.

🧠 AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

BiCLIP: Domain Canonicalization via Structured Geometric Transformation

Researchers introduce BiCLIP, a new framework that improves vision-language models' ability to adapt to specialized domains through geometric transformations. The approach achieves state-of-the-art results across 11 benchmarks while maintaining simplicity and low computational requirements.

🧠 AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

Reinforcing Numerical Reasoning in LLMs for Tabular Prediction via Structural Priors

Researchers propose PRPO (Permutation Relative Policy Optimization), a reinforcement learning framework that enhances large language models' numerical reasoning capabilities for tabular data prediction. The method achieves performance comparable to supervised baselines while excelling in zero-shot scenarios, with an 8B parameter model outperforming much larger models by up to 53.17%.

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

MPFlow: Multi-modal Posterior-Guided Flow Matching for Zero-Shot MRI Reconstruction

Researchers developed MPFlow, a new zero-shot MRI reconstruction framework that uses multi-modal data and rectified flow to improve medical imaging quality. The system reduces tumor hallucinations by 15% while using 80% fewer sampling steps compared to existing diffusion methods, potentially advancing AI applications in medical diagnostics.

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

GeoSeg: Training-Free Reasoning-Driven Segmentation in Remote Sensing Imagery

Researchers introduce GeoSeg, a zero-shot, training-free framework for AI-driven segmentation of remote sensing imagery that uses multimodal language models for reasoning without requiring specialized training data. The system addresses domain-specific challenges in satellite and aerial image analysis through bias-aware coordinate refinement and dual-route prompting mechanisms.

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

TSPulse: Tiny Pre-Trained Models with Disentangled Representations for Rapid Time-Series Analysis

IBM researchers introduce TSPulse, an ultra-lightweight pre-trained AI model with only 1M parameters that achieves state-of-the-art performance in time-series analysis tasks. The model uses disentangled representations across temporal, spectral, and semantic views, delivering significant performance gains of 20-50% across multiple diagnostic tasks while being 10-100x smaller than competing models.

๐Ÿข Hugging Face
🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 7/10 · 3

MoECLIP: Patch-Specialized Experts for Zero-shot Anomaly Detection

Researchers have developed MoECLIP, a new AI architecture that improves zero-shot anomaly detection by using specialized experts to analyze different image patches. The system outperforms existing methods across 14 benchmark datasets in industrial and medical domains by dynamically routing patches to specialized LoRA experts while maintaining CLIP's generalization capabilities.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

ZeroDVFS: Zero-Shot LLM-Guided Core and Frequency Allocation for Embedded Platforms

Researchers developed ZeroDVFS, a system that uses Large Language Models to optimize power management in embedded systems without requiring extensive profiling. The system achieves 7.09 times better energy efficiency and enables zero-shot deployment for new workloads in under 5 seconds through LLM-based code analysis.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

Relational Transformer: Toward Zero-Shot Foundation Models for Relational Data

Researchers from Stanford introduce the Relational Transformer (RT), a new AI architecture that can work with relational databases without task-specific fine-tuning. The 22M parameter model achieves 93% of the performance of fully supervised models on binary classification tasks, significantly outperforming a 27B parameter LLM, which reaches 84%.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5

CourtGuard: A Model-Agnostic Framework for Zero-Shot Policy Adaptation in LLM Safety

Researchers introduce CourtGuard, a new framework for AI safety that uses retrieval-augmented multi-agent debate to evaluate LLM outputs without requiring expensive retraining. The system achieves state-of-the-art performance across 7 safety benchmarks and demonstrates zero-shot adaptability to new policy requirements, offering a more flexible approach to AI governance.

🧠 AI · Bullish · OpenAI News · Jan 5 · 7/10 · 5

CLIP: Connecting text and images

OpenAI introduces CLIP, a neural network that learns visual concepts from natural language supervision and can perform visual classification tasks without specific training. CLIP demonstrates zero-shot capabilities similar to GPT-2 and GPT-3, enabling it to recognize visual categories simply by providing their names.
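The zero-shot recipe reduces to nearest-caption search in a shared image-text embedding space. A toy sketch with hand-made 3-d vectors standing in for CLIP's image and text encoders (real CLIP embeddings are learned and much higher-dimensional):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Invented stand-ins for encoder outputs in the shared space
image_emb = [0.9, 0.1, 0.2]
text_embs = {
    "a photo of a dog": [0.8, 0.2, 0.1],
    "a photo of a cat": [0.1, 0.9, 0.3],
}

# Zero-shot classification: pick the caption whose embedding is closest;
# new categories are added just by naming them, with no retraining
best = max(text_embs, key=lambda c: cosine(image_emb, text_embs[c]))
```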

🧠 AI · Neutral · arXiv – CS AI · 2d ago · 6/10

Frugal Knowledge Graph Construction with Local LLMs: A Zero-Shot Pipeline, Self-Consistency and Wisdom of Artificial Crowds

Researchers demonstrate a zero-shot knowledge graph construction pipeline using local open-source LLMs on consumer hardware, achieving 0.70 F1 on document relations and 0.55 exact match on multi-hop reasoning through ensemble methods. The study reveals that strong model consensus often signals collective hallucination rather than accuracy, challenging traditional ensemble assumptions while maintaining low computational costs and carbon footprint.
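The wisdom-of-crowds step amounts to majority voting over triples extracted by independent runs; a stdlib sketch with invented triples (the consensus check mirrors the paper's caution that unanimity can mark shared hallucination and deserves auditing rather than extra trust):

```python
from collections import Counter

# Hypothetical relation triples from three independent local-LLM runs
runs = [
    [("Marie Curie", "born_in", "Warsaw"), ("Marie Curie", "won", "Nobel Prize")],
    [("Marie Curie", "born_in", "Warsaw")],
    [("Marie Curie", "born_in", "Warsaw"), ("Marie Curie", "won", "Nobel Prize")],
]

votes = Counter(t for run in runs for t in run)

# Keep triples a strict majority of runs agree on
majority = [t for t, c in votes.items() if c > len(runs) / 2]

# Flag unanimous triples separately: strong consensus is a signal to audit,
# not automatic evidence of accuracy
unanimous = [t for t, c in votes.items() if c == len(runs)]
```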

🧠 AI · Bullish · TechCrunch – AI · Apr 6 · 6/10

OpenAI alums have been quietly investing from a new, potentially $100M fund

Zero Shot, a new venture capital fund with strong connections to OpenAI, is targeting $100 million for its inaugural fund and has already begun making investments. The fund represents another significant capital pool entering the AI investment landscape from industry insiders.

๐Ÿข OpenAI
🧠 AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

OmniCustom: Sync Audio-Video Customization Via Joint Audio-Video Generation Model

Researchers introduce OmniCustom, a new AI framework that simultaneously customizes both video identity and audio timbre in generated content. The system uses reference images and audio samples to create synchronized audio-video content while allowing users to specify spoken content through text prompts.

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 5/10

Topological Alignment of Shared Vision-Language Embedding Space

Researchers introduce ToMCLIP, a new framework that improves multilingual vision-language models by using topological alignment to better preserve the geometric structure of shared embedding spaces. The method shows enhanced performance on zero-shot classification and multilingual image retrieval tasks.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7

Zero-Shot and Supervised Bird Image Segmentation Using Foundation Models: A Dual-Pipeline Approach with Grounding DINO 1.5, YOLOv11, and SAM 2.1

Researchers developed a dual-pipeline framework for bird image segmentation using foundation models including Grounding DINO 1.5, YOLOv11, and SAM 2.1. The supervised pipeline achieved state-of-the-art results with 0.912 IoU on the CUB-200-2011 dataset, while the zero-shot pipeline achieved 0.831 IoU using only text prompts.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 8

Unified Vision-Language Modeling via Concept Space Alignment

Researchers introduce V-SONAR, a vision-language embedding system that extends text-only SONAR to support 1500+ languages with vision capabilities. The system demonstrates state-of-the-art performance on video captioning and multilingual vision tasks through V-LCM, which combines vision and language processing in a unified framework.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 5

GateLens: A Reasoning-Enhanced LLM Agent for Automotive Software Release Analytics

Researchers introduced GateLens, an LLM-based system that uses Relational Algebra as an intermediate layer to analyze complex tabular data more reliably than traditional approaches. The system demonstrated over 80% reduction in analysis time in automotive software analytics while maintaining high accuracy, outperforming existing Chain-of-Thought methods.
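A toy illustration of relational algebra as an intermediate layer: instead of answering directly over the raw table, the LLM would emit an expression such as `project(select(results, status == "fail"), ["test_id"])`, which is then executed deterministically. Operators and table contents below are invented:

```python
# Invented release-test table
rows = [
    {"test_id": "T1", "status": "pass"},
    {"test_id": "T2", "status": "fail"},
    {"test_id": "T3", "status": "fail"},
]

def select(table, pred):
    """Relational selection: keep rows matching the predicate."""
    return [r for r in table if pred(r)]

def project(table, cols):
    """Relational projection: keep only the named columns."""
    return [{c: r[c] for c in cols} for r in table]

# Deterministic execution of the (LLM-emitted) algebra expression
failed = project(select(rows, lambda r: r["status"] == "fail"), ["test_id"])
```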

🧠 AI · Bullish · arXiv – CS AI · Mar 2 · 6/10 · 11

Multimodal Optimal Transport for Unsupervised Temporal Segmentation in Surgical Robotics

Researchers developed TASOT, an unsupervised AI method for surgical phase recognition that combines visual and textual information without requiring expensive large-scale pre-training. The approach showed significant improvements over existing zero-shot methods across multiple surgical datasets, demonstrating that effective surgical AI can be achieved with more efficient training methods.

🧠 AI · Bearish · arXiv – CS AI · Mar 2 · 6/10 · 15

The False Promise of Zero-Shot Super-Resolution in Machine-Learned Operators

Research reveals that machine-learned operators (MLOs) fail at zero-shot super-resolution, unable to accurately perform inference at resolutions different from their training data. The study identifies key limitations in frequency extrapolation and resolution interpolation, proposing a multi-resolution training protocol as a solution.

🧠 AI · Neutral · arXiv – CS AI · Apr 7 · 4/10

Can LLMs Reason About Attention? Towards Zero-Shot Analysis of Multimodal Classroom Behavior

Researchers developed a privacy-preserving AI system that analyzes classroom videos to understand student engagement using pose detection and gaze tracking, with data processed by the QwQ-32B-Reasoning LLM. The system deletes original video frames and retains only geometric coordinates to comply with FERPA privacy regulations.

🧠 AI · Neutral · arXiv – CS AI · Apr 6 · 4/10

Expressive Prompting: Improving Emotion Intensity and Speaker Consistency in Zero-Shot TTS

Researchers developed a two-stage prompt selection strategy for zero-shot text-to-speech synthesis that improves emotional intensity and speaker consistency. The method evaluates prompts using prosodic features, audio quality, and text-emotion coherence in a static stage, then uses textual similarity for dynamic prompt selection during synthesis.
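A crude stdlib sketch of the two-stage shape (all scores, weights, and the lexical-overlap similarity are invented stand-ins for the paper's prosodic/quality/coherence features and textual-similarity measure):

```python
# Invented prompt pool with made-up feature scores
prompts = {
    "p1": {"prosody": 0.9, "quality": 0.8, "coherence": 0.7, "words": {"happy", "news"}},
    "p2": {"prosody": 0.4, "quality": 0.9, "coherence": 0.5, "words": {"sad", "story"}},
    "p3": {"prosody": 0.8, "quality": 0.7, "coherence": 0.9, "words": {"great", "day"}},
}

def static_score(p):
    """Stage 1: weighted offline quality score (weights are arbitrary here)."""
    return 0.4 * p["prosody"] + 0.3 * p["quality"] + 0.3 * p["coherence"]

def text_similarity(words, target):
    """Stage 2 stand-in: lexical overlap instead of a learned similarity."""
    return len(words & target) / max(len(words | target), 1)

# Static stage filters the pool; dynamic stage picks per-utterance
pool = {k: v for k, v in prompts.items() if static_score(v) > 0.6}
target_words = {"happy", "news"}
chosen = max(pool, key=lambda k: text_similarity(pool[k]["words"], target_words))
```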

🧠 AI · Neutral · arXiv – CS AI · Mar 11 · 4/10

VoxEmo: Benchmarking Speech Emotion Recognition with Speech LLMs

Researchers introduce VoxEmo, a comprehensive benchmark for evaluating Speech Large Language Models on emotion recognition tasks across 35 emotion corpora and 15 languages. The benchmark addresses evaluation challenges in open text generation and introduces novel protocols that better align with human subjective emotion perception.

Page 1 of 2