37 articles tagged with #prompt-engineering. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv — CS AI · Mar 3 · 6/10 · 7
🧠 Researchers introduce ROSA2, a framework that improves Large Language Model interactions by simultaneously optimizing both prompts and model parameters during test-time adaptation. The approach outperformed baselines by 30% on mathematical tasks while reducing interaction turns by 40%.
AI · Bullish · arXiv — CS AI · Mar 3 · 6/10 · 3
🧠 Researchers developed a meta-learning approach for Large Multimodal Models (LMMs) that uses distilled soft prompts to improve few-shot visual question answering performance. The method outperformed traditional in-context learning by 21.2% and parameter-efficient finetuning by 7.7% on VQA tasks.
AI · Bullish · arXiv — CS AI · Mar 3 · 6/10 · 4
🧠 Researchers introduce MetaTuner, a new framework that combines prompt optimization with fine-tuning for Large Language Models, using shared neural networks to discover optimal combinations of prompts and parameters. The approach addresses the discrete-continuous optimization challenge through supervised regularization and demonstrates consistent performance improvements across benchmarks.
AI · Neutral · arXiv — CS AI · Mar 2 · 6/10 · 16
🧠 Research reveals that large language models don't significantly benefit from conditioning on their own previous responses in multi-turn conversations. The study found that omitting assistant history can reduce context lengths by up to 10x while maintaining response quality, and in some cases even improves performance by avoiding context pollution, where models over-condition on previous responses.
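The finding above maps to a simple client-side optimization: when assembling the next request, drop earlier assistant turns from the transcript. A minimal sketch, assuming the common chat-completions message shape (the function name is mine, not from the paper):

```python
def prune_assistant_history(messages):
    """Keep system and user turns but drop prior assistant replies,
    shrinking the context sent with each new request."""
    return [m for m in messages if m["role"] != "assistant"]

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize chapter 1."},
    {"role": "assistant", "content": "(long summary...)"},
    {"role": "user", "content": "Now summarize chapter 2."},
]
pruned = prune_assistant_history(history)  # 3 messages, no assistant turns
```

Whether this helps is task-dependent; the study's claim is that for many multi-turn workloads the assistant's own prior outputs add length without adding useful signal.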
AI · Neutral · arXiv — CS AI · Feb 27 · 6/10 · 5
🧠 Researchers propose Natural Language Declarative Prompting (NLD-P) as a governance framework to manage prompt engineering challenges as large language models evolve. The method separates different control elements into modular components to maintain stable AI system behavior despite model updates and drift.
AI · Bullish · Microsoft Research Blog · Dec 10 · 6/10 · 3
🧠 Microsoft Research introduces Promptions, a tool that helps developers add dynamic UI controls to chat interfaces for more precise AI prompting. The system allows users to guide generative AI responses through intuitive controls rather than complex written instructions.
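This is not Promptions itself, but the underlying idea, rendering structured control values into prompt text so users adjust widgets instead of writing instructions, can be sketched. The control names and templates below are hypothetical:

```python
def render_prompt(base_prompt, controls):
    """Append natural-language instructions derived from UI control
    values (e.g. a tone dropdown or a length slider) to a base prompt."""
    templates = {
        "tone": "Use a {} tone.",
        "length": "Keep the answer under {} words.",
        "audience": "Write for {}.",
    }
    lines = [base_prompt]
    for name, value in controls.items():
        if name in templates:
            lines.append(templates[name].format(value))
    return "\n".join(lines)

prompt = render_prompt(
    "Explain transformers.",
    {"tone": "formal", "length": 100},
)
```

The design point is that the UI owns the mapping from control state to prompt language, so the same slider produces consistent phrasing every time.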
AI · Neutral · OpenAI News · Jan 23 · 6/10 · 7
🧠 This document outlines a multi-layered AI safety framework based on OpenAI's established approaches, focusing on protections against prompt engineering, jailbreaks, and privacy and security concerns. It details model and product mitigations, external red teaming efforts, safety evaluations, and ongoing refinement of safeguards.
AI · Neutral · arXiv — CS AI · Apr 6 · 4/10
🧠 Researchers developed a two-stage prompt selection strategy for zero-shot text-to-speech synthesis that improves emotional intensity and speaker consistency. The method evaluates prompts using prosodic features, audio quality, and text-emotion coherence in a static stage, then uses textual similarity for dynamic prompt selection during synthesis.
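A rough sketch of that two-stage shape, where the scoring functions are placeholders for the paper's prosody, quality, and coherence metrics rather than their actual definitions:

```python
def static_prompt_pool(candidates, score_prosody, score_quality,
                       score_coherence, top_k=5):
    """Stage 1 (offline): rank candidate reference prompts by the sum
    of three scoring callables and keep the top_k as a reusable pool."""
    ranked = sorted(
        candidates,
        key=lambda p: score_prosody(p) + score_quality(p) + score_coherence(p),
        reverse=True,
    )
    return ranked[:top_k]

def dynamic_select(pool, target_text, text_similarity):
    """Stage 2 (at synthesis time): pick the pooled prompt whose
    transcript is most similar to the text being synthesized."""
    return max(pool, key=lambda p: text_similarity(p["text"], target_text))

candidates = [
    {"text": "happy day", "s": 0.9},
    {"text": "sad night", "s": 0.5},
    {"text": "angry shout", "s": 0.7},
]
pool = static_prompt_pool(candidates, lambda p: p["s"],
                          lambda p: 0, lambda p: 0, top_k=2)
word_overlap = lambda a, b: len(set(a.split()) & set(b.split()))
best = dynamic_select(pool, "a happy morning", word_overlap)
```

Splitting the work this way keeps the expensive acoustic scoring out of the synthesis loop; only a cheap text-similarity lookup runs per request.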
AI · Neutral · arXiv — CS AI · Mar 17 · 5/10
🧠 Researchers evaluated the semantic fragility of text-to-audio generation systems, finding that small changes in prompts can lead to substantial variations in generated audio output. While larger models like MusicGen-large showed better semantic consistency, all models exhibited persistent divergence in acoustic and temporal characteristics even when semantic similarity remained high.
AI · Neutral · arXiv — CS AI · Feb 27 · 4/10 · 3
🧠 Researchers tested GPT-5's ability to perform citation context analysis by examining how different prompt designs affect the model's interpretative readings of academic citations. The study found that while GPT-5 produces consistent surface classifications, prompt scaffolding significantly influences which interpretative frameworks and vocabularies the model emphasizes in deeper analysis.
AI · Neutral · Hugging Face Blog · Jun 12 · 5/10 · 7
🧠 The article examines how long prompts in large language models can block other requests, creating performance bottlenecks. It focuses on optimization strategies to improve LLM performance and request handling efficiency.
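The bottleneck being described is head-of-line blocking during prefill. A toy scheduler makes it concrete: under first-come-first-served prefill, a long prompt delays every request behind it, while splitting prefills into chunks and interleaving them lets short requests start almost immediately. Chunked prefill is one common mitigation; the simulation below is illustrative, not the article's implementation:

```python
def fcfs_wait(prefill_times):
    """FCFS prefill: each request waits for every earlier prefill to
    finish before its own begins. Returns per-request start times."""
    starts, clock = [], 0.0
    for t in prefill_times:
        starts.append(clock)
        clock += t
    return starts

def chunked_wait(prefill_times, chunk=1.0):
    """Chunked prefill: each prompt's prefill is split into fixed-size
    chunks processed round-robin, so short requests begin sooner."""
    remaining = list(prefill_times)
    starts = [None] * len(remaining)
    clock = 0.0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r <= 0:
                continue
            if starts[i] is None:
                starts[i] = clock  # first chunk of this request begins
            step = min(chunk, r)
            remaining[i] -= step
            clock += step
    return starts

# One 10-unit prompt followed by a 1-unit prompt:
fcfs = fcfs_wait([10.0, 1.0])           # short request starts at t=10
chunked = chunked_wait([10.0, 1.0])     # short request starts at t=1
```

The trade-off is that chunking stretches the long request's own completion time slightly in exchange for much better tail latency for everyone else.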
AI · Neutral · Hugging Face Blog · Apr 30 · 3/10 · 8
🧠 The article title 'Improving Prompt Consistency with Structured Generations' suggests a piece on prompt-engineering techniques, but no article body was available for analysis, so no details about the specific methods or implications could be extracted.