y0news

#federated-learning News & Analysis

53 articles tagged with #federated-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · 2d ago · 7/10
🧠

PAC-BENCH: Evaluating Multi-Agent Collaboration under Privacy Constraints

Researchers introduce PAC-Bench, a benchmark for evaluating how AI agents collaborate while maintaining privacy constraints. The study reveals that privacy protections significantly degrade multi-agent system performance and identifies coordination failures as a critical unsolved challenge requiring new technical approaches.

AI · Bearish · arXiv – CS AI · 3d ago · 7/10
🧠

XFED: Non-Collusive Model Poisoning Attack Against Byzantine-Robust Federated Classifiers

Researchers have developed XFED, a novel model poisoning attack that compromises federated learning systems without requiring attackers to communicate or coordinate with each other. The attack successfully bypasses eight state-of-the-art defenses, revealing fundamental security vulnerabilities in FL deployments that were previously underestimated.

AI · Neutral · arXiv – CS AI · Apr 6 · 7/10
🧠

Enhancing Robustness of Federated Learning via Server Learning

Researchers propose a new heuristic algorithm combining server learning with client update filtering and geometric median aggregation to improve federated learning robustness against malicious attacks. The approach maintains model accuracy even when over 50% of clients are malicious and works with non-identical data distributions across clients.
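
The paper's full algorithm isn't reproduced here, but its geometric-median aggregation step can be sketched with Weiszfeld's iteration. This is an illustrative stand-in, not the authors' implementation; the client setup and tolerances are assumptions.

```python
import numpy as np

def geometric_median(updates, n_iters=100, eps=1e-8):
    """Weiszfeld's algorithm: the geometric median minimizes the sum of
    Euclidean distances to all points, so a few outlying (malicious)
    client updates cannot drag the aggregate far from the honest cluster."""
    points = np.stack(updates)          # shape (n_clients, dim)
    median = points.mean(axis=0)        # initialize at the coordinate-wise mean
    for _ in range(n_iters):
        dists = np.linalg.norm(points - median, axis=1)
        dists = np.maximum(dists, eps)  # avoid division by zero at a data point
        weights = 1.0 / dists
        new_median = (weights[:, None] * points).sum(axis=0) / weights.sum()
        if np.linalg.norm(new_median - median) < eps:
            break
        median = new_median
    return median

# Hypothetical round: nine honest clients cluster near [1, 1];
# one attacker sends a huge poisoned update.
rng = np.random.default_rng(0)
honest = [np.array([1.0, 1.0]) + 0.01 * rng.standard_normal(2) for _ in range(9)]
attack = [np.array([100.0, -100.0])]
agg = geometric_median(honest + attack)
```

A plain average of the same updates would land near [10.9, -9.1]; the geometric median stays next to the honest cluster, which is the robustness property the paper builds on.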

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠

Efficient Federated Conformal Prediction with Group-Conditional Guarantee

Researchers propose group-conditional federated conformal prediction (GC-FCP), a new protocol that enables trustworthy AI uncertainty quantification across distributed clients while providing coverage guarantees for specific groups. The framework addresses challenges in federated learning for applications in healthcare, finance, and mobile sensing by creating compact weighted summaries that support efficient calibration.
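
GC-FCP's weighted-summary machinery isn't shown here, but the group-conditional idea itself rests on standard split conformal prediction: calibrate a separate score threshold per group so coverage holds within each group, not just on average. A minimal sketch, with synthetic scores and groups as assumptions:

```python
import numpy as np

def group_conditional_thresholds(scores, groups, alpha=0.1):
    """Split conformal per group: for group g with n_g calibration scores,
    take the ceil((n_g + 1) * (1 - alpha)) smallest score as the threshold.
    This yields at least 1 - alpha coverage *within* each group."""
    thresholds = {}
    for g in np.unique(groups):
        s = np.sort(scores[groups == g])
        n = len(s)
        k = int(np.ceil((n + 1) * (1 - alpha)))
        thresholds[g] = s[min(k, n) - 1]
    return thresholds

rng = np.random.default_rng(0)
scores = rng.exponential(size=1000)            # nonconformity scores
groups = rng.integers(0, 2, size=1000)         # two demographic groups
th = group_conditional_thresholds(scores, groups, alpha=0.1)
# A test point is "covered" when its score falls below its group's threshold.
```

In a federated deployment the calibration scores would be distributed across clients; the compact weighted summaries the paper describes are what make computing these per-group quantiles efficient without pooling raw data.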

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

HO-SFL: Hybrid-Order Split Federated Learning with Backprop-Free Clients and Dimension-Free Aggregation

Researchers propose HO-SFL (Hybrid-Order Split Federated Learning), a new framework that enables memory-efficient fine-tuning of large AI models on edge devices by eliminating backpropagation on client devices while maintaining convergence speed comparable to traditional methods. The approach significantly reduces communication costs and memory requirements for distributed AI training.
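
The "backprop-free client" half of a hybrid-order scheme is typically a zeroth-order gradient estimate: the client only runs forward passes and probes the loss along random directions, so it never stores activations. The sketch below shows the generic two-point estimator on a toy quadratic loss; it illustrates the principle, not HO-SFL's specific hybrid-order split or aggregation rule.

```python
import numpy as np

def zo_gradient(f, w, mu=1e-3, n_dirs=500, rng=None):
    """Two-point zeroth-order estimate: average (f(w + mu*u) - f(w - mu*u))
    / (2*mu) * u over random directions u. Only forward evaluations of f
    are needed, which is why clients can skip backpropagation entirely."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(w)
    for _ in range(n_dirs):
        u = rng.standard_normal(w.shape)
        grad += (f(w + mu * u) - f(w - mu * u)) / (2 * mu) * u
    return grad / n_dirs

# Toy loss f(w) = ||w||^2, whose true gradient at w is 2 * w.
loss = lambda w: float(w @ w)
w = np.array([1.0, -2.0, 0.5])
g = zo_gradient(loss, w)
```

The estimate converges to the true gradient as the number of probe directions grows; the memory saving comes from never materializing the backward pass, at the cost of extra forward evaluations.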

AI · Bullish · arXiv – CS AI · Mar 12 · 7/10
🧠

Repurposing Backdoors for Good: Ephemeral Intrinsic Proofs for Verifiable Aggregation in Cross-silo Federated Learning

Researchers propose a novel lightweight architecture for verifiable aggregation in federated learning that uses backdoor injection as intrinsic proofs instead of expensive cryptographic methods. The approach achieves over 1000x speedup compared to traditional cryptographic baselines while maintaining high detection rates against malicious servers.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

FLoRG: Federated Fine-tuning with Low-rank Gram Matrices and Procrustes Alignment

Researchers propose FLoRG, a new federated learning framework for efficiently fine-tuning large language models that reduces communication overhead by up to 2041x while improving accuracy. The method uses Gram matrix aggregation and Procrustes alignment to solve aggregation errors and decomposition drift issues in distributed AI training.
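
Procrustes alignment has a closed-form solution via the SVD, which is presumably what makes it cheap enough for per-round use. The sketch below shows the generic orthogonal Procrustes step; how FLoRG applies it to low-rank Gram factors is not reproduced, and the shapes here are illustrative.

```python
import numpy as np

def procrustes_align(A, B):
    """Orthogonal Procrustes: find the orthogonal R minimizing ||A @ R - B||_F.
    Closed form: with U, _, Vt = svd(A.T @ B), the optimum is R = U @ Vt.
    In a FLoRG-style pipeline, alignment of this kind removes the rotational
    ambiguity between clients' low-rank factors before they are aggregated."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(1)
B = rng.standard_normal((50, 4))               # reference factor
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = B @ Q.T                                    # same factor, rotated
R = procrustes_align(A, B)
aligned = A @ R                                # rotation undone
```

Two low-rank factorizations can represent the same subspace while differing by an arbitrary rotation; averaging them naively produces the aggregation errors the paper targets, whereas aligning first makes the averages consistent.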

AI × Crypto · Bullish · arXiv – CS AI · Mar 5 · 6/10
🤖

Zero-Knowledge Federated Learning with Lattice-Based Hybrid Encryption for Quantum-Resilient Medical AI

Researchers introduce ZKFL-PQ, a quantum-resistant cryptographic protocol for federated learning in medical AI that combines zero-knowledge proofs, lattice-based encryption, and homomorphic encryption. The protocol achieves 100% rejection of malicious updates while maintaining model accuracy, addressing vulnerabilities from gradient inversion attacks and future quantum threats.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10
🧠

From Privacy to Trust in the Agentic Era: A Taxonomy of Challenges in Trustworthy Federated Learning Through the Lens of Trust Report 2.0

Researchers propose a Trustworthy Federated Learning (TFL) framework that treats trust as a continuously maintained system condition rather than a static property, addressing challenges in AI systems with autonomous decision-making. The framework introduces Trust Report 2.0 as a privacy-preserving coordination blueprint for multi-stakeholder governance in federated learning deployments.

AI · Bearish · arXiv – CS AI · Mar 5 · 6/10
🧠

Structure-Aware Distributed Backdoor Attacks in Federated Learning

Researchers have discovered that model architecture significantly affects the success of backdoor attacks in federated learning systems. The study introduces new metrics to measure model vulnerability and develops a framework showing that certain network structures can amplify malicious perturbations even with minimal poisoning.

AI · Neutral · arXiv – CS AI · Mar 4 · 7/10
🧠

Federated Inference: Toward Privacy-Preserving Collaborative and Incentivized Model Serving

Researchers introduce Federated Inference (FI), a new collaborative paradigm where independently trained AI models can work together at inference time without sharing data or model parameters. The study identifies key requirements including privacy preservation and performance gains, while highlighting system-level challenges that differ from traditional federated learning approaches.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10
🧠

Distributed LLM Pretraining During Renewable Curtailment Windows: A Feasibility Study

Researchers developed a system that trains large language models using renewable energy during curtailment periods when excess clean electricity would otherwise be wasted. The distributed training approach across multiple GPU clusters reduced operational emissions to 5-12% of traditional single-site training while maintaining model quality.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10
🧠

Conformalized Neural Networks for Federated Uncertainty Quantification under Dual Heterogeneity

Researchers propose FedWQ-CP, a new approach for uncertainty quantification in federated learning that addresses both data and model heterogeneity challenges. The method enables reliable uncertainty estimation across distributed agents while maintaining efficiency through single-round communication and weighted threshold aggregation.

AI · Bullish · arXiv – CS AI · 2d ago · 6/10
🧠

A Proposed Biomedical Data Policy Framework to Reduce Fragmentation, Improve Quality, and Incentivize Sharing in Indian Healthcare in the era of Artificial Intelligence and Digital Health

A research paper proposes a comprehensive policy framework for India to address fragmentation in biomedical data sharing by aligning institutional incentives around AI and digital health. The framework recommends recognizing data curation in academic promotions, incorporating open data metrics into institutional rankings, and implementing Shapley Value-based revenue sharing in federated learning, while navigating India's 2023 data protection regulations.
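
Shapley-value revenue sharing pays each data contributor its average marginal contribution over all orders in which contributors could join. A minimal exact computation is tractable for the small cross-silo federations such a framework targets; the hospitals and value function below are hypothetical, not from the paper.

```python
from itertools import permutations

def shapley_values(players, value_fn):
    """Exact Shapley values: average each player's marginal contribution
    to the coalition over every join order. Exponential in the number of
    players, so this only suits small federations (or sampling variants)."""
    n_perms = 0
    shares = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value_fn(frozenset(coalition))
            coalition.add(p)
            shares[p] += value_fn(frozenset(coalition)) - before
        n_perms += 1
    return {p: s / n_perms for p, s in shares.items()}

# Hypothetical model-quality function: hospital A's data adds 2 units of
# value, B's adds 1, and C's adds nothing.
def v(coalition):
    return 2.0 * ("A" in coalition) + 1.0 * ("B" in coalition)

shares = shapley_values(["A", "B", "C"], v)
```

By construction the shares sum to the value of the full federation (the "efficiency" axiom), which is what makes this a principled basis for splitting revenue among data contributors.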

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

FedRio: Personalized Federated Social Bot Detection via Cooperative Reinforced Contrastive Adversarial Distillation

Researchers propose FedRio, a federated learning framework that enables social media platforms to collaboratively detect bot accounts without sharing raw user data. The system uses graph neural networks, adversarial learning, and reinforcement learning to improve bot detection accuracy while maintaining privacy across heterogeneous platform architectures.

AI · Bullish · arXiv – CS AI · 2d ago · 6/10
🧠

Task2vec Readiness: Diagnostics for Federated Learning from Pre-Training Embeddings

Researchers propose Task2Vec-based readiness indices to predict federated learning performance before training begins. By computing unsupervised metrics from pre-training embeddings, the method achieves correlation coefficients exceeding 0.9 with final outcomes, offering practitioners a diagnostic tool to assess federation alignment and heterogeneity impact.
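
The paper's specific indices aren't reproduced here, but a readiness-style diagnostic of this kind can be as simple as a similarity statistic over clients' task embeddings, computed before any federated round runs. A hedged sketch, with the "heterogeneity index" definition and the toy embeddings entirely illustrative:

```python
import numpy as np

def heterogeneity_index(embeddings):
    """Illustrative diagnostic (not the paper's exact index): mean pairwise
    cosine distance between clients' task embeddings. Values near 0 mean
    well-aligned clients; values near 1 mean a highly heterogeneous
    federation, which typically predicts harder aggregation."""
    E = np.asarray(embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    sim = E @ E.T
    n = len(E)
    off_diag = sim[~np.eye(n, dtype=bool)]
    return float(1.0 - off_diag.mean())

# Two near-identical clients vs. two clients with orthogonal tasks.
aligned = heterogeneity_index([[1.0, 0.0], [1.0, 0.01]])
mixed = heterogeneity_index([[1.0, 0.0], [0.0, 1.0]])
```

The appeal the summary points to is exactly this: such statistics are cheap, unsupervised, and computable from pre-training embeddings alone, before committing to a full federated run.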

AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

From Selection to Scheduling: Federated Geometry-Aware Correction Makes Exemplar Replay Work Better under Continual Dynamic Heterogeneity

Researchers propose FEAT, a federated learning method that improves continual learning by addressing class imbalance and representation collapse across distributed clients. The approach combines geometric alignment and energy-based correction to better utilize exemplar samples while maintaining performance under dynamic heterogeneity.

AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠

Towards Privacy-Preserving Large Language Model: Text-free Inference Through Alignment and Adaptation

Researchers introduce Privacy-Preserving Fine-Tuning (PPFT), a novel training approach that enables LLM services to process user queries without receiving raw text, addressing privacy vulnerabilities in current deployments. The method uses client-side encoders and noise-injected embeddings to maintain competitive model performance while eliminating exposure of sensitive personal, medical, or legal information.
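
The client-side half of such a pipeline can be sketched simply: embed the text locally, then perturb the embedding before it leaves the device, so the server never sees raw text or a clean embedding. Everything below (the toy encoder, the noise scale) is illustrative; PPFT's alignment and adaptation training is not reproduced.

```python
import numpy as np

def noisy_embed(text, embed_fn, sigma=0.1, rng=None):
    """Client-side sketch: embed locally, normalize, then add Gaussian
    noise before transmission. The server receives only the perturbed
    embedding, never the raw query text."""
    rng = rng or np.random.default_rng(0)
    e = embed_fn(text)
    e = e / np.linalg.norm(e)
    return e + rng.normal(0.0, sigma, size=e.shape)

# Toy "encoder": a fixed random projection of a bag-of-bytes vector.
# A real deployment would use the paper's trained client-side encoder.
rng = np.random.default_rng(42)
proj = rng.standard_normal((256, 32))

def toy_encoder(text):
    counts = np.zeros(256)
    for byte in text.encode("utf-8"):
        counts[byte] += 1
    return counts @ proj

emb = noisy_embed("patient has condition X", toy_encoder, sigma=0.1)
```

The hard part, which the paper's training procedure addresses, is keeping the service's downstream model accurate despite that injected noise; the sketch only shows where in the pipeline the protection sits.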

AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠

FedDAP: Domain-Aware Prototype Learning for Federated Learning under Domain Shift

Researchers introduce FedDAP, a federated learning framework that addresses domain shift challenges by constructing domain-specific global prototypes rather than single aggregated prototypes. The method aligns local features with prototypes from the same domain while encouraging separation from different domains, improving model generalization across heterogeneous client data.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠

APPA: Adaptive Preference Pluralistic Alignment for Fair Federated RLHF of LLMs

Researchers propose APPA, a new framework for aligning large language models with diverse human preferences in federated learning environments. The method dynamically reweights group-level rewards to improve fairness, achieving up to 28% better alignment for underperforming groups while maintaining overall model performance.
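
The core mechanism the summary describes, dynamically upweighting lagging preference groups, can be sketched with a simple exponential reweighting rule. This is an illustrative stand-in for APPA's actual update, and the temperature and reward values are assumptions:

```python
import numpy as np

def adaptive_group_weights(group_rewards, temperature=1.0):
    """Illustrative reweighting (not APPA's exact rule): groups whose
    average reward falls below the mean get exponentially larger weight,
    steering the shared policy toward underperforming preference groups."""
    r = np.asarray(group_rewards, dtype=float)
    w = np.exp(-(r - r.mean()) / temperature)
    return w / w.sum()

# Three hypothetical preference groups; group 1 is currently worst served.
weights = adaptive_group_weights([0.9, 0.5, 0.7])
```

Each round, the aggregated objective would weight group-level rewards by these coefficients, so fairness pressure grows automatically wherever alignment quality lags.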

🏢 Meta · 🧠 Llama
AI · Bullish · arXiv – CS AI · Apr 6 · 6/10
🧠

A Survey on AI for 6G: Challenges and Opportunities

This survey paper examines AI's role in developing 6G wireless networks, covering key technologies like deep learning, reinforcement learning, and federated learning. The research addresses how AI will enable 6G's promise of high data rates and low latency for applications like smart cities and autonomous systems, while identifying challenges in scalability, security, and energy efficiency.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠

FedTreeLoRA: Reconciling Statistical and Functional Heterogeneity in Federated LoRA Fine-Tuning

Researchers propose FedTreeLoRA, a new framework for privacy-preserving fine-tuning of large language models that addresses both statistical and functional heterogeneity across federated learning clients. The method uses tree-structured aggregation to allow layer-wise specialization while maintaining shared consensus on foundational layers, significantly outperforming existing personalized federated learning approaches.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠

Computation and Communication Efficient Federated Unlearning via On-server Gradient Conflict Mitigation and Expression

Researchers propose FOUL (Federated On-server Unlearning), a new framework for efficiently removing specific participants' data from federated learning models without accessing client data. The approach reduces computational and communication costs while maintaining privacy compliance through a two-stage process that performs unlearning operations on the server side.

Page 1 of 3