
#federated-learning News & Analysis

54 articles tagged with #federated-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Mar 11 · 6/10

Benchmarking Federated Learning in Edge Computing Environments: A Systematic Review and Performance Evaluation

A systematic review evaluates federated learning algorithms for edge computing environments, benchmarking five leading methods across accuracy, efficiency, and robustness metrics. The study finds that SCAFFOLD achieves the highest accuracy (0.90) while FedAvg excels in communication and energy efficiency, though challenges remain with data heterogeneity and energy constraints.
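
The FedAvg baseline benchmarked above can be sketched in a few lines (a hypothetical plain-Python illustration, not the paper's implementation): the server averages client weight vectors in proportion to each client's local dataset size.

```python
def fed_avg(client_weights, client_sizes):
    """Weighted average of client weight vectors (lists of floats),
    each weighted by that client's local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients holding 100 and 300 samples respectively:
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0]], [100, 300])
# global_w -> [2.5, 3.5]
```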

AI · Bullish · arXiv – CS AI · Mar 9 · 6/10

Federated Learning: A Survey on Privacy-Preserving Collaborative Intelligence

This research survey examines Federated Learning (FL), a distributed machine learning approach that enables collaborative AI model training without centralizing sensitive data. The paper covers FL's technical challenges, privacy mechanisms, and applications across healthcare, finance, and IoT systems.

AI · Bullish · arXiv – CS AI · Mar 6 · 6/10

ZorBA: Zeroth-order Federated Fine-tuning of LLMs with Heterogeneous Block Activation

Researchers propose ZorBA, a new federated learning framework for fine-tuning large language models that reduces memory usage by up to 62.41% through zeroth-order optimization and heterogeneous block activation. The system eliminates gradient storage requirements and reduces communication overhead by using shared random seeds and finite difference methods.
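
The shared-seed trick works because both parties can regenerate the same random perturbation direction from a seed, so a client only needs to transmit the seed and one scalar instead of a full gradient. A toy zeroth-order step (an SPSA-style sketch with hypothetical names, not ZorBA's actual algorithm) might look like:

```python
import random

def zo_gradient_step(params, loss_fn, seed, mu=1e-3, lr=0.01):
    """Zeroth-order step: regenerate a random direction from a shared
    seed, estimate the directional derivative by finite differences,
    and move against it. Only (seed, scalar g) would need to be sent."""
    rng = random.Random(seed)
    z = [rng.gauss(0, 1) for _ in params]            # perturbation direction
    plus = [p + mu * zi for p, zi in zip(params, z)]
    minus = [p - mu * zi for p, zi in zip(params, z)]
    g = (loss_fn(plus) - loss_fn(minus)) / (2 * mu)  # projected gradient
    return [p - lr * g * zi for p, zi in zip(params, z)], g

# Example: one step on f(x) = sum(x_i^2)
f = lambda x: sum(v * v for v in x)
new_params, g = zo_gradient_step([1.0, -2.0], f, seed=42)
```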

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10

DeepAFL: Deep Analytic Federated Learning

Researchers propose DeepAFL, a new federated learning approach that uses gradient-free analytical solutions to address heterogeneity and scalability issues in traditional gradient-based FL systems. The method incorporates deep residual blocks with closed-form solutions, achieving 5.68%-8.42% performance improvements over existing baselines across benchmark datasets.

AI × Crypto · Bullish · arXiv – CS AI · Mar 3 · 7/10

Communication-Efficient Quantum Federated Learning over Large-Scale Wireless Networks

Researchers present a quantum federated learning framework for large-scale wireless networks that combines quantum computing with privacy-preserving federated learning. The study introduces a sum-rate maximization approach based on the quantum approximate optimization algorithm (QAOA), reporting over 100% performance improvement compared to conventional methods.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

FAuNO: Semi-Asynchronous Federated Reinforcement Learning Framework for Task Offloading in Edge Systems

Researchers have developed FAuNO, a new federated reinforcement learning framework that uses semi-asynchronous processing to optimize task distribution in edge computing networks. The system employs an actor-critic architecture in which local nodes learn site-specific dynamics while a central critic coordinates overall system performance, demonstrating superior results in reducing latency and task loss compared to existing methods.

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10

Toward Youth-Centered Privacy-by-Design in Smart Devices: A Systematic Review

A systematic review of 122 academic papers reveals significant gaps in privacy protection for youth using AI-enabled smart devices, with technical solutions dominating research (67%) while policy enforcement and educational integration remain underdeveloped. The study recommends a multi-stakeholder approach involving policymakers, manufacturers, and educators to create comprehensive privacy ecosystems for young users.

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10

FedRot-LoRA: Mitigating Rotational Misalignment in Federated LoRA

Researchers propose FedRot-LoRA, a new framework that solves rotational misalignment issues in federated learning for large language models. The solution uses orthogonal transformations to align client updates before aggregation, improving training stability and performance without increasing communication costs.
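
As a toy picture of why aligning directions before aggregation helps: naively averaging two misaligned 2-D updates shrinks the result, while rotating them into a common frame first preserves magnitude. (An illustrative stand-in for the idea, not FedRot-LoRA's actual orthogonal-transformation procedure.)

```python
import math

def align_then_average(updates, reference):
    """Rotate each 2-D client update onto the reference direction,
    then average. Aggregation in a common frame means directional
    (rotational) disagreement between clients no longer cancels out."""
    def angle(v):
        return math.atan2(v[1], v[0])
    ref = angle(reference)
    aligned = []
    for u in updates:
        th = ref - angle(u)              # rotation taking u onto reference
        c, s = math.cos(th), math.sin(th)
        aligned.append((c * u[0] - s * u[1], s * u[0] + c * u[1]))
    n = len(aligned)
    return (sum(a[0] for a in aligned) / n, sum(a[1] for a in aligned) / n)

# Two orthogonal updates: naive average is (0.5, 0.5) with norm ~0.71,
# but the aligned average keeps full magnitude:
avg = align_then_average([(1.0, 0.0), (0.0, 1.0)], reference=(1.0, 0.0))
# avg ~ (1.0, 0.0)
```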

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10

MPU: Towards Secure and Privacy-Preserving Knowledge Unlearning for Large Language Models

Researchers have developed MPU, a privacy-preserving framework that enables machine unlearning for large language models without requiring servers to share parameters or clients to share data. The framework uses perturbed model copies and harmonic denoising to achieve comparable performance to non-private methods, with most algorithms showing less than 1% performance degradation.

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10

FedNSAM: Consistency of Local and Global Flatness for Federated Learning

Researchers propose FedNSAM, a new federated learning algorithm that improves global model performance by addressing the inconsistency between local and global flatness in distributed training environments. The algorithm uses global Nesterov momentum to harmonize local and global optimization, showing superior performance compared to existing FedSAM approaches.
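
Server-side global momentum of the kind described can be sketched as follows (hypothetical names; a generic Nesterov-on-the-server update, not FedNSAM's exact rule): the averaged client update is folded into a momentum buffer, and the look-ahead correction is applied when updating the global model.

```python
def server_nesterov_step(global_w, avg_update, velocity, lr=1.0, beta=0.9):
    """One server round: accumulate the averaged client update into a
    momentum buffer, then apply the Nesterov look-ahead correction."""
    new_v = [beta * v + u for v, u in zip(velocity, avg_update)]
    new_w = [w - lr * (beta * nv + u)
             for w, nv, u in zip(global_w, new_v, avg_update)]
    return new_w, new_v

# First round from w = [0.0] with averaged client update [1.0]:
w, v = server_nesterov_step([0.0], [1.0], [0.0])
# v -> [1.0], w -> [-1.9] (the look-ahead amplifies the first step)
```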

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10

An Efficient Unsupervised Federated Learning Approach for Anomaly Detection in Heterogeneous IoT Networks

Researchers propose an efficient unsupervised federated learning framework for anomaly detection in heterogeneous IoT networks that preserves privacy while leveraging shared features from multiple datasets. The approach uses explainable AI techniques like SHAP for transparency and demonstrates superior performance compared to conventional federated learning methods on real-world IoT datasets.

AI · Bullish · Google Research Blog · Jul 24 · 6/10

Synthetic and federated: Privacy-preserving domain adaptation with LLMs for mobile applications

The article discusses privacy-preserving domain adaptation techniques using Large Language Models for mobile applications, combining synthetic data generation with federated learning approaches. This represents an advancement in AI privacy technology that could enable better model performance while protecting user data in mobile environments.

AI · Neutral · arXiv – CS AI · Apr 7 · 5/10

Discrete Prototypical Memories for Federated Time Series Foundation Models

Researchers propose FeDPM, a federated learning framework that addresses semantic misalignment issues when using Large Language Models for time series analysis. The system uses discrete prototypical memories to better handle cross-domain time-series data while preserving privacy in distributed settings.
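
To give a flavor of a discrete prototypical memory (a toy 1-D version, not FeDPM itself): values are snapped to the nearest entry of a shared codebook of prototypes, so clients can exchange prototype indices instead of raw series values.

```python
def quantize_to_prototypes(series, prototypes):
    """Map each value to the index of its nearest prototype.
    Clients sharing the same codebook can communicate indices,
    never the raw time-series values."""
    return [
        min(range(len(prototypes)), key=lambda i: abs(v - prototypes[i]))
        for v in series
    ]

# With a shared codebook [0.0, 0.5, 1.0]:
codes = quantize_to_prototypes([0.1, 0.9, 0.45], [0.0, 0.5, 1.0])
# codes -> [0, 2, 1]
```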

AI · Bullish · arXiv – CS AI · Mar 27 · 4/10

FED-HARGPT: A Hybrid Centralized-Federated Approach of a Transformer-based Architecture for Human Context Recognition

Researchers developed FED-HARGPT, a hybrid centralized-federated approach using Transformer architecture for Human Activity Recognition (HAR) with mobile sensor data. The study demonstrates that federated learning can achieve comparable performance to centralized models while preserving data privacy through the Flower framework.

AI · Neutral · arXiv – CS AI · Mar 27 · 5/10

Revealing the influence of participant failures on model quality in cross-silo Federated Learning

Researchers conducted extensive experiments to analyze how participant failures affect Federated Learning model quality across different data types and scenarios. The study reveals that data skewness significantly impacts model performance and can lead to overly optimistic evaluations when participants are missing from the training process.

AI · Bullish · arXiv – CS AI · Mar 17 · 4/10

FedUAF: Uncertainty-Aware Fusion with Reliability-Guided Aggregation for Multimodal Federated Sentiment Analysis

Researchers propose FedUAF, a new multimodal federated learning framework that addresses challenges in sentiment analysis by using uncertainty-aware fusion and reliability-guided aggregation. The system demonstrates superior performance on benchmark datasets CMU-MOSI and CMU-MOSEI, showing improved robustness against missing modalities and unreliable client updates in federated learning environments.

AI · Bullish · arXiv – CS AI · Mar 17 · 5/10

A Robust Framework for Secure Cardiovascular Risk Prediction: An Architectural Case Study of Differentially Private Federated Learning

Researchers developed FedCVR, a privacy-preserving federated learning framework for cardiovascular risk prediction that enables secure collaboration across medical institutions. The system achieved an F1-score of 0.84 and AUC of 0.96 while maintaining differential privacy, demonstrating that server-side adaptive optimization can preserve clinical utility under strict privacy constraints.

AI · Neutral · arXiv – CS AI · Mar 17 · 4/10

FedPBS: Proximal-Balanced Scaling Federated Learning Model for Robust Personalized Training for Non-IID Data

Researchers propose FedPBS, a new federated learning algorithm that addresses key challenges in distributed AI training including statistical heterogeneity and uneven client participation. The algorithm dynamically adapts batch sizes and applies proximal corrections to improve model convergence while preserving data privacy across distributed clients.
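
The "proximal correction" idea is in the spirit of FedProx: the local training objective is penalized for drifting away from the global model, which stabilizes training under heterogeneous data. A minimal sketch (not FedPBS's actual objective, which also adapts batch sizes dynamically):

```python
def proximal_local_loss(local_loss, w, w_global, mu=0.1):
    """Local objective with a proximal term: the quadratic penalty
    discourages the client model w from drifting far from w_global."""
    prox = 0.5 * mu * sum((wi - gi) ** 2 for wi, gi in zip(w, w_global))
    return local_loss + prox

# A client one unit away from the global model pays a 0.05 penalty:
loss = proximal_local_loss(1.0, [1.0], [0.0], mu=0.1)
# loss -> 1.05
```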

AI · Bullish · arXiv – CS AI · Mar 11 · 5/10

FedLECC: Cluster- and Loss-Guided Client Selection for Federated Learning under Non-IID Data

Researchers propose FedLECC, a new client selection strategy for federated learning that improves AI model training efficiency in distributed environments. The method groups clients by data similarity and prioritizes those with higher loss, achieving up to 12% better accuracy while reducing communication overhead by 50%.
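
The selection strategy can be caricatured in a few lines (hypothetical data; the actual FedLECC clusters by data similarity and weighs loss more carefully): within each cluster, prefer the clients the current model fits worst, since they carry the most new information per round.

```python
def select_clients(clusters, losses, k_per_cluster=1):
    """Pick the highest-loss clients from each cluster: high local loss
    suggests the global model has learned least from that client's data."""
    chosen = []
    for members in clusters.values():
        ranked = sorted(members, key=lambda c: losses[c], reverse=True)
        chosen.extend(ranked[:k_per_cluster])
    return chosen

clusters = {"A": ["c1", "c2"], "B": ["c3", "c4"]}
losses = {"c1": 0.2, "c2": 0.9, "c3": 0.5, "c4": 0.1}
selected = select_clients(clusters, losses)
# selected -> ["c2", "c3"]
```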

AI · Neutral · arXiv – CS AI · Mar 6 · 4/10

ASFL: An Adaptive Model Splitting and Resource Allocation Framework for Split Federated Learning

Researchers propose ASFL, an adaptive split federated learning framework that optimizes machine learning model training across wireless networks by splitting computation between clients and central servers. The framework reduces training delay by up to 75% and energy consumption by 80% compared to baseline approaches while maintaining faster convergence rates.
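
The split idea itself is simple: the client runs the first part of the network and ships only the cut-layer activations; the server runs the rest. A toy forward pass (illustrative only; ASFL's contribution is choosing the split point and allocating wireless resources adaptively):

```python
def client_forward(x, w_client):
    """Client side of the split model: one ReLU layer on the raw data.
    Only the resulting activations ever leave the device."""
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x)))
            for row in w_client]

def server_forward(h, w_server):
    """Server side: a linear head on the received cut-layer activations."""
    return sum(wi * hi for wi, hi in zip(w_server, h))

h = client_forward([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]])
y = server_forward(h, [0.5, 0.5])
# h -> [1.0, 2.0], y -> 1.5
```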

AI · Neutral · arXiv – CS AI · Mar 5 · 4/10

Noise-aware Client Selection for carbon-efficient Federated Learning via Gradient Norm Thresholding

Researchers propose a new client selection method for carbon-efficient federated learning that filters out noisy data to improve model performance. The approach uses gradient norm thresholding to better identify quality clients while maintaining sustainability goals in distributed AI training across renewable energy-powered data centers.
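
A norm-threshold filter of the kind described might look like this (a sketch with hypothetical thresholds, not the paper's method): clients with near-zero gradient norms contribute little signal, while extreme norms often indicate label noise or divergence.

```python
import math

def filter_clients_by_grad_norm(client_grads, low=0.05, high=5.0):
    """Keep only clients whose gradient L2 norm falls inside [low, high];
    both very quiet and very noisy clients are skipped for the round."""
    kept = {}
    for cid, g in client_grads.items():
        norm = math.sqrt(sum(x * x for x in g))
        if low <= norm <= high:
            kept[cid] = g
    return kept

grads = {"a": [0.0, 0.0], "b": [1.0, 1.0], "c": [100.0, 0.0]}
selected = filter_clients_by_grad_norm(grads)
# selected keeps only "b"
```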

๐Ÿข Meta
AINeutralarXiv โ€“ CS AI ยท Mar 44/103
๐Ÿง 

Adaptive Personalized Federated Learning via Multi-task Averaging of Kernel Mean Embeddings

Researchers propose a new Personalized Federated Learning approach that automatically learns optimal collaboration weights between agents without prior knowledge of data heterogeneity. The method uses kernel mean embedding estimation to capture statistical relationships between agents and includes a practical implementation for communication-constrained federated settings.
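
A kernel mean embedding summarizes a dataset as the average of its kernel features; the squared MMD between two embeddings then measures how similar two agents' data distributions are, which can drive collaboration weights (small MMD = collaborate more). A 1-D sketch with an RBF kernel (illustrative, not the paper's estimator):

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel on scalars."""
    return math.exp(-gamma * (x - y) ** 2)

def mmd2(xs, ys, gamma=0.5):
    """Squared MMD between two 1-D samples: the distance between their
    kernel mean embeddings, estimated from pairwise kernel values."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

# Identical samples have zero MMD; distant samples have large MMD:
close = mmd2([0.0, 0.1], [0.2, 0.3])
far = mmd2([0.0, 0.1], [5.0, 5.1])
```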

AI · Bullish · arXiv – CS AI · Mar 2 · 5/10

FedDAG: Clustered Federated Learning via Global Data and Gradient Integration for Heterogeneous Environments

Researchers introduce FedDAG, a new clustered federated learning framework that improves AI model training across heterogeneous client environments. The system combines data and gradient similarity metrics for better client clustering and uses a dual-encoder architecture to enable knowledge sharing across clusters while maintaining specialization.

AI · Neutral · Hugging Face Blog · Mar 27 · 4/10

Federated Learning using Hugging Face and Flower

The article appears to cover a federated learning implementation using the Hugging Face and Flower frameworks. However, the article body was not available, limiting the ability to analyze specific technical details or market implications.