54 articles tagged with #federated-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv – CS AI · Mar 12 · 6/10
🧠 Researchers propose TASER, a new defense framework against backdoor attacks in UAV-based decentralized federated learning systems. The system uses spectral energy analysis rather than traditional outlier detection, holding attack success rates below 20% while keeping the accuracy penalty within 5%.
AI · Neutral · arXiv – CS AI · Mar 11 · 6/10
🧠 A systematic review evaluates federated learning algorithms for edge computing environments, benchmarking five leading methods across accuracy, efficiency, and robustness metrics. The study finds SCAFFOLD achieves the highest accuracy (0.90) while FedAvg excels in communication and energy efficiency, though challenges remain with data heterogeneity and energy limitations.
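For readers unfamiliar with the FedAvg baseline the review benchmarks, its aggregation rule is just an example-count-weighted average of client models; a minimal sketch (flat parameter lists, illustrative names):

```python
def fedavg(client_weights, client_sizes):
    """Standard FedAvg aggregation: average client parameter vectors,
    weighting each client by its number of local training examples."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

SCAFFOLD adds per-client control variates on top of this averaging to correct for client drift, which is where its accuracy edge under heterogeneity comes from.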
AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠 This research survey examines Federated Learning (FL), a distributed machine learning approach that enables collaborative AI model training without centralizing sensitive data. The paper covers FL's technical challenges, privacy mechanisms, and applications across healthcare, finance, and IoT systems.
AI · Bullish · arXiv – CS AI · Mar 6 · 6/10
🧠 Researchers propose ZorBA, a new federated learning framework for fine-tuning large language models that reduces memory usage by up to 62.41% through zeroth-order optimization and heterogeneous block activation. The system eliminates gradient storage requirements and reduces communication overhead by using shared random seeds and finite difference methods.
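The shared-seed trick the summary mentions is worth unpacking: if client and server derive the same random perturbation from a seed, the client only needs to transmit the seed and one scalar (the finite-difference estimate), not a gradient vector. A minimal sketch of one such zeroth-order step, not ZorBA's actual API (names and hyperparameters are illustrative):

```python
import random

def zo_step(params, loss_fn, seed, mu=1e-3, lr=0.05):
    """One zeroth-order update: estimate the directional derivative along
    a random direction regenerated from `seed` using two loss evaluations,
    then step against it. Only `seed` and the scalar `proj` would need to
    cross the network; no gradients are ever stored."""
    rng = random.Random(seed)                # server can regenerate z from seed
    z = [rng.gauss(0.0, 1.0) for _ in params]
    plus = loss_fn([p + mu * zi for p, zi in zip(params, z)])
    minus = loss_fn([p - mu * zi for p, zi in zip(params, z)])
    proj = (plus - minus) / (2 * mu)         # finite-difference estimate
    return [p - lr * proj * zi for p, zi in zip(params, z)]
```

Iterating this on even a simple quadratic loss drives the parameters toward the minimum without ever calling a backward pass.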
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers propose DeepAFL, a new federated learning approach that uses gradient-free analytical solutions to address heterogeneity and scalability issues in traditional gradient-based FL systems. The method incorporates deep residual blocks with closed-form solutions, achieving 5.68%–8.42% performance improvements over existing baselines across benchmark datasets.
AI × Crypto · Bullish · arXiv – CS AI · Mar 3 · 7/10
Researchers present a novel quantum federated learning framework for large-scale wireless networks that combines quantum computing with privacy-preserving federated learning. The study introduces a sum-rate maximization approach using the quantum approximate optimization algorithm (QAOA) that achieves over 100% improvement in performance compared to conventional methods.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers have developed FAuNO, a new federated reinforcement learning framework that uses asynchronous processing to optimize task distribution in edge computing networks. The system employs an actor-critic architecture where local nodes learn specific dynamics while a central critic coordinates overall system performance, demonstrating superior results in reducing latency and task loss compared to existing methods.
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠 A systematic review of 122 academic papers reveals significant gaps in privacy protection for youth using AI-enabled smart devices, with technical solutions dominating research (67%) while policy enforcement and educational integration remain underdeveloped. The study recommends a multi-stakeholder approach involving policymakers, manufacturers, and educators to create comprehensive privacy ecosystems for young users.
AI · Bullish · arXiv – CS AI · Mar 2 · 6/10
🧠 Researchers propose FedRot-LoRA, a new framework that solves rotational misalignment issues in federated learning for large language models. The solution uses orthogonal transformations to align client updates before aggregation, improving training stability and performance without increasing communication costs.
AI · Bullish · arXiv – CS AI · Mar 2 · 7/10
🧠 Researchers have developed MPU, a privacy-preserving framework that enables machine unlearning for large language models without requiring servers to share parameters or clients to share data. The framework uses perturbed model copies and harmonic denoising to achieve comparable performance to non-private methods, with most algorithms showing less than 1% performance degradation.
AI · Bullish · arXiv – CS AI · Mar 2 · 7/10
🧠 Researchers propose FedNSAM, a new federated learning algorithm that improves global model performance by addressing the inconsistency between local and global flatness in distributed training environments. The algorithm uses global Nesterov momentum to harmonize local and global optimization, showing superior performance compared to existing FedSAM approaches.
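FedNSAM's Nesterov-momentum variant is not spelled out in the summary, but the Sharpness-Aware Minimization (SAM) inner step that the whole FedSAM family builds on can be sketched: first climb to the worst-case point in a small L2 ball, then descend using the gradient measured there, which biases training toward flat minima. A toy single-machine sketch with illustrative names, not the paper's algorithm:

```python
import math

def sam_step(w, grad_fn, rho=0.05, lr=0.1):
    """One SAM update: perturb the weights by `rho` along the normalized
    gradient (the 'sharpness probe'), then apply a plain gradient step
    using the gradient evaluated at that perturbed point."""
    g = grad_fn(w)
    norm = math.sqrt(sum(v * v for v in g)) or 1.0
    w_adv = [wi + rho * gi / norm for wi, gi in zip(w, g)]  # ascend in the ball
    g_adv = grad_fn(w_adv)                                  # gradient at worst case
    return [wi - lr * gi for wi, gi in zip(w, g_adv)]
```

FedSAM runs this locally on each client; FedNSAM's reported contribution is coordinating those local steps with a global Nesterov momentum term so local and global flatness stay consistent.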
AI · Bullish · arXiv – CS AI · Mar 2 · 6/10
🧠 Researchers propose an efficient unsupervised federated learning framework for anomaly detection in heterogeneous IoT networks that preserves privacy while leveraging shared features from multiple datasets. The approach uses explainable AI techniques like SHAP for transparency and demonstrates superior performance compared to conventional federated learning methods on real-world IoT datasets.
AI · Bullish · Google Research Blog · Jul 24 · 6/10
🧠 The article discusses privacy-preserving domain adaptation techniques using Large Language Models for mobile applications, combining synthetic data generation with federated learning approaches. This represents an advancement in AI privacy technology that could enable better model performance while protecting user data in mobile environments.
AI · Neutral · arXiv – CS AI · Apr 7 · 5/10
🧠 Researchers propose FeDPM, a federated learning framework that addresses semantic misalignment issues when using Large Language Models for time series analysis. The system uses discrete prototypical memories to better handle cross-domain time-series data while preserving privacy in distributed settings.
AI · Bullish · arXiv – CS AI · Mar 27 · 4/10
🧠 Researchers developed FED-HARGPT, a hybrid centralized-federated approach using a Transformer architecture for Human Activity Recognition (HAR) with mobile sensor data. The study demonstrates that federated learning can achieve comparable performance to centralized models while preserving data privacy through the Flower framework.
AI · Neutral · arXiv – CS AI · Mar 27 · 5/10
🧠 Researchers conducted extensive experiments to analyze how participant failures affect Federated Learning model quality across different data types and scenarios. The study reveals that data skewness significantly impacts model performance and can lead to overly optimistic evaluations when participants are missing from the training process.
AI · Bullish · arXiv – CS AI · Mar 17 · 4/10
🧠 Researchers propose FedUAF, a new multimodal federated learning framework that addresses challenges in sentiment analysis by using uncertainty-aware fusion and reliability-guided aggregation. The system demonstrates superior performance on the benchmark datasets CMU-MOSI and CMU-MOSEI, showing improved robustness against missing modalities and unreliable client updates in federated learning environments.
AI · Bullish · arXiv – CS AI · Mar 17 · 5/10
🧠 Researchers developed FedCVR, a privacy-preserving federated learning framework for cardiovascular risk prediction that enables secure collaboration across medical institutions. The system achieved an F1-score of 0.84 and an AUC of 0.96 while maintaining differential privacy, demonstrating that server-side adaptive optimization can preserve clinical utility under strict privacy constraints.
AI · Neutral · arXiv – CS AI · Mar 17 · 4/10
🧠 Researchers propose FedPBS, a new federated learning algorithm that addresses key challenges in distributed AI training, including statistical heterogeneity and uneven client participation. The algorithm dynamically adapts batch sizes and applies proximal corrections to improve model convergence while preserving data privacy across distributed clients.
AI · Bullish · arXiv – CS AI · Mar 11 · 5/10
🧠 Researchers propose FedLECC, a new client selection strategy for federated learning that improves AI model training efficiency in distributed environments. The method groups clients by data similarity and prioritizes those with higher loss, achieving up to 12% better accuracy while reducing communication overhead by 50%.
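The two-stage selection the summary describes (group by similarity, then prioritize high loss) can be sketched in a few lines. This is a generic illustration under assumed inputs (precomputed clusters and per-client losses), not FedLECC's actual criterion:

```python
def select_clients(clusters, client_loss, per_cluster=1):
    """From each similarity cluster, pick the clients with the highest
    reported loss: every data distribution stays represented while the
    round focuses on clients the global model currently fits worst."""
    selected = []
    for members in clusters:
        ranked = sorted(members, key=lambda c: client_loss[c], reverse=True)
        selected.extend(ranked[:per_cluster])
    return selected
```

Sampling one representative per cluster instead of all clients is also where the reported communication savings would come from.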
AI · Neutral · arXiv – CS AI · Mar 6 · 4/10
🧠 Researchers propose ASFL, an adaptive split federated learning framework that optimizes machine learning model training across wireless networks by splitting computation between clients and central servers. The framework reduces training delay by up to 75% and energy consumption by 80% compared to baseline approaches while maintaining faster convergence rates.
AI · Neutral · arXiv – CS AI · Mar 5 · 4/10
🧠 Researchers propose a new client selection method for carbon-efficient federated learning that filters out noisy data to improve model performance. The approach uses gradient norm thresholding to better identify quality clients while maintaining sustainability goals in distributed AI training across renewable energy-powered data centers.
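Gradient-norm thresholding of this kind is simple to illustrate. The paper's exact threshold is not given in the summary; the sketch below uses a median-relative cutoff (a common robust heuristic, chosen here for illustration) to drop clients whose update norms are unusually large, which often correlates with noisy labels:

```python
import math
import statistics

def filter_noisy_clients(grad_by_client, ratio=3.0):
    """Keep clients whose gradient L2 norm is at most `ratio` times the
    cohort median norm; clients with outsized norms are excluded from
    the round as likely noisy-data contributors."""
    norms = {c: math.sqrt(sum(v * v for v in g))
             for c, g in grad_by_client.items()}
    med = statistics.median(norms.values())
    return [c for c, n in norms.items() if n <= ratio * med]
```

A median-based cutoff is preferable to a mean-based one here because a single extreme client can drag the mean (and a standard-deviation band) far enough to hide itself.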
🏢 Meta
AI · Neutral · arXiv – CS AI · Mar 4 · 4/10
🧠 Researchers propose a new Personalized Federated Learning approach that automatically learns optimal collaboration weights between agents without prior knowledge of data heterogeneity. The method uses kernel mean embedding estimation to capture statistical relationships between agents and includes a practical implementation for communication-constrained federated settings.
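The kernel-mean-embedding idea maps each agent's data distribution into a kernel space and compares embeddings; the Maximum Mean Discrepancy (MMD) is exactly that embedding distance. A toy 1-D sketch of deriving collaboration weights from MMD, with an illustrative softmax-style weighting that is not necessarily the paper's rule:

```python
import math

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * (x - y) ** 2)

def mmd2(xs, ys, gamma=1.0):
    """Squared Maximum Mean Discrepancy between two 1-D samples, i.e.
    the squared distance between their kernel mean embeddings."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

def collaboration_weights(my_data, peer_data, temp=1.0):
    """Give peers whose estimated distribution is closer to ours a larger
    share of the collaboration; weights are normalized to sum to 1."""
    scores = [math.exp(-mmd2(my_data, d) / temp) for d in peer_data]
    total = sum(scores)
    return [s / total for s in scores]
```

A peer drawing from a similar distribution ends up with a visibly larger weight than one whose data is shifted far away, which is the statistical relationship the summary refers to.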
AI · Bullish · arXiv – CS AI · Mar 2 · 5/10
🧠 Researchers introduce FedDAG, a new clustered federated learning framework that improves AI model training across heterogeneous client environments. The system combines data and gradient similarity metrics for better client clustering and uses a dual-encoder architecture to enable knowledge sharing across clusters while maintaining specialization.
AI · Neutral · Hugging Face Blog · Mar 27 · 4/10
🧠 The article appears to focus on implementing federated learning with the Hugging Face and Flower frameworks. However, the article body was not available, so specific technical details and market implications could not be analyzed.