UMEDA: Unified Multi-modal Efficient Data Fusion for Privacy-Preserving Graph Federated Learning via Spectral-Gated Attention and Diffusion-Based Operator Alignment
Researchers introduce UMEDA, a federated learning framework designed to enable device-free localization across heterogeneous sensors while maintaining privacy. The system uses spectral signal processing and diffusion-based aggregation to align data from different sensor modalities without requiring direct node correspondence, achieving superior performance on multi-modal benchmarks under privacy constraints.
UMEDA addresses a fundamental challenge in distributed machine learning: training accurate models when edge devices have incompatible sensors and data distributions. Traditional federated learning assumes clients share similar hardware and data patterns, but real-world deployments often feature Wi-Fi receivers alongside LiDAR systems with different resolutions and calibrations. This research reformulates the aggregation problem by treating client updates as discretizations of a shared continuous operator rather than topology-bound weights, enabling the system to absorb missing modalities and varying graph sizes without manual alignment.

The technical innovation centers on spectral filtering through low-rank kernels, which suppresses modality-specific noise while preserving shared signal structure. This approach naturally complements privacy preservation: the researchers implement anisotropic differential privacy by projecting noise preferentially into the null space of the signal subspace, protecting sensitive information while maintaining utility along dominant eigendirections.

Experimental validation on the MM-Fi and RELI11D benchmarks demonstrates clear improvements in accuracy, convergence speed, and communication efficiency, particularly as modality heterogeneity increases or privacy budgets tighten. The work extends federated learning beyond homogeneous settings, making it practical for real infrastructure deployments where sensor diversity is inevitable. This framework could influence how edge computing systems handle multi-modal sensor fusion in smart buildings, autonomous vehicles, and industrial IoT applications where both privacy and accuracy matter.
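The anisotropic noise idea can be illustrated with a minimal sketch. The function below is hypothetical (not the authors' implementation): given an orthonormal basis for the dominant signal subspace, it splits Gaussian noise into a signal-subspace component and a null-space component and scales them independently, so most of the privacy noise lands where it least degrades utility.

```python
import numpy as np

rng = np.random.default_rng(0)

def anisotropic_dp_noise(update, signal_basis, sigma_null, sigma_signal):
    """Add Gaussian noise with separate scales inside and outside the
    signal subspace (hypothetical sketch of anisotropic DP noising).

    update:       1-D parameter/update vector of dimension d
    signal_basis: (d, k) matrix with orthonormal columns spanning the
                  top-k eigendirections of the shared signal subspace
    """
    # Orthogonal projector onto the span of the dominant eigendirections.
    P = signal_basis @ signal_basis.T
    noise = rng.normal(size=update.shape)
    # Decompose the noise: component inside the signal subspace vs. the
    # component in its orthogonal complement (the "null space").
    noise_signal = P @ noise
    noise_null = noise - noise_signal
    return update + sigma_signal * noise_signal + sigma_null * noise_null
```

Setting `sigma_null` much larger than `sigma_signal` concentrates perturbation away from the directions that carry shared structure; calibrating the two scales to a formal (ε, δ) guarantee is the part the paper's mechanism would make precise.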
- UMEDA solves federated learning challenges with heterogeneous sensors by treating aggregation as spectral signal processing on continuous operators.
- Spectral-gated attention filters client data into low-rank subspaces, aligning devices with different modalities without explicit correspondence.
- Anisotropic differential privacy mechanism preserves utility on dominant signal directions while formally satisfying (ε, δ)-DP guarantees.
- Outperforms federated baselines on multi-modal benchmarks under high heterogeneity and tight privacy budgets.
- Enables device-free localization across distributed edge devices with incompatible sensors and drifting data distributions.
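The alignment idea in the points above can be sketched in a toy form. The code below is an illustrative simplification (not the paper's algorithm): each client expresses its update in the low-frequency eigenbasis of its own graph Laplacian, the server averages the resulting r-dimensional spectral coefficients, and each client reconstructs on its own graph. This shows how updates from graphs of different sizes can be aggregated without node-level correspondence, under the strong simplifying assumption that eigendirections are matched by index.

```python
import numpy as np

def path_laplacian(n):
    """Combinatorial Laplacian of an n-node path graph (toy example)."""
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.diag(A.sum(axis=1)) - A

def aggregate_spectral(updates, bases, r):
    """Average client updates in a shared r-dimensional spectral
    coordinate system (sketch: eigenmodes matched by index).

    updates: list of 1-D arrays, one per client, of varying lengths
    bases:   list of (n_i, n_i) Laplacian eigenvector matrices
    """
    # Project each client's update onto its r lowest-frequency modes.
    coeffs = [B[:, :r].T @ u for u, B in zip(updates, bases)]
    # Aggregate in the shared low-rank spectral space.
    shared = np.mean(coeffs, axis=0)
    # Each client reconstructs the aggregated update on its own graph.
    return [B[:, :r] @ shared for B in bases]
```

Because aggregation happens on r coefficients rather than raw node-indexed weights, clients with 5-node and 8-node graphs can participate in the same round; the paper's diffusion-based operator alignment replaces the naive index-matching assumption with a principled correspondence between spectra.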