y0news

#fpga News & Analysis

7 articles tagged with #fpga. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

QUARK: Quantization-Enabled Circuit Sharing for Transformer Acceleration by Exploiting Common Patterns in Nonlinear Operations

Researchers have developed QUARK, a quantization-enabled FPGA acceleration framework that significantly improves Transformer model performance by optimizing nonlinear operations through circuit sharing. The system achieves up to 1.96x speedup over GPU implementations while reducing hardware overhead by more than 50% compared to existing approaches.
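The summary doesn't spell out QUARK's mechanism, but the general idea behind circuit sharing for quantized nonlinear operations can be sketched: once inputs are quantized to a small code set (e.g. int8), several nonlinear functions reduce to lookup tables that can share the same indexing circuit. A minimal Python sketch, with all names, the scale factor, and the tanh GELU approximation being illustrative rather than QUARK's actual design:

```python
import math

SCALE = 0.1  # assumed dequantization scale for int8 codes (illustrative)

def build_lut(fn):
    # One entry per int8 code: dequantize, apply fn, store the result.
    return [fn(code * SCALE) for code in range(-128, 128)]

def gelu(x):
    # Common tanh approximation of GELU
    return 0.5 * x * (1.0 + math.tanh(0.7978845608 * (x + 0.044715 * x**3)))

LUTS = {"exp": build_lut(math.exp), "gelu": build_lut(gelu)}

def apply_nonlinear(op, q_codes):
    # The same indexing logic serves every nonlinear op; only the table
    # contents differ. That reuse is the "circuit sharing" idea.
    table = LUTS[op]
    return [table[code + 128] for code in q_codes]
```

On hardware, the payoff is that one table-indexing datapath amortizes across softmax, GELU, and similar operations instead of each getting a dedicated circuit.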

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

TinyIceNet: Low-Power SAR Sea Ice Segmentation for On-Board FPGA Inference

Researchers developed TinyIceNet, a compact AI model for real-time sea ice mapping using satellite SAR imagery, designed specifically for on-board FPGA processing in space. The system achieves a 75.216% F1 score while consuming 50% less energy than GPU baselines, demonstrating practical AI deployment for maritime navigation in polar regions.
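For readers unfamiliar with the F1 metric cited above, it is the harmonic mean of precision and recall over pixel-wise predictions. A self-contained illustration for binary segmentation masks (1 = ice, 0 = water), not TinyIceNet's evaluation code:

```python
def f1_score(pred, truth):
    # Pixel-wise F1 for flattened binary segmentation masks.
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))  # true positives
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))  # false positives
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))  # false negatives
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```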

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

TT-SEAL: TTD-Aware Selective Encryption for Adversarially-Robust and Low-Latency Edge AI

Researchers developed TT-SEAL, a selective encryption framework for compressed AI models using Tensor-Train Decomposition that maintains security while encrypting only 4.89-15.92% of parameters. The system achieves the same robustness as full encryption while reducing the share of end-to-end latency spent on AES decryption from 58% to as low as 2.76%.
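The core trick of selective encryption can be sketched independently of TT-SEAL's specifics: rank parameters by some sensitivity criterion and encrypt only a small fraction, so most of the model is served in plaintext with no decryption cost. The magnitude-based selection and the XOR keystream below are hypothetical stand-ins for TT-SEAL's TTD-aware selection and AES, respectively:

```python
import hashlib

def keystream(key, n):
    # Toy XOR keystream standing in for AES; illustration only.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def selective_encrypt(params, key, fraction=0.05):
    # Encrypt only the most "sensitive" parameters (here: largest
    # magnitude, a hypothetical criterion), leaving the rest plaintext.
    # params are assumed to be integer-quantized values.
    k = max(1, int(len(params) * fraction))
    order = sorted(range(len(params)), key=lambda i: -abs(params[i]))
    selected = set(order[:k])
    ks = keystream(key, len(params))
    protected = list(params)
    for i in selected:
        protected[i] = params[i] ^ ks[i]
    return protected, selected
```

Decryption re-XORs only the selected indices, which is why the latency overhead scales with the encrypted fraction rather than the full model size.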

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Bitwise Systolic Array Architecture for Runtime-Reconfigurable Multi-precision Quantized Multiplication on Hardware Accelerators

Researchers developed a runtime-reconfigurable bitwise systolic array architecture for multi-precision quantized neural networks on FPGA hardware accelerators. The system achieves 1.3-3.6x speedup on mixed-precision models while supporting clock frequencies of up to 250 MHz, addressing the trade-off between hardware efficiency and inference accuracy.
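The bit-level decomposition such architectures exploit is standard: an unsigned product can be composed from 1-bit partial products, a * b = Σᵢ Σⱼ aᵢ·bⱼ·2^(i+j), so the same 1-bit hardware cells serve any operand precision chosen at runtime. A minimal sketch of the arithmetic (not the paper's actual systolic array):

```python
def bitwise_multiply(a, b, a_bits, b_bits):
    # Compose an unsigned multi-precision product from 1-bit partial
    # products. On hardware, each (i, j) term maps to a 1-bit AND cell;
    # runtime reconfigurability falls out of choosing a_bits/b_bits.
    acc = 0
    for i in range(a_bits):
        for j in range(b_bits):
            a_i = (a >> i) & 1
            b_j = (b >> j) & 1
            acc += (a_i & b_j) << (i + j)
    return acc
```

Mixed precision is free in this scheme: a 4-bit by 2-bit multiply simply uses fewer cells than an 8-bit by 8-bit one.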

AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

Architectural Design and Performance Analysis of FPGA based AI Accelerators: A Comprehensive Review

This comprehensive review examines FPGA-based AI accelerators as a promising solution for deep learning workloads, addressing the limitations of ASIC and GPU accelerators. The paper analyzes hardware optimizations including loop pipelining, parallelism, and quantization techniques that make FPGAs attractive for AI applications requiring high performance and energy efficiency.

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10

BiKA: Kolmogorov-Arnold-Network-inspired Ultra Lightweight Neural Network Hardware Accelerator

Researchers propose BiKA, a new ultra-lightweight neural network accelerator inspired by Kolmogorov-Arnold Networks that uses binary thresholds instead of complex computations. The FPGA prototype demonstrates 27-51% reduction in hardware resource usage compared to existing binarized and quantized neural network accelerators while maintaining competitive accuracy.
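The "binary thresholds instead of complex computations" idea can be sketched: where a Kolmogorov-Arnold Network learns a smooth univariate function on each edge, a threshold-based variant approximates it as a sum of step functions, which needs only comparators and adders in hardware. Everything below is a hypothetical illustration of that principle, not BiKA's actual architecture:

```python
def binary_threshold_fn(thresholds, weights, x):
    # A univariate "edge function" built from binary threshold units:
    # each unit contributes its weight iff x crosses its threshold.
    # No multiplications are needed, only comparisons and additions.
    return sum(w for t, w in zip(thresholds, weights) if x >= t)

def bika_layer(inputs, edge_params):
    # KAN-style layer: each output sums one univariate edge function
    # per input. edge_params[k] lists (thresholds, weights) per input.
    outputs = []
    for edges in edge_params:
        acc = 0
        for (thresholds, weights), x in zip(edges, inputs):
            acc += binary_threshold_fn(thresholds, weights, x)
        outputs.append(acc)
    return outputs
```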

AI · Neutral · IEEE Spectrum – AI · Mar 1 · 6/10

Letting Machines Decide What Matters

Particle physicists are turning to AI to discover new physics beyond the Standard Model by using machine learning systems to analyze data from the Large Hadron Collider in real-time. The AI systems, running on FPGAs connected to detectors, must decide which of 40 million particle collisions per second are worth preserving for analysis, essentially becoming part of the scientific instrument itself.
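The selection problem the article describes, deciding under hard real-time constraints which events survive, is a filtering pipeline at its core. A deliberately crude Python stand-in for the FPGA trigger logic (the energy-sum cut and buffer limit are invented for illustration; the real systems use learned models):

```python
def trigger(events, energy_cut, max_keep):
    # Toy level-1 trigger: keep at most max_keep events whose summed
    # detector deposit exceeds energy_cut, discarding the rest
    # immediately. The real decision runs on FPGAs within microseconds.
    kept = []
    for event in events:          # each event: list of energy deposits
        if sum(event) > energy_cut:
            kept.append(event)
            if len(kept) == max_keep:
                break
    return kept
```

The article's point is that the interesting physics may hide in events a fixed cut like this would discard, which is why learned, in-detector selection matters.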