7 articles tagged with #temporal-analysis. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv – CS AI · Mar 9 · 7/10
🧠 Researchers introduced LLMTM, a comprehensive benchmark to evaluate Large Language Models' performance on temporal motif analysis in dynamic graphs. The study tested nine different LLMs and developed a structure-aware dispatcher that balances accuracy with cost-effectiveness for graph analysis tasks.
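As a rough illustration of the dispatcher idea (the model tiers, thresholds, and routing rule below are assumptions for the sketch, not details from the paper), a structure-aware dispatcher can route a temporal-motif query to a cheaper or stronger model based on simple graph statistics:

```python
# Illustrative sketch only: the benchmark's actual dispatcher, model names,
# and thresholds are not given in this summary and are assumed here.

def dispatch(num_nodes: int, num_temporal_edges: int) -> str:
    """Pick a model tier for a temporal-motif query from basic graph stats."""
    density = num_temporal_edges / max(num_nodes, 1)
    if num_nodes <= 50 and density <= 3:
        return "small-llm"      # cheap model is usually enough on easy instances
    if num_nodes <= 500:
        return "mid-llm"        # balance accuracy against cost
    return "frontier-llm"       # large/dense graphs justify the extra cost

print(dispatch(num_nodes=30, num_temporal_edges=60))       # -> small-llm
print(dispatch(num_nodes=2000, num_temporal_edges=9000))   # -> frontier-llm
```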
AI · Neutral · arXiv – CS AI · Mar 5 · 7/10
🧠 Research shows that static word embeddings like GloVe and Word2Vec can recover substantial geographic and temporal information from text co-occurrence patterns alone, challenging assumptions that such capabilities require sophisticated world models in large language models. The study found these simple embeddings could predict city coordinates and historical birth years with high accuracy, suggesting that linear probe recoverability doesn't necessarily indicate advanced internal representations.
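To make "linear probe recoverability" concrete, here is a toy probe in the same spirit (the vectors below are random stand-ins, not actual GloVe/Word2Vec embeddings, and the evaluation setup is an assumption): fit a regularized linear map from embedding vectors to coordinates and measure held-out fit.

```python
# Toy linear-probe sketch; synthetic data stands in for real embeddings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 300))                  # 500 "city" vectors, 300-d
true_W = rng.normal(size=(300, 2))
y = X @ true_W + rng.normal(scale=0.1, size=(500, 2))   # synthetic (lat, lon)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The probe is just a regularized linear map: a high held-out R^2 means the
# targets are linearly recoverable from the embedding geometry, which is the
# property the study measures -- not evidence of a richer internal world model.
probe = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("held-out R^2:", probe.score(X_te, y_te))
```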
AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠 Researchers have developed SmartGuard Energy Intelligence System (SGEIS), an AI framework that combines machine learning, deep learning, and graph neural networks to detect electricity theft in smart grids. The system achieved 96% accuracy in identifying high-risk nodes and demonstrates strong performance with practical applications for energy security.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers have developed Thoth, the first family of Large Language Models specifically designed to understand and reason about time series data through a mid-training approach. The model uses a specialized corpus called Book-of-Thoth to bridge the gap between temporal data and natural language, significantly outperforming existing LLMs in time series analysis tasks.
AI · Bullish · arXiv – CS AI · Mar 2 · 6/10
🧠 Researchers developed Hybrid Class-Aware Selective Replay (Hybrid-CASR), a continual learning method that improves AI-based software vulnerability detection by addressing catastrophic forgetting in temporal scenarios. The method achieved a 0.667 Macro-F1 score while reducing training time by 17% compared to baseline approaches on CVE data from 2018-2024.
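For readers new to replay-based continual learning, a minimal class-aware replay buffer looks roughly like this (capacities, selection rule, and labels below are illustrative assumptions, not the Hybrid-CASR specifics):

```python
# Minimal class-aware replay sketch; Hybrid-CASR's actual selection strategy
# and hybrid components are not reproduced here.
import random
from collections import defaultdict

class ClassAwareReplayBuffer:
    """Keep a bounded sample of past examples per class and replay them
    alongside new data so earlier patterns are not forgotten."""

    def __init__(self, per_class_capacity: int = 100):
        self.per_class_capacity = per_class_capacity
        self.store = defaultdict(list)

    def add(self, example, label):
        bucket = self.store[label]
        if len(bucket) < self.per_class_capacity:
            bucket.append(example)
        else:
            # Reservoir-style replacement keeps the buffer representative.
            bucket[random.randrange(self.per_class_capacity)] = example

    def sample(self, n_per_class: int):
        replay = []
        for label, bucket in self.store.items():
            k = min(n_per_class, len(bucket))
            replay.extend((ex, label) for ex in random.sample(bucket, k))
        return replay

# Usage: after training on one year of CVE data, mix buffer.sample(...) into
# the next year's batches so earlier vulnerability types are not forgotten.
buffer = ClassAwareReplayBuffer(per_class_capacity=2)
for i in range(5):
    buffer.add(f"code_snippet_{i}", label=i % 2)
print(buffer.sample(n_per_class=2))
```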
AI · Bullish · Hugging Face Blog · Jun 16 · 6/10
🧠 The article discusses the effectiveness of Transformer models for time series forecasting, with a focus on the Autoformer architecture.
AI · Neutral · arXiv – CS AI · Mar 5 · 4/10
🧠 Researchers propose TFWaveFormer, a novel Transformer architecture that combines temporal-frequency analysis with multi-resolution wavelet decomposition for dynamic link prediction. The framework achieves state-of-the-art performance on benchmark datasets by better capturing complex multi-scale temporal dynamics in applications like social networks and financial modeling.
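To make the wavelet side concrete, here is a small multi-resolution decomposition of a toy edge-activity series (this is not TFWaveFormer itself; the wavelet family, number of levels, and signal are assumptions, and it requires the PyWavelets package):

```python
# Multi-level wavelet decomposition sketch of a temporal signal.
import numpy as np
import pywt

# Toy edge-activity series: how often a node pair interacts at each time step.
t = np.arange(256)
activity = np.sin(2 * np.pi * t / 64) + 0.3 * np.sin(2 * np.pi * t / 8)

# The discrete wavelet transform separates slow trends (approximation
# coefficients) from progressively faster fluctuations (detail coefficients),
# giving a multi-scale view of the temporal dynamics.
coeffs = pywt.wavedec(activity, wavelet="db4", level=3)
for i, c in enumerate(coeffs):
    name = "approximation" if i == 0 else f"detail level {i}"
    print(f"{name}: {len(c)} coefficients")

# Each scale could then be embedded and attended over by a Transformer encoder,
# which is the kind of pairing the paper describes for link prediction.
```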