y0news
#cloud-computing
2 articles
AIBullish · arXiv – CS AI · 6h ago
🧠

Your Inference Request Will Become a Black Box: Confidential Inference for Cloud-based Large Language Models

Researchers propose Talaria, a confidential-inference framework that protects client data privacy when using cloud-hosted Large Language Models. The system partitions LLM operations between client-controlled environments and cloud GPUs, cutting the accuracy of token-reconstruction attacks from 97.5% to 1.34% while maintaining model performance.
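The partitioning idea can be illustrated with a toy sketch (this is not Talaria's actual protocol; the class names, split point, and dimensions are assumptions for illustration): the client keeps the token-to-embedding mapping private and ships only continuous vectors to the cloud, so raw token IDs never leave the client-controlled environment.

```python
import random

DIM = 8  # toy embedding dimension (assumption, not from the paper)

class Client:
    def __init__(self, vocab_size, seed=0):
        rng = random.Random(seed)
        # Private embedding table: known only to the client.
        self.table = [[rng.gauss(0, 1) for _ in range(DIM)]
                      for _ in range(vocab_size)]

    def embed(self, token_ids):
        # Map private token IDs to vectors before anything leaves the client.
        return [self.table[t] for t in token_ids]

class CloudGPU:
    # The cloud sees only anonymous vectors, never the token IDs.
    def run_layers(self, vectors):
        # Stand-in for the heavy transformer layers: scale each vector.
        return [[2.0 * x for x in v] for v in vectors]

client = Client(vocab_size=100)
cloud = CloudGPU()

tokens = [5, 17, 42]                          # stays on the client
hidden = cloud.run_layers(client.embed(tokens))  # cloud does the heavy math
print(len(hidden), len(hidden[0]))            # 3 8
```

Recovering the exact embedding table from the vectors alone is the kind of token-reconstruction attack the paper measures; the real system's defenses go well beyond this sketch.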

AIBullish · arXiv – CS AI · 6h ago
🧠

Towards Privacy-Preserving LLM Inference via Collaborative Obfuscation (Technical Report)

Researchers have developed AloePri, which they describe as the first privacy-preserving LLM inference method designed for industrial applications. The system uses collaborative obfuscation to protect input and output data while maintaining 96.5–100% accuracy and resisting state-of-the-art attacks; it was successfully tested on a 671B-parameter model.
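The general flavor of obfuscated inference can be sketched for a purely linear cloud operation (this is not AloePri's method; the masking scheme and matrix are assumptions for illustration): the client adds random noise to its input, the cloud computes on the masked vector, and linearity lets the mask's contribution be subtracted afterwards.

```python
import random

rng = random.Random(1)
N = 4

# Cloud-side weight matrix for a toy linear layer (assumption).
W = [[rng.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def cloud_linear(v):
    # Cloud computes a matrix-vector product; it sees only masked vectors.
    return [sum(W[i][j] * v[j] for j in range(N)) for i in range(N)]

x = [1.0, 2.0, 3.0, 4.0]                       # private client input
r = [rng.uniform(-10, 10) for _ in range(N)]   # client-side random mask
masked = [xi + ri for xi, ri in zip(x, r)]     # what the cloud receives

y_masked = cloud_linear(masked)  # cloud works on obfuscated data only
# For illustration we evaluate the mask's contribution with the same
# function; real protocols remove it via precomputed values so the
# client never needs the cloud's weights.
y_mask = cloud_linear(r)
y = [a - b for a, b in zip(y_masked, y_mask)]

# Linearity guarantees y matches the unmasked result up to float error.
expected = cloud_linear(x)
assert all(abs(a - b) < 1e-9 for a, b in zip(y, expected))
```

Nonlinear layers (attention, activations) break this simple subtraction, which is why practical schemes like the one described need more elaborate collaborative protocols.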