AI · Neutral · arXiv · CS AI · 4h ago · 6/10
CoDe-R: Refining Decompiler Output with LLMs via Rationale Guidance and Adaptive Inference
Researchers propose CoDe-R, a two-stage framework that uses large language models to refine binary decompilation output, reducing logical errors and semantic misalignment. A 1.3B-parameter model trained with this approach achieves state-of-the-art results on the HumanEval-Decompile benchmark and is the first lightweight model to exceed a 50% re-executability rate.