## K-ASTRO: Making LLMs More Generalizable in Neural Code Analysis

![image](https://hackmd.io/_uploads/rkeek16H6.png)
![image](https://hackmd.io/_uploads/SJaJYgVop.png)
![image](https://hackmd.io/_uploads/HyJ2vQQia.png)

References: https://arxiv.org/abs/2211.08411

### 0. Background

* Lack of context (i.e., syntax or semantics) during fine-tuning of LLMs
* Performance decrease in cross-domain tasks
* Therefore -> our method (K-ASTRO): semantics-based syntax diversity -> enhanced Code LLMs

### 1. Dataset

* Code2Text (6 PLs and their corresponding comments)
* Code2Text (modified: only Java and Python selected)
* Code Translation (Java -> C#)
* Code Repair (Java)
* Vulnerability Repair (Java)

![image](https://hackmd.io/_uploads/rk00ptUSa.png)

Discussion notes:

* Code smells -> bad code smells, the kind of issues that could be repaired, e.g., a less severe buffer overflow vs. a strong buffer overflow; use the CVSS score to group weak vs. strong vulnerabilities.
* Develop rules to help decide attention between two nodes, e.g., the definition of one variable tied back to other nodes in the AST.
* In the construction of one of the maps, also represent data flow (control-flow graph).
* Value numbering -> a type of dataflow analysis. The elements given to value-number analysis can correspond to elements in the AST; a path could combine everything together.

### 2. Model

* Background information
![image](https://hackmd.io/_uploads/BJiIc0hra.png)

#### 2.0 Old Design

* Model design
![image](https://hackmd.io/_uploads/Hy6LM_ISp.png)
* Dataset
![image](https://hackmd.io/_uploads/B1FAn03Sa.png)
![image](https://hackmd.io/_uploads/S1sHoA3Ha.png)
* Processed dataframe
![image](https://hackmd.io/_uploads/HyPZhR3B6.png)
* Detail A: augmentation & upper and lower diagonal matrix
![image](https://hackmd.io/_uploads/HyRmARnH6.png)
* Detail B: binning techniques
![image](https://hackmd.io/_uploads/r1dZsR2ra.png)
* Detail C: attention bias (a toy binning + bias sketch appears at the end of these notes)
![image](https://hackmd.io/_uploads/r171e16ST.png)
* Baseline results
![image](https://hackmd.io/_uploads/B16tNCnSa.png)
![image](https://hackmd.io/_uploads/rJxq403HT.png)

#### 2.1 PEFT for Training LLMs

* (Q)LoRA -> where the adjustment happens (a reference LoRA sketch appears at the end of these notes)
![image](https://hackmd.io/_uploads/S1BNNOUB6.png)
![image](https://hackmd.io/_uploads/BkOf7_UB6.png)

#### 2.2 Automated Bias for LoRA

* One global (parameter-sharing) structure
* 1 x Transformer block -> local adjustment for the bias

#### 2.3 Dimension Alignment

* Diagonal matrix
* 1 x Transformer block -> dimension alignment for bias injection (see the shared-adapter sketch at the end of these notes)

### 3. Experiments

#### 3.1 Research Questions

* RQ1: Code2Text -> general performance
* RQ2: Code2Text -> generalization
* RQ3: Code Translation + Code Repair -> general performance
* RQ4: Vulnerability Repair (BigVul, CWEs) -> targeted generalization (weak vulnerability -> strong vulnerability)

#### 3.2 TBD

#### 3.3 TBD

### 4. Results

TBD
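Below are a few minimal PyTorch sketches of components named in §2.0–2.3. They are assumption-laden illustrations, not K-ASTRO's implementation; every class, function, and parameter name in them is made up. First, Details B and C: one common way to turn pairwise AST distances into an attention bias is to clip the distances into a fixed number of bins and learn one scalar per (bin, head) that is added to the attention logits before the softmax.

```python
import torch
import torch.nn as nn

class BinnedASTBias(nn.Module):
    """Map pairwise AST distances to a learned per-head attention bias.

    Distances are clipped into `num_bins` uniform buckets; each bucket owns
    one learnable scalar per attention head, added to the logits pre-softmax.
    (Illustrative sketch only -- the actual binning scheme is in Detail B.)
    """

    def __init__(self, num_bins: int = 32, num_heads: int = 8):
        super().__init__()
        self.num_bins = num_bins
        self.bias = nn.Embedding(num_bins, num_heads)

    def forward(self, dist: torch.Tensor) -> torch.Tensor:
        # dist: (batch, seq, seq) integer pairwise AST distances
        bins = dist.clamp(0, self.num_bins - 1)
        # (batch, seq, seq, heads) -> (batch, heads, seq, seq)
        return self.bias(bins).permute(0, 3, 1, 2)

# Usage: add the bias to raw attention scores before the softmax.
scores = torch.randn(2, 8, 16, 16)             # (batch, heads, query, key)
dist = torch.randint(0, 50, (2, 16, 16))       # toy AST distances
attn = (scores + BinnedASTBias()(dist)).softmax(dim=-1)
```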
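§2.1 only marks where the (Q)LoRA adjustment happens. For reference, a standard LoRA linear layer (the usual formulation from the LoRA paper, not anything K-ASTRO-specific): the pretrained weight is frozen, and a rank-`r` bypass `B @ A`, initialized so the update is zero at step 0, carries all the trainable parameters.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base Linear plus a trainable low-rank update (standard LoRA).

    y = W x + (alpha / r) * B(A(x)); only A and B receive gradients.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():           # freeze the pretrained layer
            p.requires_grad_(False)
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.A.weight, std=0.02)   # A: small gaussian init
        nn.init.zeros_(self.B.weight)              # B = 0 -> no change at step 0
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.B(self.A(x))

layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(4, 512))                   # (4, 512)
```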
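Finally, §2.2–2.3 list only three ingredients: one global (parameter-sharing) structure, a single Transformer block, and a diagonal matrix for dimension alignment. A guess at how they might compose, sketched below: one shared block computes bias features from structure embeddings, and a per-host-layer learnable diagonal (stored as a vector and applied elementwise, i.e., `diag(d) @ h`) rescales them before injection into each layer.

```python
import torch
import torch.nn as nn

class SharedBiasAdapter(nn.Module):
    """One globally shared Transformer block computes a structure bias;
    a per-host-layer learnable diagonal rescales it before injection.

    Speculative sketch: the notes do not specify how the block, the
    parameter sharing, and the diagonal alignment actually fit together.
    """

    def __init__(self, dim: int = 512, num_host_layers: int = 12, heads: int = 8):
        super().__init__()
        # A single Transformer block, shared across all host layers.
        self.block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True
        )
        # One learnable diagonal per host layer for dimension alignment.
        self.diag = nn.Parameter(torch.ones(num_host_layers, dim))

    def forward(self, s: torch.Tensor, layer_idx: int) -> torch.Tensor:
        h = self.block(s)                      # (batch, seq, dim) shared features
        return h * self.diag[layer_idx]        # diag(d_l) @ h, elementwise

adapter = SharedBiasAdapter()
s = torch.randn(2, 16, 512)                    # toy structure embeddings
bias_for_layer3 = adapter(s, layer_idx=3)      # bias injected into host layer 3
```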