What do they capture?: A structural analysis of pre-trained language models for source code

Yao WAN, Wei ZHAO, Hongyu ZHANG, Yulei SUI, Guandong XU, Hai JIN

Research output: Chapter in Book/Report/Conference proceeding › Chapters

48 Citations (Scopus)

Abstract

Recently, many pre-trained language models for source code have been proposed to model the context of code and serve as a basis for downstream code intelligence tasks such as code completion, code search, and code summarization. These models leverage masked pre-training and the Transformer architecture and have achieved promising results. However, there has been little progress on the interpretability of existing pre-trained code models. It is not clear why these models work and what feature correlations they can capture. In this paper, we conduct a thorough structural analysis aiming to provide an interpretation of pre-trained language models for source code (e.g., CodeBERT and GraphCodeBERT) from three distinctive perspectives: (1) attention analysis, (2) probing on the word embedding, and (3) syntax tree induction. Through comprehensive analysis, this paper reveals several insightful findings that may inspire future studies: (1) Attention aligns strongly with the syntax structure of code. (2) Pre-trained language models of code preserve the syntax structure of code in the intermediate representations of each Transformer layer. (3) Pre-trained models of code are able to induce syntax trees of code. These findings suggest that it may be helpful to incorporate the syntax structure of code into the process of pre-training for better code representations. Copyright © 2022 Association for Computing Machinery.
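The three analysis perspectives above (attention analysis, probing on intermediate representations, and syntax tree induction) all start from the same raw material: per-head attention matrices and per-layer hidden states of a pre-trained code model. The following is a minimal sketch, not the authors' code, of how that material can be extracted for CodeBERT using the publicly released microsoft/codebert-base checkpoint and the HuggingFace transformers API; the example code snippet and any downstream probing logic are illustrative assumptions only.

```python
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained(
    "microsoft/codebert-base",
    output_attentions=True,      # expose per-head attention matrices
    output_hidden_states=True,   # expose intermediate layer representations
)

code = "def add(a, b): return a + b"   # illustrative input snippet
inputs = tokenizer(code, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# attentions: one (batch, heads, seq_len, seq_len) tensor per layer.
# An attention analysis would compare these weights against token pairs
# that are related in the code's syntax tree (e.g., parent-child AST edges).
attentions = outputs.attentions

# hidden_states: one (batch, seq_len, hidden_dim) tensor per layer, plus the
# embedding layer. A structural probe would train a small classifier or
# distance probe on these to test whether syntax is recoverable.
hidden_states = outputs.hidden_states

print(len(attentions), attentions[0].shape)
print(len(hidden_states), hidden_states[0].shape)
```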

Original language: English
Title of host publication: Proceedings of 2022 ACM/IEEE 44th International Conference on Software Engineering, ICSE 2022
Place of Publication: New York
Publisher: The Association for Computing Machinery
Pages: 2377-2388
ISBN (Electronic): 9781450392211
DOI: https://doi.org/10.1145/3510003.3510050
Publication status: Published - 2022

Citation

Wan, Y., Zhao, W., Zhang, H., Sui, Y., Xu, G., & Jin, H. (2022). What do they capture?: A structural analysis of pre-trained language models for source code. In Proceedings of 2022 ACM/IEEE 44th International Conference on Software Engineering, ICSE 2022 (pp. 2377-2388). The Association for Computing Machinery. https://doi.org/10.1145/3510003.3510050

Keywords

  • Code representation
  • Deep learning
  • Pre-trained language model
  • Probing
  • Attention analysis
  • Syntax tree induction
