Reinforcement-learning-guided source code summarization using hierarchical attention

Wenhua WANG, Yuqun ZHANG, Yulei SUI, Yao WAN, Zhou ZHAO, Jian WU, Philip S. YU, Guandong XU

Research output: Contribution to journal › Article › peer-review

52 Citations (Scopus)

Abstract

Code summarization (aka comment generation) provides a high-level natural language description of the function performed by code, which can benefit software maintenance, code categorization, and retrieval. To the best of our knowledge, the state-of-the-art approaches follow an encoder-decoder framework, which encodes source code into a hidden space and later decodes it into a natural language space. Such approaches suffer from the following drawbacks: (a) they mainly represent the input code as a sequence of tokens while ignoring the code hierarchy; (b) most of the encoders take only simple features (e.g., tokens) as input while ignoring the features that can help capture the correlations between comments and code; (c) the decoders are typically trained to predict subsequent words by maximizing the likelihood of the subsequent ground-truth words, while in the real world they are expected to generate the entire word sequence from scratch. These drawbacks lead to inferior and inconsistent comment generation accuracy. To address the above limitations, this paper presents a new code summarization approach using a hierarchical attention network that incorporates multiple code features, including type-augmented abstract syntax trees and program control flows. Such features, along with plain code sequences, are injected into a deep reinforcement learning (DRL) framework (i.e., an actor-critic network) for comment generation. Our approach assigns weights (pays 'attention') to tokens and statements when constructing the code representation, to reflect the hierarchical code structure under different contexts regarding code features (e.g., control flows and abstract syntax trees). Our reinforcement learning mechanism further strengthens the prediction results through the actor network and the critic network, where the actor network provides the confidence of predicting subsequent words based on the current state, and the critic network computes the reward values of all possible extensions of the current state to provide global guidance for exploration. Eventually, we employ an advantage reward to train both networks and conduct a set of experiments on a real-world dataset. The experimental results demonstrate that our approach outperforms the baselines by around 22 to 45 percent in BLEU-1 and outperforms the state-of-the-art approaches by around 5 to 60 percent in terms of S-BLEU and C-BLEU. Copyright © 2020 IEEE.
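
The two-level attention described in the abstract can be made concrete with a short sketch. The PyTorch code below is a minimal illustration, not the authors' implementation: it attends over tokens to build one vector per statement, then attends over statements to build the code representation. All module and parameter names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Attention(nn.Module):
    """Additive attention that pools a sequence of vectors into one vector."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.context = nn.Linear(dim, 1, bias=False)

    def forward(self, x):                                  # x: (batch, seq, dim)
        scores = self.context(torch.tanh(self.proj(x)))    # (batch, seq, 1)
        weights = torch.softmax(scores, dim=1)             # attention weights
        return (weights * x).sum(dim=1)                    # (batch, dim)

class HierarchicalEncoder(nn.Module):
    """Tokens -> statement vectors -> code vector, with attention at both levels."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.token_rnn = nn.GRU(dim, dim, batch_first=True)
        self.token_attn = Attention(dim)
        self.stmt_rnn = nn.GRU(dim, dim, batch_first=True)
        self.stmt_attn = Attention(dim)

    def forward(self, code):                   # code: (batch, n_stmts, n_tokens)
        b, s, t = code.shape
        tokens = self.embed(code.view(b * s, t))           # embed every token
        tok_h, _ = self.token_rnn(tokens)
        stmt_vecs = self.token_attn(tok_h).view(b, s, -1)  # one vector per statement
        stmt_h, _ = self.stmt_rnn(stmt_vecs)
        return self.stmt_attn(stmt_h)                      # one vector per snippet

enc = HierarchicalEncoder(vocab_size=1000, dim=64)
fake_code = torch.randint(0, 1000, (2, 5, 12))  # 2 snippets, 5 statements, 12 tokens
print(enc(fake_code).shape)                     # torch.Size([2, 64])
```

In the paper's setting, parallel encoders of this kind would also consume the type-augmented AST and control-flow features; the sketch shows only the plain token sequence for brevity.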
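
The actor-critic training signal can be sketched in the same spirit. In the fragment below, the actor scores candidate next words, the critic estimates the expected reward of the current decoder state, and the advantage (reward minus the critic's estimate) weights the policy-gradient update. The reward stands in for a sentence-level BLEU score; names, shapes, and the exact losses are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn as nn

vocab, dim = 1000, 64
actor = nn.Sequential(nn.Linear(dim, vocab))   # decoder state -> word logits
critic = nn.Sequential(nn.Linear(dim, 1))      # decoder state -> value estimate
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

state = torch.randn(8, dim)                    # decoder states (batch, dim)
reward = torch.rand(8)                         # e.g., sentence-level BLEU per sample

logits = actor(state)
dist = torch.distributions.Categorical(logits=logits)
word = dist.sample()                           # sampled next words
value = critic(state).squeeze(-1)              # critic's estimate of the reward

advantage = reward - value.detach()            # advantage reward
actor_loss = -(advantage * dist.log_prob(word)).mean()
critic_loss = (reward - value).pow(2).mean()   # regress value toward the reward

opt.zero_grad()
(actor_loss + critic_loss).backward()
opt.step()
```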

Original language: English
Pages (from-to): 102-119
Journal: IEEE Transactions on Software Engineering
Volume: 48
Issue number: 1
Early online date: Mar 2020
DOIs: https://doi.org/10.1109/TSE.2020.2979701
Publication status: Published - 2022

Citation

Wang, W., Zhang, Y., Sui, Y., Wan, Y., Zhao, Z., Wu, J., Yu, P. S., & Xu, G. (2022). Reinforcement-learning-guided source code summarization using hierarchical attention. IEEE Transactions on Software Engineering, 48(1), 102-119. https://doi.org/10.1109/TSE.2020.2979701

Keywords

  • Code summarization
  • Hierarchical attention
  • Reinforcement learning
