
Do Pre-trained Language Models Indeed Understand Software Engineering Tasks?

by Yao Li, Tao Zhang, Xiapu Luo, Haipeng Cai, Sen Fang, Dawei Yuan

Released as an article.

2022  

Abstract

Artificial intelligence (AI) for software engineering (SE) tasks has recently achieved promising performance. In this paper, we investigate to what extent pre-trained language models truly understand SE tasks such as code search and code summarization. We conduct a comprehensive empirical study on a broad set of AI for SE (AI4SE) tasks by feeding the models variant inputs: 1) inputs with various masking rates and 2) inputs reduced by the sufficient input subset method. The trained models are then evaluated on three SE tasks: code search, code summarization, and duplicate bug report detection. Our experimental results show that pre-trained language models are insensitive to the given input and thus achieve similar performance on all three tasks. We refer to this phenomenon as overinterpretation, where a model confidently makes a decision without salient features, or finds spurious relationships between the final decision and the dataset. Our study investigates two approaches to mitigate the overinterpretation phenomenon: the whole-word masking strategy and ensembling. To the best of our knowledge, we are the first to reveal the overinterpretation phenomenon to the AI4SE community, which is an important reminder for researchers to design model inputs carefully, and it calls for necessary future work in understanding and implementing AI4SE tasks.
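
To make the probing setup concrete, below is a minimal sketch (not the authors' implementation) of the masking-rate perturbation the abstract describes: a configurable fraction of input tokens is replaced with a mask token before the input is fed to the model. The whitespace tokenization and the "[MASK]" token here are illustrative assumptions; the paper's exact tokenizer and sampling policy may differ.

    import random

    MASK_TOKEN = "[MASK]"  # assumption: the real mask token depends on the model

    def mask_tokens(tokens, mask_rate, seed=0):
        """Replace a random fraction of tokens with MASK_TOKEN.

        Sketch of the masking-rate perturbation from the abstract; the
        paper's exact masking procedure may differ.
        """
        rng = random.Random(seed)
        n_mask = int(len(tokens) * mask_rate)
        positions = rng.sample(range(len(tokens)), n_mask)
        masked = list(tokens)
        for i in positions:
            masked[i] = MASK_TOKEN
        return masked

    # Example: degrade a code-search query at increasing masking rates.
    query = "find the maximum element in a sorted rotated array".split()
    for rate in (0.1, 0.3, 0.5, 0.9):
        print(rate, " ".join(mask_tokens(query, rate)))

If a model's ranking or summary stays essentially unchanged even at high masking rates, that insensitivity is the overinterpretation symptom the paper reports.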

Archived Files and Locations

application/pdf  3.8 MB
file_mlee2dj2indpbnki5yehwonyoi
arxiv.org (repository)
web.archive.org (webarchive)
Type      article
Stage     submitted
Date      2022-11-19
Version   v1
Language  en
arXiv     2211.10623v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 3745b3e2-7acf-4bb5-9c5a-062975dda617