Multi-task learning with LLMs for implicit sentiment analysis: Data-level and task-level automatic weight learning

Wenna LAI, Haoran XIE, Guandong XU, Qing LI

Research output: Contribution to journal › Article › peer-review

Abstract

Implicit sentiment analysis (ISA) presents significant challenges due to the absence of salient cue words. Previous methods have struggled with insufficient data and limited reasoning capabilities to infer underlying opinions. Integrating multi-task learning (MTL) with large language models (LLMs) offers the potential to enable models of varying sizes to reliably perceive and recognize genuine opinions in ISA. However, existing MTL approaches are constrained by two sources of uncertainty: data-level uncertainty, arising from hallucination problems in LLM-generated contextual information, and task-level uncertainty, stemming from the varying capacities of models to process contextual information. To handle these uncertainties, we propose MT-ISA, a novel MTL framework that enhances ISA by leveraging the generation and reasoning capabilities of LLMs through automatic weight learning (AWL). Specifically, MT-ISA constructs auxiliary tasks using generative LLMs to supplement sentiment elements and incorporates automatic MTL to fully exploit auxiliary data. We introduce data-level and task-level AWL, which dynamically identify relationships and prioritize more reliable data and critical tasks, enabling models of varying sizes to adaptively learn fine-grained weights based on their reasoning capabilities. Three strategies are investigated for data-level AWL, and these are integrated with homoscedastic uncertainty for task-level AWL. Extensive experiments reveal that models of varying sizes achieve an optimal balance between primary prediction and auxiliary tasks in MT-ISA, underscoring the effectiveness and adaptability of our approach. Copyright © 2025 IEEE.
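The abstract does not give the paper's exact formulation, but homoscedastic-uncertainty task weighting is commonly expressed in the style of Kendall et al.'s multi-task loss, where each task's loss is scaled by a learnable log-variance term. The sketch below is an illustrative assumption, not the authors' implementation: the function name `awl_combined_loss` and the two-task example values are hypothetical.

```python
import math

def awl_combined_loss(task_losses, log_vars):
    """Combine per-task losses with homoscedastic-uncertainty weights.

    A common form is L = sum_i exp(-s_i) * L_i + s_i, where
    s_i = log(sigma_i^2) is a learnable per-task parameter: a task
    with high uncertainty (large s_i) is down-weighted, while the
    additive s_i term regularizes against inflating every variance.
    """
    total = 0.0
    for loss, s in zip(task_losses, log_vars):
        total += math.exp(-s) * loss + s
    return total

# Two tasks with equal (unit) variance: weights reduce to 1,
# so the combined loss is simply the sum of the task losses.
combined = awl_combined_loss([1.0, 2.0], [0.0, 0.0])  # → 3.0
```

In a training loop the `log_vars` would be trainable parameters optimized jointly with the model, so the balance between the primary prediction task and the LLM-generated auxiliary tasks is learned rather than hand-tuned.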

Original language: English
Journal: IEEE Transactions on Knowledge and Data Engineering
Early online date: Oct 2025
DOI: https://doi.org/10.1109/TKDE.2025.3623941
Publication status: E-pub ahead of print - Oct 2025

Citation

Lai, W., Xie, H., Xu, G., & Li, Q. (2025). Multi-task learning with LLMs for implicit sentiment analysis: Data-level and task-level automatic weight learning. IEEE Transactions on Knowledge and Data Engineering. Advance online publication. https://doi.org/10.1109/TKDE.2025.3623941

Keywords

  • Implicit sentiment analysis
  • Multi-task learning
  • Large language models
