Gradual machine learning

Introduction

Gradual machine learning (GML) begins with some easy instances in a task, which can be automatically labeled by the machine with high accuracy, and then gradually reasons about the labels of more challenging instances based on the observations provided by the labeled instances. The following two properties make GML fundamentally different from existing learning paradigms:

  1. Distribution misalignment between easy and hard instances in a task. The scenario of gradual machine learning does not satisfy the i.i.d. (independent and identically distributed) assumption underlying most existing machine learning models: the labeled easy instances are not representative of the unlabeled hard instances. This distribution misalignment renders most existing learning models unfit for gradual machine learning.

  2. Gradual learning by small stages in a task. Gradual machine learning proceeds in small stages. At each stage, it typically labels only one instance based on the evidential certainty provided by the labeled easier instances. The iterative labeling process can be performed in an unsupervised manner without any human intervention.

Gradual machine learning is a new and promising research direction intended to complement deep learning. We have successfully applied gradual machine learning to the classification tasks of entity resolution and sentiment analysis. As a general paradigm, GML can be generalized to various classification tasks. We have also initiated an open-source project, at https://github.com/gml-explore/gml, to support GML application and implementation.

The existing GML solution assumes that features play independent roles in gradual inference. However, in real scenarios, this assumption may be untenable since features are usually correlated with each other. To address this limitation, our new work proposes an attention-enhanced approach to improve the accuracy of gradual inference. The details can be found at http://chenbenben.org/agml.html.

Selected Publications

DNN-driven Gradual Machine Learning for Aspect-Term Sentiment Analysis. Findings of ACL, 2021.
Murtadha Ahmed, Qun Chen, Yanyan Wang, Youcef Nafa, Zhanhuai Li and Tianyi Duan
[Abstract]  [PDF]

Recent work has shown that Aspect-Term Sentiment Analysis (ATSA) can be performed by Gradual Machine Learning (GML), which begins with some automatically labeled easy instances, and then gradually labels more challenging instances by iterative factor graph inference without manual intervention. As a non-i.i.d. learning paradigm, GML leverages shared features between labeled and unlabeled instances for knowledge conveyance. However, the existing GML solution extracts sentiment features based on pre-specified lexicons, which are usually inaccurate and incomplete and thus lead to inadequate knowledge conveyance. In this paper, we propose a Deep Neural Network (DNN) driven GML approach for ATSA, which exploits the power of DNN in feature representation for gradual learning. It first uses an unsupervised neural network to cluster the automatically extracted features by their sentiment orientation. Then, it models the clustered features as factors to enable implicit knowledge conveyance for gradual inference in a factor graph. To leverage labeled training data, we also present a hybrid solution that fulfills gradual learning by fusing the influence of supervised DNN predictions and implicit knowledge conveyance in a unified factor graph. Finally, we empirically evaluate the performance of the proposed approach on real benchmark data. Our extensive experiments have shown that the proposed approach consistently achieves the state-of-the-art performance across all the test datasets in both unsupervised and supervised settings and the improvement margins are considerable.
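As a rough illustration of the feature-clustering step described in this abstract, the toy sketch below groups feature embeddings into two sentiment orientations with plain 2-means. The paper itself uses an unsupervised neural network for this step; the embeddings and the deterministic initialization here are invented for the example.

```python
import math

def cluster_features(embeds, iters=20):
    """Toy 2-means over feature embeddings: a stand-in for the
    unsupervised clustering of sentiment features by orientation."""
    # Deterministic init: first and last embedding as initial centers.
    centers = [list(embeds[0]), list(embeds[-1])]
    assign = [0] * len(embeds)
    for _ in range(iters):
        # Assignment step: each feature joins its nearest center.
        for i, e in enumerate(embeds):
            d = [math.dist(e, c) for c in centers]
            assign[i] = d.index(min(d))
        # Update step: recompute each center as the cluster mean.
        for k in range(2):
            members = [embeds[i] for i in range(len(embeds)) if assign[i] == k]
            if members:
                centers[k] = [sum(col) / len(members) for col in zip(*members)]
    return assign

# Embeddings of four sentiment features: two positive-ish, two negative-ish.
feats = [(1.0, 0.1), (0.9, -0.1), (-1.0, 0.0), (-0.8, 0.1)]
groups = cluster_features(feats)
```

In the full approach, each resulting cluster is then modeled as a factor in the factor graph, so features of the same orientation convey knowledge jointly rather than one by one.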

Attention-enhanced Gradual Machine Learning for Entity Resolution. IEEE Intelligent Systems, 2021.
Ping Zhong, Zhanhuai Li, Qun Chen and Boyi Hou
[Abstract]  [PDF]

Recent work has shown that Entity Resolution (ER) can be effectively performed by Gradual Machine Learning (GML). GML begins with some automatically labeled easy instances, and then gradually labels more challenging instances by iterative factor graph inference without human intervention. In GML, shared features serve as the medium for knowledge conveyance between easy instances and more challenging ones. The existing GML solution supposes that features play independent roles in gradual inference. However, in real scenarios, this assumption may be untenable since features are usually correlated with each other. To address this limitation, this paper proposes an attention-enhanced approach to improve the accuracy of gradual inference. We first propose a method of spectral feature representation to map correlated features to close points in the same vector space, and then present a model of attention neural network to learn the decisive features given arbitrary combinations of features for improved feature weighting. Finally, our extensive experiments on real benchmark data have validated the efficacy of the proposed approach.
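The attention-based feature weighting can be illustrated with a minimal softmax-attention sketch. The actual model is a trained attention neural network operating on spectral feature representations, so the vectors and query below are purely hypothetical.

```python
import math

def attention_weights(feature_vecs, query):
    """Softmax attention over feature embeddings: features most aligned
    with the query receive the largest weights, so correlated features
    no longer contribute as if they were independent evidence."""
    # Relevance score of each feature w.r.t. the query (dot product).
    scores = [sum(f * q for f, q in zip(v, query)) for v in feature_vecs]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three feature embeddings: the first two are correlated (close vectors),
# the third is the decisive feature aligned with the instance.
feats = [(1.0, 0.1), (0.9, 0.2), (0.0, 1.0)]
query = (0.0, 1.0)  # representation of the instance being labeled
weights = attention_weights(feats, query)
```

The two correlated features end up with similar, smaller weights, while the decisive feature dominates, which is the intuition behind the improved feature weighting.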

Aspect-Level Sentiment Analysis based on Gradual Machine Learning. Knowledge-Based Systems (KBS), 2020.
Yanyan Wang, Qun Chen, Jiquan Shen, Boyi Hou, Murtadha Ahmed, Zhanhuai Li
[Abstract]  [PDF]

The state-of-the-art solutions for Aspect-Level Sentiment Analysis (ALSA) were built on a variety of Deep Neural Networks (DNN), whose efficacy depends on large quantities of accurately labeled training data. Unfortunately, high-quality labeled training data usually require expensive manual work, thus may not be readily available in real scenarios. In this paper, we propose a novel approach for aspect-level sentiment analysis based on the recently proposed paradigm of Gradual Machine Learning (GML), which can enable accurate machine labeling without the requirement for manual labeling effort. It begins with some easy instances in a task, which can be automatically labeled by the machine with high accuracy, and then gradually labels the more challenging instances by iterative factor graph inference. In the process of gradual machine learning, the hard instances are gradually labeled in small stages based on the estimated evidential certainty provided by the labeled easier instances. Our extensive experiments on the benchmark datasets have shown that the performance of the proposed solution is considerably better than its unsupervised alternatives, and also highly competitive compared with the state-of-the-art supervised DNN models.

Gradual Machine Learning for Entity Resolution. IEEE Transactions on Knowledge and Data Engineering (TKDE), 2020.
Boyi Hou, Qun Chen, Yanyan Wang, Youcef Nafa, and Zhanhuai Li
[Abstract]  [PDF]

Usually considered as a classification problem, entity resolution (ER) can be very challenging on real data due to the prevalence of dirty values. The state-of-the-art solutions for ER were built on a variety of learning models (most notably deep neural networks), which require lots of accurately labeled training data. Unfortunately, high-quality labeled data usually require expensive manual work, and are therefore not readily available in many real scenarios. In this paper, we propose a novel learning paradigm for ER, called gradual machine learning, which aims to enable effective machine labeling without the requirement for manual labeling effort. It begins with some easy instances in a task, which can be automatically labeled by the machine with high accuracy, and then gradually labels more challenging instances by iterative factor graph inference. In gradual machine learning, the hard instances in a task are gradually labeled in small stages based on the estimated evidential certainty provided by the labeled easier instances. Our extensive experiments on real data have shown that the performance of the proposed approach is considerably better than its unsupervised alternatives, and highly competitive compared to the state-of-the-art supervised techniques. Using ER as a test case, we demonstrate that gradual machine learning is a promising paradigm potentially applicable to other challenging classification tasks requiring extensive labeling effort.

Gradual Machine Learning for Entity Resolution. WWW 2019.
Boyi Hou, Qun Chen, Jiquan Shen, Xin Liu, Ping Zhong, Yanyan Wang, Zhaoqiang Chen, Zhanhuai Li
[Abstract]  [Bibtex]  [PDF]  [Code]

Usually considered as a classification problem, entity resolution can be very challenging on real data due to the prevalence of dirty values. The state-of-the-art solutions for ER were built on a variety of learning models (most notably deep neural networks), which require lots of accurately labeled training data. Unfortunately, high-quality labeled data usually require expensive manual work, and are therefore not readily available in many real scenarios. In this demo, we propose a novel learning paradigm for ER, called gradual machine learning, which aims to enable effective machine labeling without the requirement for manual labeling effort. It begins with some easy instances in a task, which can be automatically labeled by the machine with high accuracy, and then gradually labels more challenging instances based on iterative factor graph inference. In gradual machine learning, the hard instances in a task are gradually labeled in small stages based on the estimated evidential certainty provided by the labeled easier instances. Our extensive experiments on real data have shown that the proposed approach performs considerably better than its unsupervised alternatives, and its performance is also highly competitive compared to the state-of-the-art supervised techniques. Using ER as a test case, we demonstrate that gradual machine learning is a promising paradigm potentially applicable to other challenging classification tasks requiring extensive labeling effort.

@inproceedings{hou2019gradual,
title={Gradual machine learning for entity resolution},
author={Hou, Boyi and Chen, Qun and Shen, Jiquan and Liu, Xin and Zhong, Ping and Wang, Yanyan and Chen, Zhaoqiang and Li, Zhanhuai},
booktitle={The World Wide Web Conference},
pages={3526--3530},
year={2019},
organization={ACM}
}

Joint Inference for Aspect-Level Sentiment Analysis by Deep Neural Networks and Linguistic Hints. IEEE Transactions on Knowledge and Data Engineering (TKDE), 2019.
Yanyan Wang, Qun Chen, Murtadha Ahmed, Zhanhuai Li, Wei Pan, and Hailong Liu
[Abstract]  [Bibtex]  [PDF]

The state-of-the-art techniques for aspect-level sentiment analysis focused on feature modeling using a variety of deep neural networks (DNN). Unfortunately, their performance may still fall short of expectation in real scenarios due to the semantic complexity of natural languages. Motivated by the observation that many linguistic hints (e.g., sentiment words and shift words) are reliable polarity indicators, we propose a joint framework, SenHint, which can seamlessly integrate the output of deep neural networks and the implications of linguistic hints in a unified model based on Markov logic network (MLN). SenHint leverages the linguistic hints for multiple purposes: (1) to identify the easy instances, whose polarities can be automatically determined by the machine with high accuracy; (2) to capture the influence of sentiment words on aspect polarities; (3) to capture the implicit relations between aspect polarities. We present the required techniques for extracting linguistic hints, encoding their implications as well as the output of DNN into the unified model, and joint inference. Finally, we have empirically evaluated the performance of SenHint on both English and Chinese benchmark datasets. Our extensive experiments have shown that compared to the state-of-the-art DNN techniques, SenHint can effectively improve polarity detection accuracy by considerable margins.

@article{wang2019joint,
title={Joint Inference for Aspect-level Sentiment Analysis by Deep Neural Networks and Linguistic Hints},
author={Wang, Yanyan and Chen, Qun and Ahmed, Murtadha and Li, Zhanhuai and Pan, Wei and Liu, Hailong},
journal={IEEE Transactions on Knowledge and Data Engineering},
year={2019},
publisher={IEEE}
}
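The kind of fusion SenHint performs can be caricatured as combining a DNN's polarity probability with votes from linguistic hints. The sketch below uses simple additive log-odds rather than full Markov logic network inference, and `hint_weight` is a made-up constant, not a value from the paper.

```python
import math

def fuse_polarity(dnn_prob, hint_votes, hint_weight=1.5):
    """Additive log-odds fusion of a DNN's positive-polarity probability
    with linguistic-hint votes (+1 for a positive sentiment word, -1 for
    a negative one). A simple stand-in for SenHint's joint MLN inference;
    hint_weight is hypothetical."""
    p = min(max(dnn_prob, 1e-6), 1 - 1e-6)  # clamp to avoid log(0)
    logit = math.log(p / (1 - p)) + hint_weight * sum(hint_votes)
    return 1 / (1 + math.exp(-logit))       # back to a probability

# An uncertain DNN prediction nudged positive by one sentiment word.
p_pos = fuse_polarity(0.5, [+1])
# A confident DNN prediction overturned by two negative hints.
p_neg = fuse_polarity(0.9, [-1, -1])
```

This captures the abstract's point that reliable polarity indicators can both decide easy instances outright and shift the balance on instances where the DNN alone is uncertain or wrong.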


Technical Report

Gradual Machine Learning for Entity Resolution (Technical Report).
Boyi Hou, Qun Chen, Yanyan Wang, Youcef Nafa, Zhanhuai Li.
[PDF]  [Source Code]

GML Framework