Instance selection and labelling for inductive semi-supervised methods

Bibliographic details
Main author: Barreto, Cephas Alves da Silveira
Other authors: Canuto, Anne Magaly de Paula
Format: doctoralThesis
Language: pt_BR
Published in: Universidade Federal do Rio Grande do Norte
Subjects:
Item address: https://repositorio.ufrn.br/handle/123456789/55155
Description
Abstract: In recent years, the use of Machine Learning (ML) techniques to solve real problems has become very common and a technological pattern adopted in many domains. However, several of these domains do not have enough labelled data for ML methods to perform well. This problem led to the development of semi-supervised methods, which are capable of using both labelled and unlabelled instances when building their models. Among the semi-supervised learning techniques, the inductive methods stand out. Wrapper methods, a particular category of inductive methods, use an often iterative process that involves training a base classifier on the labelled data, selecting the best instances from the unlabelled set, and labelling the selected instances. Although this process is simple and efficient, errors in the selection or labelling steps are common and deteriorate the final performance of the method. This research aims to reduce selection and labelling errors in wrapper methods by establishing selection and labelling approaches that are more robust and less susceptible to errors. To this end, this work proposes a selection and labelling approach based on classification agreement, and a selection and labelling approach that uses a distance metric as an additional factor alongside an already used selection criterion (e.g. confidence or agreement). The proposed approaches can be applied to any wrapper method and were tested on 42 datasets with the Self-training, Co-training and Boosting methods. The results obtained indicate that the proposals bring gains for these methods in terms of accuracy and F-measure.
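
The following is a minimal sketch of the wrapper process described in the abstract, using classification agreement as the selection and labelling criterion. It is only an illustration under assumed choices (scikit-learn base classifiers, the batch size, the iteration limit, and the name agreement_wrapper are all assumptions); it is not the implementation evaluated in the thesis.

```python
# Minimal sketch of a wrapper-style semi-supervised loop with agreement-based
# selection and labelling. Purely illustrative: base classifiers, batch size,
# iteration limit and the function name are assumptions, not the thesis code.
import numpy as np
from sklearn.base import clone
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier


def agreement_wrapper(X_lab, y_lab, X_unlab, n_iter=10, batch=20):
    """Iteratively move unlabelled instances on which the base classifiers
    agree (and are most confident) into the labelled set, then retrain."""
    base_models = [GaussianNB(), DecisionTreeClassifier(max_depth=5)]
    X_l, y_l, X_u = np.asarray(X_lab), np.asarray(y_lab), np.asarray(X_unlab)

    for _ in range(n_iter):
        if len(X_u) == 0:
            break
        # 1) Train each base classifier on the current labelled set.
        fitted = [clone(m).fit(X_l, y_l) for m in base_models]

        # 2) Predict the unlabelled set and flag instances where all
        #    classifiers agree on the predicted class.
        preds = np.array([m.predict(X_u) for m in fitted])
        agree = np.all(preds == preds[0], axis=0)

        # 3) Selection criterion: mean maximum class probability, restricted
        #    to agreeing instances; keep the most confident batch.
        conf = np.mean([m.predict_proba(X_u).max(axis=1) for m in fitted], axis=0)
        ranked = np.argsort(-(conf * agree))[:batch]
        chosen = ranked[agree[ranked]]
        if len(chosen) == 0:
            break

        # 4) Label the selected instances with the agreed class and move
        #    them from the unlabelled set to the labelled set.
        X_l = np.vstack([X_l, X_u[chosen]])
        y_l = np.concatenate([y_l, preds[0][chosen]])
        X_u = np.delete(X_u, chosen, axis=0)

    # Final models trained on the enlarged labelled set.
    return [clone(m).fit(X_l, y_l) for m in base_models], X_l, y_l
```

The distance-based factor mentioned in the abstract would enter at step 3, for example by combining the confidence score with the distance of each unlabelled instance to the labelled instances of its predicted class; that refinement is omitted here for brevity.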