2022 journal article

Leveraging Multiple Representations of Topic Models for Knowledge Discovery

IEEE Access.

By: C. Potts n, A. Savaliya* & A. Jhala n

author keywords: Artificial intelligence; Data analysis; Analytical models; Knowledge discovery; Computational modeling; Clustering algorithms; Semantics; Natural language processing; Big data applications; Data visualization
TL;DR: A novel perspective on topic analysis is presented: a process for combining output from multiple models with different theoretical underpinnings, which makes it possible to tackle novel tasks, such as semantic characterization of content, that cannot be carried out using single models. (via Semantic Scholar)
Source: ORCID
Added: September 29, 2022

Topic models are often useful for categorizing related documents in information retrieval and knowledge discovery systems, especially for large datasets. Interpreting the output of these models remains an ongoing challenge for the research community. The typical practice in applying topic models is to tune the parameters of a chosen model for a target dataset and select the model with the best output under a given metric. We offer a novel perspective on topic analysis by presenting a process for combining output from multiple models with different theoretical underpinnings. We show that this enables novel tasks, such as semantic characterization of content, that cannot be carried out using single models. One example task is to characterize the differences between topics or documents in terms of their purpose and importance with respect to the underlying output of the discovery algorithm. To show the potential benefit of leveraging multiple models, we present an algorithm to map the term space of Latent Dirichlet Allocation (LDA) to the neural document-embedding space of doc2vec. We also show that by utilizing both models in parallel and analyzing the resulting document distributions with the Normalized Pointwise Mutual Information (NPMI) metric, we can gain insight into the purpose and importance of topics across models. This approach moves beyond topic identification to a richer characterization of the information and provides a better understanding of the complex relationships between these typically competing techniques.
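The NPMI metric named in the abstract has a standard definition independent of this paper: PMI(x, y) normalized by -log p(x, y), giving a score in [-1, 1] where -1 means the pair never co-occurs, 0 means independence, and 1 means perfect co-occurrence. A minimal sketch (the function name and probability inputs are illustrative, not taken from the paper's algorithm):

```python
import math

def npmi(p_xy: float, p_x: float, p_y: float) -> float:
    """Normalized Pointwise Mutual Information for a pair of events.

    p_xy: joint probability of x and y (e.g. co-occurrence frequency)
    p_x, p_y: marginal probabilities of x and y
    Returns a value in [-1, 1]: -1 = never co-occur, 0 = independent,
    1 = perfect co-occurrence.
    """
    if p_xy == 0.0:
        # PMI -> -inf; NPMI is defined as -1 in this limit.
        return -1.0
    pmi = math.log(p_xy / (p_x * p_y))
    return pmi / (-math.log(p_xy))

# A pair that always co-occurs scores 1; an independent pair scores 0.
print(npmi(0.1, 0.1, 0.1))   # -> 1.0
print(npmi(0.01, 0.1, 0.1))  # -> 0.0 (p_xy == p_x * p_y)
```

In a topic-coherence setting, these probabilities are typically estimated from word (co-)occurrence counts over sliding windows or documents; how the paper applies NPMI to document distributions across the LDA and doc2vec outputs is specific to its algorithm and not reproduced here.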