Kyungjin Park

College of Engineering

2022 journal article

Investigating a visual interface for elementary students to formulate AI planning tasks

JOURNAL OF COMPUTER LANGUAGES, 73.

By: K. Park*, B. Mott*, S. Lee*, A. Gupta*, K. Jantaraweragul, K. Glazewski, J. Scribner, A. Ottenbreit-Leftwich, C. Hmelo-Silver, J. Lester*

author keywords: Artificial intelligence education for K-12; Visual interface; Game-based learning
UN Sustainable Development Goal Categories
4. Quality Education (Web of Science)
Source: Web Of Science
Added: November 7, 2022

2021 article

Designing a Visual Interface for Elementary Students to Formulate AI Planning Tasks

2021 IEEE SYMPOSIUM ON VISUAL LANGUAGES AND HUMAN-CENTRIC COMPUTING (VL/HCC 2021).

author keywords: Artificial intelligence education for K-12; Visual interface; Game-based learning
TL;DR: A visual interface is proposed to enable upper elementary students (grades 3–5, ages 8–11) to formulate AI planning tasks within a game-based learning environment; the work discusses how the Use-Modify-Create approach supported student learning, as well as the misconceptions and usability issues students encountered while using the visual interface. (via Semantic Scholar)
UN Sustainable Development Goal Categories
4. Quality Education (Web of Science)
Source: Web Of Science
Added: June 6, 2022

2020 article

MuLan: Multilevel Language-based Representation Learning for Disease Progression Modeling

2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), pp. 1246–1255.

By: H. Sohn n, K. Park n & M. Chi n

author keywords: Electronic health records; disease progression modeling; interpretability; representation learning
TL;DR: This work presents MuLan: a Multilevel Language-based representation learning framework that can automatically learn a hierarchical representation for EHRs at entry, event, and visit levels and demonstrates that these unified multilevel representations can be utilized for interpreting and visualizing the latent mechanism of patients’ septic shock progressions. (via Semantic Scholar)
UN Sustainable Development Goal Categories
16. Peace, Justice and Strong Institutions (OpenAlex)
Source: Web Of Science
Added: July 26, 2021

2019 article

Predicting Dialogue Breakdown in Conversational Pedagogical Agents with Multimodal LSTMs

ARTIFICIAL INTELLIGENCE IN EDUCATION, AIED 2019, PT II, Vol. 11626, pp. 195–200.

By: W. Min n, K. Park n, J. Wiggins*, B. Mott n, E. Wiebe n, K. Boyer*, J. Lester n

author keywords: Conversational pedagogical agent; Multimodal; Dialogue breakdown detection; Natural language processing; Gaze
TL;DR: Results from a study with 92 middle school students demonstrate that multimodal long short-term memory network (LSTM)-based dialogue breakdown detectors incorporating eye gaze features achieve high predictive accuracies and recall rates, suggesting that multimodal detectors can play an important role in designing conversational pedagogical agents that effectively engage students in dialogue. (via Semantic Scholar)
UN Sustainable Development Goal Categories
4. Quality Education (Web of Science; OpenAlex)
Sources: Web Of Science, ORCID
Added: December 2, 2019


Certain data included herein are derived from the Web of Science© and InCites© (2024) of Clarivate Analytics. All rights reserved. You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.