Works (8)

Updated: October 21st, 2024 05:00

2024 conference paper

Data Quality-aware Graph Machine Learning

Wang, Y., Ding, K., Liu, X., Kang, J., Rossi, R., & Derr, T. (2024, October 21).

By: Y. Wang, K. Ding, X. Liu, J. Kang, R. Rossi & T. Derr

Source: ORCID
Added: October 20, 2024

2024 conference paper

Linear-Time Graph Neural Networks for Scalable Recommendations

Zhang, J., Xue, R., Fan, W., Xu, X., Li, Q., Pei, J., & Liu, X. (2024, May 13).

Source: ORCID
Added: May 15, 2024

2024 journal article

Manufacturing service capability prediction with Graph Neural Networks

JOURNAL OF MANUFACTURING SYSTEMS, 74, 291–301.

By: Y. Li, X. Liu & B. Starly

author keywords: Node classification; Link prediction; Graph neural network; Manufacturing service capability; Manufacturing Service Knowledge Graph
Sources: ORCID, Web Of Science, NC State University Libraries
Added: April 8, 2024

2023 article

Enhancing Graph Representations Learning with Decorrelated Propagation

PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, pp. 1466–1476.

author keywords: Graph Neural Networks; Over-correlation; Over-smoothing; Semi-supervised node classification
TL;DR: A decorrelated propagation scheme (DeProp) is proposed as a fundamental component to decorrelate feature learning in GNN models; it achieves feature decorrelation at the propagation step, can be used to mitigate over-smoothing and over-correlation simultaneously, and significantly outperforms state-of-the-art methods in missing-feature settings (see the sketch after this entry). (via Semantic Scholar)
Sources: ORCID, Web Of Science, NC State University Libraries
Added: August 5, 2023
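The following is a minimal, hypothetical sketch of the general idea behind decorrelated propagation, not the DeProp formulation from the paper: a GNN propagation step in PyTorch that also returns a penalty on the off-diagonal entries of the feature correlation matrix, which a training loop could add to its task loss. The module name, the dense normalized adjacency input `adj_norm`, and the penalty weight are assumptions made for illustration.

```python
# Hypothetical sketch only: a propagation step with a feature-decorrelation penalty.
# Not the DeProp method from the paper; names and shapes are illustrative.
import torch
import torch.nn as nn


class DecorrelatedPropagation(nn.Module):
    """One GNN propagation step plus a penalty discouraging correlated feature dimensions."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj_norm: torch.Tensor, x: torch.Tensor):
        # Standard propagation: H = A_hat X W, with A_hat a (dense) normalized adjacency matrix.
        h = adj_norm @ self.lin(x)

        # Decorrelation penalty: mean squared off-diagonal entry of the feature
        # correlation matrix, pushing feature dimensions toward being uncorrelated.
        z = h - h.mean(dim=0, keepdim=True)
        z = z / (z.norm(dim=0, keepdim=True) + 1e-8)
        corr = z.t() @ z                                   # (out_dim, out_dim)
        off_diag = corr - torch.diag(torch.diagonal(corr))
        decorr = off_diag.pow(2).sum() / (corr.numel() - corr.shape[0])
        return h, decorr


# Usage (assumed training loop): add the penalty to the task loss, e.g.
#   h, decorr = layer(adj_norm, x)
#   loss = F.cross_entropy(classifier(h)[train_mask], y[train_mask]) + lam * decorr
```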

2023 article

How does the Memorization of Neural Networks Impact Adversarial Robust Models?

PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, pp. 2801–2812.

author keywords: Adversarial example; robustness; over-parameterization
TL;DR: Benign Adversarial Training (BAT) is proposed, which helps adversarial training avoid fitting "harmful" atypical samples while fitting as many "benign" atypical samples as possible, and achieves a better clean accuracy vs. robustness trade-off than baseline methods on benchmark image-classification datasets (see the sketch after this entry). (via Semantic Scholar)
Sources: ORCID, Web Of Science, NC State University Libraries
Added: August 5, 2023
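Again as a hypothetical illustration rather than the BAT procedure itself: the entry above describes down-weighting "harmful" atypical samples during adversarial training, and one generic way to express sample-level down-weighting is to scale each adversarial example's loss by the model's (detached) confidence in the true label, as sketched below with a one-step FGSM attack. The `fgsm_attack` helper, the confidence weighting, and `eps` are assumptions, not the paper's method.

```python
# Hypothetical sketch only: adversarial training with confidence-based sample down-weighting.
# This is not the BAT procedure from the paper; the FGSM attack and weighting are illustrative.
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step FGSM: perturb x (assumed in [0, 1]) in the direction that increases the loss."""
    x = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()


def weighted_adv_step(model, optimizer, x, y):
    """Down-weight adversarial examples the model is very unsure about (a rough proxy for 'atypical')."""
    x_adv = fgsm_attack(model, x, y)
    logits = model(x_adv)
    with torch.no_grad():
        # Confidence of the true class on the adversarial input, used as a per-sample weight.
        weights = F.softmax(logits, dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
    loss = (weights * F.cross_entropy(logits, y, reduction="none")).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```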

2023 article

Large-Scale Graph Neural Networks: The Past and New Frontiers

PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, pp. 5835–5836.

author keywords: Graph Neural Networks; Large-scale Graphs; Scalability
TL;DR: This tutorial aims to provide a systematic and comprehensive understanding of the challenges and state-of-the-art techniques for scaling GNNs, and to explore new ideas and developments in this rapidly evolving field. (via Semantic Scholar)
Sources: ORCID, Web Of Science, NC State University Libraries
Added: August 5, 2023

2022 article

Imbalanced Adversarial Training with Reweighting

2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), pp. 1209–1214.

author keywords: model robustness; imbalanced data; adversarial training; reweighting
TL;DR: Poor data separability is identified as one key reason for the strong tension between under-represented and well-represented classes, and the Separable Reweighted Adversarial Training (SRAT) framework is proposed to facilitate adversarial training under imbalanced scenarios by learning more separable features for different classes (see the sketch after this entry). (via Semantic Scholar)
Sources: Web Of Science, NC State University Libraries
Added: May 22, 2023
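As with the sketches above, this is a hypothetical illustration rather than the SRAT framework itself: a generic adversarial-training step that reweights each sample's loss by inverse class frequency, so under-represented classes contribute more, using a small PGD attack helper. The `pgd_attack` parameters and the inverse-frequency weighting are assumptions made for the example.

```python
# Hypothetical sketch only: class-reweighted adversarial training on imbalanced data.
# This is not the SRAT framework from the paper; the PGD attack and weighting are illustrative.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """L-infinity PGD on inputs assumed in [0, 1]: iteratively perturb x to maximize the loss."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()


def reweighted_adv_step(model, optimizer, x, y, class_counts):
    """One training step: attack the batch, then weight each sample by inverse class frequency."""
    x_adv = pgd_attack(model, x, y)
    weights = (1.0 / class_counts.float())[y]        # rarer classes get larger weights
    weights = weights / weights.sum() * len(y)       # normalize to keep the loss scale
    per_sample = F.cross_entropy(model(x_adv), y, reduction="none")
    loss = (weights * per_sample).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```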

2022 journal article

Trustworthy AI: A Computational Perspective

ACM Transactions on Intelligent Systems and Technology.

TL;DR: A comprehensive appraisal of trustworthy AI from a computational perspective, helping readers understand the latest technologies for achieving trustworthy AI, with a focus on six of the most crucial dimensions. (via Semantic Scholar)
Source: ORCID
Added: April 3, 2023
