Dongkuan Xu

Works (9)

Updated: April 16, 2024 05:02

2023 article

Accelerating Dataset Distillation via Model Augmentation

2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), pp. 11950–11959.

TL;DR: This paper posits that training synthetic data with diverse models leads to better generalization performance, and proposes two model augmentation techniques, i.e., using early-stage models and parameter perturbation, to learn an informative synthetic set at significantly reduced training cost. (via Semantic Scholar)
Source: Web Of Science
Added: November 20, 2023

2023 article

Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off

2023 60TH ACM/IEEE DESIGN AUTOMATION CONFERENCE, DAC.

By: S. Huang, B. Lei, D. Xu, H. Peng, Y. Sun, M. Xie, C. Ding

author keywords: Over-parameterization; neural network pruning; sparse training
TL;DR: Experimental results show that sparse models obtained by the proposed method outperform the SOTA sparse training methods on a wide variety of deep learning tasks. (via Semantic Scholar)
Source: Web Of Science
Added: November 6, 2023

2023 journal article

Improving long-tailed classification by disentangled variance transfer

INTERNET OF THINGS, 21.

By: Y. Tian, W. Gao, Q. Zhang, P. Sun & D. Xu

author keywords: Internet of things; Long-tail distribution; Image classification; Representation learning; Transfer learning
Source: Web Of Science
Added: June 12, 2023

2023 article

Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration

2023 60TH ACM/IEEE DESIGN AUTOMATION CONFERENCE, DAC.

By: S. Huang, H. Fang, K. Mahmood, B. Lei, N. Xu, B. Lei, Y. Sun, D. Xu, W. Wen, C. Ding

author keywords: spiking neural network; neural network pruning; sparse training; neuromorphic computing
TL;DR: This framework is computationally efficient, training a model from scratch with dynamic sparsity without sacrificing model fidelity, and introduces a new drop-and-grow strategy with a decreasing number of non-zero weights to maintain extremely high sparsity and high accuracy. (via Semantic Scholar)
UN Sustainable Development Goal Categories
7. Affordable and Clean Energy (OpenAlex)
Source: Web Of Science
Added: November 6, 2023

2023 article

RelKD 2023: International Workshop on Resource-Efficient Learning for Knowledge Discovery

PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, pp. 5901–5902.

TL;DR: The proposed international workshop on "Resource-Efficient Learning for Knowledge Discovery (RelKD 2023)" will provide a great venue for academic researchers and industrial practitioners to share challenges, solutions, and future opportunities of resource-efficient learning. (via Semantic Scholar)
Source: Web Of Science
Added: March 4, 2024

2023 article

Rethinking Data Distillation: Do Not Overlook Calibration

2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, pp. 4912–4922.

By: D. Zhu, B. Lei, J. Zhang, Y. Fang, Y. Xie, R. Zhang, D. Xu

Source: Web Of Science
Added: April 15, 2024

2023 article

Toward Efficient Traffic Signal Control: Smaller Network Can Do More

2023 62ND IEEE CONFERENCE ON DECISION AND CONTROL, CDC, pp. 8069–8074.

By: S. Li, H. Mei, J. Li, L. Wei & D. Xu

TL;DR: This work presents an efficient RL-based traffic signal control (TSC) solution for real-world contexts, identifying a compact network via a removal-verification strategy that yields an even sparser network, and offers insights into challenges and opportunities in the field. (via Semantic Scholar)
Source: Web Of Science
Added: March 25, 2024

2023 article

Towards Reliable Rare Category Analysis on Graphs via Individual Calibration

PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, pp. 2629–2638.

By: L. Wu, B. Lei, D. Xu & D. Zhou

author keywords: Rare category analysis; confidence calibration; graph mining
TL;DR: To quantify the uncertainties in RCA, a node-level uncertainty quantification algorithm is developed to model the overlapping support regions with high uncertainty; to handle the rarity of minority classes in miscalibration calculation, a distribution-based calibration metric is generalized to the instance level and the first individual calibration measurement on graphs is proposed. (via Semantic Scholar)
UN Sustainable Development Goal Categories
16. Peace, Justice and Strong Institutions (OpenAlex)
Source: Web Of Science
Added: March 4, 2024

2023 article

You Need Multiple Exiting: Dynamic Early Exiting for Accelerating Unified Vision Language Model

2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), pp. 10781–10791.

TL;DR: A novel early exiting strategy for unified vision language models, namely MuE, is proposed, which allows the model to dynamically skip layers in the encoder and decoder simultaneously based on input layer-wise similarities, with multiple early exits. (via Semantic Scholar)
Source: Web Of Science
Added: November 20, 2023

Citation Index includes data from a number of different sources. If you have questions about the sources of data in the Citation Index or need a set of data which is free to re-distribute, please contact us.

Certain data included herein are derived from the Web of Science© and InCites© (2024) of Clarivate Analytics. All rights reserved. You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.