Works (3)

Updated: January 8, 2024, 11:25

2018 journal article

3D Auto-Context-Based Locality Adaptive Multi-Modality GANs for PET Synthesis

IEEE Transactions on Medical Imaging, 38(6), 1328–1339.

author keywords: Image synthesis; positron emission tomography (PET); generative adversarial networks (GANs); locality adaptive fusion; multi-modality
MeSH headings: Brain / diagnostic imaging; Databases, Factual; Deep Learning; Humans; Imaging, Three-Dimensional / methods; Magnetic Resonance Imaging / methods; Phantoms, Imaging; Positron-Emission Tomography / methods; Radiation Dosage
TL;DR: Experimental results show that the proposed 3D auto-context-based locality adaptive multi-modality generative adversarial networks model (LA-GANs) outperforms the traditional multi-modality fusion methods used in deep networks, as well as the state-of-the-art PET estimation approaches. (via Semantic Scholar) See the illustrative sketch after this entry.
Source: Web of Science
Added: April 20, 2020
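
The record above only summarizes the LA-GANs result, so here is a minimal, hedged sketch of the general idea it names: a 3D conditional GAN trained with an adversarial loss plus an L1 term to map low-dose PET patches to full-dose estimates. The network depths, patch size, optimizer settings, and the 100.0 L1 weight are illustrative assumptions, not the published architecture, and the paper's auto-context and locality-adaptive fusion components are omitted.

```python
# Illustrative sketch only: a minimal 3D conditional GAN pairing for PET synthesis.
# All sizes and hyperparameters are assumptions, not the authors' published design.
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Maps a low-dose PET patch to a full-dose estimate."""
    def __init__(self, in_channels=1, features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator3D(nn.Module):
    """Scores whether a PET patch looks like a real full-dose acquisition."""
    def __init__(self, in_channels=1, features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, features, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(features, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3, 4))  # one score per patch

# One adversarial + L1 training step on a pair of 3D patches (shapes are illustrative).
G, D = Generator3D(), Discriminator3D()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

low_dose = torch.randn(2, 1, 32, 32, 32)   # stand-in for low-dose PET patches
full_dose = torch.randn(2, 1, 32, 32, 32)  # stand-in for matching full-dose patches

# Discriminator: real patches -> 1, synthesized patches -> 0.
fake = G(low_dose).detach()
loss_d = bce(D(full_dose), torch.ones(2)) + bce(D(fake), torch.zeros(2))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator: fool the discriminator while staying close to the full-dose target.
fake = G(low_dose)
loss_g = bce(D(fake), torch.ones(2)) + 100.0 * l1(fake, full_dose)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The L1 term is a common way to keep adversarially trained synthesis anchored to the ground-truth intensities; the weighting actually used by the authors is not given in this record.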

2018 article

Locality Adaptive Multi-modality GANs for High-Quality PET Image Synthesis

Medical Image Computing and Computer Assisted Intervention - MICCAI 2018, Part I, Vol. 11070, pp. 329–337.

MeSH headings: Algorithms; Electrons; Magnetic Resonance Imaging / methods; Multimodal Imaging / methods; Neural Networks, Computer; Positron-Emission Tomography / methods; Reproducibility of Results; Sensitivity and Specificity
TL;DR: A locality adaptive multi-modality generative adversarial networks model (LA-GANs) is proposed to synthesize the full-dose PET image from both the low-dose PET and the accompanying T1-weighted MRI, incorporating anatomical information for better synthesis. (via Semantic Scholar) See the fusion sketch after this entry.
Source: Web of Science
Added: August 19, 2019
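
The TL;DR above centers on locality adaptive fusion of the low-dose PET and T1-weighted MRI inputs. The sketch below shows one plausible reading of that idea under stated assumptions: per-voxel fusion weights produced by a 1x1x1 convolution and a softmax over the two modalities. The module name, layer choices, and weighting scheme are assumptions for illustration, not the exact fusion mechanism from the paper.

```python
# Illustrative sketch only: per-location ("locality adaptive") blending of two
# modalities before synthesis. The weighting scheme is an assumption for illustration.
import torch
import torch.nn as nn

class LocalityAdaptiveFusion(nn.Module):
    """Predicts a spatial weight map and blends low-dose PET with T1 MRI per voxel."""
    def __init__(self):
        super().__init__()
        # A 1x1x1 convolution is a cheap, purely local way to produce fusion weights.
        self.weight_net = nn.Sequential(
            nn.Conv3d(2, 2, kernel_size=1),
            nn.Softmax(dim=1),  # weights over the two modalities sum to 1 per voxel
        )

    def forward(self, low_dose_pet, t1_mri):
        stacked = torch.cat([low_dose_pet, t1_mri], dim=1)    # (N, 2, D, H, W)
        weights = self.weight_net(stacked)                    # (N, 2, D, H, W)
        fused = (weights * stacked).sum(dim=1, keepdim=True)  # (N, 1, D, H, W)
        return fused

fusion = LocalityAdaptiveFusion()
pet = torch.randn(1, 1, 32, 32, 32)
mri = torch.randn(1, 1, 32, 32, 32)
print(fusion(pet, mri).shape)  # torch.Size([1, 1, 32, 32, 32])
```

The fused volume would then feed a generator such as the one sketched under the previous entry; how the published model wires fusion into the GAN is not detailed in this record.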

2016 journal article

Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation

Physics in Medicine and Biology, 61(2), 791–812.

By: Y. Wang*, P. Zhang*, L. An*, G. Ma*, J. Kang*, F. Shi*, X. Wu*, J. Zhou* ...

author keywords: positron emission tomography (PET); sparse representation; mapping-based sparse representation; incremental refinement; standard-dose PET prediction; multimodal MR images
MeSH headings: Brain / diagnostic imaging; Brain Mapping / methods; Humans; Image Processing, Computer-Assisted / methods; Magnetic Resonance Imaging / methods; Multimodal Imaging / methods; Positron-Emission Tomography / methods; Radiation Dosage
TL;DR: Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, a mapping-based SR (m-SR) framework is proposed for standard-dose PET image prediction, and it can outperform benchmark methods in both qualitative and quantitative measures. (via Semantic Scholar) See the sparse-coding sketch after this entry.
Sources: Web of Science, Crossref
Added: August 6, 2018
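
This 2016 entry describes a patch-based, mapping-based sparse representation (m-SR) approach. As a rough, hedged illustration of the coupled-dictionary idea behind such methods, the sketch below codes a low-dose patch sparsely over one dictionary and reuses the coefficients on a paired standard-dose dictionary. The random dictionaries, patch size, sparsity level, and the helper predict_standard_dose_patch are hypothetical; the paper's mapping step, multimodal MR features, and incremental refinement are not reproduced.

```python
# Illustrative sketch only: the coupled-dictionary idea behind sparse-representation
# PET prediction. Dictionaries are random stand-ins, not learned from data.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
patch_dim, n_atoms, sparsity = 5 * 5 * 5, 256, 5

# Coupled dictionaries: column k of D_low (low-dose features) is assumed to
# correspond to column k of D_std (standard-dose PET patches).
D_low = rng.standard_normal((patch_dim, n_atoms))
D_std = rng.standard_normal((patch_dim, n_atoms))
D_low /= np.linalg.norm(D_low, axis=0)  # unit-norm atoms for stable sparse coding

def predict_standard_dose_patch(low_dose_patch):
    """Code the low-dose patch sparsely over D_low, then reuse the coefficients on D_std."""
    coef = orthogonal_mp(D_low, low_dose_patch, n_nonzero_coefs=sparsity)
    return D_std @ coef

low_dose_patch = rng.standard_normal(patch_dim)
predicted = predict_standard_dose_patch(low_dose_patch)
print(predicted.shape)  # (125,)
```

In practice such predictions are made patch by patch and averaged over overlapping patches to form the full volume; the specifics of the authors' mapping-based refinement are only named, not described, in this record.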

The Citation Index includes data from a number of different sources. If you have questions about the sources of data in the Citation Index, or if you need a set of data that is free to redistribute, please contact us.

Certain data included herein are derived from the Web of Science© and InCites© (2024) of Clarivate Analytics. All rights reserved. You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.