TY - CONF TI - Mining maximal induced bicliques using odd cycle transversals AU - Kloster, K. AU - Poel, A. AU - Sullivan, B.D. AB - Many common graph data mining tasks take the form of identifying dense subgraphs (e.g., clustering, clique-finding, etc.). In biological applications, the natural model for these dense substructures is often a complete bipartite graph (biclique), and the problem requires enumerating all maximal bicliques (instead of identifying just the largest or densest). The best known algorithm in general graphs is due to Dias et al., and runs in time O(M|V|^4), where M is the number of maximal induced bicliques (MIBs) in the graph. When the graph being searched is itself bipartite, Zhang et al. give a faster algorithm where the time per MIB depends on the number of edges in the graph. In this work, we present a new algorithm for enumerating MIBs in general graphs, whose run time depends on how “close to bipartite” the input is. Specifically, the runtime is parameterized by the size k of an odd cycle transversal (OCT), a vertex set whose deletion results in a bipartite graph. Our algorithm runs in time O(M|V||E|k^2 3^(k/3)), which is an improvement on Dias et al. whenever k ≤ 3 log_3 |V|. We implement our algorithm alongside a variant of Dias et al.'s in open-source C++ code, and experimentally verify that the OCT-based approach is faster in practice on graphs with a wide variety of sizes, densities, and OCT decompositions. C2 - 2019/// C3 - SIAM International Conference on Data Mining, SDM 2019 DA - 2019/// DO - 10.1137/1.9781611975673.37 SP - 324-332 UR - http://www.scopus.com/inward/record.url?eid=2-s2.0-85066094908&partnerID=MN8TOARS KW - bicliques KW - odd cycle transversal KW - parameterized algorithms KW - enumeration KW - bipartite ER - TY - JOUR TI - Approximating vertex cover using structural rounding AU - Lavallee, B. AU - Russell, H. AU - Sullivan, B.D. AU - Poel, A. T2 - arXiv DA - 2019/// PY - 2019/// UR - http://www.scopus.com/inward/record.url?eid=2-s2.0-85094768942&partnerID=MN8TOARS ER - TY - JOUR TI - POSTER: GOPipe: A Granularity-Oblivious Programming Framework for Pipelined Stencil Executions on GPU AU - Oh, Chanyoung AU - Zheng, Zhen AU - Shen, Xipeng AU - Zhai, Jidong AU - Yi, Youngmin T2 - PROCEEDINGS OF THE 24TH SYMPOSIUM ON PRINCIPLES AND PRACTICE OF PARALLEL PROGRAMMING (PPOPP '19) AB - Recent studies have shown promising performance benefits of pipelined stencil applications. An important factor for the computing efficiency of such pipelines is the granularity of a task. We present GOPipe, the first granularity-oblivious programming framework for efficient pipelined stencil executions. With GOPipe, programmers no longer need to specify the appropriate task granularity. GOPipe automatically finds it, and schedules tasks of that granularity while observing all inter-task and inter-stage data dependencies. In our experiments on four real-life applications, GOPipe outperforms the state-of-the-art by up to 4.57× with much better programming productivity. DA - 2019/// PY - 2019/// DO - 10.1145/3293883.3301494 SP - 431-432 KW - GPU KW - Pipelined Execution KW - Data Dependence ER - TY - JOUR TI - Faster Biclique Mining in Near-Bipartite Graphs AU - Sullivan, Blair D. AU - Poel, Andrew AU - Woodlief, Trey T2 - ANALYSIS OF EXPERIMENTAL ALGORITHMS, SEA2 2019 AB - Identifying dense bipartite subgraphs is a common graph data mining task.
Many applications focus on the enumeration of all maximal bicliques (MBs), though sometimes the stricter variant of maximal induced bicliques (MIBs) is of interest. Recent work of Kloster et al. introduced a MIB-enumeration approach designed for “near-bipartite” graphs, where the runtime is parameterized by the size k of an odd cycle transversal (OCT), a vertex set whose deletion results in a bipartite graph. Their algorithm was shown to outperform the previously best known algorithm even when k was logarithmic in |V|. In this paper, we introduce two new algorithms optimized for near-bipartite graphs: one which enumerates MIBs in time O(M_I |V| |E| k), and another based on the approach of Alexe et al. which enumerates MBs in time O(M_B |V| |E| k), where M_I and M_B denote the number of MIBs and MBs in the graph, respectively. We implement all of our algorithms in open-source C++ code and experimentally verify that the OCT-based approaches are faster in practice than the previously existing algorithms on graphs with a wide variety of sizes, densities, and OCT decompositions. DA - 2019/// PY - 2019/// DO - 10.1007/978-3-030-34029-2_28 VL - 11544 SP - 424-453 SN - 1611-3349 UR - http://www.scopus.com/inward/record.url?eid=2-s2.0-85076404739&partnerID=MN8TOARS KW - Bicliques KW - Odd cycle transversal KW - Bipartite KW - Enumeration algorithms KW - Parameterized complexity ER - TY - JOUR TI - Structural Rounding: Approximation Algorithms for Graphs Near an Algorithmically Tractable Class AU - Demaine, Erik D. AU - Goodrich, Timothy D. AU - Kloster, Kyle AU - Lavallee, Brian AU - Liu, Quanquan C. AU - Sullivan, Blair D. AU - Vakilian, Ali AU - Poel, Andrew T2 - 27TH ANNUAL EUROPEAN SYMPOSIUM ON ALGORITHMS (ESA 2019) DA - 2019/// PY - 2019/// DO - 10.4230/LIPIcs.ESA.2019.37 VL - 144 SP - SN - 1868-8969 UR - http://www.scopus.com/inward/record.url?eid=2-s2.0-85074821909&partnerID=MN8TOARS KW - structural rounding KW - graph editing KW - approximation algorithms ER - TY - JOUR TI - The use of Bayesian inference in the characterization of materials and thin films AU - Jones, Jacob L. AU - Broughton, Rachel AU - Iamsasri, Thanakorn AU - Fancher, Chris M. AU - Wilson, Alyson G. AU - Reich, Brian AU - Smith, Ralph C. T2 - ACTA CRYSTALLOGRAPHICA A-FOUNDATION AND ADVANCES DA - 2019/// PY - 2019/// DO - 10.1107/S0108767319097940 VL - 75 SP - A211-A211 SN - 2053-2733 ER - TY - CONF TI - In-Place Zero-Space Memory Protection for CNN AU - Guan, Hui AU - Ning, Lin AU - Lin, Zhen AU - Shen, Xipeng AU - Zhou, Huiyang AU - Lim, Seung-Hwan A2 - Wallach, H. A2 - Larochelle, H. A2 - Beygelzimer, A. A2 - d'Alché-Buc, F. A2 - Fox, E. A2 - Garnett, R. C2 - 2019/// C3 - Advances in Neural Information Processing Systems Proceedings DA - 2019/// ER - TY - CONF TI - HiWayLib AU - Zheng, Zhen AU - Oh, Chanyoung AU - Zhai, Jidong AU - Shen, Xipeng AU - Yi, Youngmin AU - Chen, Wenguang T2 - the Twenty-Fourth International Conference AB - Pipeline is a parallel computing model underpinning a class of important applications running on CPU-GPU heterogeneous systems. A critical aspect for the efficiency of such applications is the support of communications among pipeline stages that may reside on the CPU and different parts of a GPU. Existing libraries of concurrent data structures do not meet the needs, due to the massive parallelism on GPU and the complexities in CPU-GPU memory and connections. This work gives an in-depth study on the communication problem.
It identifies three key issues, namely, slow and error-prone detection of the end of pipeline processing, intensive queue contentions on GPU, and cumbersome inter-device data movements. This work offers solutions to each of the issues, and integrates them all together to form a unified library named HiWayLib. Experiments show that HiWayLib significantly boosts the efficiency of pipeline communications in CPU-GPU heterogeneous applications. For real-world applications, HiWayLib produces 1.22–2.13× speedups over the state-of-the-art implementations with little extra programming effort required. C2 - 2019/// C3 - Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS '19 DA - 2019/// DO - 10.1145/3297858.3304032 PB - ACM Press SN - 9781450362405 UR - http://dx.doi.org/10.1145/3297858.3304032 DB - Crossref KW - pipeline communication KW - CPU-GPU system KW - contention relief KW - end detection KW - lazy copy ER - TY - CONF TI - Wootz: a compiler-based framework for fast CNN pruning via composability AU - Guan, Hui AU - Shen, Xipeng AU - Lim, Seung-Hwan T2 - the 40th ACM SIGPLAN Conference AB - Convolutional Neural Networks (CNN) are widely used for Deep Learning tasks. CNN pruning is an important method to adapt a large CNN model trained on general datasets to fit a more specialized task or a smaller device. The key challenge is deciding which filters to remove in order to maximize the quality of the pruned networks while satisfying the constraints. It is time-consuming due to the enormous configuration space and the slowness of CNN training. C2 - 2019/// C3 - Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation - PLDI 2019 DA - 2019/// DO - 10.1145/3314221.3314652 PB - ACM Press SN - 9781450367127 UR - http://dx.doi.org/10.1145/3314221.3314652 DB - Crossref KW - CNN KW - network pruning KW - compiler KW - composability ER - TY - CONF TI - IA-graph based inter-app conflicts detection in open IoT systems AU - Li, Xinyi AU - Zhang, Lei AU - Shen, Xipeng T2 - the 20th ACM SIGPLAN/SIGBED International Conference AB - This paper tackles the problem of detecting potential conflicts among independently developed apps that are to be installed into an open Internet of Things (IoT) environment. It provides a new set of definitions and categorizations of the conflicts to more precisely characterize the nature of the problem, and employs a graph representation (named IA Graph) for formally representing IoT controls and inter-app interplays. It provides an efficient conflict detection algorithm implemented on a SmartThings compiler and shows significantly improved efficacy over prior solutions. C2 - 2019/// C3 - Proceedings of the 20th ACM SIGPLAN/SIGBED International Conference on Languages, Compilers, and Tools for Embedded Systems - LCTES 2019 DA - 2019/// DO - 10.1145/3316482.3326350 PB - ACM Press SN - 9781450367240 UR - http://dx.doi.org/10.1145/3316482.3326350 DB - Crossref ER - TY - CONF TI - Deep reuse AU - Ning, Lin AU - Shen, Xipeng T2 - the ACM International Conference AB - This paper presents deep reuse, a method for speeding up CNN inferences by detecting and exploiting deep reusable computations on the fly. It empirically reveals the massive similarities among neuron vectors in activation maps, both within CNN inferences on an input and across inputs.
It gives an in-depth study on how to effectively turn the similarities into beneficial computation reuse to speed up CNN inferences. The investigation covers various factors, ranging from the clustering methods for similarity detection, to clustering scopes, similarity metrics, and neuron vector granularities. The insights help create deep reuse. As an on-line method, deep reuse is easy to apply, and adapts to each CNN (compressed or not) and its input. Using no special hardware support or CNN model changes, this method speeds up inferences by 1.77–2X (up to 4.3X layer-wise) on the fly with virtually no accuracy loss. ER - TY - JOUR AB - DODGE($\mathcal{E}$), a tuning tool, runs orders of magnitude faster, while also generating learners with more accurate predictions than seen in prior state-of-the-art approaches. DA - 2019/// PY - 2019/// DO - 10.1109/tse.2019.2945020 SP - 1-1 J2 - IEEE Trans. Software Eng. OP - SN - 0098-5589 1939-3520 2326-3881 UR - http://dx.doi.org/10.1109/tse.2019.2945020 DB - Crossref KW - Tuning KW - Text mining KW - Software KW - Task analysis KW - Optimization KW - Software engineering KW - Tools KW - Software analytics KW - hyperparameter optimization KW - defect prediction KW - text mining ER - TY - JOUR TI - Special Issue: Graph Computing AU - Jin, Hai AU - Shen, Xipeng AU - Lovas, Robert AU - Liao, Xiaofei T2 - CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE AB - Graph computing is now popular in many areas, including social network analysis and gene sequence alignment. Graph computing systems and algorithms have a history prior to the use of graph databases and have a future that is not necessarily entangled with typical database concerns. With the increasing size of data, many distributed graph-computing systems have been developed in recent years to process and analyze massive graphs. Researchers are paying more attention to graph partition schemes in distributed environments. However, other researchers think a single system can avoid the network overhead and may have better performance even if the data is too large for the memory space. With the rapid development of coprocessors, some researchers think it is promising to build a domain-specific computer just for graph computing. This special issue of Concurrency and Computation: Practice and Experience contains revised and extended versions of selected best papers on graph computing from the 21st IEEE International Conference on Parallel and Distributed Systems (ICPADS’16), which was held in Wuhan, China, on December 13-16, 2016. Established in 1992, ICPADS has been a major international forum for scientists, engineers, and users to exchange and share their experiences, new ideas, and latest research results on all aspects of parallel and distributed computing systems. The purpose of this special issue is to provide a comprehensive view into recent advances in systems software, algorithms, partition schemes, and even graph computers based on new advances in computer architecture and applications. The five selected papers are summarized as follows. The first paper, titled “An efficient iterative graph data processing framework based on bulk synchronous parallel model” by Liu et al,1 presents an efficient computational framework for graph data processing based on the bulk synchronous parallel model. Existing Pregel-like graph processing systems remain in an early stage, and many challenges remain, including prohibitive superstep-synchronization overhead.
Furthermore, the graph data partition strategy in these earlier graph systems fails to support load balancing, causing network I/O overhead to increase as the scale of graph data grows. Thus, this paper leverages a global synchronization mechanism to enhance the performance of graph computation. Meanwhile, a balanced hash-based graph partition mechanism is presented to optimize large-scale graph data processing. The work has a real implementation on top of the Pregel system, which can better support a variety of graph analytics applications. The second paper, titled “An efficient iterative graph data processing framework based on bulk synchronous parallel model” by Linchen Yu,2 proposes an optimized scheduling system for parallel programs in Xen. Virtualization challenges traditional CPU scheduling: a spin lock in a virtualized environment can be preempted by the VMM, increasing synchronization overhead and decreasing the performance of parallel programs. Many studies have proposed co-scheduling to alleviate this problem. However, these earlier attempts are not suitable for non-parallel workloads and also suffer from the CPU fragmentation problem. Therefore, a simultaneous optimization scheduling system, called CCHybrid, is proposed for the Xen virtualized environment. Results show the efficiency of CCHybrid over the traditional Xen Credit scheduler. The third paper, titled “ms-PoSW: A multi-server aided proof of shared ownership scheme for secure deduplication in cloud” by Xiong et al,3 introduces a novel concept of proof of shared ownership for securing client-side deduplication of shared files. With the rapid development of cloud computing and big data technologies, collaborative cloud applications are inextricably linked to our daily life and, therefore, produce a large number of shared files, which poses challenges for secure access and deduplication in the cloud. This paper proposes a novel multiserver-aided PoSW scheme for collaborative cloud applications and a hybrid PoSW scheme to reduce the computational cost of the shared owner's client. Furthermore, a hybrid PoSW scheme is constructed to address the secure proof of hybrid cloud architectures. The fourth paper, titled “Sparse random compressive sensing based data aggregation in wireless sensor networks” by Yin et al,4 introduces a compressive data aggregation scheme. In wireless sensor networks, the ever-expanding data volume has high spatial-temporal correlation. Although some earlier studies attempt to eliminate data redundancy, few can handle energy consumption and latency simultaneously. In this paper, the authors propose a delay-minimum, energy-balanced data aggregation method, which can eliminate the redundancy among the readings and prolong the network lifetime. A sparse random matrix is adopted as the measurement matrix to balance communication cost. In particular, each measurement can form an aggregation tree with minimum delay. Furthermore, a novel scheduling method is used to avoid information interference as well. The fifth paper, titled “Dynamic cluster strategy for hierarchical rollback-recovery protocols in MPI HPC applications” by Liao et al,5 proposes a dynamic cluster strategy that adapts to runtime variations in the communication pattern by using a prediction scheme.
The idea comes from the fact that hierarchical rollback-recovery protocols provide failure containment and reduce the amount of messages to be logged, making them an attractive and scalable solution for fault tolerance even at a large scale. This paper shows how the communication pattern changes across the stages of an application as MPI HPC applications scale up and become more complex. Therefore, to further increase the efficiency of hierarchical rollback-recovery protocols, the authors propose a dynamic cluster strategy (DCS) to adapt to changes in the communication pattern. In contrast to the existing static process partition algorithms, this strategy adopts a prediction mechanism, reusing the clusters of processes obtained from the earlier part of an application in the succeeding part. Detailed experiments are then performed to evaluate the effectiveness and efficiency of DCS at an extremely large scale. We hope that readers will find the contents of this special issue interesting and that it will further inspire them to look ahead into the challenges of designing, exploring, and exploiting graph analytics applications. DA - 2019/// PY - 2019/// DO - 10.1002/cpe.5452 ER - TY - JOUR TI - Discussion on “Effective interdisciplinary collaboration between statisticians and other subject matter experts” AU - Typhina, Eli AU - Wilson, Alyson T2 - Quality Engineering AB - Anderson-Cook, Lu, and Parker’s article offers numerous suggestions for ways statisticians can facilitate effective interdisciplinary collaboration, with particular focus on project teams. Their article comes at a time when the importance of collaboration to support innovation is becoming more broadly recognized, bringing with it the inherent challenges of engaging in collaboration. In our discussion, we expand on Anderson-Cook et al.'s insights by describing our experiences working with collaborators from different disciplines and sectors. We contextualize our recommendations with examples of collaborations from our organization, the Laboratory for Analytic Sciences. DA - 2019/1/2/ PY - 2019/1/2/ DO - 10.1080/08982112.2018.1539233 UR - https://doi.org/10.1080/08982112.2018.1539233 ER - TY - JOUR TI - Bayesian variable selection for logistic regression AU - Tian, Yiqing AU - Bondell, Howard D. AU - Wilson, Alyson T2 - STATISTICAL ANALYSIS AND DATA MINING AB - A key issue when using Bayesian variable selection for logistic regression is choosing an appropriate prior distribution. This can be particularly difficult for high-dimensional data where complete separation will naturally occur in the high-dimensional space. We propose the use of the Normal-Gamma prior with recommendations on calibration of the hyper-parameters. We couple this choice with the use of joint credible sets to avoid performing a search over the high-dimensional model space. The approach is shown to outperform other methods in high-dimensional settings, especially with highly correlated data. The Bayesian approach allows for a natural specification of the hyper-parameters.
DA - 2019/10// PY - 2019/10// DO - 10.1002/sam.11428 VL - 12 IS - 5 SP - 378-393 SN - 1932-1872 KW - joint credible region KW - Laplace prior KW - LASSO KW - Normal-gamma prior ER - TY - JOUR TI - Adaptive Deep Reuse: Accelerating CNN Training on the Fly AU - Ning, Lin AU - Guan, Hui AU - Shen, Xipeng T2 - 2019 IEEE 35TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2019) AB - This work proposes adaptive deep reuse, a method for accelerating CNN training by identifying and avoiding the unnecessary computations contained in each specific training run on the fly. It makes two major contributions. (1) It empirically proves the existence of many similarities among neuron vectors in both forward and backward propagation of CNNs. (2) It introduces the first adaptive strategy for translating the similarities into computation reuse in CNN training. The strategy adaptively adjusts the strength of reuse based on the different tolerances for precision relaxation in different CNN training stages. Experiments show that adaptive deep reuse saves 69% of CNN training time with no accuracy loss. DA - 2019/// PY - 2019/// DO - 10.1109/ICDE.2019.00138 SP - 1538-1549 SN - 1084-4627 KW - CNN KW - neuron vector KW - similarity KW - training KW - adaptive KW - deep reuse ER - TY - JOUR TI - The role of cellular contact and TGF-beta signaling in the activation of the epithelial mesenchymal transition (EMT) AU - Gasior, Kelsey AU - Wagner, Nikki J. AU - Cores, Jhon AU - Caspar, Rose AU - Wilson, Alyson AU - Bhattacharya, Sudin AU - Hauck, Marlene L. T2 - CELL ADHESION & MIGRATION AB - The epithelial mesenchymal transition (EMT) is one step in the process through which carcinoma cells metastasize by gaining the cellular mobility associated with mesenchymal cells. This work examines the dual influence of the TGF-β pathway and intercellular contact on the activation of EMT in colon (SW480) and breast (MCF7) carcinoma cells. While the SW480 population revealed an intermediate state between the epithelial and mesenchymal states, the MCF7 cells exhibited highly adhesive behavior. However, for both cell lines, an exogenous TGF-β signal and a reduction in cellular confluence can push a subgroup of the population towards the mesenchymal phenotype. Together, these results highlight that, while EMT is induced by the synergy of multiple signals, this activation varies across cell types. DA - 2019/// PY - 2019/// DO - 10.1080/19336918.2018.1526597 VL - 13 IS - 1 SP - 63-75 SN - 1933-6926 UR - https://doi.org/10.1080/19336918.2018.1526597 KW - EMT KW - TGF-beta KW - cellular adhesion KW - epithelial KW - mesenchymal KW - breast carcinoma KW - colon carcinoma ER - TY - JOUR TI - Structural sparsity of complex networks: Bounded expansion in random models and real-world graphs AU - Demaine, Erik D. AU - Reidl, Felix AU - Rossmanith, Peter AU - Villaamil, Fernando Sánchez AU - Sikdar, Somnath AU - Sullivan, Blair D. T2 - Journal of Computer and System Sciences AB - This research establishes that many real-world networks exhibit bounded expansion, a strong notion of structural sparsity, and demonstrates that it can be leveraged to design efficient algorithms for network analysis. Specifically, we give a new linear-time fpt algorithm for motif counting and linear-time algorithms to compute localized variants of several centrality measures. To establish structural sparsity in real-world networks, we analyze several common network models regarding their structural sparsity.
We show that, with high probability, (1) graphs sampled with a prescribed sparse degree sequence, (2) perturbed bounded-degree graphs, and (3) stochastic block models with small probabilities all result in graphs of bounded expansion. In contrast, we show that the Kleinberg and Barabási–Albert models have unbounded expansion. We support our findings with empirical measurements on a corpus of real-world networks. DA - 2019/11// PY - 2019/11// DO - 10.1016/j.jcss.2019.05.004 VL - 105 SP - 199-241 UR - https://doi.org/10.1016/j.jcss.2019.05.004 KW - Structural sparsity KW - Bounded expansion KW - Complex networks KW - Random graphs KW - Motif counting KW - Centrality measures ER - TY - JOUR TI - Bayesian modeling and test planning for multiphase reliability assessment AU - Gilman, James F. AU - Fronczyk, Kassandra M. AU - Wilson, Alyson G. T2 - QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL AB - We propose a Bayesian hierarchical model to assess the reliability of a family of vehicles, based on the development of the joint light tactical vehicle (JLTV). The proposed model effectively combines information across three phases of testing and across common vehicle components. The analysis yields estimates of failure rates for specific failure modes and vehicles as well as an overall estimate of the failure rate for the family of vehicles. We are also able to obtain estimates of how well vehicle modifications between test phases improve failure rates. In addition to using all data to improve on current assessments of reliability and reliability growth, we illustrate how to leverage the information learned from the three phases to determine appropriate specifications for subsequent testing that will demonstrate whether the reliability meets a given reliability threshold. DA - 2019/4// PY - 2019/4// DO - 10.1002/qre.2406 VL - 35 IS - 3 SP - 750-760 ER - TY - JOUR TI - Subgraph centrality and walk-regularity AU - Horton, Eric AU - Kloster, Kyle AU - Sullivan, Blair D. T2 - Linear Algebra and its Applications AB - Matrix-based centrality measures have enjoyed significant popularity in network analysis, in no small part due to our ability to rigorously analyze their behavior as parameters vary. Recent work has considered the relationship between subgraph centrality, which is defined using the matrix exponential f(x) = exp(x), and the walk structure of a network. In a walk-regular graph, the number of closed walks of each length must be the same for all nodes, implying uniform f-subgraph centralities for any f (or maximum f-walk entropy). We consider when non-walk-regular graphs can achieve maximum entropy, calling such graphs entropic. For parameterized measures, we are also interested in which values of the parameter witness this uniformity. To date, only one entropic graph has been identified, with only two witnessing parameter values, raising the question of how many such graphs and parameters exist. We resolve these questions by constructing infinite families of entropic graphs, as well as a family of witnessing parameters with a limit point at zero. DA - 2019/6// PY - 2019/6// DO - 10.1016/j.laa.2019.02.005 VL - 570 SP - 225-244 UR - https://doi.org/10.1016/j.laa.2019.02.005 KW - Centrality KW - Graph entropy KW - Walk-regularity KW - Functions of matrices KW - Network analysis ER -