GLORIA

GEOMAR Library Ocean Research Information Access


  • 1
    Online Resource
    Association for Computing Machinery (ACM); 2021
    In: Proceedings of the VLDB Endowment, Association for Computing Machinery (ACM), Vol. 14, No. 11 (2021-07), p. 1992-2005
    Abstract: As random walk is a powerful tool in many graph processing, mining and learning applications, this paper proposes an efficient in-memory random walk engine named ThunderRW. Compared with existing parallel systems on improving the performance of a single graph operation, ThunderRW supports massive parallel random walks. The core design of ThunderRW is motivated by our profiling results: common RW algorithms have as high as 73.1% CPU pipeline slots stalled due to irregular memory access, which suffers significantly more memory stalls than the conventional graph workloads such as BFS and SSSP. To improve the memory efficiency, we first design a generic step-centric programming model named Gather-Move-Update to abstract different RW algorithms. Based on the programming model, we develop the step interleaving technique to hide memory access latency by switching the executions of different random walk queries. In our experiments, we use four representative RW algorithms including PPR, DeepWalk, Node2Vec and MetaPath to demonstrate the efficiency and programming flexibility of ThunderRW. Experimental results show that ThunderRW outperforms state-of-the-art approaches by an order of magnitude, and the step interleaving technique significantly reduces the CPU pipeline stall from 73.1% to 15.0%.
    Type of Medium: Online Resource
    ISSN: 2150-8097
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2021
    ZDB ID: 2478691-3
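    The Gather-Move-Update model described in the abstract above can be illustrated with a short sketch. The following is a minimal Python analogue, assuming a simple adjacency-list graph and uniform transition weights; the function names and structure are illustrative assumptions only and do not reproduce ThunderRW's actual C++ API or its step-interleaving machinery.

    import random

    # Minimal sketch of a step-centric random walk in the spirit of
    # Gather-Move-Update (illustrative only, not ThunderRW's API).

    def gather(graph, v):
        # Gather: collect candidate neighbors and their transition weights
        # (uniform here; PPR, Node2Vec, etc. would weight them differently).
        neighbors = graph[v]
        return neighbors, [1.0] * len(neighbors)

    def move(candidates, weights):
        # Move: sample the next vertex according to the weights.
        return random.choices(candidates, weights=weights, k=1)[0]

    def update(walk, v):
        # Update: record the step and advance the query's state.
        walk.append(v)

    def random_walk(graph, start, length):
        walk = [start]
        cur = start
        for _ in range(length):
            candidates, weights = gather(graph, cur)
            if not candidates:
                break
            cur = move(candidates, weights)
            update(walk, cur)
        return walk

    # Example: a tiny undirected graph as adjacency lists.
    graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    print(random_walk(graph, start=0, length=5))

    In ThunderRW itself, many such walk queries run concurrently and the engine switches between them at each step to hide the memory latency of the gather phase; this sketch omits that interleaving.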
  • 2
    Online Resource
    Association for Computing Machinery (ACM); 2020
    In: Proceedings of the VLDB Endowment, Association for Computing Machinery (ACM), Vol. 13, No. 12 (2020-08), p. 2813-2816
    Abstract: This paper demonstrates G3, a framework for Graph Neural Network (GNN) training, tailored from Graph processing systems on Graphics processing units (GPUs). G3 aims at improving the efficiency of GNN training by supporting graph-structured operations using parallel graph processing systems. G3 enables users to leverage the massive parallelism and other architectural features of GPUs in the following two ways: building GNN layers by writing sequential C/C++ code with a set of flexible APIs (Application Programming Interfaces); creating GNN models with essential GNN operations and layers provided in G3. The runtime system of G3 automatically executes the user-defined GNNs on the GPU, with a series of graph-centric optimizations enabled. We demonstrate the steps of developing some popular GNN models with G3, and the superior performance of G3 against existing GNN training systems, i.e., PyTorch and TensorFlow.
    Type of Medium: Online Resource
    ISSN: 2150-8097
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2020
    ZDB ID: 2478691-3
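    As a rough illustration of the graph-structured operations the abstract above refers to, the following minimal Python sketch expresses a single graph convolution layer as neighbor aggregation followed by a dense transform. It is a NumPy analogue under assumed shapes, not G3's C/C++ API, and it runs on the CPU rather than the GPU.

    import numpy as np

    # Minimal sketch of one GNN (graph convolution) layer expressed as a
    # graph-structured operation: aggregate neighbor features, then apply
    # a learned linear transform and a nonlinearity. Illustrative only.

    def gcn_layer(adj, features, weight):
        # adj:      (n, n) adjacency matrix with self-loops added
        # features: (n, d_in) node feature matrix
        # weight:   (d_in, d_out) layer parameters
        degree = adj.sum(axis=1, keepdims=True)       # per-node degree
        aggregated = (adj @ features) / degree        # mean over neighbors
        return np.maximum(aggregated @ weight, 0.0)   # linear transform + ReLU

    # Example on a tiny 4-node graph.
    rng = np.random.default_rng(0)
    adj = np.array([[1, 1, 1, 0],
                    [1, 1, 1, 0],
                    [1, 1, 1, 1],
                    [0, 0, 1, 1]], dtype=float)       # adjacency + self-loops
    x = rng.standard_normal((4, 8))                   # 8-dim input features
    w = rng.standard_normal((8, 16))                  # 16-dim layer output
    print(gcn_layer(adj, x, w).shape)                 # -> (4, 16)

    In G3 itself, such aggregation is handled by the parallel graph processing primitives on the GPU, which the abstract credits for its efficiency advantage over PyTorch and TensorFlow.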