In:
PLOS ONE, Public Library of Science (PLoS), Vol. 18, No. 9 (2023-09-08), p. e0288935
Abstract:
Accurately predicting mobile network traffic helps mobile network operators allocate resources more rationally and supports stable, fast network services for users. However, the burstiness and uncertainty of network traffic make accurate prediction difficult.
Methodology: Considering the spatio-temporal correlation of network traffic, we proposed a deep-learning model for time-series prediction, the Convolutional Block Attention Module (CBAM) Spatio-Temporal Convolution Network-Transformer, which combines a CBAM attention mechanism, a Temporal Convolutional Network (TCN), and a Transformer with a sparse self-attention mechanism. The model extracts the spatio-temporal features of network traffic for prediction. First, we improved the TCN to capture spatial information and added the CBAM attention mechanism, naming the resulting component CSTCN; it handles the important temporal and spatial features in network traffic. Second, a Transformer with a sparse self-attention mechanism was used to extract spatio-temporal features. Comparison with the baselines showed that this design significantly improved prediction accuracy. We conducted experiments on a real network traffic dataset from the city of Milan.
Results: On the test sets, CSTCN-Transformer reduced the mean squared error of the prediction results by 65.16%, 64.97%, and 60.26%, and the mean absolute error by 51.36%, 53.10%, and 38.24%, compared to CSTCN, a Long Short-Term Memory network, and Transformer, respectively, which justifies the model design in this paper.
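The abstract evaluates predictions with mean squared error (MSE) and mean absolute error (MAE). A minimal sketch of these two metrics is shown below; the function names and NumPy implementation are illustrative, not taken from the paper's code.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: average of squared prediction errors."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error: average of absolute prediction errors."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Example on toy traffic volumes: errors are (0, 0, 2),
# so MSE = 4/3 and MAE = 2/3.
print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
print(mae([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
```

Because MSE squares each error, it penalizes the bursty outliers the abstract mentions more heavily than MAE does, which is why papers typically report both.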
Type of Medium:
Online Resource
ISSN:
1932-6203
DOI:
10.1371/journal.pone.0288935
Component DOIs:
10.1371/journal.pone.0288935.g001 – .g016, .t001 – .t009, .s001 – .s002, .r001 – .r006
Language:
English
Publisher:
Public Library of Science (PLoS)
Publication Date:
2023
ZDB-ID:
2267670-3