GLORIA

GEOMAR Library Ocean Research Information Access


  • 1
    Online Resource
    Association for Computing Machinery (ACM) ; 2023
    In: ACM Transactions on Information Systems, Association for Computing Machinery (ACM), Vol. 41, No. 1 (2023-01-31), p. 1-23
    Abstract: Relation ties, defined as the correlation and mutual exclusion between different relations, are critical for distant supervised relation extraction. Previous studies usually obtain this property by greedily learning the local connections between relations. However, they are essentially limited because of failing to capture the global topology structure of relation ties and may easily fall into a locally optimal solution. To address this issue, we propose a novel force-directed graph to comprehensively learn relation ties. Specifically, we first construct a graph according to the global co-occurrence of all relations. Then, we borrow the idea of Coulomb’s law from physics and introduce the concept of attractive force and repulsive force into this graph to learn correlation and mutual exclusion between relations. Finally, the obtained relation representations are applied as an inter-dependent relation classifier. Extensive experimental results demonstrate that our method is capable of modeling global correlation and mutual exclusion between relations, and outperforms the state-of-the-art baselines. In addition, the proposed force-directed graph can be used as a module to augment existing relation extraction systems and improve their performance.
    Type of Medium: Online Resource
    ISSN: 1046-8188 , 1558-2868
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2023
    ZDB-ID: 602352-6
    ZDB-ID: 2006337-4
    SSG: 24,1
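The force-directed idea in record 1 can be illustrated with a toy update rule: relations that co-occur pull their embeddings together, while rarely co-occurring ones push apart, loosely mirroring the Coulomb's-law analogy in the abstract. This is a minimal sketch under assumed inputs (a relation co-occurrence matrix and random embeddings), not the authors' actual formulation.

```python
# Illustrative sketch only: a force-directed update over relation embeddings,
# loosely following the Coulomb's-law analogy described in the abstract above.
# The co-occurrence weights, force terms, and learning rate are assumptions.
import numpy as np

def force_directed_step(rel_emb, cooccur, lr=0.01, eps=1e-8):
    """One update step: co-occurring relations attract, non-co-occurring repel.

    rel_emb : (R, d) relation embeddings
    cooccur : (R, R) global co-occurrence counts between relations
    """
    R, d = rel_emb.shape
    # Normalise co-occurrence into [0, 1]; 0 means the pair never co-occurs.
    w = cooccur / (cooccur.max() + eps)
    forces = np.zeros_like(rel_emb)
    for i in range(R):
        diff = rel_emb - rel_emb[i]                     # vectors from i to every j
        dist = np.linalg.norm(diff, axis=1, keepdims=True) + eps
        unit = diff / dist
        # Attraction grows with co-occurrence; repulsion is inverse-square,
        # as in Coulomb's law, and dominates for mutually exclusive pairs.
        attract = w[i][:, None] * dist * unit
        repel = -(1.0 - w[i][:, None]) * unit / (dist ** 2)
        forces[i] = (attract + repel).sum(axis=0)
    return rel_emb + lr * forces

# Toy usage: 4 relations, 16-dim embeddings, random symmetric co-occurrence counts.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 16))
co = rng.integers(0, 10, size=(4, 4)).astype(float)
co = (co + co.T) / 2
emb = force_directed_step(emb, co)
```

Iterating such updates lets the embeddings settle into a layout where correlated relations sit close together, which is the property the paper then exploits as a relation classifier.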
  • 2
    Online Resource
    Association for Computing Machinery (ACM) ; 2024
    In: ACM Transactions on Information Systems, Association for Computing Machinery (ACM), Vol. 42, No. 2 (2024-03-31), p. 1-32
    Abstract: Coarse-grained response selection is a fundamental and essential subsystem for the widely used retrieval-based chatbots, aiming to recall a coarse-grained candidate set from a large-scale dataset. The dense retrieval technique has recently been proven very effective in building such a subsystem. However, dialogue dense retrieval models face two problems in real scenarios: (1) the multi-turn dialogue history is re-computed in each turn, leading to inefficient inference; (2) the storage of the offline index is enormous, significantly increasing the deployment cost. To address these problems, we propose an efficient coarse-grained response selection subsystem consisting of two novel methods. Specifically, to address the first problem, we propose Hierarchical Dense Retrieval. It caches rich multi-vector representations of the dialogue history and only encodes the latest user’s utterance, leading to better inference efficiency. Then, to address the second problem, we design Deep Semantic Hashing to reduce the index storage while largely preserving its recall accuracy. Extensive experimental results prove the advantages of the two proposed methods over previous works. Specifically, with limited performance loss, our proposed coarse-grained response selection model achieves over 5x FLOPs speedup and over 192x storage compression ratio. Moreover, our source codes have been publicly released.
    Type of Medium: Online Resource
    ISSN: 1046-8188 , 1558-2868
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2024
    ZDB-ID: 602352-6
    ZDB-ID: 2006337-4
    SSG: 24,1
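The two components named in record 2, caching the dialogue-history representations and hashing the index, can be sketched roughly as below. The encoder, the aggregation over cached turn vectors, and the random hashing projection are all placeholders for what in the paper are trained models; the sketch only shows the caching and binarisation pattern.

```python
# Illustrative sketch only, under simplifying assumptions: `encode` stands in for
# any sentence encoder, and the hashing projection is random rather than learned
# (the paper's Deep Semantic Hashing is a trained network).
import numpy as np

rng = np.random.default_rng(0)

def encode(utterance: str, dim: int = 128) -> np.ndarray:
    # Placeholder encoder: a deterministic pseudo-embedding per string.
    seed = abs(hash(utterance)) % (2 ** 32)
    return np.random.default_rng(seed).normal(size=dim)

class HierarchicalDialogueEncoder:
    """Caches per-turn vectors so only the newest utterance is encoded each turn."""
    def __init__(self):
        self.cache = []                        # one vector per past turn

    def add_turn(self, utterance: str) -> np.ndarray:
        self.cache.append(encode(utterance))   # encode only the latest turn
        return np.mean(self.cache, axis=0)     # crude aggregation of the history

projection = rng.normal(size=(128, 64))        # 64-bit codes instead of 128 floats

def semantic_hash(vec: np.ndarray) -> np.ndarray:
    # Binarise a dense vector to shrink the offline index.
    return (vec @ projection > 0).astype(np.uint8)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

# Toy usage: index two candidate responses, retrieve by Hamming distance.
index = {r: semantic_hash(encode(r)) for r in ["sure, see you at noon", "i prefer tea"]}
enc = HierarchicalDialogueEncoder()
query_code = semantic_hash(enc.add_turn("want to grab lunch tomorrow?"))
best = min(index, key=lambda r: hamming(query_code, index[r]))
print(best)
```

The speedup comes from never re-encoding old turns, and the storage saving from keeping short binary codes in the index instead of full float vectors.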
  • 3
    Online Resource
    Association for Computing Machinery (ACM) ; 2023
    In: Proceedings of the ACM on Management of Data, Association for Computing Machinery (ACM), Vol. 1, No. 1 (2023-05-26), p. 1-19
    Abstract: Recently, to improve unsupervised image retrieval performance, plenty of unsupervised hashing methods have been proposed by designing a semantic similarity matrix based on the similarities between image features extracted by a pre-trained CNN model. However, most of these methods tend to ignore the high-level abstract semantic concepts contained in images. Intuitively, concepts play an important role in calculating the similarity among images. In real-world scenarios, each image is associated with some concepts, and the similarity between two images will be larger if they share more identical concepts. Inspired by the above intuition, in this work, we propose a novel Unsupervised Hashing with Semantic Concept Mining, called UHSCM, which leverages a vision-language pre-training (VLP) model to construct a high-quality similarity matrix. Specifically, a set of randomly chosen concepts is first collected. Then, by employing the VLP model with prompt engineering, which has shown strong power in visual representation learning, the set of concepts is denoised according to the training images. Next, the proposed method UHSCM applies the VLP model with prompting again to mine the concept distribution of each image and construct a high-quality semantic similarity matrix based on the mined concept distributions. Finally, with the semantic similarity matrix as guiding information, a novel hashing loss with a modified contrastive-loss-based regularization term is proposed to optimize the hashing network. Extensive experiments on three benchmark datasets show that the proposed method outperforms the state-of-the-art baselines in the image retrieval task.
    Type of Medium: Online Resource
    ISSN: 2836-6573
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2023
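Record 3's central step, turning per-image concept distributions into a similarity matrix that supervises a hashing network, can be illustrated with a toy computation. The concept scores below are random stand-ins for what a VLP model (e.g. CLIP-style prompt scoring) would produce, and cosine similarity of concept distributions is an assumed choice, not necessarily the paper's exact definition.

```python
# Illustrative sketch only: building a semantic similarity matrix from per-image
# concept distributions, as the abstract above describes at a high level.
import numpy as np

rng = np.random.default_rng(0)
n_images, n_concepts = 6, 10

# Placeholder for "mine the concept distribution of each image with a VLP model":
# one probability vector over the concept set per image.
logits = rng.normal(size=(n_images, n_concepts))
concept_dist = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Images that share more concepts get a larger similarity entry.
norm = concept_dist / np.linalg.norm(concept_dist, axis=1, keepdims=True)
similarity = norm @ norm.T                     # (n_images, n_images), in (0, 1]

# A hashing network would then be trained so that agreements between its binary
# codes follow this matrix (the "guiding information" in the abstract).
print(np.round(similarity, 2))
```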
  • 4
    Online Resource
    Association for Computing Machinery (ACM) ; 2023
    In: ACM Transactions on Asian and Low-Resource Language Information Processing, Association for Computing Machinery (ACM), Vol. 22, No. 4 (2023-04-30), p. 1-18
    Abstract: Neural variational inference-based topic modeling has gained great success in mining abstract topics from documents. However, these topic models mainly focus on optimizing the topic proportions for documents, while the quality and the internal construction of topics are usually neglected. Specifically, these models cannot guarantee that semantically related words are assigned to the same topic and struggle to ensure the interpretability of topics. Moreover, many topical words recur frequently in the top words of different topics, which makes the learned topics semantically redundant and similar, and of little significance for further study. To solve the above problems, we propose a novel neural topic model called the Neural Variational Gaussian Mixture Topic Model (NVGMTM). We use Gaussian distributions to depict the semantic relevance between words in the topics. Each topic in NVGMTM is considered a multivariate Gaussian distribution over words in the word-embedding space. Thus, semantically related words share similar probabilities in each topic, which makes the topics more coherent and interpretable. Experimental results on two public corpora show the proposed model outperforms the state-of-the-art baselines.
    Type of Medium: Online Resource
    ISSN: 2375-4699 , 2375-4702
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2023
    ZDB-ID: 2820615-0
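The key modelling choice in record 4, treating each topic as a Gaussian over the word-embedding space so that nearby words share probability mass, can be shown with a small numeric example. The embeddings, topic mean, and diagonal covariance below are toy values; the actual NVGMTM learns these with neural variational inference.

```python
# Illustrative sketch only: a topic as a Gaussian over word embeddings, so that
# semantically close words get similar probability under the same topic.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["ocean", "sea", "wave", "bank", "loan", "credit"]
emb = rng.normal(size=(len(vocab), 8))          # stand-in word embeddings

def topic_word_distribution(mean, log_var, emb):
    """p(word | topic) proportional to the Gaussian density of the word embedding
    under the topic's mean and diagonal covariance."""
    var = np.exp(log_var)
    log_dens = -0.5 * np.sum((emb - mean) ** 2 / var + np.log(2 * np.pi * var), axis=1)
    log_dens -= log_dens.max()                  # numerical stability
    p = np.exp(log_dens)
    return p / p.sum()

# One toy topic centred near the last three words' embeddings.
topic_mean = emb[3:6].mean(axis=0)
probs = topic_word_distribution(topic_mean, np.zeros(8), emb)
for w, p in zip(vocab, probs):
    print(f"{w:>6s}: {p:.3f}")
```

Because the density decays smoothly with distance from the topic mean, words whose embeddings lie close together receive similar probabilities, which is the coherence property the abstract emphasises.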
  • 5
    Online Resource
    Association for Computing Machinery (ACM) ; 2021
    In: ACM Transactions on Information Systems, Association for Computing Machinery (ACM), Vol. 39, No. 1 (2021-01-31), p. 1-37
    Abstract: Open-domain generative dialogue systems have attracted considerable attention over the past few years. Currently, how to automatically evaluate them is still a big challenge. As far as we know, there are three kinds of automatic evaluations for open-domain generative dialogue systems: (1) word-overlap-based metrics; (2) embedding-based metrics; (3) learning-based metrics. Due to the lack of systematic comparison, it is not clear which kind of metrics is more effective. In this article, we first systematically measure all kinds of metrics to check which kind is best. Extensive experiments demonstrate that learning-based metrics are the most effective evaluation metrics for open-domain generative dialogue systems. Moreover, we observe that nearly all learning-based metrics depend on the negative sampling mechanism, which obtains extremely imbalanced and low-quality samples to train a score model. To address this issue, we propose a novel learning-based metric that significantly improves the correlation with human judgments by using augmented POsitive samples and valuable NEgative samples, called PONE. Extensive experiments demonstrate that PONE significantly outperforms the state-of-the-art learning-based evaluation method. Besides, we have publicly released the codes of our proposed metric and the state-of-the-art baselines.
    Type of Medium: Online Resource
    ISSN: 1046-8188 , 1558-2868
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2021
    ZDB-ID: 602352-6
    ZDB-ID: 2006337-4
    SSG: 24,1
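Record 5's sampling idea, augmented positive samples plus "valuable" rather than uniformly random negative samples, can be sketched as below. The word-drop augmentation and the lexical-overlap negative filter are illustrative assumptions, not the actual PONE strategies.

```python
# Illustrative sketch only: assembling training samples for a learning-based
# dialogue metric with augmented positives and filtered ("valuable") negatives.
import random

random.seed(0)

def augment_positive(response: str) -> str:
    # Cheap augmentation: drop one random word to create an extra positive sample.
    words = response.split()
    if len(words) > 1:
        words.pop(random.randrange(len(words)))
    return " ".join(words)

def valuable_negatives(context: str, pool: list[str], k: int = 2) -> list[str]:
    # Instead of uniform random sampling, prefer negatives that share some words
    # with the context: harder, more informative training samples.
    ctx = set(context.lower().split())
    scored = sorted(pool, key=lambda r: len(ctx & set(r.lower().split())), reverse=True)
    return scored[:k]

context = "do you want to grab lunch tomorrow"
positive = "sure, lunch tomorrow works for me"
pool = ["i bought a new bike", "tomorrow is fine but not for lunch", "the weather is nice"]

train_pairs = [(context, positive, 1),
               (context, augment_positive(positive), 1)]
train_pairs += [(context, neg, 0) for neg in valuable_negatives(context, pool)]
# A scoring model (e.g. a BERT-style classifier) would then be trained on these
# (context, response, label) triples and used as the automatic evaluation metric.
print(train_pairs)
```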