GLORIA

GEOMAR Library Ocean Research Information Access


Filter
  • Online Resource (5)
  • DiFranzo, Dominic (5)
  • 1
    Online Resource
    In: Social Media + Society, SAGE Publications, Vol. 9, No. 1 (2023-01), p. 205630512311564-
    Abstract: There are many factors that account for disclosure of private information on social network sites, but a potentially powerful determinant that remains understudied is social norms, which refer to perceptions of what other people do, approve of, and expect us to do on social media. To address this gap, we conducted an in-depth analysis of descriptive, injunctive, and subjective norms for verbal and visual disclosure on Facebook and Instagram, using a preregistered survey study with 863 participants. We further analyzed whether critical media literacy and media-related self-reflection could buffer against uncritical adoption of these norms. The findings revealed that all three types of norms positively and independently predicted self-disclosure, regardless of the platform or type of self-disclosure (visual vs. verbal), while controlling for other common predictors of self-disclosure, including perceived benefits and risks of self-disclosure. Self-reflection and critical media literacy neither directly predicted disclosure, nor accounted for differences in norm-behavior relationships.
    Type of Medium: Online Resource
    ISSN: 2056-3051
    Language: English
    Publisher: SAGE Publications
    Publication Date: 2023
    ZDB-ID: 2819814-1
  • 2
    Online Resource
    In: Journal of Computer-Mediated Communication, Oxford University Press (OUP), Vol. 26, No. 5 (2021-09-18), p. 284-300
    Abstract: This study evaluates whether increasing the visibility of a moderator's identity influences bystanders' likelihood of flagging subsequent unmoderated harassing comments. In a two-day preregistered experiment conducted in a realistic social media simulation, participants encountered ambiguous or unambiguous harassment comments that were ostensibly flagged by other users, an automated system (AI), or an unidentified moderation source. The results reveal that visibility of a content moderation source inhibited participants' flagging of a subsequent unmoderated harassment comment, presumably because their efforts were seen as dispensable, compared to when the moderation source was unknown. In contrast, other users versus AI as the moderation source had an indirect effect on subsequent flagging through changes in perceived social norms. Overall, this research shows that the effects of moderation transparency are complex, as increasing the visibility of a content moderator may inadvertently inhibit bystander intervention.
    Lay Summary: This study examines the effects of flagging unmoderated offensive posts on social media, and how this changes users' subsequent behavior. We examined users' reactions to the flagging of these posts by other users, an automated system, or an unspecified process to determine whether this affects users' ensuing behavior. A two-day experiment on a simulated social media site showed that the visibility of the "flagger" affects how users perceive social norms and think about accountability for their own online actions. The results showed that the visibility of the person or system that flagged the material generally deterred subsequent flagging, and that this effect was stronger when users thought other users, rather than an automated system, had flagged the online harassment.
    Type of Medium: Online Resource
    ISSN: 1083-6101
    Language: English
    Publisher: Oxford University Press (OUP)
    Publication Date: 2021
    ZDB-ID: 2024777-1
  • 3
    Online Resource
    In: Proceedings of the ACM on Human-Computer Interaction, Association for Computing Machinery (ACM), Vol. 3, No. CSCW (2019-11-07), p. 1-26
    Abstract: Bystander intervention can reduce the amount of cyberbullying victimization on social media, but bystanders often fail to act. Limited accountability for their behavior and a lack of empathy for the victim are frequently cited as reasons for why bystanders do not act against cyberbullying. We developed design interventions that aimed to increase accountability and empathy among bystanders. In Study 1, participants were experimentally exposed to three social media posts with different types of empathy nudges. Empathy nudges embedded into social media posts displayed the potential to motivate empathy. In Study 2, participants took part in a 3-day experiment that simulated a social media experience. Results suggested that increased social transparency on social media promoted accountability through heightened self-presentation concerns, but empathy nudges did not encourage greater bystander empathy. Both accountability and empathy predicted bystander intervention, but the types of bystander actions promoted by each mechanism differed. We consider how these results contribute to theories of bystander behavior and designing social media to promote prosocial behaviors.
    Type of Medium: Online Resource
    ISSN: 2573-0142
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2019
    ZDB-ID: 2930194-4
  • 4
    Online Resource
    In: Proceedings of the ACM on Human-Computer Interaction, Association for Computing Machinery (ACM), Vol. 4, No. GROUP (2020-01-04), p. 1-18
    Abstract: Conversational agents are increasingly becoming integrated into everyday technologies and can collect large amounts of data about users. As these agents mimic interpersonal interactions, we draw on communication privacy management theory to explore people's privacy expectations with conversational agents. We conducted a 3x3 factorial experiment in which we manipulated agents' social interactivity and data sharing practices to understand how these factors influence people's judgments about potential privacy violations and their evaluations of agents. Participants perceived agents that shared response data with advertisers more negatively compared to agents that shared such data with only their companies; perceptions of privacy violations did not differ between agents that shared data with their companies and agents that did not share information at all. Participants also perceived the socially interactive agent's sharing practices less negatively than those of the other agents, highlighting a potential privacy vulnerability that users are exposed to in interactions with socially interactive conversational agents.
    Type of Medium: Online Resource
    ISSN: 2573-0142
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2020
    ZDB-ID: 2930194-4
  • 5
    Online Resource
    In: Big Data & Society, SAGE Publications, Vol. 9, No. 2 (2022-07), p. 205395172211156-
    Abstract: This study examines how the visibility of a content moderator and the ambiguity of moderated content influence perceptions of the moderation system in a social media environment. In a two-day preregistered experiment conducted in a realistic social media simulation, participants encountered moderated comments that were either unequivocally harsh or ambiguously worded, and the source of moderation was either unidentified or attributed to other users or an automated system (AI). The results show that when comments were moderated by an AI rather than by other users, users perceived less accountability in the moderation system and had less trust in the moderation decision, especially for ambiguously worded harassment, as opposed to clear harassment cases. However, no differences emerged in perceived moderation fairness, objectivity, or participants' confidence in their understanding of the moderation process. Overall, our study demonstrates that users tend to question the moderation decision and system more when an AI moderator is visible, highlighting the complexity of effectively managing the visibility of automatic content moderation in social media environments.
    Type of Medium: Online Resource
    ISSN: 2053-9517
    Language: English
    Publisher: SAGE Publications
    Publication Date: 2022
    ZDB-ID: 2773948-X