Directed Social Regard: Surfacing Targeted Advocacy, Opposition, Aid, Harms, and Victimization in Online Media
Researchers introduce Directed Social Regard (DSR), an NLP framework that detects the targets of mixed sentiment in online messages and scores each target across multiple dimensions. Unlike traditional sentiment analysis tools, which classify text as simply positive or negative, DSR identifies the specific targets of both pro-social and anti-social sentiment within the same message, with applications to analyzing influence operations and political rhetoric.
The DSR framework addresses a fundamental limitation in how NLP systems understand sentiment in complex online communication. Traditional sentiment analysis tools provide binary or ternary classifications that obscure the nuanced reality of how people express simultaneous positive and negative views toward different subjects within a single message. This capability becomes critical as platforms host increasingly sophisticated influence operations in which bad actors blend legitimate advocacy with targeted harassment or disinformation, techniques that bypass conventional content moderation filters.
The technical approach leverages transformer-based models in a two-stage pipeline: first identifying sentiment targets as text spans, then scoring those spans across three axes grounded in social science theory around moral disengagement. This foundation in behavioral research rather than purely computational methods strengthens the framework's validity for real-world applications. The researchers validated their approach on six third-party datasets, establishing meaningful correlations with existing social science labels.
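The two-stage pipeline can be sketched in outline. This is a minimal illustration, not the authors' implementation: the stage-one span extractor and stage-two scorer are passed in as stand-ins for the paper's transformer models, and the three axis names are placeholders chosen to echo the paper's title (the actual labels come from its moral-disengagement framing).

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Placeholder axis names; the paper grounds its three scoring axes in
# moral-disengagement theory, and the exact labels are assumptions here.
AXES = ("advocacy_vs_opposition", "aid_vs_harm", "victimization")

@dataclass
class ScoredTarget:
    span: str                    # text of the sentiment-target span
    start: int                   # character offsets into the message
    end: int
    scores: Dict[str, float]     # axis name -> score, e.g. in [-1, 1]

def dsr_pipeline(
    message: str,
    extract_targets: Callable[[str], List[Tuple[int, int]]],
    score_span: Callable[[str, Tuple[int, int]], Dict[str, float]],
) -> List[ScoredTarget]:
    """Two-stage DSR sketch.

    Stage 1: `extract_targets` returns (start, end) offsets of target spans
    (in the paper, a transformer span-identification model).
    Stage 2: `score_span` scores one span along the three axes
    (in the paper, a transformer scoring model).
    """
    results = []
    for start, end in extract_targets(message):
        scores = score_span(message, (start, end))
        results.append(ScoredTarget(message[start:end], start, end, scores))
    return results
```

With toy stub models, a message like "We must protect Group A and crush Group B." yields two `ScoredTarget` records, one per group, each with its own axis scores rather than a single message-level polarity.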
For platform safety and policy teams, DSR offers actionable intelligence about coordinated campaigns that mix legitimate-seeming content with targeted harms. Detection capabilities improve when systems can surface that a single message advocates for Group A while threatening Group B, a pattern that unified sentiment scores would obscure. For researchers studying polarization and radicalization, the multi-dimensional scoring provides granular data about how rhetoric frames different populations.
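The advantage over a unified score can be shown with hypothetical numbers. Given per-target valences for a message that advocates for Group A while threatening Group B, averaging them into one message-level score cancels the mixed pattern, while the per-target view keeps the threat visible for triage:

```python
# Hypothetical per-target valences for one message that advocates for
# Group A (+0.9) while threatening Group B (-0.9).
target_valence = {"Group A": 0.9, "Group B": -0.9}

# A unified message-level score averages the targets and looks neutral,
# hiding the threat entirely.
unified = sum(target_valence.values()) / len(target_valence)
print(unified)   # 0.0

# The per-target view surfaces the threatened group directly.
flagged = [t for t, v in target_valence.items() if v < -0.5]
print(flagged)   # ['Group B']
```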
The framework's practical deployment depends on whether platforms prioritize nuanced safety tools over simpler approaches. As academic research advances sentiment analysis capabilities, the gap between available technology and deployed systems widens, suggesting adoption faces organizational rather than technical barriers.
- DSR detects and scores mixed-sentiment targets in single messages, addressing limitations of traditional sentiment analysis that only classifies overall text polarity.
- The framework uses transformer-based models grounded in social science theory to identify specific targets of pro-social and anti-social sentiment across three dimensions.
- Validation across six third-party datasets shows meaningful correlations, suggesting practical utility for platform safety and academic research.
- The technology can improve detection of coordinated influence operations that blend legitimate advocacy with targeted harassment or threats.
- Real-world adoption depends on platform prioritization, as technical capabilities often outpace actual deployment in content moderation systems.