Kevin

Bots for Peace

Welcome. This is the first blog post of the project, and we are happy that you have found your way here.


What is this blog for?

This is probably one of the first questions you will ask when visiting this site. Frankly, there is no clear roadmap (yet). I will use this blog as one of the ways to track progress on the project, communicate with the outside world, and highlight some of the findings we make. Now and then you may also find some insights from the content moderation market. But I expect things to change quite a bit as the project progresses.


What's the project about?

Events like January 6, 2021, or the Buffalo shooting in 2022 have highlighted the role that social media plays in radicalization and the incitement of violence. Intelligence agencies around the world, including MI5, the FBI, and the Australian Security Intelligence Organisation, have repeatedly highlighted the threat of extremism originating from within our online communities. And social media platforms have a problem: their current measures don't achieve the intended outcomes. There is evidence of a growing need for community moderation tools that help social media companies establish safe online environments.


Content moderation is a growing market, projected to double by 2027 to reach US$13,630 million.

However, social media companies face the problem that current measures are often ineffective in preventing radicalization and online hate. Mark Zuckerberg highlighted this problem in his 2018 "Blueprint for Content Governance and Enforcement", as did YouTube in 2021: regardless of where companies draw the line when regulating and removing harmful content, engagement with content at the fringe of removal will always be highest. Moreover, content moderation through removal and "shadow banning" has significant implications for freedom of speech. Current measures, including AI-based moderation, are also well known to discriminate against and marginalize the most susceptible user groups.


Our solution & mission

I am currently working on narrowing down the mission and vision statements as a means to set the direction for this project. We have come a long way, from initially identifying a possible niche and problem to tailoring this solution.


In a nutshell, here is what we have so far:


Our mission is to establish the onlineExtremism.org bot as a more inclusive, responsible, and deliberative solution for social media platforms to manage and regulate the online space.

This approach differentiates itself from current measures by embracing civic values without relying primarily on content removal or on shadow banning, the practice of blocking or partially blocking users or their content without them knowing that they have been banned. A key feature of this solution is that it prevents the stepwise radicalization from seemingly harmless mainstream content to increasingly extreme and problematic content at the fringe of platform policy.


How did we come up with this solution?

We did some research and talked to industry and governments. Based on an online experiment we ran with participants, we see evidence that social bots are capable of appealing to users' sense and sensibility, educating users on the implications of their online behavior, and offering users the means to self-regulate.
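To make this idea more concrete, here is a minimal sketch of what such a bot interaction could look like. Everything in it, from the names and thresholds to the wording of the reply, is a hypothetical illustration, not our actual implementation:

```python
# Hypothetical sketch of a deliberative moderation bot: for borderline
# posts it replies with an educational, self-regulation prompt instead
# of removing content or banning the author. All names, thresholds, and
# messages below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    risk_score: float  # 0.0 (harmless) .. 1.0 (clear violation), e.g. from a classifier

# Illustrative thresholds: below LOW_RISK the bot stays silent; above
# HIGH_RISK the post is left to regular policy enforcement. In between,
# the bot intervenes with a reply rather than a removal.
LOW_RISK = 0.4
HIGH_RISK = 0.9

def deliberative_reply(post: Post) -> str | None:
    """Return an educational prompt for borderline posts, else None."""
    if post.risk_score < LOW_RISK or post.risk_score >= HIGH_RISK:
        return None
    return (
        f"@{post.author}, your post comes close to the community guidelines "
        "on hostile content. Posts like this can contribute to escalation. "
        "Would you consider rephrasing it? You can also choose to mute this "
        "topic for a while."
    )

if __name__ == "__main__":
    post = Post(author="example_user", text="...", risk_score=0.6)
    reply = deliberative_reply(post)
    if reply:
        print(reply)  # posted as a visible reply; the original post stays up
```

The point of the sketch is the design choice: the bot educates and offers the author a choice, while the content itself remains visible, in contrast to removal or shadow banning.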


But we are not quite there yet, and research is ongoing.


What's next?

More research. In the near term, we will focus on conducting more research on the applicability of social bots and will look at their potential and the challenges of deploying them as a countermeasure to online radicalization. Some of this research will flow into the whitepaper that we are working on, which will ultimately be the foundation of our work.


More updates soon...
