Among these platforms, X stands out as a prominent battleground. It’s a place where disinformation campaigns thrive, perpetuated by armies of AI-powered bots programmed to sway public opinion and manipulate narratives.
AI-powered bots are automated accounts designed to mimic human behaviour. Bots on social media, chat platforms and conversational AI systems are integral to modern life; many legitimate AI applications depend on them to run effectively.
But some bots are crafted with malicious intent. Shockingly, bots constitute a significant portion of X's user base. In 2017, researchers estimated that there were approximately 23 million social bots, accounting for 8.5% of the platform's total users. More than two-thirds of tweets originated from these automated accounts, amplifying the reach of disinformation and muddying the waters of public discourse.
How bots work
Social influence is now a commodity that can be acquired by purchasing bots. Companies sell fake followers to artificially boost the popularity of accounts. These followers are available at remarkably low prices, with many celebrities among the purchasers.
In the course of our research, for example, my colleagues and I detected a bot that had posted 100 tweets offering followers for sale.
Using AI methodologies and a theoretical approach called actor-network theory, my colleagues and I dissected how malicious social bots manipulate social media, influencing what people think and how they act with alarming efficacy. We can tell whether fake news was generated by a human or a bot with 79.7% accuracy. Understanding how both humans and AI disseminate disinformation is crucial to grasping the ways in which people exploit AI to spread it.
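The article does not describe the model behind that 79.7% figure, but the general idea can be illustrated as a supervised text classifier. The sketch below, in Python with scikit-learn, trains a TF-IDF and logistic regression pipeline on a handful of labelled posts. Everything here is an assumption for illustration: the example posts, the labels and the choice of features are invented, and a real study would use thousands of labelled examples and a carefully chosen model.

```python
# Minimal, hypothetical sketch of a bot-vs-human text classifier.
# NOT the authors' method: the dataset and features below are invented
# purely to illustrate how such a classifier can be set up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy labelled examples: 1 = bot-generated, 0 = human-written.
posts = [
    "BREAKING!!! Click here for the TRUTH they won't show you",
    "Get 10,000 followers today!!! Limited offer, follow and retweet now",
    "Shocking news about the election, share before it gets deleted!!!",
    "Amazing deal!!! Free followers, click the link in bio now now now",
    "Had a lovely walk by the river this morning, the weather was perfect.",
    "Just finished reading that novel my sister recommended. Loved it.",
    "Anyone know a good plumber in Sheffield? Ours cancelled again.",
    "Proud of my daughter's school play tonight. She nailed her lines!",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

X_train, X_test, y_train, y_test = train_test_split(
    posts, labels, test_size=0.25, random_state=42, stratify=labels
)

# Character n-grams are a common choice for short, noisy social media text.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

On a realistic corpus, the reported accuracy would come from evaluating such a pipeline on a large held-out test set, not the tiny split shown here.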
For the full article by Professor Nick Hajli, visit The Conversation.