Online disinformation has evolved from simple bot-driven spam into sophisticated AI-powered operations capable of mimicking human behaviour, adopting realistic personas, and engaging seamlessly in real conversations. Assessments of major large language models show that such systems can already be built with existing commercial tools. Vulnerability varies across models; modified open-source models that lack safety controls pose particularly significant misuse risks. Current regulations and safeguards are insufficient against these threats, which enable large-scale manipulation such as fabricating false consensus and undermining democratic processes. This escalation demands urgent, coordinated action across policy, platform governance, and societal resilience to counter a growing strategic risk.