This Virtual Manipulation report explores the impact of generative AI on social media analysis. Large Language Models (LLMs), such as the powerful GPT-4, can create highly convincing content that appears legitimate and unique. This makes it nearly impossible to distinguish between real and fake accounts. But, defenders can employ the same tools to more effectively monitor social media spaces. Careless implementations by adversaries introduce weaknesses that can result in accounts inadvertently disclosing that they are bots. As LLMs rely on prompts to shape their output, targeted psychological operations (‘psyops’) can provoke chatbots to reveal their true identities. The fight against manipulation is entering a new phase, but it remains unclear whether, in the long run, defenders or attackers will derive greater benefit from AI systems.