Large language models (LLMs) are increasingly used to analyse information environments, but their effectiveness varies significantly across languages, particularly between high-resource languages such as English and less-resourced ones such as the Baltic languages. Earlier NATO StratCom COE studies showed that, while basic NLP capabilities exist, more complex analytical tasks remain underdeveloped, and that LLMs perform less accurately and coherently in languages such as Latvian. Building on those findings, the current report evaluates LLM performance in stance detection and sentiment analysis across English, Lithuanian, and Russian, focusing on politically sensitive topics, and tests whether techniques such as fine-tuning and retrieval-augmented generation (RAG) can improve results. The findings confirm persistent performance gaps, with lower accuracy in less-resourced languages, but also demonstrate that targeted model adaptations can substantially improve outcomes, in some cases allowing smaller fine-tuned models to outperform larger systems.
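To make the evaluated task concrete, the sketch below shows one common way to probe multilingual stance detection with an off-the-shelf model: zero-shot classification over a fixed label set. This is a minimal illustration only, not the report's evaluation setup; the model checkpoint (`joeddav/xlm-roberta-large-xnli`), the label set, and the example sentences are all assumptions chosen for demonstration.

```python
from transformers import pipeline

# Minimal zero-shot stance-detection sketch using a multilingual NLI model.
# The checkpoint and labels are illustrative assumptions, not the models or
# annotation scheme evaluated in this report.
classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",
)

# Hypothetical example sentences on a politically sensitive topic,
# in English and Lithuanian.
texts = {
    "en": "Sanctions are a necessary response to aggression.",
    "lt": "Sankcijos yra būtinas atsakas į agresiją.",
}

labels = ["supportive", "opposed", "neutral"]

for lang, text in texts.items():
    result = classifier(
        text,
        candidate_labels=labels,
        hypothesis_template="The author's stance on sanctions is {}.",
    )
    # The pipeline returns labels sorted by score; take the top prediction.
    print(lang, result["labels"][0], round(result["scores"][0], 3))
```

Comparing the top label and its confidence across the English and Lithuanian inputs gives a rough sense of the cross-lingual performance gap the report quantifies; fine-tuning or RAG would then be applied on top of such a baseline.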