EXPLAINING AI
In 2023, Microsoft had a problem: everyone was talking (and writing) about AI, but almost no one understood it. The conversation was polarized, swinging from hype to horror, and the visual language was stuck in tired clichés. We needed to cut through the noise and build trust at scale by creating something editorial, not promotional, and visually distinct.
Year
2023-2024
Role
Creative Director
Timeline
4 Months
Building Trust Through Education
In 2023, amid the first wave of the AI boom, I led the creation of Explaining AI, a social video series designed to demystify artificial intelligence for broad audiences while positioning Microsoft as a credible, human-centered guide. The challenge wasn't just explaining complex concepts. It was doing it in a way that felt editorial, not promotional, and visually distinct in a sea of generic AI content. People see stock footage of brains, hackers, and streams of 0s and 1s, and they tune out. Immediately.

The strategic decision was to resist chasing trends. Instead of reacting to every product announcement or AI headline, we built evergreen infrastructure: content that could live on social feeds but also be referenced by press and reused across onboarding, events, and media training. Each episode tackled a single concept, from generative AI to large language models to responsible AI, using plain language, clear metaphors, and relatable use cases. No jargon. No product over-indexing. Just credible, accessible storytelling.
A Visual Language That Mirrors How AI Works
The real breakthrough was the visual language. After speaking with Microsoft's engineering teams to understand how large language models actually work, I realized that AI responses function like a collage: pulling from disparate sources, layering context, and synthesizing something new. That became the creative spine of the series. I approached a motion graphics artist on our team whose collage-style work I'd admired and asked if he'd be interested in tackling the series. The result was a recognizable visual system: layered typography, illustration, stock photography, and animation that mirrored the way LLMs operate. It allowed us to visualize abstract ideas like "hallucinations" or "model training" without relying on sci-fi tropes or anthropomorphized AI. The content felt designed, not templated, and it stood out.

We also built a system, not just a set of assets. The editorial tone, visual language, and modular format allowed us to extend the series into Responsible AI Season 2 in 2024, pivoting the focus without rebuilding from scratch. The framework was flexible enough to evolve while staying instantly recognizable.


Performance and Long-Term Impact
The series launched in early 2023 and quickly proved its value. It drove nearly 1 million video views with strong completion rates, significantly exceeding benchmarks. But the impact went beyond the numbers. Explaining AI became a reusable asset, regularly used for media training, integrated into new employee onboarding, and shown at Microsoft Philanthropies events. More than a year after launch, it was still being referenced and repurposed. The work earned a Telly Award and a Webby nomination for Best Social Content Series (Education & Science), recognition that the series succeeded both creatively and culturally. I identified a gap in how Microsoft was communicating about AI, worked with my team to build a visual and tonal system to solve it, and created infrastructure that continued to deliver value long after launch. Explaining AI proved that you don't need to chase hype to build trust. You just need to respect your audience and give them information instead of a sales pitch.