Prompting with Purpose: Localizing Gen Z Content with AI and Social Listening
About the project

You can view my code here: https://github.com/cchristinezhou/ai-subtitle-translator.git
You can check out the Streamlit app here: https://ai-subtitle-translator.streamlit.app
💬 The Problem: When Great Content Gets Lost in Translation
I’m a vlogger on YouTube, and over time, I’ve noticed something surprising — my content, especially my lifestyle vlogs and casual storytelling, started gaining traction on Chinese social platforms like 小红书 (RED). But with that came a very real challenge.
Most of my videos are in English. And when uploading to Chinese platforms, simply throwing in raw English audio doesn’t cut it. Creators who succeed there usually include bilingual subtitles — both in English (for clarity and language learners) and native Chinese translations that match the tone, rhythm, and slang Gen Z audiences use today.
The problem?
Manually translating my entire video was time-consuming and mentally exhausting. I moved to the U.S. when I was young, and while I’m fluent in conversational Mandarin, my written Chinese — especially current slang — is… let’s just say a little outdated. The few times I tried translating my own subtitles, they sounded robotic at best, and cringe at worst. Definitely not the “influencer energy” I was trying to give.
So I thought — I already have an OpenAI subscription. What if I could build a tool that uses ChatGPT to help me translate subtitles — not just word-for-word, but vibe-for-vibe? Something that could speak like a RED vlogger, not like a textbook.
🧠 The Approach: Translating Vibe-for-Vibe, Not Word-for-Word
Iteration 1: The Barebones Prompt
My first version was just a straightforward API call to ChatGPT:
“Translate this into Chinese using casual, colloquial Gen Z style.”

I figured adding basic guidance like “use online slang and expressions a young content creator would use” would be enough. And to be fair, it kind of worked. Kind of. The translations were okay, but the tone still felt generic, sometimes even tone-deaf. It lacked the personality and punchiness that RED-style captions are known for.
Below is what the first iteration looked like:

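If you'd rather see code than a screenshot, here's roughly what that first pass boils down to: a minimal sketch using the OpenAI Python SDK, where the model name and exact wording are illustrative rather than copied from my repo.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def translate_line(line: str) -> str:
    """Iteration 1: one bare instruction, no examples, no persona."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, not necessarily what I used
        messages=[{
            "role": "user",
            "content": (
                "Translate this into Chinese using casual, colloquial Gen Z style. "
                "Use online slang and expressions a young content creator would use.\n\n"
                + line
            ),
        }],
    )
    return response.choices[0].message.content.strip()

print(translate_line("I'm so excited for this trip!"))
```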
At that point, I was experimenting with multiple tone options (lifestyle, tech, formal, etc.), but even the Gen Z version didn’t hit. I realized I needed more than just adjectives — I needed examples and cultural specificity.
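For context, the tone switcher at that stage was little more than a lookup of style instructions. Something like this sketch, where the preset names come from above but the wording is purely illustrative:

```python
# Hypothetical tone presets: the preset names are the ones I experimented with,
# but the instruction text here is illustrative.
TONE_PRESETS = {
    "genz": "casual, colloquial Gen Z Chinese with current internet slang",
    "lifestyle": "warm, chatty lifestyle-vlog tone, like talking to close friends",
    "tech": "clear, punchy tech-review tone; keep terminology accurate",
    "formal": "polished written Chinese, suitable for formal captions",
}

def build_prompt(line: str, tone: str = "genz") -> str:
    return (
        f"Translate the following subtitle line into Chinese. "
        f"Style: {TONE_PRESETS[tone]}.\n\n{line}"
    )
```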
Iteration 2: Listening, Learning, and Leveling Up
After a few runs with my initial prompt, I still wasn’t happy. The translations felt… awkward. Technically correct, maybe — but definitely not something a RED Gen-Z creator would actually say.
So I did what every creator should do when something isn’t landing: I asked my audience.
🧠 I Took It to the People (RED)
I posted on RED asking for tips on how to better localize English vlogs into native Gen-Z Chinese — and the response was amazing. The community really showed up.

Two key takeaways from the feedback:
1. General AI translation methods don’t work well for multimedia. Subtitles require more flexibility than standard literary or academic translation.
2. The tone needs to feel like a real person talking. A subtle mismatch in vibe or slang makes the whole thing fall flat, especially for vlog content.
One user even joked that some Gen-Z Chinese translation models were “too Twitter-coded,” making them feel rheumatic (translation: cringe and out of touch).
💡 Real Talk with a Bilingual Friend
I also spoke to a close friend who’s native-level fluent in Chinese and does a lot of creative writing. Her advice?
“Don’t rely just on GPT. Use DeepSeek — it’s trained on a massive Chinese dataset and gets tone way better.”
So I did.
I started feeding examples into DeepSeek and found it far more aligned with the informal, hype-y, vlog-friendly tone I was going for. Around this time, I also began structuring my prompts into clear roles (e.g., “You’re an expert in Gen-Z localization”) and added reference slang like “姐妹们,” “咱就是说,” and “尊嘟绝绝子” to guide the tone.
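Spelled out, that role-plus-references structure looked roughly like the sketch below. The exact prompt lives in the repo; this is just the shape of it, with the slang anchors mentioned above.

```python
# Sketch of the role-based system prompt with slang "anchors" to steer the tone.
# The wording is illustrative; only the three slang examples are quoted from the post.
SYSTEM_PROMPT = """You are an expert in Gen-Z Chinese localization for 小红书 (RED) vlogs.
Rewrite each English subtitle line as something a young Chinese lifestyle creator
would actually say on camera: natural, punchy, and current.

Tone references (this register, not textbook Chinese):
- 姐妹们
- 咱就是说
- 尊嘟绝绝子

Rules:
- Return exactly one output line per input line.
- Prioritize vibe and rhythm over literal accuracy.
"""
```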
Iteration 3: Refining with DeepSeek, Hitting My Stride
Armed with advice from the community and a strong nudge from a trusted friend, I started experimenting seriously with DeepSeek — an AI model trained on a massive Chinese corpus.
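Practically, the switch was painless: DeepSeek exposes an OpenAI-compatible API, so it's mostly a matter of pointing the same client at a different base URL. Here's a sketch, with the base URL and model name taken from DeepSeek's docs at the time of writing (double-check them before copying):

```python
# DeepSeek's API is OpenAI-compatible, so the same SDK works with a different
# base URL and model name. Verify both against DeepSeek's current documentation.
import os
from openai import OpenAI

deepseek = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

def translate_with_deepseek(line: str, system_prompt: str) -> str:
    response = deepseek.chat.completions.create(
        model="deepseek-chat",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": line},
        ],
    )
    return response.choices[0].message.content.strip()
```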
Here is the final prompt for Gen-Z translation:
And wow.
Not only did it understand Gen-Z tone better, it also gave me clearer feedback loops when my prompts missed the mark. I paired this with my updated GPT prompt (now more example-driven and persona-specific), and the results were night and day.
I’ll include a side-by-side comparison of an earlier version vs. this most refined output — the difference in flow, tone, and relatability is stark.

What changed?
✍️ I got hyper-specific with examples in my prompts, literally showing GPT/DeepSeek how to localize phrases like “I’m so excited” into “这次旅行我真的狂喜” (see the sketch after this list).
🤝 I treated DeepSeek as a co-creator, not just a tool. I read through suggestions, kept what felt right, and iterated in layers.
💡 I stopped trying to be technically perfect, and focused instead on whether the output felt authentic — like something a Chinese content creator would actually post.
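Concretely, the few-shot trick from the first bullet looks something like this: example pairs get prepended as prior conversation turns so the model copies the register. The “I’m so excited” pair is the one quoted above; the helper itself is just a sketch.

```python
# Few-shot pairs are prepended as prior turns so the model imitates the register.
# The first pair is the example quoted above; the helper's name and structure
# are illustrative.
FEW_SHOT_PAIRS = [
    ("I'm so excited", "这次旅行我真的狂喜"),  # the Chinese adds trip context on purpose
]

def build_messages(line: str, system_prompt: str) -> list[dict]:
    messages = [{"role": "system", "content": system_prompt}]
    for english, chinese in FEW_SHOT_PAIRS:
        messages.append({"role": "user", "content": english})
        messages.append({"role": "assistant", "content": chinese})
    messages.append({"role": "user", "content": line})
    return messages
```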
This was the version I finally felt proud of — it sounded like me, just speaking to a Chinese audience in their native voice. It was casual, fast-paced, and full of those “姐妹们懂我吧” vibes.
🔨 Other Challenges
Like any real project, this one wasn’t without its technical headaches.
Originally, I wanted to speed things up by batching my subtitles — sending large chunks of text to GPT in one go rather than translating line by line. In theory, this would be more efficient. In practice? Total chaos.
One major issue was line mismatches — GPT would return translations with a different number of lines than what I sent in, which made it impossible to realign with the original subtitles. Sometimes it skipped lines, other times it merged them. Debugging all of that just… wasn’t worth it.
I ended up getting warnings like:
Mismatch in line count at batch index xyz. Falling back to line-by-line.
So yeah — I made the executive decision to stop obsessing over optimization and instead focus on what mattered most: translation quality and style. Once I switched back to line-by-line with good prompt engineering, everything became a lot smoother.
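For the curious, the guard is simple: count the lines that come back, and if the count doesn't match, log the warning and redo that batch one line at a time. A simplified sketch (function names are illustrative, not from the repo):

```python
# Simplified sketch of the batching guard: if the model returns a different
# number of lines than it was given, log the warning and fall back to
# translating that batch one line at a time.
import logging

logger = logging.getLogger(__name__)

def translate_batch_safe(lines, batch_index, translate_batch, translate_line):
    translated = translate_batch(lines)  # expected to return one line per input
    if len(translated) != len(lines):
        logger.warning(
            "Mismatch in line count at batch index %s. Falling back to line-by-line.",
            batch_index,
        )
        translated = [translate_line(line) for line in lines]
    return translated
```

Line-by-line costs more API calls and is slower, but it guarantees the translated lines stay aligned with the original subtitle timing, which is the thing that actually matters.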

💭 Reflection: It’s Not Perfect, But It’s Progress
Let’s be real: this isn’t perfect.
As someone from my community put it, no current AI can truly match the finesse of a seasoned native Chinese speaker — especially when it comes to slang, nuance, or the ever-evolving vocabulary of Gen Z. And they’re right.
But here’s the thing: I’m not a seasoned speaker either.
I can understand modern Chinese slang when I see it, sure — but I couldn’t spontaneously generate it on my own. My writing in Chinese often ends up sounding robotic or overly formal, and that’s been a frustrating creative block for a long time.
That’s why I’m honestly so grateful AI showed up when it did.
This project has helped me learn new words, understand modern phrasing, and connect with an audience I’ve always wanted to speak to more naturally — people from my home country, who now find my work more relatable and engaging.
More importantly, it’s made the entire localization workflow simpler. What used to take me hours and second-guessing can now be streamlined into minutes — with feedback loops, improvements, and prompt iterations all built into the process.
I truly believe this kind of AI-assisted localization has the power to unlock global reach for more creators — especially bilingual or diaspora creators like myself who want to bridge that gap but have felt stuck by language limitations.
If you’re curious, here’s the Streamlit app I built to make it all work.


