Being an AI PM with an In-House AI Team
How a product manager can successfully contribute to building an in-house AI team: shaping data pipelines, compliant architecture and design, hiring processes, and vendor partnerships.
Sometimes I get asked: “You actually had to build your AI team from the ground up?” Usually followed by: “Wait, you owned the data pipeline too?”
Yes. And yes.
Here’s the truth: being an AI PM is not just about wrangling JIRA tickets or translating ML jargon into product speak. It’s about shaping the system - not just the software. And if you’re lucky (or brave) enough to be building an in-house AI team, congratulations: you’re about to enter the most chaotic, illuminating, ethically fraught systems design challenge of your career!
Let me tell you what it really takes.
Start With the “Boring” Stuff
Before any model gets trained, before you even think about whispering the word “LLM” and accidentally summoning a horde of vibecoders into your life, you need to define what problem you’re solving, just like with any product. Real-life, user-centric, ideally revenue-adjacent problems.
At Unity, our focus was online safety - how to detect, prevent, and mitigate harm in real-time voice chat. That shaped everything: from data labeling frameworks to UI mockups for moderators.
You don’t build “AI.” You build pipelines, infra, and architecture around a specific need. That is your goal.
Build your team as if you are going to Mars
Spoiler: you don’t need 12 PhDs and a Kaggle Grandmaster to build a good AI team. What you do need:
A machine learning engineer who loves debugging more than hallucinating their own vision.
A data engineer who masters pipelines like Mario and politely tells / teaches you when your data assumptions are wrong.
A research-minded data scientist who likes reading papers and also shipping metrics.
A product manager (hi!) who can hold the ethical bar high and bridge the gaps with communication and collaborative design skills.
Hiring was intentional. We hired humans, not headcount. It’s about having the right team setup: people who understood that building Safe Voice meant we’d spend 80% of our time getting the plumbing right before seeing live data or even training a model.
Data Pipelines: Ethics + Security Are Non-Negotiable Requirements
Want to be a “data-driven” team? Cool. Start with responsible collection.
For us, that meant GDPR-compliant voice data collection, thoughtful user consent flows, and hours spent defining what “toxic” even meant in a multilingual context.
The “pipeline” wasn’t just tech. It was process. Legal reviews. Labeling sessions. Listening to toxic content for hours. Metadata strategies. Feedback loops with moderators. And, crucially, escalation paths for edge cases that no one really wants to talk about but you have to design for anyway. Did I say multilingual context? … Right.
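To make the process side concrete, here is a minimal sketch of what one record in such a labeling pipeline might carry. The schema is hypothetical (mine for illustration, not Safe Voice’s), but it shows how consent, locale, and escalation live right next to the label itself.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VoiceLabelRecord:
    """One labeled voice-chat clip in the pipeline (illustrative schema only)."""
    clip_id: str
    locale: str                      # e.g. "pt-BR" -- the multilingual context, again
    consent_recorded: bool           # no documented consent, no clip in the training set
    label: str                       # e.g. "harassment" or "none", per the agreed taxonomy
    labeler_id: str                  # who labeled it, for inter-annotator agreement checks
    needs_escalation: bool = False   # edge cases get routed to a human review / legal path
    notes: Optional[str] = None      # free-text context from the labeler
```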
Also, you don’t have to be an AI expert to start this job, but believe me, you will become one if you want to get the job done. Just document everything. Future you will thank you when you’re discussing F1 scores and confusion matrices with the team for four hours to decide on production readiness.
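If you want a head start on that conversation, here is a toy example of those exact numbers, computed with scikit-learn on made-up moderation labels (a sketch, not our actual evaluation setup).

```python
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score

# Toy moderation example: 1 = toxic, 0 = not toxic
y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # what human reviewers said
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # what the model predicted

# Rows = actual class, columns = predicted class
print(confusion_matrix(y_true, y_pred))
# [[3 1]
#  [1 3]]

print(precision_score(y_true, y_pred))  # of flagged clips, how many were truly toxic -> 0.75
print(recall_score(y_true, y_pred))     # of truly toxic clips, how many we caught    -> 0.75
print(f1_score(y_true, y_pred))         # harmonic mean of precision and recall       -> 0.75
```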
Systems Architecture is Product Strategy Disguised in a Hoodie
We built Safe Voice to integrate with Unity’s Vivox voice system. But what that really meant was building infrastructure that could:
Process voice data under near real-time conditions
Operate globally, across regional compliance laws, and… did I mention multilingual context?
Expose configurable thresholds and policies to developers
Scale without breaking trust with the players who are the product
And yes, I was in the room defining how those services were scoped, what the default settings should be, and how moderator dashboards handled AI predictions. Your architecture is where your product strategy lives. A product decision is a system constraint you designed for… or didn’t.
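To give a flavor of what “configurable thresholds and policies” can look like from a developer’s side, here is a hypothetical policy object. The names and default values are mine for illustration, not Vivox’s or Safe Voice’s actual API.

```python
from dataclasses import dataclass

@dataclass
class ModerationPolicy:
    """Hypothetical per-title policy a game developer could tune."""
    flag_threshold: float = 0.70          # confidence above which a clip is surfaced to moderators
    auto_action_threshold: float = 0.95   # confidence above which an automated action (e.g. mute) may fire
    enforced_categories: tuple = ("harassment", "hate_speech", "threats")  # everything else is logged only
    require_human_review: bool = True     # conservative default: automated actions still hit a review queue

default_policy = ModerationPolicy()
stricter_policy = ModerationPolicy(flag_threshold=0.50, auto_action_threshold=0.85)
```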
As a PM, you’re not going to be writing pretty specs that get executed and that’s it. You’re continuously discussing and setting boundaries for how this artificial intelligence gets applied, and misapplied, in live environments that change often.
That’s why your team needs to survive a Mars expedition, with you.
Vendor Partnerships: Choose Wisely
We didn’t build everything. Some tools we licensed, some we evaluated and passed on, some we replaced midstream when the trade-offs didn’t align.
Vendor selection isn’t procurement. It’s strategic alignment. I looked for:
Proven privacy commitments and compliance with the highest standards for data handling
Transparent model labeling methodology
A team that wouldn’t ghost us when things went bad
A vendor that was price-conscious and supported our development stages … did I mention multiling… yeah that’s expensive!
And when vendors couldn’t meet our bar? We built parts ourselves.
Exhibit A: automating a labeling pipeline with LLMs.
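As an illustration of what that can look like (the classifier function below is a placeholder for whatever LLM provider you wire in; this is not the pipeline we shipped), the core loop is mostly routing: high-confidence suggestions become pre-labels that humans spot-check, and everything else goes straight to human labelers.

```python
def classify_with_llm(transcript: str) -> dict:
    """Placeholder for your LLM client of choice. Expected to return something
    like {"label": "harassment", "confidence": 0.92, "rationale": "..."}."""
    raise NotImplementedError("wire up your LLM provider here")

def prelabel(transcripts: list[str], review_threshold: float = 0.8):
    """Split LLM suggestions into pre-labels (still spot-checked) and a human-review queue."""
    accepted, needs_review = [], []
    for text in transcripts:
        suggestion = classify_with_llm(text)
        record = {"transcript": text, **suggestion}
        if suggestion["confidence"] >= review_threshold:
            accepted.append(record)       # becomes a pre-label, sampled for human spot checks
        else:
            needs_review.append(record)   # routed straight to human labelers
    return accepted, needs_review
```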
What No One Teaches You
If you have a dedicated in-house ML team, you won’t be shipping like a SaaS PM team with predictable, steady releases. It will be messy, and you will need to explain and defend that you are now riding a magical unicorn to the finish line, not a racehorse.
You will have higher costs than almost any other type of product, and you will keep asking for more budget while trying not to set off the fire alarms. Work on your leadership and organizational influence; you will need both.
You’ll fight harder for data labeling frameworks and budgets than for the model architectures themselves. Those labels need to serve your use case, with as little noise in the labeling process as possible, because that noise is what compromises them.
Your ethical frameworks will be challenged along the way, be it by design trade-offs, stakeholders, business pressure, COGS or pure scale. When users and data subjects trust the systems, they are trusting your ethical compass.
You’ll realize how blurry the line is between “automated moderation” and “automated power”. We want AI products that keep humans in the loop, not just products that maximize shareholder value. Don’t design for “set and forget” practices when it comes to features that impact real humans.
And you’ll never stop learning. If you do, you’re probably missing something.
If you are interested in how it all started: