Challenges of Launching a Predictive AI Product in a Blue Ocean
Crafting a go-to-market strategy, onboarding early adopters, gathering user feedback, and driving product adoption during one of the gaming industry’s most difficult periods, marked by widespread layoffs.
Some products launch into crowded markets with familiar customer needs and well-worn positioning battles. Safe Voice launched into something entirely different: a market that was barely a market yet. There were no clear customer budgets, no established expectations, and only the earliest stirrings of competitive pressure.
It was blue ocean in the truest sense, and it demanded a completely different product and go-to-market playbook.
We were building a predictive AI solution for near real-time voice moderation in gaming, an industry where community safety had become an urgent, high-visibility problem through its pandemic-era growth, yet investment still lagged.
Worse, we suffered integration pains through an acquisition and ended up launching during one of the harshest macroeconomic cycles in gaming history: industry-wide layoffs, project cancellations, and R&D freezes everywhere you looked.
So what does it mean to shape a category while the floor is still shaking beneath your feet?
A Market That Didn’t Exist
The market for player safety, especially proactive moderation tools powered by AI, was still largely aspirational when we began, back in our OTO.ai days.
Studios acknowledged the problem but hadn’t yet operationalized solutions. Budgets were missing. Procurement categories were vague. We weren’t just creating a product; we were creating the urgency to act on the problem.
To gain traction, we needed to tailor our value proposition to multiple stakeholder levels:
For exec sponsors, we pitched long-term differentiation through community trust and player retention.
For ops and community teams, we offered automation of tedious manual review work.
For developers, we emphasized seamless integration with their stack through Unity’s ecosystem.
This meant positioning Safe Voice not just as a safety solution, but as a data layer.
A visibility tool. A retention enabler. Honestly, whatever would unlock the stakeholder standing in front of us.
Positioning a Mission-Driven AI Product
Unlike commoditized voice services, Safe Voice wasn’t a simple toxicity-flagging feature. It was a values-aligned investment in player experience. That came with benefits, and with friction.
We grounded our messaging in:
Player empathy: The cost of toxicity isn’t just moral… it’s measurable. Engagement drops, churn spikes, and communities fragment.
Operational efficiency: Moderation teams are overwhelmed. AI can help scale review workflows and reduce false positives.
Regulatory foresight: New laws (DSA, COPPA, etc.) were just beginning to force studios into compliance conversations. We helped them get ahead of it.
Still, we had to walk a careful line. Safety work can easily slide into overpromising outcomes or triggering privacy concerns.
So we rooted everything in consent-based data collection, transparency, and customization, letting studios own their policies while we powered the tech. The road to building this trust was a long one.
GTM in a Hostile Economic Environment
Let’s be real: the market didn’t want new products in 2023. Studios were shedding teams, not trialing AI pilots. "Wait until next quarter" was the default. We watched our early adopters fall through the cracks: layoffs, failed launches, or the scramble to survive another quarter.
To counter this:
We doubled down on executive alignment, framing moderation not as a cost but as reputational and operational risk reduction.
We led with clearer insights: publishing thought leadership, toxicity benchmarks, and customer playbooks.
We leveraged our ecosystem pull: positioning Safe Voice as a value-add on top of Vivox Comms, which already had widespread adoption.
Every move had to do double duty: sell the future, and reduce perceived switching or implementation costs in the present.
Strategic Architecture: AI as Infrastructure
From a systems perspective, we designed Safe Voice with three principles:
Trust by default: Our architecture prioritized privacy, consented data handling, and transparent inference reporting. No shortcuts, no ambiguity.
User-friendly platform capabilities: We needed an interface where voice insights could inform moderation decisions, and where configuration changes could be made without downtime by moderators and support teams, not developers.
Extensibility: Studios needed flexibility to define what "toxic" meant in their communities, so we built customization layers into thresholds, severity scoring, and feedback loops.
The architecture was the product strategy. These weren’t merely technical choices; they were our product promises.
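To make the extensibility principle concrete, here is a minimal sketch of a per-studio moderation policy with customizable thresholds, severity scoring, and a moderator feedback loop. All names, categories, and values are hypothetical illustrations of the pattern, not Safe Voice’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationPolicy:
    """Hypothetical per-studio policy: each studio defines what "toxic" means."""

    # Per-category score thresholds, tunable per community norms.
    thresholds: dict = field(default_factory=lambda: {
        "harassment": 0.85,   # strict in a family-friendly title
        "trash_talk": 0.99,   # largely tolerated in a competitive shooter
    })
    # Severity weights let studios rank flagged categories for triage.
    severity_weights: dict = field(default_factory=lambda: {
        "harassment": 3,
        "trash_talk": 1,
    })

    def evaluate(self, scores: dict) -> tuple:
        """Return (flagged, severity) for one utterance's category scores."""
        flagged = {c for c, s in scores.items()
                   if s >= self.thresholds.get(c, 1.0)}
        severity = sum(self.severity_weights.get(c, 1) for c in flagged)
        return bool(flagged), severity

    def apply_feedback(self, category: str, false_positive: bool,
                       step: float = 0.01) -> None:
        """Feedback loop: moderator verdicts nudge thresholds over time."""
        delta = step if false_positive else -step
        t = self.thresholds.get(category, 1.0) + delta
        self.thresholds[category] = min(1.0, max(0.0, t))
```

A studio running a competitive shooter might ship with these defaults and let its moderation team tighten or loosen each threshold as verdicts accumulate, which is the customization layer the third principle describes.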
Feedback Loops in an Ambiguous Space
We treated early adopters not just as customers, but as collaborators. Safe Voice came with a strong point of view, but every deployment taught us something new:
What qualifies as "harm" in one genre might be tolerated play in another.
Language, tone, and intent are incredibly difficult to classify across cultural contexts.
Moderators need more than AI scores: they need tools to explore, verify, and explain those scores.
We shipped with humans in the loop, gave partners visibility into our decision boundaries, and iterated fast based on moderator needs. Some of our most valuable features came not from roadmap specs, but from observing real-world user flows.
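The human-in-the-loop pattern above can be sketched as follows: every AI score travels with the evidence a moderator needs to verify and explain it, and the human verdict is recorded alongside the model’s. This is an illustrative sketch under assumed names, not Safe Voice internals.

```python
from dataclasses import dataclass

@dataclass
class ScoredUtterance:
    """One utterance's model output plus the context a moderator needs."""
    transcript: str   # what was said (consent-based capture)
    category: str     # e.g. "harassment"
    score: float      # model confidence, 0..1
    threshold: float  # the decision boundary that was applied

    def explanation(self) -> str:
        """Human-readable rationale a moderator can verify and cite."""
        verdict = "flagged" if self.score >= self.threshold else "passed"
        return (f"{verdict}: {self.category} score {self.score:.2f} "
                f"vs. threshold {self.threshold:.2f}")

def review(item: ScoredUtterance, moderator_agrees: bool) -> dict:
    """Record the human verdict next to the model's, for audits and
    model feedback, rather than letting the AI act alone."""
    return {
        "ai_flagged": item.score >= item.threshold,
        "human_agrees": moderator_agrees,
        "rationale": item.explanation(),
    }
```

Keeping the decision boundary visible in every record is what gives partners the transparency into model behavior described above.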
Repositioning for Platform Stickiness: The Data Flywheel Strategy
One of the most strategically valuable pivots we explored was repositioning Safe Voice not just as a moderation tool, but as a gateway into Unity’s broader data platform.
We built a proposal that reframed Safe Voice as the entry point to a multiplayer intelligence layer: tying behavioral insights from voice moderation into:
Matchmaking engines (e.g., pairing by behavioral style)
Player reputation systems (with insights from verified in-game behavior, not just reports)
Community dashboards (highlighting not only toxicity, but prosocial play patterns)
This repositioning would allow us to:
Create a stickier product by embedding Safe Voice insights into other Unity services
Differentiate from strong competitors that moved quickly in the market, but only offered moderation in isolation
Lay the foundation for a data flywheel, where more customer integrations led to better model performance, more product touchpoints, and higher switching costs
It turned a cost center narrative into a strategic growth story.
And while the divestment shifted Unity’s direction, this framing remains one of the clearest examples of how we used product strategy to create future platform leverage.
When Internal Friction Becomes Product Risk
One of the biggest, and perhaps most sobering, lessons from Safe Voice was how much internal company dynamics can limit even the strongest product vision.
Over the two-plus years of development and GTM efforts, we experienced frequent reorgs, shifting leadership priorities, and structural indecision.
Each change reset relationships, paused momentum, or introduced contradictory goals. As a result, it became nearly impossible to build long-term trust with customers: we couldn’t consistently promise what we were structurally unable to commit to ourselves.
That kind of internal turbulence has a compounding effect. It erodes external credibility. It forces PMs and product development teams into reactive mode. And eventually, it becomes a self-fulfilling prophecy: uncertainty inside breeds hesitancy outside. Strategic conviction erodes when there isn’t stability to support it.
A prime example: it took the arrival of new leadership to finally align Unity’s legal and data governance teams around a consistent, strategic interpretation of our role in the AI data pipeline, shifting from data controller to data processor. This misalignment had created months of friction, blocking feature releases and stalling key customer commitments. Once resolved, it unlocked smoother internal workflows and clarified what we could promise externally.
Bringing legal, privacy, and policy experts into the design phase, rather than post-review, proved critical. It softened barriers between teams and helped turn perceived blockers into shared ownership. That one change alone demonstrated how internal cohesion isn’t just good practice; it’s market access.
This isn’t just a cautionary tale, it’s a reminder to senior leaders that operational clarity and continuity are not nice-to-haves. They are prerequisites for shipping bold, category-defining products. Especially in new markets where trust is the product.
A Pivot, and What Remains
Eventually, Safe Voice was wound down as Unity shifted its strategy away from building internal safety tools and adopted a partnering strategy instead.
This wasn’t a failure of product, it was a realignment of priorities in a cost-constrained environment.
Still, the product work mattered. We proved customer need. We earned trust. We landed customers from indie to AAA who echoed that success. We built a team fluent in the ethics, infrastructure, and go-to-market realities of AI in safety tech.
And we all left with a framework for building sensitive, predictive AI products in unproven spaces.