Navigating Ethical and Compliance Challenges
Implementing privacy-first design, ensuring data security, and aligning with global regulations like GDPR and CCPA when using biometric data.
If building a product is hard, building an AI product that touches live (voice!) player data is like walking on a tightrope while wearing roller skates, with your legal counsel team throwing compliance rocks at you.
And if you’re in games or any consumer tech space where data comes from real-time interactions, you learn quickly: privacy isn’t just a line in the spec. It is the spec.
While developing Safe Voice, I found myself less in typical PM rituals and more in cross-functional meetings with legal, infosec, and policy, chasing executive sign-offs.
That’s where the real product decisions happened. Because when your AI system ingests biometric or behavioral signals, you’re not just designing features. You’re designing how trust gets built… or ruined.
Designing Privacy-First From the Ground Up
“Privacy-first” sounds great, like all those beautiful UX principles we want to embody in our methods. But in practice? It’s a never-ending cascade of small but strategic choices that start early and compound fast. For us, that looked like:
Minimizing collection scope: No creepy hoarding. We focused on metadata, feature extraction, and ephemeral processing.
User consent pathways: Designing consent flows that didn’t feel like pop quizzes or your life insurance contract.
Data isolation: Segmenting processing queues and anonymizing payloads, so even our own systems couldn’t play detective with identity. (A rough sketch of what that can look like is below.)
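To make that concrete, here is a minimal sketch of the "minimize, pseudonymize, expire" idea. The names (VoiceFeatures, pseudonymize, extract_features) and the placeholder scoring are hypothetical illustrations, not Safe Voice’s actual pipeline:

```python
import hashlib
import hmac
import time
from dataclasses import dataclass

# Hypothetical constants: in practice the key lives in a secrets store and rotates.
PSEUDONYM_KEY = b"rotate-me-regularly"
RETENTION_SECONDS = 15 * 60  # ephemeral: derived features expire after 15 minutes

@dataclass
class VoiceFeatures:
    """Only derived signals leave the capture layer; raw audio never does."""
    speaker_pseudonym: str   # keyed hash, not the platform user ID
    session_id: str
    toxicity_score: float
    tone_escalation: float
    expires_at: float = 0.0

def pseudonymize(user_id: str) -> str:
    """Keyed hash so downstream systems can group events without knowing who."""
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def extract_features(user_id: str, session_id: str, raw_audio: bytes) -> VoiceFeatures:
    # Placeholder score: a real model would produce these signals.
    toxicity = min((len(raw_audio) % 100) / 100.0, 1.0)
    features = VoiceFeatures(
        speaker_pseudonym=pseudonymize(user_id),
        session_id=session_id,
        toxicity_score=toxicity,
        tone_escalation=0.0,
        expires_at=time.time() + RETENTION_SECONDS,
    )
    # The raw waveform is never stored or forwarded past this function.
    return features
```

The point isn’t the code, it’s the default: collect the least, keep it the shortest, and make re-identification something your own systems can’t do casually.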
Working closely with our DPO and external counsel made this doable. It’s less about getting a stamp of approval and more about designing with legal from day one. Although in our case we had several day ones with legal.
Biometric and Behavioral Data: The Deep End
Voice data is personal. Not just "I only said that thing once" personal; biometric personal. Gender cues, age estimates, emotional tone, plus behavioral metadata like intent and tone escalation.
That meant:
Sorting out our legal role: Processor or controller? It depends! (Nothing like an existential crisis baked into your deployment model.)
Granular audit trails: Everything the model inferred had to be logged, explainable, and reviewable. Not just “because compliance,” but because customers. (A minimal sketch of that kind of record is below.)
Compliance defaults: GDPR, CCPA, DSA ... we basically became multilingual in data regulation. What is and isn’t allowed? Our "defaults" were defined like high-stakes geopolitical negotiations.
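For illustration, a granular audit trail can be as simple as one append-only record per inference, carrying what was inferred, by which model version, with what confidence, and why. The audit_inference helper and its fields below are hypothetical stand-ins, not the real schema:

```python
import json
import time
import uuid

def audit_inference(pseudonym: str, model_version: str, label: str,
                    confidence: float, evidence: list[str]) -> dict:
    """Build a reviewable record of what the model inferred and why."""
    record = {
        "audit_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "speaker_pseudonym": pseudonym,   # never the raw user ID
        "model_version": model_version,   # which model made the call
        "label": label,                   # what was inferred
        "confidence": confidence,         # how sure the model was
        "evidence": evidence,             # human-readable reasons for reviewers
        "reviewed_by_human": False,       # flipped when a moderator checks it
    }
    # Append-only log; in practice this goes to durable, access-controlled storage.
    with open("inference_audit.log", "a") as log:
        log.write(json.dumps(record) + "\n")
    return record
```

Every one of those fields exists because someone, a regulator, a moderator, or a player appealing a decision, will eventually ask for it.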
You can’t wing this stuff. You design for nuance. You build for audit. You delay launches because nuance takes time; that’s the burden of dealing with this type of data.
Legal and InfoSec Aren’t Stakeholders Anymore. They’re Co-Designers.
At Unity, I learned the hard way that looping legal in late (meaning following existing processes that don’t account for our type of product) is like asking for decaf after the server has already asked what kind of milk you want. After a few difficult attempts at keeping the same legal counsel for more than six months, we finally landed in a setup where we worked together from the beginning.
That meant:
We pressure-tested our designs before they hit sprint planning. (Legal read PRDs and TDDs and made suggestions.)
We built a shared vocabulary between lawyers, engineers, PMs, and designers. (This team’s documents deserved a Pulitzer, honestly.)
We de-escalated months-long ambiguity about data rights because we finally worked with legal as product partners, not gatekeepers, with them driving and getting sign-off for the changes we wanted, not the other way around.
Ethics Isn’t “Extra”. It’s the Difference.
Compliance is the floor. Ethics is what actually matters.
We regularly asked:
Can this system be misunderstood or misused in ways we didn’t intend?
Will the end-user (or moderator) understand what the model is saying, or just see a score and assume the worst?
What happens when we get it wrong, and who carries the weight of that mistake?
Some features were held back. Some were redesigned entirely.
Instead, we built for:
Explainability
Confidence intervals
Human-in-the-loop workflows
Because no model is neutral.
And pretending otherwise is how you lose trust at scale.
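In practice, that combination can be as simple as routing on the model’s own confidence and always carrying a plain-language explanation along with the score. The thresholds and the Inference/route names below are made up for illustration, not the thresholds we shipped:

```python
from dataclasses import dataclass

@dataclass
class Inference:
    label: str          # e.g. "harassment"
    confidence: float   # model's own confidence, 0..1
    explanation: str    # plain-language reason shown to moderators

# Hypothetical thresholds: tuned with policy and moderation teams, not by engineering alone.
AUTO_ACTION_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60

def route(inference: Inference) -> str:
    """Decide what happens with a prediction instead of treating every score as truth."""
    if inference.confidence >= AUTO_ACTION_THRESHOLD:
        return "auto_flag"      # still logged and appealable
    if inference.confidence >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"   # a moderator sees the label, confidence, and explanation
    return "no_action"          # low-confidence guesses never reach the player record

# A borderline call goes to a person, not straight to a penalty.
print(route(Inference("harassment", 0.72, "Repeated slurs detected after tone escalation")))
```

The design choice that matters is the middle band: the system is allowed to say "I’m not sure" and hand the decision to a human.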
TL;DR: If you're building AI and handling sensitive data: Privacy isn’t a checklist. Ethics isn’t a luxury. And legal isn’t the “no” department; your job is to make them your besties.