Key Takeaways:
- A founder’s approach to privacy reflects their maturity, ethics, and long-term mindset.
- VCs increasingly include data privacy and ethics in their due diligence.
- Privacy is extremely difficult to build in retroactively. Start early.
- AI complicates data compliance. Tread carefully with personal information.
- Collect only the data you need. Excess data becomes a security risk.
Data privacy has become one of the most urgent challenges and clearest differentiators for today’s startups. As AI systems become more powerful and regulations tighten, weak data practices are no longer just a technical issue. They represent a growing business risk. Startups that neglect privacy may face funding setbacks, lose investor trust, or end up navigating costly compliance failures.
Smart investors are already raising the bar. Increasingly, they expect early-stage founders to have a thoughtful approach to user data, transparency, and risk management. And this shift is happening fast.
Katharine Tomko saw it coming long before most. From parole officer to Facebook privacy lead to VC partner, her career has been a masterclass in understanding risk and navigating it with intention. She recently joined me for a powerful episode of Startup Success to talk about why privacy is no longer optional for startups—especially in an AI-driven world—and how founders can build trust, reduce risk, and avoid data pitfalls from the start.
Here are the most important lessons from my conversation with Katharine, with insights every founder should keep in mind when building a data-driven business.
1. What a Founder’s Privacy Strategy Says to Investors
Privacy goes beyond compliance and risk management; it also offers a clear window into a founder’s leadership approach. At First Ascent Ventures, Katharine and her team consider a startup’s approach to privacy as part of the diligence process. They pay close attention to whether a founder has considered how user data is handled, what rights users have, and what systems are in place to protect sensitive information. “I think privacy is a great way to say, are we investing in something that is very risky down the road?” she said. “The other thing I like about it is that it gives a very good sense of where the founder or CEO’s mindset is with respect to how they think about ethics.”
“I think privacy is a great way to say, are we investing in something that is very risky down the road?”
I expect this scrutiny will soon become industry standard, and in the near future, evaluating a startup’s approach to data privacy will be just as routine as reviewing its revenue model or product roadmap.
A thoughtful privacy strategy, no matter how lightweight, signals maturity, ethics, and a long-term mindset. In contrast, founders who are vague or dismissive about data practices raise red flags. In an AI-driven world where trust is more valuable than ever, how you handle data can directly influence investor confidence.
2. The Danger of Treating Privacy as an Afterthought
“When you try to build privacy on afterward, it is really, really challenging,” Katharine told me. And she would know. At Facebook, she spearheaded privacy initiatives during major FTC audits—before frameworks like GDPR or CCPA even existed—which meant creating practices from scratch and trying to bolt them onto existing infrastructure. That experience gave her a clear view of how hard and risky it is to retrofit privacy into a system that wasn’t designed with it in mind.
“When you try to build privacy on afterward, it is really, really challenging.”
Too many startups deprioritize privacy until it becomes a problem. But by then, the architecture is baked in, investor trust is on the line, and regulators are watching.
Ignoring privacy today can lead to costly consequences tomorrow, including legal liabilities, failed funding rounds, or serious reputational damage. Founders can get ahead by auditing how they collect, store, and use data, and by putting even simple usage policies in place. Laying this foundation early is far easier and cheaper than trying to fix it under pressure later on.
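What does that early audit look like in practice? One lightweight option is to keep a machine-readable data inventory that records what you collect, why, and for how long, and to flag anything that can’t answer those questions. The sketch below is illustrative only; the field names and categories are assumptions, not a compliance standard.

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    """One category of data the product collects (hypothetical schema)."""
    name: str                    # e.g. "signup_email"
    purpose: str                 # why it is collected; empty means unknown
    retention_days: int | None   # None means "kept forever"
    contains_pii: bool

# Illustrative inventory -- entries are hypothetical.
INVENTORY = [
    DataAsset("signup_email", "account login and receipts", 365, True),
    DataAsset("page_view_events", "", None, False),   # no purpose, no limit
    DataAsset("support_chat_logs", "troubleshooting", 90, True),
]

def audit(inventory: list[DataAsset]) -> list[str]:
    """Flag assets with no documented purpose or unbounded retention."""
    findings = []
    for asset in inventory:
        if not asset.purpose:
            findings.append(f"{asset.name}: no documented purpose")
        if asset.retention_days is None:
            findings.append(f"{asset.name}: no retention limit")
        if asset.contains_pii and (asset.retention_days or 0) > 365:
            findings.append(f"{asset.name}: PII kept over a year, justify or shorten")
    return findings

for finding in audit(INVENTORY):
    print("AUDIT:", finding)
```

Even a spreadsheet version of this inventory forces the questions that matter: why do we have this data, and when does it go away?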
3. Privacy in the Age of AI: The Use Case Dilemma
With the rise of AI, privacy risk becomes even harder to define and control. Most regulations hinge on companies being transparent about how they plan to use data. But when that data is used to train or power AI models, those use cases often become unclear, unpredictable, or unknowable.
Katharine pointed out that once personal data is fed into an AI model, founders may no longer have a clear understanding of how it’s being processed, or how it might be used in the future. This ambiguity creates compliance challenges, especially around informed consent.
She was unequivocal in her advice: “Do not put personally identifiable information into your models. Full stop. Don’t do it.” She also cautioned that removing names or emails isn’t enough on its own: AI models can often re-identify individuals from behavioral signals and usage patterns alone. This gray area introduces serious privacy risks even when traditional identifiers are excluded.
“Do not put personally identifiable information into your models. Full stop. Don’t do it.”
Early-stage founders should work to define their AI strategy from the beginning and put guardrails in place around the data feeding their models. The earlier you do this, the easier it is to prevent risky practices from becoming baked into your infrastructure as you scale.
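One concrete guardrail, in the spirit of Katharine’s “full stop” rule, is to filter records before they ever reach a training or prompt pipeline. A minimal sketch follows; it assumes simple dict-shaped records, and its blocklist and email pattern are illustrative. Real pipelines need far more robust detection, and as Katharine notes, even scrubbed behavioral data can re-identify people, so the safest move is to keep personal data out entirely.

```python
import re

# Fields assumed to carry direct identifiers -- illustrative, not exhaustive.
PII_FIELDS = {"name", "email", "phone", "address", "ssn", "ip_address"}
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub_record(record: dict) -> dict:
    """Drop known PII fields and redact email-like strings elsewhere.

    A guardrail, not a guarantee: behavioral signals left behind can
    still re-identify users, which is why excluding personal data from
    model pipelines altogether is the safer default.
    """
    clean = {}
    for key, value in record.items():
        if key.lower() in PII_FIELDS:
            continue  # never let direct identifiers through
        if isinstance(value, str):
            value = EMAIL_PATTERN.sub("[REDACTED_EMAIL]", value)
        clean[key] = value
    return clean

record = {
    "email": "jane@example.com",
    "plan": "pro",
    "note": "contact jane@example.com about renewal",
}
print(scrub_record(record))
# {'plan': 'pro', 'note': 'contact [REDACTED_EMAIL] about renewal'}
```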
4. More Data Isn’t Always Better
The old saying that “data is the new oil” has led many startups to hoard as much of it as possible, assuming future monetization opportunities will emerge. But Katharine warned that this mindset can backfire. As hostile AI capabilities advance and breaches become more sophisticated, large caches of unused or unprotected data become liabilities waiting to be exploited.
She emphasized that collecting more data than necessary significantly heightens security and compliance risks. Instead, she encourages founders to embrace data minimization—collecting only what’s essential and deleting what isn’t. It’s a strategic move that protects your company and builds trust with users from day one.
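In code, minimization tends to show up as two habits: an allowlist at the point of collection, so you store only the fields you have a stated need for, and a scheduled deletion job so old data actually goes away. The sketch below uses hypothetical field names and retention windows.

```python
from datetime import datetime, timedelta, timezone

# Allowlist at collection time: anything not named here is never stored.
ALLOWED_SIGNUP_FIELDS = {"email", "company_name", "plan"}

def collect_signup(payload: dict) -> dict:
    """Keep only the fields we have a documented purpose for."""
    return {k: v for k, v in payload.items() if k in ALLOWED_SIGNUP_FIELDS}

# Retention at cleanup time: delete instead of archiving "just in case".
RETENTION = {"support_chat_logs": timedelta(days=90)}

def expired(created_at: datetime, table: str) -> bool:
    """True if a row has outlived its retention window and should be deleted."""
    return datetime.now(timezone.utc) - created_at > RETENTION[table]

payload = {"email": "a@b.co", "plan": "pro", "browser_fingerprint": "xyz"}
print(collect_signup(payload))  # the fingerprint is dropped, never stored
```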
5. A Founder’s Guiding Compass: Think Like a User
Katharine’s parting advice was simple and powerful: “Put yourself in the shoes of the end-user.”
When it comes to privacy, one of the most powerful tools founders have is empathy. Katharine stressed that it’s easy to get lost in the mechanics of data collection, but the real test is how your practices would feel to someone on the other end.
She encouraged founders to step back and ask themselves whether their product is reinforcing trust, or quietly eroding it. In an era where AI is pushing us closer to constant surveillance, even well-meaning startups can unintentionally contribute to that future.
In Katharine’s words, “If you’re working in the world of tech and you’re building a company, then you understand the direction we’re going, so do think about your children and your grandchildren.”
The key is staying aware of how user data is being used, what unintended consequences might emerge, and whether you’d be comfortable if the roles were reversed.
Bottom Line for Founders
You don’t need to become a privacy expert. But you do need to bake responsible data practices into your startup’s DNA from day one. Not just to stay compliant, but to build trust, earn investment, and future-proof your business.
To learn more about Katharine Tomko and First Ascent Ventures, visit firstascentventures.com.
Looking for guidance on building a scalable, privacy-conscious business? Burkland’s expert fractional CFOs and startup finance team can help you put the right systems in place from day one. Contact us to learn more.