The Human Cost of AI: Learning from FinTech's Biggest Mistakes
In this episode of The Curiosity Code podcast, host Alex Khomyakov sits down with Deniz Çelikkaya, a leading expert in AI risk assessment and technology law. As a partner at ARC Law and a key figure at Revizr, Deniz helps fintech startups navigate regulatory challenges, secure funding, and design scalable, compliant AI systems.

The discussion explores the complexities of AI governance, highlighting the risks of algorithmic bias, data privacy concerns, and the need for continuous regulatory adaptation. Deniz shares insights into the cultural and regulatory differences in AI adoption across the UK and Turkey, explaining how fintechs in both regions approach compliance and innovation. She provides strategic advice for fintech founders, emphasizing the importance of implementing ethics by design, avoiding over-reliance on AI, and balancing profitability with fairness. Deniz also discusses real-world AI failures—including risks in gamified trading platforms and automated lending biases—offering lessons on responsible AI deployment.

Looking ahead, Deniz predicts a future where AI governance extends beyond compliance to focus on long-term societal impact. She urges fintech leaders to proactively address ethical risks, invest in AI transparency, and integrate responsible innovation into their business models. This episode offers valuable insights into the evolving role of AI in financial services, equipping fintech founders, investors, and compliance professionals with the knowledge to navigate an increasingly AI-driven industry.

Alex: Hey everybody and welcome to another episode of The Curiosity Code podcast. Our guest today is Deniz Çelikkaya, a leading voice in AI risk assessment and technology law. As a partner at ARC law firm, Deniz helps fintech startups and high-growth companies secure funding and design scalable, compliant AI systems. She also works at Revizr, developing AI governance tools aligned with the EU AI Act. We’ll be talking a lot about that, along with her mentorship of startups navigating regulatory challenges. Deniz, welcome to the show!

Deniz: Hi Alex, how are you?

Alex: Yeah, doing great. Let’s just dive in. In your opinion, what are the main components of a robust AI governance system to ensure compliance and ethical AI development? And how can fintech companies mitigate risks, particularly regarding algorithmic bias and data privacy?

Deniz: Great question. AI governance in finance is really important, especially when it comes to bias and data privacy, because financial data is highly sensitive. And I’d like to start with an example to illustrate this. In some cases, financial data has been used in criminal justice systems to predict a person’s criminality. There have been studies where an algorithm labeled a person as high-risk simply because they had a lower credit score.

Alex: Mm-hmm.

Deniz: So you can understand how financial data affects many areas of our lives. Now that I have your attention, I’d say a strong AI governance system has three key components: transparency, accountability, and adaptability. Transparency ensures that AI decisions are explainable. In fintech, if an AI model denies a loan application, both the customer and the regulator should understand why. Accountability means having clear responsibility for AI decisions, including compliance teams, risk officers, and AI ethics committees. And adaptability means AI risks evolve over time, so governance must be dynamic, not static.

Deniz: To address bias, fintechs should regularly audit their models with diverse datasets and conduct fairness assessments. A well-known example is the Apple Card case, where a husband and wife applied for credit with the same shared assets and financial history, yet the husband was offered a significantly higher limit than the wife. This highlighted how AI bias can negatively impact consumers. And for data privacy, fintechs should ensure data minimization—only collecting what is necessary. Many fintechs are now opting for synthetic data to protect sensitive financial information.
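*(Editor's note: a minimal sketch of the kind of fairness assessment Deniz describes—comparing approval rates across groups and flagging large disparities. The data and the 80% "four-fifths rule" threshold are illustrative, not from any specific regulatory standard.)*

```python
# Compare model approval rates across two groups and flag potential bias.
# Decisions are hypothetical model outputs: 1 = approved, 0 = denied.

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_a, decisions_b):
    """Ratio of the lower group's approval rate to the higher one's."""
    rate_a = approval_rate(decisions_a)
    rate_b = approval_rate(decisions_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common heuristic threshold; tune per jurisdiction
    print("potential bias: audit the model and its training data")
```

In a real audit this check would run continuously on production decisions, broken down by every protected attribute the regulator cares about, rather than once on a toy sample.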

Alex: Yeah, that’s an interesting example, especially when you’re living with that person and wondering why they got a better deal. You work a lot in the UK and Turkish markets, right? Beyond regulatory differences, what cultural or social factors in these regions significantly impact AI ethics in fintech?

Deniz: Turkey and the UK are culturally very different, and that impacts how AI ethics are approached. Because AI takes on human-like decision-making, we hold it to ethical standards. But ethics are shaped by cultural perspectives, and regulation often struggles to keep up with AI innovation.

Deniz: In the UK, discussions around AI ethics tend to focus on data protection and fairness, often aligning with GDPR principles. UK fintechs emphasize compliance-driven AI governance. In Turkey, AI ethics are more influenced by trust in technology and regulatory flexibility. There is also a stronger emphasis on human oversight in AI decision-making.

Deniz: Interestingly, despite these differences, Turkish banks are more eager to comply with the EU AI Act than UK banks. Even though neither country is part of the EU, Turkish banks seem more motivated to align with European regulations to stay competitive in global markets. Meanwhile, UK banks are waiting for more clarity on the UK’s AI regulatory stance before making major compliance decisions.

Alex: That’s fascinating. So in the UK, there’s an absence of solid AI governance, and companies don’t want to move in a direction that hasn’t been clearly defined by regulators yet. Whereas in Turkey, banks are proactively seeking compliance to gain a competitive edge. That makes sense.

Deniz: Exactly.

Alex: Let’s switch gears. You advise startups on scalable compliance structures. What are the key differences in AI governance needs between early-stage fintech startups and more established companies? And how can startups build a solid foundation for responsible AI development?

Deniz: Great question. The biggest difference comes down to priorities. Startups usually don’t have the funds to invest in compliance early on, whereas scale-ups do. Startups often focus on innovation first, putting compliance on the back burner, while established companies prioritize regulatory compliance to avoid fines.

Deniz: That said, startups should implement ethics by design from the outset. They often lack the compliance infrastructure of larger firms, so they should use automated risk assessment tools to flag potential ethical issues early. This helps them comply with regulations sooner and avoid costly fines later.

Deniz: Established companies, on the other hand, have legacy systems and greater regulatory scrutiny. They focus on continuous auditing and compliance-driven governance structures. One of the biggest challenges for startups is that they’re caught in a cycle—they need funding to afford legal guidance, but without compliance, they may struggle to enter the market. That’s why it’s critical for startups to integrate compliance practices early.

Alex: Yeah, I can imagine that’s a tough balance. At ProductEra, we’ve built several AI-powered fintech solutions, and we often use off-the-shelf language models. What does "ethics by design" look like in practice? Have you seen successful cases of startups scaling with that mindset?

Deniz: It’s still a very new field, so I wouldn’t say there are many established success stories yet. But that doesn’t mean startups aren’t implementing ethics by design. One practical step is ensuring data provenance—knowing exactly where training data comes from. Documenting the auditing and fine-tuning processes also makes compliance much smoother down the line.
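*(Editor's note: a hedged sketch of what "documenting data provenance" might look like in practice—a small record kept alongside each training dataset. The field names are illustrative, not drawn from any specific compliance framework.)*

```python
# A simple provenance record for a training dataset, capturing where the
# data came from and what was done to it before training.
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class DatasetProvenance:
    name: str
    source: str                 # where the data originated
    collected_on: date
    license: str
    contains_pii: bool
    preprocessing: list = field(default_factory=list)  # audit trail of steps

# Hypothetical example record
record = DatasetProvenance(
    name="loan_applications_2024",
    source="internal CRM export",
    collected_on=date(2024, 3, 1),
    license="internal use only",
    contains_pii=True,
    preprocessing=["dropped direct identifiers", "bucketed income"],
)
print(asdict(record))
```

Even a lightweight record like this makes a later audit or fine-tuning review far easier, which is the point Deniz makes about compliance being smoother down the line.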

Alex: That makes sense. But documentation is usually the last thing on a founder’s mind—they just want to build and launch their product as quickly as possible.

Deniz: Exactly. That’s why I see an opportunity for new fintech solutions that help automate compliance processes. If startups had access to tools that made documentation and AI risk assessment seamless, they could focus on innovation without falling behind on compliance.

Alex: That’s an interesting opportunity—building tools that automate compliance so founders can focus on product innovation. But since we don’t have many success stories yet, what about mistakes? Have you seen cases where AI governance wasn’t handled well, and what were the consequences?

Deniz: Great question! I actually love talking about mistakes because they teach us so much. One of the biggest mistakes companies make is treating AI governance as a one-time compliance task rather than an ongoing process.

Deniz: Many people think regulatory fines are the biggest risk, but reputational damage is often worse. A good example is Robinhood, which used AI-powered, gamified trading recommendations. Unfortunately, it led to inexperienced investors making risky financial decisions without fully understanding them.

Deniz: One tragic case involved a 20-year-old trader who believed he owed $730,000 due to an error in Robinhood’s algorithm. He panicked, tried to contact the company but couldn’t, and ultimately took his own life. While there were likely other factors at play, better transparency and clearer warnings about how the AI functioned could have prevented the misunderstanding.

Deniz: Another case was the 2010 Flash Crash, where high-frequency trading algorithms interacted in unexpected ways, momentarily wiping out $1 trillion from the stock market. Even though it was temporary, the incident caused widespread panic and financial losses. These examples show how AI mistakes in fintech have real-world consequences.

Alex: Yeah, that’s intense. It makes you realize that AI is not just a tool—it directly impacts people’s lives and financial well-being. So, how do we balance innovation with responsibility?

Deniz: It comes down to making AI governance a continuous process, not just a checkbox for compliance. Companies should constantly test their AI models, conduct fairness audits, and clearly communicate to users when AI is making decisions that impact them.

Alex: That makes sense. So let’s flip it—what’s the most counterintuitive thing you’ve learned about AI risk in fintech that could save companies a fortune?

Deniz: I’d say the biggest misconception is that "more AI" means "better AI."

Alex: That’s interesting. Tell me more.

Deniz: Many fintech companies assume that adding AI to everything will automatically improve their products. But sometimes, simpler systems work better and carry less risk. AI should enhance decision-making, not replace existing business logic that already works well.

Deniz: Another overlooked factor is AI literacy within companies. If employees blindly trust AI outputs without critical thinking, they can make poor decisions. I call this the "technological leap of faith"—people assume AI is always right, but it’s not. Companies need to train their staff on how to use AI effectively and critically evaluate its outputs.

Alex: That’s a great point. I’ve noticed that too, especially in the startup space. There’s a pressure to include AI in pitch decks just to attract investors, similar to how crypto was a buzzword a few years ago.

Deniz: Exactly! AI is a great tool, but using it just for the sake of it can create more problems than it solves.

Alex: What’s one crucial question that fintech companies should be asking but often overlook?

Deniz: Companies should ask themselves: "Are we optimizing AI purely for profit, or are we also considering fairness and governance?"

Deniz: Many fintechs focus on maximizing loan approvals or minimizing fraud risks but don’t think about the social impact. For example, an AI-driven lending model that prioritizes low-risk borrowers based on historical data might exclude first-time borrowers with no credit history. Ethical AI governance should include fairness constraints to prevent financial exclusion.
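*(Editor's note: a minimal sketch of one fairness constraint along the lines Deniz mentions—instead of auto-denying applicants with no credit history, route them to an alternative assessment. The thresholds and fields are illustrative, not a real lending policy.)*

```python
# Lending decision with a fairness constraint for "thin-file" applicants.

def decide(applicant):
    """Return 'approve', 'deny', or 'manual_review'."""
    if applicant.get("credit_history_months", 0) == 0:
        # Fairness constraint: first-time borrowers are not auto-denied;
        # they are assessed on alternative signals (e.g. income stability).
        return "manual_review"
    # Illustrative cutoff for applicants with a credit history
    return "approve" if applicant["credit_score"] >= 650 else "deny"

print(decide({"credit_history_months": 0}))                        # manual_review
print(decide({"credit_history_months": 24, "credit_score": 700}))  # approve
print(decide({"credit_history_months": 24, "credit_score": 600}))  # deny
```

The design choice here is that the constraint lives in the decision logic itself, so excluding first-time borrowers is an explicit policy question rather than a silent artifact of historical training data.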

Alex: That’s a great insight. And what are some of the biggest myths surrounding AI compliance in fintech?

Deniz: One major myth is that compliance automatically equals ethical AI. That’s not true. Compliance is the bare minimum—it’s a legal requirement, not a guarantee that AI is being used responsibly.

Deniz: Ethical risks evolve faster than regulations. For example, early buy-now-pay-later fintechs complied with existing financial laws but still created debt traps for young users. That wasn’t illegal at the time, but it was ethically questionable. Good AI governance means anticipating risks before they become compliance issues.

Alex: That’s a great way to put it—compliance is the floor, not the ceiling. Now, I want to wrap up by talking about the book you’re working on. Can you tell us about it?

Deniz: Sure! I’m currently working on two projects—an article for a book called *My Data is Mine*, which is part of the European Consumer Empowerment Project and will be presented at the Venice Privacy Summit. The article explores whether synthetic data can mitigate bias in AI.

Deniz: I’m also writing my own book on AI risk in financial services. It covers three main themes:

Deniz: First, the hidden risks of AI in fintech, like cybersecurity vulnerabilities and over-automation, which can create financial instability.

Deniz: Second, building trustworthy AI—offering practical frameworks for fintechs to ensure fairness, accountability, and transparency.

Deniz: And third, AI governance beyond compliance—why AI ethics should go beyond laws and focus on long-term societal impact.

Alex: That sounds fantastic! For our listeners who want to learn more about AI compliance and ethics, we’ll post links as soon as your book is available. Deniz, thank you so much for joining us today.

Deniz: Thank you, Alex! It’s been a pleasure.

Alex: And to our listeners, thanks for tuning in! Don’t forget to subscribe, share, and stay curious. See you next time!