The DARK Side of AI in Finance: Can We Trust It?
24 Jan, 2025
In this episode of the Curiosity Code podcast, Alex interviews Daniel Flowe, head of digital identity at the London Stock Exchange Group. With leadership experience at Bandwidth and advisory roles at ArenaCX, Daniel is a thought leader in digital innovation, ethics, and AI systems in finance.

The conversation centers on the role of AI in financial systems, highlighting the challenge of integrating facial recognition and digital identity verification without introducing bias. Daniel explains how many existing AI models are trained on non-representative data sets, leading to significant errors, especially for women and people of color. He shares how institutions can mitigate this through better data sets, such as those produced by the IARPA Janus program, and through structured governance frameworks that weigh ethical outcomes alongside business goals.

Daniel also discusses practical strategies for financial institutions to implement explainable AI and to create mechanisms for continuous oversight, including diverse committees with technical expertise and independent accountability. He underscores the importance of aligning AI-driven decisions with fairness, accessibility, and transparency to build consumer trust. Looking ahead, he offers insights into the future of AI governance in finance and argues that technical stakeholders must play a central role in decision-making. The episode offers valuable takeaways for professionals seeking to balance innovation and responsibility in financial services.

Alex: Hello everybody, I’m Alex, CEO of Productera, and welcome back to another episode of *The Curiosity Code* podcast. We’re happy to have Daniel Flowe with us today. Daniel leads digital identity at the London Stock Exchange Group, where he tackles some of the biggest challenges in secure and ethical tech, and he brings leadership experience from companies like Bandwidth along with advisory roles at ArenaCX. Daniel, welcome to the show.

Daniel: Thanks, Alex. It’s great to be here.

Alex: Let’s dive in. As head of digital identity at the London Stock Exchange Group, how do you ensure that the integration of AI in financial systems prioritizes ethical considerations while fostering innovation?

Daniel: That’s a big question, Alex. My focus area is identity, and when we talk about identity, we often mean facial recognition. It’s one of the core technologies used in identity verification today, but unfortunately it’s also a prime area where bias issues arise. Poor training-data selection leads to biased outcomes, and it’s a problem many of us are working to address.

Alex: Are you referring to facial recognition in payment systems or other financial applications?

Daniel: Both, really. You see it in unlocking phones, authorizing payments, fraud detection, and KYC (Know Your Customer) processes. For example, when you sign up for a financial service or a crypto exchange, facial biometrics are often used to confirm your identity. This can create challenges when the underlying data introduces bias.

Alex: You mentioned bias in the data. What steps can institutions take to mitigate this issue?

Daniel: It starts with the training data. Most facial recognition systems rely on data sets that aren’t diverse. For instance, the Labeled Faces in the Wild (LFW) data set, maintained at the University of Massachusetts Amherst, is widely used, but it’s overwhelmingly composed of middle-aged white males. This skews the accuracy of the technology. To address this, institutions need to use data sets like those from the IARPA Janus program, distributed through NIST, which are intentionally diverse and representative across age, gender, and ethnicity.
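
To make the audit step concrete, here is a minimal sketch of checking a training set’s demographic composition before any model is trained. The record fields, values, and subject IDs are hypothetical placeholders, not drawn from LFW, Janus, or any real data set.

```python
# Minimal sketch of a training-data composition audit, assuming a
# hypothetical metadata list where each record carries self-reported
# demographic labels. Field names and values are illustrative.
from collections import Counter

records = [
    {"subject_id": "s001", "gender": "female", "skin_tone": "dark",  "age_band": "18-30"},
    {"subject_id": "s002", "gender": "male",   "skin_tone": "light", "age_band": "45-60"},
    # ... thousands more records in a real audit
]

def composition(records, field):
    """Share of records per value of a demographic field."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Flag any field whose distribution is badly skewed before training.
for field in ("gender", "skin_tone", "age_band"):
    print(field, composition(records, field))
```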

Alex: What impact does this bias have on the digital identity verification process?

Daniel: The practical impact is significant. Studies show that error rates for middle-aged Caucasian males can be below 1%, but for darker-skinned females, error rates can exceed 30%. This leads to frustrating user experiences and higher failure rates for certain demographics. Users often face additional manual verification steps, adding friction and exclusion, which sends a message that these systems aren’t built for them.
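
One way to surface the disparity Daniel describes is to disaggregate verification outcomes by demographic group rather than reporting a single aggregate accuracy. The sketch below computes a per-group false non-match rate (the share of genuine users the system wrongly rejects) from hypothetical match trials; the group labels and data are illustrative assumptions, not measurements.

```python
# Minimal sketch of disaggregating verification errors by demographic
# group. Each trial is a genuine match attempt; `accepted` records
# whether the system verified the user. All data here is hypothetical.
from collections import defaultdict

trials = [
    {"group": "light-skinned male",  "accepted": True},
    {"group": "dark-skinned female", "accepted": False},
    {"group": "dark-skinned female", "accepted": True},
    # ... many more genuine attempts per group
]

def false_non_match_rate(trials):
    """FNMR per group: share of genuine users the system rejected."""
    totals, rejects = defaultdict(int), defaultdict(int)
    for t in trials:
        totals[t["group"]] += 1
        if not t["accepted"]:
            rejects[t["group"]] += 1
    return {group: rejects[group] / totals[group] for group in totals}

# A large gap between groups is the kind of disparity Daniel describes.
print(false_non_match_rate(trials))
```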

Alex: Are there any evolving data sets or initiatives addressing this problem?

Daniel: Yes, there are options like the Janus data set, which is representative of global demographics. However, many peer-reviewed papers still cite the older, biased LFW data set. That perpetuates the problem, because new algorithms continue to be trained and benchmarked against outdated data. We need a shift in research and industry practice toward more diverse and representative data sets.

Alex: Let’s zoom out. You’ve worked in various organizations dealing with ethical AI. What frameworks do you recommend for addressing ethical dilemmas in AI?

Daniel: Explainability and accountability are key. We need to ensure AI decisions are explainable and auditable, especially in finance. For example, if someone is denied a loan, the institution should be able to explain why. Additionally, organizations must have oversight mechanisms and independent reviews to hold decision-makers accountable for any negative outcomes.
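
As a rough illustration of the loan-denial example, the sketch below turns a simple linear credit score into reason codes by returning the features that hurt the applicant most when a denial occurs. The feature names, weights, and threshold are hypothetical assumptions for illustration, not any real institution’s scoring model.

```python
# Minimal sketch of producing human-readable reason codes for a credit
# decision from a linear scoring model. Weights, features, and the
# threshold below are hypothetical placeholders.
WEIGHTS = {
    "years_of_credit_history": 0.8,
    "on_time_payment_ratio":   2.5,
    "debt_to_income_ratio":   -3.0,
    "recent_hard_inquiries":  -0.6,
}
APPROVAL_THRESHOLD = 1.0

def decide_with_reasons(applicant, top_n=2):
    """Score the applicant; if denied, return the features that hurt most."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    if score >= APPROVAL_THRESHOLD:
        return "approved", []
    # Reason codes: the features with the most negative contribution.
    reasons = sorted(contributions, key=contributions.get)[:top_n]
    return "denied", reasons

applicant = {
    "years_of_credit_history": 1.0,
    "on_time_payment_ratio":   0.7,
    "debt_to_income_ratio":    0.6,
    "recent_hard_inquiries":   4.0,
}
# Prints: ('denied', ['recent_hard_inquiries', 'debt_to_income_ratio'])
print(decide_with_reasons(applicant))
```

Because every denial comes with the specific factors that drove it, the same record can feed both the customer-facing explanation and an internal audit trail, which is the explainability-plus-accountability pairing Daniel describes.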

Alex: Are there real-life examples where this has been implemented successfully?

Daniel: Yes. For example, at our organization, we have a structured AI governance process involving internal and external independent parties. They review demographic impacts, algorithm outcomes, and long-term performance. This ongoing review ensures we maintain both ethical and business standards.

Alex: How do you ensure that non-technical stakeholders understand and contribute to AI governance?

Daniel: It’s challenging, but the solution is to have technically knowledgeable people at the decision-making table. Technical stakeholders should be primary contributors because AI governance is complex. If you rely on non-technical decision-makers, you risk misaligned outcomes. We’re seeing a shift where technical expertise is becoming central to governance processes.

Alex: Let’s wrap it up by discussing consumer trust. What can financial institutions do to build trust when deploying AI for critical decisions?

Daniel: Trust comes from explainability and accountability. Consumers need to know why decisions are made and that someone is responsible when things go wrong. They want smooth, accurate, and personalized experiences. AI can deliver that, but institutions must ensure transparency and fairness to gain consumer confidence.

Alex: That’s a great note to end on. Thank you, Daniel, for sharing your insights. It’s been a fascinating conversation.

Daniel: Thanks, Alex. It’s been a pleasure.

Alex: And to our listeners, don’t forget to subscribe and stay tuned for the next episode of *The Curiosity Code* podcast. See you next time!