Legal, ethical risks in use of artificial intelligence in fintech industry

By Kelechukwu Uzoka


Fintech, or financial technology, is the term used to describe any technology that delivers financial services through software, such as online banking, mobile payment apps or even cryptocurrency.

Fintech is a broad category that encompasses many different technologies, but its primary objectives are to change the way consumers and businesses access their finances and to compete with traditional financial services.

Artificial intelligence, simply put, is the simulation of human intelligence processes by machines, especially computer systems.

Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.

HOW HAS ARTIFICIAL INTELLIGENCE BEEN ABLE TO AFFECT THE FINTECH INDUSTRY?

AI-powered analytics and decision-making tools help banks and other financial service providers gain a deeper understanding of customer behaviour, identify new growth opportunities, and make more informed decisions.

Personalization of information increases client trust and loyalty to the brand.

Fintech apps use AI to collect and evaluate data obtained from customers, offering customized, personal financial advice and pre-approved products.

This leads to improved communication and higher levels of consumer satisfaction, boosting profitability.

One of the most promising areas for the application of AI in banking and Fintech is the field of customer service.


AI-powered chatbots and virtual assistants with generative AI handle a wide range of customer service tasks, such as answering frequently asked questions (FAQs), providing account information, and even helping customers complete transactions.

AI-driven sentiment analysis is also being used to enhance customer service and address gaps in the client experience.

With self-learning capabilities, AI systems discern patterns from voice and speech characteristics and forecast a customer’s mood, providing valuable insights to guide the customer service representative toward appropriate solutions.
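To make this concrete, below is a minimal sketch of how a text-based version of such sentiment scoring might look, using scikit-learn on a handful of made-up customer messages. The messages, labels and routing idea are purely illustrative; production systems are trained on large labelled datasets and also model voice and speech characteristics.

```python
# Minimal sketch: scoring the sentiment of customer messages so that
# service agents can prioritise frustrated customers. Toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = frustrated, 0 = neutral/positive.
messages = [
    "My transfer failed again and nobody is responding",
    "I have been charged twice for the same transaction",
    "Thanks, the new app update works perfectly",
    "Please send me my account statement for March",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Score an incoming message; a high score can route it to a senior agent.
incoming = "Why has my card been blocked without any warning?"
frustration = model.predict_proba([incoming])[0][1]
print(f"Estimated frustration: {frustration:.2f}")
```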

Another area where AI makes a significant impact in Fintech is in the field of fraud detection.

AI-powered fraud detection systems analyse large amounts of data from multiple sources, such as customer transactions and account activity, to identify patterns of suspicious behaviour in the customer’s account.
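As an illustration, the sketch below uses an unsupervised anomaly detector (scikit-learn's IsolationForest) on a customer's recent transactions. The amounts and times are invented, and a real system would combine many more signals (merchant, device, location, transaction velocity) before flagging anything.

```python
# Minimal sketch: flagging unusual transactions against a customer's
# own history with an unsupervised anomaly detector. Toy numbers only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [amount_in_naira, hour_of_day] from the customer's history.
history = np.array([
    [2_000, 9], [5_500, 13], [3_200, 11], [4_100, 18],
    [2_800, 10], [6_000, 14], [3_700, 12], [5_000, 17],
])

detector = IsolationForest(contamination=0.1, random_state=42).fit(history)

# A very large withdrawal at 3 a.m. should stand out from the learned pattern.
new_transactions = np.array([[4_500, 15], [450_000, 3]])
print(detector.predict(new_transactions))  # -1 = suspicious, 1 = looks normal
```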

One familiar area is authentication using facial recognition, voice recognition and fingerprints to increase the security of customers' funds.

As a result of these extra security features, hackers find it more difficult to exploit a customer's account than when standard passwords alone are used.

Although usernames and passwords are still widely used, AI-powered security solutions may eventually replace them with algorithms that can analyse and match unique biometric markers.
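The matching step itself can be pictured very simply: the stored biometric template and the freshly captured sample are converted into numeric embeddings by a trained model and compared for similarity. The vectors and threshold below are made-up placeholders for that idea.

```python
# Simplified illustration of biometric matching: compare a stored
# embedding with a freshly captured one using cosine similarity.
# The vectors are invented; real ones come from a face/voice model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled_template = np.array([0.12, 0.87, 0.33, 0.54])  # stored at enrolment
login_capture = np.array([0.10, 0.85, 0.35, 0.50])      # captured at login

MATCH_THRESHOLD = 0.95  # tuned to balance false accepts against false rejects
if cosine_similarity(enrolled_template, login_capture) >= MATCH_THRESHOLD:
    print("Biometric match: allow login")
else:
    print("No match: fall back to another authentication factor")
```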

AI is all around us and involved in our everyday lives; merely using your bank's or financial institution's apps involves AI.

How does AI learn to do things on its own? Machine learning.

AI is trained over time on data, using algorithms that study patterns through supervised or unsupervised learning.


Most times the machine is fed labelled information (supervised learning), which it uses to understand the user's patterns over time; in other cases it finds structure in unlabelled data by itself (unsupervised learning).

So be aware that the way you use your bank apps, credit/debit cards or any other electronic channel in your financial life is being monitored, tracked and understood by AI through machine learning.
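The toy example below sketches both learning styles on invented spending data: a supervised classifier that is fed labels, and an unsupervised clustering step that finds groups on its own. It is illustrative only; real models use far richer features.

```python
# Minimal sketch of supervised vs unsupervised learning on toy spending
# data. Features: [amount_in_naira, hour_of_day]. Numbers are invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

transactions = np.array([
    [1_500, 8], [2_000, 9], [1_800, 8],        # small morning spends
    [45_000, 20], [60_000, 21], [52_000, 19],  # large evening purchases
])

# Supervised: the machine is fed labels (0 = routine, 1 = large purchase).
labels = [0, 0, 0, 1, 1, 1]
classifier = make_pipeline(StandardScaler(), LogisticRegression())
classifier.fit(transactions, labels)
print(classifier.predict([[55_000, 22]]))  # expected: [1]

# Unsupervised: no labels; the machine groups the data by itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(transactions)
print(clusters)  # two clusters discovered purely from the patterns
```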

WHAT ARE THE LEGAL OR ETHICAL RISKS OR ISSUES ASSOCIATED WITH THE USE OF AI IN FINTECH?

A. Data Privacy and Security: In today's digital age, AI relies heavily on vast amounts of sensitive financial and personal data.

However, this reliance raises concerns about data privacy and security. Fintech companies must prioritize robust security measures to protect user data from breaches and unauthorized access.

Failure to adequately protect data can result in significant legal and financial consequences, eroding customer trust and tarnishing a company’s reputation.

For example, in 2019, Capital One experienced a major data breach that exposed the personal and financial information of millions of customers.

The incident led to significant legal and regulatory consequences for the company, highlighting the importance of data privacy and security measures.

B. Bias and Discrimination: One of the most critical challenges in AI implementation is ensuring fairness and avoiding discrimination.

AI algorithms are trained on historical data, which may contain biases or reflect societal inequalities.

Without scrutiny, AI systems can perpetuate discriminatory practices, such as in loan underwriting or credit scoring, leading to unfair treatment of certain individuals or groups.

For instance, algorithms trained on historical data may exhibit racial, ethnic, or gender biases that result in discriminatory outcomes in lending decisions.

Fintech companies must proactively monitor and mitigate biases in their AI models through rigorous testing, ongoing monitoring, and data diversification to ensure fairness and inclusivity.


Biased data will lead to biased outcomes, and that is what the AI learns and acts on.

This can lead to the denial of financial assistance, e.g. access to loan or credit opportunities.
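One simple form such monitoring can take is an approval-rate comparison across groups (a demographic-parity check), sketched below with hypothetical decisions. Real bias audits use several metrics, much larger samples and legal guidance on which gaps matter.

```python
# Minimal sketch of a fairness check: compare loan approval rates across
# groups. Decisions and the 20% threshold are hypothetical, not a legal test.
from collections import defaultdict

decisions = [  # (group, loan_approved) pairs from a model's past decisions
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # e.g. {'group_a': 0.75, 'group_b': 0.25}

gap = max(rates.values()) - min(rates.values())
if gap > 0.20:  # illustrative threshold only
    print(f"Approval-rate gap of {gap:.0%} warrants a bias review")
```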

C. Transparency and Explainability: The opacity of AI systems poses challenges in understanding and explaining how decisions are made. Fintech companies must ensure transparency and provide clear explanations of their AI algorithms’ outcomes.

Lack of transparency can lead to mistrust among customers and regulators and create legal and ethical concerns.

For example, in the European Union, the General Data Protection Regulation (GDPR) grants individuals the right to receive meaningful information about the logic, significance, and consequences of automated decision-making processes that significantly affect them.

Fintech companies operating in the EU must comply with these transparency requirements to ensure legal and ethical AI practices.

The GDPR, which applies across EU countries, is being improved upon to make AI and its use more transparent.

Nigeria recently passed the Nigeria Data Protection Act 2023, which sets out requirements to ensure transparency in the way customers' data is handled by the organisations that process it.
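What a "meaningful explanation" can look like in practice is easiest to see with a simple linear scoring model, where each feature's contribution is just its weight multiplied by its value. The weights and applicant figures below are invented purely to illustrate the idea; complex models need dedicated interpretability tools to produce similar breakdowns.

```python
# Minimal sketch of an explainable credit decision: for a linear model,
# each feature's contribution = weight * value, which can be translated
# into a plain-language explanation. All figures are illustrative.
weights = {"monthly_income": 0.000008, "existing_debt": -0.000006, "missed_payments": -0.8}
intercept = -1.0
applicant = {"monthly_income": 250_000, "existing_debt": 180_000, "missed_payments": 2}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = intercept + sum(contributions.values())

print(f"Raw credit score: {score:.2f}")
for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    direction = "lowered" if value < 0 else "raised"
    print(f"- {feature} {direction} the score by {abs(value):.2f}")
```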

D. Regulatory Compliance: The deployment of AI in fintech operates within a complex landscape of regulations and legal requirements.

Fintech companies must navigate data protection laws, Anti-Money Laundering (AML) regulations, consumer protection regulations, and financial services regulations, among others.

Failure to comply with these regulations can result in significant penalties, fines, and reputational damage.

For example, financial institutions employing AI for AML compliance must ensure that their AI models meet regulatory requirements and are explainable and auditable.

Understanding and proactively addressing these regulatory compliance challenges is essential for fintech companies utilizing AI in their operations.

E. Market Manipulation: AI systems could be used to manipulate financial markets. For example, an AI system could be used to buy and sell stocks in a way that artificially inflates or deflates their prices.

Imagine a situation where an AI pushes the stock market in a bullish or bearish direction depending on the information it is fed or the patterns it learns.

F. Cybersecurity: AI systems are vulnerable to cyberattacks. If an AI system is hacked, it could be used to steal data or to launch cyberattacks on other systems.

G. Accountability: It is not always clear who is responsible for the actions of AI systems.

This could make it difficult to hold AI systems accountable for their actions, and it could also make it difficult for users to trust AI systems.

H. Job Displacement and Socioeconomic Impact: AI automation has the potential to disrupt traditional job roles in the financial industry.

While AI can enhance efficiency and streamline processes, it may also lead to job displacement for certain workers.

Fintech companies must consider the socioeconomic impact of AI implementation and explore ways to reskill or transition affected employees.

HOW CAN FINTECH COMPANIES MITIGATE THESE RISKS?

To mitigate legal and ethical risks associated with AI in fintech, collaboration among legal, technology, ethics, and compliance teams is crucial.


Fintech lawyers play a pivotal role in understanding and navigating the regulatory landscape.

Ongoing monitoring, compliance audits, and adaptation to evolving regulations are essential for responsible AI practices.

Fintech companies should prioritize the following actions:

1. Conduct thorough impact assessments: Assess the potential legal and ethical implications of AI systems on data privacy, bias, transparency, and compliance with regulations.

2. Implement robust data governance: Establish comprehensive data governance frameworks to ensure compliance with data protection regulations and safeguard customer data throughout its lifecycle.

3. Foster diversity and inclusivity in data and model development: Proactively address biases by ensuring diverse representation in training data and regularly assessing AI models for potential biases and discrimination.

4. Enhance transparency: Employ techniques such as interpretability methods and algorithmic auditing to promote transparency and provide explanations of AI-driven decisions to customers and regulators.

5. Engage with regulators and industry stakeholders: Collaborate with regulatory bodies, industry associations, and other stakeholders to actively participate in the development of AI-specific regulations and best practices.

CONCLUSION
The integration of artificial intelligence (AI) in the fintech sector holds immense promise for revolutionizing financial services.

However, it is crucial to recognize and address the legal and ethical risks associated with AI implementation.

Fintech lawyers play a pivotal role in navigating these risks, ensuring compliance with regulations, and fostering responsible AI practices that prioritize customer trust, fairness, and accountability.

By focusing on data privacy and security, fintech companies can safeguard sensitive financial and personal information, mitigating the risk of data breaches and unauthorized access. Proactively identifying and addressing biases in AI algorithms is essential to ensure fairness and avoid discriminatory outcomes.

Transparent and explainable AI systems not only promote regulatory compliance but also foster trust among customers and regulators by providing understandable explanations for AI-driven decisions.

Navigating the complex regulatory landscape is a constant challenge for fintech companies utilizing AI.

Fintech lawyers must stay informed about evolving regulations, proactively assess their AI systems’ compliance, and adapt to regulatory changes.

Compliance with data protection laws, anti-money laundering regulations, and consumer protection measures is vital to avoid penalties, fines, and reputational damage.

Ethical considerations are paramount in the responsible implementation of AI in fintech.

Fostering customer trust and accountability through transparent communication, privacy policies, and robust auditing mechanisms is crucial.


Additionally, addressing the potential socioeconomic impact of AI-driven job displacement by investing in reskilling programs and collaborating with stakeholders helps ensure a fair and inclusive transition to a digitalized workforce.

In the rapidly evolving landscape of AI in fintech, collaboration among legal, technology, ethics, and compliance teams is essential. Fintech lawyers should engage with regulators and industry stakeholders to actively shape AI-specific regulations and best practices.

By conducting impact assessments, implementing robust data governance frameworks, fostering diversity and inclusivity, enhancing transparency, and engaging in ongoing monitoring, fintech companies can mitigate legal and ethical risks associated with AI implementation.

As we embrace the potential of Artificial Intelligence in fintech, let us uphold the principles of fairness, transparency, and accountability.

By harnessing the power of AI responsibly, we can drive innovation, create efficient financial systems, and enhance customer experiences while ensuring legal compliance, protecting privacy, and addressing societal concerns.

Together, let us shape a future where AI and fintech coexist harmoniously, promoting trust, inclusivity, and sustainable progress.


*Kelechukwu Uzoka is a Nigerian tech lawyer. He is the Lead Partner of K&C Partners Law Firm and a Director at Wekrea8.com, a legal tech company in Nigeria.

kcuzoka@gmail.com or kuzoka@kcpartnerslaw.com or kelechukwu@wekrea8.com
