Navigating Artificial Intelligence: Balancing Innovation with Ethics

This article is published in collaboration with DBS.

On the morning of 22 May 2023, a fabricated image depicting an explosion at the Pentagon triggered a brief flash crash in the S&P 500 index, momentarily wiping out about $120 billion in value. The hoax, generated by artificial intelligence (AI), revealed a glaring vulnerability in the intertwining of real-time financial markets and AI-generated content. The ease with which a fabricated narrative could trigger a significant financial tremor is a stark reminder of the perils that come with the unregulated or unethical use of AI.

This incident, alongside others, highlighted the potential misuse of AI technologies for misinformation or market manipulation, spurring discussion on the ethical dimensions of AI. The McKinsey report on AI in 2023 and a statement from the US-based Center for AI Safety further underscored the risks and ethical dilemmas, pushing for clear governance and ethical guidelines.[1]

All this was a far cry from the mood some six months earlier, when the launch of generative artificial intelligence (Gen AI) language platform ChatGPT in November 2022 was described as a watershed moment.

Gen AI – the newest wave of AI technology – was widely lauded for its unprecedented ability to hold a conversation, as well as its newfound capacity to create novel text, images, or sound. Its use of unsupervised or semi-supervised learning techniques to decipher underlying patterns was seen as a revolutionary stride, promising real-time learning and decision-making capabilities that were believed to significantly broaden the horizon of AI applications.

The fervour surrounding Gen AI painted a picture of a technological utopia, where the barriers between human and machine intelligence seemed to blur, opening up a realm of endless possibilities and setting high hopes for a future seamlessly integrated with AI. This same fervour was mirrored in the stock market, with AI-focused companies and AI-related cryptocurrencies seeing substantial growth.

As the darker sides of the technology have become apparent, a more nuanced perspective has emerged. Discussions have delved into philosophical realms, exploring questions of autonomy, rights, and responsibility in a world where machines mimic human intelligence. The contemplation of an AI-driven future prompts reflections on the moral compass guiding AI development, the preservation of human dignity, and the potential reshaping of social constructs.

Tech titans, too, are recalibrating their ethical frameworks and approaches in response to the advent of Gen AI. The dynamic nature of Gen AI has spurred these companies to augment their ethical oversight, establish dedicated centres of excellence, and foster collaborations to ensure responsible AI deployment. For instance, IBM Consulting recently announced a Center of Excellence for Gen AI, engaging over 1,000 consultants with specialised generative AI expertise to transform core business processes across various sectors, aligning them with ethical guidelines and best practices.[2] Similarly, Accenture has established a Gen AI and Large Language Model (LLM) centre of excellence, emphasising the necessity of responsible AI guidelines, particularly concerning data privacy and ethical frameworks, ensuring that AI technologies are deployed responsibly across diverse sectors.[3]

Striking the Equilibrium: Leveraging AI in Business

These grand declarations, no matter how well-intentioned, are only as good as the commitment to enact them and the way businesses actually use the technology in practice.

Artificial intelligence itself is not a new concept. It has been around since the 1950s, and the technology is deeply embedded in many businesses. Industries such as finance have long used predictive AI – which applies supervised learning to vast historical datasets to produce precise predictions – for data categorisation, forecasting, credit scoring, and fraud detection. Interactive chatbots have revolutionised customer engagement.
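The supervised-learning idea behind credit scoring can be illustrated with a minimal sketch: fit a model to labelled historical outcomes, then score new applicants. The features, toy data, and pure-Python logistic regression below are purely illustrative assumptions, not any bank's actual scoring system.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit a logistic-regression scorecard with plain stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))          # predicted default probability
            err = p - yi                            # gradient of the log-loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def default_probability(w, b, x):
    """Score a new applicant against the fitted model."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy historical records: [debt-to-income ratio, delinquency score], label 1 = defaulted
X = [[0.1, 0.0], [0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.9, 0.7], [0.7, 0.8]]
y = [0, 0, 0, 1, 1, 1]

w, b = train_logistic(X, y)
p_low = default_probability(w, b, [0.15, 0.05])    # low-risk profile
p_high = default_probability(w, b, [0.85, 0.80])   # high-risk profile
print(f"low-risk applicant: {p_low:.2f}, high-risk applicant: {p_high:.2f}")
```

Production systems use far richer features and models, but the pattern is the same: the quality and representativeness of the historical data determine the fairness of the scores, which is precisely where the ethical concerns discussed later arise.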

Beyond finance, the healthcare sector is employing AI to enhance patient care, streamline operations, and propel research forward. Ethical guidelines here are paramount to ensure the privacy of patient data, the fairness of AI algorithms, and transparency in AI-driven decisions, especially in predictive applications regarding patient disease progression and personalised treatment plans.[4]

The legal sector too sees AI aiding in legal research and predictions, although its use in courtrooms presents an ethical grey area. Transparency, fairness, and the ability to challenge AI-driven decisions are essential to upholding justice and the rule of law.

Cloud-based software firm Salesforce, which has been exploring and deploying AI since 2014, published its new AI Acceptable Use Policy (AI AUP) in August 2023 to align with industry standards and its partners, and to protect its customers. “It’s not enough to deliver the technological capabilities of generative AI, we must prioritise responsible innovation to help guide how this transformative technology can and should be used,” chief ethical and humane use officer Paula Goldman said.[5]

The nuanced discussion around ethical AI spans sectors, emphasising the critical balance between leveraging AI's vast potential benefits and addressing the ethical dilemmas it presents. This discourse is pivotal as AI becomes increasingly integrated into daily operations, and it calls for AI deployment that is transparent, fair, and beneficial to all stakeholders involved.

AI in Action: DBS Bank's Strategic Approach

DBS Bank, for instance, now taps AI to send out 45 million hyper-personalised nudges monthly to over five million customers across the region. The bank also uses AI/ML and data analytics to pre-emptively alert small and medium enterprise (SME) customers of credit risks as well as pre-identify SME customers that need financing. 

In fact, in 2022 the bank successfully identified over 95% of non-performing SME loans at least three months before the businesses experienced credit stress, and over 80% of the at-risk borrowers identified went on to avert that risk.

The benefits of the technology are well documented, and AI is set to be even more entrenched in businesses. The financial realm, tantalised by Gen AI’s promise, has welcomed it with open arms. McKinsey projects that harnessing Gen AI fully could pump an additional $200 billion to $340 billion into the industry annually.

The key now is to mitigate the risks that accompany deeper use of AI technology.

Predictive AI, for instance, relies heavily on past data, restricting agility and adaptability in the face of swiftly changing market conditions and emerging consumer behaviours. This raises ethical concerns, particularly around fairness and inclusivity, as models trained on historical data may not adequately account for evolving societal values and expectations. With Gen AI, ethical concerns include algorithmic bias, the risk of misuse, and heightened data privacy issues.

Over-automation also poses a risk, potentially leading to job displacement and reduced human oversight in critical decision-making processes. Given the relatively new and less vetted status of Gen AI, there is a need for upskilling and reskilling, rigorous testing, ethical oversight, and ongoing monitoring.

By focusing on responsible data use, continuous upskilling, robust technology platforms, and ethical frameworks, businesses can make AI not just a pervasive but also a responsible part of their operations.

"Sustainable AI requires initiatives across technology, process and enterprise culture... it is important for enterprises to strive for safe, secure and sustainable AI solutions," Mahesh Kshirsagar, CTO, Analytics & Insights at Tata Consultancy Services, said.[6]

In practical terms, this holds even greater importance for businesses operating in sectors that directly impact and serve the broader community. Take the healthcare industry, for example. It’s crucial that AI is used ethically to protect patient privacy, maintain transparency, and prevent bias in decision-making. In the education sector, it is vital to address ethical issues related to data privacy, student profiling, and the potential for exacerbating educational inequalities. In finance, ethical considerations are necessary to avoid unfair lending practices, protect customer data, and minimise the potential for financial manipulation or market instability.

Financial Frontier: Ethical Governance in AI Adoption

DBS Bank anchors its AI adoption in three principles: data safety, responsible utilisation, and stringent privacy.

To that end, the bank has engineered two in-house solutions: ADA (Advancing DBS with AI) for unified data governance, and ALAN, an award-winning AI protocol for rapid, scalable AI deployment.

“Taken together, ADA and ALAN allow our teams to build and deploy AI models rapidly, improving our operations and decision-making, to provide a greater level of hyper-personalisation in our services to customers,” said Jimmy Ng, Group Head of Operations at DBS.

ADA centralises data, ensuring governance and quality, while ALAN accelerates AI model creation and deployment. For example, ALAN can be used to find past use cases on a certain scenario, such as attrition, and how it was solved. The platform can then show what data types and AI/ML techniques were used in those cases, as well as the final AI model and its accuracy. ALAN allows the bank's data scientists to quickly deploy reusable assets such as natural language processing, instead of writing the code from scratch.
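ALAN is an internal DBS platform and its interfaces are not public, so the sketch below is a purely hypothetical toy: it only illustrates the kind of "find past use cases for a scenario, then inspect their data types, techniques, and accuracy" lookup described above. Every name and field here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    scenario: str        # e.g. "customer attrition"
    data_types: list     # data categories the past solution drew on
    techniques: list     # AI/ML techniques applied
    model: str           # identifier of the final model
    accuracy: float      # reported accuracy of that model

class AssetRegistry:
    """Toy registry of reusable, searchable past use cases."""
    def __init__(self):
        self._cases = []

    def register(self, case):
        self._cases.append(case)

    def find(self, keyword):
        """Return past use cases whose scenario matches a keyword."""
        return [c for c in self._cases if keyword.lower() in c.scenario.lower()]

registry = AssetRegistry()
registry.register(UseCase("customer attrition", ["transactions", "demographics"],
                          ["gradient boosting"], "attrition_v2", 0.87))
registry.register(UseCase("SME credit stress", ["cash flow", "loan history"],
                          ["logistic regression"], "credit_alert_v1", 0.91))

for case in registry.find("attrition"):
    print(case.model, case.techniques, case.accuracy)
```

The design point is discoverability: by cataloguing what worked before, a team can start from a proven recipe rather than writing everything from scratch.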

To guide responsible AI and data use, DBS has further instituted the PURE framework, which emphasises “Purposeful, Unsurprising, Respectful, and Explainable” principles. A cross-functional PURE committee deliberates on ambiguous cases, ensuring aligned, responsible AI utilisation.

Explained Sameer Gupta, DBS’s chief analytics officer, in an interview with The Edge Singapore, “Users need to understand the purpose of the data [they intend to use], ensure the use of that data is unsurprising, and communicate the use of that data to end customers and users respectfully. We’ve created a PURE committee of people across functions to deliberate on some of the cases which might not be clear.” 

Human Oversight: The Cornerstone of Ethical AI Deployment

As AI technologies become increasingly integrated into various sectors, the discussion around maintaining a balanced approach is gaining urgency. While the "human-in-the-loop" philosophy is often cited as a way to build trust and ensure ethical AI use, its implementation varies in effectiveness and scope across different organisations and sectors.

The idea of a people-centric approach suggests that human oversight can mitigate the ethical and practical risks associated with AI. In healthcare, for example, human doctors are considered essential for final diagnostic and treatment decisions, even when AI provides preliminary analyses. 

Similarly, in education, AI can offer personalised learning paths, but educators are deemed crucial for addressing the emotional and social needs of students. However, the effectiveness of this approach is contingent upon the quality of staff training and the level of human engagement in decision-making processes. 

Incidents in the financial world, such as the fake-news-driven algorithmic trading mishap in the United States described earlier, highlight the limitations of relying solely on AI technologies and underscore the need for effective human intervention.

Closing Reflections: Balancing AI Innovation and Ethical Responsibility

Dr Rumman Chowdhury, a prominent figure in applied algorithmic ethics, notes that ethics is "not just about improving technology" but also about addressing biases both in data and models and in an imperfect world, pointing to a broader societal context.[7] These insights reflect a growing industry-wide commitment to ensuring that the deployment of AI, especially Gen AI, aligns with ethical principles to prevent harm and promote equity.

As AI technologies continue to evolve, the challenges and opportunities they present will grow in complexity. The need for responsible AI deployment is an ongoing commitment that requires adaptability, vigilance, and a strong ethical foundation. Human oversight remains crucial in this equation, serving as a safeguard against ethical lapses and as a mediator for nuanced, contextual decision-making. 

By striking a balance between technological innovation and ethical responsibility, the financial sector can set a precedent for other industries, harnessing the full potential of AI while upholding the societal values that are integral to a functioning democracy.


[1] https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
[2] https://www.ibm.com/blog/ibm-consulting-unveils-center-of-excellence-for-generative-ai/
[3] https://aimagazine.com/articles/accentures-expansive-vision-for-generative-ai
[4] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8826344/
[5] https://www.salesforce.com/news/stories/ai-acceptable-use-policy/
[6] https://www.tcs.com/what-we-do/services/data-and-analytics/white-paper/sustainable-ethical-trustworthy-ai-solutions
[7] https://dataethicsrepository.iaa.ncsu.edu/2023/04/30/responsible-ai-data-science-and-ethics-with-dr-rumman-chowdhury