Developing safe and human-centric AI
By prioritising responsible AI adoption, we aim to enhance efficiency while maintaining trust and accountability to our clients.

This article was originally published by The Digital Banker. Minor edits have been made as per our style guide.

In this exclusive interview with The Digital Banker, Dr. Mohammed (Mo) Rahim, Group Chief Data Officer at Standard Chartered, discusses how the bank strives to uphold principles of fairness, ethics, transparency and data privacy through its own responsible AI framework, works actively to prevent bias, and focuses its efforts on training its 70,000 employees to make the best use of AI.
The Digital Banker: How do you ensure that the AI technologies being used currently adhere to the ethical principles? How do you mitigate any potential biases or stereotypes?
Dr. Mo Rahim, Standard Chartered: Standard Chartered Bank has a responsible AI standard, and this standard encompasses all principles around fairness, ethics, transparency, model performance, cyber security risks, and data privacy risks. This principles-based standard governs how we deploy AI, and it has been in place since 2021.
Right now, the standard is operationalised through a group – Responsible AI Council – that I chair, and the council looks at these principles, provides an assessment of each model against these principles, and then decides whether to approve a model or not. So while it’s not a standard monthly meeting, the council stands up as and when needed to get things across the line and enable speed.
The way we prevent bias is, again, through our responsible AI standard: we look at the types of data that could lead to what we call unjust bias. Data points such as gender, ethnicity, race and political opinions are what we call protected variables, or protected data, which are more sensitive. With those, we take extra care and precaution.
Our first rule is we try not to use those as much as possible because it could be subject to unintended bias. So, we try and remove these sensitive data elements from models on most occasions to prevent unjust bias.
How do you address the ethical challenges that would arise from handling large volumes of data, and how do you protect this information from misuse or even falling into the wrong hands?
We have a data ethics framework as well, which is underpinned by things such as making sure that we put our customers first. If you look at our bank’s website, we have privacy notices in place across all of our countries, focusing on transparency and how we are handling customers’ data right now. But also, how do we apply that internally? We make sure that, again, if we’re using customers’ data, we are doing so in compliance with regulations such as General Data Protection Regulation (GDPR), not just for non-AI, but for AI as well.
We’ve embedded data privacy as part of our responsible AI framework to make sure that as we build AI use cases and deploy more AI into the organisation, we do so in line with data ethics and privacy requirements.
How do you strike a balance between compliance and innovation? Adding on to that, could you let me know some AI use cases within Standard Chartered that serve as good examples of innovation?
So, the two are not mutually exclusive, and they can coexist. You need to strike a balance, and you can definitely apply both. So how do we do this? Firstly, we make sure that we encourage innovation. Because we are a cross-border affluent bank, we need to service our customers, and in order to improve customer service, we need to focus on automation. Part of automation is using AI, right?
"Using AI is probably more beneficial than not using it. In the space of financial crime, using AI to detect financial crime is more accurate and more reliable than not using it."
– Dr Mohammed (Mo) Rahim, Group Chief Data Officer
In order to improve customer service, we do need to use AI and we do need to automate. We do need to take advantage of newer technologies. However, we need to do this in a way that's safe, so we protect customers' data. Like I mentioned earlier, we will avoid using sensitive data in AI models to prevent bias, and make sure we adhere to cyber security checks and controls. So the two are not mutually exclusive. They can coexist, because in order to service your clients better, you can still adopt these technologies, but in a safe and transparent way.
I’ll give you another example where actually using AI is probably more beneficial than not using it. In the space of financial crime, using AI to detect financial crime is more accurate and more reliable than not using it.
Another example is how we've rolled out AI at scale. Most recently, you may have seen on our website that we rolled out SC GPT, which is now available in 41 of our markets, across 70,000 colleagues. SC GPT allows our colleagues to use generative AI as part of their roles, where they can automate tasks, create new content and generate ideas. I believe this is a great example of how we're using AI and putting it directly into the hands of our employees.
Another really good example is how through our employee platform, we’ve rolled out AI to all of our colleagues so that they can write objectives and give feedback to one another, impacting colleagues straight away, which is powerful. A large part of our AI journey has been in machine learning and predictive analytics, and we are now at the stage where we’re focusing our next phase of our strategy on generative AI and how we maximise generative AI.
Additionally, we’re working very closely with regulators across the world, including the Monetary Authority of Singapore (MAS). Since the MAS Fairness, Ethics, Accountability and Transparency (FEAT) principles were published, we have worked with them on a few initiatives, including one called Veritas, which looks at how we take the FEAT principles and make them applicable in practice.
We’ve been working very closely with the DIFC on safe adoption and opportunities in AI in the UAE, and earlier in the year we signed a partnership, an MOU on how we will collaborate to test use cases.
So we’re working very closely with regulators, with the consultants and with my peer community, with whom I speak regularly around the work they’re doing on AI safety. We are also bringing in people who’ve got both industry and regulatory experience. So we’re trying to do a lot in this space.
What feedback have you received from employees with regards to SC GPT and the other features introduced for them?
Standard Chartered prides itself on being a skills-based organisation. One area we’re focusing on is AI literacy and how we can improve the employee experience. One of the new skills the industry needs, I would say, is learning prompting, which is a bit like 20 years ago when we had to learn how to do Google searches. How do I prompt the generative AI to ask the right question, and know the right way to get the best response? So one of the key skills we’re teaching people is this concept of prompt engineering – asking the right question to get the right response.
The other thing we’re focusing on is individual accountability. As you start using generative AI, the individual is accountable for the outcome. The human, not the machine, has the final say – so we’re putting the human first, and that’s important to us, because we’re not saying AI is replacing you. We’re saying AI is augmenting you. It’s improving the way you work.
Overall, the feedback from colleagues has been very positive. In the weeks since we’ve rolled out SC GPT, we’ve had 200,000 prompts, which is quite significant. We have also received feedback that they would like to see more features and faster response times on the prompts. But overall, they’re quite positive. We’re now starting to see how we can roll this out beyond the 41 markets, so we’re going through the approval processes on that, but overall, it's quite an impressive start.
Where do you see AI going in the next couple of years, especially in the financial services and banking landscape?
It’s a fantastic question, and I think about it from two vantage points. The first one is from an employee perspective. I think one key skill that employees will have to learn is how to use AI to be productive. As part of the skills of the future, or even now, the big change that people will see is being able to use AI. I always tell people this: in the last 30 years we’ve probably seen three waves of growth technologies. The first one was computers. Then we had mobile phones, and now AI. It’s the third big, significant, real change.
The second thing is that the way you interact with customers is going to change as well. Today, if you want to speak to your bank, you contact a call centre and you interact with a human being. But that could change in the future. Imagine sitting there, eating your breakfast, and you can either chat or talk to your device, which is like an AI agent. And the AI agent knows and understands you and what your risk profile is. It understands your portfolio and how it’s been doing. It understands what’s happening in the market and makes suggestions to you.
So the way we interact is going to change completely. Also, for us as a bank, we may not know whether we are interacting with a human being or we’re interacting with AI, potentially, because some of our clients could be using AI technologies as well. So the interaction is going to change a lot and in order for it to be seamless and successful, it goes back to the first point. Literacy is really going to have to grow amongst all of us. We’re going to have to re-skill, keep learning, and do this in a safe way.