Behind the Mask of Ethics in AI

· 8 min read

Every morning when I check the news, I see another story stoking fear about AI: how it will end the world, or some horrid example of how it is already being used for nefarious purposes. Yet that fear morphs into hope, fuelled by infectious optimism, when we log in to work in our various roles across technology.

Masks on the Wall (Source: Unsplash)

Sometimes it can feel like we need to switch masks to find the costume that fits our current AI audience, or that we have mixed feelings about these tools based on our experiences using them and the direct impact they have on our own lives. The reality is that we need pragmatism and adherence to best practices in ethics; only then can we ensure AI technology is used responsibly, securely, and to the benefit of this world.

Given it’s October, and Halloween is fast approaching, let’s consider some of the ghouls and goblins haunting the field of AI ethics, along with the state of best practices in the field.

Bias and Transparency

Transparency of training datasets is an important consideration for assessing the potential for bias in the results generated by a given machine learning model or algorithm. When using algorithms in our applications without the ability to scrutinise the training datasets, we open ourselves to building applications that contain the biases mirrored in our society.

Stories like this have existed for a long time. The book and podcast Invisible Women by Caroline Criado-Perez covers technologies and innovations throughout history whose design negatively impacts women, from crash test dummies modelled on men from the 1950s until 2011, to AI in the healthcare industry being less likely to diagnose medical conditions in women and other diverse populations. There is also the admission from Tom Cools at Devoxx UK that his attempt to automate the attendance register went wrong when racial bias in the machine learning algorithm he used meant it wouldn’t register one of his students. We also need to consider other forms of bias, such as political affiliation, as covered in this paper from Uwe Peters, or the skewing of results by misinformation.

To build applications and technologies, and automate away the complexities of life in this AI world, in a way that reflects our amazing and diverse society, we need visibility into how models are trained, as well as into their potential outputs. Model cards are a crucial first step towards solving this problem. Quite simply, model cards are documentation describing useful aspects of a model, including:

  1. Intended use cases
  2. Limitations
  3. Ethical considerations
  4. Datasets used in training
  5. Results of experiments to test the model

Not every model has a publicly available card, though prominent repositories are adopting the practice, including Hugging Face, which provides a publicly available template. Even where cards exist, many models lack full specifications or details of their training datasets, often to balance competition considerations. The situation is the same for dataset cards, for which a template is also available, where details of biases in datasets are frequently not transparent. For both, we need an agreed standard and a charter committing to provide these assets.
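As a concrete illustration, model cards on Hugging Face are markdown files with a structured metadata block, and (assuming the huggingface_hub library is installed and the repository is public) they can be inspected programmatically. The repository id below is purely illustrative; this is a minimal sketch, not a recommended workflow.

```python
# A minimal sketch of inspecting a model card programmatically, assuming the
# huggingface_hub library is installed and the repository is publicly available.
from huggingface_hub import ModelCard

# "bert-base-uncased" is only an illustrative repository id.
card = ModelCard.load("bert-base-uncased")

# The structured metadata block of the card (license, datasets, language, etc.).
print(card.data.to_dict())

# The free-text body, which is where intended use, limitations and ethical
# considerations are typically documented.
print(card.text[:500])
```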

Model Interference and Validation

Just like applications built without cutting-edge technologies, AI systems are susceptible to security attacks and manipulation. With the emergence of LLMs, OWASP has created an official Top 10 for Large Language Model Applications.

Many of the threats listed relate to tampering with data, prompts, or outputs, which in the wrong hands can lead to the spread of misinformation or malicious activity. Software libraries also exist that allow these models to be manipulated and that, in the event of model theft or duplication, could be used to produce malicious results that cause reputational damage to institutions.
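To make the kind of tampering in scope a little more concrete, here is a hedged sketch of guarding a model call against obvious prompt-injection strings and treating the output as untrusted data. The call_llm function and the regex patterns are hypothetical placeholders; this is not a complete defence, nor is it taken from the OWASP list itself.

```python
import re

# Hypothetical stand-in for whatever LLM client your application actually uses.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with your model client")

# Very rough patterns that often indicate prompt-injection attempts; a real
# deployment would use a dedicated filtering layer rather than a regex list.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any) previous instructions",
    r"reveal (the )?system prompt",
]

def guarded_completion(user_input: str) -> str:
    lowered = user_input.lower()
    if any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS):
        raise ValueError("Input rejected: possible prompt injection")

    answer = call_llm(user_input)

    # Treat model output as untrusted data: never pass it straight to a shell,
    # a SQL query, or an HTML page without escaping or validation.
    return answer.strip()
```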

Data Privacy

Knowledge of programming is not even required to manipulate these algorithms. Staying with the LLM example, researchers at IBM reported in August results from an experiment in which they were able to hypnotize several LLMs, tricking them into releasing confidential information and generating malicious code. In heavily regulated industries such as financial services, this can lead not only to significant reputational damage but also to sanctions and fines from regulators across the world.

Moving away from attacks, exposure to proprietary customer information can also be a concern. As covered in a recent FINOS Zenith Brain Trust meeting, the state of licensing for models, and also the code for these technologies, is not mature or well understood.

Even with well-understood licensing, we need to consider the usability challenges this introduces for developers using these technologies in applications, as well as for employees using the tools to automate their own work. Just as in our personal lives, when we click accept on license terms whose length rivals War and Peace without reading them, ignorance of the terms may lead to private information being leaked as training data for models and algorithms.

Many organizations are already providing company guidelines on the use of AI tools and rules on the exposure of private data, including lists of allowed tools. They are also looking to use on-prem technologies and self-hosted LLMs to prevent data leaks. Even in these cases, the license terms must be carefully reviewed to ensure compliance.
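One practical control along these lines, sketched below under the assumption that prompts leave your network via some external API, is to mask obvious identifiers before anything is sent. The regex patterns and placeholder tokens are illustrative only; real deployments would lean on dedicated data-loss-prevention or PII-detection tooling rather than hand-rolled rules.

```python
import re

# Illustrative patterns only; production systems typically rely on dedicated
# PII-detection services rather than a small regex table like this.
REDACTIONS = {
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL]",   # email addresses
    r"\b(?:\d[ -]?){13,16}\b": "[CARD]",         # card-like runs of digits
}

def redact(prompt: str) -> str:
    """Mask obvious personal identifiers before a prompt leaves the network."""
    for pattern, placeholder in REDACTIONS.items():
        prompt = re.sub(pattern, placeholder, prompt)
    return prompt

print(redact("Contact jane.doe@example.com about card 4111 1111 1111 1111"))
```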

Model Governance

Transparency about which models are in use is pivotal to determining which AI technologies are relied upon within an organization. Much like the need to track software dependencies to remediate security vulnerabilities quickly, models and AI software must be included in software inventories. Consider the case where a particular model is found to be compromised, or open to a particular security issue.

We need to know exactly where it is used, and in which environments (production versus development), to act quickly and remediate the problem. AI Factsheets and the inclusion of models in existing software inventories are the recommended approach to handle these concerns.
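To make this concrete, here is a hedged sketch of what a minimal inventory record for a deployed model might capture, so that a compromised model can be traced to every environment and owning team. The field names and example values are hypothetical, not a standard factsheet schema.

```python
from dataclasses import dataclass, field

# A minimal sketch of an inventory record for a deployed model; the fields are
# illustrative, not a standard schema such as a full AI Factsheet.
@dataclass
class ModelInventoryEntry:
    name: str                      # internal model identifier
    version: str                   # pinned version or checksum of the weights
    source: str                    # where the model came from (vendor, repo URL)
    environments: list = field(default_factory=list)  # "production", "development", ...
    owner: str = ""                # team accountable for remediation

inventory = [
    ModelInventoryEntry(
        name="fraud-scoring-llm",          # hypothetical example entry
        version="2.3.1",
        source="https://example.com/models/fraud-scoring",
        environments=["production", "development"],
        owner="payments-risk",
    ),
]

# If a model is reported as compromised, filter the inventory to find every
# affected production deployment and its owning team.
affected = [m for m in inventory
            if m.name == "fraud-scoring-llm" and "production" in m.environments]
```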

Another consideration is the potential accreditation of these technologies, or of the organizations building the AI software you use. Governance and accreditation by leading organizations such as AI For Good could, alongside license review, help you decide which models from which organizations meet the ethical standards you wish to adhere to, and could provide best practices on their usage.

Application Terms of Use

Terms of use, not just for AI technologies but for the applications we build on top of them, are the final consideration within the ethical AI space. Part of the reason we hear such terrifying stories about the impact of their usage is the lack of published and enforced codes of conduct stating how this technology should be used.

There is a balance in the ethics of usage. From extreme misuse, such as the example in Spain of AI image technology being used to generate naked images of young girls, to the greyer area of uploading AI-generated videos of dead children to TikTok for education on child violence, which caused legitimate upset to families, applications without explicit terms of use and without the blocking of violators can find themselves inadvertently used for nefarious activities. Much as file-sharing software such as Napster and LimeWire was shut down for copyright infringement because users shared music, and data on social media platforms was used by Cambridge Analytica to influence voters, software intended to change the world and benefit people can be used in unintended and unethical ways.

There is guidance out there from some model makers, such as the Responsible Use Guidelines from Meta for developers, which are referenced in the model cards of their models. But much like the codes of conduct of the communities and meetups developers belong to, established codes of conduct in an easily digestible format, outlining acceptable activities, are also required to make the intended usage of these technologies clear. This must also extend to the applications we build using these technologies, for example AI image generation applications, stating what is and is not okay.

Conclusion

While these ethical challenges exist within the current ecosystem, and make adoption by financial services institutions difficult, with clear ethics and terms of use there is no reason these technologies can't be used to improve our lives and products.

AI is not all about replacing humans. One great recent example that leaves me with hope is the use of AI software trained on keyhole brain surgery to help train neurosurgeons without putting patients at risk, as well as to highlight key areas and tumours. Far from aiming to eliminate surgeons, it points to a future in which brain surgery is safer.

We still need to think about the malicious applications of our software. But we should also take advantage of the benefits AI brings.