Research project launches free tool to make AI safer and more trustworthy


In 2024 Kathryn Simpson, Lecturer in Digital Humanities, joined 25 researchers on the Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project, developing solutions for effective and ethical uses of generative and predictive AI.

The research project, led by the University of Glasgow, is releasing a free tool to help organisations, policymakers, and the public maximise the benefits of AI applications while identifying their potential harms.

The tool, developed as part of the Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project, aims to help address the urgent need for rigorous assessments of AI risks caused by the rapid expansion and adoption of the technology across a wide range of sectors.

It is designed to help support the aims of regulations like the European Union's AI Act, introduced in 2024, which seek to balance AI innovation with protections against unintended negative consequences.

PHAWM's new open-source auditing workbench will empower users without extensive backgrounds in AI to conduct in-depth audits of the strengths and weaknesses of AI-driven applications. The PHAWM Workbench is an online tool designed to enable participants without AI expertise, including end-users and people affected by the outcomes of AI applications, to carry out audits. The PHAWM Methodology is a guiding framework that supports organisations and communities through the process of completing a participatory audit of an AI application. Work on the methodology was led by Kathryn Simpson and Dr Emily O'Hara.

It also actively involves audiences who are usually excluded from the audit process, including those who will be affected by the decisions made by the AI application, in order to produce better outcomes for the applications' end-users.

The tool is the first public outcome from PHAWM, which was launched in May 2024 and supported by £3.5m in funding from Responsible AI UK (RAi UK).

It brings together more than 30 researchers from seven leading UK universities with 28 partner organisations to tackle the challenge of developing trustworthy and safe AI systems. 

Speaking on the importance of the PHAWM project, Kathryn Simpson said: "A significant issue in the increased use of AI applications in the UK is our limited ability as end-users or participants to judge or place a value on their outputs. The newly launched PHAWM Workbench and Methodology are a significant step forward in this area. They allow people to understand the implications of the outputs of the AI applications and to make judgements on said applications.

"Importantly, our Workbench and Methodology enable people to review AI applications based on issues that are important to them. As both the Departments for Science, Innovation and Technology and for Culture, Media and Sport have recently highlighted, the broader implications of AI can only be understood by increasing AI literacy in an applied as well as technical sense: the PHAWM Workbench and Methodology enable this literacy."

The tool and accompanying guiding framework have been developed through extensive co-design workshops with the project's partners and other stakeholders in the health and cultural heritage sectors. These sectors are two of the four areas that the PHAWM project was established to investigate, alongside media content and collaborative content generation.

The PHAWM tool works by systematically gathering diverse perspectives on an organisation's current or prospective AI application through a four-stage auditing process.

First, the audit instigator is guided to provide information about the AI system in accessible, non-technical language. 

Secondly, they invite relevant stakeholders to participate in the auditing process, including users of the system and the people the system's decisions will affect, such as the public or patients in the case of health AI applications.

Next, the audit participants are guided to align the audit with their concerns and lived experience of the AI application鈥檚 impact in their daily lives or profession. The tool and framework then help participants identify potential positive and negative impacts, create metrics to measure them, and assess whether the AI application under audit is capable of meeting their criteria. The AI application will receive a pass or fail grade based on the audit criteria set by each participant. 

Finally, the audit instigator collects the data and insights from the audit participants, identifying areas of concern raised during the process. They can use the diverse perspectives gathered to develop action plans which will inform their decisions about how the AI application is developed or integrated into practice. 

Professor Simone Stumpf, of the University of Glasgow's School of Computing Science, leads the PHAWM project. She said: "The tools and processes we've developed offer a practical, community-centred approach to evaluating the real-world impacts of artificial intelligence. The workbench is a flexible tool which can be used to run in-depth audits of AI applications an organisation has developed in-house, as well as being used to investigate whether off-the-shelf AI applications will meet organisations' needs before they are purchased.

"Being able to look in such depth and from so many different angles will help organisations make properly informed decisions which assess the balance of risk and reward which comes from adopting new technologies. Our hope is that the tool and framework we've developed with our partners and stakeholders will enable organisations to reap the benefits of AI while avoiding any potential for harms."

The PHAWM team are continuing to refine the tool and framework in collaboration with representatives from their four key areas of investigation. 

Public Health Scotland and NHS National Services Scotland (NSS) contribute to PHAWM's health use case, while Istella contributes to the media content use case. The National Library of Scotland, Museum Data Service and David Livingstone Birthplace Trust participate in the cultural heritage use case, and Wikimedia is involved in the collaborative content generation use case.

The PHAWM team are also currently developing comprehensive training and support for certification to help organisations adopt PHAWM鈥檚 auditing tools as effectively as possible. 

For more information on PHAWM, visit the .