Charities looking to use artificial intelligence in their work are being urged not to view the technology “in a vacuum” and to ensure they consider the legal implications of its use.
The warning has come from Kieran John, managing associate at law firm Mishcon de Reya, who says charities' starting point in using AI "has to be your legal duties".
The key legal areas he urges charities to be wary of are data collection laws and GDPR compliance, infringement risks of AI-generated work, and discrimination and bias in AI-produced work.
Another area of risk is "hallucinations or inaccuracies", which is a "particular risk when charities rely on AI for impact data analysis".
He says these risks can be effectively managed by checking who owns any data being used and by keeping AI policies under review.
Practical steps charities can take include a skills audit to assess how much AI training the organisation needs. Nominating a board member to focus on AI is another recommendation he makes, as is seeking external advice.
"Remember, using external tools doesn't remove responsibility in decision-making," he adds.
John delivered the advice during a discussion session run by think tank New Philanthropy Capital, supported by the Clothworkers’ Company.
Also speaking at the event were NPC's director of Open for All Tris Lumley, Joseph Rowntree Foundation senior policy advisor Yasmin Ibison, and Parkinson's UK head of technology services Gabriela Caldera-Cabral, who urged charities to think of AI "as a trainee".
She told charities that “you’re the subject matter expert, and you need to train it up”.
“Trustees need to stay informed and ask difficult questions, there may be things missed,” she added.
"Be aware of the risks, but don't be afraid of AI."