
Ethical and responsible AI for the housing sector

Rachel Finn, Director of Managed Services and Head of Irish Operations at Trilateral Research, outlines a practical vision for implementing ethical and responsible artificial intelligence (AI) in the social housing sector.

Speaking from her experience at the crossroads of law and emerging technologies, Finn warns that “while AI presents major opportunities for improving housing services, it must be implemented with care”.

“Technology itself is value-neutral but when we make decisions about how we design and deploy it, we embed our societal values for better or worse.”

Trilateral Research, originally a research-focused organisation specialising in privacy, data protection, and ethics, now builds AI tools to solve complex social problems and advises public sector bodies on AI governance.

Drawing on 15 years of work across both academic and applied settings, Finn delivers a clear message: responsible AI is achievable, but it takes planning, oversight, and collaboration.

The promise and hesitation around AI

Finn highlights that many organisations in housing and beyond are still hesitant to adopt AI, despite its potential. Public services, already under pressure, are seeking ways to increase capacity, and AI is often cited as a means of delivering “massive productivity increases, from 15 per cent to 400 per cent”.

However, Finn outlines three fears that commonly hold organisations back:

  1. approving a risky system;
  2. choosing the wrong tool; and
  3. lacking internal oversight.

These challenges are well-founded, says Finn, and are reflected in global examples of AI systems causing harm through bias, hallucinations, or lack of transparency. However, she notes that organisations now have more clarity on why systems fail and what can be done to avoid it.

Three steps for responsible AI

Finn offers a practical three-step framework for housing organisations seeking to use AI to improve services while mitigating risk:

1. Pick the right use case

AI is most effective in high-volume, data-rich environments where decisions are repetitive and time-sensitive. In housing, this could include:

  • real-time tenant support such as arrears prediction (see the sketch after this list);
  • chatbots for benefit advice;
  • application triage; and
  • resource planning.
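
As a purely illustrative example of the first item, the short Python sketch below flags tenant accounts for early arrears outreach using a simple weighted score; the field names, weights, and threshold are assumptions made for this article, not details of any Trilateral Research system.

    from dataclasses import dataclass

    @dataclass
    class TenantAccount:
        # Hypothetical fields; a real system would draw on richer, governed data.
        tenant_id: str
        weeks_in_arrears: int
        missed_payments_last_year: int
        balance_owed: float

    def arrears_risk_score(account: TenantAccount) -> float:
        """Toy scoring rule: weight arrears duration, missed payments, and balance."""
        score = 0.0
        score += min(account.weeks_in_arrears, 12) / 12 * 0.5
        score += min(account.missed_payments_last_year, 6) / 6 * 0.3
        score += min(account.balance_owed, 2000) / 2000 * 0.2
        return score

    def flag_for_outreach(accounts: list[TenantAccount], threshold: float = 0.6) -> list[str]:
        """Return tenant IDs whose score crosses the (assumed) outreach threshold."""
        return [a.tenant_id for a in accounts if arrears_risk_score(a) >= threshold]

    sample = [
        TenantAccount("T001", weeks_in_arrears=10, missed_payments_last_year=5, balance_owed=1500.0),
        TenantAccount("T002", weeks_in_arrears=1, missed_payments_last_year=0, balance_owed=80.0),
    ]
    print(flag_for_outreach(sample))  # ['T001'] with these illustrative weights

In practice, a flag like this would prompt a human conversation with the tenant rather than any automated action, which is where Finn’s emphasis on oversight comes in.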

She cites an example from Lincolnshire in England, where Trilateral Research developed a safeguarding tool for identifying children at risk. The system consolidated arrest, social services, and anti-social behaviour datasets, reducing case review time from 25 person-days to 20 minutes.

“That is time that can now be spent safeguarding children, rather than reporting on it,” Finn says.
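
Finn does not describe the Lincolnshire system’s internals, but the consolidation step she points to, bringing several datasets together around a shared identifier so a case can be reviewed in one place, can be pictured with a small pandas sketch; the records and column names here are invented for illustration.

    import pandas as pd

    # Hypothetical extracts; real safeguarding data would be strictly governed and access-controlled.
    arrests = pd.DataFrame({"case_id": [1, 2], "arrest_count": [3, 0]})
    social_services = pd.DataFrame({"case_id": [1, 3], "open_referrals": [2, 1]})
    asb_reports = pd.DataFrame({"case_id": [2, 3], "asb_incidents": [4, 1]})

    # Outer-join the datasets on a shared case identifier so no record is silently dropped.
    combined = (
        arrests
        .merge(social_services, on="case_id", how="outer")
        .merge(asb_reports, on="case_id", how="outer")
        .fillna(0)
    )

    print(combined)

The time saving Finn cites comes from automating this kind of consolidation for trained professionals, not from replacing their judgement.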

2. Build an interdisciplinary team

Ethical AI is not just a technical project. Finn stresses the importance of collaboration between subject matter experts, legal professionals, behavioural scientists, and user interface designers, especially in public services where trust is critical.

She describes a project in Trim, County Meath, where Trilateral Research built a hyper-local air quality monitoring tool. Using AI, it translated raw environmental data into “meaningful, health-related outcomes” such as localised asthma or diabetes risks for both residents and policymakers. “If all local authorities reduced carbon emissions by 20 per cent, we could save 360 lives and €18 million annually for the HSE,” she says.
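
The Trim tool’s method is not detailed here, but the idea of turning raw readings into “meaningful, health-related outcomes” can be illustrated with a toy banding function; the PM2.5 thresholds and messages below are placeholders, not clinical guidance or the project’s actual model.

    def health_message(pm25_ugm3: float) -> str:
        """Map a PM2.5 reading (µg/m³) to a plain-language message.

        The bands below are illustrative placeholders, not clinical or WHO thresholds.
        """
        if pm25_ugm3 < 10:
            return "Air quality is good; no elevated respiratory risk expected."
        if pm25_ugm3 < 25:
            return "Moderate levels; sensitive groups, such as people with asthma, may notice effects."
        return "High levels; consider limiting outdoor exertion, especially for at-risk residents."

    for reading in (6.2, 18.0, 41.5):
        print(f"{reading:>5} µg/m³ -> {health_message(reading)}")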

3. Be an active partner

Finn is firm that AI adoption is “not plug-and-play”. “It is not like a TV that you send to someone’s house. It is more like a houseplant; it needs regular care,” she says. Data evolves, user needs change, and systems must be monitored for accuracy over time.

That, Finn states, is why Trilateral Research sets up shared responsibility models with its partners, clearly defining who manages what, and how decisions will be made across the AI lifecycle.
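
One concrete form that ongoing care can take is a routine check of a deployed model’s accuracy against recent, human-verified outcomes; the sketch below shows the general shape of such a check, with the tolerance and response left as assumptions rather than anything prescribed by Trilateral Research.

    def monitor_accuracy(predictions: list[int], verified_outcomes: list[int],
                         baseline_accuracy: float, tolerance: float = 0.05) -> bool:
        """Compare live accuracy with the accuracy measured at deployment.

        Returns True if performance has drifted beyond the agreed tolerance,
        signalling that the shared-responsibility process (review, retraining,
        or pausing the system) should kick in. Thresholds here are illustrative.
        """
        correct = sum(p == y for p, y in zip(predictions, verified_outcomes))
        live_accuracy = correct / len(verified_outcomes)
        return (baseline_accuracy - live_accuracy) > tolerance

    # Example: a model deployed at 90% accuracy now scores 4/6 on recent verified cases.
    drifted = monitor_accuracy([1, 0, 1, 1, 0, 1], [1, 1, 1, 0, 0, 1], baseline_accuracy=0.90)
    print("Review needed" if drifted else "Within tolerance")  # prints "Review needed"

A check like this gives the shared responsibility model something concrete to act on: who reviews the alert, and who decides whether to retrain or pause the system.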

She also asserts that organisations must invest in AI literacy, ensuring staff understand both capabilities and limitations, and establish AI governance programmes alongside existing compliance functions (e.g., GDPR or information security).

From risk to reward

Finn says: “Done right, responsible AI brings tangible benefits: faster insights, better service outcomes, and scalable solutions that remain ethical and trusted. These investments really pay off not just in efficiency, but in ensuring that technology works for the people we serve.”

In a housing sector facing rising demand and shrinking resources, Finn concludes: “Ethical, well-governed AI is not a luxury; it is a necessity.”