Welcome to the AI Adult Social Care Risk Register

This risk register matrix has been devised for NW ADASS to monitor the risks of using AI in Adult Social Care in the United Kingdom.

It has been produced as a starting point to help you define the risks in your AI and Adult Social Care projects and to monitor and report against them.

It can help you determine the interventions you may need to mitigate risks and move forward with your projects.

There is a link to download the file at the end of the section for you to use.

Please download, use and amend as you see fit.

Risk levels: 1 = De-risked (Extremely Low Risk); 2 = Minor (Low Risk); 3 = Moderate (Moderate Risk); 4 = Significant (High Risk).

Ethical: Bias, Discrimination, and Fairness in AI
1 (De-risked): AI models are fully transparent, rigorously tested for bias, and comply with equality and fairness standards. No evidence of discriminatory outputs in care recommendations.
2 (Minor): Minor concerns about AI bias or fairness, well mitigated through regular audits and bias detection mechanisms.
3 (Moderate): Moderate risk of bias or discrimination in AI-prioritised assessments, with partial mitigation. Challenges to fairness in decisions are mitigated by communicating that all care decisions are made by humans.
4 (Significant): Significant concerns regarding bias in the AI model, with insufficient controls to ensure fairness in resource allocation.

Data Security and Privacy: Misuse or Exposure of Personal Data
1 (De-risked): AI systems handling sensitive personal care data are fully compliant with UK GDPR and other legal standards, with strong encryption and regular audits to prevent breaches.
2 (Minor): Minor risks to data privacy, well controlled through robust security protocols and monitoring.
3 (Moderate): Moderate risk of data misuse or breaches, with some vulnerabilities in data handling by AI systems, partially mitigated through safeguards.
4 (Significant): Significant concerns about data security in AI systems, with potential for large-scale breaches or misuse of sensitive health and care data.

Operational: Impact on Day-to-Day Care Delivery
1 (De-risked): AI integrates seamlessly into care workflows, improving efficiency without disrupting service delivery.
2 (Minor): Minor operational disruptions, easily manageable and not significantly affecting day-to-day care delivery.
3 (Moderate): Moderate disruptions to care workflows, such as errors in care recommendations or delays in processing assessments, but staff can adapt.
4 (Significant): Significant disruptions to care workflows, leading to delays in assessments or service delivery, impacting care outcomes and the wellbeing of people with lived experience.

Regulatory Compliance: AI Adherence to Legal Standards
1 (De-risked): AI systems are fully compliant with relevant UK laws and regulations (e.g. the Care Act and GDPR), with thorough oversight and regular audits.
2 (Minor): Minor risks of non-compliance, well managed through regular reviews and updates of AI systems to ensure ongoing compliance.
3 (Moderate): Moderate risks of regulatory non-compliance in the AI's decision-making processes, including challenges with transparency and auditability.
4 (Significant): Significant risk of non-compliance with legal and regulatory standards, including data protection, fairness in decision-making, and transparency of AI algorithms.

Technical: AI System Failure or Inaccuracy
1 (De-risked): AI systems are highly reliable, with extensive testing, continuous performance monitoring, and proven accuracy in care prioritisation and/or recommendations.
2 (Minor): Minor technical issues or inaccuracies in AI outputs, rare and easily correctable through manual intervention or system updates.
3 (Moderate): Moderate risks of AI system failure or inaccurate recommendations, impacting care decisions and requiring staff intervention, but not critical.
4 (Significant): Significant risk of AI system errors, producing incorrect assessments or care recommendations, leading to delays or negative impacts on outcomes for people with lived experience.

Workforce Impact: Resistance or Lack of AI Skills Among Staff
1 (De-risked): Staff who use AI are fully trained, comfortable using AI systems, and supportive of AI integration in care workflows.
2 (Minor): Minor skills gaps or resistance to AI, but the team is generally supportive and can quickly adapt through training.
3 (Moderate): Moderate resistance from staff due to fears about AI replacing jobs, concerns about fairness, or significant skills gaps requiring intensive training and change management.
4 (Significant): Significant resistance from staff due to a lack of skills or distrust of AI, leading to delays in implementation or failure to utilise AI effectively for the workflow for which it is intended.

Reputational: Damage Due to AI Failures
1 (De-risked): The AI system is seen as a beneficial innovation, improving service efficiency and outcomes, with no foreseeable risk to the council's reputation.
2 (Minor): Minor reputational risks concerning the AI's perceived fairness and effectiveness, manageable through effective communication with the public and stakeholders.
3 (Moderate): Moderate risks to the council's reputation, stemming from public concerns about AI fairness, data privacy, or its role in decision-making, leading to potential media scrutiny.
4 (Significant): Significant reputational risks, including negative public perception, media attention, and stakeholder dissatisfaction due to AI errors, fairness concerns, or data misuse.

Financial: Budget Overruns, Hidden Costs, or Insufficient ROI
1 (De-risked): AI implementation is fully within budget, with clear ROI, no hidden costs, and expected efficiencies in care delivery and administrative tasks.
2 (Minor): Minor risks of budget overruns or hidden costs, but these are well managed through regular financial reviews and projections.
3 (Moderate): Moderate risk of AI-related financial issues, including unforeseen implementation costs, technical upgrades, or delays in realising expected efficiencies, benefits or savings.
4 (Significant): Significant risk of budget overruns, hidden costs, or delays in achieving the anticipated cost savings or efficiencies through AI, leading to financial strain.

Safety and Wellbeing of People with Lived Experience Due to AI Use
1 (De-risked): AI is used safely to enhance outcomes for people with lived experience, with accurate care recommendations and safeguards in place to protect wellbeing, such as care provision decisions always being made by humans.
2 (Minor): Minor risks to service recommendations, easily managed through regular reviews of AI outputs and human oversight in decision-making.
3 (Moderate): Moderate risks of inaccurate or incomplete AI recommendations, but mitigated by human oversight and intervention.
4 (Significant): Significant risks to service user safety due to AI making incorrect care recommendations, potentially leading to delays in accessing services.

Demand Management and Service Capacity via AI
1 (De-risked): AI effectively manages demand for services, optimising capacity and improving response times, without disrupting service delivery.
2 (Minor): Minor risks of AI not fully optimising service capacity or response times, but these are manageable through manual oversight and adjustments.
3 (Moderate): Moderate risks of AI not meeting expectations in managing demand or capacity, leading to some delays in care provision but not critically impacting service delivery.
4 (Significant): Significant risks of AI failing to manage growing demand for services or capacity issues, resulting in delays or gaps in service delivery for some people.

Transparency and Accountability of AI Decisions
1 (De-risked): AI decision-making is fully transparent, with clear audit trails and explainable outputs, ensuring accountability for care decisions and that care decisions are always made by humans.
2 (Minor): Minor transparency issues, with clear mechanisms for staff to explain AI-driven recommendations.
3 (Moderate): Moderate risks to transparency and accountability, with some challenges in understanding or explaining AI-driven recommendations, but mitigated by oversight and intervention.
4 (Significant): Significant risks to transparency, with AI decisions difficult to audit or explain, leading to loss of trust from staff and people with lived experience, and challenges in defending actions.
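If your team prefers to track scores against this matrix programmatically rather than in the spreadsheet, the register's structure can be sketched as a small data model. The following Python snippet is purely illustrative: the class, field names, and example entries are assumptions for demonstration, not part of the NW ADASS template.

```python
from dataclasses import dataclass

# Severity levels as defined by the matrix legend above.
LEVELS = {1: "De-risked", 2: "Minor", 3: "Moderate", 4: "Significant"}

@dataclass
class RiskEntry:
    category: str    # e.g. "Ethical", "Data Security and Privacy"
    score: int       # 1-4, per the matrix
    mitigation: str  # planned or in-place intervention

def report(entries):
    """Return one summary line per entry, highest-scoring risks first."""
    ordered = sorted(entries, key=lambda e: e.score, reverse=True)
    return [
        f"{e.category}: {LEVELS[e.score]} ({e.score}) - {e.mitigation}"
        for e in ordered
    ]

# Hypothetical register entries for illustration only.
register = [
    RiskEntry("Ethical", 2, "Regular bias audits"),
    RiskEntry("Workforce Impact", 3, "Training and change management"),
]

for line in report(register):
    print(line)
```

A structure like this makes it straightforward to re-score risks over time and surface the highest-rated categories first when reporting to a board.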

Download the Risk Register in Excel here:

NW ADASS AI ASC Risk Register.xlsx