Swedish welfare authorities suspend ‘discriminatory’ AI model

A “discriminatory” artificial intelligence (AI) model used by Sweden’s social security agency to flag people for benefit fraud investigations has been suspended, following an intervention by the country’s Data Protection Authority (IMY).

IMY’s involvement, which began in June 2025, was prompted by a joint investigation by Lighthouse Reports and Svenska Dagbladet (SvB), which revealed in November 2024 that a machine learning (ML) system used by Försäkringskassan, Sweden’s Social Insurance Agency, was disproportionately and wrongly flagging certain groups for further investigation over social benefits fraud.

This included women, individuals with “foreign” backgrounds, low-income earners and people without university degrees. The media outlets also found the same system was largely ineffective at identifying men and wealthy people who had actually committed some form of social security fraud.

These findings prompted Amnesty International to publicly call for the system’s immediate discontinuation in November 2024, which it described at the time as “dehumanising” and “akin to a witch hunt”.

Introduced by Försäkringskassan in 2013, the ML-based system assigns risk scores to social security applicants, automatically triggering an investigation when a score is high enough.
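Försäkringskassan has not published the internals of its model, but the basic mechanism described above – a score plus a cut-off that triggers a manual investigation – can be illustrated with a minimal sketch. Everything below (the feature, the scoring function and the threshold value) is a hypothetical stand-in, not the agency’s actual logic.

```python
# Illustrative sketch only: the real scoring model and threshold are not public.
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    features: dict  # hypothetical inputs, e.g. number of recent claims

def risk_score(app: Application) -> float:
    """Stand-in for a trained ML model's probability-like output (0.0 to 1.0)."""
    # A real system would call something like model.predict_proba(...) here.
    return min(1.0, 0.1 * app.features.get("claims_last_year", 0))

RISK_THRESHOLD = 0.8  # hypothetical cut-off

def triage(app: Application) -> str:
    """Route an application to manual fraud investigation when its score exceeds the threshold."""
    return "flag_for_investigation" if risk_score(app) >= RISK_THRESHOLD else "process_normally"

print(triage(Application("A-001", {"claims_last_year": 9})))  # flag_for_investigation
```

The discrimination concern is that if the learned scoring function correlates with attributes such as gender, income or background, groups sharing those attributes end up over the threshold – and under investigation – far more often than others.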

According to a blog post published by IMY on 18 November 2025, Försäkringskassan was specifically using the system to conduct targeted checks on recipients of temporary child support benefits – which are designed to compensate parents for taking time off work to care for sick children – but took it out of use over the course of the authority’s investigation.

“While the inspection was ongoing, the Swedish Social Insurance Agency took the AI system out of use,” said IMY lawyer Måns Lysén. “Since the system is no longer in use and any risks with the system have ceased, we have assessed that we can close the case. Personal data is increasingly being processed with AI, so it is welcome that this use is being recognised and discussed. Both authorities and others need to ensure that AI use complies with the General Data Protection Regulation [GDPR] and now also the AI regulation, which is gradually coming into force.”

IMY added that Försäkringskassan “does not currently plan to resume the current risk profile”.

Under the European Union’s AI Act, which came into force on 1 August 2024, the use of AI systems by public authorities to determine access to essential public services and benefits must meet strict technical, transparency and governance rules, including an obligation on deployers to assess risks to fundamental rights and put mitigation measures in place before use. Systems considered tools for social scoring are prohibited outright.

Computer Weekly contacted Försäkringskassan about the suspension of the system, and why it elected to discontinue it before IMY’s inspection had concluded.

“We discontinued the use of the risk assessment profile in order to assess whether it complies with the new European AI regulation,” said a spokesperson. “We have at the moment no plans to put it back into use since we now receive absence data from employers among other data, which is expected to provide a relatively good accuracy.”

Försäkringskassan previously told Computer Weekly in November 2024 that “the system operates in full compliance with Swedish law”, and that applicants entitled to benefits “will receive them regardless of whether their application was flagged”.

In response to Lighthouse and SvB’s claims that the agency had not been fully transparent about the inner workings of the system, Försäkringskassan added that “revealing the specifics of how the system operates could enable individuals to bypass detection”.

Similar systems

AI-based systems used by other countries to distribute benefits or investigate fraud have run into similar problems.

In November 2024, for example, Amnesty International exposed how AI tools used by Denmark’s welfare agency were creating pernicious mass surveillance, risking discrimination against people with disabilities, racialised groups, migrants and refugees.

In the UK, an internal assessment by the Department for Work and Pensions (DWP) – released under Freedom of Information (FoI) rules to the Public Law Project – found that an ML system used to vet thousands of Universal Credit benefit payments was showing “statistically significant” disparities when selecting who to investigate for possible fraud.

Carried out in February 2024, the assessment showed there is a “statistically significant referral … and outcome disparity for all the protected characteristics analysed”, which included people’s age, disability, marital status and nationality.
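The DWP assessment itself has only been released in summary form, but a “statistically significant referral disparity” of this kind is typically demonstrated with a simple independence test on referral counts per group. The sketch below uses invented figures purely to show the shape of such a check; the numbers do not come from the DWP document.

```python
# Hypothetical referral counts for two groups of claimants (invented for illustration).
from scipy.stats import chi2_contingency

#               referred  not_referred
contingency = [[120,      9880],   # group A (e.g. one nationality)
               [310,      9690]]   # group B (another nationality)

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2={chi2:.1f}, p={p_value:.3g}")
# A very small p-value indicates the difference in referral rates between the groups
# is unlikely to be chance – the kind of disparity the assessment reported across
# age, disability, marital status and nationality.
```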

Civil rights groups later criticised the DWP in July 2025 for a “worrying lack of transparency” over how it is embedding AI throughout the UK’s social security system, where the technology is used to determine people’s eligibility for schemes such as Universal Credit or Personal Independence Payment.

In separate reports published around the same time, both Amnesty International and Big Brother Watch highlighted the clear risks of bias associated with the use of AI in this context, and how the technology can exacerbate pre-existing discriminatory outcomes in the UK’s benefits system.

