What role does machine learning play in public services?

Hi, I’m Chris.

I’m here to help you through every step of your PRVCY journey.

Whether you’re already taking the PRVCY online courses or you’re a new subscriber, I’ll regularly post news and information based on our research to help you take back control of your PRVCY!

Automated decision-making systems, often referred to as “risk assessment tools,” are used to decide whether defendants will be released pretrial, whether to investigate allegations of child neglect, to predict which students might drop out of high school, and more.

Machine Learning (ML) increasingly permeates every sphere of life. Complex, contextual, continually moving social and political challenges are automated and packaged as mathematical and engineering problems. Simultaneously, research on algorithmic injustice shows how ML automates and perpetuates historical, often unjust and discriminatory, patterns. The negative consequences of algorithmic systems, especially on marginalized communities, have spurred work on algorithmic fairness.

Many of the systems in use today are built on large volumes of historical data and machine learning models that predict the future behavior and outcomes of the people they assess. For example, Microsoft uses such models to rank the skill level of players in online games, banks evaluate the reliability of potential borrowers when they apply for loans, and several companies have even tried to automate resume screening for open positions. In these situations, developers put their trust in the algorithms.
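To make this concrete, here is a minimal, hypothetical sketch of the pattern behind most of these tools: a model is fitted to historical records and then used to score new cases. The data, feature names, and library choice (scikit-learn) are ours, purely for illustration, and do not describe any vendor’s actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented historical data: [income_k, years_employed, prior_defaults]
X_history = np.array([
    [55, 6, 0],
    [23, 1, 2],
    [80, 10, 0],
    [30, 2, 1],
])
y_history = np.array([0, 1, 0, 1])  # 1 = person defaulted in the past

# Fit a simple model to the historical records
model = LogisticRegression().fit(X_history, y_history)

# Score a new applicant purely from historical patterns;
# whatever bias is in the records is now in the score.
new_applicant = np.array([[40, 3, 1]])
print(model.predict_proba(new_applicant)[0, 1])  # estimated risk of default
```

Whatever patterns, gaps, or past discrimination are present in the historical records are carried straight into the scores the model produces.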

Still, most of this work is narrow in scope, focusing on fine-tuning specific models, making datasets more inclusive/representative, and “debiasing” datasets. Although such work can constitute part of the remedy, a fundamentally equitable path must examine the wider picture, such as unquestioned or intuitive assumptions in datasets, current and historical injustices, and power asymmetries.

The developers of these tools and government agencies that use them often claim that risk assessments will improve human decisions by using data. But risk assessment tools (and the data used to build them) are not just technical systems that exist in isolation — they are inherently intertwined with the policies and politics of the systems in which they operate, and they can reproduce the biases of those systems. Let’s delve deeper into the concept of risk assessment tools and explore real-world examples of their application.

1. Social Welfare Programs:

One of the most common applications of risk assessment tools is in determining eligibility for social welfare programs. For instance, governments collect data on income, employment status, and family size to decide if a family qualifies for food assistance or unemployment benefits. By analyzing this data, they aim to ensure that those who genuinely need assistance receive it; a small illustrative sketch of such a check appears after this list.

2. Healthcare Allocation:

In the realm of healthcare, governments use personal medical records and health data to allocate resources efficiently. A pertinent example is the use of data during a pandemic. Authorities use information to identify vulnerable populations and prioritize them for vaccination, thereby maximizing the impact of limited vaccine supplies.

3. Criminal Justice System:

Risk assessment tools have also made their way into the criminal justice system. These tools analyze an individual’s criminal history and background to assess their risk of reoffending. The results can influence decisions related to bail amounts or sentencing, aiming for a more equitable and evidence-based approach to justice.

4. Education:

In the field of education, data plays a crucial role in resource allocation and demographic analysis. The educational system is one of the easiest places to collect data, since in most countries school attendance is mandatory. Through the educational system, governments collect information about finances, family composition, and location, as well as mental and physical health.

5. Taxation:

Tax collection is another domain where personal financial data comes into play. Governments use income and property ownership data to determine the amount of property tax an individual owes.

6. Public Health:

In the context of public health, data collected from various sources, such as mobile apps and wearable devices, aids in monitoring the health of the population. During disease outbreaks, data is used to track the spread of the disease and make decisions about quarantine measures.

7. Housing Assistance:

Governments use personal financial and housing data to determine eligibility for housing assistance programs. Information on income, family size, and housing conditions ensures that those in need of affordable housing receive the support they require.

8. Immigration:

Immigration authorities rely on personal data to make decisions about visa approvals and deportations. Factors like travel history, criminal record, and employment status are considered when deciding on immigration matters.

9. Disaster Response:

During natural disasters, governments collect data on affected populations to allocate resources effectively. Location data, for example, is used to identify areas in need of immediate assistance, aiding in swift disaster response efforts.
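As mentioned in item 1 above, here is a small, purely hypothetical sketch of an automated eligibility check. The thresholds, field names, and rules are invented for illustration and do not describe any real welfare program.

```python
# Hypothetical, rule-based eligibility check; thresholds and fields are invented.
def eligible_for_food_assistance(monthly_income: float,
                                 household_size: int,
                                 employed: bool) -> bool:
    # Income limit grows with household size (example figures only)
    income_limit = 1500 + 500 * (household_size - 1)
    if monthly_income <= income_limit:
        return True
    # Unemployed applicants slightly above the limit still qualify in this toy rule
    return (not employed) and monthly_income <= income_limit * 1.1

print(eligible_for_food_assistance(monthly_income=2100, household_size=3, employed=False))  # True
```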

Why does this matter for a PRVCY world?

Because possessing all this data sometimes leads governments to become greedy and attempt to control our personal lives, as we witnessed during the plandemic.

While risk assessment tools undoubtedly hold promise for better governance, concerns about privacy, fairness, and potential bias in these systems are ever-present. Striking the right balance between using data for the public good and protecting individual rights is an ongoing challenge for governments worldwide, and so far they are not doing very well.

In the United States, authorities used PATTERN to inform decisions about whether incarcerated people would be released from prison to home confinement at the onset of the COVID-19 pandemic.

PATTERN outputs “risk scores” — essentially numbers that estimate how likely it is a person will be rearrested or returned to custody after their release. Thresholds are then used to convert these scores into risk categories, so for example, a score below a certain number may be considered “low risk,” while scores at or above that number may be classified as “high risk,” and so on.
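Conceptually, the conversion from scores to categories is just a set of cut-offs. Here is a minimal sketch with invented threshold values; PATTERN’s actual cut-offs differ and have themselves been contested.

```python
# Invented cut-offs for illustration; PATTERN's real thresholds differ.
def risk_category(score: float) -> str:
    if score < 10:
        return "minimum"
    elif score < 30:
        return "low"
    elif score < 45:
        return "medium"
    return "high"

for score in (8, 29, 30, 46):
    print(score, "->", risk_category(score))
```

A one-point difference near a cut-off can flip someone from “low” to “medium” risk, which is why where the thresholds are placed matters as much as the score itself.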

Digital PRVCY experts describe this system as an algorithmic injustice.

AI relies on machine learning algorithms, which, while widely praised in the tech industry as a versatile solution, are not without their flaws. Much like any other computer system, they are susceptible to errors.

For instance, in the realm of cybersecurity, machine learning algorithms are employed to swiftly identify previously unknown malware. However, a dilemma emerges: as the detection rate increases, so does the likelihood of encountering “false positives,” where the system erroneously categorizes a non-malicious file as malicious. This arises from the fundamental workings of machine learning, where the system doesn’t delve into the specifics of an object but rather compares its observable characteristics to those of known objects.

In certain scenarios, benign objects may closely resemble malicious ones in appearance, and a scoring-based system would likely classify the object as malicious. When applied in a context where automated systems evaluate people’s behavior, this particular aspect of machine learning systems can result in numerous unpleasant situations where an innocent individual is erroneously implicated in “wrong” actions.
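The trade-off can be illustrated with a toy simulation: as the flagging threshold is lowered to catch more true threats, the share of harmless objects wrongly flagged rises as well. All numbers here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
benign_scores = rng.normal(0.3, 0.15, 1000)    # harmless objects
malicious_scores = rng.normal(0.7, 0.15, 100)  # actual threats

# Lowering the threshold catches more threats but flags more harmless objects too
for threshold in (0.6, 0.5, 0.4):
    caught = (malicious_scores > threshold).mean()
    false_positives = (benign_scores > threshold).mean()
    print(f"threshold {threshold}: catch {caught:.0%} of threats, "
          f"wrongly flag {false_positives:.0%} of harmless objects")
```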

In other examples, governments are collecting data in ways, and for purposes, we cannot even know.

In 2020, the presidency of Costa Rica faced significant upheaval as federal investigators conducted searches at presidential offices and four prominent figures, including the President’s mentor and chief associate, departed. The focal point of this turmoil was a data analysis division established within the executive offices, which, over the preceding 18 months, had been aggregating and assessing private personal data obtained from various government sources, purportedly to assist in shaping public policies. The crux of the issue lay in the fact that the Presidential Unit of Data Analysis lacked a legal basis until the government issued a decree on February 19, authorizing its creation and granting it the authority to request confidential personal information from other government entities.

How can we ensure that government agencies and other decision-makers are held accountable for the potential harm that risk assessment tools may cause?

These systems are susceptible to issues such as developer bias, false correlations, and feedback loops, and, unless specifically included by the developer, the algorithms do not factor in ethical considerations. To simply input massive quantities of information into a machine learning system and then accept the result without any critical assessment could lead to a host of unintended consequences, including choices that ultimately infringe upon the rights of certain citizens.
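A feedback loop is easy to show with a toy simulation: if the system directs scrutiny to wherever it found problems before, and new records can only come from places that were scrutinized, the historical gap widens even when the underlying reality is identical. The scenario and numbers below are invented purely to illustrate the mechanism.

```python
import random

random.seed(1)
true_rate = {"A": 0.1, "B": 0.1}   # identical underlying reality
recorded = {"A": 5, "B": 1}        # district A happens to start with more records

for _ in range(1000):
    # More past records -> more scrutiny -> more new records, and so on
    target = "A" if recorded["A"] >= recorded["B"] else "B"
    if random.random() < true_rate[target]:
        recorded[target] += 1

print(recorded)  # district A ends up with roughly 100x more records despite equal reality
```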


In Europe and the United States, legislation exists and continues to be developed that obliges companies to give people understandable information about why their rating has declined or increased (ECOA 1974, GDPR 2016), so that automation doesn’t become black box testing (where the tester knows what the software is supposed to do, but not how it does it). However, the same is not happening in every other country that is considering implementing a social scoring system in the future, at least based on the information we were able to find in public sources. Where that is the case, a host of decisions can be placed entirely in the hands of AI, which is a problem.
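For contrast, here is a minimal sketch of the kind of explanation such rules aim to require: with a simple, transparent scoring model, each feature’s contribution to a person’s score can be listed, instead of an unexplained “your rating went down.” The weights and features are invented for illustration.

```python
# Invented weights and features: a transparent linear score that can be explained
weights = {"missed_payments": -40, "income_k": 2, "account_age_years": 5}

def explain(applicant: dict) -> None:
    total = 0
    for feature, weight in weights.items():
        contribution = weight * applicant[feature]
        total += contribution
        print(f"{feature}: {applicant[feature]} x {weight:+d} = {contribution:+d}")
    print("total score:", total)

explain({"missed_payments": 2, "income_k": 45, "account_age_years": 3})
```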

The ability of those in power to change how these systems work may significantly affect the lives of social groups who are not in a position to influence the scoring rules. To push for change, we’ve created a resource to help you feel empowered to ask questions when government agencies or developers make claims about risk assessment tools.

Most of these systems are built on top of publicly available interfaces that contain massive amounts of personal data, including every “offense” committed. This type of interface is more prone to leaks, and, if accessed illegally, could lead to terrible consequences for the individuals targeted. It also doesn’t have to be hacked: entities that use such systems often provide APIs that allow people to look up an individual’s violations by inputting information like his or her phone number or passport number.

Will our reality, as a whole, devolve into a dystopian future in which machine learning makes the decisions?

Not yet, but probably yes. There are still far too many unknowns, and it is unclear whether nationwide systems could actually be implemented in countries with such vastly different forms of government and legislative frameworks.
As citizens, it’s crucial to stay informed about how our data is being used and advocate for transparency and fairness in these systems. In the end, the power of data lies not only in its collection but in its responsible and ethical use.

Yet one thing is clear: as technology develops at an unhindered pace, the lines between digital tech and larger social and political issues will only become more blurred.

#PRVCYTips:

Keep your personal data, such as your location, financial information, and health records, to yourself.
