Until recently, it wasn’t possible to say that AI had a hand in forcing a government to resign. But that’s precisely what happened in the Netherlands in January 2021, when the incumbent cabinet resigned over the so-called kinderopvangtoeslagaffaire: the childcare benefits affair.

When a family in the Netherlands sought to claim their government childcare allowance, they needed to file a claim with the Dutch tax authority. Those claims passed through the gauntlet of a self-learning algorithm, initially deployed in 2013. In the tax authority’s workflow, the algorithm would first vet claims for signs of fraud, and humans would scrutinize those claims it flagged as high risk. In reality, the algorithm developed a pattern of falsely labeling claims as fraudulent, and harried civil servants rubber-stamped the fraud labels. So, for years, the tax authority baselessly ordered thousands of families to pay back their claims, pushing many into onerous debt and destroying lives in the process.

Postmortems of the affair showed evidence of bias. Many of the victims had lower incomes, and a disproportionate number had ethnic minority or immigrant backgrounds. The model saw not being a Dutch citizen as a risk factor.

“When there is disparate impact, there needs to be societal discussion around this, whether this is fair. We need to define what ‘fair’ is,” says Yong Suk Lee, a professor of technology, economy, and global affairs at the University of Notre Dame, in the United States.
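The inner workings of the tax authority’s model were never made public, but a minimal sketch can illustrate the shape of the pipeline described above. Everything here is an assumption invented for illustration, not the real system: the features, the threshold, and the rubber-stamp reviewer. It shows how using citizenship as a risk feature, combined with a reviewer who merely echoes the model, produces exactly the pattern the postmortems found.

```python
# Purely illustrative sketch: the real system's features, weights, and
# thresholds were never published. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Claim:
    income: float
    is_citizen: bool  # a protected attribute; using it as a feature bakes in bias

def risk_score(claim: Claim) -> float:
    """Hypothetical risk model: low income and non-citizenship raise the score."""
    score = 0.0
    if claim.income < 25_000:
        score += 0.4
    if not claim.is_citizen:
        score += 0.4  # nationality treated as a risk factor: the documented flaw
    return score

def human_review(flagged: bool) -> bool:
    """A rubber-stamp review simply echoes the model's flag, which is
    what the postmortems said happened in practice."""
    return flagged

def process(claim: Claim, threshold: float = 0.5) -> bool:
    """Returns True if the claim ends up labeled as fraud."""
    return human_review(risk_score(claim) > threshold)

# A low-income, non-citizen family is labeled fraudulent with no evidence.
print(process(Claim(income=20_000, is_citizen=False)))  # True
```

Once the human step reduces to a pass-through, the “human in the loop” provides no safeguard at all: every systematic error in the score becomes a systematic error in the outcome.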
“The performance of the model, of the algorithm, needs to be transparent or published by different groups,” says Lee. That includes things like the model’s accuracy rate, he adds.
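The kind of reporting Lee describes need not expose a model’s internals. As a hedged illustration, using invented audit records (no such data were available for the Dutch system), per-group accuracy and false-positive rates are enough to surface disparate impact:

```python
# Minimal sketch of per-group performance reporting. The groups, labels,
# and records below are invented for illustration only.
from collections import defaultdict

# (group, model_flagged_fraud, actually_fraud): hypothetical audit records
records = [
    ("citizen", False, False), ("citizen", True, True),
    ("citizen", False, False), ("citizen", False, False),
    ("non-citizen", True, False), ("non-citizen", True, False),
    ("non-citizen", True, True), ("non-citizen", False, False),
]

stats = defaultdict(lambda: {"n": 0, "correct": 0, "fp": 0, "innocent": 0})
for group, flagged, fraud in records:
    s = stats[group]
    s["n"] += 1
    s["correct"] += flagged == fraud
    if not fraud:
        s["innocent"] += 1
        s["fp"] += flagged  # an innocent claim flagged as fraud

for group, s in stats.items():
    print(f"{group}: accuracy={s['correct'] / s['n']:.0%}, "
          f"false-positive rate={s['fp'] / s['innocent']:.0%}")

# citizen: accuracy=100%, false-positive rate=0%
# non-citizen: accuracy=50%, false-positive rate=67%
```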
The tax authority’s algorithm evaded such scrutiny: it was an opaque black box, with no transparency into its inner workings. For those affected, it could be nigh impossible to tell exactly why they had been flagged. And they lacked any sort of due process or recourse to fall back upon.

“The government had more faith in its flawed algorithm than in its own citizens, and the civil servants working on the files simply divested themselves of moral and legal responsibility by pointing to the algorithm,” says Nathalie Smuha, a technology legal scholar at KU Leuven, in Belgium.

As the dust settles, it’s clear that the affair will do little to halt the spread of AI in governments: 60 countries already have national AI initiatives.
Private-sector companies no doubt see opportunity in helping the public sector. For all of them, the tale of the Dutch algorithm, deployed in an E.U. country with strong regulations, rule of law, and relatively accountable institutions, serves as a warning. “If even within these favorable circumstances, such a dangerously erroneous system can be deployed over such a long time frame, one has to worry about what the situation is like in other, less regulated jurisdictions,” says Lewin Schmitt, a predoctoral policy researcher at the Institut Barcelona d’Estudis Internacionals, in Spain.

So, what might stop future wayward AI implementations from causing harm?