More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech
readinglist | |
---|---|
author | Broussard |
summary | Argues that racism, sexism, and ableism in AI are not glitches but are in fact built into its foundation. |
status | read |
Notes
Introduction
-
when making more equitable tech, people start with “fairness”; while a step in the right direction, it is not a large enough one
-
consider two kids splitting a cookie; the mathematically fair answer is to split it in half, but that plan falls apart in the real world, as one half will nearly always be larger than the other
-
this situation can be a springboard to further negotiation; maybe they agree that whoever takes the smaller piece gets to pick what to watch on TV
-
there is a key distinction here between social fairness and mathematical fairness, and computers can only calculate the latter
-
people continue to insist on using technology to solve more problems because of technochauvinism, a kind of bias that considers computational solutions superior to all other solutions
-
this bias contains an a priori assumption that computers are better than humans; this is really a claim that the people who make and program computers are better than other humans
-
technochauvinism is usually accompanied by equally bogus notions like “computers make neutral decisions because their decisions are based on math”
-
this is patently untrue, as computers constantly fail at making social decisions
-
“The next time you run into a person who insists unnecessarily on using technology to solve a complex social problem, please tell them about cookie division and the difference between social and mathematical fairness.”
-
equality is not the same as equity or justice
-
we cannot adequately address the shortcomings of our algorithmic systems until we acknowledge that racism, sexism, and ableism are not glitches; glitches are temporary and inconsequential, but these biases are baked into the very core of these systems
-
look into: Safiya Noble and Ruha Benjamin
-
sometimes we can make the tech less discriminatory; sometimes we can't and shouldn't use it at all; sometimes the solution is somewhere in between
-
consider the case of the racist soap dispenser, which first reached public prominence in a 2017 viral video; a dark-skinned man and a light-skinned man both tried to use an automatic soap dispenser; the dispenser refused to work for the dark-skinned man until he covered his hand with a white paper towel, demonstrating that the dispenser only responds to light colors
-
every kind of sensor technology, from facial recognition to automatic faucets, is similarly discriminatory
-
this problem goes back to film tech; until the 1970s, Kodak tuned its film-development machines using Shirley cards, which contained an image of a light-skinned woman surrounded by bright primary colors
-
Kodak only added darker skin colors to the Shirley cards after furniture manufacturers complained that their walnut and mahogany furniture looked muddy in color photographs; in other words, rendering of darker skin tones improved only as a side effect of an unrelated decision, and only because Kodak stood to lose money from corporate clients
-
we need to start by recognizing the role that unconscious bias plays in the technological world; it seems most likely that the soap dispenser (for example) was designed by a small, homogeneous group of light-skinned people who tested it on themselves and assumed that it would work similarly for everyone else
-
“They probably thought, like many engineers, that because they were using sensors and math and electricity, they were making something 'neutral'. They were wrong.”
-
quoting Nikole Hannah-Jones: “Black Americans are amongst the most astute political and social observers of American power because our survival has and still depends on it.”
-
look into: Artificial Unintelligence (Broussard)
-
while many computer scientists have come around to the idea of making tech “more ethical” or “fairer”, this is not enough; we need to audit all of our tech to find out how it is racist, sexist, or ableist
-
here's something I hadn't thought about: if a city moves its public alert system to social media, then it is cutting off access for those who are Blind or who lack access to technology for whatever other reason
-
“We should not cede control of essential civic functions to these tech systems, nor should we claim they are “better” or “more innovative” until and unless those technical systems work for every person regardless of skin color, class, age, gender, and ability.”
-
we can start by improving diversity among engineering teams; Google's annual diversity report showed that only 3% of its employees were Black, 2% of its new hires that year were Black women, and Black, Latinx, and Native American employees left the company at the highest rates; this is typical of major tech companies (indeed, Google is among the best of them)
-
instead of technochauvinism, we need to use the right tool for the task, whether or not it's a computer
-
this brings to mind Dr. Ali's comments on education and creativity (or more precisely, my rebuttal)
-
look into: Algorithms of Oppression (Noble)
Understanding Machine Bias
-
real AI, the type we have and use every day, is narrow AI
-
“Lots of people like to imagine that narrow AI is a path to general AI, but it is not.”
-
we'll revisit this later
-
AI has several subfields (machine learning, expert systems, natural language generation, natural language processing); the most popular subfield is machine learning and its subfields deep learning and neural nets
-
presumably most of the dinguses covered in the “Tech Bro AI Cult” episode of BtB are not actually involved in research, so they wouldn't be so aware that machine learning is just a mathematical model for predicting patterns in historical data; maybe some do but are too high on their own supply
-
machine learning models are often described as black boxes to abstract away the mathematical features for the purpose of conversation
-
the version of this I really hate is when people say that even those building the models don't understand how they work; that is a dangerous misstatement of the phenomenon described here, because it suggests a life to the models that simply isn't there
-
catch me skimming the section on basic mathematical modeling
-
“The stories we make up to explain mathematical patterns often have a ring of truth. We imagine that a story is truer because it is supported by a mathematical diagram.”
-
“The ability to visualize is also limited by physical constraints. If you are Blind or have low vision, for example, it might be harder to understand the shapes I've shown above. If you are listening to this text instead of reading, you likewise won't see them on the page and will have to create them in your mind.”
-
ownership of a model carries considerable implicit power; a danger of large models is that they contain millions of data points and associations, and nobody is checking each point for errors
-
“Data, model, prediction, math. That's the core of machine learning. It's not magic. It is an impressive human achievement, and the math underneath it is beautiful, but it is not magic. It also does not have any larger transcendental meaning. We are not entering into a new phase of human evolution because we can do more math with machines.”
-
God, it's so painful seeing how many people are lost in the sauce over this
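-
(my own toy illustration, not from the book) the whole “data, model, prediction, math” loop fits in a few lines; assuming scikit-learn, with a fabricated dataset:

```python
# Minimal sketch of "data, model, prediction": a model is a function fit to
# patterns in historical data, then applied to new cases. Data is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Data": 200 historical cases with two numeric features and a known outcome.
X_history = rng.normal(size=(200, 2))
y_history = (X_history[:, 0] + 0.5 * X_history[:, 1] > 0).astype(int)

# "Model": parameters that summarize the historical pattern.
model = LogisticRegression().fit(X_history, y_history)

# "Prediction": the same pattern applied to new cases. No magic, just
# arithmetic on the learned coefficients printed below.
X_new = rng.normal(size=(3, 2))
print(model.predict(X_new))
print(model.coef_, model.intercept_)
```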
-
the machines building the models and the data used to train them are both produced out of a specific human context, which leads to flaws in those models
-
look into: critical race and digital studies
-
everyone uses cognitive shortcuts, especially in our increasingly complex world; the problem is that these shortcuts often contain bias
-
this is the “don't look for zebras” approach to troubleshooting and medical diagnosis, but it falls apart especially hard in social decision making, where those shortcuts are typically based in problematic assumptions
-
there will always be edge cases, but how a software developer defines edge cases is heavily dependent on the developer's own experience; this is why tech is overwhelmingly optimized for able-bodied, white cis men — that's what most of those developers are
-
human systems are (often) flexible enough to deal with edge cases, but computers are not; and when entire groups of people are considered edge cases…
-
consider Olsen's words in Line Goes Up, where he points out that few systems benefit from the strictest possible enforcement
-
Broussard highlights The Markup, ProPublica, and the Wall Street Journal as the news organizations best known for in-depth algorithmic accountability reporting
-
the structural biases embedded into AI can be hard to understand or contest because of the mathematical complexity at play
-
mathematicians have increasingly spoken out against collaborating with the police in the wake of the murders of George Floyd, Breonna Taylor, Tony McDade, and others
-
race is socially constructed, distinct from ethnicity and genetics/epigenetics; today's “racial groups” are descended from those manufactured in the 1400s to justify slavery
-
scientific racism was codified by Carl Linnaeus, who crafted hierarchical groupings of humans based on continent of origin and also developed our system of biological taxonomy; his ideas on taxonomy were so useful that his hierarchical classification of humanity was also accepted as fact
-
technochauvinism functions similarly: engineers create tech systems that are so useful that their problematic ideas about society are overlooked; race is a social construct but ends up embedded in computational systems as if it were scientific fact
-
that said, we shouldn't discard the concept of race entirely; it is often necessary to track racial statistics to ensure equitable access for different groups of people
-
there's a certain type of person who lodges the disingenuous claim that antiracists are the people who are really “obsessed with race”
-
quoting Audre Lorde: “The master's tools will never dismantle the master's house. They may allow us temporarily to beat him at his own game, but they will never enable us to bring about genuine change.”
-
“Likewise, we cannot depend on computational solutions alone (including social media campaigns) to achieve lasting social change. Computers are important tools in the fight for social justice. People are the ones who power change.”
-
the reason algorithmic systems act in racist ways is that they are trained on historical data, which is itself often reflective of racist policy; home loans, for example, have historically been denied to people of color much more often than to white people, so a model trained on this data will perpetuate this bias
-
look into: How to Be an Antiracist (Kendi)
-
quoting Angela Davis: “In a racist society it is not enough to be non-racist, we must be anti-racist.”
-
antiracism means challenging the systems of oppression in our world while building new technology that avoids reproducing that oppression
-
let's not use technology when it's not the right tool for the job
-
look into: Shape (Ellenberg)
-
quoting Ellenberg: “If you're finding it hard to imagine what a fourteen-dimensional landscape looks like, I recommend following the advice of Geoffrey Hinton, one of the founders of the modern theory of neural nets: 'Visualize a 3-space and say 'fourteen' to yourself very loudly. Everyone does it.'”
-
look into: Democracy's Detectives (Hamilton)
-
look into: “Large Numbers of Loan Applications Get Denied” (Harney)
-
look into: “Math Boycotts Police” (Aougab et al)
-
look into: “Why Hundreds of Mathematicians Are Boycotting Predictive Policing” (Linder)
-
look into: Sister Outsider (Lorde)
-
look into: “The Secret Bias in Mortgage Approval Algorithms” (Martinez and Kirchner)
Recognizing Bias in Facial Recognition
-
January 2020: Robert Julian-Borchak Williams received a call from Detroit police asking him to turn himself in; having committed no crimes (and figuring it was a prank call), he told them to pick him up at home, which they did
-
it turned out he had been incorrectly flagged by a facial recognition application used by Detroit police in connection with a shoplifting incident in another part of town
-
there are so many red flags with this software: it was sold by South Carolina firm DataWorks Plus (founded in 2000) but was developed by outside vendors and first sold in 2005
-
the software took in a photo of the suspect, set markers between facial features, and compared those markers against the State Network of Agency Photos (SNAP) database
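-
Broussard doesn't describe the vendor's actual algorithm, so here's a generic nearest-neighbor sketch of how “markers”/embedding comparison against a photo database tends to work; every vector, name, and threshold below is invented for illustration:

```python
# Generic face-matching sketch: represent each face as a numeric vector, then
# return the closest database entry under a distance threshold. This is NOT
# DataWorks Plus's method; all values are made up.
import numpy as np

def best_match(probe, db, threshold=0.6):
    """Return (person_id, distance) for the nearest database face, or None."""
    best_id, best_dist = None, float("inf")
    for person_id, vec in db.items():
        dist = float(np.linalg.norm(probe - vec))  # distance between embeddings
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return (best_id, best_dist) if best_dist <= threshold else None

# A grainy surveillance still yields a noisy probe vector; with ~40 million
# inconsistent reference photos, the "nearest" face can easily be the wrong person.
db = {"person_a": np.array([0.1, 0.9, 0.3]), "person_b": np.array([0.2, 0.8, 0.4])}
probe = np.array([0.25, 0.85, 0.35])
print(best_match(probe, db))
```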
-
the SNAP DB is problematic as a source of references in its own right: it contains mug shots, sex offender registry photos, driver's license photos, and state ID photos, around 40 million images total in 2017; there is no way those photos are consistent enough to provide accurate references, and with only one or maybe two photos per person, I can't imagine the software would be good at matching them, to say nothing of how underrepresented Black people are in training datasets (not to mention the effects of aging)
-
facial recognition is known to work better on people with light skin than people with dark skin, better on men than women, and to routinely misgender trans, nonbinary, or gender-nonconforming people; despite these shortcomings, it is often deployed against Black and Brown communities
-
it must be emphasized that this was a failing at multiple points: a Michigan State Police image examiner fed a poor-quality surveillance image into the software, which incorrectly flagged Williams as a potential match, and the human staff who were supposed to verify the results decided it was a close enough match to arrest Williams
-
additionally, the software only checked the photo against Michigan residents, assuming that only state residents would commit crimes within state boundaries; this is the kind of blind spot that pops up as a result of technochauvinism
-
law enforcement does not have a good track record with high-tech tools (I would argue they have a bad track record with any tech), and DataWorks in particular has had few successes since its founding in 2000
-
Detroit police policy dictates that the computer match must be validated by a human checker and supervisor; while a good idea in theory, it has a number of pitfalls (garden-variety human bias, boredom, workload, lack of incentive to call foul on questionable results, unwavering faith in the software)
-
the National Association of Shoplifting Prevention (no doubt a lobbying organization) estimates that it costs $2,000 each time someone enters the criminal justice system; this figure establishes a reasonable floor for deciding which incidents of shoplifting are worth prosecuting
-
as retailers move to automated checkout stations, they have unwittingly made shoplifting easier; in turn they pressure the state to spend more public money on surveillance and prosecution to protect their profits
-
Detroit PD had invested millions in surveillance technology and were looking for ways to use it; even if it didn't work, they wanted to justify that expense
-
look into: Detroit PD Project Green Light
-
“Such faith in computational results is low stakes when Google Maps or Waze suggests getting off the highway and driving on surface streets when you are trying to get to the airport. But it is high stakes when it's wildly expensive, involves the risk of personal danger in jail, and is going to affect the rest of someone's life.”
-
that's a good metric for deciding when to use AI to solve a given problem
-
at every step leading to Williams's arrest, the humans involved could have chosen not to pursue this extremely tenuous lead, but they were all predisposed to believe that a Black man would be a thief
-
when shown the surveillance photo, Williams himself laughed at the utter lack of resemblance; in particular, the suspect in the surveillance photo was wearing a Cardinals cap, and Williams did not follow baseball
-
despite their begrudging admission of this mistake, the Detroit PD still held him for several more hours
-
this demonstrates why simply “not being racist” is insufficient
-
“A nonracist person says, 'I'm not racist,' and continues on as before, claiming to be neutral on topics of race.”
-
quoting Kendi: “[A] not-racist is a racist who is in denial, and an antiracist is someone who is willing to admit the times in which they are being racist, and who is willing to recognize the inequities and the racial problems of our society, and who is willing to challenge those racial inequities by challenging policy. You have to be willing to admit you were wrong.”
-
the macho bro cultures of Silicon Valley and American law enforcement do not value vulnerability; their only choice is to double down on their mistakes
-
the public understanding of facial recognition tech as fundamentally flawed was sparked by the graduate work of Dr. Joy Buolamwini; inspired by Do Androids Dream of Electric Sheep and Anansi the spider, she sought to build a mirror utilizing facial recognition to overlay positive affirmations on the user's face; it immediately failed to recognize her face until she covered it with a simple white mask
-
judging by the description Broussard gives, it's probably a Guy Fawkes mask
-
white people often reply to reports of such incidents with the unhelpful “I don't know why it's not working for you”; Buolamwini filmed her situation, incontrovertibly proving the problem
-
look into: Weapons of Math Destruction (O'Neil)
-
together with Timnit Gebru, she wrote a paper called “Gender Shades”, which proved that existing facial recognition tech is biased due to inadequate training data; they also composed a new training dataset using photos of international politicians
-
I have a copy of this paper both in print and in the file server
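-
the paper's key move is reporting error rates per intersectional subgroup instead of one overall number; a toy version of that audit (all records fabricated, pandas assumed):

```python
# Toy "Gender Shades"-style audit: disaggregate accuracy by subgroup rather
# than reporting a single overall number. The records below are fabricated.
import pandas as pd

results = pd.DataFrame({
    "skin_type": ["lighter", "lighter", "darker", "darker", "darker", "lighter"],
    "gender":    ["male", "female", "male", "female", "female", "female"],
    "correct":   [1, 1, 1, 0, 0, 1],  # 1 = classifier identified this face correctly
})

# The overall number hides the disparity...
print("overall accuracy:", results["correct"].mean())

# ...disaggregating exposes it (in the real audit, darker-skinned women fared worst).
print(results.groupby(["skin_type", "gender"])["correct"].mean())
```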
-
it's tempting to consider this problem solved, but making FRT better only enables its further use to oppress marginalized communities; the socially just answer is to not use FRT in policing, where it will be used disproportionately against those communities
-
even Big Tech firms had to respond to “Gender Shades”; Microsoft and IBM halted development of FRT for law enforcement, and Amazon paused their development first for a year and then “indefinitely”
-
concerning that they described it as an “infinite pause” as opposed to an outright halt
-
a 2019 NIST replication study confirmed the original result, additionally finding that existing FRT systems misidentified Black and Asian faces 10–100 times more often than white faces, that Native Americans had the highest rate of errors, and that older adults were misidentified 10 times more often than middle-aged adults
-
a July 2021 audit by the GAO found that half of the federal law enforcement agencies use FRT; of the 17 systems purchased by federal agencies, only four were in use at only three agencies (BOP, CBP, and FBI) by March 2020
-
even as several localities have moved to ban FRT usage, convenient loopholes still exist; a LEO barred from using FRT themself can ask a colleague at another agency to do it for them
-
further, the FBI's FACES system has agreements with several states granting it access to their FRT systems
-
of the 14 agencies that reported using nonfederal systems, 13 do not have a way of tracking what employees are doing with those systems
-
of the 20 agencies that claimed to use FRT, the majority used it for criminal investigations and surveillance; 6 agencies claimed they had used it to “support criminal investigations related to civil unrest, riots, or protests”, even though the majority of those protests were legal and peaceful
-
FRT played a much-hyped role in identifying participants in the January 6 insurrection…because those dipshits had terrible opsec and publicly posted pictures of their unmasked faces; it's not a high bar
-
hilariously, one dude was uncovered by a woman using Bumble to find these idiots
-
of the 17 cities with bans on FRT, 6 have loopholes allowing police to use it anyway (and the police there are using those loopholes); a wholesale ban is absolutely necessary
-
Williams was the first widely reported wrongful arrest tied to FRT misidentification, but other cases have followed
-
Detroit man Michael Oliver was arrested and charged with felony larceny (a spurious charge without the FRT BS); despite obvious differences between Oliver and the video subject (skin tone, hair length, lots of very visible tattoos), the case proceeded long enough for him to need a lawyer
-
New Jersey resident Nijeer Parks was charged with a wild attempted shoplifting/crash-and-run 30 miles away
-
“Facial recognition software is far less effective in policing than anyone imagines. Its big successes have been in identifying a couple of shoplifters. I'm not too worried about the public safety menace of shoplifting.”
-
look into: USGAO report “Facial Recognition Technology”
Machine Fairness and the Justice System
-
companies are selling predictive policing software on the promise that it can predict where crime will happen and who will commit it so that police can intervene in advance
-
consider the harrowing case of Robert McDaniel; police knocked on his door in 2013 and claimed that a computational model had predicted his involvement in a future shooting, though the model didn't say whether he was going to be perpetrator or victim
-
it's fortune cookie-tier shit!
-
he declined their “assistance”, but they continued to appear on his doorstep; the frequent police presence at his house fuelled rumors in the neighborhood that McDaniel was an informant, and he was shot one evening in a dark alley; he recovered, continuing to dismiss the cops, and was shot again a couple of months later
-
the irony of the situation is that the police attention prompted by the model is what actually caused him to be shot; he was tagged by the algorithm for living in a poor, mostly-Black neighborhood, all in the name of “safety”
-
there are two major flavors of predpol: person-based systems create an identity profile of who will commit or be the victim of a crime based on past crimes, and local police identify people in the community who match that profile; place-based systems try to forecast the place and time of a possible future crime, and police are dispatched to that area around the predicted time
-
recall that, as with the loan models discussed two chapters ago, there are biased patterns in who was arrested for what in the past
-
also, as will be relevant throughout, the training data is based on arrest data as opposed to actual crime or conviction data
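-
a sketch of why that matters (my own toy example, not Broussard's): train a model on historical decisions that penalized one group and it reproduces the penalty for otherwise-identical people; assumes scikit-learn, all numbers invented:

```python
# A model trained on biased historical decisions learns the bias. Past
# "approvals" here depend partly on group membership (a stand-in for redlined
# loan decisions or racialized arrest records); the fitted model then scores
# identical applicants differently by group. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, size=n)        # 0 = historically favored, 1 = disfavored
income = rng.normal(50, 10, size=n)       # an ostensibly "legitimate" feature
# Historical label: the same income was treated differently by group.
approved = (income - 8 * group + rng.normal(0, 5, size=n) > 45).astype(int)

model = LogisticRegression().fit(np.column_stack([income, group]), approved)

# Two applicants identical in every respect except group membership:
print(model.predict_proba([[50, 0], [50, 1]])[:, 1])  # the disfavored one scores lower
```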
-
predpol is intellectually descended from broken windows theory, which posits that the appearance of disorder in an area leads to an increase in crime; this theory was introduced in 1982 and gained widespread popularity among American police forces throughout the 90s
-
as NYC police commissioner, William Bratton launched CompStat, and it was adopted across the country (and world) over the following decades; its reliance on crime statistics as a performance metric trained police and bureaucrats to prioritize quantification over accountability; it fostered a belief in these quantified metrics as “objective” and “neutral”
-
think phrenology with a new coat of paint
-
look into: Predict and Surveil (Brayne)
-
thus was technochauvinism integrated into American policing and used to justify further surveillance and harassment in communities that were already overpoliced
-
and overpolicing an area leads to a higher arrest rate in that area, which leads to greater perceived “danger”/“unrest” there, which leads to more policing, rinse and repeat
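-
this is exactly the loop the “Runaway Feedback Loops” paper (cited later in this chapter) formalizes; a toy urn-style simulation under assumptions I picked, with two districts that have identical true crime rates:

```python
# Toy feedback loop: two districts with IDENTICAL underlying crime. Patrols go
# where past arrests point, and arrests only happen where patrols go, so the
# patrol split drifts away from 50/50 and never self-corrects. Numbers are mine.
import numpy as np

rng = np.random.default_rng(2)
weights = np.array([1.0, 1.0])           # "danger" belief per district, initially equal
true_crime_rate = np.array([0.1, 0.1])   # identical underlying crime

for day in range(5000):
    district = rng.choice(2, p=weights / weights.sum())  # patrol where the data points
    if rng.random() < true_crime_rate[district]:         # you find crime where you look...
        weights[district] += 1.0                         # ...which reinforces the belief

print(weights / weights.sum())  # usually a lopsided split, despite identical crime rates
```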
-
look into: Black Software (McIlwain)
-
the idea of software as a solution to crime has its roots as far back as the dawn of digital computing
-
“[then-head of IBM Thomas] Watson wanted to sell computers and software, so he offered his company's computational expertise for an area that he knew nothing about, in order to solve a social problem that he didn't understand using tools that the social problem experts didn't understand. He succeeded, and it set up a dynamic between Big Tech and policing that still persists.”
-
major players in the predpol space include Palantir, Clearview AI, and PredPol; their deeply faulty products are purchased with public money and end up worsening the lives of that public
-
that Palantir is on this list is further evidence that libertarians lack ideological consistency; of course, it's just as likely that Thiel figures (perhaps not wrongly) that he's beyond the reach of such solutions - as Broussard put it earlier, ownership of a model confers a great deal of power
-
“Context matters, and so does the exact implementation of technology, as well as the people who use it.” (emphasis mine)
-
those first two words are so important, especially as a foil to the Ben Shapiros of the world
-
example: automatic license plate readers for automatic toll collection (with short data retention) versus for dragnet surveillance (with indefinite data retention)
-
in 2011 the Pasco County Sheriff's Office in Florida created a watchlist of people it considered future criminals; the department then sent deputies to monitor and harass the people on the list, often without probable cause, search warrants, or evidence of a crime; unsurprisingly, a large percentage of those on the list were BIPOC
-
this project was the brainchild of sheriff Chris Nocco, who gathered powerful allies in local politics, and it operated below public awareness for 10 years
-
the program also created a list of schoolchildren it considered likely to become future criminals using protected data like student grades, school attendance records, and child welfare histories; parents and teachers were not told that children were designated as future criminals, and even the local superintendent was for a time unaware that the police had access to student data
-
this is so fucked; I'm reminded of Murder, Inc.'s “Mr. President”: “when a dealer gets busted / where the fuck the money go? / do you use it to build a big shelter for the homeless / or do you just consider it your own shit?”; really the whole track demonstrates an awareness of the connection between poverty (also heavily linked to school performance) and crime far beyond this
-
the resulting outcry, lawsuits, and federal DoE investigation have barred police analysts from accessing student grades in the future
-
many people believe that using more tech will make things “fairer”, including the idea of using machines instead of human judges
-
what a terrifying concept; see again Line Goes Up
-
further complicating this situation is that many of the people involved in the chain are not malevolent
-
modern American policing has its roots in 18th-century Charleston as a slave patrol
-
look into: Dark Matters: Surveillance of Blackness (Browne)
-
the legacy of lantern laws (which required Black or mixed-race people to carry lanterns after dark if unaccompanied by a white person) lives on in the modern policy of lighting high-crime areas with police floodlights
-
how does this relate to those pop-up cop towers that are seemingly everywhere now? Who makes and sells?
-
fun fact: at least the scissor lift towers (called SkyWatch) are manufactured by Teledyne FLIR, which is based out of Wilsonville, though they don't seem to manufacture the autonomous ones typically deployed outside supermarkets
-
police reform is necessary, but it will not be found in machines
-
Broussard has served on the ACM FAccT program committee, which every year receives a handful of papers trying to make “better” recidivism algorithms to predict which people are likely to be arrested again in the near future
-
and what, I wonder, do those papers propose to do for the people identified by said algorithms? Is it not a better use of resources to identify factors leading to recidivism and focus efforts on alleviating those across the board?
-
Julia Angwin sparked algorithmic fairness studies when she proved that the COMPAS recidivism algorithm was mathematically incapable of treating white and Black people fairly
-
look into: “Machine Bias” (Angwin et al)
-
policing stats tell us the number of arrests, not the number of crimes; given that the history of American policing is lousy with racist policy and racialized disparities in enforcement, it is inevitable that computer models will conclude that Black people are more dangerous
-
hell, plenty of humans cling to this data as “proof” of that idea
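-
the impossibility result behind this (the Kleinberg et al. paper in the look-into list at the end of this chapter) can be seen with a tiny simulation of my own: a risk score that is perfectly calibrated for both groups still produces unequal false positive rates whenever the groups' base rates differ, and arrest-skewed data guarantees the base rates differ:

```python
# A calibrated risk score applied to two groups with different base rates
# necessarily yields different false positive rates. Distributions are invented.
import numpy as np

rng = np.random.default_rng(3)

def false_positive_rate(beta_a, n=200_000, threshold=0.5):
    p = rng.beta(beta_a, 2.0, size=n)   # each person's true risk
    reoffends = rng.random(n) < p       # outcome drawn from that risk
    flagged = p >= threshold            # the score IS p, so it's calibrated by construction
    # FPR: share of people who did NOT reoffend but were flagged high risk.
    return flagged[~reoffends].mean(), p.mean()

for label, shape in [("higher-base-rate group", 2.0), ("lower-base-rate group", 1.0)]:
    fpr, base = false_positive_rate(shape)
    print(f"{label}: base rate ~{base:.2f}, false positive rate ~{fpr:.2f}")
```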
-
look into: “Runaway Feedback Loops” (Ensign et al)
-
it has been established that Black and white people use drugs at equal rates, but Black people are arrested more often and receive longer sentences for drug charges than whites
-
many people fixate on crime (read: arrest) data, mistakenly assuming that this data reflects the entirety of a problem
-
look into: “Racial Segregation and the Data-Driven Society” (Richardson)
-
TIL: there exist companies called b-corps, which are hybrids of for-profit and nonprofit corporations and are supposed to be oriented toward the public good; I imagine they're as good at hybridizing these two motives as the last several Democratic presidential candidates have been at hybridizing right and left
-
one such company, Azavea, has worked on crime prediction software called HunchLab; the product attempts to predict the likelihood of various crime types occurring in different zones of a locality; these predictions are meant to be used to send police to patrol zones with likely crime
-
quoting Tawana Petty: “Pretending a thing creates safety, pouring millions of dollars into building out, and enforcing said thing, and pushing a media campaign that consistently calls the unsafe thing safety…actually makes the community less safe.”
-
an art project called the White Collar Crime Risk Zones highlights the shortcomings of the HunchLab approach; it featured an interactive map of white-collar crime in NYC; its predicted high-crime areas were centered around lower Manhattan as opposed to the Bronx (a poorer borough where most police activities are centered); the accompanying tongue-in-cheek paper is delightfully pitch perfect
-
quoting the paper: “In this paper we have presented our state-of-the-art model for predicting financial crime. By incorporating public data sources with a random forest classifier, we are able to achieve 90.12% predictive accuracy. We are confident that our model matches or exceeds industry standards for predictive policing tools…Our current model relies solely on geospatial information. It does not consider other factors which may provide additional information about the likelihood of financial criminal activity. Crucially, our model only provides an estimate of white-collar crimes for a particular region. It does not go so far as to identify which individuals [within a] region are likely to commit the financial crime. That is, all entities within high risk zones are treated as uniformly suspicious.”
-
the authors then went on to generate a composite image of the predicted white-collar criminal by averaging the photos of 7000 corporate executives whose LinkedIn profiles suggested they worked for financial organizations; the result: a smiling young white man
-
viewing the two maps together shows how racialized logic guides assumptions about police technology; as Ezekiel Dixon-Roman has noted, it is not only racialized logic but also extractive capitalist logic
-
paraphrasing Dixon-Roman: “technopolitical, sociotechnical systems target particular spaces and go after particular bodies. They impose their systems of racialized logic in order to feed their particular epistemologies of power.”
-
consider tax evasion: the IRS lacks the resources to go after the ultra wealthy, so they go after poor people who lack the resources to hire lawyers and accountants; compare this against the price of police brutality, which is both high and shouldered by taxpayers
-
compare also Broussard's twin experiences of being pulled over while riding with her Black father and with her white husband; the former stop had been tense and contentious, while the latter was much easier; and while both incidents featured expired inspection stickers, only her Black father was ticketed
-
“These experiences make us feel vulnerable, showing us how thin the membrane of the civilized world is. We are only one step away from the crazies in the white hoods burning crosses in those moments.”
-
Please don't go to Arkansas or central Alabama!
-
“White people in America don't have these kinds of moments. It's part of what makes many white people oblivious to racism in the world, and also to racism in tech.”
-
some have claimed that crime has decreased as a result of police using technology; this is simply false — crime is down overall, with or without these tools, and violent crime in Pasco County actually increased during their harassment campaign
-
the Oakland PD has spent millions on license plate reader tech since 2006; they had to implement a six-month data retention policy because they filled up an 80 GB hard drive with plate data and didn't have enough money in the budget to buy more storage
-
this is howling clown shit
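-
rough arithmetic on how absurd that is (per-record sizes are my assumptions, not reported figures):

```python
# Back-of-envelope: how many plate reads fit on the drive Oakland PD "filled up"?
drive_bytes = 80 * 10**9          # the 80 GB drive

metadata_only = 200               # assumed bytes: plate text, timestamp, GPS, camera ID
with_thumbnail = 200 + 20_000     # assumed: add a small JPEG of the plate

print(drive_bytes // metadata_only)    # ~400 million reads as bare metadata
print(drive_bytes // with_thumbnail)   # ~4 million reads if images are kept
# Either way, "we ran out of disk" was a budgeting choice, not a technical limit.
```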
-
look into: “Cops Decide to Collect Less License Plate Data” (Farivar)
-
police have been militarized by being sold or gifted military surplus; they do not need military-grade hardware for routine policing, and this often leads to situations like city cops carrying machine guns at subway stations while children commute to school; this builds a skewed perception of danger
-
look into: Thicker Than Blood: How Racial Statistics Lie (Zuberi)
-
uncritically assuming a future guided by machine learning is equivalent to saying that white supremacy is the future, because the mainstream sociopolitical subtext of “digital” is a specific kind of capitalist white supremacy
-
consider an IBM governance report: “Extensive evidence has shown that AI can embed human and societal biases and deploy them at scale. Many experts are now saying that unwanted bias might be the major barrier that prevents AI from reaching its full potential…So how do we ensure that automated decisions are less biased than human decision-making?”
-
Broussard points out that “full potential” is a very loaded concept built entirely on the imagination of a small, homogeneous group of people who have been consistently terrible at predicting the future
-
regarding the question of how to ensure automated decisions are less biased, she argues that the question implicitly asserts that computational decisions are less biased
-
I don't exactly get that out of it, but I do believe it asserts that such decisions can be less biased than humans, which remains decidedly unproven (and unlikely)
-
the problem is technochauvinist binary thinking: computers or humans; we need human checks on computational decisions, computational checks on human decisions, and additional safety nets and the flexibility to change and adapt toward a better world
-
returning to McDaniel, bureaucratic imperatives and savior complexes kept the humans involved from listening when he declined their “help”, and it was their refusal to listen that endangered his life; in his particular context, the regular appearance of marked police cars signalled risk to the community rather than public safety
-
a better resolution would have been to look more closely at the situation, examine assumptions, and chart a new path forward; adding more data to the computational system would not have helped, and in fact a computational system would never envision such a solution
-
technochauvinists cloak their opinions in claims of “science” and “data”, but as O'Neil says in Weapons of Math Destruction, algorithms are opinions implemented in code
-
having the framework of posets in my mind has influenced my social thinking in some fascinating ways; it has made me more critical of the implicit assumption that comparisons are necessarily linear; consider the claim that person A is “smarter” than person B — does this not imply that intelligence is a single axis of comparison, that person B's knowledge is entirely contained within person A's? Or consider the prospect of ranking people, say friends or partners: it's impossible to label one as “better” than the other without collapsing their unique combination of virtues and vices into a single scalar value, and doing so means making countless value judgements along the way. Or more relevant to this book, neither computational nor human decision-making is universally superior to the other
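-
a concrete version of the poset point, with made-up skill sets: under set inclusion, two people can simply be incomparable, and any “smarter than” ranking requires an arbitrary collapse to a single number:

```python
# Under set inclusion (a partial order), neither person "contains" the other,
# so there is no honest linear ranking without a scoring function that bakes in
# value judgements. The skill sets are invented.
person_a = {"algebra", "rhetoric", "carpentry"}
person_b = {"algebra", "statistics", "cooking"}

print(person_b <= person_a)   # False: B's knowledge is not contained in A's
print(person_a <= person_b)   # False: nor the reverse; they are incomparable

# Forcing a total order means choosing one of infinitely many possible collapses:
score = lambda skills: len(skills)
print(score(person_a) > score(person_b))
```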
-
“If you find yourself using a rationalization beloved of eugenicists in order to rationalize oppression, think again. This is a pretty good indicator that you are not on the side of the angels.”
-
really chew on this one — something about it isn't going down right for me; note that it must be taken in context with a preceding blockquote not reproduced here
-
I think it's less about the form of Pearson's argument than the specific ways in which his assumptions are flawed; he assumes that human features can be measured objectively on a linear scale, and he assumes that he possesses this scale
-
he is more or less arguing like Ben Shapiro: he maintains a “facts don't care about your feelings” posture while failing to notice (or ignoring) that his “facts” are completely wrong
-
OK, maybe this is it: if your counter to claims of oppression is to lean on your “facts and logic” as superior to their “feelings” (implicitly claiming that you possess all the “facts”), then you are probably on the wrong side of the argument
-
Baltimore used a form of CompStat called Citistat, originally developed to reduce employee absenteeism; it was first deployed on the Bureau of Solid Waste within Public Works in June 2000; though based on CompStat, Citistat relied on data collected in MS Excel, presentations assembled in PowerPoint, and maps generated in ArcView
-
I love how Broussard describes it as being “deployed on” workers; that sounds like journalistic shade
-
it's interesting how Baltimore first used it to monitor attendance among what I assume are the city's garbage collection workers
-
also, Excel, PowerPoint, and ArcView? Howling clown shit
-
data-driven practices can be effective in ensuring government accountability; problems arise when people start imagining that data-driven practices are appropriate to predict or solve every problem
-
look into: “Inherent Trade-Offs in the Fair Determination of Risk Scores” (Kleinberg, Mullainathan, Raghavan)
-
look into: “White Collar Crime Risk Zones” (Clifton, Lavigne, Tseng)
-
look into: “Of Techno-Ethics and Techno-Affects” (Amrute)
-
look into: “How Eugenics Shaped Statistics” (Clayton)
Real Students, Imaginary Grades
Ability and Technology
Gender Rights and Databases
Diagnosing Racism
An AI Told Me I Had Cancer
Creating Public Interest Technology
Potential Reboot
Thoughts