Aerial view of dense apartment buildings. Shutterstock


Housing discrimination goes high tech

How algorithms, ad targeting, and other new technologies threaten fair housing laws

Last year, as the U.S. celebrated the 50th anniversary of the Fair Housing Act, the landmark 1968 law that sought to ban housing discrimination, it was clear that despite decades of progress, there was still much work to be done. The homeownership rate for black Americans stood at 42.3 percent, only marginally better than the 41.6 percent recorded in 1970, and a report by the National Fair Housing Alliance (NFHA) last month found that housing discrimination cases were on the rise across the nation.

Advocates have also sounded the alarm about another emerging threat to housing equality: the rapid adoption of new technologies for selling and renting homes. As the NFHA noted in its 2019 Fair Housing Trends Report, new ways of advertising homes and apartments and evaluating tenants and potential owners, including tenant selection systems fueled by artificial intelligence (AI) and advertising that uses demographic microtargeting to zero in on a certain audience, threaten to perpetuate the systemic discrimination of the past by modern means.

“We don’t have a big enough movement around this issue,” says Lisa Rice, President and CEO of NFHA. “Many technology firms and banking institutions don’t have an understanding of how the systems they’re using can generate bias.”

Bias is a feature, not a bug

Undergirding the warnings from Rice and other housing advocates and technologists is the idea that algorithms aren’t impartial, unbiased systems that fairly sort through data. Rather, they tend to manifest the biases of their creators, and of society at large.

For instance, when screening tenant applications, an automated system may reject applicants based on unintended connections between data sets; living in a low-income neighborhood might be treated as a signal that an applicant can’t pay rent. And since modern algorithms compile and sort among myriad data sets, it can be hard for designers and programmers to pinpoint exactly which data point caused the system to reject an applicant.
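To make that mechanism concrete, here is a minimal, hypothetical sketch; the data, feature names, and model are invented for illustration and are not drawn from any real tenant-screening product. A simple model trained on skewed historical approvals learns to penalize applicants from low-income ZIP codes, a proxy that can correlate with race.

```python
# A minimal, hypothetical sketch of how a proxy variable can drive rejections.
# The data, features, and model are invented; they are not drawn from any
# real tenant-screening product.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Invented "historical" records: flag is 1 if the applicant lives in a
# low-income ZIP code; credit scores are lower on average in those ZIPs.
low_income_zip = rng.integers(0, 2, size=n)
credit_score = rng.normal(680 - 40 * low_income_zip, 50, size=n)

# Past approvals were skewed: even creditworthy applicants from low-income
# ZIP codes were sometimes turned away.
approved = ((credit_score > 650) &
            (rng.random(n) > 0.3 * low_income_zip)).astype(int)

# A model trained on those records absorbs the pattern.
model = LogisticRegression(max_iter=1000)
model.fit(np.column_stack([low_income_zip, credit_score]), approved)

# The learned weight on the ZIP-code flag comes out negative: otherwise-
# identical applicants score worse simply for where they live, a proxy that
# can correlate with race.
print("weight on low-income ZIP flag:", round(model.coef_[0][0], 3))
print("weight on credit score:       ", round(model.coef_[0][1], 3))
```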

This is not merely an abstract fear. Research from a team at UC Berkeley released last month found that lenders using algorithms to price loans have discriminated against borrowers of color, resulting in a collective overcharge of $765 million each year for home and refinance loans. The analysis of roughly 7 million 30-year mortgages also found that both in-person and online lenders rejected a total of 1.3 million creditworthy applicants of color between 2008 and 2015.

“All technological systems will manifest discrimination,” says Rice. “The bias is baked into the data.”

The problem stems from what Meredith Broussard, an NYU professor, software developer, journalist, and author of Artificial Unintelligence, labels technochauvinism, the idea that technology is inherently better than human intelligence.

Startups and programmers see housing as just another industry ripe to be revolutionized by technology, according to Broussard. Employing new methods like machine learning and artificial intelligence can make processes such as sorting through tenant applications faster, more efficient, and cheaper.

The problem, she says, is that when you try to build an automated system that solves social problems, you end up creating something that looks at the data of the past and “learns the sins of the past.” It’s why many advocates believe the answer to this unconscious bias is to change the way these new systems are designed in the first place.

“People in software feel like they don’t have to be aware of existing rules, laws, and policies,” says Broussard. “It’s the idea that a computer can be more objective than a human. But not if it’s drawing from existing data that holds existing biases.”

Targeting some, excluding others

One of the more high-profile examples of technology creating new types of housing discrimination arose from online advertising. Facebook has been cited numerous times by the ACLU and other advocacy groups for its microtargeting feature, which lets advertisers send ads to specific groups via a drop-down menu of categories, including age, race, marital status, and disability status.

Since 2016, ProPublica has published a series of stories demonstrating how its reporters were able to purchase and publish ads on Facebook that discriminated against racial groups and other categories protected by the Fair Housing Act. Facebook has since apologized and restricted targeting capabilities for housing ads. Earlier this month, as part of a settlement with the ACLU and other groups that had filed a lawsuit, Facebook said that housing, employment, and credit ads can no longer be targeted based on age, gender, ZIP code, or multicultural affinity, and that the social network would maintain a searchable ad library so civil rights groups and journalists can keep tabs on future housing advertisements.

Rice says Facebook’s problems with discriminatory housing ads exemplify how technology experts often aren’t trained on civil rights issues. They “design systems in a vacuum without a full understanding of how they’re going to impact our society,” she says.

While the social network has removed many of the features that allowed discriminatory posts, the issue is far from over. Other tech giants, including Google and Twitter, have been investigated by the Department of Housing and Urban Development (HUD) for similar issues. And the nature of these social and ad networks can also lead to unintentional targeting, says Rice.

For example, many of these systems allow for lookalike audience targeting, a feature that can, say, help a clothing company target consumers similar to those who already like or follow a brand. Carry that over to the housing world, and it could help a high-end apartment developer target potential renters who are similar to existing tenants—in effect concentrating on the same kinds of renters who already live in the building, and potentially excluding others.

“If your existing pool is homogeneous, you’ll end up with skewed results,” says Rice.
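Here is a toy sketch of how that concentration can happen, assuming nothing about any real ad platform’s implementation; the user profiles and similarity scoring are invented for illustration. Candidates are ranked by similarity to a homogeneous seed audience, and the resulting “lookalike” audience mirrors the seed.

```python
# A toy sketch of "lookalike" expansion; all profiles and scoring here are
# invented for illustration and assume nothing about any real ad platform.
import numpy as np

rng = np.random.default_rng(1)

def profiles(n, income_mean, age_mean):
    # Invented two-feature user profiles: (income in $k, age).
    return np.column_stack([rng.normal(income_mean, 10, n),
                            rng.normal(age_mean, 5, n)])

def unit(x):
    # Scale vectors to unit length so they can be compared by cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

seed = profiles(50, income_mean=150, age_mean=30)  # homogeneous existing tenants
pool = np.vstack([profiles(500, 150, 30),          # candidates like the tenants
                  profiles(500, 60, 45)])          # candidates unlike them

# Score every candidate by similarity to the seed audience's average profile,
# then keep the 100 "most similar" users as the expanded audience.
scores = unit(pool) @ unit(seed.mean(axis=0))
lookalike = pool[np.argsort(scores)[-100:]]

print("seed mean profile:     ", seed.mean(axis=0).round(1))
print("lookalike mean profile:", lookalike.mean(axis=0).round(1))
# The expanded audience clusters around the existing tenants' demographics,
# so groups unlike them are effectively screened out of the ad's reach.
```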

How AI can skew applications

Another area where the increased use of technology raises discrimination concerns is the application process, whether for a tenant seeking an apartment or a buyer seeking a home loan. While studies of the lending market have found that loan rejections have decreased overall in recent years, stark racial disparities remain: A recent LendingTree study found the home loan denial rate for black Americans, 17.4 percent, was more than double the 7.9 percent rate for non-Hispanic whites.

While the use of AI and sophisticated algorithms in screening is relatively new, the field is already generating legal precedent. In March 2019, a federal district court in Connecticut ruled that CoreLogic, a tenant screening company that uses data mining to evaluate potential renters, could be held liable for discrimination claims brought under the Fair Housing Act; the suit alleged that CoreLogic’s system took arrest records, disability, race, and national origin into account, factors that are illegal to consider under the law.

In August, HUD added to the fears of fair housing advocates when it issued a proposed rule stating that landlords, lenders, and property sellers who use third-party machine learning algorithms to evaluate applicants can’t be held liable for discrimination that results from those algorithms.

It’s important to grapple with the potential for bias in these systems now, at a time when the industry is making a large-scale shift toward automation. Nearly half of the 2,100 lenders evaluated in the Berkeley study used some form of online or app-based application. And while a recent analysis from the National Bureau of Economic Research suggests that computer loan evaluations can be fairer than in-person evaluations, the NFHA and others believe that even more needs to be done to level the playing field.

How can we tame this technology?

One step toward changing how these algorithms work could be changing who designs them. Tech firms have long struggled with diversity in hiring, and Rice and others argue that the potential for unconscious bias is another reminder of the need to diversify the tech workforce. She says some financial firms, including Quicken Loans and Freddie Mac, have kept this top of mind, but more needs to be done.

Rice argues that advocates within fair housing and technology need to educate programmers and others about how bias manifests itself in these systems, while also designing technology that includes what she calls “discriminatory flares or bias signals”: built-in checks that evaluate how systems are performing and whether they may be producing biased outcomes.
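One hypothetical form such a built-in check could take (a sketch, not Rice’s or any vendor’s actual method) is an automated audit that compares approval rates across groups after a screening model runs and raises a flag when the ratio falls below the four-fifths threshold commonly used in disparate-impact analysis.

```python
# A hypothetical "bias signal": after a batch of automated decisions, compare
# approval rates across groups and raise a flag if the worst-off group's rate
# falls below four-fifths of the best-off group's rate. The groups and numbers
# below are invented for illustration.
from collections import defaultdict

def disparate_impact_check(decisions, threshold=0.8):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio < threshold  # (impact ratio, flag raised?)

# Hypothetical audit: group_a approved 80% of the time, group_b only 50%.
batch = ([("group_a", True)] * 80 + [("group_a", False)] * 20
         + [("group_b", True)] * 50 + [("group_b", False)] * 50)
ratio, flagged = disparate_impact_check(batch)
print(f"impact ratio: {ratio:.2f}, bias flag raised: {flagged}")  # 0.62, True
```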

Larger legal remedies may also be afoot. The House Financial Services Committee has been looking into the issue and held a hearing in July, and some advocates have raised the idea of revamping the Communications Decency Act, which governs the behavior of tech firms and social networks, to create more specific rules around this type of bias and discrimination.

Broussard says that a big part of the solution should be keeping humans within the system. Housing can be so foundational to achievement, household wealth, and equality, she says, that some things shouldn’t be left to machines.

“There’s this idea that math is just better than humans,” she says. “There’s a difference between mathematical fairness and social fairness. We should design for justice instead.”
