Impact Portfolio
Take a look inside the lab. Our impact portfolio summarizes key projects in progress, categorized by theme:
Building Fair & Equitable Artificial Intelligence (AI)
Understanding & Reducing Discrimination in AI and Human Decision-Making
Using AI to Address Learner Variability
Creating Pathways to Economic Mobility
Building Fair & Equitable Artificial Intelligence (AI)
-
Housing is a critical pathway to economic mobility, but discrimination is widespread: landlords systematically discriminate on the basis of race, source of income (e.g., voucher holders), and criminal background.
At the same time, many landlords have turned to AI to screen applicants, which could reduce bias — or exacerbate it. Optimizing the use of AI to satisfy the business needs of landlords while reducing discrimination and increasing opportunities for traditionally-excluded tenants requires a multi-disciplinary approach. We combine tools from computer science (machine learning), economics (causal inference), and behavioral science (information interpretation) with qualitative interviews to develop a new algorithm with novel formulations of fairness constraints based on our academic research.
Our resulting work, which aims to support HUD’s mission to affirmatively further fair housing, will be scaled across the largest provider of housing listings for low-income families in the U.S.
-
Algorithmic pricing has become widespread in housing markets and elsewhere, which has contributed to rising rents. Research has shown that, even without direct intervention, algorithms can collude to artificially inflate prices.
This is not hypothetical: one of the largest algorithmic rental pricing companies in the country was sued by the Department of Justice because its software aimed to “decrease competition among landlords in apartment pricing and to monopolize the market for commercial revenue management software that landlords use to price apartments.”
We aim to build an adversarial algorithm to combat anti-competitive pricing schemes in housing markets. HUD requires all Public Housing Authorities (PHAs) to conduct “rent reasonableness” checks, but PHAs struggle to keep up with sophisticated, dynamic pricing schemes. We are building a dynamic pricing algorithm that estimates landlords’ reserve prices. It balances “exploiting” existing statistical relationships between rental prices and covariates such as market, unit, and applicant characteristics with an “exploration” component that varies a small subset of the estimated values to account for change over time and place, so that fraud and market imperfections do not inflate estimates above what landlords are willing to accept. We will implement this algorithm with PHAs to measure the resulting savings to PHAs and renters.
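The explore/exploit logic above can be sketched roughly as an epsilon-greedy estimator. This is a minimal illustration, not the lab's actual specification: the feature names, the linear toy model, and the parameter values are all assumptions.

```python
import random

def propose_rent(unit_features, predict, epsilon=0.05, jitter=0.10, rng=random):
    """Estimate a reasonable rent for one unit.

    Exploit: use the fitted relationship between unit covariates and
    observed rents (predict). Explore: for a small fraction of units,
    perturb the estimate within +/- jitter so the data keep informing
    the model as markets change over time and place.
    """
    estimate = predict(unit_features)
    if rng.random() < epsilon:
        estimate *= 1 + rng.uniform(-jitter, jitter)
    return estimate

# Toy stand-in for a fitted model: rent rises with size and bedrooms.
def toy_model(features):
    return 500 + 1.2 * features["sqft"] + 150 * features["bedrooms"]

# With epsilon=0 the function is purely exploitative.
rent = propose_rent({"sqft": 800, "bedrooms": 2}, toy_model, epsilon=0)
```

With epsilon near 0.05, roughly one unit in twenty receives a perturbed estimate, which keeps the observed data informative as markets shift.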
-
Traditional machine learning algorithms rely on historical data to formulate predictions about the future. In lending, this means past loan decisions and outcomes predict how changes to loan terms will affect disbursement and repayment rates. This risks replicating historical lending patterns and making inaccurate forecasts: how can we predict the loan outcome of someone who, historically, would never have received a loan? Machine learning algorithms often rely on strong assumptions to make these inferences. Instead, we “debias” the data.
In partnership with a U.S. microfinance nonprofit, which provides loans to financially excluded small-business owners, we systematically relaxed loan terms for a random sample of applicants. Combining this random assignment with weighting and machine learning, we estimate the causal effects of specific loan terms on disbursement and repayment rates. We then simulate hundreds of thousands of loan underwriting scenarios to maximize loan access (disbursement rates) and repayment rates.
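Because loan terms were relaxed at random with a known probability, a reweighted comparison recovers the causal effect of relaxation. Below is a minimal sketch of inverse-probability weighting on simulated data; the treatment probability, effect size, and variable names are illustrative assumptions, not the partnership's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Randomized relaxation: relaxed terms assigned with known probability p,
# so treatment is independent of applicant characteristics.
p = 0.3
relaxed = rng.random(n) < p

# Simulated outcome: relaxed terms raise disbursement probability by 0.15.
disbursed = rng.random(n) < (0.40 + 0.15 * relaxed)

# Inverse-probability weights: 1/p for treated, 1/(1-p) for control.
w = np.where(relaxed, 1 / p, 1 / (1 - p))
effect = (np.sum(w * disbursed * relaxed) / np.sum(w * relaxed)
          - np.sum(w * disbursed * ~relaxed) / np.sum(w * ~relaxed))
# effect recovers the true +0.15 causal effect on disbursement.
```

With pure randomization the weights reduce to a difference in means; the value of the weighted estimator is that it extends to stratified or covariate-dependent assignment, which is where "debiasing" historical data matters.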
-
The increasing use of generative AI in resume screening raises concerns that it may worsen discrimination compared to human reviewers. Studying hiring disparities is challenging; data usually only reflects outcomes for applicants who pass resume screening. To overcome this selection challenge, we conducted experiments tracking both resume reviews and hiring results for all applicants, allowing our research team to compare bias in human and AI recruiters directly in terms of their disparate treatment and disparate impact.
-
The COVID-19 pandemic created unprecedented learning losses in U.S. K-12 education, setting student performance back two decades. While evidence-based interventions are urgently needed, researchers, nonprofits, and education technology providers face significant barriers in accessing student data, even with formal sharing agreements in place. Most school districts lack the capacity to fulfill frequent data requests, whether for research or product development.
We aim to transform K-12 data access by extending a major partner’s existing infrastructure — currently used by 97 of the 100 largest U.S. school districts — to include data elements commonly required for student support and education R&D.
The initiative brings together our research expertise with two partners’ extensive school district partnerships and trusted technical infrastructure to create a philanthropically-supported public good that democratizes access to student data while maintaining strict security and governance standards.
This project will accelerate the production of evidence on educational interventions while reducing the technical and administrative burden on school districts.
Understanding & Reducing Discrimination in AI and Human Decision-Making
-
We propose a scalable, low-cost audit technology to detect illegal housing discrimination across the U.S. Earlier work, which used a massive resume correspondence experiment to automate the study of employment discrimination among large U.S. companies, was covered in The New York Times. It showed that race and gender discrimination are highly concentrated in a small set of Fortune 500 employers and that it is feasible to experimentally detect specific employers engaged in discrimination with high confidence. Our proposed application will refine and extend these methods to measure discrimination in the housing market and detect particular landlords engaged in discrimination. The resulting outputs will include:
The demonstration of a low-cost, scalable technology to identify whether individual landlords are illegally discriminating based on race, gender, and other protected characteristics of interest to HUD. This information may be used by regulators and enforcement organizations or used to conduct an additional intervention aimed at reducing housing discrimination among the worst offenders.
A detailed picture of the distribution of discrimination in the housing market across the U.S.
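One common way to flag an individual landlord with high confidence from a correspondence audit is an exact sign test on discordant pairs of matched inquiries. This sketch is illustrative, not the project's actual statistical machinery:

```python
from math import comb

def discordant_pair_pvalue(n_white_only, n_black_only):
    """Exact two-sided sign test on discordant audit pairs.

    Each landlord receives matched pairs of fictitious inquiries that
    differ only in the racial signal of the name. Pairs where exactly
    one inquiry gets a reply are 'discordant'; with no discrimination,
    either direction is equally likely (probability 0.5).
    """
    n = n_white_only + n_black_only
    k = max(n_white_only, n_black_only)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# A landlord replying only to the white-signal inquiry in 14 of 16
# discordant pairs is flagged with high confidence.
p = discordant_pair_pvalue(14, 2)
```

Because the test is run landlord by landlord, multiple-testing corrections would be needed at scale; the sketch shows only the per-landlord detection step.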
-
Reducing discrimination effectively requires recognizing colorism: discrimination based on skin tone. However, we often lack the data needed to measure, and therefore reduce, this form of discrimination. Most data, administrative or otherwise, record only coarse measures of race: individuals are classified, for instance, as “White,” “Black,” or “Hispanic.” We use photo IDs from hundreds of thousands of rental applications across the country, combined with AI tools we developed to measure aspects of appearance, to detect colorism in the housing rental market. We incorporate these findings and measures into the design of tenant screening algorithms and processes to reduce this form of discrimination.
-
Many owners and property managers rely on third-party vendors to provide data about an applicant’s background—credit reports, eviction histories—to inform or even suggest leasing decisions. While such data can help landlords predict payment default and other risks, there are concerns that this information reduces housing opportunities for disadvantaged applicants and induces biased decisions.
Criminal background checks, for example, vary in their accuracy and presentation of data. Exacerbating the potential for bias, many owners and property managers serving the affordable housing market are small, and so have little oversight. One in four Americans has a criminal background; compared to the general public, formerly incarcerated individuals are 10 times more likely to be homeless. These statistics reflect a struggle for people with criminal backgrounds to find housing, imposing substantial social costs.
In partnership with the largest provider of low-income housing listings in the U.S., our project complements our algorithmic screening work by testing interventions that will aggregate, present, and frame criminal background information in ways that aim to improve accuracy and reduce biased decision-making.
-
Screening systems, whether human, AI, or both, are often a “black box.” This has led to valid concerns that such systems yield opaque, discriminatory decisions. We show how to unpack this black box by constructing an “explainable” model of decision-making: each parameter can be directly interpreted in terms of the preferences, beliefs, and potential biases of the decision maker. We apply our model to data from human and AI hiring decisions to show we can:
Use our model to build non-discriminatory AI and human applicant screening systems.
Use the “explainability” of our model to identify different sources of discrimination at the employer level: statistical discrimination, biased beliefs, and so-called “taste-based” discrimination.
Measure biases of AI screening (e.g. ChatGPT) compared to human employer biases.
Assess how labor-market policies like “ban the box” and gender “blinding” affect bias and decision-making quality.
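A stylized version of such an explainable model is sketched below, with each parameter corresponding to one of the sources of discrimination named above. The functional form and parameter values are assumptions for illustration, not the paper's estimated model:

```python
import math

def hire_prob(signals, belief_weights, group, taste=0.0):
    """Interpretable screening model.

    - belief_weights: how the screener maps signals to expected
      productivity; miscalibrated weights correspond to biased beliefs,
      and group-specific weights to statistical discrimination.
    - taste: a direct penalty on group membership unrelated to
      productivity ('taste-based' discrimination).
    """
    belief = sum(w * s for w, s in zip(belief_weights, signals))
    utility = belief - taste * group  # group = 1 for the protected class
    return 1 / (1 + math.exp(-utility))

# Identical applicants differing only in group: any gap is taste-based.
p_majority = hire_prob([1.0, 2.0], [0.5, 0.3], group=0, taste=0.4)
p_minority = hire_prob([1.0, 2.0], [0.5, 0.3], group=1, taste=0.4)
```

Because every parameter has a behavioral interpretation, fitted values can be audited directly, unlike the weights of an opaque screening model.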
Using AI to Address Learner Variability
-
Teaching requires understanding what's inside a student's mind — a skill that can't be replicated by current algorithms, which lack the nuanced mental models that good teachers create for each student. Unlike Large Language Models (LLMs), which replicate language but not human understanding, teachers sense when a student is frustrated, confident, or distracted.
To bridge this gap, we are developing an algorithmic model of the mind, which maps each step of a student’s reasoning. Building this model requires architectures and data that are very different from those that underlie LLMs. We draw on work from robotics, where computational models of the environment must be inferred from data inputs, and combine this approach with our own research that joins machine learning with behavioral models of people. We are beginning with math and expanding to other subjects and socio-emotional skills.
-
AI can aggregate big data efficiently into a prediction, but humans often observe information that is not easy to quantify. How can we effectively combine these sources of information without bias?
K-12 education exemplifies the importance of this question: good grades and access to advanced courses create pathways to economic mobility, yet test scores and staff referrals lead minority students to be overlooked for accelerated programs and AP classes, even when they are equally capable. We built an AI course recommender for seven community colleges that placed dramatically more minority students into college-level courses without reducing pass rates. We are extending this work into K-12 to build course recommendation systems that combine human relational capital and AI predictions for tens of thousands of students across the country.
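One simple way to combine the two sources of information is a convex blend of the AI prediction and a rescaled human rating. This is a toy formulation; the weights, threshold, and rating scale are assumptions, not the deployed recommender:

```python
def blended_score(ai_prob, human_rating, alpha=0.6):
    """Blend an AI-predicted pass probability with a teacher or
    counselor rating rescaled to [0, 1]. alpha would be tuned on
    held-out data."""
    return alpha * ai_prob + (1 - alpha) * human_rating

def recommend_college_level(ai_prob, human_rating, threshold=0.5):
    return blended_score(ai_prob, human_rating) >= threshold

# A student with a modest test-based AI score but strong teacher
# knowledge can still qualify.
qualified = recommend_college_level(0.3, 0.9)
```

The key design choice is that a strong signal from either source can qualify a student, rather than requiring both as sequential gates, which compounds the biases of each.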
-
Students encounter diverse challenges that require early intervention and personalized support. While more than half of public high schools use an Early Warning System to identify students at risk of dropping out, these systems lack actionable insights or any ability to learn and recommend what interventions will work well for whom.
We are developing a "precision education" platform. This will be a data-driven platform, which can be scaled across settings beyond our partner, that informs school staff about how to provide tailored support to each student. Our partner enables this work by providing mentors to high-need schools, where they act as intervention providers and high-frequency data collectors who document student engagement, behavior, and responses to support strategies. Combined with administrative records, these data create a feedback loop to build an algorithm identifying the most effective strategies for each student.
Creating Pathways to Economic Mobility
-
Microcredentials offer quick, adaptable pathways to skill certification, making them appealing in fields with evolving skill demands and to workers in need of upskilling. Despite how commonplace they’ve become, little is known about how employers value these credentials.
To investigate, we will create an automated tool to send thousands of fictitious job applications to major U.S. employers, assigning varied credentials (e.g., digital badges, professional certificates) to measure callback rates. Focusing on fields like IT, data analytics, and digital marketing, we will determine whether recognition of microcredentials is widespread or limited to certain employers. This study will quantify the real-world value of microcredentials, guiding jobseekers, educators, and policymakers in upskilling and professional development strategies.
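Comparing callback rates across credential arms amounts to a two-proportion test. A minimal sketch with made-up counts follows; the numbers are illustrative, not study results:

```python
from math import sqrt, erf

def callback_gap_z(calls_a, n_a, calls_b, n_b):
    """Two-proportion z-test for callback rates between two arms
    (e.g., resumes with vs. without a digital badge)."""
    p_a, p_b = calls_a / n_a, calls_b / n_b
    pooled = (calls_a + calls_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# Hypothetical: 12% callbacks with a badge vs. 9% without.
z, p = callback_gap_z(120, 1000, 90, 1000)
```

The same comparison run employer by employer, rather than pooled, is what distinguishes widespread recognition from recognition concentrated among certain employers.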
-
Emergency rental relief funds are a promising intervention to keep families housed; prior work has shown that emergency cash assistance, when well timed, can lead to sustained reductions in homelessness. This approach gained policy traction when Congress approved the Emergency Rental Assistance Program (ERAP) to help households at risk of losing their rental units due to the COVID-19 pandemic. The assistance covered rental, utility, and other related expenses for households. However, federal funding is largely exhausted and sustaining such programs is difficult.
How can we target and underwrite such funding to maximize its effectiveness and sustainability? With a partner who offers up to $5,000 in rent relief via no-interest loans, we aim to develop a basic cash assistance solution, evaluate its impact, and test the following:
An early warning system that predicts when a renter is likely to struggle, to enable providers to reach out earlier in their journey.
A services matching/recommendation algorithm that develops customized offerings for at-risk individuals based on which service (or combination of services) is most likely to lead to impact for that specific individual and their situation.
An underwriting algorithm that leverages alternative-risk scoring data to balance repayment goals for sustainability and efficacy goals that maximize social impact.
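The repayment-versus-impact tradeoff in the third component can be sketched as a constrained selection rule. This one-pass greedy version is a deliberate simplification with made-up field names; a real underwriting algorithm would treat this as a proper constrained optimization:

```python
def select_loans(applicants, min_expected_repayment=0.80):
    """Approve applicants in order of predicted social impact while
    keeping the portfolio's expected repayment rate above a
    sustainability floor."""
    approved, repay_sum = [], 0.0
    for a in sorted(applicants, key=lambda a: -a["impact"]):
        new_rate = (repay_sum + a["p_repay"]) / (len(approved) + 1)
        if new_rate >= min_expected_repayment:
            approved.append(a)
            repay_sum += a["p_repay"]
    return approved

# Hypothetical applicants with predicted impact and repayment scores.
pool = [
    {"impact": 0.9, "p_repay": 0.70},
    {"impact": 0.8, "p_repay": 0.95},
    {"impact": 0.2, "p_repay": 0.99},
]
approved = select_loans(pool)
```

The greedy pass can reject a high-impact applicant early that a later repayment buffer would have accommodated, which is exactly why the full problem calls for joint optimization rather than a sequential rule.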
-
Low-income families are segregated into neighborhoods with lower-quality schools, lower employment rates, and higher rates of environmental contaminants that impede child development and reduce paths to upward economic mobility for children. However, there is significant variation in economic mobility and environmental quality within commuting zones, and, at the time of their search for housing, it is difficult for families to discern which neighborhoods will be both affordable and best for their children.
To help address this problem, we are randomizing the addition of two forms of information onto unit listings for low-income families across the U.S.: economic mobility data from the Opportunity Atlas, and historical air-quality data aggregated by our research team. We then track how this information affects search, application, and lease-up rates into neighborhoods that promote child development and upward mobility. These information interventions will then be scaled and sustained across the largest provider of housing listings for low-income families in the U.S.
-
Housing Choice Vouchers are a proven tool for improving economic mobility and life outcomes, helping over 2 million families access better neighborhoods, schools, and opportunities. More than 2,100 Public Housing Authorities (PHAs) manage waitlist systems to determine who gets these opportunities. Federal law requires PHAs to demonstrate that these priority rules do not discriminate against classes protected under the Fair Housing Act, and HUD has intervened in multiple cases where priority systems had discriminatory effects. But PHAs lack the analytical tools to optimize their voucher priority systems to reach those who would benefit most while preventing unintended discriminatory effects.
We built a Waitlist Simulator to help PHAs forecast the impacts of different waitlist systems and priorities before implementation, ensuring limited housing resources are allocated both efficiently and equitably. By enabling data-driven decisions about voucher allocations, this tool can improve life trajectories for hundreds of thousands of families and help housing authorities better fulfill their mission.
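A waitlist simulator of this kind can be sketched as a Monte Carlo loop over candidate priority rules. The data layout and the rule below are assumptions for illustration, not the tool's actual interface:

```python
import random

def simulate_waitlist(applicants, priority_key, n_vouchers, n_draws=1000, seed=0):
    """Monte Carlo sketch of a voucher waitlist: rank applicants by a
    candidate priority rule (ties broken by lottery), award the
    available vouchers, and report each group's share of awards."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_draws):
        ranked = sorted(applicants, key=lambda a: (-priority_key(a), rng.random()))
        for a in ranked[:n_vouchers]:
            counts[a["group"]] = counts.get(a["group"], 0) + 1
    total = n_vouchers * n_draws
    return {g: c / total for g, c in counts.items()}

# Rule under test: prioritize currently homeless applicants.
pool = [
    {"group": "A", "homeless": 1},
    {"group": "B", "homeless": 1},
    {"group": "B", "homeless": 0},
]
shares = simulate_waitlist(pool, lambda a: a["homeless"], n_vouchers=2)
```

Running the same applicant pool through several candidate rules and comparing the resulting group shares is the forecasting step: a rule whose simulated awards skew against a protected class can be flagged before it is ever implemented.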