We’ve written before about the importance of state legislative elections and, if anything, the situation for Dems and progressives has only become more urgent since then. Republican control of state-houses has opened the door to partisan and race-based gerrymandering and voter suppression, which help Republicans hold those state-houses and a disproportionate share of the U.S. House. In addition, on hundreds of substantive issues, including abortion access, COVID safety, Medicaid expansion, and education policy, Republican control is an ongoing disaster for Democratic policy positions and, more importantly, for the people of these states.
Virginia is the rare good-news story in this regard: the state’s House of Delegates (lower house) and Senate went blue in 2019, after 20 years of Republican control (VA state-house elections occur in odd-numbered years). We’d like to keep it that way, which is why we are trying to figure out the highest-impact races to support this fall.
But modeling state-legislative elections to figure out which ones need the most help from donors is tricky. That’s mainly because the data are extremely limited: polling is almost non-existent and detailed demographic information is harder to find for smaller regions, which makes any kind of forecasting difficult.
Despite the challenges, we’ll do our best to identify the upcoming Virginia state legislature races that the Dem candidate can win, but is not certain to win. Below we’ll look at our first attempt to model expected closeness of elections in the VA lower house. In this post we’ll feed our model 2018 data and compare the predictions to the 2019 election outcome. To be very clear: we’re not interested in predicting outcomes. We are interested in determining which races are likely to be close and are thus flippable or in need of defending.
When people use election data and demographics to craft new districts1 for congressional or state legislative elections, they focus almost entirely on past election results: breaking past results down to their smallest available geographies, usually “precincts”, further splitting those into census blocks, then reassembling them in different ways to build new districts with predictable political leanings. We don’t doubt the accuracy of these methods, especially given the financial and computational resources devoted to them during redistricting.
But what if past results are inconsistent with what we might expect from a purely demographic model? While we expect the past-results model to be highly predictive, we also believe that mismatches between past results and underlying demographics can be extremely informative. These differences might highlight places where the right candidate, campaign, or organizing effort could produce a surprising result by activating Dem-leaning voters, or alert us to districts where an incumbent Democrat needs more help defending a seat than past results alone would suggest.
To do this, we’re going to model state-legislative elections using only demographic variables: population density, sex (female or male are the only classifications in the data we have), education level (non-college-grad or college-grad), and race (Black, Latinx, Asian, white-non-Latinx, other). We would very much like to have an age factor as well, but the tables the census makes available at the state-legislative-district (SLD) level preclude this2.
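For concreteness, those factors define a small grid of demographic “cells” per district; population density, being a property of a place rather than a person, sits alongside that grid rather than inside it. A minimal sketch of the cross-classification (illustrative only, not our actual code):

```python
from itertools import product

# The categorical factors in the model; population density is a
# property of the geography rather than a factor of its own.
SEX = ["female", "male"]
EDUCATION = ["non-college-grad", "college-grad"]
RACE = ["Black", "Latinx", "Asian", "white-non-Latinx", "other"]

# Every combination of the factors is one "cell" we model and
# later post-stratify over.
cells = list(product(SEX, EDUCATION, RACE))
print(len(cells))  # 2 x 2 x 5 = 20 cells per district
```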
We’ve built very similar models before, most recently to look at the turnout gap between voters of color and white voters. So, for more details on our demographic and turnout data, or on multi-level regression and post-stratification (MRP), please click the links above.
This model has a few key differences from what we did before:
In addition to modeling voter turnout, we also model voter preference, and we build those models jointly since there are likely correlations between turnout and preference that would get lost in separate models3.
We need demographic information for each SLD, which is harder to assemble from census data than the demographics of, e.g., a congressional district.
We assemble SLD-level demographic information using census-provided shapefiles for each district. The shapefile is used to find all the census block-groups inside the district, and those are aggregated4 to construct SLD-level demographic breakdowns of population density, sex, education, and race.
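For illustration, here is roughly what that aggregation looks like using geopandas. The file names, the SLDLST column name, and the centroid-assignment rule are placeholders for this sketch, not our actual script (which is linked in the footnote):

```python
import geopandas as gpd

# Placeholder file names: the SLD shapefile and the block-group
# shapefile for Virginia, both published by the Census Bureau.
slds = gpd.read_file("va_sldl.shp")
block_groups = gpd.read_file("va_block_groups.shp")

# In practice we would first merge ACS counts (sex x education x race)
# onto `block_groups` by GEOID; that step is omitted here.

# One simple assignment rule: give each block group to the SLD that
# contains its centroid, then sum the counts within each district.
block_groups["geometry"] = block_groups.geometry.centroid
joined = gpd.sjoin(
    block_groups, slds[["SLDLST", "geometry"]],
    how="inner", predicate="within",
)
sld_demographics = joined.groupby("SLDLST").sum(numeric_only=True)
```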
A complication to our approach is that our favored source for turnout data, the Census Bureau’s Current Population Survey Voting and Registration Supplement (CPSVRS), has no data about whom voters chose in the election, just whether or not they voted. To get voter-preference information we use the Cooperative Election Study (CES), which does the work of validating survey respondents’ self-reported turnout against voter files5. We use each voter’s party choice for their U.S. House district as a proxy for their likely vote for state legislature.
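As a toy illustration of that proxy step (the data frame and column names below are made up, not the real CES variable names):

```python
import pandas as pd

# Made-up CES-like extract: one row per respondent, with a voter-file
# validation flag and the party of their U.S. House vote.
ces = pd.DataFrame({
    "cong_district": ["VA-05", "VA-07", "VA-07", "VA-07"],
    "validated_voter": [True, True, False, True],
    "house_vote_party": ["D", "D", "D", "R"],
})

# Keep validated voters only, then count Dem and total House votes per
# congressional district; these counts are the preference signal we
# feed to the model as a stand-in for state-legislative preference.
voters = ces[ces["validated_voter"]]
counts = voters.groupby("cong_district").agg(
    dem_votes=("house_vote_party", lambda s: (s == "D").sum()),
    total_votes=("house_vote_party", "size"),
)
print(counts)
```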
Our model combines those data sets at the congressional-district level and jointly estimates turnout and preference, using counts of voters from the CPSVRS and counts of Dem votes for U.S. House from the CES. We then post-stratify the estimates across the demographic data we built for each VA lower-house district. The result is a prediction of the expected Dem vote share in each district. More detail about the model and data sources can be found here.
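To make the post-stratification step concrete: for each district, the expected Dem share is the population-weighted sum of turnout × preference over the demographic cells, divided by the population-weighted sum of turnout. A minimal numeric sketch with invented numbers:

```python
import numpy as np

# Invented per-cell quantities for one district; a real district has
# one row per sex x education x race cell from the SLD demographics.
population = np.array([12000.0, 8000.0, 5000.0])  # people in each cell
p_turnout = np.array([0.55, 0.40, 0.62])          # modeled turnout probability
p_dem = np.array([0.48, 0.65, 0.80])              # modeled Dem preference among voters

expected_voters = population * p_turnout
expected_dem_votes = expected_voters * p_dem

# Post-stratified estimate of Dem vote share for the district.
dem_share = expected_dem_votes.sum() / expected_voters.sum()
print(f"expected Dem share: {dem_share:.1%}")  # ~59.9% with these numbers
```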
Cutting to the chase: in the chart below we plot the model estimate (using 2018 data) vs. the results of the 2019 election. In blue, we also plot the “model = result” line, where every dot would fall if the model were exactly predictive of each race. The R-squared value indicates that our model explains 74% of the variance among contested races6.
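For reference, one common way to compute that fit statistic, restricted to the contested races (the numbers below are illustrative, not our data):

```python
import numpy as np

# Illustrative Dem vote shares for a handful of contested races:
# the model estimate (built from 2018 data) and the 2019 result.
modeled = np.array([0.52, 0.61, 0.47, 0.58, 0.44])
actual = np.array([0.50, 0.64, 0.45, 0.60, 0.48])

# Squared correlation between model and outcome: the share of variance
# in the results that the model accounts for.
r_squared = np.corrcoef(modeled, actual)[0, 1] ** 2
print(f"R-squared: {r_squared:.2f}")
```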
We want to pause for a moment to pat ourselves on the back: our model actually works pretty well! It’s particularly remarkable that a model using only a few demographic variables and votes for a different office (U.S. House) has any predictive power at all, much less the 74% explanatory power that we see here. It’s a reminder of how strongly demographics determine voting patterns and how consistently people vote for one party down the entire ballot. Although our model isn’t perfect (after all, our data are limited, and factors related to the specific candidates or incumbency aren’t included at all), we think it will serve our purpose. With it, we think we will be able to sort Virginia’s upcoming state legislature races into “safe”, “needs defending”, “flippable”, and “long-shot”, and to limit our donation recommendations to the middle two categories.
In the coming days, we’re going to apply this model to Virginia’s 2021 races, which are happening this November7. And next year, as we look to state legislative elections in, e.g., GA, TX, PA, OH, and WI, we’ll do the same work there and use it to help guide our recommendations in those races.
It’s crucial to start winning back state-houses, and Dems can only do that if they support candidates in winnable races. But that’s not enough. In some districts, we need to field better candidates, or any candidate at all! We need to nurture and fund local organizers and learn what they need to help win state legislative races. This is part of a larger need to understand and support the role of local organizing in driving turnout and winning elections.
We’ll be writing more soon about specific candidates we recommend in Virginia, and voter turnout organizations to start funding now in various states. Watch this space!
Want to read more from Blue Ripple? Visit our website, sign up for email updates, and follow us on Twitter and Facebook. Folks interested in our data and modeling efforts should also check out our GitHub page.
See, for example, this free redistricting tool. Or this. Information about all things redistricting is available here.↩︎
For larger geographic areas it’s possible to get ACS “micro-data,” which provides specific information for many factors. The smallest of these areas, Public Use Microdata Areas or “PUMAs,” contain about 100,000 people each, which is too large to use to estimate the demographics of an SLD. Census block-groups contain only a few thousand people, but micro-data is not available for them. We’re using census tables, and they do not provide 4-factor breakdowns, so we’re limited to sex, education, and race. Once additional 2020 decennial data is available, we may be able to improve on this.↩︎
Imagine, for example, that college-educated Asian voters are more likely to vote for Dems, but that in the places where their turnout is highest, their preference for Dems is lowest. If we modeled turnout and preference separately, we would miss this correlation, and estimate a higher number of Dem votes than we should in places where we expect high turnout among college-educated Asian voters, and a lower number of Dem votes than we should in places where we expect low turnout of those voters.↩︎
We built a Python script to automate most of this process. We download shapefiles and block-group data for the state, and the script merges those into SLD-level demographics. The code is available on our GitHub site.↩︎
The CES also has turnout information, and it has the advantage of being validated. But that comes with its own issues. We generally run our estimations with both the CPSVRS and the CES as a turnout source to make sure the results are similar, but we rely on a slightly corrected version of the CPSVRS, as that seems to be the most common approach.↩︎
Races uncontested by one of the major parties fall on the sides of the chart (except HD-63, where a well-known local independent mounted a strong challenge to the incumbent Dem), and we can see that these are predictably one-sided in the model, with the exception of district 78 (a swingy district that was uncontested by the Democrats).↩︎
The CES hasn’t yet validated its voting data for 2020. We’re hopeful that they will complete that soon, and then we can run this model with an eye to the 2021 elections for the VA state-house. If we have to, we can use the 2018 voting data, but we’d prefer a more recent sample of voter preference.↩︎