Is Gerrymandering About to Become More Difficult?

OK, so what might you do about that? Well, one thing you can do is to make it a rule. Ohio was maybe the first state to do that, and it happened pretty recently. In 2018, Ohio voters passed a [state] constitutional amendment creating a commission—a not-very-independent commission, it turns out—and written into those rules was the goal that the [partisan] share of seats should reflect the share of [statewide] votes. And as far as I know, that’s the first spelled-out instance of setting up proportionality as a goal.

For a mathematician like me, that’s really sensible: State your goals, then we can try to achieve them. But when your goals remain really vague, it’s very difficult to talk about why one approach might be better or fairer than another.

When did you realize that your background in math had an application to gerrymandering?

I started working on this in 2016. My background in math is in geometry. And I thought, well, what if we tried to think about what it would mean to be “fair” on the district-drawing side?

I started with the intuition that the story is in the shapes, and that if we can just come up with the right shape metric, we’ll [solve it]. I went looking for the authoritative literature on all these “compactness” metrics that would tell the right story, and to my surprise, there was really classical math and ancient, pre-classical math, but there didn’t seem to be any kind of post-1900 mathematics in the mix.

The geometry of discrete spaces has really exploded in richness and depth in the last 100 years, but I wasn’t seeing a lot of those ideas in the mix. And it struck me — as many others have certainly realized before — that districting is really a discrete problem: There’s a finite number of people, and we have these geographic chunks that tell us where they are. So basically, I came to this thinking, “Oh, I bet there’s something that could be usefully done here.” And it has bloomed into a full-time research program.

One thing we’re going to deal with this redistricting cycle that we haven’t seen in the past is this new “differential privacy” approach by the Census, that actually changes the underlying data. Can you walk me through that, and how that will affect redistricting?

Yes. So, the Census Bureau has taken it upon themselves to do something cutting-edge—which always makes people nervous. In this case, they have “microdata”—the responses to all the census forms in, effectively, a giant table, with all the answers from every single person included in the enumeration. The bureau doesn’t release all of that information publicly. Instead, it aggregates it up: Census blocks or block groups will have maybe hundreds or a few thousand people in them, and you’ll get aggregate statistics rather than individual people’s responses—so there will be a little chunk of a map, and you’ll know how many people live there and what their responses were, in aggregate.
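The microdata-to-aggregates step can be pictured with a toy table (all names and values invented for illustration; real census tabulations are far richer):

```python
from collections import Counter

# Hypothetical microdata: one row per respondent (illustrative values only).
microdata = [
    {"block": "A", "age": 34, "race": "white"},
    {"block": "A", "age": 71, "race": "black"},
    {"block": "A", "age": 12, "race": "black"},
    {"block": "B", "age": 45, "race": "asian"},
    {"block": "B", "age": 29, "race": "white"},
]

# Aggregate up to the block level: counts, not individual responses.
by_block = {}
for row in microdata:
    stats = by_block.setdefault(row["block"], {"pop": 0, "race": Counter()})
    stats["pop"] += 1
    stats["race"][row["race"]] += 1

print(by_block["A"]["pop"])         # 3
print(dict(by_block["A"]["race"]))  # {'white': 1, 'black': 2}
```

The published product is only `by_block`; the row-level table stays private.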

The threat is now this: If you have enough of these aggregate statistics, you can throw them into a computer and actually reproduce the input table that made the aggregate statistics. Risk number one is that you can recover the person-level data. And risk number two, which is really interesting, is that if you pair it with easily available commercial data—like from Facebook—you could work out for quite a few of those people what their names and addresses and phone numbers are.
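To make the reconstruction threat concrete, here is a toy sketch with entirely made-up aggregates showing how a handful of published statistics can pin down a unique person-level table. A real attack treats the full set of census tables as a huge system of constraints and hands it to a solver; this brute-force search is the same idea at miniature scale:

```python
from itertools import combinations_with_replacement

# Hypothetical published aggregates for one tiny block (illustrative numbers):
N = 3              # residents
TOTAL = 120        # sum of ages
MEDIAN = 30        # median age
OVER_60 = 1        # residents over 60
OVER_60_SUM = 66   # sum of ages of the over-60 residents

# Search every sorted age table consistent with the published statistics.
solutions = [
    ages
    for ages in combinations_with_replacement(range(0, 116), N)
    if sum(ages) == TOTAL
    and ages[N // 2] == MEDIAN
    and sum(a > 60 for a in ages) == OVER_60
    and sum(a for a in ages if a > 60) == OVER_60_SUM
]

print(solutions)  # exactly one table fits: the "private" ages are recovered
```

Five aggregate numbers fully determine three people’s ages here; at census scale, billions of published statistics play the same role against millions of rows.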

All this computing power being brought to bear on elections is generally pretty healthy and pro-democratic—people are coming up with ideas about making better systems and outcomes. But bad actors are also empowered by computing. That’s the risk—and it’s an interesting one. Under Title 13 of the U.S. Code, the Census Bureau is obligated to protect privacy. Does that include protecting people from these “reconstruction and reidentification” attacks, where you might have to use third-party data to do it? The Bureau has decided that the answer is yes.

So, they took this idea called “differential privacy,” which was created by Cynthia Dwork and her colleagues in computer science. And the idea is: What if you could, in a really controlled way, add random noise to all your numbers so that you’d be off a little bit here and there, but by the time you added it up, all those differences would cancel out and you’d get numbers that are very accurate at high levels, even if they’re very noisy at low levels?
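As a rough sketch of that mechanism (illustrative counts and an arbitrary privacy parameter, not the bureau’s actual TopDown Algorithm), here is Laplace noise added to block-level counts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical block-level population counts (not census data).
block_counts = np.array([312, 187, 95, 421, 268, 154])

# Laplace mechanism: noise scaled to sensitivity / epsilon.
# A counting query has sensitivity 1 (one person changes a count by 1).
epsilon = 0.5
noisy_counts = block_counts + rng.laplace(
    loc=0.0, scale=1.0 / epsilon, size=block_counts.size
)

print(noisy_counts.round(1))   # each block is off a little
print(block_counts.sum(), noisy_counts.sum().round(1))  # totals stay close
```

Each individual count is perturbed, but the per-block errors are independent and tend to cancel, so the statewide total is far more accurate (in relative terms) than any single small block.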

It is a gorgeous idea. And the beauty of it is that you can do it in a really controlled way. The Census Bureau announced that they were going to do that, and chaos ensued. The bureau has already been sued, in a case led by Alabama.
