Earlier this week Hadley Wickham, Chief Scientist at RStudio, gave a little talk at Booz Allen. He started out in med school, and one of the things that stuck out from his talk was a comparison between being a consulting statistician and taking a medical history. He tells a similar story in this interview:
One of the things I found most useful from med school was we got trained in how to take a medical history, like how to do an interview. Really, there’s a lot of similarities. When you’re a doctor, someone will come to you and say, “I’ve broken my arm. I need you to put a cast on it.” ((During his in-person talk, the patient with the broken-arm-and-cast instead had a cold and wanted antibiotics, which is a better example since the cold is caused by a virus which will be unaffected by the antibiotics.)) It’s the same thing when you’re a statistician, someone comes to you and says, “I’ve got this problem, I need you to fit a linear model and give me a p-value.” The first task of any consulting appointment is to think about what they actually need, not what they think they want. It’s the same thing in medicine, people self‑diagnose and you’ve got to try and break through that and figure out what they really need.
I think this problem comes up in any consulting or contracting environment. As a consultant, should I:
(a) do what my client is asking me to do, or
(b) figure out why they're asking me to do that, and then figure out what they should want me to do, and then convince them that's what they want to do, and then do that thing?
This is pretty routine, and no surprise to anyone who has worked in consulting. Here's why I'm sharing it though. This is from Megan McArdle's discussion of the CMS Inspector General's report on "How HealthCare.gov Went So, So Wrong." ((Which incidentally is something I have spilled a lot of ink about previously.))
The federal government contracting process is insane. [...] A client is a long-term relationship; you want to preserve that. But the federal contracting system specifically discourages these sorts of relationships, because relationships might lead to something unfair happening. (In fact, the report specifically faults CMS for not documenting that one of the people involved in the process had previously worked for a firm that was bidding.) Instead the process tries to use rules and paperwork to substitute for reputation and trust. There’s a reason that private companies do not try to make this substitution, which is that it’s doomed.
Yes, you end up with some self-dealing; people with the authority to spend money on outside vendors dine very well, can count on a nice fruit basket or bottle of liquor at Christmas, and sometimes abuse their power in other less savory ways. But the alternative is worse, because relying entirely on rules kills trust, and trust is what helps you get the best out of your vendors.
Trust is open ended: You do your best for me, I do my best for you. That means that people will go above and beyond when necessary, because they hope you’ll be grateful and reciprocate in the future. Rules, by contrast, are both a floor and a ceiling; people do the minimum, which is also the maximum, because what do they get out of doing more?
Having everything spelled out exactly in a contract not only removes trust from the equation, it eliminates the contractor's ability to give you what you need instead of what you originally asked for. It precludes the consultant from exercising their expertise, even though that expertise is the very reason they were given the contract in the first place.
Granted, there are some advantages to a consultant only being able to do what they are initially asked to do. Unscrupulous contractors can't use that chain of logic in (b) above to convince the client to do a lot of unnecessary things. But if we don't trust government managers enough to resist that convincing, why should we trust them enough to write up the RFPs and judge proposals and oversee the performance of the contracts in the first place?
I've been consulting less than a year, and I've already been exposed to too many government agencies that are the equivalent of a hypochondriac who stays up all night reading WebMD. "Yes, yes, I understand you have a fever and your neck is stiff, but no, you do not have meningitis... no it's not SARS either... or bird flu." "Yes, I understand you've heard everyone talking about 'The Cloud,' but no, not every process should be run via Amazon Web Services, and no, you don't need GPUs for that, and no no NO, there is no reason to run a bunch of graph algorithms on non-graph data."
Let's put aside how we personally feel about ObamaCare for a moment. Ignore for the time being any considerations of the politics, economics, efficiency, justice, equity, etc. of the law. ((After all, you, dear readers, are strangers to me, and I find it slightly uncivilized to discuss politics, religion or sex with strangers.))
Let's steer far clear of the Knowledge Problem and the Calculation Debate and other perennial political-economic anlages.
Let's not refer to the NHS, or contemplate citations to the Oregon Medicaid Study or the World Health Report 2000.
Let's certainly not contemplate an alethiological analysis of the utterance "If you like your doctor, you will be able to keep your doctor. Period."
I'm going to leave all that stuff off the account while I explain to you why I'm brimming with epicaricacy at the failure of the Healthcare.gov exchange launches.
The sub-rational side of me, though, is loving this. Not for any partisan reasons — I'm an "a plague a' both your houses" sort of guy — but rather because it is so satisfying to this geek to see the President, ((This president, no less, so lovingly and recently hailed as "the iPod President.")) his cabinet secretaries, senators, and all the other high and mighty mandarins and viziers of the Beltway brought low before the intransigent reality of Code.
All these powerful people are learning (one hopes) the painful lesson that so many powerful people before them have learned when confronting technical problems. It does not matter how many laws you can create with the stroke of your pen, nor how many regiments you can order about, nor how many sheriffs or tax collectors or wardens you direct: you can't give orders to Computers. It is nice to see such mighty people forced to acknowledge — as thousands of hapless executives and others have in the past — that things are not as simple as commanding geeky worker bees to make it so. No number of fiats, from however august an authority, can summon software into being: it must be made.
As my father — a former legislative assistant on the Hill — said, "passing a law requiring the exchanges to be open is like passing a law forbidding people from being sick, and just as effective."
Compilers don't care about oratory or rhetoric. Political capital can't find bugs. Segfaults aren't fixed at whistle-stops or town-halls or photo-ops. No quantity of arm-bending or tongue-wagging or log-rolling or back-scratching can plug memory leaks. You can't hand-shake or baby-kiss your way into working code.
I tend to see two different mistaken attitudes among non-geeks when it comes to how software is actually made. Some people think it's complete magic, which is flattering but utterly wrong. Others see it as "just pressing buttons," which is wrong but utterly arrogant.
Programmers sit at computers, stare at monitors, and type. Which is exactly what J. Random Whitecollar does, so how hard can it be? It is, after all, "just typing" — although in the same way that surgery is just cutting and stitching. ((One should be as loath to hire inexperienced coders for mission-critical software as one would be to hire a butcher and a tailor to collaborate on an appendectomy.))
I have become accustomed — as every CS grad student does — to getting emails from founders seeking technical expertise for their start-ups. The majority of these are complete rubbish, written by two troglodytes who imagine that coming up with an idea plus a clever name for a website constitutes the bulk of the work. These emails typically include a line about "just needing someone to create the site/app/program for us." This is a dead giveaway that these people will make terrible partners. Just create it? Just? You might as well tell a writer that you have an idea for a novel, and could he please just write the book for you?
This is the same attitude I see from the White House. Not only did they start off the process with the general suits-vs-geeks attitude, they continued at every turn to give precedence to political desires over engineering realities: failing to set realistic deadlines from the start, leaving all details up to the numerous "the secretary shall determine" clauses in the legislation, delaying the date by which states had to decide whether they would run their own exchanges, delaying finalizing what the back-end rules for insurers would be, HHS insisting on doing the general contracting itself, ((Yes, really. Because they have so much experience with this sort of thing.)) the head-in-the-sand "brisk management" they engaged in when it became clear the deadline would slip, etc., etc. Over and over again the political establishment prioritized their own wants over the engineering needs.
Suits imagine they have the hard job, because that's the only job they know how to do. Yes, the political wrangling is difficult. But we geeks have sat in those frustrating meetings, attempting to get disparate parties on the same page. We've drafted those memos, and written those reports, and had those conference calls. We have to do all that too. When's the last time the suits tried our job? When's the last time they wrestled with a memory allocation bug in ad hoc dynamic data structures nested four deep? When have they puzzled out a floating point underflow error? When have they deciphered an undocumented API?
The psychologically easiest response, when confronted with something you have no clue how to do, is to assert that it's simple, and that you could easily do it yourself if only you had the inclination and time that weightier matters deny you.
I don't want to fall into the opposite trap here of assuming the other guy's job, i.e. the political, non-engineering one, is easy. But let me ask you some questions. How many people in the executive branch have the jobs they do because they donated to a campaign or ran a solid get-out-the-vote drive in a swing state, or did something else politically advantageous to the current occupant of the Oval Office but otherwise entirely unrelated to the department/bureau/administration they now give orders to? And how many political appointees are where they are because they've mastered their craft over tens of thousands of hours of practice?
Now answer those same questions, but substitute "software engineering firm" for "executive branch." What's the ratio of people who get ahead by who-they-know to those who are promoted for what-they-know there? Silicon valley isn't exactly known for sinecures and benefices. On the other hand OPM has entire explicit classes of senior-level officials who are where they are for no other reason than POTUS's say-so. And this isn't some kind of sub-rosa, wink-wink-nudge-nudge thing: this is exactly how the administration is supposed to function.
Let's shift gears and take a look at Charette's (soon-to-be-)classic article, "Why Software Fails." I don't expect the politicians and bureaucrats in charge of this thing to have read K&R or SICP backwards and forwards, or to have a whole menagerie of O'Reilly books on their shelves. But they at least ought to have been familiar with this sort of thing before embarking on a complete demolition and remodel of a sixth of the US economy, a remodel critically dependent on a website.
Here's Charette's list of common factors:
Unrealistic or unarticulated project goals
Inaccurate estimates of needed resources
Badly defined system requirements
Poor reporting of the project's status
Unmanaged risks
Poor communication among customers, developers, and users
Use of immature technology
Inability to handle the project's complexity
Sloppy development practices
Poor project management
Stakeholder politics
Commercial pressures
Let's assume the last one doesn't apply (although I'm sure there were still budget constraints, since I remember multiple proposals all summer and autumn to fix this by throwing more money at it). Other than that, I could find a news story to back up the ObamaCare site making every one of these mistakes except #7. That's 10 out of 12 failure indicators. Even granting a very generous interpretation of events, there's no way the Exchanges weren't dealing with at the very least #1, 3, 4, and 6.
I've seen plenty of people on the Right gleefully jeer that this is what happens when you don't have market incentives to guide you. They're right.
I've also seen plenty of people on the Left retort that history is littered with private enterprises that have wasted billions on poorly-executed ambitious IT projects. They're right too.
Of course, they're both wrong as well.
The people on the Right are engaging in a huge amount of survivorship bias. All those companies that screwed up an IT rollout like this aren't around for us to notice anymore. Maybe they aren't bankrupt, but they're not as salient as their successful competitors either. Failures are obscure, successes are obvious.
The people on the Left are misunderstanding how distributed, complex systems like a market work. Yes, individual agents will fail. That's part of the plan, just like it is in evolution. You can't have survival-of-the-fittest without also having failure of the unfit. ((ahem *bailouts* ahem!)) We don't have the freedom to develop healthcare.gov via the distributed market-driven exploration process. All of our eggs are in this one basket.
I don't think the people making either claim really grok how a market is supposed to operate. It wouldn't help to give the healthcare.gov developers an equity stake or pay them lavish bonuses. You might get more effort from them or a better group of programmers, but you've still only got a single attempt at getting this right. And the people pointing out all the wasted private-sector IT spending are also missing the point. Yes, there are failures, but the entire system relies on failures to find the successes. The ACA does the opposite of that by forcing everyone to adopt the same approach and continuing to disallow purchasing across state lines. That's a recipe for catastrophic loss of diversity and dampening of feedback signals.
I've seen people on all sides suggest that what we really needed to do was go to Silicon Valley and hire some hotshot programmers and give them big paychecks, and they could have built this for us lickety-split. Instead, we're stuck paying people mediocre GS salaries (or the equivalent via contractors), so we get mediocre programmers who deliver a mediocre product. I don't think this reasoning holds up. Another common observation, which I also think is flawed, is almost the opposite: there was never a way to make this work, since good coders in the Valley expect to get equity stakes when they create big, ambitious software products, and no such compensation is possible for a federal contract.
At the margin, more money will obviously help. Ceteris paribus, you will get more talented people. But that's not the whole story, by a long shot.
1. There are several orders of magnitude of productivity separating the best programmers from the median programmers. You can't even quantify the difference between the best and the worst, because the worst have negative productivity: they introduce more bugs into the code than they fix. Paying marginally more may get you marginally better coders, but there's a qualitative difference between the marginally-above-average and the All Stars.
2. The way you get the best is not usually by offering more money. That helps, but it's only a piece of the whole story. The way you get the best is by giving them interesting problems to work on.
3. The exchange is not an interesting problem. In fact it's quite the opposite. It's almost entirely what Eric S Raymond calls "glue" — it pastes a bunch of other systems together, but doesn't do anything very interesting on its own. ESR cautions programmers (quite rightly!) to use as little glue as possible. Glue is where errors — and madness — insinuate themselves into a project. ((See Eric S Raymond, "The Art of Unix Programming," 2003. This may be a little advanced for a legislator or administrator to read, but is it that much to ask that the people governing these critical systems learn a little bit about how they work?))
This is related to all of the discussion I've heard about how Obama got geeks to volunteer to help him create various tools for his campaigns. If hotshot programmers would do that, the thinking goes, why wouldn't they pitch in to build an awesome exchange?
Simple: because the exchange is boring. It's as bureaucratic as it comes. Working on it would require a massive amount of interfacing with non-technical managers in order to comply with non-trivial, difficult-to-interpret legislative/regulatory rules. Do you know how many lawyers a coder would have to talk to in order to manage a project like this?! Coders are almost as allergic to lawyers as the Nac Mac Feegle are.
All that managerial overhead is no fun at all, especially compared to the warm-and-fuzzies some people feel when they get to participate in the tribal activity of a big election. ((Is it Robin Hanson who has the theory about political engagement being another form of team sport and spectation?))
Not only is building the exchanges not fun compared to building a campaign website, but it comes with all sorts of deadlines and responsibilities too. If you think up some little GPS app to point people toward their polling place, but it doesn't work the way you want, or handle a large enough load... no sweat. It was a hobby thing anyway. If it works then you feel good about helping to get your guy elected. If it doesn't then you just move on to the next hobby that strikes your fancy.
Was there a way for the Obama administration to harness some of that energy from the tech community? Yeah. Could they have used open source development to make some of the load lighter? Yeah. But it's no cure-all. At the end of the day there was a lot of fiddly, boring, thankless, unsexy government work to be done.
Let's take a slight detour and discuss Twitter. A professor I know claims that several bad months of performance by healthcare.gov are no big deal. After all, Twitter used to be plagued by the Fail Whale, but it's a very successful enterprise now.
First of all, they're the exception. People remember the Fail Whale specifically because Twitter is the opposite of a failure now.
Secondly, tweeting is entertaining. Buying insurance isn't. People will put up with more hurdles being put between them and free fun than between them and expensive drudgery.
Thirdly, Twitter never had to worry about its delicate actuarial calculus being thrown off by a non-random sample of users pushing their way through a clogged system.
Fourthly, if Twitter screwed something up all its users were free to walk away — either until things were fixed, or forever. We don't have that option w/r/t healthcare.gov.
The administration's responses in the last few weeks to the ongoing troubles have been characterized as "legislation by press release." Let's put aside the constitutional/philosophical issue of whether the President is merely tweaking the way a law is executed, as is his wont, or re-writing the law of the land by presidential motu proprio.
I want to point out that this is another area where comparison to Amazon, Netflix, etc. falls short. If Twitter finds out that some part of their design is unimplementable they have complete prerogative to change the design of their service in any way. They can re-write their ToS or feature list or pricing structure however they want, whenever they want. The State utterly lacks such range of motion and nimbleness. There is thus even less point in people on either the Red Team or Blue Team saying "well the private sector builds massive IT projects all the time." They aren't playing the same game.
Jay Carney et al. have been insisting all along that everything is working (or will be working, or should have been working, or whatever the line is today), and that the only problem is it's a bit slow, as if this were a trivial matter. I don't think people realize how relentlessly commerce websites are engineered to remove all the slowness. And I mean all the slowness. Every millisecond of delay costs you sales. Every slowdown lowers your conversion rate. Tens of milliseconds are a big deal. ((For context, conscious thought is best described on a scale of hundreds of milliseconds, so delays that are nearly too brief to perceive lower your chance of completing a transaction by a noticeable amount.)) Delays measured in minutes are unspeakably bad. Delays measured in hours are no longer "delays" — they mean the system doesn't work.
(If you don't believe me, you can do a little experiment. If you're using Chrome, open up a new tab, then go to View > Developer > Developer Tools and click on the Network tab. [I know other browsers have a similar function, but I don't remember what they call it off the top of my head.] Once you see that, go back to the tab you just opened and load www.amazon.com. You'll see all the various files needed to display the page listed in a timeline. Note that the "latency" column is measured in milliseconds. If delays of several minutes were just an accepted part of doing business, developers wouldn't bother measuring at that resolution.)
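(If you'd rather measure than eyeball, here's a cruder version of the same experiment in MATLAB/Octave. This is a sketch only: it assumes network access, uses the old-style urlread, and times whole page fetches rather than individual resources.

for i = 1:5
  tic; urlread('http://www.amazon.com'); fprintf('%.0f ms\n', 1000*toc);   % wall-clock fetch time
end

Run that against a healthy commerce site and then against a struggling one, and the difference between "milliseconds" and "minutes" stops being abstract.)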
(Update: here's a look at what the exchange sales funnel actually looks like. Not good, especially for the unsubsidized consumers. And considering this is a product we're required to buy. [How well would Amazon do if they had the IRS requiring you to buy books every year in the name of increased national literacy?] Oh, and considering we don't know who will actually end up paying their bills. And is anyone else a little suspicious at how hard it is to get these numbers? What happened to all the promises of freely shared government data from "the most transparent administration ever"? How does that mesh with not releasing how many people have actually purchased a plan?)
These lengthy delays are actually worse than the exchanges not working at all. We'd be better off if they never opened. The healthy kid who's buying a policy because he's told he has to is going to be put off by these delays, but the sick old-timer with diabetes and a bad hip isn't. So rather than not getting any customers, you're getting just the expensive ones. (I feel like we need a sound effect or musical theme to play when the Death Spiral is about to come on stage. Maybe something from "Mars, Bringer of War"?)
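To make the Death Spiral arithmetic concrete, here's a toy calculation. Every number in it is made up purely for illustration: a pool of mostly-healthy would-be enrollees, and delays that deter the healthy far more than the sick.

healthy = 900000;  sick = 100000;    % hypothetical would-be enrollees
cost_h = 2000;  cost_s = 20000;      % hypothetical expected annual claims ($)
(healthy*cost_h + sick*cost_s) / (healthy + sick)    % => 3800, average cost if everyone enrolls
enr_h = 0.2*healthy;  enr_s = 0.9*sick;              % delays deter 80% of the healthy, 10% of the sick
(enr_h*cost_h + enr_s*cost_s) / (enr_h + enr_s)      % => 8000, average cost of who actually shows up

Premiums priced for a $3,800-per-head pool meet an $8,000-per-head pool, and the spiral music starts.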
This all leads us to Brisk Management and Failing on Time. These are very important engineering management concepts. This post is already dragging on much too long, so I'll summarize in one sentence: it is a huge mistake to take on extra risks just to hit an arbitrary calendar deadline. Or if you'd prefer a sentence with more imagery: it's better that a building take longer to finish than that it be done on time but collapse later. The health exchanges look like a textbook case of Failing on Time. Obama was reassuring everyone that signing up was going to be just like shopping on Amazon or Kayak a week — one week! — before the missed launch date. The first missed launch date, that is.
Many of the problems of the exchange implementation were apparent even on paper, in the planning stages. For instance: just how is healthcare.gov supposed to calculate subsidies? That requires real-time verification of your income. Whence will this information come? The IRS knows a scary amount about us, but it doesn't know until deep into next year how much you made this month. They don't have some server with an API standing by to answer queries like getCurrentIncome(<SSN>). ((If you don't believe me, you should have been around when I was trying to convince Sallie Mae and the Department of Education what my family's correct income was so they could calculate our loan repayments. It took about nine months to convince them that my wife, a teacher, is paid 10 months a year, and that as a result you can't just multiply her biweekly wages by 26 to get annual income. There are four million teachers in the US, so it's not exactly like this was some rare exception they had to cope with. I wish there were some IRS system for quickly verifying income, because it would have saved me most of a year of mailing in pay stubs and 1040s and W-2s and offer letters and triplicate forms, by which point, of course, the information was out of date and we had to start over.
Sorry to get off on a tangent here, but the federal government is so bad at technology I just can't let this go. And actually, it's not much of a tangent when you consider it was the PPACA that spearheaded the semi-nationalization of the student loan industry. (Drat. I need a footnote for this footnote. A couple of the very important concepts you can learn from "The Art of Unix Programming" (note 7 supra) are the principles of Compactness and Orthogonality. Both of these, and particularly the latter, should be rules for legislation as well. Folding student loan reform into the PPACA in order to game the CBO scoring is a pretty clear violation of both of these principles.)
Compared to health insurance, a student loan is a pretty simple thing. Have you had to deal with studentloans.gov? It's atrocious. Recently they changed the repayment plan that my wife was on without notifying us. That's bad enough. The ugly part is that when they do that, they don't change the displayed label on your account that tells you which plan you're in, so even if you proactively check for changes you won't find out. And the truly hideous thing is that they don't change the label on the info screens that their own representatives can see either, so if you call to verify you still won't find out! It's true that dealing with the banks before was a complete mess, but I chalk that up to the absence of a right-of-exit for consumers. That was bad enough then, but post-nationalization I'm really over a barrel.
Getting people signed up is only the first skirmish for healthcare.gov. All these sorts of ongoing problems, such as the ones I've experienced with student loans, will constitute the bulk of the IT battle, and they have not even begun to show up yet.)) So it was pretty inevitable that this feature would be abandoned in favor of the honor system. Which is unfortunate, because I remember ObamaCare supporters swearing up and down that it was completely absurd for their opponents to raise concerns about people hustling the system for subsidies they didn't qualify for. (Not to mention the equivalent assumptions the CBO was forced to make.)
This post is already orders of magnitude longer than I expected, so I'm going to toss in a handful of links to a couple of other people's posts without comment. There were many, many more I could put here, but keeping track of all the ink spilled on this is impossible.
The last four are by Megan McArdle, who is not only one of the most cogent econ-bloggers out there, she also worked as an IT consultant, so she has had a lot of valuable perspective to contribute.
I'll close with this, from Ellen Ullman's excellent memoir Close to the Machine. Ullman was (is?) a card-carrying communist. I mention that so you know she's no anti-government right-wing Tea Party ideologue. This passage describes her experience in the early 90s as the lead developer on a San Francisco project building a computer system to unify all the city's AIDS-related efforts. She started the project overjoyed to be working for "the good guys" instead of some profit-maximizing robber barons, but very quickly it turns into this:
Next came the budget and scheduling wrangles. Could the second phase be done in December? At first I tried what may be the oldest joke known to programming managers—"Sure you can have it in December! Of what year?"—but my client was in deadly earnest. "There is a political deadline," they said, "and we can't change it." It did no good to explain that writing software was not a political process. The deed was done. They had gone around mentioning various dates—dates chosen almost at random, imagined times, wishes—and the mentioned dates soon took on an air of reality. To all the world, to city departments and planning bureaus, to task forces and advisory boards, the dates had become expectations, commitments. Now there was no way back. The date existed and the software would be "late." Of course, this is the way all software projects become "late"—in relation to someone's fantasy that is somehow adopted as real—but I didn't expect it so soon at the AIDS project, place of "helping people," province of "good."
I asked, "What part of the system would you like me not to do?"
"You tell us," they said.
"This one. This piece here can't be done on time."
"But we must ace that one! It's a political requirement."
Round and round: the same as every software project, any software project. The same place.
My brief recent dabbling with computational cartography (see Part 1 & Part 2) was inspired by Neil Freeman. His project was initiated as a (speculative) reform to the Electoral College.
Incidentally, this is a great idea. Every four years, when whichever party loses the White House starts complaining about how silly the Electoral College is, I feel compelled to point out that it's actually an excellent way to maximize the expected impact of any one voter's vote. The catch is that this requires states to have equally-weighted populations, which obviously we don't.
I just ran a quick Monte Carlo to check the numbers on this. Let's say you voted in 10 million elections in your life, and in each one there were 100 million voters. (This is obviously way too many elections, but I wanted something that didn't require me to report the results in scientific notation.) Let's further stipulate that every one of those races was close: a dead heat with a standard deviation of 1%. In this idealized case you could expect to be the tie-breaking vote — thus having your vote "matter" — in less than four of them. Four in ten million.
cnt = 10000000;    % elections voted in over a lifetime
pop = 100000000;   % voters per election
ties = [];
for i = 1:1000,
  % each election's vote count ~ Normal(pop/2, 1% of pop), rounded to a whole number
  results = round(random('normal', pop/2, pop*.01, [cnt 1]));
  ties = [ties; sum(results == pop/2)];   % elections decided by a single vote
end;
mean(ties)   % => ans = 3.9160
If, on the other hand, there were 51 states of 1,960,784 voters each, you would be the tie-breaker for your state 203 times. This would only matter if the other 50 states were tied 25/25. You can expect this to happen 11.2% of the time. Which means your chance of being the deciding vote goes from 4-in-10,000,000 to 23-in-10,000,000 which means your vote "matters" more than five times as much in the (modified) Electoral College system than it does in the winner-take-all system.
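If you don't trust the simulation, the same numbers fall out of a back-of-the-envelope check: the probability of an exact tie is roughly the normal density at the mean (the rounded counts land in width-one bins), and the other 50 states split 25/25 with probability C(50,25)/2^50. A quick MATLAB/Octave verification:

pop = 100000000;  cnt = 10000000;    % same setup as above
sigma = (pop/51) * 0.01;             % 1% of a state's 1,960,784 voters
p_tie = 1/(sigma*sqrt(2*pi));        % ~ probability your state is exactly tied
p_tie * cnt                          % => ~203 state-level tie-breaks
p_split = nchoosek(50,25) * 0.5^50   % => 0.1123, other 50 states split 25/25
p_tie * p_split * cnt                % => ~23 decisive votes, vs ~4 under the popular vote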
All of this is far too numerical to explain to someone in casual conversation, so my standard defense of the Electoral College is to agree that we should abolish it as soon as the winner of the World Series is decided by whichever team scores the most runs in 63 innings, rather than by which team wins four of seven games. The reasons to prefer both the Electoral College and best-of-seven champions are the same, but I'm already way too far off track to cover that now.
Okay, back to the point. Equally-sized states would be cool. But the chance of us redrawing state boundaries is zero.
Despite this, I think an automated system to draw boundaries for equally populated regions is far more than just an exercise. Why? Because while our state boundaries are effectively carved in stone by tradition — or in the case of my home of Maryland, literally carved in stone — there are political borders that are re-drawn with regularity: congressional districts.
I'm also pretty much agnostic on term limits. I used to be opposed to them. But seeing how effective gerrymandering is (I think it's one of the reasons people hate Congress but love, or at least tolerate, their own Congressman), I now think it's a problem that term limits wouldn't do much to fix. If you've got a gerrymandered district, especially with modern gerrymandering technology, the new guy who gets elected in it is likely to look an awful lot like the old guy, because the district is so one-dimensional.
Taking the "especially with modern gerrymandering technology" as a given, how can we use technology to pre-empt this trend? It's usually best to address technical problems with technical solutions rather than social ones.
One option is to draw the boundaries the same way we do now, but reject them if their score on some measure falls below a certain acceptable level. For example, IDV Solutions has ranked districts by perimeter/sqrt(area). This is, by their own admission, a crude way to do things. To name just one problem, it favors districts that fall on state borders like the Mason-Dixon Line over those that fall on borders like the Potomac River.
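For concreteness, here's a minimal sketch of that score as a MATLAB/Octave function, treating a district as a polygon with vertex vectors x and y (the function name and inputs are mine, not IDV's). A circle scores 2*sqrt(pi), about 3.54; every other shape scores higher.

function s = compactness(x, y)
  % perimeter / sqrt(area) for a district polygon
  dx = diff([x(:); x(1)]);   % edge lengths, closing the polygon
  dy = diff([y(:); y(1)]);
  perim = sum(sqrt(dx.^2 + dy.^2));
  s = perim / sqrt(polyarea(x, y));
end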
Other measures are possible, obviously. One that I think would be useful is based on nearest neighbors. In a non-gerrymandered map, the vast majority of addresses will have all of their k nearest neighbors in the same district. As districts become more convoluted, more people will have large proportions of their neighbors in other districts. If you limit the neighbors to those within the state, which is reasonable, you minimize the Mason-Dixon/Potomac problem.
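A sketch of that measure, assuming the Statistics Toolbox's knnsearch and hypothetical inputs pts (address coordinates) and dist (district labels):

function frac = neighbor_score(pts, dist, k)
  % fraction of addresses whose k nearest neighbors all share their district
  % pts: N-by-2 coordinates; dist: N-by-1 district labels
  idx = knnsearch(pts, pts, 'K', k+1);            % first hit is the point itself
  nbrs = dist(idx(:, 2:end));                     % labels of the k true neighbors
  frac = mean(all(bsxfun(@eq, nbrs, dist), 2));   % 1.0 means no district mixing at all
end

A gerrymandered map drags this number down, and a threshold on it would be as easy to adjudicate as the perimeter score.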
Another measure, which I've thought out less fully, would be to measure the Kolmogorov Complexity of the map. The descriptive length of the Maryland 3rd is... well, "long" doesn't cut it. Compare that to the former definition of the Maryland 1st: "the Eastern Shore up to the Susquehanna River." The Kolmogorov Complexity itself is uncomputable, but I think we have very good proxies in this case. Whoever draws the maps must be able to present in writing, with no graphics, an unambiguous definition of the districts in fewer than k words. Or alternatively, they must be able to present the districts orally to a judge, spoken and without visual aids, in c minutes.
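The written version of that rule is trivial to adjudicate mechanically. A minimal sketch (function name mine), with the word cap kmax as a parameter:

function ok = description_ok(txt, kmax)
  % the plain-text district definition must fit in at most kmax words
  ok = numel(strsplit(strtrim(txt))) <= kmax;
end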
There's a general principle here: If you can't explain it succinctly, it's a bad rule.
Another approach might be to measure the curvature or the fractal dimension of boundaries between districts. I've thought even less about this though.
The approach of capping some metric has the advantage of being simple, objective, and easily implementable. Whatever partisan committee now draws the borders presents its map to an adjudicator, who passes it through an established, preferably open-source, program which calculates the metrics. If they're above some pre-established threshold, the map is rejected. This is already an improvement over the subjective way we do things now.
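Sketched end-to-end, using the compactness score from above and a hypothetical data layout (a cell array of district polygons), the adjudication step is a few lines:

function ok = accept_map(districts, threshold)
  % districts: cell array of structs with vertex fields x and y
  ok = true;
  for i = 1:numel(districts)
    if compactness(districts{i}.x, districts{i}.y) > threshold
      ok = false;   % a single sprawling district rejects the whole map
      return;
    end
  end
end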
A second option, suggested by David Friedman, is to have the partisan committee submit not a finished map, but a program to generate a map. The inputs to the program could contain population data, municipal boundaries, etc. but no information about party affiliation or past voting records, or any proxies for them. There would also be a cap on program length to keep anyone from submitting what is essentially just a precomputed lookup table.
Given the low numeracy I see in most legislators, I think it would be a hard sell to get them to produce such a program, but the idea has tremendous appeal to me. The whole problem is to remove subjectivity and make the decision on the basis of explicitly acceptable criteria only, which is exactly what computers do: they're entirely dispassionate, and they only know what you tell them.
We could take this one step further and simply codify the program to be used rather than having partisan committees writing their own after every census. I find a technocratic Bureau of Districting Algorithms much easier to imagine than local politicians wrestling with GIS algorithms.