The Three Laws of Politics

I had a revelation during my commute this morning—and, lucky you, I’m going to share it with you. But first, I’m going to give you a brief and reductive history of my personal politics.

I grew up with liberal Democrats for parents—which meant that, as a rebellious teenager, I decided to become a conservative Republican, a phase that lasted until the Bush administration’s response to 9/11 made being one distasteful to me. Which meant, of course, that I had to be an adult and decide on my own politics.

In my early-to-mid twenties, things like social democracy and distributism and Utah Phillips appealed to me, primarily, I suppose, for their idealism, which is the kind of thing that appeals to people in their early twenties. For a while I flirted with libertarianism, but eventually decided (or realized) that it was based on a fantasy about the Founding Fathers, who were slave-owning oligarchs, and maybe not the best guys. I realized at roughly the same time that I was too much of a radical leftist to be a libertarian, and so started flirting with anarchism (though there was probably a long period where I was both a libertarian and an anarchist—what kept me attracted to libertarianism was its conservative strain of anarchism).

The problem with anarchism, though, is that it doesn’t really work on a massive scale—because, you know, the state of nature, the world before/without government, is not a nice place, whatever Rousseau may have said—and, also, by my later twenties, idealism seemed both naïve and impractical. I got around (or tried to get around) this by calling myself an “anarcho-pragmatist”: working toward a stateless, non-coercive society while knowing the goal is unreachable. I still use that label to identify myself politically when such a label is necessary, although I’m not sure how useful (or applicable) it is.

So, the revelation: What I want is a government constrained by Asimov’s Three (Four) Laws of Robotics:

  1. A robot [government] may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot [government] must obey the orders given to it by human beings [constituents], except where such orders would conflict with the First Law.
  3. A robot [government] must protect its own existence as long as such protection does not conflict with the First or Second Laws. [Maybe this one is less necessary.]

Also, the zeroth law: A robot [government] may not harm humanity, or, by inaction, allow humanity to come to harm.
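
If you want to be insufferably literal about it, the “except where such orders would conflict” clauses just describe an ordered rule check. Here is a toy Python sketch (entirely my own gloss, with made-up predicates standing in for the genuinely hard definitional work) of how the precedence nests:

```python
# A toy model of the laws as a strictly precedence-ordered rule system:
# a proposed action is permitted only if no law objects, and the first
# objection, checked in priority order, is decisive. The string-matching
# predicates are obvious placeholders for the hard philosophical questions.

from typing import Callable, List, Tuple

Law = Tuple[str, Callable[[set], bool]]  # (name, "does this action violate me?")

LAWS: List[Law] = [
    ("Zeroth", lambda effects: "harms humanity" in effects),
    ("First",  lambda effects: "harms a human" in effects),
    ("Second", lambda effects: "disobeys constituents" in effects),
    ("Third",  lambda effects: "destroys the government" in effects),
]

def judge(effects: set) -> str:
    """Check each law in priority order; the first violation wins."""
    for name, violated_by in LAWS:
        if violated_by(effects):
            return f"forbidden (violates the {name} Law)"
    return "permitted"

# An order from constituents that would harm someone is refused under the
# First Law before the Second Law's duty to obey is even consulted.
print(judge({"obeys constituents", "harms a human"}))  # forbidden (First Law)
print(judge({"disobeys constituents"}))                # forbidden (Second Law)
print(judge({"collects reasonable taxes"}))            # permitted
```

The ordering does all the work, which is, I suppose, the point: everything interesting is hidden inside those placeholder predicates.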

I have no idea how workable this is as a politics: it only just occurred to me this morning. Certainly there are ambiguities waiting to become serious problems in the definitions of, say, “injure” and “harm”—and, as Asimov himself notes (or has R. Daneel Olivaw note in Foundation and Earth), how do you decide what is “harmful” to “humanity”? Also, a robot is a singular entity—an individual, a “person”—and a government is a system, an organization, a structure: is it meaningful or possible to apply the three laws to such a thing?

Maybe it’s just idealism masquerading as science fiction masquerading as political theory, or something—another way of saying “do unto others as you would have done unto you” or “be excellent to each other,” except with the government as one of the others. Maybe it’s just another way of writing/defining democracy: government of, by, and for the people (which is maybe itself an idealistic fantasy).

Maybe I just read the Foundation series too many times as a kid.

 
