Social Technology

Not all technology is made of atoms and bits. Some technology is made of people. I call this “Social Technology”, which is the technology of human organization, made out of human agreements, beliefs, worldviews, and plans.

Many people are using this terminology now, which is good. It is a useful paradigm for thinking about human affairs, one that is neither moralistic nor relativistic, though it unfortunately underemphasizes the organic reality of social organization.

Let’s start with some examples of social technology: money, laws, voting systems, property rights, marriage customs, corporations, and rules of etiquette.

To analyze these things as “Social Technology” implies a number of things about the social world: that social systems can be analyzed scientifically, that they can be designed, that they have purposes, that some are better than others, and that they aren’t just sacred received tradition or oppressive anachronisms.

For example, the paradigm of social technology denies relativism. It says that different ways of doing things lead to different effects in reality, which necessitates different judgments about the goodness of doing things those ways. Insertion sort is superior to bubble sort, the internal combustion engine to the steam engine, and, depending on our aims, this kind of family structure to that kind.

It also denies, or claims authority superior to, a certain sort of moralism. Within the social technology paradigm, moral judgments are the enforcement mechanism of the logic of the social technology. Ideally the logic of the moral judgment tracks the logic implied by the engineering factors, but it isn’t always so. An analysis of a social technology as such claims to be able to explain or refute all the relevant moral judgments in terms of mechanisms, consequences, and purposes.

When discussing social technology, one often encounters the objection that “social engineering” is apt to be utopian, totalitarian, naive, even destructive, as if engineers constantly ran around tearing down bridges and putting up half-baked armchair-designed replacements. This objection unfortunately has historical grounding: naive utopians have wanted to tear everything down, build totalitarian authority, and otherwise cause trouble, and have used the language of science and engineering to give their activities a better brand.

The engineering of social technology does not have to be idiotic in this way. Ideally it would be done with the same carefully controlled and extremely conservative processes with which safety-critical material systems like bridges are engineered. But that conservatism comes as a result of the maturity of a field; we currently have people doing cowboy social engineering without any consciousness or responsibility at all. The prerequisite is to rectify the self-consciousness of social technologists: to be more explicit about what we’re doing, and to align responsibility with power, so that the field and its methods can mature.

I’ve declared before the objective of developing an engineering-grade science of social technology. If such a thing could be pulled off, a vastly better social world could be built, one where social relations and institutions just work, because they would be well designed for their purposes and people would actually understand how they work. There are challenges, but it seems worth the investment.