An old science fiction novel features a device that surrounds its bearer with an impenetrable bubble of force. The inventor rapidly discovers that every government and political faction on the planet wants what he has and is prepared to use any means, from persuasion to brute force, to get it. Our hero spends most of the book alternately listening to arguments, trying to decide who are the good guys and using his invention to help him escape attempts to capture him.
After about a hundred and fifty pages he realizes that he has been asking the wrong question. The answer to "what faction can be trusted with a monopoly over the shield" is "none of them." The right question is how the shield will affect the world—how it will alter the balance between freedom and oppression, individual and state, small and big. The answer to that is easy. A world where the random individual is armored against anything short of an atomic explosion will be, on net, a better and freer world than the one he is currently living in. He writes out an explanation of how the shield works and spends two days distributing the information to people all over the world. By the time Military Security—the most formidable of his pursuers—catches up with him, it is too late. The cat is out of the bag.
Poul Anderson's shield is fiction. The nearest real world equivalent is privacy—my control over other people's access to information about me. Neither my government nor my neighbor can punish my thoughts, because neither can read my mind. That is why thoughts are free. However much other people are offended by what I write, they cannot retaliate unless they know who wrote it, what he looks like, where he lives. That is why Salman Rushdie is still alive despite the death sentence passed on the author of The Satanic Verses fifteen years ago by Iranian authorities.
Defensive weapons can be used for bad purposes; an impenetrable shield would be very useful for a bank robber. But it would be even more useful for the bank teller. Robbing banks would be harder in a world where everyone had the shield than in a world where nobody did.
The ability to control other people's access to information about you can be used for bad purposes too. That is the usual argument against privacy—"If you haven't done anything wrong, what do you have to hide?" The ability to conceal past crimes from the police and potential victims is useful to a robber. But the ability to conceal what I have that is worth stealing, where it is, how it is protected, is equally useful to the potential victim. Broadly stated, privacy gives each of us more control over his own life—which on average, if not in every case, is likely to lead to a freer world.
If I am a bad guy, the police are not the only people I might want to keep secrets from. When courting a wealthy widow, it helps if she does not know that my last three wives drowned in their bath tubs after taking out large life insurance policies. When borrowing money, it helps if the lender does not know that I have declared bankruptcy twice already.
But in a world of voluntary transactions—such as loans and marriages—my privacy does not require you to take me on faith. You have the option of not taking me. I have the power to keep my past defaults secret from a potential lender but he has the power to refuse to lend to me if I do. Privacy is my ability to control other people's access to information about me. That does not mean that they cannot get the information—only that they cannot get it without my permission. Someone who offers to take care of my children but refuses to allow me access to the records that would show whether or not he has ever been convicted of child abuse has already told me all I need to know.
In some contexts I am willing to let other people know things about me. In others I am eager to. If only lenders knew a little more about my finances I would not be interrupted at dinner by phone calls from people offering to refinance my nonexistent mortgage. If sellers were better informed about what sorts of things I was interested in buying, advertisements would be less of a nuisance and more of a service. Even in a world where I could keep information secret, I often would choose not to. Privacy provides me protection when I want it and only when I want it.
Government is not reason. It is not eloquence. It is a force, like fire: a dangerous servant and a terrible master.
Privacy includes the ability to keep things secret from the government. The better I can do that, the less able government is to help me—I might be keeping secret my weakness for alcohol, or heroin, or gambling or pornography and so preventing the government from stepping in to protect me from myself. And the better other people can keep secrets from the government, the harder it is for the government to protect me from them. If you view government as a benevolent super being watching over you—a wise and kindly uncle with a long white beard—you will and should reject much of what I am saying.
But government is not Uncle Sam or a philosopher king. Government is a set of institutions through which human beings act for human purposes. Its special feature—what differentiates political action from the other ways in which we try to get what we want—is that government is permitted to use force to make people do things. A firm can try to fool me into giving it my money. A tax collector uses more direct methods. A preacher can try to persuade me to renounce my sins. The Drug Enforcement Administration, with the help of the local police, can arrange to have me locked up until I do.
Part of the genius of American political culture is the recognition that making it hard for governments to control people is not always a bad thing. Political mechanisms, even in a democracy, give us only very limited control over what government can do to us. Reducing government's ability to do bad things to us, at the cost of limiting its ability to protect us from bad things done to us by ourselves or by other people, may not be such a bad deal. And since government, unlike a private criminal, has overwhelming superiority of physical force, control over what information it can get about me is one of the few ways in which I can limit its ability to control me.
I have defined privacy and sketched the reasons why I think it is, on the whole, a good thing. The obvious next questions are where privacy comes from—what determines how much of it we have—and what we can and should do to get more of it.
One of the things that determines how much control I have over other people's access to information about me is technology. If someone invents a mind reading machine or a reliable truth drug, my thoughts will no longer be as private as they now are. Or as free.
Another is custom—systems of social norms. The more willing my friends and neighbors are to gossip about something, the easier it is for information about that something to get from those who have it to those who want it. That is one reason why Israelis are better informed about how much money their friends and relations make than Americans are and modern Americans better informed about other people's sex lives than nineteenth century Britons were.
A final factor is law. In the U.S., the Fourth Amendment to the Constitution prohibits "unreasonable searches and seizures" and requires that search warrants shall only be issued with probable cause. The more narrowly courts interpret that restriction, the easier it is to keep secrets from the police. One important example is the series of cases that applied the restriction to wiretaps as well as physical searches. Later cases have ruled on to what extent the use of high tech devices to figure out what people are doing inside their houses—infrared photographs to spot illegal greenhouses growing marijuana, for example—is a search and so requires a warrant.
Law and technology interact in complicated ways. For your neighbor's nosy fifteen year old to use a scanner to listen to the phone calls you make on your wireless phone and tell his friends about them is illegal. It is also easy, making that particular legal protection of privacy in practice unenforceable. The substitute is technology—encryption of the signal from the handset to the base station. Similarly with cell phones.
As these examples suggest, technological developments can both decrease and increase privacy. So can law. Legal rules that ban or limit technologies for learning things about other people, such as laws against wiretaps, increase privacy. Legal rules that ban or limit technologies for preventing other people from learning things about us, such as restrictions on the use of encryption, decrease it.
It used to be that one reason to move from a village to the big city was to get more privacy. Walls were no higher in the city, windows no less transparent. But there were so many more people. In the village, interested neighbors could keep track of who was doing what with whom. In the city, nobody could keep track of everyone.
That form of privacy—privacy through obscurity—is doomed. I cannot keep track of the million people who share the city I live in. But the computer on my desk has enough space on its hard drive to hold a hundred pages of information on every man, woman and child in San Jose. With a few hundred dollars worth of additional storage, I could do it for everyone in California, for a few thousand, everyone in the country. And I can do more than store the information. If I had it I could search it—produce, in a matter of seconds, a list of those of my fellow citizens who are left handed gun owners with more than six children. Privacy through obscurity cannot survive modern data processing.
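The query described above is a one-liner once the records are structured. A minimal sketch in Python, with invented names and field labels standing in for a real database:

```python
# Toy illustration of searching a citizen database. The records and
# field names are invented; a real system would hold millions of rows
# behind an indexed query engine, but the operation is the same.

records = [
    {"name": "A. Smith", "handedness": "left",  "gun_owner": True,  "children": 7},
    {"name": "B. Jones", "handedness": "right", "gun_owner": True,  "children": 2},
    {"name": "C. Doe",   "handedness": "left",  "gun_owner": False, "children": 8},
]

# "Left handed gun owners with more than six children":
matches = [r["name"] for r in records
           if r["handedness"] == "left" and r["gun_owner"] and r["children"] > 6]
```

With an index on any of those fields, the same filter over every resident of a country still runs in seconds; the cost of the search, unlike the cost of village gossip, does not grow with the number of people being watched.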
As it happens, I do not have a hundred pages worth of information on each of my fellow citizens. But with a little time and effort—too much for a single individual, but not too much for a government, a police department, or a large firm—I could. It is hard to pass through the world without leaving tracks. Somewhere there is a record of every car I have registered, every tax form I have filed, two marriages, one divorce, the birth of three children, thousands of posts to online forums on a wide variety of subjects, four published books, medical records and a great deal more.
Much such information, although not all of it, was publicly available in the past. But actually digging it up was a lot of work. The result was that most of us went through life reasonably sure that most of the people we met did not know much about us beyond what we chose to tell them. That will not be true in the future.
Data processing is one technology with the potential to sharply reduce privacy. Another is surveillance. One form—already common in England—is a video camera on a pole.
A video camera in a park connected to a screen with a police officer watching it is, at first glance, no more a violation of privacy than the same police officer standing in the park watching what is going on. It merely lets the officer do his watching somewhere warm and out of the wet. Add a video recorder and it is arguably an improvement, since the evidence it produces is less subject to mistake or misrepresentation than the memory of the policeman. And, judging by British experience, such surveillance cameras are an effective way of reducing crime. What's the problem?
To see the answer, add one more technology—face recognition software. Combine that with a database, put up enough cameras, and we have a record of where everyone was any time of the day and—with suitable cameras—night. The arresting officer, or the prosecuting attorney, no longer has to ask the defendant where he was at eight P.M. on July ninth. All he has to do is enter the defendant's social security number and the date and the computer will tell him. And, if the defendant was in a public place at the time, show him.
For a slightly lower tech version of the same issue, consider the humble phone tap. In the past, the main limit on how many phones got tapped by police was not the difficulty of getting a court order but the cost of implementing it. Phone taps are labor intensive—someone has to listen to a lot of phone calls in order to find the ones that matter.
That problem has now been solved. Voice recognition software originated by companies such as Dragon Systems and IBM lets computers convert speech into text—a boon for computer users who are slow typists. The same technology means that the police officer listening to someone else's phone calls can now be replaced by a computer. Only when it gets a hit, spots the words or phrases it has been programmed to listen for, does it need to call in a human being. Computers work cheap.
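Once speech has been converted to text, the filtering step is a cheap string scan. A sketch, with an invented watchlist and invented transcripts:

```python
# Keyword spotting over transcribed phone calls, as described above.
# The watchlist and transcripts are made up for illustration; a real
# system would run continuously over a speech-to-text pipeline.

WATCHLIST = {"shipment", "detonator"}

def flag(transcripts):
    """Return the IDs of calls that should be passed to a human listener."""
    hits = []
    for call_id, text in transcripts.items():
        if WATCHLIST & set(text.lower().split()):   # any watched word present?
            hits.append(call_id)
    return hits

transcripts = {
    "call-001": "The shipment arrives on Tuesday",
    "call-002": "Can you pick up milk on the way home",
}
flagged = flag(transcripts)
```

The human listener sees only `call-001`; the computer has quietly discarded the rest, which is precisely what makes tapping every phone affordable.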
In an old comedy thriller (The President's Analyst, starring James Coburn) the hero, having temporarily escaped his pursuers and made it to a phone booth, calls a friendly CIA agent to come rescue him. When he tries to leave the booth, the door won't open. Down the road comes a phone company truck loaded with booths. The truck's crane picks up the one containing the analyst, deposits it in the back, replaces it with an empty booth and drives off.
A minute later a helicopter descends containing the CIA agent and a KGB agent who is his temporary ally. They look in astonishment at the empty phone booth. The American speaks first:
"It can't be. Every phone in America tapped?"
The response (you will have to imagine the Russian accent):
"Where do you think you are, Russia?"
A great scene in a very funny movie—but it may not be a joke much longer. The digital wiretap bill, pushed through Congress by the FBI a few years ago, already requires phone companies to provide law enforcement with the ability to simultaneously tap one percent of all phones in a selected area. There is no obvious reason why that cannot be expanded in the future. My current estimate is that the dedicated hardware to do the listening part of the job—for every phone call in the U.S.—would cost less than a billion dollars. And it is getting cheaper.
So far I have been discussing technologies that already exist. Fast forward a little further and surveillance need no longer be limited to public places. Video cameras are getting smaller. It should not be all that long before we can build one with the size—and the aerodynamic characteristics—of a mosquito.
Here again, if we regard government law enforcement agents as unambiguously good guys, there is no problem. The better our record of where everyone was when, the easier it will be to catch and convict criminals.
The same technology would make keeping track of dissidents, or political opponents, or members of an unpopular religion, or people with the wrong sexual tastes, or people who read the wrong books, or anyone else, a great deal easier than it now is. It is true that the random government is rather less likely to have bad intentions than the random criminal. But if it does have bad intentions it can do a great deal more damage.
The technologies I have been discussing so far—database and face recognition software, surveillance hardware—have the potential to make this a much less private world. So do other technologies that I have not covered: improvements in lie detectors and interrogation drugs to learn what we think, biometric identification by fingerprints, retinal patterns, DNA to learn who we are, with or without our permission. The future implications of such developments are sufficiently strong to have convinced at least one thoughtful observer that the best we can hope for in the future is a transparent society, a world without privacy where the police can watch us but we can also watch them (Brin 1999). I would find the symmetry of that future more appealing if it did not conceal an important asymmetry: They can arrest us and we cannot arrest them.
But there are other technologies.
We start with an old problem: How to communicate with someone without letting other people know what you are saying. There are a number of familiar solutions. If worried about eavesdroppers, check under the eaves. To be safer still, hold your private conversation in the middle of a large, open field, or a boat in the middle of a lake. The fish are not interested and nobody else can hear.
That no longer works. The middle of a lake is still within range of a shotgun mike. Eaves do not have to contain eavesdroppers—just a microphone and a transmitter. Phone lines can be tapped, cordless or cell phone messages intercepted. An email bounces through multiple computers on its way to its destination—anyone controlling one of those computers can save a copy for himself.
The solution is encryption. Scramble the message. Provide the intended recipient with the formula for unscrambling it. Now it does not matter if someone intercepts your mail. He can't read it.
There is still a problem. In order to read my scrambled message you need the key—the formula describing how to unscramble it. If I do not have a safe way of sending you messages, I may not have a safe way of sending you the key either. If I sent it by a trusted messenger but made a small mistake as to who he was really working for, someone else now has a copy and can use it to decrypt my future messages to you.
About twenty-five years ago, this problem was solved. The solution is public key encryption. It works by using two keys, each of which decrypts what the other encrypts. One of the two—my public key—I make available to anyone who might want to send me a message. The other never leaves my hands. Someone who wants to communicate with me encrypts his messages with my public key. I use my private key to decrypt them.
Public key encryption provides a free bonus—digital signatures. In order to prove that a message was sent by me I can encrypt it using my private key. The recipient decrypts it using my public key. The fact that what comes out is text rather than gibberish proves it was encrypted with the matching private key—which only I have. Hence, unless I have been very careless, the message is from me.
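The two-key arithmetic behind both paragraphs can be shown with a toy RSA example. The primes below are absurdly small, chosen only so the numbers fit on a page; real keys are hundreds of digits long and wrap the arithmetic in padding schemes omitted here.

```python
# Toy RSA: two keys, each of which undoes what the other does.
# NOT secure at this size - purely an illustration of the mechanism.

p, q = 61, 53                  # two secret (toy-sized) primes
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent: (e, n) is the public key
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi); Python 3.8+

def apply_key(m, key, n):      # the same operation serves both keys
    return pow(m, key, n)

message = 42

# Confidentiality: anyone encrypts with the public key;
# only the holder of the private key can decrypt.
ciphertext = apply_key(message, e, n)
decrypted = apply_key(ciphertext, d, n)

# Signature: the owner "encrypts" with the private key;
# anyone can check the result with the public key.
signature = apply_key(message, d, n)
verified = apply_key(signature, e, n)
```

That the decryption and the verification both recover the original message is the whole trick: publishing one key costs the owner nothing, because it does not help anyone compute the other.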
Imagine a world where public key encryption is in general use. Add in related technologies such as anonymous digital money, to permit payments that leave no paper trail, and anonymous remailers, to keep who I am talking to, as well as what I am saying, private—for details see (Friedman 1996). In that world I can email someone—anyone—with reasonable certainty that nobody else can read the message. I can have telephone conversations without worrying about who may be listening. In that world I can if I wish establish an online persona—an identity defined by my digital signature—while keeping control over the link between that and my realspace persona. However much my online persona offends someone—even the rulers of Iran—there is very little anyone can do about it. It is hard to murder someone when you don't know his name, what he looks like, or what continent he is on.
I have been describing things we already know how to do. Most can already be done—using free software that runs on the computers most of us have. I now take a small step forward to add one more element to the mix: Virtual reality. Using goggles and earphones—if we are willing to step further into science fiction, direct links between mind and computer—we create the illusion of seeing, hearing, perhaps tasting and touching. The world of strong privacy expands from text messages and phone conversations to something very much like the real world we currently live in. Just let your fingers do the walking.
I have described two clusters of technologies. One—database, voice and text recognition, surveillance—has the potential to reduce privacy to the point where those who control the technology know very nearly everything that everyone does. The other—encryption, online communication, virtual reality—has the potential to increase privacy to the point where individuals have nearly total control over other people's access to information about them. What if we get both?
It will be an interesting world. Everything you do in realspace will be known to the authorities, perhaps to everyone—David Brin's Transparent Society. But most of the important stuff—all transactions involving information, ideas, arguments, beliefs—will have been moved to cyberspace, protected by the strong privacy of encryption. Freedom of speech will no longer depend on how the Supreme Court interprets the First Amendment. It will be protected, instead, by the laws of mathematics—which so far, at least, heavily favor defense over offense, encryption over cracking.
There will be—already have been—attempts to use law to block both futures. Supporters of privacy will try to get laws restricting the ability of law enforcement—and other people—to use technology to learn our secrets. Opponents of privacy will try to get laws restricting the ability of private individuals to use encryption to protect their secrets.
There are at least two legal approaches to preserving privacy in the face of technologies such as computer databases and surveillance. One is to use law to prevent other people from getting information—a data base is of no use if there is nothing in it. The other is to permit other people to get information but use law to limit what they can do with it.
An example of the first approach is regulation of wiretapping and other forms of surveillance—both laws against private surveillance and laws restricting surveillance by law enforcement agents. Such restrictions can keep some information about me from getting to other people. But they do nothing to protect the vast amount of information that I generate by going about my daily life in the public view—buying and selling, marrying and getting divorced, writing and talking.
An example of the second approach is the web of restrictions, legal, contractual, and customary, on the use of confidential information. I cannot keep my doctor from having access to the medical information he creates when he examines me and uses when he prescribes for me. But I can, to some limited degree, prevent him from sharing that information with other people. Credit bureaus are free to collect information on people in order to advise other people as to whether to lend them money but, under current Federal law, they are only permitted to release that information in response to requests from people who have a legitimate need for it.
As the example of credit bureaus suggests, there are practical difficulties with protecting privacy by letting other people have information and then controlling what they do with it. Credit agencies could not serve their intended purpose at any reasonable cost if they engaged in an extensive investigation of everyone who asked for information. And even if the agency limits itself to giving the information to people who can prove they are entitled to it, there is no way it can control who they then give it to. It is probably prudent to assume that what the credit agency knows about you anyone else can know if he really wants to. The forms you sign when you shift to a new doctor include an extensive list of people to whom and circumstances under which your medical information will be made available, so it might be equally prudent not to rely too much on your medical privacy.
As long as we limit our options to current technologies for protecting privacy, the outlook does not look good. We might succeed in restricting the use of surveillance, wiretapping, and similar technologies, although attempts to restrict their use by law enforcement face serious opposition by those concerned with the threat of crime and terrorism. But most information about us is public, and once information is out it is hard to control how other people use it or who they give it to.
The technologies of strong privacy offer at least a partial solution. If I make a purchase with a credit card, I create a paper trail—someone, somewhere, knows what I bought. Even if I use cash, a purchase in real space requires me to walk into a store where someone sees me—the information about what I bought is now his as well as mine. In a world where the relevant software is a little better than it now is—say ten years in the future—that someone is a store video camera linked to facial recognition software linked to a database. Stores, after all, like to know who their customers are.
If, however, I buy something over the phone or over the internet, using the digital equivalent of cash—anonymous digital currency—only I know that I bought it. If the something is not a physical object that must be delivered to me but information—music, data, software—I can collect my purchase online without ever revealing my identity or location.
Thus the technologies of encryption and computer networking can permit us, to a considerable extent, to move through the world without leaving footprints behind. If I want to receive advertising based on my past purchases—as it happens I often do—I can choose to make those purchases under my real name and provide my real address. If I want to receive the advertising without making my acts publicly observable—perhaps I am purchasing pornography—I can do it via an online identity. The link that ties my realspace body to my cyberspace persona is under my control. I have privacy—control over other people's access to information about me.
If we go a little further into science fiction I could even have privacy from my doctor. He knows the information that an examination—via remote controlled devices—revealed about me. He does not need to know what my name is, what my face looks like, or where I live. It is not likely that I would want to carry my privacy that far—but I could.
So far I have been considering ways in which we might preserve privacy against the threat posed by technology. But there is another side to the story. For those who think that we already have too much privacy, what I view as the solution may look more like the problem. There have already been attempts to restrict the use of encryption to protect privacy. There will be more.
Suppose I concede, at least for the purposes of argument, that it is possible to have too much privacy as well as too little. Further, and less plausibly, suppose I believed that the strong privacy provided by encryption is a serious problem. How might one use law to solve it?
One difficulty is that encryption regulation poses the problem summed up in the slogan—"when guns are outlawed, only outlaws have guns." The mathematics of public key encryption have been public for decades. The software to do it already exists in a variety of forms, some of them freely available. Given the nature of software, once you have a program you can make an unlimited number of copies. Keeping encryption software out of the hands of spies, terrorists, and competent criminals is not a practical option. They probably have it already, and if they don't they can easily get it. The only people affected by a law against encryption software are the law abiding.
What about banning or restricting the use of encryption—at least encryption that cannot be broken by law enforcement agents? To enforce such a ban law enforcement agencies could randomly monitor all communication systems, looking for illegally encrypted messages. One practical problem is the enormous volume of information flowing over computer networks. A second and even more intractable problem is that while it is easy enough to tell whether a message consists of text written in English, it is very much harder—in practice impossible—to identify other sorts of content well enough to be sure that they do not contain encrypted messages.
Consider a three million pixel digital photo. To conceal a million character long encrypted message—an average sized novel—I replace the least significant bit of each of the numbers describing the color of a pixel with one bit of the message. The photo is now a marginally worse picture than it was—but there is no way an FBI agent, or a computer working for an FBI agent, can know precisely what the photo ought to look like.
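The trick just described is least-significant-bit steganography, and it takes only a few lines. In this sketch `pixels` stands in for a decoded image, one integer (0 to 255) per color channel; the names are illustrative, not any particular library's:

```python
# Hide message bits in the lowest bit of each pixel value, as described
# above. A real use would read and write an actual image file; here a
# short list of channel values stands in for the three million pixels.

def hide(pixels, message_bits):
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit   # clear the lowest bit, then set it
    return out

def recover(pixels, length):
    return [p & 1 for p in pixels[:length]]

pixels = [200, 73, 18, 255, 90, 41, 66, 129]
secret = [1, 0, 1, 1, 0, 1, 0, 0]

stego = hide(pixels, secret)
extracted = recover(stego, len(secret))
```

Every channel value changes by at most one, which is why no observer, human or FBI computer, can tell a carrier photo from an innocent one without knowing exactly what the original looked like.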
Short of banning communication over computer networks or at least restricting it to text messages, there is no way that law enforcement can keep sophisticated criminals, spies, or terrorists from using encryption. What can be done is to put limits on the encryption software used by the rest of us—to insist, for example, that if AOL or Microsoft builds encryption into their programs it must contain a back door permitting properly authorized persons to read the message without the key.
This still leaves the problem of how to give law enforcement what it wants without imposing unacceptably high costs on the rest of us. Consider the description of adequate regulation given by Louis Freeh, at the time the head of the FBI—the ability to crack any encrypted message in half an hour. The equivalent in realspace would be legal rules that let properly authorized law enforcement agents open any lock in the country in half an hour. That includes not only the lock on your front door but the locks protecting bank vaults, trade secrets, lawyers' records, lists of contributors to unpopular causes, and much else.
Encryption provides the locks for cyberspace. If all legal encryption comes with a mandatory back door accessible in half an hour to any police officer with a court order, everything in cyberspace is vulnerable to a private criminal with the right contacts. Those locks have billions of dollars worth of stuff behind them—money in banks, trade secrets in computers and in messages. If being a police officer gives you access to locks with billions of dollars behind them, in cash, diamonds, or information, some cops will become criminals and some criminals will become cops.
In one important way, the consequence for cyberspace is even worse than the equivalent in realspace. If a police officer opens a safe and pockets a stack of cash or a bag of diamonds, the owner can see that something is missing and demand it back. But when information is copied the original is still there. If the officer who has decrypted your communications or stored data assures you that he found nothing relevant to his investigation and so took nothing away, there is no way to prove he is lying.
For encryption regulation to be useful it must either prevent the routine use of encryption or make it easy for law enforcement agents to access encrypted data and messages. Not only would that seriously handicap routine transactions, it would make computer crime easier by restricting the technology best suited to defend against it. And what we get in exchange is protection not against the use of encryption by sophisticated criminals and terrorists—there is no way of providing that—but only against its use by ordinary people and unsophisticated criminals. It does not look like a very attractive deal.
Some years ago Professor Etzioni, who has contributed a chapter to this volume, published a book arguing for some restrictions on privacy as ways of promoting the common good. In reading it, I was struck by two differences between our views that explain much of the difference in our conclusions.
The first was that I did, and he did not, define privacy in the context of freedom of association. Consider the question of airlines requiring their pilots to be tested for drugs and alcohol. Professor Etzioni regards that as a (desirable) restriction on the pilots' privacy. I agree that it is desirable but not that it restricts privacy.
In a society where privacy is protected you have a right not to be tested. You do not have a right to be hired to fly airplanes—and, if you choose to exercise your right not to be tested, you should not be surprised if the airline exercises its right not to hire you. The background legal principle is not that I have a right to be hired as a pilot or that United Airlines has a right to have me fly their planes. The background principle is that they can hire me to fly their planes if and only if both they and I agree. Given that principle of free association many—although not all—of the problems that Professor Etzioni sees with privacy vanish.
The second difference has to do with our different views of government. While Professor Etzioni makes occasional references to the risk of some future oppressive government misusing information, he does not take seriously similar concerns with regard to our current government. His implicit assumption is that government is to be viewed as a benevolent agent standing above the human struggle, not as a mechanism through which individuals seek to achieve their goals, often at the expense of other individuals. That is not a view that strikes me as realistic.
Privacy, like almost anything else, can be used for good or bad purposes. My thesis in this chapter is that, on net, more privacy makes the world a better place. It does so because it is an essentially defensive weapon, a way of reducing the ability of other people to control us.
Reducing the ability of other people to control us is not always a good thing—someone may, after all, want to control me for my own good or control you to keep you from hurting me. But we live in a world where too much control is more of a problem than too little. In the entire world over the past century, something on the order of ten million people have been killed by private murderers. Between a hundred and two hundred million have been killed by the governments that ruled them (Rummel (1999) estimates about 170 million from 1900 to 1987). Quite a lot of individual pain, suffering, and injustice has been due to the acts of private individuals; some could have been prevented by better law enforcement. But mass pain, suffering, and injustice have been very nearly a monopoly of governments. If governments were better able to control us, there would have been more of it. And at the individual level, while privacy can be used to protect criminals against police, it can also be used to protect victims against criminals.
It is tempting to try for the best of both worlds—to restrict the privacy of bad people while protecting that of good, permit governments to collect detailed information about us but only allow it to be used for good purposes. But somebody must decide who are the good people and the bad, what purposes are worthy or unworthy. Whoever that somebody is will have his own agenda, his own purposes. Angels are in short supply.
To put the matter differently, "cannot" is better protection than "may not." If we permit law enforcement agents to know everything about everybody but forbid them from using that information against individuals with unpopular views or political opponents of the party in power, we are protected only by a "may not." The same is true if private parties are able to collect information but restricted in what they may do with it. If the law keeps the information from being collected in the first place, we are protected by a "cannot"—however corrupt or dishonest they are, or however convinced that they are working for a greater good, people cannot use information they do not have.
"Cannot" at one level may depend on "may not" at another. You cannot use information that you do not have. You do not have it because you may not collect it. But even if the law forbids wiretaps or unauthorized surveillance, a sufficiently determined agency—or a sufficiently competent private criminal—can violate the law. That is where technologies that support privacy come into the picture. In a world where encryption is routine it does you no good to tap my phone because you cannot understand what I am saying. It does no good to intercept my email because you cannot read it. "Cannot" is better than "may not."
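The "cannot" that routine encryption provides can be illustrated with a minimal sketch—illustrative only, using a one-time-pad XOR built from Python's standard library rather than the vetted ciphers (such as AES) that real systems use. The wiretapper gets the ciphertext; without the key it is indistinguishable from random noise.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # One-time-pad style XOR: the same operation encrypts and decrypts.
    # Without the key, the output carries no usable information.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet me at noon"
key = secrets.token_bytes(len(message))  # shared only by sender and receiver

ciphertext = xor_cipher(message, key)    # this is all a wiretap yields
recovered = xor_cipher(ciphertext, key)  # the receiver, holding the key, reads it

assert recovered == message
```

The design point is the essay's: the interceptor's problem is not legal but mathematical—no "may not" is involved, only a "cannot."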
We can and should fight a delaying action against the use of technology to restrict privacy. But in the long run technology—useful technology—is hard to stop. In the long run, the real battle will be the one fought in defense of technologies that protect privacy. That one we might win.
Brin, David (1999), The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom?, Perseus Books. (The first chapter is webbed at http://www.kithrup.com/brin/tschp1.html)
Etzioni, Amitai (1999), The Limits of Privacy, Basic Books.
Friedman, David (1996), "A World of Strong Privacy: Promises and Perils of Encryption," Social Philosophy and Policy, pp. 212-228. Webbed at http://www.daviddfriedman.com/Academic/Strong_Privacy/Strong_Privacy.html
Rummel, Rudolph J. (1999), Statistics of Democide: Genocide and Mass Murder Since 1900, Lit Verlag.
Anderson, Poul (1963), Shield.
Paul, Ellen Frankel, Miller, Fred D., Jr., & Paul, Jeffrey (Eds.) (2000), The Right to Privacy, Cambridge University Press.
http://www.mega.nu:8080/ampp/rummel/20th.htm (detailed statistics on 20th century democide)
http://www.daviddfriedman.com/future_imperfect_draft/future_imperfect.html (Much more detailed account of encryption, surveillance, and much else.)
David Friedman is a professor of law at Santa Clara University; his specialty is the economic analysis of law, and his current interests include the effect of technology on the world over the next few decades. His most recent book is Law's Order: What Economics Has to Do with Law and Why It Matters. You can learn more than you want to know about both him and his ideas at www.daviddfriedman.com.