Monthly Archives: July 2013


So the newly exposed NSA program XKeyscore captures pretty much everything you do online, including searches, websites visited and private Facebook chats. As I’ve noted before, this means that the NSA is breaking or bypassing the encryption used by major sites. The most efficient way to do that would be to require that the companies surrender their private encryption keys and not disclose that they had done so. To take the example of Facebook, since it is featured in the NSA slides, and has denied having any part in this, there must be some person at Facebook who knows Facebook’s private encryption keys, who has shared these with the NSA, and who is legally compelled not to disclose this fact.

This is a fundamental challenge to the open internet. This is not proportionate to the “terrorist threat” and clearly generates its own momentum. If this is the defense of freedom, the question is: freedom for whom? To do what? Why? And why is so much of this privatized? Private companies are collecting all this information. What happens when one of them is on the verge of bankruptcy, or has a rogue employee? What happens if individuals at that company are in a lawsuit with you, and they know everything you do online? Every search you do to establish your legal strategy is known to your opponent. That doesn’t seem like a good idea, on many levels.

Why SSL May Not Be Secure

This article discusses how PRISM probably works on a technical level. Lyle Troxell of GeekSpeak
provided the inspiration.

Snowden’s PRISM slides clearly show that internally, the NSA is saying it has “direct access” to the servers at major US internet companies. The companies have denied this, and most of them are using encryption these days. If you are on Google or Facebook, etc., you’ll see that the URL in the address bar of the browser starts with https, which means that the communication is encrypted, and supposedly private, using SSL (Secure Sockets Layer).

You can argue about whether the NSA has computers powerful enough to brute-force SSL encryption, but even if it did, this would not be a good use of resources. The weakness is in the way SSL is commonly implemented, not in the encryption itself.

SSL communications rely on the concept of a public key and a private key, which are very large numbers linked by a special mathematical relationship. When you connect to a secure server, such as Facebook, there is an initial “handshake” that takes place to authenticate that server. Facebook has a private key, which only it knows, and a public key, which it shares with you, the user. The linkage between the public and private keys is such that messages encrypted with one can be decrypted with the other. So Facebook gives you its public key, which allows you to encrypt messages you send to it, which it can decrypt with its private key. As far as most computer scientists know, this is very secure against a third party being able to access the data. The size of the keys increases the complexity of decrypting, and these keys have been getting pretty big in recent years.
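As a rough illustration of that linkage, here is a toy version of the RSA math that underlies this kind of key pair. Everything here is deliberately tiny and for illustration only; real keys are thousands of bits long and wrapped in padding schemes, so never use anything like this for actual security.

```python
# Toy RSA demonstration of the public/private key relationship.
p, q = 61, 53                 # two (tiny) secret primes
n = p * q                     # the modulus, part of both keys
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse (Python 3.8+)

public_key = (e, n)           # shared with everyone, e.g. in a certificate
private_key = (d, n)          # known only to the server

message = 42                  # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
recovered = pow(ciphertext, d, n)  # only the private key holder can decrypt

print(ciphertext, recovered)  # recovered equals the original message
```

Whoever holds the private exponent `d` can decrypt any recorded traffic that was encrypted to this key, which is exactly why handing it over matters.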

As others have suggested, the NSA could just be storing bulk data for a later date when they have the computing resources to brute-force break the encryption, but this is not practical or efficient.

The way to do it is to grab all the data right off the network, as the mantra “collect it all” suggests. This will be SSL encrypted and a real pain to decrypt. So you go to the companies in question, and you ask for their private keys. Do this with a National Security Letter or some other process that keeps the request secret. Then you have the encrypted data and the private key that allows you to painlessly decrypt it. No use wasting all those CPU cycles trying to brute-force enormous encryption keys.

If this is true, both the NSA PRISM slides and the companies’ denials are technically correct. The NSA gets everything, but the companies in question can claim they don’t give direct access to anyone.

Since I am, if anything, a free speech advocate, I would point out here that it is the suppression of First Amendment rights that allows this to happen in secret. If major US companies are being compelled to share their private SSL keys with the government, a BIG part of the problem is that the government compels them not to disclose this. The First Amendment does not allow for such exceptions, and for good reason. If people cannot talk about what the government is compelling them to do, we cannot evaluate whether the government is doing a good thing at all. But that’s politics.

The technical solution, for those who want to prevent snooping, is to make sure that each session uses a secondary, temporary public/private key pair that actually encrypts the data and is discarded at the end of the session, a property known as forward secrecy. That way, after the session has ended, no one has a key to decrypt the data, and the only way to recover it is through brute force, or through future flaws discovered in the encryption algorithms themselves. This seems to be the strategy of Silent Circle.
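The ephemeral-key idea can be sketched with a toy Diffie-Hellman exchange, the standard mechanism behind forward secrecy. The group parameters below are textbook-sized examples, nothing a real deployment would use; real systems use standardized 2048-bit groups or elliptic curves.

```python
import secrets

p, g = 23, 5                      # toy group parameters, for illustration only

# Each side generates a fresh ("ephemeral") secret for this session only.
a = secrets.randbelow(p - 2) + 1  # client's ephemeral secret
b = secrets.randbelow(p - 2) + 1  # server's ephemeral secret

A = pow(g, a, p)                  # client sends g^a mod p over the wire
B = pow(g, b, p)                  # server sends g^b mod p over the wire

client_key = pow(B, a, p)         # client computes (g^b)^a mod p
server_key = pow(A, b, p)         # server computes (g^a)^b mod p

# Both sides arrive at the same shared session key; an eavesdropper who
# saw only A and B cannot compute it.
print(client_key == server_key)

# When the session ends, the ephemeral secrets are discarded. Even someone
# who later obtains the server's long-term private key cannot reconstruct
# this session key from a recording of the traffic.
del a, b
```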

I thank Lyle Troxell for the enlightenment. GeekSpeak is a great show.

Hops, Graphs, and Six Degrees of Separation in a Small World

Last week, John Inglis, deputy director of the NSA, told Congress that analysts could do a “second or third hop query” on telephone and internet data to find connections between people. This prompted a lot of speculation about what this means: if I know someone who knows someone who knows someone who knows a terrorist, will that make me a suspect in this dragnet?

I wouldn’t worry too much about the exact number of “hops”: if an algorithm can do 3 hops, it can probably do more. There’s no reason why it couldn’t do an unlimited number of hops, depending on how you set it up. In computer science, the data structure used to represent this is generally called a graph. A graph contains “nodes”, which in terms of surveillance might be thought of as people, and “edges”, which could be communication events between two people. Each node keeps references to adjacent nodes, which also keep references to their adjacent nodes, and so on. When you are searching a graph data structure, there is no limit to the number of “hops” you can make; it might just depend on how much time you have, how much memory, or how many CPUs you can throw at the problem. Facebook’s “Graph Search” is an implementation of this idea, where the search algorithm considers your social network in the results that are presented to you.
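A hop-limited “contact chaining” query over such a graph is a few lines of breadth-first search. The graph and names below are invented for illustration; the point is that the hop count is just a parameter that could be set to anything.

```python
from collections import deque

def within_hops(graph, start, max_hops):
    """Return {person: hop_distance} for everyone reachable in max_hops."""
    distances = {start: 0}
    queue = deque([start])
    while queue:
        person = queue.popleft()
        if distances[person] == max_hops:
            continue                      # don't expand past the hop limit
        for contact in graph.get(person, ()):
            if contact not in distances:  # first time reaching this person
                distances[contact] = distances[person] + 1
                queue.append(contact)
    return distances

# Nodes are people, edges are communication events (an adjacency list).
calls = {
    "alice": ["bob"],
    "bob": ["alice", "carol"],
    "carol": ["bob", "dave"],
    "dave": ["carol", "erin"],
}

print(within_hops(calls, "alice", 3))
# erin is 4 hops out, so she only appears if max_hops is raised to 4.
```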

American social psychologist Stanley Milgram pioneered the empirical study of social networks with his Small World Experiment in the 1960s. This led to the popular notion of Six Degrees of Separation: that there are only six “hops” between any two people on the planet. I would guess the NSA has, or is close to having, a real working example of this.

If I were asked to set up a surveillance system to monitor everyone on the planet, this would be the way to do it. Think of every person as a node, and each communication as an edge. In the node, you would store whatever personal information you get about that person, informed by the metadata and content from the edges, or communication events. Get these from whatever source you can; it’s big data, you want everything. As the graph operates over time, you would collect location information, purchases, system information, passwords, searches, URLs, phone numbers called, medical history, travel, etc. All of this information would be stored in the node, which is a reference to an individual human. You could easily set this up in such a way that the main database table does not contain the person’s name, which would be in another table, and then you could say you don’t have personally identifiable information, even though you really do.
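That “no names in the main table” dodge is easy to sketch. All the data below is invented; the point is that pseudonymous IDs plus a lookup table are re-identifiable by design.

```python
# The "node" data, keyed by an opaque ID -- technically name-free.
events = {
    "subj-0001": {
        "locations": ["37.77,-122.41"],
        "searches": ["lawyer near me"],
        "numbers_called": ["555-0142"],
    },
}

# The separate table that re-identifies everyone.
identities = {
    "subj-0001": "Jane Doe",
}

def profile(subject_id):
    """Join the two tables: the 'anonymous' data is anonymous in name only."""
    return {"name": identities[subject_id], **events[subject_id]}

print(profile("subj-0001")["name"])  # the name was never really gone
```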

But what makes this kind of data structure really interesting is what it shows you on a bigger level – the BIG PICTURE, so to speak. As it approaches real-time, a system like this allows you to track the spread of ideas, and the role of individuals in the spread of ideas. If you are a government that doesn’t like certain ideas, this gives you a really good way to stop those ideas, because you can identify the individuals who are central to the social networks expressing those ideas. This would allow an unscrupulous government to easily target individuals who are expressing ideas it doesn’t agree with. You say you aren’t doing anything wrong? Maybe someone looking at your data has a different idea of right and wrong, and without your knowledge, has decided you are a threat. This is why privacy matters.
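Identifying the individuals who are central to a network falls out of the same structure. The simplest measure is degree centrality, sketched here on an invented graph: whoever has the most edges is the hub.

```python
# Degree centrality on a toy communications graph: nodes are people,
# edge sets are who they communicate with. All names are invented.
graph = {
    "alice": {"bob", "carol", "dave", "erin"},
    "bob": {"alice"},
    "carol": {"alice", "dave"},
    "dave": {"alice", "carol"},
    "erin": {"alice"},
}

# Count each person's connections; the maximum is the network's hub.
centrality = {person: len(contacts) for person, contacts in graph.items()}
most_central = max(centrality, key=centrality.get)
print(most_central)  # alice, the hub through whom ideas flow
```

Real systems use more sophisticated measures (betweenness, PageRank-style scores), but even this crude count singles out the person to target if you want to disrupt the network.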

Collect It All

There have been some interesting articles recently about NSA chief Keith Alexander. James Bamford had one in Wired and there is another in the Washington Post. Techniques that were developed to eavesdrop on occupied Iraq have now come home to spy on Americans. Following the Big Data mantra, Alexander wanted to “collect it all” in the context of Iraq. The Snowden leaks reveal that the program originally targeting Iraqis has gone global.

At the same time the NSA is secretly assessing the threat, it also has the means to carry out an essentially military response, as with the Stuxnet virus, and military forces directly under NSA command.  As the whole process is increasingly privatized, you will have private companies playing both ends. This would seem to encourage a business model where you exaggerate threats on the one side to create opportunities for profit on the other.

This kind of setup invites abuse. Keith Alexander has certainly lied about NSA activities, but may believe he is doing it all for the good of the country. A public official may believe he’s doing the right thing, but we can’t know that if his rationale and decisions remain secret and classified. The only way to tell if this power is being used effectively to guarantee our security is to have an open debate about it. We got a little taste of that this week when a normally timid Congress vigorously questioned NSA officials.

On many issues of National Security, we have public records that can be examined, and we have a public debate about the issues. Our politicians engage in that debate, and their words become part of the historical record that we, as citizens, can examine to determine for ourselves whether they made the right decisions.

What we’ve discovered is that the system of classification is being used in ways that may not be in the public interest. Daniel Ellsberg, Bradley Manning and Edward Snowden have all released classified documents because they believed the documents were classified against the public interest. The only way to know is to have a public debate about it. A “court” such as FISA that does not allow for open, adversarial debate and classifies its decisions isn’t much of a court at all. The Privacy and Civil Liberties Oversight Board is a step in the right direction. It’s great to see the ACLU’s Jameel Jaffer participating, because he has a different agenda than ever-expanding surveillance, and will ensure that perspective is present in the debate.

The secret imperative to “collect it all” needs to be subject to adversarial and open debate. What are the alternatives? What would be the value of prohibiting the government from storing all this data on US citizens? What would be the advantages of more narrowly targeting the collection? Why not give US citizens the right to see the data the government has collected on them? What if we open-sourced the whole thing? A more adversarial system, with these questions being openly debated, would make us stronger, not weaker.

Because we haven’t had that debate until now, US companies that have collaborated with the NSA suddenly have a public relations disaster. As I wrote previously in this blog, this could have very bad effects on Silicon Valley companies. The latest Guardian revelations on how Microsoft collaborated with the NSA have the company issuing denials, and asking the Attorney General to allow them to talk about it. Lots of companies are now asking “permission” to discuss the requests made to them by the NSA. Again, I would emphasize, the First Amendment should solve this. We either have a right to free speech, or we have a government which can compel us to do things without telling anyone. We do not, and cannot, have both.

Sure, there are times when the government and private companies will collaborate secretly for the greater good – times of war, or dire national emergency. The problem comes when that situation is permanent. If an emergency is permanent, it is no longer an emergency, it is just the new normal. If, in the normal course of events, FISA court rulings and National Security Letters compel companies to secretly hand over data and not utter a word about it, we should be concerned not only about the loss of privacy, but also the loss of free speech this entails. If “Congress shall make no law… abridging the freedom of speech”, how does this happen?

“Collect it all” is probably not a good policy, and had to be enacted in secret because there are so many problems with it that it never would have happened with a full democratic debate. Now the debate is happening, and as in the mid-1970s with the Church Committee, the public will try again to grapple with what has been done in our name. We will need to find an appropriate balance between “Collect Nothing” and “Collect it All” – which can only be achieved by an open and adversarial debate.





On the Bright Side

Some might think that I’m against the NSA and our nation’s signals intelligence efforts because of the kinds of things I say here. Not true. I’m against certain policies that control the use of these assets, and that’s what I’m mainly talking about on this site. We have the most powerful military and signals intelligence this world has ever seen, and I’m glad that we are powerful enough that we do not face any serious threat in these realms.

If you go back to World War II, we faced a very serious threat from the Axis Powers. We have not faced a similar level of threat since. Most of the National Security policies that we are still dealing with today had their gestation in WWII and the years immediately following. At the time, when we did face an existential threat from foreign enemies, the codebreaking programs that allowed us to read the secret communications of the Germans and Japanese were vital to the war effort.

During the war, the predecessor to the NSA launched a program called VENONA, aimed at cracking Soviet diplomatic codes to determine whether the Soviets were making a secret pact with the Nazis. This was important to know in strategic terms.

The war ended before the project bore fruit, but when we did finally crack the Soviet diplomatic cables, numerous spies were uncovered. Several had passed atomic secrets to the Soviets, which allowed them to accelerate their atomic bomb program.

These individuals were guilty of espionage – working as an agent of a foreign government against the interests of their country.  Back in the years of World War II and the early Cold War, our cryptologic and intelligence efforts played a critical role in our National Security, as illustrated by these cases.

More recently, we’ve seen arguments about whether surveillance played a role, or how large a role, in foiling some 50 terrorist plots. But none of these represents the kind of threat our country faced back in the 1940s. And now we see Edward Snowden being charged with espionage, though he apparently never intended to help any foreign adversary, or harm the American people.

Although the United States doesn’t face the kind of existential threats that we did in the World War II era, there are still plenty of threats out there. It is the fundamental duty of our government to protect us against foreign threats. If a government, say in China or anywhere else, is training programmers to attack US commercial and government systems, we ought to have the means to counter that, to keep our systems safe. And we can’t have every intelligence analyst with a political gripe spilling the beans. In my opinion, there should be automatic prosecution of people who release classified information against their oath. At the same time, there should be way less classified information, and there should be a clear path to pardon for those who are acting in the public interest.

We have to ask ourselves, as the most powerful country in the world, what do we really want? Do we want “Global Cryptologic Dominance” – the stated “vision” of the NSA? Especially after what I’ve seen this last month, I’m a little uncomfortable with the “Dominance” thing.  I’d be a little happier if the vision was something more like “Global Cryptologic and Network Security.” Domination is what you do when you are at war with an enemy, when you must win and you give it everything you’ve got. Domination is not suited for times of peace or relations with friendly countries and free people. Attempts at domination create their own adversity. Now more than ever, we need partnerships that create security for all, not secret programs to dominate.

Steve Gibson’s Explanation of PRISM

In a podcast back in June, security expert Steve Gibson speculates that the name PRISM comes from the nature of the devices used to collect data. Back in 2006, AT&T technician Mark Klein reported that the NSA had set up a secret room in a San Francisco facility with access to AT&T’s fiber-optic internet “backbone.” An optical splitter was placed on the backbone cable, which split the single optical stream into two: one continuing to its intended destination, the other copy going to the NSA.

By its nature, such a device would use some form of optical prism, like the one you place on the windowsill to split the sunlight; hence the name, Gibson speculates. Even more interesting, and equally plausible, is his explanation of how the NSA uses this to get user data from US companies.

According to the NSA slide leaked by Snowden, the PRISM program collects data “directly from the servers” of major US companies. All the US companies denied this was the case.  I assumed that either employees of the companies, or contractors with access to the facility who were secretly working for the NSA had set up these systems inside Google, Yahoo, Facebook, etc. Even an employee who had a security clearance or was served with a National Security Letter prohibiting them from talking about it to anyone could have been responsible.

Gibson asserts that it is the routers leading to the servers that are compromised, not the servers themselves. This does make some sense. This would mean that the NSA slide is technically inaccurate, saying “server” when it should have said “router.” It’s effectively the same, but if the slide was prepared for a technical audience, you would think they would get that distinction right. But it would also mean that the denials of the companies are, at least technically, accurate.

Here’s an example of what Gibson is talking about:

Let’s say we’re talking about Google. Google server farms, in various locations, are served by fat fiber optic pipes that carry the data through a series of routers, on their way to Google. A router close to an internet backbone like the one at ATT will have data going to all kinds of destinations. As you get physically closer to the Google data center, there will be chokepoints, or routers one hop away from Google that basically only have data going to Google. You put a splitter on this router, and you might as well have “direct access to the servers” at Google, because you are getting all the data going to those servers – the individual requests sent from everyone using Google.
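The chokepoint logic can be sketched in a few lines: if nearly everything crossing a tapped link is addressed to one organization’s network block, copying the link is equivalent to copying the traffic to its servers. The addresses and payloads below are invented (the prefix is a reserved documentation range, standing in for a real data center’s block).

```python
import ipaddress

# Stand-in for the address block of a company's data center.
target_net = ipaddress.ip_network("192.0.2.0/24")

# (destination address, payload) pairs as seen on the tapped link.
packets = [
    ("192.0.2.17", "search?q=..."),
    ("198.51.100.5", "unrelated traffic"),
    ("192.0.2.99", "login POST"),
]

# Keep everything destined for the target's block -- on a chokepoint
# router, this is nearly all of the traffic.
captured = [
    payload
    for dst, payload in packets
    if ipaddress.ip_address(dst) in target_net
]
print(captured)  # → ['search?q=...', 'login POST']
```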

The problem I see with this theory is again a problem of trust. If this is true, it is incumbent on Google to investigate and identify the compromised routers that are copying their data. If that is a separate company, it doesn’t matter: Google’s data, and their brand, are still compromised. Presumably, this would be a serious breach of contract by the company providing the router services for Google. If this is true and Google doesn’t investigate, or doesn’t seem to care, that would suggest that they are colluding with the NSA, and their denials, while technically accurate, are designed to mislead.

This would have happened with all the major Silicon Valley brands, allowing the NSA to grab all the data going to their servers, while giving the companies a cover of plausible deniability.

Big Data and Policy

Bruce Schneier’s recent blog post Big Data Surveillance Results in Bad Policy makes an important point about how big data is used by governments, and how this falls short of good public policy. Big Data is all about finding patterns in the data, not understanding what they mean, or the underlying problems they point to. Maybe that’s ok for private businesses, but for government institutions, it can lead them to focus on the symptoms, rather than the causes of problems.

If we can track all the communications of all the people in the world, we might be able to find the ones who intend to do us harm. But we have lost sight of the original problem – why would anyone want to harm us in the first place? If you take at face value the statements of terrorists from Osama Bin Laden to the Boston Marathon bomber, US foreign policy is a large factor that motivates them. It also creates a context where others might sympathize with them or aid and abet them.

It might be worth looking at why our foreign policy seems to create as many enemies as friends, rather than just assuming we have to change the nature of our domestic society to handle the blowback.

Confidence and Trust

In Silicon Valley, we make great software that the world loves. This has produced the revolution in human communications we call social networking, made the world’s data the biggest searchable library ever, and given people so many wonderful games and apps to entertain, enlighten and enhance our lives.

Anyone who makes software knows just how fickle consumers can be. One little screwup, bad review, overloaded server or non-functioning button can make the difference between a consumer who loves you and a consumer who is looking for an alternative. Most important of all, the baseline that we cannot ignore is that the consumer must have confidence that the software works as described, and trust that whatever personal data they share with that software is safe, secure, and not used improperly. This is getting harder and harder to figure out, as people share more and more complex data with ever more sophisticated systems. We all know we have to make tradeoffs: there’s one thing we don’t like about the software, but some other really good feature means we have to use it; it’s a must-have.

We’ve had a lot of must-haves in Silicon Valley for awhile now – Apple, Google, Facebook. These companies make software that millions or billions of people around the world find compelling. Now, with US companies having been revealed to be cooperating with US government surveillance, this is being called into question. Responding to the NSA revelations, Neelie Kroes, Vice President of the European Commission, has questioned whether Europeans should be trusting US firms with their cloud data:

“Concerns about cloud security can easily push European policy-makers into putting security guarantees ahead of open markets, with consequences for American companies. Cloud has a lot of potential. But potential doesn’t count for much in an atmosphere of distrust.”

This could have serious consequences for Silicon Valley. We make software for consumers around the globe, and those consumers must feel secure with our products. They must know that the product works as advertised and intended, that their data is safe. Now that people around the world know US companies have been opening a secret backdoor to US government eavesdropping from within our most popular software brands and products, they may shun Silicon Valley in favor of alternatives they view as more secure. If this happens, it will be yet another burden this system of surveillance has put upon us. No one here wants to see Silicon Valley’s must-haves turn into Silicon Valley’s has-beens.

4th of July

I’ve been really busy and haven’t been able to work on this project. This has given me some time to reflect on the Snowden revelations, the responses, and what it all means. Today, there are “Restore the Fourth” rallies across the country. Here is one in DC.

I’m no expert on the 4th amendment to the US Constitution, but this text is pretty clear:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

At the time these words were written, the intent was clearly that “the people” should be free from the government doing warrantless “searches” – exactly what the NSA does today. This is on the face of it, the US government directly violating the law. Even though the framers had no way of conceiving of the internet, or email or cellphones, their intent was very clear – they wanted to prevent this type of abuse of power by governments that is happening now. When the people’s lives are transparent to government, but the actions of the government are hidden from the people, we have a strange inversion of the notion of public and private. Private citizens are the ones about whom everything is known. Public officials are the ones who are amassing secret stores of data about private citizens, justified by secret interpretations of the law which are classified and cannot be uttered in public. Our founding fathers had a different notion of how this country would be run – by private citizens, whose private lives are not the subject of government searches and seizures, expressing their will publicly about public policy issues that are discussed openly.

From Barack Obama on down, many government officials have said they want to have a public debate about surveillance technologies. Of course this position was only stated after all the dirty laundry came into view from the Snowden disclosures, which makes such statements somewhat unbelievable. It will take more pressure from below. It will take more Edward Snowdens, and more Glenn Greenwalds, and they will come. We are humans, and we will not be crushed by the machines of power.

Over the years, I have been a strong advocate of free speech and a first amendment proponent. Back in 1992 I was making audio for video games. I recorded everything President Bush said off the TV. Under the name Tone Def, I released Bushwack, an electronic parody song of reconstructed phrases of the president. It received massive airplay, opened many people’s eyes to this new technology, and I never made a cent off it, because it was not condoned by the big music companies, who at the time still controlled music distribution. I really didn’t know what would happen with Bushwack; I thought of it as an important test of the limits of free speech. I didn’t know if someone would come after me, but I felt that was less important than making that artistic statement. I, a powerless individual, could rearrange the words of the most powerful man on earth, making him speak the truth. Even if I couldn’t make money off it, I could get it onto the radio and participate in the national debate. I was trying an extreme form of “speech” and it worked. I followed this up with the Clinton parody Rock The House and the GW Bush parody Fuzzy Math, for which I also made a flash video and released the entire database of audio for others to use as The George W Bush Public Domain Audio Archive.

In each case, my intention was to push the bounds of free speech as far as I could – to use technology as an equalizing mechanism, to use the words of the US President to illuminate his own shortcomings in a humorous manner. The fact that I could do this without getting thrown in jail or worse proved to me that we live in a country with a healthy respect for the first amendment.

But when it comes to National Security and classified information, our government is not so permissive. Anyone with direct knowledge of NSA activities apparently does not enjoy the rights of the first amendment:

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

It is one thing to have a massive surveillance state such as we do, and clearly it violates the fourth amendment, but I would say, even more importantly, it violates the first amendment. The people who know how the system of surveillance works are legally prohibited from talking about it – in direct violation of the first amendment. This is important because it means we cannot have democratic control over such institutions, as pervasive and powerful as they are. If we cannot even have an open discussion about these systems, we cannot decide as a democracy, if they are good, worthwhile, or effective. Edward Snowden, like any American, has a right of free speech, which is outlined in our constitution. I agree that we do not want to have every government contractor who has a problem with something leaking it to the press. There should be automatic punishments for such things. It would be the role of a Chief Executive, recognizing the clear public interest in the disclosures, to pardon a man like Snowden. But this would only happen in a democracy where top officials served the public interest with integrity, humility, and respect for our most basic and important laws. In the American fantasy, we still live in such a country, but the secret accumulation of centralized power to snoop on all Americans shows we are much closer to a “turnkey totalitarian state” as former NSA official William Binney put it.

The time for a national debate on these technologies is now. Ten years from now, there may be systems in place that prevent leaks like the one that opened up this debate. Even more terrifying is to think about where a system like this is heading. Imagine a future where we have seen terrorist acts of unprecedented destruction. The threats will be so dangerous, and the stakes so high, that the system will be automated to make sure it can respond quickly enough to the danger. When an algorithm scanning NSA databases identifies a positive imminent threat, it will send automated systems (e.g., robots) to neutralize the threat. At that point, humans are not even part of the equation. We will have completed our historic transformation from citizens, to consumers, to casualties. This will be the subject of another post. Happy Fourth of July!!