Wednesday, March 23, 2016

An Open Letter to Tim Cook: Apple and the Environment

Dear Mr. Cook:

I watched your March 2016 event keynote with great interest, and applaud your principled stance on encryption, as well as the amazing steps Apple is taking toward environmental stewardship. For me, the high point of the entire event was the introduction of Liam, which I think serves as a beacon of responsibility for other manufacturers to aspire to.

However, my enthusiasm for this environmental trail-blazing was dampened later in the day when I downloaded iOS 9.3 and installed it on my iPhone. Immediately after installation, I eagerly went looking for the Night Shift settings in the Settings app. After a frustrating search, I finally found mention online that the feature is available only on newer devices.

As a lifelong environmentalist, I believe in making use of things as long as they retain their basic utility. To that end, I still carry an iPhone 5 in my pocket. It’s a truly amazing device, and it still works as well now as it did when I got it three years ago.

But the absence of Night Shift support is the latest in a string of unnecessary disappointments. When Wi-Fi calling was introduced in iOS 8, we were told that the iPhone 5 was not powerful enough to support the feature (despite its presence in low-end Nokias for years, and rumors that it worked fine on the iPhone 5 during the beta period). When content blocker support was introduced in iOS 9, we were told that 64-bit processors were required -- a complete non sequitur. Now that iOS 9.3 is out, we're being told the same kind of thing about its shiny new features.

I've been a software engineer for decades, and I recognize artificially manufactured limitations when I see them. Look, I get it: Apple sells hardware, and it's good business sense to choke off older equipment to induce people to buy new devices. But pairing this behavior with environmental messaging sends mixed signals. It tells us that the environmental push isn't as sincere as it's being held out to be.

And I think Apple is better than that.

Sincerely,
Adam Roach

Friday, February 19, 2016

Laziness in the Digital Age: Law Enforcement Has Forgotten How to Do Its Job

In all the hubbub around the FBI and Apple getting crosswise with each other, there's a really important point that's been lost, and it's not one that the popular press seems to be picking up on.

From initial appearances, the public as a whole seems to support Apple's position -- protecting the privacy of its users -- over the FBI's. But there is still a sizeable minority who criticize Apple for obstructing justice and aligning itself with terrorists.

What the critics are overlooking is that the very existence of this information -- the information the FBI claims is critical to their investigation -- is a quirk of the brief era we currently live in. Go back a decade, and there would be no magical device that people carry with them and confide their deepest secrets to. Go forward a decade, and personal mobile devices are going to be locked down in ways that will be truly uncircumventable, rather than merely difficult to break.

But during the decade or so that people have been increasingly using easily exploitable digital devices and communication, law enforcement has become utterly dependent on leveraging it for investigations. They've simply forgotten how to do what they do for a living.

When James Oliver Huberty killed 21 people at a San Diego-area McDonald's in 1984, there was no phone to ask Apple to unlock. Patrick Edward Purdy in Stockton, CA in 1989? No phone. George Jo Hennard in a Texas Luby's in 1991? He wasn't carrying a digital diary with him for police to prowl through. I could go on for much longer than any of you could possibly bear to read. In none of these cases did law enforcement have a convenient electronic device they could use to delve into the perpetrator's inner thoughts. They pursued their cases the same way law enforcement had for centuries prior: they applied the hard work of actual physical investigation. They did their job.

Law enforcement agencies developed investigatory techniques before the ubiquity of digital devices, and those techniques worked just fine. Those techniques will continue to work just fine after everything is finally locked down, and that day is coming soon. All the FBI needs now is to re-learn how to employ these techniques, rather than pushing companies to weaken privacy protections for people worldwide.

And they need to re-learn these techniques before digital snooping becomes completely worthless, or they're going to be in a world of hurt.

Saturday, November 28, 2015

Better Living through Tracking Protection

There's been a bit of a hullabaloo in the press recently about the blocking of ads in web browsers. Very little of the conversation is new, but the most recent round of discussion has been somewhat louder and more excited, in part because of Apple's recent decision to allow web content blockers on the iPhone and iPad.

In this latest round of salvos, the online ad industry has taken a pretty brutal beating, and key players appear to be rethinking long-entrenched strategies. Even the Interactive Advertising Bureau -- which has referred to ad blocking as "robbery" and "an extortionist scheme" -- has gone on record to admit that Internet ads got so bad that users basically had no choice but to start blocking them.

So maybe things will get better in the coming months and years, as online advertisers learn to moderate their behavior. Past behavior shows a spotty track record in this area, though, and change will come slowly. In the meanwhile, there are some pretty good tools that can help you take back control of your web experience.

How We Got Here

While we probably all remember the nadir of online advertising -- banners exhorting users to "punch the monkey to win $50", epilepsy-inducing ads for online gambling, and out-of-control popup ads for X10 cameras -- the truth is that most ad networks have already pulled back from the most obvious abuses of users' eyeballs. It would appear that annoying users into spending money isn't a winning strategy.

Unfortunately, the move away from hyperkinetic ads to more subtle ones was not a retreat as much as a carefully calculated refinement. Ads nowadays are served by colossal ad networks with tendrils on every site -- and they're accompanied by pretty sophisticated code designed to track you around the web.

The thought process went something like this: if we can track you enough, we can learn a lot about who you are and what your interests are. The premise is that people will be less annoyed by ads that actually fit their interests; and, at the same time, such ads are far more likely to convert into a sale.

Matching relevant ads to users was a reasonable goal. It should have been a win-win for both advertisers and consumers, as long as two key conditions were met: (1) the resulting system didn't otherwise ruin the web browsing experience, and (2) users who didn't want their movements across the web recorded could tell advertisers not to track them, and have those requests honored.

Neither is true.

Tracking Goes off the Rails

Just like advertisers went overboard with animated ads, pop-ups, pop-unders, noise-makers, interstitials, and all the other overtly offensive behavior, they've gone overboard with tracking.

You hear stories of overreach all the time: just last night, a friend recounted how she received an email (via Gmail) from a friend that mentioned front-loaders, and then had to suffer through weeks of banner ads for construction equipment on unrelated sites. The phenomenon is so bad and so well-known that even The Onion is making fun of it.

Beyond the "creepy" factor of having ad agencies build a huge personal profile of you and follow you around the web to exploit it, the user-tracking code itself has become so bloated as to ruin the entire web experience.

In fact, on popular sites such as CNN, code to track users can account for somewhere on the order of three times as much memory usage as the actual page content: a recent demo of the Firefox memory tracking tool found that 30 MB of the 40 MB used to render a news article on CNN was consumed by code whose sole purpose was user tracking.

This drags your browsing experience to a crawl.

Ad Networks Know Who Doesn't Want to Be Tracked, But Don't Care

Under the assumption that advertisers were actually willing to honor user choice, there was a large effort to develop and standardize a way for users to indicate to ad networks that they don't want to be tracked: the "Do Not Track" signal. It's been implemented by all major browsers, and endorsed by the FTC.
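
Mechanically, the signal couldn't be simpler: a participating browser adds one extra header to every HTTP request it sends.

DNT: 1

That's the entire mechanism; everything else depends on the ad network on the other end choosing to respect it.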

For this system to work, though, advertisers need to play ball: they need to honor user requests not to be tracked. As it turns out, advertisers aren't actually interested in honoring users' wishes; as before, they see a tiny sliver of utility in abusing web users, under the misguided notion that this somehow translates into profits. Attempts to legislate conformance were made several years ago, but they never really got very far.

So what can you do? The balance of power seems so far out of whack that consumers have little choice but to sit back and take it.

You could, of course, run one of any number of ad blockers -- Adblock Plus is quite popular -- but this is a somewhat nuclear option. You're throwing out the slim selection of good players with the bad ones; and, let's face it, someone's gotta provide money to keep the lights on at your favorite website.

Even worse, many ad blockers employ techniques that consume as much (or more) memory and as much (or more) time as the trackers they're blocking -- and Adblock Plus is one of the worst offenders. They'll stop you from seeing the ads, but at the expense of slowing down everything you do on the web.

What you can do

When people ask me how to fix this, I recommend a set of three tools to make their browsing experience better: Firefox Tracking Protection, Ghostery, and (optionally) Privacy Badger. (While I'm focusing on Firefox here, it's worth noting that both Ghostery and Privacy Badger are also available for Chrome.)

1. Turn on Tracking Protection

Firefox Tracking Protection is automatically activated in recent versions of Firefox whenever you enter "Private Browsing" mode, but you can also turn it on manually so that it runs all the time. If you go to the URL bar and type in "about:config", you'll get into the advanced configuration settings for Firefox (you may have to agree to be careful before it lets you in). Search for a setting called "privacy.trackingprotection.enabled", and double-click the entry to toggle its value from "false" to "true". Once you do that, Tracking Protection will stay on regardless of whether you're in Private Browsing mode.
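
If you manage your own profile settings (or several machines), the same thing can be done in a user.js file in your Firefox profile directory; this is just the file-based equivalent of flipping the about:config switch above:

// user.js -- place in your Firefox profile directory.
// Keeps Tracking Protection on even outside of Private Browsing.
user_pref("privacy.trackingprotection.enabled", true);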

Firefox Tracking Protection uses a curated list of sites that are known to track you and known to ignore the "Do Not Track" setting. Basically, it's a list of known bad actors. And a study of web page load times determined that just turning it on reduces load times by a median of 44%.

2. Install and Configure Ghostery

There's also an add-on that works similarly to Tracking Protection, called Ghostery. Install it from addons.mozilla.org, and then go into its configuration (type "about:addons" into your URL bar, and select the "Preferences" button next to Ghostery). Now, scroll down to "blocking options," near the bottom of the page. Under the "Trackers" tab, click "select all." Then, uncheck the "widgets" category. (Widgets can be used to track you, but they also frequently provide useful functions for a web page: they're a mixed bag, but I find that their utility outweighs their cost.)

Ghostery also uses a curated list, but it's far more aggressive about what it considers to be tracking. It also gives you fine-grained control over what you block, and lets you easily whitelist sites if you find that they're not working quite right with all the potential trackers removed.

Poke around at the other options in there, too. It's really a power user's tracker blocker.

3. Optionally, Install Privacy Badger

Unlike Tracking Protection and Ghostery, Privacy Badger isn't based on a curated list of known trackers. Instead, it's a tool that watches what web pages do. When it sees behavior that could be used to track users across multiple sites, it blocks that behavior from ever happening again. So, instead of knowing ahead of time what to block, it learns what to block. In other words, it picks up where the other two tools leave off.
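
To make "it learns what to block" concrete, here's a rough sketch of this style of heuristic in simplified JavaScript. To be clear: this is my own illustration, not Privacy Badger's actual code, and the names and threshold are invented for the example.

// Illustrative sketch of a learning tracker-blocker heuristic.
// Not Privacy Badger's real code; names and threshold are invented.
var BLOCK_THRESHOLD = 3;  // start blocking once tracking is seen on 3 sites
var sitesSeenBy = {};     // third-party domain -> first-party sites seen
var blocked = {};         // domains we've learned to block

function observeRequest(firstPartySite, thirdPartyDomain, setsTrackingState) {
  if (!setsTrackingState) {
    return;  // only requests that set cookies or the like count as tracking
  }
  var sites = sitesSeenBy[thirdPartyDomain] ||
              (sitesSeenBy[thirdPartyDomain] = {});
  sites[firstPartySite] = true;
  if (Object.keys(sites).length >= BLOCK_THRESHOLD) {
    blocked[thirdPartyDomain] = true;  // learned: it tracks across sites
  }
}

function shouldBlock(thirdPartyDomain) {
  return !!blocked[thirdPartyDomain];
}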

This sounds really good on paper, and does work pretty well in practice. I ran with Privacy Badger turned on for about a month, with mostly good results. Unfortunately, its "learning" can be a bit aggressive, and I found that it broke sites far more frequently than Ghostery. So here's the trade-off: if you run Privacy Badger, you'll have much better protection against tracking, but you'll also have to be alert to the kinds of defects it can introduce, and turn it off when it interferes with what you're trying to do. Personally, I turned it off a few months ago and haven't bothered to reactivate it yet; but I'll be checking back periodically to see if they've tuned their algorithms (and their yellow-list) to be more user-friendly.

If you're interested in giving it a spin, you can download Privacy Badger from the addons.mozilla.org website.

Monday, August 12, 2013

Web Apps as the new Lingua Franca

I've recently been involved in some conversations around the "lock-in" created by mobile development platforms: once you start working on (for example) an Android app, it's going to be an Android app. Deploying it on another platform generally means that you have a lot of porting work to do, and multiple codebases to maintain.

I think that's yesterday's model. I think the model for the future is "web apps," by which I mean applications developed using HTML5, JavaScript, and the various APIs coming out of the W3C.

An objection I've heard raised here -- actually, two different objections, but with the same root -- is that (1) people don't want to open their web browser and surf to a web page to run their programs, and (2) the mobile web browsing experience is not terribly convenient.

What's critical to realize is that "web apps" aren't necessarily bound to the experience of "opening up a web browser as an explicit action, navigating to an application's web site, and then interacting with it as if it were a web page." Web-apps-as-first-class-apps is a viable model for development and deployment. And there is a team of people inside Mozilla who are working really hard to make this work as seamlessly as possible.

Both FirefoxOS and BlackBerry 10 inherently treat HTML5/JavaScript applications as "first class" apps. In FirefoxOS, it's the sole means of developing applications; BlackBerry 10 offers it as one of several native development environments. In both cases, the user's experience is indistinguishable from that of any other app: there isn't even an intermediating browser anywhere in the mix. The HTML5 and JavaScript engines are an inbuilt part of both operating systems.

I recognize that neither platform has the penetration to be a real game changer; at least, not at the moment. I'm holding them up as existence proofs that the approach of "web app as native app" is viable. But let's talk about how these technologies can be leveraged for native webapp deployment on Android, iOS, and Desktop systems.

All of these systems have their own idea of "installed applications." To be successful, we need some way to make these HTML5/JavaScript applications look, act, and feel like every other kind of application (rather than "something running in a web browser").

This is where the work that Mozilla has been doing for web app deployment comes into play, and it's easier to demonstrate than it is to explain. If you have Firefox installed on your desktop (or Firefox Aurora on your Android device), you can go to https://marketplace.firefox.com/, find apps that amuse you, and click on "install." At that point, Firefox takes the steps necessary to put an icon in whatever location you're used to finding your applications. On your Android device, you get an application icon. Under OS X, you get a new application in Launchpad. For Windows, you get... well, whatever your version of Windows uses to access installed apps.
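
Under the covers, that "install" button is driven by a JavaScript API that any page can call. Here's a minimal sketch of what an install looks like from a web page's point of view (the manifest URL is hypothetical; a real app serves a small JSON manifest describing its name, icons, and launch path):

// Sketch: installing an Open Web App from page JavaScript.
// The manifest URL below is made up for illustration.
var request = navigator.mozApps.install("https://example.com/myapp/manifest.webapp");
request.onsuccess = function () {
  console.log("Installed: " + request.result.manifest.name);
};
request.onerror = function () {
  console.log("Install failed: " + request.error.name);
};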

From that point forward, the distinction between these web apps and native applications is impossible for users to discern.

Interestingly, iOS was both a bit of a pioneer in this space, and also remains the most frustrating platform in this regard. If you recall Apple's stance when the original iPhone first launched, they didn't allow third-party applications at all. Instead, their story was that HTML and JavaScript were powerful enough to develop whatever applications people wanted to write. Presumably as part of this legacy, iOS still allows users to manually save "bookmarks" to their home screen. If the content behind those bookmarks is iOS-aware, then the bookmarks open in a way that hides the browser controls -- effectively acting like a native app. If you have an iOS device, you can play around with how compelling this experience can be by going to http://slashdot.org/ -- select the box-and-arrow icon at the bottom of your browser, and click "Add to Home Screen." Now, whenever you open the newly-created icon, it acts and feels exactly as if you had downloaded and installed a native "Slashdot" application onto your device. It even appears in the task management bar as a separate application.
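
The "iOS-aware" part, incidentally, amounts to little more than a few tags in the page's HTML. Something like the following (the icon path is illustrative) is what tells iOS to hide the browser chrome and treat the bookmark like an app:

<!-- Hide browser chrome when launched from a home screen icon -->
<meta name="apple-mobile-web-app-capable" content="yes">
<!-- Control the status bar appearance in that mode -->
<meta name="apple-mobile-web-app-status-bar-style" content="black">
<!-- Icon for the home screen bookmark (path is illustrative) -->
<link rel="apple-touch-icon" href="/apple-touch-icon.png">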

The reason the iOS experience remains frustrating is twofold. First -- and primarily an annoyance to developers -- the only HTML5/JavaScript rendering engine allowed on the platform is Apple's own WebKit implementation. This frustrates the ability to add new APIs, such as may be required for new applications; new APIs arrive only at whatever pace Apple's product plan dictates. Second -- and of actual concern to users -- there is no programmatic way to add web app icons to the home screen. Creating these icons requires a relatively arcane series of user steps (described above). Basically, Apple is playing the role of the napping hare in this tortoise-and-hare race: they were very fast out of the gate, but have been asleep for quite a while now.

But that covers all the bases: it's possible to develop HTML5/JavaScript apps that truly run cross-platform on iOS, Android, BlackBerry, FirefoxOS, and a variety of desktop systems -- even if the Apple experience is somewhat less than satisfying (for both developers and users).

I suspect that what's holding developers back is predominantly a lack of awareness that this is a viable technical model. I've also heard developer feedback that one major reason for not targeting JavaScript environments involves concerns about speed of execution. It's not yet clear whether these concerns reflect perception or reality. I've seen some impressive things -- usable cryptography, high-def video codec decoding, and high-performance first-person 3D environments (including viable multiplayer first-person shooters) -- implemented in pure JavaScript, so I suspect it's more a perception problem based on very outdated experiences. I'd love feedback from anyone who has run into performance problems trying to use this model with a modern JavaScript interpreter.

Friday, August 2, 2013

It's official: No SDES in the browsers.

I wanted to follow up on my earlier post regarding WebRTC security. In that post, I mentioned that RTCWEB was contemplating what role, if any, SDES would play in the overall specification.

At yesterday's RTCWEB meeting, we concluded what had been a rather protracted discussion on this topic, with a wide variety of technical heavy hitters coming in to express opinions. The result, as recorded in the working group minutes, is:

Question: Do we think the spec should say MUST implement SDES,
MAY implement SDES, or MUST NOT implement SDES. (Current doc
says MUST do dtls-srtp)

Substantially more support for MUST NOT than MAY or MUST

**** Consensus: MUST NOT implement SDES in the browser.

So, what does this mean? The short version is that we just avoided a very real threat to WebRTC's security model.

I'd like to put a finer point on that. There was a lot of analysis during the conversation around the difference in difficulty of mounting an active attack versus a passive attack, the ability to detect and audit possible interception, and a variety of other topics that would each require their own article (although I think Eric Rescorla's slides from that session do a good job of summarizing the issues rather concisely). Rather than go into that level of detail, I want to share a brilliant layman's description of SDES by way of an anecdote.

One of the RTCWEB working group participants brought along a companion to Berlin for this week's meeting. She sat in on the RTCWEB encryption discussion. When she asked what the lively conversation was all about, the working group participant directed her to the relevant portions of the SDES RFC.

Yesterday evening, when we were talking about the day's events, she summarized her impression of SDES based on this lay reading: "It's like having the most state-of-the-art safe in the world, putting all of your gold inside, and then writing the access code on the outside."

Friday, June 7, 2013

WebRTC: Security and Confidentiality


One of the interesting aspects of WebRTC is that it has encryption baked right into it; there's actually no way to send unencrypted media using a WebRTC implementation. The developing specifications currently use DTLS-SRTP keying[1], and that's what both Chrome and Firefox implement. The general idea is that a Diffie-Hellman key exchange takes place in the media channel, without the web site -- or even the JavaScript implementation -- being involved at all.
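
You can see this for yourself from JavaScript: the session description the browser hands back contains only a certificate fingerprint (used to authenticate the DTLS handshake), never the SRTP keys themselves. Here's a quick sketch using Chrome's prefixed API of the day (Firefox uses mozRTCPeerConnection, and keys with DTLS-SRTP by default):

// Sketch: generate an offer and look for the DTLS fingerprint in the SDP.
var pc = new webkitRTCPeerConnection(null,
    {optional: [{DtlsSrtpKeyAgreement: true}]});  // ask Chrome for DTLS-SRTP
pc.createOffer(function (offer) {
  // The SDP carries a fingerprint attribute, not key material:
  var line = offer.sdp.match(/^a=fingerprint:.*$/m);
  console.log(line ? line[0] : "no fingerprint line found");
}, function (err) {
  console.log("createOffer failed: " + err);
}, {mandatory: {OfferToReceiveAudio: true}});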

But who are you talking to?


This is only part of the story, though. While encryption is one of the tools necessary to prevent eavesdropping by other parties, it is in no way sufficient. Unless you have some way to demonstrate that the other end of your encrypted connection is under the control of the person you're talking to, you could easily be sending your media to a server that is capable of sharing your conversation with an arbitrary third party. One important tool to help with this problem is the WebRTC identity work currently underway in the W3C. This isn't ready in any implementations that I'm aware of yet, but it's definitely something that needs to happen before we consider WebRTC done.

The general idea behind the identity work is that, as part of key exchange, you also get enough information to prove, in a cryptographically verifiable way, that the other end of the connection is who you think it is. Of course, there are still some tricky aspects to this (you have to, for example, trust Google not to sign off on someone other than me being "adam.roach@gmail.com"[2]), but you can at least reduce the problem from trusting one party (the website hosting the WebRTC application) to trusting that two parties (the website and the identity provider) won't collude.

The other tool necessary to ensure the confidentiality of contents is making sure that the media isn't being copied by the JavaScript itself and sent to an alternate destination. This isn't part of any current specification, but we're working on adding a standardized mechanism that will allow specific user media streams to be limited so that they can only be sent, over an encrypted channel, to a specified identity (and nowhere else).

On top of this, web browser developers have a very difficult task in presenting all of this to users in a way they can actually use. The nuances between (and implications of) "this is encrypted, but we can't prove who you're talking to" versus "this is being encrypted and sent directly to adam.roach@gmail.com (at least if you trust Google)" are very subtle. Rendering this for users is a thorny challenge, and one that's going to take time to get right.

And who knows who you are talking to?


Of course, none of this is perfect. The recent Verizon brouhaha is about a database of who is communicating with whom (known in the communications interception community as a "pen register"), not actual listening in on phone calls. It uses telephone numbers as identifiers, which are pretty easy to correlate to an owner. WebRTC can't prevent this kind of information from being collected; the collection would simply involve IP addresses where Verizon uses phone numbers. IP addresses aren't much harder to correlate to people than phone numbers are, as has been demonstrated by numerous MPAA and RIAA lawsuits.

Even with a good encryption story, WebRTC has no inbuilt defense against this kind of collection. Anyone with access to the session description is going to be able to see the IP addresses of both parties to the conversation; and, of course, the website is going to know where the HTTP requests came from. Beyond that, your ISP (and every backbone provider between you and the other end of the call) can easily see which IP addresses you're sending information to, and picking media streams out (even encrypted ones) is a trivial exercise for the kinds of equipment ISPs and backbone providers deploy.

The problem is that it's fundamentally difficult to mask who is talking to whom on a network. There are approaches, such as anonymizers and Onion Routers, that can be used to make it more difficult to ascertain; but such approaches have their own weaknesses, and most simply shift trust around from one third party to another.

In summary, WebRTC is taking steps to allow the contents of communication to remain confidential, but it takes a concerted effort by application developers to bring the right tools together. The less tractable problem of masking who talks to whom is left out of scope.

____
[1] There's been recent talk in the IETF RTCWEB working group of adding Security Descriptions (SDES) as an alternate means of key exchange. SDES uses the signaling channel to send the media encryption keys from one end of the connection to the other. This would necessarily allow the web site to access the encryption keys. This means that they (or anyone they handed the keys off to) could decrypt the media, if they have access to it. In terms of stopping some random hacker in the same hotel as you from listening in while you talk to your bank, it's still reasonably effective; in the context of programs like PRISM, or even the pervasive collection of personal data by major internet website operators, it's about as much protection as using tissue paper to stop a bullet.
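
For the curious: with SDES, the key material is carried as a plain SDP attribute in the signaling itself. A crypto line looks something like this (the key value here is made up, in the format defined by RFC 4568):

a=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:PS1uQCVeeCFCanVmcjkpPywjNWhcYD0mXXtxaVBR|2^20|1:32

Compare that with DTLS-SRTP's "a=fingerprint" attribute, which carries only a hash of a certificate: with SDES, anyone who can read the signaling can read the key.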

[2] Whether you choose to do so mostly comes down to whether you trust this blog entry more than this slide.

Thursday, May 16, 2013

Google Hangouts testing WebRTC-based, pluginless implementation?

A sharp-eyed Toby Allen recently brought the following code to my attention:

Qg.prototype.init=function(a,b,c,d){this.ca("pi1");var e=window.location.href.match(/.*[?&]mods=([^&]*).*/);if(e=(e==m||2>e.length?0:/\bpluginless\b/.test(e[1]))||Z(S.Xd)){t:{var e=new Ad(Uc(this.e.l).location),f;f=e.K.get("lantern");if(f!=m&&(f=Number(f),Ka(Og,f))){e=f;break t}!Fc||!(0<=ta(dd,26))||webkitRTCPeerConnection==m?e=-1:(Pg.da()?(f=Pg.get("mloo"),f=f!=m&&"true"==f):f=q,e=f?-3:0==e.hb.lastIndexOf("/hangouts/_/present",0)?-4:1)}e=1==e}e?Rg(this,q):Sg(this,a,b,c,d)};

That's an excerpt from the Google Hangouts JavaScript code. It's a bit obfuscated (either by design or, more likely, because it's the output of a minification tool), and I haven't taken the time to fully dissect it. But the gist of the code appears to be to test for the presence of a "mods=pluginless" string in the URL and, if one is present, to check whether the browser supports WebRTC's RTCPeerConnection API (or, at least, Google's prefixed version of it). It then looks like it calls one of two different initialization functions based on whether such support is present.
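
For readability, here's my rough, hand-reconstructed interpretation of what that check appears to do. All of the names are my own invention, and this is a guess at intent, not Google's actual source:

// Speculative de-obfuscation of the excerpt above; names invented.
function shouldUsePluginlessPath() {
  // Look for "mods=pluginless" in the page URL's query string.
  var mods = window.location.href.match(/.*[?&]mods=([^&]*).*/);
  if (!(mods && /\bpluginless\b/.test(mods[1]))) {
    return false;  // pluginless mode not requested
  }
  // Only proceed if the browser exposes Chrome's prefixed API.
  if (typeof webkitRTCPeerConnection === "undefined") {
    return false;
  }
  return true;
}
// The obfuscated code then appears to call one of two different
// initialization functions based on the result of this test.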

Alas, with this preliminary analysis, I couldn't get Hangouts to do anything that looked plugin-free, even on a recent copy of Chrome Canary. But it's pretty clear that the Hangouts team has started playing around with a WebRTC-based implementation.

The downside is that they're checking for Chrome's specific prefixed version of RTCPeerConnection rather than attempting to use a polyfill like most WebRTC demos on the web nowadays. So it appears that this functionality, when it's deployed, is most likely going to be Chrome-only -- at least, initially.
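
For comparison, the usual cross-browser shim of the era looks something like this (a generic sketch, not anything from the Hangouts code), and would pick up Firefox's implementation as well:

// Generic prefix shim used by many WebRTC demos of the time.
var PeerConnection = window.RTCPeerConnection ||
                     window.webkitRTCPeerConnection ||  // Chrome
                     window.mozRTCPeerConnection;       // Firefox
var pc = new PeerConnection(null);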