Three Forms of Bad Analysis (part 3): Law
AEIdeas
December 31, 2013
My two previous posts have dealt with bad engineering analysis and bad economic analysis. This post on bad legal analysis concludes the trilogy, and it does so by examining net neutrality regulations: rules that are out of date in terms of both technological and marketplace realities.
It’s useful to study the original arguments for net neutrality, especially now that we await an imminent decision from the D.C. Circuit on the FCC’s Open Internet rules. The seminal texts are Tim Wu’s paper Network Neutrality, Broadband Discrimination (Journal on Telecommunications and High Technology Law, Vol. 2, 2003) and an FCC filing by Wu and his mentor Larry Lessig, Ex Parte Submission in CS Docket No. 02-52, dated August 22, 2003.
Wu and Lessig develop an argument for “neutral” network treatment of data in some spheres but not in others. Wu in particular recognizes that neutrality is actually a very subtle notion, even in the Internet context. The Internet protocols, TCP and IP, are famously “indifferent” to application requirements from above TCP/IP (in the protocol stack architecture) and are also indifferent to network capabilities from below. For example, real-time applications such as voice require a low-delay transport service, which is designed into DSL, DOCSIS, Wi-Fi, and Ethernet, but TCP/IP prevents the application from requesting a particular transport service from the network. TCP/IP isn’t actually “neutral” in this respect; it’s more properly deemed “arrogant,” as it adopts the policy that all applications and all networks are one-dimensional and the same.
Wu admits that net neutrality is a “belief system”:
The argument for network neutrality must be understood as a concrete expression of a system of belief about innovation, one that has gained significant popularity over the last two decades.
So one question that has to be asked is whether there is any concrete evidence that this belief system is based on reality. The best Wu can offer is an elaboration of net neutrality theory:
A communications network like the Internet can be seen as a platform for a competition among application developers. Email, the web, and streaming applications are in a battle for the attention and interest of end-users. It is therefore important that the platform be neutral to ensure the competition remains meritocratic.
Clearly, this elaboration simply repeats the belief that a “neutral” network facilitates competition; it doesn’t explain how this belief can possibly be true, or even what is meant by the word “neutral.” Wu admits this as well:
…the concept of network neutrality is not as simple as some IP partisans have suggested. Neutrality, as a concept, is finicky, and depends entirely on what set of subjects you choose to be neutral among.
So we can’t talk about neutrality without raising the question: “neutral with respect to what?” In the Internet’s early days, applications were similar in terms of their transport requirements, but now they aren’t:
As the universe of applications has grown, the original conception of IP neutrality has dated: for IP was only neutral among data applications. Internet networks tend to favor, as a class, applications insensitive to latency (delay) or jitter (signal distortion). Consider that it doesn’t matter whether an email arrives now or a few milliseconds later. But it certainly matters for applications that want to carry voice or video. In a universe of applications, that includes both latency-sensitive and insensitive applications, it is difficult to regard the IP suite as truly neutral as among all applications.
While the Internet was once neutral for all practical purposes, by the time network neutrality became a policy prescription this neutrality had ceased to exist. When all applications have the same needs from the network, indifference guarantees neutrality, but when they don’t, it doesn’t. In fact, there’s no coherent and universally agreed-upon way to conceptualize neutrality in a network of diverse applications; the best we can do is debate the pros and cons of various methods.
In today’s world, IP indifference is inherently non-neutral:
The technical reason IP favors data applications is that it lacks any universal mechanism to offer a quality of service (QoS) guarantee. It doesn’t insist that data arrive at any time or place. Instead, IP generally adopts a “best-effort” approach: it says, deliver the packets as fast as you can, which over a typical end-to-end connection may range from a basic 56K connection at the ends, to the precisely timed gigabits of bandwidth available on backbone SONET links. IP doesn’t care: it runs over everything. But as a consequence, it implicitly disfavors applications that do care.
Internet history suggests that problems are never solved until they become critical. The Internet doesn’t have an inherent QoS mechanism because early applications didn’t require one, just as it didn’t have a congestion management mechanism until ARPANET was bypassed and the Internet suffered congestion collapse.
The Internet is now in a position where the migration of smartphones to IP requires a general-purpose QoS mechanism to support telephone calling over IP. The technical barrier can be overcome, but the insistence on a “neutrality” mandate (which is more properly an indifference mandate than a true neutrality rule with respect to the broader universe of network applications) has itself become a barrier. It comes as no surprise that the firms supporting the indifference mandate are those whose businesses run on web servers, a classically “data-intensive” application.
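To make the technical point concrete: IP packets do carry a per-packet DiffServ field that lets an application ask for priority treatment; what the protocol lacks is any guarantee that networks along the path will honor the request. The minimal sketch below (Python on a Linux-style socket API; the DSCP value and socket options are standard, but the framing is mine, not drawn from Wu’s paper) marks a UDP socket with the “Expedited Forwarding” code point conventionally used for voice:

```python
import socket

# DSCP "Expedited Forwarding" (EF, code point 46, RFC 3246) is the
# conventional marking for latency-sensitive traffic such as voice.
# The DSCP occupies the upper six bits of the old IP TOS byte.
DSCP_EF = 46
tos = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# The mark is now applied to outgoing packets, but that is all it is:
# a request. Any router along the path is free to ignore or rewrite
# it, which is why marking alone is not a QoS guarantee.
print(hex(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))
sock.close()
```

This is exactly the gap the post describes: the request mechanism exists at the edge, but without cooperation from the networks in between (the “close vertical relationship” Wu discusses below), it delivers no end-to-end service guarantee.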
Wu’s paper makes a strong case that “open access” is not a productive measure for today’s Internet, and seeks to replace it with net neutrality. Yet Wu develops a rationale for net neutrality that runs counter to his own evidence. He stipulates that an indifferent TCP/IP protocol is non-neutral with respect to networks, but insists that it can be viewed as neutral with respect to applications. This distinction isn’t logically consistent, because the different service grades designed into networks exist precisely to support different applications. Wu has something else in mind: he posits that implementing network-wide QoS requires either re-engineering the Internet or tolerating unacceptably close relationships between application designers and network operators:
True application neutrality may, in fact, sometimes require a close vertical relationship between a broadband operator and Internet service provider. The reason is that the operator is ultimately the gatekeeper of quality of service for a given user, because only the broadband operator is in a position to offer service guarantees that extend to the end-user’s computer (or network). Delivering the full possible range of applications either requires an impracticable upgrade of the entire network, or some tolerance of close vertical relationships.
This point indicts a strict open-access requirement. To the extent open access regulation prevents broadband operators from architectural cooperation with ISPs for the purpose of providing QoS dependent applications, it could hurt the cause of network neutrality.
Wu is seeking to thread a needle here, preserving the possibility of providing QoS-dependent applications despite his belief (incorrect, in my view) that QoS requires “a close vertical relationship between a broadband operator and Internet service provider.” In today’s world, “broadband operators” and ISPs are the same firms, but under the open access regime that net neutrality replaced, the distinction was meaningful. Wu rejects the “open access” model in favor of his nuanced form of net neutrality, which he terms “broadband discrimination.” This is actually an advance over the blunt-instrument open access approach that separates basic broadband from Internet service. The distinction between a neutral Internet and a service-differentiated broadband network ultimately proves too subtle, however.
In effect, Wu’s network neutrality regime gives broadband service providers a triple-play monopoly by preventing the sale of QoS at the gateway between the broadband network and the Internet as a whole:
Hence, the general principle can be stated as follows: absent evidence of harm to the local network or the interests of other users, broadband carriers should not discriminate in how they treat traffic on their broadband network on the basis of inter-network criteria.
Broadband service providers “discriminate”within their own networks in order to provision QoS to sell TV and voice services alongside Internet access, but Wu wants to ban the sale of such “discrimination”to external service providers. Effectively, this regime grants a triple play monopoly to broadband providers in order to ensure non-discriminatory access to the Internet. This is the Kingsbury Commitment – a trade by the Justice Department that granted AT&T a telephone monopoly in return for a universal service commitment – in different clothes.
Kingsbury turned out to be bad law because it prevented the uptake of new technology in communication networks. Net neutrality is bad law for the same reason: it seeks to preserve the Internet status quo of the 1980s and 1990s long after the rationale for that status quo has ceased to exist. We can now build networks that support diverse applications without imposing monopoly conditions and without hobbling technical advances. Only the law stands in the way of this progress.
Wu’s analysis ultimately falls back on telecom law developed to regulate monopoly networks. His key criteria – discrimination, private benefits without public harm, and foreign attachments – hail from telephone law, as he admits:
Its origins are found in the Hush-a-Phone case, where the FCC ordered Bell to allow telephone customers to attach devices that “[do] not injure . . . the public in its use of [Bell’s] services, or impair the operation of the telephone system.”
The giveaway here is the attempt to envelop the Internet in a body of law devised for the technology and marketplace realities of a different and bygone era. Broadband networks and the broadband marketplace exhibit utterly different dynamics, and policy makers must recognize them if they’re not to strangle these networks in the crib. Law has a built-in bias toward precedent, but we have to be aware of unprecedented realities when we find them.
Conclusion
Good technology policy depends on the competent application of engineering, economics, and law to technology markets. Policy can go astray when errors of analysis are made on any of these three fronts, so it’s important for policy makers to be well informed across the board and not to privilege any one form of analysis over the other two. It isn’t necessary for every policy maker to be skilled in the arts of engineering, economics, and law, but it is helpful for them to possess the ability to recognize bad analysis where they find it.
This post was originally published on TechPolicyDaily.