Monday, December 10, 2012

WCIT, Neutrality, OTT-Telco & "sustainable" Internet business models

I have a general dislike of the word "sustainability". To me, it evokes an image of grey, uninspiring perpetuity, devoid of imagination or change. It is the last resort of the conservative, controlling and unambitious.

In its negative variant, "unsustainable", it is often a cloak for unpalatable ideas, typically used when trying to justify limits on freedom for reasons of ideology or partisan commercialism.

Frequently, advocates of "sustainability" overlook or deliberately ignore the pace of technical innovation, and the ingenuity of humans in overcoming or working around problems. It is often misanthropic and miserablist, with arguments riddled with logical fallacies - especially straw men.

I'm not going to discuss environmental or socioeconomic themes on this blog. But the same rhetoric of "sustainability" is now often applied to the technology industry as well, and particularly to the Internet itself.

Whenever you hear someone in the telecoms industry talk about a "sustainable business model", it should ring alarm bells. It means that they have gripes with the status quo - either because of ideological preconceptions, or because they feel they deserve long-term excessive rents without innovation and hard work.

Typically, "sustainability" arguments are more about the ends - changing the industry for its own sake - rather than the means: fixing a specific and immediate problem.


Dubai - hardly a beacon of sustainability

[Note: this post is being written while the WCIT-12 conference is midway through.]

I've already called out the ridiculous ETNO proposals for the current ITU WCIT conference in Dubai, and the earlier "telcowash" nonsense published by A.T. Kearney about a so-called sustainable model for the Internet.

The ITU itself is trying to justify its intrusive stance on DPI and traffic management by citing the usual spurious themes about "data deluge", when it is already apparent that sensible pricing is a very easy & effective way to limit traffic and change user behaviour, if needed. Some of 3GPP's work on the PCC architecture is similarly self-serving.

Various ITU proposals use the QoS/sender-pays argument to suggest (falsely) that this is the way to fund Internet build-out in developing countries. More likely, a contractual form of peering/interconnection for the Internet would stifle or kill its development in those markets, and with it kill the promising young software and app ecosystems that are springing up. As the OECD points out, the interconnection of the Internet works so well because there are no contracts, let alone complex settlement fees and systems for QoS and "cascading payments", like the legacy & broken international telecoms model.

It is perhaps unsurprising that the telecoms industry and its vendors keep flogging the dead horse of application-specific charging for the public Internet. As regular readers know, I am an advocate of separating the (neutral-as-possible) Internet from other broadband services which telcos should be free to manage any way they see fit (competition laws & consumer protection notwithstanding).

Let's face it - ITU and Dubai has nothing to do with "sustainability" of the Internet; it's about rent-seeking telcos (and maybe governments) wanting to find a way to get money out of Google and/or wealthier countries, in ways that don't involve innovation and hard work. And for certain countries, if the next Twitter or other "democratising" tool doesn't get invented because the Internet ecosystem breaks down - well, so much the better.

(It's also amusingly hypocritical to talk about "sustainability" from a vantage point in Dubai, given that the UAE has one of the highest levels of per-capita energy consumption on the planet.)


The sustainability disease is infectious

However, other more respected - and more thoughtful - people are also thinking along similar lines about so-called network "sustainability". My colleague Martin Geddes and I have near-total disagreement on whether Internet Neutrality is either desirable or possible. He is working with a bunch of maths wizards from a company called PNSol to define a new theory of how networks can (and should) work, and how they could be managed for optimal efficiency.

Core to his thesis is that networks should be "polyservice" - basically code for managing different flows with different "quality" requirements, according to which are the most latency-sensitive. Although the understanding of IP's limitations and the optimisation algorithms are different (and probably much better), this is not conceptually far from the differentiated-QoS network pitch we've all heard 100 times before. The story is that we can't carry on scaling and over-provisioning networks the way we have in the past, because the costs will go up exponentially. (More on that below.)
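
To make the "polyservice" idea concrete, here's a deliberately toy sketch - my own simplification, emphatically not PNSol's actual maths - of class-based scheduling: each packet carries a quality class, and the most latency-sensitive class is served first.

    # Toy class-based scheduler (illustrative only - real "polyservice"
    # schemes are far more sophisticated than strict priority).
    import heapq

    CLASS_PRIORITY = {"voice": 0, "video": 1, "web": 2, "bulk": 3}  # hypothetical classes

    class PolyserviceQueue:
        def __init__(self):
            self._heap = []
            self._seq = 0  # tie-breaker: FIFO order within a class

        def enqueue(self, packet, quality_class):
            heapq.heappush(self._heap, (CLASS_PRIORITY[quality_class], self._seq, packet))
            self._seq += 1

        def dequeue(self):
            return heapq.heappop(self._heap)[2]

    q = PolyserviceQueue()
    q.enqueue("backup-chunk", "bulk")
    q.enqueue("voip-frame", "voice")
    print(q.dequeue())  # -> "voip-frame": latency-sensitive traffic jumps the queue

The familiar objection, of course, is that strict priority lets the top class starve everything else - which is exactly why such schemes need careful policing, and why I worry about applying them to the public Internet.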

His observations that current networks can clog up because of weird resonance effects, irrespective of the theoretical "bandwidth", are quite compelling. However, I'm not convinced that the best way to fix the problem - or mitigate the effects - is to completely re-engineer how we build, manage and sell all networks, especially the Public Internet portion of broadband.
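
The textbook version of that clogging effect is worth spelling out. In even the simplest queueing model (M/M/1 - my illustration, not PNSol's far subtler analysis), mean delay diverges as utilisation approaches 100%, whatever the headline bandwidth:

    # M/M/1 queue: mean time in system W = 1 / (mu - lambda).
    # Delay explodes near saturation regardless of raw link speed.
    mu = 1000.0  # service rate, packets/sec (hypothetical link)
    for utilisation in (0.5, 0.8, 0.9, 0.95, 0.99):
        lam = utilisation * mu           # arrival rate
        delay_ms = 1000.0 / (mu - lam)   # mean delay in milliseconds
        print(f"{utilisation:.0%} load -> {delay_ms:6.1f} ms")
    # 50% -> 2 ms; 99% -> 100 ms. A 10x faster link shifts the curve,
    # but the blow-up near full utilisation remains.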

I'm also not convinced by the view that current Internet economics are "unsustainable". There are enough networks around the world happily running at loads much higher than some of those he cites as problematic, which suggests that company-specific issues may be greater than systemic ones.

While it's quite possible that he's right about some underlying issues (certainly, problems like buffer-bloat are widely accepted), his view risks the same "moral hazard" that the ITU's sender-pays proposals do: it might have the unintended consequence of breaking the current "generativity" and innovation from the Internet ecosystem.

I think that putting systematised network "efficiency" on a pedestal in search of sustainability is extremely dangerous. It's a mathematician's dream, but it could be a nightmare in practical terms, with a potential "welfare" cost to humanity of trillions of dollars. The way the current Internet works is not broken (otherwise hundreds of millions of people wouldn't keep signing up, or paying money for it), so it's important not to fix it unnecessarily.

Now, one area where Martin & I agree is on observation and root-cause analysis of problems. By all means watch Internet performance trends more carefully, with more variables. And we should be prepared to "fix it" in future, if we see signs of impending catastrophe. But those fixes should be "de minimis" and, critically, they should - if at all possible - allow the current hugely-beneficial market structure to endure with minimal changes. Trying to impose a new and unproven non-neutral layer on today's Internet access services is a premature and dangerous suggestion.

[Environmental analogy: I believe that current theories of anthropogenic climate change are mostly correct, although better modelling and scrutiny, and continued confrontational science, are needed. However, unlike some lobbying groups who see this as an opportunity to change the world's social and political systems, I'd prefer to see solutions that work within today's frameworks. We need to decouple pragmatically fixing the problem - clean energy etc - from a more contentious debate about consumerism, globalisation, capitalism and so on. We shouldn't allow extremists - environmental or telco - to exploit a technology problem by foisting an unwanted ideology upon us.]

It may be the case, however, that the way we do broadband is more broken, in either the fixed or mobile arenas. (Note: broadband is more than just Internet access, although we often use the terms synonymously. However, 90% of the value perceived by residential end-users from broadband today comes from using it for the open, "real" Internet.)

By all means use some sort of polyservice approach like the one PNSol advocates, or another telco vendor's or standards body's preferred QoS mechanism, for overall broadband management. Indeed, this is already done to prioritise an operator's own VoIP and IPTV on domestic ADSL/FTTH connections, and many corporate networks have had various forms of QoS for years.

The key thing is to keep the Internet as segregated from all of that experimentation as possible. Even on shared access media like cable or 3G/4G, there should be strict controls to ensure that the Internet "partition" remains neutral. (Yes, this is difficult to define and do, but it's a worthy goal - perhaps the worthiest.) If necessary, we may even need to run Internet access over totally-separate infrastructure - and it would be worth it. If it clogs up and fails over time, then users will naturally migrate to the more-managed alternatives.
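
For the avoidance of doubt, "partition" here means a guaranteed capacity split, not per-application meddling. A minimal sketch of the concept (my illustration, not any standard's actual mechanism): weighted round-robin between a strictly FIFO Internet share and a managed share, so the neutral partition keeps its slice whatever the operator does elsewhere.

    # Sketch of partitioning a shared link (illustrative only).
    from collections import deque

    internet = deque()  # neutral partition: strictly FIFO inside
    managed = deque()   # operator services (VoIP, IPTV...), any QoS allowed

    WEIGHTS = {"internet": 3, "managed": 1}  # hypothetical 75/25 capacity split

    def serve_one_round():
        """Send up to WEIGHTS[name] packets from each partition per round."""
        sent = []
        for name, queue in (("internet", internet), ("managed", managed)):
            for _ in range(WEIGHTS[name]):
                if queue:
                    sent.append(queue.popleft())
        return sent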

I don't buy the argument that we should reinvent the Internet because some applications work badly on congested networks (eg VoIP and streamed video). My view is that:

  1. Users understand and accept variable quality as the price of the huge choice afforded them by the open Internet. 2.5 billion paying customers can't be wrong.
  2. Most of the time, on decent network connections, stuff works acceptably well
  3. There's a lot that can be done with clever technology such as adaptivity, intelligent post-processing to "guess" about dropped packets, multi-routing and so forth, to mitigate the quality losses (see the sketch after this list)
  4. As humans, we're pretty good at making choices. If VoIP doesn't work temporarily, we can decide to do the call later or send an email instead. Better applications have various forms of fallback mode, either deliberately or accidentally.
  5. Increasingly, we all have multiple access paths to the Internet - cellular, various WiFi connections and so forth. Where we can't get online with enough quality, it's often coverage that's the problem, not capacity anyway.
  6. Anything super-critical can go over separate managed networks rather than the Public Internet, as already happens today
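
On point 3, the adaptivity sketch promised above - a toy bitrate chooser of the kind streaming clients already use (illustrative only; real DASH/HLS players are considerably more elaborate):

    # Toy adaptive-bitrate selection: pick the highest rate the measured
    # throughput can sustain, with headroom, and degrade rather than fail.
    BITRATES_KBPS = [235, 750, 1750, 4300]  # hypothetical encoding ladder

    def choose_bitrate(measured_kbps, margin=0.8):
        usable = measured_kbps * margin  # leave room for throughput swings
        candidates = [b for b in BITRATES_KBPS if b <= usable]
        return max(candidates) if candidates else BITRATES_KBPS[0]

    print(choose_bitrate(2000))  # -> 750: lower quality, but playback continues
    print(choose_bitrate(9000))  # -> 4300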

It may be necessary to have multiple physical network connections, as otherwise we would need to multiplex both the unmanaged Internet and managed polyservice traffic on the access network. But that multi-connection existence is already happening (eg fixed + mobile + WiFi), and is a worthwhile price to pay to avoid risking the generativity of the current monoservice Internet.

By all means introduce new "polyservice" connectivity options. But they need to be segregated from the Public Internet as much as possible - and they should be prohibited by law from using the term "Internet" as a product description.

There is also a spurious argument that current Internet architectures are not "neutral" because things like P2P are throttled by some ISPs, some content is filtered out for legal reasons, and middle-mile accelerators / short-cuts like CDNs exist. That is a straw man, equivalent to saying that the Internet experience varies depending on device, server or browser performance.


Sustainable Internet growth?

But let's go back to the original premise, about sustainability. Much of the argument - both the ITU's and Martin's/the vendors' - pivots on whether current practices are "sustainable" and leave sufficient scope for service-provider profitability, despite usage growth.

A lot of the talk of the supposed non-sustainability of current network expansion is, in my view, self-serving. It suits people in the industry well to complain about margin pressures, capex cycles and so forth, as it allows them to pursue their arguments for relaxing competition law, getting more spectrum, or taxing the way the Internet works. We already see this in some of the more politically-motivated and overcooked forecasts of mobile traffic growth. Few build in an analysis of behavioural change and elasticity as a result of pricing, or of the inevitable falling costs of future technological enhancements. (Although some are better than others.)
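
A back-of-envelope example of why growth forecasts alone prove nothing (the rates below are purely illustrative, not forecasts): if traffic grows 40% a year while cost-per-bit falls 30% a year, total network cost barely moves.

    # Illustrative only: exponential traffic growth does not imply
    # exponential cost growth if cost-per-bit keeps falling.
    traffic, cost_per_bit = 1.0, 1.0
    for year in range(1, 6):
        traffic *= 1.40       # hypothetical 40%/yr traffic growth
        cost_per_bit *= 0.70  # hypothetical 30%/yr unit-cost decline
        print(f"Year {year}: total cost index = {traffic * cost_per_bit:.2f}")
    # 1.40 x 0.70 = 0.98 -> the total cost index actually drifts slightly *down*.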

But I have seen almost no analysis of where the supposed cost bottlenecks are. If there is a "cost explosion" decoupled from revenues, where is the "smoking fuse" that ignites it? Are we missing a step on the price/performance curves for edge routers or core switches? Are we reaching an asymptote in the amount of data we can stuff into fibres before we run out of usable wavelengths of light? Are we hitting quantum effects somewhere? Overheating?

Before we start reinventing the industry, we should first try and work out what is needed to continue with the status quo - why can't we continue to just over-provision networks, if that's worked for the last 20 years?

Now, to be fair, the OECD identifies a gap in R&D in optoelectronic switching, going back to the early 2000s, when we had a glut of fibre and lots of bankrupt vendors in the wake of the dotcom bubble bursting. Maybe we lost an order of magnitude there, which is still filtering down from the network core to the edges?

In mobile, we're bumping up against the Shannon limit on the number of bits we can squeeze into each Hz of spectrum, but we're also pushing towards smaller cells, beam-forming and any number of other clever enhancements that should allow "capacity density" (Gbit/s/km²) to continue scaling.
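
The arithmetic behind that: Shannon caps what one cell can carry in a given slice of spectrum, but nothing caps how many cells you deploy. A sketch with illustrative numbers:

    # Shannon capacity per link: C = B * log2(1 + SNR).
    from math import log2

    bandwidth_hz = 20e6      # e.g. one 20 MHz LTE carrier
    snr = 10 ** (20 / 10)    # 20 dB SNR -> linear factor of 100

    per_cell_bps = bandwidth_hz * log2(1 + snr)
    print(f"Per-cell ceiling: {per_cell_bps / 1e6:.0f} Mbit/s")  # ~133 Mbit/s

    # Per-Hz efficiency is capped, but capacity density scales with cell count:
    for cells_per_km2 in (1, 10, 100):  # macro -> small cells -> dense small cells
        print(f"{cells_per_km2:4d} cells/km2 -> "
              f"{per_cell_bps * cells_per_km2 / 1e9:.2f} Gbit/s/km2")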

I'm pretty sure that the frequency of press releases touting new gigabit- and terabit-scale transmissions, via wired or wireless means, hasn't slowed in recent years.

All things being equal, the clever people making network-processor chips, optical-switching gizmos and routing software will be well aware of any shortfall - and their investors will fully understand the benefit to be reaped from satisfying any unmet demand.

A lack of "sustainability" can only occur when all the various curves flatten off, with no hope of resuscitation. I'm not a core switching or optics specialist, but I'd like to think I would have spotted any signs of panic. Nobody has said "sorry guys, that's all we've got in the tank".


OTT-Telco: Debunking the unsustainability myth?

Most people reading this blog will be very familiar with my work on telecom operators developing OTT-style Internet services. There are now close to 150 #TelcoOTT offerings around the world that Disruptive Analysis tracks, and my landmark report published earlier this year has been read by many of the leading operators & vendors around the world.

The theme of telcos and/vs/doing/battling/partnering so-called OTT providers is well-covered both on this blog and elsewhere (such as this article I guest-wrote for Acme Packet recently).

One element that doesn't get covered much is that various Internet companies are now themselves becoming telcos. Google, notably, has its Kansas City fibre deployment, where it is offering Gbit-speed fibre connections at remarkably low prices ($70 a month). But it's not alone - Facebook is involved in a sub-oceanic Asian fibre consortium, Google is reportedly looking at wireless (again), assorted players have WiFi assets - and, of course, various players have huge data centres and networks of dark fibre.

This trend - OTT players becoming telcos (in the wider sense) - seems inevitable, even if the oft-hyped idea of Apple or Facebook buying a carrier remains improbable. OTT-Telco may eventually become as important as Telco-OTT.

For me, this is where the so-called sustainability issue starts to break down. Firstly - is Google swallowing the costs of the Kansas network and under-pricing it? Or has it debunked the naysayers' cries of Internet "monoservice" armageddon? Given that it makes its own servers, is it also making any of its own networking gear to change the game?

I'm sure Google has already thought about this idea (and I've mentioned it to a couple of Googlers), but I think that it should seriously consider open-sourcing its management accounting spreadsheets. Shining a light on the detailed cost structures of planning, building and running a fibre network (equipment, peering, marketing etc) would make other companies' claims of sustainability/unsustainability of business models more transparent.

While it is quite possible that Google's economics are very similar to its peers around the world, it may also have used its engineering skills, Internet peering relationships - or other non-conventional approaches - to lower the cost base for delivering fast access. It may also have different ways of structuring its cost- and revenue-allocation, outside the legacy silos of other telcos. It may have its own forms of traffic-management / flow-management that minimise the damaging volatility seen on other networks - or it might just be able to over-provision sufficiently cheaply that it's not necessary.

Whatever is happening, the fact that Google (and others, like Hong Kong Broadband before it) can offer gigabit residential broadband suggests that we've got at least one or two orders of magnitude left before the "qualipocalypse" becomes a realistic threat for the public Internet.

In other words, OTT-Telco offers us the chance to prove that what we've got now isn't broken, is "sustainable" and indeed has headroom left. Obviously that isn't yet proven, so close monitoring (and ideally visible & transparent financials) will still be needed.

None of this means that we shouldn't also have non-neutral broadband access products available - and if customers prefer them, that indicates the direction of the future for Internet / non-Internet applications.

But for now, the neutral-as-possible "monoservice" Internet seems not only sustainable, but arguably such a valuable contributor to global development that it should be regarded as a human right. We should vigorously resist all rhetoric about "sustainability", whether from ideologically-inspired governments, lazy/greedy network operators, or evangelical vendors. If and when the Internet's economics do take a nosedive, we should first look for technical solutions that allow us to keep the current structures in place.

We should not allow purely technical issues to bounce us into philosophical shifts about the Internet, whether inspired by ugly politics or elegant mathematics.

2 comments:

scaleyboy said...

Dean, certainly Google might be tempted to apply its Android lessons to fibre build. That would surely mean licensing or franchising a fibre build package. After all it would run into anti-trust problems if it started building out its own fibre cities as a telco in the US, but as with Android it doesn't need to do the business itself, just orchestrate it, with maybe the occasional Nexus-like dabble to keep its hand in.

InfoStack said...

Bill and keep stifles innovation. But the old legacy vertically integrated subsidy and tariff regimes (bilateral settlements) also won't work. What is required is a horizontal model with balanced settlement systems (in the control layers) that efficiently clears supply and demand across networks (vertical boundaries) and application and access layers (horizontal boundaries).

Balanced settlement systems can include both calling and called party pays. The latter (which appears anathema to you) is the basis for centralized procurement (800, VPN, ad-sponsored) that can lead to ubiquitous and free (universal) broadband access.