
Sunday, December 30, 2012

My 2013 Telecom Industry Anti-Forecasts

In the past, I've followed industry-analyst convention in December, and usually put out predictions for the year ahead. (This was last year's set - I reckon I've done pretty well)

This time around, I'm going to focus on what's NOT going to happen in 2013, despite the consensus (or a good chunk of it) saying otherwise. Some of these are standard fodder for regular readers, and others are new ideas. These are things for telecom operator & vendor execs to avoid wasting time, effort and resources on. Either they're not going to happen at all, or everything is going to be later than expected.

So without further ado, these are Disruptive Analysis' Top 10 Telecoms Anti-Forecasts for the coming year...

1) RCSe / RCS5 / Joyn won't gain meaningful user traction...

... and not much more operator support either. Apple won't play ball. Google and Microsoft will be ambivalent at best. There's no clear business model beyond "adding value" to bundles, which depends on the service being valuable in the first place. There's also considerable execution risk that could make it worse than useless. I have seen no convincing argument for how to win back users from WhatsApp, Line, KakaoTalk etc, all of which are better and faster-evolving than RCS. The argument that RCS will be adopted first by the non-OTT-using laggards is ridiculous - basically it suggests "crossing the chasm" backwards (on a unicycle). Markets with little operator control of handset distribution won't get RCS-capable phones in high enough concentrations.

There is more of a case for RCS as an API rather than an app (eg for rich B2C messaging beyond text), but I'm not convinced that's likely to be successful either, although the US might be an exception. Elsewhere, low penetration and a growing number of alternative paths will limit the opportunity. We'll see a few telcos trying out cloud-based or wholesale-based RCS platforms rather than buying their own, which is sensible as it just makes it cheaper (and hopefully faster) to fail. If there are more than 30 live networks or 20 million regular users of RCS in its various guises by December 2013 it'll be a miracle.


2) NFC payments will continue to struggle.

Over-complex value chain (with too many participants contributing no, or negative, value). No support from key players like Apple or PayPal. And above all, no meaningful benefits for the end-user, despite the considerable behaviour change required and numerous risks. As just one minor example, consider the number of times you're doing something else on the phone at exactly the moment you'd be asked to tap it on something. NFC also suffers from almost the same level of irredeemable geekiness that keeps QR codes from being used - it's just not something that a "normal" person would want to be seen doing.

The jury is still out on non-payment applications ("interactions", not "transactions"). The only cool thing I've seen is Blue Butterfly's "tap to connect to WiFi" for hotspots. Unless and until NFC becomes "just another feature and API" for Android and iOS developers to play around with easily and freely, wide adoption and usage just won't happen. It also needs to be completely decoupled from SIM cards as authentication/security tools.


3) Broad adoption of VoLTE won't occur in 2013

This one shouldn't be too contentious, given that even GSMA presentations I've seen envisage 2014 as a target for mass adoption. There are numerous reasons for this, not least that current incarnations of CSFB (circuit-switched fallback) are working OK, VoLTE appears to eat batteries at a serious pace, and the patchy nature of many LTE networks means calls will either drop or need in-call handoff from VoIP to circuit (SR-VCC technology). Oh, and of course Apple doesn't support it, as it needs IMS capability on the device, which they're clearly unwilling to countenance.

More generally, expect tuning and tweaking to take a long while yet. VoIP is hard enough to tame on a gigabit corporate LAN, let alone with mobility and the vagaries of RF thrown in for good measure. The cellular industry doesn't have much experience in dealing with the acoustic complexities of VoIP either - echo cancellation, noise suppression, packet-loss concealment and so forth. Add in trying to get the network to provide some sort of QoS without knock-on effects on other services. Lastly, the conversation with the CFO proposing expensive investments in new infrastructure, for a declining market with cratering prices, might prove challenging. We're past "peak telephony" in many countries, and VoLTE doesn't offer anything to stem the tide. Worse, it might even accelerate the move by consumers away from telephony, if it works worse than a $20 GSM phone.


4) WebRTC won't take over the world in 2013

This might surprise a few people. While I'm hugely enthusiastic about WebRTC's medium-term prospects, I'm a little worried that expectations might get ahead of themselves while the bugs and wrinkles are being ironed out. It's definitely a "fantastic idea" (unlike things like RCS and NFC), but it is going to need careful execution, especially in mobile.

Apple and Microsoft browser support timelines are vague, while putting WebRTC in smartphones has all sorts of UI and power-consumption risks. There will be plenty of early examples where it will make a major impact, but prognostications of it killing either telcos' traditional phone business or the app-based OTT model are premature. Stand by for an upcoming report which will outline the early winning use-cases and longer-term roadmap and scenarios. (Now, as to whether WebRTC takes over the world in 2014 or 2015 - that's a different question)

**NEW Feb 2013 - Disruptive Analysis WebRTC report - details here**

5) Nokia won't be acquired

And especially not by Microsoft. If it's going to succeed at making Windows Phones, it will do so independently; if it's going to fail, it doesn't matter who owns it. The Asha and low-end product lines would be a poor fit for Microsoft, too. It's conceivable that Microsoft might provide some sort of financing - after all, a Microsoft cash injection helped rescue Apple in the 1990s. I guess some sort of private-equity buyout isn't inconceivable either.


6) LTE won't replace fixed broadband.

I'm really impressed by the way LTE works in practice. People I show it to are wowed by the speeds, albeit on a fairly empty network (EE's in the UK). Its adoption rate in Korea, Japan and the US has been surprisingly swift. Its growth in subscriber numbers and revenues in 2013 will be impressive. BUT.... it won't make any meaningful dent in use of fixed broadband, and especially FTTx.

It's dependent on (expensive) handsets & other devices, the economics are too different, the prices to consumers are miles apart when you consider volumes, and speeds are likely to drop as networks load up. Ultimately, the (shared) speed of a cell-tower is essentially the same as what can be delivered (dedicated) by a single fixed connection. Realistically, even with a lot of spectrum, several operators and a fairly dense cell-grid, LTE is going to struggle to offer more than an aggregate 5Gbit/s per sq km. That's not going to support people watching Netflix on HDTVs in their living rooms, especially if the TV is behind a couple of walls. Most of the use-cases I see for LTE metrocells are about outdoor / high-footfall areas - not trying to serve a high-density residential population.
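
To put rough numbers on that, here's a back-of-envelope sketch (TypeScript, with purely illustrative assumptions for household density, stream bitrate and peak concurrency - none of these figures come from the post itself):

```typescript
// Back-of-envelope: shared LTE capacity per km2 vs. in-home HD video demand.
// All inputs are illustrative assumptions, not measured values.
const aggregateLteMbpsPerKm2 = 5000; // ~5 Gbit/s per sq km, as argued above
const householdsPerKm2 = 3000;       // assumed dense residential area
const hdStreamMbps = 5;              // assumed single HD Netflix-style stream
const peakConcurrency = 0.3;         // assumed share of homes streaming at the evening peak

const peakVideoDemand = householdsPerKm2 * hdStreamMbps * peakConcurrency; // 4500 Mbit/s
console.log(`Peak video demand : ${peakVideoDemand} Mbit/s per km2`);
console.log(`LTE aggregate cap : ${aggregateLteMbpsPerKm2} Mbit/s per km2`);
console.log(`Headroom for everything else: ${aggregateLteMbpsPerKm2 - peakVideoDemand} Mbit/s`);
```

Even on those fairly generous assumptions, evening video alone eats most of the cell grid's aggregate capacity before any other traffic is counted.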

That said, there are niches where it will be more important - notably in rural areas outside the reach of copper/fibre, or for prepaid users who don't want broadband based on monthly contracts. On the other hand, the proliferation of, and demand for, WiFi is going to drive fixed connections, as will high-end home uses such as IPTV and gaming. It's notable that even LTE-advocating vendors like Ericsson project fixed broadband traffic volumes to remain at least 10x cellular loads for the foreseeable future.


7) OTT traffic on broadband won't be "monetised"

Yes, I know the fairy stories that DPI vendors tell their operator clients at bedtime about "monetisation" of Internet companies' traffic. But we simply won't be seeing so-called OTT players paying telcos, either through "1-800" business models for apps or - more generally - for access-network QoS. Even where it's legally permissible, it's still technically doubtful, commercially impossible and culturally anathema. (Zero-rating partnerships, on the other hand, are more realistic).

Every time I've met vendors and operators in this space over the past 3-4 years, I've asked if they are aware of any app/content companies paying "cold hard cash", either for enhanced access-network quality of "delivery" or for some sort of "sender-pays" carriage fee on subscribers' behalf. Doesn't matter if it's fixed or mobile, the answer is the same: an unequivocal "No". Since then, we've also had the ridiculous and unworkable sender-pays proposals defeated in the ITU WCIT debate in Dubai. The French operators are about to get a hard lesson in trying to force the issue too - bafflingly, they've tried to pick a fight with Google/YouTube over peering. It will not end well - I can think of at least 5 ways in which Google can run rings around them on this.

I've discussed the difficulties in many previous posts and reports, so I won't rehash them here. Either way, 2013 isn't going to see Netflix, Facebook, Google or any of the operators' own Telco-OTT content businesses start writing cheques for "delivered" traffic. The whole metaphor of sending, delivering, "distribution" and "digital logistics" is broken and needs to be retired. (Note: there is a very small chance that some corporate cloud IT companies might pay, eg for home-workers on fixed broadband. But I'm still doubtful - more probable is a distribution deal rev-share on the telco actually selling the cloud service in the first place).


8) Handset purchase patterns won't change that much

There's a lot of discussion at the moment about operators ending subsidies, to reduce operating costs. T-Mobile in the US is going in that direction, and assorted other telcos have discussed something similar. Various commentators have suggested that this might impact Apple and to some degree Samsung, as many of their devices in certain markets attract sizeable subsidies. Others have suggested that this might lead to greater spend on services, as users hang on to old devices longer once they realise what the "real price" of phones is.

I don't really buy this, even though the idea has some elegance and appeal. In those markets in which subsidies are prevalent (mostly post-paid centric markets like the US and Northern Europe), end-users have become habituated to getting free or heavily-discounted phones. But the quid pro quo is that operators have become habituated to getting 18 or 24-month contracts. Both of those habits are going to change only slowly. Telefonica and Vodafone tried abandoning subsidies earlier this year in Spain, but had to row back when they lost market share to Orange (which kept them) and found post-paid subscribers moving to prepay or 1-month rolling deals.

What we might see is some operators moving from subsidies to some form of instalment/credit scheme for buying phones, separate from the service tariff. That also helps them get around increasingly tight accounting rules that frown upon blended service/equipment packages, especially in terms of reported revenue allocation. However, the elephant in that room is that reported service ARPU will take a nosedive when handset-subsidy "repayments" are stripped out of future reported revenues.

Users are also unlikely to move to buying phones from new retail sources very quickly. While the "cognoscenti" already get unlocked "vanilla" handsets in those markets (and clearly 70% of the rest of the world already does anyway), it's going to take a long time for retail distribution channels to shift away from operator-controlled retail outlets. One other thing to ponder here: do operators really want to reduce their ability to preload apps and configurations into devices (think content apps, policy, RCS, VoLTE etc), because customers buy them vanilla elsewhere? If users buy and own their own phones independently, they are likely to be unwilling to load telco bloatware onto those devices.


9) WiFi won't be "seamless" or tightly coupled to mobile network cores

I've written and spoken widely about why mobile operators' (and their vendors') dreams of fully integrating WiFi in their networks are implausible. There's too much non-operator WiFi out there, and too many other participants and stakeholders in the value chain (users themselves, fixed operators, venues, brands, device & OS suppliers, employers, government authorities etc).

In general, WiFi in most countries is moving to a model of being "free at the point of use", available widely in cafes, airports, shops and even on the streets. (In some markets like UAE or China, controls are tighter and there is much less open/free WiFi). End users connect at home, in their local Starbucks, at conferences with a free code from the organisers, and so forth. They use phones as tethers, connect regularly at home and at work. While many users do want assistance with the clunkier aspects of logging on to WiFi, they are unlikely to be willing to pay (even implicitly) for this. Attempting to bundle WiFi traffic into cellular data caps won't fly - and neither will exerting onerous policy controls, when the same venue allows "non-seamless" access to the whole Internet. Yes, there will be a couple of exceptions - eg parents locking down WiFi access for children, but that's a corner-case.

I still meet many in the cellular industry who treat WiFi as "just another access" that should be treated as part of a HetNet, with traffic and authentication unified with 3G/4G. This is palpable nonsense - WiFi is inherently different in many important ways, not least of which is users' expectation of how it should work. We'll see more "WiFi pain-reduction" solutions like Devicescape's, but the key thing to think of is "frictionless", not "seamless". Seams are borders, and nobody wants to have their data knocked over the head & smuggled across the frontier in the back of a SIM-powered truck.


10) No operator will make a bold acquisition of a major Internet player

So far, telcos have passed on buying multiple obvious Internet winners that could have parachuted them straight into the OTT top tier. Skype, Yammer, Instagram etc. have all been innovators that could have changed the game in either consumer or business services, with assorted synergies with existing telco properties or aspirations. LinkedIn, Twitter and Tumblr are now probably too expensive, despite being obvious targets for years. I know that several telcos sniffed around Skype before Microsoft stepped in.

Instead, telco M&A teams tend to prefer safe bets - consolidation among their peers, maybe a systems integrator or two, mobile handset retailers and so on. CFOs funnel cash towards new spectrum, instead of new applications.

I don't think this will change. We'll see a few smaller innovative buys (similar to SingTel/Amobee or Telefonica/TokBox) but those will feed into operators that are already grasping the Telco-OTT nettle and need some tactical knowledge or assets. Strategic-scale acquisitions will be deemed too risky. To be fair, it's not clear that most "cool" startups would fare especially well inside a telco's rigid structure and stodgy conservative culture. There is also a wariness of repeating expensive mistakes in content, ten or so years ago.

It's a shame, though, as a Verizon/Netflix, China Mobile/Line or Vodafone/LinkedIn combination could be pretty potent if managed well.


Conclusions

I hope that a few of these have given pause for thought. Obviously, there's a bunch of things I am a lot more positive about as well, but I'll keep those for regular readers and customers rather than put them on this post.

One thing I'd like to see more of in 2013 is for conference organisers to be more bold and specifically seek out contrarian views and "heretical" speakers. Yes, your sponsors may prefer a more consistent brainwashing of positivity and spin. But your delegates deserve to see both sides of a debate, robustly argued. Events should not be purely cheerleading and wishful thinking - let's see more challenge to separate the wheat from the chaff.


Bonus!

A couple of other, shorter, extra anti-forecasts for 2013:

- Cellular M2M will start to lose out to WiFi, Zigbee, private radio and others for connecting devices that don't actually move about
- LTE roaming will be widely ignored because of bill-shock risk, spectrum mismatch in devices and issues around supporting voice
- Nobody normal will be using mobile phones to unlock doors of homes, cars or hotels instead of keys or cards
- Mobile video-calling/sharing will remain almost irrelevant, and generate way more PR puff than it deserves. Some other embedded-video apps might make more sense, though.
- Augmented Reality is mostly touted by people with a limited grip on non-augmented reality. It won't be meaningfully important in 2013, if ever.
- Everyone will hate the new venue for MWC13. I'm not going - if I fancied a week on an industrial park next to IKEA, I'd go to Neasden as it's closer.
- The Internet will happily go about its merry monoservice business, despite the apocalyptic predictions of my colleague Martin Geddes. I won't be waking from nightmares shouting "Non-stationarity!!!"
- Outside of the Galaxy Note-style "phablet", few tablets will have 3G/4G modems embedded, and even fewer will have them regularly used
- We won't see much change in Internet Governance, despite lots of noise and thunder from those mostly-thwarted at the ITU WCIT conference
- White-space technology won't evolve as far, as fast or as disruptively as many people hope
- We probably won't see Software-Defined Networking (SDN) proceed as fast as many hope, but that's an area for me to research a bit more fully before nailing down that conclusion

Have a Happy New Year. Be Disruptive.....

Thursday, December 27, 2012

WebRTC is the new battleground for peer-to-peer vs. server-based models for communications

I'm doing a really deep dive into WebRTC technology and business models at the moment. My view is that it's going to be a huge trend during 2013, and will be subject to the highest levels of hype, hope, marketing, debunking, politics, ignorance and misinformation. I'm not predicting it will take over the world (yet) - but I certainly am predicting that it's going to be a major disruptor.

**NEW Feb 2013 - Disruptive Analysis WebRTC report - details here** 

It's a fast-moving and multi-layer landscape encompassing telcos, network suppliers, device vendors, Internet players, software developers, chip vendors, industry bodies, enterprise communications specialists and probably regulators. Because my research and analysis "beat" covers all of those, I'm hoping to be the best-placed analyst and strategy consultant to decode the various threads and tease out predictions, opportunities, threats and variables.

One of the most interesting aspects is the linkage between intricate technology issues, and the ultimate winners and losers from a business point of view. Just projecting based on the "surface detail" from PR announcements and vendor slide-decks misses what's going on beneath. I'm finding myself going ever deeper down the rabbit-hole, before I can return and emerge with a synthesised and sanitised picture of possibilities, probabilities and impossibilities. That's not to say that there aren't also a set of top-down commercial and end-user trends driving WebRTC as well - but that's for another day.

A cursory glance at the WebRTC landscape reveals a number of technical battlegrounds - or at least foci of debate:

  • Codec choices, especially VP8 vs. H.264 for video (see the sketch after this list)
  • Current draft WebRTC vs. Microsoft's proposed CU-RTC-Web vs. whatever Apple has up its sleeve
  • The role of WebSockets, PeerConnection, SPDY and assorted other protocols for creating realtime-suitable browser or application connections
  • What signalling protocols will get adopted along with WebRTC - SIP, XMPP and so on
  • What does WebRTC offer that Flash, Silverlight and other platforms don't?
  • What bits of all this does each major browser support, when, and how? How and when are browsers updated?
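
To make the codec point concrete, here's a minimal sketch (TypeScript, assuming a recent browser with the standard, unprefixed WebRTC API rather than the prefixed early implementations) that prints which codecs a browser is prepared to offer; they surface as a=rtpmap lines in the SDP produced by createOffer():

```typescript
// List the codecs a browser would offer for a video-capable WebRTC session.
// Assumes a modern, unprefixed RTCPeerConnection implementation.
async function listOfferedCodecs(): Promise<string[]> {
  const pc = new RTCPeerConnection();
  pc.addTransceiver("video");                      // request a video m-line without touching the camera
  const offer = await pc.createOffer();
  pc.close();
  return (offer.sdp ?? "")
    .split("\r\n")
    .filter(line => line.startsWith("a=rtpmap:"))  // e.g. "a=rtpmap:96 VP8/90000"
    .map(line => line.split(" ")[1]);
}

listOfferedCodecs().then(codecs => console.log("Offered:", codecs));
```
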
While a lot of these seem remote and abstruse, there is another (mostly unspoken) layer of debate here:

Is WebRTC mostly about browser-to-browser use cases? Or is it aimed more at browser-server/gateway applications?

That is the secret question, and it's a chicken-and-egg one. Certain of the technical debates above tend to favour one set of use cases over the other - perhaps by making things easier for developers, or by introducing the role of third parties who operate the middle-boxes and monetise them as "services". Because of this, it is also the hidden impetus behind various proposals and the political machinations of various vendors and service providers. Other, less "Machiavellian" players are going to find themselves in the role of passengers on the WebRTC train, their prospects enhanced or damaged by these external factors beyond their control.

Let's take an example. Cisco and Ericsson are both fans of H.264 being made a mandatory video codec for WebRTC. Now there are some good objective reasons for this - it is widespread on the Internet and on mobile devices and it is acknowledged as being of good quality and bandwidth-efficiency. But.... and this is the pivot point... it is not open-source, but instead incurs royalty payments for any application with more than 100K users. Conversely, Google's preferred VP8 is royalty-free but has limited support today - especially in terms of hardware acceleration on mobile devices. Maybe in future we'll see VP8-capable chipsets, but for now it has to be done in software, at considerable cost in terms of power use.

On the face of it, Cisco and Ericsson are behaving entirely rationally and objectively here. A widely adopted, hardware-embedded codec is clearly a good basis for WebRTC. But.... by choosing one with a royalty element, they are also swaying the market towards use-cases that have business models associated with them; especially ones that are based on "services" rather than "functions", as someone, somewhere, will need to pay the H.264 licence. (Ericsson is one of the MPEG-LA patent holders for it, too). That works against "free-to-air" WebRTC applications that work purely in a browser-to-browser or peer-to-peer fashion. I guess that it could just push the licensing cost onto the browser providers, ie Google, Mozilla etc, but that doesn't help non-browser in-app implementations of WebRTC APIs.

But looking more broadly at all the battles above, I see a "meta-battle" which perhaps hasn't even been identified, and which also links to things like WebSockets (which is a browser-server protocol) and PeerConnection (browser-browser) as well as the role of SIP (very server/gateway-centric).

In a browser-to-browser communications scenario, there is very little role for communications service providers, or those vendors who provide complex and expensive boxes for them. Yes, there is a need for addressing and assorted capabilities for dealing with IP and security complexities like firewalls and NATs, but the actual "business logic" of the comms capability gets absorbed into the browser, rather than a server or gateway. It's a bit like having the Internet equivalent of a pair of walkie-talkies - once you've got them, there's no recurring service element tied to "sessions". Only with WebRTC, they'd be "virtual" walkie-talkies blended into apps and web-pages.
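
As a hedged illustration of how little is left for servers in that scenario, here's a caller-side sketch in TypeScript: the only server roles are STUN for NAT discovery and a thin signalling relay. The `signaling` helper below is hypothetical (any out-of-band channel such as a WebSocket would do), and the answering peer's symmetric code is omitted:

```typescript
// Sketch of the "virtual walkie-talkie" case: the comms logic lives in the browser,
// with servers reduced to STUN (NAT discovery) and a thin signalling relay.
declare const signaling: {
  send(msg: object): void;
  onMessage(handler: (msg: any) => void): void;
};

const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.example.org" }] });

// Data flows peer-to-peer once connected; no media gateway or session server in the path.
const channel = pc.createDataChannel("push-to-talk");
channel.onopen = () => channel.send("over and out");

pc.onicecandidate = e => { if (e.candidate) signaling.send({ candidate: e.candidate }); };

async function call() {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send({ sdp: pc.localDescription });
}

signaling.onMessage(async msg => {
  if (msg.sdp) await pc.setRemoteDescription(msg.sdp);       // the answer from the far end
  if (msg.candidate) await pc.addIceCandidate(msg.candidate);
});
```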

Now, the server-side specialists have other considerations here too. Firstly, they have existing clients - telcos - that would like to inter-work with all the various end-points that support WebRTC. Those organisations want to re-use, extend and entrench their existing service models, especially telephony and SIP/IMS-based platforms and offerings. Various intermediaries such as Voxeo, Twilio and others are helping developers target and extend the reach of those services via APIs, as discussed in my last post. Some vendors like SBC suppliers are perhaps a bit less exposed than those more focused on switching and application servers.

There is also the enterprise sector here, which would clearly like to see its call-centres and websites connect to end-users via whatever channel makes most sense. WebRTC offers all sorts of possibilities for voice, video and data interaction with customers and suppliers. Enterprises would also (generally) prefer to reduce their reliance on expensive services-based business models in the middle, but they're a bit more pragmatic if the costs become low enough to be ignored in the wider scheme of things.

Now all of this looks like a big Venn diagram. There are some use-cases for which servers and gateways are absolutely essential - for example, calling from a browser to a normal phone. Equally, there are others for which P2P makes a huge amount of sense, especially where lowest-latency connections (and maximum security/privacy) are desirable. It's the bit in the middle that is the prize - how exactly do we do video-calling, or realtime gaming, or TV-hyper-karaoke, or a million other possible new & wonderful applications? Are they enabled by communications services? Or are they just functions of a browser or web-page? We don't have a special service provider to enable italic words online, so why do we need one for spoken words or moving visuals?

This isn't the only example of a P2P vs. P2Server battle - obviously the music industry knows this, as well as (historically) Skype. But it goes further, for example in local wireless connectivity (Bluetooth or WiFi Direct, vs. service-based hotspots or Qualcomm's proposed LTE-Direct). The Internet itself tends to reduce the role of service providers, although the line dividing them from content/application providers is much more blurry.

It would be wrong to classify Google as being purely objective here either. Despite high-profile moves like Google Voice, Gmail and Chat, I think its dirty secret is that it doesn't actually want to control or monetise communications per se. I suspect it sees a trillion-dollar market in telecoms services such as phone calls and SMS that could - eventually - be dissipated to near-zero, with those sums diverted into alternative businesses in cloud infrastructure, advertising and other services.

I suspect Google believes (as do I) that a lot of communications will eventually move "into" applications and contexts. You'll speak to a taxi driver from the taxi app, send messages inside social networks, or conclude business deals inside a collaboration service. You'll do interviews "inside" LinkedIn, message/speak to possible partners inside a dating app etc. If your friend wants to meet you at the pub, you'll send the message inside a mapping widget showing where it is... and so on.

I think Google wants to monetise communications context rather than communications sessions, through advertising or other enabling/exploiting capabilities.

Even when abstracted via network APIs, conventional communications services pull through a lot of "baggage" (ie revenue and subscriber lock-in). They perpetuate the use of scarce (and costly) resources like E.164 phone numbers.

I also think that Microsoft and Apple are somewhere in the middle of this continuum, which is why they are procrastinating. They both have roles to play in both scenarios - and therefore, perhaps, are the kingmakers. Both are advocates on the specific issue of H.264 - Apple because of FaceTime, and Microsoft for reasons that seem unclear to me, as Skype is adopting VP8. More generally, Microsoft seems more server/network-centric, but is also wary of doing anything that allows the IE browser to fall further behind.

Either way, this contretemps is about more than just technology - it is, ultimately, rooted in the nature of WebRTC as a business. Specifically, it is about drawing the boundary between WebRTC services and WebRTC features.

I'm not making a judgement call here. This is not so much an iceberg analogy as a tectonic one. We've got a number of plates colliding. The action - the subduction zone - is occurring at a deep level. And over the next few years we're going to get some sudden movements that generate earthquakes and tsunamis.

(Amusingly, the first line on the tectonics web-page says "When two oceanic plates collide, the younger of the two plates, because it is less dense, will ride over the edge of the older plate" - perhaps a better analogy than I realised at first!)

Keep reading this blog over the coming days: I'm working on the first seismic map of the WebRTC world. Sign up for updates here and follow @disruptivedean on Twitter.

**NEW Feb 2013 - Disruptive Analysis WebRTC report - details here** 

Sunday, December 23, 2012

Do telco network voice/messaging capabilities need to be "exposed" via 3rd party APIs?

For several years now, we've watched telecom operators try to court application developers, offering APIs for a variety of network and database functions. These have ranged from billing-on-behalf, through to SMS and phone call initiation, location lookups, authentication and so forth. Some of these have been offered by operators acting "solo" (most larger operators have developer programmes), while we have also seen standardisation efforts through organisations like OMA and GSMA's OneAPI programme.

All of this is seen as a way for telcos to:
a) Build revenues by charging directly for the APIs
b) Gain mindshare & relevance among Internet & IT developers, building an "ecosystem"
c) Potentially move up the value chain beyond "minutes and messages" into CEBP (communications-enabled business processes) or application rev-share models

Usually, the network APIs have been just a part of a wider developer-facing effort, which has also provided developer tools, testing facilities, and even funding through associated incubators and VC arms. While some seem to have gained quite a lot of critical acclaim (eg Telefonica's BlueVia), the actual results - especially in monetary terms - have been lacklustre. Most operators have now moved away from operating their own appstores, for example.

I see a few problems that have contributed to this state of affairs:

  • While developers understand the role of device-side APIs (for creating smartphone apps, eg on iOS & Android) and web APIs (for creating mashups, eg with Google Maps), they have much less awareness or understanding of the role (and possibilities) of in-network telco APIs
  • Various telco API initiatives have been quite poor - the actual functionality, or the way it has been packaged, has not fit well with the way developers design their applications or services, or has been poorly documented, or has required clunky sign-up or registration processes.
  • Telcos are seen as slow and uncool (important for developers)
  • If you're not a telco person, it's difficult to get your head around how the networks actually work and for whom. If a user calls into a company's call-centre, does the operator API work via the user's network, or the company's provider? Apps are always user-centric (and must be downloaded), but applications are server-side

The last point is really critical. It's the difference between apps and applications, and goes right to the heart of whether operators have a chance to resuscitate their falling SMS and telephony revenues (and usage).

I've long talked about the limitations of telephony, and it's also something that Martin Geddes and I cover in depth at our Future of Voice workshops. Phone calls are old, clunky and really not a great experience - they're often interruptive, there's no upfront indication why someone is calling, there's no context info beyond caller ID, they're charged per-minute (which may be an inappropriate metric), and they come with a whole set of social and etiquette rules that may diminish the value of an interaction.

The question is whether the best way around this is to avoid using "calls" at all and just shift to other forms of communication - "non-telephony voice apps" - or to abandon voice completely and shift to messaging or other ways to interact. If I want a taxi nowadays, I use a London cab company's app, which is faster and better because I enter the address, get a price quote, time of arrival, vehicle registration and so on. Much better than trying to speak to a dispatcher from a noisy pub or restaurant.

But the other option is to try and reinvent the phone call - and the phone network - to be more useful, and work around some of the deficiencies of the format. We've long seen "call me" buttons on company websites, but how much more valuable would "call me, with your agent having been sent the web-form I've filled in 80% and then got stuck on, so they can help me complete it" be? That's clearly not interruptive (I've requested it), it's got context (the partially-filled form), maybe it's recorded by me or them, and I'm not paying for it - and maybe they're paying by a metric other than the minute.

The problem has been that that's not just a telco API. It's both a telco AND web API, working together. It's also (probably) not inside a smartphone app, but just uses the regular phone dialler as the UI. Which makes it very hard for a company to specify, a developer to produce, or for a telco to enable. Add in that the company might want to give its customers the option to connect via both traditional phone calls and VoIP ("Skype me"), plus various styles of messaging, and it becomes harder still. They might also want all this integrated into a smartphone app that their regular customers download (eg bank, airline etc). Add WebRTC into that for in-browser comms in future, too.
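
To make that more tangible, here's a purely hypothetical sketch (TypeScript) of what such a combined "call me, with context" request might look like from the company's web app. The endpoint, field names and charging options are all invented for illustration - this is not a real Twilio, Voxeo or telco interface:

```typescript
// Hypothetical "meta-API" request combining web context with call initiation.
// Everything here (URL, fields, charging models) is an assumption, not a real API.
interface ContextualCallbackRequest {
  customerNumber: string;                       // E.164 number today; a VoIP/WebRTC address in future
  agentQueue: string;                           // which call-centre team should ring back
  context: {
    formUrl: string;                            // the page the customer was on
    formFields: Record<string, string>;         // the 80%-completed web form
    stuckAt: string;                            // where they gave up
  };
  recordConsent: boolean;
  chargingModel: "per-outcome" | "per-call" | "free";  // something other than per-minute
}

async function requestCallback(req: ContextualCallbackRequest): Promise<void> {
  // One HTTP call from the company's web app; the provider composes the telco
  // call-initiation API and the web context behind the scenes.
  await fetch("https://api.example-meta-provider.com/v1/callbacks", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: "Bearer <token>" },
    body: JSON.stringify(req),
  });
}
```

The point is that the telco's call-initiation API sits somewhere behind that single request, composed with the web context by whoever operates the meta-API.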

(There are many other use cases beyond call-centres, but this is a good illustration).

Looked at that way, it becomes clear that the telco network call-initiation API is only an ingredient of a much broader "meta-API" (probably) provided by somebody else.

This is why we're seeing other companies start to sit between telcos and developers - Voxeo and Twilio probably being the best-known examples. They are able to help developers put together those sorts of useful functions that don't have to have a downloadable app, but run using "legacy telephony" updated with rich context and programmability. They can still exploit ubiquity where appropriate (ie everyone can call/be called), but work around some of the human psychological and behavioural downsides of a 100-year-old service.

We've seen various partnerships already - Voxeo & DT, Twilio & AT&T, for example - and I think we will see quite a lot more. Apart from anything else, those companies are "web telephony" players, and don't have the same anti-coolness stigma attached to telco APIs. They may also be in a place to create more sensible business models that don't boil down to minutes.

I see parallels here with other areas of telecoms. Jasper Wireless does much the same intermediary role for M2M - it provides a platform for telcos that turns their network capabilities into something that device manufacturers can actually use to develop new service models. Most telcos wouldn't know how to deal directly with a company that makes washing machines or digital cameras, so having an intermediary layer makes sense. We may also see the same for authentication/ID services, and of course we've long had SMS aggregators or external advertising networks that also essentially help telcos monetise their network resources or customer data assets.

This range of API intermediaries might even be able to salvage something from the wreckage of RCS (certainly, various of them are hoping for it). They could also be the ones who manage to put some lipstick on the IMS pig for developers, although I suspect that will be very market-dependent and won't work on a generalised global basis.

Where I think this approach will struggle is with consumer-to-consumer services, which are increasingly going to be app-based on smartphones or tablets. There will probably be reasons to bridge between phone calls, SMS and websites on occasion, but I think that Viber, Skype, Facebook and others will still develop the majority of their comms functionality on a direct-to-user basis, without the need for "embedded" telco functionality in the form of telephony. I'm unconvinced that some of the niche-specific uses of voice comms (eg karaoke, baby monitors, voiceprint biometrics etc) will use telco "raw ingredients" rather than app-style OTT VoIP or (in future) WebRTC.

But for companies and brands, this new layer of meta-APIs, which incorporate some telco APIs "under the hood", has some interesting potential. Less clear is what proportion of the value flows to the operators, versus the new tier of Twilio, Voxeo et al in the middle.

Monday, December 17, 2012

Difficult optimisation for 3G vs 4G vs WiFi

For the last few days I've been playing with an iPhone 5 on EE's new LTE network in London. Rapidly launched after the regulator Ofcom controversially allowed EE (Everything Everywhere) to refarm its 1800MHz 2G spectrum, the service has allowed EE to steal a march on its UK rivals in launching 4G.

(As backdrop - the UK auction of 2.6GHz spectrum has been plagued with delays for the past 5 years. I worked on a couple of projects for UK government bodies looking at options for the band around 2007-8, and since then we've been stuck in an endless cycle of lawsuits, consultations and slow government action. EE short-circuited the endless loops by preempting the auction with refarmed 1800MHz. This has led to a rapid settling of differences by everyone else, and now a combined 2600+800 sale is coming up soon).

My initial thoughts on UK LTE are pretty positive - using Speedtest, I've been getting 20-25Mbit/s down (up to 33Mbit/s at one point), and 13-20Mbit/s up, along with 40-50ms ping times. I've most noticed hugely faster email downloads, instant initial web connections and blazing-fast maps (obviously using the new Google Maps app, rather than the dreadful Apple version included on the iPhone).

However, I've also noticed pretty poor indoor penetration - weirdly, often getting a UMTS2100 connection, dropping back to 3G, rather than LTE1800. I assume that means EE hasn't put LTE cells alongside all of its 3G cells in central London, as otherwise I would have expected the lower-frequency band to give better coverage, not worse.

That would also explain why part of EE's press announcement about build-out last week referred to densification of its network in existing covered markets.

In my view, that's absolutely essential. At the moment, the user experience is of blazing fast outdoor speeds, and much more variable indoor ones - often bumped down to 3G - which is just where people are much more likely to use video, m-commerce and so on. Outdoors, people mostly use maps, email, social network stuff and messaging. Maybe some content downloads or streaming on public transport. Clearly, having fast speeds where you don't really need them, and slow speeds when you do, is sub-optimal.

That has another implication - people indoors will still prefer WiFi if this situation is maintained. If I had a decently large monthly quota, and a good chance of getting indoor LTE, I'd probably start switching my usage back from WiFi to cellular, unless I was doing something very data-intensive (in volume terms) indeed. At present, on an unloaded network, LTE gives better average performance than WiFi, which in turn gives a better average than 3G. Obviously it depends a lot on the venue - some WiFi networks are congested and almost useless, but overall I'd say 4G>WiFi>3G, which also reflects backhaul.

That's a problem for operators, who have to contend with users choosing when and where to use WiFi (no, it won't be "seamless" access to carrier WiFi, in 90% of cases). Without a suitably dense network or sub-1GHz bands, they risk failing to capitalise on LTE's speeds to change user perceptions of the relative desirability of cellular vs. private WiFi.

This also leads to a secondary problem: if MNOs are going to want to use WiFi for offload, or even as an extra value-added service, they're probably going to have to install fibre connections for fixed broadband to support it, as otherwise it's likely to offer worse performance than LTE. That's feasible (but expensive) for own-build networks, but likely to be near-impossible for roaming partners or venues that just operate their own broadband.

This suggests to me that connection management is going to become ever more complicated. Not only is there a WiFi/cellular problem to optimise, but it's also going to be dependent on local availability of 3G vs. 4G (which is driven by frequency & cell-grid density) and possibly on small vs. macro cell availability. Add in additional factors such as application preferences and whether the user is actually mobile (ie moving about), and there is going to be an almost unsolvable level of complexity.

Imagine trying to optimise connection between:

- 3G small-cell coverage on 2100MHz
- LTE macro coverage on 1800MHz
- 3G macro coverage at 900MHz
- MNO WiFi from the small cell outside the building, with fast microwave backhaul
- 3rd-party WiFi + MNO roaming, but with a 10Mbit/s ADSL backhaul
- Another 3rd-party WiFi provided by the venue, with no MNO roaming or auto-authentication, but with fibre backhaul

Oh, and the user is trying to play a game (for which latency is the #1 concern) while also downloading email attachments in the background (cost/speed as #1).

Then, just for the fun of it, imagine that you also have the option to run cellular+WiFi concurrently and either bond the two connections together, or split them entirely via the handset OS.
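
As a toy illustration of why this gets messy, here's a TypeScript sketch that scores each of the connections above per application. Every number is invented for illustration, and a real connection manager would also need live measurements, operator policy and hysteresis to stop it flapping between links:

```typescript
// Toy per-application link scoring across the candidate connections listed above.
interface Link { name: string; latencyMs: number; downlinkMbps: number; costPerGB: number; }
interface AppNeeds { latencyWeight: number; speedWeight: number; costWeight: number; }

const links: Link[] = [
  { name: "3G small cell (2100MHz)",      latencyMs: 70, downlinkMbps: 8,  costPerGB: 5 },
  { name: "LTE macro (1800MHz)",          latencyMs: 45, downlinkMbps: 25, costPerGB: 5 },
  { name: "3G macro (900MHz)",            latencyMs: 90, downlinkMbps: 3,  costPerGB: 5 },
  { name: "MNO WiFi, microwave backhaul", latencyMs: 30, downlinkMbps: 40, costPerGB: 0 },
  { name: "Roaming WiFi, 10Mbit/s ADSL",  latencyMs: 60, downlinkMbps: 6,  costPerGB: 0 },
  { name: "Venue WiFi, fibre backhaul",   latencyMs: 25, downlinkMbps: 60, costPerGB: 0 },
];

function score(link: Link, app: AppNeeds): number {
  // Higher is better: reward speed, penalise latency and cost according to the app's weights.
  return app.speedWeight * link.downlinkMbps
       - app.latencyWeight * link.latencyMs
       - app.costWeight * link.costPerGB * 10;
}

function bestLink(app: AppNeeds): Link {
  return links.reduce((a, b) => (score(a, app) >= score(b, app) ? a : b));
}

const gaming: AppNeeds = { latencyWeight: 2.0, speedWeight: 0.2, costWeight: 0.1 };
const emailSync: AppNeeds = { latencyWeight: 0.1, speedWeight: 0.5, costWeight: 2.0 };

console.log("Gaming ->", bestLink(gaming).name);        // picks the low-latency, fibre-backed venue WiFi
console.log("Email sync ->", bestLink(emailSync).name); // also lands on a free WiFi link, cost-dominated
```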

Sounds like a combinatorially-hard problem, not solved easily in the network, the OS or a single app. Should be a lot of opportunities for software and infrastructure vendors - but also some serious pitfalls such as the ill-fated ANDSF technology, which tries to make everything operator-driven. It does point to the huge value of sub-1GHz spectrum, though - which should make the UK Treasury happy about the 800MHz auction.

Monday, December 10, 2012

WCIT, Neutrality, OTT-Telco & "sustainable" Internet business models

I have a general dislike of the word "sustainability". To me, it invokes the image of grey, uninspiring perpetuity, devoid of imagination or change. It is the last resort of the conservative, controlling and unambitious.

In its negative variant, "unsustainable", it is often a cloak for unpalatable ideas, typically used when trying to justify limits on freedom for reasons of ideology or partisan commercialism.

Frequently, advocates of "sustainability" overlook or deliberately ignore the pace of technical innovation, and the ingenuity of humans in overcoming or working around problems. It is often misanthropic and miserablist, with arguments riddled with logical fallacies - especially straw men.

I'm not going to discuss environmental or socioeconomic themes on this blog, but the rhetoric of "sustainability" is now often applied to the technology industry as well, and particularly to the Internet itself.

Whenever you hear someone in the telecoms industry talk about a "sustainable business model", it should ring alarm bells. It means that they have gripes with the status quo - either because of ideological preconceptions, or because they feel they deserve long-term excessive rents without innovation and hard work.

Typically, "sustainability" arguments are more about the ends - changing the industry for its own sake - rather than the means: fixing a specific and immediate problem.


Dubai - hardly a beacon of sustainability

[Note: this post is being written while the WCIT12 conference is mid-way through]

I've already called out the ridiculous ETNO proposals for the current ITU WCIT conference in Dubai, and the earlier "telcowash" nonsense published by ATKearney about a so-called sustainable model for the Internet.

The ITU itself is trying to justify its intrusive stance on DPI and traffic management by citing the usual spurious themes about "data deluge", when it is already apparent that sensible pricing is a very easy & effective way to limit traffic and change user behaviour, if needed. Some of 3GPP's work on the PCC architecture is similarly self-serving.

Various ITU proposals use the QoS/sender-pays argument to suggest (falsely) that this is the way to fund Internet build-out in developing countries. More likely, a contractual form of peering/interconnection for the Internet would stifle or kill its development in those markets, and with it kill the promising young software and app ecosystems that are springing up. As OECD points out, the interconnection of the Internet works so well because there are no contracts, let alone complex settlement fees and systems for QoS and "cascading payments", like the legacy & broken international telecoms model.

It is perhaps unsurprising that the telecoms industry and its vendors keep flogging the dead horse of application-specific charging for the public Internet. As regular readers know, I am an advocate of separating the (neutral-as-possible) Internet from other broadband services which telcos should be free to manage any way they see fit (competition laws & consumer protection notwithstanding).

Let's face it - ITU and Dubai have nothing to do with the "sustainability" of the Internet; it's about rent-seeking telcos (and maybe governments) wanting to find a way to get money out of Google and/or wealthier countries, in ways that don't involve innovation and hard work. And for certain countries, if the next Twitter or other "democratising" tool doesn't get invented because the Internet ecosystem breaks down - well, so much the better.

(It's also amusingly hypocritical to talk about "sustainability" from a vantage point in Dubai, given that the UAE has one of the highest levels of per-capita energy consumption on the planet).


The sustainability disease is infectious

However, other more respected - and thoughtful - people are also thinking along similar lines regarding so-called network "sustainability". My colleague Martin Geddes and I are in near-total disagreement on whether Internet Neutrality is either desirable or possible. He is working with a bunch of maths wizards from a company called PNSol to define a new theory of how networks can (and should) work, and how they could be managed for optimal efficiency.

Core to his thesis is that networks should be "polyservice" - basically code for managing different flows with different "quality" requirements, according to which are the most latency-sensitive. Although the understanding of IP's limitations and the optimisation algorithms are different (and probably much better), this is not that far conceptually from the differentiated-QoS network pitch we've all heard 100 times before. The story is that we can't carry on scaling and over-provisioning networks the way we have in the past, because the costs will go up exponentially. (More on that, below).

His observations that current networks can clog up because of weird resonance effects, irrespective of the theoretical "bandwidth" are quite compelling. However, I'm not convinced that the best way to fix the problem - or mitigate the effects - is to completely reengineer how we build, manage and sell all networks, especially the Public Internet portion of broadband.

I'm also not convinced by the view that current Internet economics are "unsustainable". There are enough networks around the world happily running at capacities much higher than some of those he cites as being problematic, which suggests maybe company-specific issues are greater than systemic ones.

While it's quite possible that he's right about some underlying issues (certainly, problems like buffer-bloat are widely accepted), his view risks the same "moral hazard" that the ITU's sender-pays proposals do: it might have the unintended consequence of breaking the current "generativity" and innovation from the Internet ecosystem.

I think that putting systematised network "efficiency" on a pedestal in search of sustainability is extremely dangerous. It's a mathematician's dream, but it could be a nightmare in practical terms, with a potential "welfare" cost to humanity of trillions of dollars. The way the current Internet works is not broken (otherwise hundreds of millions of people wouldn't keep signing up, or paying money for it), so it's important not to fix it unnecessarily.

Now, one area where Martin & I agree is on observation and root-cause analysis of problems. By all means watch Internet performance trends more carefully, with more variables. And we should be prepared to "fix it" in future, if we see signs of impending catastrophe. But those fixes should be "de minimis" and, critically, they should - if at all possible - allow the current hugely-beneficial market structure to endure with minimal changes. Trying to impose a new and unproven non-neutral layer on today's Internet access services is a premature and dangerous suggestion.

[Environmental analogy: I believe that current theories of anthropogenic climate change are mostly correct, although better modelling and scrutiny, and continued confrontational science, are needed. However, unlike some lobbying groups who see this as an opportunity to change the world's social and political systems, I'd prefer to see solutions that work within today's frameworks. We need to decouple pragmatically fixing the problem - clean energy etc - from a more contentious debate about consumerism, globalisation, capitalism and so on. We shouldn't allow extremists - environmental or telco - to exploit a technology problem by foisting an unwanted ideology upon us].

It may be the case, however, that the way we do broadband is more broken, either in fixed or mobile arenas. (Note: broadband is more than just Internet access, although we often use the terms synonymously. However, 90% of the value perceived by residential end-users from broadband today comes from using it for the open, "real" Internet).

By all means use some sort of polyservice approach like that PNSol advocates, or another telco vendor's or standards body's preferred QoS mechanism, for overall broadband management. Indeed, this is already done to prioritise an operator's own VoIP and IPTV on domestic ADSL/FTTH connections, and also many corporate networks have had various forms of QoS for years.

The key thing is to keep the Internet as segregated from all of that experimentation as possible. Even on shared access media like cable or 3G/4G, there should be strict controls so that the Internet "partition" remains neutral. (Yes, this is difficult to define and do, but it's a worthy goal - perhaps the worthiest). If necessary, we may even need to run Internet access over totally separate infrastructure - and it would be worth it. If it clogs up and fails over time, then users will naturally migrate to the more-managed alternatives.

I don't buy the argument that we should reinvent the Internet because some applications work badly on congested networks (eg VoIP and streamed video). My view is that

  1. Users understand and accept variable quality as the price of the huge choice afforded them by the open Internet. 2.5 billion paying customers can't be wrong.
  2. Most of the time, on decent network connections, stuff works acceptably well
  3. There's a lot that can be done with clever technology such as adaptivity, intelligent post-processing to "guess" about dropped packets, multi-routing and so forth, to mitigate the quality losses
  4. As humans, we're pretty good at making choices. If VoIP doesn't work temporarily, we can decide to do the call later or send an email instead. Better applications have various forms of fallback mode, either deliberately or accidentally.
  5. Increasingly, we all have multiple access paths to the Internet - cellular, various WiFi accesses and so forth. Where we can't get online with enough quality, it's often coverage that's the problem, not capacity anyway.
  6. Anything super-critical can go over separate managed networks rather than the Public Internet, as already happens today

It may be necessary to have multiple physical network connections, as otherwise we need to multiplex both the unmanaged Internet and managed polyservice on the same access network. But that multi-connection existence is already happening (eg fixed+mobile+WiFi) and is a worthwhile price to pay to avoid risking the generativity of the current mono-service Internet.

By all means introduce new "polyservice" connectivity options. But they need to be segregated from the Public Internet as much as possible - and they should be prohibited by law from using the term "Internet" as a product description.

There is also a spurious argument that current Internet architectures are not "neutral", because things like P2P are throttled by some ISPs, some content is filtered out for legal reasons, and because of mid-mile accelerators / short-cuts like CDNs. That is a straw man, equivalent to saying that the Internet experience varies depending on device, server or browser performance.


Sustainable Internet growth?

But let's go back to the original premise, about sustainability. Much of both ITU's and Martin's/vendors' arguments pivot on whether current practices are "sustainable" and support sufficient scope for service-provider profitability, despite usage growth.

A lot of the talk of supposed non-sustainability of current network expansion is, in my view, self-serving. It suits people in the industry well to complain about margin pressures, capex cycles and so forth, as it allows them to pursue their arguments for relaxing competition law, getting more spectrum, or taxing the way the Internet works. We already see this in some of the more politically-motivated and overcooked forecasts of mobile traffic growth. Few build in an analysis of either behavioural change and elasticity as a result of pricing, or the inevitable falling costs of future technological enhancements. (Although some are better than others).

But I have seen almost no analysis of where the supposed cost bottlenecks are. If there is a "cost explosion" decoupled from revenues, where is the "smoking fuse" that ignites it? Are we missing a step on the price/performance curves for edge routers or core switches? Are we reaching an asymptote in the amount of data we can stuff into fibres before we run out of usable wavelengths of light? Are we hitting quantum effects somewhere? Overheating?

Before we start reinventing the industry, we should first try and work out what is needed to continue with the status quo - why can't we continue to just over-provision networks, if that's worked for the last 20 years?

Now, to be fair, the OECD identifies a gap in R&D in optoelectronic switching going back to the early 2000s, when we had a glut of fibre and lots of bankrupt vendors in the wake of the dotcom bubble bursting. Maybe we lost an order of magnitude there, which is still filtering down from the network core to the edges?

In mobile, we're bumping up against the Shannon limit for the number of bits we can squeeze into each Hz of spectrum, but we're also pushing towards smaller cells, beam-forming and any number of other clever enhancements that should allow "capacity density" (Gbit/s per km2) to continue scaling.
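
For reference, the limit in question is Shannon's C = B·log2(1 + SNR). A quick sketch (TypeScript, with illustrative SNR values only) shows why the per-Hz gains are running out, and why the smaller-cell and beam-forming route matters more:

```typescript
// Shannon capacity per Hz: C/B = log2(1 + SNR). SNR values are illustrative.
function bitsPerHz(snrDb: number): number {
  const snrLinear = Math.pow(10, snrDb / 10);
  return Math.log2(1 + snrLinear);
}

[0, 10, 20, 30].forEach(snrDb =>
  console.log(`${snrDb} dB SNR -> ${bitsPerHz(snrDb).toFixed(1)} bit/s per Hz`));
// Output: 0 dB -> 1.0, 10 dB -> 3.5, 20 dB -> 6.7, 30 dB -> 10.0 bit/s per Hz.
// Real cellular links rarely enjoy 30 dB SNR, so the practical per-Hz ceiling is modest -
// hence the push towards smaller cells and beam-forming to raise capacity density instead.
```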

I'm pretty sure that the frequency of press releases touting new gigabit and terabit-scale transmissions via wired or wireless means hasn't slowed down in recent years.

All things being equal, the clever people making network processor chips, optical switching gizmos and routing software will be well-aware of any shortfall - and their investors will fully understand the benefit to be reaped from satisfying any unmet demand.

A lack of "sustainability" can only occur when all the various curves flatten off, with no hope of resuscitation. I'm not a core switching or optics specialist, but I'd like to think I would have spotted any signs of panic. Nobody has said "sorry guys, that's all we've got in the tank".


OTT-Telco: Debunking the unsustainability myth?

Most people reading this blog will be very familiar with my work on telecom operators developing OTT-style Internet services. There are now close to 150 #TelcoOTT offerings around the world that Disruptive Analysis tracks, and my landmark report published earlier this year has been read by many of the leading operators & vendors around the world.

The theme of telcos and/vs/doing/battling/partnering so-called OTT providers is well-covered both on this blog and elsewhere (such as this article I guest-wrote for Acme Packet recently).

One element that doesn't get covered much is that various Internet companies are now themselves becoming telcos. Google, notably, has its Kansas City fibre deployment, where it is offering Gbit-speed fibre connections at remarkably low prices ($70 a month). But it's not alone - Facebook is involved in a sub-oceanic Asian fibre consortium, Google is reportedly looking at wireless (again), assorted players have WiFi assets - and, of course, various players have huge data centres and networks of dark fibre.

This trend - OTT players becoming telcos (in the wider sense) - seems inevitable, even if the oft-hyped idea of Apple or Facebook buying a carrier remains improbable. OTT-Telco may eventually become as important as Telco-OTT.

For me, this is where the so-called sustainability issue starts to break down. Firstly - is Google swallowing the costs of the Kansas network and under-pricing it? Or has it debunked the naysayers' cries of Internet "monoservice" armageddon? Given that it makes its own servers, is it also making any of its own networking gear to change the game?

I'm sure Google has already thought about this idea (and I've mentioned it to a couple of Googlers), but I think that it should seriously consider open-sourcing its management accounting spreadsheets. Shining a light on the detailed cost structures of planning, building and running a fibre network (equipment, peering, marketing etc) would make other companies' claims of sustainability/unsustainability of business models more transparent.

While it is quite possible that Google's economics are very similar to its peers around the world, it may also have used its engineering skills, Internet peering relationships - or other non-conventional approaches - to lower the cost base for delivering fast access. It may also have different ways of structuring its cost- and revenue-allocation, outside the legacy silos of other telcos. It may have its own forms of traffic-management / flow-management that minimise the damaging volatility seen on other networks - or it might just be able to over-provision sufficiently cheaply that it's not necessary.

Whatever is happening, the fact that Google (and others, like Hong Kong Broadband in the past) can offer gigabit residential broadband suggests that we've got at least one or two orders of magnitude left before the "qualipocalypse" becomes a realistic threat for the public Internet.

In other words, OTT-Telco offers us the chance to prove that what we've got now isn't broken, is "sustainable" and indeed has headroom left. Obviously that isn't yet proven, so close monitoring (and ideally visible & transparent financials) will still be needed.

None of this means that we shouldn't also have non-neutral broadband access products available - and if customers prefer them, that indicates the direction of the future for Internet / non-Internet applications.

But for now, the neutral-as-possible "monoservice" Internet seems not only sustainable, but arguably such a valuable contributor to global development that it should be regarded as a human right. We should vigorously resist all rhetoric about "sustainability", whether from ideologically-inspired governments, lazy/greedy network operators, or evangelical vendors. If and when the Internet's economics do take a nosedive, we should first look for technical solutions that allow us to keep the current structures in place.

We should not allow purely technical issues to bounce us into philosophical shifts about the Internet, whether inspired by ugly politics or elegant mathematics.