Thursday, January 27, 2011

I'm picking a fight with a peer, about VoLTE and IMS

It's quite rare for me to take direct pot-shots at other specific analysts. While I'm often confrontational, I try to avoid ad-hominem attacks or casting doubt on specific pieces of work.

I'll make an exception in this case, because I know the analyst quite well and have a lot of respect for his other commentary - as well as enjoying our banter on Twitter, via blog comments, and over a beer at various conferences.

I reckon Gabriel Brown (@GabeUK) has called it wrongly on VoLTE and IMS in his latest article. I vehemently disagree - based on discussions with operators and vendors, and on my own analysis - that IMS platforms are likely to become the main long-term platform for telephony or other applications at mobile operators. IMS will most likely be used patchily, by some operators, for some purposes, in some places. (Note: I haven't read his full report, which the article is a trailer for, so it may contain some more contrarian viewpoints.)

I certainly don't see VoLTE as the catalyst that will "set the stage for a more fundamental transition to operator-provided real-time, rich-media services" - that is GSMA-style wishful thinking rather than a probable outcome. And I completely dispute his assertion that with VoLTE/IMS "investment won't be sunk into interim measures that close off that route to innovation".

What he doesn't discuss is that VoLTE is misnamed. It would more accurately be called ToLTE - Telephony over LTE - because it is not a generic voice platform. It isn't designed as a ground-up answer to the question "what should voice communications look like in future?". I wrote a report in 2007 saying that there needed to be a 3GPP standard for "plain old mobile telephone" service on IP-based mobile networks, and that it was urgently required to stave off the fast-improving third-party VoIP providers. The belated answer is VoLTE, aka ToLTE. It's now 2011, and the expectation is that the majority of 3GPP operators are unlikely to deploy VoLTE commercially until 2013-2014 - which means it won't gain significant uptake among mass-market users until perhaps 2015-2016.

Don't get me wrong: an agreed solution for ToLTE would have been important in 2007-2008, because it would then have been ready at the launch of LTE last year. It could have been a good defensive move, adding in all the legacy baggage around telephony such as "supplementary services", roaming and assorted requirements for regulation and control. But the problem with defensive strategies is that they have a timing window, after which they're useless. A missile-defence scheme needs to destroy rockets in the launch phase, before they break into multiple warheads and re-enter. A football defender needs to tackle the attacker before they run past towards an open goal.

In many, perhaps most, cases for LTE, it now seems clear that defence will be too late by 2014-2016. For all but the fastest-moving and most value-chain-controlling operators or countries, several things will have happened by then:

  • A significant proportion of voice application usage will be "non-telephony" by 2015. This is happening already - voice is being embedded in applications and software as a feature, not a service. It might be a Skype connection used as a baby monitor, in-game speech between players, or 10,000 other voice use cases. These are not in any sense "phone calls", and they fit poorly with both VoLTE/ToLTE and today's circuit infrastructure.
  • A further substantial amount of the VALUE of both telephony and voice applications generally will have migrated to "cloud voice" by 2015, embedded into websites or business processes, especially for the high-end users who will be first to LTE. VoLTE is not designed with developer-friendliness or "mashability" as a #1 goal, which would have been fine in 2007 but not in 2011. (I remember chairing an IMS conference on the day that Facebook launched its HTTP-based IM chat. So it's slightly ironic that this debate is happening on the day Facebook appears to have trialled its "call" function.)
  • More regulators will have started to allow portability of mobile numbers to third-party VoIP services, and more companies will support it. Google Voice announced it for the US last week. At the moment, mobile number ranges - especially in Europe - are sacrosanct and only available to network owners and their MVNOs. Let's see how long that lasts, and indeed whether numbering retains its psychological importance against other identifiers.
  • The stickiness of SMS, which keeps people from moving to VoIP for primary telephony today, may start to erode, especially among LTE users, who are likely to have smartphones. Facebook and BBM (and maybe Twitter) are starting to attack the SMS messaging citadel, and it is naive to assume that it will remain impregnable.
  • The voice roaming model will have broken down significantly by 2015. It's already crumbling fast, with regional zero-premium agreements and government intervention. Whilst it's understandable that the operators don't want to hasten its demise, they shouldn't be tying themselves up in time-consuming knots in desperate attempts to revive it. That ship hasn't sailed yet, but it's pulling up the gang-plank - and operators who are honest with themselves will recognise it.
  • The fallacy that operators "own the social graph" via the handset phonebook will become even more wrong than it is today. As I discuss in my report on IMS/RCS, it's a great example of self-delusion - one that Mark Zuckerberg must be sniggering about on a daily basis.
  • There will likely be various dual-radio / dual-SIM phones capable of running simultaneous LTE data and GSM voice, without horrible battery impacts. They will also likely be capable of VoIP over HSPA+ where it is reliable. VoIP over LTE will be a nice-to-have, not a must-have, for these users.
  • By 2015, the concept of a link between an operator's access business and its services will have fractured. Many service providers will want VoIP services that can extend outside their own access footprint, without "interoperability" concerns. VoLTE is unsuitable as a basis for operators "own-brand OTT" activities.

Some of the people at operators already know most of this, even though they're trying to ignore the full ramifications. In general, I find that the strategy officers are much more realistic about the challenges than those at the coal-face of the core network - after all, it's not their jobs that are ultimately threatened, so there's less of an incentive to want to BELIEVE in things like IMS as a saviour.

But as well as the differences between Gabriel's vision of voice/telephony and my own - which are obviously subjective and opinion-led - there are some harder, more concrete issues. He writes: "it should be possible to at least match [in VoLTE] the service capability of the existing circuit-switched domain used in 2G and 3G networks".

Capability, yes. But quality? Here, there's an interesting gulf emerging between the "establishment" 3GPP, GSMA and vendors, versus the newcomers such as Skype and Google. The old-school approach to QoS is about packet scheduling, managing latency and delay-sensitive traffic, different classes of service and so on. Get the network to minimise dropped packets, and minimise jitter (variations in latency). The new kids try something different. They assume that there will be problems, and look for work-arounds. Clever buffering techniques, new codecs, packet-loss concealment algorithms, echo-cancellation to deal with weird VoIP side-effects, acoustic wizardry to fool the ear.
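
To make the "cure" philosophy concrete, here's a minimal sketch of an adaptive jitter buffer - one of the client-side tricks I mean. It's illustrative Python of my own, not any vendor's code; the class, window size and thresholds are all invented:

```python
import statistics

class AdaptiveJitterBuffer:
    """Illustrative playout buffer: adapts its delay to the jitter it
    actually observes, instead of relying on network-enforced QoS."""

    def __init__(self, min_delay_ms=20, max_delay_ms=200):
        self.min_delay_ms = min_delay_ms
        self.max_delay_ms = max_delay_ms
        self.playout_delay_ms = float(min_delay_ms)
        self.deltas = []                      # recent inter-arrival variation

    def on_packet(self, transit_delta_ms):
        # Track variation in packet transit times (jitter) over a window.
        self.deltas.append(abs(transit_delta_ms))
        self.deltas = self.deltas[-50:]
        jitter = statistics.mean(self.deltas)
        # Aim for roughly 3x mean jitter of buffering, clamped to sane bounds.
        target = min(max(3 * jitter, self.min_delay_ms), self.max_delay_ms)
        # Drift gently towards the target to avoid audible warbling.
        self.playout_delay_ms += 0.1 * (target - self.playout_delay_ms)

    def frame_for_playout(self, frame):
        if frame is None:
            # Packet missed its deadline: conceal the loss rather than
            # go silent (real codecs synthesise plausible audio here).
            return b""
        return frame
```

The point is where the intelligence sits: at the endpoint, coping with whatever the network happens to deliver.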

IMS and VoLTE are about prevention, while third-party VoIP is about cure.

It's easy to say that "prevention is better than cure", but if it comes years later, is that really true?

Skype acquired Camino, and Google acquired GIPS, specifically to deal with speech processing. The GSMA's IR.92 specifications for VoLTE largely ignore this whole area. A network vendor I spoke to yesterday takes the view that the bulk of this responsibility in VoLTE falls to the handset vendors. Yet looking at the Internet companies, my understanding is that their QoE (not QoS) engines involve a complex dance between device and server, watching network quality and adapting in real time. I don't think VoLTE does that. The application isn't really network- or acoustic-aware. Indeed, one of IMS's core principles is that applications should be "network agnostic" - a major point of failure for the whole architecture.
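
For illustration, the device-server "dance" might look something like the hypothetical Python below - invented codec names and thresholds, emphatically not a description of Skype's or Google's real engines:

```python
# Hypothetical codec ladder, most loss-robust first: (name, bitrate_kbps)
CODECS = [
    ("narrowband_plc", 12),
    ("wideband", 24),
    ("super_wideband_hd", 48),
]

def choose_codec(loss_pct, rtt_ms, est_bandwidth_kbps):
    """Pick the richest codec the measured path can actually sustain."""
    # Require ~20% bandwidth headroom over the codec's bitrate.
    usable = [c for c in CODECS if c[1] * 1.2 <= est_bandwidth_kbps]
    if not usable:
        return CODECS[0]
    # Under heavy loss or latency, retreat to the most robust codec
    # rather than trusting the network to fix itself.
    if loss_pct > 5 or rtt_ms > 400:
        return usable[0]
    return usable[-1]

print(choose_codec(loss_pct=1, rtt_ms=80, est_bandwidth_kbps=100))
print(choose_codec(loss_pct=8, rtt_ms=450, est_bandwidth_kbps=100))
```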

Yesterday, I did a conference call over WiFi to a personal hotspot, then via HSPA to the Internet, on to Skype, and finally out to circuit via SkypeOut. It wasn't perfect, but it was perfectly usable. Roll that forward five years, and Skype-over-LTE should be better than circuit (HD, built-in ambient noise cancellation and so on). Google Voice over LTE in 2015 will probably feature real-time translation between languages, using speech recognition in the cloud. Yes, if the cell is congested there might be some problems with QoS. But I'm willing to bet that the overall QoE will be miles ahead of VoLTE's.

As I've written before, it's pretty clear that 3GPP and the GSMA are attempting to use LTE as an excuse to crowbar in IMS at the operators who've been recalcitrant in adopting it so far. I can use various analogies - it's like Ferrari making a two-ton boat anchor a mandatory accessory on a new car, or, as I put it in my most popular-ever blog post, like using LTE as the perch to which to nail the dead parrot of IMS.

In some instances, they may succeed - but perhaps only to a bare minimum. For instance, I recently met an operator executive who told me, resignedly, that they might put in a small IMS to handle roaming LTE users. However, he has no intention of offering IMS-based services to his own customers.

To be fair to Gabriel, he does sign off the article with a doubt: "Can operators reinvent and extend the voice model into billable rich media services? I think it's too early to say". That encapsulates our differences - I also believe that reinventing the voice model is absolutely something operators need to do. I just think that VoLTE is absolutely the wrong way to approach it. As for "billable rich media services", the answer is maybe - but only if they ditch IMS as a flawed legacy technology and look for other solutions, even if those are proprietary and lack the false comfort-blanket of interoperability.

The bottom line is that operators considering investing in VoLTE or IMS need to think twice. What is the actual problem you are attempting to solve, and what is the business case associated with it? Is it purely a defensive move - and if so, is it already too late? Is your "voice" (ie telephony) business as valuable as you think, or is it inflated by accounting oddities around subsidy repayments and the classification of "line access" as voice revenue?

Do you really understand the difference between Telephony and Voice as a whole? Is the old-style telephony service worth replicating in LTE at all - will it be monetisable in 2015 and beyond? Will VoLTE actually deliver good-quality speech once acoustic factors are involved too? Are there ever going to be any IMS mobile services that actually add value and provide a good user experience? Or should LTE be used to introduce a proper vision of "services", "cloud" and "voice", with low-margin legacy phone calls kept on the 2G circuit networks that will be here for another 20 years anyway?

For more detailed analysis and advice on these matters, please contact me at information AT disruptive-analysis DOT com.

Wednesday, January 26, 2011

Paradoxes in the notion of mobile billing replacing cash or credit cards

I continue to read supposedly-visionary pieces proclaiming the imminent death of either cash or credit cards, and predicting that mobile operators will run the banks into the ground. As this wise sage puts it: "Banks and credit card companies of the world, be very afraid. The operators will come to take your business away".

Apparently, we'll all be waving NFC-enabled phones at retailers' readers or transport systems' ticket barriers, and can kiss goodbye to the mess of credit and store cards lurking in our wallets.

In fact, we can forget about wallets too - the leather industry is destined for an implosion of catastrophic proportions. The coin industry is, similarly, quaking in its boots, seemingly scared of the "truth" as peddled by the all-knowing T Ahonen and others.

We have, of course, all the reference sites we need, with Safaricom's impressive M-Pesa mobile money platform, and NTT DoCoMo's Felica NFC-style functionality. And of course, the rumours suggest that the iPhone 5 will be NFC-enabled.

So this must all be true.

But just for the sake of argument, and being disruptive, let's have a look at some contrarian observations:

  • 70%+ of the world's mobile subscribers - and essentially all of those who are "unbanked" - rely on prepay. The most typical way to top up prepay accounts is with scratch-cards or vouchers, which are usually paid for with cash.
  • There's also quite a lot of prepay credit that's bought with the help of bank cards at an ATM. Probably more, in fact, than the bank-card payments we're seeing substituted by phone payments.
  • The average mobile prepaid credit balance is something like $5 (much lower in developing countries) and is frequently zero. Not ideal for doing your shopping, or even buying a coffee.
  • Even some of the most enthusiastic NFC-promoting operators don't believe that customers will charge purchases to their phone bills or prepay accounts. They're providing hooks so that they can funnel payments to the user's choice of credit-card account, Paypal or another source of funds. Few customers want a fridge or an airfare appearing on their bill at the end of the month. In a recent conversation, a senior exec at one of the most NFC-friendly operators told me the bill is fine for small digital goods, but not for major purchases.
  • Governments rather like cash and seem unlikely to want to permit or encourage its death. It's a bit difficult to do quantitative easing via mobile phone.
  • In the Safaricom example, M-Pesa accounts are distinct from the mobile-phone airtime account. It's basically a bank with a mobile-phone front end. That makes a lot of sense, especially for the "unbanked", who would otherwise have cash stuffed under the bed and be unable to make payments. But it's not the mainstream view of a "mobile wallet" substituting for cards or cash in developed economies.
  • There are no "payment portability" or "mobile banking portability" laws yet. Given that most (all?) operator mobile-payment schemes are linked to an access account, users are unlikely to lock themselves in and embrace huge switching costs for when they want to churn phone provider. Obviously operators want to reduce churn, but assuming your customers are gullible and stupid is not a long-term winning strategy. Loyalty isn't the same as lock-in.
  • DoCoMo has spent rather a lot of money to get its payment solution accepted - including a near-$1bn investment in Sumitomo Mitsui Card in 2005 and the purchase of a stake in Lawson's convenience store chain. I don't see many other telcos stepping up to the plate with that sort of cash or serious intent.
  • Central-banking stats generally show "currency in circulation" continuing to rise, despite the presence of many large and successful alternative payment mechanisms such as debit cards. The chances that mobile payments will make a dent? Almost zero.

Taken as a whole, my view is that mobile payments have a very important role to play in markets like Kenya, where there are significant numbers of "unbanked" people; in countries with big problems distributing money such as salaries (eg Afghanistan); and in Japan, where a combination of culture (eg trust in the telcos) and heavy investment by the likes of DoCoMo can make a big impact.

I can also see some payments being facilitated by mobile phones (and in some cases operators) and being directed to existing channels such as cards, bank accounts, Paypal or iTunes.

But this clamour that "cash is dead", "the wallet is dead" or "credit cards are toast" is complete nonsense, now and for the foreseeable future. I'd be more willing to believe that retina scans, DNA authentication, telepathy or a William Gibson-style direct neural interface for payments will happen first.

Friday, January 21, 2011

Is mobile video traffic quite the threat that everyone thinks? Is the so-called "optimisation" approach flawed?

I smell the "Tyranny of Consensus", about mobile video data traffic.

This is a long blog post examining the "optimisation" of video for mobile. It forms part of Disruptive Analysis' ongoing research into policy & traffic management. For more details on custom work, please contact Dean Bubley directly.

Recently I’ve been bombarded with vendor announcements of video optimisation solutions, DPI and charging products, and lots of slides suggesting that operators might try to charge extra for mobile video traffic that is "swamping" 3G and 4G networks. Everyone is gearing up for "personalisation", tiered services and so forth – and even trying to talk up the prospect of video-optimised data plans. Everyone's putting out PR-driven surveys with predictably trite answers to loaded questions, usually ignoring cause-and-effect.

Normally when there's this level of consistency in stance, it's wrong. Or it's hiding something.

Most of the noise is coming from vendors of traffic-management solutions sitting in the GGSN, or in a box on the Gi interface - the link between the operator's core and the Internet itself. Typically, these compress, transcode, buffer, block or generally fiddle about with video traffic, with varying levels of sophistication and subtlety. Vendors in this general area include Acision, ByteMobile, Cisco, Flash Networks, Mobixell, Openwave, Vantrix and assorted others.

Some of them try to second-guess what’s going on in the radio by looking at throughput rates and other indirect indicators and only act when there's a perceived problem. Some can do relatively benign "lossless" compression which doesn't actually change the content. Some try to limit the amount of video that gets downloaded to the player's buffer, in case the user abandons viewing, closes the session and "wastes" the data already transmitted. Some control the numbers of parallel (concurrent) IP connections the device can use.
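
To show how indirect that inference is, here's a deliberately naive sketch in Python (invented names and thresholds, not any vendor's actual logic) of throughput-based congestion guessing from the Gi interface:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    avg_throughput_kbps: float
    peak_throughput_kbps: float

def looks_congested(flow: Flow) -> bool:
    # A throughput dip might mean radio congestion... or the user
    # walking into a lift, a minimised window, or another app hogging
    # the link. From the Gi interface, these are indistinguishable.
    return flow.avg_throughput_kbps < 0.5 * flow.peak_throughput_kbps

# A flow that halved in speed "looks congested", whatever the real cause.
print(looks_congested(Flow(avg_throughput_kbps=400, peak_throughput_kbps=1000)))
```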

Conspicuously, the more radio-centric vendors have been comparatively quiet about video – although it’s possible they’re just looking forward to selling upgrades to LTE, rather than reducing traffic.


What's the problem?

So, is mobile video (a) really the problem that everyone suggests, and (b) if it is a problem, are they going about fixing it in the right way?

Obviously, it needs to be acknowledged that there's a ton of published information (and obviously more in private) about the percentage of traffic volume attributable to "video". We know from Bytemobile, for example, that more than 40% of total global mobile data traffic in 2010 was video - at least on the networks it can measure. But there's a bit of a "so what" here: other analysis points out that usually only a few cells in a mobile network are actually congested, and we also know that there are plenty of weak spots beyond data volume, such as RNC signalling storms, that cause congestion.

(The definition of "video" is itself rarely or poorly pinned down. Is near-photorealistic cloud gaming video? Augmented-reality overlays? A Flash animation of a cartoon? Discuss.)


It's about User Experience. Really?

So should we take at face value the video-optimisers' claims that this is really about "enhancing user experience"? (Everyone hates stalling video streams, don't they?) Or is it more about manufacturing a convenient bogeyman: an excuse to sell boxes that help operators "personalise" their mobile broadband or raise prices?

Video "consumes" lots of bits, so it must be evil, right? Never mind that in most of the world, mobile data is now priced by MB / GB tiers and quotas, so surely 3GB of email is just as evil as 3GB of video? Or more, given the greater signalling load. And doesn't the act of compressing traffic *unnecessarily* reduce the chance of upselling the user to a larger quota? Surely, when there's no real cost impact or congestion risk, doesn't it make sense to let users download as much as they want? Let the user, or app/content provider take the responsibility for reducing volumes if needed.

And never mind that the majority of mobile video still goes to PCs with USB modems, not tablets or smartphones, especially outside the US - modems that have generally been sold by operators as alternatives or complements to fixed broadband, so it should be neither surprising nor a cause for action when customers use them for that purpose. (Although it's interesting that T-Mobile UK has recently suggested that you should save your video usage for your proper network at home.) It's desktop versions of YouTube or iPlayer, using Flash or HTTP, that generate the tonnage - typically indoors. And people are tolerant of delays and buffering on PCs, and also know how to use the 360/480p or SD/HD toggle.

In other words, a sizeable proportion of mobile video traffic is solely generated by operators' mis-selling and mis-pricing of PC dongles as real alternatives to ADSL and cable broadband.

Now, obviously, there is a major issue here: *when* a given cell, sector or backhaul link is actually congested, one incremental video causes much more of a problem than an extra email. But all the coffee-table stats about "10% of users consuming 90% of the bandwidth" generally fail to state whether those users also cause 90% of the problems. It's an anecdote, not a problem statement - especially as most of that 10% are PC users who've been sold a product designed specifically (and knowingly) for high consumption.

There's much less data to show whether actual congestion is caused by bulky things like video streaming, or simply by 20,000 people at Kings Cross Station checking their email simultaneously at 9am on a Monday morning. Anecdotally, operators I've spoken to agree that the real busy-hour, busy-cell situation is much more complex than just blaming YouTube.

To give an analogy: I drive my car 4,000 miles a year, as I live in central London and mostly use public transport. But I contribute more to road congestion than someone living in the middle of Wales, driving 40,000 miles annually from their cottage to town and back. Data "tonnage", like mileage, is a lousy predictor of problem causation - yes, there's probably a positive r-squared, but the correlation is weak. Volume, however, is easily measured, and to an uneducated ear it "sounds" fair to penalise volume, especially when prefixed with emotive PR gibberish like "data hogs".
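
For the statistically minded, the mileage point is easy to demonstrate with a toy simulation. The numbers below are entirely made-up assumptions - heavy "dongle" users moving big volumes mostly in quiet cells, light smartphone users concentrated in the busy hour - chosen only to show that volume and congestion contribution can correlate weakly:

```python
import random

random.seed(1)
users = []
for _ in range(1000):
    if random.random() < 0.1:                 # heavy dongle user
        volume_gb = random.uniform(5, 20)
        busy_cell_share = random.uniform(0.0, 0.1)
    else:                                     # light busy-hour commuter
        volume_gb = random.uniform(0.1, 1.0)
        busy_cell_share = random.uniform(0.3, 0.9)
    users.append((volume_gb, volume_gb * busy_cell_share))

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

vols, congestion = zip(*users)
print("r =", round(pearson(vols, congestion), 2))
```

Under these assumptions, r comes out positive but well short of 1 - exactly the "positive r-squared, weak correlation" point.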

Forecasts

Everyone's seen the Cisco VNI forecasts for mobile video network traffic... but are they (and others) quite as accurate and meaningful as many seem to think?

I certainly agree that larger screen sizes mean that video data will expand for the same duration of viewing. But other mobile applications for smartphones are proliferating too, and unless I'm missing something, the biggest growth in usage (minutes/events) is around social networking, gaming and the like. Yes, Facebook friends can embed or link to a video clip... but doesn't the growing amount of time spent messaging and communicating reduce the number of "snacking" opportunities to watch video? If you have three minutes waiting for a bus, do you watch YouTube on your phone, or message your friends, or send a cleverly-crafted tweet or status update?

I'm also not convinced that there will be perpetual growth in minutes of video delivered over cellular macro networks to mobile devices *in congested cells*. There are too many ways to offload and cache the content (eg podcasts, or connecting via WiFi or femtocells). Many of the places where people watch video, or use two-way video applications, will have WiFi available. Homes with both fixed and mobile broadband will also increasingly use femtocells. Both almost completely eliminate congestion - especially if they route traffic promptly to the Internet rather than piping it back through the core. In any case, video delivered by a dedicated indoor solution will need to be treated differently by charging, optimisation and policy servers; all such elements need to be femto-aware to be useful.

So is the overall "problem" of macro-cellular mobile video going to get worse? Or is it manageable just with normal capacity upgrades and tiered pricing rather than complex control infrastructure, and questionable "optimisation" practices that are perhaps not optimal for user, content owner or operator?


Why optimise in the network anyway?

It's also worth noting that two of the most widely-used and successful mobile applications actually monetised by operators - BlackBerry's email service and Opera's proxy-based Mini web browser - are both self-optimised and data-compressed. The special magic is in RIM's or Opera's own servers and clients, not in the operator network in between. So why should video be any different? YouTube, the BBC and others are far more mindful of their users' quality of experience when watching their content than the operators are.

The only way to measure real QoE is directly from the device or software client. Trying to decode "user experience" from the Gi interface is like a doctor diagnosing an illness by looking at how red your nose is, through a telescope, from a mile away. Maybe the throughput dropped because the user went into the basement for 30 seconds? Maybe a window was minimised? Maybe another app on a multitasking device squeezed out the video client with a big download? Maybe there's an OS glitch, or some other user intervention? The network does not - and cannot - know this.
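
A device-side probe, by contrast, can measure what the user actually experiences. Here's a minimal sketch (illustrative Python, invented names) of a client-side QoE monitor that also reports the context no Gi-interface box can see:

```python
import time

class ClientQoEMonitor:
    """Counts playback stalls and captures local context - things a
    network-side 'optimiser' does not and cannot know."""

    def __init__(self):
        self.stalls = 0
        self.stall_seconds = 0.0
        self._stall_started = None

    def on_buffer_empty(self):
        self.stalls += 1
        self._stall_started = time.monotonic()

    def on_playback_resumed(self):
        if self._stall_started is not None:
            self.stall_seconds += time.monotonic() - self._stall_started
            self._stall_started = None

    def report(self, app_in_foreground, bearer):
        # Was the app even visible? Which bearer was actually in use?
        return {
            "stalls": self.stalls,
            "stall_seconds": round(self.stall_seconds, 1),
            "foreground": app_in_foreground,
            "bearer": bearer,          # "wifi", "hspa", "gprs", ...
        }
```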

There has even been some talk of network boxes checking that the video size doesn't exceed the phone's screen resolution, and reducing it to fit. This is astonishingly arrogant, and apt to create huge user dissatisfaction (and perhaps lawsuits), as it ignores the possibility of the user zooming in to part of the video, saving it to memory for later use, outputting it via the TV port on some phones, or just using the device as a tether or modem.


Yes, some network elements can, in some instances, improve experience some of the time. But the effect is inconsistent and unprovable, and may well have hidden side-effects elsewhere. It's like a spark-plug manufacturer claiming it can improve your driving experience with a new type of "optimising" plug. Yes, sometimes it could make the engine smoother and the driving experience better. But it's not much help if the tyres are bald - and it might also mess up the fine calibration and actions of the engine-management chip.

The gating factor is the radio network

The other elephant in the room is radio *coverage*, not capacity. Many 3G networks are still like Swiss cheese, with "not-spots" still common, especially indoors. The woolly survey questions about video user experience generally don't ask respondents if they live in a basement, or a building with metallised windows. Much of the problem is nothing to do with congestion - it's the pesky nature of radio link budgets. You can’t “optimise” video through brickwork.

Not only that, but the risible notion of offering "premium video" mobile data plans, with so-called silver/gold/platinum service tiers, generally overlooks the realities of physics and the interdependence of shared media. Who's going to pay extra for "priority" video when you often can't get 3G at all? I'm writing this on a train - where I might actually want to watch video on my PC via mobile - and I'm lucky if I can get a GPRS connection most of the time. I might pay extra for national roaming onto a competing network that can actually supply me with connectivity - but I'd rather just churn outright.
  
And what happens when a platinum user right at the edge of the cell wants video? Do you boot off 70 gold-tier users who've got better radio conditions in the middle of the cell, just to serve them? In fact, doing this potentially raises issues with consumer-protection laws: the only way HSPA can even theoretically get to 7.2 / 14.4 / 21 Mbit/s is by biasing traffic towards the people with the best signal. Change the algorithm, and the advertised claims potentially become false.
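
HSPA schedulers typically use a proportional-fair algorithm, and the trade-off is easy to see in miniature. A toy Python sketch (illustrative numbers only, not any vendor's scheduler):

```python
def proportional_fair_pick(users):
    """Serve the user with the best instantaneous-to-average rate ratio.
    This bias towards good-signal users is what makes headline peak
    rates achievable at all."""
    return max(users, key=lambda u: u["inst_rate_mbps"] / u["avg_rate_mbps"])

users = [
    {"name": "gold_mid_cell", "inst_rate_mbps": 14.4, "avg_rate_mbps": 4.0},
    {"name": "platinum_cell_edge", "inst_rate_mbps": 0.5, "avg_rate_mbps": 0.4},
]

print(proportional_fair_pick(users)["name"])   # -> gold_mid_cell
# Forcing the scheduler to favour the platinum user instead means
# spending airtime at 0.5 Mbit/s where 14.4 Mbit/s was possible, so
# the cell's aggregate throughput - and everyone else's experience -
# collapses.
```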

In conclusion

Overall, Disruptive Analysis believes there is a good chance that the fears about mobile video traffic are being overstated, and used as a smokescreen to permit arbitrary degradation of video content by unsophisticated boxes placed in the wrong part of the network, ill-equipped to understand the real causes of congestion and poor end-user experience.

Regulators should scrutinise very carefully whether covert transcoding of video on uncongested links contravenes emerging Net Neutrality rules that permit only necessary and proportionate traffic management. They should also stringently enforce transparency requirements - a subtle "we reserve the right to manage traffic" in the terms and conditions is insufficient; there should be detailed, and possibly real-time, disclosure of any active management or compression of traffic.

Content providers and aggregators should track whether their output is being modified over cellular networks without their permission or awareness, and should consider using encryption to protect against covert manipulation. They should embed agents in the browser or client app to detect unwanted "optimisation" (perhaps through digital watermarks or steganography) and alert the user when the operator is modifying content in the background.
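
Detection needn't start with anything as exotic as steganography: a published digest per video segment would do. A minimal sketch (illustrative Python; how the digests are distributed - eg fetched over HTTPS - is assumed):

```python
import hashlib

def segment_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def delivered_unmodified(received: bytes, published_digest: str) -> bool:
    # Any covert transcoding on the path changes the bytes and fails this.
    return segment_digest(received) == published_digest

original = b"\x00\x01example-video-segment-bytes"
published = segment_digest(original)            # provider publishes this

print(delivered_unmodified(original, published))                         # True
print(delivered_unmodified(b"\x00\x01re-encoded-by-gi-box", published))  # False
```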

Operators should aim to work with content providers to enable them to optimise their own content in the best fashion - via rate-adaptive codecs, user alerts about congestion, altered frame rates, edited video or other mechanisms. Some form of congestion API (ideally real-time, though initially perhaps less accurate) would help immensely. They should also press all of their core-network and so-called "optimisation" vendors to adopt strong measures for radio-awareness and device/user-awareness.
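
No such congestion API exists yet, so the shape below is entirely hypothetical - but it shows the idea: the network reports how stressed the serving cell is, and the content provider degrades its own stream gracefully:

```python
# Hypothetical response from an operator "congestion API".
EXAMPLE_RESPONSE = {
    "cell_id": "cell-42",
    "congestion_level": 0.8,      # 0.0 = idle, 1.0 = saturated
    "valid_for_seconds": 30,
}

def pick_video_profile(congestion_level: float) -> str:
    """The provider - who knows the content - chooses the degradation."""
    if congestion_level > 0.7:
        return "240p_reduced_fps"
    if congestion_level > 0.4:
        return "360p"
    return "480p_hd_audio"

print(pick_video_profile(EXAMPLE_RESPONSE["congestion_level"]))  # 240p_reduced_fps
```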

Despite the hype about charging upstream content providers for "premium" QoS, operators should recognise that this will remain a pipe dream owing to the complexities of radio. If it works at all, it'll happen first in fixed networks - where there's much more control and predictability - not mobile. There's certainly no chance that a video provider, or a user, will pay extra for so-called QoS if the network still has poor coverage.

Operators looking at offload should reject any policy-management or optimisation solution that cannot distinguish between macrocell and femtocell traffic, or that cannot "optimise" by helping the user connect to faster or less-congested WiFi where available. They should also be extremely sceptical of so-called personalisation services - pricing different traffic types (not "services") differently - which look good on paper but are completely impractical and don't reflect the reality of applications and the web, such as mashups.

Remember: video is not "an application" - it's 10,000 applications, each of which needs individual consideration if personalisation is to be applied usefully. Unless there's someone with the skill and authority to think about this sensibly, there's a huge risk of failures and hidden gotchas.

Overall, I'm increasingly moving to the view that GGSN/Gi-based solutions for video traffic management are OK for urgent firefighting, but need to be very well integrated with both RAN probes and, crucially, device-side intelligence if they are to remain useful. Most of the network vendors (understandably) shy away from working directly with handset OSs, connection managers and other tools - this will need to change.

Disruptive Analysis is one of the leading analysts covering mobile broadband business models, policy management and next-generation "holistic" models for controlling traffic and user experience. Please contact information AT disruptive-analysis DOT com for information about workshops and custom research projects.

Monday, January 17, 2011

Why developers need to take responsibility and create more network-aware applications

In all the brouhaha about traffic management and Net Neutrality, it is startling that one aspect is often completely overlooked - the need for mobile application developers to become much more network-aware, and write their software to take account of the realities of the user's situation.

This is not, primarily, about saving money for the network operators. It is about creating a better all-round experience for the applications' users, whether that is through improved or more consistent performance, less impact on device battery life, easier monitoring and feedback, or saving users time and money.

Ideally, applications should be aware of a broad range of network-related variables, and be able to act on them accordingly. An incomplete list follows (with a sketch, after the list, of how an app might represent some of them):

- Real-world end-to-end speed, latency and jitter, for both uplink and down
- Bearer type - WiFi (and SSID and security method), 3G, 2G etc - or combinations of the above
- Which bearers are potentially available, but not being used
- Whether the cellular connection is routed via a macro- or femtocell
- Roaming status
- Variations in network quality, either predicted or real-time measured
- Price plan details - eg per-MB billing, time- or location-variable, proximity of thresholds & caps
- Probable future connection status - eg likelihood of going out of coverage, or switching to another network
- Awareness of current or likely future congestion on the network
- Mechanisms used for offload
- Policies applied by the network, and how these vary by time or bearer
- What types of signalling the application is likely to generate, directly or indirectly
- Handset capabilities & status (eg ability to use multiple radios, battery level, whether WiFi or 3G or roaming are switched on, potential for side-loading via Flash memory or Bluetooth and so on)
- Whether the phone is being used as a tether, or could use another device as a tether / peer
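
As flagged above, here's a sketch of how an application might represent and act on a subset of these variables. It's illustrative Python of my own devising - the field names and the toy policy are not any platform's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NetworkContext:
    bearer: str                      # "wifi", "3g", "2g", ...
    downlink_kbps: float             # measured end-to-end, not headline
    uplink_kbps: float
    rtt_ms: float
    on_femtocell: bool
    roaming: bool
    metered_per_mb: bool = False
    quota_used_fraction: Optional[float] = None   # from OS or operator API

def should_sync_now(ctx: NetworkContext, payload_mb: float) -> bool:
    """Toy policy: defer bulky, non-urgent transfers when the user would
    pay dearly for them or the bearer can't sustain them."""
    if ctx.roaming or ctx.metered_per_mb:
        return False
    if ctx.quota_used_fraction is not None and ctx.quota_used_fraction > 0.9:
        return False
    # Rough check that the transfer won't take more than ~2 minutes.
    transfer_seconds = (payload_mb * 8 * 1024) / max(ctx.downlink_kbps, 1)
    return transfer_seconds < 120

ctx = NetworkContext(bearer="3g", downlink_kbps=1500, uplink_kbps=384,
                     rtt_ms=120, on_femtocell=False, roaming=False)
print(should_sync_now(ctx, payload_mb=20))   # True: ~109s at 1.5 Mbit/s
```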

In reality, no application is going to be able to deal with all of these, or pick a comprehensive decision-tree route to decide on how to behave. Instead, there is likely to be a sub-set chosen - based on the nature of the application, the availability of the various input data, and the preferences of the user. A certain proportion of these "network-aware" elements will also be the responsibility of the OS, which might dictate either universal rules ("no updates while roaming") or might bundle some up into a general "network status API" or "bearer-awareness recommendations" accessible by all apps.

Some apps already make use of this type of data, at least in part and on some platforms. VoIP and streaming-video applications are typically the most bearer-sensitive, able to choose different codecs or frame rates based on observed network performance.

But we are still in the very early days, especially with regard to the ability of apps to understand or predict mobile broadband congestion, offload dynamics, or cell-edge drop-off effects.

There are three main sources of this data (the first of which is sketched in code below the list):

- Direct measurement and input by the app itself. For example, Skype clients can measure latency and packet-loss directly.
- From the handset OS, which may expose some of this data centrally - eg WiFi vs. 3G connections, or perhaps in future the aggregate cellular data consumption vs. a user-defined monthly cap.
- From an operator API - perhaps roaming status, or ideally information on the user's data plan. Ultimately, operators will expose "congestion APIs" to applications and devices, although this is still a future-looking concept.
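
The first of those sources, in miniature: an app timing small TCP handshakes to its own server to estimate RTT and jitter. Illustrative Python - the host is a placeholder, and real clients such as Skype use in-band probes over the media path instead:

```python
import socket
import statistics
import time

def measure_path(host="app-server.example.com", port=443, samples=5):
    """Estimate mean RTT and jitter; failed connects stand in for loss."""
    rtts, failures = [], 0
    for _ in range(samples):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=2):
                rtts.append((time.monotonic() - start) * 1000.0)
        except OSError:
            failures += 1
    if not rtts:
        return None
    return {
        "mean_rtt_ms": statistics.mean(rtts),
        "jitter_ms": statistics.pstdev(rtts),
        "loss_proxy": failures / samples,
    }

print(measure_path())
```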

There may also be data that can be gathered from other sources, such as utility applications in the handset or in the cloud, or via direct provision from network elements in the data path.

There is likely to be a considerable amount of push-back from parts of the developer community towards this concept. This stuff is complex, perhaps boring, and will add time and risk to the development process. Some will undoubtedly say "It's not my problem" and attempt to abdicate their part of the decision-making. Nevertheless, mobile application (and mobile web) developers need to take responsibility for their creations' actions and performance.

There are far more variables - and more variability - involved in wireless connectivity than is the case on fixed broadband or a company LAN. On an ADSL line, you're not at risk of suddenly "going out of coverage". On a local gigabit Ethernet connection in an office, you don't have to consider the impact of roaming, or the benefits and drawbacks of switching to an alternative connection. On normal home WiFi, you don't have to think about signalling loads.

Mobile *is* different and is much less predictable, and that should be reflected in application design where connectivity is required.

Disruptive Analysis believes it is incumbent upon both device and OS vendors to educate their developer base on the importance of "network-awareness", to a much greater degree than in the past. There may even be a need for some sort of "network-friendly" app certification, to identify which software acts as a good mobile citizen.

Operators also have a role to play - firstly by exposing the right information via APIs or other mechanisms, and secondly by being honest and communicating the limitations of their infrastructure. Whether they are ever going to be able to charge extra for this information is unclear, but it is certainly in their own self-interest to enable network-friendly applications. That said, they also need to be wary of providing tools to those trying to "game the system".

Overall, the next few years will continue to see grudging moves to more network-aware applications, and more application-aware networks that aim to actively help applications rather than just use DPI or PCC to arbitrarily enforce blind network policies.

In fact, all of this is an absolute prerequisite for the success of future mobile cloud applications. Without this bearer/radio-aware knowledge and the development of network-based feedback loops, cloud services are doomed to niche status.