Thursday, December 6, 2012

Google Enterprise Fragmentation

I am writing this in Google Docs, something I generally stay away from since it certainly does not have the features necessary for a complex document, but it will suffice for a blog entry.

However, this is an example of the dilemma that Google presents to its user base.  The tools seem to be there, but there are significant limitations or exceptions when you stray even a little bit outside of their larger application set or environment.

In this entry, the focus is on the differences between the Google-based environments that I use, or try to use, every day.  Google apparently wants these environments to look the same, but any Enterprise user finds out quickly that they are not.

My company uses Google Apps primarily for email and some level of document exchange.  I personally have a Windows-based PC as well as an Android-based tablet.  These two Google environments, both centered around the Chrome browser, can lead to moments of “what was Google thinking” exasperation.

Up until recently (I don’t know exactly when), Google Apps such as Documents and Spreadsheets appeared quite different in Chrome on the PC and on the tablet.  I just checked this morning, and now there seems to be some convergence in the look-and-feel.  Part of this was the release of Chrome for Android, which has now become my default browser on both platforms.

But, right now, Chrome is a tease.  On my PC, viewing a PDF is handled quite differently than on the Android tablet.  On the PC, Chrome works directly with nearly all PDFs and displays them natively within a browser window.  On the tablet, loading a PDF causes a traditional download event that then requires launching a separate program to view the document.  Maybe this is a bit of nitpicking, but if Google wants the Enterprise space, then differences like this must be resolved.

Another, more immediate tease occurred this week.  Chrome Remote Desktop, now out of beta, appeared as a capability sent from heaven.  I was about to travel and did not want to take my desktop-replacement laptop on the journey.  I was able to quickly fire up Remote Desktop on my laptop and another laptop and, voilà, a remote desktop that worked remarkably well.  I then got my tablet and quickly determined that there was no Android Chrome client available, thwarting my mobile remote desktop access plans.  It is not even clear if Google is working on such a capability.

This got me thinking about the same-but-different environments that Google provides.  There are Google Chrome applications that you get from the Web Store, and there are Android applications.  With the common denominator being Chrome, you would think that this would be the services convergence environment, at least for Enterprise services.  Alas, no, at least not yet.  Because of this, I am not sure that I would consider a Chrome OS laptop, as I fear this introduces yet another environment with limitations and differences that are not necessarily foreseen until encountered.

Microsoft is certainly attempting its own convergence approach for tablets, phones, PCs, and even the Xbox.  Whichever environment ends up cleaner, easier to use, more uniform, and (of course) secure will likely dominate the emerging mobile Enterprise.

Monday, October 29, 2012

Software Defined Network (SDN) Considerations for Commercial Network Service Providers


More than four years ago I wrote that carrier service providers must realize that they are in reality “application enablers” (see Going Horizontal on the Vertical) and that service providers needed to Virtualize the WAN.  These posts identified two important areas relevant to the discussion of Software Defined Networks (SDN) in the context of commercial services (a good summary describing the SDN approach as compared to the prevalent distributed control model can be found at SDN A Backward Step Forward).

The two areas I described were that:
  1. Network service providers have to provide an Application Programming Interface (API) for service requests and service status
  2. Virtualization of the Core network enables customers to define what they want out of their network ensemble. 
It appears to me that the evolution of SDN technology needs to follow a path similar to that of virtual machine technologies: maturation of the technology at the enterprise level, followed by the transition to a commercial service provider’s infrastructure.  The first instantiation of virtual machine technology enabled the consolidation and increased efficiency of an organization’s infrastructure.  The next steps were the development of multi-tenancy environments and the transition into commercial “Cloud” services.

The successful public service provider SDN controller must be able to provide a user with the control needed by their application set, while at the same time enabling the service provider to optimize their network.  There are significant options about what the SDN concept means for a service provider and what it looks like to a customer:
  1. Is the goal to provide “raw” virtualized standard router-type services? That is, does the customer select a router type and instantiate a set of virtual core routers to meet their requirements?
  2. Is there a network equivalent of an x86_64 (i.e., Generic PC processor) virtual machine?  Do you provide a blank sheet and development environment for customers to create their own virtual network devices?
  3. Is the goal to make it appear that the network is a single large “router” regardless of the number and physical locations of the actual network devices?  Wow, I get my own world-wide network that looks like a single router!  Can you also provide a “primary” and “backup” virtual router?
  4. Do you provide a Plain Old IP VPN service for customers that just want the same-old-same-old basic service?
  5. Do you provide multiple network personalities at the same time to a customer?  That is, a network connection and control plane that enables both IP transport that appears as standard WAN VPN services (with traditional standard QoS – 99.9% packet delivery) and services that are more tailored to data center operations, such as moving Virtual Machines (with near Time Division Multiplexed QoS – 100% packet delivery).  A rough sketch of what such a request might look like appears after this list.
As the “simplicity” increases from the customer’s perspective, the complexity increases for the service provider, which must manage the increasingly dynamic nature of the services offered and of customer demands.
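To make items 1 and 5 a bit more concrete, here is a minimal sketch of what a customer-facing service request might look like through a provider’s API.  Everything here – the field names, the QoS values, the validation – is hypothetical and only illustrates the idea of requesting multiple network “personalities” over a single connection; it is not any real provider’s interface.

  import json

  # Hypothetical service request a customer might submit through a provider's
  # SDN API.  All field names and values are illustrative only.
  service_request = {
      "customer_id": "ACME-12345",
      "sites": ["ashburn-dc1", "chicago-dc2", "denver-branch-7"],
      "personalities": [
          {   # traditional WAN VPN behavior for office traffic
              "name": "standard-vpn",
              "bandwidth_mbps": 200,
              "qos": {"packet_delivery_pct": 99.9, "max_latency_ms": 80},
          },
          {   # data-center-to-data-center behavior, e.g., live VM moves
              "name": "dc-interconnect",
              "bandwidth_mbps": 1000,
              "qos": {"packet_delivery_pct": 100.0, "max_latency_ms": 20},
          },
      ],
  }

  def validate(request):
      """Sanity-check a request before handing it to the (hypothetical) API."""
      assert request["customer_id"] and request["sites"]
      for p in request["personalities"]:
          assert p["bandwidth_mbps"] > 0
          assert 0 < p["qos"]["packet_delivery_pct"] <= 100.0
      return True

  if __name__ == "__main__":
      validate(service_request)
      # In practice this would be submitted to the provider's service API,
      # with a matching call to poll service status; here we just print it.
      print(json.dumps(service_request, indent=2))

The service status side of item 1 would be the mirror image: the customer polls or subscribes for the state of each personality rather than pushing a request.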

Finally, can we expect novel SDN-based capabilities to be provided by traditional network service providers, or do we need companies that “think outside the box” to move into this area?  If the introduction of large-scale Cloud services defines the pattern, then companies like Google and Amazon may lead the network charge.



Tuesday, July 31, 2012

Some additional comments on Google Fiber

A few more comments on Google Fiber:

  • The big deal here is the services Google is going to provide: 1 TByte of storage for data and a 2 TByte DVR.  These add value.  However, are they really long-term discriminators against current services?  Traditional DVR storage can be expanded easily, units can be upgraded to record more streams at a time, and Cloud storage services (already part of at least some providers’ offerings) can also be expanded.  Competition is good.
  • The $120/month looks good (for a two-year contract that waives the $300 install fee), but it is not that much different from current service deals from Verizon for FiOS.  Current two-year pricing with a multi-room DVR, 75Mbps of Internet, and 285 channels (75 in HD) is $130/month.  Since FiOS is GPON-based, providing "1Gbps" of access service to the Internet is possible.  This now descends into feature and marketing games.  Again, competition is good.
  • Content is the key.  Much of the cost of cable service is the content, not getting the wire to the house. Just look at the jockeying between content owners and cable and satellite providers.  For a compelling offer, Google has to deal with this issue.
  • Apparently, Google has designed their own equipment.  It is not clear if this is true for the optical transport equipment or the home video termination equipment.  It is also not clear if the optical equipment is a clean-sheet design or a derivative of existing technology (e.g., GPON or Active Ethernet).
  • People are focusing on the 1Gbps access rate, which certainly is not needed for the eight simultaneous DVR sessions; even in 3D HD that works out to around 10Mbps x 2 x 8 = 160Mbps (are these recorded on the home unit or in the Google Cloud?  And if in the Cloud, why does the bandwidth to the home matter?).  The arithmetic is spelled out below.
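A rough sketch of that calculation, with the per-stream rate and the 3D doubling as stated assumptions:

  # Rough DVR recording load using the assumptions above.
  DVR_SESSIONS = 8
  MBPS_PER_HD_STREAM = 10     # generous per-stream estimate
  THREE_D_FACTOR = 2          # assume 3D roughly doubles the bit rate

  recording_mbps = DVR_SESSIONS * MBPS_PER_HD_STREAM * THREE_D_FACTOR
  print(f"{recording_mbps} Mbps, about {recording_mbps / 1000:.0%} of a 1Gbps link")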

Sunday, July 29, 2012

Google Fiber - Less Filling (cost) Tastes Great (more bandwidth)?


There is much excitement in the news and on the Web about the Google Fiber rollout planned for Kansas City.  With the promise of 1Gbps to the home at a good price point, this sounds great.  There have been posts that talk about whether Google understands what it is getting itself into.


The bottom line is that the installation of physical facilities into people’s homes means Google now has to take responsibility for prompt repair of everything from minor problems, to a failed home unit (a GPON ONT?), to major damage caused by acts of nature.

However, my bent is a bit different.  First, Verizon, which uses GPON technology for their FiOS service, could provide a “1Gbps” service.  But, let’s take a bit of a look at the reality here.  Any service, virtually no matter the technology, has aggregation points.  With GPON technology the concentration point is at the Optical Line Termination equipment (OLT).  Each traditional GPON port provides 2.4Gbps downstream and 1.2Gbps upstream to support the Optical Network Terminals (ONTs) at the customer locations.  Even if you provide a service template that enables a 1Gbps peak to each customer, there are generally 32 to 64 customers per GPON segment.  Also at issue at the OLT is the amount of uplink bandwidth from the OLT to the Internet.  In general there are one to four 10Gbps uplink connections.  So, in the best case there is 40Gbps to spread over the hundreds of customers connected to the OLT.  Moving to DWDM-GPON or 10G-GPON reduces contention on the segment to a customer, but there are still limitations from the OLT to the Internet.
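As a rough sanity check on the contention math, here is a quick sketch using the numbers above.  The per-segment split and the number of segments per OLT vary by deployment; the figures below are assumptions for illustration, not details of Google’s build.

  # Rough GPON contention arithmetic using the figures cited above.
  GPON_DOWNSTREAM_GBPS = 2.4       # per traditional GPON port
  CUSTOMERS_PER_SEGMENT = 32       # commonly 32 to 64
  UPLINK_GBPS = 40                 # one to four 10Gbps uplinks; assume four
  SEGMENTS_PER_OLT = 16            # assumed chassis size for illustration

  segment_share = GPON_DOWNSTREAM_GBPS / CUSTOMERS_PER_SEGMENT
  uplink_share = UPLINK_GBPS / (SEGMENTS_PER_OLT * CUSTOMERS_PER_SEGMENT)

  print(f"Fair share of the GPON segment: {segment_share * 1000:.0f} Mbps per customer")
  print(f"Fair share of the OLT uplink:   {uplink_share * 1000:.0f} Mbps per customer")
  # Either way, the sustained fair share is far below the advertised 1Gbps peak.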

Of course, you would say that people only need 1Gbps for a very short time, so there is great statistical gain on the system.  And of course you would be correct.  So, let’s look at the sustained traffic that could be demanded by a customer.  Unless the typical consumer has a video production company putting together 1080p contribution-quality video for Hollywood, the home’s bandwidth is most likely dominated by video consumption.  Let’s say the home has four HD TVs, each displaying one stream while recording two more.  In addition, there are three mobile devices that will watch video at the same time.

So, the math works out as:

4 HD TVs x 3 HD Streams + 3 Mobile Devices x 1 HD Stream = 15 HD Streams


Wikipedia lists the bandwidths required by the popular streaming services.  The largest, the 1920x1080 video from Sony, is 8Mbps.  For our purposes, let’s round up to 10Mbps.  With that in mind, the sustained bandwidth demanded by a customer would be:

15 HD Streams x 10Mbps = 150Mbps

Even this peak-consumption fantasy is approximately an order of magnitude less than 1Gbps.
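For completeness, the same arithmetic as a tiny script (the stream counts and the 10Mbps-per-stream figure are just the assumptions above):

  # Sustained household video demand, using the assumptions above.
  HD_TVS = 4
  STREAMS_PER_TV = 3          # one being watched plus two being recorded
  MOBILE_DEVICES = 3
  MBPS_PER_HD_STREAM = 10     # Sony's 1080p streams at ~8Mbps, rounded up

  streams = HD_TVS * STREAMS_PER_TV + MOBILE_DEVICES   # 15 HD streams
  sustained_mbps = streams * MBPS_PER_HD_STREAM        # 150 Mbps

  print(f"{streams} HD streams -> {sustained_mbps} Mbps sustained, "
        f"about {sustained_mbps / 1000:.0%} of a 1Gbps access link")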

The math from the OLT to the Internet is interesting as well.  Assuming you put only 20 customers per GPON segment (so that they can each get their 150Mbps for their HD streams), and an OLT chassis with roughly 40 GPON segments behind its typical 40Gbps of uplink, you get a maximum of 20 x 40 = 800 customers per OLT.  And of course, you have to find a way to get that 40Gbps from the Internet.  A Content Delivery System located close to the OLT helps dramatically, but again drives up cost.  Google has implemented their own fiber-based nationwide backbone network; is this something they plan on leveraging to become a new Tier 1 ISP?

The bottom line is that for the vast majority of consumers, the practical limit of consumption has little to do with the limitations of the access system to the home and much more to do with the limits (which of course will change, although 15 HD streams seems pretty generous at the moment) of their ability to consume content.

This becomes a marketing game, as there is no significant near-term, and probably medium-term, benefit to a 1Gbps connection, or to anything above around 100Mbps.  Will local providers start removing the limits on local access (where they can, for example if they use GPON or DOCSIS 3.0), moving the service bottleneck elsewhere?

So that I don't sound like a Luddite: there are likely to be new applications in the future that may change this analysis, and new devices that consume even more.

Of course, if you can only get DSL service at a few Mbps, it's time to call Google (is that even possible?) and petition for your community to go Google Fiber.

Wednesday, July 4, 2012

Consumer Internet Usage Cap Bomb No More or Just Delayed?



Well, it looks like my predictions on when I would exceed the bandwidth cap were pretty accurate.  In my post in September 2011, I predicted that I would hit the cap in June 2012:

http://kaplowtech.blogspot.com/2011/09/more-usage-same-cost-boom.html

In an earlier post, I predicted the end of 2011:

http://kaplowtech.blogspot.com/2011/02/other-countdownthe-consumer-usage-bomb.html

As a recap, in September 2010 I used 40 GBytes; in June 2012, 248 GBytes, more than a six-fold increase.  However, at the same time, my cost for Internet service went up maybe 10% or so.  Yikes for ISP margins; how close are we to the breaking point where prices are going to have to go up significantly?
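The prediction itself was just a compound-growth extrapolation between those two data points.  Here is a minimal sketch of that arithmetic, assuming roughly exponential growth and the 250 GByte monthly cap that Comcast had been enforcing (the cap value is my recollection, not something stated in the earlier posts):

  from math import log

  # Monthly usage data points from the posts above.
  start_gb, end_gb = 40.0, 248.0    # Sep 2010 -> Jun 2012
  months = 21                       # number of months between the two points
  cap_gb = 250.0                    # Comcast's monthly cap at the time (assumed)

  # Compound monthly growth rate implied by the two data points.
  monthly_growth = (end_gb / start_gb) ** (1.0 / months) - 1.0

  # Months from Sep 2010 until the cap would be reached at that growth rate.
  months_to_cap = log(cap_gb / start_gb) / log(1.0 + monthly_growth)

  print(f"Implied growth: {monthly_growth:.1%} per month")
  print(f"{cap_gb:.0f} GB cap reached after about {months_to_cap:.0f} months, i.e., mid-2012")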

I am sure that thousands have already smashed the cap, and Comcast was faced with deciding what to do: stick to the previous threats and cut off customers, raise the limit, or come up with another plan?

The note on the Comcast User Account website says it all. Now what is in store?

Enjoy,
Wesley



Tuesday, June 26, 2012

My Penultimate RIM Post


It’s been a while since my last post, as work has taken me far and wide doing technology assessments for my company’s largest customer.

I am still tracking the near-final gasps of Research In Motion (RIM), and I believe (but do not promise) that this is my penultimate posting on the subject.  The last post will probably be about the final sale of the elements of RIM and its effective end as a company.

My first post on the subject was on April 17, 2011, titled RIM Goes the Way of DEC.  At the time, RIM was still a relative high-flyer, and it was my observations about the PlayBook and their attempt to be “cool” that started me thinking that the company was entering a death spiral.  I stated that “The bottom line is that RIM is in trouble.”  Since then, the stock has gone from around $53 to around $9 today.  Ouch; if only I had taken my own advice and sold short!

The operative part of my post was that RIM is likely to self-destruct in nearly the same manner that DEC did.  Unfortunately for RIM, this appears to be exactly what is happening:

The company fired its co-CEO dynamic duo that gave us dozens of undifferentiated products and a half-baked tablet.  This is similar to the messy, expensive, and undifferentiated DEC products that drained the corporate resources needed to produce what could have saved the company (e.g., the VAX 9000, which slowed Alpha product development, and the lack of focus on networking and the Web).

A new CEO who has no particular technical vision for the company, but is put into the position of placing everything in the “latest” company technology basket (e.g., BlackBerry 10/QNX and new handsets, compared to DEC’s Alpha with OpenVMS and Windows NT) while preparing massive layoffs to conserve cash.

Preparing a massive restructuring and sale of company assets.  In its last years, DEC sold off major portions of its business (e.g., networking, software, printers) to raise cash, and finally the company was sold to Compaq (and then acquired by HP).  Just this week, it was reported that RIM is considering selling its handset business to either Facebook or Amazon.  There are also reports about selling its BlackBerry Messenger platform as well (hard to believe that there is really any value in this, as Google, Skype, and Facebook are now the serious players here – ask AOL if there is value in AIM).

It took a bit over five years from the CEO change at DEC (and the rebranding to “digital”), from founder Ken Olsen to Robert Palmer, for the company to cease to exist as an independent entity, and considerably less time for it to fade from being a force in the industry to playing catch-up.  It appears that RIM will go through the same cycle, changing CEOs from founders Mike Lazaridis and Jim Balsillie to Thorsten Heins.  Eerily similar to Mr. Palmer at DEC, Mr. Heins was an internal promotion.  RIM has already tumbled from being a force in the marketplace (although it should be noted that they still have strength in several countries around the world); their revenue has declined by 35% in only one year, with market share dropping from nearly 15% to a scant 8%.  With its new savior product (the QNX-based BlackBerry 10) not even shipping, and with Apple iOS, Google Android, now Windows, and the corporate Bring Your Own Device (BYOD) trend all working against it, there is no likely change in this trajectory.  Massive employee reductions, of around 40%, are reported to be already in the works.

It’s been nearly six months since the change in CEO at RIM; will RIM make it another five years (like DEC did) as an independent company?

Sunday, May 6, 2012

The Internet Kill Switch


There are still many of us for whom the question “Where were you when the lights went out?” means where were you stuck during the November 9, 1965 blackout in New York City.  Even though I am older than I look, my only fleeting memory is that my Mom would not let me open the refrigerator and that we needed to go out of our apartment in the Bronx to hunt down some milk.

However dramatic, most of the impact that day was people stuck on the subway or stuck because all of the traffic lights were out.  Of note was that phone service was not affected.  The economic impact was that people could not get to factories or their offices, but commerce outside the region was only minimally disturbed.  New procedures were put in place to protect against a recurrence of such a massive blackout, which affected over 10 million people.  However, these and subsequent technology and procedure changes did not stop:
  • The July 13, 1997 blackout that mostly affected New York City
  • An even larger blackout on August 14, 2003 that affected over 45 million people in the USA and 10 million in Canada
At our recent EIS 2012 conference at the Global Learning Center at Georgia Tech, I pondered the question of the impact of an Internet blackout and its effect on our lives, commerce, and the safety of the nation.  This was triggered by observing the following as I boarded my flight from Dulles to Hartsfield-Jackson:
The Internet Kill Switch on an airplane built before the Internet!
To put this into context, this is on a McDonnell Douglas MD-88 aircraft.  MD-88 deliveries started in 1987, with the last delivered in 1997, so this plane is around 20 years old.  Surrounded by potentiometer controls for sound, the INTERNET SWITCH MUST BE ON AT ALL TIMES label stands out as a metaphor for how the Internet surrounds us almost everywhere we go, even at 35,000 feet.  I just hope that it is never connected to anything related to the control of the plane!

At the conference I asked what the impact would be if the Internet were to fail during different eras.  I started with the 1980s:

The network before the Network
Clearly, in 1980, the impact of a failure of the Internet (really at this point the ARPANET, there being no Internet yet) was virtually zero.  The networks for the government (e.g., command and control, space flight, etc.) and the nascent networks for commercial purposes had essentially zero commonality.  In fact, most of the services were still provided by dedicated analog phone lines and the emerging digital transmission of the T1 and DS3 family.  The ability for nefarious activity, particularly sourced from one location, to affect these archaic and diverse systems was virtually zero.

I stepped through the 1990s (the common consensus was that nothing “happened” in the ‘90s), quickly pointing out the emergence of the Internet and the beginnings of an Internet economy.  However, even in the 1990s the impact of the Internet in the “off” position, especially early on, would be an inconvenience for many, but of major impact to only a very few.

Skimming the 2000s, I pointed out that, in spite of Time Magazine’s cover, the millennium really started in 2001 (there was no year zero).  At this point, however, the Internet economy, even though it burst a bubble, started to really change the way people worked and lived.  It became an integral part of commerce, of reaching customers, and of radically new business models.

Now, as we reach the 2010s, the impact is everywhere.  There are around a billion devices attached to the Internet.  Wireless data services are pervasive in many countries in ways the wired infrastructure never reached.  Even in the USA, more than 30% of households no longer have a dedicated home phone, and even for those that still do, the service in the background may be using the same technology and network infrastructure as the Internet.  Beyond iPads and Android devices, Internet-based services have now moved directly into our cars and may again change business models (Mobile Technology Killing Satellite Radio).

So, what happens now if the Internet Kill Switch is moved to the ON position (that is, the Internet is turned off)?  Of course, the issues affecting the performance of the Internet could come in several different varieties:
  • One of the Tier 1 carriers has a common-mode failure that causes their part of the Internet to fail.  This would cause a significant disruption to customers and businesses, but one would believe that it would be taken care of fairly quickly.  In 1990, AT&T’s digital voice network had a several-hour outage caused by issues in their Signaling System 7 (SS7) control network.  The root of the problem was the same bad code (a subtle slip in the ‘C’ program, on the order of confusing ‘=’ with ‘==’) being common across the SS7 nodes in the network.  Today, most Tier 1 providers’ “core” MPLS/IP networks are built using a single vendor.  Service providers test like crazy to find potential problems, and they treat all software upgrades with suspicion and significant testing.  Reloading and restarting the backbone network could take hours or a few days, but overall the process is under the control of the service provider and their vendors.
  • The “bad guys” find magic-bullet packets, sequences, or other vulnerabilities in the Internet that can be used to propagate significant routing problems across multiple Internet backbone providers through their peering points.  Other vectors include compromising the typical Out-Of-Band (OOB) control network that is used to configure and monitor network devices.  This could cause one or more Tier 1 providers to start closing peering points, resetting routers, or even taking their networks down to reload clean software and firmware on their routers.  The service provider would presumably take steps to isolate their management network, and then the restart of the backbone routers is under their and their vendors’ control.
  • The other Internet Kill Switch, the one that actually gets talked about, is one under some sort of government control.  What criteria would be used?  Even the first item above, which has nothing to do with hacking or Internet terrorism, could look like an attack on a provider’s network.  Do we flip the Internet into the OFF position and cause self-inflicted damage?
Clearly, the Internet is now so pervasive that it has become an essential utility for virtually every aspect of life.  Even local Internet outages cause significant disruption to commerce.  However, if we go back Before the Internet (B.I.), the exact same things were said about our phone system and our railroad network.  I am not that old, but maybe there was discussion about a Telephone System Kill Switch?  Or do we make the generally valid assumption that the major companies that form the Tier 1 structure of the Internet will do the right thing?  That is, that it is in their commercial interest (their corporate existence) to make their service as robust and reliable as possible to keep and attract customers?

The top Internet providers should show how they prepare for serious events, both natural and man-made.  This includes providing time estimates for restoring services in a variety of situations, and explaining how this is coordinated with the other major network providers.  This is the only way to estimate the risk to our daily Internet conveniences and, more importantly, to our country’s economic health.  For example, the plans put in place by network providers ensured that during the massive 2003 blackout their equipment remained powered, with emergency deliveries of fuel for generators going essentially as planned.  Do we have equally well coordinated plans among the service providers for other issues:
  • Route repair coordination and optical transport bandwidth sharing?
  • Point-of-presence facility failure?
  • Depot and sharing of routers and other equipment in an emergency?
  • Response to large scale attacks at peering points?
  • Coordination with government to ensure situation awareness and prioritization of restoration efforts?
  • Coordination to limit the cross Tier 1 impacts to major application service providers to protect e-commerce?
I am not looking forward to the time when the topic of the day is: “Where were you when the Internet went out?”.  In a future post, we'll look at "Where were you when the Cloud went out?".

Monday, April 2, 2012

Will the Was "Now" Network become the Will Be "Now" Network?


After over a year of positioning and scheming, it appears that Sprint has finally adopted a technical direction for its network and customers.  And, for all of you who have a WiMAX-based Sprint 4G phone, at least you are no more than two years away from getting trade-in value for an LTE-based 4G product.

The bottom line is that Sprint will not be adding any new WiMAX phones to its crowded line-up of user technology.

So, the old technology front-runner apparently spent much of last year negotiating deals and talking about coming out with an LTE strategy instead of actually committing to LTE and getting a network up and running.  From its announcement almost a year ago to the complete collapse of the LightSquared deal last month, the wasted time means that others are running away with 4G mindshare, with rolled-out devices and coverage in dozens of cities (over 200 for Verizon).

Now, Sprint finds itself playing catch-up.  Burdened with a multi-billion-dollar commitment for iPhone devices, it faces the even more unenviable position of not having a large enough 4G network to compete for a good share of the looming iPhone 5 rollout, which one can safely assume will have LTE.  Without a good share, they will be hard pressed to cover the $15.5 billion they owe Apple.

In my opinion, Sprint’s marketing has always been suspect, but in the last year or so they have positioned themselves as the provider of the “Now” Network, which would not be bad if it were true.  The reality looks more like a slide from the “Was Now Network”, when they led with GSM, push-to-talk, and EVDO, to the “Will Be Now Network”, arriving after AT&T and Verizon have over a two-year head start.

Friday, March 30, 2012

First out of the gate, and now in the back of the pack.


There are two interesting developments this week on the evolution of the strategies of companies that I have been tracking.

The first is the new favorite to bash (I have been critical since early last year), Research In Motion (RIM), the company that brought us the previously game-changing BlackBerry line of devices.

The second is that Sprint has essentially abandoned WiMAX and is now committing to LTE.  I will expand on this in a future post.

For RIM, a summary:
  • They barely made their most pessimistic sales targets (with sales down over 21% year over year)
  • Resignation of former co-chief executive Jim Balsillie (is this then a co-resignation?  Who is going to staff the innovation committee?)
  • Stepping down of chief technology officer David Yach (I thought the co-CEOs were the brain trust)
  • Dan Dodge, former head of QNX Software, is the new CTO (hmmm, do we really think that an operating system is going to restore luster in a crowded OS market?)

Remember, when the current CEO stepped into place, he gave the impression that the previous leadership had really been doing the right things, and that it was marketing that was causing the loss of sales and market share (this message sent the stock tumbling).

Well, the newish CEO, Thorsten Heins, now says that the company will focus on corporate BlackBerry environments, their previous area of dominance and where they were first out of the gate.  This was initially taken as a sign that the company was going to abandon the consumer market, but I doubt that was the intent (although it may ultimately be the reality).

Can RIM salvage their one-time corporate dominance in securely managing mobile devices and move up from their current near-back-of-the-pack position?

If RIM is going to focus on the corporate marketplace, it is probably too late – unless they can rapidly turn their BlackBerry Enterprise Server system into a real management platform for non-RIM devices.  The trend toward “bring your own device” (see IBM) is strong, and therefore the corporate marketplace may be dominated by consumer-based decision making.  In addition, there are established and startup companies specializing in enabling IT departments to manage these consumer-purchased devices that are integrated into the corporate environment (see Wikipedia; notice that RIM is not even on the list!).


Sunday, March 25, 2012

The New Convergence - Can Engineering and Operations Staffs Handle the Change?


There is a new breeze moving through the telecommunications landscape.  It is called convergence.  However, it is not the convergence, or “convergences”, that may first come to mind.  As a review, some previous convergence examples:
  • Private Networking Convergence: Move from Asynchronous Transfer Mode (ATM) and Frame Relay into Multi-Protocol Labeled Switching (MPLS)-based Virtual Private Network (VPN) services
  • Provider Networking Convergence: The use of MPLS to provide a common “provider” core network to support all data services
  • Service Convergence: Converging voice and video services over Internet Protocol (IP)-based networks

All three of these represent the convergence of technology, but the last one is different in a very specific and critical dimension.  Since the beginning of data networking, the systems used to provide voice services and data services may have shared common network transport (e.g., point-to-point circuits and wavelengths), but differed in planning, engineering, and operations.

Service Convergence of voice and private data networking onto the same IP network infrastructure required new thinking about the organizations that support voice services.  No longer running an independent Time Division Multiplexed (TDM)-based voice backbone, the voice services organization transitioned to placing requirements on the converged, and most likely MPLS-based, IP service network.

However, there is one critical area that did not converge.  The hardware devices that support the voice services are separate and distinct from those of the data network.  This includes items like call processing servers, session border controllers, and TDM gateways.  What this means is that, with the exception of the actual IP bandwidth, the voice service planning, engineering, and operations teams still have a separate set of devices to lifecycle-manage.

Now, another convergence is on the horizon that takes things one step further: the actual merging of data networking into the optical transport domain.  Right now, the product roadmaps of companies like Ciena, Alcatel-Lucent, Juniper, Cisco, and Infinera show a merging of the functionality of the DWDM optical layer and of a core MPLS provider “P” router into one device.
Currently, at a service provider, these layers (optical and MPLS) are designed, acquired, and engineered independently by separate internal organizations.  This separation also extends into the operations environment (i.e., the Network Operations Centers – NOCs), which generally has separate transport, MPLS/IP, and voice groups.

The figure below shows examples of the kinds of convergence environments that are in progress or emerging.  In the typical “Current Environment”, the MPLS P and PE layers are grouped together and have a separate life-cycle.  Similarly, the Synchronous Optical Networking (SONET) layer and the associated Ethernet interfaces for Ethernet-line (ELINE) services are generally combined with the optical transport layer.

The emerging “Near-term Environment” (in some cases already implemented and deployed) has Optical Transport Network (OTN) capabilities, combined with SONET and Ethernet services, integrated with Dense Wavelength Division Multiplexing (DWDM) transport.  Of note is that for most existing service providers these types of capabilities are already considered transport, and therefore this transition does not cause too much disruption to the operations organization.  This type of service looks much like provisioned circuits, so the training and the concept of operating the system are familiar.


However, the next transition, into the “Future Environment”, starts network operators down a different path.  The new convergence is the move of the traditional MPLS “P” function, generally a large router providing MPLS-only capabilities (i.e., handling Label Switched Paths only and not “routing”), into the same hardware environment that provides OTN and DWDM transport.  Hardware vendors are banking on merging these capabilities to significantly reduce hardware costs and improve overall system efficiency.  Generally called integrated optics or compatible optics, the approach is to remove the back-to-back optics costs that are incurred by connecting separate pieces of equipment together (e.g., between the “P” MPLS router and the DWDM optical transport).

With this approach, there need to be some interesting changes in engineering.  The IP engineering organization has to participate directly in the specification, testing, and eventual deployment of such an integrated network component.  This complicates the selection of systems.  A real question for equipment vendors is whether this integrated approach creates too complicated a set of organizational issues for service providers.  As I have written previously, engineers love to hug their hardware.  If they are no longer in control, their natural reaction will be to resist the change.

In parallel with the changes in engineering, changes are needed in operations.  In general, MPLS devices are managed by a “Data” Network Operations Center (NOC) and the transport equipment by a “Transport” NOC.  For many service providers, these may not even be located in the same building.  In general, the data network treats the optical service as an independent resource provided by an internal service provider; failures and issues are addressed at the MPLS layer.  The Transport NOC provides circuit resources and performs maintenance as needed, generally without direct regard for the users of its service, including the MPLS network.

So, in the new environment, the transport and data network worlds collide as there are significant operations inter-dependencies that need to be reconciled:
  • Which team is primary on the equipment?
  • What network management system monitors the equipment?
  • How do you coordinate maintenance activities?
  • Are there training gaps and skill gaps that need to be filled?
  • Do you collapse the Data and Transport NOCs?

Even in light of these issues, the convergence train has left the station, and its next stop is yet another consolidation of equipment that promises to increase scale, reduce cost, and improve power efficiency.  The engineering and operations staffs of the respective services are going to have to get used to this new reality and learn to share the hugging of the network hardware.

Sunday, February 26, 2012

Every which way...is that a technical strategy?

At the risk of piling on, the last two weeks have been precious in terms of technical and business strategy gone awry.

On Friday, it was announced that Sprint's Board of Directors decided to back away from a several-billion-dollar deal to buy MetroPCS.  From a strategy perspective, you could ask what Sprint-Nextel CEO Dan Hesse was thinking.  This failure also comes on the heels of the virtual certainty of LightSquared being denied any useful operating license by the FCC.  Remember, LightSquared was going to pay Sprint $9 billion in cash over 11 years to build the LightSquared network, and Sprint would also get $4.5 billion in network usage credits.

Sprint announced its own over $7 billion LTE network upgrade recently, so why all the confusion?

Oh, but there is more.  Google announced that it is going to essentially abandon its investment in Clearwire, for a loss of nearly $500 million.  As I discussed earlier in Early Adopters Don't Always Win Update, the WiMAX technology that Sprint was getting from Clearwire did not live up to the 4G marketing hype from WiMAX proponents, Sprint, or Clearwire.  Clearwire posted a loss of $237 million for 4Q2011 and even went as far as to say that their revenue will likely stay flat or even fall during 2012.  This is remarkable, given that the number of subscribers on LTE, HSPA+, and CDMA Rev. A is rapidly expanding, along with revenue (profits may be a different matter, of course).

So again, what does this mean technically?  Sprint purchased Nextel in 2004 to gain new customers and the scale to help it compete.  Not learning from the difficulty of integrating two different wireless standards in that swing-for-the-fences $35 billion deal, Sprint has seemingly ignored real technical innovation and planning in favor of more swing-for-the-fences deals, including:
  • pouring billions into Clearwire (both in stock and in must-pay service agreements)
  • the several-billion-dollar LightSquared transaction (hmmm... is this being done by ex-WorldCom people?)
  • the $15 billion deal with Apple for iPhones
  • the failed attempt to acquire MetroPCS for nearly $8 billion

Right now, a company that was once known for technical leadership (one of the first GSM operators, with a rapid transition to 3G CDMA) may instead become known for a series of done and would-be-done financial transactions meant to build it into a fierce competitor.

Maybe concentrating on its own network (technology and buying spectrum), improving its marketing, and looking for some other technical differentiation would allow it to grow and keep its customer base.  Just a thought.

Monday, January 9, 2012

Sprint’ing to be Finished

I have probably been a bit hard on Sprint over the past year.

So with Sprint's recent announcement, let’s look at their 4G strategy:
  • Trying to be first to market, Sprint moved into a partnership with Clearwire for their WiMAX service.  This was supposed to drive subscriber growth, with fancy ads showing real-time mobile video collaboration.  No such luck.
  • Sprint now decides that maybe WiMAX is not the answer and that an LTE strategy for 4G is key.  Even Clearwire starts LTE trials after having spent billions of dollars on WiMAX systems.
  • Flailing a bit, Sprint decides that some sort of deal with the emerging wholesale provider, LightSquared, would do the trick: http://news.cnet.com/8301-1035_3-20084472-94/sprint-lightsquared-unveil-network-sharing-deal/

At first glance the LightSquared deal looks great.  Sprint gets over $10 billion to help build out LightSquared's network, “host LightSquared’s spectrum,” and other considerations.  However, in the same article, Sprint is leaning on Clearwire for its 4G strategy while at the same time promising to release its larger LTE strategy later in the year.

Sprint also said that they really want the GPS issue with LightSquared’s use of the MSS spectrum to be resolved.  Well, that may not turn out so well, and Sprint would be back to zero with $10 billion less in its pocket.
Well, we now see the effect of such a coherent strategy:
Verizon has 190 LTE markets in production, AT&T 26, and Sprint has a mid-year launch in 10 cities.
So, with all the money exchanged, all the early-market-entry hype, and the potential financial gimmickry of the LightSquared deal, the bottom line is that Sprint is definitively in last place in the 4G saga (well, there is also T-Mobile's HSPA+).

What's even more confusing is that Sprint announced that their first new 4G LTE site is a multimode WiMAX/LTE site somewhere in New Jersey.  The lesson (also previously discussed with regard to Nextel) appears lost on Sprint; they now need to support:
  • Motorola iDEN (are there still Nextel customers out there?)
  • 3G CDMA
  • LTE
  • WiMAX

The extra hardware, software, systems, planning, people, and operational issues involved in supporting a laundry list of service agreements and technologies make it harder for Sprint to be competitive.

Whatever nimble, focused, catch-the-big-guys (i.e., AT&T and Verizon) strategy they had in mind, it just does not look like it's going to work out.

Moreover, investors have been led down a garden path to losses:
  • Since the beginning of 2009, the Dow Jones Industrial Average is up approximately 40%
  • Clearwire is down around 70%
  • LightSquared, depending on the FCC, may be effectively dead (not a public company, but what's an investment worth if you can't have customers?)
  • Sprint is up a big 0%