Sunday, December 26, 2010

Name your own price...for wireless Internet bandwidth

So, would you like to name your own price for high-speed Internet access? It may sound a bit far-fetched, but it may be on its way – imagine William Shatner over your shoulder telling you to demand $1 for unlimited 4G wireless for the day. Don’t think it can happen? Well, let’s take a look at the competitive environment for your wireless bucks.

Competition is the engine that ignites innovation. One product emerges, such as the iPhone, and sets the standard by which all other products are measured. This type of competition drives companies to differentiate their products in order to break away from commoditization and gain market share without engaging in cut-throat pricing.

Consumers and businesses benefit from this type of competition, getting higher performance and more features from a larger number of companies. However, there is another type of competition that also benefits consumers and businesses alike: price competition. Essentially based on the commoditization of a product or service and driven by consumer selection, price competition increases the value received by giving the consumer the choice of a lower-cost item or service.

So, what does this have to do with telecommunications? The general observation is that much communications equipment has become a commodity, and competition has driven down the cost of routers, switches, and optical transport equipment. For service providers, it has also dramatically lowered the cost of Internet services, both wired and wireless.

For this article, let us focus on Internet services. Anyone with experience in purchasing wired Internet service knows that much of the cost is driven by the number of access providers that have their own transport assets into your building. If there is only one, you are at the mercy of that provider; if there are several, even a bit of negotiation can significantly reduce your cost for service.

However, right now there is a robust market of multiple providers that exists virtually everywhere: the wireless marketplace. Only a few years ago, with the best wireless Internet service limited to around 1 Mbps download and several hundred Kbps upload, these services were good for road warriors, but not a replacement for wired services (or WiFi connected to a wired service). The situation changed a bit with 3G services, and now even more with multiple 4G service providers available and more on the way. This means that there are multiple service providers reaching your mobile devices that can deliver multi-Mbps speeds, enabling a whole range of applications.

The wireless companies want to lock us into contracts, or at least monthly plans, at a fixed cost and a fixed maximum usage (beyond which you pay more). However, at any moment, unlike the wired situation, you are surrounded by multiple wireless providers, each providing essentially the same commodity services. But with a contract or device limited to a single provider, it is not possible to take advantage of this rich access environment.

This situation will change. With embedded 3G and 4G multi-band capabilities built into portable devices (e.g., laptops, tablets, phones), a new market could emerge that puts the power of the moment and of choice in the hands of the user. Like Priceline, a user would advertise that Internet service is needed and either set a price or let a real-time market set the price. Competition among the several carriers would most likely drive the price down, providing the best value to the consumer. This type of service could be implemented via a subscription service that handles the auction and then programs the wireless device to use the selected carrier. Options would then exist for improving the current pricing schemes of monthly data transfer caps. For example, it may be possible to pool multiple accounts, using a central administration and allocation mechanism that dynamically assigns an account with ample unused data transfer to a requesting mobile device.
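
To make the idea concrete, here is a minimal sketch of the two mechanisms described above: a reverse auction over carrier bids and a pooled-account allocator. Every carrier name, price, and data cap in it is a hypothetical illustration, not a real offering, and a real service would of course need billing, authentication, and device provisioning on top of this.

```python
# Sketch of a reverse auction for wireless access plus account pooling.
# All carriers, prices, and caps below are hypothetical illustrations.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Bid:
    carrier: str          # hypothetical carrier name
    price_per_day: float  # dollars for one day of 4G access


def run_reverse_auction(bids: list[Bid], max_price: float) -> Optional[Bid]:
    """Pick the cheapest bid at or below the price the user named."""
    acceptable = [b for b in bids if b.price_per_day <= max_price]
    return min(acceptable, key=lambda b: b.price_per_day) if acceptable else None


@dataclass
class PooledAccount:
    name: str
    cap_gb: float   # monthly data cap
    used_gb: float  # data already consumed this month

    @property
    def remaining_gb(self) -> float:
        return self.cap_gb - self.used_gb


def assign_account(pool: list[PooledAccount], needed_gb: float) -> Optional[PooledAccount]:
    """Central allocator: hand the request to the account with the most unused data."""
    candidates = [a for a in pool if a.remaining_gb >= needed_gb]
    return max(candidates, key=lambda a: a.remaining_gb) if candidates else None


if __name__ == "__main__":
    bids = [Bid("Carrier A", 2.50), Bid("Carrier B", 1.75), Bid("Carrier C", 3.00)]
    print(run_reverse_auction(bids, max_price=2.00))   # Carrier B wins at $1.75

    pool = [PooledAccount("Family plan", 10.0, 9.5), PooledAccount("Office plan", 20.0, 4.0)]
    print(assign_account(pool, needed_gb=2.0))          # Office plan has the most headroom
```

The interesting design choice is where this logic lives: a neutral subscription service could run the auction on the user's behalf and push the winning carrier's profile to the device, which keeps the carriers honest without requiring the user to manage anything.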

So, when will this happen? With applications demanding more and more bandwidth, the flexibility of our mobile devices increasing, and flexible wireless hardware in volume production, one or both of two things will happen, each to the benefit of consumers and business users alike:

  • One of the several 4G wireless providers will decide to break with the pack and introduce higher data caps or even a plan with no cap at all
  • An enterprising company will develop an approach to enable reverse auctions or bandwidth pooling

Maybe both will happen, or maybe neither, but the precedent has been set in other areas of business, including:

  • Least cost routing for voice call termination
  • Reverse auction hotel costs and airline fares
  • Credit cards that offer travel points on any airline

New business models are starting all the time; maybe this is one that is about to get some wireless legs.

Sunday, December 5, 2010

Orchestration - It Should be the Focus of Most IT Organizations

You've got the best Information Technology (IT) department in the world. Your organization develops and cranks out new applications and capabilities for your customers, both internal and external, like clockwork; you are the Chief Information Officer (CIO) of the century. However, one day you wake up and find the IT world has actually passed you and your company by. How did that happen? Just yesterday it appeared that all was well.

Well, several things happened. Your crack team:

  1. Loves to run systems
  2. Loves to buy hardware
  3. Loves to interact with vendors
  4. Loves to develop code
  5. Staffs for each new requirement 
  6. Staffs to maintain each new application
All this sounds good, so what is the problem?

This is a team that hugs its environment. They want to be able to see their babies, listen to the noise of the fans, and watch the blinkin’ lights. This attachment may also be shared by executives who want to see where their money is going. High self-esteem for the team comes from having vendors come in and parade their wares, and from doing show-and-tells of the latest feature or cool system configuration trick for corporate leadership.

In addition, the organization likes to grow. Success for many is measured by the size of the organization. So, for each new business demand, the size of the development team increases, the size of the IT infrastructure increases, and the operations staff to keep it all working increases.

These two elements combine to cause the IT organization to stagnate. The organization grows too large, and the culture becomes focused inward on the tools and systems it has developed. Your competitors become more nimble and out-execute your organization. As CIO, you see this coming and want to take action to fix it (just as your CEO is starting to get on your tush).

There are several options you can take, but you really want to understand the root cause of the problem. To get a bearing, you engage some excellent consulting firms, and they tell you all about the best practices that will enable a technical course correction. You put the plan into action. You get your IT team trained on the consultants’ recommendations, and you set expectations and goals. You wait for the changes. However, the changes do not come. What was the missing ingredient?

The problem is that the necessary change was not directly technical, but related to the overall manner in which your organization is run. The change is directional: from engineering to “Orchestration”.

Orchestration means that you need to change the culture of an organization that loves engineering IT solutions. The culture needs new thinking and new goals: a culture that hugs things must now embrace software and system capabilities that exist outside of its direct control (paraphrasing Obi-Wan: “use the Cloud…Luke”). For customized applications, a new development approach, based on a commercial tools environment, needs to be created that enables targeted outsourcing of development. The organization must understand what must be done internally and what can be competed and placed into the hands of other companies. Engineering and development is only one of the dimensions; the other is to ensure that operations also uses an orchestration model, tailoring its support to the specific items of high value to your organization, and not to areas where benefits of scale are better served by another provider.

This is the approach that Netflix has taken in choosing not to continue to own and expand its own data centers, but to outsource virtually all of its IT system growth to Amazon Web Services. The basic issue was that the cost of scaling their own environment could never keep up with the cost-to-performance capability provided by an outsourced cloud services provider. In addition, it enables Netflix to focus on its real capabilities: obtaining content and differentiating its services in a competitive marketplace.

Some actionable points for an “Orchestrator CIO”:
  1. It’s the Job. Get the team together and remind them that the job is what they should love, not the machines and software
  2. Benchmark. More than determining the comparative costs of different internal IT approaches, you need to benchmark against the costs of completely alternative implementation approaches (see the sketch after this list)
  3. Move Budget. It’s difficult to effect change if the engineering organizations control their budget and continue to do what’s comfortable. Move the money and move the organization
  4. Reward. It cannot be all punishment (moving budget, benchmarking that the current approach is not cost effective, etc.). There must be a method to reward innovation and risk taking
  5. Never Rest. Practice Constructive Non-Complacency
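
As a companion to the benchmarking point above, here is a minimal sketch of the kind of comparison an Orchestrator CIO might run. All of the dollar figures and staffing numbers are hypothetical placeholders; the point is only that the internal approach must be benchmarked against a complete alternative, not against a tweaked version of itself.

```python
# Hypothetical benchmark: fully loaded annual cost of an in-house environment
# versus an outsourced alternative. Every figure below is invented.

def annual_internal_cost(hardware_capex: float, years_amortized: int,
                         ops_staff: int, cost_per_head: float,
                         facilities: float) -> float:
    """Rough annual total cost of running the environment in-house."""
    return hardware_capex / years_amortized + ops_staff * cost_per_head + facilities


def annual_cloud_cost(monthly_service_fee: float, mgmt_staff: int,
                      cost_per_head: float) -> float:
    """Rough annual total cost of the outsourced alternative."""
    return 12 * monthly_service_fee + mgmt_staff * cost_per_head


internal = annual_internal_cost(hardware_capex=1_500_000, years_amortized=4,
                                ops_staff=6, cost_per_head=150_000,
                                facilities=200_000)
cloud = annual_cloud_cost(monthly_service_fee=60_000, mgmt_staff=2,
                          cost_per_head=150_000)

print(f"Internal: ${internal:,.0f}/yr  Cloud: ${cloud:,.0f}/yr  "
      f"ratio: {internal / cloud:.2f}x")
```

Even a back-of-the-envelope model like this changes the conversation, because the engineering organization can no longer compare itself only against its own last budget.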

Thursday, November 25, 2010

The Technology Yellow Brick Roadmap

Enterprise Information Technology (IT) organizations seem to have a deep desire to create Technology and Architecture Roadmaps. The rationale for creating these roadmaps is to help ready an organization for known and potential changes in technology, as well as to identify gaps in the ability to meet future requirements.

Unfortunately, too often the rush is to understand the elements of technology, such as the latest speed and number of cores for a processor in a blade center, and not how these technologies solve business problems. The difference is that the former are Engineering or Technology Capabilities, and the latter are actual Services.

Commercial IT service providers, such as the major telecommunications service providers, understand the linkage between technology and business implicitly and use several basic questions that lay the foundation of the purpose of a Technology Roadmap:
  1. What are the IT services that my customers need?
  2. What are the potential technical solutions, and what are the trades in cost, performance, delivery, and ultimately customer satisfaction?
  3. What is the Concept of Operations (CONOPS) for how the IT service is going to be delivered to my customers?
  4. Are my organization and its processes mature enough to bring the technology and the CONOPS together?
A commercial provider examines the first question and begins to decompose the customer need into functional areas that, combined, make up the complete service. Contrast this with many IT organizations, which may be led by engineers who are not directly in touch with their customers and who make technology and service decisions based on their parochial understanding. The missing piece here is a close understanding of the customer, rather than anecdotal information that may lead to decisions not based on real requirements or needs.

The second and third items are the critical transition from an engineering solution to a service solution. Engineering organizations are generally concerned with whether a service is going to work, the details of component selection, and the initial deployment. They generally hold the operations staff in lower regard; the engineering staff sees the developed solution as too complicated for anyone but a certified engineering genius to operate. Well-balanced IT organizations know that the ability to ensure high-quality, repeatable service delivery is the goal, not just the excitement of the initial deployment of a technology or capability. The balance between engineering and operations must be found; that is, a forcing function must be developed that identifies how a new or enhanced service is delivered, as well as the overall framework for all services to be delivered.

Giving it up: it is painful to stop what you are doing and rely on others. However, this is the key to the last item. The mature organization, like leadership itself, understands that the various divisions (e.g., engineering and operations) take the lead at different times. The engineering staff will want to continue to “hug” the technical baby. The operations staff will tend to respond only to monitoring services and performing basic break-fix, all too happy to leave the heavy lifting of configuring and provisioning services to the engineering staff.

So, how do you make sure that an IT organization is balanced (that is, how do you find the forcing function) and enables the development of technology, the creation of the service CONOPS, and the proper transition of these elements into a production environment? The simple solution is to find the junction between customer needs, engineering, and operations, and then assign the responsibility for this position in the middle to a product organization that lives only through its ability to ensure the mutual satisfaction of the other three elements. Mature service providers, such as large multi-service telecommunications providers, have well-established product managers.

These managers are the fulcrum, the center of the development of an effective technology and architecture roadmap. The manager’s role is true leadership, as in many cases the person is not the positional leader of all the people involved. Leadership is needed to provide a vision to the key individuals of the other organizations, and to provide the environment for collaboration to develop effective and actionable roadmaps. The product leadership needs to direct, although not specify, all aspects of the roadmap activity:
  1. Development of the Architecture Roadmap, with the milestones of technology, services, and operations ensuring the collaboration of the engineering and operations groups
  2. Development of the business case, ensuring the appropriate inputs from the various organizations, including executives and finance
  3. Driving the endorsement and executive buy-in of the roadmap
  4. Development of a top-level actionable plan for the implementation of the roadmap
  5. Supervision of the implementation plan, with the responsibility to identify the critical milestones needed across all organizations
  6. Tracking of service execution, delivery, projections, and customer satisfaction with the responsibility to coordinate resources, if necessary, to address growth and service issues
  7. Periodic updates of the roadmap for service improvement as well as service retirement
By following this approach, the services developed will have the right balance, ensuring that customer satisfaction is measured and tracked and that operations measures the performance of the services delivered. As important, it ensures that engineering is available for escalations on service quality or delivery issues, and is able to spend time on implementing the next steps of the roadmap.

Thus, the Technology and Architecture Roadmap no longer lives in un-actionable isolation, a dream invented only to meet some corporate feel-good requirement. When combined with the proper organizational structure, a services concept of operations, and product leadership, it provides clear direction for, and participation of, each functional area, with the result being services that can be consistently and efficiently delivered with high customer satisfaction.

Sunday, March 21, 2010

Avoiding creating a Self Licking Ice Cream Cone

It is amazing what people can do together. By marshaling resources, leveraging each person’s talents and experience, virtually any technical data communications challenge can be met and overcome.

However, it is exactly this strength that can lead to an amazing weakness: the creation and reinforcement of the design, engineering, and deployment of a “Self Licking Ice Cream Cone”, or SLICC for short. Defined, a SLICC is a product or service that works as designed, but misses the mark on customer requirements, cost, or operational improvements, or is technically obsolete when deployed.

SLICCs are not created on purpose, but in the government and even in the commercial world, they are produced sometimes two scoops at a time. What causes this? What causes smart and experienced people to create these generally expensive but less effective products or services?

In general, it all starts going off track because of the “second system effect”. This effect, a term coined by Fred Brooks, project manager for the somewhat ill-fated IBM OS/360 project and author of the book “The Mythical Man-Month”, suggests that after developers are successful in creating their first system, the same team may fail spectacularly on their next, or “second”, system. In short, Brooks concludes that the second system retains concepts from the first system and also includes all the items the team was not able to include in the first system.

Thus, the requirements of the system can be driven by the legacy systems and concepts, and by the opportunity for engineers to focus on what they were not able to do previously. Instead of looking at current trends and customer requirements, the Project Managers and others dutifully monitor the progress of the developers to ensure the SLICC comes in meeting requirements and on schedule. Unfortunately, those requirements may be inward-focused, not what the market or internal customer really needed. Cynically, sometimes these requirements are put in place to ensure long-term employment for the organization.

Most of the time, commercial companies have two different mechanisms that are supposed to avoid or mitigate the effects of a SLICC. The first is a proper Product and Marketing organization. Functioning correctly, these groups are responsible for understanding the marketplace for their products, including the technical, operational, and pricing requirements needed for market success. The second is the test introduction (especially the Internet-type beta): for well-run companies, even if Product and Marketing makes mistakes, test introductions enable changing the product to meet customer needs or removing it from the market to cut losses.

Unfortunately, many times in the government environment there is no marketplace pressure. Projects are created to provide “technology insertion” or “technology refresh” to existing systems. Unfortunately, as indicated above, in many cases the same team that implemented the current system will be in charge of the new project: a “second system”. Through no direct fault of their own, they will see this as the opportunity to improve on the previous system and introduce features they could not previously deploy, while potentially missing the mark on the end-to-end improvement from the customer’s perspective. In the end, they create a SLICC.

How can this be avoided in a government-controlled network infrastructure? In spite of not having a direct competitive marketplace to provide pressure to move products and services towards success (other than relative budget reductions), it is possible to create an environment that provides a good facsimile. The organization’s equivalent of a Product Manager, the Enterprise Architect, needs to do several things:

  • Interact with end-users of the services to understand what problems they are encountering end-to-end and what is limiting their ability to be successful and meet their customer’s needs
  • Understand that the technology and services being developed and deployed are most likely not unique to the government, and reach out to companies providing like services to understand their approach
  • To make up for a lack of direct marketplace, the Architect must develop a target set of parameters that a new service must achieve

The key parameters for success must include functional, operational, and cost components so that any developed product or service is measured not against the government’s existing technical and operations approach but against commercial analogues. For example:

  • What is the technical cost for moving a bit end-to-end in the network?
  • What is the technical cost for increasing the capacity of the network?
  • What is the engineering effort required to provision a new customer requirement?
  • What is the operations effort required to provision a new customer?

Too often, the comparison for a “technology insertion” or “technical refresh” is relative to the current government baseline approach. This leads to an expectation that a 10% or 20% cost reduction is a great accomplishment. However, when compared to commercial technical and operations practices, the current approach may be several factors more costly. Only by realizing this fact is it possible to take a comprehensive view of making significant changes to the technical, operations, and business approaches of a system and to realize much larger cost savings.
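
To put rough numbers on that point, here is a tiny, hypothetical calculation. The dollar figures are invented purely to show why a baseline-relative comparison can flatter a refresh that is still far more expensive than its commercial analogue.

```python
# Hypothetical cost-per-GB comparison: baseline vs. refresh vs. commercial analogue.
# All figures are invented for illustration only.

baseline_cost_per_gb = 4.00                            # current government approach
refreshed_cost_per_gb = baseline_cost_per_gb * 0.85    # a "great" 15% reduction
commercial_cost_per_gb = 0.90                           # commercial analogue for the same service

print(f"Refresh vs baseline: {1 - refreshed_cost_per_gb / baseline_cost_per_gb:.0%} savings")
print(f"Refresh vs commercial: {refreshed_cost_per_gb / commercial_cost_per_gb:.1f}x more costly")
```

In this made-up example the refresh looks like a 15% win internally, yet it still costs nearly four times the commercial figure, which is exactly the gap the Enterprise Architect's target parameters are meant to expose.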