Tuesday, December 22, 2009

Developing EA in the context of managed services

This article first appeared in ebizq and is being reproduced here:

When practitioners refer to examples of Enterprise Architecture (EA) programs, they generally mean in-house initiatives run by a business's IT department. The objectives of such programs are therefore grounded in the business's own self-interest. In the case of Managed Service Providers (MSPs), who run not only their own enterprises but also the IT departments of their customers, the objectives are dual and sometimes competing. In this article, I attempt to differentiate between these two kinds of EA programs and look at ways in which an MSP can help not only itself but also its customers. First, let me begin at the beginning by defining EA and MSP.

Different experts give different answers when defining enterprise architecture, but there are a number of commonalities in their definitions. Technology and business architecture components form the backbone of most frameworks, and some of the more practical frameworks also include a strong information architecture component. In the interest of brevity, I will quote only one definition, from Gartner, which defines EA as:
Enterprise architecture is the process of translating business vision and strategy into effective enterprise change by creating, communicating and improving the key principles and models that describe the enterprise's future state and enable its evolution. The scope of the enterprise architecture includes the people, processes, information and technology of the enterprise, and their relationships to one another and to the external environment.

An MSP, for the purposes of this discussion, is defined as a provider of services (both delivery and management) in the areas of network, server management, application maintenance, infrastructure maintenance, and hosting. A total service provider usually contracts for a Walk-In-Take-Over (WITO) type of arrangement.

Today, EA services are generally provided by the IT department (the wisdom of this is a subject for another article). An enterprise architecture program typically reports directly to the CTO/CIO and is headed by a senior enterprise architect. The function of this group is to understand the business, facilitate the alignment of IT with the business, and ensure that IT services support both the tactical and strategic operations of the business. As part of its mandate, the EA group provides policies, standards for maintenance operations, and guidance for adopting new and emerging technologies.

This works well when you have an IT budget of sufficient size and, more importantly, the people to help implement the EA program. But what happens when the CIO decides to outsource IT operations entirely to an MSP? Who runs the EA program when you have little or no influence on the technology component? While the CIO might still retain key people to interface with the MSP and the business, that is usually not an effective arrangement.

Contrast this with the EA program of the MSP. It is a dual-objective program in that it needs to serve both its own business and the businesses of its customers. This requires the creation of two charters: one internal and the other for its customers. It may look and feel like a duplicate program, one where every deliverable seems to come in two flavors, internal and external. The reality, however, is that for the program to be effective and practical, the MSP needs a generally common EA program that is applicable to both its internal and external customers. There may be minor tweaks here and there, but the net intent and direction of the program should remain the same. (Note that this does not apply to the governance structure of the EA program, which should be tailored specifically to each customer.)

To develop the EA program practically in a managed services scenario, it should be led by a chief enterprise architect with a number of enterprise architects reporting to her. This pool of architects will ideally come from backgrounds with vertical strengths (e.g. energy, healthcare, insurance, biotech, automotive). Each of these EAs is then assigned to one or more (ideally two) customers in the same vertical. By rotation or otherwise, a similar pool of EAs can develop the internal institutional capital of the MSP's EA program. These artifacts and knowledge capital are then filtered down to the customer level with the appropriate tweaks to maintain relevance and suitability.

While implementing such an EA program is not all that difficult, the issue is convincing end customers of the sincerity of the MSP's EA services. The typical contact point in an MSP relationship is between the account executive of the MSP and the delivery head/CTO of the customer. It is a challenge for the MSP to get beyond this relationship and gain access to the business side of the customer. Generally, there are two reasons for this: first, the customer's IT head feels the need to act as a go-between for the business and the MSP to ensure the continued relevance of their position (essentially job security); second, the customer's IT is in a better position to understand its business than the MSP is (vertical/domain comprehension). While these may be valid reasons, the customer would actually gain from giving the MSP's enterprise architect access. This person has likely had experience in similar domains and can bring a great deal to the table in terms of process efficiencies, identifying deficiencies in information systems, and recommending optimization strategies.

To illustrate my point, I offer two case studies. In the first case, the MSP had recently established a contracting relationship with a customer that had just been divested from its parent company. As is usually the case, the relationship between the recently divorced parent and the new entity was acrimonious. Being a nascent company, the customer was initially unwilling to let the MSP communicate with the divorced parent, so the MSP stayed out of the contentious issues and let the customer lead the technical discussions. Since this was a new experience for the customer, they did not ask the right questions or anticipate issues before they arose. These management errors led to cost overruns and project delays. In desperation, they finally asked the MSP to take a more active role in managing the relationship, and the MSP was soon able to get things going in the right direction. This and other value-added services provided by the MSP, above and beyond its contractual obligations, eventually gained the trust of the customer, who then asked that the MSP's EA sit in on strategic meetings between the customer's business and IT.

In another case, the issue concerned a well-architected but poorly executed project. As in the previous case, the MSP had no access to the business partner of the customer. Again, by stepping up to the issue and taking responsibility for rectifying the project, the MSP gained the trust of the customer's IT department. The MSP certainly generated additional business because of this project, but more importantly, the relationship improved significantly, thanks to the effective inclusion of the MSP in the customer's business-IT strategy meetings.

As these examples indicate, it is always in the interest of the MSP to provide value-added services in any ITO (IT outsourcing) relationship. Merely serving the letter of the contractual arrangement does not establish a long and fruitful relationship with the customer. At the same time, customers need to realize that it is in their best interest to engage the MSP early in the contract to help them make sound strategic technology and business decisions.

Monday, June 29, 2009

Interdisciplinary Design as an Instructional Discipline

I was invited to speak on a panel discussion entitled “Interdisciplinary Design as an Instructional Discipline” at the National Science Foundation (NSF) Engineering Research and Innovation Conference in Honolulu, Hawaii. The theme of the workshop was to foster creativity and innovation in approaches to design by including design as an instructional discipline in academic institutions. Panelists and an invited audience from diverse backgrounds, such as architecture, business, product design, and information systems, were therefore engaged to provide their perspectives on the design process. The discussion, led by senior industry decision makers from Siemens, Honeywell, PetSmart and Dansk, provided an interesting perspective on the issues and challenges facing today’s global and economically turbulent marketplace.

It was interesting to note how the process of design is conducted in disciplines other than information systems and software engineering. There were quite a few commonalities but then there were obvious differences as well. Almost all of the fields do have a multi-step process consisting of conceptual, logical, and physical designs. The difference lies in the interpretation of what constitutes each of these design sub-processes.

I will not elaborate on the design issues pertinent to other areas, but I would be remiss if I did not highlight the design issues facing our industry today:

We need a design process that consistently facilitates the alignment of implemented systems with the intent of the business. Despite our best efforts to come up with workable designs, the rate of failure of IT systems today (maybe not systemic failure, but nevertheless significant failure) is quite high. Why is that so? Is it because we err in gathering, analyzing, and/or comprehending the requirements of our business partners? Or is our design process so flawed that we fail to recognize system deficiencies well in advance of implementation?

How do we ensure integration in terms of business rules, standards, information flow, process flow, interoperability, etc. across the enterprise? Today, it is not uncommon for businesses to be spread across a much wider geographical and cultural spectrum. Consequently, the varied nature of our system boundaries poses a much greater challenge in integrating our process flows. Could we adjust our design process to take into account these seemingly extraneous factors?

How do we facilitate easy change and the ability to react quickly to the market? With the advent of the information age, the market is a much more dynamic entity today than it was twenty years ago. Therefore, not only do our design processes need to be sound, they also need to facilitate a quick entry to market. As market conditions are forever changing, the design process also needs to be flexible enough to optimize systems development and maintenance.

How do we ensure all of the above and yet keep pace with increasingly sophisticated technology and programming paradigms? Adding to the other challenges, the constantly changing technology landscape, with its increasingly sophisticated paradigms, sometimes provides a disruptive influence that needs to be countered to ensure a stable software development process. We can, and should, embrace these new developments, but certainly not at the cost of disrupting the enterprise. We are here, after all, to serve the business, and no amount of tectonic paradigm shifts should alter our basic focus: supporting the enterprise.

Can we perhaps incubate these ideas into future generations of software designers if academic institutions were to encompass them into a design instructional discipline? Or should such things be learned experientially rather than being advocated in a formal instructional setting?

On the long flight back home, it certainly made me go, “Hmm”.

Friday, May 8, 2009

"Architecting the Enterprise", a panel discussion at DESRIST 2009

I was invited to speak on a panel discussion entitled “Architecting the Enterprise” at DESRIST 2009 (a conference for design researchers) and thought I would share what this community thinks about Enterprise Architecture. But before we begin, what exactly is design research? Simply put, design research investigates the process of designing in all its many fields; it is not research into the end design itself.


The discussion was primarily led by enterprise architects from Siemens, Pricewaterhouse Coopers, AT&T, and the Open Group, and was moderated by Penn State U. It was a useful discussion which brought forth the following observations:

  • EA continues to have a heavy technology slant, with little attention being paid to the business aspect of things.
  • Little or no attention is paid to the informational/data needs associated with a successful EA program.
  • The academic community can help in this regard by providing a realistic definition and value of EA in both business school and computer science curricula.
  • In most organizations, any senior technical architect with a good knowledge of the business is considered an EA. Why? Because he/she has the ability to understand the domain (not the business, mind you; these are two different things) and still talk technese. And then management wonders why there is no business-IT alignment.
  • Most EAs in most organizations probably do not know about the various architectural frameworks, much less the differences between them. Like I said earlier, most are techies with sound domain (not business) knowledge.

Sobering thoughts, eh?

Wednesday, April 15, 2009

Keeping EA realistic?

Your organization has chartered you with developing a nascent EA program. You gleefully rub your hands and proceed to get to work. First comes the nebulous, then the specifics, and then, hopefully, the application of the specifics. As you diligently move from one phase to the next, it is very easy to get carried away and look for perfection at every stage. So how do you make sure it all stays relevant from your organization’s perspective?


If you take a look at all of the wonderful frameworks that abound in the literature today, they are all well thought out and require a significant investment of time, effort, and cost. But in reality, are most corporations willing to spend the time and effort to go through a rigorous exercise of developing or aligning to these frameworks?


Sure, some of them will. These are usually organizations for whom process, standardization, and precision are necessary aspects of their domain; they are most likely in the aerospace, defense, and nuclear fields. But for the typical for-profit business that can live with a degree of variability in its product or service offering, investing in developing or adapting an enterprise architecture framework is something it can always put off until next year.


Then what is the solution? Keep it simple, realistic, and always practitioner-oriented. Be ready to show ROI at every stage of the game. Keep your value propositions specific and quantifiable as much as possible. In the coming days, I am going to introduce a framework which purports to do just that. It is called SEAM™.


SEAM™ stands for Simple Enterprise Architecture Management.

Stay tuned.

Tuesday, February 17, 2009

A simple methodology for measurement of baseline performance

When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of science.
- Lord Kelvin



I. Introduction:


Every so often, businesses ask their IT counterparts very general questions about the capacity of their current IT infrastructure, especially as it relates to customer-facing applications. While these questions are vague, they nevertheless require diligence on the part of the IT manager to answer. Since these questions recur with reasonable periodicity, it is prudent for the IT manager to devise a measurement strategy to baseline the capacity of IT assets. Quite a few IT managers think that creating a baseline is an expensive proposition requiring the services of high-end consultants from a top-notch consulting firm. Nothing could be further from the truth. There are simple and commonsensical ways to get the baseline measurements done. In this post, I discuss a simple and generalized methodology for measuring the baseline performance of IT infrastructure within an organization.

Developing a measurement baseline is usually the first phase of a performance analysis and optimization plan. This is akin to assessing the current state of affairs. As I have often said, “you cannot optimize if you do not know your current capability”. Therefore, the intent of creating a measurement baseline is to develop a benchmark of current usage for both hardware and software resources. Once you have a benchmark, you can then carry out additional performance analysis to determine optimum configurations and response times for various scenarios. That, of course, is a future topic for discussion.

II. Procedure:


At a high level, the following steps are required to develop a measurement baseline:



  1. Identify resources that need to be used and determine the type of role/work performed by each of the resources.

  2. Determine the workload characterization.

  3. Select appropriate objects and their associated counters for measuring performance.

  4. Automate data collection by creating logging schedules and upload data into a suitable format for subsequent analysis.

  5. Sample the data at the appropriate intervals.

  6. Generate descriptive statistics and line graphs to identify peaks and valleys of system usage.

  7. Present the findings using a format relevant to the discussions.



The following sections detail each of the above steps:



  1. Identify resources and Roles: An accurate measurement baseline of current use can only be developed by monitoring resources that are currently used to host software applications. The right approach is to use passive monitors on resources that are part of the current production configuration. Care must be taken not to disrupt your steady state of operations.

  2. Workload Characterization: The next task is to determine the characterization of the infrastructure workload by studying your clustering and load-balancing heuristics. Obviously, primary servers will have a greater hit ratio when compared to secondary servers. Understanding the topology of the environment is a key to an accurate analysis of current usage.

  3. Selection of Performance Counters: The advantage of most applications in today’s environment is that they are instrumented off the shelf. This means that the administrator rarely has to devise software widgets to measure the performance of these applications. However, the accuracy of a baseline depends on the right selection of these performance counters. Selecting too many counters yields a data overload that is very cumbersome to analyze using standard methods; selecting too few makes the whole endeavor meaningless. A good and experienced administrator should be able to pick the right mix of counters.

  4. Data Collection: Automate the collection of data as much as possible. In all of the standard operating system environments, provision exists to automatically collect periodic data points and store them in text files. These text files can later be imported into spreadsheets or databases for the actual analysis (a minimal collection sketch follows this list). Periodically check to ensure that the data is indeed being collected and stored appropriately. It is also important to name these raw data files with the date and time of the log data for easier stratification later. Failure to do so will almost surely lead to confusion and possibly erroneous conclusions.

  5. Sampling Period: Once you have the basic data collection framework in place, it is important to choose the correct sampling period. It is critical that the sampling period encompasses any cyclical nature of the business. For instance, there may be a heavy volume of transactions during the first Monday of the month or the last Friday of the month. Or it could be that the pattern of usage is different for each day of the week. To complicate matters further, there also may be different usage patterns during different times during the day.

  6. Descriptive Statistics: Decide on the descriptive statistics to be used for data analysis. It is important not to go overboard with your analysis unless it is a complex real-time trading system where every transaction is critical to the business success of the enterprise. For the most part, summary statistics such as the mean, median, and mode, supplemented by basic “normal” analysis such as the standard deviation and confidence levels, are generally sufficient (a minimal analysis sketch appears at the end of this post).

  7. Presentation: All of the analysis will be for naught if one is not able to convince the decision makers on the accuracy and relevance of your findings. Therefore, use simple trend graphs, histograms, Pareto charts, and pie charts to convert the analyzed data into meaningful information.
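To make step 4 concrete, here is a minimal collection sketch. It assumes a Python environment with the psutil library; the counters, sampling interval, and file name are my own illustrative choices, not a prescription from the methodology above.

```python
# Minimal baseline-collection sketch (assumptions: Python 3 with psutil installed;
# the counters, interval, and file name below are illustrative placeholders).
import csv
import datetime
import time

import psutil

INTERVAL_SECONDS = 300  # sample every 5 minutes (tune to your workload characterization)
LOG_FILE = datetime.date.today().strftime("baseline_%Y%m%d.csv")  # date-stamped file, per step 4

with open(LOG_FILE, "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_pct", "mem_pct", "disk_read_bytes", "disk_write_bytes"])
    while True:
        disk = psutil.disk_io_counters()
        writer.writerow([
            datetime.datetime.now().isoformat(timespec="seconds"),
            psutil.cpu_percent(interval=1),   # CPU utilization over a 1-second window
            psutil.virtual_memory().percent,  # memory utilization
            disk.read_bytes,                  # cumulative disk reads
            disk.write_bytes,                 # cumulative disk writes
        ])
        f.flush()
        time.sleep(INTERVAL_SECONDS)
```

The point is only that a passive, periodic logger of a handful of counters is enough to start a baseline; heavier tooling can come later if the analysis justifies it.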

The seven steps discussed above form a simple process that can be carried out with minimal cost. Almost all organizations have access to some sort of spreadsheet software, desktop database software, and rudimentary presentation and graphing software. Creating the measurement framework for the first time will probably require some investment in thinking through the process. Once the measurements are validated, recalibrating the framework is usually straightforward and much less cumbersome.
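For steps 5 through 7, here is an equally minimal analysis and presentation sketch. It assumes pandas and matplotlib are available and that the collector above produced a hypothetical file named baseline_20090217.csv; the counters and chart are placeholders to be adapted to your own data.

```python
# Minimal analysis/presentation sketch (assumptions: pandas and matplotlib installed;
# "baseline_20090217.csv" is a hypothetical log written by the collector above).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("baseline_20090217.csv", parse_dates=["timestamp"]).set_index("timestamp")

# Step 6: descriptive statistics -- mean, standard deviation, and a few percentiles per counter.
print(df[["cpu_pct", "mem_pct"]].describe(percentiles=[0.5, 0.9, 0.95]))

# Step 5/6: resample to hourly averages to expose the peaks and valleys of system usage.
hourly_cpu = df["cpu_pct"].resample("1H").mean()

# Step 7: a simple trend graph to present the findings.
ax = hourly_cpu.plot(title="CPU utilization - hourly average")
ax.set_ylabel("CPU %")
plt.tight_layout()
plt.savefig("cpu_baseline_trend.png")
```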

Friday, February 6, 2009

Components of a compelling technology roadmap

So what are we looking for in a technology roadmap? A good technology roadmap begins with the business side of things. There is no point in talking about changing technology simply for the sake of changing things. There is of course the “cool” factor of playing around with the latest and greatest. But then, mention that to any half-sane CEO and I would be surprised if you are not kicked out of the office.

Therefore, let me begin with the components of a good technology roadmap. Obviously, there is the perfunctory introduction detailing some background information. Once you get past that, you will need to define the scope of the expected change. You will also need to explain what the organization can expect to get out of implementing the roadmap.

A well documented roadmap will have a clearly defined section of business drivers. What is driving the business to ask for change? Is the technology antiquated and not able to satisfy the business need? Has the business significantly changed so that the technology assumptions made earlier are no longer valid? These and other questions will need to be answered in the roadmap.

Next come the technical drivers behind the roadmap. It could be that there is a significant paradigm shift that necessitates a complete overhaul of the technology inventory. It could also be that most of the infrastructure is so significantly outdated or out of warranty that it makes financial sense to look at a complete refresh of the infrastructure.

Closely related to the technical drivers are the technology enablers that strengthen the argument for change. For instance, if the company’s model has moved from a “brick and mortar” model to a “click, brick and mortar” model, then it makes sense to look at infrastructure in the form of web servers, application servers, firewalls, load balancers, etc.

A good roadmap has a clear section assessing the current state of affairs. If you do not know what you have, then you do not know what you lack. With the rapid proliferation of dynamic discovery and probing tools, it is usually a matter of setting up auto-probes that do most of the work for you. Once you have the current state well assessed, focus on the vision of desired state both from a business as well as from a technical perspective.

The gap analysis between the current state and the desired state of the enterprise will form the basis for a cost benefit model. I can assure you no senior executive will consider your request seriously if your roadmap does not include a solid cost benefit analysis.
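Even a back-of-the-envelope calculation goes a long way here. The sketch below shows the kind of arithmetic I have in mind; every figure in it is a made-up placeholder, not data from any real roadmap.

```python
# Back-of-the-envelope cost-benefit sketch for a roadmap business case.
# All figures are hypothetical placeholders.
upfront_cost = 250_000      # e.g. infrastructure refresh plus migration labor
annual_savings = 40_000     # reduced operating and warranty costs after the change
annual_new_value = 120_000  # estimated yearly value of the new capability
horizon_years = 3

annual_benefit = annual_savings + annual_new_value
payback_years = upfront_cost / annual_benefit
roi = (annual_benefit * horizon_years - upfront_cost) / upfront_cost

print(f"Payback period: {payback_years:.1f} years")  # ~1.6 years with these numbers
print(f"{horizon_years}-year ROI: {roi:.0%}")        # ~92% with these numbers
```

Showing the payback period and a multi-year ROI side by side is usually enough to anchor the executive conversation; the detailed financial model can follow once the direction is agreed.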

Of course, no roadmap is complete without including the timeline of proposed change, the resources needed to accomplish the task and other mundane administrivia.

Having a well laid out technology roadmap provides many advantages. Not only does it force you to think of things in a structured fashion, it also provides a compelling case to the folks holding the purse strings.

Thursday, January 29, 2009

I don't get no respect...

So I walk into a conference room full of design engineers and project managers the other day and get a bunch of dirty looks from them. These looks probably meant:

Who is this guy?
What is he doing here?
Who invited him?
Here we go again (sigh)
Ah, here comes the trouble maker (extended sigh)

You enterprise architects out there probably know what I am talking about. I am sure you have encountered situations where you had to repeatedly explain your role in the project. Most design engineers have probably never heard of you, and developers even less so. The other senior managers/directors (usually your peers, really) probably think of you as just another highly paid architect and so expect you to attend to the technical minutiae of every aspect of the project.

Levity aside, the role of an enterprise architect is rarely perceived clearly. To adapt an oft-quoted idiom, “He/she is all things to some people and some things to all people”. Your immediate boss and probably the boss’s boss (the guys who hold your purse strings) probably know what your role is and where you fit into the organization. If they do not, then “Gawd help you”.

So how do you fix this problem? Here are five suggestions:

First, with the help of your boss (and your boss’s boss), put on a road show. Create a reasonably slick PowerPoint (Flash is even better) presentation. Take every opportunity within (and outside) the organization to showcase your abilities, your role and how you are here to help them. Tell them what you can and cannot do, what you will and will not do.

Second, establish your credibility. Always be prepared for your meetings. It is okay to ask basic questions, but ensure that the intent of your questions is well understood. Other folks attending the meeting who were hesitant to ask the basic questions may then “cotton on” to your intentions.

Third, lead “brown bag” lunch discussions on important technical and business topics relevant to your organization. To ensure participation (at least initially), provide a free lunch. Believe me, a free lunch will get them in droves. Slowly but surely, you will see your investment in time and effort paying off.

Fourth, insert yourself into as many meaningful projects and discussions as possible. The earlier in the project cycle, the better it is for your cause. This requires some diligence on your part, and you may get into discussions that end up nowhere. But then, it at least adds to your visibility.

Finally, produce as many artifacts as you can to help the cause of your fellow architects or engineers. No one likes to be sermonized from a pulpit. If you can supplement your ideas with artifacts from your previous job experiences (obviously scrubbed and sanitized for your own protection), it will help to establish you as a person with substance.

I have used these five suggestions in the past and continue to use them today. Remember, having the best wares does not guarantee that they will sell. It is marketing those wares that makes the difference.

Wednesday, January 28, 2009

Enterprise Architecture - three things that can make it succeed

Enterprise Architecture has been an active buzzword for at least the past few years. It has achieved some success in some organizations, but definitely not the total success that most executives have come to expect. But then, that is the nature of the beast.

EA means many things to many people, and that is possibly the fundamental problem in this area. For most senior executives, the expectation is that the role will be filled by a senior architect who is accomplished in quite a few areas. The expectation is also that this person can do the jobs previously done by multiple individuals. There are quite a few others who think hiring an EA is a panacea for all the endemic IT problems within the organization.

Business executives have come to expect the EA to miraculously solve all of these problems within a few months. In fact, they start to expect results within two weeks of a new hire. This high expectation is not entirely their fault. Frequently, IT executives justify the high price of hiring/retaining an EA by painting a rosy scenario where the business and IT live in one happy kingdom under the watchful eye of the wise wizard, the Enterprise Architect.

So what does it take for an EA program to succeed in an organization? First, get the definition right. Does it mean the same thing to the CIO, CTO, CEO, and CFO? Is the hiring manager's definition of an EA the same as that of the incoming or incumbent EA?

Second, get the expectations right. It is important to let the senior officers in both the business and IT areas know what can and cannot be achieved within the realm of EA. Goals need to be broken into short term, medium term, and long term. At the risk of using an oft-repeated cliche, "grab the low-hanging fruit". There are probably more naysayers in the organization than there are champions, and therefore it is important to have a few early successes to establish the credibility of the program.

Third, give the EA some teeth. Most organizations make the mistake of making the EA an individual contributor with no real authority to override the design architects in critical decision-making processes. While it is important that the EA not unnecessarily put roadblocks into the implementation of projects (or even steady-state operations), he/she must be able to throw a "hissy fit" when absolutely required. While most senior IT executives do "talk the talk" under day-to-day conditions, very few "walk the talk" under distress conditions. By distress conditions, I mean situations of system breakdown. I certainly do not mean that everyone needs to sit in a room and espouse the cause of structured architecture and design analysis when all hell is breaking loose in the business wing of the floor. Yes, the systems need to come up first. That is the number one priority, no matter what. And most IT organizations do a decent job of getting things back up as soon as they possibly can. But here is where most organizations fail: they bask so much in the reflected glory of resumed operations that they fail to pay heed to the crux of the problem.

These three aspects, by themselves, are obviously not enough to sustain an EA program, but they are a good start to a long and fruitful relationship between business and IT.

About Me

Sree Sundaram is a Sr. Director of Enterprise Architecture at a major global technology firm. He is currently engaged at two major international biotechnology firms in the optimization and migration of infrastructure from their current platform to a newer technological platform that is in line with their current and future business needs. Sree has solid experience in understanding the needs of both middle and top-level management and has the ability to communicate at both levels. He is fundamentally aware that the transactional, short-term needs of middle management differ from the long-term vision of top-level management, and he has successfully dealt with such issues by providing an IT framework that meets both sets of needs. In general, Sree helps to prioritize competing initiatives using a combination of his acumen, communication skills, and strategic and operational plans.