THE STUDENT SYNDROME AND CRITICAL CHAIN

By ROHINI DUTTA
Student syndrome refers to the phenomenon in which many people start to apply themselves fully to a task only at the last possible moment before a deadline. This wastes any buffers built into individual task duration estimates.



The student syndrome is a form of procrastination, but one that begins with a plan and good intentions. For example, if a student or group of students asks a professor for an extension to a deadline, they will usually defend the request by noting how much better their project will be given more time to work on it; they ask with all the right intentions. In reality, most students will have other tasks or events place demands on the time they fully intended to commit to improving their paper or project. In the end they often find themselves close to the situation they started in, wishing they had more time as the new, delayed deadline approaches.

This same behaviour is seen in businesses. In project and task estimating, a time or resource buffer is applied to each task to allow for overruns or other scheduling problems. With student syndrome, however, tasks are started at the latest possible moment, so the buffer for any given task is wasted up front rather than kept in reserve. Like students, many workers do not complete assignments early, but wait until the last minute before starting, often having to rush to submit their work minutes before the deadline. A similar phenomenon is seen every year in the United States when personal tax returns are due - Post Offices remain open until midnight on the final day as people queue to get their tax returns postmarked.



CRITICAL CHAIN AND ITS CONNECTION


With traditional project management methods, roughly 30% of a project's time and resources are typically consumed by wasteful practices such as bad multi-tasking, student syndrome, in-box delays, and lack of prioritization.

In project management, the critical chain is the sequence of both precedence- and resource-dependent terminal elements that prevents a project from being completed in a shorter time, given finite resources. If resources are always available in unlimited quantities, then a project's critical chain is identical to its critical path.
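To make the distinction concrete, the sketch below (in Python, with hypothetical tasks and a deliberately simple greedy scheduler, not a full CCPM tool) schedules the same four tasks twice: once ignoring resources, which yields the critical path, and once with a single person per resource type, which yields the longer critical chain.

from collections import defaultdict

tasks = {
    # name: (duration in days, resource needed, predecessor tasks)
    "T1": (5, "Red",  []),
    "T2": (5, "Red",  []),
    "T3": (5, "Blue", ["T1"]),
    "T4": (5, "Blue", ["T2"]),
}

def schedule(respect_resources: bool) -> int:
    """Return the project finish time under a simple earliest-start rule."""
    finish = {}                       # task -> finish time
    resource_free = defaultdict(int)  # resource -> time it becomes free
    while len(finish) < len(tasks):
        for name, (dur, res, preds) in tasks.items():
            if name in finish or any(p not in finish for p in preds):
                continue
            start = max((finish[p] for p in preds), default=0)
            if respect_resources:
                start = max(start, resource_free[res])  # wait for the person
            finish[name] = start + dur
            resource_free[res] = max(resource_free[res], finish[name])
    return max(finish.values())

print("Critical path length (unlimited resources):", schedule(False), "days")
print("Critical chain length (one Red, one Blue): ", schedule(True), "days")

With unlimited resources both runs give 10 days; because T1 and T2 share the single "Red" resource, the resource-feasible schedule stretches to 15 days, and it is this longer chain that CCPM protects with buffers.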

Critical chain is used as an alternative to critical path analysis. The main features that distinguish the critical chain from the critical path are:

1. The use of (often implicit) resource dependencies. "Implicit" means that they are not included in the project network but have to be identified by examining the resource requirements.
2. Lack of a search for an optimum solution. A "good enough" solution is sufficient because:
   a. As far as is known, there is no analytical method of finding an absolute optimum (i.e. the overall shortest critical chain).
   b. The inherent uncertainty in estimates is much greater than the difference between the optimum and near-optimum ("good enough") solutions.
3. The identification and insertion of buffers:
   * project buffer
   * feeding buffers
   * resource buffers.
4. Monitoring project progress and health by watching the consumption rate of the buffers rather than individual task performance against schedule.

CCPM aggregates the large amounts of safety time added to individual tasks and subprojects into project buffers in order to protect due-date performance, and to avoid wasting this safety time through bad multitasking, student syndrome, Parkinson's Law and poorly synchronised integration.

Critical chain project management uses buffer management instead of earned value management to assess the performance of a project. Some project managers feel that the earned value management technique is misleading, because it does not distinguish progress on the project constraint (i.e. on the critical chain) from progress on non-constraints (i.e. on other paths). Event chain methodology can be used to determine the size of project, feeding, and resource buffers.


METHODOLOGY

Planning

A project plan is created in much the same fashion as with critical path. The plan is worked backward from a completion date, with each task starting as late as possible. Two durations are entered for each task: a "best guess", or 50% probability duration, and a "safe" duration, which should have a higher probability of completion (perhaps 90% or 95%, depending on the amount of risk the organization can accept).

Resources are then assigned to each task, and the plan is resource-leveled using the 50% estimates. The longest sequence of resource-leveled tasks that leads from beginning to end of the project is then identified as the critical chain. The justification for using the 50% estimates is that half of the tasks will finish early and half will finish late, so that the variance over the course of the project should be zero.

Recognizing that tasks are more likely to take more rather than less time due to Parkinson's Law, Student's Syndrome, or other reasons, "buffers" are used to establish dates for deliverables and for monitoring project schedule and financial performance. The "extra" duration of each task on the critical chain—the difference between the "safe" durations and the 50% durations—is gathered together in a buffer at the end of the project. In the same way, buffers are gathered at the end of each sequence of tasks that feed into the critical chain.
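As a rough illustration of the buffer sizing described above, the following sketch (hypothetical task names and durations) aggregates the difference between each critical-chain task's "safe" estimate and its 50% estimate into a single project buffer. In practice CCPM implementations often size the buffer at only a fraction of this aggregated safety; the sketch follows the simpler description given here.

critical_chain = [
    # (task, 50% "best guess" estimate in days, "safe" ~90% estimate in days)
    ("Design", 10, 16),
    ("Build",  20, 32),
    ("Test",    8, 13),
]

chain_length_50 = sum(p50 for _, p50, _ in critical_chain)
project_buffer  = sum(safe - p50 for _, p50, safe in critical_chain)

print(f"Critical chain at 50% estimates:     {chain_length_50} days")
print(f"Project buffer (aggregated safety):  {project_buffer} days")
print(f"Committed completion:                day {chain_length_50 + project_buffer}")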

Finally, a baseline is established, which enables financial monitoring of the project.


Execution

When the plan is complete and the project is ready to kick off, the project network is fixed and the buffer sizes are "locked" (i.e. their planned durations may not be altered during the project), because they are used to monitor project schedule and financial performance.

With no slack in the duration of individual tasks, the resources on the critical chain are exploited by ensuring that they work on the critical chain task and nothing else; bad multitasking is eliminated. An analogy is drawn in the literature with a relay race. The critical chain is the race, and the resources on the critical chain are the runners. When they are running their "leg" of the project, they should be focused on completing the assigned task as quickly as possible, with no distractions or multitasking. In some case studies, actual batons are reportedly hung by the desks of people when they are working on critical chain tasks so that others know not to interrupt. The goal, here, is to overcome the tendency to delay work or to do extra work when there seems to be time.

Because task durations have been planned at the 50% probability duration, there is pressure on the resources to complete critical chain tasks as quickly as possible, overcoming student's syndrome and Parkinson's Law.


Monitoring

Monitoring is, in some ways, the greatest advantage of the Critical Chain method. Because individual tasks will vary in duration from the 50% estimate, there is no point in trying to force every task to complete "on time;" estimates can never be perfect. Instead, we monitor the buffers that were created during the planning stage. A fever chart or similar graph can be easily created and posted to show the consumption of buffer as a function of project completion. If the rate of buffer consumption is low, the project is on target. If the rate of consumption is such that there is likely to be little or no buffer at the end of the project, then corrective actions or recovery plans must be developed to recover the loss. When the buffer consumption rate exceeds some critical value (roughly: the rate where all of the buffer may be expected to be consumed before the end of the project, resulting in late completion), then those alternative plans need to be implemented.
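A minimal sketch of that buffer-monitoring logic is shown below; the green/yellow/red thresholds and the example numbers are illustrative assumptions, not values prescribed by the method.

def buffer_status(chain_complete: float, buffer_consumed: float,
                  watch: float = 1.0, act: float = 1.5) -> str:
    """chain_complete and buffer_consumed are fractions in [0, 1]."""
    if chain_complete == 0:
        return "red" if buffer_consumed > 0 else "green"
    burn_ratio = buffer_consumed / chain_complete
    if burn_ratio < watch:
        return "green"      # buffer consumed more slowly than progress is made
    if burn_ratio < act:
        return "yellow"     # prepare recovery plans
    return "red"            # implement recovery plans

print(buffer_status(chain_complete=0.40, buffer_consumed=0.25))  # green
print(buffer_status(chain_complete=0.40, buffer_consumed=0.55))  # yellow
print(buffer_status(chain_complete=0.40, buffer_consumed=0.70))  # red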




Theory of Constraints (TOC) is an overall management philosophy that aims to continually achieve more of the goal of a system. If that system is a for-profit business, then the goal is to make more money, both now and in future. TOC consists of two primary collections of work: 1) The five focusing steps and their application to operations; 2) The Thinking Processes and their application to project management and human behavior.

According to TOC, every organization has - at any given point in time - one key constraint which limits the system's performance relative to its goal (see Liebig's Law of the Minimum). These constraints can be broadly classified as either internal constraints or market constraints. In order to manage the performance of the system, the constraint must be identified and managed correctly (according to the Five Focusing Steps). Over time the constraint may change (e.g. because the previous constraint was managed successfully, or because of a changing environment), and the analysis starts anew.
 

WHY FIRING YOUR CUSTOMER ISN'T SUCH A GREAT THING

By ROHINI DUTTA
Fire your bad customers.

That piece of advice has become widely accepted in recent years as companies have sought to manage their relationships with customers in more sophisticated ways. The rationale for this idea is clear-cut: Low-value customers -- such as the ones who hardly spend any money on your services or products yet tie up your company's phone lines with questions and complaints -- end up costing more money than they provide. So why not jettison them and focus your customer-relationship efforts on more profitable individuals? Or, as an alternative, why not at least try to increase the worth of the low-value customers to your firm? If a firm has only valuable customers, the thinking goes, its profitability and shareholder value should increase.

It all sounds quite rational, and many corporations have jumped on the bandwagon. But a new study by two Wharton marketing professors, Jagmohan Raju and Z. John Zhang, and Wharton doctoral student Upender Subramanian, cautions that firing low-value customers may actually decrease firm profits and that trying to increase the value of these customers may be counterproductive.

The notion that firing unprofitable customers is a smart thing to do has emerged out of the broad acceptance of a practice usually referred to as Customer Relationship Management (CRM). With CRM, firms often use information technology to quantify the value of individual customers and provide better privileges, discounts or other inducements to customers identified as having high-value. In their study, Raju and Zhang have coined the term Customer Value-based Management (CVM) to describe this central component of CRM. These customer analyses have often shown that a small proportion of customers contribute to a large percentage of profits, and that many customers are unprofitable.

Financial institutions are perhaps best known for treating low-value customers differently from good ones. For instance, bad customers at Fidelity Investments are made to wait longer in queues to have their calls taken by call centers, according to examples cited in the study. But many other types of firms have embraced CRM and are giving low-value customers the cold shoulder. Continental Airlines e-mails only its high-value customers, apologizing for flight delays and compensating them with frequent-flier miles. At Harrah's, room rates range from zero to $199 per night, depending on customer value. Some firms fire customers outright. In July 2007, CNN reported that Sprint had dropped about 1,000 customers who were calling the customer-care center too frequently -- 40 to 50 times more than the average customer every month over an extended period.

In the study, "Customer Value-based Management: Competitive Implications," Zhang, Raju and Subramanian break ground by analyzing CVM in the context of a competitive environment. The researchers acknowledge that firing bad customers may make some sense in industries where there is little or no competition. If a firm treats all customers equally, the argument goes, not only does the company waste resources on attracting and retaining unprofitable customers, it also under-serves profitable customers, who may become unhappy and leave.

Targets for Poachers

However, for the overwhelming majority of companies operating in a competitive environment, firing low-value customers can be counterproductive, the researchers conclude. The key reason: Companies that rid themselves of low-value customers -- or take steps to turn low-value customers into high-value ones -- leave themselves open to successful poaching by competitors. If the competition knows that you have fired many or all of your low-value customers, they are likely to intensify their efforts to take your remaining customers away from you because they now know that all, or most, of those remaining customers are of the high-value variety.

"Over time, companies have acquired a lot of capabilities in processing customer information," Zhang says. "They have all sorts of analytics to do data mining and to figure out how to use that data. One thing companies have done is to figure out who are their profitable customers, and they have concluded that firing low-value customers is a good idea. The problem, however, is that while this idea seems to make sense, it only makes sense in situations where there is no competition, which is very unlike the real world. Our paper looks at how CVM affects companies competing with one another."

"What our analysis tells us is companies make money, in part, by confusing their competitors about their customers," Raju says. "If you make your customer base transparent by firing your low-value customers, competitors will hit you hard because you will be left with customers of one type.'

Instead of firing unprofitable customers, some companies have tried to turn them into high-value customers by giving them inducements to change their behavior, such as teaching them to spend more or to use low-cost support channels. But the Wharton researchers found that this idea is also wrongheaded. "If you make low-value customers more valuable, this can also be counter-productive because it also encourages your competitors to poach more intensely," Raju says.

So what is the proper way to manage relationships with low- and high-value customers? "Our research finds that a better approach is to improve the quality of your high-end customers at the same time that you keep your low-end customers, but you should find other, cheaper, ways to manage the low-value customers, such as encouraging them to use automated phone-response systems or the Internet or offering minimal discounts or other benefits," says Raju. "You have to keep your competition confused about who your good and bad customers are."

CVM has enjoyed significant support amongst corporations, researchers and others because its logic seems so compelling. But CVM, once adopted, has often proved disappointing. Studies have shown, for instance, that the retail banking industry, while investing billions of dollars in CVM, has been unenthusiastic about its bottom-line results, according to the Wharton paper.

"One reason why actual results differ from expected outcomes could be that, hitherto, researchers and industry experts have by and large looked at firms in isolation without considering competitive reactions," the Wharton scholars write. In their paper, the researchers provide the first theoretical analysis of CVM practices when CVM capabilities are potentially available to all firms in an industry. The researchers set up a mathematical model and applied game theory to see how two competing firms, each with the same size base of customers called 'Good' and 'Poor', would compete for customers by offering various inducements. Among other things, the model assumes that the firms have access to the same CVM technology, that the firms are equally efficient in offering inducements and that each firm can identify its customers.

Another finding: Firms in an industry may become worse off as CVM becomes more affordable. Hence, they have an incentive to self-regulate their ability to collect or use customer information. "In some cases, CVM can do damage to an industry," notes Zhang. "Say you and I are competitors. We both have good information and we continue to poach each other's customers. This is high-tech marketing warfare. If the cost of CVM increases, it's not necessarily a bad thing. It's like when armies fight each other with high-cost ammunition: When the cost increases, both sides have less of it, and fighting subsides. But if the cost of ammunition drops, the armies have more ammunition and fighting intensifies. So there's an incentive for companies to get together in industries and agree to use certain kinds of information."

CVM vs. Targeted Pricing

The Wharton researchers stress that it is important to understand that CVM is different from another concept that has taken root in many companies in recent years -- targeted pricing. With targeted pricing, firms differentiate between customers based on their willingness to pay and charge a higher price to those who are relatively price insensitive. In this respect, a high-value customer is one who can bear a higher price; put another way, under targeted pricing the high-value customer is actually treated worse. By contrast, under CVM a customer may be of high value due to other characteristics, such as the kinds of goods purchased, the number of times a product is returned to the seller, and the number of times the customer requests customer support. Hence, under CVM, a high-value customer would typically receive lower prices or better service than a low-value customer.

The researchers say that, in future studies, they will continue to explore CVM. They want to analyze such topics as how customer value can be more accurately measured, how it can be enhanced, and in which industries CVM could prove most valuable. In the meantime, they say their new study should help convince firms to reconsider the notion that firing bad customers is a smart decision.

"What we'd like readers to take away from our paper is that just 'cleaning up' your customer base is not good enough," Raju says. "You should focus on good customers and try to improve their quality and not just try to get rid of the bad ones. Firms should find cheaper ways to keep low-value customers because they are confusing your competition to your advantage and there's a chance someday that they will become good customers."
 

HOW TO EVALUATE A STRATEGY

By ROHINI DUTTA
Is your strategy right for you? There are six criteria on which to base an answer.
These are:
1. Internal consistency.
2. Consistency with the environment.
3. Appropriateness in the light of available resources.
4. Satisfactory degree of risk.
5. Appropriate time horizon.
6. Workability.

1. Is the Strategy Internally Consistent?

Internal consistency means consistency with corporate goals: each policy should fit into an integrated pattern, and each should be judged by how it relates to the other policies the company has established and to the goals it is pursuing.

2. Is the Strategy Consistent with the Environment?

An important test of strategy is whether the chosen policies are consistent with the environment - whether they really make sense with respect to what is going on outside.
Consistency with the environment has both a static and a dynamic aspect. In a static sense, it implies judging the efficacy of policies with respect to the environment as it exists now. In a dynamic sense, it means judging the efficacy of policies with respect to the environment as it appears to be changing.
In one sense, therefore, establishing a strategy is like aiming at a moving target: you have to be concerned not only with the present position but also with the speed and direction of movement.

3. Is the strategy Appropriate in View of the Available Resources?

Resources are those things that a company is or has that help it achieve its corporate objectives, such as money, competence, and facilities. There are two basic issues which management must decide in relating strategy and resources:
• What are our critical resources?
• Is the proposed strategy appropriate for the available resources?

Critical Resources –
Critical resources are those that represent action potential - the company's capacity to respond to threats and opportunities perceived in the environment. They are the factors limiting the achievement of corporate goals and the ones the company will exploit as the basis for its strategy. The three resources most frequently identified as critical are money, competence, and physical facilities. Let us look at the strategic significance of each.

Money: Money is a particularly valuable resource because it provides the greatest flexibility of response to events as they arise. It is considered the "safest" resource, in that safety may be equated with the freedom to choose from among the widest variety of future alternatives. Companies that wish to reduce their short-run risk will therefore attempt to accumulate the greatest reservoir of funds they can.

Competence:
Organizations survive because they are good at doing the things that are necessary to keep them alive. Some companies are especially good at marketing, others at engineering; still others depend primarily on their financial sophistication. In determining a strategy, management must determine where its strengths and weaknesses lie. It must then adopt a strategy that makes the greatest use of its strengths.

Physical Facilities:
Physical facilities have significance primarily in relation to overall corporate strategy. Any appraisal of a company's physical facilities as a strategic resource must therefore consider the relationship of the company to its environment. Facilities have no intrinsic value for their own sake; their value to the company lies either in their location relative to markets and to sources of supply, or in their cost relative to existing or impending competitive installations.

Achieving the Right Balance:
One of the most difficult issues in strategy determination is that of achieving a balance between strategic goals and available resources. The most common errors are either to fail to make these estimates at all or to be excessively optimistic about them.

4. Does the Strategy Involve an Acceptable Degree of Risk?
Strategy and resources, taken together, determine the degree of risk the company is undertaking, and this is a critical managerial choice. Each company must decide for itself how much risk it wants to live with. Some qualitative factors to use in evaluating the degree of risk are:
• The amount of resources (on which the strategy is based) whose continued existence or value is not assured.
• The length of the time periods to which resources are committed.
• The proportion of resources committed to a single venture.
The greater these quantities, the greater the risk that is involved.

5. Does the Strategy Have an Appropriate Time Horizon?
A viable strategy reveals not only what goals are to be accomplished but also when they are to be achieved. Goals, like resources, have time-based utility.

6. Is the Strategy Workable?
It would seem that the simplest way to evaluate a corporate strategy is simply to ask: does it work? However, if we try to answer that question, we are immediately faced with a search for criteria. What is the evidence of a strategy "working"? Quantitative indexes of performance are a good start, but they really measure the influence of two critical factors combined: the strategy selected and the skill with which it is being executed. Faced with a failure to achieve anticipated results, both of these influences must be critically examined.
 

BRANDING BASICS

By ROHINI DUTTA
BRAND EQUITY
is the value that customers and prospects perceive in a brand. It is measured by how much trust a customer has in the brand. The value of a company's brand equity can be calculated by comparing the expected future revenue from the branded product with the expected future revenue from an equivalent non-branded product. The difference, usually realized as additional profit, reflects how much customers trust the brand and how much they are willing to pay above the price of competing brands with lower perceived value. This calculation is at best an approximation. The value can comprise both tangible, functional attributes (e.g. twice the cleaning power or half the fat) and intangible, emotional attributes (e.g. the brand for people with style and good taste).
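As a back-of-the-envelope illustration of that calculation, the sketch below (with hypothetical cash flows and an assumed discount rate) discounts the expected future profit of a branded product and of an equivalent non-branded product and takes the difference as an approximation of brand equity.

def present_value(cashflows, rate):
    """Discount a list of yearly cash flows at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

rate = 0.10  # assumed discount rate

branded_profit   = [120, 125, 130, 135, 140]  # expected yearly profit, branded product
unbranded_profit = [100, 100, 100, 100, 100]  # equivalent non-branded product

brand_equity = present_value(branded_profit, rate) - present_value(unbranded_profit, rate)
print(f"Approximate brand equity: {brand_equity:.1f}")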
COMPONENTS OF BRAND EQUITY
BRAND LOYALTY
BRAND AWARENESS
PERCEIVED QUALITY
BRAND ASSOCIATIONS
OTHER PROPRIETARY BRAND ASSETS




Positivity


Brand equity cannot be negative. Positive brand equity is created by effective marketing - advertising, PR and promotion in all forms - and by the ability of the brand's performance to consistently maintain customer relationships, in a word, trust.
The greater a company's brand equity, the greater the probability that the company will use a family branding strategy rather than an individual branding strategy. This is because family branding allows them to leverage the equity accumulated in the core brand.


Brand energy

is a concept that links together the ideas that the brand is experiential; that it is not just about the experiences of customers/potential customers but all stakeholders; and that businesses are essentially more about creating value through creating meaningful experiences than generating profit. Economic value comes from businesses’ transactions between people whether they be customers, employees, suppliers or other stakeholders. For such value to be created people first have to have positive associations with the business and/or its products and services and be energised to behave positively towards them—hence brand energy. It has been defined as "The energy that flows throughout the system that links businesses and all their stakeholders and which is manifested in the way these stakeholders think, feel and behave towards the business and its products or services."


Attitude Branding

Attitude branding is the choice to represent a feeling, which is not necessarily connected with the product or consumption of the product at all. Marketing labeled as attitude branding includes that of Nike, Starbucks, The Body Shop, Safeway, and Apple Inc.
"A great brand raises the bar – it adds a greater sense of purpose to the experience, whether it's the challenge to do your best in sports and fitness, or the affirmation that the cup of coffee you're drinking really matters." — Howard Schultz (CEO, Starbucks Corp.)


Brand monopoly

In economic terms the "brand" is a device to create a monopoly—or at least some form of "imperfect competition"—so that the brand owner can obtain some of the benefits which accrue to a monopoly, particularly those related to decreased price competition. In this context, most "branding" is established by promotional means. There is also a legal dimension, for it is essential that the brand names and trademarks are protected by all means available. The monopoly may also be extended, or even created, by patent, copyright, trade secret (e.g. secret recipe), and other sui generis intellectual property regimes (e.g.: Plant Varieties Act, Design Act).
In all these contexts, retailers' "own label" brands can be just as powerful. The "brand", whatever its derivation, is a very important investment for any organization. RHM (Rank Hovis McDougall), for example, have valued their international brands at anything up to twenty times their annual earnings. Often, especially in the industrial sector, it is just the company's name which is promoted (leading to one of the most powerful statements of "branding"; the saying, before the company's downgrading, "No-one ever got fired for buying IBM").


Brand extension

An existing strong brand name can be used as a vehicle for new or modified products; for example, many fashion and designer companies extended brands into fragrances, shoes and accessories, home textile, home decor, luggage, (sun-) glasses, furniture, hotels, etc. Mars extended its brand to ice cream, Caterpillar to shoes and watches, Michelin to a restaurant guide, Adidas and Puma to personal hygiene.
There is a difference between brand extension and line extension. When Coca-Cola launched "Diet Coke" and "Cherry Coke", it stayed within the originating product category: non-alcoholic carbonated beverages. Procter & Gamble (P&G) did likewise, extending its strong lines (such as Fairy Soap) into neighboring products (Fairy Liquid and Fairy Automatic) within the same category, dishwashing detergents.


Multiple brands

In a market fragmented amongst many brands, a supplier can choose to launch new brands apparently competing with its own, extant strong brand (and often with an identical product), simply to obtain a greater share of the market that would otherwise go to minor brands. The rationale is that having 3 out of 12 brands in such a market will garner a greater overall share than having 1 out of 10 (even if much of the share of these new brands is taken from the existing one). In its most extreme manifestation, a supplier pioneering a new market which it believes will be particularly attractive may choose immediately to launch a second brand in competition with its first, in order to pre-empt others entering the market.
Individual brand names naturally allow greater flexibility by permitting a variety of different products, of differing quality, to be sold without confusing the consumer's perception of what business the company is in or diluting higher quality products.
Once again, Procter & Gamble is a leading exponent of this philosophy, running as many as ten detergent brands in the US market. This also increases the total number of "facings" it receives on supermarket shelves. Sara Lee, on the other hand, uses it to keep the very different parts of the business separate—from Sara Lee cakes through Kiwi polishes to L'Eggs pantyhose. In the hotel business, Marriott uses the name Fairfield Inns for its budget chain (and Ramada uses Rodeway for its own cheaper hotels).
Cannibalization is a particular problem of a "multibrand" approach, in which the new brand takes business away from an established one which the organization also owns. This may be acceptable (indeed to be expected) if there is a net gain overall. Alternatively, it may be the price the organization is willing to pay for shifting its position in the market; the new product being one stage in this process.
Abercrombie & Fitch is a multi-brand company, rolling out lifestyle brands.


Small business brands

Some people argue that it is not possible to brand a small business. However, many small businesses have become very successful due to branding. For example, Starbucks used almost no advertising, yet over a period of ten years developed such a strong brand that the company expanded from one shop to hundreds.


Own (Private Label) brands and generics

With the emergence of strong retailers, the "own brand", the retailer's own branded product (or service), emerged as a major factor in the marketplace. Where the retailer has a particularly strong identity, such as, in the UK, Marks & Spencer in clothing, this "own brand" may be able to compete against even the strongest brand leaders, and may dominate those markets which are not otherwise strongly branded. There was a fear that such "own brands" might displace all other brands (as they have done in Marks & Spencer outlets), but the evidence is that—at least in supermarkets and department stores—consumers generally expect to see on display something over 50 per cent (and preferably over 60 per cent) of brands other than those of the retailer. Indeed, even the strongest own brands in the United Kingdom rarely achieve better than third place in the overall market. In the US, Target has "own" brands of "Market Pantry" and "Archer Farms" each with unique packaging and placement.
The strength of the retailers has, perhaps, been seen more in the pressure they have been able to exert on the owners of even the strongest brands (and in particular on the owners of the weaker third and fourth brands). Relationship marketing has been applied most often to meet the wishes of such large customers (and indeed has been demanded by them as recognition of their buying power). Some of the more active marketers have now also switched to 'category marketing'—in which they take into account all the needs of a retailer in a product category rather than more narrowly focusing on their own brand.
At the same time, generics (that is, effectively unbranded goods) have also emerged. These made a positive virtue of saving the cost of almost all marketing activities, emphasizing the lack of advertising and, especially, the plain packaging (which was, however, often simply a vehicle for a different kind of image). It would appear that the penetration of such generic products peaked in the early 1980s, and most consumers still seem to be looking for the qualities that the conventional brand provides.


Brand architecture

is the structure of brands within an organizational entity. It is the way in which the brands within a company’s portfolio are related to, and differentiated from, one another. The architecture should define the different leagues of branding within the organisation; how the corporate brand and sub-brands relate to and support each other; and how the sub-brands reflect or reinforce the core purpose of the corporate brand to which they belong.


Types of brand architecture

There are three generic relationships between a master brand and sub-brands:
• Monolithic brand or Branded house - Examples include Virgin Group, Red Cross or Oxford University. These brands use a single name across all their activities and this name is how they are known to all their stakeholders – consumers, employees, shareholders, partners, suppliers and other parties.
• Endorsed brands - Like Nestle’s KitKat, Sony PlayStation or Polo by Ralph Lauren. The endorsement of a parent brand should add credibility to the endorsed brand in the eyes of consumers. This strategy also allows companies who operate in many categories to differentiate their different product groups’ positioning.
• Product brand or House of brands - Like Procter & Gamble’s Pampers or Henkel’s Persil. The individual sub-brands are offered to consumers, and the parent brand gets little or no prominence. Other stakeholders, like shareholders or partners, know the company by its parent brand.


Brand extension or brand stretching

is a marketing strategy in which a firm that markets a product with a well-developed image uses the same brand name in a different product category. Organisations use this strategy to increase and leverage brand equity (the net worth and long-term sustainability that derive from the renowned name). An example of a brand extension is Jell-O gelatin creating Jell-O Pudding Pops. The strategy increases awareness of the brand name and increases profitability from offerings in more than one product category.
A brand's "extendibility" depends on how strong consumer's associations are to the brand's values and goals. Ralph Lauren's Polo brand successfully extended from clothing to home furnishings such as bedding and towels. Both clothing and bedding are made of linen and fulfill a similar consumer function of comfort and homeliness. Arm & Hammer leveraged its brand equity from basic baking soda into the oral care and laundry care categories. By emphasizing its key attributes, the cleaning and deodorizing properties of its core product, Arm & Hammer was able to leverage those attributes into new categories with success. Another example is Virgin Group, which has extended its brand from from transportation (aeroplanes, trains) to games stores and video stores such a Virgin Megastores.


Product extensions

are versions of the same parent product that serve a segment of the target market and increase the variety of an offering. An example of a product extension is Coke vs. Diet Coke within the same product category of soft drinks. This tactic is undertaken because of the brand loyalty and brand awareness the parent brand enjoys: consumers are more likely to buy a new product that carries a tried and trusted brand name. The market is thereby catered for, since consumers receive a product from a brand they trust, and Coca-Cola is catered for, since it can increase its product portfolio and gain a larger hold over the market in which it operates.


Types of product extension

Brand extension research mainly focuses on consumer evaluation of extensions and attitudes toward the parent brand. Aaker and Keller's (1990) model provides sufficient depth and breadth to examine consumer behaviour within a conceptual framework. They use three dimensions to measure the fit of an extension. First, "Complement" means that consumers treat the two product classes (the extension and the parent-brand product) as complements in satisfying a specific need. Second, "Substitute" indicates that the two products share the same usage situation and satisfy the same needs, meaning the product classes are similar enough to replace each other. Finally, "Transfer" is the relationship between the extension product and the manufacturer, which "reflects the perceived ability of any firm operating in the first product class to make a product in the second class". The first two measures focus on consumer demand; the last focuses on the firm's ability.
Beyond line extension and brand extension, there are many other forms of extension, such as "brand alliance", co-branding, or "brand franchise extension". Tauber (1988) suggests seven strategies for identifying extension cases, such as a product carrying the parent brand's benefit, or the same product at a different price or quality. His suggestions can be classified into two categories of extension: extensions based on product-related associations and those based on non-product-related associations. Another form of brand extension is the licensed brand extension, in which the brand owner works with a partner (sometimes a competitor) who takes on responsibility for manufacturing and selling the new products, paying a royalty every time a product is sold.


Categorisation theory
Researchers tend to use "categorisation theory" as the fundamental theory for exploring the links involved in brand extension. When consumers face thousands of products, they are not only initially confused and disordered in mind, but also try to categorise brand associations or images against their existing memory. When two or more products appear in front of consumers, they may reposition memories to frame a brand image and concept toward the new introduction. A consumer can judge or evaluate the extension against this category memory: new information is categorised under a specific brand or product-class label and stored. This process is related not only to the consumer's experience and knowledge, but also to involvement and brand choice. If the brand association is highly related to the extension, the consumer will perceive a fit between the brand and the extension. Some studies suggest that consumers may ignore or overcome the dissonance from an extension, especially for a flagship product, meaning that a low perceived fit does not dilute the flagship's equity.
 

FOOD GROCERY RETAIL

By ROHINI DUTTA
 

INNOVATION

By ROHINI DUTTA
INNOVATION is typically understood as the introduction of something new and useful, for example introducing new methods, techniques, or practices or new or altered products and services.

TYPES OF INNOVATION
Scholars have identified a variety of classifications for types of innovation. Here is an unordered, ad hoc list of examples:

Business model innovation
involves changing the way business is done in terms of capturing value.

Marketing innovation
is the development of new marketing methods with improvement in product design or packaging, product promotion or pricing.

Organizational innovation
involves the creation or alteration of business structures, practices, and models, and may therefore include process, marketing and business model innovation.

Process innovation
involves the implementation of a new or significantly improved production or delivery method.

Product innovation
involves the introduction of a new good or service that is new or substantially improved. This might include improvements in functional characteristics, technical abilities, ease of use, or any other dimension.

Service innovation
refers to service product innovation which, compared with goods product innovation or process innovation, may involve less technological advance but be more interactive and information-intensive.

Supply chain innovation
covers innovations in the sourcing of input products from suppliers and the delivery of output products to customers.

Substantial innovation
introduces a different product or service within the same line, such as the movement of a candle company into marketing the electric lightbulb.

Financial innovation
through which new financial services and products are developed by combining basic financial attributes (ownership, risk-sharing, liquidity, credit) in progressively innovative ways, as well as by reactively exploring the borders and strength of tax law. Through a cycle of development, regulatory compliance is sharpened against new opportunities, so new financial services and products are continuously shaped, refined and adopted. Because business processes, services and products are all adapted and improved as new value chains emerge, this dynamic spectrum of financial innovation may be seen to involve most of the above-mentioned types of innovation.
Incremental innovation
is a step forward along a technology trajectory, from the known toward the unknown, with little uncertainty about outcomes and success. It generally consists of minor improvements made by those working day to day with existing methods and technology (both process and product) in response to short-term goals. Most innovations are incremental. As a value-added business process, it involves making minor changes over time to sustain a company's growth without sweeping changes to the product lines, services, or markets in which it currently competes.

Breakthrough, disruptive or radical innovation

involves launching an entirely novel product or service rather than providing improved products and services along the same lines as at present. Because of the uncertainty involved, companies seldom achieve their breakthrough goals this way, but when breakthrough innovation does work the rewards can be tremendous. It involves larger leaps of understanding, perhaps demanding a new way of seeing the whole problem, and probably entails a much larger risk than many of the people involved are comfortable with. There is often considerable uncertainty about future outcomes; there may be considerable opposition to the proposal, and questions may be raised about its ethics, practicality or cost. People may even question whether it is, or is not, an advance of a technology or process.

Radical innovation
involves considerable change in basic technologies and methods, created by those working outside mainstream industry and outside existing paradigms. It is sometimes very hard to draw a line between radical and breakthrough innovation.

New technological systems (systemic innovations)
that may give rise to new industrial sectors, and induce major change across several branches of the economy.

Social innovation
has a number of different definitions, but predominantly refers either to innovations that aim to meet a societal need or to the social processes used to develop an innovation.


DIFFUSION OF INNOVATION




Once innovation occurs, innovations may spread from the innovator to other individuals and groups. This process has been studied extensively in the scholarly literature from a variety of viewpoints, most notably in Everett Rogers' classic book, Diffusion of Innovations. However, this 'linear model' of innovation has been substantially challenged by scholars over the last 20 years, and much research has shown that the simple invention-innovation-diffusion model does not do justice to the multilevel, non-linear processes that firms, entrepreneurs and users participate in to create successful and sustainable innovations.

Rogers proposed that the life cycle of innovations can be described using the 's-curve' or diffusion curve. The s-curve maps growth of revenue or productivity against time. In the early stage of a particular innovation, growth is relatively slow as the new product establishes itself. At some point customers begin to demand the product and growth increases more rapidly. New incremental innovations or changes to the product allow growth to continue. Towards the end of its life cycle, growth slows and may even begin to decline. In the later stages, no amount of new investment in the product will yield a normal rate of return.

The s-curve is derived from half of a normal distribution curve. The underlying assumption is that new products have a "product life": a start-up phase, a period of rapid revenue growth, and an eventual decline. In fact the great majority of innovations never get off the bottom of the curve and never produce normal returns.
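For illustration, the sketch below models such an s-curve with a logistic function (a common choice; a cumulative normal curve, as the derivation above suggests, gives a very similar shape). All parameters are illustrative, not taken from any particular product.

import math

def s_curve(t, saturation=100.0, midpoint=5.0, steepness=1.0):
    """Cumulative adoption (or revenue) at time t under a logistic model."""
    return saturation / (1.0 + math.exp(-steepness * (t - midpoint)))

# Slow start, fastest growth near the midpoint, then slowing toward saturation.
for t in range(0, 11):
    level = s_curve(t)
    print(f"t={t:2d}  adoption={level:6.1f}  growth={level - s_curve(t - 1):5.1f}")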

Innovative companies will typically be working on new innovations that will eventually replace older ones. Successive s-curves come along to replace older ones and continue to drive growth upwards. Plotted together, the first curve shows a current technology; the second shows an emerging technology that currently yields lower growth but will eventually overtake the current technology and lead to even greater levels of growth. The length of life will depend on many factors.
 

THE LONG TAIL PHENOMENON

INTRODUCTION

The phrase The Long Tail (as a proper noun, with capitalized letters) was first coined by Chris Anderson in an October 2004 Wired magazine article to describe certain business and economic models such as Amazon.com or Netflix. Businesses with enough distribution power can sell a greater total volume of otherwise hard-to-find items, each in small quantities, than of popular items sold in large quantities. The term long tail is also used more generally in statistics, often in relation to wealth distributions or vocabulary use.

THE LONG TAIL BY CHRIS ANDERSON

The phrase The Long Tail was, according to Chris Anderson, first coined by Anderson himself. The concept drew in part from an influential February 2003 essay by Clay Shirky, "Power Laws, Weblogs and Inequality", which noted that a relative handful of weblogs have many links going into them while "the long tail" of millions of weblogs may have only a handful of links going into them. Beginning with a series of speeches in early 2004 and culminating in the publication of a Wired magazine article in October 2004, Anderson described the effects of the long tail on current and future business models. Anderson later extended the idea into the book The Long Tail: Why the Future of Business is Selling Less of More (2006).

Anderson argued that products that are in low demand or have low sales volume can collectively make up a market share that rivals or exceeds the relatively few current bestsellers and blockbusters, if the store or distribution channel is large enough.

Anderson cites earlier research by Erik Brynjolfsson, Yu (Jeffrey) Hu, and Michael D. Smith, who first used a log-linear curve on an XY graph to describe the relationship between Amazon sales and Amazon sales ranking and found a large proportion of Amazon.com's book sales come from obscure books that are not available in brick-and-mortar stores. The Long Tail is a potential market and, as the examples illustrate, the distribution and sales channel opportunities created by the Internet often enable businesses to tap into that market successfully.

An Amazon employee described the Long Tail as follows: "We sold more books today that didn't sell at all yesterday than we sold today of all the books that did sell yesterday."

Anderson has explained the term as a reference to the tail of a demand curve. The term has since been rederived from the XY graph created when charting popularity against inventory. In such a graph, Amazon's book sales or Netflix's movie rentals would be plotted along the vertical axis, while the book or movie ranks run along the horizontal axis. The total volume of the many low-popularity items exceeds the volume of the few high-popularity items.

In a 2006 working paper titled "Goodbye Pareto Principle, Hello Long Tail", Erik Brynjolfsson, Yu (Jeffrey) Hu, and Duncan Simester found that, by greatly lowering search costs, information technology in general and Internet markets in particular could substantially increase the collective share of hard to find products, thereby creating a longer tail in the distribution of sales. They used a theoretical model to show how a reduction in search costs will affect the concentration in product sales. By analyzing data collected from a multi-channel retailing company, they showed empirical evidence that the Internet channel exhibits a significantly less concentrated sales distribution, when compared with traditional channels. An 80/20 rule fits the distribution of product sales in the catalog channel quite well, but in the Internet channel, this rule needs to be modified to a 72/28 rule in order to fit the distribution of product sales in that channel. The difference in the sales distribution is highly significant, even after controlling for consumer differences.
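The concentration measure behind those 80/20 and 72/28 figures can be illustrated with a short sketch. The data below are synthetic power-law sales, not the study's data; the point is only that a channel with a heavier tail shows a smaller share of sales coming from its top 20% of products.

def top20_share(sales):
    """Fraction of total sales contributed by the top 20% of products."""
    ranked = sorted(sales, reverse=True)
    cutoff = max(1, len(ranked) // 5)
    return sum(ranked[:cutoff]) / sum(ranked)

# Simulated sales for 1,000 titles: a more concentrated "catalog-like"
# distribution and a longer-tailed "Internet-like" distribution.
catalog  = [1000 / rank ** 1.2 for rank in range(1, 1001)]
internet = [1000 / rank ** 0.9 for rank in range(1, 1001)]

print(f"Catalog-like channel:  top 20% of titles = {top20_share(catalog):.0%} of sales")
print(f"Internet-like channel: top 20% of titles = {top20_share(internet):.0%} of sales")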


Demand-side and supply-side drivers

The key supply side factor that determines whether a sales distribution has a Long Tail is the cost of inventory storage and distribution. Where inventory storage and distribution costs are insignificant, it becomes economically viable to sell relatively unpopular products; however when storage and distribution costs are high only the most popular products can be sold.

Take movie rentals as an example: A traditional movie rental store has limited shelf space, which it pays for in the form of building overhead; to maximize its profits, it must stock only the most popular movies to ensure that no shelf space is wasted.

Because Netflix stocks movies in centralized warehouses, its storage costs are far lower and its distribution costs are the same for a popular or unpopular movie. Netflix is therefore able to build a viable business stocking a far wider range of movies than a traditional movie rental store. Those economics of storage and distribution then enable the advantageous use of the Long Tail: Netflix finds that in aggregate "unpopular" movies are rented more than popular movies.

A recent MIT Sloan Management Review article, titled "From Niches to Riches: Anatomy of the Long Tail", examines the Long Tail from both the supply side and the demand side and identifies several key drivers. On the supply side, the authors point out how e-tailers' expanded, centralized warehousing allows for more offerings, thus making it possible for them to cater to more varied tastes.

On the demand side, tools such as search engines, recommender software and sampling tools are allowing customers to find products outside of their geographic area. The authors also look toward the future to discuss second order amplified effects of Long Tail, including the growth of markets serving smaller niches.


Cultural and political impact

The Long Tail has possible implications for culture and politics. Where the opportunity cost of inventory storage and distribution is high, only the most popular products are sold. But where the Long Tail works, minority tastes are catered to, and individuals are offered greater choice. In situations where popularity is currently determined by the lowest common denominator, a Long Tail model may lead to improvement in a society's level of culture. Television is a good example of this: TV stations have a limited supply of profitable or "prime" time slots during which people who generate an income will watch TV. These people with money to spend are targeted by advertisers who pay for the programming so the opportunity cost of each time slot is high. Stations, therefore, choose programs that have a high probability to appeal to people in the profitable demographics in order to guarantee a return.

Twin Peaks, for example, did not have broad appeal but stayed on the air for two seasons because it attracted young professionals with money to spend. Generally, as the number of TV stations grows or TV programming is distributed through other digital channels, the key demographic individuals are split into smaller and smaller groups. As the targeted groups get into smaller niches and the quantity of channels becomes less of an opportunity cost, previously ignored groups become profitable demographics in the long tail. These groups along the long tail then become targeted for television programming that might have niche appeal. As the opportunity cost goes down with more channels and smaller niches, the choice of TV programs grows and greater cultural diversity rises as long as there is money in it.

Some of the most successful Internet businesses have leveraged the Long Tail as part of their businesses. Examples include eBay (auctions), Yahoo! and Google (web search), Amazon (retail) and iTunes Store (music and podcasts) amongst the major companies, along with smaller Internet companies like Audible (audio books) and Netflix (video rental).

Often presented as a phenomenon of interest primarily to mass market retailers and web-based businesses, the Long Tail also has implications for the producers of content, especially those whose products could not - for economic reasons - find a place in pre-Internet information distribution channels controlled by book publishers, record companies, movie studios, and television networks. Looked at from the producers' side, the Long Tail has made possible a flowering of creativity across all fields of human endeavour. One example of this is YouTube, where thousands of diverse videos - whose content, production value or lack of popularity makes them inappropriate for traditional television - are easily accessible to a wide range of viewers.

Internet commercialization pioneer and media historian Ken McCarthy addressed this phenomenon in detail from the producers' point of view at a 1994 meeting attended by Marc Andreessen, members of Wired Magazine's staff, and others. Explaining that the pre-Internet media industry made its distribution and promotion decisions based on what he called "lifeboat economics" and not on quality or even potential lifetime demand, he laid out a detailed vision of the impact he expected the Internet would have on the structure of the media industry with what has turned out to be a remarkable degree of accuracy, foreshadowing many of the ideas that appeared in Anderson's popular book.

The recent adoption of computer games as tools for education and training is beginning to exhibit a long-tailed pattern. It is significantly less expensive to modify a game than it has been to create unique training applications, such as those for training in business, commercial flight, and military missions. This has led some to envision a time in which game-based training devices or simulations will be available for thousands of different job descriptions. Smith pursues this idea for military simulation, but the same would apply to a number of other industries.


Competition and the Long Tail

The Long Tail may threaten established businesses. Before a Long Tail emerges, only the most popular products are generally offered. When the cost of inventory storage and distribution falls, a wide range of products becomes available. This can, in turn, have the effect of reducing demand for the most popular products.
For example, Web content businesses with broad coverage like Yahoo! or CNET may be threatened by the rise of smaller Web sites that focus on niches of content, and cover that content better than the larger sites. The competitive threat from these niche sites is reduced by the cost of establishing and maintaining them and the bother required for readers to track multiple small Web sites. These factors have been transformed by easy and cheap Web site software and the spread of RSS.

Similarly, mass-market distributors like Blockbuster may be threatened by distributors like Netflix, which supply the titles that Blockbuster doesn't offer because they are not already very popular. In some cases, the area under the long tail is greater than the area under the peak.