
PRICING & PRODUCT OPTIMISATION EXPLAINED


1 August 2019

Marcus Silversides


Using consumer testing to establish how to price your product and what features to include can be complex. That being said, pricing and product optimisation ensures that you’ll go to market at the right level and can reap more profits in the long run. Marcus Silversides, our Head of Data, explains four different approaches – Gabor-Granger, Van Westendorp, Maxdiff and the sophisticated Conjoint exercise.


 

Whilst AI and (latterly) Big Data might be the poster boys of market research right now, let’s take a moment to look at a stalwart of the canon: the pricing and product optimisation exercise.

Arguably the most commonly used of these techniques are the Gabor-Granger exercise, the Van Westendorp pricing model and Maxdiff or Conjoint trade-off exercises. Obviously, there are more straightforward approaches, such as asking a simple open-ended “How much would you pay for this product?” question, but for the purposes of this article we’ll focus on the more specialist approaches.

What’s wrong with asking a straightforward question, I hear you ask? Fundamentally nothing, but will such an approach really give you the quality of insight you’re looking for? Are you prepared to make a recommendation on the suggested price of a product based on such a question? At best you’ll extrapolate a fairly sensible average price for the item; at worst it’ll be something completely out of kilter. Employing a proven and recognised pricing or product optimisation method will produce a more robust and in-depth dataset on which to base your recommendation, rather than a single data point.

Both the Gabor-Granger and Van Westendorp concern themselves solely with the pricing of the product, in contrast to a Maxdiff, which aims to measure preferences amongst a list of attributes or features without touching on price. A Conjoint exercise, on the other hand, can address both price and product attributes in the same exercise, presenting combinations of various features/packages to the respondent in addition to the price of the product. You might consider the Conjoint to be more rooted in the real world, and it may seem like a less alien experience for the respondent when it’s presented to them in a survey. When did you, as a purchaser, not take into consideration both feature sets and price when trying to choose between packages or products? Think about your last big-ticket electrical purchase, your mobile phone or car. Anyhow, let’s now look at each approach in turn.


GABOR-GRANGER

The Gabor-Granger tests a series of predefined pricing points, often just five (but there can be many more), starting the respondent at a randomly chosen point within the set and asking them if they would be prepared to buy the product at the price shown. From there, depending on whether they’ve given a ‘Top 2’ answer (e.g. ‘definitely would purchase’ or ‘likely to purchase’ at this price point), they will be shown another higher or lower pricing point (assuming there is another logical pricing point that can be shown) until their upper price tolerance within the range tested has been established.
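To make those mechanics concrete, here is a minimal sketch of the question flow in Python, using the five £2.00 to £3.00 price points from the chocolate bar example shown below. The ask_respondent function is a hypothetical stand-in for the actual survey question; this is an illustration of the logic, not production survey code.

```python
import random

# A minimal sketch of the Gabor-Granger question flow. The five price points
# mirror the chocolate bar example below; ask_respondent is a hypothetical
# stand-in for the real survey question.
PRICE_POINTS = [2.00, 2.25, 2.50, 2.75, 3.00]  # PP1..PP5 in pounds

def gabor_granger(ask_respondent, prices=PRICE_POINTS):
    """Return the highest price the respondent accepts, or None if even the
    lowest price point is rejected.

    ask_respondent(price) should return True for a 'Top 2' answer
    ('definitely would purchase' / 'likely to purchase').
    """
    answers = {}                            # price index -> True/False
    idx = random.randrange(len(prices))     # start at a randomly chosen point

    while True:
        answers[idx] = ask_respondent(prices[idx])
        if answers[idx]:
            # Accepted: move up a price point, unless we are at the top of the
            # range or the next price up has already been rejected.
            if idx + 1 >= len(prices) or answers.get(idx + 1) is False:
                break
            idx += 1
        else:
            # Rejected: move down a price point, unless we are at the bottom
            # or the next price down has already been accepted.
            if idx == 0 or answers.get(idx - 1) is True:
                break
            idx -= 1

    accepted = [prices[i] for i, yes in answers.items() if yes]
    return max(accepted) if accepted else None

# Example: a respondent whose true upper tolerance is £2.50
print(gabor_granger(lambda price: price <= 2.50))   # -> 2.5
```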

The image below illustrates the question flow to establish the final price point (PP) for a luxury chocolate bar where the price points are between £2.00 (PP1) and £3.00 (PP5). ‘Top 2’ answers are ‘definitely would purchase’ or ‘likely to purchase’ at this price point.

[Image: Gabor-Granger question flow for the luxury chocolate bar example]

As with many techniques there are slight variations on the exact mechanics used to deploy the exercise, but the basic principle remains the same. The resulting data enables the researcher to plot price elasticity and project predicted revenues. A higher pricing point might well be rejected by a significant proportion of respondents, but it may result in higher profits overall.
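By way of illustration, the short sketch below takes made-up acceptance rates for the five chocolate bar price points and projects revenue per 100 respondents; the figures are entirely hypothetical and simply show how a higher price can come out on top despite being rejected by more people.

```python
# Illustrative only: hypothetical 'willing to buy' proportions at each tested
# price point, used to show how a simple demand curve and revenue projection
# can be derived from Gabor-Granger data.
acceptance = {2.00: 0.80, 2.25: 0.74, 2.50: 0.68, 2.75: 0.50, 3.00: 0.35}

# Projected revenue per 100 respondents = price x proportion willing to buy x 100
revenue = {price: price * share * 100 for price, share in acceptance.items()}

for price in sorted(revenue):
    print(f"£{price:.2f}: {acceptance[price]:.0%} would buy -> £{revenue[price]:.0f} per 100 respondents")

best = max(revenue, key=revenue.get)
print(f"Revenue-maximising price in this illustration: £{best:.2f}")
```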


VAN WESTENDORP

In contrast, a Van Westendorp just aims to establish price sensitivity and optimal pricing points by asking the respondent four straightforward questions: at what price is the product too cheap, at what price is it a bargain (but not so cheap that they would question its quality/value), at what price is it getting expensive (but not so expensive that they wouldn’t consider buying it) and, finally, at what price is it simply too expensive? It’s based on the notion that there is an intrinsic relationship between the perceived value/quality of a product and its price. Price a product too low and the buyer questions its quality, and so forth.

When deployed in a survey, validation is commonly applied across the four questions to ensure the respondent’s answers are consistent. Obviously, the price that is deemed to be too expensive must be the highest price given, and the price that is too cheap must be the lowest, and so on.
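A sketch of that validation check might look like the following; the function name and example prices are illustrative rather than taken from any particular survey platform.

```python
# A minimal sketch of the in-survey consistency check described above: the four
# answers must rise from 'too cheap' through 'a bargain' and 'getting expensive'
# to 'too expensive'. Field names and prices are illustrative.
def valid_van_westendorp(too_cheap, bargain, getting_expensive, too_expensive):
    return too_cheap <= bargain <= getting_expensive <= too_expensive

print(valid_van_westendorp(20, 40, 70, 95))   # True  - consistent answers
print(valid_van_westendorp(20, 75, 40, 95))   # False - respondent would be re-prompted
```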

With a bit of manipulation and the calculation of cumulative percentages, the resulting outputs of the four questions can easily be charted, as per the image below. The optimal pricing point is where equal proportions of respondents cite a particular price as ‘too cheap’ and as ‘too expensive’. On the chart it’s where the two lines (green and blue) from these pricing points intersect. Beyond this, and perhaps a more useful output, is the ‘acceptable price range’, established as £40-£75 in the example shown. This is identified as those prices that lie between the intersection of ‘too cheap’ and ‘getting expensive’ (this intersection marking the lowest end of the range) and the intersection of ‘a bargain’ and ‘too expensive’ (this intersection marking the highest end of the range).

[Image: Van Westendorp price sensitivity chart showing an acceptable price range of £40-£75]
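For those who like to see the mechanics, here is a minimal sketch of that calculation using a small set of hypothetical answers (in pounds). The respondent data, the price grid and the exact crossing rule are illustrative assumptions, not a prescribed implementation.

```python
# A minimal sketch of the Van Westendorp analysis, using hypothetical answers
# in pounds. For each candidate price we compute the cumulative percentage
# curves and locate the crossing points described above.
too_cheap         = [20, 30, 45, 60, 25, 40, 55, 70]
bargain           = [40, 50, 60, 75, 45, 55, 70, 85]
getting_expensive = [55, 70, 80, 95, 60, 75, 90, 105]
too_expensive     = [70, 85, 95, 110, 75, 90, 105, 120]
n = len(too_cheap)
prices = range(10, 131)   # candidate prices to evaluate

def pct_at_or_above(answers, p):   # falling curves: 'too cheap', 'a bargain'
    return sum(a >= p for a in answers) / n

def pct_at_or_below(answers, p):   # rising curves: 'getting expensive', 'too expensive'
    return sum(a <= p for a in answers) / n

def crossing(falling, rising):
    """First price at which the rising curve meets or overtakes the falling one."""
    for p in prices:
        if pct_at_or_below(rising, p) >= pct_at_or_above(falling, p):
            return p
    return None

optimal_price = crossing(too_cheap, too_expensive)       # 'too cheap' x 'too expensive'
range_low     = crossing(too_cheap, getting_expensive)   # lower end of acceptable range
range_high    = crossing(bargain, too_expensive)         # upper end of acceptable range

print(f"Optimal price point:    £{optimal_price}")
print(f"Acceptable price range: £{range_low} to £{range_high}")
```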

 

 


MAXDIFF

Where the previous methods solely address pricing, the Maxdiff exercise concerns itself with identifying the importance of key features or attributes. This is achieved by iterating the respondent through a series of screens, each of which presents a set of features or attributes, often limited to no more than four or five per screen. On each screen the respondent is asked to select their ‘most preferred’ and ‘least preferred’ option within that set; in the example shown below we use features for a mobile phone. They cannot select the same feature or attribute as both the most and least preferred choice, and they must make two choices. Typically, different sets of screens are shown to different respondents; this ensures that as many of the features or attributes as possible are pitted against each other. Once the final data is available, a value score is generated for each feature or attribute, somewhat akin to a ranking value.

[Image: example Maxdiff screen showing mobile phone features]
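One simple way to produce such a score is a count-based calculation: how often a feature is picked as ‘most preferred’, minus how often it is picked as ‘least preferred’, divided by the number of times it was shown. The sketch below illustrates this with made-up mobile phone tasks; real studies aggregate across many respondents and typically use more sophisticated estimation (such as hierarchical Bayes) rather than raw counts.

```python
from collections import defaultdict

# A minimal sketch of a count-based Maxdiff score. Each task records the
# features shown on one screen plus the respondent's 'most preferred' and
# 'least preferred' picks; the feature names and choices are hypothetical.
tasks = [
    (["Battery life", "Camera", "Screen size", "Storage"], "Battery life", "Screen size"),
    (["Camera", "Storage", "5G", "Water resistance"], "Camera", "Water resistance"),
    (["Battery life", "5G", "Screen size", "Storage"], "Battery life", "5G"),
]

shown = defaultdict(int)   # how many times each feature appeared
score = defaultdict(int)   # best picks minus worst picks

for features, best, worst in tasks:
    for feature in features:
        shown[feature] += 1
    score[best] += 1
    score[worst] -= 1

# Value score per feature: (best - worst) / times shown, sorted high to low
for feature in sorted(shown, key=lambda f: score[f] / shown[f], reverse=True):
    print(f"{feature:<18} {score[feature] / shown[feature]:+.2f}")
```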

 

 


CONJOINT

Finally, let’s round off with a brief introduction to the concept of the Conjoint exercise: a yet more sophisticated approach. Building upon the Maxdiff exercise, it could be said that the Conjoint replicates the purchasing/decision-making experience in a more natural manner. It presents respondents with the whole picture; the price isn’t dealt with in isolation but shown in combination with available attributes or options, i.e. the whole package. The Conjoint is presented over a series of screens with a variety of different bundles being pitted against each other, forcing the respondent to make a preferred choice on each screen in turn.

There are different types of Conjoint exercises, known as choice-based, adaptive, adaptive choice-based and menu-based. The most popular is the choice-based version, which has a fixed design of various sets of packages that are to be tested against one another in various combinations. The adaptive versions are dynamic and react to the choices made by the respondent, building the packages on the fly, essentially learning from choices made as the exercise progresses and tailoring the screens to the respondent. The menu-based version might be considered as occupying the middle ground between the two. Like the choice-based version it also relies upon fixed designs, in terms of the combinations of options and prices to be presented per screen, but crucially the respondent can build their own preferred package on each screen rather than being forced to select one of several predefined packages.
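To illustrate what a fixed design might look like, the sketch below builds every combination of a few hypothetical mobile phone attributes and groups the resulting packages into choice screens. Real choice-based designs are usually fractional (a carefully balanced subset of the full grid) rather than the complete set generated here.

```python
from itertools import product

# A minimal sketch of how a fixed choice-based design could be assembled:
# every combination of the illustrative attribute levels below becomes a
# candidate package, and the packages are then grouped into choice screens.
# Real designs are usually fractional (a balanced subset), not the full grid.
attributes = {
    "handset": ["Model A", "Model B"],
    "data":    ["10GB", "50GB", "Unlimited"],
    "price":   ["£25/mo", "£35/mo", "£45/mo"],
}

packages = [dict(zip(attributes, combo)) for combo in product(*attributes.values())]
print(f"{len(packages)} candidate packages")  # 2 x 3 x 3 = 18

# Group the packages into screens of three; the respondent picks one per screen.
screens = [packages[i:i + 3] for i in range(0, len(packages), 3)]
for screen in screens[:2]:
    print(screen)
```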

Think about the last time you bought a new mobile phone; did you perhaps visit the website of one of the main mobile network providers and endlessly pore over various combinations of handsets, voice and data packages and numerous add-ons? Were you presented with ways and means to build your ideal package, like the image below? Did you find that one of your ‘must have’ criteria wasn’t included in the packages at the upper end of your budget, but was tantalisingly available as an add-on or included in packages in the next price tier up? And, more importantly, did you succumb and push the boat out?

[Image: example of building a mobile phone package on a provider’s website]

A well-designed and well-presented menu-based Conjoint aims to replicate that buying experience in a survey, forcing the respondent to make choices and trade off certain factors/options against price. The resulting analysis helps to identify the value of the different packages available and therefore informs the optimal packages to take to market, often identifying or ratifying what those elusive ‘must have’ options are.
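By way of illustration, once a utility (‘part-worth’) has been estimated for each attribute level, the relative value of any candidate package can be scored by summing the utilities of its levels. The attribute levels and numbers below are entirely made up.

```python
# Illustrative only: hypothetical part-worth utilities for each attribute level.
# The value of a package is approximated by summing the utilities of the
# levels it contains.
part_worths = {
    ("handset", "Model A"): 0.2,  ("handset", "Model B"): 0.9,
    ("data", "10GB"): -0.4,       ("data", "50GB"): 0.3,    ("data", "Unlimited"): 0.7,
    ("price", "£25/mo"): 0.8,     ("price", "£35/mo"): 0.1, ("price", "£45/mo"): -0.6,
}

def package_utility(package):
    """Sum the part-worths of every (attribute, level) pair in the package."""
    return sum(part_worths[(attr, level)] for attr, level in package.items())

budget_option  = {"handset": "Model A", "data": "10GB",      "price": "£25/mo"}
premium_option = {"handset": "Model B", "data": "Unlimited", "price": "£35/mo"}

for name, pkg in [("Budget", budget_option), ("Premium", premium_option)]:
    print(f"{name} package utility: {package_utility(pkg):.2f}")
```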


So, there you have it; a brief explanation of four different approaches to pricing and product optimisation. Using consumer testing to establish how to price your product and what features to include ensures you are going to market at the right level and can reap more profits in the long run. This article just scratches the surface of each approach and merely serves to give an overview to those who are not so familiar with the options. If what you’ve read has whetted your appetite and you’re keen to find out more then please get in touch; we’d be happy to look at which approach, if any, might be the best fit for you.

 
