# Measuring Abstract Concepts

Most questions in questionnaires measure *concrete* things like age. A much harder challenge is measuring *abstract* things. For example, how can we measure involvement, innovativeness, cynicism, sophistication, or susceptibility to peer group pressure? The solution is to use a *multi-item scale* which asks multiple similar questions and computes an average response.

## The logic of multi-item scales

If you’ve ever done a personality test or an IQ test you will be familiar with the solution to this problem: ask a whole lot of related questions. Consider the problem of trying to work out which people are susceptible to the influence of other people. We could try and understand this by asking people how strongly they agree or disagree with the statement: `I rarely purchase the latest fashion styles until I am sure my friends approve of them.`

However, a problem with this question is that many people may never purchase the “latest fashion styles”, making it difficult to interpret what their answer may mean. We can express this as a simple *measurement model*:

`Truth + Error = Measurement`

where `Truth` refers to the true degree of susceptibility to the influence of other people, `Measurement` is the answer that people give to this question, and `Error` refers to all of the factors that prevent the measured degree of social susceptibility from being equal to the true value. Discrepancies between the truth and what is measured are known as *measurement errors*.

This equation, which is referred to as the *measurement error model*, tells us that there will be a correlation between `Truth` and `Measurement`, but also a correlation between `Error` and `Measurement`, meaning that any measurement reflects a combination of `Truth` and `Error`.

Another question wording for trying to understand susceptibility to social influence is: `To make sure I buy the right product or brand, I often observe what others are buying and using.`

This suffers from the problem that some people’s susceptibility may operate through other channels, such as Facebook or YouTube. Expressing this as a formula gives:

`Truth + Error_2 = Measurement_2`

where the subscript `2` indicates that this is a different `Error` from that in the earlier equation.

If we add together both of the equations, we get:

`2×Truth + Error + Error_2 = Measurement + Measurement_2`

Dividing both sides by 2, so that the equation reflects the average of the two measures, gives:

`Truth + (Error + Error_2)/2 = (Measurement + Measurement_2)/2`

Where two or more measurements are averaged, the resulting measure is referred to as a *multi-item measure*, so we can rewrite the equation as:

`Truth + (Error + Error_2)/2 = Multi-item measure`

Now, you may be thinking at this stage, “so what?”, but the above equation is insightful if we make one assumption: that there is a negligible relationship between `Error` and `Error_2` (i.e., the correlation between them is close to 0). If this assumption holds, the correlation between the multi-item measure and the truth will be greater than when a single item (i.e., question) is used for measurement, because the errors, to an extent, cancel each other out.

The more wordings of a question we use, the greater the correspondence between the estimated value and the truth,^{[note 1]} provided that each wording has similar levels of error.^{[note 2]}
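This error-cancelling effect can be checked with a small simulation. The code below is a sketch, not part of the measurement model itself: the normally distributed truth and errors, the sample size, and the equal error variances are all illustrative assumptions.

```python
import random
import statistics

def corr(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n = 5000  # simulated respondents
truth = [random.gauss(0, 1) for _ in range(n)]  # each person's true susceptibility

def multi_item_measure(k):
    """Average of k items, where each item = Truth + an independent Error."""
    measures = []
    for t in truth:
        items = [t + random.gauss(0, 1) for _ in range(k)]
        measures.append(sum(items) / k)
    return measures

# Correlation with the truth rises as more items are averaged.
for k in (1, 2, 5, 10):
    print(k, round(corr(truth, multi_item_measure(k)), 2))
```

Because the errors are independent, averaging k items shrinks the error variance by a factor of k, so the correlation with the truth climbs towards 1 as items are added.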

## Example: Consumer susceptibility to interpersonal influence^{[1]}

Each respondent's average rating of the following 12 items, each rated on a seven-point scale from 1 (Strongly Disagree) to 7 (Strongly Agree), is an estimate of the role of interpersonal influence on brand and category choice decisions.

1. I often consult other people to help choose the best alternative available from a product class*

2. If I want to be like someone, I often try to buy the same brands that they buy

3. It is important that others like the products and brands I buy

4. To make sure I buy the right product or brand, I often observe what others are buying and using*

5. I rarely purchase the latest fashion styles until I am sure my friends approve of them

6. I often identify with other people by purchasing the same products and brands they purchase

7. If I have little experience with a product, I often ask my friends about the product*

8. When buying products, I generally purchase those brands that I think others will approve of

9. I like to know what brands and products make good impressions on others

10. I frequently gather information from friends or family about a product before I buy*

11. If other people can see me using a product, I often purchase the brand they expect me to buy

12. I achieve a sense of belonging by purchasing the same products and brands that others purchase

The items with a * next to them are measures of the role that other people have as a source of information, while the remaining items measure the direct influence of other people. As only a third of the items have an asterisk, the scale weights other people as a direct influence more heavily than other people as a source of information.
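Scoring the scale is just averaging. As a sketch, the ratings below are hypothetical responses from a single respondent; the starred items (1, 4, 7 and 10) can also be averaged separately to score the information subscale:

```python
# Hypothetical ratings (1-7) from one respondent for the 12 items,
# in the order listed above.
ratings = [3, 5, 4, 6, 2, 4, 7, 5, 5, 6, 3, 4]

# Overall scale score: the mean of all 12 item ratings.
score = sum(ratings) / len(ratings)

# The starred items (1, 4, 7, 10) measure other people as an
# information source; the rest measure direct influence.
starred = (0, 3, 6, 9)  # zero-based positions of the starred items
info_items = [ratings[i] for i in starred]
direct_items = [r for i, r in enumerate(ratings) if i not in starred]

print(round(score, 2))                                     # 4.5
print(round(sum(info_items) / len(info_items), 2))         # 5.5
print(round(sum(direct_items) / len(direct_items), 2))     # 4.0
```

For this hypothetical respondent, the information subscale score (5.5) is higher than the direct-influence score (4.0), even though the overall average is 4.5.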

## How to create multi-item scales

- Dream up many possible items for each concept that is to be measured (e.g., dozens or hundreds), where an *item* is a specific statement that is rated.
- Conduct a small survey of, say, 100 people, and get them to provide a rating on a 5-, 7-, 9-, 10- or 11-point scale for each item.
- Compute the correlations between each of the items (or, equivalently, use Principal Components Analysis).
- Discard any items that are not correlated with all the other items.
- Work out how many items you need to retain. If you are just after a very rough measurement, two or three may be sufficient. If you want to make very precise statements about differences between people, you will likely need at least 10. Special formulas have been developed for working out how many items you need and how accurate a multi-item scale's ratings will be, including Cronbach's alpha.
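As a sketch of the last step, Cronbach's alpha can be computed directly from an item-by-respondent matrix of ratings. The ratings below are made up for illustration:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: a list of k lists, each holding the ratings one item
    received across all respondents.
    """
    k = len(items)
    # Each respondent's total score across all items.
    totals = [sum(vals) for vals in zip(*items)]
    item_vars = sum(statistics.pvariance(col) for col in items)
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: 3 items rated by 6 respondents on a 7-point scale.
items = [
    [5, 3, 6, 2, 4, 7],
    [4, 3, 5, 2, 5, 6],
    [5, 2, 6, 1, 4, 7],
]
print(round(cronbach_alpha(items), 2))
```

Because these made-up items move closely together across respondents, alpha comes out high (above 0.9); items that are weakly correlated with the rest of the scale drag alpha down, which is why the item-discarding step above raises it.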

## Finding existing multi-item scales

Thousands of multi-item scales have been developed by academics, and their use can save many hours of time in questionnaire development. There are a number of handbooks containing many of the scales, and they are a worthwhile investment for market researchers (sometimes there is a need to reword the questions to make them more consumer-friendly). For example:

Bearden, William O., Richard G. Netemeyer, and Mary F. Mobley (1993), Handbook of Marketing Scales: Multi-Item Measures for Marketing and Consumer Behavior Research. Newbury Park: Sage Publications.

Bruner, Gordon C. and Paul J. Hensel (1995), Marketing Scales Handbook. Chicago: American Marketing Association.

The academic discipline of *psychometrics* focuses on the problem of how to create multi-item scales.

## Notes

- ↑ More complex measurement models distinguish between constant sources of error (i.e., where the source of the error has the effect of adding a constant to the truth) and random errors, where the error is an additional source of variance. Although the distinction is important in principle, in most situations where multi-item measures are employed the resulting measure is correlated with, or included in, another model, where the variance matters but the constant cancels out with other constants. Consequently, most measurement models used in practice focus only on the source of error described in the example (i.e., error as a source of additional variance).
- ↑ If some of the questions have discernibly more error than others, or some of the variables’ errors are closely correlated with each other, then it is better to omit those variables.

## References

- ↑ For example: Bearden, William O., Richard G. Netemeyer, and Mary F. Mobley (1993), Handbook of Marketing Scales: Multi-Item Measures for Marketing and Consumer Behavior Research. Newbury Park: Sage Publications.