Quality checks

In quantitative studies, reliability and validity are accepted as key criteria for assessing the quality of research. Most qualitative writers are not comfortable applying these concepts to their field, as they are based on assumptions about the researcher as a detached, objective observer and are essentially related to measurement. There is no single set of criteria that researchers agree should be used for all qualitative studies; instead, a variety of criteria have been suggested, which differ according to the methodological and epistemological positions of different approaches. Some writers argue against the use of any fixed criteria at all. It is beyond the scope of this website to examine the debate in this area; for discussions of it, see Murphy et al. (1998), Seale (1999) and Willig (2001).

Below are some suggestions on how to use common kinds of quality checks within a study employing template analysis.

Independent scrutiny of analysis

Qualitative researchers use some form of independent scrutiny of their analysis in a whole range of ways to check its quality. These include:

  • Members of a research team coding a sample of data separately and then discussing similarities and differences, in order to agree revisions to themes.
  • The same process as above, but using one or more outside ‘experts’, selected on the basis of knowledge of the methodology and/or the substantive topic.
  • Defending your analytical decisions to a constructively critical ‘expert panel’.

Statistical calculation of inter-rater agreement is sometimes used in relation to independent coding in thematic analysis. However, as this rests on an at least implicit assumption that one way of defining themes can be objectively judged ‘correct’, it flies in the face of the notion that texts are always open to a variety of readings. For this reason, it is not recommended here.[1]
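Where such a statistic is reported, a common choice is Cohen’s kappa, which adjusts the raw proportion of agreement between two coders for the agreement that would be expected by chance. The sketch below is purely illustrative; the coder lists and theme labels are hypothetical.

```python
# Illustrative sketch: Cohen's kappa for two coders' theme assignments.
# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length lists of theme codes."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Proportion of items the two coders assigned the same theme.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's theme frequencies.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of five transcript segments by two coders.
coder_a = ["theme1", "theme2", "theme1", "theme3", "theme2"]
coder_b = ["theme1", "theme2", "theme2", "theme3", "theme2"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # prints 0.69
```

Note that a high kappa only shows that two coders applied the same labels consistently; it says nothing about whether those labels capture the most insightful reading of the texts, which is the objection raised above.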

In template analysis, independent scrutiny can be used at various stages. At the initial template development stage, members of the research team can each be asked to carry out preliminary coding on a sample of transcripts. The independent coding can then be circulated and two or three hours allocated to compare, contrast and discuss it, with the aim of agreeing an initial template there and then. If you are working on a project on your own, you could try to find one or more outside experts to do this with you, but they would need to be well briefed on the aims and methodology of your study.

Independent scrutiny is useful and relatively easy to incorporate into the process of developing the template. Members of the research team and/or outside experts can be given sample transcripts and asked to code them using the current version of the template, noting themes they find difficult to employ, aspects of the texts not covered by the template, and any other issues that arise in the process. Discussion of such observations can then lead to further revisions of the template.

Finally, independent scrutiny can be used when you are interpreting the analysis for your final write-up. In this situation the ‘expert panel’ approach may be useful. The panel can be given your template and selected coded transcripts, and then asked to interrogate your interpretation, for example by requesting that you explain why you preferred one reading of an aspect of the data over another.

It is vital to recognise that none of the above approaches is about asking someone else to ‘confirm’ that your analytical decisions are ‘correct’. These are all ways to help you to reflect on the process, by forcing you to think about alternatives that you might have overlooked, or dismissed without proper consideration. If you do not agree with suggestions made by an independent scrutiniser of your analysis, you are under no obligation to act on them.

Respondent feedback

Another strategy for evaluating the quality of analysis is to ask those who participated in the research to comment critically on the analysis. In more realist types of qualitative research, this is known as ‘respondent validation’, but given the problems with the concept of validity noted above, the term ‘respondent feedback’ is recommended.

In a template analysis study, respondent feedback can be used at several stages in the analytical process, in effect mirroring the ways in which independent scrutinisers can be used. At an early stage, participants could be asked to comment on the initial template as applied to their transcript. Later, they could also be asked to critically examine drafts of interpretive writing that relates to their interview. However it is used, respondent feedback is only likely to be effective if the study and the analytical process have been clearly explained to participants, and the material is presented in a reader-friendly form.

Although quite widely used, respondent feedback has some difficulties that you should bear in mind when considering whether to use it. First, the respondent may have motives for agreeing with the analysis that have little to do with its quality. In most studies the researcher is in a position of relative power in relation to the respondent, by virtue of their professional expertise as well as their role in the relationship as the questioner. As a result, the respondent may feel uncomfortable criticising the researcher’s interpretation (Willig, 2001). Even when the power imbalance is not significant, a respondent may endorse the analysis simply in order to be helpful to the researcher. Second, using respondent feedback to corroborate the researcher’s analysis assumes that people are able to examine an interpretation of some aspect of their experience in a rather detached manner and judge whether it is ‘right’. This contradicts the assumption underlying much qualitative research that people cannot simply stand back from their own ‘lifeworld’ and assess it objectively.

Creating an audit trail

Writers on qualitative methods commonly recommend that researchers compile an ‘audit trail’ of their analytical process. This is a documentary record of the steps you undertook and the decisions you made in moving from the raw transcripts to your final interpretation of the data. In a thesis or dissertation, you are able to include a detailed audit trail, perhaps summarised in the main text and supported by extensive appendices. This should be integral to the assessment of your work.

A report for an external agency may not require such detail and a journal article or book chapter is unlikely to allow sufficient space for it. Even if it is not published for readers to see, keeping an audit trail is still good practice as it helps you to gain an overview of how you reached the interpretation that you produced. It is an antidote to the unfortunate practice of presenting qualitative findings as if they simply ‘emerged’ fully-formed from the data.

Template analysis lends itself well to the production of an audit trail. It is relatively straightforward to display successive versions of the template, accompanied by a commentary on what changes were made at each stage and why. Where space allows, you can include at least one coded transcript or extracts from several, along with other materials drawn on in the analysis, such as your memos, notes or case summaries. If you are producing the audit trail for others to read, as in a thesis or dissertation, it is crucial to lay it out and annotate it in a way that helps the reader through it.

Reflexivity

Qualitative research requires reflexivity on the part of the researcher. As a researcher, you need to reflect on the nature of your involvement in the research process and the way this shapes its outcomes. Reflexivity is required throughout the research process, for instance, in trying to be aware of how your own assumptions about the phenomenon under investigation might influence the way you formulate your research question, and the issues you highlight in your interview topic guide.

The techniques for quality checking described above can all be seen as ways of encouraging reflexivity, as they get you thinking about what it is you bring to the analysis. Comments from independent scrutinisers or respondents help you to reflect on and question the assumptions you may be making; keeping an audit trail forces you to be explicit about the decisions you are making and to reflect upon how they led you on a course towards your findings and conclusions. Although quality checks can help to bring reflexivity to the fore, this should not be seen as something to be ‘dealt with’ only at certain set points in the analysis. Throughout the process you need to pay attention to your own role in it.

One of the strengths of template analysis is that it encourages you to be explicit about the analytical decisions you make and to ground them in the texts you are analysing. In the end, you always need to be able to show where in the data a particular interpretation came from. In this way it is an approach that facilitates reflexivity, and it is therefore important to keep successive versions of your template, ideally with some commentary to remind you at the end of the study of the thinking behind the way you developed it. These might be incorporated within a ‘research journal’, where you record your thoughts and feelings about doing the analysis.

For example, you may note that on reading through a particular transcript, you feel very sorry for the respondent because of some misfortune or injustice they describe. You could then reflect on how your feeling might impact on the way you understand the text and the themes you used to encapsulate this. Perhaps you are over-emphasising the powerlessness of the participant because you have created an image of them as an ‘object of pity’?



[1] Professor Nigel King used this strategy in the analysis of some of the data in his PhD, but would not recommend it in light of his experience.