
Week 4 Summary and Reflection

  • Writer: Jeff McCarthy
  • Jan 26
  • 6 min read

Jeff McCarthy

Athabasca University

MAIS 602: Research Methods

Dr. Lisa Micheelsen

 

Keywords: content analysis, keyword analysis, text mining, kappa statistics, pre-defined dictionaries, user-defined dictionaries, computer-assisted analysis, coding content, qualitative vs. quantitative analysis, inductive vs. deductive, inter-rater reliability, keywords-in-context analysis



Needles and Haystacks: Finding Themes in Volumes of Text

 

Introduction

 

Chapter 24 of Researching Society and Culture by Clive Seale and Fran Tonkiss provides a breakdown and study of content and text analysis as tried-and-tested methods for analytically deciphering vast quantities of text and qualitative sources.  The Information Age and today's computational processing capacity place us in an arguably more enviable position than at any point in academic history (Castells, 2010), given the haystacks available to us at the click of a button.  Sourcing haystacks (text, data) is not the problem; finding the growing number of uncategorized yet meaningful and useful needles is.  Seale and Tonkiss position content analysis as a flexible methodological approach that bridges qualitative interpretation and analytical rigour, allowing researchers to engage meaningfully with text while maintaining transparency and structure (Seale & Tonkiss, 2018).

Here, I first provide a summary of the methodologies presented in Chapter 24, including content analysis, reliability, reflexivity, and the use of computational text analysis methods.  I then look at whether these methods are useful to my own research interests and approach to leadership within complex public service and emergency management systems.

Content and Text Analysis

Seale and Tonkiss define content analysis as a systematic approach to examining texts, including written documents, visual materials, and digital artifacts, in order to identify recurring themes, categories, or patterns of meaning (2018).  More practically, content analysis gives researchers a scientific way to sift through haystacks of words without losing their interpretive licence or the ability to recognize a meaningful needle.  Content analysis is described as rooted in quantitative approaches, in which researchers count how often words or phrases occur.  The authors affirm its usefulness for qualitative research, but they also note that the meaningfulness of an emerging theme is not determined solely by the sheer number of times it shows up.  Deciphering this is the researcher's responsibility.

Once patterns, recurrences, or themes emerge from counting, the researcher's analysis comes into play.  Seale and Tonkiss refer to this next level of analysis as 'coding', either inductive or deductive.  Inductive coding allows categories of information to emerge from the data itself.  Deductive coding uses a predefined set of categories that the researcher is looking for, which can be derived from theory, lived experience, or prior research.  Seale and Tonkiss suggest that many studies employ a hybrid approach, combining theoretical rigour with openness to surprises.  Regardless of which approach is used, the authors stress the importance of explicit coding frameworks and transparent documentation of analytic decisions to support methodological credibility (2018).  These methodologies work less like tips or shortcuts and more like next-level strategies for finding the right needles.

Reliability, Reflexivity, and Analytic Credibility

An issue addressed in this chapter is inter-rater reliability in team-based research.  Seale and Tonkiss suggest that reliability should not mean perfect agreement but rather a shared understanding of the coding rules.  Reliability therefore becomes a process of negotiating among coders, determining which assumptions can be further defined, clarified, and agreed upon (2018).  In this sense, reliability checks help ensure that different researchers are searching for the same kinds of needles, even if they don't always find them in the same way.  Even in solo projects, going back to recheck your coding or refine your wording is good practice for capturing meaning and intention, yielding better results.
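The chapter's keywords include kappa statistics, a common way to quantify this kind of agreement.  As an illustrative sketch (not taken from Seale and Tonkiss themselves), Cohen's kappa compares two coders' labels on the same passages and corrects raw agreement for the agreement you would expect by chance; the coders, codes, and passages below are hypothetical:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of passages coded identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: probability both coders independently pick each category
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    categories = set(coder_a) | set(coder_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Two coders labelling the same ten passages with three draft codes
a = ["authority", "blame", "blame", "confusion", "authority",
     "blame", "authority", "confusion", "blame", "authority"]
b = ["authority", "blame", "authority", "confusion", "authority",
     "blame", "authority", "confusion", "blame", "blame"]
print(round(cohens_kappa(a, b), 2))
```

A kappa near 1 suggests the coders share an understanding of the rules; a low kappa is exactly the cue Seale and Tonkiss describe to renegotiate and clarify the coding definitions.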

The Computational Approaches

Seale and Tonkiss discuss what they term 'computationally assisted methods', such as text mining, keyword analysis, and keywords-in-context analysis.  These approaches are strategic when starting with very large haystacks of text and data, such as archives, years of policy documents, or global media, where reading everything would be impossible.  Comparative Keyword Analysis (CKA) allows researchers to examine how language differs across groups, organizations, or time periods, revealing underlying discursive patterns and institutional priorities (Seale & Tonkiss, 2018).  Effectively, these tools help researchers narrow down vast haystacks before engaging in more detailed coding, interpretation, and analysis.
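The keywords-in-context idea can be sketched in a few lines of Python.  This is a minimal illustration of the general technique, not anything prescribed by the chapter; the sample sentence and the search pattern are invented for demonstration:

```python
import re

def kwic(text, pattern, window=5):
    """Keywords-in-context: show each matching word with `window` words of context."""
    words = text.split()
    hits = []
    for i, w in enumerate(words):
        if re.fullmatch(pattern, w, flags=re.IGNORECASE):
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            hits.append(f"{left} [{w}] {right}")
    return hits

# Hypothetical after-action-report sentence
report = ("The committee agreed that final approval rests with the minister "
          "while operational authority remains with the emergency measures office")
for line in kwic(report, r"authority|approval"):
    print(line)
```

Seeing each hit in its surrounding context is what lets the researcher judge whether a keyword is a meaningful needle or just straw, before committing to detailed coding.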

Applicability to Leadership Research in Complex Public Systems

These methods, outlined by Seale and Tonkiss, have partial but meaningful applicability to my research interests in leadership within complex public service and emergency-management systems.  My research will focus on how senior leaders make sense of uncertainty (the fog of war), authority, and responsibility in a multi-agency context, drawing on lived experience and my participant-observer perspective.  My research is therefore primarily relational and heavily interpretive, rather than solely text-driven, and concerns itself less with haystacks of data than with identifying the moments when plans and systems broke down and the decisions that deserve further inspection.  However, there are countless articles, media reports, and after-action reports related to the pandemic that will hopefully corroborate the early needles I find.  In that respect, the methodologies described here will be essential to this aspect of my research.  This more granular process of reading and analysing text could help identify what I believe are recurring themes related to coordination, accountability, and adaptive capacity.

At the same time, the limits of content analysis are not hard to spot for this research approach.  Leadership in prolonged, complex crisis contexts sometimes unfolds through informal channels, ethical and political roadblocks, and real-time lapses in judgment that are rarely captured in media or after-action reports.  These are often found only in back-office conversations, in whispered tones or on the Signal app, in relation to managing political or public scrutiny.  While I believe content analysis can help shed light on how leadership is meant to look or be justified (multi-party Covid Core Committees), it cannot fully account for how leadership is experienced or carried out in practice.  In my opinion, content and text analysis may work best as a 'supplementary method', supporting and benefiting analysis rather than displacing a more substantially reflexive or participant-observer approach (what I saw and experienced in the NB pandemic response case).

In Practical Terms

If I want to prove or debunk my assumption that political actors intervene in emergency-management responses (which are intended to get people out of harm's way and keep them safe) at the expense of adaptive leadership, I am faced with reams of after-action reports and media coverage to analyze.

Ostensibly, I could start with:

1. 40-50 provincial EMO after-action reports

2. 100 Canadian public health pandemic-related policy documents, and

3. 1,000 Canadian media articles on a specific topic

I would then determine whether to employ an inductive approach (I don't yet know what "x" means), a deductive approach (I am looking for something specific, derived from an assumption, theory, or prior research), or, as Seale and Tonkiss suggest, a combination of both.

Then I would start building my code.  For example:

1. Authority = Who has final decision-making authority = Searchable: "Final approval rests with…"

2. Blame = Who failed = Searchable: "Authority rests with…" or "It's not our job to…"

3. Confusion or Uncertainty = Lack of accountability = Searchable: "I/we don't know…"

Now, as I start reading or searching for keywords and phrases, I can codify passages and returns based on the above.  Seale and Tonkiss point out that a return can have multiple codes, and where they end up is ultimately determined by my positionality and reflexivity as a responsible and ethical researcher.  Codifying and theme-finding must include the thought process, or 'memos to self', as you go.  For instance, I may note that "positive feedback is repeatedly owned at the political centre while negative feedback seems to be deferred to Public Health," and thus themes begin to emerge.  In my case, assumptions from lived experience begin to be either proved or disproved.
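To make the mechanics concrete, a user-defined dictionary built from my three draft codes could be applied to passages roughly as follows.  This is my own hypothetical sketch, not a method from Seale and Tonkiss; the search phrases come from the list above, and the sample passage is invented:

```python
import re

# User-defined dictionary: code -> searchable phrases (from my draft coding frame)
CODES = {
    "authority": [r"final approval rests with", r"decision-making authority"],
    "blame": [r"authority rests with", r"it'?s not our job to"],
    "confusion": [r"\b(i|we) don'?t know\b"],
}

def code_passage(passage):
    """Return every code whose phrases appear in the passage (multi-coding allowed)."""
    text = passage.lower()
    return sorted(code for code, patterns in CODES.items()
                  if any(re.search(p, text) for p in patterns))

# A single return can pick up more than one code, as Seale and Tonkiss note
passage = ("Final approval rests with the premier's office; "
           "frankly, we don't know who briefs the media.")
print(code_passage(passage))
```

The keyword search only surfaces candidate passages; deciding which code a return truly belongs under, and memo-ing why, remains the reflexive work of the researcher.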

Technology can also assist in this endeavour.  A quick Google search highlights multiple sets of instructions for search and mining processes in Word, Excel, NVivo, ATLAS.ti, or with assistance from artificial intelligence (A.I.).

A Note on A.I.

In reading Seale and Tonkiss, it occurred to me that A.I. may make many of these processes faster but not necessarily more helpful.  A.I. is developing rapidly, and the latest versions are already capable of more detailed searches, data mining, keyword and phrase searches, and pattern recognition.  But A.I. remains incapable of interpretation and human-centred analysis, both of which I'm assuming are critical to reflexive, participant-observer research approaches.  I believe A.I. will help researchers with the depth and breadth of the haystacks, but it has little yet to offer in determining which needles are the right ones, which nuances apply, or which experiences or impacts weigh more than others.

Conclusion

Chapter 24 of Researching Society and Culture presents content and text analysis as adaptable, systematic methods for navigating large volumes of qualitative data while maintaining a vital reflexive approach.  Essentially, they provide the researcher with an ethical roadmap for eating a seemingly indigestible elephant.  Seale and Tonkiss stress that these approaches are not just procedurally sound but also effective rebuttals to criticisms of qualitative data analysis.  To put it another way, the challenge is not in finding vast amounts of sources or texts; the challenge lies in determining which returns deserve your attention, interpretation, and reflexivity.

 

References


Castells, M. (2010). The power of identity (2nd ed.). Wiley-Blackwell.

Seale, C., & Tonkiss, F. (2018). Content and text analysis. In C. Seale (Ed.), Researching society and culture (4th ed., pp. 404–427). Sage.

 

 
 
 
