Let's talk about Qualitative Data & Sensemaking

Social media brand communities and consumer engagement: Excerpts from a discourse analysis

Tags: brand, computer-aided qualitative data analysis, discourse analysis, publications | Jan 09, 2023
Book publication: Advances in Brand Semiotics & Discourse Analysis

In this post, I would like to share a bit about my latest publication in the book: Advances in Brand Semiotics & Discourse Analysis. The book provides an original collection of conceptual and methodological chapters that showcase discourse analytic and semiotic perspectives in branding research. I contributed a chapter on the computer-aided analysis of social media brand communities and consumer engagement.

In the following, you will find a few excerpts from the chapter.


Brand communities in social media

The internet is a perfect place for consumers to talk about a brand and form a community around it (Muniz & O’Guinn 2001; Oprea 2019). Moreover, the data is easily accessible and well suited to qualitative analysis, especially when inquiring into discourse to uncover patterns of structure and meaning in networked communications.

The chapter aims to showcase possible ways for analyzing discourse rather than presenting study findings. I invite you to use this chapter as a toolbox and source of inspiration.

The data set I am using captures interactions between marketers and consumers, and between intermediaries and consumers, around the car brand Jeep. According to Barger, Peltier & Schultz (2016) and Rossolatos (2020), social media bridges the communication gap between sender and receiver and enhances consumer engagement.

The social media platform I focused on was YouTube.

First, I chose a corporate video from Jeep® introducing the Grand Cherokee 2022 model. Then, since there was no equivalent corporate video in German, I chose a video from the channel of AutoBild, the leading German automobile magazine (an intermediary). Their YouTube channel has been operational since 2006 and features new car models, test drives, and crash tests, among other content.

While reading the comments, it became evident that the discourse unfolded quite differently from that around the official Jeep video. To assess whether this was related to the sender (corporate video vs. car channel, i.e., intermediary), I added another video from a US-based car channel, ‘Alex on Autos.’ This channel provides weekly in-depth reviews of the latest car models on the market. Alex’s channel, just like AutoBild’s, has been online since 2006.

Thus, the data set comprised the following three videos:

  • Jeep® – The All-New 2022 Grand Cherokee Reveal
  • AutoBild – Jeep Grand Cherokee L (2021) | Das ist der neue Jeep Grand Cherokee
  • Alex on Autos – Bigger And More Luxurious Than Ever | 2022 Jeep Grand Cherokee First Drive Review


Analyzing user contributions

Tools such as ExportComments.com can extract social media comments. When you then import the data into a QDAS package like ATLAS.ti, all users are automatically coded.

The code frequency for users can indicate whether there is a dominant speaker or how much engagement there is between users in a conversation (Apenes, Birgit & Pedersen 2016). The example in Figure 1.2 shows that most users write only one post. There was greater engagement among users in the corporate video than in the car channel’s video introducing the new model. The second video has the highest engagement level, with the channel host posting 35 comments.

Such a comparison can be made quickly after importing the data, without much additional work. To create the table and figure, I grouped all users by their number of contributions and created what is called the Code-Document-Table in ATLAS.ti.

Comparing code frequencies by documents is a standard feature in QDAS and can be performed using different packages. As the total number of comments differs for each video, I normalized the data and opted to display relative frequencies for easier comparison.


Figure 1.2: Contribution of users to the conversation by number of posts (ATLAS.ti)
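If you want to see the logic behind such a table, the grouping and normalization steps can be sketched in a few lines of Python. The comment data below is made up purely for illustration; the actual export format from a tool like ExportComments.com will differ.

```python
from collections import Counter

# Hypothetical (user, comment) pairs, standing in for an exported comment list.
comments = [
    ("anna", "Love the new design!"),
    ("ben", "Too expensive."),
    ("anna", "Also, great engine options."),
    ("cara", "Meh."),
]

# Step 1: count posts per user (what auto-coding the users gives you).
posts_per_user = Counter(user for user, _ in comments)

# Step 2: group users by their number of contributions ("1 post", "2 posts", ...).
contribution_groups = Counter(posts_per_user.values())

# Step 3: normalize to relative frequencies so videos of different sizes compare.
total_users = len(posts_per_user)
relative = {n: count / total_users for n, count in contribution_groups.items()}
print(relative)
```

The same three steps, counting, grouping, normalizing, are what the Code-Document-Table performs behind the scenes, just across all documents at once.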


Comment: I know this is not yet qualitative data analysis. But you don’t have to be a purist to “do qualitative data analysis.” Just use the tools that you think are helpful for whatever you want to find out. Looking at some numbers can be a good starting point for getting a feel for the data.


Analyzing sentiment

The next step could be to analyze the general sentiment of each conversation to determine the type of consumer engagement (Lovett 2011). Using QDAS, data segments can be automatically analyzed for sentiment and coded. The results, however, need to be checked and corrected. At the current stage of machine learning development, the underlying algorithms are not effective enough to render the human analyst redundant. Even after adjusting the coding, you might want to invite a second person to validate your sentiment assessment.

In ATLAS.ti, the output of sentiment coding looks as shown in Figure 1.3. I decided to code the entire speaker unit, including user name, likes, and date, as this facilitates subsequent co-occurrence analyses with other topics.


 Figure 1.3: Coding for sentiment (ATLAS.ti)

After coding all speakers’ contributions, you may perform a comparative analysis to gauge which conversation is more or less favorable. This can be facilitated by the Code-Document-Table (Figure 1.4), which, in our case, suggests that the discussion on the corporate video has a positive vibe to it, whereas the dialogue in German is much more hostile and akin to negative brand co-creation (Bambauer-Sachse & Mangold 2011; Rauschnabel et al. 2016; Kristal et al. 2018; Rossolatos 2019). The conversation on the US car channel is balanced: less favorable than that on the corporate video but less adverse than the German conversation. One would need to analyze more conversations to reach a conclusive argument as to whether this is related to culture or to the type of video.


Figure 1.4: Comparative sentiment analysis (ATLAS.ti)

As seen in Figure 1.4, you can access the data behind each table cell or bar in the chart simply by clicking on it. In addition, the four coded statements from the conversation about the US Jeep video are displayed on the right-hand side. Thus, in addition to examining the tables and charts, you can always read the data behind the numbers. Based on this information, you may draft a report summarizing the content behind the various sentiments.
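The comparative step itself is simple once the sentiment labels exist. A minimal Python sketch, with invented labels standing in for the checked auto-coding of the three conversations, looks like this:

```python
from collections import Counter

# Hypothetical sentiment codings per video (one label per comment),
# standing in for what the QDAS auto-coder produces after manual checking.
codings = {
    "Jeep corporate": ["positive", "positive", "neutral", "positive", "negative"],
    "AutoBild":       ["negative", "negative", "neutral", "negative", "positive"],
    "Alex on Autos":  ["positive", "negative", "neutral", "neutral", "positive"],
}

def sentiment_profile(labels):
    """Relative frequency of each sentiment, normalized per conversation."""
    counts = Counter(labels)
    total = len(labels)
    return {s: counts[s] / total for s in ("positive", "neutral", "negative")}

for video, labels in codings.items():
    print(video, sentiment_profile(labels))
```

Normalizing per conversation matters here for the same reason as before: the three videos attracted different numbers of comments, so only relative frequencies are comparable.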



Analyzing content

In addition to all other aspects of speech, it is essential to know what people are writing about, how the conversation evolves, what cues initiate a specific topic, and how much space is allocated to it in an exchange. With the help of QDAS, it is easy to relate the content to other aspects like sentiment, representation of attitude and knowledge, or contextualizing cues. You can code for content as you browse and read the data, or you can let QDAS help you find topics. The functions range from creating word lists or word clouds to check which words occur most frequently, to having the software suggest themes or concepts that you can then code automatically (ATLAS.ti and NVivo). This works best and is most helpful with a more extensive data set. Figure 1.10 hints at the results of such an automated search. Judge for yourself.

Figure 1.10: Themes identification (NVivo)
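The simplest of these functions, the word frequency list, is easy to reproduce outside any QDAS package. The sketch below uses invented comments and a toy stop-word list; real word-cloud features apply much larger stop-word lists and often stemming as well.

```python
import re
from collections import Counter

# Toy stop-word list; QDAS packages ship far more comprehensive ones.
STOPWORDS = {"the", "a", "is", "it", "and", "of", "to", "not"}

# Hypothetical comments, for illustration only.
comments = [
    "The interior is gorgeous and the engine is strong.",
    "Reliability is the issue, not the interior.",
]

tokens = []
for text in comments:
    tokens += [w for w in re.findall(r"[a-z']+", text.lower())
               if w not in STOPWORDS]

# 'interior' tops this tiny list with two occurrences.
print(Counter(tokens).most_common(3))
```

Even this crude count already surfaces the recurring topic ('interior'), which is exactly why a word list is a reasonable first pass before close reading.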

If you use the suggested themes or concepts, you need to read through the automatically coded data segments, potentially reorganizing and refining them. This can replace the initial step of ‘broad-brushed’ coding but does not substitute for the process of close reading (Bazeley 2013; Richards 2021).



Relating content to other aspects of the data

Another option that most QDAS packages offer is code co-occurrence analysis. This means the software can find segments coded with multiple aspects. For example, you could identify topics discussed in a more positive, negative, or neutral tone. An example is shown in Figure 1.13, using MAXQDA’s Code Relations Browser. The heat map highlights the highest frequency per row, i.e., per content area discussed.

There is little difference among the sentiment types overall. However, comments about purchasing or owning a Jeep have a more positive vibe. In contrast, the debate on electric vehicles is more negative, while the feature discussion wavers between neutral and negative.

Car model comparisons are primarily discussed neutrally. This is, of course, a very top-level analysis. However, when writing up results, you can go into greater depth by looking at the subcodes for each content category and setting a filter to compare the discussion in each document.

We already know from the previous analysis that the German-language discussion is overall more negative. Accordingly, the negatively toned exchanges on cultural aspects, car features, and electric vehicles are mainly found in the German data.
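Conceptually, a co-occurrence table is just a frequency count over pairs of codes attached to the same segment. A minimal sketch, with invented content and sentiment codes loosely echoing the categories above:

```python
from collections import Counter

# Illustrative coded segments: each segment carries one content code and one
# sentiment code, as after coding for both aspects in a QDAS package.
segments = [
    {"content": "purchase/ownership", "sentiment": "positive"},
    {"content": "purchase/ownership", "sentiment": "positive"},
    {"content": "electric vehicles",  "sentiment": "negative"},
    {"content": "features",           "sentiment": "neutral"},
    {"content": "features",           "sentiment": "negative"},
]

# Code co-occurrence table: (content code, sentiment code) -> frequency.
cooc = Counter((s["content"], s["sentiment"]) for s in segments)

for (content, sentiment), n in sorted(cooc.items()):
    print(f"{content:<20} {sentiment:<9} {n}")
```

A heat map like the one in the Code Relations Browser is this table with color mapped onto the frequencies, row by row.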


Figure 1.13: Code co-occurrence in MAXQDA


Multimodal analysis

When using QDAS, you are not restricted to analyzing verbal text. You can also add audio, video, or image data to a project (see, for instance, Rossolatos 2013). The comments analyzed here are part of a multimodal text. The reality of such fleeting online communities is constituted through the intermodal relationships that result from the interaction of semiotic choices (Jewitt 2009; O’Halloran 2011). The entire conversation is a reaction to the video. It is interesting to examine which parts of the video are directly referred to, and whether one person’s reference sparks reactions from others, leading to a mini-conversation or even dominating the entire discussion, as was the case with the Alex on Autos video. In the following, this video is used as an example to demonstrate the ability of QDAS to support the analysis of intermodal relationships.

In Figure 1.20, you can see how a multimodal conversation can be displayed in QDAS (ATLAS.ti, in this case). The comments on the left-hand side show the links to the video document; the codes are hidden for clarity. With a click on a link, the linked segment is shown or played back, or you can jump directly to it.


Figure 1.20: Display of multimodal discourse in ATLAS.ti
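Under the hood, such comment-to-video links amount to a simple data model: each comment points at a time span of the video. The sketch below is a hypothetical reconstruction of that model; the comment texts and timestamps are invented.

```python
# Hypothetical comment-to-video links: each link attaches a comment to a
# time span (start, end) in seconds, mirroring hyperlinks in a QDAS project.
links = [
    {"comment": "American Range Rover? Really?",  "span": (1502, 1503)},
    {"comment": "Reliability is the real issue.", "span": (1502, 1503)},
    {"comment": "Nice shot of the interior.",     "span": (240, 255)},
]

def comments_for(second, links):
    """Return all comments whose linked video span covers the given timestamp."""
    return [l["comment"] for l in links
            if l["span"][0] <= second <= l["span"][1]]

# All reactions anchored to the moment of the 'American Range Rover' remark.
print(comments_for(1502, links))
```

Querying the links by timestamp is what makes it possible to see at a glance that a one-second utterance anchored a whole cluster of reactions.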

The utterance towards the end of the video, “... a vehicle that has a well-deserved reputation for being sort of the American Range Rover ...,” triggered many reactions that resulted in multiple micro-conversations in the thread. The original comparison was related to the price of the vehicle, but this was not discussed in those reactions. Apart from one response, the conversations turned to the issue of reliability, and this topic began to dominate the discussion.


The network graph (Figure 22) shows how the discourse was channeled in many directions.

The term ‘Range Rover,’ or the inference that the Grand Cherokee is sometimes referred to as the ‘American Range Rover,’ hit a nerve in the community. The responses mostly had a negative connotation or an ironic undertone. Considering that the entire video was 26 minutes long and the utterance lasted only about one second, the effect was immense.

Considering this from a brand reputation point of view, any released material should be carefully scrutinized to identify whether it contains information that consumers might find irritating, thus pre-empting adverse reactions.

From a cultural branding point of view, analyzing such multimodal discourses may allow for the surfacing of deep-seated beliefs that have persisted for many years.


List of References

Androutsopoulos, J., 2016. Participatory culture and metalinguistic discourse: performing and negotiating German dialects on YouTube. In: Tannen, D. and Trester, A. M., eds. Discourse 2.0: Language and new media. Washington, DC: Georgetown University Press, 47-71.

Apenes, S., Birgit, A., and Pedersen, P.E., 2016. The role of customer brand engagement in social media: conceptualisation, measurement, antecedents and outcomes. International Journal of Internet Marketing and Advertising, 10 (4), 223-254.

Bambauer-Sachse, S. and Mangold, S., 2011. Brand equity dilution through negative online word-of-mouth communication. Journal of Retailing and Consumer Service, 18, 38-45.

Barger, V., Peltier, J. and Schultz, D., 2016. Social media and consumer engagement: a review and research agenda. Journal of Research in Interactive Marketing, 10 (4), 268-287.

Bazeley, P., 2013. Qualitative data analysis: practical strategies. London: Sage.

Bolden, G. B., 2009. Implementing incipient actions: the discourse marker ‘so’ in English conversation. Journal of Pragmatics, 41 (5), 974-998.

Chafe, W., 1994. Discourse, consciousness and time. Chicago: University of Chicago Press.

Crystal, D., 2001. Language and the Internet. Cambridge: Cambridge University Press.

Gumperz, J. J., 1982. Discourse strategies. Cambridge: Cambridge University Press.

Henley, N. M., Miller, M. and Beazley, J. A., 1995. Syntax, semantics, and sexual voice: agency and the passive voice. Journal of Language and Social Psychology, 14, 60-84.

Herring, S., 2013. Discourse in Web 2.0: familiar, reconfigured, and emergent. In: Tannen, D. and Trester, A. M., eds. Discourse 2.0: Language and new media. Washington, DC: Georgetown University Press.

Herring, S., 2004. Computer-mediated discourse analysis: an approach to research in online behaviour. In: Barab, S. A., Kling, R. and Gray, J. H., eds. Designing for virtual communities in the service of learning. New York, NY: Cambridge University Press, 338-376.

Hood, S. and Forey, G., 2008. The interpersonal dynamics of call-centre interactions. Co-constructing the rise and fall of emotion. Discourse & Communication, 2, 389-409.

Hunt, K. W., 1966. Recent measures in syntactic development. Elementary English, 43, 732-739.

Jewitt, C. ed., 2009. The Routledge handbook of multimodal analysis. London: Routledge.

Johnstone, B., 2018. Discourse analysis. Oxford: Wiley Blackwell.

Kim, D. and Vorobel, O., 2015. Discourse communities: from origins to social media. In: Kim, D. and May, S., eds. Discourse and education: Encyclopedia of language and education. Cham: Springer International Publishing.

Kozinets, R.V., 2002. The field behind the screen: using netnography for marketing research in online communities. Journal of Marketing Research, 39 (1), 61-72.

Kristal, S., Baumgarth, C. and Henseler, J., 2018. “Brand play” versus “Brand attack”: The subversion of brand meaning in non-collaborative co-creation by professional artists and consumer activists. Journal of Product & Brand Management, 27 (3), 334-347.

Lovett, J. 2011. Social media metrics. New York: Wiley.

Martin, J. R. and White, P. R. R., 2005. The language of evaluation: appraisal in English. Basingstoke: Palgrave Macmillan.

Meredith, J. and Potter, J., 2014. Conversation analysis and electronic interactions: methodological, analytical, and technical considerations. In: Lim, H. L. and Sudweeks, F., eds. Innovative methods and technologies for electronic discourse analysis. Hershey: IGI Global, 162-172.

Muniz, A.M. and O’Guinn, T.C. Jr., 2001. Brand community. Journal of Consumer Research, 27 (4), 412-432.

O’Halloran, K., 2011. Multimodal discourse analysis. In: Hyland, K. and Paltridge, B., eds. The Bloomsbury companion to discourse analysis. London: Bloomsbury Academic, 120-137.

Oprea, D., 2019. Discourse analysis in social media. International Multidisciplinary Scientific Conference on the Dialogue between Science and Art, Religion & Education. IFIASA.

Rauschnabel, P.A., Kammerlander, N. and Ivens, B.S., 2016. Collaborative brand attacks in social media: exploring the antecedents, characteristics, and consequences of a new form of brand crisis. Journal of Marketing Theory and Practice, 24 (4), 381-410.

Richards, L., 2021. Handling qualitative data: a practical guide. London: Sage.

Rossolatos, G., 2020. The depth of brand engagement funnel: dimensionalising interaction in social media brand communities. Qualitative Market Research, 24 (2), 200-220.

Rossolatos, G., 2019. Negative brand meaning co-creation in social media brand communities: a laddering approach using NVivo. Psychology & Marketing, 36, 1249-1266.

Rossolatos, G., 2013. A methodological framework for conducting multimodal rhetorical analyses of advertising films with ATLAS.ti. In: Friese, S. and Ringmayr, T. eds. ATLAS.ti user conference 2013: fostering dialog on qualitative methods. Berlin, Germany: Berlin Technical University Press, 1-52.

Sacks, H., Schegloff, E. A., and Jefferson, G., 1974. A simplest systematics for the organization of turn-taking for conversation. Language, 50, 696-735.

Schegloff, E. A., 1968. Sequencing in conversational openings. American Anthropologist, 70, 1075-95.

Schiffrin, D., 1987. Discourse markers. Cambridge: Cambridge University Press.

Sherzer, J., 1987. A discourse-centred approach to language and culture. American Anthropologist, 89, 295-305.

Tannen, D., 2013. The medium is the metamessage. In: Tannen, D. and Trester, A. M., eds. Discourse 2.0: Language and new media. Washington, DC: Georgetown University Press, 99-117.



