Social listening, once an exciting new vector into the public psyche, has become a standard tool for brands and online marketers. But it has yet to truly become the radar, the barometer or the Rorschach test of popular sentiment. Instead it’s a tool set in search of refinement and a new mission: one that leverages machine learning to automate a labor-intensive process and yield deep insights into consumer attitudes, behaviors and sensibilities.
Ardath Albee recently asked whether social listening is a waste of time, wondering about “the veracity of people’s behavior and their expressed sentiment.” Arguing that “we generally have no context to interpret why a person posts what they post,” she hints that the dynamic act of posting and tweeting may itself have developed behaviors or conventions that obscure rather than illuminate the conversation.
Today’s social listening tools scrape every piece of social data they can find and score sentiment based on word proximity. These crude algorithms don’t fully account for idioms and dismiss popular slang. Most tools return bell-curve results: small amounts of positive and negative sentiment and a huge pool of neutral (non-scorable) sentiment. That becomes the starting point for human analysts, who dig into the posts, tweets, comments and chats to find out what’s really being said about brands.
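The proximity-based scoring described above can be sketched in a few lines. This is a deliberately naive illustration, not any vendor’s actual algorithm; the lexicons and the window size are assumptions, and the point is how easily everything without an obvious signal lands in the oversized “neutral” bucket.

```python
# Toy lexicons (assumptions for illustration only).
POSITIVE = {"love", "great", "awesome"}
NEGATIVE = {"hate", "awful", "broken"}

def score_post(text: str, brand: str, window: int = 3) -> str:
    """Score a post by looking for lexicon words within `window`
    tokens of the brand mention; everything else is 'neutral'."""
    tokens = text.lower().split()
    if brand.lower() not in tokens:
        return "neutral"
    i = tokens.index(brand.lower())
    nearby = tokens[max(0, i - window): i + window + 1]
    pos = sum(t in POSITIVE for t in nearby)
    neg = sum(t in NEGATIVE for t in nearby)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"  # the bulk of real posts end up here
```

Note that a sarcastic post like “oh great, my acme broke again” would score as mixed or positive under this scheme, which is exactly the weakness the analysts have to clean up by hand.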
Many tools claim to reveal key social media influencers. Most count the volume of posts, the number of friends or followers, or both, to designate individuals as “influencers.” But absent a consensus on how to count or weigh volume, the data is meaningless. Volume and influence are two different concepts: the mouth reaching the most ears, or blabbing most often, may not be the most persuasive, and vice versa. And since none of these tools can legitimately track downstream activity or penetrate closed systems (e.g. Facebook) where influence is accumulated and wielded, “influencer” is a misnomer.
In this imperfect and evolving arena, there are four principal challenges for Social Listening 2.0.
Bot or Not. We need a methodology for identifying real people and counting them separately from spam bots.
Idioms. English, and every other language, is loaded with slang words and idiomatic expressions. We need to build connected look-up tables of the most common idioms and words to get a better picture of sentiment. The database technology and the etymological compilations already exist. It will be interesting to see who mashes them up with existing tool sets first.
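The look-up table idea can be sketched as a normalization pass run before word-level scoring: multi-word idioms are replaced with unambiguous sentiment tokens so that, for example, “sick” in “that’s sick” isn’t misread as negative. The table here is a tiny illustrative assumption; the real work is compiling and maintaining one at scale.

```python
# Tiny illustrative idiom table (assumption, not a real compilation).
IDIOMS = {
    "that's sick": "POSITIVE_IDIOM",
    "the bomb": "POSITIVE_IDIOM",
    "epic fail": "NEGATIVE_IDIOM",
}

def normalize_idioms(text: str) -> str:
    """Replace known idioms with sentiment placeholder tokens
    before the per-word scoring pass runs."""
    lowered = text.lower()
    for idiom, token in IDIOMS.items():
        lowered = lowered.replace(idiom, token)
    return lowered
```

The scoring pass then treats `POSITIVE_IDIOM` and `NEGATIVE_IDIOM` as ordinary lexicon entries, which is why the mash-up with existing tool sets is mostly a data problem rather than an engineering one.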
Social Conventions. A sense of what is accepted and expected, and clear no-nos, are emerging in the social media world. When Facebook first took off there was considerable conversation about the etiquette of friending and un-friending. There are unarticulated givens about when and how it’s cool to post, tweet, share, retweet, shout out and criticize.
We need to identify, articulate, validate and catalogue them and then study large numbers of conversations and topical threads to gauge the role of these beliefs and conventions in skewing our understanding of what’s being said. Some of this work has begun at MIT and among behavioral economists. Once we understand what these conventions are, we can see how they affect sentiment.
We suspect there are a lot of reflexive responses and frequent pile-ons that might be momentary reflexes, not real or strong feelings. Similarly, the desire to jump on a bandwagon that creates social media momentum might be real emotion or just keeping up with, or impressing, your friends. We just can’t pick out instances of reflex activity from genuine popular movements, if such things really exist.
Personalities, the news cycle and current events frequently drive the conversation in ways that may or may not reflect real underlying sentiment and which, in some cases, are ripe for manipulation. (Did anyone really care about Kim Kardashian’s bogus wedding?) Drawing a baseline on discrete issues and then calibrating subsequent conversations against it feels like the first step toward understanding these phenomena.
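One way to operationalize the baseline-and-calibrate idea is to track a rolling baseline of daily mention volume for a topic and flag days that spike far above it, so news-cycle bursts can be separated from the underlying conversation level. The window and threshold below are illustrative assumptions.

```python
from statistics import mean, stdev

def flag_spikes(daily_counts: list, window: int = 7,
                threshold: float = 3.0) -> list:
    """Return indices of days whose mention volume exceeds the
    rolling baseline by more than `threshold` standard deviations."""
    spikes = []
    for i in range(window, len(daily_counts)):
        base = daily_counts[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma and daily_counts[i] > mu + threshold * sigma:
            spikes.append(i)
    return spikes
```

Days flagged this way are candidates for the “manufactured event” bucket; the steady baseline underneath them is a better read on durable sentiment than the spikes themselves.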
Defining Influence. Marketers prize individuals who truly influence the people around them. Most of us fantasize that if we can influence the influencers, all our communications needs will be met efficiently and effectively. Trouble is, nobody can agree on a definition of an influencer that can be counted or measured by our social listening tool sets. Getting a working consensus hypothesis is step one. But it will require some thinking about reach by platform, volume across platforms, sentiment, intensity of content, timing, relevance and virality over a fixed time frame.
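To make the measurement problem concrete, here is one hedged sketch of a composite influence score over a fixed time frame, combining the factors listed above. The feature names, normalizations and weights are all assumptions chosen for illustration, not a validated model; the point is that any such score forces the definitional questions into the open.

```python
import math

def influence_score(reach: int, posts: int, avg_sentiment: float,
                    relevance: float, virality: float) -> float:
    """Combine normalized signals into a 0-1 score for one time window.
    reach: audience size; posts: volume in the window;
    avg_sentiment, relevance, virality: assumed already in [0, 1].
    Weights are illustrative assumptions."""
    reach_n = math.log10(reach + 1) / 7   # ~10M followers maps to 1.0
    volume_n = min(posts / 100, 1.0)      # cap volume's contribution
    score = (0.35 * reach_n + 0.15 * volume_n +
             0.20 * avg_sentiment + 0.15 * relevance +
             0.15 * virality)
    return round(min(score, 1.0), 3)
```

Notice that volume is capped and down-weighted: that is one possible stance on the volume-versus-influence distinction the text draws, and exactly the kind of choice a consensus hypothesis would have to settle.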
It’s time for all of us listeners to think harder about making this window into the thoughts, needs and feelings of consumers sharper, clearer and more automatically useful.