This is a guest post from Wrench, who I'm working with along with a few others to develop the Inexact Sciences.
The point of science and theory is to sculpt out useful idealized machinery that makes thinking about very complex systems tractable. People always point to physics for this because it has achieved the best ratio of reduction to insight of any field. Idealized conditions, though very expensive to approach, reveal fundamental laws that are largely unchangeable. “Largely” because, of course, things like relativity come along to upend the system every once in a while.
When we think about language, this is tricky. The tradition in linguistics has largely been to study language as a mechanism in itself, divorced from as much context as possible, and see how much such patterns can explain. While sometimes driven by philosophy, it seems like one of the biggest drivers behind this setup has been the fact that linguistic contexts are very difficult to share, (a) because they involve so many different parts of perception, including layers of reality we don’t have any encoding for (e.g. social context), and (b) because it’s pretty clear we don’t even have a good working theory of what all those components are, or of what it would mean to “explain” language in light of them. In contrast, explaining syntactic preferences presents a space of possibilities that can be studied in idealized lab conditions.
What would it mean to try to explain language? Most sentences have something close to a “literal” meaning—the explicit relationships the sentence draws out among its references. Yet often what these explicit relationships imply is more accurately described as the meaning. Consider:
A: How’s life lately?
B: Life is…life.
As far as I am aware, “Life is life.” is not idiomatic. Yet the tautology is understood here to withhold positive confirmation—confirmation that would be expected if B were simply feeling neutral, implying B has been having a negative experience. Most semantic representations attempt to encode this “literal” or “explicit” meaning, under the (correct) intuition that other meanings are largely parasitic on the literal. “Largely” because in every private community there are linguistic cues that come to carry connotations prior to the literal meaning of the sentence, usually in reference to a key event or common signal that comes up often in that community.
The question I’m interested in is: How do we understand what is “understood” from a sentence? Social contexts are varied, so most representations I’m aware of would be wildly insufficient. Where do we start with our idealizations? Representing the internals of a person’s head is even more of a mess than representing outcomes. Yet I think if we approach things through the classic attempt to represent the literal, the indexical distinctions that come up become murkier and murkier, until we end up in the shadow of Chomsky, trying to explicate meaning from syntax.
Perhaps the biggest obstacle is just data—language spoken as part of a formal experiment simply isn’t representative of most of the factors that go into private language use. And explaining communication is at least as hard as explaining human intelligence—likely harder, because of network effects. Again, it depends on our definition of “explain.”
Data seems like an insurmountable barrier…except for large social media companies that wish to study their users. Data on a single platform is still usually not rich enough to understand real coordination, but it is certainly more informative than laboratory conditions. Needless to say, such use of data would lead to a public outcry now. That is why it happens behind closed doors, stringently guarded, with plenty of descriptions of side-projects that justify it, e.g. Fake News Detection. I would be willing to wager that 30 years from now, using private data on a platform to study the human psyche will be viewed as at most a “necessary evil,” so long as “proper privacy protocols are followed.”
What will we do when all the real understanding of language has been monopolized by individual companies with no desire to create and share a general theory? The time to act was 70 years ago, at the dawn of AI: to recognize that communication would be a tricky and sensitive notion, because it is the coordination of hidden representations in the brain, representations that are essentially our thoughts and identities. Current ethical discourse suggests we should shut down anything too sensitive as a privacy infringement, and largely revolves around what is “too dangerous to study.” I cannot, in good faith, accept this line. If we want a world in which we understand the technologies of manipulation, the way to get there is to make a public science of the social games people play.
In 2020 this has become an especially sensitive topic, as people’s words are increasingly weaponized against them in culture wars. There has likely not been another time since the rise of the internet when people were less willing to have their words used for public study. I don’t claim to know what should be done, but it is precisely these kinds of Twitter dynamics, where real social power is being brokered, that need neutral scientific study.
Perhaps this is the rise of the anonymous research group. Cryptocurrency fluctuates in value, but reproducible findings on public data can lay claim to some kind of fundamental value. Or perhaps the fragmentation and hoarding of power by groups that are well coordinated is now inevitable.
Regardless, I intend to attempt a public study of communication, and most especially of power dynamics. Status and power are essential elements of every conversation, and they inevitably determine the stances of interlocutors—yet they are completely irrelevant to literal meaning, and are thus relegated to the secondary role of “flavoring” text rather than shaping its interpretation. This couldn’t be more wrong. It is status that determines whether a joke should be taken sarcastically or literally. It is power that determines what a person is willing to say, and therefore the information entropy of their response, because if there is only one possible response, no information has been transferred at all.
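The entropy point can be made concrete. A minimal sketch in Python (the reply distributions are invented for illustration): a speaker who might plausibly say several things transmits information; a speaker constrained to a single permissible reply transmits none.

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A speaker free to answer honestly: several plausible replies.
free = [0.5, 0.3, 0.2]
# A speaker under social pressure: only one permissible reply.
constrained = [1.0]

print(entropy(free))         # ≈ 1.485 bits
print(entropy(constrained))  # 0.0 bits: the reply was forced, so it tells us nothing
```

This is just the standard Shannon measure applied to the essay's claim: power shrinks the space of possible responses, and a response space of size one has zero entropy.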
In the Information Age, power is becoming easier to quantify, because we can study quantifiable properties (e.g. follower counts) and effects (e.g. themes repeated in other users’ subsequent messages). If we do not have the data or the conceptual vocabulary to understand the dynamics of two friends chatting, then I suppose we will have to wait. But the time is ripe to understand the new sociality of the web, where, under lockdown, the majority of socialization in the USA is taking place. The game is to cultivate the in-group—but how do people know where the lines are, and enforce them so efficiently?
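To illustrate what measuring an “effect” might look like, here is a toy sketch, with entirely invented data and a hypothetical `echo_rate` helper: the fraction of a follower's messages that repeat a theme after a source account first uses it. A real study would need far richer signals, but the shape of the measurement is this simple.

```python
def echo_rate(source_msgs, follower_msgs, theme):
    """Fraction of follower messages repeating a theme after the source first uses it.

    Messages are (timestamp, text) pairs; the data model is illustrative only.
    """
    first_use = min((t for t, text in source_msgs if theme in text), default=None)
    if first_use is None:
        return 0.0  # the source never used the theme
    later = [text for t, text in follower_msgs if t > first_use]
    if not later:
        return 0.0
    return sum(theme in text for text in later) / len(later)

source = [(1, "the discourse is a psyop"), (5, "hello")]
followers = [(2, "total psyop honestly"), (3, "nice weather"), (0, "psyop?")]
print(echo_rate(source, followers, "psyop"))  # 0.5: one of two later messages echoes it
```

Note the message at timestamp 0 is excluded: repetition only counts as an effect when it follows the source, which is the minimal causal hygiene any such metric needs.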