Text networks, part two: Co-occurrence networks

This post is part of a series of posts about text network analysis. Each post will discuss one approach to algorithmically creating networks from texts. See the primer for an introduction and the list of posts.

Recalling Networks 101: a network in its formal representation consists of nodes, i.e. the basic elements of the data, and edges, i.e. relations between those nodes. When creating a network from a text, one has to answer two basic questions:

  1. What are my nodes?
  2. What constitutes a relation between two nodes?

In a friendship network, nodes are usually persons, and an edge is constituted by one person calling another their friend. In a citation network, works (books, articles) can be regarded as nodes, while a reference to another work creates an edge. However, depending on the research question, one might instead regard authors rather than works as the nodes of the network.

In the case of text networks, one might likewise look for (living/historical/fictional) persons. This has been done, for example, by Franco Moretti in his piece on Network Theory, Plot Analysis. He states:

A network is made of vertices and edges; a plot, of characters and actions: characters will be the vertices of the network, interactions the edges … (Moretti 2011)

But as stated in the primer, the approach to text network analysis presented here focuses not on an idea of a social network expressed in the text, but rather on the text as a network. This has implications for the definition of nodes and edges.

Identifying nodes

When the text itself is a network, we follow the idea that meaning in the text is created through the arrangement of basic elements. As a form of distant reading, we want to make this arrangement visible by dissolving the sequential nature of the text and highlighting its meaning structure.

The basic elements of a text, and thus the nodes in the text network, are words. This alone can provoke philosophical and practical arguments about what a word is, or how to deal with multi-word units, compounds, and so on. Since we focus on meaning structures, two basic assumptions guide our practice:

  1. Not all words are equally meaningful.
  2. Not all differences between words are meaningful.

Selecting nodes

From the first assumption it follows that some words are more relevant for our analysis than others. So in order to find a meaningful network structure, one might want to select which words are included in the network model. In many applications of text analysis, it is common to strip so-called stop words, i.e. words that carry little meaning on their own, like prepositions (in, on, of) or conjunctions (and, or, but), but possibly also frequent verbs (come, see) or adverbs (somehow, likely). Depending on the use case, there can be many different stop word lists.
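A minimal sketch of such filtering, assuming Python with NLTK and its English stop word list (any domain-specific list could be substituted):

```python
# Sketch: stop word removal with NLTK's English stop word list.
# Requires the stopwords corpus: nltk.download("stopwords")
from nltk.corpus import stopwords

tokens = ["i", "thought", "i", "would", "sail", "about", "a", "little"]
stops = set(stopwords.words("english"))

# Keep only tokens that are not on the stop word list.
content_words = [token for token in tokens if token not in stops]
```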

Another approach is to filter words by their word class or part of speech. This can yield similar results to typical stop word lists, e.g. when filtering out word classes like prepositions and pronouns. But it can also have far-reaching consequences for the network topology: one might exclude verbs altogether, or include only nouns.
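A comparable sketch for part-of-speech filtering, here keeping only nouns and assuming NLTK's default English tagger:

```python
# Sketch: keep only nouns, using NLTK's default part-of-speech tagger.
# Requires the tagger model: nltk.download("averaged_perceptron_tagger")
import nltk

tokens = ["I", "would", "sail", "about", "and", "see", "the",
          "watery", "part", "of", "the", "world"]
tagged = nltk.pos_tag(tokens)

# Penn Treebank noun tags all start with "NN" (NN, NNS, NNP, NNPS).
nouns = [word for word, tag in tagged if tag.startswith("NN")]
```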

Abstracting words

When referring to “words” as the elements in the meaning structure of a text, we usually implicitly mean an abstract concept of a word rather than its concrete appearance: we would expect “go”, “goes”, and “went” to appear in the network as a single node, “go”. This is especially important for highly inflected languages like German or Pāḷi.

In text mining and computational linguistics, there are two approaches to this. The first is stemming, i.e. reducing each word to its stem form. This is a rather aggressive way of abstracting words, since it collapses words into a single node even across word classes: “sail”, “sailor” and “sailing” would all be merged into the single stem “sail”. This might be desirable, since all three do refer to a common concept of wind-powered shipping. But it might also blur important distinctions.
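For illustration, a sketch with NLTK's Porter stemmer; whether a given stemmer also collapses “sailor” depends on its rule set:

```python
# Sketch: reducing inflected forms to a common stem with the Porter stemmer.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["sail", "sails", "sailed", "sailing"]:
    print(word, "->", stemmer.stem(word))  # all four yield the stem "sail"
```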

The second approach is lemmatisation. The idea is to reduce each word to its base or dictionary form. Ideally, the three words would be lemmatised as “sail”, “sailor”, and “to sail”. In practice, however, the result is typically “sail”, “sailor” and “sail”. To improve results, an additional step is word sense disambiguation: instead of identifying a node by its lemma alone, it makes sense to use a distinct sense or meaning of the lemma. This is well known from Wikipedia, where e.g. the page Go lists all the possible meanings of the lemma, while the individual pages are disambiguated as Go (verb) or Go (game). These disambiguated semantic units are best suited as nodes of a text network.
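A sketch of lemmatisation with NLTK's WordNet lemmatiser; note that it already needs a word class hint, which points towards the disambiguation problem:

```python
# Sketch: lemmatisation with the WordNet lemmatiser.
# Requires the WordNet data: nltk.download("wordnet")
from nltk.stem import WordNetLemmatizer

lemmatiser = WordNetLemmatizer()
print(lemmatiser.lemmatize("went", pos="v"))     # "go"
print(lemmatiser.lemmatize("goes", pos="v"))     # "go"
print(lemmatiser.lemmatize("sailing", pos="v"))  # "sail"
```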

Often, not all of this information is available. In practice, we try to come as close to disambiguated semantic units as possible, but one might also start by building a text network that uses word stems or plain lemmas as its nodes.

The way of abstracting meaningful semantic units from words and the choice of which words or word classes to include can vary, but these steps are fundamentally similar for all ways of building text networks. What differs significantly, however, are the criteria for identifying edges.

Identifying edges

There is no inherent criterion for when two nodes in a text network should be regarded as related. There are different approaches to this, one of the simplest being co-occurrence: two words are regarded as related if they occur together in a certain context. This definition leaves two important questions open:

  1. What is the context?
  2. (How) do we weight co-occurrences?

Defining context

There are two main approaches to defining context. One is to select a structural unit of the text, e.g. a sentence or a paragraph. Each word that appears in that unit (sentence) is connected to all other words appearing in the same unit, so one basically gets a fully connected network for each sentence. The final network is then created by merging all the sentence networks into one large network.
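A minimal sketch of the sentence-based approach, assuming the text has already been split into sentences of filtered, lemmatised words:

```python
# Sketch: one fully connected network per sentence, merged into one graph.
import itertools
import networkx as nx

sentences = [
    ["year", "money", "purse"],
    ["money", "shore", "world"],
]

G = nx.Graph()
for sentence in sentences:
    # Connect every pair of distinct words within the sentence.
    for a, b in itertools.combinations(set(sentence), 2):
        G.add_edge(a, b)
```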

Another approach is to use proximity as an indicator of context: a window of a certain size is moved over the text. A window of size five means that all words with at most three words between them are connected. Since sentences can get quite long, a window-based approach usually yields more fine-grained results than a sentence-based one. On the other hand, one might miss some relevant connections, e.g. if long subordinate clauses separate two related words. Word windows might even ignore sentence boundaries, based on the assumption that the beginning of a sentence tends to relate to the end of the previous one. But one can also define structural units that should not be crossed, e.g. paragraphs.
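The window-based variant can be sketched accordingly, connecting every pair of words that are fewer than five positions apart:

```python
# Sketch: connect all pairs of words within a window of five tokens,
# i.e. words with at most three other words between them.
import networkx as nx

tokens = ["year", "money", "purse", "nothing", "shore", "part", "world"]
window_size = 5

G = nx.Graph()
for i, a in enumerate(tokens):
    for b in tokens[i + 1:i + window_size]:
        G.add_edge(a, b)
```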

Weighting relations

Important information is not only whether a relation exists, but also how strong it is. This can be captured in the network by giving edges a weight, e.g. by simply counting co-occurrences: if two words appear in the same context thrice in a text, the edge between them gets a weight of 3.
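As a sketch, counting how often a pair co-occurs across contexts and storing the count as the edge weight:

```python
# Sketch: weight edges by the number of contexts in which a pair co-occurs.
import itertools
import networkx as nx

contexts = [
    ["sea", "sailor", "ship"],
    ["sea", "sailor"],
    ["sea", "voyage"],
]

G = nx.Graph()
for context in contexts:
    for a, b in itertools.combinations(set(context), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)
# G["sea"]["sailor"]["weight"] is now 2.
```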

In corpus linguistics, statistical metrics are usually used to determine not only how frequent a co-occurrence is, but also whether it is statistically significant. Words that are frequent by themselves tend to have frequent relations to other words. Significance metrics like the log-likelihood ratio (Manning and Schütze 1999, 172) tell whether a relation is more frequent than one would expect given the frequencies of the individual words. In our work, however, we found these significance metrics somewhat less interpretable in a network context.
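For comparison, a sketch of how such scores can be computed with NLTK's collocation tools, here within a window of five words:

```python
# Sketch: scoring co-occurrences by log-likelihood ratio with NLTK.
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

tokens = ["i", "thought", "i", "would", "sail", "about", "a", "little",
          "and", "see", "the", "watery", "part", "of", "the", "world"]

finder = BigramCollocationFinder.from_words(tokens, window_size=5)
# score_ngrams returns pairs sorted by descending score.
scored = finder.score_ngrams(BigramAssocMeasures.likelihood_ratio)
```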

Another approach has been developed for the website textexture. It uses a sliding window, meaning that a window of size x is moved through the text one word at a time. With a window of 5, the first step connects words one to five; the second step connects words two to six. Since words two to five are included in both steps, their edge weights are increased: after the second step, they are connected by edges of weight 2, while words one and six are connected to the others only by edges of weight 1 (and words one and six are not connected at all). Additionally, this approach processes the complete text twice, once with a window of 5 and once with a window of 2, giving extra weight to directly neighbouring words (Paranyushkin 2011).
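A sketch of this two-pass, sliding-window weighting (after Paranyushkin 2011; details of the original textexture implementation may differ):

```python
# Sketch: textexture-style weighting. Every step of the sliding window
# increments the weight of all pairs inside it; a second pass with a
# window of 2 gives extra weight to direct neighbours.
import itertools
import networkx as nx

def add_window_pass(graph, tokens, window_size):
    for start in range(len(tokens) - window_size + 1):
        window = tokens[start:start + window_size]
        for a, b in itertools.combinations(window, 2):
            if a == b:
                continue  # ignore self-loops from repeated words
            if graph.has_edge(a, b):
                graph[a][b]["weight"] += 1
            else:
                graph.add_edge(a, b, weight=1)

tokens = ["year", "money", "purse", "nothing", "shore", "part", "world"]
G = nx.Graph()
add_window_pass(G, tokens, 5)  # graded weights by proximity
add_window_pass(G, tokens, 2)  # extra weight for direct neighbours
```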

Some years ago – never mind how long precisely – having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world.

An example sentence from Melville’s Moby Dick

year money purse nothing shore part world

Remaining words after abstraction (lemmatisation) and filtering (nouns only)

Connections with a window of 2 …

… and with a window of 5.

This approach takes distance into account in a nuanced way: Instead of just connecting all words within a certain window, it uses graded weights, depending on the distance within the window. In our research, this approach seemed to work quite well.

Results

This algorithm produces the following network for the first chapter of Moby Dick (Melville 1851):

The first chapter of Moby Dick as a network

One network metric that suggests the importance of nodes is “betweenness centrality”. It highlights nodes that link different parts of the network and are thus important as “bridges”. The nodes in the graph are sized according to their betweenness centrality (a minimal sketch for computing this ranking follows the list). The ten most central nodes are:

  1. sea
  2. water
  3. man
  4. land
  5. ship
  6. time
  7. thing
  8. voyage
  9. sailor
  10. street
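A minimal sketch of how such a ranking can be produced with networkx (whether edge weights should enter the calculation as distances is a modelling choice; here they are ignored):

```python
# Sketch: rank nodes by betweenness centrality. The toy graph stands in
# for the Moby Dick network built by the sketches above.
import networkx as nx

G = nx.Graph([("sea", "sailor"), ("sea", "voyage"), ("sailor", "ship"),
              ("water", "man"), ("man", "sea")])

centrality = nx.betweenness_centrality(G)
ranking = sorted(centrality, key=centrality.get, reverse=True)
```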

The network structure allows us to inspect not only individual words, but also their relations. These are the five relations with the highest edge weight (a sketch for extracting them follows the list):

  1. voyage—whaling
  2. purse—passenger
  3. sea—sailor
  4. sea—passenger
  5. time—sea
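And a sketch for reading off the heaviest edges from a weighted networkx graph (the toy weights below are made up for illustration):

```python
# Sketch: the edges with the highest weight in a weighted graph.
import networkx as nx

G = nx.Graph()  # stands in for the weighted network built above
G.add_weighted_edges_from([("voyage", "whaling", 9),
                           ("purse", "passenger", 7),
                           ("sea", "sailor", 6)])

heaviest = sorted(G.edges(data="weight"), key=lambda e: e[2], reverse=True)[:5]
```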

The edge weight as calculated by the algorithm described above is not simply the frequency of the co-occurrence of two terms, but a mixed value that takes frequency and proximity into account. The first pair is mainly due to the expression “whaling voyage”. The second, third, and fourth pairs tell us about the two kinds of people who sail the sea, passengers and sailors, and what distinguishes them:

For to go as a passenger you must needs have a purse, and a purse is but a rag unless you have something in it.

But the overall network structure reveals more than individual connections. For example, I found it quite interesting that “sea” and “water” are not directly connected, but rather form the centres of two regions in the network. In this chapter, water is mentioned especially in the context of a reflection on how people are drawn towards it:

Let the most absent-minded of men be plunged in his deepest reveries – stand that man on his legs, set his feet a-going, and he will infallibly lead you to water, if water there be in all that region.

Further reading

Besides the original paper, there are some other examples of applying co-occurrence-based networks to texts, e.g. an analysis of the European constitution (in German), or Meaning circulation in Lolita.

The algorithm described above is implemented in our own package TCFnetworks. Other applications that might be useful are AutoMap and ConText.

References

Manning, Christopher D., and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. Cambridge, Mass.; London: MIT Press.

Melville, Herman. 1851. Moby-Dick, or, the Whale. New York: Harper & Brothers; London: Richard Bentley.

Moretti, Franco. 2011. “Network Theory, Plot Analysis.” New Left Review, II, no. 68 (April): 80–102.

Paranyushkin, Dmitry. 2011. Identifying the Pathways for Meaning Circulation Using Text Network Analysis. Nodus Labs. http://noduslabs.com/research/pathways-meaning-circulation-text-network-analysis/.

