Entity Linking in Boorman’s biographical dictionary
Diving into a first dataset
A major objective of the ENP-China project is to bring together experts from very different fields to define and benefit from new research methods.
This will require establishing good communication channels across disciplines and people. For example, Natural Language Processing (NLP) scholars will have to pay attention to the specific properties of the historical sources used in this project, and historians will need to understand what NLP can and cannot do. This post (or maybe a series?) is a tale of NLP tools meeting historical sources, written by a computational linguist.
The idea behind this small experiment on Howard Boorman’s Biographical Dictionary of Republican China is that it is easier to discuss and reason about what can and should be done on the basis of a very concrete case. I chose to run a complete, off-the-shelf NLP pipeline to show historians the kind of output one can get, what is easy or hard to achieve, and what kind of errors may be expected on documents they are familiar with. It also provides a basis from which to discuss what seems useful or not. Finally, it gives us a first intuition of how well pre-trained NLP models perform on our specific data, since we need to decide which parts of the pipeline have to be improved and tuned.
In this post, I will introduce the corpus we chose to start with, describe the NLP analysis we performed, and give some examples of the outputs.
The Corpus
The corpus used in this first experiment consists of volumes one to four of Boorman’s Biographical Dictionary of Republican China, a reference work published between 1967 and 1979. It includes biographies of hundreds of prominent individuals from many areas of Chinese society who were active during the Republican period (1911–1949).
The NLP Pipeline
Since the main objective was to obtain preliminary results as soon as possible to provide concrete ground for our discussions, I decided to go with a readily available NLP pipeline with almost no modification or tuning. I selected Stanford’s CoreNLP for a couple of reasons:
- It includes pre-trained modules for every level of processing I need for English.
- It provides a nice demo web interface, which allows me to explain to colleagues unfamiliar with NLP what is happening and to illustrate how things can go wrong.
- It runs on the JVM and its packages are on Maven, so it is incredibly easy and straightforward to include a CoreNLP pipeline in a small Scala program (see the sketch below). I wrote a first pre-processing step in Scala to extract basic document structure from the OCRised PDF files.
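To give an idea of how little plumbing this requires, here is a minimal sketch rather than the actual project code: the version number, object name and annotator list are illustrative assumptions, but the sbt declarations and the StanfordCoreNLP constructor are the standard way of embedding CoreNLP in a Scala program.

```scala
// build.sbt: pull CoreNLP and its pre-trained English models from Maven Central
// (the version number is illustrative; any recent release is declared the same way)
libraryDependencies ++= Seq(
  "edu.stanford.nlp" % "stanford-corenlp" % "3.9.2",
  "edu.stanford.nlp" % "stanford-corenlp" % "3.9.2" classifier "models"
)
```

```scala
import java.util.Properties
import edu.stanford.nlp.pipeline.StanfordCoreNLP

object Pipeline {
  // The annotators roughly corresponding to the steps described in this post
  // (the ner annotator also needs lemma to run).
  private val props = new Properties()
  props.setProperty("annotators", "tokenize,ssplit,pos,lemma,depparse,ner")

  // One pipeline instance, reused for every document of the corpus.
  val corenlp = new StanfordCoreNLP(props)
}
```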
The pipeline proceeded as follows. I split the processing steps into two groups: the first one is more closely related to linguistic analysis, while the second one is closer to our final task, which can be seen as data mining or information extraction. The first part may seem odd if you have no background in linguistics, but it is necessary: providing the later machine-learning steps with input structured by such a linguistic analysis greatly improves the quality of the final output of the pipeline.
Predicting Linguistic Structures
In our pipeline, after a first step of “sentence boundary detection”, which splits the input into sentences, three steps of linguistic analysis run on each sentence: tokenization, part-of-speech (POS) tagging and syntactic parsing. Here is a (very) short definition of each step:
- Tokenization is the task of splitting the text (which otherwise is just a long stream of characters) into “tokens”, our basic unit of processing. This is quite close to what English speakers would call “words” (this is the short version of the story; for the long one, you can refer to the first half of my Ph.D. dissertation on Chinese Word Segmentation).
- POS tagging consists of assigning a grammatical category to each token, such as “noun”, “verb” or “preposition”.
- Syntactic parsing is the task of finding the structure of a whole sentence by creating connections between the tokens. The result is typically a tree structure. In our case, we run a dependency parser, which connects tokens to one another without adding intermediate nodes (phrases). There is a large body of linguistic literature on dependency syntax; if you are in a hurry, think of dependency relations between tokens as relations like “subject” or “object”. In the end, all the tokens of a sentence are connected to form a (dependency) tree.
To sum up, we start with a large collection of sentences, which from the computer’s point of view are just so many small streams of characters, and we end up with a collection of trees made of tokens labeled with grammatical categories. It can be illustrated like this:
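In code, these linguistic steps look roughly like the following sketch, which uses CoreNLP’s CoreDocument API (available in recent releases); the example sentence is invented for illustration and is not an actual excerpt from the dictionary.

```scala
import java.util.Properties
import scala.collection.JavaConverters._
import edu.stanford.nlp.pipeline.{CoreDocument, StanfordCoreNLP}

object LinguisticAnalysis extends App {
  // Sentence splitting, tokenization, POS tagging and dependency parsing only.
  val props = new Properties()
  props.setProperty("annotators", "tokenize,ssplit,pos,depparse")
  val pipeline = new StanfordCoreNLP(props)

  // Illustrative sentence, not taken from the dictionary.
  val doc = new CoreDocument("He was born in Canton in 1887 and studied law in Japan.")
  pipeline.annotate(doc)

  for (sentence <- doc.sentences().asScala) {
    // Tokens paired with the grammatical category predicted for each of them.
    val tagged = sentence.tokens().asScala.map(_.word()).zip(sentence.posTags().asScala)
    println(tagged.map { case (word, tag) => s"$word/$tag" }.mkString(" "))

    // The dependency tree connecting the tokens of the sentence.
    println(sentence.dependencyParse())
  }
}
```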
Once we have syntactic trees, we are ready to focus on what matters most for the project: spotting Named Entities and their relations in the text.
Named Entity Recognition and Linking
In the ENP project, we want to be able to spot, retrieve and connect any piece of information about the elites in heterogeneous corpora of historical sources. These bits of information include person names, organizations, dates, locations… In NLP jargon, these are called Named Entities (NEs). What should be considered an NE in a text is quite open for discussion and depends on the kind of information one wants to extract. Defining a precise typology of the NEs of interest is still a work in progress in our team; here we just want to assess what off-the-shelf tools can provide.
The NER modules included in CoreNLP are trained on corpora quite different from Boorman’s Dictionary and from the other sources we will have to analyze, so some adaptation will be needed. But the results we obtain are already exciting and allow our historian colleagues to get a grip on the kind of challenges we are facing from the NLP perspective.
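Concretely, querying the pre-trained models for entity mentions takes only a few lines. The sketch below is again illustrative (invented sentence, assumed object name), not the project code.

```scala
import java.util.Properties
import scala.collection.JavaConverters._
import edu.stanford.nlp.pipeline.{CoreDocument, StanfordCoreNLP}

object EntityMentions extends App {
  // Pre-trained NER on top of the basic linguistic analysis
  // (the ner annotator needs pos and lemma to run).
  val props = new Properties()
  props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner")
  val pipeline = new StanfordCoreNLP(props)

  // Illustrative sentence, not an actual excerpt from Boorman.
  val doc = new CoreDocument(
    "He graduated from Peking University in 1917 and later joined the Ministry of Foreign Affairs.")
  pipeline.annotate(doc)

  // Each mention comes with the entity type predicted by the off-the-shelf models,
  // e.g. PERSON, ORGANIZATION, LOCATION or DATE.
  for (mention <- doc.entityMentions().asScala) {
    println(s"${mention.text()}\t${mention.entityType()}")
  }
}
```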
The output we obtain can be illustrated with this kind of schema:
We just have to add a step of data visualization to enable our team to browse and discuss the results of the analysis.
A Foretaste of Visualization
I offer two different ways of navigating through the results. The first one is to read the actual text, enriched with the annotations drawn as in the previous figures. This visualization is made with Brat, which also provides a very nice query tool with concordances over the results, making it easy to look for occurrences of specific entities. The second one is a graph visualization and exploration of all the entities detected in the whole corpus and of the relations between them. This is achieved with the Padagraph platform.
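On the Brat side, feeding the tool mostly amounts to serialising the text and the detected entities into Brat’s standoff format (a .txt file with the raw text and an .ann file with one line per entity). Here is a minimal sketch of such an export, assuming the CoreDocument API used in the previous snippets; the entity type names would still have to be mapped to whatever the Brat configuration defines.

```scala
import scala.collection.JavaConverters._
import edu.stanford.nlp.pipeline.CoreDocument

object BratExport {
  // Turn the entity mentions of an annotated CoreDocument into Brat standoff lines,
  // e.g. "T1<TAB>PERSON 12 27<TAB>mention text".
  def standoffLines(doc: CoreDocument): Seq[String] =
    doc.entityMentions().asScala.zipWithIndex.map { case (mention, i) =>
      val tokens = mention.tokens().asScala
      val begin  = tokens.head.beginPosition() // character offset of the first token
      val end    = tokens.last.endPosition()   // character offset just after the last token
      s"T${i + 1}\t${mention.entityType()} $begin $end\t${mention.text()}"
    }.toSeq
}
```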
Check the screenshots below (click for full size), and stay tuned for a future post in which we will explain how we use these tools.
Concluding Remarks
I hope this gives everybody interesting insights into what is going on in Aix-en-Provence. Again, this preliminary experiment aims at providing a very concrete basis for discussion within the ENP team. These results were achieved in a couple of weeks and are far from perfect. I swept a lot of ugly details under the rug, and many things in the outputs are just plain wrong (sometimes in a funny way, sometimes in a puzzling way…). I may come back to the gory details in future posts or papers.
Another point worth emphasizing is that Boorman provides us with a very easy and comfortable testbed. The genre of the biographical dictionary is very specific and closely related to our objectives, and its language is not too remote from what the NLP tools were built for. This significantly helps the pipeline do its job. It may not be such an easy ride to process other kinds of sources, especially Chinese texts from the 19th century.