Rethinking historical research in the age of NLP

This is a short synthesis of reflections that have been building up since the beginning of the project. Mostly, it stems from the growing sense of the formidable opportunity that digital methods and sources present to historians, and of the daunting tasks and workload that result from applying NLP (Natural Language Processing) techniques to digital corpora. Basically, there are three challenges:

  • dependence on techniques/tools beyond our level of expertise
  • massification of data extracted through these techniques
  • bridging the gap between quantitative data [or data in massive quantity] and qualitative historical methods [and hence historical interpretation]

These reflections have matured through repeated conversations and exchanges with Cécile Armand (as a historian) and Pierre Magistry (as a computational linguist), and more recently thanks to the spate of Slack messages between Cécile and Pierre about one of Cécile’s case studies, the Rotary Club. We have clearly come to a point where we see better what we want, what we can hope for, and where we may be able to break through. I append to this text a preliminary protocol, directly related to our experience, for dealing with digital texts.

For historians, the last two decades represent a watershed in how they work and how they define their methods. In 1997, William G. Thomas and Ed Ayers coined the term “digital history” that eventually became a new mantra among technology-savvy or technology-inclined historians.[1] It almost created a “great divergence” within the community of historians (at least in the USA). My position all along has been fairly constant. The issue was not “digital history” as a new form of history, the issue was “history in the digital age” by which I meant: digital technologies were and are transforming everything, and every discipline needs to rethink its practices in the light of the possibilities digital technologies bring into play. For historians, it meant the transformation of sources and that of methods.

The digital age for historians is not a single, uniform era. Technologies change fast and, in fact, I can see two periods within this era. The first period was that of the dematerialisation of historical documents [from print to digitized images] and the rise of platforms that provided a much wider range of sources (texts, images, maps, data, etc.) for historical inquiry. By and large, even if these platforms changed the scale of historical information and the modes of access to it, and radically transformed the way we could rethink the production and transmission of historical knowledge, these practices remained well within the established paradigms of historical research. They did not fundamentally change the way historians do research and write. I concur with Ayers on the failed promises of digital history.[2]

The second period started with the transformation of digitized text images into digital texts. Text images remained pages of text to be read, on a computer instead of in print, but nothing had really changed. The massive transformation of digitized documents into textual datasets, starting with the academic literature — nearly all academic journals and books since the mid-2000s, not to mention retro-digitization — then primary sources, especially newspapers and periodicals, has fundamentally changed how historians need to work. First, it is now impossible, and even inadmissible, to miss anything ever published on a given topic in the academic literature. How are we to deal with this, especially for those working on China? Second, historians have access to massive corpora of texts, way beyond anything the mind can grapple with, even on a specific research topic.

Quite interestingly, this second period brought “digital history” back to the source of “digital humanities”, namely the analysis of texts through computational methods. After all, “digital humanities” trace their origins to Father Busa and his punch-card processing of the works of Thomas Aquinas, begun in 1949. In our time, processing texts (and speech) has become the purview of scholars at the intersection of computing and linguistics, notably through NLP. And NLP and historical texts are precisely where the paths of historians and computational linguists cross, intersect, and even merge. Not because the two fields should be merging, but because historical research is no longer possible without incorporating not just the “tools” or the “techniques” of NLP — this would be too easy — but without rethinking how the massification of historical data that one can potentially extract from digital corpora can be brought down to something the human mind can process.

Incorporating the methods of NLP into historical research does not come naturally. Because we work as historians, words are not just entities we can corral thanks to advanced methods of text mining. Words have meanings within a text, within a context from which the operation of data mining separates them. We obtain data, but we lose the surrounding information. Actually, this is not quite true, as we can have both the searched terms — named entities, common words — and their context (snippets of text), and we can even have the link back to the original full text. But this requires both an infrastructure and a workflow. This is what we have been developing within the ENP-China project. We have been testing the waters, feeling our way forward, and while we are still a long way from a secure and stable set of methods, a preliminary landscape is emerging, which we want to share.

Historians work fundamentally with texts (although images and maps have entered their basic staple more forcefully in the last two decades). Texts provide information, and information can be turned into data. The latter has become a key buzzword in almost all disciplines. Historians are no strangers to data, but apart from quantitative data (the unfortunate experience of cliometrics left a deep scar), historians usually do not think about and interpret historical facts (I know, some may reject the term altogether) or historical events/actions solely or mostly from “data”. They read texts. They interpret words in context. They prioritize quality over quantity.

In the age of NLP, we can have both, but we face the challenge of quality in quantity. Historical knowledge and interpretation are much better off when based on a large quantity of data, which does not equate with quantitative data. At the same time, quality matters, and historians need to be able to “qualify” the data they collect, because otherwise massive messy data could well play the same tricks as the overzealous use of quantitative data in cliometrics. In view of the potentially infinite quantity of data available in massive digital corpora, this may sound like a utopian view. And it may indeed be beyond our reach if we think in too general terms about “data”. What NLP does is transform any textual entity into a data point. This is very different from the way historians have thought of data until recently: data not just as a measure of something (number of households in a city), but data as a situated count of occurrences of words in a corpus (mentions of cholera in cities across the full run of a newspaper), and, even more, data as strings of interconnected words in a text, in a whole corpus, across various corpora. This is what NLP can do: strip down a text to its bare bones (named entity recognition and extraction) while at the same time analyzing the structure of the text, segmenting its sentences, identifying word patterns, placing each word and its role in context. Of course, the results are never crystal clear. They require a sound assessment of what NLP algorithms can or cannot achieve, what they pick up and miss versus what a human mind would pick up and miss. Yet this is a very promising avenue.

What does this bring to historical research? First, it can facilitate the exploration of massive corpora by identifying, counting, and extracting textual data, which can give a sense of what is in there that the historian can then explore more specifically. Second, data extraction can provide bare-bones data (raw data) for integration into databases (after a data-cleaning process), which can rapidly enrich the foundations upon which historians build their investigations (individuals, organisations, locations, positions, etc., situated in time and in a source). It is not just about data, but about linked data that connect data points within a database and across databases.

The implementation of NLP methods can thus bring to historians nearly everything a text contains in terms of actors (individuals, organisations, places, dates, etc.), almost without a miss. This may vary depending on the quality of the corpus and the reliability of the NLP tools, but it will always be far more exhaustive than what a human brain can ever achieve. Yet the collection of data is not the end; it is the beginning. It is the beginning because with historical texts we end up with a higher rate of messy data that awaits curation. It is the beginning because we receive a very rich dataset of named entities that awaits various forms of analysis (statistical, spatial, network analysis, graphs). But the main point for historians is the challenge of connecting these data points pulled out of their context. We are interested in the five Ws: who, what, where, when, why. I would certainly add one more: with whom. In using NLP methods, the who-what-where-when string does not raise too much of a challenge, especially for who-where-when.

The Why, however, is much more elusive, and it cannot be readily answered by solving the “Who was doing What in that Place at that point in Time” equation. To answer the Why question fully will often require reading through different sources, as historians usually do. Yet the Why question also makes sense within the context of NLP methods. Within a given corpus that provides narratives — here newspapers, periodicals, and archives — the Why question can be trimmed down to something simpler, yet very difficult to grasp. The question could be formulated as: “why do(es) an individual or various individuals with certain qualities (title, position, institution) happen to be involved in doing something somewhere at a certain point in time?” in this corpus. What it comes down to is distilling from the text what action or event was happening. To use a well-established formula, the challenge is event detection. Not events in the sense of named events (although this can be part of data mining the secondary literature, more rarely the contemporary press, where events are still unfolding), but the very elementary bricks that bring individuals together for a given purpose, without knowing beforehand that there will be such events in the text. These elementary bricks may — and most will — remain just an event at a point in time, but some may become part of a chain of events that leads to a larger movement or even to a named event (e.g. the Cemetery riot, the Tianjin massacre, etc.). This is where one of our main challenges lies.

The challenges in incorporating NLP methods into historical research are manifold, and far more exciting and promising than “data mining” alone, which is only one facet. Historians working on pre-2000 materials may think they can cling to their well-honed methods, and depending on one’s materials and approach this will remain true — but no historian working on post-2000 materials can escape the reality of the millions of terabytes of documents awaiting discovery. And in view of the creation of massive historical corpora for pre-2000 materials, going through historical sources or even the secondary literature by hand alone may prove untenable without a measure of NLP methods.

Proposal for a protocol on processing textual corpora for historians

We have been working with various types of documents that come in different qualities of digital format. This is a proposal to define the steps through which historical documents need to go to enable fruitful and relevant historical queries, both with a view to producing data for the MCDB and to gathering data that can nourish and back up historical inquiry and interpretation. The five steps outlined below cover the basic workflow. They do not integrate the central issue of events.

Step 1: From raw document to segmented document

We start from the assumption that raw documents resulting from OCR normally come as a massive, undifferentiated textual corpus, running from the first line to the last line of text. To turn this undifferentiated text into a searchable text beyond querying and finding individual terms, the structure of the text needs to be instantiated. This will take a different form depending on the nature and genre of the original document, as listed below (a minimal segmentation sketch follows the list).

  • Newspapers/periodicals: re-creating the individualised IDed sections of the newspaper/periodical (e.g. Editorial, Men & Events, 電報, 本埠消息, etc.). This is a most crucial step for relevant and efficient historical queries.
  • Biographical dictionaries: re-creating the individualised IDed biographical texts pertaining to an individual.
  • Biodata works: re-creating the individualised IDed biotexts pertaining to an individual and connecting individuals across sources with unique PIDs. Biodata works include Directories, Who’s whos, Staff lists, Club lists, etc.
  • Language dictionaries: re-creating the individualised IDed term entries.
  • Monographs: re-creating the individualised IDed sections of the monograph. Simply said, the detailed table of contents based on the actual sub-sections of the text. Monographs include works in the secondary and primary literature, including sources such as local gazetteers (地方誌), yearbooks (年鑑), etc.
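
To make this concrete, here is a minimal sketch in Python of what such a segmentation could look like, assuming the section headers survive OCR well enough to be matched by a simple pattern. The header list, function name, and ID scheme are hypothetical, not our actual pipeline; real OCR output would call for much fuzzier matching.

```python
import re

# Hypothetical list of recurring section headers for one periodical;
# in practice this list would be curated per title and per period.
SECTION_HEADERS = ["Editorial", "Men & Events", "電報", "本埠消息"]

def segment(raw_text: str, doc_id: str) -> list[dict]:
    """Split a raw OCR dump into individualised, IDed sections.

    Assumes each header sits on its own line; real OCR output would need
    fuzzier matching (line noise, hyphenation, variant spellings).
    """
    pattern = re.compile(
        r"^(" + "|".join(re.escape(h) for h in SECTION_HEADERS) + r")\s*$",
        re.MULTILINE,
    )
    matches = list(pattern.finditer(raw_text))
    sections = []
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(raw_text)
        sections.append({
            "section_id": f"{doc_id}-{i:03d}",  # stable ID for later queries
            "header": m.group(1),
            "text": raw_text[m.end():end].strip(),
        })
    return sections
```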

Step 2: General indexing

The indexing operation parses through the entire corpus to identify all the individual terms contained in the text. It produces a searchable text where each term is properly located in reference to the structure and metadata of the text [or it creates metadata such that each term is properly located within the text].
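
A bare-bones version of such an index might look like the sketch below, which maps every term to the IDed section and position where it occurs, building on the hypothetical `segment()` output above. Whitespace tokenization is a simplification; Chinese text would require a proper word segmenter first (an assumption about tooling, not a statement about our actual stack).

```python
from collections import defaultdict

def build_index(sections: list[dict]) -> dict:
    """Inverted index: term -> list of (section_id, position).

    Whitespace tokenization is a stand-in; Chinese sections would
    need a dedicated segmenter before indexing.
    """
    index = defaultdict(list)
    for sec in sections:
        for pos, token in enumerate(sec["text"].split()):
            index[token.lower()].append((sec["section_id"], pos))
    return index
```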

Step 3: Query and data extraction

This is the actual point of entry for historical research. Query and data extraction are two sides of the same coin. 

Query means the possibility of implementing multivariate searches running concomitantly on one or several terms (name of an actor, common term), time (year or date), section (of the newspaper/periodical), and source (title of the newspaper/periodical). The aim is to narrow down as precisely as possible the information about an actor (individual, institution). This should produce a list of individualised IDed documents.
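
In code, such a multivariate query might look like the following sketch, which simply filters the IDed sections on all criteria at once; the metadata field names (`year`, `source`, `header`) are hypothetical.

```python
def query(sections, term=None, year=None, header=None, source=None):
    """Return the IDed sections matching every given criterion.

    Criteria left as None are ignored, so one can search on any
    combination of term, time, section, and source.
    """
    hits = []
    for sec in sections:
        if term and term.lower() not in sec["text"].lower():
            continue
        if year and sec.get("year") != year:
            continue
        if header and sec.get("header") != header:
            continue
        if source and sec.get("source") != source:
            continue
        hits.append(sec)
    return hits
```

For instance, `query(sections, term="Rotary Club", year=1931)` would narrow the corpus down to the IDed documents mentioning the club in that year (values chosen for illustration only).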

Data extraction consists in running an NLP pipeline that extracts all the information related to the targeted actors and produces it in the form of a spreadsheet and/or graph. The range of information can vary depending on the search criteria and/or the NLP pipeline. The general framework, however, should include such categories as Name, Organisation, Location, Position, etc.; a snippet of text for contextualization [a predefined number of words around the targeted term(s)]; and source metadata [source name, time, section, page].
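
A reduced sketch of such an extraction pass, using spaCy’s named entity recognizer as a stand-in for the project’s actual pipeline (an assumption): it writes one spreadsheet row per entity, with a context snippet (measured here in characters rather than words, for simplicity) and the source metadata needed to link each data point back to its original text.

```python
import csv
import spacy

nlp = spacy.load("en_core_web_sm")  # English model as a stand-in;
                                    # Chinese sources need another model

def extract(sections, out_path, window=40):
    """Write one CSV row per named entity found in the given sections,
    with a snippet of `window` characters on each side for context."""
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["entity", "type", "snippet",
                         "source", "year", "section_id"])
        for sec in sections:
            text = sec["text"]
            for ent in nlp(text).ents:  # PERSON, ORG, GPE, DATE, ...
                snippet = text[max(0, ent.start_char - window):
                               ent.end_char + window]
                writer.writerow([ent.text, ent.label_, snippet,
                                 sec.get("source"), sec.get("year"),
                                 sec["section_id"]])
```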

This step may not — and probably will not — work as a one-go operation; it will require tuning the query part so as to obtain as little noise as possible. One challenge is the inevitable duplication of extracted data when several actors are associated with the same set of actions (e.g. Wang Xiaolai, Du Yuesheng, and Qian Xinzhi all take part in the same meeting on the same day).
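
One crude way to collapse such duplicates is to assume that all person entities extracted from the same IDed section (often a single short news item) belong to one event, and to merge them into a single actor list. This is a heuristic offered for illustration, not a tested rule:

```python
import pandas as pd

def merge_duplicates(csv_path: str) -> pd.DataFrame:
    """Collapse rows describing the same action into one event record.

    Heuristic: person entities from the same IDed section are treated
    as co-participants in a single event.
    """
    df = pd.read_csv(csv_path)
    persons = df[df["type"] == "PERSON"]
    return (persons.groupby("section_id")["entity"]
                   .apply(lambda names: "; ".join(sorted(set(names))))
                   .reset_index(name="actors"))
```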

Step 4: Data exploration and data processing

There can be a variety of ways to make sense of the extracted data. The main challenge here is quantity. One can easily get bogged down in the overflow of data, especially as these data almost never come clean: they are “dirty data” that require a lengthy process of data cleaning (and, in the case of multilingual sources on China, the disambiguation and standardization of named entities).

Nevertheless, dealing with such massive historical data requires close reading and deep knowledge of the sources (content and context). This is an absolute requirement for cleaning data at every step in the cleaning process — before, during, and after (to double-check). In order to go back and forth between raw lists of data and data in context, the workflow must allow us to navigate smoothly between the extracted entities/data points and the original documents from which they were extracted, so that we can check and correct on the fly and eventually annotate “errors” to further automate the correction/cleaning process.
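
The round trip from data point to source can stay very simple as long as every extracted row carries its section ID, as in the sketches above. A minimal helper for spot-checking an entity in its original context (plain string search is enough for manual verification, though not for systematic annotation):

```python
def show_in_context(sections_by_id: dict, section_id: str,
                    entity: str, margin: int = 200) -> str:
    """Jump from an extracted data point back to the original passage.

    `sections_by_id` maps section_id -> section dict from the
    segmentation step; the entity is re-located by string search.
    """
    text = sections_by_id[section_id]["text"]
    i = text.find(entity)
    if i == -1:
        return f"'{entity}' not found in {section_id} (possible OCR variant)"
    return text[max(0, i - margin): i + len(entity) + margin]
```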

Historians are not prepared to deal with and make use of “dirty data”, even if simply using the clean parts of a “dirty” dataset may eventually prove as valid as having manually produced a much smaller but cleaner dataset. Yet one cannot just brush the issue away with such an argument.

One possible way around the issue of quantity is to explore the data before processing it. Such exploration can rely on graphs built from the extracted data. We have experimented with Padagraph, but ran into three main issues:

  • We did too little to actually take advantage of and make sense of the possibilities of Padagraph. Further training and practice may help us get past this stage.
  • Because the graph reconfigures the display of the data with each new “move” without providing a way back to a previous “view”, it is quite difficult to record “states” in which the data made sense.
  • It raises the issue of moving from the stage of exploration to that of “selecting” the data deemed worth focusing on and processing further.

Processing in our case means refining the data through a combination of tools, namely OpenRefine to clean and standardize the data, RStudio to actually process the data (sometimes in combination with a spreadsheet), and visualisation tools (Cytoscape, Tableau, etc.).
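
As one example of this processing stage, the merged actor lists from the deduplication sketch above can be turned into a weighted co-occurrence edge list that Cytoscape (or Gephi) reads directly. This is a sketch under those assumptions, not our actual export routine:

```python
from itertools import combinations

import pandas as pd

def cooccurrence_edges(actor_table: pd.DataFrame,
                       out_path: str = "edges.csv") -> pd.DataFrame:
    """Build a weighted edge list: two actors named in the same event
    are linked, and the weight counts their co-appearances."""
    edges = {}
    for actors in actor_table["actors"]:
        for a, b in combinations(sorted(actors.split("; ")), 2):
            edges[(a, b)] = edges.get((a, b), 0) + 1
    df = pd.DataFrame([(a, b, w) for (a, b), w in edges.items()],
                      columns=["source", "target", "weight"])
    df.to_csv(out_path, index=False)  # importable in Cytoscape as a network
    return df
```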

Step 5: Compilation and preservation of datasets

The final datasets — textual data, quantitative data, spatial data — need to be made available to the scholarly community with a system of versioning. We currently run a repository on Sharedocs, but even if we were to create a public collection there, it may prove incompatible with the FAIR criteria. Ultimately, the choice will be to open a collection on CERN’s Zenodo.
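
For reference, depositing a dataset on Zenodo can be scripted through its public REST API. The sketch below follows the documented create/upload/describe sequence, with metadata pared down to a bare minimum and the access token left elided.

```python
import requests

ZENODO = "https://zenodo.org/api"
TOKEN = "..."  # personal access token, elided

def deposit(file_path: str, title: str, creators: list[str]) -> int:
    """Create a Zenodo deposition, upload one file, attach metadata.

    Minimal sketch without error handling; Zenodo mints a DOI on
    publication and handles versioning across later revisions.
    """
    # 1. Create an empty deposition
    dep = requests.post(f"{ZENODO}/deposit/depositions",
                        params={"access_token": TOKEN}, json={}).json()
    # 2. Upload the dataset file to the deposition's bucket
    with open(file_path, "rb") as fp:
        requests.put(f"{dep['links']['bucket']}/{file_path.split('/')[-1]}",
                     data=fp, params={"access_token": TOKEN})
    # 3. Attach minimal metadata
    meta = {"metadata": {"title": title, "upload_type": "dataset",
                         "creators": [{"name": c} for c in creators]}}
    requests.put(f"{ZENODO}/deposit/depositions/{dep['id']}",
                 params={"access_token": TOKEN}, json=meta)
    return dep["id"]
```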

In this text, I have deliberately skipped the part on data interpretation and historical writing, as these steps fit within well-established paradigms.


[1] “Interchange: The Promise of Digital History,” Journal of American History 95, no. 2 (September 1, 2008): 453, https://doi.org/10.2307/25095630.

[2] Edward L. Ayers, “Doing Scholarship on the Web: Ten Years of Triumphs – and a Disappointment,” Journal of Scholarly Publishing 35, no. 3 (2004): 143–47, https://doi.org/10.1353/scp.2004.0012.

