
Back from Berlin

I wrote this post on the flight back to Aix, after a wonderful and fruitful week in Berlin at the Max Planck Institute für Wissenschaftsgeschichte (MPIWG), at the kind invitation of Dr. Chen Shih-Pei, whose group focuses on Local Gazetteers (地方志).

The week was dedicated to touching base on each other's projects, following the visit of Shih-Pei and Brent Ho to Aix last November. I also held a small workshop series on NLP applied to Local Gazetteers. Besides myself, the attendees were members of the MPIWG, joined by Prof. Chu Ping-Tzu from National Tsing Hua University, who was visiting the institute to work on the structure of the local gazetteers, and by Brent Ho from the Berlin State Library.

I received a more detailed introduction to the LoGaRT tool and used the SHINE API to fetch the data we worked with during the hands-on sessions after my presentations. We chose to focus on Language Models, Word Segmentation, and Sinogram/Word Embeddings (many other topics were pointed out during the discussions, such as Topic Modeling or Sequence Labeling, but we had no time to work on these).

After one week of discussions about text processing for the local gazetteers, it appeared that three main bottlenecks must be dealt with. They probably apply to most projects involving textual data in Digital Humanities, especially when Chinese languages are concerned. (Sadly, they are too often ignored in NLP publications.)

The first one is data licensing. Overly restrictive licenses can make collaborative work, publication of results, evaluation, and reproducibility of the outcomes very complex and sometimes even impossible. This can prevent us from adopting an (open) scientific methodology. For our present concern, a possible answer is the RISE & SHINE infrastructure developed at MPIWG and the release, in collaboration with the Harvard-Yenching Library, of a collection of gazetteers under an open license (Creative Commons CC BY-NC-SA). Both can be combined, since one can fetch the openly licensed data through the SHINE API!
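To make that workflow a bit more concrete, here is a minimal Python sketch of pulling gazetteer texts over HTTP into a local file for later experiments. The endpoint, parameters, authentication scheme, and response format below are placeholders I made up for illustration; they do not document the actual SHINE API.

```python
# Hypothetical sketch of fetching gazetteer sections over HTTP.
# Endpoint, token handling, and response structure are placeholders,
# NOT the real SHINE API; they only illustrate the general workflow
# of pulling open-licensed texts into a local corpus for NLP work.
import requests

BASE_URL = "https://example.org/shine/api"  # placeholder, not the real URL
API_TOKEN = "YOUR_TOKEN"                    # assuming token-based access

def fetch_sections(book_id: str) -> list:
    """Download all text sections of one gazetteer (hypothetical endpoint)."""
    resp = requests.get(
        f"{BASE_URL}/books/{book_id}/sections",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    sections = fetch_sections("example-gazetteer-id")
    # Keep a plain-text copy for the downstream segmentation experiments.
    with open("corpus.txt", "w", encoding="utf-8") as out:
        for section in sections:
            out.write(section.get("text", "") + "\n")
```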

The second one is more often discussed among DH scholars: preprocessing and data curation/cleaning. Following common wisdom, any NLP tool can be seen as a "Garbage In, Garbage Out" black box. No matter how fancy, new, or deep your model is, the quality of the data you feed it will often account for 80% of the quality of the results.

When it comes to texts in classical Chinese like the local gazetteers, pre-processing includes document structure, text segmentation, and word segmentation.

The document structure (mainly book sections) is taken care of at the MPIWG. Text segmentation (paragraphs and sentences) for classical Chinese is a work in progress on our side in Aix. We left these two aside for the time being and focused on the third level, namely word segmentation. The good news is that I wrote my PhD on the notion of "wordhood" for Mandarin Chinese and proposed an algorithm whose main advantages are that it is unsupervised, pretty simple, and fast to run. It was designed with Standard Mandarin Chinese as a case study, but its unsupervised nature allows it to segment other languages or genres for which we have no training data. The segmentation is based on statistics drawn from the very corpus to be segmented (I call this endogenous training, though I am not sure the term has gained wide acceptance).

Classical Chinese is a great use case for my algorithm because we lack training data but not raw data, and raw data is all we need for unsupervised learning. An implementation in Python was released a long time ago and is available on pip and GitHub. I also have a Scala version which I never took the time to package properly, but I will reconsider it (not to mention that the packaging of the Python version, which contains some C++ parts, works nicely on Linux but was a source of trouble on Mac and Windows laptops during the hands-on sessions!)
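To give a rough idea of what endogenous training means in practice, here is a minimal Python sketch of entropy-based unsupervised segmentation: all statistics come from the raw corpus being segmented, and a boundary is placed where the next character becomes hard to predict. This illustrates the general family of methods only; it is not the published algorithm or its pip/GitHub implementation, and the context length and threshold are assumptions that would have to be tuned on the corpus at hand.

```python
# Minimal sketch of entropy-based unsupervised segmentation ("endogenous
# training"): all statistics are drawn from the raw corpus itself.
# Illustration only, not the author's published algorithm or package.
import math
from collections import defaultdict

def train_counts(corpus: str, max_n: int = 4):
    """Count n-grams (up to max_n) and the characters that follow them."""
    followers = defaultdict(lambda: defaultdict(int))
    for n in range(1, max_n + 1):
        for i in range(len(corpus) - n):
            gram = corpus[i:i + n]
            followers[gram][corpus[i + n]] += 1
    return followers

def branching_entropy(followers, gram: str) -> float:
    """Entropy of the character distribution following a given n-gram."""
    dist = followers.get(gram)
    if not dist:
        return 0.0
    total = sum(dist.values())
    return -sum((c / total) * math.log(c / total) for c in dist.values())

def segment(sentence: str, followers, context: int = 2, threshold: float = 1.5):
    """Insert a boundary where uncertainty about the next character is high."""
    tokens, start = [], 0
    for i in range(1, len(sentence)):
        left = sentence[max(0, i - context):i]
        if branching_entropy(followers, left) > threshold:
            tokens.append(sentence[start:i])
            start = i
    tokens.append(sentence[start:])
    return tokens
```

Usage would look like `followers = train_counts(raw_text)` followed by `segment(line, followers)` on each line of the same corpus; the point is that no annotated training data enters the pipeline at any step.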

The third bottleneck is the lack of training data if one wants to apply supervised learning. We did not address this question fully this week, but it is worth noting that LoGaRT, from the MPIWG, enables scholars to semi-automatically annotate the gazetteers using regular expressions and manual correction. Applying AI to this kind of data is an exciting prospect and will probably be the subject of some future collaboration.
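As a toy illustration of the regex half of that workflow, one could pre-annotate candidate spans and leave the final decision to a human. The pattern, label, and example sentence below are invented for the sake of the example and do not reflect LoGaRT's actual interface or data model.

```python
# Toy sketch of regex-based pre-annotation followed by manual review.
# Pattern, label, and example text are invented; they do not reflect
# LoGaRT's actual tagging interface.
import re

# Hypothetical pattern: capture short passages around a title keyword
# such as 知縣 ("district magistrate"), to be checked by a human afterwards.
PATTERN = re.compile(r"(.{0,6}知縣.{0,6})")

def pre_annotate(text: str, label: str = "OFFICIAL_TITLE"):
    """Return candidate annotations (span, text, label) for manual correction."""
    return [
        {"start": m.start(1), "end": m.end(1), "text": m.group(1), "label": label}
        for m in PATTERN.finditer(text)
    ]

# Invented example sentence, for illustration only.
for candidate in pre_annotate("乾隆三年，王某任知縣，修城垣。"):
    print(candidate)  # each candidate still needs human validation
```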

In such a short time span, we were able to come up with preliminary experiments in language modeling for document clustering and in word embedding visualization. But this was more a sketching of possibilities than actual experiments, and the results will need more work before they are worth elaborating upon.
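For the curious, the sketch below shows the kind of quick experiment this refers to: training character ("sinogram") embeddings on a raw corpus file and projecting them to 2D for visual inspection. It assumes a plain-text corpus.txt like the one produced in the earlier sketch and does not reproduce the experiments actually run during the week.

```python
# Hedged sketch of character-embedding training and visualization.
# Illustrative only; parameters and file names are assumptions.
from gensim.models import Word2Vec
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# Treat each line of the corpus as a "sentence" of single characters.
with open("corpus.txt", encoding="utf-8") as f:
    sentences = [list(line.strip()) for line in f if line.strip()]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, sg=1)

# Project the most frequent characters to 2D with PCA.
chars = model.wv.index_to_key[:200]
coords = PCA(n_components=2).fit_transform(model.wv[chars])

# Note: a CJK-capable font must be configured for the labels to render.
plt.figure(figsize=(10, 10))
plt.scatter(coords[:, 0], coords[:, 1], s=5)
for (x, y), ch in zip(coords, chars):
    plt.annotate(ch, (x, y), fontsize=8)
plt.title("2D projection of character embeddings (illustrative)")
plt.savefig("char_embeddings.png")
```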


OpenEdition suggests that you cite this post as follows:
Pierre Magistry (August 20, 2019). Back from Berlin. Elites, Networks and Power in modern China. https://doi.org/10.58079/o8kl


Pierre Magistry

Pierre Magistry is a postdoctoral researcher in Natural Language Processing with a special interest in Sinitic and low-resource languages. He completed his PhD in the ALPAGE team (Paris Diderot – INRIA) in 2013 on Chinese word segmentation at the interface of linguistic theories and unsupervised machine learning. His experience benefited from multiple stays in Taiwan (2008-2010, 2014, 2016) under research funding from TIGP (Academia Sinica), Erasmus Mundus Multi, and a Taiwan Fellowship. His work on the Taiwanese language led him to release the first mobile input method for this language. In 2017-2018, he was a postdoc at LIMSI (CNRS), working on NLP for low-resource situations and developing new semi-supervised learning approaches.
