
Word Segmentation for Named Entity Recognition in Asian Texts: A Case on Chinese Documents

In the context of the internship at the end of my master's degree, my objective was to improve named entity recognition applied to historical texts written in Chinese. This experiment, conducted with the help of Pierre Magistry, led us to write a full paper on our results.

Named-entity recognition (NER), also known as named entity identification, entity chunking, or entity extraction, is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, time expressions, quantities, monetary values, and percentages. These named entities are very important textual objects for historians, and their automatic extraction reduces the human cost of extracting information from large amounts of data.

NER is sometimes considered a solved problem. At the very least, we can say that well-trained systems do reach extremely high scores, almost comparable to human performance. Techniques have evolved from rule-based systems, to statistical models such as CRFs and Maximum Entropy, to neural networks, consistently bridging the gap with human performance.

However, the task remains difficult for several reasons. One of the major challenges in identifying named entities is language itself. The same word or phrase can be written in different forms: words can be abbreviated for ease of writing and reading, written out in long forms, or carry multiple meanings depending on the context. Rare words, and words that have fallen out of use, are another major challenge in this field.

In addition to the difficulty of classifying these entities, there have also been concerns and challenges regarding how to locate them. Some entities are composed of several words, and finding the right delimitation for them is not easy. Moreover, in some cases, the words composing a multi-word entity are themselves entities in their own right.

As part of our research, we deal with texts written in Chinese, which makes this task more difficult, particularly because the Chinese script does not provide a clear and frequent typographic marker for word boundaries. As a result, when addressing NER for Chinese, we have to face the issue of word segmentation.

Recent models proposed in the literature can be divided into character-based, word-based, and hybrid models, but every work has had to take a stance on Chinese Word Segmentation (CWS). The importance of, and methods for, CWS have a long history in Chinese Natural Language Processing (NLP). A recent work by Li et al. (2019) makes the strong claim that the neural era of NLP is turning CWS into an irrelevant or even harmful step in a pipeline. However, Li et al. (2019) did not provide experimental results on the NER task, and the experiments presented in our paper tend to show that CWS can be either harmful or beneficial, depending on how much care is given to consistency in segmentation and to the way word embeddings are built and used.

In order to use our system to recognize named entities in historical Chinese corpora, we set some constraints on the resolution of the problem.

  • Working on Modern Chinese for comparison with Classical Chinese, but keeping the limitations of historical corpora in mind.
  • Everything has to be re-trainable
    • Multiple times (for experiments and subcorpora)
    • In a reasonable time.
  • Limited availability of raw data for Language Models.
  • Engaging with the character-based vs. word-based model debate.

To do so, we propose in our paper a new representation of sinograms enriched with word boundary information, and we compare it with supervised and unsupervised Chinese Word Segmentation.

Our approach revolves around the representation of words, also known as word embeddings. When feeding textual data to our models, we may ask ourselves: how do we best represent this textual input numerically? The idea is that, regardless of the type of representation used, it must be semantically meaningful: the numerical values should capture as much of the linguistic meaning of a word as possible.

Word embeddings are a type of word representation that allows words with similar meanings to have similar representations. Each word is represented by a real-valued vector, often with tens or hundreds of dimensions. The distributed representation is learned from how words are used, so that words used in similar ways end up with similar representations, naturally capturing their meaning. The quality of these representations can have a massive impact on overall model performance.
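As a toy illustration of this idea (not the embeddings used in our work), a library such as gensim can learn distributed representations from raw sentences and expose the resulting similarity structure:

```python
# Toy illustration of distributed word representations (gensim 4.x API);
# a minimal sketch, not the embedding model used in our paper.
from gensim.models import Word2Vec

# A few tiny pre-tokenized sentences.
sentences = [
    ["the", "emperor", "ruled", "the", "dynasty"],
    ["the", "king", "ruled", "the", "kingdom"],
    ["merchants", "traded", "silk", "and", "tea"],
]

model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=200)

# On real corpora, words used in similar contexts end up with similar
# vectors (toy data this small gives noisy similarities).
print(model.wv.similarity("emperor", "king"))
print(model.wv.similarity("emperor", "silk"))
```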

As explained above, since we are dealing with Chinese, we must first establish word boundary information in our data in order to obtain word representations. To represent our data, we can then either build a representation for each sinogram or use word segmentation models.
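For illustration, an off-the-shelf segmenter such as jieba (named here only as an example; not necessarily the CWS system from our experiments) turns a raw string of sinograms into word tokens, next to the character-level alternative:

```python
# Off-the-shelf Chinese word segmentation (illustrative only; not
# necessarily the CWS system used in our experiments).
import jieba

sentence = "比如说在越南或者泰国"
words = jieba.lcut(sentence)  # e.g. ['比如说', '在', '越南', '或者', '泰国']
chars = list(sentence)        # character-level alternative: one token per sinogram
print(words)
print(chars)
```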

In our work, we investigate different ways to inject the CWS information into a NER pipeline. Several approaches propose to directly use the word tokens produced by a CWS system, and they show that discrepancies between the output of the CWS and the NE annotation can be harmful for NER. Out-of-vocabulary (OOV) tokens are another common issue for NER.

In order to tackle these issues, we designed a new kind of sinogram representation, which carries the chosen word segmentation at the character level. We decided to represent the CWS with the BIES format (short for beginning, inside, end, single), originally an intermediary step for CWS, and we trained a language model to produce embeddings of these BIES-tagged characters. Since we use a BiLSTM to process the NER task and stay at the character level, our new representation allows us to reconstruct entire words from the BIES tags. And in the case of a mismatch between the NE and word segmentations, the model can still learn to use this "wrong" segmentation as the right delimiter of an entity.
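The conversion itself is straightforward; here is a minimal sketch of it (our own illustrative code, not the exact implementation from the paper):

```python
# Convert a word-segmented sentence into character-level tokens carrying
# BIES word-boundary tags (illustrative sketch of the representation).
def to_bies(words):
    tokens = []
    for word in words:
        if len(word) == 1:
            tokens.append(f"{word}-S")                    # single-character word
        else:
            tokens.append(f"{word[0]}-B")                 # beginning of the word
            tokens.extend(f"{c}-I" for c in word[1:-1])   # inside the word
            tokens.append(f"{word[-1]}-E")                # end of the word
    return tokens

print(to_bies(["比如说", "在", "越南", "或者", "泰国"]))
# ['比-B', '如-I', '说-E', '在-S', '越-B', '南-E', '或-B', '者-E', '泰-B', '国-E']
```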

Below is an example of the different possible tokenizations of a given sentence, each shown with its named-entity annotation. The BIES format is also used to delimit the entities; for example, B-GPE means that the token is the beginning of a geopolitical entity and E-GPE means that the token is its end.

「比如说在越南或者泰国…」

("for example, in Vietnam or Thailand…")

| Tokenization | Sentence | NE annotation |
|---|---|---|
| word-based | 比如说 在 越南 或者 泰国 | O O S-GPE O S-GPE |
| character-based | 比 如 说 在 越 南 或 者 泰 国 | O O O O B-GPE E-GPE O O B-GPE E-GPE |
| ours | 比-B 如-I 说-E 在-S 越-B 南-E 或-B 者-E 泰-B 国-E | O O O O B-GPE E-GPE O O B-GPE E-GPE |

In order to train our new representation, we refer to the paper by Akbik et al. (2018), which introduces contextual word-level embeddings based on a character-level language model (LM): the text is treated as a sequence of characters passed to an LSTM which, at each point in the sequence, is trained to predict the next character.
In our system, we train the LM to produce characters with segmentation information.
Given a sequence of characters with segmentation information $(C_0, C_1, \dots, C_N)$, we learn $P(C_i \mid C_0, \dots, C_{i-1})$, an estimate of the predictive distribution over the next character given the previous characters. We utilize the hidden states of a forward and a backward recurrent neural network to create contextualized character embeddings. The final contextual character representation is given by:

$$e_i = \left[\, C_i^{f} \,;\, C_{T-i}^{b} \,\right]$$

where $C_i^{f}$ denotes the hidden state at position $i$ of the forward LM and $C_{T-i}^{b}$ denotes the hidden state at position $T-i$ of the backward LM.
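To make the idea concrete, here is a minimal PyTorch sketch (our own illustration, with hypothetical hyperparameters) of two character LMs whose hidden states are concatenated into contextual embeddings:

```python
import torch
import torch.nn as nn

# Minimal sketch of contextual character embeddings built from a forward
# and a backward character-level LM, after Akbik et al. (2018).
# Hyperparameters are hypothetical; training (cross-entropy on the
# next-character logits) is omitted.
class CharLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)   # next-character logits

    def forward(self, char_ids):                       # char_ids: (batch, T)
        hidden, _ = self.lstm(self.embed(char_ids))    # (batch, T, hidden_dim)
        return hidden, self.out(hidden)                # states + LM predictions

def contextual_embeddings(forward_lm, backward_lm, char_ids):
    """Concatenate the forward state at position i with the backward
    state at position T-i to obtain the embedding of character i."""
    h_f, _ = forward_lm(char_ids)                          # left-to-right pass
    h_b, _ = backward_lm(torch.flip(char_ids, dims=[1]))   # right-to-left pass
    h_b = torch.flip(h_b, dims=[1])                        # re-align positions
    return torch.cat([h_f, h_b], dim=-1)                   # (batch, T, 2*hidden)
```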

Once we had learned our new representations, we used the Flair framework (Akbik et al., 2019), a powerful NLP library that allows state-of-the-art models to be applied to text (NER, part-of-speech (PoS) tagging, and more), to train our named entity recognition system on top of our representations.
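As a sketch of this setup (file names, paths, and hyperparameters below are hypothetical; see our GitHub repository for the actual configuration), Flair allows custom character-LM embeddings to be plugged into a BiLSTM-CRF sequence tagger:

```python
# Sketch of a Flair NER training setup (Flair 0.x API); paths and
# hyperparameters are hypothetical placeholders.
from flair.datasets import ColumnCorpus
from flair.embeddings import FlairEmbeddings, StackedEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# CoNLL-style column corpus: one token per line, NER tag in second column.
corpus = ColumnCorpus("data/", {0: "text", 1: "ner"},
                      train_file="train.txt",
                      dev_file="dev.txt",
                      test_file="test.txt")

# Custom forward/backward character LMs trained on BIES-tagged sinograms.
embeddings = StackedEmbeddings([
    FlairEmbeddings("models/bies-forward.pt"),
    FlairEmbeddings("models/bies-backward.pt"),
])

tagger = SequenceTagger(hidden_size=256,
                        embeddings=embeddings,
                        tag_dictionary=corpus.make_tag_dictionary(tag_type="ner"),
                        tag_type="ner",
                        use_crf=True)

ModelTrainer(tagger, corpus).train("models/ner", max_epochs=100)
```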

| Datasets / Models | Char baseline | Ours | SOTA w/o BERT | SOTA with BERT |
|---|---|---|---|---|
| OntoNotes 4 | 64.95 | 77.27 | 75.01 | 81.63 |
| OntoNotes 5 | 80.65 | 79.92 | – | – |
| Weibo | 52.88 | 64.24 | 68.93 | 67.2 |
| Resume | 93.55 | 95.45 | 95.21 | 96.54 |
| MSRA | 88.37 | 94.15 | 93.71 | 95.54 |

The originality and motivation of our approach, as well as the results obtained (presented in the table above), led to our paper being selected for presentation at the Pacific Asia Conference on Language, Information and Computation 2020. Due to the circumstances of the year 2020, the organizing committee decided to hold the conference online on 24-26 October as a virtual conference instead of onsite in Hanoi. The video of our presentation is available below. If you wish to have more technical details, our paper, entitled Contextual Characters with Segmentation Representation for Named Entity Recognition in Chinese, will soon be published on the Association for Computational Linguistics website. It is also possible to reproduce our results, or to use our code to train a named entity recognition system on your own data, from our GitHub page.



