
On Chinese Wikipedia biographies

Over the last few months we have been building a corpus based on the biographies of the Chinese Wikipedia. We want such a corpus so that we can use it for future NLP experiments such as information extraction, and we believe that Wikipedia is an ideal 'training ground' for this because of its vast variety of documents. In this blog post we want to share some details about what we have achieved so far.

SOURCE AND PRE-PROCESSING

An offline copy of the Chinese Wikipedia can be obtained from dumps.wikimedia.org, in either XML or HTML format.1 This Chinese 'wikidump', as it is also called, contains around two million pages (by comparison, the English Wikipedia has around six million). These pages are not only articles but also disambiguation pages (i.e. pages that list the different topics that can be found under the same search term), redirects, lists, category pages, project descriptions, etc. In order to reduce the number of non-articles in the dump, we used a Python tool called WikiExtractor to do some filtering, which trimmed the corpus to 1,046,744 pages.2
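For readers who want to reproduce this step, here is a minimal sketch of how the extracted pages can be read back in, assuming WikiExtractor was run with its --json option and wrote its output files (wiki_00, wiki_01, ...) into a hypothetical directory called extracted/; the exact field names follow the tool's JSON output and may differ between versions.

```python
# Hedged sketch: iterate over WikiExtractor's JSON output and count the
# remaining pages. Directory name 'extracted/' is an assumption; each line
# of a wiki_* file is one JSON object with fields such as 'title' and 'text'.
import json
from pathlib import Path

def iter_pages(extracted_dir="extracted"):
    """Yield one dict per extracted page."""
    for path in sorted(Path(extracted_dir).rglob("wiki_*")):
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                if line.strip():
                    yield json.loads(line)

if __name__ == "__main__":
    n_pages = sum(1 for _ in iter_pages())
    print(f"pages after WikiExtractor filtering: {n_pages}")
```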

RULE-BASED CLASSIFIER VERSUS MACHINE LEARNING-BASED CLASSIFIER

In the beginning, we experimented with different methods for extracting biographies from the offline copy. We wanted to figure out whether we should use a script that keeps documents matching predetermined language patterns such as (出)生在 ('born in'), 出身 ('family background'), etc. (rule-based classification), or whether we should rely on an algorithm trained to recognize biographies (machine learning-based classification). After a lot of back and forth, we concluded that the latter was the more effective approach.
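To make the rejected alternative concrete, the sketch below shows what such a rule-based filter could look like; the pattern list and the 500-character window are illustrative assumptions, not the exact rules we experimented with.

```python
# Illustrative sketch of a rule-based biography filter: keep a page if its
# opening text matches any of a hand-written list of biography patterns.
import re

BIO_PATTERNS = [
    re.compile(r"出?生[在于於]"),  # '(was) born in/at'
    re.compile(r"出身"),            # 'family background / origin'
    re.compile(r"逝世|去世"),       # 'died / passed away'
]

def looks_like_biography(text: str, window: int = 500) -> bool:
    """Return True if any biography pattern occurs in the first `window` characters."""
    head = text[:window]
    return any(p.search(head) for p in BIO_PATTERNS)
```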

The reason for this is that the wikidump itself does not categorise pages in any way that helps us detect biographies. Although a lot of metadata (page ID, redirect status, title, etc.) can be found in the offline copy, it does not tell us anything about the topic of a page. One could argue that the text of an article itself provides some clues about its content. However, given the size of the dump (not to mention the vast variety of topics), it is impossible to produce a complete list of words and sentence patterns that captures all the nuances of a biography while keeping the number of unwanted articles in the corpus to a minimum.

MODEL TRAINING 

Before applying our chosen method to the whole Wikipedia corpus, two steps had to be taken to prepare the model. First, we trained the model by providing it with examples and counterexamples. The algorithm was fed the first three sentences of each example page and was set to detect differences in the language of biographies and non-biographies. We then used a randomized 10% of the training sample to assess its accuracy. This process had to be iterated multiple times before the accuracy of the algorithm reached a satisfactory level. In the second step, we did a final test of its performance on a set of 414 manually labeled, previously unseen articles.3 In this final assessment, ten articles of the test sample were wrongly classified as biographies and only one biography was missed.
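We do not go into the details of the algorithm here, but as a rough illustration of the workflow, the following sketch assumes a scikit-learn setup with character n-gram TF-IDF features and a logistic-regression classifier; the actual model, features and library we used may differ.

```python
# Minimal sketch of the training setup (assumed: scikit-learn). Inputs are
# the first three sentences of each example page; 10% of the sample is held
# out to measure accuracy, as described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def train_bio_classifier(texts, labels):
    """texts: first three sentences per page; labels: 1 = biography, 0 = not."""
    X_train, X_val, y_train, y_val = train_test_split(
        texts, labels, test_size=0.1, random_state=0, stratify=labels
    )
    model = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),  # character n-grams suit Chinese text
        LogisticRegression(max_iter=1000),
    )
    model.fit(X_train, y_train)
    print("validation accuracy:", model.score(X_val, y_val))
    return model
```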

The challenging part of the preparation was the composition of the examples. For the biographies, we generated a list of person names (in Chinese) via the Wikidata SPARQL endpoint and used this list with the Special:Export wikipage.4 There we were able to download the pages as one big XML file, which underwent some extra filtering to remove potential non-articles. Collecting bad examples was not as easy as it first seemed. Although every article that is not about a person could serve as a counterexample, that alone did not guarantee that the algorithm would learn to recognize biographies. We conducted an error analysis and found that the model mistook pages about fictional characters (Superman, Lu Zhishen, etc.) for biographies, as well as pages about films, manga, books, etc. As a consequence, we collected a more balanced sample of 'bad' pages via Wikidata (a query sketch follows the list below):

  • 2,860 fake persons (categorized in Wikidata as 'fictional characters', 'fictional humans', 'literary characters', 'comics characters', 'video game characters', etc.)
  • 3,040 media examples (categorized as 'films', 'television series', 'literary works', etc.)
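As promised above, here is a hedged sketch of how such lists can be pulled from the Wikidata SPARQL endpoint. The query shown fetches Chinese labels of items that are instances of 'human' (wd:Q5); for the counterexamples, the relevant classes (fictional characters, films, etc.) can be substituted. It is not a reproduction of the exact queries we ran.

```python
# Hedged sketch: query the Wikidata SPARQL endpoint for Chinese labels.
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?item ?label WHERE {
  ?item wdt:P31 wd:Q5 ;          # instance of: human
        rdfs:label ?label .
  FILTER(LANG(?label) = "zh")    # keep Chinese labels only
}
LIMIT 1000
"""

def fetch_labels(query: str = QUERY):
    resp = requests.get(
        SPARQL_ENDPOINT,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "zh-wiki-bio-corpus (example)"},  # hypothetical agent string
    )
    resp.raise_for_status()
    return [b["label"]["value"] for b in resp.json()["results"]["bindings"]]
```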

We also added another 2,984 random examples generated via the Wikipedia API, from which we removed the person pages using web-scraping techniques. The XML pages of the non-biographies were obtained in the same manner as the person pages.
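For the random examples, a request of the following kind does the job; this is a generic sketch of the standard MediaWiki API (list=random, main namespace only), not our exact script, and the subsequent removal of person pages is not shown.

```python
# Hedged sketch: draw random article titles from the Chinese Wikipedia via
# the MediaWiki API. Person pages would still have to be filtered out
# afterwards, as described above.
import requests

API_URL = "https://zh.wikipedia.org/w/api.php"

def random_article_titles(n: int = 10):
    params = {
        "action": "query",
        "list": "random",
        "rnnamespace": 0,   # namespace 0 = articles only
        "rnlimit": n,
        "format": "json",
    }
    resp = requests.get(API_URL, params=params,
                        headers={"User-Agent": "zh-wiki-bio-corpus (example)"})
    resp.raise_for_status()
    return [page["title"] for page in resp.json()["query"]["random"]]
```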

RESULTS

After the preparation phase, we applied the algorithm to the whole Chinese Wikipedia. So far, we have retrieved 228,601 articles. A preliminary survey of the selected pages suggests that the classification is very successful, although we still find a few pages containing lists, such as award lists (which is not surprising, since the model is set to detect names and dates). We could filter the corpus further by setting the classifier threshold to 0.9 (instead of 0.5), keeping only 197,894 articles. However, given the test results described above and the type of applications we have in mind for the corpus, there is no need to reduce the 'noise' as much as possible.
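The threshold decision amounts to trading recall for precision. As a small sketch, assuming a classifier with a predict_proba interface like the training example above:

```python
# Sketch of the threshold choice: raising the cut-off from 0.5 to 0.9 keeps
# fewer, but more confidently classified, biography pages.
def select_biographies(model, pages, threshold=0.5):
    """pages: list of (title, first_sentences); returns titles kept as biographies."""
    texts = [text for _, text in pages]
    probs = model.predict_proba(texts)[:, 1]   # probability of the 'biography' class
    return [title for (title, _), p in zip(pages, probs) if p >= threshold]

# e.g. select_biographies(model, pages, threshold=0.9) for the stricter corpus
```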
 
The next step will be the conversion of the documents to traditional Chinese characters. Because there is only one version of the Chinese Wikipedia, its pages are written in either traditional or simplified characters, and sometimes both. We did not apply this step before the extraction because of the size of the wikidump.
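We have not settled on a tool for this conversion yet; a common option is OpenCC, sketched below. Note that the configuration name is 's2t.json' in the official Python binding and plain 's2t' in some reimplementations.

```python
# Hedged sketch of simplified-to-traditional conversion with OpenCC
# (an assumption: the post does not prescribe a specific tool).
import opencc

converter = opencc.OpenCC("s2t.json")  # simplified -> traditional

def to_traditional(text: str) -> str:
    return converter.convert(text)

# to_traditional("汉字") -> "漢字"
```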
 
In the near future we plan to extract biographies from the Japanese Wikipedia with the same method. For English, we use the Wikipedia biography dataset created by Remi Lebret, David Grangier and Michael Auli.5

  1. The version of the Chinese Wikipedia we used is the zhwiki-2020-02-01 dump.
  2. https://github.com/attardi/wikiextractor
  3. Originally there were 415 articles. One Chinese wikipage had to be discarded as it turned out to be written in English.
  4. Wikidata SPARQL: https://query.wikidata.org/; Special:export wikipage: https://zh.wikipedia.org/wiki/Special:%E5%AF%BC%E5%87%BA%E9%A1%B5%E9%9D%A2
  5. https://github.com/DavidGrangier/wikipedia-biography-dataset; For more information on the dataset: R. Lebret, D. Grangier and M. Auli. 2016. Neural Text Generation from Structured Data with Application to the Biography Domain. arXiv preprint arXiv:1603.07771. (http://arxiv.org/abs/1603.07771)

 

