Classical Chinese Literature in a Digital Age

I’m very excited to be visiting the University of Tsukuba in Japan next week, where I will be giving a talk titled “Classical Chinese Literature in a Digital Age” (December 15) and presenting a paper on “Optical Character Recognition for pre-modern Chinese Texts” at a Digital Humanities workshop (December 16).

Posted in Chinese, Digital Humanities, Talks and conference papers

Towards a dynamic, scalable digital library of pre-modern Chinese

Paper to be presented at the 7th International Conference of Digital Archives and Digital Humanities, December 2016, National Taiwan University

This paper contrasts two radically different approaches to full-text digital library design and implementation: firstly, the “static database approach”, in which materials are created, edited, and manually reviewed before being added to a largely static database system; secondly, the dynamic approach, in which incompletely reviewed materials are imported into a dynamic system providing similar functionality, but within which significant further editing is expected to take place. To illustrate the technical challenges, benefits, and practical consequences of these two design approaches as reflected in a large-scale digital system, specific examples are drawn from the Chinese Text Project digital library, which began as a primarily static database system and has over time evolved into a primarily dynamic platform. This change has been motivated in particular by the desire for a scalable, sustainable platform for the curation of textual data and metadata, to which new material can easily be added and improved over time while requiring minimal administrative overhead. The paper argues that while a dynamic approach poses technical challenges, the increase in scalability it offers can have significant advantages, including potential access to a “long tail” of data which might otherwise in practice be overlooked.

Posted in Chinese, Digital Humanities, Talks and conference papers

Harvard Yenching Library Chinese materials added to ctext.org

Update to the CTP:

Thanks to the support of Harvard Yenching Library, over 5 million pages of scanned materials from the Yenching Library collection have been added to the Library section of the site, including high-quality images from the Chinese Rare Books Collection. Approximate transcriptions created using the ctext.org OCR procedure have also been added to the Wiki, making these materials full-text searchable. In the future we hope to collaborate with other libraries to include materials from their Chinese-language collections.

Posted in Chinese, Digital Humanities

Stanford DHAsia 2017

I’m delighted to be taking part in Stanford’s exciting DHAsia Digital Humanities initiative in the coming year.

I will be giving a talk titled “Parallels and Allusions in Early Chinese Texts: A Digital Approach” (April 25), as well as leading a workshop session “Chinese Text Project: Historical Texts in a Digital Age” (April 27).

Posted in Chinese, Digital Humanities, Talks and conference papers

Chinese Text Project: A Digital Library of Pre-Modern Chinese Literature

Paper presented at Digital Humanities Congress 2016, University of Sheffield

Since its creation in 2005 as an online search tool for a handful of classical Chinese texts, the Chinese Text Project has gradually grown to become the largest and most widely used digital library of pre-modern Chinese texts, as well as a platform for exploring the application of new digital methods to the study of pre-modern Chinese literature. This paper discusses how several unique aspects of the project have contributed to its success. Firstly, it demonstrates how simplifying assumptions that hold for domain-specific OCR (Optical Character Recognition) of historical works reduce the complexity of the recognition task and thus increase recognition accuracy. Secondly, it shows how crowd-sourced proofreading and editing using a publicly accessible, version-controlled wiki system makes it possible to leverage a large and distributed audience and user base, including many volunteers located outside of traditional academia, to improve the quality of digital content and enable the creation of accurate transcriptions of previously untranscribed texts and editions. Finally, it explores how the implementation of open APIs (Application Programming Interfaces) has greatly expanded the utility of the library as a whole, facilitating open and decentralized integration with other projects, as well as leading to entirely new applications in digital humanities teaching and research.

Posted in Chinese, Digital Humanities, Talks and conference papers

Leveraging Corpus Knowledge for Historical Chinese OCR

Paper to be presented at “Digital Research in East Asian Studies: Corpora, Methods, and Challenges”, Leiden University, July 10, 2016

Abstract

As an increasingly large amount of pre-modern Chinese writing is transcribed into digital form, the resulting digitized corpus comes to represent an ever larger fraction of the total body of extant pre-modern material. Additionally, many distinct items from the total set of pre-modern writings to which one might wish to apply OCR are either non-identical editions of the same abstract work, or commentaries on (and thus repeat much or all of the content of) earlier works. As a result, for historical OCR the probability that a text we wish to recognize overlaps extensively with material previously transcribed in another document is not only significant but also growing over time as more material is digitized. While general techniques for improving OCR accuracy using language modeling can also be applied successfully to historical OCR, more specialized techniques may be able to take greater advantage of our more extensive knowledge of the historical corpus to further improve recognition accuracy. In this paper, I present an initial evaluation of unsupervised techniques that attempt to leverage knowledge extracted from a large existing corpus of pre-modern Chinese to improve recognition accuracy on unseen historical documents.
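The general idea of exploiting corpus knowledge can be sketched in a few lines, though this is only an illustration and not the specific techniques evaluated in the paper: character bigram statistics gathered from already-transcribed material are used to choose between visually confusable OCR candidates. The toy corpus and the candidate pair below are purely illustrative.

```python
from collections import Counter

# Toy stand-in for a large corpus of already-transcribed pre-modern
# text; in practice this would run to many millions of characters.
corpus = "學而時習之不亦說乎有朋自遠方來不亦樂乎" * 100

# Character bigram counts extracted from the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))

def rerank(prev_char, candidates):
    """Choose the OCR candidate that forms the most frequent
    bigram with the preceding (already recognized) character."""
    return max(candidates, key=lambda c: bigrams[(prev_char, c)])

# Suppose the OCR engine cannot decide whether the character after
# 不 is 亦 or the visually similar 赤: corpus statistics prefer 亦.
best = rerank("不", ["亦", "赤"])
print(best)  # → 亦
```

A real system would of course use longer contexts and smoothed probabilities, but the principle is the same: the transcribed corpus supplies prior knowledge that disambiguates visually uncertain characters.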

Posted in Chinese, Digital Humanities, Talks and conference papers

Crowdsourcing, APIs, and a Digital Library of Chinese

Guest post published on the University of Nottingham’s China Policy Institute blog.

Digital methods have revolutionized many aspects of the study of pre-modern Chinese literature, from the simple but transformative ability to perform full-text searches and automated concordancing, through to the application of sophisticated statistical techniques that would be entirely impractical without the aid of a computer. While the methods themselves have evolved significantly – and continue to do so – one of the most fundamental prerequisites to almost all digital studies of Chinese literature remains access to reliable digital editions of these texts themselves.

Since its origins in 2005 as an online search tool for a small number of classical Chinese texts, the Chinese Text Project has grown to become one of the largest and most widely used digital libraries of pre-modern Chinese writing, containing tens of thousands of transmitted texts dating from the Warring States through to the late Qing and republican period, while also serving as a platform for the application of digital methods to the study of pre-modern Chinese literature. Unlike most digital libraries and full-text databases, users of the site are not passive consumers of its materials, but instead active curators through whose work it is maintained and developed – and increasingly, not all users of the library are human.

Digitization piece by piece

As libraries have increasingly come to recognize the value of digitizing historical works in their holdings, many institutions with significant collections of Chinese materials have committed themselves to large-scale scanning projects, often making the resulting images freely available over the internet. While an enormously positive development in itself, for many scholarly use cases this represents only the first step towards adequate digitization of these works. Scanned images of the pages of a book make its contents accessible in seconds rather than requiring a time-consuming visit to a physical library, but without a machine-readable transcription of the contents of each page, the reader must still navigate through the material one page at a time – finding a particular word or phrase in the work, for example, remains a time-consuming task.

While Optical Character Recognition (OCR) – the process of automatically transforming an image containing text into digitally manipulable characters – can produce results of sufficient accuracy to be useful for full-text search, OCR inevitably introduces a significant number of transcription errors which can only be corrected by manual effort, particularly when applied to historical materials which may be handwritten, damaged, and faded. Proofreading the entire body of material potentially available – likely amounting to hundreds of millions of pages – would be prohibitively expensive, but omitting the proofreading step limits the utility of the data.


Variation in instances of the character “書” in texts from the Siku Quanshu. OCR software must correctly identify all of these instances as corresponding to the same abstract character – a challenging task for a computer.

In an attempt to address this problem, the Chinese Text Project has developed a hybrid system: uncorrected OCR results are imported directly into a database system that provides full-text search of the source images and assembles the contents of scanned pages into complete textual transcriptions, while also offering an integrated mechanism for users to correct the data directly. Like articles in Wikipedia, the contents of any transcription can be edited directly by any user; unlike Wikipedia, there is always a clear standard against which edits can easily be checked for correctness: the images of the source documents themselves. Proofread texts and uncorrected OCR texts are presented and manipulated in an identical manner within the database, with full-text search and image search available for both – the only distinction being that users are alerted to the possibility of errors in those texts still requiring editing. Volunteers located around the world correct mistakes and add modern punctuation to the texts as time allows and according to their own interests – typically hundreds of corrections are made each day.



Left: A scanned page of text with a transcription created using OCR and subsequently corrected by ctext.org users.
Right: The same data automatically assembled into a transcription of the entire text.

Library cards for machines: Application Programming Interfaces (APIs)

As digital libraries grow in size and scope, they also present increasingly valuable opportunities for research using novel methods including text mining, distant reading and other techniques that are often grouped under the label “digital humanities”. At the same time, what can in practice be achieved with individual projects and their associated tools and materials is frequently limited by the particular use cases envisioned by their creators when these resources were first designed and implemented. Application Programming Interfaces (APIs) – standardized mechanisms through which independently developed pieces of computer software are able to share data and functionality in real time – provide one approach to greatly increasing the flexibility and thus utility of such projects.

With these goals in mind, the Chinese Text Project has recently published its own API, which provides machine-readable export of data from any of the texts and editions in its collection, together with a mechanism to make external tools and resources directly accessible through its user interface in the form of user-installable “plugins”. While many such plugins have already been created – including integrations with the MARKUS textual markup platform and a range of online Chinese dictionaries – the true value of such APIs lies in their flexibility: in particular, their ability to be adapted to new resources and new use cases without requiring additional coordination or development work, often leading to successful application to use cases quite unrelated to those for which they were first created.

While the Chinese Text Project API was developed primarily with the goal of facilitating online collaboration, it is now also being used to facilitate digital humanities teaching and research. In the spring semester of 2016, graduate students at Harvard University’s Department of East Asian Languages and Civilizations made extensive use of the API as part of the course Digital Methods for Chinese Studies, which introduced students with backgrounds in Chinese history and literature to practical programming and digital humanities techniques. By making use of the API, students could obtain digital copies of precisely the texts they needed, in exactly the format they required, without the significant additional effort this would normally entail. Rather than working with set example texts for which data had been pre-compiled, or spending classroom time on routine data preparation, students could directly access the texts most relevant to their own work in a consistent format. For the same reason of consistency, programs written to perform a given set of operations on one text could immediately be applied to any other text among the tens of thousands available through the API.
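As a rough illustration of the kind of access this affords, the sketch below constructs a request for one chapter of the Analects and extracts the passage text from a canned JSON response standing in for a live API call. The endpoint name (`gettext`), parameter (`urn`), and response field (`fulltext`) follow the published ctext.org API documentation as I understand it, but should be treated here as assumptions to be checked against the current docs.

```python
import json
from urllib.parse import urlencode

# Endpoint and parameter names modelled on the ctext.org API
# documentation; treat these as assumptions for illustration.
API_BASE = "https://api.ctext.org/gettext"

def build_request(urn):
    """Construct a request URL for the full text of a work,
    identified by its URN (e.g. 'ctp:analects/xue-er')."""
    return API_BASE + "?" + urlencode({"urn": urn})

def extract_text(response_body):
    """Pull the passage text out of a JSON response; the
    'fulltext' field name is an assumption for illustration."""
    data = json.loads(response_body)
    return data.get("fulltext", [])

url = build_request("ctp:analects/xue-er")
print(url)

# Canned response standing in for the body of a live API call:
sample = '{"title": "學而", "fulltext": ["學而時習之，不亦說乎？"]}'
print(extract_text(sample))
```

Because every text is addressed by a URN in a single consistent scheme, a script written against one chapter works unchanged against any other – which is exactly the property that made the API useful in the classroom.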


Part of a network graph representing single-character explanatory glosses given in the early character dictionary the Shuowen jiezi. Arrows indicate direction of explanation.
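The data behind such a graph is simply a set of directed edges from headword to explaining character. As a minimal sketch – the 考/老 mutual gloss is genuine Shuowen material, while the 甲→乙 edge is a made-up placeholder – one can build the graph and detect pairs of characters that gloss each other:

```python
from collections import defaultdict

# Directed gloss edges, pointing from headword to explaining
# character as in the figure. 考/老 gloss each other in the
# Shuowen; the 甲→乙 edge is a placeholder for illustration.
edges = [("考", "老"), ("老", "考"), ("甲", "乙")]

graph = defaultdict(set)
for head, gloss in edges:
    graph[head].add(gloss)

def mutual_glosses(graph):
    """Pairs of characters that explain each other (2-cycles)."""
    return {frozenset((a, b)) for a in graph for b in graph[a]
            if a in graph.get(b, set())}

print(mutual_glosses(graph))  # finds the 考/老 pair
```

Cycles like this one are interesting precisely because they show where the dictionary's explanations become circular, something that is hard to spot by reading but trivial to detect once the glosses are treated as a graph.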

Conclusion

The application of digital techniques developed in other domains to humanities questions – in this case, of crowdsourcing and APIs to the simple but fundamental question “What does the text actually say?” – is characteristic of the emerging field of digital humanities. Collaboration – facilitated in this case by these same techniques – often plays an important role in such projects, due to the enormous amounts of data available, the scalability of digital techniques in comparison to individual manual effort, and the power of digital methods to help make sense of a volume of material larger than any individual could plausibly analyze by hand.

Donald Sturgeon is Postdoctoral Fellow in Chinese Digital Humanities and Social Sciences at Harvard University’s Fairbank Center for Chinese Studies, and editor of the Chinese Text Project.

Posted in Chinese, Digital Humanities

Classical Chinese Digital Humanities

Introducing the first in a series of online tutorials covering basic digital humanities techniques using the Python programming language and the Chinese Text Project API. These tutorials are based in part on material covered in the course CHNSHIS 202: Digital Methods for Chinese Studies, which I taught at Harvard University’s Department of East Asian Languages and Civilizations in Spring 2016.

Intended audience: People with some knowledge of Chinese literature and an interest in digital humanities; no programming experience necessary.

Format: Most of these tutorials will consist of a Jupyter Notebook file. These files contain a mixture of explanations and code that can be modified and run from within your web browser. This makes it very easy to modify, play with, and extend all of the example code. You can also read the tutorials online first (you’ll need to download the files in order to run the code and do the exercises though).

http://digitalsinology.org/classical-chinese-dh-getting-started/

Posted in Chinese, Digital Humanities

Text, Data, and Digital Humanities: APIs and the Chinese Text Project

Yale University, 22 April 2016

As databases, digital libraries, and digital tools grow in size and scope, they present increasingly valuable opportunities for research using novel methods including text mining, distant reading and other techniques that can be grouped under the heading “digital humanities”. At the same time, what can in practice be achieved technically using individual projects and their associated tools and materials is frequently limited by the types of use case envisioned by their creators when these resources were first designed and implemented.

Application Programming Interfaces (APIs) – standardized mechanisms through which independently developed pieces of computer software are able to share data and functionality in real time – provide one approach to greatly increasing the flexibility and thus utility of databases, digital libraries, and other tools. Key to the utility of such APIs is the possibility of functionality and content being reused in different ways by different users, without requiring central implementation of a new mechanism for each use case.

In this talk I describe how the implementation of existing third-party APIs, as well as the development of a new special-purpose API for the Chinese Text Project, an online database and digital library of pre-modern Chinese texts, has opened up new opportunities for fast, efficient, and easy-to-use repurposing of data in a variety of contexts. These include user-driven integration with other online tools and resources (both those already available and those still to be constructed), statistical textual analysis and natural language processing research, and teaching and research in Chinese digital humanities.

Posted in Chinese, Digital Humanities, Talks and conference papers

Automated Identification of Parallels and Allusions in Classical Chinese Texts

Paper presented at AAS 2016, Seattle, April 1, 2016

The classical Chinese corpus has long been recognized to contain a vast amount of text reuse: closely related textual content that, for a variety of reasons, occurs in multiple works that might otherwise be considered to be quite independent creations ascribed to entirely different authors. Although this reuse occasionally involves explicit citation of a particular work, or acknowledgment that what follows is a widely known saying as opposed to an original invention of the author, far more often no indication is given that a passage may have been borrowed from elsewhere. Identifying such instances of reuse can shed light upon difficult issues of authorship and textual history, as well as highlight textual variations that can provide clues to the interpretation of obscure or disputed passages.

Digital methods make possible the exploration and analysis of text reuse not only in isolated instances, but systematically across a corpus of works as a whole. In this paper I propose methods of identifying two distinct types of text reuse in the classical Chinese corpus and provide an evaluation of the degrees of accuracy achieved. The first is overtly similar or “parallel” passages, which can be reliably located by defining and maximizing appropriate similarity metrics over regions of text. The second is less direct allusion to the content of earlier works, and is considerably more challenging to identify. I propose an approach that makes use of information retrieval and machine learning techniques, while also leveraging statistical data derived from the more easily identified “parallel” passages.
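The notion of a similarity metric over regions of text can be illustrated with a generic sketch – not the specific metric used in the paper – using Jaccard similarity over character n-grams, which scores a genuinely parallel pair far above an unrelated one:

```python
def ngrams(text, n=3):
    """Set of overlapping character n-grams in a passage."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity over character n-grams: shared n-grams
    divided by total distinct n-grams across both passages."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

# Two versions of a well-known line, one with extra material,
# and an unrelated passage for comparison:
p1 = "君子務本本立而道生"
p2 = "君子務本本立而道生孝弟也者"
unrelated = "天命之謂性率性之謂道"

print(similarity(p1, p2) > similarity(p1, unrelated))  # → True
```

Locating parallels in a corpus then amounts to sliding windows over pairs of texts and keeping the regions where a metric like this exceeds a threshold, which is what makes the "overtly similar" case comparatively tractable.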

Posted in Chinese, Digital Humanities, Talks and conference papers