Type: Article

An intelligent extension of the training set for the Persian n-gram language model: an enrichment algorithm

Journal: Onomazein (ISSN 0717-1285)
Year: September 2023
Volume: – Issue: – Pages: 191–211
Access: Hybrid Gold
DOI: 10.7764/onomazein.61.09
Language: English

Abstract

In this article, we introduce an automatic mechanism that intelligently extends the training set to improve an n-gram language model of Persian. Given Persian's free word order, our enrichment algorithm diversifies the n-gram combinations in the baseline training data through dependency reordering, adding permissible sentences and filtering out ungrammatical ones with a hybrid empirical (heuristic) and linguistic approach. Experiments performed on a baseline training set (taken from a standard Persian corpus) and the resulting enriched training set show a decline in average relative perplexity (between 34% and 73%) across informal/spoken vs. formal/written Persian test data. © 2023 Pontificia Universidad Catolica de Chile. All rights reserved.
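To make the idea concrete, the following is a minimal toy sketch (not the paper's actual algorithm) of how word-order enrichment can lower n-gram perplexity on reordered test data. It trains an add-one-smoothed bigram model, then "enriches" the training set by generating word-order variants of each sentence. The `is_permissible` filter is a hypothetical placeholder that accepts everything; the real method reorders along dependency links and filters with hybrid heuristic/linguistic rules, neither of which is reproduced here.

```python
import math
from collections import Counter
from itertools import permutations

def train_bigram(sentences):
    """Count unigrams and bigrams over boundary-marked sentences."""
    uni, bi = Counter(), Counter()
    for s in sentences:
        toks = ["<s>"] + list(s) + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def perplexity(sentences, uni, bi):
    """Add-one (Laplace) smoothed bigram perplexity."""
    vocab = len(uni)
    log_prob, n = 0.0, 0
    for s in sentences:
        toks = ["<s>"] + list(s) + ["</s>"]
        for a, b in zip(toks, toks[1:]):
            log_prob += math.log((bi[(a, b)] + 1) / (uni[a] + vocab))
            n += 1
    return math.exp(-log_prob / n)

def enrich(sentences):
    """Toy stand-in for dependency-reordering enrichment: emit all
    word-order permutations that pass a (here trivial) filter."""
    def is_permissible(variant):  # hypothetical placeholder filter
        return True
    return [list(v) for s in sentences
            for v in permutations(s) if is_permissible(v)]

# Tiny illustration: a scrambled test sentence unseen in the baseline
# training data becomes less surprising after enrichment.
train = [["ali", "ketab", "khand"]]
test_scrambled = [["ketab", "ali", "khand"]]
pp_base = perplexity(test_scrambled, *train_bigram(train))
pp_enriched = perplexity(test_scrambled, *train_bigram(enrich(train)))
```

In this toy run the enriched model assigns the scrambled sentence a lower perplexity than the baseline, mirroring (only qualitatively) the declining-perplexity trend the abstract reports.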