Probabilistic Natural Language Processing: Bayes' Theorem in Language Modeling
Abstract
Probabilistic reasoning is central to natural language processing (NLP). Consider a speech recognition system, which converts an audio signal into text. Most of the time it cannot recover a perfect interpretation of the speech signal; instead, it produces a number of alternatives, some more reasonable than others. For example, if you say “recognize speech”, the system may well hear something like “reach a crew peach”: to the recognizer, these two strings sound very similar and are easy to confuse. To a human, however, they are very different; one is reasonable, the other completely nonsensical. This suggests that we want the probability of the first string to be high and the probability of the second to be relatively low. Then, even if the speech recognition system has to choose between the two, it will have an easy time deciding which one is correct.
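Bayes' theorem makes this intuition concrete: the recognizer should choose the transcription W that maximizes P(W | audio) ∝ P(audio | W) · P(W), where a language model supplies the prior P(W). The Python sketch below is a minimal illustration of this decision rule; all of the probabilities and the bigram table in it are hypothetical, chosen purely to show how even a toy bigram language model breaks the tie between two acoustically similar candidates:

    import math

    # Toy bigram language model P(w_i | w_{i-1}); these probabilities
    # are hypothetical, chosen only to illustrate the idea.
    BIGRAMS = {
        ("<s>", "recognize"): 0.010, ("recognize", "speech"): 0.200,
        ("<s>", "reach"): 0.008, ("reach", "a"): 0.150,
        ("a", "crew"): 0.0005, ("crew", "peach"): 0.00001,
    }
    UNSEEN = 1e-8  # crude probability floor for unseen bigrams

    def log_prior(sentence):
        """log P(sentence) under the toy bigram model."""
        words = ["<s>"] + sentence.split()
        return sum(math.log(BIGRAMS.get(pair, UNSEEN))
                   for pair in zip(words, words[1:]))

    # Hypothetical acoustic scores log P(audio | text): nearly equal,
    # because the two strings sound alike to the recognizer.
    log_acoustic = {"recognize speech": -10.1, "reach a crew peach": -10.0}

    # Bayes decision: pick W maximizing log P(audio | W) + log P(W).
    best = max(log_acoustic, key=lambda w: log_acoustic[w] + log_prior(w))
    print(best)  # -> recognize speech

Here the acoustic model alone slightly prefers the nonsensical string, but the language model prior P(W) overwhelmingly favours “recognize speech”, which is exactly the role Bayes' theorem assigns to it.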