such an
uralik committed Dec 30, 2017
commit 466425507e4e035658493edb13b89c329f8a9bfa
2 changes: 1 addition & 1 deletion lecture_note.tex
@@ -3685,7 +3685,7 @@ \subsection{Smoothing and Back-Off}
 travel.}
 
 The biggest issue of having an $n$-gram that never occurs in the training corpus
-is that any sentence containing such as $n$-gram will be given a zero probability
+is that any sentence containing such an $n$-gram will be given a zero probability
 regardless of how likely all the other $n$-grams are. Let us continue with the
 example of ``I like llama''. With an $n$-gram language model built using all the
 books in Google Books, the following, totally valid sentence\footnote{
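For context on the passage being edited (not part of the commit itself): a minimal sketch, assuming a made-up toy corpus and an unsmoothed maximum-likelihood bigram model, of how a single unseen $n$-gram such as "like llama" forces the probability of the entire sentence to zero, no matter how likely the other $n$-grams are.

# Minimal sketch (illustration only): unsmoothed MLE bigram model on a toy corpus,
# showing how one unseen bigram zeroes out the probability of a whole sentence.
from collections import Counter

corpus = ["i like cats", "i like dogs", "cats like dogs"]  # made-up toy data
unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    toks = ["<s>"] + sent.split() + ["</s>"]
    unigrams.update(toks[:-1])
    bigrams.update(zip(toks[:-1], toks[1:]))

def sentence_prob(sentence):
    """P(sentence) = prod_t P(w_t | w_{t-1}) under unsmoothed MLE estimates."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for prev, cur in zip(toks[:-1], toks[1:]):
        p *= bigrams[(prev, cur)] / unigrams[prev] if unigrams[prev] else 0.0
    return p

print(sentence_prob("i like dogs"))   # > 0: every bigram was seen in training
print(sentence_prob("i like llama"))  # 0.0: ("like", "llama") never occurred

One zero factor in the product wipes out the contribution of every other, however frequent, $n$-gram in the sentence, which is the issue the smoothing and back-off techniques in this subsection address.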