Summary notebook
tensorflow.keras.preprocessing.text.Tokenizer
- limit vocabulary size with num_words=1000
- deal with out-of-vocabulary words with oov_token="<OOV>"
- fit the tokenizer on a list of strings with .fit_on_texts(input_texts)
- get the vocabulary (as a dict mapping word to index) via the .word_index property
- see the sketch below
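
A minimal sketch of the workflow summarized above, assuming a small illustrative corpus (input_texts here is a placeholder, not from the notes); the final texts_to_sequences call is included to show the typical next step, though it isn't listed in the summary.

```python
from tensorflow.keras.preprocessing.text import Tokenizer

# Illustrative placeholder corpus (an assumption, not from the notes).
input_texts = [
    "the cat sat on the mat",
    "the dog ate my homework",
]

# Keep only the 1000 most frequent words; unknown words map to "<OOV>".
tokenizer = Tokenizer(num_words=1000, oov_token="<OOV>")
tokenizer.fit_on_texts(input_texts)

# Vocabulary as a dict {word: index}; lower indices = more frequent words.
print(tokenizer.word_index)

# Typical next step: convert texts to sequences of integer indices.
sequences = tokenizer.texts_to_sequences(input_texts)
print(sequences)
```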
