Introduction to Text Analytics with R – Part 7: VSM, LSA, & SVD
This data science tutorial introduces the viewer to the exciting world of text analytics with R programming. As exemplified by the popularity of blogging and social media, textual data is far from dead – it is increasing exponentially! Not surprisingly, knowledge of text analytics is a critical skill for data scientists if this wealth of information is to be harvested and incorporated into data products. This data science training provides introductory coverage of the following tools and techniques:
– Tokenization, stemming, and n-grams
– The bag-of-words and vector space models
– Feature engineering for textual data (e.g. cosine similarity between documents)
– Feature extraction using singular value decomposition (SVD)
– Training classification models using textual data
– Evaluating accuracy of the trained classification models
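The series demonstrates these ideas with dedicated text analytics packages, but the core of the bag-of-words and vector space models can be sketched in base R alone. The documents and the whitespace tokenizer below are illustrative stand-ins (a real pipeline would also stem terms and remove stop words):

```r
# Toy corpus standing in for real documents
docs <- c("the cat sat on the mat",
          "the dog sat on the log",
          "stock markets rallied today")

# Tokenize on whitespace; build the vocabulary across all documents
tokens <- strsplit(tolower(docs), "\\s+")
vocab  <- sort(unique(unlist(tokens)))

# Document-term matrix: one row per document, one column per term (raw counts)
dtm <- t(sapply(tokens, function(tk) table(factor(tk, levels = vocab))))

# Cosine similarity between two document vectors in the VSM
cosine <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))

cosine(dtm[1, ], dtm[2, ])  # overlapping sentences score well above zero
cosine(dtm[1, ], dtm[3, ])  # no shared terms, so the similarity is zero
```

Because cosine similarity normalizes by the vector lengths, it compares documents by the direction of their term-count vectors rather than their magnitudes, which is why it is preferred over the raw dot product for documents of different lengths.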
Part 7 of this video series includes specific coverage of:
– The trade-offs of expanding the text analytics feature space with n-grams.
– How bag-of-words representations map to the vector space model (VSM).
– Usage of the dot product between document vectors as a proxy for correlation.
– Latent semantic analysis (LSA) as a means to address the curse of dimensionality in text analytics.
– How LSA is implemented using singular value decomposition (SVD).
– Mapping new data into the lower dimensional SVD space.
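The LSA and fold-in steps listed above can be sketched with base R's `svd()`. The matrix here is random toy data rather than a real document-term matrix, and the choice of k = 2 latent dimensions is arbitrary for illustration:

```r
# Toy document-term matrix: 10 documents x 6 terms (hypothetical counts)
set.seed(42)
dtm <- matrix(rpois(60, lambda = 1), nrow = 10)

k <- 2             # number of latent semantic dimensions to retain
s <- svd(dtm)      # dtm = U %*% diag(d) %*% t(V)

# Document coordinates in the reduced k-dimensional LSA space
docs_k <- s$u[, 1:k]

# Fold a new document's term-count row vector q into the same space:
# u_new = q %*% V_k %*% diag(1 / d_k)
fold_in <- function(q, s, k) q %*% s$v[, 1:k] %*% diag(1 / s$d[1:k])

new_doc   <- matrix(rpois(6, lambda = 1), nrow = 1)
new_doc_k <- fold_in(new_doc, s, k)   # 1 x k coordinates in LSA space
```

A useful sanity check on the fold-in formula: projecting one of the original rows of `dtm` reproduces that document's row of `docs_k`, so new documents land in the same coordinate system as the training documents.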
The data and R code used in this series are available via the public