Text Mining using 'dplyr', 'ggplot2', and Other Tidy Tools


Documentation for package ‘tidytext’ version 0.4.1

Help Pages

augment.CTM Tidiers for LDA and CTM objects from the topicmodels package
augment.jobjRef Tidiers for Latent Dirichlet Allocation models from the mallet package
augment.LDA Tidiers for LDA and CTM objects from the topicmodels package
augment.STM Tidiers for Structural Topic Models from the stm package
bind_tf_idf Bind the term frequency and inverse document frequency of a tidy text dataset to the dataset
cast_dfm Casting a data frame to a DocumentTermMatrix, TermDocumentMatrix, or dfm
cast_dtm Casting a data frame to a DocumentTermMatrix, TermDocumentMatrix, or dfm
cast_sparse Create a sparse matrix from row names, column names, and values in a table
cast_tdm Casting a data frame to a DocumentTermMatrix, TermDocumentMatrix, or dfm
corpus_tidiers Tidiers for a corpus object from the quanteda package
dictionary_tidiers Tidy dictionary objects from the quanteda package
get_sentiments Get a tidy data frame of a single sentiment lexicon
get_stopwords Get a tidy data frame of a single stopword lexicon
glance.corpus Tidiers for a corpus object from the quanteda package
glance.CTM Tidiers for LDA and CTM objects from the topicmodels package
glance.estimateEffect Tidiers for Structural Topic Models from the stm package
glance.LDA Tidiers for LDA and CTM objects from the topicmodels package
glance.STM Tidiers for Structural Topic Models from the stm package
lda_tidiers Tidiers for LDA and CTM objects from the topicmodels package
mallet_tidiers Tidiers for Latent Dirichlet Allocation models from the mallet package
nma_words English negators, modals, and adverbs
parts_of_speech Parts of speech for English words from the Moby Project
reorder_func Reorder an x or y axis within facets
reorder_within Reorder an x or y axis within facets
scale_x_reordered Reorder an x or y axis within facets
scale_y_reordered Reorder an x or y axis within facets
sentiments Sentiment lexicon from Bing Liu and collaborators
stm_tidiers Tidiers for Structural Topic Models from the stm package
stop_words Various lexicons for English stop words
tdm_tidiers Tidy DocumentTermMatrix, TermDocumentMatrix, and related objects from the tm package
tidy.Corpus Tidy a Corpus object from the tm package
tidy.corpus Tidiers for a corpus object from the quanteda package
tidy.CTM Tidiers for LDA and CTM objects from the topicmodels package
tidy.dfm Tidy DocumentTermMatrix, TermDocumentMatrix, and related objects from the tm package
tidy.dfmSparse Tidy DocumentTermMatrix, TermDocumentMatrix, and related objects from the tm package
tidy.dictionary2 Tidy dictionary objects from the quanteda package
tidy.DocumentTermMatrix Tidy DocumentTermMatrix, TermDocumentMatrix, and related objects from the tm package
tidy.estimateEffect Tidiers for Structural Topic Models from the stm package
tidy.jobjRef Tidiers for Latent Dirichlet Allocation models from the mallet package
tidy.LDA Tidiers for LDA and CTM objects from the topicmodels package
tidy.simple_triplet_matrix Tidy DocumentTermMatrix, TermDocumentMatrix, and related objects from the tm package
tidy.STM Tidiers for Structural Topic Models from the stm package
tidy.TermDocumentMatrix Tidy DocumentTermMatrix, TermDocumentMatrix, and related objects from the tm package
tidy_triplet Utility function to tidy a simple triplet matrix
unnest_characters Wrapper around unnest_tokens for characters and character shingles
unnest_character_shingles Wrapper around unnest_tokens for characters and character shingles
unnest_lines Wrapper around unnest_tokens for sentences, lines, and paragraphs
unnest_ngrams Wrapper around unnest_tokens for n-grams
unnest_paragraphs Wrapper around unnest_tokens for sentences, lines, and paragraphs
unnest_ptb Wrapper around unnest_tokens for the Penn Treebank tokenizer
unnest_regex Wrapper around unnest_tokens for regular expressions
unnest_sentences Wrapper around unnest_tokens for sentences, lines, and paragraphs
unnest_skip_ngrams Wrapper around unnest_tokens for n-grams
unnest_tokens Split a column into tokens
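As a brief illustration of how several of the functions above fit together, the sketch below (using a small made-up two-document corpus, not data from the package) tokenizes text with unnest_tokens, removes stop words with the stop_words lexicon, and weights terms with bind_tf_idf:

```r
library(dplyr)
library(tidytext)

# Hypothetical two-document corpus for illustration only
docs <- tibble::tibble(
  document = c("a", "b"),
  text = c("The cat sat on the mat.",
           "The dog chased the cat around the yard.")
)

tidy_docs <- docs %>%
  unnest_tokens(word, text) %>%        # one row per token, lowercased
  anti_join(stop_words, by = "word")   # drop common English stop words

# Count terms per document and append tf, idf, and tf_idf columns
doc_tf_idf <- tidy_docs %>%
  count(document, word) %>%
  bind_tf_idf(word, document, n)

doc_tf_idf
```

The resulting one-token-per-row data frame can then be cast back to a DocumentTermMatrix or sparse matrix with cast_dtm or cast_sparse for use with packages such as topicmodels, whose fitted models the tidy, glance, and augment methods listed above convert back into tidy form.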