Distant Reader Study Carrel

About your study carrel

This page outlines the breadth & depth of your "study carrel" -- the results & analysis of your Distant Reader submission. Peruse the content of this page, and then consider learning how to dig deeper by reading the Distant Reader Study Carrel Cookbook. If you want "just the facts", then consider reading this text's synopsis.

Size & scope

First, the simple things. Your study carrel was created through the submission of a [SINGLE URL|FILE OF URLS|FILE FROM YOUR COMPUTER|ZIP FILE]. This ultimately resulted in a collection of 33 item(s). The original versions of these items have been saved in a cache, and each of them has been transformed & saved as a plain text file. All of the following analysis has been done against these plain text files.

Your study carrel is 318287 words long. [0] Each item in your study carrel is, on average, 9645.0 words long. [1] If you want to dig deeper but save yourself some time, then consider reading one of the shorter items. On the other hand, if your desire is for more detail, then consider reading one of the longer items. The following histograms and box plots illustrate the overall size of your study carrel.
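
If you would like to reproduce these counts, a minimal Python sketch is below. It assumes only the carrel's ./txt/ directory of plain text files described above; it is an illustration, not necessarily the Reader's exact arithmetic.

    from pathlib import Path

    # Tally the number of whitespace-delimited words in each plain text file.
    counts = {f.name: len(f.read_text(encoding="utf-8").split())
              for f in Path("./txt").glob("*.txt")}

    total = sum(counts.values())
    print(f"carrel size: {total} words")
    print(f"average item length: {total / len(counts):.1f} words")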

Readability

On a scale from 0 to 100, where 0 is very difficult and 100 is very easy, your documents have an average readability score of 51.0. [2] Consequently, if you want to read something simpler, then consider a document with a higher score. If you want something more specialized, then consider something with a lower score. The following histograms and box plots illustrate the overall readability of your study carrel.
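
For the curious, the sketch below shows one way such a score can be computed in Python with the third-party textstat library. The library's implementation of the Flesch formula may differ in detail from the Reader's own.

    from pathlib import Path
    import textstat  # third-party: pip install textstat

    # Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    text = Path("./txt/culturalanalytics-org-72.txt").read_text(encoding="utf-8")
    print(textstat.flesch_reading_ease(text))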

Word frequencies

By merely counting & tabulating the frequency of individual words or phrases, you can begin to get an understanding of your carrel's "aboutness". Excluding "stop words", some of the more frequent words include: one, data, fiction, also, words, novels, texts, corpus, see, literary, reading, new, social, figure, gender, genre, may, novel, different, work, analysis, text, cultural, use, us. [3] The three files that use the three most frequent words most often are ./txt/culturalanalytics-org-736.txt, ./txt/culturalanalytics-org-6358.txt, and ./txt/culturalanalytics-org-72.txt.

The most frequent two-word phrases (bigrams) include: new york, science fiction, cultural analytics, university press, nineteenth century, digital humanities, detective fiction, et al, female poets, topic modeling, poet heterosexual, machine learning, jan reading, twentieth century, performance styles, literary history, tang kristensen, dimensionality reduction, ted underwood, critical search, pitch range, sampled recordings, th century, data sets, poet voice. The three files that use the three most frequent phrases most often are ./txt/culturalanalytics-org-6358.txt, ./txt/culturalanalytics-org-4588.txt, and ./txt/culturalanalytics-org-6733.txt.
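
Such tabulations are easy to reproduce. Here is a minimal Python sketch using the NLTK library; the stop word list and tokenizer are NLTK's defaults, which may differ slightly from the ones used to build this page.

    from collections import Counter
    from pathlib import Path
    import nltk

    # One-time setup: nltk.download("punkt"); nltk.download("stopwords")
    text = Path("./txt/culturalanalytics-org-736.txt").read_text(encoding="utf-8")
    words = [w.lower() for w in nltk.word_tokenize(text) if w.isalpha()]
    stops = set(nltk.corpus.stopwords.words("english"))
    content = [w for w in words if w not in stops]

    print(Counter(content).most_common(25))                # frequent unigrams
    print(Counter(nltk.bigrams(content)).most_common(25))  # frequent bigrams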

While often deemed superficial or sophomoric, rudimentary frequencies and their associated "word clouds" can be quite insightful:


unigrams

bigrams

Keywords

Sets of keywords -- statistically significant words -- can be enumerated by comparing the relative frequency of words in each document against their frequency across the entire corpus. Some of the most statistically significant keywords in your study carrel include: literary, data, word, text, novels, figures, genres, likely, new, words, differ, digitized, feature, fictional, languages, news, novel, press, researcher, americans, books, category, centuries, century, character. And now word clouds really begin to shine:


keywords
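
This page does not spell out which statistic is used, but TF-IDF (term frequency-inverse document frequency) is one common way to operationalize the comparison: words scoring highly are frequent in a given document yet rare across the corpus. The sketch below is offered as an illustration, not as the Reader's exact method.

    from pathlib import Path
    from sklearn.feature_extraction.text import TfidfVectorizer

    files = sorted(Path("./txt").glob("*.txt"))
    docs = [f.read_text(encoding="utf-8") for f in files]

    # Weight each word by how distinctive it is to each document.
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(docs).toarray()
    terms = vectorizer.get_feature_names_out()

    # Print each document's five highest-scoring (most distinctive) words.
    for f, row in zip(files, tfidf):
        top = row.argsort()[-5:][::-1]
        print(f.name, [terms[i] for i in top])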

Through the use of a concordance -- a keyword-in-context tool, or a "poor man's search engine" -- you can see how words are used in relation to other words. [4] Here is a random sample of concordance entries using the two most significant keywords as input; a sketch of how such a concordance can be generated follows the sample:

s employ a probabilistic memoryefficient data structure called the bloom filter which trad
ognizably generic fiction in a large literary corpus while we do not yet see strong eviden
 sum up transformations in modernist literary scholarship over the past decade or two in d
se that suggest the alternative category data preparation most folklore collections requir
is own preferences both personal and literary when choosing between available offers of wo
ion a corpus listing and related project data are available in the ca dataverse supervised
ary history neither pamphlet nor the literary lab website share the original results list 
erformance terms of white mainstream literary culture this is not to presume that a more e
s were indeed figures typical of our literary moment they are not they are representatives
on of verbs human analysis of the parsed data revealed that many words identified as verbs
aches on questions of what can be called data relied upon as data and the new kinds of cul
 a small portion of imperial chinese literary production these works represent a good init
re fiction more like the rest of the literary field the converse trend a playful borrowing
ransformation from dictionary to digital data is one not just of intersecting temporalitie
 genres can be viewed not as natural literary kinds but as generalizations about the organ
ures are available in an online code and data supplement my descriptions will remain brief
e the implications are important for literary scholars although systems are become increas
 book the performance of reading new literary history no oxford english dictionary neutral
le would provide further insights to our data but since we could not test them we turned i
is statistics they describe social media data and user behavior in terms of probabilities 
th the novels narrator using the booknlp data this step yields a sensitivity of and a spec
by retrospectively finding it in the literary text thus imposing an artificial category on
 theres also something enthralling about data that yields such stories almost organically 
d the extent to which followers of a literary movement will exaggerate stylistic trends de
truction of meaning by assessing how literary critics and linguists have tried to model re
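
Generating a concordance yourself is nearly a one-liner with NLTK. The sketch below uses the two keywords named above ("literary" and "data"); NLTK's output formatting differs cosmetically from the sample.

    from pathlib import Path
    import nltk

    tokens = nltk.word_tokenize(
        Path("./txt/culturalanalytics-org-6358.txt").read_text(encoding="utf-8"))
    kwic = nltk.Text(tokens)

    # Show each keyword in its surrounding context, five lines apiece.
    for keyword in ("literary", "data"):
        kwic.concordance(keyword, width=100, lines=5)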

Topic modeling

Topic modeling is another popular approach to denoting the aboutness of a corpus. [6] If your study carrel could be summed up in a single word, then that word might be fiction, and ./txt/culturalanalytics-org-9268.txt is the item most about that word.

If your study carrel could be summed up in three words ("topics") then those words might be: reading, fiction, and data. And the respective files would be: ./txt/culturalanalytics-org-7209.txt, ./txt/culturalanalytics-org-5619.txt, and ./txt/culturalanalytics-org-4630.txt.

If your study carrel could be summed up in five topics, and each topic were denoted by three words, then those topics and their most significantly associated files would be as follows (a sketch of the modeling process appears after the list):

  1. data gender women - ./txt/culturalanalytics-org-4630.txt
  2. information data words - ./txt/culturalanalytics-org-5619.txt
  3. fiction genre model - ./txt/culturalanalytics-org-5559.txt
  4. poets poet london - ./txt/culturalanalytics-org-7209.txt
  5. books reading topic - ./txt/culturalanalytics-org-4114.txt
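
The sketch below reproduces this kind of five-topic, three-word summary with scikit-learn's LDA implementation. Take it as an illustration; the Reader's own topic modeler, stop word handling, and random seeds may all differ, so the discovered topics will not match this page exactly.

    from pathlib import Path
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    docs = [f.read_text(encoding="utf-8")
            for f in sorted(Path("./txt").glob("*.txt"))]

    # Topic models operate on raw term counts, not TF-IDF weights.
    vectorizer = CountVectorizer(stop_words="english", max_df=0.9)
    counts = vectorizer.fit_transform(docs)
    terms = vectorizer.get_feature_names_out()

    lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(counts)

    # Label each of the five topics with its three highest-weighted words.
    for i, topic in enumerate(lda.components_, start=1):
        top = topic.argsort()[-3:][::-1]
        print(i, [terms[j] for j in top])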

Moreover, the totality of the study carrel's aboutness can be visualized with the following pie chart:

Nouns & verbs

Through an analysis of your study carrel's parts-of-speech, you are able to answer questions beyond aboutness. For example, a list of the most frequent nouns (text, novel, word, fiction, work, genre, poet, corpus, datum, century, gender, figure, reading, model, character, history, way, time, book, analysis, author, language) helps you answer what questions: "What is discussed in this collection?" An enumeration of the lemmatized verbs (be, have, use, do, see, make, include, find, read, give, identify, write, take, suggest, show, appear, describe, base, understand, provide, know, compare, associate, represent, seem) helps you learn what actions take place in a text or what the things in the text do. Very frequently, the most common lemmatized verbs are "be", "have", and "do"; the more interesting verbs usually occur further down the list of frequencies. A sketch of how such parts-of-speech can be extracted follows the word clouds:


nouns

verbs
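
One way to extract such lists is with a part-of-speech tagger like spaCy, sketched below. The Reader's own tagger may differ; swapping the tag "NOUN" for "PROPN", "PRON", "ADJ", or "ADV" yields the kinds of lists discussed in the next two sections as well.

    from collections import Counter
    from pathlib import Path
    import spacy

    # One-time setup: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")
    doc = nlp(Path("./txt/culturalanalytics-org-72.txt").read_text(encoding="utf-8"))

    # Tally lemmatized nouns and verbs; is_alpha filters out punctuation tokens.
    nouns = Counter(t.lemma_.lower() for t in doc if t.pos_ == "NOUN" and t.is_alpha)
    verbs = Counter(t.lemma_.lower() for t in doc if t.pos_ == "VERB" and t.is_alpha)
    print(nouns.most_common(25))
    print(verbs.most_common(25))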

Proper nouns & pronouns

An extraction of proper nouns (University, London, Press, New, Journal, York, Poet, Cultural, Literary, English, James, John, Fiction, Digital, Analytics, Figure, Gothic, Oxford, Alger, David, Data, American, CA, Cambridge) helps you determine the names of people and places in your study carrel. An analysis of personal pronouns (we, it, they, i, us, them, he, she, you, itself, one, themselves, her, me, him, herself, himself, ourselves, ye, myself, ours, oneself, theirs) enables you to answer at least two questions: 1) "What, if any, is the overall gender of my study carrel?", and 2) "To what degree are the texts in my study carrel self-centered versus inclusive?" Below are word clouds of your study carrel's proper nouns & personal pronouns.


proper nouns

pronouns

Adjectives & adverbs

Learning about a corpus's adjectives (other, such, different, literary, more, social, large, many, historical, same, high, cultural, new, female, particular, first, early, digital, good, critical, american, important, nineteenth, similar, low) and adverbs (not, more, also, only, most, as, even, well, so, here, out, however, often, then, very, less, rather, up, just, thus, perhaps, together, indeed, especially, still) helps you answer how questions: "How are things described and how are things done?" An analysis of adjectives and adverbs also points to a corpus's overall sentiment: "In general, is my study carrel positive or negative?" A sketch of one way to estimate that sentiment follows the word clouds.


adjectives

adverbs
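
As a rough gauge of that sentiment, the sketch below scores every item with NLTK's VADER analyzer. VADER was tuned for social media rather than scholarly prose, so treat the numbers as suggestive, not definitive.

    from pathlib import Path
    from nltk.sentiment import SentimentIntensityAnalyzer

    # One-time setup: nltk.download("vader_lexicon")
    sia = SentimentIntensityAnalyzer()
    for f in sorted(Path("./txt").glob("*.txt")):
        score = sia.polarity_scores(f.read_text(encoding="utf-8"))["compound"]
        print(f"{score:+.3f}  {f.name}")  # +1.0 very positive, -1.0 very negative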

Notes

[0] Once upon a time, a corpus of a million words was deemed large.

[1] To put this into context, the typical scholarly journal article is about [NUMBER] words long, Shakespeare's Hamlet is [NUMBER] words long, and the Bible is [NUMBER] words long.

[2] In this case, a Flesch readability score is being calculated. It is based on things like the number of words in a document, the lengths of the words, the number of sentences, the lengths of the sentences, etc. In general, children's stories have higher Flesch scores while insurance documents and doctoral dissertations have lower scores.

[3] "Stop words" are sometimes called "function words", and they are words which carry little or no meaning. Every language has stop words, and in English they include but are not limited to "the", "a", "an", etc. A single set of stop words has been used through out the analysis of your collection.

[4] Concordances are one of the oldest forms of text mining, first developed in the 13th century to "read" religious documents.

[6] An unsupervised machine learning process, topic modeling is a very popular text mining operation. Assuming that a word is known by the company it keeps, topic modeling identifies sets of keywords denoted by their centrality in the text. Words which are both frequent and in close proximity to each other are considered significant.