Distant Reader Study Carrel

About your study carrel

This page outlines the breadth & depth of your "study carrel" -- the results & analysis of your Distant Reader submission. Peruse the content of this page, and then consider learning how to dig deeper by reading the Distant Reader Study Carrel Cookbook. If you want "just the facts", then consider reading this text's synopsis.

Size & scope

First, the simple things. Your study carrel was created through the submission of a [SINGLE URL|FILE OF URLS|FILE FROM YOUR COMPUTER|ZIP FILE]. This ultimately resulted in a collection of 543 item(s). The original versions of these items have been saved in a cache, and each of them has been transformed & saved as a set of plain text files. All of the following analysis has been done against these plain text files.

Your study carrel is 1,455,272 words long. [0] Each item in your study carrel is, on average, 2,680 words long. [1] As you dig deeper, you might save yourself some time by reading a shorter item; on the other hand, if your desire is for more detail, then consider reading a longer item. The following histograms and box plots illustrate the overall size of your study carrel.

Readability

On a scale from 0 to 100, where 0 is very difficult and 100 is very easy, your documents have an average readability score of 52.0. [2] Consequently, if you want to read something simpler, then consider a document with a higher score; if you want something more specialized, then consider a document with a lower score. The following histograms and box plots illustrate the overall readability of your study carrel.
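
If you would like to recompute these scores yourself, the following sketch illustrates the idea. It assumes the Python "textstat" library, which is an illustrative choice and not necessarily what the Reader itself uses:

    # A minimal sketch of computing Flesch reading ease scores for the
    # carrel's plain text files; "textstat" is an assumption -- the
    # Reader may compute its scores differently.
    from pathlib import Path
    import textstat

    scores = {}
    for path in Path("./txt").glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        # Flesch reading ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
        scores[path.name] = textstat.flesch_reading_ease(text)

    average = sum(scores.values()) / len(scores)
    print(f"average readability: {average:.1f}")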

Word frequencies

By merely counting & tabulating the frequency of individual words or phrases, you can begin to get an understanding of your carrel's "aboutness". Excluding "stop words", some of the more frequent words include: see, data, new, library, will, may, web, one, us, also, like, found, use, renewed, information, content, time, work, first, digital, open, search, june, now, people. [3] The three files that use the three most frequent words most often are ./txt/onlinebooks-library-upenn-edu-9647.txt, ./txt/maisonbisson-com-7395.txt, and ./txt/www-yalelawjournal-org-5755.txt.

The most frequent two-word phrases (bigrams) include: renewals found, registered works, works database, issue renewals, contributions renewed, issues renewed, united states, july june, january december, june may, october september, april march, may april, march february, february january, september august, november october, december november, august july, see januaryjune, new york, see julydecember, contribution renewals, code lib, database contributions. The three files that use the three most frequent phrases most often are ./txt/onlinebooks-library-upenn-edu-9647.txt, ./txt/onlinebooks-library-upenn-edu-3057.txt, and ./txt/zbw-eu-9.txt.
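If you would like to reproduce these tabulations, a minimal sketch follows; the stop word list here is illustrative only, not the Reader's actual list:

    # A sketch of unigram and bigram frequency counting over the
    # carrel's plain text files.
    from pathlib import Path
    from collections import Counter
    import re

    STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "that"}

    unigrams, bigrams = Counter(), Counter()
    for path in Path("./txt").glob("*.txt"):
        text = path.read_text(errors="ignore").lower()
        words = [w for w in re.findall(r"[a-z]+", text) if w not in STOPWORDS]
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))

    print(unigrams.most_common(25))
    print(bigrams.most_common(25))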

While often deemed superficial or sophomoric, rudimentary frequencies and their associated "word clouds" can be quite insightful:


unigrams

bigrams

Keywords

Sets of keywords -- statistically significant words -- can be enumerated by comparing the relative frequency of words within a document against their frequency across the entire corpus. Some of the most statistically significant keywords in your study carrel include: data, libraries, library, news, web, archival, new, https, http, archived, twitter, github, june, people, books, likely, users, like, posts, user, archives, evergreen, liked, privacy, record. And now word clouds really begin to shine:


keywords
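
One common way to compute this kind of significance score is TF-IDF. The following sketch uses scikit-learn and is illustrative only; it is not necessarily the algorithm used to build this carrel:

    # A sketch of keyword extraction via TF-IDF; the vectorizer
    # settings are illustrative assumptions.
    from pathlib import Path
    from sklearn.feature_extraction.text import TfidfVectorizer

    paths = sorted(Path("./txt").glob("*.txt"))
    texts = [p.read_text(errors="ignore") for p in paths]

    vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
    matrix = vectorizer.fit_transform(texts)
    terms = vectorizer.get_feature_names_out()

    # For each document, print its single highest-scoring keyword
    for path, row in zip(paths, matrix.toarray()):
        print(path.name, terms[row.argmax()])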

Through the use of a concordance -- a keyword-in-context tool, or a "poor man's search engine" -- you can see how words are used in relation to other words. [4] Here is a random sample of concordance entries using the two most significant keywords as input:

as readonly archiving tool that collects data from twitter but how it could operate as a t
inded me at a proverb about software and data software ages like fish data ages like wine 
 assistance of the nation that hosts the data the mlat system encourages international coo
ty wired opinion how to protect our kids data and privacy author wired opinionwired opinio
unter ensures reliable and audited usage data for journals ebooks databases and multimedia
hiving priority for most university libraries is documenting their institution further red
roject of open knowledge nepal like open data nepal local boundaries asknepal and more hav
ar update diddling with data great books data dictionary data curation in purdue twitter f
ush for the establishment of public libraries in a speech at the opening of the free publi
er categories about me ala american libraries assessment blogging book career classic blun
cess visualize clean interpret and share data especially open data using python pythonbase
ntly didnt have the ability to collected data from both the request and response records b
f web users throughout the world and use data normalization to correct for biases about es
metadata contextually traditionally libraries standardise subject metadata using x authori
f privacy controls this includes how the data collection is being framed how the different
y need to be done wol meet wvl we need a data modeling language that is suitable to rdf da
 the tracking of individuals however the data does contain information about the useragent
res you can see the three columns in the data there the next step was actually to sort all
 could do most of it on his own php libraries for collaborative filtering and recommendati
archives scratched into instruments libraries becoming motherships herbaria becoming ecolo
kova november list of animals of the red data book of russian federation unepgridarendal a
ings ndsa news ontology open access open data open repositories open source preservation a
res that help you get an overview of the data you are working with as well as identifying 
nstitutions appear to have in their libraries when i arrived at the forum what i found was
 of the idp selector by using additional data provided by the sp the user experience the f
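
You can generate concordances of your own with off-the-shelf tools. A minimal sketch using NLTK follows; the query word ("data") and the display options are illustrative assumptions:

    # A minimal concordance sketch using NLTK's Text.concordance;
    # "data" is used as the query because it is the carrel's most
    # significant keyword. Requires: nltk.download("punkt")
    import nltk
    from pathlib import Path

    text = " ".join(p.read_text(errors="ignore") for p in Path("./txt").glob("*.txt"))
    tokens = nltk.word_tokenize(text.lower())
    nltk.Text(tokens).concordance("data", width=80, lines=10)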

Topic modeling

Topic modeling is another popular approach to denoting the aboutness of a corpus. [5] If your study carrel could be summed up in a single word, then that word might be data, and ./txt/blog-esilibrary-com-5622.txt is the item most about that word.

If your study carrel could be summed up in three words ("topics"), then those words might be: web, like, and renewed. The respective files would be: ./txt/en-wikipedia-org-2180.txt, ./txt/maisonbisson-com-7395.txt, and ./txt/onlinebooks-library-upenn-edu-9647.txt.

If your study carrel could be summed up in five topics, and each topic were denoted by three words, then those topics and their most significantly associated files would be:

  1. data library web - ./txt/inkdroid-org-7078.txt
  2. site amazon like - ./txt/www-yalelawjournal-org-5755.txt
  3. retrieved 2017 2014 - ./txt/en-wikipedia-org-7739.txt
  4. berlusconi retrieved 2013 - ./txt/en-wikipedia-org-4449.txt
  5. renewed renewals cce - ./txt/onlinebooks-library-upenn-edu-9647.txt
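
To experiment with topic modeling on your own, consider the following sketch. It uses gensim's LDA implementation, and its parameters (five topics, three words each) merely mirror the list above; the Reader's own modeler, and its parameters, may well differ:

    # A sketch of topic modeling with gensim's LDA; treat the settings
    # here as illustrative assumptions only.
    from pathlib import Path
    from gensim import corpora, models
    import re

    docs = [re.findall(r"[a-z]{3,}", p.read_text(errors="ignore").lower())
            for p in Path("./txt").glob("*.txt")]

    dictionary = corpora.Dictionary(docs)
    dictionary.filter_extremes(no_below=5, no_above=0.5)  # drop rare & ubiquitous words
    corpus = [dictionary.doc2bow(doc) for doc in docs]

    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=5, passes=10)
    for topic_id, words in lda.show_topics(num_topics=5, num_words=3):
        print(topic_id, words)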

Moreover, the totality of the study carrel's aboutness can be visualized with the following pie chart:

Nouns & verbs

Through an analysis of your study carrel's parts-of-speech, you are able to answer questions beyond aboutness. For example, a list of the most frequent nouns (●, ►, library, issue, datum, web, work, time, user, june, content, information, january, july, year, archive, site, december, ^, people, contribution, post, %, page, service) helps you answer "what" questions: "What is discussed in this collection?" An enumeration of the lemmatized verbs (be, have, do, see, use, find, make, ’, get, renew, say, work, include, go, take, archive, retrieve, need, register, know, create, look, think, want, provide) helps you learn what actions take place in a text, or what the things in the text do. Very frequently, the most common lemmatized verbs are "be", "have", and "do"; the more interesting verbs usually occur further down the frequency list:


nouns

verbs
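
A sketch of this kind of part-of-speech analysis follows; it uses spaCy, an illustrative choice rather than the Reader's own pipeline:

    # A sketch of part-of-speech tabulation with spaCy.
    # Requires: python -m spacy download en_core_web_sm
    from collections import Counter
    from pathlib import Path
    import spacy

    nlp = spacy.load("en_core_web_sm")
    nouns, verbs = Counter(), Counter()
    for path in Path("./txt").glob("*.txt"):
        doc = nlp(path.read_text(errors="ignore")[:100_000])  # cap very long files
        nouns.update(t.lemma_.lower() for t in doc if t.pos_ == "NOUN")
        verbs.update(t.lemma_.lower() for t in doc if t.pos_ == "VERB")

    print(nouns.most_common(25))
    print(verbs.most_common(25))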

Proper nouns & pronouns

An extraction of proper nouns (June, January, July, December, Library, October, April, March, May, CCE, University, Berlusconi, February, November, September, New, August, Google, Amazon, |, Digital, United, Data, States, Open) helps you determine the names of the people and places in your study carrel. An analysis of personal pronouns (it, i, we, you, they, them, he, me, us, she, him, itself, themselves, ’s, y, myself, one, her, em, yourself, ourselves, himself, mine, ‘, herself) enables you to answer at least two questions: 1) "What, if any, is the overall gender of my study carrel?", and 2) "To what degree are the texts in my study carrel self-centered versus inclusive?" Below are word clouds of your study carrel's proper nouns & personal pronouns.


proper nouns

pronouns
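
The two pronoun questions above can be approximated with simple counting. In the sketch below, the pronoun sets are illustrative assumptions, not the Reader's own lists:

    # A rough sketch of the two pronoun questions: gendered pronoun
    # counts, and first-person singular versus plural.
    from collections import Counter
    from pathlib import Path
    import re

    text = " ".join(p.read_text(errors="ignore").lower()
                    for p in Path("./txt").glob("*.txt"))
    counts = Counter(re.findall(r"[a-z']+", text))

    masculine = sum(counts[w] for w in ("he", "him", "his", "himself"))
    feminine  = sum(counts[w] for w in ("she", "her", "hers", "herself"))
    singular  = sum(counts[w] for w in ("i", "me", "my", "myself"))
    plural    = sum(counts[w] for w in ("we", "us", "our", "ourselves"))

    print(f"masculine/feminine: {masculine}/{feminine}")
    print(f"self-centered vs inclusive (I vs we): {singular}/{plural}")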

Adjectives & adverbs

Learning about a corpus's adjectives (more, new, other, good, many, such, first, digital, public, open, available, different, social, same, large, last, high, most, own, few, free, political, great, full, long) and adverbs (not, also, up, so, now, more, out, just, here, only, even, then, as, well, most, very, still, back, first, too, really, often, in, much, on) helps you answer "how" questions: "How are things described, and how are things done?" An analysis of adjectives and adverbs also points to a corpus's overall sentiment: "In general, is my study carrel positive or negative?"


adjectives

adverbs
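
One way to estimate that overall sentiment is with a lexicon-based scorer. The following sketch uses NLTK's VADER analyzer, an illustrative choice only:

    # A sketch of sentiment scoring with NLTK's VADER analyzer.
    # Requires: nltk.download("vader_lexicon")
    from pathlib import Path
    from nltk.sentiment import SentimentIntensityAnalyzer

    analyzer = SentimentIntensityAnalyzer()
    scores = {p.name: analyzer.polarity_scores(p.read_text(errors="ignore"))["compound"]
              for p in Path("./txt").glob("*.txt")}

    # Compound scores range from -1 (most negative) to +1 (most positive)
    average = sum(scores.values()) / len(scores)
    print(f"average sentiment: {average:+.3f}")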

Notes

[0] Once upon a time, a corpus of a million words was deemed large.

[1] To put this into context, the typical scholarly journal article is about [NUMBER] words long, Shakespeare's Hamlet is [NUMBER] words long, and the Bible is [NUMBER] words long.

[2] In this case, a Flesch readability score is being calculated. It is based on things like the number of words in a document, the lengths of the words, the number of sentences, the lengths of the sentences, etc. In general, children's stories have higher Flesch scores, while insurance documents and doctoral dissertations have lower scores.

[3] "Stop words" are sometimes called "function words", and they are words which carry little or no meaning. Every language has stop words, and in English they include but are not limited to "the", "a", "an", etc. A single set of stop words has been used through out the analysis of your collection.

[4] Concordances are one of the oldest forms of text mining, first developed in the 13th century to "read" religious documents.

[5] An unsupervised machine learning process, topic modeling is a very popular text mining operation. Assuming that a word is known by the company it keeps, topic modeling identifies sets of keywords denoted by their centrality in the text. Words which are both frequent and in close proximity to each other are considered significant.