
Lab Notebook: Data Prep

So recently, in preparing for one of my comprehensive exams in early American literature, I scanned the last eleven years' worth of exams into PDF format to make it easier to take notes, copy and paste book titles, authors, etc. Unfortunately, the state of the copies in our department folders meant that I first had to make clean photocopies in order to scan them with our Digital Humanities department's Fujitsu ScanSnap (the side effect of which is a serious case of scanner envy). It wasn't until I was looking through the newly created PDF that I found a few missing pages, page sequence issues, and page direction problems. But using Acrobat Pro's tools, cleanup of this sort was easy. I also ran Acrobat's OCR tool on the file so that I could copy and paste the text into Word and Excel. I used Word as my first step so that I could clean up the text (lots of minor OCR misreads) before copying and pasting into Excel (I'm using different worksheets for the different sections of the exams; though the format has evolved a little over the last decade, they basically fall into IDs, shorter essays, and longer essays). And since I was already cleaning up the text within Word, I decided to keep the Word document as well, just to have a cleaner version of the exams. In cleaning up my Word file, I made sure to maintain all of the significant original formatting, such as italics for book titles; it just makes the exams easier to read.

I also stripped out nonessential information, such as sectional instructions (though that could make for an interesting rhetorical analysis in itself), and just labeled each section “Part I,” “Part II,” and “Part III.”

 

My Excel file contains just rudimentary information for now:

Eventually, I'll add columns for things such as themes, persons of interest, periods, related critics, related novels, etc.

 

After getting this far, it occurred to me that this task might be easier, and more valuable in the long run for gathering different sets of stats, if I were able to insert markup tags. For instance, I know Word will let me search and replace based on text formatting, so I might be able to replace every italicized phrase with the original text plus opening and closing tags around it (something like <book>…</book>). I couldn't figure out how to do the replace part until I googled around for Word and regular expressions. Sure enough, it handles them (read a good introduction on this from Microsoft: "Putting regular expressions to work in Word").

However, in the end, I didn't need to use them. I spent a great deal of time yesterday trying to get past the problem of being able to replace only each italicized word rather than the entire italicized phrase. I eventually got it working using a VBA macro. But it turns out I didn't need the regular expressions or my VBA code. Today, while retyping all of this (I lost my file while running test code; the lesson being, save all open files before testing out your VBA code!), I found exactly what I needed here. I just swapped out their replacement text with what I was looking for, and it worked like a charm. The "^&" is the Find and Replace code for whatever the search found (think of it like a variable that contains the original text). By using it in the Replace box, I'm able to insert what I need as well as the original search results (in this case, the formatted phrase).
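In practice, the Find and Replace dialog ends up looking something like this (to search on formatting alone, click in the "Find what" box, leave the text empty, and choose Italic under More > Format > Font):

Find what:      (empty, with the Italic format set)
Replace with:   <book>^&</book>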

Very cool. And powerful.

One thing to note, though: before I did this, I first had to search for italicized paragraph marks and replace them with non-formatted paragraph marks, because otherwise an empty set of tags would be created wherever a paragraph mark was also formatted as italics.

 

Since I didn't remove formatting within the Replace box, however, my <book>…</book> tags were inserted as italicized text. So a simple search and replace of the tags with a non-formatted version and presto:

Just remember to do this with the closing tag as well. Though there are other, cooler ways of doing this, simple and fast go a long way in my book.

 

Of course, I also had to manually verify that all of these tags actually were for book titles. There were a few cases of quotes or exam instructions that I hadn’t taken out, or cases where the question text was being emphasized. In those cases, I used other tags (such as <emphasis>why</emphasis>  or <foreign>fin de siècle</foreign>). I currently have no need of these tags, but since it was easy (and I was verifying the text anyway), I decided to go ahead and use them.

 

Next, I want to use this file in Wordsmith Tools to see if anything interesting or useful pops up and see if I can simply create a list of the books based on the <book> tags.
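Looking ahead a bit: pulling the titles back out of those tags should only take a couple of lines. Here is a rough sketch in R (the language I use in the next post), assuming the tagged exams get saved as plain text under a hypothetical name like exams.txt:

exam.text<-scan("exams.txt", what="char", sep="\n")                                       # read the tagged file, one line per element
book.matches<-regmatches(exam.text, gregexpr("<book>.*?</book>", exam.text, perl=TRUE))   # grab every <book>…</book> span
book.titles<-gsub("</?book>", "", unlist(book.matches))                                   # strip the tags, leaving just the titles
sort(unique(book.titles))                                                                  # one alphabetized list of distinct titles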

A Simple Frequency List using R

So recently in my Digital Humanities Seminar class, John Laudun asked us to find different tools to use to output a frequency list from a text that interested us. Instead, I decided to try out my R skills, and dusted off my notes from an R Bootcamp for corpus linguistics I attended last year (this was led by Stefan Th. Gries (author of Quantitative Corpus Linguistics with R: A Practical Introduction) and organized by Stefanie Wulff at the University of North Texas, Denton–a brilliant workshop that anyone interested in corpus linguistics and R should attend).

I won’t even try to go into what R is; however, a good introduction to it (besides Gries’ book) is at the R Project. What I’m going to do today is show you what I did to create that frequency list. It’s very simple and may be of use to some of you who are thinking about jumping into R and corpus linguistics.

Before I begin, I just want to make it clear that my information on the R commands comes from my notes of the bootcamp with Stefan Gries (I was/am very new to R). You can also find this information within his book.

 

Building the text file:

I am working with the text Charlotte Temple by Susanna Rowson, found on Project Gutenberg. I originally downloaded the text last year, and while cleaning it up (mis-scans from Project Gutenberg), I had also decided to break the text up into multiple files by chapter. Although I can work with multiple input files in R, I thought it might be easier to get my feet wet by beginning with only one text file. All of those individual files began with "ct" followed by the chapter number, ending with a .txt file extension (for example, ct01.txt). This made it easier for me to sort and locate the particular chapters I needed. I use Windows 7, and don't know of an easy way to combine all the files into one file besides cutting and pasting, so I decided to go out to the Command Prompt. I first navigated to the directory where my files were located:

Here’s a list of all my chapter files: dir ct*.txt  (this says show me all the files that begin with “ct” and end with “.txt”):

To combine all of them into one file, I entered:  type ct*.txt > ctcompl.txt

This reads as follows:

  • type = display the contents to the screen
  • this is followed by what file to type–in this case, it’s all the files that begin with ‘ct’ and end with ‘.txt’ (the asterisk, *, is a wildcard).
  • but instead of displaying the contents on the screen, I used a “>” to send the contents to a file.
  • the file name I used for the complete text is ctcompl.txt (if the file doesn’t exist, it will be created; if it exists, then it will be overwritten).

(Notice how the file names are displayed. Though you can't see it in this screenshot, it displays them all, including the newly created file.)

A quick command, type ctcompl.txt, will allow you to verify the contents of the file. One thing I should point out is that my original naming convention for the separate chapter files is what allowed the text to be built in the correct order. It's something to keep in mind when building any sets of corpora.

 

The R commands

I'm assuming you already have R installed. If not, or for instructions and help with R, go to the R Project download page. This is the development environment I'm using. During the bootcamp, Stefan constantly warned us to type our commands in our favorite (R-friendly) text editor so that we would not mistakenly overwrite a vector that took a number of steps to create. It's really good advice. Though I like Notepad++ for most of my text editing, I use Tinn-R for my R work, probably because that's what Stefan had us use in class (I'm a creature of building on familiarity when it comes to learning…).

Now, if you're like me, one of the first things you'll have to get over is what a variable is called in R: it's called a vector. I'm sure someone (probably Stefan) knows why. But for my peace of mind, I still call it a variable. Declaring or creating a vector is very easy; you just type in the name you want (there are restrictions), then pipe to (send to) it whatever information you want it to contain.

Okay; let’s start. First of all we need to tell R what text we want to use. And not only that, but we need for R to remember it so that we can do things with the text later. This means we have to tell R to read in our file and stuff it into a vector. There are a number of ways of doing it, but Stefan showed us a slick trick for Windows users (sorry Mac fans–does anyone know of the Mac equivalent?):

temple.text<-scan(choose.files(), what="char", sep="\n")

This reads as follows:

  • create a vector (variable) called temple.text (this is just my own naming convention–it helps me to remember this is the complete text for Charlotte Temple)
  • <- is like the Windows Command Prompt's redirect command ">" in that it takes the output of the command and sends it to the temple.text vector, and
  • in this case, the command is scan. This will read the data from a file, which we specify afterwards using the choose.files() command:
  • choose.files() opens up a browse window to allow you to choose your file (or files–it's that cool; I really didn't need to combine all my files after all!). Again, I'm not sure how to do this interactively on the Mac (but see the note just after this list).
    • you could always manually set a path (hardcode it) using the scan command's file argument–for example, scan(file="C:/myTextFile.txt", what="char", sep="\n") (note the forward slashes; a literal backslash would have to be doubled in R, as in "C:\\myTextFile.txt")
  • the what="char" tells R what kind of data the file contains (allowed types are logical, integer, numeric, complex, character, raw and list).
  • the sep="\n" tells R how the data is delimited–in this case, by lines. I believe if I didn't specify it, the data would just be delimited by whitespace.
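As a partial answer to my own Mac question above: base R also has a file.choose() function, which brings up an interactive file prompt on most setups, including the Mac, though it only selects a single file. The same line using it would look like this:

temple.text<-scan(file.choose(), what="char", sep="\n")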

So now that we have the text, we might want to normalize it in some way. R is case sensitive, and so the word “Dog” and “dog” are different words. And though there are many cases where that might be important, for me, I would rather treat them as the same, and so I am going to convert the entire text to lower case:

temple.text<-tolower(temple.text)

This reads as follows (from right to left, or inner to outer):

  • tolower(temple.text) says to take the vector that contains all of our text and convert it all to lowercase, then
  • <- redirects that data (sends/saves it)
  • back into the original vector name we read from, temple.text (overwriting it with the newly made lowercase version)
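A quick, optional sanity check (not something from my notes, just a convenience): head() prints the first few elements of a vector, so you can see that the conversion took.

head(temple.text, 3)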

Next we need to break up the data into words. Again, we are doing something to the complete text (temple.text) but instead of overwriting it like we did with the lowercase conversion, we are going to create a new vector in order to keep track of things.

temple.words.list<-strsplit(temple.text, "\\W+", perl=TRUE)

This reads as follows (again, starting from the innermost, or rightmost, function):

  • strsplit tells R to break up (or split) a string (which is what a bunch of character data is called).
    • In order for it to know what to split, we have to feed it some arguments (specific inputs): the data we want to split, temple.text, and how we want to split it, "\\W+", perl=TRUE (notice that the arguments are separated by commas)
    • the "\\W+" says to split the data at every run of one or more non-word characters–whitespace and punctuation alike. It's a quick way to get words, but note that it also splits contractions (so "don't" becomes "don" and "t") and leaves an empty string wherever a line starts with a non-word character–for this post, I'm not going to clean that up, though it can be done.
  • Then take all of this and save it into something new, temple.words.list (temple.words.list <-)

So now, I’ve slightly lied. I’ve been talking about everything as if it’s a vector. What we’ve created so far is a list, which has a different internal structure than a vector. For brevity’s sake, I’m just going to say that we need to convert the list to a vector to continue to work with it:

temple.words.vector<-unlist(temple.words.list)

Again, starting from the right, we can read it as

  • “Take the list, temple.words.list, and turn it into a vector by unlisting it.
  • Then save that information into a vector called temple.words.vector.”

Right now, temple.words.vector looks like this:

> head(temple.words.vector, 10)
[1] "preface"     "for"         "the"         "perusal"     "of"          "the"         "young"       "and"         "thoughtless"
[10] "of"

This is just giving a list of the words and their positions. So we need to use the table command to work with it in a slightly different fashion to make a frequency list:

temple.freq.list<-table(temple.words.vector)

Though what we have now is a vector, another way to store data is within a table (think Excel: columns and rows). R performs certain kinds of calculations for tables that we don't get with a plain vector. For a very useful site that explains tables (as well as other things R), go visit the R Tutorial by Clarkson College.

To read this though, again, start from the right:

  • We are telling R to take our new vector, temple.words.vector,
  • and to make a table out of it,
  • and save it to temple.freq.list.

What the temple.freq.list looks like now is:

> table(temple.words.vector)
temple.words.vector
                          a      abandon    abandoned       abated
          644         1415            2            6            2
       abbess     abhorred    abilities       abject       abjure
            2            2            4            4            2
         able        abode        about        above       abroad
            4            2           42            6            4
      absence       absent   absolutely     absorbed        abuse
            6            4            2            2            2
       abused        abyss      academy       accent       accept
            2            4            2            8            2

This lists each distinct word with its frequency. (Notice the very first, unlabeled entry: that's the empty string left over from the split, which turns up 644 times.) We can now use this list to sort by frequency rather than alphabetically:

temple.sorted.freq.list<-sort(temple.freq.list, decreasing=TRUE)

This works a lot like what we did with the tolower function: we apply a function to something we already have and save the result. The differences are that here we are sorting rather than converting case, and saving into a new vector instead of overwriting the original.

So,  we are telling R

  • to take the list we just made, temple.freq.list,
  • and sort it descending (using the argument "decreasing=TRUE"),
  • then save all of this into temple.sorted.freq.list

temple.sorted.freq.list now looks like this:

> temple.sorted.freq.list
temple.words.vector
the              to             and              of             her               a               i             she
3529            2522            2421            2291            1726            1415            1300            1137
in              he             was             you              my                            that             his
1036             884             826             768             757             644             622             604
it             but            with             not             for       charlotte              be             had
601             599             588             569             535             525             456             441
said              me              by              as            from              on              is              at
432             431             399             394             394             391             379             362

This looks a lot like what we would expect: function words are typically the most frequent. (And there's that unlabeled empty string again, with its 644 occurrences, sitting between "my" and "that".)
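If that empty string bothers you, one small optional fix (I'm not doing it in this post) is to drop the empty elements right after the unlist() step, before building the table:

temple.words.vector<-temple.words.vector[temple.words.vector != ""]

Everything after that would run exactly the same, just without the blank entry.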

Next we are going to put this into a table format that is more readable than what we see above:

temple.sorted.table<-paste(names(temple.sorted.freq.list), temple.sorted.freq.list, sep="\t")

This is weird looking, I know. But basically, we want to take what we've come up with so far in temple.sorted.freq.list

the              to             and              of             her               a               i             she
3529            2522            2421            2291            1726            1415            1300            1137

and somehow turn the words and frequencies into column-like data.

So we have to build this by first just getting the words themselves, using the names() function:

> names(temple.sorted.freq.list)
[1] "the"             "to"              "and"             "of"              "her"             "a"               "i"
[8] "she"             "in"              "he"              "was"             "you"             "my"              ""
[15] "that"            "his"             "it"              "but"             "with"            "not"             "for"

We then feed that list of names as one argument back into the paste() function, followed by the frequency list itself as another argument, then followed by the last argument, which says to insert a tab ("\t") between them, so that when we open the file up, the values will appear separated by the tab. The paste() function basically takes these separate pieces and puts them together into one string per word:

> paste(names(temple.sorted.freq.list), temple.sorted.freq.list, sep="\t")

[1] "the\t3529"          "to\t2522"           "and\t2421"          "of\t2291"           "her\t1726"          "a\t1415"
[7] "i\t1300"            "she\t1137"          "in\t1036"           "he\t884"            "was\t826"           "you\t768"
[13] "my\t757"            "\t644"              "that\t622"          "his\t604"           "it\t601"            "but\t599"

So what we're doing is building a file to look the way we want it to: a frequency list in columns (rather than the output you saw above).

Saving the data to a file:

cat("Word\tFREQ", temple.sorted.table, file=choose.files(), sep="\n")

  • cat() is a way to output data, like type does within the Command Prompt, except that it concatenates its elements into a character string before outputting them.
  • In this case, R begins the output with the character string "Word\tFREQ"–that is, "Word" and "FREQ" will be separated by a tab in a text editor; they are the column headers.
  • Then R will concatenate the data we have in temple.sorted.table behind the column headers–all into one character string, which will then be saved to a location (the file argument) using
  • the choose.files() argument, where Windows users may browse and create a new file (again, Mac users will need to do something different–instead of choose.files(), you could specify the path and the name you want for the new file, for example: file="myTextFile.txt")

The result looks like this within Notepad++:

 

You could also open the new text file within Excel to make it prettier:
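For reference (and for anyone who just wants to paste the whole thing into Tinn-R), here is the entire session gathered in one place–these are exactly the commands from above, nothing new:

temple.text<-scan(choose.files(), what="char", sep="\n")           # read the text file, one line per element
temple.text<-tolower(temple.text)                                  # normalize to lowercase
temple.words.list<-strsplit(temple.text, "\\W+", perl=TRUE)        # split each line into words
temple.words.vector<-unlist(temple.words.list)                     # flatten the list into one long vector
temple.freq.list<-table(temple.words.vector)                       # count each distinct word
temple.sorted.freq.list<-sort(temple.freq.list, decreasing=TRUE)   # sort by frequency, highest first
temple.sorted.table<-paste(names(temple.sorted.freq.list), temple.sorted.freq.list, sep="\t")  # word<TAB>count strings
cat("Word\tFREQ", temple.sorted.table, file=choose.files(), sep="\n")  # write it all out with a header row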

Using IBM’s Word Cloud Generator on Windows

The following is just a revision of John Laudun's Digital Humanities Blog post, Using IBM's Word Cloud Generator, covering where his instructions would differ for Windows users (basic, but useful, information for command prompt initiates):

And so, perhaps, the first place to begin is finding out how to get to the command line in Windows (XP, Vista, 7):

click on the Start Button, All Programs, Accessories, Command Prompt:

Command Prompt from the Start menu

(or from the Run box, type in cmd and press enter). This will display the command prompt window:

It should take you to your user folder that corresponds to your login ID (this is slightly different in Windows XP). The Windows world uses the backslash key to describe its folder structure.

So the following path C:\Users\Big John can be read as follows:

  • C: is the drive (in this case, the “C drive”)
  • The \ (backslash) separates different levels of folders and files, in this case,  \Users is the Users subfolder
  • followed by  \Big John, the “Big John” subdirectory (folder)
  • The folder names are not case sensitive (at least when navigating) (so “big john” is read the same as “Big John”)

Okay, now you have the Command Prompt window open.

The > is known as the prompt, which is short for “the command line prompt.”

Your prompt is ready to receive instructions. (There’s a lot more to say about the environment in which you now find yourself, but for the sake of getting on with this tutorial we will leave that for another time.)

If you were to paste the code that you copied out of the .bat file we discussed in class and try to run it from where you are, chances are you will get nothing. That is because the prompt can only run things when it knows where they are–much the same applies in the GUI, but Windows and Mac and Linux GUIs do a lot of work behind the scenes to find applications for you. You have two choices: add the application's folder to your PATH (the %PATH% environment variable), or navigate to where the WCG application is and run it from within its directory. (If you were going to use the application a lot, there are some other considerations, but we will leave those for another time–but feel free to ask if you like.)
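If you ever want to try the first option, a quick throwaway way to do it (just a sketch, assuming the Desktop location used below) is to append the folder to the PATH for the current Command Prompt session only:

set PATH=%PATH%;C:\Users\Big John\Desktop\IBM Word Cloud

The change disappears when you close the window, which is fine for experimenting; the rest of this post takes the second route and simply navigates to the folder.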

So to navigate within the Command Prompt, you can use the following commands:

  • your current (working) directory is automatically shown at the prompt, to the left of the cursor
  • type dir to display the contents of the current working directory (this won't show you hidden folders or files; to see those, type dir /a:h, which says show me files with the hidden attribute)
  • type cd followed by a folder name to change the directory you are in
  • type cd .. (that's cd followed by a space followed by two periods) to move "up" a directory
  • to see a list of most of the commands available from the command prompt, you can type help; for help with a particular command, such as dir, you could enter help dir at the prompt

To navigate to the IBM Word Cloud directory, we are going to pretend it is on your Desktop:

C:\Users\Big John> cd desktop\IBM Word Cloud

This means, change directory (cd) to subfolder called desktop, and within that one, go to another subfolder called IBM Word Cloud. You can always change one directory at a time and do a dir to see what folders are in there in case you don’t remember:

C:\Users\Big John> cd desktop

C:\Users\Big John\Desktop> cd IBM Word Cloud

Typically, most Terminal windows will start you in your user home directory. My best advice for the sake of this current activity is to use Windows Explorer or the Mac Finder and move the unzipped folder containing the WCG, which is named “IBM Word Cloud” in my case, to the Desktop or to your Documents folder. Some place easy to get to.

From here, you should be able to run the bat file for testing:

But if you want to paste in the script from within the bat file (right-click it in Explorer and open it with your favorite text editor), then copy the text of the script as you normally would. To paste the text inside the Command Prompt window, click on the C:\ icon in the upper left corner:

Click on Edit within the dropdown menu, then click Paste:

Keep watching the Digital Humanities Seminar blog for more information.

Pen input is retro?

I was both elated and miffed by an article today on PC World's site that says of HTC's new Android tablet, the Flyer, that "HTC reaches back to yesteryear by including a pen stylus with the Flyer." I'm miffed because of the inherent bias that typically comes with the use of a stylus in the computing world (don't get me wrong; the reviewer loved this function on the Flyer, but the article makes it seem like a retro feature). Finger-based computing is great for many things, such as navigation, but it is not the end-all of inputs. True, I like to type most of my notes, but I also like to be able to scribble them–especially diagrams or directions. I also like annotation. Annotation is the lifeblood of scholars (for professors as well as students). Using a keyboard, mouse, or finger to highlight or insert comments on an ebook, PDF article, or text document is just not as good as being able to use a stylus. The pen is much more precise and quicker.

I’ve been waiting a long time for word on inclusion of a stylus with the new phone-centric tablets. After all, that is one of the key features of  the more mature laptop-based tablets. I’ve seen a couple of manufacturers include a stylus for selection and even a few for drawing and some handwriting-within-a-window uses (reminds me of Apple’s Newton) but nothing (again, on the phone-based tablets) that lets me use a pen the way I use a pen.  So bravo, HTC! I hope other manufacturers follow.

NYT article: The Dirty Little Secrets of Search

It's not like search engine optimization uses and abuses haven't been in the news before, but this is a good reminder that our World Net View comes filtered, not only by what we ourselves do, but by the whims of one company. Don't get me wrong; I like Google and really do believe their shtick about doing no evil (unlike Apple). However, they are a business. And a business's continued existence relies on earning money.

That kind of control makes me nervous.

However, what this article did was to highlight a secret even dirtier than the “Black Hat” tactics: search engines require secrets. Right after my initial response to the article (thumping my desk, saying this is a problem in need of an open source solution!), I realized that an open-sourced search engine, whose search and ranking algorithms were publicly known, would not only be open to abuse out of the gate, but would encourage it. And so the searches would fail.

And so must we live with One Engine to seek them all and in the darkness find them, trusting that market competition will keep it from doing evil? Or should this become a government-operated function–like a utility or, even more so, like the Mint? After all, part of the Mint's function is to prevent counterfeiting. But do we really want the "success" and "efficiency" of government control involved? It seems that there would be as much temptation for abuse as there is in the private world.

What then? Does anyone know of an open source plan that could prevent search algorithm abuse?

The article can be found here (and speaking of link manipulation, note that this link includes a variable being passed to the NYT that gives credit to the site from which I discovered the article. This is a legitimate way to give credit to the people who find and share this information (after all, I didn't originally go to NYT's site); but just in case it bothers you, here is the direct link).

Looking it up…

I couldn't help but notice in my last post that Firefox's spell checker didn't recognize the word "Obama." So, just out of curiosity, I googled around for information on who maintains its dictionary. All I could find was this blurb (emphasis mine):

The contents of dictionaries are not maintained by Mozilla. In some cases they are not maintained by anyone. If you think the contents of a dictionary should be updated, you might be able to find out who maintains it by looking in its README file (if it has one). You can find the README file by using a zip tool or jar tool to open the dictionary’s installation file. .xpi files can be opened in windows explorer by first changing the .xpi extension to .zip

Well, I couldn't find the README file, so it may be that I'm just blind. However, I am concerned about that bold statement. Not that I'm a prescriptivist when it comes to language, but if you are going to offer a reference tool, should it not reflect the most current (accepted) data? Of course, not EVERY nonce word or even proper name should be included; however, a sitting president's name ought to have more clout, as it is bound to be relevant to many people. That lack of relevant data defeats the purpose of the tool and its ability to help me, the user. My pet turtle's name–okay; I get it. And yes, I know I can add words to my personal dictionary, but why should the current president's last name be relegated to my personal dictionary? After all, I don't think the president would think it's my responsibility. This falls under the category of a good idea, but lack of follow-thru… (and yes, FF did catch that one…)

UPDATE: I DID find, at least I think, who is responsible for the dictionary FF uses: http://hunspell.sourceforge.net/. I was immediately happy to learn that they support more than one product:

“Hunspell is the spell checker of OpenOffice.org and Mozilla Firefox 3 & Thunderbird, Google Chrome, and it is also used by proprietary softwares, like Mac OS X, memoQ, Opera and SDL Trados.”

After all, dictionaries should be a more communally oriented effort. I still need to search through their forums to see about the maintenance business.

What’s in a word?

Apparently that depends on whether you spoke the word or heard it.

Dan Amira posted this word cloud comparison on New York Magazine’s website of President Obama’s State of the Union speech vs. one created by NPR, based on a request of its listeners to describe the speech in three words. It would be funny if it weren’t so sad. But it does highlight the power of word clouds. I hope more people and events are subjected to this type of tool. But the usual precautions should be taken when citing such information’s accuracy and context…

Test post from Android device

Trying out a post from the WordPress app for Android. Seems okay if you need to edit something remotely, but let's face it: a real keyboard would be nice. Still, for short items, it works fine.

I am now trying out Android's dictation application. What do you know? It worked the first time!

The menu tagging and other editing features and menu options are pretty slick too. There are also the standard text editor features such as bold and italics.

Way to go, Android.