This summer, I am continuing my Presidential Fellows research in the Digital Humanities through the REED London project. REED London is a long-running project compiling primary sources on the many forms of theatre in Elizabethan and Stuart England. I’ve chosen to narrow my focus to the so-called Magnificent Marriage of Princess Elizabeth and Frederick V, Elector Palatine of the Rhine, in 1613, with particular attention to the history of place. The wedding of Elizabeth and Frederick was a massive cultural phenomenon surrounded by weeks of performance and festivity. Using the REED London texts and other primary sources, I’ve been attempting to map the places involved in these performances.
So far this summer, I’ve focused primarily on using ArcGIS to fit historical maps to modern satellite imagery, contextualizing historical places within their locations in modern London. This has been challenging, as the city has changed enormously over the past centuries due to fires, bombings, industrialization, and urbanization. It’s also difficult to find high-quality maps of London that aren’t strangely stitched together, so I’ve familiarized myself with Adobe Photoshop to crop and clean up maps and make them more usable (attached are the Ogilby-Morgan map of London as I found it online, and my edited and georectified version of the map). I’ve also begun reading letters from witnesses to the wedding festivities, such as the prolific correspondent John Chamberlain.
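Conceptually, georectifying a map comes down to fitting a transform from the scanned map’s pixel coordinates to real-world coordinates using matched control points. ArcGIS handles this through its georeferencing tools, but the simplest case, an affine fit by least squares, can be sketched in a few lines of Python. The control points below are invented for illustration, not taken from the actual Ogilby-Morgan work:

```python
import numpy as np

# Ground control points: (pixel_x, pixel_y) on the scanned historical map,
# matched to (longitude, latitude) of the same landmark today.
# These numbers are made up for illustration.
pixels = np.array([[120, 340], [980, 310], [540, 900], [860, 720]], float)
geo = np.array([[-0.0976, 51.5132], [-0.0804, 51.5138],
                [-0.0892, 51.5020], [-0.0828, 51.5056]], float)

# Fit an affine transform geo = A.T @ [px, py, 1] by least squares,
# the simplest of the transformation models ArcGIS offers.
X = np.hstack([pixels, np.ones((len(pixels), 1))])  # shape (n, 3)
A, *_ = np.linalg.lstsq(X, geo, rcond=None)         # shape (3, 2)

def pixel_to_geo(px, py):
    """Map a pixel on the historical map to an approximate lon/lat."""
    return np.array([px, py, 1.0]) @ A

lon, lat = pixel_to_geo(500, 500)
```

With more control points than the six affine parameters require, the least-squares fit also gives residuals, which is one way to judge how well a distorted early-modern map can be made to line up with a modern one.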
In the digital humanities department at Bucknell University, we are making the most of our time during this pandemic by continuing to pursue our research endeavors remotely. This summer, I am working with Dr. Katherine Faull, Dr. Diane Jakacki, and Justin Schaumberger on the Moravian Lives Project.
My particular focus in this project is using the CWRC-Writer program to mark up emotion within memoirs from the Fulneck Moravian settlement in West Yorkshire, England. As shown in the following screenshot, tagged emotions are highlighted in pink so they can be easily picked out in the memoir.
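Under the hood this kind of markup is XML, so the tagged spans can also be pulled out programmatically. Here is a minimal sketch in Python; the `<emotion>` element and its `type` attribute are hypothetical stand-ins for illustration, not the actual CWRC-Writer schema, and the memoir fragment is invented:

```python
import xml.etree.ElementTree as ET

# A tiny invented fragment in the spirit of the tagged memoirs.
memoir = """<p>On arriving at Fulneck I felt
<emotion type="joy">great happiness</emotion> among the congregation,
though I still carried <emotion type="grief">a heavy sorrow</emotion>.</p>"""

root = ET.fromstring(memoir)
# Collect each tagged span with its emotion type -- the same spans the
# pink highlighting lets a reader pick out on screen.
tagged = [(e.get("type"), e.text) for e in root.iter("emotion")]
```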
I look forward to sharing my experiences with this project throughout the summer!
Spending the spring semester working on the TwitLit project was, for me, an engaging and hands-on first experience with the Digital Humanities (DH). As a research assistant, I worked with another student assistant, Meg Coyle, to document and record data on tweets in 2019 related to the writing community. Christian Howard-Sukhil, the head of the project and the DH Postdoctoral Fellow at the university, trained us to use Python scripts developed for scraping Twitter, as well as Twarc tools developed through Documenting the Now (DocNow), in order to collect tweets (and accompanying metadata) that contained different writing-related hashtags. Using these scripts, we can record the number of tweets that contained a particular hashtag within a given time period, as well as further information on each individual tweet, such as the timestamp or the number of likes and retweets.
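As a rough illustration of that counting step, here is a sketch that tallies hashtags and pulls per-tweet metadata from tweets in the shape of Twitter’s v1.1 JSON (the format Twarc returned at the time). The tweets themselves are mocked up, and only the fields used below are shown:

```python
from collections import Counter

# Two mocked-up tweets following the field names of Twitter's v1.1 JSON
# (created_at, favorite_count, retweet_count, entities.hashtags).
tweets = [
    {"created_at": "Mon Mar 02 14:00:00 +0000 2020", "favorite_count": 5,
     "retweet_count": 2,
     "entities": {"hashtags": [{"text": "amwriting"}]}},
    {"created_at": "Tue Mar 03 09:30:00 +0000 2020", "favorite_count": 1,
     "retweet_count": 0,
     "entities": {"hashtags": [{"text": "amwriting"},
                               {"text": "WritingCommunity"}]}},
]

# Tally how many collected tweets mention each hashtag...
counts = Counter(tag["text"].lower()
                 for t in tweets
                 for tag in t["entities"]["hashtags"])

# ...and pull out the per-tweet metadata we recorded:
# timestamp, likes, and retweets.
metadata = [(t["created_at"], t["favorite_count"], t["retweet_count"])
            for t in tweets]
```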
From here, we are looking to expand the interpretation of this data into new avenues and to find ways to shed more light on the sizable writing community on Twitter. For example, there are currently line graphs on the TwitLit website that display the growth of some of these hashtags, with analysis of what this data could mean. We have also speculated on ideas such as displaying viral tweets from the Twitter writing community on the website, in order to show what is drawing the most attention from inside and outside the community. One particularly exciting idea, which we unfortunately are unable to undertake without physically being at the university, is the geographic mapping of these tweets. It is possible to record the “geo-tag” of individual tweets, and through this we would be able to map where in the world the Twitter writing community is located, and then interpret this data and ask why tweets are concentrated in one place or another. Throughout the summer we plan to continue thinking of interesting ways to display the data we’ve collected and to keep the DH community at Bucknell updated through these blogs.
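A sketch of what that geo-tag extraction might look like, using the two location fields Twitter’s v1.1 JSON exposes: an exact GeoJSON point and a coarser “place.” The tweets below are mocked up, and, as in real data, most carry no location at all:

```python
# Mocked-up tweets showing the two kinds of location data in
# Twitter's v1.1 JSON: "coordinates" (a GeoJSON Point, lon/lat order)
# and "place" (a named region with a country).
tweets = [
    {"id": 1,
     "coordinates": {"type": "Point", "coordinates": [-1.2577, 51.7520]},
     "place": None},
    {"id": 2, "coordinates": None,
     "place": {"country": "United States", "full_name": "Austin, TX"}},
    {"id": 3, "coordinates": None, "place": None},  # most tweets: no geo-tag
]

def extract_point(tweet):
    """Return (lon, lat) for an exactly geo-tagged tweet, else None."""
    geo = tweet.get("coordinates")
    if geo and geo.get("type") == "Point":
        return tuple(geo["coordinates"])
    return None

# Points could be plotted on a world map; countries give a coarser tally.
points = [p for p in map(extract_point, tweets) if p is not None]
countries = [t["place"]["country"] for t in tweets if t.get("place")]
```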
Since the beginning of 2020, it has been an awesome experience working on Project Twitter Literature (“TwitLit”) in an effort to break down Twitter literature over the course of the past couple of years. I was a stranger to the technique of “scraping” or “scrubbing” tweets, but was immediately engaged with the idea when I heard about the opportunity. I have always had a love for writing, and in this new age where social media is everyone’s outlet for self-expression, Dr. Christian Howard-Sukhil, who heads the project, helped me understand the shift in literature in this new media era.
In particular, I have worked to scrape over 30 hashtags, some taking hours to process, while others only a matter of minutes. Once COVID-19 became a factor and our campus had to turn remote, our team continued to meet once a week in an effort to finish the job. Despite technical difficulties and the distance between us, it was awesome to see how much work we accomplished. I was able to scrape all of the hashtags and upload each to its own file on Google Drive, while Jimmy Pronchick, the other student research assistant on the team, hydrated and counted each tweet, uploading the finished product to the Drive. It was a long process, because if at any point my laptop shut down or lost Wi-Fi for a second, I would have to re-scrape the term to ensure the data was accurate. We followed the scraping process as outlined on the project website; the scraping script is freely available for download on GitHub. In the future, we will begin to interpret the data. On the TwitLit website, Christian has used line graphs to illustrate the growth of literature hashtags. She breaks them down into two categories, “Writing Community” and “Fiction and Poetry.” This allows us to see the difference in what individuals are using as a platform to reach a greater audience. We will continue to do this for new data and try to think of creative ways to share it.
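For the counting side, a rough sketch: each hashtag’s scrape can be stored as a file of tweet IDs (the “dehydrated” form Twarc works with before hydration restores the full tweets), and the per-hashtag totals come from counting unique IDs. The filenames and IDs below are invented:

```python
import tempfile
from pathlib import Path

# One file of tweet IDs per scraped hashtag, one ID per line -- the
# dehydrated form that Twarc can later rehydrate into full tweets.
tmp = Path(tempfile.mkdtemp())
(tmp / "amwriting.txt").write_text("123\n456\n789\n123\n")
(tmp / "poetrycommunity.txt").write_text("111\n222\n")

counts = {}
for id_file in sorted(tmp.glob("*.txt")):
    # Deduplicate, since an interrupted scrape that has to be restarted
    # can record the same tweet twice.
    ids = {line for line in id_file.read_text().splitlines() if line}
    counts[id_file.stem] = len(ids)
```

Totals like these per hashtag, over time, are what feed the line graphs on the site.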