Child pages
  • WS3 Need Statements

This wiki space contains archival documentation of Project Bamboo, April 2008 - March 2013.


From: Frank Baron
Subject: Re: stories wg bamboo community Request for Workshop Three Attendees
Date: January 5, 2009 2:21:44 PM GMT-07:00

We have created a digital library for the writings of Alexander von Humboldt (1769-1859).
We are trying to cover his publications of 29 volumes about the Americas.
We completed the fourteen English volumes, the only ones available in translation.
We have a system that allows the user to search through all the volumes and to continue searching across them.
If you know of other systems that can do this, we would like to know.

We are trying to develop the system in four languages, aligned by paragraph. We want to recreate the range of information available on the environment at every point in Humboldt's five-year travels (Tenerife, Venezuela, Cuba, Colombia, Ecuador, Peru, and Mexico).
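The paragraph-aligned design described above could work roughly as follows: each edition shares paragraph identifiers, so a hit in one language surfaces the same passage in every other. This is only an illustrative sketch; the data structures and sample texts are invented, not the project's actual system.

```python
# Hypothetical sketch: editions keyed by language, paragraphs keyed by
# a shared paragraph ID so a search hit in one language can surface the
# aligned passage in the others. All content here is illustrative.
editions = {
    "en": {"p001": "We left the port in June 1799 ...",
           "p002": "The island's volcanic peak dominated the horizon ..."},
    "de": {"p001": "Wir verliessen den Hafen im Juni 1799 ...",
           "p002": "Der vulkanische Gipfel der Insel beherrschte den Horizont ..."},
}

def search_aligned(editions, lang, term):
    """Find paragraphs containing `term` in one language and return
    the aligned paragraphs from every edition."""
    hits = [pid for pid, text in editions[lang].items()
            if term.lower() in text.lower()]
    return {pid: {l: paras.get(pid) for l, paras in editions.items()}
            for pid in hits}

results = search_aligned(editions, "en", "volcanic")
```

A search for "volcanic" in the English edition thus also retrieves the corresponding German paragraph, which is the kind of cross-corpus movement the message asks for.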

We could use financial support. We have had very little over the past eight years while working toward a goal that Bamboo claims to have set as a future possibility (moving between text corpora).


Frank Baron

From: Judith Pearce
Subject: stories wg Collaborative development of a Catalogue Raisonné
Date: January 5, 2009 3:51:24 PM GMT-07:00


To provide technology support for the collaborative development of a
Catalogue Raisonné.


A Catalogue Raisonné is "a monograph giving a comprehensive catalogue
of artworks by an artist". The required elements are well described in
the Wikipedia article from which this definition is taken [Catalogue
Raisonné (2008, December 14). In Wikipedia, The Free Encyclopedia.
Retrieved 21:03, January 5, 2009, from].

In the past, the development of such a catalogue has taken years of a
researcher's effort, and its publication is a major undertaking at the end
of the process. Once published, it becomes an essential tool for other
researchers. As the Wikipedia entry says, "It contains most of the
information a researcher will need up to the year the catalogue
raisonné was printed".


The Internet provides unprecedented new ways of compiling and
publishing this information as a dynamic, collaborative, ongoing
process. A seed catalogue containing known information, perhaps from
an already well-documented collection or exhibition catalogue can
provide a model for the entries and can form the basis for gathering
new works and information. One of the important characteristics of
this method of research is to make the seed catalogue discoverable on
the Internet. This can be a potent way of attracting potential
collaborators and also of enabling non-researchers - dealers, buyers,
sellers and private collectors - to discover the project and
contribute information on works they own or that have passed through
their hands. Some may become researchers in their own right and
contribute directly to the growing catalogue. Others may provide
information via more traditional methods - letters, email, telephone
discussions, visits. Support for a creative commons approach might be
the default for this tool. However, there would need to be a minimum
level of access control for inviting contributors and approving
contributions. At some stage in the process, a version of the
Catalogue Raisonné might still be published in the form of a high
quality printed monograph.


My particular area of research interest is ceramics, a medium where
art, craft and social history merge and the Catalogue Raisonné is not
yet necessarily a commonly deployed form of research output. One issue
is that, while each hand-made studio pottery work is unique, it may
form one of a nearly identical line. A second issue is that the
provenance of ceramic works is more difficult to track than with other
art works. A third issue is that there may be very little published
information about the artist or pottery. Research in this area, once
the capture of oral histories is no longer an option, may depend on
study of the works as found objects.

Voulkos & Co (retrieved January 5, 2009 from is the only example I've yet found
of a website dedicated to the development of a formal Catalogue
Raisonné for a particular ceramic artist. It has a quest section with
two parts: "LOST is comprised of known works by Peter Voulkos for
which the whereabouts are unknown and FOUND features pieces for which
the disposition is currently known along with information about them."
(The fact that so much of Voulkos' output is known is indicative of
his status as an artist.) The data model includes many of the elements
described in the Wikipedia article and may well be supported by an
underlying database. The website invites the contribution of
information about lost works but does not invite direct collaborative
contribution to the catalogue. (There may be support for collaborative
development of the catalogue behind the scenes.)

Remued (retrieved January 5, 2009 from is an
example of a website that is building a catalogue for a line of art
pottery. The aim is not to identify the location of known works but to
provide a register of shapes and shape numbers. The researcher is an
independent scholar drawing his information from the world of auction
catalogues and online auctions; and tapping into the collective
knowledge of private collectors. Like Voulkos & Co it invites
contributions but does not support direct collaborative contribution
to the catalogue. In fact, the technology is relatively primitive -
hand-constructed html pages.

As well as providing support for funded research, at grass-roots
level, there is a need for a simple open source cataloguing tool that
could be deployed as part of a social networking website for use as a
way of gathering the collective intelligence of enthusiasts not
necessarily having any formal qualification as researchers. Oddly
enough, blogging tools have some of the capabilities needed. My third
example is a demonstrator I mocked up recently for a tiny community of
collectors interested in the work of Gundars Lusis (Gunda: Australian
Studio Pottery by Gundars Lusis, retrieved January 5, 2009 from However, this example has had to
work against the blogging meme to achieve the required navigation
capabilities and has no underlying support for structured metadata,
the generation of thumbnail pages or linking to related information.

From: Jim Muehlenberg
Subject: Re: stories wg Reminder - tasks for Monday
Date: December 23, 2008 12:29:36 PM GMT-07:00

Folks - here is one proposed topic paragraph (sort of long), based on an
interview with our interim director of the Center for the Humanities at
UW-Madison. Call it virtual conferences and/or virtual collaboration:

With the economic situation (and environmental concerns for our carbon
footprints), people will have to travel less to professional conferences.
What kinds of facilities can Bamboo create or offer to substitute for
travel? We need to try to get ahead of the game on this, especially for
international conferences. We need to find solutions and models that work.
How can you get all the informal aspects of a conference, not only the
formal presentations, papers, and Q&A sessions? This can also apply to
research work groups - as in her field of 19th century poetry - getting
scholars together for workshops, digital or virtual - we need to find out
what works here. (Example of MLA discussion groups, but highly
interactive.) The emphasis is on collaboration and collaborative research;
we need specific technologies for different collaborative styles and
preferences (workshops, conferences, etc.), not a universal, one-size-fits-all
solution. Accompanying this should be some way to access all the supporting
and resulting content which is the subject and outcome of the collaboration.

Sorry I didn't get to this earlier! I can dredge up some other examples if
needed from various notes of campus workshops. thanks JDM

From: Lisa Wymore
Subject: stories wg Requested Paragraph - sorry it is late
Date: January 7, 2009 2:10:45 PM GMT-07:00

Workshop 3: Stories Session
One paragraph description regarding "shared technology services"

What can Bamboo address regarding my project?
Description of Collaboration: I work with 3 Tele-Immersion (TI) labs (UC Berkeley; University of Illinois, Urbana-Champaign; and UC Davis). We are a collection of computer engineers/scientists and dance artists working on the concept of distributed co-presence and immersion-based creativity. The resources we are developing to share include compatible Tele-Immersion labs at numerous sites and the Tele-Immersion data collected during experiments and performances.

The needs of this project are twofold, and both require technology. One: developing the actual physical TI laboratories, which are inherently about live communication between numerous users. Two: sharing the visual data and other information collected during experiments and performances, which other artists and computer engineers can utilize to make new things, catalog, analyze, etc.

This project has less to do with sharing existing collections and tools and more to do with making new artistic and scientific artifacts (and discoveries around the creative process) using a new technology. What is missing from this project on a "shared" level are the following: more TI labs, and connections with other artists, humanists, social scientists, etc. who are also interested in digital performance, the digital body, TI, and art making using distributed VR technology. Basically, letting other people know about the data being collected and the technology being developed, and finding ways to collaborate with others interested in this new sphere of performance and virtual co-presence.

Lisa Wymore
Assistant Professor
Department of Theater, Dance, and Performance Studies

From: Stan Ruecker, Geoffrey Rockwell, Maureen Engel
Subject: Re: stories wg bamboo community Request for Workshop Three Attendees
Date: January 6, 2009 4:16:13 PM GMT-07:00

Bamboo: a History Mechanism

  • WHAT: a short description of a small portion/subset of your work around which you would like technology support and any "need to know" relationships between this subset of work and your larger research/teaching efforts,
  • HOW: the tasks you currently perform to achieve this portion of your work and any required order of these tasks (i.e. "x" must be done before "y")
  • HELPS: any tools or collaborations you currently use to help complete your tasks, and
  • NEED: a description of your specific technology need or your best guess at the technology support you want

What I would like to have readily available is a method for tracking the history of user activities within any given system. If I have a customizable way to store whatever someone does that changes state, ideally grouped by minor and major state changes, I can use it for three things:
providing the user with an un-do/re-do, or else a forward/back mechanism;
providing the user with a way to share state with other users;
providing the researcher with a log mechanism.

Since we do usability studies of our prototypes, the last option is particularly useful to us. I have had one of these components built for one of our prototypes. It worked very well for logging but was a custom component. It would be useful to have a robust generic one that could be plugged into new projects.

From: Phillip Thurtle
Subject: stories wg Project Needs: University of Washington, Extended Development Project
Date: January 5, 2009 1:40:32 PM GMT-07:00

The Extended Development Project: An Online Database on the History of Evolutionary and Developmental Biology

University of Washington

Phillip Thurtle, Associate Professor, Comparative History of Ideas and History

I am creating a database on the history of evolutionary and developmental biology. I am currently using two tools. The first is Zotero, an in-the-browser, reference management system. We use Zotero to keep track of key primary sources (scientific papers, images, video clips, etc.) and to add annotations to our files. We will then use Omeka, an interactive, online, environment to share the files. We chose Omeka for its flexibility of functionality. Specifically, it allows for the creation of complex narratives from the database materials as online exhibits (incorporating media rich files), allows for folksonomy and metadata from users, allows for the collection of electronic artifacts online, and has full search functions for user access of database materials.

My technological wish list would be for a patch that would allow Zotero and Omeka to work together. This way we could easily move between updating Zotero and Omeka. The other alternative would be to have an all-in-one working environment (file management, data collection, notation, video streaming, and tagging) that can also act as a robust publishing environment. The specific forms of expertise that I need at this point are a technologist to help me install and manipulate Omeka (hopefully she could help write code to bridge Omeka with Zotero). Also, I could use an information architect and a designer to help me design an attractive and workable interface. Finally, I could use some legal support to help secure permission for articles, images, and film and video clips.
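The Zotero-to-Omeka "patch" wished for above would, at minimum, need a field mapping between the two systems' record shapes. The sketch below shows only that mapping step; the field names on both sides are illustrative placeholders, not the actual Zotero export format or the actual Omeka API schema, and a real bridge would then push the result to Omeka over its API.

```python
# Illustrative sketch: map a Zotero-style exported record to an
# Omeka-style item payload. Field names are hypothetical placeholders.
def zotero_to_omeka(zotero_item):
    """Reshape a dict resembling a Zotero export entry into a dict
    resembling an Omeka item; real bridging would POST this to Omeka."""
    return {
        "title": zotero_item.get("title", "Untitled"),
        "creator": "; ".join(zotero_item.get("creators", [])),
        "date": zotero_item.get("date", ""),
        "description": zotero_item.get("abstractNote", ""),
        "tags": zotero_item.get("tags", []),
    }

record = {"title": "On the Origin of Species",
          "creators": ["Charles Darwin"],
          "date": "1859",
          "tags": ["evolution"]}
item = zotero_to_omeka(record)
```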


From: Patrick Neher
Subject: Re: stories wg Reminder - tasks for Monday
Date: December 20, 2008 8:49:10 AM GMT-07:00

Many of us in the Arts would like to see a system by which we can search libraries, collections, and other arts-media assets deeply, referring to meta-data that is designed by a cross-disciplinary group of scholars. To be able to search a poetry reading by, not only the usual key words such as date of reading, who read it, who is the author, etc., but also by more "arts-oriented" keys like "rhythm, tempo, rhyme structure, length (time), emotional content, subject matter, key (music), inflection, prosody" etc. To be able to search music, lighting designs, dance compositions (Labanotation?), visual arts, static arts (sculpture, paintings) in this way, and make use of these connections, would enhance performance, research, re-construction, arranging, composition, and teaching. In other words, there needs to be a system by which curators, artists, performers, composers, and all in the arts and humanities, can participate in allowing their unique collections to be connected via cross-media and cross-discipline data search and manipulations. I believe Bamboo could be the perfect consortium to develop the WAY to search these "obscure" types of meta-data within any participant collection, to develop the meta-data itself, and to create, what would be unique, sensible ACCESS to this data connection tool.
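The cross-media search described above amounts to querying heterogeneous records by shared arts-oriented metadata fields. The following sketch is purely illustrative; the field names and sample records are invented, not an agreed metadata schema.

```python
# Hypothetical cross-media catalog: records of different media types
# share arts-oriented metadata keys such as "tempo". All invented.
catalog = [
    {"type": "poetry reading", "author": "Dickinson", "tempo": "slow",
     "rhyme_structure": "slant", "length_sec": 95},
    {"type": "music", "composer": "Satie", "tempo": "slow",
     "key": "D major", "length_sec": 210},
    {"type": "dance", "choreographer": "Graham", "tempo": "fast",
     "length_sec": 480},
]

def search(catalog, **criteria):
    """Return every asset, regardless of medium, matching all the
    given metadata criteria."""
    return [rec for rec in catalog
            if all(rec.get(k) == v for k, v in criteria.items())]

slow_works = search(catalog, tempo="slow")
```

A query on `tempo="slow"` returns both a poetry reading and a piece of music, which is the cross-discipline connection the message envisions.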

Patrick Neher,
Professor of Music
Coordinator of Strings
School of Music
University of Arizona
Tucson, Arizona 85721 USA

From: Jeremy Frumkin
Subject: RE: bamboo community Request for Workshop Three Attendees
Date: January 8, 2009 1:30:17 PM GMT-07:00

An open platform on which a variety of storage, preservation, access,
and use services could be built and seamlessly interact. As the
broader web moves towards cloud-level computing, one approach that would
allow researchers, and the technologists supporting them, to scale
their efforts and provide next-generation services is to build
upon a common network-level platform that would provide open and
consistent access to data, resources, and core services. If this
platform existed, it would not only reduce wheel-reinventing
efforts, but it would also enable new and interesting research
that could take advantage of data and resources at a much greater scale.

From: Geoffrey Rockwell, Stan Ruecker, Maureen Engel
Subject: Re: stories wg bamboo community Request for Workshop Three Attendees
Date: January 8, 2009 6:49:11 PM GMT-07:00

What: Quick and Flexible Research Networks

I will often try to set up a working group around an idea or project with a few grad students and colleagues. The colleagues might be at other institutions. These working groups are for new ideas for which we don't have grant funding to travel or pay for infrastructure to be set up. We need a mix of ways to communicate, share files, and collaborate on writing grants. We need to be able to meet online regularly. We need to be able to set these up quickly and to add communication tools as we need them. We need to be able to do this with a minimum of bureaucracy. We need to be able to close them down and archive stuff. Sometimes when we move we need to actually move the hair ball to another university.

How: Iterative scaling of a group research project

i. I meet Jean at a conference and we decide (over tea) that it would be neat to try a project together on X. She mentions she has a couple grad students who might be interested and I have a colleague.

ii. I get back and remembering the neat idea I e-mail her.

iii. We agree to look for funding and to do that we need to flesh out the idea. We bring on board the grad students. I ask for a discussion list to be set up. She creates a Google Document.

iv. We start iteratively writing the idea out.

v. We agree that we need to talk. Jean and I Skype, but then we can't include others. We can't afford to fly to meet so we decide to try some conferencing technology the campus is pushing. It is Elluminate this month. We book the room, get the account, figure it out. We use some online meeting organizer to find a common time. On the date I find that the local setup has been changed so Elluminate doesn't work because the Java libraries aren't right. My grad student e-mails the others while I try to find a technician. We find one and they fix the problem. The meeting is now late. But it sort of works.

vi. We agree that we need to give the project a name and a web presence especially since we got a little bit of money to run a half-day conference. I apply for an account on a research web server so my grad student can put up a simple web site.

Helps: Now I will do a number of things. I will apply for a discussion list (it takes about two days to get permission and have one set up). I might apply to let people (at other universities) onto a wiki I have access to if we need shared writing space. Increasingly I have used Google Docs for shared documents. I have tried setting up a Ning group, and that didn't work. I have also used Skype for one-to-one voice conferencing and Elluminate to try to do group-to-group conferencing. Skype works, but doesn't scale to groups. Elluminate with echo-canceling microphones sort of worked, but took a lot of support from staff. Access grid technology was beyond me.

Need: What I think I need is integration, better conferencing, and lower bureaucracy. I would like something a bit like Ning, but with Google Docs features and Skype-like conferencing. I would also like it to be simpler with the ability to turn things on as we go. (And to have no ads.)

Audiovisual Preservation issues and the Sound Directions Project
From: Alan Burdette, Indiana University
Date: 1/9/2009

Sound archives have reached a critical point in their history marked by the simultaneous rapid deterioration of unique original materials, the development of powerful new digital technologies, and the consequent decline of analog formats and media. Motivated by these concerns, in 2005 the Indiana University Archives of Traditional Music and the Archive of World Music at Harvard University began Phase 1 of Sound Directions: Digital Preservation and Access for Global Audio Heritage - a joint technical archiving project with funding from the National Endowment for the Humanities. One major goal of the project was to test emerging standards and develop best practices for audio preservation.

The project created a number of software tools that may be placed into service, including the Harvard Sound Directions Toolkit, a suite of forty open-source, scriptable, command-line tools that streamline workflow, reduce labor costs, and reduce the potential for human error in the creation of preservation metadata and in the encompassing preservation package. To aid selection for preservation, Indiana University developed the Field Audio Collection Evaluation Tool (FACET), a point-based, open-source software tool for ranking field collections by the level of deterioration they exhibit and the amount of risk they carry. Indiana also developed the Audio Technical Metadata Collector (ATMC) software for collecting and storing technical and digital provenance metadata. Harvard also produced Audio Object Manager for audio object metadata creation and Audio Processing XML Editor (APXE) for collection of digital provenance metadata; these will be released as open source after further development.
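FACET's point-based approach to ranking collections for preservation can be illustrated with a small sketch. To be clear, the risk factors and point values below are invented for illustration; they are not FACET's actual scoring rubric.

```python
# Invented point values illustrating the point-based ranking idea
# behind a tool like FACET; not FACET's actual rubric.
RISK_POINTS = {"sticky_shed": 5, "vinegar_syndrome": 4,
               "obsolete_format": 3, "mold": 4, "unique_copy": 2}

def risk_score(collection):
    """Sum points for each risk factor a field collection exhibits."""
    return sum(RISK_POINTS[f] for f in collection["factors"])

collections = [
    {"name": "A", "factors": ["sticky_shed", "unique_copy"]},
    {"name": "B", "factors": ["obsolete_format"]},
    {"name": "C", "factors": ["mold", "vinegar_syndrome", "unique_copy"]},
]

# Rank collections so the most at-risk are transferred first.
ranked = sorted(collections, key=risk_score, reverse=True)
```

Ranking by total points lets an archive with limited transfer capacity queue its most deteriorated, highest-risk holdings first.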

On a broad scale, audiovisual preservation is a key, but often overlooked, infrastructure need. Many new digital audio and video projects in the arts and humanities have the term "archive" as part of their description, but few are relying on sustainable digital preservation practices. Without attention to the preservation of audiovisual assets, many significant and irreplaceable documents in innumerable fields will be completely lost in the next few decades. The Sound Directions project presents a model for other archives to use, but there is a critical lack of facilities and funding for the necessary transfer of analog collections across the country. Even at the Archives of Traditional Music—one of the partners in Sound Directions—there is not enough personnel or funding, at the current rate of transfer, to effectively preserve its holdings. One of the options they are exploring at Indiana University is a campus-wide or even regional facility that would support preservation transfers. They are in the midst of surveying the audio and video holdings on the entire campus to assess the scope of the needs that exist. Needs are not limited to analog source recordings, either. Most scholars and even many special collections are not equipped to handle the long-term stewardship of born-digital recordings. Another broad need is for standards and best practices in the field of video preservation equivalent to those that now exist for audio preservation.

From: Katie Hayne
Subject: Re: stories wg bamboo community Request for Workshop Three Attendees
Date: January 9, 2009 5:49:18 PM GMT-07:00

Here is a submission for the Monday workshop from The Research School of Humanities, The Australian National University.

We have a number of interdisciplinary projects involving different collaborations across Australia, but each project has a common objective of working with Aboriginal people to help them document their cultural heritage and environmental knowledge. This involves researchers collecting new data in the field, including images, video and GPS data. It also involves digitally returning material held in museum collections to remote communities and working with them to update museum records. The projects require a spectrum of tools and services to deal with data collection, annotation, preservation, access, analysis and publication. We have developed a few prototype systems addressing some of the preservation, access and publication needs but we have not managed to develop standardised workflows to get data into these systems.

1. Capture: Digital tape-based video cameras and digital still cameras are used. GPS data may also be recorded by a PDA using 'Cybertracker' or 'ArcGIS'.
2. Download: Currently images are downloaded by different researchers to their local hard drives in different ways, e.g. using digital camera software or iPhoto. Video is digitised and logged using Final Cut Pro.
3. Documentation: Adobe Bridge has been useful; Filemaker and Microsoft Excel are also used by some. We need to develop some consistent workflows that researchers can adopt so that their data will interface with a repository. Some documentation needs to be done offline.
4. Filenaming and resizing: Maybe this could be done automatically on upload to a web-based system.
5. Metadata mapping: We have done some transforming of metadata from Filemaker to a standard schema.
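The automatic filenaming suggested in step 4 could be sketched as deriving a stable, collision-resistant name from the file's content at upload time. The naming convention below (project code plus content checksum plus original extension) is an assumption for illustration, not an established standard of these projects.

```python
import hashlib
import os

def standard_name(project, original_path, data):
    """Build a standardized filename from a project code, a content
    checksum (so re-uploads of the same file get the same name), and
    the original file extension, lowercased."""
    digest = hashlib.sha256(data).hexdigest()[:12]
    ext = os.path.splitext(original_path)[1].lower()
    return f"{project}_{digest}{ext}"

name = standard_name("heritage01", "IMG_0042.JPG", b"...image bytes...")
```

Because the name is derived from the content hash, the same image uploaded twice maps to one name, which helps deduplicate material gathered by different researchers.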

The work done to date has been more at the end of setting up repositories and web-based systems for access and discovery, e.g. Fedora, OAI-PMH and Google Maps. This is possibly the more complex end from an IT perspective, but the systems are not usable by researchers as there is no workflow in place.

Develop standardised workflows for researchers working with visual material to move the media from their local hard drives into shared databases/repositories.

From: Mark J. Williams
Subject: stories wg Education wg input
Date: January 9, 2009 2:12:19 PM GMT-07:00

On behalf of the Education working group, we'd like to submit the following subset of our work in response to your request for Workshop 3 input:

"Education" deals with teaching three different kinds of skills:
a) The specifics of how to use hardware and software;
b) Methodologies for applying this hardware and software to Humanities research;
c) The critique of this hardware and software, using longstanding arts and humanities methodologies.

Faculty interested in teaching these skills to students may themselves desire training in how best to do so.

The interaction between scholars and IT can come from two directions:
a) There exist technologies with capabilities that could take scholars' research in new directions--if the scholar knew about them.
b) The scholar has research and/or pedagogical needs that are not met by existing technologies.

Project Bamboo could address the first of these by highlighting existing technologies of potential use to Humanities scholars. To address the second, Project Bamboo could aggregate scholars' requests for the technologies that match their needs and create a clearinghouse of relevant demonstrator models (with metadata relating to complexity, cost, required commitment, etc).

As a preliminary step towards accomplishing the above, it would be helpful to survey working group participants about the state of Digital Humanities on their campus and what stories, resources, or tools they have available to share as potential demonstrators for the community at large.

From: Kathleen Ryor
Subject: stories wg submission for Workshop #3
Date: January 9, 2009 2:13:19 PM GMT-07:00

Description of a sample project and the tasks currently performed in order to complete it: I am investigating a group of 16th century (Ming dynasty) Chinese painters in order to find out how their work was appreciated and collected during their lifetimes and slightly later. This group of painters was later disparaged and fell from critical and historical notice until the 20th century. Because they employed a style that was similar to one practiced in the 11th-13th centuries (the Song dynasty), and paintings from this earlier period were highly prized antiques from the 16th century onwards in China, many of their paintings had their signatures erased and replaced with the names of Song artists. Some preliminary research has suggested that these Ming artists were in fact well regarded in their own time. In order to demonstrate the popularity and esteem that these sixteenth century painters had in their lifetimes (and slightly later), I need to perform the following tasks:
1. Find all mention of these artists in texts that date to the sixteenth and seventeenth centuries. Such material primarily includes the collected writings of individuals, local and imperial histories, and gazetteers. Read and translate such material.
2. Because these painters were categorized with the label "Zhe School" at some point in the 17th century (this label was construed as pejorative), I also need to find all uses of the term Zhe pai 浙派 in texts that date to the sixteenth and seventeenth centuries. Read and translate such material.
3. Examine all extant attributions to these painters, with particular attention to any inscriptions and seals by other contemporary figures who either saw or owned the work.
4. Examine anonymous paintings attributed to the Song dynasty and anonymous paintings of the Ming dynasty that exhibit the styles of these artists in order to look for seals of sixteenth century individuals.
None of these tasks need to be followed in any particular order, although the most efficient and potentially fruitful tasks are #1 and #2.
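Tasks #1 and #2 reduce to scanning a full-text corpus for artist names and the term 浙派 and recording which texts mention which terms. The sketch below shows only this core scan; the corpus entries are invented placeholders, not real sources, and a real search would run over a database such as the Siku quan shu.

```python
# Invented mini-corpus: titles mapped to full text. A real search
# would run over an electronic text database, not in-memory strings.
corpus = {
    "wenji_A": "... 浙派 painters were praised by the collector ...",
    "gazetteer_B": "... the artist Dai Jin served at court ...",
    "biji_C": "... nothing relevant here ...",
}

def find_mentions(corpus, terms):
    """Return {term: [titles of texts containing that term]}.
    Works equally for CJK terms and romanized names."""
    return {t: [title for title, text in corpus.items() if t in text]
            for t in terms}

mentions = find_mentions(corpus, ["浙派", "Dai Jin"])
```

Even this trivial substring scan replaces the manual table-of-contents work the author describes, which is why electronic access to the texts matters so much.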

Technological tools used/needed for such work: In order to perform tasks #1 and #2, I currently have to find all collected writings (wenji 文集 and biji 筆記), local and imperial histories, and gazetteers in print form, examine the table of contents (if one exists) for titles of texts that might relate to painting, and then look at those individual texts. This is tedious and extremely time consuming. Because the closest research university (University of Minnesota) now has the electronic imperial library from the 18th century, the Siku quan shu, which contains all books extant at the time and not subject to censorship, I can electronically search for artists' names and other terms with vastly more efficiency and speed. The problem is that the University owns the CD-ROM version, which is installed on only one workstation and is only accessible by driving an hour to the university during the limited hours that the East Asia Library is open (it is not open on weekends). There is also no printing facility available for the terminal. There is a Web-based version of the Siku quan shu, and ideally access to this would enable me to do my research better and faster. It is very expensive, however, and my institution (a small liberal arts college) simply cannot afford a subscription. Evidently, it was even too costly for the University of Minnesota to consider. There are also other electronic databases of historical texts that might be useful to me, mostly from Academia Sinica in Taiwan, but again, my institution cannot afford access. Databases of scholarly articles in Chinese also exist, and the University of Minnesota subscribes to some, but I need to go there and download PDF files to disk. While my situation could be worse, the fact that the Siku quan shu database can only be used on one computer terminal, during the work week when I teach, makes using this revolutionary tool very difficult.

For tasks #3 and #4, print sources do exist that reproduce all Chinese paintings in public collections (and a few private ones), and they have indices. The problem is that the individual photographs in such print sources are mostly tiny black and white thumbnails, so the inscriptions and seals are not legible. In the end, I need to see all works of potential importance to the project in person. This may not be feasible, but the specific technology that would best support my research in this area is high-resolution scanning of Chinese paintings in all museums worldwide. This would necessarily have to include any colophons attached to the original work of art. If one could then gain access to such databases, it would be possible to save time and money by eliminating extensive travel. Even if only the museums with the largest and/or most important collections digitized in this manner, it would still greatly improve my ability to conduct research on this and other similar types of projects. More generally, because the language in which I do internet searches is classical Chinese (traditional/non-simplified characters), character sets for Microsoft Word (and presumably for Mac users) need to be large and include rarely used characters that nonetheless appear in most major print dictionaries. I also use such characters in scholarly writing, as they often appear in the names of individuals.

In sum, my specific technology needs are threefold. First, large databases of historical Chinese texts exist, but I have limited or no access to them, primarily because of cost. Second, most of the visual material is not digitized, and print reproductions are of limited use for the reasons stated above. Moreover, the quality of the few paintings that are available in digital form is almost universally poor. Third, day-to-day use of classical Chinese for research on the internet is somewhat hindered by the limited character sets in Microsoft's software.

From: Clai Rice
Subject: stories wg a research story
Date: January 9, 2009 2:36:44 PM GMT-07:00

One project I am embarking on now is a "distant reading" project (Moretti,
Graphs, Maps and Trees, 1). I am interested in patterns of diffusion in
American newspaper poetry of the late nineteenth century. It was common for
newspapers to reprint poems (and other small items, such as jokes or
stories) from other newspapers. I have been wondering lately if there are
any geographical, chronological, or formal patterns to this dispersal. Do
poems appear first in larger papers and then disperse to smaller ones? Is
there an overall geographical pattern, like dispersal from east to west? Do
poems on certain topics, or cast in certain forms, gain preference? To
study this, I simply locate a poem in a newspaper, then search for it in
other newspapers, noting the date and location of the papers (and examining
any significant textual alterations; titles are commonly quite variable).
All of this information is going into a database, with the goal of creating
a geographical map-based display that will allow users to track individual
poems, groups of poems, authors, topics, and newspapers of origin (what
papers print frequently reprinted original poems?).

The current portion of the research requires access to full-text databases
of nineteenth-century newspapers. Proquest Historical Newspapers is the most
reliable, but contains only 11 papers, all major dailies. What makes this
project possible is the rapid development of full-text archives for
genealogy research. These archives are developed from microform, and the
full-text OCR is very unreliable. Currently the two fullest archives are
NA and GB, but there are numerous smaller
databases as well. So my process is to locate a poem (as soon as the
procedure is set I will work from one large daily, covering a month at a
time), select 2-3 word search phrases, then search NA and GB for them. On
the result lists I have to verify each hit visually because both databases
are notoriously incorrect on dating. I must search on multiple strings
because of the unreliable text. And I can't do a single search for both
databases; there is no search aggregator. Currently I do not keep a copy of
each hit PDF due to file sizes (some poems have 50 reprints).

Current tools include the browser, newspaper databases, and a text editor.
Later I will be using a database, probably MySQL, with a web interface. The
online newspaper databases all have authentication procedures that
frequently interrupt searching or make it more time-consuming. The ideal
tool would be a search aggregator for the different databases, one that
would return hits in a uniform format. Also helpful would be an onscreen OCR
that would allow rapid text searching of graphic PDFs. Even if it worked
only 50% of the time it would save a good deal of time overall. One way I
would do this would be to adapt something like the Zotero ability to make
entries from current page views. On one click it could grab and search the
PDF, then after visual verification was complete, another click would cause
it to store the PDF and create a bibliography entry. Then the data could be
dumped into another database as needed for analysis and display.
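The search-aggregator idea described above can be sketched in a few lines. This is only an illustration of the uniform-format requirement, assuming hypothetical per-database search functions; the actual newspaper databases ("NA" and "GB" in the story) expose no such public API, so the backends below are stand-ins:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hit:
    """A search hit normalized to one uniform format across databases."""
    database: str
    newspaper: str
    date: str       # as reported; dating must still be verified visually
    phrase: str     # the search phrase that produced the hit
    page_url: str

def aggregate_search(phrases, backends):
    """Run every phrase against every backend, de-duplicating results.

    `backends` maps a database name to a function that takes a phrase
    and returns raw result dicts (a hypothetical interface).
    """
    seen, hits = set(), []
    for name, search in backends.items():
        for phrase in phrases:
            for raw in search(phrase):
                key = (name, raw["newspaper"], raw["date"])
                if key not in seen:      # same page found via two phrases
                    seen.add(key)
                    hits.append(Hit(name, raw["newspaper"], raw["date"],
                                    phrase, raw.get("url", "")))
    return sorted(hits, key=lambda h: h.date)
```

Because every hit arrives in one shape, the visually verified ones could be appended straight to the reprint database, which is the uniform-format property the story asks for.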

From: Steve Brier

WHAT: As Senior Academic Technology Officer for the CUNY Graduate Center, I participate actively in a CUNY-wide (20 senior and community colleges, spread across the five boroughs of New York City) Academic Technology (A.T.) committee, which comprises CUNY faculty from each of our campuses who are actively engaged in using A.T. for teaching and research. We are in the process of setting up an Academic Commons (AC) website that will allow faculty to engage one another using a range of Web 2.0 resources (e.g., blogs, wikis, CMSs) and to push forward a collective conversation about how best to deploy A.T. across CUNY's myriad teaching and research environments.

HOW: We are focused on being able to offer three interrelated "basic functional pieces": information (the AC should make it easy to publish useful information); identity (the AC should be able to easily display customizable profiles and should allow users to build communities through them via social networking tools); and interaction (tagging, collective editing, commenting).

HELPS: We currently use a Moodle site to stay in touch individually and through our sub-committees, but I'm not certain that this is the best environment for the long term.

NEEDS: Our challenge is to identify the best platform(s) for accomplishing this set of tasks. What are the optimal hardware and software configurations, given learning curves and costs, both human and financial? Open source vs. proprietary; Moodle vs. Drupal vs. WordPress Multi-User vs. Plone; PHP vs. Python; Oracle vs. Flash vs. Microsoft? What are the hidden human resource costs (tech support and faculty development) of setting up and maintaining an open-ended system like this that needs to be scalable?

From: David Greetham

WHAT: The compilation of a series of digital morphs illustrating the problem of the ontology of a text, its relation to precedent and subsequent texts, and the blurring of the boundaries of the "work". The compilation/database is primarily of visual materials (from paintings to video games to cartoons to architecture), but also incorporates audio files. The morphing project has been the subject of several lectures at "digital resources" conferences, is used in my textual/critical courses, and has been the subject of a number of published (and to-be-published) essays. An example of a mid-point in a low-resolution morph constructed from two different video games is attached.

HOW: The following is a caption to an illustration from the compilation; it sets out the procedures in a general way [Editor's note: image not included due to limitations of Project Bamboo wiki]

Complex Morph storyboard, showing selection of key points and keylines in a two-sequence morph on three states. Note that once a keypoint has been selected in the opening frame of each level of a storyboard, the morphist must then make a subjective decision on what will be the appropriate analogous keypoint on the closing frame of that level (i.e. the initial digital pairing of the two keypoints is based purely on the positions of individual pixels in the graphic frame, and it is the morphist who must then drag the corresponding keypoint to the pixel that best represents the formal or ontological equivalence in the morph narrative being constructed). Other technical and critical decisions made by the morphist that will have direct effects on every frame of the total morph movie include the setting of time codes, the image resolution (in dpi), the image resizing, the chroma-keying (adjustment of colour wheel), the zoom ratio, the setting of interpolation points (transformation-control points along each keyline), degrees of rotation, the selection of crossfade protocols, the compression ratio, the relation between quality of animation-image and animation motion (in inverse proportion), the frames per second (8 is standard low-end for computer animations, 30 for NTSC US and 25 European video), and the pixel depth (i.e. the number of colours in the transition image), which will depend on the technical capacities of the playback device (8-bit, 24-bit). All of this demonstrates that, while the resulting morph may look like "free play" or "feminist fluidity", it is in fact the construct of a very complex series of technical and critical decisions made by the morphist.
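The keypoint pairing the caption describes rests on linear interpolation between the opening and closing frames. The sketch below illustrates that general technique only; it is not the actual morphing software used, and the coordinate values are invented:

```python
def interpolate_keypoints(src_pts, dst_pts, t):
    """Linearly interpolate paired (x, y) keypoints at time t in [0, 1].

    At t = 0 the points coincide with the opening frame's keypoints;
    at t = 1, with the closing frame's. Intermediate values of t give
    the in-between positions used to warp each transition frame.
    """
    return [((1 - t) * sx + t * dx, (1 - t) * sy + t * dy)
            for (sx, sy), (dx, dy) in zip(src_pts, dst_pts)]

# At 30 frames per second (the NTSC rate mentioned in the caption),
# a two-second morph needs 60 interpolation steps:
frames = [interpolate_keypoints([(10, 10)], [(70, 40)], i / 59)
          for i in range(60)]
```

The subjective step the caption stresses, dragging a keypoint to its formally or ontologically equivalent pixel, happens before this calculation; the interpolation itself is purely mechanical.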

HELPS: Graphics and audio editing programs that can accomplish the steps laid out above.

NEED: As above, with more sophisticated morphing software and display.

From: Jesse Merandy

WHAT: I have developed a unique multimedia walking tour that traces the 19th century American poet Walt Whitman's classic poem, Leaves of Grass, by allowing users to walk literally and virtually in the poet's shoes as he once traversed lower Manhattan and Brooklyn.

HOW and HELPS: Using a mix of MP3 audio, text, and maps delivered on an MP3 player or PDA, "Crossing Brooklyn Ferry: An Online Critical Edition" immerses the user in Whitman's 19th-century world of Brooklyn and Manhattan, using poetry (both text and spoken word) as the entrepôt for the walking tour.

NEED: The Walt Whitman Walking Tour and Online Critical Edition needs support with the setup and implementation of a feedback mechanism that lets users comment on their experiences during and after the walking tour. Although my knowledge of web design is strong, it has not proved sufficient for the backend database setup required to design and implement such a system. To set up an appropriate user blog, I need some kind of ongoing, direct support to get the Web 2.0 elements of the site to a preliminary operational state. What have other digital humanities programs with limited financial and technical resources done in similar situations?

Plutarch: Portal for Learning and UndersTanding ARCHival sources

Task: Managing and integrating digital images of archival / special collections materials with secondary sources and data analysis tools to support the interpretation process

  • AUDIENCE: Historians, any other scholars who work with primary texts (e.g., English, Comparative Literature, Religious Studies, Languages, American Culture, etc.)
  • WHAT: Research using primary sources entails visiting, or remotely contacting, multiple repositories and special collections to gather materials about a person, activity, or subject. Now, many archives and special collections scan photographs or documents and deliver them to researchers electronically. However, researchers have no good means of managing these images or the ones they have downloaded from existing online projects, such as American Memory. In particular, no software is available to help integrate the primary sources with secondary sources, personal or research-group notes, transcriptions, or other applications, such as GIS, to facilitate the interpretation process.
  • HOW: For example, if I am interested in a specific Civil War battle, I may collect hundreds of diary entries, documents, and letters about the event from participants, their family members, and government archives. I need to manage these scanned images on my server and integrate them with secondary sources (citations, and perhaps even full out-of-copyright books that are freely available). I also want to map the materials using GIS software: where was the soldier on the battlefield, and where was he from (community/city/state)? Can I integrate census data about the socio-economic status of the locality? Many of the diary entries are difficult to read, so my research group has transcribed portions, and I want to view these side by side with the originals. All this data manipulation is needed before I can really begin any interpretation. In short, I need to establish a context for my subject, and that involves triangulating information from multiple places in order to gain new insights and generate new knowledge. The big X here is amassing, organizing, and preparing these data; the Y would be the actual analysis.
  • HELPS: There are currently no standard applications, so different scholars use personalized and idiosyncratic solutions that are not sharable, which complicates subsequent re-use or sharing (and definitely collaboration).
    There are discrete applications that enable some types of management (Zotero, Treepad, Transana, Google Maps, A.nnotate), but they only handle pieces of this puzzle. The multiple-application approach also means that one must run multiple searches to gather information, and there is no interoperability for data exchange, so a great deal of duplicative data entry is needed to maintain consistency across these data.
  • NEED: The technology I have in mind would not only be able to integrate information in different formats from different sources (with metadata) but also be able to search across the different types of information and help make new connections.
  • PLAYERS: Creating and maintaining this resource would require a variety of institutional players. Librarians, archivists, and curators would be data providers, so they would have to provide interoperable data in digital formats. There is a large programming role here for computer scientists to integrate existing tools (e.g., Zotero, A.nnotate), and academic technologists would have to support this program.
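The integration the NEED bullet calls for could start from something as simple as a shared relational schema. The sketch below is only a sketch under invented names (every table and column here is hypothetical, not part of any existing tool); it links scanned sources, transcriptions, GIS places, and citations so that one query searches across all of them:

```python
import sqlite3

# Hypothetical minimal schema; all table and column names are invented.
SCHEMA = """
CREATE TABLE source (
    id INTEGER PRIMARY KEY,
    repository TEXT,      -- holding archive or special collection
    image_path TEXT,      -- scanned image on the researcher's server
    description TEXT
);
CREATE TABLE transcript (
    id INTEGER PRIMARY KEY,
    source_id INTEGER REFERENCES source(id),
    text TEXT             -- research group's transcription
);
CREATE TABLE place (
    id INTEGER PRIMARY KEY,
    source_id INTEGER REFERENCES source(id),
    name TEXT, lat REAL, lon REAL   -- feeds the GIS layer
);
CREATE TABLE citation (
    id INTEGER PRIMARY KEY,
    source_id INTEGER REFERENCES source(id),
    reference TEXT        -- secondary-source citation
);
"""

def search_everywhere(conn, term):
    """One search across descriptions, transcriptions, and citations."""
    like = f"%{term}%"
    return conn.execute(
        """SELECT DISTINCT s.id, s.repository, s.image_path FROM source s
           LEFT JOIN transcript t ON t.source_id = s.id
           LEFT JOIN citation  c ON c.source_id = s.id
           WHERE s.description LIKE ? OR t.text LIKE ? OR c.reference LIKE ?""",
        (like, like, like)).fetchall()
```

A hit on a transcription returns the scanned image it belongs to, which is the side-by-side viewing and "new connections" behavior the NEED bullet describes.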

The Global Performing Arts Database

Submitted by Ann Ferguson, Associate Director, Global Performing Arts Consortium, in collaboration with GloPAC colleagues at Cornell University

We have created the Global Performing Arts Database (GloPAD), a multimedia, multilingual, Web-accessible database containing digital images, texts, video clips, sound recordings, and complex media objects related to the performing arts from around the world, plus information about related pieces, productions, performers, and creators. Our partners use the GloPAD ingest system and metadata structure to directly input the digital images and descriptions of their performing-arts-related items. The database offers a highly sophisticated metadata schema that was created to accommodate the complexity of describing the elements of a performance.
One of our pressing needs is to develop an efficient way to incorporate the images and descriptions of performing-arts-related material that reside in digital collections created outside of the GloPAD structure: for example, material from a library's website, an online museum exhibit, or a theatre company's digital archives. Many creators of these small collections would be happy to see their material available in GloPAD in addition to their home site. Right now the only way that can be done is for someone to manually enter all of the metadata from those collections into the GloPAD system, a huge investment of time and resources. The metadata already exists in electronic form, so it is a step backwards to manually re-enter that data into GloPAD. Our dream is to have the means to easily harvest and export the metadata from these smaller digital projects without having to hire a database expert to set up each such transfer. Ideally there would be a service to which we, and others, could send that data, a service that would reformat it to allow direct import into the GloPAD metadata schema. The Open Archives Initiative protocol for metadata harvesting is a good step in this direction, but it does not offer the non-expert a service that converts data into the forms needed by various display systems. Several database-based content management systems (Drupal, Joomla) and digital collections systems (Omeka, Open Collection) have, or are working on, extensions for the export and import of data sets. What is needed is a reliable, non-commercial service for carrying out transfers of data between collections. Our goal is to make it easier for scholars to find the digital resources they need for their work. By incorporating material from these smaller sites into GloPAD, we can get closer to providing theatre and other performing-arts scholars with a single authoritative repository of digital resources for their research.
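As a sketch of the "reformat for import" step, the following parses an OAI-PMH ListRecords response (OAI-PMH and its oai_dc Dublin Core format are real standards) and maps the Dublin Core fields through a crosswalk. The GloPAD-side field names are invented for illustration, since the actual GloPAD schema is not given here:

```python
import xml.etree.ElementTree as ET

NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
      "oai_dc": "http://www.openarchives.org/OAI/2.0/oai_dc/",
      "dc": "http://purl.org/dc/elements/1.1/"}

# Hypothetical crosswalk from Dublin Core to GloPAD-style field names.
CROSSWALK = {"title": "piece_title", "creator": "creator_name",
             "date": "production_date", "identifier": "digital_object_uri"}

def harvest_records(xml_text):
    """Parse an OAI-PMH ListRecords response; map each oai_dc record."""
    root = ET.fromstring(xml_text)
    out = []
    for rec in root.iterfind(".//oai:record", NS):
        dc = rec.find(".//oai_dc:dc", NS)
        if dc is None:
            continue
        mapped = {}
        for el in dc:
            field = el.tag.split("}")[-1]   # strip the namespace prefix
            if field in CROSSWALK and el.text:
                mapped[CROSSWALK[field]] = el.text.strip()
        out.append(mapped)
    return out
```

A service like the one wished for above would amount to hosting such a crosswalk per source collection, so that no database expert is needed for each new transfer.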

The Global Performing Arts Database

Submitted by Ann Ferguson, Associate Director, Global Performing Arts Consortium, in collaboration with GloPAC colleagues at Cornell University

We have created the Global Performing Arts Database (GloPAD), a multimedia, multilingual, Web-accessible database containing digital images, texts, video clips, sound recordings, and complex media objects related to the performing arts from around the world, plus information about related pieces, productions, performers, and creators. In addition, a team of GloPAC scholars is building JPARC, an interactive and interpretive Web-based research and teaching environment focused on the Japanese performing arts.
One of our more pressing technology needs for both GloPAD and JPARC is in the area of text and video annotation. We want our scholars to be able to easily annotate play scripts with multimedia objects, and to annotate videos with text subtitles and notes. We currently use an ad hoc combination of tools such as the QuickTime Pro video editor, HTML page editors, and Flash players to annotate, but none of this work is possible without a good bit of training in complicated software and computer setups. We have also been hampered by the technological decay of software: some of the procedures we developed only a year or two ago no longer work due to changes in the commercial software on which we had to rely (see, for example, our subtitling how-to). We need a reliable service that includes tools for timed text (for subtitling and captioning) and multimedia annotation that can be easily used by the scholars who are helping to build these resources.
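Timed text of this kind need not depend on commercial tools: the SubRip (.srt) subtitle format, readable by most video players, is plain text that a short script can generate. A minimal sketch, with the cue data invented for illustration:

```python
def srt_timestamp(seconds):
    """Format seconds as an HH:MM:SS,mmm SubRip timestamp."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues):
    """Render (start_sec, end_sec, text) cues as a SubRip subtitle file."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)
```

Because the cues are plain data, the same list could be re-rendered for a different player if one format decays, which is exactly the kind of software churn described above.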
