This wiki space contains archival documentation of Project Bamboo, April 2008 - March 2013.
Please add your name and email here if you would like to participate.
Bill Parod, email@example.com
Scott Prater, firstname.lastname@example.org
Mike Stroming, email@example.com
While other work is underway to identify external, high-value collections for Bamboo, it is hard to consider their "interoperability" without reference to the "operations" we anticipate performing on them. By drilling into scholarly narratives and breaking them down into specific operations, we hope to explicate a set of operations that help define 'collection interoperability' and against which it can be assessed. Considering such operations in relation to the identified collections, we will note gaps between what specific operations require and the access that the collections provide. This will help us identify remediation/adapter patterns and recommended encoding standards for content and metadata, and perhaps inform the design of broader Bamboo object models and processing primitives. Persisting various collection derivatives may be part of the scholarly process. We will note basic CRUD (create, read, update, delete) operations as they occur in the narratives, but our effort will not address the specifics of content management operations or persistence models, as those are the domain of other WorkSpaces or Content Interoperability efforts. Our focus is on identifying common scholarly operations and noting the access readiness of high-value third-party collections for those operations.
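To make the remediation/adapter idea concrete, here is a minimal sketch of an adapter pattern in Python. All names here (NormalizedItem, CollectionAdapter, DublinCoreAdapter) are hypothetical illustrations, not part of any Bamboo specification: each external collection would get an adapter that maps its native records onto a common shape that scholarly operations can consume.

```python
from dataclasses import dataclass

# Hypothetical common record shape; Bamboo's actual object models are
# explicitly left to later design work in this document.
@dataclass
class NormalizedItem:
    identifier: str
    title: str
    text: str

class CollectionAdapter:
    """Base for remediation/adapter patterns: one adapter per external
    collection, translating its native records into NormalizedItem."""
    def to_normalized(self, raw: dict) -> NormalizedItem:
        raise NotImplementedError

class DublinCoreAdapter(CollectionAdapter):
    """Illustrative adapter for a collection exposing simple Dublin Core
    fields in its metadata records."""
    def to_normalized(self, raw: dict) -> NormalizedItem:
        return NormalizedItem(
            identifier=raw["dc:identifier"],
            title=raw.get("dc:title", ""),
            text=raw.get("dc:description", ""),
        )
```

The point of the pattern is that gaps between what an operation needs and what a collection provides are absorbed in the adapter, leaving the operation itself collection-agnostic.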
Please add links here to scholarly narratives that involve the use of external collections, including references to those collections:
Corpora Space Scalable Reading Experiment: Proposed Functionality for the First Corpora Space Workshop
Please list collections identified in the Use Cases above or otherwise identified as important in Bamboo activities:
HathiTrust: http://www.hathitrust.org/data
1) Collect scholarly narratives already described in Bamboo and solicit others as available from participants.
2) Break a selected set of narratives down into atomic operations.
For each operation, note the following information:
What kind of information, generally speaking, is the operation concerned with?
What data sensitivities can improve or degrade the reliability of the operation's results? What does ideal data look like?
What specific data requirements (format, schema, terminology, syntax, etc.) does the operation have?
For each operation, we'll note specific inputs, outputs, available implementations, etc.
Scott Prater's analysis of a geo-mashup, suggested in a WorkSpaces - Collection Interoperability discussion, provides an excellent template: https://wiki.projectbamboo.org/display/BTECH/Operations+API+use+case . We will enhance this template as needed.
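The per-operation fields listed above could be captured in a simple structured record. The sketch below is only an illustration of the template; the field names are ours, not a Bamboo standard, and the geocoding example is a hypothetical operation of the kind the geo-mashup narrative suggests.

```python
from dataclasses import dataclass, field

# Hypothetical record mirroring the template fields: information kind,
# data sensitivities, data requirements, inputs, outputs, implementations.
@dataclass
class OperationRecord:
    name: str
    information_kind: str          # what information the operation concerns
    data_sensitivities: list[str]  # conditions that improve/degrade results
    data_requirements: list[str]   # format, schema, terminology, syntax
    inputs: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    implementations: list[str] = field(default_factory=list)

# Illustrative entry for a geocoding operation drawn from a geo-mashup
# narrative; all specifics here are assumptions.
geocode = OperationRecord(
    name="geocode-place-names",
    information_kind="place names mentioned in a text",
    data_sensitivities=["OCR quality", "historical vs. modern toponyms"],
    data_requirements=["plain text or TEI", "UTF-8 encoding"],
    inputs=["document text"],
    outputs=["list of (place name, latitude, longitude)"],
    implementations=["gazetteer lookup service"],
)
```

Recording operations in a uniform shape like this makes it straightforward to compare their requirements against what candidate collections actually provide.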
3) Assess candidate collections' affordances for the relevant narratives, noting required remediation or transformation operations.
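The assessment in step 3 amounts to comparing what an operation requires against what a collection affords; anything left over implies a remediation or transformation step. A minimal sketch, with entirely hypothetical requirement labels:

```python
def remediation_gaps(required: set[str], provided: set[str]) -> set[str]:
    """Return the requirements of an operation that a collection does not
    satisfy; each gap implies a remediation/transformation operation."""
    return required - provided

# Hypothetical example: the collection affords two of the three
# requirements, so one remediation step would be needed.
gaps = remediation_gaps(
    required={"TEI encoding", "UTF-8", "page-level metadata"},
    provided={"UTF-8", "page-level metadata"},
)
# gaps == {"TEI encoding"}
```

In practice the comparison would be richer than set difference (versioned schemas, partial support), but this captures the basic gap analysis the step describes.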
Need to identify CI-related operations implicit in scholarly use cases in order to define and assess collection interoperability
Identify operations and the most important transformation/remediation patterns from inspection of the scholarly narratives gathered
Assess candidate collections' affordances against the relevant narratives
Identify gaps in existing CI standards and profiles
Scheduled calls: We have no calls scheduled at this time. We will first see how many participants join the effort and what use cases are provided, then consider how best to use phone conferencing, email, and the wiki.