This wiki space contains archival documentation of Project Bamboo, April 2008 - March 2013.

Upload presentations and meeting notes about Adjusting Work Plans to this page.

Principles for selecting tools for Bamboo integration / adoption / adaptation

[Steve's capture of principles Marlita wrote on flip charts ... may be incomplete or misinterpreted!!]

  1. Has useful functionality
  2. Has an audience of scholars interested in using it to operate on collections with which Bamboo is engaging
  3. Has an audience of content providers who want to supply input to the tool
  4. The body of selected tools can work on structured and unstructured content
  5. Offers something that is not available from the archive/repository where the collection is hosted/served
  6. Might meet the preceding principle by permitting an existing function to apply to a wider world of content than it was designed for or to which it was originally applicable
  7. Has a community of support behind the tool
  8. Is ready to be deployed
  9. Is not proprietary, or does not require distribution by Bamboo
  10. Adheres to standards that are in wide use rather than inventing new ones
  11. Fungibility / plug-and-play of tools: if tools are doing similar things they should have similar interfaces, or be able to integrate via a connector
  12. Ease of use: the time a scholar puts in is commensurate with the value gained from using the tool
  13. Representativeness of functionality across types and disciplines of humanist practice
  14. Tools are amenable to workflows

[Proposed additional criteria from Bruce Barton:]
In addition to thinking about how we evaluate any single tool, we could also ask how the tools in the collection we support relate to each other.

  1. Are the tools complementary, such that a scholar who is attracted to one tool might find other tools useful as well?
  2. Is there enough diversity in our mix of tools, where diversity could be measured along several dimensions:
    1. Is the range of scholarly problems to which they can be usefully applied broad enough?
    2. Do they represent a broad enough range of deployment or invocation profiles to demonstrate the kinds of capacities we want the ecosystem to have at the end of Phase I? I realize that this is vague. Examples: tool in a work space with possibly more than one style of invocation; tool on the BSP as a simple RESTful service where the result returns the operation payload; tool on the BSP as a RESTful service where the result is cached for later retrieval after a notice has been sent (see the sketch after this list).
    3. Other ranges might include
      1. simple vs. complex/sophisticated;
      2. standalone vs. component of a workflow;
      3. produces a derivative object by a transformation vs. performs an analysis.
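
A minimal sketch of the two BSP invocation profiles mentioned above, in Python. The base URL, the /tools/collate paths, and the polling loop are hypothetical stand-ins; this page does not specify the actual BSP interfaces, and the real platform would presumably deliver a notice rather than require polling.

  # Sketch only: contrasts a synchronous call whose response body is the
  # operation payload with an asynchronous call whose result is cached
  # for later retrieval. Endpoints are invented for illustration.
  import time
  import requests

  BSP = "https://bsp.example.org"  # placeholder base URL, not a real BSP address

  def invoke_synchronously(text):
      # Simple RESTful profile: the response *is* the operation payload.
      resp = requests.post(f"{BSP}/tools/collate", data=text)
      resp.raise_for_status()
      return resp.text

  def invoke_asynchronously(text):
      # Cached-result profile: submit a job, then retrieve the result later;
      # polling stands in here for the notice the service would send.
      resp = requests.post(f"{BSP}/tools/collate/jobs", data=text)
      resp.raise_for_status()
      job_url = resp.headers["Location"]  # where the cached result will appear
      for _ in range(60):                 # poll for up to ~5 minutes
          result = requests.get(job_url)
          if result.status_code == 200:
              return result.text          # cached result retrieved after the fact
          time.sleep(5)
      raise TimeoutError("result not ready")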

Tools evaluated against the principles above

Tim Cole proposes, building on discussion with Neil Fraistat during the break:

  1. Create a matrix of tools, describing what's proposed for BSP deployment (a rough sketch of such a matrix follows this list); then
  2. give folks some period of time to populate the table with different categories of tools.
  3. (Neil) Or task a group to fill in the matrix, so that it becomes a vetted matrix of tools; and,
  4. (Tim & Neil) then task a group (Scholarly Services group? something broader) to filter that against the principles we've compiled.