
The HearstCAVE, an immersive visualization platform installed in the gallery of the Hearst Museum of Anthropology, is part of a visualization network being developed at four UC campuses to share and preserve visualizations of cultural heritage sites and materials. Research IT is working with the HearstCAVE project as part of a broader effort to understand what kinds of viz-related services campus researchers need. With our partners in the Hearst Museum and CITRIS, we have built a team of students who are developing 3D models of objects from the Hearst Museum. These students are contributing to an exciting project while they learn about visualization and research projects in general.

In this reading group, we will share more about the HearstCAVE project and, as a group, discuss ideas and opportunities for related visualization work.

When: Thursday, 2 November from 12 - 1pm
Where: 200C Warren Hall, 2195 Hearst Ave (see building access instructions on parent page).
What: The HearstCAVE project and related visualization opportunities
Presenting: Chris Hoffman, Research IT

Prior to the meeting, please review the following:




Attending:

Amy Neeser, RDM (Research IT & Library)
Camille Crittenden, Banatao Institute / CITRIS / PRP
James Fong, ETS
Jean Cheng, ETS-AIS
John Lowe, Research IT & Linguistics
Kali Armitage, IST - Document Management
Katie Fleming, Hearst Museum of Anthropology
Marlita Kahn, TPO
Nico Tripsovich, ARF
Patrick Schmitz, Research IT
Rachna, HearstCAVE Team (undergraduate student)
Ray Lee, Research IT
Rick Jaffe, Research IT
Sami, HearstCAVE Team (undergraduate student)
Steve Masover, Research IT
Thomas Hammond, HearstCAVE Team (undergraduate student)
Willa Chan, ETS


Notes:

Slides (PDF, 15MB)

Photogrammetry is not just about creating pretty things: the 3D models are objects that scholars use to advance their research, letting them access materials remotely and en masse, without putting delicate objects in the collections at risk through travel, handling, etc. In addition, the various views of a 3D model can reveal detail that is not obvious, or even not visible, in an unprocessed photo or to the naked eye (especially for small objects, or objects that can be seen from only one angle when fixed in an exhibition space).

UCSD has done a lot of work helping to advance viz environments built from commodity hardware, which are radically less expensive than environments built on custom hardware by commercially-oriented professional outfits.

Sami: Capturing a full 2D photographic record of an object takes 30 to 90 minutes and 30-150 photos. Those photos are the input to PhotoScan, the software that assembles the 3D object model through a series of photo-processing stages. The processing can take anywhere from an hour to days, depending on the number of 2D photos used to create the 3D model, but also on a broad set of issues that can arise in the course of the PhotoScan processing.
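
That whole pipeline can be scripted through PhotoScan's Python API. The sketch below is illustrative only: it mirrors the stages described above, but the method names follow the PhotoScan Pro 1.x Python API and may differ across versions, and all file paths are made up.

    # Illustrative PhotoScan scripting workflow: photos in, textured 3D model out.
    # Method names follow the PhotoScan Pro 1.x Python API; paths are placeholders.
    import glob
    import PhotoScan

    doc = PhotoScan.app.document
    chunk = doc.addChunk()

    # Load the 30-150 source photos of the object
    chunk.addPhotos(glob.glob("photos/object_01/*.jpg"))

    # Find matching features across photos and estimate camera positions
    chunk.matchPhotos()
    chunk.alignCameras()

    # Build the dense point cloud -- typically the slowest stage (hours to days)
    chunk.buildDenseCloud()

    # Build the mesh, UV layout, and texture from the dense cloud and photos
    chunk.buildModel()
    chunk.buildUV()
    chunk.buildTexture()

    # Save the project and export the model for use in Unity or the CAVE
    doc.save("projects/object_01.psx")
    chunk.exportModel("models/object_01.obj")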

Rachna: Smaller, flatter objects are generally simpler (and quicker) to model.

Marlita: Can you display an object in HearstCAVE in a way that gives a sense of scale, e.g., large image of small object?
Katie: This is something we really need to be able to do in order to make these models useful to researchers.
Chris: We have not yet explored incorporating metadata of this sort into the 3D model files, but that's one thing we'll be exploring.
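
One possible approach -- purely hypothetical at this point, not something the project has adopted -- would be to export a small sidecar metadata file next to each model that records its real-world dimensions, which a viewer could then use to convey scale. A minimal sketch in Python, with all field names and values invented for illustration:

    # Hypothetical sidecar metadata for a 3D model so that a viewer can show
    # real-world scale. All field names and values are invented for illustration.
    import json

    metadata = {
        "accession_number": "HMA-0000",   # placeholder museum identifier
        "model_file": "object_01.obj",
        "units": "millimeters",
        "physical_dimensions": {"width": 42.0, "height": 35.5, "depth": 12.0},
        "capture": {"photo_count": 120, "software": "PhotoScan"},
    }

    with open("models/object_01.metadata.json", "w") as f:
        json.dump(metadata, f, indent=2)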

Unity (game development software) is used to develop the applications and content displays that make 3D models of collection objects available.

Incorporation of audio in a "tour" of an object. Experiment: "The Doctor", a stone sarcophagus whose model was created by Prof. Rita Lucarelli and students. One of the participants in the HearstCAVE project is a UCSC student in electronic music who worked with Prof. Lucarelli to develop a soundtrack for a "tour" of this object. [DEMO]

http://3dcoffins.berkeley.edu

https://immersivescholar.org

Preservation and IP issues arise for both interim and final digital files; also accessibility: what does accessibility mean in the context of these models?

Patrick: In engaging other museums beyond Anthro, what issues/questions come up about models of what sort of objects they hold in their collections?
Chris: Interesting discussions. For example, doing photo capture work with objects in BAM's storage facility involves more expense than is funded: it requires certain staff to be present, etc. At the UC/Jepson Herbaria, there's a question about whether 3D models of pressed plants are "useful" -- or whether 2D is valuable enough, or whether a "pressed" specimen really is 3D, since a model lets one see spatial qualities of the specimen.
Patrick: How to incorporate researcher priorities?
Chris: Absolutely. E.g., setting Anthro mus. priorities with Ben Porter -- aligning this work to fit & advance the Museum's mission.
Katie: Worthwhile to consider a survey that we could circulate to get information about which objects ought to be modeled, and what qualities of the objects need to be modeled or emphasized.

Camille: Is there manual work involved in 2D->3D in PhotoScan, or does the software do it all?
Sami & Rachna: Some manual work -- more removing spurious points that PhotoScan includes in the model than adding points manually.

Patrick: Thinking about the Essig museum ... has anyone tried modeling an insect encased in amber, but digitally removing the amber to get at the insect itself?
Chris: Don't know, but very possibly.

Camille: Local vs. not -- how does PRP (fast network) come into play?
Chris: Files could -- theoretically -- be hosted at one institution, but visualized elsewhere over the PRP (fast) network.

Nico: collaboration with Jacobs Institute for 3D printing re: printing some of these models for use in a teaching/learning context?
Chris: Definitely an idea that has come up...
Chris: Also raises issue of what 'collateral' effects come from making models, or 3D prints of objects. Art, Anthro -- different concerns in each collection.

