
Our Research IT Reading Group on Thursday, 26 March will feature Greg Bell, Director of the Energy Sciences Network (ESnet) and of the Scientific Networking Division at Lawrence Berkeley National Laboratory. He will be discussing the Science DMZ: a network design pattern for data-intensive science.

When: Thursday, March 26 from noon - 1pm
Where: 200C Warren Hall, 2195 Hearst St (see building access instructions on parent page).
Event format: The reading group is a brown bag lunch (bring your own) with a short ~20 min talk followed by ~40 min group discussion.

Please review the following in advance of the 3/26 meeting:
==============================

(1) Science DMZ: a Network Design Pattern for Data Intensive Science

(2) Executive Summary and Findings sections only of High Energy Physics and Nuclear Engineering Network Requirements (skimming other sections of this long report is optional)

==============================

 

Attending:

Aaron Culich, Research IT
Aron Roberts, Research IT
Bill Allison, IST-API
Chris Hoffman, Research IT
David Greenbaum, Research IT
Erik McCroskey, IST-Network Services
Gary Jung, Research IT
Greg Bell, ESNet/LBNL
John Lowe, Research IT
Larry Conrad, OCIO
Patrick Schmitz, Research IT
Perry Willett, CDL
Rick Jaffe, Research IT
Ron Sprouse, Linguistics
Steve Masover, Research IT

 

DAG: Science DMZ, Faculty Engagement / Science Engagement Model. Reflect today on both the technical and social sides of supporting research on campus.

Greg Bell (GB):

[slide deck: PDF, PPTX on Dropbox]

* Will focus on vision, overview, and impact of the work we're doing to support research. Slides are from a recent ASCAC meeting; will focus on the overview and impact portions of the deck (not so much the update).
* ESNet is not an ISP; there is a risk of being seen as a competitive rival to AT&T, etc., or as just an IT investment of the federal government. It is a DOE user facility aimed at overcoming constraints of geography.
* Origins of ESNet in 1986; the vision was to do on WANs what was then possible on LANs: a network that performs as if any given user is the only user.
* ESNet: international networking facility optimized for DOE science missions; 340 Gbps of transatlantic capacity; connections to universities will support transfers between universities; world's first 100G network at continental scale. 80% of staff are engineers.
* ESNet designed for "elephant" flows (very large) that are 'bumpy' in terms of bandwidth variability; as opposed to "mouse" (small) flows that commercial carriers carry.
* TCP throughput is highly sensitive to packet loss, especially over long-latency paths (cf. Science DMZ: a Network Design Pattern for Data Intensive Science -- the reading for this group); a rough throughput sketch appears just before the Discussion section below.
* Exponential growth in the quantity of data carried since the early 90s. Few large data sources in the past; now data sources include massive flows from many small sensors/detectors/instruments, single point sources of data from all over the place [including phones, and soon to include watches]. What's slowing data growth is the slowing budget/growth for data storage.
* When there's a planet of sensors/instruments, how do we build networks to accommodate the data flows?
* 80% of ESNet traffic originates or terminates outside the DOE complex (sites); that is, 20% is DOE<-->DOE. Much of the 80% is to universities.
* Large Hadron Collider (LHC) as a common but still very good example of a driver for models of network use: increasing reliance on the reliability of the network. Cf. slide (diagram/map).
* This model coupling instruments and networks will become more popular on campuses as well.
* Goal is to improve network practices globally, considering that 80% of data flows start or end outside DOE. This is where the Science DMZ comes from: best practices codified and named. A UC innovation. http://fasterdata.es.net/science-dmz/ --- NIST, NASA, NOAA all interested in implementing. Next: evolution of the Science DMZ as regional cyberinfrastructure: the Pacific Research Platform initiative, led by Larry Smarr.
* http://fasterdata.es.net is a great resource to see what other universities are doing with cutting-edge network transport; contributions are invited.

* DOE lives and dies on workshops that produce reports and other artifacts. DOE wants network requirements documented for each major science domain every 3 years. Started by administering surveys, which was a bad idea. Ethnographic approach: with funder requirement to meet, ESNet meets with researchers and gathers requirements that drive network buildout. Recipe: low hanging fruit, build relationships, achieve success, and publicize it. Interest and natural competitiveness among researchers takes care of the rest. Different from university context, which has much more breadth, diversity of domain/discipline than DOE.
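
A rough sketch of the TCP packet-loss point above, using the Mathis et al. approximation for single-stream TCP throughput (roughly MSS / (RTT * sqrt(loss rate))). The parameter values and helper function below are illustrative assumptions, not figures from the talk.

# Mathis et al. approximation: single-stream TCP throughput (bits/s)
#   ~ (MSS * 8) / (RTT * sqrt(loss_rate))
# Illustrates why long-path "elephant" flows collapse under tiny packet loss.
# All values are illustrative assumptions, not from the talk.
from math import sqrt

MSS_BYTES = 1460  # typical Ethernet TCP maximum segment size

def mathis_throughput_gbps(rtt_s, loss_rate):
    """Approximate maximum single-stream TCP throughput in Gbit/s."""
    return (MSS_BYTES * 8) / (rtt_s * sqrt(loss_rate)) / 1e9

if __name__ == "__main__":
    # Compare a metro path (~1 ms RTT) with a transcontinental path (~90 ms RTT).
    for label, rtt in [("metro, 1 ms RTT", 0.001), ("transcontinental, 90 ms RTT", 0.090)]:
        for loss in (1e-6, 1e-5, 1e-4):
            print(f"{label:28s} loss={loss:.0e}  ~{mathis_throughput_gbps(rtt, loss):7.3f} Gbit/s")

Under this model, a loss rate of one packet in a million caps a single stream on a 90 ms path at roughly 0.1 Gbit/s regardless of link capacity; this is the motivation for the clean, loss-free paths the Science DMZ pattern is built around.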

Discussion:

PLS: What kind of person should we hire as a CI Engineer? What has your experience been?
GB: One expert network engineer (Eli Dart); another network engineer with a performance background; a person with a marketing/communications background; one person with a physics background and an interest in networks. A domain scientist? Perhaps if the team were doubled in size. Next hire: a network engineer who is able to communicate and collaborate; people skills are very important, never try to hire *just* a network engineer.
PLS: Yes. Person doesn't exist in isolation. E.g., we have Erik M on this campus as a collaborator.
GB: Get someone who can channel the "just get it done" energy ... "a little bit agitated" -- to cut through the decentralization and process that is inherent to UCB.
GB: "Culture of urgency" -- Stephen Chu brought this to LBNL as a core value: urgency.
CRH: Have you considered using an ethnographer or sociologist to do some of your work?
GB: Not a professional one, but that's certainly what we do; those are the skills we seek/require.
GB: Happy to collaborate with campus CI Engineer once hired. Formal or informal. Committed to build successful network of people who want to make ESNet mission succeed.
CRH: What's the growth path of ESNet as an organization, and in terms of impact?
GB: Has accelerated significantly since we agreed we aren't LAN engineers. Building closer relationships with research groups is what drives that.
GB: Large fraction of budget maintains and grows network. Networking is going to merge with software, so investment in software engineering is becoming increasingly important. Science engagement: as much as I can afford to spend on it.
PLS: Some CIOs talk about IT being, ideally, invisible. Take issue with that: in some areas of what we do, we want to highlight what's behind and beneath research success.
GB: Network as part of scientific discovery instruments
GB: It will be key to harmonize scientific engagement efforts.
DAG: And here, at UCB, it has to be federated ... but some level of coordination / integration.
PLS: Consulting summit earlier this year, including library, DH and social sciences support. Key idea: faculty don't want to care/track who sits where on campus, they just want to be served.
GB: Key to Research IT -- become really good friends with the bleeding-edge, monster users of computation and network infrastructure.
Erik M: Many of these monster users are doing their monster use up the hill, at LBNL and NERSC.
PLS: More to the fabric than NERSC and ESNet.
GB: Would love to onboard Beamline users. We can identify some of these folks.
DAG: Would be helpful to know who those are.
Erik M: Eli Dart has mentioned some names, hope that Research IT can make these connections and bring me in when there are specific network problems to address.
GB: And by the way, we do use Salesforce
Erik M: multi-campus in state efforts to showcase
GB: CITRIS another terrific campus resource in areas that this work touches.
GB: People expect nice tools ... we recently hired a GUI expert.

 

 
