
We hope you can join us for the next Research IT Reading Group, an open discussion about the implementation and use of the Science DMZ at UC Berkeley. Erik McCroskey, who presented at a 2014 reading group session on the Science DMZ, will give an overview of what IST's Network Group does and how that work can support computational research. Erik will also talk about Berkeley's Science DMZ and discuss some of the use cases it can support.


When: 
Thursday, Sept 22, 2016 from 12 - 1pm
Where: 200C Warren Hall, 2195 Hearst St (see building access instructions on parent page).
What: The Science DMZ at UC Berkeley, a brown bag lunch and discussion
Presenting: Erik McCroskey (IST - Network Services) & Patrick Schmitz (Research IT)

To prepare for the meeting and discussion, please read/review the following beforehand:


Those who wish to dive more deeply can review additional chapters from An Introduction to Computer Networks, and/or The TCP/IP Guide. Reading group stalwarts may also recall Greg Bell's related presentation and discussion about 18 months ago; Greg's slide deck is available for review on the Reading Group notes page for that session: Science DMZ - a Network Design Pattern for Data Intensive Science.

Presenting: Erik McCroskey, IST Network
Facilitating: Patrick Schmitz

Attending:

Aron Roberts, Research IT
Bill Allison, CTO / IST-API
Camille Crittenden, CITRIS
Jack Shnell, IST-Data Storage
Jason Christopher, Research IT
Kelly Rowland, Research IT
Leon Wong, SNS
Maurice Manning, Research IT
Patrick Schmitz, Research IT
Perry Willett, CDL
Quinn Dombrowski, Research IT
Rick Jaffe, Research IT
Steve Masover, Research IT

 

Patrick: We've talked before about the Science DMZ in general, but we invited Erik today for a deeper, more practical discussion of what the network team does, what the campus network architecture looks like, and how the Science DMZ functions here at Cal.

Erik:

o What my group does, how we can help Research IT, and when the network group ought to be brought into a consultation or engagement
o Erik's group runs the network for the campus, with some exceptions: EECS; Athletics, which runs a small network; and the residence halls, where Res Comp and the network group collaborate to support the networks

 


Whiteboard drawing of high-level UCB network topology: border routers with 10G and 100G connections to CENIC/CalREN; the core; zone routers connected to the core at 10G; root switches in buildings (~80 of them) connected at 1G, sometimes 2G or 4G; multiple edge switches within buildings; wall jacks and WAPs (~4000 on the campus, controlled by a controller that lives, conceptually at least, at about the "zone" level); and the data center, with ~40 edge switches and the servers behind them.
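To make the topology concrete, here is a minimal, purely illustrative sketch (not something shown at the session) that walks a path from a wall jack out to CalREN using the nominal link speeds from the whiteboard and finds where the bottleneck sits; the 500 GB dataset size is an assumed example.

```python
# Illustrative only (not from the session): a toy walk through the whiteboard
# topology to find where the bottleneck sits on a path from a lab machine out
# to CalREN. Link speeds are the nominal figures from the drawing; the dataset
# size is a made-up example.

# nominal link speeds along one possible path, in gigabits per second
path_gbps = {
    "wall jack -> building edge switch": 1,
    "edge switch -> building root switch": 1,
    "root switch -> zone router": 1,        # buildings connect at 1G, sometimes 2G/4G
    "zone router -> core": 10,
    "core -> border router": 100,
    "border router -> CENIC/CalREN": 100,
}

bottleneck = min(path_gbps, key=path_gbps.get)
bottleneck_gbps = path_gbps[bottleneck]

dataset_gb = 500  # hypothetical 500 GB dataset, not a figure from the talk
best_case_seconds = dataset_gb * 8 / bottleneck_gbps  # ignores protocol overhead

print(f"Bottleneck link: {bottleneck} at {bottleneck_gbps} Gb/s")
print(f"Best-case time to move {dataset_gb} GB: {best_case_seconds / 60:.1f} minutes")
```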

 


o The network group configures all of these devices (routers and switches) with IP addresses, etc.
o Wireless is becoming the most common, default connection across the campus.
o The cable is single-mode fiber (light doesn't bounce around inside the fiber, so signals don't spread out and collide with adjacent bits over long distances).
o The cable pulled across campus is capable of faster connections than the switches it is often connected to; as a result, speeding up a network connection mostly means replacing switches (roughly $10K list, $5-6K actual; optics are ~$500 each; plus a couple of hours of staff time) to get 10G into an individual building.
o Risers are fiber as well, so the same holds for reaching the edge switches within buildings; it's the cable from there to the individual wall jacks and computers that might need an upgrade. Cable drops cost ~$309 each; for fiber, a contractor assesses the work involved.
o The network group will provide 1G switches at no marginal cost to departments; 10G requires a charge. They are planning a Cat6a cabling standard for new buildings, which will be capable of carrying 10G. There are some oddities in pricing 10G switches in buildings: the first user who orders 10G requires a new 10G switch, but subsequent orders in that building/location need no new switch, so how should that be costed out? (See the rough cost sketch after this list.)
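As a back-of-the-envelope illustration of that pricing question: the dollar figures below are the rough numbers from the discussion, while the labor rate and the assumed number of eventual 10G users in a building are made up for the example, not anything stated at the session.

```python
# Back-of-the-envelope sketch of the 10G pricing oddity, illustrative only.
# Dollar figures are the rough numbers from the discussion; the labor rate and
# the assumed number of eventual 10G users in a building are made up.

switch_cost = 5500   # "~10K list, 5-6K" for a 10G-capable switch
optic_cost = 500     # "optics are $500 ea"
labor_cost = 200     # "a couple hours of time", assumed at ~$100/hour

def charge_first_user():
    """Bill the whole switch to whoever orders 10G in the building first."""
    first = switch_cost + optic_cost + labor_cost
    later = optic_cost + labor_cost  # switch is already in place for later orders
    return first, later

def amortized_cost(expected_users=5):
    """Spread the switch cost over an assumed number of eventual 10G users."""
    return switch_cost / expected_users + optic_cost + labor_cost

first, later = charge_first_user()
print(f"Charge-the-first-user model: first pays ${first:,.0f}, later users pay ${later:,.0f}")
print(f"Amortized over 5 users: everyone pays about ${amortized_cost(5):,.0f}")
```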

Q (Patrick, Rick, Maurice, Camille): How would you put a DTN next to an instrument?
EM: Instruments (e.g., a microscope) connect to switches (?), not to a DTN, so getting data from the scope to a Science DMZ DTN happens over the campus network. A DTN next to an instrument makes sense only if the instrument has a 40G or 100G interface to the network, which EM believes is not a common use case at all (many of these instruments are driven by computers running older OSes that cannot transmit data at today's speeds). Note that this case assumes the transfer is to an off-campus location -- not, say, from an instrument to Savio's storage / computation.
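A small sketch of the reasoning behind that answer (the host I/O rates are assumptions for illustration; the link speeds mirror the cases discussed): the effective transfer rate is capped by the slowest of the instrument host's I/O, its network interface, and the path, so parking a fast DTN next to an older-OS instrument computer buys nothing.

```python
# Illustrative sketch: the effective transfer rate is limited by the slowest of
# the instrument host's I/O, its network interface, and the network path. Host
# I/O rates below are assumptions; link speeds mirror the cases discussed.

def effective_gbps(host_io_gbps, nic_gbps, path_gbps):
    """Rough effective rate: the minimum of host I/O, NIC, and path speed."""
    return min(host_io_gbps, nic_gbps, path_gbps)

# Older-OS instrument PC (assume ~0.5 Gb/s usable I/O) on the standard 1G campus path
print(effective_gbps(host_io_gbps=0.5, nic_gbps=1, path_gbps=1))      # -> 0.5

# Same instrument PC with a hypothetical 100G DTN parked next to it:
# the host is still the bottleneck, so the fast interface buys nothing
print(effective_gbps(host_io_gbps=0.5, nic_gbps=100, path_gbps=100))  # -> 0.5

# An instrument that can genuinely source 40G-class data is the case where a
# co-located DTN (and a 40G/100G interface) starts to make sense
print(effective_gbps(host_io_gbps=40, nic_gbps=40, path_gbps=100))    # -> 40
```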

Q (Patrick): Real-time feedback loop case. How reliable is a transfer, even across campus over a 1G link -- any stuttering or other drops across that distance?
EM: Likely to go without a hitch, unless you're talking about a 1-2 millisecond round trip. Speeds of 100-200 MB/s are what the standard campus network is built for. From the data center, a couple of gigabits per second is not a problem.
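One way to see why the cross-campus case is benign is the bandwidth-delay product (a standard rule of thumb, not something computed at the session): at campus round-trip times of a millisecond or two, a 1G link needs only a few hundred kilobytes in flight to stay full, whereas a long-haul path needs far more. The wide-area figures below are assumed for contrast.

```python
# Bandwidth-delay product sketch, illustrative only. The campus RTT uses the
# 1-2 ms figure mentioned above; the wide-area comparison numbers are assumptions.

def bdp_bytes(link_gbps, rtt_ms):
    """Data that must be in flight to keep the link full: rate x round-trip time."""
    return link_gbps * 1e9 / 8 * (rtt_ms / 1000)

print(f"Campus path, 1 Gb/s at 2 ms RTT:   {bdp_bytes(1, 2) / 1024:.0f} KiB in flight")
print(f"Assumed WAN, 10 Gb/s at 60 ms RTT: {bdp_bytes(10, 60) / 1024**2:.0f} MiB in flight")
```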

o The Science DMZ is physically located in Sutardja Dai Hall. The data node that was in the Evans basement (the connection to the internet beyond campus) was moved there at some point after the data center was moved from Evans to Warren Hall.

Q: How frequently are there problems with the routers between buildings / zones?
EM: Not frequently -- maybe monthly. Sometimes it's equipment failure; sometimes it's someone jostling a connection while knocking about in one of the "communications vaults" that provide access to the connections between cable runs laid around the campus.

 

 
