CIS Seminars & Events

Spring 2018 Colloquium Series

Unless otherwise noted, our lectures are held weekly on Tuesdays or Thursdays from 3:00 p.m. to 4:00 p.m. in Wu and Chen Auditorium, Levine Hall.

February 8th (Rescheduled to Thursday, February 22nd)

Sebastian Angel
Computer Science Department
University of Texas at Austin
Privacy despite mass surveillance


In the past decade there has been a significant increase in the collection of personal information and communication metadata (with whom users communicate, when, and how often) by governments, Internet providers, companies, and universities. While there are many ongoing efforts to secure users' communications, such as end-to-end encrypted messaging apps and email services, safeguarding metadata remains elusive.
I will present a system called Pung that makes progress on this front.
Pung lets users exchange messages over the Internet without revealing
any metadata in the process. Perhaps surprisingly, Pung achieves this
strong privacy property even when all providers (ISPs, companies, etc.) are
arbitrarily malicious.

I will also present several improvements to a general cryptographic building
block called private information retrieval (PIR), which underlies many
privacy-preserving systems, including Pung. Among these improvements, I will
discuss SealPIR, a new PIR library that is orders of magnitude more
network-efficient than the state of the art. Finally, I will briefly touch on
some of my work on verifiable computation and on interfacing with malicious USB devices.
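For readers unfamiliar with PIR: the goal is to fetch the i-th record of a remote database without the server(s) learning i. Below is a minimal sketch of the classic two-server XOR scheme, included only as background; it is not Pung's or SealPIR's construction (those are single-server schemes built on homomorphic encryption), and all names in the snippet are illustrative.

    #include <cstdint>
    #include <iostream>
    #include <random>
    #include <vector>

    // Toy two-server PIR over a database of single-byte records.
    // The client sends each server a random-looking bit vector; the vectors
    // differ only at the wanted index, so neither server alone learns it
    // (assuming the two servers do not collude).
    int main() {
        std::vector<uint8_t> db = {11, 22, 33, 44, 55, 66, 77, 88};  // replicated on both servers
        std::size_t want = 5;                                        // index the client keeps private

        std::mt19937 rng{std::random_device{}()};
        std::vector<uint8_t> q1(db.size()), q2(db.size());
        for (std::size_t i = 0; i < db.size(); ++i) q1[i] = rng() & 1;  // uniformly random bits
        q2 = q1;
        q2[want] ^= 1;  // flip only the bit of the desired record

        auto serve = [&](const std::vector<uint8_t>& q) {  // computation done by each server
            uint8_t a = 0;
            for (std::size_t i = 0; i < db.size(); ++i)
                if (q[i]) a ^= db[i];
            return a;
        };

        // XORing the two answers cancels every record selected by both queries,
        // leaving exactly db[want].
        std::cout << int(serve(q1) ^ serve(q2)) << "\n";  // prints 66
        return 0;
    }

Single-server schemes such as SealPIR instead encrypt the query under a homomorphic encryption scheme so that one untrusted server suffices; much of the network savings mentioned above comes from compressing the encrypted query the client sends.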

Sebastian Angel is a Ph.D. candidate at The University of Texas at Austin
and a visiting academic at New York University's Courant Institute of
Mathematical Sciences. His research interests are in systems, security,
and networking.

February 27th

Omer Paneth
Computer Science Department
Massachusetts Institute of Technology
Cryptography Outside the Black Box


Computational problems whose input is a program are central in Cryptography, as well as Complexity, Learning, and Optimization. The nature of such problems crucially depends on the way the program is accessed -- as a black box or explicitly by its implementation.

In which settings can we exploit code to gain an advantage over black-box access? In Cryptography, we explore this question from two opposing perspectives:

- Protecting Code: Can we obfuscate a program's code so that its functionality is preserved but it is otherwise unintelligible? Intuitively, such obfuscated code reveals nothing more than black-box access to the program. Obfuscation is, therefore, a powerful tool with numerous applications in software protection and Cryptography.

- Exploiting Code: Most security proofs in cryptography consist of a reduction that translates any potential attacker into an algorithm solving some underlying hard problem. While most security reductions only require black-box access to the attacker, for many applications black-box reductions are provably insufficient. Can we exploit the attacker's code to prove security where black-box reductions fail?

In this talk, I will describe new techniques for protecting and exploiting code, taking advantage of the inherent tension between these two tasks. I will also demonstrate applications of these techniques in and beyond cryptography.
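As background for the first question above: the modern formalization, indistinguishability obfuscation (iO), asks for an efficient compiler O with two properties (this is standard textbook material, not a statement of the speaker's results):

- Functionality: for every program (circuit) C and every input x, O(C)(x) = C(x).

- Indistinguishability: for any two equal-size programs C0 and C1 that compute the same function, O(C0) and O(C1) are computationally indistinguishable.

The stronger "virtual black-box" requirement, that obfuscated code reveal nothing beyond black-box access to the program, is known to be impossible for general programs, which is why iO has become the standard target.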



Omer Paneth is a postdoctoral associate in the cryptography group at MIT. His research interests include program obfuscation, verifiable delegation, and secure protocols. He earned his Ph.D. from Boston University in 2016.


March 1st

Stefano Tessaro
Computer Science Department
University of California, Santa Barbara
The Foundations of Applied Cryptography


The pervasive adoption of cryptography makes it imperative to assess when proposed solutions are secure, and when they are not. This talk will explain that proving meaningful security guarantees for practical cryptography often requires new and deep theoretical advances. I will showcase three examples from my recent and ongoing research.

First, I will show that the strongest known provable guarantees for common designs of block ciphers, the fundamental atomic building blocks of cryptography, follow from new general information-theoretic
techniques I have developed.

Second, I will introduce a comprehensive theory of so-called multi-user attacks, which aims in particular at measuring security under Internet-scale deployment, and use it to validate existing and upcoming cryptographic standards.
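As a point of reference for the multi-user setting (background, not the speaker's result): a generic hybrid argument shows that security degrades at most linearly in the number of users u,

    Adv_mu(A) <= u * Adv_su(B),

for some single-user adversary B with comparable resources. At Internet scale u can be in the billions, so this naive bound is often too weak; a dedicated multi-user theory aims to prove tighter bounds, or to identify designs (for example, using per-user nonces or salts) for which the degradation disappears.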

Finally, I will present proofs of best-possible lower bounds on space-time trade-offs that validate in-use mechanisms to protect passwords from attackers with access to special-purpose hardware, such as GPUs and ASICs.


Stefano Tessaro is an Assistant Professor of Computer Science at UC Santa Barbara, where he holds the Glen and Susanne Culler Chair. He received his PhD from ETH Zurich in 2010, and held postdoctoral
positions at UC San Diego and MIT. His research interests span a wide range of topics across cryptography and its applications. He has received the Sloan Fellowship, the NSF CAREER Award, the Hellman Fellowship, the best paper award at EUROCRYPT 2017, and the best student paper award at TCC 2011.



March 13th

Taesoo Kim
School of Computer Science
College of Computing, Georgia Tech
Scaling Security Practices: Automated Approaches to Eliminate Security Vulnerabilities



Computer systems are highly vulnerable; attackers discover new security
vulnerabilities every day and exploit them to compromise the target
systems. This talk will present our approaches to automatically
prevent software vulnerabilities from being exploited. In particular,
this talk will describe in detail two classes of vulnerabilities: an
emerging class, called "type confusion" (or "bad casting"), and a new
class we discovered, called "uninitialized padding," which causes
information leakage in the Linux kernel. This talk will explain what
these vulnerabilities are, how attackers exploit them, why and how
developers introduce them, and why it is non-trivial to avoid them in
complex, real-world programs. Finally, I will demonstrate approaches to
automatically eliminate them in practice.
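To make the first class concrete, here is a minimal, self-contained illustration of type confusion ("bad casting") in C++. It is an illustrative toy with hypothetical class names, not code from the systems discussed in the talk:

    #include <iostream>

    // A hypothetical class hierarchy, as might appear in a browser or kernel.
    struct Node { virtual ~Node() = default; };
    struct Box  : Node { long width = 0; };
    struct Text : Node { const char* chars = "hello"; };

    long width_of(Node* n) {
        // BUG: static_cast performs no runtime check. If n actually points to
        // a Text, the bytes holding Text::chars are read as Box::width --
        // classic type confusion, which attackers can often turn into an
        // information leak or memory corruption.
        Box* b = static_cast<Box*>(n);      // should be dynamic_cast plus a null check
        return b->width;
    }

    int main() {
        Text t;
        std::cout << width_of(&t) << "\n";  // undefined behavior: leaks a pointer value
        return 0;
    }

The second class is similar in spirit: compilers insert padding bytes inside structs for alignment, and if the kernel copies such a struct to user space without zeroing it first, those padding bytes can carry stale kernel memory out to an attacker.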


Taesoo Kim is a Catherine M. and James E. Allchin Early Career
Assistant Professor in the School of Computer Science at the Georgia
Institute of Technology (Georgia Tech). He also serves as the director
of the Georgia Tech Systems Software and Security Center (GTS3). He is
genuinely interested in building systems that put security principles
first. Those principles include the overall design of the system,
analysis of its implementation, elimination of certain classes of
vulnerabilities, and clear separation of its trusted components. His
thesis work, in particular, focused on detecting and recovering from
attacks on computer systems, known as “undo computing.” Taesoo holds
an S.M. (2011) and a Ph.D. (2014) from MIT.


March 14th

Alexandros Daglis
École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Towne 337, 10am - 11:30am
Network-Centric Computing for Online Services

Modern datacenters offer an abundance of online services to billions of daily users. The most demanding online services come with tight response latency requirements, forcing service providers to keep their massive datasets memory-resident, distributed across the datacenter’s servers. Every user request accesses the memory of hundreds of servers; therefore, fast access to the aggregate memory pool is of crucial importance for service quality. The entity binding all the memory resources together is the datacenter network. Unfortunately, despite the dramatic evolution of datacenter fabrics over the past decade, networking still incurs a latency overhead of tens of microseconds to every remote memory access, making memory pooling impractical. In this talk, I will propose a holistic network-centric system redesign to enable memory pooling in future datacenters, which includes (i) a specialized lightweight network stack; (ii) on-chip integration of the network interface logic; and (iii) new network operations with richer, end-to-end semantics. I will highlight the role and demonstrate the effect of each of these three key features with systems built throughout my dissertation. 

Short bio: 
Alexandros (Alex) Daglis is a sixth-year PhD student at EPFL, advised by Prof. Babak Falsafi and Prof. Edouard Bugnion. His research interests lie in rack-scale computing and datacenter architectures. Alex advocates tighter integration and co-design of network and compute resources as a necessary approach to tackling the performance overheads associated with inter-node communication in scale-out architectures. He has been a founding member of Scale-Out NUMA, an architecture, programming model, and communication protocol for low-latency, distributed in-memory processing. Scale-Out NUMA has been prototyped and awarded a US patent. As an intern at HP Labs, Alex worked on the design of The Machine’s unique memory subsystem. More details can be found on his CV.

March 15th

Seyed Majid Zahedi
Computer Science Department
Duke University
Managing Shared Resources in the Data Center Era: Computer Architecture Meets Game Theory



Resource sharing is vital to improving efficiency and amortizing cost in high-performance computer systems. Within modern data centers, users selfishly pursue individual performance without regard for others or the system. To address this challenge and study the strategic behaviors of self-interested users, I turn to algorithmic economics and game theory. In this talk, I rethink resource management in computer architecture and systems, constructing mechanisms that are robust to strategic behavior. First, I describe novel methods to manage shared power supplies in modern data centers. Second, I introduce a processor bidding mechanism that operationalizes Amdahl’s Law to guarantee proportional shares. Finally, I demonstrate dynamic mechanisms that allocate shared resources across time. 
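For context, the bidding mechanism builds on the classical Amdahl's Law (stated here as background, not as the mechanism itself): a program whose parallelizable fraction is f, run on N cores, speeds up by at most

    Speedup(f, N) = 1 / ((1 - f) + f / N).

Because the marginal benefit of additional cores diminishes as N grows, users with different f value processors very differently, which is exactly the kind of heterogeneity a market-style allocation mechanism can exploit.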

Seyed Majid Zahedi is a Ph.D. candidate in Computer Science at Duke University, advised by Prof. Benjamin C. Lee. Prior to his Ph.D., he obtained his M.S. and B.S. degrees from the University of Tehran, Iran. His research focuses on the intersection of computer architecture, computer systems, and economic game theory. His work has been recognized by the computer architecture community: his paper on the Amdahl bidding mechanism was nominated for the Best Paper Award at HPCA'18; his paper on computational sprinting received the Best Paper Award at ASPLOS'16, was selected as a CACM Research Highlight, and was distinguished as an IEEE Micro Top Picks Honorable Mention; and his paper on multi-resource allocation was recognized as one of the IEEE Micro Top Picks from the Computer Architecture Conferences of 2014.


March 20th

Huijia (Rachel) Lin
Computer Science Department,
University of California, Santa Barbara
On The Foundations of Program Obfuscation



Program obfuscation is a method for transforming (or “scrambling”) any computer program into one that is functionally equivalent, yet “unintelligible.” It not only promises to be a powerful tool for software protection, but has also reshaped the theory of cryptography in the last few years, with previously unimagined cryptographic applications, and it has intriguing applications in complexity theory.

In this talk, I will explain why the existence of sound program obfuscation has remained elusive, and how my work has made the most significant progress in establishing its feasibility. Previous program obfuscation schemes all rely on the same algebraic structure, called multilinear maps, which are highly complex and have no well-understood instantiations. My research has significantly weakened the algebraic structure needed for obtaining program obfuscation to just "trilinear maps," bringing us "one step away" from the holy grail of basing program obfuscation on standard (and well-studied) cryptographic structures, such as bilinear maps on elliptic curves.
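For background on the algebraic structures mentioned above (a standard definition, not part of the speaker's results): a k-linear map is an efficiently computable pairing e : G^k -> G_T on groups of prime order such that

    e(g^{a_1}, ..., g^{a_k}) = e(g, ..., g)^{a_1 * a_2 * ... * a_k},

while discrete-log-type problems in the groups remain hard. For k = 2 (bilinear maps), well-studied instantiations exist from pairings on elliptic curves; for k >= 3, no comparably well-understood candidate is known, which is why reducing the requirement from general multilinear maps to trilinear maps is described as being one step away from bilinear maps.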


Huijia (Rachel) Lin is an assistant professor in the Department of Computer Science at UC Santa Barbara. She holds a PhD from Cornell University and was a joint postdoc at MIT and Boston University before joining UCSB. Her work deals with the foundations of cryptography. She has been awarded the NSF CAREER Award, the Hellman Fellowship, the Microsoft PhD Research Fellowship, the best paper award at EUROCRYPT ‘18, and the best-paper honorable mention at EUROCRYPT ‘16.


March 22nd

Saurabh Gupta
Computer Science Department
University of California at Berkeley
Visual Perception and Navigation in 3D Scenes 


Abstract: In recent times, computer vision has made great leaps towards 2D understanding of sparse visual snapshots of the world. This is insufficient for robots that need to exist and act in the 3D world around them based on a continuous stream of multi-modal inputs. In this talk, I will present some of my efforts in bridging this gap between computer vision and robotics. I will show how thinking about computer vision and robotics together brings out limitations of current computer vision tasks and techniques, and motivates joint study of perception and action. I will showcase these aspects via three examples: representation learning for varied modalities, 3D scene understanding, and visual navigation. I will conclude by pointing out future research directions at the intersection of computer vision and robotics, thus showing how the two fields are ready to get back together.

Bio: Saurabh Gupta is a Ph.D. student at UC Berkeley, where he is advised by Jitendra Malik. His research interests include computer vision, robotics and machine learning. His PhD work focuses on 3D scene understanding, and visual navigation. His work is supported by a Berkeley Fellowship and a Google Fellowship in Computer Vision.


March 27th

Sang-Woo Jun
Computer Science & Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology
3:00 – 4:30 pm
Big Data Analytics Made Affordable: Hardware-Accelerated Flash Storage for Graph Analytics and Other Applications


The economic value of the data that is continuously being collected in social networks, web pages, sensor networks etc. is dependent on our ability to analyze it in a timely and affordable manner. I will present a new system architecture for big data analytics using distributed flash storage augmented with reconfigurable hardware accelerators. I will show that our custom-designed prototype storage device plugged into a PC, coupled with novel external algorithms for graph analytics, was able to outperform cutting-edge graph analytics systems running on much costlier server-class systems for multi-terabyte datasets. I will also describe how our approach can be applied to important applications in bioinformatics and artificial intelligence.

Bio: Sang-Woo Jun is a Ph.D. candidate at the Massachusetts Institute of Technology. His research interests are in systems for big data analytics, especially using application-specific hardware acceleration and NVM storage.

March 28th

Stephen Bach
Computer Science Dept.
Stanford University
337 Towne, 10:00 – 11:30am
Programming Statistical Machine Learning with High-Level Knowledge



Machine learning is fundamentally changing how software is developed. Rather than program behavior directly, many developers now curate training data and engineer features, but the process is slow, laborious, and expensive. In this talk I will describe two multi-year projects to study how high-level knowledge can be programmed more directly into statistical machine learning models. The resulting prototypes are used in dozens of major technology companies and research labs, and in collaboration with government agencies like the U.S. Department of Veterans Affairs and U.S. Food and Drug Administration.

The first project is Snorkel, a framework for training statistical models with multiple user-written rules instead of hand-labeled training data. This alternative supervision paradigm raises new questions in statistical machine learning, such as how to learn from noisy sources that can have rich dependency structures like correlations, and how to estimate these structures fast enough for interactive development. Snorkel powers applications, such as reading electronic health records, that otherwise would not admit a learning approach because of the difficulty of curating training data.
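A minimal sketch of the weak-supervision idea follows (illustrative only; this is not Snorkel's actual API, and Snorkel replaces the majority vote below with a learned generative model that estimates each source's accuracy and correlations):

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // Each "labeling function" encodes one user-written heuristic: it votes
    // SPAM or HAM on a document, or abstains when it has no opinion.
    enum Label { ABSTAIN = -1, HAM = 0, SPAM = 1 };
    using LabelingFn = std::function<Label(const std::string&)>;

    int main() {
        std::vector<LabelingFn> lfs = {
            [](const std::string& x) { return x.find("free money") != std::string::npos ? SPAM : ABSTAIN; },
            [](const std::string& x) { return x.find("unsubscribe") != std::string::npos ? SPAM : ABSTAIN; },
            [](const std::string& x) { return x.find("meeting notes") != std::string::npos ? HAM : ABSTAIN; },
        };

        std::string doc = "win free money now, click to unsubscribe";

        // Combine the noisy votes; real systems learn how much to trust each source
        // and then use the combined labels to train a discriminative model.
        int tally[2] = {0, 0};
        for (const auto& lf : lfs) {
            Label l = lf(doc);
            if (l != ABSTAIN) ++tally[l];
        }
        std::cout << (tally[SPAM] >= tally[HAM] ? "SPAM" : "HAM") << "\n";  // prints SPAM
        return 0;
    }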

The second project is probabilistic soft logic (PSL), a probabilistic programming language for building large-scale statistical models over structured data like biological and social networks using logical rules. PSL scales up structured inference based on a new equivalence result among seemingly distinct convex relaxation techniques for combinatorial optimization. By enabling structured, statistical inference at scale, PSL unlocks new modeling techniques in domains such as bioinformatics and knowledge base construction.
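To give a flavor of how PSL turns logic into convex optimization (standard PSL background, simplified): atoms take soft truth values in [0, 1], and connectives are relaxed using the Łukasiewicz logic,

    I(A AND B) = max(0, I(A) + I(B) - 1)
    I(A OR B)  = min(1, I(A) + I(B))
    I(NOT A)   = 1 - I(A),

so a weighted rule such as Friends(x, y) AND Votes(x, p) -> Votes(y, p) incurs a hinge-shaped penalty equal to its distance to satisfaction, max(0, I(body) - I(head)). MAP inference then becomes a convex program over the soft truth values, which is what lets this style of structured inference scale to large networks.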


Stephen Bach is a postdoctoral scholar in the computer science department at Stanford University, advised by Christopher Ré. He received his Ph.D. in computer science from the University of Maryland, where he was advised by Lise Getoor. His research focuses on statistical machine learning methods that exploit high-level knowledge, through projects like Snorkel and probabilistic soft logic. Stephen's thesis on probabilistic soft logic was recognized with the University of Maryland's Larry S. Davis Doctoral Dissertation Award.

March 29th

Samira Khan
Computer Science Department,
University of Virginia
Solving the DRAM Scaling Challenge



Technology scaling of DRAM cells has enabled higher-capacity memory for the last few decades. Unfortunately, DRAM cells become vulnerable to failure as they scale down to a smaller size. Enabling high-performance, energy-efficient, scalable memory systems without sacrificing reliability is a major research challenge. My work focuses on designing a scalable memory system by rethinking the traditional assumptions about abstraction and the separation of responsibilities across system layers.

In this talk, I will discuss three fundamental ways to enable DRAM scaling. First, we can enable scaling by letting the manufacturers build smaller cells without providing any strict reliability guarantee.  I envision manufacturers shipping DRAMs without fully ensuring correct operation, and the system being responsible for detecting and mitigating DRAM failures while operating in the field. However, designing such a system is difficult due to intermittent DRAM failures. In this talk, I will discuss a system design, capable of providing reliability guarantees even in the presence of intermittent failures. Second, we can enable high-capacity memory leveraging the emerging non-volatile memory technologies that are predicted to be more scalable. I will present my vision to redefine the hardware and operating system interface to unify memory and storage system with non-volatile memory and discuss the opportunities and challenges of such a system. Third, tolerating failures in the application can improve memory scalability. The fundamental challenge of such a system is how to assure, verify, and quantify the quality of the results. I envision a system that limits the impact of memory failures such that it is possible to statically determine the worst-case results from the maximum possible error in the input. 


Samira Khan is an Assistant Professor at the University of Virginia (UVa). Prior to joining UVa, she was a postdoctoral researcher at Carnegie Mellon University, funded by Intel Labs. Her research focuses on improving the performance, efficiency, and reliability of the memory system. She is the recipient of the NSF CRII Award, the NSF GOALI Award, and the Rising Stars in EECS Award. She received her PhD from the University of Texas at San Antonio. During her graduate studies, she worked at Intel, AMD, and EPFL.


April 3rd

Radhika Mittal
Electrical Engineering and Computer Science
University of California at Berkeley



April 12th

Nika Haghtalab
Computer Science Department,
Carnegie Mellon University




April 17th

Jacob Andreas
Computer Science Department
University of California, Berkeley
Learning from Language



Natural language is built from a library of concepts and compositional operators that provide a rich source of information about how humans understand the world.  Can this information help us build better machine learning models? In this talk, we'll explore three ways of integrating compositional linguistic structure and learning: using language as a source of modular reasoning operators for question answering, as a scaffold for fast and generalizable reinforcement learning, and as a tool for understanding representations in neural networks.

Jacob Andreas is a fifth-year PhD student at UC Berkeley working in natural language processing. He holds a B.S. from Columbia and an M.Phil. from Cambridge, where he studied as a Churchill Scholar. His papers were recognized at NAACL 2016 and ICML 2017. Jacob has been an NSF graduate fellow, a Huawei–Berkeley AI fellow, and a Facebook fellow.


April 19th

Judy Hoffman
Electrical Engineering and Computer Science
University of California at Berkeley
Adaptive Adversarial Learning for a Diverse Visual World



Automated visual recognition is in increasingly high demand. However, despite tremendous performance improvements in recent years, state-of-the-art deep visual models learned using large-scale benchmark datasets still fail to generalize to the diverse visual world. In this talk, I will discuss a general-purpose semi-supervised learning algorithm, domain adversarial learning, which facilitates the transfer of information between different visual environments and across different semantic tasks, thereby enabling recognition models to generalize to previously unseen worlds. I’ll demonstrate applications of this approach to different visual tasks, such as semantic segmentation in driving scenes and transfer between still-image object recognition and video action recognition.
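One common way to write down the domain-adversarial idea (a standard DANN-style formulation offered as background, rather than the specific models in the talk): a feature extractor F and task classifier C are trained to minimize the task loss, while a domain discriminator D, which tries to tell source from target features, is trained against them,

    min over F, C   max over D   E[ L_task(C(F(x_s)), y_s) ] - lambda * E[ L_domain(D(F(x)), d) ],

so the learned features both solve the task and become indistinguishable across domains. In practice the inner maximization is implemented with a gradient-reversal layer or alternating updates.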


Judy Hoffman is a postdoctoral researcher at UC Berkeley. Her research lies at the intersection of computer vision and machine learning with a specific focus on semi-supervised learning algorithms for domain adaptation and transfer learning. She received a PhD in Electrical Engineering and Computer Science from UC Berkeley in 2016. She is the recipient of the NSF Graduate Research Fellowship, the Rosalie M. Stern Fellowship, and the Arthur M. Hopkin award for seriousness of purpose and high academic achievement. She is also a founder of the WiCV workshop (women in computer vision) co-located at CVPR annually.


April 24th

Hannaneh Hajishirzi
Electrical Engineering and Computer Science
University of Washington




April 26th

Ali Farhadi
Computer Science Department,
University of Washington
