CIS Seminars & Events

Spring 2016 Colloquium Series

Unless otherwise noted, our lectures are held weekly on Tuesdays and/or Thursdays from 3:00 p.m. to 4:15 p.m. in Wu and Chen Auditorium, Levine Hall.

 

January 28th, 2016

Mayur Naik
Georgia Tech
"A Declarative Approach to Leverage "Big Code" for Program Reasoning"

 


Abstract:
Big Code, the collective knowledge amassed from analyzing programs, presents a timely and unprecedented opportunity to improve existing methods for program reasoning and enable newer ones. I will present Petablox, a new declarative paradigm and system based on the logic programming language Datalog, as a foundation to realize this objective. Petablox addresses challenges of application diversity and implementation complexity in seeking a broad and deep unification that has eluded past declarative efforts in program analysis. Starting from a single common specification of any program analysis in Datalog, it automatically synthesizes Big-Code tasks such as tailoring program abstractions to analysis queries, transferring analysis facts across programs, and incorporating user feedback to improve analysis results over time. Despite their diversity, Petablox reduces all these tasks to the maximum satisfiability problem, an optimization extension of the Boolean satisfiability problem. I will also describe new algorithmic techniques in Petablox to solve very large instances of this problem that arise not only in our domain but also in other domains such as Big Data analytics and statistical AI.
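
To make the declarative style concrete, here is a minimal sketch (not Petablox's actual rules) of how a program analysis reads in Datalog: analysis facts are relations, rules derive new facts, and evaluation runs to a fixpoint. The call-graph facts below are invented for illustration; the fixpoint loop is what a Datalog engine provides for free.

```python
# Illustrative only: a Datalog-style "reachability" analysis written as a
# naive fixpoint in Python. The facts and rules are toy stand-ins, not
# Petablox's actual analysis specification.

# Base facts: direct call edges extracted from a hypothetical program.
call = {("main", "parse"), ("parse", "alloc"), ("main", "log")}

# Datalog rules:  reach(X, Y) :- call(X, Y).
#                 reach(X, Z) :- reach(X, Y), call(Y, Z).
reach = set(call)
changed = True
while changed:                              # iterate to a fixpoint
    changed = False
    for (x, y) in list(reach):
        for (y2, z) in call:
            if y == y2 and (x, z) not in reach:
                reach.add((x, z))
                changed = True

print(sorted(reach))    # all transitive caller/callee pairs
```

Tasks such as tailoring abstractions then become optimization over which derivations to keep, which is where the reduction to maximum satisfiability enters.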

Bio:
Mayur Naik has been an Assistant Professor in the School of Computer Science at Georgia Tech since 2011. His research interests lie in areas related to programming systems, with a current emphasis on program analysis techniques for improving software quality and programmer productivity on modern computing platforms. He is a recipient of Distinguished Paper awards at FSE'15, PLDI'14, and ICSE'09, the Lockheed-Martin Dean's Award for Excellence in Teaching at Georgia Tech (2015), and an NSF CAREER award (2013). He received his Ph.D. in Computer Science from Stanford University in 2008 and was a Research Scientist at Intel Labs, Berkeley from 2008 to 2011.



 


 

February 4th, 2016

Dan Roth
Computer Science and the Beckman Institute
University of Illinois at Urbana-Champaign
"Constraints Driven Learning and Inference for Natural Language Understanding"

 


Abstract:
Machine Learning and Inference methods have become ubiquitous and have had a broad impact on a range of scientific advances and technologies and on our ability to make sense of large amounts of data. I will describe some of our research on developing learning and inference methods in pursuit of natural language understanding. This challenge often involves assigning values to sets of interdependent variables and thus frequently necessitates performing global inference that accounts for these interdependencies. I will focus on algorithms for training these global models using indirect supervision signals. Learning models for these structured tasks is difficult, partly because generating supervision signals is costly. We show that it is often easy to obtain a related indirect supervision signal, and discuss algorithmic implications as well as options for deriving this supervision signal, including inducing it from the world's response to the model's actions. Much of this work is done within the unified computational framework of Constrained Conditional Models (CCMs), an Integer Linear Programming formulation that augments statistically learned models with declarative constraints as a way to support learning and reasoning. Within this framework, I will discuss old and new results pertaining to learning and inference and how they are used to push forward our ability to understand natural language.
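
As a concrete illustration of the CCM formulation, the sketch below picks the joint label assignment that maximizes learned scores subject to a declarative constraint. The words, labels, scores, and constraint are all invented for this example, and a brute-force search stands in for the Integer Linear Programming solver a real CCM would use.

```python
from itertools import product

words = ["Rome", "flew", "to", "Paris"]
labels = ["PER", "LOC", "O"]

# Hypothetical per-word label scores (stand-ins for a learned model).
score = {
    "Rome":  {"PER": 0.4, "LOC": 0.5, "O": 0.1},
    "flew":  {"PER": 0.0, "LOC": 0.0, "O": 1.0},
    "to":    {"PER": 0.0, "LOC": 0.0, "O": 1.0},
    "Paris": {"PER": 0.7, "LOC": 0.6, "O": 0.1},
}

def satisfies_constraints(assignment):
    # Declarative constraint: the word following "to" must be a location.
    for i, w in enumerate(words):
        if w == "to" and i + 1 < len(words) and assignment[i + 1] != "LOC":
            return False
    return True

best = max(
    (a for a in product(labels, repeat=len(words)) if satisfies_constraints(a)),
    key=lambda a: sum(score[w][l] for w, l in zip(words, a)),
)
print(dict(zip(words, best)))
```

Note that the constraint overrides the locally higher PER score for "Paris": the declarative knowledge corrects the statistical model, which is the point of the framework.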

Bio:
Dan Roth is a Professor in the Department of Computer Science and the Beckman Institute at the University of Illinois at Urbana-Champaign and a University of Illinois Scholar. Roth is a Fellow of the American Association for the Advancement of Science (AAAS), the Association for Computing Machinery (ACM), the Association for the Advancement of Artificial Intelligence (AAAI), and the Association for Computational Linguistics (ACL), for his contributions to Machine Learning and to Natural Language Processing. He has published broadly in machine learning, natural language processing, knowledge representation and reasoning, and learning theory, and has developed advanced machine-learning-based tools for natural language applications that are widely used by the research community and commercially. Roth is the Editor-in-Chief of the Journal of Artificial Intelligence Research (JAIR). He was the program chair of AAAI'11, ACL'03, and CoNLL'02. Prof. Roth received his B.A. summa cum laude in Mathematics from the Technion, Israel, and his Ph.D. in Computer Science from Harvard University in 1995.


 


 

February 9th, 2016

James Zou
Microsoft Research
Massachusetts Institute of Technology
"Harnessing the unseen for next-generation data science"

 


Abstract: Massive datasets are being generated that have the potential to transform science, engineering, and medicine. In order to harness the power of this data avalanche, it is crucial to model the information and structures that we do not explicitly see in the data. I will first illustrate this concept by discussing my close collaboration with one of the largest human genome sequencing consortia (ExAC). To make sense of this genetic haystack and leverage it to quantify disease-causing mutations, it was essential to estimate the statistical structures of unseen mutations. We developed a new, scalable algorithm to solve this problem. Our algorithm has strong mathematical guarantees and has broad applications in machine learning. In the second part of the talk, I will discuss the omnipresent challenge of biases arising from data exploration, whereby many of the apparent patterns that we see in data are false discoveries. We developed a general approach based on information usage to bound the unseen biases due to exploratory data analysis. We also present rigorous techniques to reduce bias. I will conclude with general lessons for machine learning and data science and discuss new research directions.
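
The flavor of the "unseen mutations" problem can be conveyed with the classic Good-Turing estimator, which is illustrative only and not the speaker's algorithm: from observed variant counts, estimate how much probability mass belongs to variants never seen in the sample.

```python
# Good-Turing estimate of unseen probability mass (illustrative toy).
# variant_counts: how many times each distinct variant appeared in a
# hypothetical cohort.
variant_counts = [1, 1, 1, 2, 2, 3, 5, 8]

n = sum(variant_counts)                          # total observations
f1 = sum(1 for c in variant_counts if c == 1)    # number of singletons

# Good-Turing: P(next observation is a brand-new variant) ~ f1 / n.
unseen_mass = f1 / n
print(f"Estimated probability mass of unseen variants: {unseen_mass:.2f}")
```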

Bio: James Zou is a postdoc at Microsoft Research New England and MIT. He works on machine learning methodology and applications to human genomics. He received his Ph.D. from Harvard University in 2014, supported by an NSF Graduate Fellowship. In Spring 2014, he was a Simons research fellow at U.C. Berkeley. He has multiple first-author papers in top scientific journals (PNAS, Nature Methods) as well as top machine learning conferences (NIPS, ICML, AISTATS), and has won several paper awards.


 


 

February 12th, 2016

Jan Cuny
National Science Foundation
"The Rise of Computer Science Education and the Move Toward Greater Diversity"

 


Abstract:
Computer science—traditionally ignored in K-12 education—is now rising to the forefront in many states and school districts. Some of the largest school districts in the country (including New York City, San Francisco, and Chicago) are requiring all schools to offer CS, as are entire states (such as Arkansas and Rhode Island). The change has been dramatic, spurred by the development of curricula for two new high school courses: an introductory course (Exploring Computer Science) and a new Advanced Placement® course (CS Principles). Thousands of schools are piloting these courses, and many more are poised to adopt them soon. This emergence of computer science education in K-12 arose, in part, from efforts to increase the participation of groups too long underrepresented in computing: women, minorities, and persons with disabilities. Both agendas are being championed by a community that includes university faculty, K-12 teachers and administrators, state and local governments, nonprofits, and foundations. This talk covers the changes that have occurred, and those that are still needed: How are we addressing diversity? How will CS get into all high schools? Who will teach the courses? What's needed in K-8? What additional educational research on the teaching and learning of computing is required? Finally, what implications does this all have for higher education, including issues around increasing enrollments, the continued need to support student diversity, and the changes that will be needed to accommodate a much broader range of aspirations among enrolling students?

Biography: Since 2004, Jan Cuny has been a Program Officer at the National Science Foundation, heading the Broadening Participation in Computing Initiative. Before coming to NSF, she was a faculty member in Computer Science at Purdue University, the University of Massachusetts, and the University of Oregon.

Dr. Cuny has been involved in efforts to increase the participation of women in computing research for many years. Jan was a long-time member of the Computing Research Association's Committee on the Status of Women (CRA-W), serving among other activities as a CRA-W co-chair, a mentor in their Distributed Mentoring Program, and a lead on their Academic Career Mentoring Workshop, Grad Cohort, and Cohort of Associate Professors projects. She was also a member of the Advisory Board of the Anita Borg Institute for Women and Technology, the leadership team of the National Center for Women & Information Technology, and the Executive Committee of the Coalition to Diversify Computing. She was Program Chair of the 2004 Grace Hopper Conference and General Chair of the 2006 conference. For her efforts with underserved populations, she is a recipient of one of the 2006 ACM President's Awards and the 2007 CRA A. Nico Habermann Award.


 


February 16th, 2016

Shivani Agarwal
Radcliffe Institute for Advanced Study,
Harvard University
"Machine Learning: Art or Science?"

 


Abstract:
The notion of machines that can learn has caught imaginations since the days of the earliest computers. In recent years, as we face burgeoning amounts of data around us that no human mind can process, machines that can learn to automatically find insights from such vast amounts of data have become a growing necessity. The field of machine learning is a modern marriage between computer science and statistics, and is the soul behind what is increasingly termed “data science”. But is machine learning a science or an art? While I won’t answer the question fully, I’ll argue that with a scientific approach, machine learning is indeed a science, and a beautiful and powerful one at that: it has rigorous mathematics at its core, its judicious use allows us to make various kinds of impact on society, and its exploration together with other natural and social sciences allows us to uncover surprising natural and social phenomena. I’ll illustrate these ideas with examples from our work on the foundations of supervised learning, applications in predicting anticancer drug response in patients, and connections with the social sciences in understanding how we make choices.

Bio:
Shivani Agarwal is the 2015-16 William and Flora Hewlett Foundation Fellow at the Radcliffe Institute for Advanced Study at Harvard University, where she is on leave from her position as Assistant Professor and Ramanujan Fellow at the Indian Institute of Science. She leads the Machine Learning and Learning Theory Group at the Indian Institute of Science, co-directed the Indo-US Joint Center for Advanced Research in Machine Learning, Game Theory and Optimization, and is an Associate of the Indian Academy of Sciences and of the International Center for Theoretical Sciences. Prior to the Indian Institute of Science, she taught at MIT as a postdoctoral lecturer. She received her PhD in computer science from the University of Illinois at Urbana-Champaign, and her bachelor's degrees in computer science and mathematics as a Nehru Scholar at Trinity College, University of Cambridge, and at St. Stephen's College, University of Delhi. Her research interests include foundational questions in machine learning, applications of machine learning in the life sciences, and connections between machine learning and other disciplines such as economics, operations research, and psychology.


 


February 18th, 2016

Justine Sherry
University of California, Berkeley
"Middleboxes As A Cloud Service"

 


Abstract:
Today's networks do much more than merely deliver packets. Through the deployment of middleboxes, enterprise networks today provide improved security (e.g., filtering malicious content) and performance (e.g., caching frequently accessed content). Although middleboxes are deployed widely in enterprises, they bring with them many challenges: they are complicated to manage, expensive, prone to failures, and challenge privacy expectations.

In this talk, we aim to bring the benefits of cloud computing to networking. We argue that middlebox services can be outsourced to cloud providers in a similar fashion to how mail, compute, and storage are outsourced today. We begin by presenting APLOMB, a system that allows enterprises to outsource middlebox processing to a third-party cloud or ISP. For enterprise networks, APLOMB can reduce costs, ease management, and provide resources for scalability and failover. For service providers, APLOMB offers new customers and business opportunities, but also presents new challenges. Middleboxes have tighter performance demands than existing cloud services, and hence supporting APLOMB requires redesigning software at the cloud. We reconsider classical cloud challenges, including fault tolerance and privacy, showing how to implement middlebox software solutions with throughput and latency 2-4 orders of magnitude more efficient than general-purpose cloud approaches. Some of the technologies discussed in this talk are presently being adopted by industrial systems used by cloud providers and ISPs.

Bio: Justine Sherry is a computer scientist and doctoral candidate at UC Berkeley. Her interests are in computer networking; her work includes middleboxes, networked systems, measurement, cloud computing, and congestion control. Justine's dissertation focuses on new opportunities and challenges arising from the deployment of middleboxes -- such as firewalls and proxies -- as services offered by clouds and ISPs. Justine received her MS from UC Berkeley in 2012, and her BS and BA from the University of Washington in 2010. She is an NSF Graduate Research Fellow, has won paper awards from both USENIX NSDI and ACM SIGCOMM, and is always on the lookout for a great cappuccino.


 


February 23rd, 2016

Mooly Sagiv
Tel Aviv University
"Verifying Safety of Distributed Systems"

 


Abstract:
Distributed systems are notoriously hard to debug. I will describe two recent techniques for formally verifying the safety of distributed systems and automatically identifying potential safety violations.

In modern networks, forwarding of packets often depends on the history of previously transmitted traffic. Such networks contain stateful middleboxes, whose forwarding behavior depends on a mutable internal state. Firewalls and load balancers are typical examples of stateful middleboxes. I will describe techniques for automatically verifying the safety of stateful networks of middleboxes. The main idea is to reason about potential interactions between state changes in individual middleboxes.

Then, I will describe IVY, an interactive static analysis tool for proving the safety of infinite-state (distributed) systems. IVY performs bounded model checking to identify potential bugs by considering a fixed number of transitions without limiting the size of the system. IVY employs decision procedures for sound and complete Floyd-Hoare deductive verification when inductive invariants are specified. When the supplied candidate invariant is violated, IVY displays a counterexample to induction (CTI) and allows the user to generalize this CTI by ignoring irrelevant facts. This generalized CTI is interpolated by IVY in order to strengthen the inductive invariant. This procedure is repeated until the safety of the system is proved. IVY is applied to verify simple distributed protocols.
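
The CTI loop is easiest to see on a finite toy system, where "inductive" can be checked by brute force. The transition system and candidate invariants below are invented; IVY itself works on infinite-state systems using decision procedures rather than enumeration.

```python
# An invariant Inv is inductive if (1) every initial state satisfies Inv and
# (2) every transition from an Inv-state lands in an Inv-state. A failing
# pair is a counterexample to induction (CTI).
states = range(8)
init = {0}

def step(s):                 # toy transition relation: a counter mod 8
    return {(s + 2) % 8}

def find_cti(inv):
    for s in init:
        if s not in inv:
            return ("init violation", s)
    for s in inv:
        for t in step(s):
            if t not in inv:
                return ("cti", s, t)   # inv-state s steps outside inv
    return None                        # inductive: the safety proof closes

print(find_cti({0, 2, 4}))        # ('cti', 4, 6): candidate too weak
print(find_cti({0, 2, 4, 6}))     # None: strengthened invariant is inductive
```

Generalizing the CTI (here, noticing that all even states must be included) is exactly the step IVY makes interactive.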

Bio: Mooly Sagiv is a professor in the School of Computer Science at Tel Aviv University. He is a leading researcher in the area of large-scale (inter-procedural) program analysis, and one of the key contributors to shape analysis. His fields of interest include programming languages, compilers, abstract interpretation, profiling, pointer analysis, shape analysis, inter-procedural dataflow analysis, program slicing, and language-based programming environments. Sagiv is a recipient of a 2013 senior ERC research grant for Verifying and Synthesizing Software Composition. Prof. Sagiv served as a member of the Advisory Board of Panaya Inc. He received best-paper awards at PLDI'11 and PLDI'12 for his work on composing concurrent data structures, and an ACM SIGSOFT Retrospective Impact Paper Award (2011) for program slicing. He is an ACM Fellow.


 


February 25th, 2016

Benedikt Schmidt
IMDEA Software Institute
"Computer-aided proofs for Cryptographic Primitives and Protocols"

 


Abstract:
The security of cryptographic libraries relies on both the security of the underlying cryptographic primitives and their correct implementation. To obtain strong security guarantees, it is desirable to develop formal computer-checked proofs for both since it is difficult to deal with the inherent complexity and to prevent mismatches between algorithmic specifications and implementations otherwise. In this talk, I will focus on computer-aided proofs of key exchange protocols and pairing-based crypto. In particular, I will present a new modular security proof for key exchange protocols in EasyCrypt and a new tool that supports extremely compact, and often fully automated, proofs of cryptographic constructions based on pairing groups. The tool uses a new formal logic which captures key reasoning principles in provable security, and operates at a level of abstraction that closely matches cryptographic practice. To obtain stronger guarantees and allow for proof reuse in different contexts, the tool also generates proofs that are independently verifiable in EasyCrypt.

Bio: Benedikt Schmidt is a researcher at the IMDEA Software Institute in Madrid. He completed his Ph.D. in computer science at ETH Zurich in December 2012. He received his master's degree in computer science from the University of Karlsruhe. In his research, he applies techniques from theorem proving, programming languages, and program verification to problems in information security and cryptography. His main areas of interest are the analysis of security protocols in both symbolic and computational models, and the analysis and synthesis of cryptographic primitives.


 


March 1st, 2016

Matthew Hicks
University of Michigan
"Engineering a more trustworthy computing base"

 


Abstract: As researchers devise ways to better secure software, attackers are forced to attack hardware to gain control of victim machines. Hardware is an ideal place to attack because it is difficult to patch, and attacks in hardware compromise the entire system stack---even otherwise secure software. Making hardware an even more inviting layer to attack is the complexity of modern processors; with current processors weighing in at several billion transistors, it is easy for malicious hardware designers to find a place to hide their attack (which I previously demonstrated could be as small as tens of transistors).

In this talk, I will present some of my work aimed at increasing trust in the hardware that our systems depend upon.  I will begin the talk with the results of my analysis of several years of processor errata that highlight the type of potentially security-compromising processor bugs that escape the advanced verification of an x86 processor manufacturer---I refer to this set of bugs as being security-critical.  From this analysis, I will present two systems that protect against the mined security-critical processor bugs: the first system is a lightweight approach aimed at addressing security-critical bugs found in previous processor generations; the second system targets deployed security-critical processor bugs through a reconfigurable invariant fabric.  I will conclude the talk with a look towards the next research steps required to create a more trustworthy computing base.

Bio: Matthew Hicks is a Lecturer in the Division of Computer Science at the University of Michigan. His research interests span security, architecture, and embedded systems. His current projects address hardware security, hardware for security, batteryless devices, and approximate computing. His research has been used by military contractors and hardware security startups, and has inspired others to devise code analysis techniques aimed at uncovering malicious hardware. Before becoming a lecturer, Matthew held a postdoctoral research fellowship, also at the University of Michigan, working with Todd Austin and Kevin Fu. He earned a PhD in 2013 and an MS in 2008, both in Computer Science and both from the University of Illinois at Urbana-Champaign. He earned a BS in Computer Science from the University of Central Florida in 2006.


 


March 3rd, 2016

Emmanouil Kapritsos
Microsoft Research
"Sustainable Reliability for Distributed Systems"

 


Abstract:
Reliability is a first-order concern in modern distributed systems. Even large, well-provisioned systems such as Gmail and Amazon Web Services can be brought down by failures, incurring millions of dollars of cost and hurting company reputation. Such service outages are typically caused by either hardware failures or software bugs. We have developed various techniques for dealing with both kinds of failures (e.g., replication, software testing), but those techniques come at a significant cost. For example, our replication techniques for handling hardware failures are incompatible with multithreaded execution, forcing a stark choice between reliability and performance. As for guarding against software failures, our only real option today is to test our systems as best we can and hope we have not missed any subtle bugs. In principle there exists another option, formal verification, that fully addresses this problem, but its overhead in both raw performance and programming effort has long been considered too high for real deployments.

In this talk, I make the case for Sustainable Reliability, i.e. reliability techniques that provide strong guarantees without imposing unnecessary overhead that limits their practicality. My talk covers the challenges faced by both hardware and software failures and proposes novel techniques in each area. In particular, I will describe how we can reconcile replication and multithreaded execution by rethinking the architecture of replicated systems. The resulting system, Eve, offers an unprecedented combination of strong guarantees and high performance. I will also describe IronFleet, a new methodology that brings formal verification of distributed systems within the realm of practicality. Despite its strong guarantees, IronFleet incurs a very reasonable overhead in both performance and programming effort.
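
Eve's rethinking is built around an execute-verify pattern. A highly simplified sketch of the idea, with invented state and operations, is below: replicas execute a batch of requests (concurrently, in the real system), then compare state digests, rolling back and re-executing only when they diverge.

```python
import hashlib, json

def digest(state):
    # Canonical hash of a replica's state for cross-replica comparison.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def run_batch(state, batch):
    for op in batch:                 # stand-in for multithreaded execution
        op(state)
    return state

def execute_verify(replicas, batch):
    snapshots = [dict(r) for r in replicas]              # kept for rollback
    digests = [digest(run_batch(r, batch)) for r in replicas]
    if len(set(digests)) == 1:
        return "agree"                                   # common fast path
    for r, snap in zip(replicas, snapshots):             # divergence:
        r.clear(); r.update(snap)                        # roll back and
        run_batch(r, batch)                              # re-execute safely
    return "rolled back and re-executed"

a, b = {"x": 0}, {"x": 0}
print(execute_verify([a, b], [lambda s: s.update(x=s["x"] + 1)]))  # "agree"
```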

Bio: Manos Kapritsos is a Postdoctoral Researcher at Microsoft Research in Redmond, WA. He received his Ph.D. from the University of Texas at Austin in 2014. His research focuses on designing reliable distributed systems, by applying fault-tolerant replication to combat machine failures and using formal verification to ensure software correctness.


 


March 15th, 2016

Xin Jin
Princeton University
"Dynamic Control of Software-Defined Networks"

 


Abstract:

Computer networks run many network services  (e.g., routing, monitoring, load balancing) to support applications from search engines to big data analytics. These network services have to continuously update network configurations to alleviate congestion, to detect and block cyber-attacks, to perform planned maintenance, etc. Network updates are painful because network administrators unfortunately have to balance the tradeoff between the disruption caused by the problem (e.g., congestion and cyber-attacks), and the disruption introduced in fixing the problem. In this talk, I will present my research on designing and building new network control systems to efficiently handle network updates for multiple network services. First, I will present CoVisor, a new network hypervisor that can host multiple network services and efficiently compile their configuration changes to a single update. Then, I will describe Dionysus, a new network update scheduler that can quickly and consistently apply the network update to a distributed collection of switches. I have built prototype systems for CoVisor and Dionysus, and part of CoVisor has been integrated into ONOS, a popular open-source control platform for software-defined networks developed by ON.LAB.
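
To give a feel for what a network hypervisor like CoVisor must do, here is a toy sketch of composing two controllers' rule tables into one. The rule format and the priority arithmetic are invented for illustration; "parallel" composition intersects matches and combines actions so both services see the traffic.

```python
def intersect(m1, m2):
    # A match is a dict of header fields; matches intersect only if their
    # shared fields agree.
    merged = dict(m1)
    for field, value in m2.items():
        if field in merged and merged[field] != value:
            return None
        merged[field] = value
    return merged

def parallel_compose(table1, table2):
    # Rules are (priority, match, actions). The combined priority preserves
    # each member table's relative ordering (illustrative scheme).
    composed = []
    for p1, m1, a1 in table1:
        for p2, m2, a2 in table2:
            m = intersect(m1, m2)
            if m is not None:
                composed.append((p1 * 100 + p2, m, a1 + a2))
    return sorted(composed, key=lambda rule: -rule[0])

monitor = [(1, {"src": "10.0.0.1"}, ["count"]), (0, {}, [])]
router  = [(1, {"dst": "10.0.0.2"}, ["fwd:2"]), (0, {}, ["drop"])]
for rule in parallel_compose(monitor, router):
    print(rule)
```

Recompiling only what a configuration change touches, rather than the whole cross-product, is the kind of efficiency question the talk addresses.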

Bio:
Xin Jin is a PhD candidate in the Department of Computer Science at Princeton University, advised by Professor Jennifer Rexford. He has a broad research interest in networked systems, cloud computing and computer networking. His PhD study focuses on Software-Defined Networking (SDN). He has published several research papers in this area in premier venues, including SIGCOMM, NSDI and CoNEXT. He has interned and collaborated with leading research institutes and cutting-edge startups like Microsoft Research and Rockley Photonics. He received his BS degree in computer science and BA degree in economics from Peking University in 2011, and his MA degree in computer science from Princeton University in 2013. He has received many awards and honors, including the Siebel Scholar (2016), a Princeton Charlotte Elizabeth Procter Fellowship (2015), and a Princeton Graduate Fellowship (2011).




 


March 17th, 2016

Vincent Liu
University of Washington
"Improving the Efficiency and Reliability of Data Center Networks"

 


Abstract:
In recent years, data center networks have grown to an unprecedented scale. The largest of these are expected to connect hundreds of thousands of servers, and to do so with high reliability and low cost. The current solution to these problems is to use an idea first proposed for telephone networks in the early 1950s: Clos network topologies. These topologies have a number of substantial benefits, but their use in this new domain raises a set of questions.

In this talk, I will present two systems that make small changes to state-of-the-art data center designs to provide large improvements to performance, reliability, and cost. I will first describe F10, a data center architecture that can provide both near-instantaneous reaction to failures and near-optimal handling of long-term load balancing. Central to this architecture is a novel network topology that provides all of the benefits of a traditional Clos topology, but also admits local reaction to and recovery from failures. I will also describe Subways, a network architecture that looks at how to use multiple network interfaces on each server to handle growth and performance issues in today's data centers.
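
Some quick arithmetic shows why Clos-style topologies are attractive at this scale. In a k-ary fat tree (a folded Clos network built from identical k-port switches), the counts below give capacity and path diversity as a function of the switch radix; these are the textbook fat-tree formulas, not anything specific to F10 or Subways.

```python
def fat_tree(k):
    hosts = k ** 3 // 4        # servers supported
    core = (k // 2) ** 2       # core switches
    paths = (k // 2) ** 2      # distinct core-level shortest paths between
                               # servers in different pods
    return hosts, core, paths

for k in (4, 16, 48):
    hosts, core, paths = fat_tree(k)
    print(f"k={k:2d}: {hosts:6d} hosts, {core:4d} core switches, "
          f"{paths:4d} inter-pod paths")
```

It is this built-in path diversity that failure-recovery and load-balancing schemes for such networks can exploit.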

Bio:

Vincent Liu is a PhD candidate in Computer Science and Engineering at the University of Washington. Before that, he completed his undergraduate degree in Computer Science at the University of Texas at Austin. His research is in the general area of networked systems across all layers of the networking stack, from hardware concerns to application and workload modeling. He has published in a variety of fields including data center networks, fault-tolerant distributed systems, energy-efficient wireless communication, and systems to preserve security and privacy. His work has won Best Paper Awards at NSDI 2013, ACM SIGCOMM 2013, and NSDI 2015.


 


March 24th, 2016

Abhishek Bhattacharjee
Rutgers-New Brunswick
"Efficient Virtual Memory for Big Data Systems"

 


Abstract:
We are now firmly in the era of big data, with scientific computing, data mining, social networks, and business management collecting and processing large data-sets to make intelligent decisions about our society and the way we interact with the world. Modern computer systems must not only collect all this data, but must also compute on it quickly and efficiently. At the same time, computer systems must remain easy to program. I will show that achieving efficiency and programmability is challenging, and requires the seamless operation of important abstractions in the systems stack. One such abstraction, virtual memory, is critical to the performance and management of big data systems. I will show, however, that modern virtual memory mechanisms struggle to cope with the massive data-sets prevalent in server and cloud deployments, and are insufficient (and even absent) for emerging hardware accelerators like GPUs. In response, my work architects an efficient virtual memory system for today's computing landscape. Through careful hardware and OS co-design, I will show how to make big-data systems easier to program, while improving their performance on important classes of workloads (e.g., data mining, deep learning, face detection, graph processing, etc.) by 3-15x.
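
The pressure on virtual memory can be seen with back-of-the-envelope arithmetic: TLB "reach" (entries times page size) versus working-set size. The numbers below are illustrative, roughly in the ballpark of commodity server hardware.

```python
tlb_entries = 1536                     # second-level TLB entries (illustrative)
working_set = 256 * 1024**3            # a 256 GB in-memory dataset

for name, page_size in [("4KB", 4 * 1024), ("2MB", 2 * 1024**2)]:
    reach = tlb_entries * page_size
    print(f"{name} pages: reach = {reach / 1024**2:6.0f} MB, "
          f"covering {100 * reach / working_set:.4f}% of the working set")
```

Even with 2 MB huge pages, the TLB covers only a tiny fraction of such a working set, so translation misses and the page walks they trigger become a first-order cost.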

Bio: Abhishek Bhattacharjee is an Assistant Professor in the Department of Computer Science at Rutgers University. His research interests are in designing high-performance, energy-efficient, and secure datacenter, mobile, and robotic systems. He received a PhD from Princeton University in 2010, and an NSF CAREER award in 2013. His past work has been nominated for the Best Paper Award at MICRO '15, and was included in IEEE Micro's Top Picks in Computer Architecture in 2015.


 


March 31, 2016

Gennady Pekhimenko
Carnegie Mellon University
"Practical Data Compression for Modern Memory Hierarchies"

 


Abstract:

Although compression has been widely used for decades to reduce file sizes (thereby conserving storage capacity and network bandwidth when transferring files), there has been little to no use of compression within modern memory hierarchies. Why not? Especially as programs become increasingly data-intensive, the capacity and bandwidth within the memory hierarchy (including caches, main memory, and their associated interconnects) are becoming increasingly important bottlenecks. If data compression could be applied successfully to the memory hierarchy, it could potentially relieve pressure on these bottlenecks by increasing effective capacity, increasing effective bandwidth, and even reducing energy consumption.

In this talk, I describe a new, practical approach to integrating data compression within the memory hierarchy, including on-chip caches, main memory, and both on-chip and off-chip interconnects. This new approach is fast, simple, and effective in saving storage space. A key insight in our approach is that access time (including decompression latency) is critical in modern memory hierarchies. By combining inexpensive hardware support with modest OS support, our holistic approach to compression achieves substantial improvements in performance and energy efficiency across the memory hierarchy. In addition to exploring compression-related issues and enabling practical solutions in modern CPU systems, we discover new problems in realizing hardware-based compression for GPU-based systems and develop new solutions to solve these problems.
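
As a flavor of what "simple and effective" can look like, here is a toy base+delta encoder for a cache line of word-sized values. It is an illustrative sketch in the spirit of hardware cache-line compression schemes, not necessarily the exact designs in the talk: many lines hold numerically close values, so a line can often be stored as one base plus small per-word deltas.

```python
def compress(words, delta_bits=8):
    # Try to encode the line as (base, small deltas); fall back to raw
    # storage if any delta needs more than delta_bits (signed).
    base = words[0]
    limit = 1 << (delta_bits - 1)
    deltas = [w - base for w in words]
    if all(-limit <= d < limit for d in deltas):
        return ("base+delta", base, deltas)    # 8 bytes + 1 byte per word
    return ("uncompressed", words)             # 8 bytes per word

line = [0x7000_0000, 0x7000_0008, 0x7000_0010, 0x7000_0038]
print(compress(line))      # deltas 0, 8, 16, 56 all fit in one signed byte
```

Decompression here is a parallel add, which is why such schemes can keep latency low enough for use inside caches, the property the abstract identifies as critical.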

Bio:
Gennady Pekhimenko is a PhD candidate in the Computer Science Department at Carnegie Mellon University, under the supervision of Professor Todd C. Mowry and Professor Onur Mutlu. Before that (2008-2010), he worked in the Compilers Group at the IBM Toronto Lab. He received his M.Sc. in Computer Science in 2008 from the University of Toronto, and his B.S. in Applied Mathematics and Computer Science from Moscow State University in 2004. His research interests include efficient memory hierarchy designs with data compression, compilers, GPUs, and bioinformatics. Gennady is serving on the PC of WWW'16 and the ERC of ISCA'16. His work is funded by NVIDIA Graduate, Microsoft Research, Qualcomm Innovation, and NSERC CGS-D fellowships.




 


April 19, 2016

Duncan Watts
Microsoft Research
"Computational Social Science: Exciting Progress and Future Challenges"


Abstract:

The past 15 years have witnessed a remarkable increase in both the scale and scope of social and behavioral data available to researchers, leading some to herald the emergence of a new field: “computational social science.” Against these exciting developments stands a stubborn fact: that in spite of many thousands of published papers, there has been surprisingly little progress on the “big” questions that motivated the field in the first place—questions concerning systemic risk in financial systems, problem solving in complex organizations, and the dynamics of epidemics or social movements, among others. In this talk I highlight some examples of research that would not have been possible just a handful of years ago and that illustrate the promise of CSS. At the same time, they illustrate its limitations. I then conclude with some thoughts on how CSS can bridge the gap between its current state and its potential.

Bio:

Duncan Watts is a principal researcher at Microsoft Research and a founding member of the MSR-NYC lab. He is also an A.D. White Professor-at-Large at Cornell University. Prior to joining MSR in 2012, he was a professor of Sociology at Columbia University from 2000 to 2007, and then a principal research scientist at Yahoo! Research, where he directed the Human Social Dynamics group. His research on social networks and collective dynamics has appeared in a wide range of journals, from Nature, Science, and Physical Review Letters to the American Journal of Sociology and Harvard Business Review, and has been recognized by the 2009 German Physical Society Young Scientist Award for Socio- and Econophysics, the 2013 Lagrange-CRT Foundation Prize for Complexity Science, and the 2014 Everett Rogers Prize. He is also the author of three books: Six Degrees: The Science of a Connected Age (W.W. Norton, 2003); Small Worlds: The Dynamics of Networks between Order and Randomness (Princeton University Press, 1999); and most recently Everything is Obvious: Once You Know The Answer (Crown Business, 2011). He holds a B.Sc. in Physics from the Australian Defence Force Academy, from which he also received his officer's commission in the Royal Australian Navy, and a Ph.D. in Theoretical and Applied Mechanics from Cornell University.




 


April 21, 2016

Saul Gorn Memorial Lecture
Cary Phillips
"Creating the Impossible: Hollywood Visual Effects at Industrial Light & Magic"


Abstract:

When you watch a movie in a darkened theater, you imagine that the scenes on the screen actually unfolded in real life with a camera there to capture them. This is utterly false; it's all fake. At Industrial Light & Magic, small armies of artists and engineers blend digital environments and CG characters with actors on live-action sets to bring the impossible to life and blur the lines between fantasy and reality. Cary Phillips will share a light-hearted view behind the scenes of the visual effects of such movies as The Avengers, Pirates of the Caribbean, Transformers, and Star Wars.

Bio: Cary co-leads the R&D group at ILM, where he has worked for over 20 years. He’s a member of the Academy of Motion Picture Arts and Sciences and the recipient of three Academy Technical Achievement Awards. He earned his PhD in computer graphics from Penn in 1991.




 


 

April 26, 2016

Linh Thi Xuan Phan
University of Pennsylvania
"Timing Guarantees for Cyber-Physical Systems"


Abstract:

Cyber-physical systems -- such as cars, pacemakers, and power plants -- need to interact with the physical world in a timely manner to ensure safety. It is important to have a way to analyze these systems and to prove that they can meet their timing requirements. However, modern cyber-physical systems are increasingly complex: they can involve thousands of tasks running on dozens of processors, many of which can have multiple cores or shared caches. Existing techniques for ensuring timing guarantees cannot handle this level of complexity. In this talk, I will present some of my recent work that can help to bridge this gap, such as overhead-aware compositional scheduling and analysis, and multicore cache management. I will also discuss some potential applications, such as real-time cloud platforms and intrusion-resistant cyber-physical systems.
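
As a reference point for the kind of reasoning involved, the sketch below runs the classic single-core utilization test of Liu and Layland (1973) on a toy task set. It is only the simplest instance of schedulability analysis; the talk's subject is scaling such guarantees to settings with overheads, multiple cores, and shared caches, which this test does not model.

```python
# Rate-monotonic schedulability: tasks are (worst-case execution time, period).
tasks = [(1, 4), (2, 8), (1, 16)]            # illustrative task set

n = len(tasks)
utilization = sum(c / t for c, t in tasks)   # total CPU demand
bound = n * (2 ** (1 / n) - 1)               # Liu & Layland bound (~0.78 for n=3)

print(f"U = {utilization:.3f}, bound = {bound:.3f}")
print("schedulable under rate-monotonic priorities"
      if utilization <= bound
      else "inconclusive: an exact response-time analysis is needed")
```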

Bio:

Linh Thi Xuan Phan is an Assistant Research Professor in the Department of Computer and Information Science at the University of Pennsylvania. Her interests include real-time systems, embedded systems, cyber-physical systems, and cloud computing. Her research develops theoretical foundations and practical tools for building complex systems with provable safety and timing guarantees. She is especially interested in techniques that integrate theory, systems, and application aspects. Recently, she has been working on methods for defending cyber-physical systems against malicious attacks, as well as on real-time cloud infrastructures for safety-critical and mission-critical systems. Linh holds a Ph.D. degree in Computer Science from the National University of Singapore (NUS); she received the Graduate Research Excellence Award from NUS for her dissertation work.




 


May 3, 2016

James Weimer
University of Pennsylvania
"Personalizing Medicine in an Impersonal World: Parameter-Invariant Design for Cyber-Physical Systems"


Abstract:
Modern computing systems that increasingly interact with the physical world are called cyber-physical systems (CPS). The application domains of CPS include healthcare, transportation, smart buildings, and more, and many of these systems are used in life-critical situations, making safety and security critically important. The current research directions for ensuring the safety and security of CPS are to develop model-based and data-driven techniques. One challenge is that these cyber-physical systems contain complex and often messy plant dynamics that make it difficult to apply conventional model-based and data-driven techniques to design specification-based classifiers, estimators, and controllers. In this talk, I will present my recent work on parameter-invariant design. Across multiple CPS domains, including smart buildings and healthcare, parameter-invariant techniques have enabled specification-based design in the presence of unknown model parameters. In healthcare, this translates to personalizing medicine without knowing, estimating, learning, or classifying the patient’s unique physiology. Real-world case-study evaluations and implementations, covering prediction of hypoxia in infants, meal detection for type 1 diabetics, and monitoring of hypovolemia in intensive care units, provide insight into parameter-invariant techniques and future research challenges.
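
The textbook example of a parameter-invariant statistic conveys the flavor of the approach, though the talk's detectors are designed per application and this toy is not one of them: a t-statistic's null distribution does not depend on an unknown, patient-specific noise scale, so one alarm threshold gives every patient the same false-alarm rate.

```python
import math, random

def t_statistic(samples):
    # Test "true mean is 0" without knowing the noise scale: dividing by the
    # sample standard deviation cancels the unknown parameter.
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean / math.sqrt(var / n)

for sigma in (0.1, 1.0, 10.0):               # unknown per-patient noise scale
    random.seed(0)                           # same underlying noise, rescaled
    healthy = [random.gauss(0.0, sigma) for _ in range(30)]
    print(f"sigma = {sigma:5.1f}: t = {t_statistic(healthy):+.2f}")
# The t-value is identical for every sigma: the statistic is invariant to
# the unknown scale, so a single threshold behaves the same across patients.
```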

Biography:
James Weimer is a Postdoctoral Researcher in the Department of Computer and Information Science at the University of Pennsylvania. His research interests include the design and analysis of cyber-physical systems, with applications to medical devices and monitors, networked systems, building energy management, and security. James holds a Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University and, prior to joining Penn, held a postdoctoral researcher position at the KTH Royal Institute of Technology. He won the Best Paper Award at the International Conference on Cyber-Physical Systems (ICCPS) in 2014 and was a Best Paper finalist there in 2015.

