CIS Seminars and Events

Fall 2018 Colloquium Series

Unless otherwise noted, CIS lectures are held weekly on Tuesday and/or Thursday from 3:00 p.m. to 4:00 p.m. in Wu and Chen Auditorium, Levine Hall.  For all Penn Engineering events, visit the Penn Calendar.

If you need further information please contact cherylh@cis.upenn.edu.

Thursday, Sep 27th
Pamela Zave
Department of Computer Science
Princeton University
"The compositional architecture of the Internet"

Read the Abstract and Bio

Abstract: In 1992 the explosive growth of the World Wide Web began. In 1993 the last major change was made to the "classic" Internet architecture. Since then the Internet has been adapted to handle a truly impressive list of additional applications and unforeseen challenges, at global scale. Although the architecture of the Internet has changed drastically, the way that experts talk about it has not changed. As a result, there is no adequate foundation for re-using solution patterns, verifying trustworthy network services, or evolving toward a better Internet.

This talk introduces a new formal model of networking, based on flexible composition of modular networks. Each module is a microcosm of networking, with all of the basic structures and mechanisms. The new model provides precise, consistent descriptions of how the Internet works today. It also shows that a new network is usually added to the Internet architecture because there is a need for a capability that is intrinsically difficult to implement in the classic architecture, or because there is a need to maintain several distinct views (membership, topology, trust, etc.) of the same network. Layered architectures can be simpler than flattened ones, and also more efficient.

Technological advances are rapidly making most network components programmable. In this context, the new model could be used to implement re-usable, customizable infrastructure. Currently we are using it to investigate the interactions among services (such as secure communication or mobility) and architectural features (such as middleboxes or cloud computing), in pursuit of proof structures and better
compositional designs.

Bio

Pamela Zave received the AB degree in English from Cornell University and the PhD degree in computer sciences from the University of Wisconsin--Madison. She has held positions at the University of Maryland,
Bell Laboratories, and AT&T Laboratories, and is now at Princeton University. She is an ACM Fellow, an AT&T Fellow, and the 2017 winner of the IEEE Computer Society's Harlan D. Mills Award. Her work on the foundations of requirements engineering has been recognized with three Ten-Year Most Influential Paper awards. At AT&T, she led a group that developed two successful large-scale telecommunication systems based on her Distributed Feature Composition architecture, including AT&T's first public voice-over-IP offering. She has been chair of IFIP Working Group 2.3 on Programming Methodology, and holds 30 patents. Her current interests focus on network architecture and verification of distributed systems.

Tuesday, Oct 9th
Jack Stankovic
Department of Computer Science
University of Virginia
"Research Challenges and Solutions for IOTT/CPS"
Read the Abstract and Bio

Abstract: As the Internet of Things (IOT) matures and supports increasingly sophisticated applications, the research needs for IOT also expand considerably. This talk discusses several major research challenges for the future IOT where trillions of devices are connected to the Internet; call it the Internet of Trillions of Things (IOTT). A brief discussion on the relationship of IOTT to Cyber-Physical Systems (CPS) and Wireless Sensor Networks (WSN) is presented. Research topics covered include systems of systems, the impact of massive scaling, and IOTT for healthcare. Smart cities are used to present examples of new system-of-systems research issues and their solutions. Scaling and long-term maintenance problems give rise to the need for runtime validation. How to accomplish this is presented. We use the Internet of Healthcare Things to identify the realisms that must be addressed in real home deployments. We also discuss the problems and solutions for using speech as a major sensing modality for smart healthcare based on emo2vec (an extension of word2vec) and LSTMs. The list of topics is not meant to be comprehensive, but does address some of the main research issues in IOTT/CPS.

Brief Bio: Professor John A. Stankovic is the BP America Professor in the Computer Science Department at the University of Virginia. He served as Chair of the department for 8 years. He is a Fellow of both the IEEE and the ACM. He has been awarded an Honorary Doctorate from the University of York for his work on real-time systems. He won the IEEE Real-Time Systems Technical Committee's Award for Outstanding Technical Contributions and Leadership. He also received the IEEE Technical Committee on Distributed Processing's Distinguished Achievement Award (inaugural winner). He has seven Best Paper awards, including one for ACM SenSys 2006. Stankovic has an h-index of 115 and over 56,000 citations. In 2015 he was awarded the Univ. of Virginia Distinguished Scientist Award, and in 2010 the School of Engineering's Distinguished Faculty Award. He also received a Distinguished Faculty Award from the University of Massachusetts. He has given more than 40 Keynote talks at conferences and many Distinguished Lectures at major universities. He also served on the National Academy's Computer Science Telecommunications Board. He was the Editor-in-Chief for the IEEE Transactions on Distributed and Parallel Systems and was founder and co-editor-in-chief of the Real-Time Systems Journal. His research interests are in real-time systems, wireless sensor networks, smart and connected health, cyber physical systems, and the Internet of Things. Prof. Stankovic received his PhD from Brown University.

Thursday, Oct 11th
Lothar Thiele
Department Information Technology and Electrical Engineering
ETH Zurich
"The Quest for Trust"

Read the Abstract and Bio

Abstract: If industry's visions and forecasts come true, we will soon be surrounded by billions of interconnected embedded devices. We will expect dependable results from sensing, computation, communication and actuation, because incorrect operation of the overall system can have serious economic or even catastrophic consequences. Trustworthiness and reliability are mandatory for the societal acceptance of human-cyber interaction and cooperation.

It will be argued that we need novel architectural concepts to satisfy the strongly conflicting requirements and associated design challenges of platforms for cyber-physical systems: handling limited available resources, adaptive run-time behavior, and predictability at the same time. These challenges concern all components of a distributed embedded system, e.g., computation, storage, wireless communication, energy management, harvesting, sensing and sensor interfaces, and actuation. The talk will present some novel models and methods that help to reach the above-mentioned goals, as well as examples from various application domains such as environmental sensing.

Brief Bio: Lothar Thiele joined ETH Zurich, Switzerland, as a Professor of Computer Engineering, in 1994. His research interests include models, methods and software tools for the design of embedded systems, internet of things, cyberphysical systems, sensor networks, embedded software and bioinspired optimization techniques.

Lothar Thiele received the "Outstanding Young Author Award" of the IEEE Circuits and Systems Society in 1987, the Browder J. Thompson Memorial Award of the IEEE in 1988, and the "IBM Faculty Partnership Award" in 2000/2001. In 2005, he was the recipient of the Honorary Blaise Pascal Chair of Leiden University, The Netherlands. Lothar Thiele received the "EDAA Lifetime Achievement Award" in 2015. Since 2017, Lothar Thiele has been Associate Vice President of ETH Zurich for Digital Transformation.

Tuesday, Oct 23rd
Aditya Akella
Department of Computer Science
University of Wisconsin-Madison
"Putting Networks on a Firm Footing: Revolutionizing Network Management (and More)"
Read the Abstract and Bio

Abstract: Network management plays a central role in keeping networks up and running. Perhaps the most important network management task is configuring a network's devices to compute routes that govern how the network moves data around, imposes security and compliance policies (such as who can and cannot communicate), and determines which communications to prioritize or isolate. Typically, configurations are large, and strewn across hundreds of devices. They may also encode complex interactions between distributed routing protocols. Unfortunately, this complexity often leads to configuration bugs that cause large-scale outages, blackholes, and isolation breaches.

In this talk, I will survey recent advances that are transforming the field of network management. These techniques, inspired by formal methods, aim to automate key configuration management tasks and systematically improve the resilience and trustworthiness of networks. They automatically verify whether networks satisfy important properties; synthesize network configurations with provably correct policy realizations; and repair broken configurations with minimal human involvement. In the limit, these advances can lead to "zero touch" networking.

With this confluence of formal methods and networking, network management is no longer “a black art”, but a science. Yet, it is likely to face fundamental constraints in the not-too-distant future. I believe that these constraints are rooted in basic attributes of network hardware, and in our equating network management with programming. I will conclude my talk with a call to arms, and some ideas, for overcoming these constraints.

Bio:

Aditya Akella is a Professor of Computer Sciences and an H. I. Romnes Faculty Fellow at UW-Madison. He received his B. Tech. in Computer Science from IIT Madras in 2000, and PhD in Computer Science from CMU in 2005. His research spans computer networks and systems, with a focus on network verification, synthesis, and repair, data center networking, software defined networks, and big data systems. Aditya has published over 100 papers, and has served as the program co-chair for several conferences including NSDI, SIGMETRICS, and HotNets. He is currently the Vice Chair for ACM SIGCOMM. Aditya co-leads CloudLab (http://cloudlab.us), a large-scale testbed for foundational cloud research. Aditya has received several awards, including the "UW-Madison CS Professor of the Year" Award (2017), the Vilas Associates Award (2017), the IRTF Applied Networking Research Prize (2015), the ACM SIGCOMM Rising Star Award (2014), the NSF CAREER Award (2008), and several best paper awards.

Thursday, Oct 25th
Tal Rabin
Cryptography Research Group
TJ Watson Research Center
IBM
"Multiparty Computations and Threshold Cryptography – 30 Years in the Making"
Read the Abstract and Bio

Abstract: Multiparty Computations and Threshold Cryptography are proving to be extremely relevant to practice with the emergence of cloud computing, blockchain technologies and ML and the ever-growing need for privacy. In this talk we will explain the fundamentals of these notions starting with the original theoretical results all the way to practical solutions of today. The talk will be self-contained, no prior cryptographic knowledge required.

Bio:

Tal Rabin is a Distinguished Research Staff Member and the manager of the Cryptography Research Group at IBM’s T.J. Watson Research Center. Her research focuses on the general area of cryptography and, more specifically, on secure multiparty computation and privacy preserving computation.

Rabin is an ACM Fellow, an IACR Fellow, and a member of the American Academy of Arts and Sciences. In 2014 she won the Anita Borg Women of Vision Award for innovation and was ranked #4 by Business Insider on its list of the 22 Most Powerful Women Engineers. She has served as the Program and General Chair of the leading cryptography conferences and is an editor of the Journal of Cryptology. She initiated and organizes the Women in Theory Workshop, a biennial event for graduate students in the Theory of Computer Science.


Tuesday, Oct 30th
Simson Garfinkel
US Census Bureau's Senior Computer Scientist for Confidentiality and Data Access
"Cybersecurity Research Is Not Making Us More Secure"
Read the Abstract and Bio

Abstract: Four decades of research on cybersecurity have resulted in lots of theory, papers, and even deployed technology. Today's computers appear to offer dramatically more security features, yet in many ways users do not seem to be experiencing operations that are significantly more secure. In part this is because security is a so-called "wicked problem," but it is also because the profession has failed to address several specific technical issues. As a result, despite fixing many security problems, such as poor TCP/IP implementations, poor network servers, flaws in cryptographic primitives and implementations, and even correcting the occasional usability faux pas, our systems are still threatened by insider attacks, supply chain vulnerabilities, weak authentication schemes, buggy code, and overwhelmingly poor usability—at least when it comes to the usability of secure operations. The good news is that we can make systems dramatically more secure! The bad news is that we probably won't — which itself is good news for security researchers, I guess.

Bio:

Simson L. Garfinkel has published articles in both the academic and popular press for many years in the areas of computer security, digital forensics and privacy. He is a fellow of the Association for Computing Machinery, holds a PhD in Computer Science from MIT, and teaches as an adjunct faculty member at George Mason University in Vienna, Virginia.

Garfinkel is the author or co-author of fourteen books on computing. His book Database Nation: The Death of Privacy in the 21st Century (O'Reilly, 2000) discussed the impact of technology on privacy in the 20th and 21st centuries. His book Practical UNIX and Internet Security (co-authored with Gene Spafford and Alan Schwartz), was considered the "bible" of Unix security from 1991 until a few years ago, when many organizations seemed to give up on creating Unix or Linux servers that could withstand the attack of a determined user.

Garfinkel is also a journalist and has written many articles about science, technology, and technology policy in the popular press since 1983. He has won several national journalism awards, including the Jesse H. Neal National Business Journalism Award. Today he mostly writes for MIT's Technology Review Magazine and the technologyreview.com website.

As an entrepreneur, Garfinkel founded five companies between 1989 and 2000, including Vineyard.NET, which provided Internet service on Martha's Vineyard from 1995 through 2005, and Sandstorm Enterprises, an early developer of computer forensic tools.

Garfinkel received three Bachelor of Science degrees from MIT in 1987, a Master's of Science in Journalism from Columbia University in 1988, and a Ph.D. in Computer Science from MIT in 2005.

Thursday, Nov 1st
Babak Falsafi
Computer & Communication Sciences
EPFL
"Server Architecture for the Post-Moore Era"

Read the Abstract and Bio

Abstract: Data centers are growing at unprecedented speeds fueled by the demand on global IT services, economies of scale and investments in massive data management and analytics. The conventional silicon technologies laying the foundation for server platforms, however, have dramatically slowed down in efficiency and density scaling in recent years. We are now entering the post-Moore era of digital platform design with a plethora of emerging logic, memory and networking technologies presenting exciting new challenges and abundant opportunities from algorithms to platforms for server designers. In this talk, I will first motivate the post-Moore era for server architecture and then present avenues to pave the path forward for server design.

Bio: Babak is a Professor in the School of Computer and Communication Sciences and the founding director of the EcoCloud research center at EPFL. He has made a number of contributions to computer system design including a multiprocessor architecture for the WildCat/WildFire servers by Sun (now Oracle), memory prefetching technologies in IBM BlueGene and ARM cores, and server evaluation methodologies used by AMD, HPE and Google PerfKit. His recent work on workload-optimized server processors lays the foundation for Cavium ThunderX. He is a recipient of a Sloan Research Fellowship, and a fellow of ACM and IEEE.

Tuesday, Nov 6th
Jeff Bilmes
Department of Electrical Engineering
University of Washington
"AI, Information, and the Future of Machine Learning"

Read the Abstract and Bio

Abstract: Machine learning involves the extraction and aggregation of information from data. The ability to extract useful information from increasingly larger datasets, however, is becoming decreasingly cost-effective. This is because data is getting bigger at a rate that computational improvements are increasingly expensive to match. A common strategy to overcome such difficulties is either to discard data or to randomly subsample, but this is not sustainable if machine learning is to continue to improve by exploiting all useful information in available data. In this talk, we will discuss how to be more efficient in representing information in data through the process of summarization. In particular, we will see how submodular and supermodular functions can model information in data, and how these can be used to produce theoretically justified but still practical algorithms for various forms of data summarization. This will include approaches that summarize data before training takes place, and also some new tactics that learn and summarize simultaneously.

Bio:
Jeffrey A. Bilmes is a professor in the Department of Electrical Engineering at the University of Washington, Seattle. He is also an adjunct professor in Computer Science & Engineering and the Department of Linguistics. Prof. Bilmes is the founder of the MELODI (MachinE Learning for Optimization and Data Interpretation) lab in the department. Bilmes received his Ph.D. from the Computer Science Division of the Department of Electrical Engineering and Computer Science at the University of California, Berkeley. He was also a researcher at the International Computer Science Institute, and a member of the Realization group there.
Prof. Bilmes is a 2001 NSF Career award winner, a 2002 CRA Digital Government Fellow, a 2008 NAE Gilbreth Lectureship award recipient, and a 2012/2013 ISCA Distinguished Lecturer. Prof. Bilmes was a UAI (Conference on Uncertainty in Artificial Intelligence) program chair (2009) and then the general chair (2010). He was also a workshop chair (2011) and the tutorials chair (2014) at NIPS (Neural Information Processing Systems). He is currently an action editor for JMLR (Journal of Machine Learning Research).


Friday, November 16th
Special time: 12:15 p.m. to 1:15 p.m.

Bjarne Stroustrup
Managing Director, Technology Division of Morgan Stanley
Visiting Professor at Columbia University
http://www.stroustrup.com/
"What - if anything - have we learned from C++?"

Read the Abstract and Bio

• What is the essence of C++? Why did it succeed despite its well-understood flaws? What lessons - if any - can be applied to newer languages?

• Themes: Social and technical factors. Resource management. Generic programming. The importance of being inefficient. The importance of syntax. How (not) to specify a language. Standardization and compatibility. And no, I don't plan to condemn C++ - it is still the best language around for a lot of things, and getting better. It just isn't anywhere near perfect (even of its kind) or the best at everything - and was never claimed to be.

Bio:

Bjarne Stroustrup is the designer and original implementer of C++ as well as the author of The C++ Programming Language (Fourth Edition), A Tour of C++ (Second edition), Programming: Principles and Practice using C++ (Second Edition), and many popular and academic publications. Dr. Stroustrup is a Managing Director in the technology division of Morgan Stanley in New York City as well as a visiting professor at Columbia University. He is a member of the US National Academy of Engineering, and an IEEE, ACM, and CHM fellow. He received the 2018 Charles Stark Draper Prize, the IEEE Computer Society's 2018 Computer Pioneer Award, and the 2017 IET Faraday Medal. His research interests include distributed systems, design, programming techniques, software development tools, and programming languages. He is actively involved in the ISO standardization of C++. He holds a masters in Mathematics from Aarhus University, where he is an honorary professor, and a PhD in Computer Science from Cambridge University, where he is an honorary fellow of Churchill College. Personal website: www.Stroustrup.com.

Grace Hopper Lecture Series (CANCELLED)
Tuesday, Nov 27th
Kate Crawford
AI Now Institute
Read the Abstract and Bio
Content forthcoming.

Thursday, Nov 29th
Le Song
Computational Science and Engineering, College of Computing
Georgia Tech
"Representation Learning for Graphs"

Read Abstract and Bio

Abstract: Graphs have become a universal language in science and technology for describing structures in data, modeling complex systems, and expressing symbolic knowledge. How can we represent complex graphs so that models and algorithms over graphs become more effective? In this talk, I will describe a framework for learning graph representations by neuralizing message-passing operators. I will show that the learned representations can be thousands of times more compact than hand-crafted features, that such representations can be learned efficiently for very large graphs, and that they can be used for solving challenging combinatorial optimization problems. Finally, I will also explain how to understand such deep representations over graphs.

Bio: Le Song is an Associate Professor in the College of Computing and an Associate Director of the Center for Machine Learning at the Georgia Institute of Technology, and also a Principal Engineer of Ant Financial, Alibaba. Before he joined the Georgia Institute of Technology in 2011, he was a postdoc in the Department of Machine Learning, Carnegie Mellon University, and a research scientist at Google. His principal research direction is machine learning, especially nonlinear models, such as kernel methods and deep learning, probabilistic graphical models, and optimization. He is the recipient of the NSF CAREER Award'14 and many best paper awards, including the NIPS'17 Materials Science Workshop Best Paper Award, the Recsys'16 Deep Learning Workshop Best Paper Award, the AISTATS'16 Best Student Paper Award, the IPDPS'15 Best Paper Award, the NIPS'13 Outstanding Paper Award, and the ICML'10 Best Paper Award. He has served as an area chair or senior program committee member for many leading machine learning and AI conferences such as ICML, NIPS, AISTATS, AAAI and IJCAI, and as an action editor for JMLR and IEEE TPAMI.

Tuesday, Dec 4th
Dinesh Manocha
Department of Computer Science
University of Maryland
"Autonomous Driving: Simulation and Navigation"
Read the Abstract and Bio

Abstract: Autonomous driving has been an active area of research and development over the last decade. Despite considerable progress, there are many open challenges, including automated driving in dense and urban scenes. In this talk, we give an overview of our recent work on simulation and navigation technologies for autonomous vehicles. We present a novel simulator, AutonoVi-Sim, that uses recent developments in physics-based simulation, robot motion planning, game engines, and behavior modeling. We describe novel methods for interactive simulation of multiple vehicles with unique steering or acceleration limits, taking into account vehicle dynamics constraints. In addition, AutonoVi-Sim supports navigation for non-vehicle traffic participants such as cyclists and pedestrians. AutonoVi-Sim also facilitates data analysis, allowing for capturing video from the vehicle's perspective, exporting sensor data such as relative positions of other traffic participants, camera data for a specific sensor, and detection and classification results. We highlight its performance in traffic and driving scenarios. We also present novel multi-agent simulation algorithms using reciprocal velocity obstacles that can model the behavior and trajectories of different traffic agents in dense scenarios, including cars, buses, bicycles and pedestrians. We also present novel methods for extracting trajectories from videos and using them for behavior modeling and safe navigation.

Bio:

Dinesh Manocha is the Paul Chrisman Iribe Chair in Computer Science & Electrical and Computer Engineering at the University of Maryland, College Park. He is also the Phi Delta Theta/Matthew Mason Distinguished Professor Emeritus of Computer Science at the University of North Carolina - Chapel Hill. He has won many awards, including an Alfred P. Sloan Research Fellowship, the NSF CAREER Award, the ONR Young Investigator Award, and the Hettleman Prize for scholarly achievement. His research interests include multi-agent simulation, virtual environments, physically-based modeling, and robotics. He has published more than 500 papers and supervised more than 36 PhD dissertations. He is an inventor of 9 patents, several of which have been licensed to industry. His work has been covered by the New York Times, NPR, Boston Globe, Washington Post, and ZDNet, as well as a DARPA Legacy Press Release. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc. He is a Fellow of AAAI, AAAS, ACM, and IEEE and also received the Distinguished Alumni Award from IIT Delhi. See http://www.cs.umd.edu/~dm

Tuesday, December 11th
Fei Sha
Department of Computer Science
University of Southern California
"Asking Harder Questions So Machines Can Answer More Intelligently"

Read the Abstract and Bio

Abstract: Question Answering (QA) and its variants such as Visual QA have become increasingly popular among AI researchers. They provide much needed working definitions of what text or visual scene understanding should look like. They are also readily framed as supervised learning problems, thus amenable to standard machine learning techniques. In particular, neural architecture based learning models have greatly facilitated the progress in those tasks. Recent results have shown that some models can even exceed human performance on benchmark datasets.

Then, would we be too optimistic in projecting that we are very near the goal of machines understanding texts and visual scenes? In this talk, I will describe several vignettes of our work, focusing on visual question answering and open-domain chatbots. This work highlights several unsettling observations. In particular, answering questions by machines does not seem as challenging as we had previously thought. By exploiting design biases and flaws in the datasets, even simple learning models can achieve impressive results on benchmarks, while humans falter. For example, machines can answer correctly without even seeing the questions!

We discuss several techniques to remedy this obvious lack of intelligence in understanding visual scenes and texts. For example, we ask harder questions, so machines cannot "cheat". For open-domain chatbots, we design a question-asking strategy for the chatbots so that their conversations with human participants are more engaging. While we still believe in the value of QA tasks, our work underlines the challenges of designing valid tasks and the need to interpret our progress cautiously.

Bio: Dr. Fei Sha is an associate professor and the Zohrab A. Kaprielian Fellow in Engineering at the University of Southern California. His primary research interests are machine learning and artificial intelligence. He holds a Ph.D. (2007) in Computer and Information Science from the University of Pennsylvania, and a B.Sc. and M.Sc. from Southeast University (Nanjing, China). He has won an Alfred P. Sloan Research Fellowship and an Army Research Office Young Investigator Award, as well as outstanding (student) paper awards at NIPS and ICML. He has served as an area chair and senior area chair at NIPS/ICML/CVPR/ECCV/ICCV and was a workshop co-chair for ICML (2016, 2017). He will co-chair AAAI-2020. He is currently on sabbatical leave at Netflix as a Director of Content Machine Learning.

TBA
Read the Abstract and Bio

Abstract: Content forthcoming.