CIS Seminars & Events

Fall 2013 Colloquium Series

September 17

Raymond J. Mooney
Computer Science Dept., University of Texas at Austin
"Grounded Language Learning"


Abstract: Most approaches to semantics in computational linguistics represent meaning in terms of words or abstract symbols. Grounded-language research bases the meaning of natural language on perception and/or action in the (real or virtual) world. Machine learning has become the most effective approach to constructing natural-language systems; however, current methods require a great deal of laboriously annotated training data. Ideally, a computer would be able to acquire language like a child, by being exposed to language in the context of a relevant but ambiguous environment, thereby grounding its learning in perception and action. We will review recent research in grounded language learning and discuss future directions.
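One simple instance of the grounded-learning setting the abstract describes is cross-situational word learning: the learner sees utterances paired with ambiguous sets of candidate referents and infers word-to-referent mappings from co-occurrence statistics. The sketch below is illustrative only; the data and mapping rule are hypothetical, not taken from the talk.

```python
from collections import defaultdict

# Each observation pairs an utterance with an ambiguous set of
# candidate referents visible in the (real or virtual) scene.
observations = [
    (["the", "red", "ball"], {"ball", "table"}),
    (["a", "red", "cup"], {"cup", "ball"}),
    (["the", "blue", "ball"], {"ball", "chair"}),
    (["that", "cup"], {"cup", "table"}),
]

# Count word/referent co-occurrences across all ambiguous contexts.
cooc = defaultdict(lambda: defaultdict(int))
word_count = defaultdict(int)
for words, referents in observations:
    for w in words:
        word_count[w] += 1
        for r in referents:
            cooc[w][r] += 1

# A word's best referent is the one it co-occurs with most reliably.
for w in ["ball", "cup"]:
    best = max(cooc[w], key=lambda r: cooc[w][r] / word_count[w])
    print(f"{w!r} -> {best!r}")   # 'ball' -> 'ball', 'cup' -> 'cup'
```

Even this naive counting rule resolves referential ambiguity once a word has appeared in a few different contexts, which is the intuition behind learning language "like a child" from relevant but ambiguous environments.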

Bio: Raymond J. Mooney is a professor in the Department of Computer Science at the University of Texas at Austin. He received his Ph.D. in 1988 from the University of Illinois at Urbana-Champaign. He is the author of over 150 published research papers, primarily in the areas of machine learning and natural language processing. He was president of the International Machine Learning Society from 2008 to 2011, program co-chair for AAAI 2006, and is an AAAI and ACM Fellow. His recent research has focused on learning for natural-language processing, statistical relational learning, active transfer learning, and connecting language, perception, and action.

September 24

Michael Reiter
Dept. of Computer Science, University of North Carolina at Chapel Hill
"How to Misuse, Use, and Mitigate Side Channels in Virtualized Environments"


Abstract: A side channel is an attack against (usually) a cryptographic algorithm that leverages aspects of the algorithm's implementation, rather than relying entirely on its abstract design or underlying assumptions. Side channels have been studied for decades but have received renewed attention due to the increasing use of virtualization to isolate mutually distrustful virtual machines (VMs) from each other (e.g., in clouds), raising the question of whether modern virtualization techniques do an adequate job of isolating VMs against side-channel attacks from their co-tenants. In this talk we will answer this question in the negative, then, perhaps paradoxically, show how side channels can be used constructively to help defend cloud-resident VMs from abuse by others. Finally, we will describe a novel design for cloud environments to mitigate potential sources of side channels.
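To make the flavor of a cache-based side channel concrete, here is a toy model of the classic prime-and-probe technique (a generic textbook attack pattern, not the specific attacks from the talk): the attacker fills a shared cache, lets the victim run, then re-accesses its own lines and infers the victim's activity from which lines were evicted. The cache model below is deliberately simplistic; real attacks must contend with set associativity, address mapping, and timing noise.

```python
# Toy prime-and-probe model. The "cache" is a direct-mapped table of
# set -> owner tag; timing measurement is replaced by hit/miss checks.
NUM_SETS = 8
cache = {}

def access(set_index, tag):
    """Return True on a hit, then install the line (evicting any tenant)."""
    hit = cache.get(set_index) == tag
    cache[set_index] = tag
    return hit

# 1. Prime: the attacker fills every cache set with its own lines.
for s in range(NUM_SETS):
    access(s, "attacker")

# 2. The victim runs and touches a secret-dependent subset of sets.
secret_sets = {2, 5}
for s in secret_sets:
    access(s, "victim")

# 3. Probe: sets where the attacker now misses reveal victim activity.
leaked = {s for s in range(NUM_SETS) if not access(s, "attacker")}
print("victim touched sets:", sorted(leaked))   # -> [2, 5]
```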

October 1

Raquel Urtasun
Toyota Technological Institute at Chicago
"Visual Scene Understanding for Autonomous Systems"


Abstract: Developing autonomous systems that are able to assist humans in everyday tasks is one of the grand challenges in modern computer science. Notable examples are personal robotics for the elderly and people with disabilities, as well as autonomous driving systems that can help decrease fatalities caused by traffic accidents. In order to perform tasks such as navigation, recognition, and manipulation of objects, these systems should be able to efficiently extract 3D knowledge of their environment. In this talk, I'll show how graphical models provide a powerful mathematical formalism for extracting this knowledge. In particular, I'll focus on a few examples, including 3D object and layout estimation and self-localization.
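As a toy illustration of why graphical models suit such problems (generic background, not code from the talk): scene-labeling tasks are often posed as MAP inference in a Markov random field, where unary costs score labels from image evidence and pairwise costs encourage neighboring regions to agree. On a chain, exact MAP is a simple dynamic program. All numbers below are hypothetical.

```python
# Toy scene labeling as MAP inference in a chain-structured MRF.
labels = ["road", "sky"]
unary = [                       # hypothetical per-region evidence costs
    {"road": 0.1, "sky": 2.0},
    {"road": 0.4, "sky": 1.0},
    {"road": 1.5, "sky": 0.2},
    {"road": 2.0, "sky": 0.1},
]

def pairwise(a, b):
    return 0.0 if a == b else 0.8   # smoothness penalty for disagreement

# Min-sum dynamic programming (Viterbi) gives the exact MAP labeling.
cost = {l: unary[0][l] for l in labels}
back = []
for u in unary[1:]:
    step, ptr = {}, {}
    for l in labels:
        prev = min(labels, key=lambda p: cost[p] + pairwise(p, l))
        step[l] = cost[prev] + pairwise(prev, l) + u[l]
        ptr[l] = prev
    back.append(ptr)
    cost = step

# Backtrack from the cheapest final label.
assignment = [min(labels, key=cost.get)]
for ptr in reversed(back):
    assignment.append(ptr[assignment[-1]])
print(list(reversed(assignment)))   # -> ['road', 'road', 'sky', 'sky']
```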

October 8

Avrim Blum
Dept. of Computer Science, Carnegie Mellon University
"Towards theoretical models of natural inputs: aiming to bridge the Theory/AI divide"


Abstract: Theory has become particularly adept at proving problems hard to solve or even to approximate well; yet this does not make those problems disappear, and in some domains heuristic methods have been developed that work quite well in practice. This brings up the question of how theory can best contribute to both the understanding and the development of better algorithms for such problems when worst-case analysis is too pessimistic and simple probabilistic models are on the other hand too optimistic or just inappropriate. In this talk I will discuss a few vignettes, centered around problems of clustering, finding equilibria, and analysis of social networks. One theme in part of this work is that often when AI problems such as clustering are formulated as optimization tasks, the objective being optimized is really a proxy for some other underlying goal. In such cases, implicitly-assumed connections between the proxy and underlying goal (which one would want to hold for the formulation to be reasonable) will sometimes imply additional structure that algorithms can use to bypass worst-case approximation barriers.
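For concreteness, "clustering as optimization" typically means an objective like k-means, which is NP-hard to optimize in the worst case yet is only a proxy for recovering some true underlying clustering. Below is a minimal sketch of Lloyd's algorithm, the standard local-search heuristic for that objective, on hypothetical data; it is generic background, not a method from the talk.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's algorithm: local search on the k-means objective
    (sum of squared distances from each point to its nearest center)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k), key=lambda c: (x - centers[c][0]) ** 2
                                            + (y - centers[c][1]) ** 2)
            clusters[i].append((x, y))
        # Update step: move each center to the mean of its cluster.
        for i, pts in enumerate(clusters):
            if pts:
                centers[i] = (sum(p[0] for p in pts) / len(pts),
                              sum(p[1] for p in pts) / len(pts))
    return centers

# Two well-separated blobs: here the proxy objective and the "true"
# clustering coincide, the kind of implicit structure such analyses exploit.
pts = [(0.1, 0.2), (0.0, -0.1), (0.2, 0.0),
       (5.0, 5.1), (4.9, 5.2), (5.1, 4.8)]
print(kmeans(pts, k=2))
```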

October 15 - Canceled

Special Diversity Talk
**We regret that Dr. Joshua Aronson's talk and visit for Tuesday, October 15th have been canceled due to illness. We will notify you when we reschedule Dr. Aronson's talk.**
Josh Aronson
New York University, Steinhardt School of Culture, Education and Human Development
"Stereotypes and their effects on Academic Performance and Evaluation"

October 18

Special Event
Location: Harrison Hall
Al Roth
Professor of Economics at Stanford University
"Networks and Algorithms: What Have We Learned from Market Design"

Description

Fred and Robin Warren have been early adopters of many successful inventions and ventures, helping shape the communities that foster innovation. Through the funding of The Warren Center, they plan to shape a community that fosters, inspires, and leads innovation and new ventures. "Penn Engineering's steadfast commitment to innovation is what keeps me engaged with Penn. We hope that The Warren Center will become the premier academic and technology incubator of its kind."

October 29

Trevor Mudge
Electrical Engineering & Computer Science Dept, University of Michigan
"Interconnect Fabrics for Multicore Computers"


Abstract: This talk will examine a new class of interconnect fabrics suitable for small to medium-size multicore systems on a chip, those with fewer than 100 processors. The basis of this class of interconnects is the crossbar. Crossbars have several advantageous properties: simple one-hop routing and ease of implementing QoS policies. On the other hand, their scalability is limited and multicast operations can be clumsy to implement. We show that the limits of scalability are not as severe as often thought. Additionally, we show how we support multicast and implement common QoS policies. To support our arguments, we will present performance data from several prototype chips that we have built and show how they can be used to create multicore systems.
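A sketch of the arbitration problem at the heart of any crossbar (a generic round-robin scheme, not the prototypes from the talk): each output port independently grants one of its requesting inputs per cycle, which is what makes one-hop routing and per-port QoS policies straightforward to implement.

```python
# Toy crossbar arbitration: each cycle, every output port grants at
# most one requesting input, using independent round-robin pointers.
NUM_PORTS = 4

def arbitrate(requests, pointers):
    """requests[i] = set of output ports that input i wants this cycle.
    Returns {output: granted_input} and advances the RR pointers."""
    grants = {}
    for out in range(NUM_PORTS):
        contenders = [i for i in range(NUM_PORTS) if out in requests[i]]
        if not contenders:
            continue
        # Start the search at this output's round-robin pointer so no
        # input can be starved (a simple fairness/QoS hook).
        order = [(pointers[out] + k) % NUM_PORTS for k in range(NUM_PORTS)]
        winner = next(i for i in order if i in contenders)
        grants[out] = winner
        pointers[out] = (winner + 1) % NUM_PORTS
    return grants

pointers = [0] * NUM_PORTS
requests = [{2}, {2, 3}, set(), {3}]   # inputs 0 and 1 contend for output 2
print(arbitrate(requests, pointers))   # -> {2: 0, 3: 1}
```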

Bio: Trevor Mudge received his Ph.D. in Computer Science from the University of Illinois. He is now at the University of Michigan, where he was named the Bredt Professor of Engineering after a ten-year term as Director of the Advanced Computer Architecture Laboratory, a group of ten faculty and about 60 graduate students. He is the author of numerous papers on computer architecture, programming languages, VLSI design, and computer vision, and he has supervised 50 theses in these areas. He is a Fellow of the IEEE and a member of the ACM, the IET, and the British Computer Society. In addition to his position as a faculty member, he runs Idiot Savants, a chip design consultancy.

November 5

Grace Hopper Distinguished Lecture
Lydia E. Kavraki
Computer Science and Bioengineering, Rice University
"From Robots to Biomolecules: Computing for the Physical World"


Abstract: This talk will first describe how sampling-based methods revolutionized motion planning in robotics. The presentation will quickly focus on recent algorithms that are particularly suitable for systems with complex dynamics. The talk will then introduce an integrative framework that allows the synthesis of motion plans from high-level specifications. The framework uses temporal logic and formal methods and establishes a tight link between classical motion planning in robotics and task planning in artificial intelligence. Although this research began in the realm of robotics, the experience gained has led to algorithmic advances for analyzing the motion and function of proteins, the worker molecules of all cells. This talk will conclude by discussing robotics-inspired methods for computing the flexibility of proteins and large macromolecular complexes with the ultimate goals of deciphering molecular function and aiding the discovery of new therapeutics.
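For readers unfamiliar with the family of methods, here is a bare-bones RRT (rapidly-exploring random tree) planner in a 2D world with one circular obstacle. It is a generic sketch of sampling-based planning, not the talk's algorithms, which additionally handle complex dynamics and temporal-logic specifications; edge collision checking is omitted for brevity.

```python
import math, random

OBSTACLE, RADIUS, STEP = (0.5, 0.5), 0.2, 0.05   # hypothetical world

def collision_free(p):
    return math.dist(p, OBSTACLE) > RADIUS

def steer(a, b):
    """Move from a toward b by at most STEP."""
    d = math.dist(a, b)
    t = min(1.0, STEP / d) if d > 0 else 0.0
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def rrt(start, goal, iters=5000, seed=1):
    rng = random.Random(seed)
    parent = {start: None}               # tree stored as child -> parent
    for _ in range(iters):
        sample = (rng.random(), rng.random())   # random configuration
        nearest = min(parent, key=lambda n: math.dist(n, sample))
        new = steer(nearest, sample)
        if collision_free(new):
            parent[new] = nearest
            if math.dist(new, goal) < STEP:     # close enough: extract path
                path = [new]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return list(reversed(path))
    return None

path = rrt((0.05, 0.05), (0.95, 0.95))
print(f"path with {len(path)} waypoints" if path else "no path found")
```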

Bio: Lydia E. Kavraki is the Noah Harding Professor of Computer Science and Bioengineering at Rice University. She also holds an appointment at the Department of Structural and Computational Biology and Molecular Biophysics at the Baylor College of Medicine in Houston. Kavraki received her B.A. in Computer Science from the University of Crete in Greece and her Ph.D. in Computer Science from Stanford University. Her research contributions are in physical algorithms and their applications in robotics (robot motion planning, hybrid systems, formal methods in robotics, assembly planning, micromanipulation, and flexible object manipulation), as well as in computational structural biology, translational bioinformatics, and biomedical informatics (modeling of proteins and biomolecular interactions, large-scale functional annotation of proteins, computer-assisted drug design, and systems biology). Kavraki has authored more than 180 peer-reviewed journal and conference publications and is a co-author of the popular robotics textbook "Principles of Robot Motion," published by MIT Press. She is heavily involved in the development of The Open Motion Planning Library (OMPL), which is used in industry and in academic research in robotics and biomedicine. Kavraki is currently on the editorial board of the International Journal of Robotics Research, the ACM/IEEE Transactions on Computational Biology and Bioinformatics, the Computer Science Review, and Big Data. She is also a member of the editorial advisory board of the Springer Tracts in Advanced Robotics. Kavraki is a Fellow of the Association for Computing Machinery (ACM), a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Fellow of the American Institute for Medical and Biological Engineering (AIMBE), a Fellow of the American Association for the Advancement of Science (AAAS), and a Fellow of the World Technology Network (WTN). Kavraki was elected a member of the Institute of Medicine (IOM) of the National Academies in 2012. She has also been a member of the Academy of Medicine, Engineering and Science of Texas (TAMEST) since 2012.

Learn more about the Grace Hopper Lecture Series

November 12

Stephen Chong
School of Engineering and Applied Sciences, Harvard University
"Towards a Practical Secure Concurrent Language"


Abstract: Concurrent programs pose both a challenge and an opportunity for enforcing strong information security. The challenge is that it is hard to reason about information security guarantees in the presence of concurrency. The opportunity is that new and existing language abstractions for concurrency can make it easier to reason about information security. This opportunity arises because information security, like concurrency, is intimately connected to notions of dependency.

We address the challenge, and seize the opportunity: We demonstrate that a practical concurrent language can be extended in a natural way with information security mechanisms that provably enforce strong information security guarantees. We extend the X10 concurrent programming language with coarse-grained information-flow control. Central to X10 concurrency abstractions is the notion of a place: a container for data and computation. By restricting what information may influence data and computation at a place, we can prevent dangerous information flows, including information flow through covert scheduling channels. For many common patterns of concurrency, our security mechanisms impose no restrictions. This work is joint with Stefan Muller.
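A toy rendering of the coarse-grained idea (a hypothetical API for illustration, not the actual X10 extension): attach a security label to each place and permit data to move between places only when the destination's label is at least as restrictive, so low-labeled computation can never observe high-labeled data.

```python
# Toy coarse-grained information-flow control over "places".
# Hypothetical API for illustration; not the actual X10 extension.
LATTICE = {"public": 0, "secret": 1}   # two-point security lattice

class Place:
    """A container for data and computation, tagged with one label."""
    def __init__(self, name, label):
        self.name, self.label, self.data = name, label, []

def flows_to(src_label, dst_label):
    """Flow is allowed only upward in the lattice (no downward leaks)."""
    return LATTICE[src_label] <= LATTICE[dst_label]

def move(value, src: Place, dst: Place):
    if not flows_to(src.label, dst.label):
        raise PermissionError(f"illegal flow {src.name} -> {dst.name}")
    dst.data.append(value)

low = Place("low", "public")
high = Place("high", "secret")
move("aggregate stats", low, high)      # ok: public -> secret
try:
    move("user password", high, low)    # blocked: secret -> public
except PermissionError as e:
    print(e)
```

Because the check is per place rather than per variable, code running entirely within one place needs no restrictions at all, which mirrors the abstract's claim that many common concurrency patterns are unaffected.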

Bio: Steve Chong is a Computer Science faculty member in the Harvard School of Engineering and Applied Sciences. Steve's research focuses on programming languages, information security, and the intersection of these two areas. He is the recipient of an NSF CAREER award and an AFOSR Young Investigator award. He received a PhD from Cornell University and a bachelor's degree from Victoria University of Wellington, New Zealand.

November 19

Frans Kaashoek
Dept. of Electrical Engineering and Computer Science, MIT
"The multicore evolution and operating systems"


Abstract: Multicore chips with hundreds of cores will likely be available soon. Although many applications have significant inherent parallelism (e.g., mail servers), their scalability on many cores can be limited by the underlying operating system. We have built or modified several kernels (Corey, Linux, and sv6) to explore OS designs that scale with an increasing number of cores. This talk will summarize our experiences by exploring questions such as the impact of kernel scalability on application scalability, whether a revolution in kernel design is necessary to achieve kernel scalability, and how to build perfectly scalable operating systems.

Joint work with: S. Boyd-Wickizer, A. Clements, Y. Mao, A. Pesterev, R. Morris, and N. Zeldovich
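One concrete instance of the scalability issues such work confronts (a generic pattern, not code from the kernels named above): a single shared counter serializes all cores on one cache line, while a sharded counter turns updates into (mostly) core-local writes at the cost of a slower read. This Python sketch shows the data-structure shape only; real kernels use lock-free per-CPU variables.

```python
import threading

class ShardedCounter:
    """Sharded ("per-core") counter: increments touch one stripe, so
    threads rarely contend; reads must sum every stripe."""
    def __init__(self, shards=8):
        self.locks = [threading.Lock() for _ in range(shards)]
        self.values = [0] * shards

    def inc(self, shard_hint):
        i = shard_hint % len(self.values)
        with self.locks[i]:            # contention only within one stripe
            self.values[i] += 1

    def read(self):
        return sum(self.values)        # approximate under concurrency

counter = ShardedCounter()
threads = [threading.Thread(
               target=lambda i=i: [counter.inc(i) for _ in range(1000)])
           for i in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.read())   # -> 8000
```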

November 26

Hal Daume III
Computer Science Dept., University of Maryland, College Park
"The Many Flavors of Language: Understanding and Adapting Statistical Models"


Abstract: Language use can vary along many axes, including genre, topic, register, and communication medium. Rounded to two decimal places, 0.00% of all text produced today is newswire. Yet most of our statistical models are built on labeled data drawn from news and related media. These systems fall apart when applied to other types of language, often falling short of the performance of oft-maligned "rule-based systems." If we want statistical systems that we can use on the diverse types of language we see today (social media, scientific texts, speech, etc.), we essentially have two choices: annotate new types of data for all relevant tasks, or develop better learning technology. We take the second approach because it scales better to the large variety of types of language and the large number of interesting tasks.

I'll begin this exploration into language flavors by asking the question: when statistical models are applied to new domains, what goes wrong? Despite almost a decade of research in domain adaptation, very little effort has gone into answering this question. My goal is to convince you that by taking this analysis problem seriously, we can develop much better hypotheses about how to build better systems. Once we understand the problem, I'll discuss my work addressing various aspects of the adaptation problem, with applications ranging from simple text categorization through structured prediction and all the way to machine translation. (Along the way I'll also highlight applications of these technologies to other domains like vision and robotics.)

This is joint work with a large number of students and collaborators: Arvind Agarwal, Marine Carpuat, Larry Davis, Shobeir Fakhraei, Katharine Henry, Ann Irvine, David Jacobs, Jagadeesh Jagarlamudi, Abhishek Kumar, Daniel Marcu, John Morgan, Dragos Munteanu, Jeff Phillips, Chris Quirk, Rachel Rudinger, Avishek Saha, Abhishek Sharma, Suresh Venkatasubramanian.
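One widely cited technique from this line of work is Daume's "frustratingly easy" feature augmentation (Daume III, 2007): every feature is copied into a shared version and a domain-specific version, letting the learner decide per feature whether the domains behave alike. The sketch below uses hypothetical features; only the augmentation mapping itself is the published method.

```python
# Feature augmentation for domain adaptation ("Frustratingly Easy
# Domain Adaptation", Daume III, 2007): duplicate each feature into a
# shared copy plus a copy active only in its own domain.
def augment(features, domain):
    """Map {name: value} into the augmented feature space."""
    out = {}
    for name, value in features.items():
        out[f"shared:{name}"] = value     # fires in every domain
        out[f"{domain}:{name}"] = value   # fires only in `domain`
    return out

# Hypothetical example: the same word feature seen in two domains.
print(augment({"word=awesome": 1.0}, "news"))
print(augment({"word=awesome": 1.0}, "tweets"))
# Any standard linear classifier can then be trained on the union of
# augmented source and target data with no further algorithmic changes.
```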

Bio: Hal Daume III is an associate professor in Computer Science at the University of Maryland, College Park. He holds joint appointments in UMIACS and Linguistics. He was previously an assistant professor in the School of Computing at the University of Utah. His primary research interest is in developing new learning algorithms for prototypical problems that arise in the context of language processing and artificial intelligence. This includes topics like structured prediction, domain adaptation, and unsupervised learning, as well as multilingual modeling and affect analysis. He associates himself most with conferences like ACL, ICML, NIPS, and EMNLP. He earned his PhD at the University of Southern California with a thesis on structured prediction for language (his advisor was Daniel Marcu). He spent the summer of 2003 working with Eric Brill in the machine learning and applied statistics group at Microsoft Research. Prior to that, he studied math (mostly logic) at Carnegie Mellon University, while working at the Language Technology Institute.

December 5

James O'Brien
Dept. of Electrical Engineering and Computer Science, University of California, Berkeley
"Talk Geometric Image and Video Forensics"


Abstract: Advances in computational photography, computer vision, and computer graphics allow for the creation of visually compelling photographic forgeries. Forged images have appeared in tabloid magazines, mainstream media outlets, political attacks, scientific journals, and the hoaxes that land in our email inboxes. These doctored photographs are appearing with growing frequency and sophistication, and even experts cannot rely on visual inspection to distinguish authentic images from forgeries. Techniques in image forensics operate on the assumption that photo tampering will disturb some statistical or geometric property of an image. In a well-executed forgery these disturbances will either be perceptibly insignificant, or they may be noticeable but subjectively plausible. Methods for forensic analysis provide a means to detect and quantify specific types of tampering. To the extent that these perturbations can be quantified and detected, they can be used to objectively invalidate a photo. This talk will focus on forensic methods based on geometric content analysis. These methods work by finding inconsistencies in the geometric relationships among objects depicted in a photograph. The geometric relationships in the 2D image correspond to the projection of the relations that exist in the 3D scene. If a scene is known to contain a given relationship but the projected relation does not hold in the photograph, then one may conclude that the photograph is not a true projective image of the scene. The goal is to build a set of hard constraints that must be satisfied or else the image must be fake.
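A small example of the kind of projective reasoning involved (generic geometry, not the talk's specific methods): in homogeneous image coordinates, the line through two points is their cross product and the intersection of two lines is the cross product of the lines, so images of parallel scene edges must all pass through one common vanishing point. Edges whose extensions meet at inconsistent vanishing points violate a hard projective constraint and are evidence of tampering. The point coordinates here are hypothetical measurements.

```python
# Vanishing-point consistency via homogeneous coordinates.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line(p, q):          # line through two image points (x, y, 1)
    return cross(p, q)

def meet(l1, l2):        # intersection of two lines, normalized
    x, y, w = cross(l1, l2)
    return (x / w, y / w) if w != 0 else None   # None = meet at infinity

# Hypothetical images of two parallel box edges measured in a photo:
l1 = line((0.0, 0.0, 1.0), (4.0, 2.0, 1.0))
l2 = line((0.0, 1.0, 1.0), (4.0, 2.5, 1.0))
print(meet(l1, l2))      # their common vanishing point: (8.0, 4.0)
```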

Bio: James F. O'Brien is a Professor of Computer Science at the University of California, Berkeley. His primary area of interest is computer animation, with an emphasis on generating realistic motion using physically based simulation and motion capture techniques. He has authored numerous papers on these topics. In addition to his research pursuits, Prof. O'Brien has worked with several game companies on integrating advanced simulation physics into game engines, and his methods for destruction modeling have been used in more than 15 feature films. He received his doctorate from the Georgia Institute of Technology in 2000, the same year he joined the faculty at U.C. Berkeley. Professor O'Brien is a Sloan Fellow and an ACM Distinguished Scientist; Technology Review selected him as one of its TR-100; and he has been awarded research grants from the Okawa and Hellman Foundations. He is currently serving as ACM SIGGRAPH Director at Large.