Contact
University of Pennsylvania
Levine 402
myatskar@cis.upenn.edu
Mark Yatskar

I am an Assistant Professor at the University of Pennsylvania in the Department of Computer and Information Science. I did my PhD at the University of Washington, co-advised by Luke Zettlemoyer and Ali Farhadi. I was a Young Investigator at the Allen Institute for Artificial Intelligence for several years, working with their computer vision team, PRIOR. My interests lie at the intersection of natural language processing, computer vision, and fairness in computing.

My research broadly explores how language can be used to structure visual perception. I work on machine learning approaches that enable tight coupling between how people express themselves in language, how machine behavior is specified, and how machines ultimately express themselves back to people. Yet if machines learn from people and their language, we risk allowing them to inherit human bias. Many studies, including my own, have found that this is in fact the case. My lab explores three main research themes around expanding the abilities of artificial intelligence systems:

  • How can language be used as a scaffold to accelerate visual intelligence?
  • How can machines communicate about the world in natural language?
  • How do we specify machine behavior without inheriting bias?
My work is interdisciplinary, spanning Natural Language Processing, Computer Vision, and Fairness in Machine Learning. My research has been presented in academic venues such as ACL, NAACL, EMNLP, CVPR, and ICCV, and featured in the popular press, including The New York Times and Wired. I received a Best Long Paper Award at EMNLP for work on gender bias amplification (Wired article).

Students

Prospective PhD Students: info
Penn Students Interested in Research: info

Teaching

CIS 530: Computational Linguistics: SP 2021, FA 2021
CIS 700: Language and Vision: FA 2020

News

  • Nov 2021: QuAC section of EMNLP Crowdsource Tutorial
  • June 2021: Co-advising Penn Alexa TaskBot Team with Chris Callison-Burch
  • May 2021: Talk at CMU
  • May 2021: Talk at Princeton

    Publications

    The most up-to-date list of my publications can be found on Semantic Scholar.

    2021

    Visual Goal-Step Inference using wikiHow
    [Paper][Bib]
    Yue Yang, Artemis Panagopoulou, Qing Lyu, Li Zhang, Mark Yatskar, Chris Callison-Burch
    EMNLP 2021

    Visual Semantic Role Labeling for Video Understanding
    [Paper][Bib]
    Arka Sadhu, Tanmay Gupta, Mark Yatskar, Ram Nevatia, Aniruddha Kembhavi
    Conference on Computer Vision and Pattern Recognition (CVPR), 2021


    2020

    RoboTHOR: An Open Simulation-to-Real Embodied AI Platform
    [Paper][Bib]
    Matt Deitke, Winson Han, Alvaro Herrasti, Aniruddha Kembhavi, Eric Kolve, Roozbeh Mottaghi, Jordi Salvador, Dustin Schwenk, Eli VanderBilt, Matthew Wallingford, Luca Weihs, Mark Yatskar, Ali Farhadi
    Conference on Computer Vision and Pattern Recognition (CVPR), 2020

    Grounded Situation Recognition
    [Paper][Bib][Demo]
    Sarah Pratt, Mark Yatskar, Luca Weihs, Ali Farhadi, Aniruddha Kembhavi
    European Conference on Computer Vision (ECCV), 2020

    What Does BERT with Vision Look At?
    [Paper][Bib]
    Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang
    North American Chapter of the Association for Computational Linguistics (NAACL), 2020

    VisualBERT: A Simple and Performant Baseline for Vision and Language
    [Paper][Bib]
    Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang
    Preprint 2020

    Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles
    [Paper][Bib]
    Christopher Clark, Mark Yatskar, Luke Zettlemoyer
    Findings of Empirical Methods in Natural Language Processing (EMNLP), 2020


    2019

    Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases
    [Paper][Bib]
    Christopher Clark, Mark Yatskar, Luke Zettlemoyer
    Empirical Methods in Natural Language Processing (EMNLP), 2019

    Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations
    [Paper][Bib][Demo]
    Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, Vicente Ordóñez
    In International Conference on Computer Vision (ICCV), 2019

    A Qualitative Comparison of CoQA, SQuAD 2.0, and QuAC
    [Paper] [Bib] [Website]
    Mark Yatskar
    In North American Chapter of the Association for Computational Linguistics (NAACL), 2019

    Gender Bias in Contextualized Word Embeddings
    [Paper] [Bib]
    Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordóñez, Kai-Wei Chang
    In North American Chapter of Association for Computational Linguistics (NAACL), 2019


    2018

    QuAC: Question Answering in Context
    [Paper] [Bib] [Website]
    Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, Luke Zettlemoyer
    In Empirical Methods in Natural Language Processing (EMNLP), 2018

    Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods
    [Paper] [Bib]
    Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordóñez, Kai-Wei Chang
    In North American Chapter of Association for Computational Linguistics (NAACL), 2018

    Neural Motifs: Scene Graph Parsing with Global Context
    [Paper] [Project Page ] [Code] [Bib]
    Rowan Zellers, Mark Yatskar, Sam Thomson, Yejin Choi
    In Computer Vision and Pattern Recognition (CVPR), 2018


    2017

    Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints
    ( Best Long Paper Award ) [Paper] [Bib] [Talk] [Code]
    Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordóñez, Kai-Wei Chang
    In Empirical Methods in Natural Language Processing (EMNLP), 2017
    Press: Wired: Machines Taught By Photos Learn a Sexist View of Women

    Commonly Uncommon: Semantic Sparsity in Situation Recognition
    [Paper] [Bib] [Demo]
    Mark Yatskar, Vicente Ordóñez, Luke Zettlemoyer, Ali Farhadi
    In Computer Vision and Pattern Recognition (CVPR), 2017

    Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
    [Paper] [Bib] [Demo] [Code]
    Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, Luke Zettlemoyer
    In Association for Computational Linguistics (ACL), 2017


    2016

    Situation Recognition: Visual Semantic Role Labeling for Image Understanding
    [Paper] [Bib] [Supplemental Material][Slides] [Data] [Browse] [Demo] [Code]
    Mark Yatskar, Luke Zettlemoyer, Ali Farhadi
    In Computer Vision and Pattern Recognition (CVPR), 2016 (Oral)
    Press: New York Times: Computer Vision: On the Way to Seeing More

    Stating the Obvious: Extracting Visual Common Sense Knowledge
    [Paper] [Bib]
    Mark Yatskar, Vicente Ordóñez, Ali Farhadi
    In North American Chapter of Association for Computational Linguistics (NAACL), 2016


    Before 2014

    See No Evil, Say No Evil: Description Generation from Densely Labeled Images
    [Paper] [Bib] [Data] [Captions] [Output]
    Mark Yatskar, Michel Galley, Lucy Vanderwende, and Luke Zettlemoyer
    In Third Joint Conference on Lexical and Computational Semantics (*SEM), 2014

    Learning to Relate Literal and Sentimental Descriptions of Visual Properties
    [Paper] [Bib] [Data]
    Mark Yatskar, Svitlana Volkova, Asli Celikyilmaz, Bill Dolan, and Luke Zettlemoyer
    In North American Chapter of the Association for Computational Linguistics (NAACL), 2013

    For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia
    [Paper] [Bib] [Data]
    Mark Yatskar, Bo Pang, Cristian Danescu-Niculescu-Mizil, Lillian Lee
    In North American Chapter of the Association for Computational Linguistics (NAACL), 2010