Image: television buddha (after Something Pacific), sculpture by Robert Twomey

Bridging the Gap between Subjective and Computational Measurements of Machine Creativity for CVPR 2021

June 20, 2021, 09:00 - 12:30 (Time Zone TBD)

CVPR 2021 is a virtual event and this workshop will be conducted online. Our invited speakers will provide prerecorded video talks and participate in live panel discussions.

Introduction

While methods for producing machine creativity have improved significantly, the discussion toward a scientific consensus on measuring the creative abilities of machines has only begun. As artificial intelligence becomes capable of solving more abstract and advanced problems (e.g., image synthesis, cross-modal translation), how do we measure the creative performance of a machine? In the world of visual art, subjective evaluations of creativity have been discussed at length. In the CVPR community, by comparison, evaluating a creative method has not been as systematized. Our goal in this workshop is to discuss current methods for measuring creativity both with experts in creative artificial intelligence and with artists. We do not wish to narrow the gap between how humans evaluate creativity and how machines do; instead, we wish to understand the differences and create links between the two so that our machine creativity methods improve.
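
To make the question concrete: a purely computational measurement often reduces creativity to a statistic over a feature space. The sketch below scores the "novelty" of a generated artwork as its mean distance, in an embedding space, to its nearest neighbors in a reference corpus. It is a deliberately naive illustration, not a metric proposed by this workshop; the function, the 512-dimensional embeddings, and the random stand-in data are all hypothetical.

    # Minimal novelty-score sketch (Python + NumPy). Embeddings are assumed to
    # come from any pretrained image encoder; random vectors stand in for real
    # data so the example is self-contained and runnable.
    import numpy as np

    def novelty_score(candidate, corpus, k=5):
        """Mean Euclidean distance from `candidate` to its k nearest
        neighbors in `corpus`; higher means farther from the reference set."""
        dists = np.linalg.norm(corpus - candidate, axis=1)  # distance to every corpus item
        return float(np.sort(dists)[:k].mean())             # average over the k closest

    rng = np.random.default_rng(0)
    corpus = rng.normal(size=(1000, 512))  # hypothetical embeddings of a reference art corpus
    candidate = rng.normal(size=(512,))    # hypothetical embedding of one generated artwork
    print(f"novelty: {novelty_score(candidate, corpus):.3f}")

A number like this captures statistical atypicality at best; the distance between such a score and a human's subjective judgment of creativity is exactly the gap this workshop sets out to examine.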

Format

This workshop will consist of a combination of expert panelists discussing key questions in measuring creativity, breakout sessions amongst attendees drawing on their disciplinary expertise, and an interactive evaluation of artworks submitted to the workshop. We have gathered a list of experts in creative computer vision and visual art who are interested in discussing their own methods of measuring creativity and hearing how others do so. These guest speakers will engage in two panel discussions and a question and answer session.

Invited Speakers

David Bau is a PhD student at MIT, advised by Antonio Torralba. His research focuses on the dissection, visualization, and interactive manipulation of deep networks in vision, and he is the creator of the Network Dissection, GAN Paint, and GAN Rewriting methods. Prior to MIT, he was an engineer at Google, where he contributed to Image Search and Hangouts and created the Pencil Code educational programming system. David is a coauthor of the widely used textbook Numerical Linear Algebra.

Kazjon Grace is the director of the Designing with AI Lab at the University of Sydney’s School of Architecture, Design and Planning. His work explores how intelligent interactive systems can be part of the creative decision-making process, with a particular focus on computational models of surprise, curiosity and interpretation. He is currently an ARC DECRA Fellow leading a project to use methods from Creative AI to encourage people to diversify their diets and thus eat more healthily.

Kristen Grauman is a Professor in the Department of Computer Science at the University of Texas at Austin and a Research Scientist at Facebook AI Research (FAIR). Her research in computer vision and machine learning focuses on visual recognition, video, and embodied perception. Before joining UT-Austin in 2007, she received her Ph.D. from MIT. She is an IEEE Fellow, AAAI Fellow, Sloan Fellow, and recipient of the 2013 Computers and Thought Award. She and her collaborators have been recognized with several best paper awards in computer vision, including the 2011 Marr Prize and a 2017 Helmholtz Prize (test-of-time award). She serves as an Associate Editor-in-Chief of PAMI and previously served as a Program Chair of CVPR 2015 and NeurIPS 2018.

Ellen Pearlman is a new media artist, curator, and critic. A current Research Fellow at MIT, she is also a Senior Researcher and Assistant Professor at RISEBA University in Latvia, as well as Director of ThoughtWorks Arts, a global innovation and research lab, and President of Art-A-Hack(TW), a rapid-prototyping collaborative workshop. Ellen is a Fulbright Specialist in Art, Media and Technology, a Zero1 American Arts Incubator/U.S. State Department artist, and a U.S. Alumni Ties (Fulbright) grantee. She received her PhD from the School of Creative Media, City University of Hong Kong, where her thesis was awarded highest global honors by Leonardo LABS Abstracts. Ellen created “Noor: A Brainwave Opera”, an interactive immersive work staged in a 360-degree theater in Hong Kong, and “AIBO: An Emotionally Intelligent Artificial Intelligence Brainwave Opera” at the Estonian Academy of Music.

Mark Riedl is a Professor in the Georgia Tech School of Interactive Computing and Associate Director of the Georgia Tech Machine Learning Center. Dr. Riedl’s research focuses on human-centered artificial intelligence: the development of artificial intelligence and machine learning technologies that understand and interact with human users in more natural ways. His recent work has focused on story understanding and generation, computational creativity, explainable AI, and teaching virtual agents to behave safely. His research is supported by the NSF, DARPA, ONR, the U.S. Army, the U.S. Department of Health and Human Services, Disney, and Google. He is the recipient of a DARPA Young Faculty Award and an NSF CAREER Award.

Carolyn Rose is a Professor of Language Technologies and Human-Computer Interaction in the School of Computer Science at Carnegie Mellon University. Her research program focuses on computational modeling of discourse to enable scientific understanding of the social and pragmatic nature of conversational interaction in all its forms, and on using this understanding to build intelligent computational systems for improving collaborative interactions. Her research group’s highly interdisciplinary work, published in over 270 peer-reviewed publications, is represented in the top venues of five fields: Language Technologies, Learning Sciences, Cognitive Science, Educational Technology, and Human-Computer Interaction, with awards in three of these fields. She is a Past President and Inaugural Fellow of the International Society of the Learning Sciences, a Senior Member of IEEE, Founding Chair of the International Alliance to Advance Learning in the Digital Era, and Co-Editor-in-Chief of the International Journal of Computer-Supported Collaborative Learning. She also serves as a 2020-2021 AAAS Fellow under the Leshner Institute for Public Engagement with Science, with a focus on public engagement with artificial intelligence.

Kenneth Stanley leads a research team at OpenAI working on the challenge of open-endedness. He was previously Charles Millican Professor of Computer Science at the University of Central Florida and a co-founder of Geometric Intelligence Inc., which was acquired by Uber to create Uber AI Labs, where he was head of Core AI research. He received a B.S.E. from the University of Pennsylvania in 1997 and a Ph.D. from the University of Texas at Austin in 2004. He is an inventor of the NeuroEvolution of Augmenting Topologies (NEAT), HyperNEAT, novelty search, and POET algorithms, as well as the CPPN representation, among many others. His main research contributions are in neuroevolution (i.e., evolving neural networks), generative and developmental systems, coevolution, machine learning for video games, interactive evolution, quality diversity, and open-endedness. He has won best paper awards for his work on NEAT, NERO, NEAT Drummer, FSMC, HyperNEAT, novelty search, Galactic Arms Race, POET, and MCC. His original 2002 paper on NEAT also received the 2017 ISAL Award for Outstanding Paper of the Decade 2002-2012 from the International Society for Artificial Life. He is a coauthor of the popular science book “Why Greatness Cannot Be Planned: The Myth of the Objective” (published by Springer) and has spoken widely on its subject.

Schedule

Draft schedule, adapted from our ISEA 2020 workshop. Invited speaker talks will be available as online videos, which participants should watch before the workshop.

Time           Activity                                                 Location
9:00 - 9:20    Introduction (20 min)                                    main room
9:20 - 9:50    Discussion 1 - Elements of Creative AI (30 min)          breakout rooms
9:50 - 10:05   Q & A (15 min)                                           main room
10:05 - 10:35  Discussion 2 - Evaluating ML/Art Projects (30 min)       breakout rooms
10:35 - 10:50  Q & A (15 min)                                           main room
10:50 - 11:30  Guest Speaker Panel (40 min)                             main room
11:30 - 12:00  Discussion 3 - Revising Metrics, Evaluation 2 (30 min)   breakout rooms
12:10 - 12:30  Presentation of Results and Wrap-up (20 min)             main room

Questions

Panel 1 (Arts): TBD

Panel 2 (Engineering): TBD

Additional Questions (All)

Call for Participation

We offer two ways to participate.

Call for Participation in the Live, Active Breakout session

In this interactive workshop, participants will discuss thought-provoking topics in Creative AI. For instance, how do we evaluate the artifacts created by AI systems? What are the metrics for measuring creativity? Participants will engage in breakout-room discussions to collaboratively work toward answers to these questions. This interdisciplinary workshop aims to provide a platform for researchers in computer vision, machine learning, and AI in general to meet artists, writers, and performers to discuss and share their views on the subject of creativity and intelligence. The discussions will draw on concrete examples from a variety of creative domains, including image generation, robotic painting, and story generation. We aim to build on existing views from both artists and AI researchers to carve out an interdisciplinary discourse on Creative AI.

Participation in this workshop is open to all CVPR attendees; however, space is limited to ensure that the live, active breakout sessions are productive. To reserve a spot for these sessions, please fill out the following Google Form by Saturday, May 1st, midnight Eastern: Participation Form

Call for Artwork

This workshop is accepting submissions of computer vision artwork for a virtual gallery exhibition. Submissions will be juried for acceptance by a committee of artists. Accepted artworks will be featured in an online exhibition and will also serve as contemporary examples of machine creativity for workshop participants to discuss. Guest speakers and attendees will evaluate these artworks during the workshop, applying the measures of machine creativity developed earlier in the event.

If you are interested in submitting artwork, please use the following Google Form to send your work by Saturday, May 1st, midnight Eastern: Art Call Form

Participants

The list of participants will be updated as they are confirmed.

Metrics

Coming soon.

Organizers

Ahmed Elgammal is a professor at the Department of Computer Science at Rutgers University. His research areas include data science in the domain of digital humanities. His work on knowledge discovery in art history and AI art generation received wide international media attention, and his art has been shown in several technology and art venues in Los Angeles, Frankfurt, San Francisco, and New York City.

Hyeju Jang is a postdoctoral fellow at the University of British Columbia. Her research interests include natural language processing, computational linguistics, discourse analysis, and text mining in various domains. She has been working on computationally modeling creative uses of language, such as metaphor, in order to capture how they are used in discourse context and to identify a broader spectrum of predictors that contribute to their detection and generation.

Eunsu Kang is an artist, researcher, and educator who explores the intersection of art and machine learning, one of the core methods for building AI. She has been making interactive art installations and performances, teaching art-making using machine learning methods, and recently looking into the possibility of creative AI. She is also a co-founder of the Women Art AI collective.

James McCann is an Assistant Professor in the Carnegie Mellon Robotics Institute. He is interested in systems and interfaces that operate in real-time and build user intuition, including systems that enable and enhance creativity.

Jean Oh is a faculty member at the Robotics Institute at Carnegie Mellon University. She is passionate about creating robots that can collaborate with humans in shared or remote environments, continuously improving themselves through learning, exploration, and interactions. Jean co-designed a new graduate-level course on Creative AI at CMU and was a co-organizer of the first workshop on Measuring Computational Creativity at ISEA’20.

Devi Parikh is an Associate Professor in the School of Interactive Computing at Georgia Tech, and a Research Scientist at Facebook AI Research (FAIR). Her research interests are in computer vision, natural language processing, embodied AI, human-AI collaboration, and AI for creativity. Devi has co-organized workshops at CVPR since 2010, including the series of Visual Question Answering workshops.

Peter Schaldenbrand is a graduate student and technical staff member at Carnegie Mellon University. His research interests include machine learning models that perform creative tasks and artificial intelligence in education. Recently, he has been focusing on a robotic painting project.

Robert Twomey is an Assistant Professor of Emerging Media Arts at the University of Nebraska-Lincoln, and a Visiting Scholar with the Clarke Center for Human Imagination at UC San Diego. His work as an artist and engineer explores how emerging technologies transform sites of intimate life. He has presented his work at SIGGRAPH (Best Paper Award) and the Museum of Contemporary Art San Diego, and his work has been supported by the National Science Foundation, the California Arts Council, Microsoft, Amazon, and NVIDIA.

Jun-Yan Zhu is an Assistant Professor in the School of Computer Science at Carnegie Mellon University. He studies computer vision, computer graphics, computational photography, and machine learning, with the goal of building intelligent machines capable of recreating our visual world. Jun-Yan has co-organized several relevant workshops and tutorials, including the CVPR 2020 Tutorial on Neural Rendering, the ICCV 2019 Workshop on Image and Video Synthesis, and the CVPR 2018 Tutorial on Generative Adversarial Networks.

Support