
Bridging the Gap between Subjective and Computational Measurements of Machine Creativity for CVPR 2021

June 2021, Time/Date TBD


Description coming soon.

CVPR 2021 is a virtual event and this workshop will be conducted online. Our invited speakers will provide prerecorded video talks and participate in live panel discussions.

Introduction

Introductory slides: coming soon.

Tentative Speakers

Embedded YouTube videos of the pre-recorded talks will appear here (see the HTML comments in the .md source for an example).
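As a sketch of what a speaker entry might look like once talks are available, a standard YouTube iframe embed works in the page's Markdown/HTML source. The video ID, dimensions, and title below are placeholders, not links to actual talks.

```html
<!-- Hypothetical embed for a pre-recorded talk; replace VIDEO_ID with the talk's YouTube ID. -->
<iframe width="560" height="315"
        src="https://www.youtube.com/embed/VIDEO_ID"
        title="Invited talk (pre-recorded)"
        frameborder="0"
        allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
        allowfullscreen>
</iframe>
```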

David Bau, MIT CSAIL

Kazjon Grace, U. Sydney

Jessica Hodgins, CMU/FAIR

Mark Riedl, Georgia Tech

Carolyn Rose, CMU

Kenneth Stanley, OpenAI

Schedule

Draft schedule, adapted from ISEA 2020 (invited speaker talks are provided as online videos; participants should watch them before the session).

| Time | Activity | Location |
| --- | --- | --- |
| 3:00 - 3:20 | Introduction (20 min) | main room |
| 3:20 - 3:50 | Discussion 1 - Elements of Creative AI (30 min) | breakout rooms |
| 3:50 - 4:05 | Q & A (15 min) | main room |
| 4:05 - 4:40 | Guest Speaker Panel 1 (35 min) | main room |
| 4:40 - 5:10 | Discussion 2 - Evaluating ML/Art Projects (30 min) | breakout rooms |
| 5:10 - 5:25 | Q & A (15 min) | main room |
| 5:25 - 6:10 | Guest Speaker Panel 2 (45 min) | main room |
| 6:10 - 6:30 | Discussion 3 - Revising Metrics, Evaluation 2 (20 min) | breakout rooms |
| 6:30 - 7:00 | Presentation of Results and Q & A (30 min) | main room |

Questions

Panel 1 (Arts): TBD

Panel 2 (Engineering): TBD

Additional Questions (All)

Participants

This session is a true working session: together we will collaboratively define metrics for Creative AI.

Robert Twomey: Reverse Ekphrasis

This list will be updated as participants are confirmed.

Call for Participation

Participants in this workshop will, as a group, examine a number of Artificial Intelligence (AI) art projects and articulate metrics for evaluating dimensions of creativity in those works. Workshop participants are key contributors to this research on measuring creative AI, and we plan to name every participant as a contributor on future publications from this effort (website, papers, etc.). To our knowledge, this is a novel research project with no prior examples; this workshop will be the inaugural event for the effort, as the exercise has previously been conducted only with students.

How to participate: If you are interested in participating, please fill out the following Google Form by Some Deadline, Spring 2021.

Form: google form link

We will send acceptance notifications by Spring 2021.

Metrics

Here is our metrics worksheet: google docs

We will publish our results after the workshop.

Organizers

Ahmed Elgammal is a professor in the Department of Computer Science at Rutgers University. His research areas include data science in the domain of digital humanities. His work on knowledge discovery in art history and AI art generation has received wide international media attention, and his art has been shown at technology and art venues in Los Angeles, Frankfurt, San Francisco, and New York City.

Hyeju Jang is a postdoctoral fellow at the University of British Columbia. Her research interests include natural language processing, computational linguistics, discourse analysis, and text mining in various domains. She has been working on computationally modeling creative uses of language, such as metaphor, in order to capture how they are used in discourse context and identify a broader spectrum of predictors that contribute towards their detection and generation.

Eunsu Kang is an artist, a researcher, and an educator who explores the intersection of art and machine learning, one of the core methods for building AI. She has been making interactive art installations and performances, teaching art-making using machine learning methods, and recently looking into the possibility of creative AI. She is also a co-founder of the Women Art AI collective.

James McCann is an Assistant Professor in the Carnegie Mellon Robotics Institute. He is interested in systems and interfaces that operate in real-time and build user intuition, including systems that enable and enhance creativity.

Jean Oh is a faculty member at the Robotics Institute at Carnegie Mellon University. She is passionate about creating robots that can collaborate with humans in shared or remote environments, continuously improving themselves through learning, exploration, and interactions. Jean co-designed a new graduate-level course on Creative AI at CMU and was a co-organizer of the first workshop on Measuring Computational Creativity at ISEA’20.

Devi Parikh is an Associate Professor in the School of Interactive Computing at Georgia Tech, and a Research Scientist at Facebook AI Research (FAIR). Her research interests are in computer vision, natural language processing, embodied AI, human-AI collaboration, and AI for creativity. Devi has co-organized workshops at CVPR since 2010, including the series of Visual Question Answering workshops.

Peter Schaldenbrand is a graduate student and technical staff member at Carnegie Mellon University. His research interests include creating machine learning models that perform creative tasks and artificial intelligence in education. Recently, he has been focusing on a robot artistic painting project.

Robert Twomey is an Assistant Professor of Emerging Media Arts at the University of Nebraska-Lincoln, and a Visiting Scholar with the Clarke Center for Human Imagination at UC San Diego. His work as an artist and engineer explores how emerging technologies transform sites of intimate life. He has presented his work at SIGGRAPH (Best Paper Award) and the Museum of Contemporary Art San Diego, and his work has been supported by the National Science Foundation, the California Arts Council, Microsoft, Amazon, and NVIDIA.

Jun-Yan Zhu is an Assistant Professor in the School of Computer Science at Carnegie Mellon University. He studies computer vision, computer graphics, computational photography, and machine learning, with the goal of building intelligent machines capable of recreating our visual world. Jun-Yan has co-organized several relevant workshops and tutorials, including the CVPR 2020 Tutorial on Neural Rendering, the ICCV 2019 Workshop on Image and Video Synthesis, and the CVPR 2018 Tutorial on Generative Adversarial Networks.

Support