Contributed Artworks



The Ear by Esteban Y Agosin

The Ear is a listening, surveillance, and Artificial Intelligence project. It is a device that collects human voices, transcribes them into text, and with that information creates new texts. The Ear is a machine that, through an Artificial Intelligence system, creates ideas in real time based on what it is listening to. This piece could be defined as an ironic take on the surveillance system. In a way, it is not a perfect and functional machine: it constantly makes mistakes, degrading the original information from the source and creating erroneous and potentially false information, digital garbage that undermines the expected fantasies of technology and surveillance systems. The experience of the piece shows how the human voice is transformed into a digital object that can be analyzed, recorded, transformed, stored, and used, questioning the concept, sense, and value of information, privacy, and freedom in our contemporary society, as well as the relevance of the information collected by surveillance systems and the ethical and political limits of this type of device.
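The description amounts to a listen → transcribe → generate loop. The sketch below is a minimal illustration of that pipeline, assuming an off-the-shelf speech recognizer (the `speech_recognition` package with its free Google recognizer) and vanilla GPT-2 as the text generator; the piece's actual components are not documented here, so every choice in the code is an assumption.

```python
# Minimal sketch of a listen -> transcribe -> generate loop (illustrative only).
import speech_recognition as sr            # requires PyAudio for microphone access
from transformers import pipeline

recognizer = sr.Recognizer()
generator = pipeline("text-generation", model="gpt2")

with sr.Microphone() as source:
    while True:
        # Collect a short fragment of speech from the room.
        audio = recognizer.listen(source, phrase_time_limit=10)
        try:
            # Transcribe it (imperfectly) into text.
            heard = recognizer.recognize_google(audio)
        except sr.UnknownValueError:
            continue  # the machine "mishears"; degraded input is part of the process
        # Generate a new text from whatever was (mis)heard.
        idea = generator(heard, max_new_tokens=50, do_sample=True)[0]["generated_text"]
        print(idea)
```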



Zen Machine by Neo Christopher Chung

Zen Machine is a meditative audio-visual installation that answers questions from the audience with a generative hypnotic soundscape. Trained on a large corpus of the Sutras, Shastras, Zen teachings, scholarly essays and texts, koans, and tweets, this artificial intelligence algorithm (GPT-2) explores existential and spiritual realms. Just as koans (paradoxical dialogues used as a meditative device) may only be understood by willing students and perceived as a subtle invocation for awakening, our perception of Zen Machine’s answers – and of AI broadly – depends on our state of mind. To that end, Zen Machine provides an immersive environment and a poetic context. Would it be possible, then, for AI to aid in our pursuit of enlightenment?
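For the text side, a minimal sketch of fine-tuning GPT-2 on a corpus like the one described is shown below, using the Hugging Face `transformers` and `datasets` libraries; the corpus file name, hyperparameters, and prompt are all hypothetical, since the installation's actual training setup is not specified here.

```python
# Illustrative sketch: fine-tune GPT-2 on a plain-text corpus, then answer a question.
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "zen_corpus.txt" is a hypothetical file of sutras, koans, essays, tweets, etc.
dataset = load_dataset("text", data_files={"train": "zen_corpus.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="zen-gpt2", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Answer an audience question with the fine-tuned model.
prompt = "What is the sound of one hand clapping?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=60, do_sample=True,
                     temperature=0.9, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```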

A series of digital paintings such as “Illusion of falling asleep” (2021), “Unreality of reason” (2021), and “The awe of his supernatural deficiencies” (2021) were created from its interactive exhibition at Galeria Entropia in Wrocław, Poland (9–30/03/2021). The audience’s questions (shown in quotes) and the answers were fed to a generative AI system to create digital paintings. In particular, Deep Daze, combining CLIP (Radford et al. 2021) and Siren (Sitzmann et al. 2020), imagines and visualizes unique scenes based on this new kind of koan. The interplay between texts and paintings provides an opportunity to pause and reflect on the potential of going beyond anthropocentric understanding.
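As a rough illustration of how Deep Daze-style synthesis works, the sketch below optimizes a SIREN network (pixel coordinates → colors) so that its rendered image matches a text prompt under CLIP. It is a minimal sketch assuming OpenAI's `clip` package and a simplified SIREN without the paper's special weight initialization; the prompt, model choice, and hyperparameters are illustrative, not the settings used for the exhibited paintings.

```python
# Illustrative CLIP-guided implicit image ("Deep Daze"-style) optimization loop.
import torch
import torch.nn as nn
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float().eval()            # avoid fp16/fp32 mismatches on GPU
for p in clip_model.parameters():
    p.requires_grad_(False)                       # only the SIREN is trained

# CLIP's expected input normalization.
MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

class Siren(nn.Module):
    """Implicit image: maps (x, y) coordinates to RGB with sine activations.
    (The SIREN paper also prescribes a special init, omitted in this sketch.)"""
    def __init__(self, hidden=256, layers=5, w0=30.0):
        super().__init__()
        dims = [2] + [hidden] * layers + [3]
        self.w0 = w0
        self.linears = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:]))

    def forward(self, coords):
        x = coords
        for lin in self.linears[:-1]:
            x = torch.sin(self.w0 * lin(x))
        return torch.sigmoid(self.linears[-1](x))

def render(siren, size=224):
    # Evaluate the SIREN on a dense pixel grid to obtain a (1, 3, H, W) image.
    lin = torch.linspace(-1, 1, size, device=device)
    yy, xx = torch.meshgrid(lin, lin, indexing="ij")
    coords = torch.stack([xx, yy], dim=-1).reshape(-1, 2)
    return siren(coords).reshape(size, size, 3).permute(2, 0, 1).unsqueeze(0)

siren = Siren().to(device)
opt = torch.optim.Adam(siren.parameters(), lr=1e-4)

prompt = "Illusion of falling asleep"             # painting title above, used as an example
with torch.no_grad():
    text_feat = clip_model.encode_text(clip.tokenize([prompt]).to(device))
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

for step in range(500):                           # far more steps are used in practice
    img = render(siren)
    img_feat = clip_model.encode_image((img - MEAN) / STD)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = -(img_feat * text_feat).sum()          # maximize cosine similarity to the prompt
    opt.zero_grad()
    loss.backward()
    opt.step()
```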



Jungle in the Tiger by Chrisantha Fernando

A neural L-system was evolved to produce images that satisfy the text description “Jungle in the Tiger” according to a dual encoder trained on the ALIGN dataset.

See the paper Generative Art Using Neural Visual Grammars and Dual Encoders
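A toy version of that evolutionary loop, assuming CLIP as a stand-in for the ALIGN-trained dual encoder and a trivial stroke-based "grammar" in place of the neural L-system, might look like the following; it is only a sketch of the selection procedure, not the paper's method.

```python
# Illustrative (1+1) evolution: mutate an image "genome", keep it if the dual
# encoder scores it closer to the target text. All components here are stand-ins.
import copy, random
import torch
import clip
from PIL import Image, ImageDraw

device = "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
text = clip.tokenize(["Jungle in the Tiger"]).to(device)
with torch.no_grad():
    text_feat = model.encode_text(text)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

def draw(genome, size=224):
    # Toy "grammar": a genome is a list of colored strokes (stand-in for L-system output).
    img = Image.new("RGB", (size, size), "black")
    d = ImageDraw.Draw(img)
    for x0, y0, x1, y1, r, g, b in genome:
        d.line([(x0 * size, y0 * size), (x1 * size, y1 * size)],
               fill=(int(r * 255), int(g * 255), int(b * 255)), width=3)
    return img

def score(genome):
    # Similarity between the rendered image and the text prompt under the dual encoder.
    img = preprocess(draw(genome)).unsqueeze(0).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(img)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    return (img_feat @ text_feat.T).item()

def mutate(genome):
    # Perturb one stroke's parameters, clamped to [0, 1].
    child = copy.deepcopy(genome)
    i = random.randrange(len(child))
    child[i] = tuple(min(1.0, max(0.0, v + random.gauss(0, 0.1))) for v in child[i])
    return child

best = [tuple(random.random() for _ in range(7)) for _ in range(50)]
best_score = score(best)
for gen in range(200):
    child = mutate(best)
    s = score(child)
    if s >= best_score:                 # keep the mutant only if it scores at least as well
        best, best_score = child, s
draw(best).save("jungle_in_the_tiger.png")
```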



Generating Furry Cars: Disentangling Object Shape and Appearance across Multiple Domains by Utkarsh Ojha

We developed a model that can generate images in such a way that different properties of the images (e.g. foreground shape, background) can be changed independently. Because of this ability, we can mix properties from different domains to create hybrid images that do not exist exclusively in any single domain; e.g. the shape of a car with a dog’s furry texture to create a furry car.



In The Bleak Midwinter by Glenn Marshall

AI-generated imagery created by interpreting text into pictures, featuring the AI-synthesised voice of Christopher Lee.

Beeple Generator + Image Synthesis by Glenn Marshall

“Michelangelo once stared at a block of stone for months - before he even BEGAN. 3 years later, he completed David. Today we can just click buttons and instantly create ‘art’. Beeple Generator - click for an instant garish creation of one of the world’s highest valued living artists. Text to Image Synthesis - type some words to have the AI turn this into a ‘masterpiece’. Beeple + AI = Art or Crap?”



Models for Environmental Literacy by Tivon Rice

Models for Environmental Literacy creatively and critically explores the challenges of describing a landscape, an ecosystem, or the specter of environmental collapse through human language. The project further explores how language and vision are impacted by the mediating agency of new technologies. How do we see, feel, imagine, and talk about the environment in this post-digital era, when there are indeed non-human/machine agents similarly trained to perceive “natural” spaces? This project explores these questions, as well as emerging relationships with drone/computer vision and A.I.



Permanent Visibility by Nica Ross

A virtual reality-based essay that celebrates the failure of surveillance and nonhuman vision when applied to the human form. The work is the result of capturing gender non-conforming bodies practicing Brazilian Jiu Jitsu in Carnegie Mellon’s Panoptic Dome - a sensor-free motion capture studio. As the name implies, the technology’s intention is to fully capture and render the “truth” of a body’s performance. Throughout the piece, Jeremy Bentham’s musings on the perfection of the Panopticon’s form are juxtaposed against the Dome’s raw data. We see the noisy shadows of bodies moving across the Dome’s walls and digital skeletons popping in and out of sight as their movements shift outside of a machine’s understanding, and we are left with the impression of their contact recorded in millions of point clouds. Bentham’s words describe an omnipotent yet focused power harnessed by surveillance, while queer bodies jump in and out of understanding in pursuit of joy rather than legibility.



Tunes from the Ai Frontiers: Week 36: Evigt Förlorad – Forever Lost (folk-rnn v2 + Sturm) by Bob L.T. Sturm

Each week I learn and record one folk tune generated by an Ai system and post a video and description of it. These “machine folk” tunes are problematic for a few reasons. First, their origin is not in /folk/, but in a lifeless algorithm operating with statistical procedures extracted from crowd-sourced datasets of music ephemera - the impoverished “dots” of the bones of tunes people play in contexts that are deeply personal and social. Second, these tunes come from nowhere – they are connected to neither places, nor musicians, nor even a story. Each is revealed to the world through a computational procedure involving on average one billion operations, and then through subsequent efforts on my part to discover it. However, these “machine folk” tunes are just like their “real” folk cousins: authored anonymously by a collective community. They are tunes that I feel /ought/ to be. And many of them are about my dog.



AI Helper 002 by Maksim Surguy

A combination of generative design methods and techniques, with colorization assisted by AI, on Hic Et Nunc.



Synthetic Still Life by Ivona Tautkute

The project is a juxtaposition of the artificially generated still life and organic life. The goal of the project is to create new artificial life forms from the stillness of common objects and make them react to sounds of nature as if these synthetic creations were part of natural life in some alternative world and environment.



Mezs by Rihards Vitols

Mezs is a speculative look into plausible new tree species that could cover the earth in the future to maintain natural balance. The new trees are the result of the evolution of existing ones and mutations between them. They have qualities from multiple trees from different environments that allow them to be more resilient to future environments and, in some cases, even to migrate away from an environment if the conditions for their survival become too harsh. The artwork engages with my growing interest in the application of AI to forests. It combines the collection of data on tree species from different environments (deserts, rainforests, tundra, etc.) with Machine Learning image-making techniques (StyleGAN2). The result is a fictional collection of trees that proposes and simulates a future alteration of the planet’s biodiversity, with results that are often unimaginable, abstract and absurd. I was interested in using AI not as a solutionist strategy for climate change but to comment on the effects of the Anthropocene on the environment, to speculate about the potential of future imaginations through the collaboration of human and non-human agents, to explore the poetics of using AI both visually and sonically, to create an immersive experience, and to disseminate and archive the work.

