New Atlantis

A networked multiuser virtual environment for sonic experimentation and telematic performance. New Atlantis emerged from a multi-year series of workshops and exchanges between French and American art schools under the name “TransAtLab” (a trans-Atlantic laboratory). The project takes its name from the “sound houses” of Francis Bacon’s utopian novella New Atlantis: rooms in his imagined society filled with marvelous devices for storing and manipulating sound. It is an open-world project in which any user can create and share virtual spaces. It draws inspiration from virtual environments like Second Life, but while most similar environments focus primarily on visual representation, New Atlantis emphasizes audio capabilities such as virtual acoustics. We see it as a multipurpose platform: for educational use, for networked performance, and ultimately as a public virtual space. It is presented both as an interactive installation and as a performance, with participants connecting remotely to perform for a live audience.

Networked performance for the Ear Taxi Festival in Chicago, showing the “Chicago space”, with pieces created at SAIC.  This video shows the RPI node during the performance, using the CRAIVE Lab 360 degree screen and multichannel audio. 

A Grove in Winter with Birds

A meditative space I created in New Atlantis: a small grove of trees in a valley in winter, home to a flock of birds and their nests.  The sound of each bird is made from heavily processed acoustic guitar samples, including the scraping of the strings, the percussive sound of the guitar body, and manipulated overtones.  This video shows the RPI node during a performance at Le Cube in Paris, using the CAVE screen at the Emergent Reality Lab at RPI.


Networked performance for the Ear Taxi Festival in Chicago, showing the “Troy space”, including pieces created by students in a workshop with Rob Hamilton’s “Designing Musical Games” class.
This video shows the RPI node during the performance, using the CRAIVE Lab 360 degree screen and multichannel audio.



Peter Sinclair, Locus Sonus, École supérieure d’art d’Aix-en-Provence
Roland Cahen, ENSCI-Les Ateliers, the École nationale supérieure de création industrielle, Paris
Peter Gena, The School of the Art Institute of Chicago
Ben Chang, Rensselaer Polytechnic Institute
Rob Hamilton, Rensselaer Polytechnic Institute

Software development

Jonathan Tanant, lead developer

PhD Students

Daan de Lange (ESA-Aix)
Théo Paolo (ESA-Aix)
Alexandre Amiel (ESA-Aix)

Previous Participants:

Robb Drinkwater
Jerome Joy
Mark Anderson
Ricardo Garcia
Gonzague Defos de Rau
Margarita Benitez
Anne Laforet
Jerome Abel
Eddie Breitweiser
Sébastien Vacherand

Special Treatment

Virtual reality environment, 2003-2005; partially re-coded, 2010; re-made, 2014-

Applied Interactives in collaboration with (art)n Laboratory

Special Treatment is an immersive and interactive Virtual Reality installation examining the strength and persistence of memory. An ominous journey by train car deposits viewers in a sparsely populated camp pieced together from plans, photographs and other artifacts from Auschwitz II/Birkenau, Poland. As visitors explore the camp and its architectural structures, conversations and ephemera of the past fade in and out of perception – at times almost tangible, at other times mere allusions. These structures and stories are not intended to be strictly historical or documentary. Each element is the foundation for a folding together of past and present, where the sounds and images of old memories blend with memories created by each new visitor.

Special Treatment is a project created by Applied Interactives, an artist-based non-profit organization that was co-founded by Todd Margolis, Geoffrey A. Baum, Keith Miller and Tim Portlock in 2001, in collaboration with (art)n Laboratory and with support from the Electronic Visualization Laboratory (EVL) at UIC, Panstwowe Muzeum Auschwitz-Birkenau w Oswiecimiu and VRCO.  It was first exhibited in 2004.

I became involved in the project in 2010 while curating exhibitions of virtual reality artworks in Boston and San Jose. One of the perennial challenges in exhibiting and archiving art-and-technology work, particularly in an area like VR, is the pace of obsolescence: when a work depends on software or hardware that is no longer available, it becomes increasingly difficult to view.  One strategy is preservation, in which all of the original technology is kept intact.  When that isn’t possible, the work can be continually upgraded and patched, as with any piece of software; this too could be argued as a form of preservation, in the sense of repairs to the material while keeping the overall work intact.  A third strategy is emulation, which is generally impractical for VR, and the final option is remediation or porting, in which the work is recreated using new technologies.

In five years the underlying hardware and software had changed enough that I needed to reprogram some of the core components to be able to exhibit it, changing my role into something between curating and preservation.  Soon afterward the opportunity arose to exhibit it at the Arts for a Better World fair in Miami Beach, one of the few (possibly the only) examples of a CAVE-type immersive virtual reality installation at a major international contemporary art fair.  For this exhibition I joined the artist team as a co-designer, programmer, and artist, leading to our current project to re-create the work in Unity to support both CAVEs and consumer VR headsets such as the Oculus Rift and HTC Vive.

For more information on this project:


(In)Security Camera

Interactive installation with camera, computer, robotics, 2003

Ben Chang, Silvia Ruzanka, Dmitry Strakovsky

The (In)Security Camera is a robotic surveillance camera with advanced computer-vision software that can track, zoom, and follow subjects walking through its field of view. Deploying sophisticated artificial intelligence algorithms in use today by the U.S. military and Homeland Security forces, it can assess threat levels in real time and respond accordingly.

However, the camera is, in fact, a little insecure. Easily startled by sudden movements, it is shy around strangers and tends to avoid direct eye contact. This reversal of the relationship between the surveillance system and its subjects gives the machine an element of human personality and fallibility that is by turns endearing, tragic, and slightly disturbing.


The Emergent Reality Lab


The Emergent Reality Lab is a CAVE-type VR lab at the Rensselaer Tech Park, designed for virtual reality and mixed reality teaching and research.  It features three walls with passive 3D rear projection, a VICON tracking system, and an 8-channel surround sound system.

For more information about the lab please visit

The Lost Manuscript 2 : The Summer Palace Cipher

Mixed-reality environment and learning game (unreleased)

Year: 2013

Lee Sheldon: PI, Lead Writer

Ben Chang: Co-PI, Lead Developer

Mei Si: Co-PI, AI team lead

Helen Zhou: Instructional design, translation, integration of narrative and pedagogy; “Mrs. Ling.”

Jianling Yue: Instructional design, translation, language and culture reference

Silvia Ruzanka: Animation

Shawn Lawson: Animation

Marc Destefano: VR software architecture

Graduate students:

Anton Hand: Environment modeling, art team lead

Michael Garber Barron: Programming

Undergraduate students: Nick Cesare, Gabriel Violette, Tom Weithers, Kevin Zheng, Reginald Franklin III, Stephen Jiang, Kevin Fung, Kai Van Drunen, Conor Sjogren, Victor Cortes, Kevin Chang, David Strohl, Jessica Falk, Doug Miller, Randy Sabella

The Lost Manuscript is a game designed to incorporate immersive environments, alternate reality strategies, and extended narrative into a college-level Chinese language course.  The game was written by Lee Sheldon using his “multiplayer classroom” method, in which all aspects of a course are integrated into a game.  In the design for The Lost Manuscript, students travel to Beijing for the semester, with each class taking them to different locations in search of a mysterious lost book. These episodes take place in the CAVE, combining VR with real props to create a mixed-reality environment.  Students use their language skills to navigate the city, find clues, and interact with characters in the game, some of whom have their own ulterior motives.

There are three versions of the game using different formats.  The Lost Manuscript (2011) was first run as a live-action roleplaying game across half a semester of a class.  The Lost Manuscript 2: The Summer Palace Cipher (2013) extends the story to a full semester and is built around virtual reality environments rather than a cast of live actors.  Finally, The Lost Manuscript 3 (2015) is a simplified version designed as a standalone PC game. 


The most complete realization of the game and its incorporation of immersive environments and virtual characters is The Lost Manuscript 2.  Though the full 15 episodes are as yet unfinished, in 2013 we completed and demonstrated a full vertical slice of the game using the CAVE at RPI’s Emergent Reality Lab.

In this episode, students visit a teahouse with “Mrs. Ling,” a character who reappears throughout the semester with both helpful advice and secrets of her own.  Mrs. Ling teaches them the gongfu tea ceremony, a ritualized way of preparing and serving tea.  The students ask and answer questions in Chinese, and follow her demonstration by performing the tea ceremony, using the CAVE’s Wand interface to make tea in VR. 

The class is seated at restaurant tables in the middle of the CAVE, creating a mixed-reality setting for the episode.  Students use the wand to select dialog choices; depending on learner level, English subtitles can be enabled for Mrs. Ling’s dialog, along with pinyin or English translations of the player’s dialog choices.  Students also use the wand to perform the steps of the tea ceremony: filling the teapot, pouring water over the teapot, brewing the tea, discarding the first pour, and pouring and presenting the finished tea.
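An ordered sequence of steps like the tea ceremony is naturally scripted as a small state machine. The sketch below is a minimal illustration in Python; the step names are taken from the description above, and the real game’s implementation is not shown here.

```python
# Minimal sketch of an ordered-step state machine, as one might use to
# script the tea ceremony interaction. Step names follow the description
# above; this is illustrative, not the game's actual code.

TEA_STEPS = [
    "fill_teapot",
    "pour_water_over_teapot",
    "brew_tea",
    "discard_first_pour",
    "pour_and_present",
]

class CeremonyStateMachine:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0  # index of the next expected step

    def perform(self, step):
        """Advance only if the player performs the expected next step."""
        if self.index < len(self.steps) and step == self.steps[self.index]:
            self.index += 1
            return True
        return False  # wrong step: prompt the student to try again

    @property
    def complete(self):
        return self.index == len(self.steps)

sm = CeremonyStateMachine(TEA_STEPS)
sm.perform("fill_teapot")  # accepted
sm.perform("brew_tea")     # rejected: water hasn't been poured yet
```

A structure like this also makes it easy to attach dialog prompts or hints to each expected step.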






(There’s an App for That Shirt!)

Augmented reality performance

Katherine Behar, Ben Chang, Silvia Ruzanka.


The Grind

artgame, 2011

Ben Chang, Silvia Ruzanka, Rodger Ruzanka

An artgame about daily office work and Windows 95.  More work keeps coming – try to keep your Desktop clean.  Created for the 4th Moscow Biennale’s Interior-ity exhibition, a site-specific project curated by Lana Zaytseva and Dmitry (Dima) Strakovsky examining interior mental, artistic, and social spaces within the architectural space of an office.

Sounder and Relay


Sounder and Relay (2010), with Silvia Ruzanka

Computer, two channel video projection, custom software, electronics, telegraph equipment

Sounder and Relay is a meditation on online romance in the age of the telegraph.  Two video projections show computer-generated figures in Victorian interiors.  As the two tap messages on their telegraphs, the signals are transmitted across the room through physical antique telegraph equipment.  The text is composed of dialogue excerpts from the novel Wired Love: A Romance of Dots and Dashes.  Written in 1879 by Ella Cheever Thayer, a former telegraph operator, the story prefigures the world of the Internet, avatars, online dating, and the blurring boundaries between real and electronically generated worlds.
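The translation the installation performs, turning text into telegraphic dots and dashes, can be sketched in a few lines. The example below uses International Morse for simplicity (19th-century landline telegraphy actually used the American Morse variant), and is purely illustrative; the installation runs its own custom software and electronics.

```python
# Illustrative sketch: encoding text as Morse "dots and dashes," the
# kind of translation the installation's telegraph signals perform.
# Uses International Morse; the installation's software is not shown.

MORSE = {
    "A": ".-",   "B": "-...", "C": "-.-.", "D": "-..",  "E": ".",
    "F": "..-.", "G": "--.",  "H": "....", "I": "..",   "J": ".---",
    "K": "-.-",  "L": ".-..", "M": "--",   "N": "-.",   "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.",  "S": "...",  "T": "-",
    "U": "..-",  "V": "...-", "W": ".--",  "X": "-..-", "Y": "-.--",
    "Z": "--..",
}

def to_morse(text):
    """Encode letters as Morse; words are separated by ' / '."""
    words = text.upper().split()
    return " / ".join(
        " ".join(MORSE[ch] for ch in word if ch in MORSE) for word in words
    )

print(to_morse("wired love"))  # .-- .. .-. . -.. / .-.. --- ...- .
```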


Philosopher Deathmatch

“Welcome to the Arena, where high-ranking philosophers are transformed into spineless mush. Abandoning every ounce of common sense and any trace of doubt, you lunge onto a stage of harrowing landscapes and veiled abysses. Your new environment rejects you with ontological uncertainties and existential angst as legions of foes surround you, testing the gut reaction that brought you here in the first place. Your new mantra: Out-argue or be finished.”

Philosopher DeathMatch is a complete game based on the Quake III Arena engine, where players argue to the death against the great thinkers of history. The finely distilled aggression and conflict of the original game remains, but is recast and often obstructed by the discursive force of reprogrammed AI bots with amplified chat features. New game maps from the Agora to the University of Frankfurt provide the backdrop for battle between philosophers from Aristotle to Adorno in a furious bloodbath of ideas. Traditional Quake items like the Gauntlet, Railgun, and Missile Launcher are augmented with brutal new weapons such as Occam’s Razor, the Dialectrocution Blaster, and of course the unmatchable force of the Western Cannon.

This game is intended to be presented in two ways. In Single Player Mode, individual gamers fight an army of philosopher bots, as in the original Quake. Philosopher DeathMatch is also designed for a public performance – a deathmatch between living philosophers. This tournament is a combination of a panel discussion and a LAN party. On stage, a panel of six philosophers sits facing the audience with microphones, glasses of water, and souped-up gamer PCs. Each round features a topic – Dasein, ethics, the nature of consciousness, etc. – which the distinguished panel must debate while fragging each other. All the action is displayed on a projection screen overhead.


System Requirements

Philosopher DeathMatch will run on any PC system qualified for Quake III Arena. The current development build is available for Linux, but the final release will also be available for Windows and Mac OS X.

To play the demo build, download the tarball below. After uncompressing, launch “pdm”.

The demo was tested on Ubuntu 8.04 with a GeForce 6200 graphics card.

Download (Linux)

philosophy_deathmatch_release.tar.gz (37 MB, TAR GZIP)




Earthquake Map Interactive Exhibit

Client: The Field Museum of Natural History, Chicago

Year: 2008

Design and Programming: Ben Chang and Silvia Ruzanka

For the exhibition Nature Unleashed: Inside Natural Disasters, the Field Museum of Natural History wanted an interactive exhibit to put visitors in touch with the data that scientists use to study earthquakes around the world. We developed an interactive map that shows realtime earthquake activity as well as historical patterns. The touchscreen display uses live data from international seismic sensor networks and a database of earthquakes going back over fifty years.  An interactive 3D view mode shows the depth of earthquakes, making the structure of underground seismic events visible.
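The 3D depth view comes down to a coordinate mapping: each quake record is placed on the map plane and pushed below it in proportion to its depth. The sketch below uses a simple equirectangular projection; the field names and scale factors are hypothetical, not taken from the exhibit’s actual code.

```python
# Hypothetical sketch of the 3D view's coordinate mapping: an
# equirectangular projection places each quake on the map plane, and
# depth extends below it. Constants are illustrative only.

def quake_to_xyz(lat, lon, depth_km, map_width=1024.0, map_height=512.0,
                 depth_scale=0.5):
    """Map a quake record to (x, y, z) scene coordinates.

    x, y: equirectangular position on the map plane.
    z:    negative, so deeper quakes sit farther below the surface.
    """
    x = (lon + 180.0) / 360.0 * map_width
    y = (90.0 - lat) / 180.0 * map_height
    z = -depth_km * depth_scale
    return (x, y, z)

# A deep-focus quake under the Fiji region plots far below the map plane:
x, y, z = quake_to_xyz(lat=-17.8, lon=-178.1, depth_km=550.0)
```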


Exhibition Tour Schedule 

May 23 – Jan. 4, 2009 The Field Museum, Chicago, Illinois / 198,180 visitors
February 13 – May 3 Denver Museum of Nature and Science, Denver, Colorado / 186,000 visitors
May 23 – Jan. 3, 2010 Liberty Science Center, Jersey City, New Jersey / 283,581 visitors
February 6 – May 2 Fernbank Museum of Natural History, Atlanta, Georgia / 89,000 visitors
May 22 – September 12 The Durham Museum, Omaha, Nebraska / 52,322 visitors
February 11 – May 1 Ontario Science Center, Toronto, Ontario, Canada / 235,505 visitors
May 28 – September 5 Connecticut Science Center, Hartford, Connecticut / 74,759 visitors
October 7 – January 8, 2012 Science Museum of Minnesota, St. Paul, Minnesota / 120,000 visitors
January 28 – May 5 Washington Pavilion, Sioux Falls, South Dakota / 32,789 visitors
May 26 – September 3 Oregon Museum of Science & Industry, Portland, Oregon / 175,921 visitors
September 28 – May 5, 2013 Canadian Museum of Nature, Ottawa, Canada / 58,551 visitors
May 25 – December 8 Natural History Museum of Utah, Salt Lake City, Utah / 113,828 visitors
December 24 – May 4, 2014 TELUS Spark, Calgary, Alberta, Canada / 142,266 visitors
May 24 – September 14 Houston Museum of Natural Science, Houston, Texas / 22,642 visitors
Nov. 16 – August 10, 2015 American Museum of Natural History, New York
October 2 – Jan. 10, 2016 Museum of Science and History, Jacksonville, Florida / 28,264 Visitors
February 6 – May 1 Pink Palace/Memphis Museums, Memphis, Tennessee
July 1 – February 20, 2017 THEMUSEUM, Kitchener, Ontario, Canada
March 10 – May 29 Indiana State Museum, Indianapolis, Indiana
June 24 – September 4 Adventure Science Center, Nashville, Tennessee







Becoming

Ben Chang, Silvia Ruzanka
two-channel realtime video installation, monitors, computer, custom software

Becoming is a two-channel computer-driven video installation, in which two computer animated figures live in a minimally-furnished virtual domestic space. They stand and watch the viewer, yawn, sit on the sofa, talk on their cell phones, each on their own LCD screen. Alongside this simulation, another process continually manipulates their geometric mesh data, exchanging the vertex and polygon data between the two figures. Over time this causes each figure to take on attributes of the other, though distorted by the structure of their digital information. The process has none of the smoothness of a “morphing” effect, instead rupturing the surface of the figures and turning them into fragmented hybrids – two figures each becoming something new from the other’s presence. Becoming is a durational piece – the process is slow and continuous, lasting weeks or months.
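The exchange process can be pictured with a small sketch: two vertex lists trade randomly chosen entries over time, so each figure gradually absorbs the other’s geometry. This is only an illustration of the idea; the installation’s actual software operates on full mesh data with polygon connectivity.

```python
import random

# Sketch of the exchange process: two vertex lists trade randomly chosen
# entries each step, so each figure gradually takes on the other's
# geometry. Purely illustrative; not the installation's actual code.

def exchange_step(verts_a, verts_b, rate=0.01, rng=random):
    """Swap a fraction of vertex positions between two figures in place."""
    n = min(len(verts_a), len(verts_b))
    for _ in range(max(1, int(n * rate))):
        i = rng.randrange(n)
        verts_a[i], verts_b[i] = verts_b[i], verts_a[i]

# Toy example: two tiny "meshes" as lists of (x, y, z) tuples.
a = [(float(i), 0.0, 0.0) for i in range(100)]
b = [(float(i), 1.0, 0.0) for i in range(100)]
for _ in range(50):              # run the slow, continuous process...
    exchange_step(a, b, rate=0.05)
# ...and each list now contains a mix of both figures' vertices.
```

Because whole vertices are swapped rather than interpolated, the result is the ruptured, fragmented hybrid described above rather than a smooth morph.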

Interactive Timeline of Do-It-Yourself Culture

Client: A+D Gallery, Chicago

Year: 2007

Design and Programming: Ben Chang and Silvia Ruzanka

Description: Website, Flash, PHP, MySQL

Launch Website : DIY Timeline

An interactive timeline for the exhibition Pass It On! Connecting Contemporary Do-It-Yourself Culture at the A+D Gallery, Columbia College, Chicago, curated by Anne Dorothee Boehme, Lindsay Bosch, and Kevin Henry.  This exhibition brought together a wide range of activities and ideas in contemporary culture around the concept of “DIY,” from music and art to publishing and political activism. The DIY TimeLine traces the interconnected histories of DIY culture, from the Women’s Suffrage movement to punk rock to Open Source Software. 

The “timeline” originated as a central feature of the exhibition, as a table running the length of the gallery with index cards for significant events and people.  In addition to the database collected by the curators, visitors to the exhibition could add their own cards, helping to fill in these different histories.  The website draws from the same database, to help visualize connections and parallels.





Synthecology

Tele-immersive VR environment, 2005, Applied Interactives

Synthecology is a multi-user virtual reality environment for creating distributed, collaborative soundscapes. Participants in Synthecology collect, plant, grow, and cultivate a garden of strange plants from seed pods that come floating over the ocean. Each plant brings with it sounds ranging from field recordings to musical instruments to spoken-word stories to abstract sonic textures, creating a constantly evolving spatialized soundscape composition.

While many virtual environments are primarily visual, Synthecology is just as much a sonic environment as a visual one. As a multi-user participatory artwork, Synthecology also uses the activities of sharing, combining, and playing sounds as an immediate and familiar situation for social interaction.

Synthecology was first exhibited at the 2005 Wired NextFest at Chicago’s Navy Pier, with simultaneous live connections to immersive displays at the University of Illinois and others in Los Angeles, San Diego, Indiana, and Buffalo. Synthecology was produced through a collaboration between the School of the Art Institute of Chicago Immersive Environments Lab and the non-profit group Applied Interactives, with support from the University of Illinois at Chicago Electronic Visualization Lab and (art)n Studio.


Synthecology is an immersive virtual environment, displayed on a stereoscopic projection screen with magnetic hand and eye trackers for gestural interaction. Viewers wear 3D glasses, while one user interacts using the Wand. The virtual soundscape is generated by a 4-speaker surround sound system.

Participants enter the environment of Synthecology through a field of thick grass by the water’s edge to find a deserted beach with only the sound of the surf and wind. Faint objects come drifting over the water towards the land, translucent seedpods carrying sounds.

As the user reaches toward a floating seed, it emits a vibrating sound that grows more excited as the hand approaches and touches it. Pressing a button on the wand ‘grabs’ the seed.

Releasing the wand button drops the seed to the ground, where it plants itself in the ground and begins growing. As the plant grows, it begins emitting its sound – musical notes, conversations, stories, abstract sonic textures, sound effects, field recordings, all manner of sounds.  As more seeds are planted, the participants in Synthecology begin constructing an evolving soundscape. When touched, each object emits its sound again, such that they can be played almost like a musical instrument. Participants can pick up the plants and move them to create new sonic arrangements of the space.

Research and Technical Development

Synthecology’s primary foundation is the CAVE[1], but the project presented a number of new technical challenges. Some of this work was oriented specifically toward this artwork, though much of it was done with the additional goal of advancing the expressive capabilities of immersive virtual environments for artists generally. Synthecology showcased the ATS VRLab’s recent research in spatialized audio for immersive environments, as well as powerful and reusable components ranging from particle systems to web interfaces.


The core goal of Synthecology was to create a multi-user immersive virtual environment for creating collaborative, evolving spatialized soundscapes. Beginning with the CAVE as display system and the Ygdrasil programming language, we identified several key technologies that needed to be developed or integrated into Synthecology:

  1. Spatialized audio and real-time synthesis
    Synthecology is essentially an environment for building virtual soundscapes, in which the illusion of spatial location of sounds is crucial. For a large public installation, a multiple-speaker array approach offers advantages over binaural headphone playback. Additional requirements included both sample playback and realtime synthesis, reasonable performance (approx 30 to 50 spatialized sound sources at any one time), usability on reasonably common off-the-shelf hardware, easy configuration for variable size speaker arrays, support for real-time DSP (such as reverb), integration into the Ygdrasil and CAVE system, and reusability.
    While the research on multichannel spatialization is substantial, a review of existing solutions found nothing that met our needs. The solution, building on ongoing research at the ATS VRLab in virtual audio, was to develop an extensible sound server for the CAVE using SuperCollider, controlled via Open Sound Control, with a simple node interface in Ygdrasil and a testing GUI in Max/MSP.
  2. Database integration
    Many virtual environments use sound by loading a set of defined sounds at startup, or by defining relationships between sounds and objects or events. In Synthecology, the set of sounds is not predefined but is constantly evolving and growing as new sounds are contributed over the web. Participants sending sounds over the web can provide additional information such as a category or visual association for the sound when it appears in the virtual environment; we needed a way to get all this information from the web into the CAVE. We used a MySQL database to store the sounds and all their metadata, and wrote Ygdrasil nodes for reading from the database, while using the existing CAVERN framework for distributing information between networked CAVEs.
  3. Database and soundserver management layer
    To aid in integrating Ygdrasil, SuperCollider, and MySQL, we developed an additional software layer in Python. This centralized much of the most complex control and logic in the system, and improved performance by removing tasks such as time-costly SQL queries from both the SuperCollider and Ygdrasil code.
  4. Ygdrasil Dynamics
    There were several important aspects of Synthecology that were difficult to implement using the existing capabilities of the Ygdrasil language. Wind was particularly important. Since the primary activities involved planting, growing, arranging, handling, and sharing objects, it was crucial for the objects to exhibit characteristics such as weight and gravity. Additionally, the objects had to change their behavior as they grew through different states. To solve these challenges, we developed a rudimentary physics simulation system, a general-purpose particle system, and a new multiplexor node that can be used to easily build complex state machines in Ygdrasil.
  5. Flash front end
    While Synthecology is centered on the idea of group interaction and collaboration, the specialized hardware of the immersive display itself limits the number of participants. As a way of addressing this issue, we created an additional way to enter the Synthecology world using a simplified web interface in Flash. The Flash interface and the primary CAVE world were connected using the MySQL database, raising the intriguing possibility of creating shared virtual worlds with interfaces ranging from the CAVE to cellphones.
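As a rough illustration of the control path in (1), an OSC message is just a 4-byte-aligned packet of an address pattern, type tags, and arguments. The sketch below hand-encodes one with the Python standard library and sends it over UDP; the address “/plant/play” and its arguments are hypothetical examples, not the actual Ygdrasil-to-SuperCollider protocol.

```python
import socket
import struct

# Hand-encode a minimal OSC message (address + type tags + arguments),
# the wire format used to drive the SuperCollider sound server. The
# address "/plant/play" and its arguments are hypothetical examples.

def _pad_str(s):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode("ascii")
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, *args):
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)  # big-endian float32
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)  # big-endian int32
        else:
            tags += "s"
            payload += _pad_str(str(a))
    return _pad_str(address) + _pad_str(tags) + payload

# e.g. trigger sound 7 at amplitude 0.5 on a server listening on UDP
# 57120 (sclang's default port; the receiving port is configurable):
msg = osc_message("/plant/play", 7, 0.5)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 57120))
```

In practice a library such as python-osc handles this encoding, but the format is simple enough that hand-rolling it shows why OSC was a good fit for gluing Ygdrasil, SuperCollider, and the testing GUI together.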


  1. CAVE is used here as shorthand for the larger class of projection-based immersive display systems. The public display system for Synthecology was a single-wall passive-stereo rear-screen projection system with eye and hand tracking, often called a C-Wall or GeoWall. Some of the participants in Synthecology connected using standard desktop PCs with just a DSL internet connection – but the same CAVE software is used on all these systems.


  • Ben Chang: Ygdrasil code development and scripting, Yg->SuperCollider coding
  • Geoff Baum: Project management, Python sound server layer coding
  • Dan St. Clair: SuperCollider sound engine coding
  • Robb Drinkwater: SuperCollider synthesis coding
  • Helen-Nicole Kostis: 3D modeling, software installer management, release packaging, testing, remote CAVE participant coordination.
  • Mark Baldridge: 3D modeling
  • Hyunjoo Oh: 3D modeling
  • Jon Greene: Flash front end

Remote CAVE Participants

  • Todd Margolis, Technical Director, Center for Research in Computing and the Arts, University of California, San Diego
  • Marientina Gotsis, Interactive Media Division of the School of Cinema-Television, University of Southern California

NextFest Volunteers

Dubi Kauffman, Tina Shah, Daria Tsoupikova, Kapil Arora, Javier Girado, Josephine Lipuma, Melissa Golter, Javid Alimohideen, Allan Spale, Geoff Holmes, Jennings Hanna, Victoria Scott, Flo McGarrell.

Exhibition Studies Viewbook CD-ROMs

Client: SAIC Exhibition Studies Program

Year: 2002-2005

Description: CD-ROM

The ES Book Series was an annual publication of the SAIC Exhibition Studies program, highlighting exhibitions and events throughout the year. The CD-ROMs included additional multimedia documentation of these projects. Each issue included an experimental interface, designed to express ideas about the exhibition studies program through the activity of interaction.

ES Book 5 (2003)

screengrabs from the main menu page, a timeline with animations of the gallery at 1926 North Halsted throughout the year.

ES Book 7 (2005)

screengrabs from the interactive stop-motion animation on the menu page, and internal pages:




SPINLOCK

Interactive installation / VR installation, with sound by Rodger Ruzanka.

SPINLOCK is an interactive, immersive virtual reality installation that combines abstract form, physics-based animation, and generative sound processes. A virtual object unfolds and transforms into complex permutations in response to the user’s actions. Every motion or button press triggers new orbits, sounds, and forms. Left alone, the object slowly returns to some type of equilibrium; but attempts to control it are often complicated by its own reckless momentum.

The myriad components are meant to evoke mechanical and organic imagery; some are modeled after machine parts, while others are extruded from abstract hand-drawn shapes. Part interactive sculpture and part musical instrument, SPINLOCK is a hypnotic experiment in order and chaos.

The interface to this whirling, exploding virtual machine is a wireless videogame controller with two joysticks and numerous buttons. Every tap of the buttons or nudge of the joysticks can produce subtle or radical changes according to a hidden logic.


technical details:

SPINLOCK is a CAVE-based application, and can be run on a variety of hardware, from single-wall projection Linux systems to full CAVEs. It was written using the Ygdrasil scripting language, with custom code for 3D model loaders, Maya keyframe animations, physics modeling, joystick input, and networked sound control.

The sound is algorithmically generated by a second computer, with communication between the two using Open Sound Control.
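The slow return to equilibrium and the “reckless momentum” described above are characteristic of damped spring dynamics. The sketch below is a generic illustration of that idea, with arbitrary constants; it is not SPINLOCK’s actual physics code.

```python
# Illustrative damped-spring update of the kind that produces SPINLOCK's
# behavior: disturbances set parts swinging, and left alone the system
# drifts back toward equilibrium. Constants are arbitrary examples.

def spring_step(pos, vel, rest=0.0, stiffness=8.0, damping=1.2,
                dt=1.0 / 60.0):
    """One semi-implicit Euler step pulling pos toward the rest position."""
    accel = -stiffness * (pos - rest) - damping * vel
    vel += accel * dt
    pos += vel * dt
    return pos, vel

pos, vel = 1.0, 0.0   # knocked away from equilibrium by a button press
for _ in range(600):  # ten simulated seconds at 60 Hz
    pos, vel = spring_step(pos, vel)
# pos has now decayed back near the rest position of 0.0
```

With low damping the object overshoots and oscillates before settling, which is exactly the tension between control and momentum the piece plays with.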

Dots and Dashes


Ben Chang, Silvia Ruzanka

Dots and Dashes is an interactive narrative in virtual reality. It tells the story of a romance in a different kind of virtual world, the world of the telegraph. Listening in on the wire, the viewer follows Nattie, a telegraph operator, and her online conversations with another operator, a man known only as “C”. Their story emerges in bits and pieces, fragments of communication in dots and dashes in between the primary business of the telegraph wires, the business of linking financial institutions and burgeoning industries. As Nattie discovers that “C” may not be the man he appeared to be on the wire, the line between the real world and the electric world of the telegraphs begins to blur.

This piece is based on the novel Wired Love: A Romance of Dots and Dashes, by Ella Cheever Thayer, published in 1879. The uncanny similarities between this story and the world of the Internet, online dating, and today’s virtual environments provide the springboard for an exploration of the anxieties and dreams that link two time periods over a hundred years apart: the construction of identities in an electronically mediated environment, the shifting boundaries between the natural and machine worlds, and the spiritual dimension of science and technology. The telegraph wires become a labyrinth, passing through magnetism and hypnotism, spirit photography, clockwork automata, hysteria, illusionists, and the machine age.


The Consistency of Shadows

Client: Anne Dorothee Boehme and the Betty Rymer Gallery

Year: 2003

Description: CD-ROM, with seven offset printed booklets, housed in a custom-designed, vacuum-formed acrylic box

The Consistency of Shadows : Exhibition Catalogs as Autonomous Works of Art was an exhibition that investigated relationships between catalogs and the exhibitions they document and represent. Featuring approximately 120 exhibition catalogs from the 1960s to the present, the exhibition highlighted ways that these documents depart from their usual function as mere memory of an event and instead function as works of art.  It was the first significant show of its kind focusing on catalogs created by artists for their own exhibitions.  The catalog for the exhibition is also designed as a work of art, including custom package design by Kevin Henry and an experimental, interactive CD-ROM by Ben Chang. The exhibition was curated by Anne Dorothee Boehme, special collections librarian at the Joan Flasch Artists’ Book Collection.

The CD-ROM’s interface is designed with the idea of exhibition catalogs as both memories and as autonomous objects. It contains images of the whole collection of catalogs in the exhibition, including covers and images from inside pages, as well as a video interview with artist Christian Boltanski. Rather than presenting the material through traditional interface designs, alphabetical listings, or other organizing schemes, each item appears as one of a cloud of shadows drifting in circular currents. Pulling a shadow to the surface reveals the images, text, and other materials, making the interface itself an act of reaching toward something that is hard to grasp and has a life of its own.

The catalog for Consistency of Shadows received a 2004 IDEA Award (Industrial Design Excellence Award) from the Industrial Design Society of America, and a 2003 Good Design Award from the Chicago Athenaeum.

For more information about the exhibition, or to order a copy of the catalog, please visit the website for SAIC’s Sullivan Galleries Publications.




The Jackals


The Jackals

Ben Chang, Mary Lucking, Silvia Ruzanka, Andrew Sempere, Dmitry Strakovsky, Rodger Ruzanka, and Chris Sorg.

In the grey space between utopia and dystopia we who are jackals live on the edges. Opportunistic omnivores are unavoidably circling your city! There have always been jackals, there always will be jackals. We are the ones who put your tech to use, the ones who recycle the glut and make it useful in aesthetic glory. The technology is neither servant nor master, but merely our raw material, to gnaw, rework, shape and build.


The Jackals are a nomadic band of technology scavengers who repurpose obsolete and discarded technologies to create enigmatic, absurd, and chaotic future inventions. A project of the TangentLab Collective, the Jackals are probably best described as a kind of guerilla hacker street theater. During an event, the Jackals invade and set up camp anywhere from a sidewalk to a museum to a convention center, bringing soldering irons and software. Scavenging obsolete technologies from public donations, thrift stores, and back alleys, the Jackals then set about repurposing this material for their own ends. All of this work is done in public, making the process as much the focus of the project as the finished objects. The process is both visible and open, designed to engage the public in the act of making.

Common themes in Jackal projects include:

  • Resisting the cycle of obsolescence
  • Demystifying technology
  • Exploring collective models as alternatives to the standard organizational logic of the technology industry
  • Re-negotiating the relationships between consumers and their electronic commodities

The Jackals are a project of TangentLab, a small group of artists who bring collective and collaborative strategies to their technological interventions.

The Jackals at VERSION>02, Museum of Contemporary Art, Chicago

The Jackals invaded the Version>02 festival at the Chicago Museum of Contemporary Art, hauling in piles of video monitors, computers, slide projectors, soldering irons, and broken electronic toys despite protests from the festival organizers. The Jackals inhabited their makeshift camp through the duration of the festival, slowly transforming the electronic detritus into strange new creations. These included the JackalVision wireless video suits, Killer the Robot Dog, and the Jackal PDA (Personal Data Annoyance). The Jackals took turns building while others presented improvisational circuit-bending performances or slide lectures with found slides.

Most art and technology or new media festivals include a section for web art, presented in browsers on individual PC stations. Festival-goers universally take this as an opportunity to check their email, in effect re-appropriating the web browser back from its new status as art-site. Towards the end of the festival, two of the Jackals made their way onto an aptly-named panel entitled “Creative Technology as Weaponry” where it was revealed that they had been running a Carnivore client on the museum’s LAN, streaming the data packets of all email-checkers onto their pile of ancient monochrome monitors in the front lobby.

The Jackals at Summer Solstice, Museum of Contemporary Art

The Jackals returned to the Chicago Museum of Contemporary Art for Summer Solstice, the Museum’s annual midsummer 24-hour art party. In the Jackal GamePod, classic videogame consoles were wired together to produce luminous video abstractions. The GamePod apparatus begins with high-paced, aggressive live game imagery, and extracts, layers, magnifies, and saturates minute details of the image, transforming it into something lush and contemplative.



The Jackals at SIGGRAPH 2002, San Antonio

SIGGRAPH is the premier annual worldwide conference on computer graphics, featuring the latest research and products in 3D animation, visualization, rendering, entertainment and simulation, motion capture, and visual effects. During the week-long conference, the Jackals set up camp at the convention hall, bringing their scavenged and hacked creations into direct contrast with the gleaming, cutting-edge technologies inside.

The San Antonio convention center was a particularly interesting site, with the downtown’s public riverwalk cutting through its center. This location gave the Jackals three distinct audiences – the electronic art crowd at the SIGGRAPH Art Gallery, engineers and scientists from the main SIGGRAPH conference, and the local and tourist population of San Antonio itself.

The PDA (Personal Data Annoyance)

The Jackal PDA is a reconstructed version of the now-ubiquitous Personal Digital Assistant (e.g. Palm, Treo, Blackberry, iPaq, etc…), built from an early laptop computer and a briefcase. Bulky and totally devoid of any input device, the Jackal PDA loudly recites a continuous litany of appointments, stock quotes, plane flight information, reminders, memos, emails, schedule changes, weather reports, traffic updates, and sports scores.

As we design information and communication technologies to become increasingly ubiquitous and intimately tied to our bodies, there is a paradoxical effect of, on the one hand, a hunger for freedom and mobility and, on the other, the entangling weight of this connectivity itself. As the ad campaign for Microsoft Windows Mobile promises, anywhere you go, you can “Take Your Office With You.”

Jackal Weather Balloons


The Data Weather Balloons, built from scavenged surplus home alarm systems, are designed to pick up WiFi transmissions and sonify the data. Each cluster of balloons carries an antenna, circuitry, speaker, and battery pack. By guiding the balloons through the air, through different spots and at different altitudes, one can find currents of WiFi transmission. When converted to audio streams, the WiFi data is reminiscent of bird calls; grouping a large number of the data balloons around a specific site creates a chorus of electronic squawks, chirps, and trills in response to different WiFi frequencies. Carrying the balloons around a site, one can identify ‘hotspots’ and follow vectors of network transmission. The data weather balloons are a way of allowing a direct, embodied understanding of the invisible currents of information that permeate the air around us, reminding us that even as radio waves, information does have a physical presence.

The only actual components are (1) hacked surplus alarm systems and (2) a bunch of helium balloons. There is not really a WiFi sensor of any kind on the balloons; they’re instruments.

Image Pillager


The Image Pillager

Search engines, portals, and online indexes create the surface appearance of order on the web. The ultimate, unreachable aim of these engines is to be able to index everything, repeating the dreams of the early encyclopedists. In its ideal form a search engine such as Altavista, Lycos, Ditto, Yahoo, or of course Google, could retrieve any image of anything, as long as it is online. Within its search trees and databases, the entirety of this created space could be categorized and controlled.

Another form of image database is an archival service such as Corbis. This database is composed of stock footage, historical archives, and the rights to an increasing percentage of the world’s “fine art.” These images are copyrighted and sold to designers and advertisers as a source point for the creation of the contemporary visual landscape. The aim of the Corbis service is to provide a visual encyclopedia of the physical world, human history and culture, as opposed to the virtual space of the web.

Both types of databases, however, present views of imaginary spaces. Altavista and Lycos index the web, a virtual space filled with banner graphics, snapshots of friends and pets, decontextualized graphs and charts, advertisements for unidentifiable products, landscapes from unknown tourist destinations, unattractive logos and icons, and a high percentage of porn. Corbis, on the other hand, presents the real world in pictures – but it is a strange view of the real world. Highly saturated in color, artfully composed, this is the equally imaginary fantasyland of the mediated world.

By circumventing the structures imposed onto this glut of information by the search engine, the Image Pillager presents the raw data itself in completely random form – admitting noise for what it is, hoping that signal may emerge. It may also be used as a tool for restructuring this information by making collages, generating an ever-changing surface of densities and visual signs.







DataGhost Defragmenter

CD-ROM, 2001

The DataGhost Defragmenter is a piece of software-art masquerading as a piece of actual software. A “defragmenter” is a disk utility that optimizes and cleans hard drives by removing and compacting the dead space left on the drive when files are deleted. Disks that have not been defragmented can easily become filled with small pieces of files that were supposedly deleted; these fragments can remain even after the entire drive is erased. These data fragments are like something we try hard to erase and forget but that keeps coming back, like a memory that won’t stay repressed.

The DataGhost Defragmenter analyzes your hard drive, discovering a trace memory imprint of an entire previous operating system. Reconstructing it from data fragments, like an archaeologist unearthing a forgotten ruin, the Defragmenter attempts to completely erase this echo of the machine’s past. However, the situation rapidly spins out of control as the data fragments begin spilling out of their boundaries, polluting the desktop, spawning and multiplying until the whole interface collapses.




WebShadow

interactive installation, 2001

If the net is a space, it is a kind of non-space without the attributes we normally use to understand space. Distance, boundaries, scale, all have different meanings; while our presence, our body in this other space, seems invisible, erased. How can we imagine an online, virtual, networked body? Perhaps by the information that passes through it – information being the material of this space, the body being of that material, a nexus point for it, a pathway for it, a process that desires and acts upon it. Imagining parallels with the physical body, we can look for flows of information in perception, consciousness, DNA, emotion … creating a connection that can bind these two bodies together. WebShadow explores these ideas through the video image of the viewer, seeing herself recomposed on the screen from fragments of web pages and internet traffic.

Transmute : The Virtual Artist and the Virtual Curator

Client: Joshua Decter and the Museum of Contemporary Art, Chicago

Year: 1999

Description: digital interactive exhibits with touchscreen and projection for the exhibition Transmute

2D Design: Kim Collmer

3D Design and Programming: Ben Chang

Transmute is an exhibition at the Museum of Contemporary Art in Chicago that rethinks the relationship between the public and the institution by allowing visitors to transform the show and individual works within it. A computer-based “virtual museum,” installed both online and in the museum, contains a selection of works from the museum collection. Visitors take on a curatorial role by exploring the collection, selecting works, and creating their own transformed version of the exhibition.

In a second computer installation, visitors transform John Baldessari’s iconic photocollage Fish and Ram, replacing its image components with new ones from a database of image material solicited from the general public over the internet. Both roles involve an active process of constructing meaning through framing and juxtaposition. In his statement for the exhibition, independent curator Joshua Decter writes:

How do museums of contemporary art create meaning with their collections? How do they determine the relationship of audience to the collection? And how might we begin transforming the relationship of the public to the collection, so that relatively passive forms of reception are converted into active processes of re-interpretation?

In Transmute, I have approached these issues by developing an interactive exhibition, at once actual and virtual, that uses works from the MCA Collection as a platform for the audience to become more directly (i.e., virtually directly) involved with exploring the conceptual, visual, and thematic attributes of the show itself. I invite members of the public to join with me in the process of rethinking – and even symbolically transforming – the collection in imaginative ways. Visitors to the Transmute exhibition at the MCA – and the museum’s website – are given the option to function in the capacity of virtual curators, and virtual artists.

The physical exhibition at the MCA in summer/fall 1999 contained computer installations of the interactive Transmute systems, and a physical exhibition of works from the collection presented as one possible configuration or instantiation of the curatorial idea. Both programs are also hosted on the MCA website.

The interactive Virtual Curator and Virtual Artist systems were created by Ben Chang and graphic designer Kim Collmer, in collaboration with curator Joshua Decter. Within a three-month timeline, we developed an image-manipulation program for the Virtual Artist and a fully-navigable 3D environment with interactive image database for the Virtual Curator. These systems were required to function in both a gallery installation and a web context. We leveraged a wide range of technologies – VRML for the 3D environment, Java for the image database and the Virtual Artist, and Flash for animated information screens about each work in the exhibition.

Subterranean Cosmology


Subterranean Cosmology

interactive installation, 1998

Subterranean Cosmology is an interactive video installation exploring themes of memory, longing, and tactility in the digital image. The interface for the piece is a rear-screen projection with embedded touch sensors, creating activation points within the image field. Each sensor responds to pressure, producing a different response to a firm push than to a light touch. The viewer explores the surface of the projected image, bringing to life fragments of images, texts, and sounds, all woven in an interconnected network of associations that creates a thread through eighteen scenes or vignettes. This is a labyrinth without a single correct path, without a beginning or end. Rather than a linear process of exploration, discovery, or revelation, the experience is created through the accumulation of associations and through the physical tactility and sensuousness of the interface itself.