Synthecology

Tele-immersive VR environment, 2005, Applied Interactives

Synthecology is a multi-user virtual reality environment for creating distributed, collaborative soundscapes. Participants in Synthecology collect, plant, grow, and cultivate a garden of strange plants from seed pods that come floating over the ocean. Each plant brings with it sounds ranging from field recordings to musical instruments to spoken-word stories to abstract sonic textures, creating a constantly evolving spatialized soundscape composition.

While many virtual environments are primarily visual, Synthecology is just as much a sonic environment as a visual one. As a multi-user participatory artwork, Synthecology also uses the activities of sharing, combining, and playing sounds as an immediate and familiar situation for social interaction.

Synthecology was first exhibited at the 2005 Wired NextFest at Chicago’s Navy Pier, with simultaneous live connections to immersive displays at the University of Illinois and others in Los Angeles, San Diego, Indiana, and Buffalo. Synthecology was produced through a collaboration between the School of the Art Institute of Chicago Immersive Environments Lab and the non-profit group Applied Interactives, with support from the University of Illinois at Chicago Electronic Visualization Lab and (art)n Studio.

Description

Synthecology is an immersive virtual environment, displayed on a stereoscopic projection screen with magnetic hand and eye trackers for gestural interaction. Viewers wear 3D glasses, and one tracked user interacts using a handheld wand. The virtual soundscape is played through a four-speaker surround sound system.

Participants enter the environment of Synthecology through a field of thick grass by the water’s edge, finding a deserted beach with only the sound of surf and wind. Faint objects come drifting over the water towards the land: translucent seed pods carrying sounds.

When a participant reaches towards a floating seed, it emits a vibrating sound that grows more excited as the hand gets closer and touches it. Pressing a button on the wand ‘grabs’ the seed.

Releasing the wand button drops the seed to the ground, where it plants itself and begins growing. As the plant grows, it begins emitting its sound – musical notes, conversations, stories, abstract sonic textures, sound effects, field recordings, all manner of sounds. As more seeds are planted, the participants in Synthecology construct an evolving soundscape. When touched, each plant emits its sound again, so that the garden can be played almost like a musical instrument. Participants can pick up the plants and move them to create new sonic arrangements of the space.

Research and Technical Development

Synthecology’s primary foundation is the CAVE[1], but the project presented a number of new technical challenges. Some of this work was oriented specifically toward this artwork, though much of it was done with the additional goal of advancing the expressive capabilities of immersive virtual environments for artists generally. Synthecology showcased the ATS VRLab’s recent research in spatialized audio for immersive environments, as well as powerful and reusable components ranging from particle systems to web interfaces.

Goals

The core goal of Synthecology was to create a multi-user immersive virtual environment for creating collaborative, evolving spatialized soundscapes. Beginning with the CAVE as the display system and Ygdrasil as the programming language, we identified several key technologies that needed to be developed or integrated into Synthecology:

  1. Spatialized audio and real-time synthesis
    Synthecology is essentially an environment for building virtual soundscapes, in which the illusion of spatial location of sounds is crucial. For a large public installation, a multiple-speaker array approach offers advantages over binaural headphone playback. Additional requirements included both sample playback and real-time synthesis, reasonable performance (approximately 30 to 50 spatialized sound sources at any one time), usability on reasonably common off-the-shelf hardware, easy configuration for variable-size speaker arrays, support for real-time DSP (such as reverb), integration into the Ygdrasil and CAVE system, and reusability.
    While the research on multichannel spatialization is substantial, a review of existing solutions found nothing that met all of these needs. The solution, building on ongoing research at the ATS VRLab in virtual audio, was to develop an extensible sound server for the CAVE using SuperCollider, controlled via Open Sound Control (OSC), with a simple node interface in Ygdrasil and a testing GUI in Max/MSP (a sketch of this control path appears after this list).
  2. Database integration
    Many virtual environments use sound by loading a set of defined sounds at startup, or by defining relationships between sounds and objects or events. In Synthecology, the set of sounds is not predefined but is constantly evolving and growing as new sounds are contributed over the web. Participants sending sounds over the web can provide additional information such as a category or visual association for the sound when it appears in the virtual environment; we needed a way to get all this information from the web into the CAVE. We used a MySQL database to store the sounds and all their metadata, and wrote Ygdrasil nodes for reading from the database, while using the existing CAVERN framework for distributing information between networked CAVEs.
  3. Database and soundserver management layer
    To aid in integrating Ygdrasil, SuperCollider, and MySQL, we developed an additional software layer in Python. This centralized much of the system’s most complex control logic, and improved performance by moving tasks such as time-costly SQL queries out of both the SuperCollider and Ygdrasil code (see the management-layer sketch after this list).
  4. Ygdrasil Dynamics
    There were several important aspects of Synthecology that were difficult to implement using the existing capabilities of the Ygdrasil language. Wind, which carries the seed pods in over the water, was particularly important. Since the primary activities involved planting, growing, arranging, handling, and sharing objects, it was crucial for the objects to exhibit characteristics such as weight and gravity. Additionally, the objects had to change their behavior as they grew through different states. To solve these challenges, we developed a rudimentary system for physics simulation, a general-purpose particle system, and a new multiplexor node that can be used to easily build complex state machines in Ygdrasil (see the state-machine sketch after this list).
  5. Flash front end
    While Synthecology is centered on the idea of group interaction and collaboration, the specialized hardware of the immersive display itself limits the number of participants. To address this, we created an additional way to enter the Synthecology world through a simplified web interface in Flash. The Flash interface and the primary CAVE world were connected through the MySQL database (see the final sketch after this list), raising the intriguing possibility of creating shared virtual worlds with interfaces ranging from the CAVE to cell phones.
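
As an illustration of the control path in item 1, here is a minimal Python sketch of the kind of OSC messages a client might send to the SuperCollider sound server. The /sound/* addresses, the argument layouts, and the use of the python-osc library are assumptions made for this example; the actual server defined its own command set, and in the installation the messages came from Ygdrasil nodes rather than Python.

    # Hypothetical OSC control of a SuperCollider-based sound server.
    from pythonosc.udp_client import SimpleUDPClient

    SC_HOST, SC_PORT = "127.0.0.1", 57120  # 57120 is sclang's default OSC port

    client = SimpleUDPClient(SC_HOST, SC_PORT)

    # Trigger sample playback on a numbered sound source.
    client.send_message("/sound/play", [42, "surf.aiff"])

    # Update the source's position in world coordinates; the server maps
    # this to amplitude levels across the speaker array.
    client.send_message("/sound/position", [42, 1.5, 0.0, -3.2])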
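
The next sketch suggests how the Python management layer in items 2 and 3 might tie the pieces together: polling MySQL for newly contributed sounds and asking the sound server to load them, keeping slow SQL queries out of the Ygdrasil and SuperCollider code. The table schema, column names, credentials, and OSC address are all invented for the example.

    # Hypothetical polling loop for the Python database/sound-server layer.
    import mysql.connector
    from pythonosc.udp_client import SimpleUDPClient

    db = mysql.connector.connect(host="localhost", user="synth",
                                 password="secret", database="synthecology")
    sc = SimpleUDPClient("127.0.0.1", 57120)

    cur = db.cursor()
    cur.execute("SELECT id, filename FROM sounds WHERE loaded = 0")
    new_sounds = cur.fetchall()

    for sound_id, filename in new_sounds:
        # Ask the sound server to preload the sample into a buffer.
        sc.send_message("/sound/load", [sound_id, filename])
        # Mark the row so the sound is not loaded twice.
        cur.execute("UPDATE sounds SET loaded = 1 WHERE id = %s", (sound_id,))

    db.commit()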
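
The multiplexor node in item 4 selects among a set of child behaviors, which makes it natural to express growth as a state machine. The class below is an illustrative Python analogue, not Ygdrasil code; the state names and timing are invented.

    # Illustrative growth state machine, analogous to a multiplexor node
    # switching a plant between behaviors as it grows.
    import time

    class Plant:
        GROW_TIME = 20.0  # seconds from planting to maturity (invented)

        def __init__(self):
            self.state = "seed"
            self.planted_at = None

        def plant(self):
            # Dropping a seed on the ground starts the growth timer.
            self.state = "sprouting"
            self.planted_at = time.time()

        def update(self):
            # Called once per frame; advances the growth state.
            if (self.state == "sprouting"
                    and time.time() - self.planted_at >= self.GROW_TIME):
                self.state = "grown"

        def touch(self):
            # Touching a grown plant retriggers its sound.
            if self.state == "grown":
                print("retrigger sound")  # an OSC message in practice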
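
Finally, the Flash interface in item 5 reached the CAVE world only through the shared database. A hypothetical server-side handler for a web contribution might look like the following, using the same invented schema as the sketch above: the new row later surfaces in the virtual world as a seed pod.

    # Hypothetical handler behind the web front end: a contributed sound
    # becomes a new database row for the CAVE side to pick up.
    import mysql.connector

    def contribute_sound(filename, category, contributor):
        db = mysql.connector.connect(host="localhost", user="synth",
                                     password="secret", database="synthecology")
        cur = db.cursor()
        cur.execute(
            "INSERT INTO sounds (filename, category, contributor, loaded) "
            "VALUES (%s, %s, %s, 0)",
            (filename, category, contributor))
        db.commit()
        db.close()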


  1. CAVE is used as shorthand here to refer also to the larger class of projection-based immersive display systems. The public display system for Synthecology was a single-wall passive-stereo rear-screen projection system with eye and hand tracking, often called a C-Wall or GeoWall. Some of the participants in Synthecology connected using standard desktop PCs with just a DSL internet connection – but the same CAVE software is used on all these systems.

Credits

  • Ben Chang: Ygdrasil code development and scripting, Yg->SuperCollider coding
  • Geoff Baum: Project management, Python sound server layer coding
  • Dan St. Clair: SuperCollider sound engine coding
  • Robb Drinkwater: SuperCollider synthesis coding
  • Helen-Nicole Kostis: 3D modeling, software installer management, release packaging, testing, remote CAVE participant coordination
  • Mark Baldridge: 3D modeling
  • Hyunjoo Oh: 3D modeling
  • Jon Greene: Flash front end

Remote CAVE Participants

  • Todd Margolis, Technical Director, Center for Research in Computing and the Arts, University of California, San Diego
  • Marientina Gotsis, Interactive Media Division of the School of Cinema-Television, University of Southern California

NextFest Volunteers

Dubi Kauffman, Tina Shah, Daria Tsoupikova, Kapil Arora, Javier Girado, Josephine Lipuma, Melissa Golter, Javid Alimohideen, Allan Spale, Geoff Holmes, Jennings Hanna, Victoria Scott, Flo McGarrell.
