Thursday, 14 October 2021: 1 pm
The aim of this class is to introduce the audience to how artists collect, source or generate data. In particular, it will give an overview of how Studio Above&Below used data derived from an air pollution sensor for ‘Digital Atmosphere’, a mixed reality artwork, and how they are currently researching and developing a process to source and implement air pollution data from the web throughout the UK. Through further funding, the project is being transformed into a publicly accessible app using air pollution data from across the UK.
The speakers will share insights from the past development of ‘Digital Atmosphere’ as well as the ongoing research and development for the public, UK-wide app. Participants will have an exclusive opportunity to find out which parts of the project are difficult and which may not work that well (yet).
By the end of the class, participants will have a general understanding of how data can be used to influence and drive digital artworks, in particular Augmented Reality art built in the Unity game engine.
Skills and knowledge that can be honed through active listening and asking questions include an introduction to Unity, storing and sending data to Unity, and possible data visualisations.
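To give a flavour of what "sending data to Unity" can look like, here is a minimal Python sketch (not Studio Above&Below's actual pipeline) of one common pattern: a small script forwards sensor or web-API readings to a Unity scene as JSON over a local UDP socket, where a Unity script can parse each datagram and map values to visual parameters. The field names, port number, and sample reading below are illustrative assumptions.

```python
import json
import socket

# Hypothetical address of a Unity scene listening for UDP datagrams.
UNITY_HOST, UNITY_PORT = "127.0.0.1", 5065

def encode_reading(reading: dict) -> bytes:
    """Keep only the fields the artwork needs and serialise them as JSON."""
    payload = {
        "station": reading["station"],
        "pm25": reading["pm25"],  # fine particulate matter, µg/m³
        "no2": reading["no2"],    # nitrogen dioxide, µg/m³
    }
    return json.dumps(payload).encode("utf-8")

def send_to_unity(data: bytes) -> int:
    """Fire-and-forget UDP send; Unity reads and parses each datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        return sock.sendto(data, (UNITY_HOST, UNITY_PORT))

# Example reading, as might be parsed from a sensor or a web API response.
sample = {"station": "London Bloomsbury", "pm25": 12.4, "no2": 31.0}
packet = encode_reading(sample)
sent = send_to_unity(packet)
```

UDP suits this kind of live artwork because readings arrive frequently and a lost packet simply means the visuals update on the next one; on the Unity side, a C# script would hold a matching `UdpClient` and deserialise the JSON each frame.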
We invite anyone aged 16+ with an interest in art, technology, the environment and data within generative art and game engines.
Studio Above&Below is a London-based art and technology practice founded by Daria Jelonek (DE) and Perry-James Sugden (UK) after graduating from the Royal College of Art. Their work combines computational design, digital art and data to draw out unseen connections between humans, machines and the environment, working towards better future interactions with our surroundings. Believing in research-based art, Studio Above&Below works with scientists, technologists and communities to push the boundaries of digital media for future living. Over recent years, the duo has created groundbreaking artworks using immersive technologies such as AR and MR with live data inputs, making the invisible visible and giving our environment a voice to express itself.