PROJECTS AND RESEARCH

My overall research aim is to examine and implement creative and collaborative artificial intelligence applications across a range of fields, including audio, digital health and art.

Please scroll down this page for selected current and recent projects

Scorch: a new programming language for algorithmic composition and performance (2020-onwards)

Scorch is a music programming language designed to be straightforward for those not experienced in traditional programming languages. It initially targets algorithmic composition as a MIDI-generating VST plugin, but is ultimately intended for live coding and a variety of media computing applications. Scorch includes several AI features, among them an AI collaborator, similar to the Autopia project, which allows human performers to collaborate with AI.
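As a rough illustration of the general idea of algorithmic MIDI generation (not Scorch syntax, and not a VST plugin), here is a minimal Python sketch using the mido package; the scale, rhythm values and file name are arbitrary assumptions.

    # Illustrative only: generate a short random melody and write it to a MIDI file.
    import random
    import mido

    SCALE = [60, 62, 63, 65, 67, 68, 70, 72]   # C minor scale, as MIDI note numbers
    TICKS_PER_BEAT = 480

    mid = mido.MidiFile(ticks_per_beat=TICKS_PER_BEAT)
    track = mido.MidiTrack()
    mid.tracks.append(track)

    for _ in range(32):
        note = random.choice(SCALE)
        duration = random.choice([TICKS_PER_BEAT // 2, TICKS_PER_BEAT])  # eighth or quarter note
        track.append(mido.Message('note_on', note=note, velocity=80, time=0))
        track.append(mido.Message('note_off', note=note, velocity=0, time=duration))

    mid.save('algorithmic_sketch.mid')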

Scorch is a project by Norah Lorway, Ed Powley and Arthur Wilson (beesting.xyz)

Full release coming in 2022


Autopia: An AI Collaborator for Live Coding Music Performance (2019-onwards)

Autopia is a project by Dr Norah Lorway, Arthur Wilson and Dr Edward Powley. It uses template-based genetic programming to write SuperCollider code, with audience feedback determining the fitness function used to evolve the code. It interfaces with Utopia, a system developed at the University of Birmingham by Dr Scott Wilson et al. for collaborative, networked live coding performances.
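To give a flavour of the approach, here is a minimal Python sketch of template-based genetic programming with audience feedback as the fitness function. This is not the Autopia code: the code template, parameter ranges and the collect_audience_votes() stub are assumptions for illustration only.

    import random

    # A SuperCollider-style code template with parameter slots to be filled in.
    TEMPLATE = "{{ SinOsc.ar({freq}, 0, {amp}) * LFPulse.kr({rate}) }}.play;"

    def random_genome():
        return {"freq": random.uniform(100, 1000),
                "amp": random.uniform(0.05, 0.3),
                "rate": random.uniform(0.5, 8.0)}

    def render(genome):
        # Fill the template with the genome's parameters to produce runnable code.
        return TEMPLATE.format(**{k: round(v, 2) for k, v in genome.items()})

    def collect_audience_votes(code):
        # Placeholder: in performance, fitness would come from live audience
        # feedback while the rendered code is playing. Here we simulate it.
        return random.random()

    def mutate(genome, rate=0.3):
        child = dict(genome)
        for key in child:
            if random.random() < rate:
                child[key] *= random.uniform(0.8, 1.2)
        return child

    def evolve(pop_size=8, generations=5):
        population = [random_genome() for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(population,
                            key=lambda g: collect_audience_votes(render(g)),
                            reverse=True)
            parents = scored[: pop_size // 2]          # keep the fittest half
            population = parents + [mutate(random.choice(parents))
                                    for _ in range(pop_size - len(parents))]
        # population[0] is the fittest parent from the final scored generation.
        return render(population[0])

    print(evolve())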
 

We have presented Autopia at the following conferences: 

AISB (2019) - The Society for the Study of Artificial Intelligence and Simulation of Behaviour

ICLC (2020) - International Conference on Live Coding

NMF (2020) - Network Music Festival

AIMC (2021) - International Conference on AI Music Creativity

You can read our latest paper from AIMC 2021 HERE

HiveSynth (2019-onwards)

HiveSynth is an augmented reality synthesiser for mobile platforms.

We are currently training a machine learning model on associations between image and sound, so that it can generate sound from images (a minimal sketch of this kind of training setup follows the list below). Directions we are exploring include:

  1. Training on footage of instrument performance, and then generating sound from "air guitar" style miming of performance

  2. Training on dance performance, and then generating music to accompany a dancer's movements

  3. Replacing the image input with immersive controllers (motion capture, VR controllers) to train on 3D movements
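As a sketch of what learning an image-to-sound mapping can look like in practice, the following Python (PyTorch) example trains a small convolutional encoder to predict a mel-spectrogram frame from a video frame. The architecture, shapes and placeholder data are assumptions for illustration, not the HiveSynth model.

    import torch
    import torch.nn as nn

    N_MELS = 64  # assumed number of mel bands in the target spectrogram frame

    class FrameToSpectrum(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, N_MELS),
            )

        def forward(self, frames):          # frames: (batch, 3, H, W)
            return self.encoder(frames)     # -> (batch, N_MELS)

    model = FrameToSpectrum()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in for batches of (video frame, time-aligned mel-spectrogram frame) pairs.
    frames = torch.rand(8, 3, 128, 128)
    mel_targets = torch.rand(8, N_MELS)

    for step in range(100):
        optimiser.zero_grad()
        loss = loss_fn(model(frames), mel_targets)
        loss.backward()
        optimiser.step()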

Release date: Summer/Fall 2022 

 

It is being developed by my music tech company, Beestings Labs.

Screenshot 2020-06-17 at 16.35.38.png
Screenshot 2022-02-04 at 16.58.23.png

"A Virtual Assistant using Artificial Intelligence Technology for the social and logistical support of people with Dementia" 

(2020-onwards)

This is a collaborative research project with NHS Cornwall Partnerships / Plymouth, UK.

The project examines and creates a virtual assistant to support people living with dementia, and their carers, in rural areas.

We are currently undertaking an NIHR-funded PPI (patient and public involvement) study and have a paper in progress.

More information soon. 

Across Voids (2018-2019)

Across Voids is an immersive experience which explores how AI and Immersive technologies can help support the grieving process. 

It was funded by UKRI Research England in 2018 as part of the South West Creative Technology Network.

You can read about the project here: https://swctn.org.uk/2019/10/30/across-voids-an-interactive-experience-on-grief/

You can watch an excerpt of the experience, premiered at the University of Birmingham's BEAST FEaST 2019 festival, here:

https://vimeo.com/500238592


Anthropocene: The Human Epoch (2018)

(I composed the original score along with Rose Bolton)

A cinematic meditation on humanity's massive reengineering of the planet, ANTHROPOCENE: The Human Epoch is a feature documentary film, four years in the making, from the multiple-award-winning team of Jennifer Baichwal, Nicholas de Pencier and Edward Burtynsky.

World Premiere: Toronto International Film Festival

It has been shown at the Sundance Film Festival, the Berlin International Film Festival and many others.

Rose and I were nominated for a Cinema Eye Honors award for Outstanding Achievement in Original Music Score in 2020.

We were also named on the Sundance Institute's "18 Women Composers You Should Know" list in 2019.

Birmingham Ensemble for Electroacoustic Research (BEER) 

The ensemble was founded in 2011 as a research project within the Music Department at the University of Birmingham to explore aspects of real-time electroacoustic music making. Particular interests include networked music performance (generally via our Utopia project), group improvisation and live coding.

 

Past and current members include Scott Wilson, Norah Lorway, Martin Ozvold, Winston Yeung, Luca Danieli and Konstantinos Vasilakos.

You can read our Computer Music Journal article: 

https://www.mitpressjournals.org/doi/abs/10.1162/COMJ_a_00229


"We Have Never Been Asian" 

 

Why is cyberpunk sci-fi always set in Asian cities? Why are Asians often assumed to be good with technology? 《We Have Never Been Asian》 is the world's first short film to investigate this curious linkage between Asia and technology in global media such as movies, television and animation.

Directed by Brent Lin

Music by Norah Lorway

Link: https://www.youtube.com/watch?v=9iahNsfAHUo&t=13s

"Hollow Vertices" (2015-6) 

Live coding - Norah Lorway
clarinet + effects - Kiran Bhumber
visuals - Nancy Lee
Premiere - Vivo Media Arts Center, Vancouver, Canada - November 2015
Performances at: NIME 2016, TIES 2016 and ICLC 2016

Hollow Vertices is an improvisatory audio-visual performance environment. The sonic components are co-created through live coding in the real-time audio synthesis language SuperCollider, producing dense percussive and ambient textures. The two sound sources are linked through a custom-built network over which each performer has control of the other's code, introducing developmental elements into the composition. This is combined with an amplified clarinetist using a custom-programmed Max/MSP pedal board to drive live audio effects.
A projected image displays the performers' live code, framed by another projection of video content manipulated in real time through an internal video feedback process programmed in CoGe VJ. The video feedback is processed by custom-built effects that transform the content into new video feedback abstractions. The effects are programmed so that unpredictable visual outcomes, or glitches, appear; these glitches aid the transitions between visual aesthetics during the performance. Some visual effects are programmed to interact rhythmically with the composition, while others are controlled manually as the piece is improvised.
The composition brings disciplines and mediums together by augmenting sensory modalities through human-computer interaction. This is realized by employing different programming languages and combinations of instruments, and by reacting to the collective output while maintaining awareness of each individual contribution to the composition.
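The actual networked code sharing in Hollow Vertices was built in SuperCollider; purely as an illustration of the idea of performers exchanging code strings over a network, here is a hypothetical Python sketch using the python-osc package. The IP addresses, ports and OSC address are assumptions.

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer
    from pythonosc.udp_client import SimpleUDPClient

    PEER_IP, PEER_PORT = "192.168.0.2", 57120   # the other performer's machine (assumed)
    LISTEN_IP, LISTEN_PORT = "0.0.0.0", 57121   # where we receive their code (assumed)

    def send_code(snippet: str):
        # Push a code snippet to the other performer so they can run or edit it.
        SimpleUDPClient(PEER_IP, PEER_PORT).send_message("/code", snippet)

    def on_code_received(address, snippet):
        # In performance this would be evaluated by the audio language;
        # here we simply print what arrived.
        print(f"received on {address}: {snippet}")

    dispatcher = Dispatcher()
    dispatcher.map("/code", on_code_received)

    send_code("{ SinOsc.ar(220) * 0.1 }.play;")
    # Blocks and listens for incoming code from the other performer.
    BlockingOSCUDPServer((LISTEN_IP, LISTEN_PORT), dispatcher).serve_forever()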