Upcoming seminars

Upcoming seminars will be announced soon.

Past seminars

dialogues3: probing the future of creative technology

Topic: Artistic and legal-philosophical perspectives on deep fakes

Guests: Ania Catherine and Dejha Ti, and Katja de Vries

Ania Catherine and Dejha Ti

Bio: Ania Catherine and Dejha Ti are an award-winning experiential artist duo who founded their collaborative art practice, Operator, in 2016. Described as “the two critical contemporary voices on digital art’s international stages” (Clot Magazine), their combined expertise comes together in large-scale conceptual works recognizable for their poetic approach to technology. Ti’s background as an immersive artist and HCI technologist, and Catherine’s as a choreographer, performance artist and gender scholar, make for a uniquely medium-fluent output, bringing together environments, technology and the body. Operator has been awarded a Lumen Prize (Immersive Environments), an ADC Award (Gold Cube), an S+T+ARTS Prize (Honorary Mention), and MediaFutures (a European Commission-funded programme). They have been speakers at Christie’s Art+Tech Summit, Art Basel, MIT Open Doc Lab, BBC Click, Bloomberg ART+TECHNOLOGY, Ars Electronica, Contemporary Istanbul, and CADAF. Ti and Catherine are originally from Los Angeles and currently based in Berlin.

Title: Soft Evidence: Synthetic cinema as contemporary art

Abstract: Art has always explored notions of truth and fiction, and the relationship between image and reality. Synthetic media’s capability to depict events that never happened makes that relationship more complex than ever. How can artists use synthetic media/deepfakes creatively, and start conversations about ethics and the social implications of unreliable realities? In this presentation, artist duo Ania Catherine and Dejha Ti of Operator discuss their work Soft Evidence, a slow synthetic cinema series created as part of MediaFutures in 2021. They will detail how research and interviews with experts on media manipulation in law, education, and activism informed their creative and technical processes. As experiential artists, Ti and Catherine plan to exhibit Soft Evidence as an installation: a site for the public to learn about and process a rapidly changing media landscape through immersion and feeling states.

Katja de Vries

Bio: Katja de Vries is an assistant professor in public law at Uppsala University. Her work operates at the intersection of IT law and philosophy of technology. Her current research focuses on the challenges that AI-generated content (‘deepfakes’ or ‘synthetic data’) poses to data protection, intellectual property and other fields of law.

Title: How can law deal with the counterfactual metaphysics of synthetic media?

Abstract: How can law deal with deep fakes and synthetic media? Law is influenced by the politics, norms and ontologies of the society in which it operates, but is never exhausted by them. Law always first and foremost obeys an already existing system of parameters, rules, concepts and ontologies, to which new elements can only be incrementally added. This contributes to legal certainty and foreseeability, as well as to law’s slowness to adapt. The EU legislator is trying to adapt to new digital challenges and opportunities by creating a veritable avalanche of legislation. In the case of deep fakes and other synthetic media, however, the question is whether operative concepts such as transparency and informed consent, and dichotomies such as fact v. fiction and human v. machine, work well with the counterfactual metaphysics of synthetic media: the articulation of what is possible into digital mathematical spaces of seemingly endless alternative realities, and extensions in time and space. More concretely: is it enough to simply flag that we are interacting with a synthetic work? Can we consent to living on forever in disseminating digital alter-egos?

2 February 2023,
15:00 – 16:00 CET

dialogues2: probing the future of creative technology

Guests: Albena Baeva and Sam Salem

Albena Baeva

Abstract: I will talk about the relationship between feminism, biases and algorithms, a topic that plays a central role in my recent work. Algorithms are not only automating many production processes but are also already shaping our perception of reality. AI is becoming a curator and creator of content, while humans are left to engage in poorly paid mechanical activities in content control factories or in the preparation of training databases. Speculating on the imagery of this new reality is what inspires me to keep collaborating with different neural networks in creating different artworks. My presentation will show how I, as an artist, simultaneously use and critique new technologies in my works.

Bio: Albena Baeva works at the intersection of art, technology, and social science. In her interactive installations for urban spaces and galleries, she uses ML and AI, augmented reality, physical computing, creative coding, and DIY practices. Albena holds two MAs, in Restoration and in Digital Arts, from the National Academy of Art in Sofia. She received an Everything is Just Fine commission from the Bulgarian Fund for Women (2019), and won the international Essl Art Award for contemporary art (2011) and the VIG Special Invitation (2011). Albena is a co-founder of Symbiomatter, an experimental arts lab; the studio for interactive design Reaktiv; the first Bulgarian gallery for digital art, gallery Gallery; and the AR sculpture park Ploshtadka. Her work has been shown internationally in museums of contemporary art, including Essl (Austria, 2011), EMMA (Finland, 2013), and MCV Vojvodina (Serbia, 2015 and 2019), as well as at galleries and festivals for video and performance in Austria, Bulgaria, the Czech Republic, Cyprus, Denmark, France, Finland, Germany, Hungary, Italy, Lithuania, Switzerland, Serbia, Turkey, Ukraine and the USA.
Sam Salem

Abstract: I will discuss approaches to, and reflections on, the use of Neural Synthesis in my recent works Midlands (2019) and THIS IS FINE (2021), and in my forthcoming work for solo trombone (+), Bury Me Deep (2022).

Bio: Sam Salem is a British/Jordanian composer who creates works for performers, electronics and video. His compositional process begins with a set of locations: a line on a map connected by a particular theme, history, or set of constraints. He captures moments and surprises and ultimately, like the prominent London-based psychogeographer Iain Sinclair, offers a reading of his chosen locations, a divination made through an “act of ambulatory sign-making”. The layers of myth and history that he uncovers form his building blocks. His first works for live performers, London Triptych, were recorded and released as Salem’s debut portrait album in November 2021 via dFolds. He is a founding member and co-artistic director of Distractfold Ensemble, recipients of the Kranichstein Music Prize for Interpretation from the Internationales Musikinstitut Darmstadt (IMD) in 2014. Sam is also co-founder and co-director of Unsupervised / The Machine Learning for Music Working Group, a collaboration between the RNCM and the University of Manchester that explores the creative applications of ML and AI. He is currently PRiSM Lecturer in Composition at the Royal Northern College of Music and was once described by the New York Times as “young”.

9 September 2022,
15:00 – 16:00 CEST

The seminar recording is available HERE.

dialogues1: probing the future of creative technology
Subject: “Interaction with generative music frameworks”

Guests: Dorien Herremans and Kıvanç Tatar

31 March 2022,
10:00 (sharp) – 11:00 CEST

The seminar recording is available HERE.

Dorien Herremans: Controllable deep music generation with emotion

Abstract: In their more than 60-year history, music generation systems have never been more popular than they are today. Yet while the number of music AI startups is rising, generated music still has a few issues. First, it is notoriously hard to enforce long-term structure (e.g. earworms) in the music. Second, by making the systems controllable in terms of meta-attributes like emotion, they could become practically useful for music producers. In this talk, I will discuss several deep learning-based controllable music generation systems developed over the last few years in our lab. These include TensionVAE, a music generation system guided by tonal tension; MusicFaderNets, a variational autoencoder model that allows for controllable arousal; and a controllable seq2seq lead sheet generator with Transformers. Finally, I will discuss some more recent projects by our AMAAI lab, including generating music that matches a video.

Bio: Dorien Herremans is an Assistant Professor at the Singapore University of Technology and Design (SUTD), where she is also Director of Game Lab. At SUTD she teaches Computational Data Science, AI, and Applied Deep Learning. Before joining SUTD, she was a Marie Skłodowska-Curie Postdoctoral Fellow at the Centre for Digital Music at Queen Mary University of London. She received her PhD in Applied Economics on the topic of computer generation and classification of music through operations research methods, and graduated as a business engineer in management information systems at the University of Antwerp in 2005. After that, she worked as a Drupal consultant and was an IT lecturer at Les Roches University in Bluche, Switzerland. Dr Herremans’ research interests focus on AI for novel applications such as music and audio.

Kıvanç Tatar: Musical Artificial Intelligence Architectures with Unsupervised Learning in Improvisation, Audio-Visual Performance, Interactive Arts, Dance, and Live Coding

Abstract: A generalized conceptualization of music suggests that music is “nothing but organised sound”: it involves multiple layers, any sound can be used to produce music, and strong connections exist between pitch, noise, timbre, and rhythm. This conceptualization points to two kinds of organization of sound: (1) organization in latent space, to relate one sound to another; and (2) organization in time, to model musical actions and form. This talk covers different Artificial Intelligence architectures developed from this generalized understanding of music. These architectures are trained on datasets of audio recordings using unsupervised learning, which lets them cover a wide range of aesthetic possibilities and enables them to be incorporated into various musical practices. The example projects span musical agents in live performances of musical improvisation and audiovisual performance, interactive arts and virtual reality installations, music-dance experiments, and live coding approaches.

Bio: Kıvanç Tatar works in the field of advanced Artificial Intelligence in arts and music, active both as a researcher, with important theoretical and technical contributions, and as an artistic practitioner: an experimental musician and audiovisual artist who often works in artistic collaborations. His research has expanded to multimodal applications that combine music with movement computation and visual arts, and his computational approaches have been integrated into musical performances, interactive artworks, and immersive environments including virtual reality. Tatar has a dual educational background in music and technology, with a PhD from Simon Fraser University in Canada (2019). He started as an Assistant Professor in Interactive AI in Music and Art at Chalmers in 2021, funded by a WASP-HS grant until 2026.

Many Hinges and other Problematic Metaphors: programmatic data mining towards Fluid Corpus Manipulation

This presentation is part of the KTH Sound and Music Interaction seminar series.

7 December, 15:00 – 15:45

In this presentation, Prof. Pierre Alexandre Tremblay will present the ecosystem of software, extensions and knowledge exchange around the Fluid Corpus Manipulation project, whose agenda is to empower techno-fluent musicians with the tools and the thinking of machine listening and machine learning, enabling musicking and musicking-based research around sound bank data mining. The project’s design agenda will be discussed and its implementation demonstrated, followed by a Q&A session.