Media Infrastructures


Technical Platforms for Art, Design, and Exhibition
Summer Semester 2026

Dates

Start date: 10.04.2026
End date: 10.07.2026
Friday: 09:00 – 11:00
weekly


Lecturer

Dr. Michael Schmitz

Responsible professor

Prof. Burkhard Detzler



Course
suitable for

Degree programmes

Media Informatics – cooperative degree programme with UdS


Course
also open to

Degree programmes

Master Experimental Media
Media Art & Design


Venue

E-Haus Game Lounge


Maximum number of participants

12


Registration procedure



Course type

Studio project (short)

ECTS

8 ECTS


Assessment

Submission and presentation of work and project results


Description

Art and design schools have access to extraordinary infrastructure — media facades, immersive projection environments, motion capture systems — yet much of this potential goes untapped because the technical interfaces that would allow designers and artists to work with these systems simply don't exist. "Media Infrastructures" addresses this gap: across concrete sub-projects, you will develop technical frameworks that function as lasting, reusable platforms — not one-off prototypes, but real infrastructure that outlives the semester and can be adopted by others.

The project is aimed at media informatics students who want to put their technical skills to work for creative production, building real systems that will be deployed in exhibitions, at fairs, and in installations. Participants should have a solid background in software development.

 

We have defined three preliminary sub-projects (subject to change, and possibly to be extended with further options):

1 – FassadenAudio: Synchronised Audio Streaming for the Media Facade

The media facade at the Hochschulgalerie is driven by projectors — animations, short films, and generative works play on it in public space. What's missing is sound: passers-by and visitors should be able to hear the audio track of a projection on their own smartphone, individually and reasonably synchronised with the image. The technical challenge lies in synchronisation across networks and heterogeneous devices, in handling variable latencies, and in the question of how such a system can work without requiring an app installation (Progressive Web App, browser audio APIs). The result is an open platform that can give any future facade projection a soundtrack.
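A central piece of such a platform would be estimating each phone's offset from a shared server clock so that playback can be scheduled consistently. The following sketch shows one common approach (NTP-style offset estimation over ping samples); all names and the half-round-trip assumption are illustrative, not a prescribed design.

```typescript
// Hypothetical NTP-style clock sync: the client records when a ping left
// (t0) and when the reply arrived (t1); the server reports its own clock
// reading (ts) at the moment it answered.
interface PingSample {
  t0: number; // client send time (ms)
  t1: number; // client receive time (ms)
  ts: number; // server time at reply (ms)
}

// Estimate the server-clock offset from the sample with the lowest
// round-trip time, which is least distorted by network jitter.
function estimateOffset(samples: PingSample[]): number {
  const best = samples.reduce((a, b) =>
    a.t1 - a.t0 <= b.t1 - b.t0 ? a : b
  );
  const rtt = best.t1 - best.t0;
  // Assumption: the one-way delay is roughly half the round trip.
  return best.ts + rtt / 2 - best.t1;
}

// Given the server-announced stream start time, compute where in the
// audio track this device should currently be playing (in seconds).
function playbackPosition(
  streamStartServerMs: number,
  offsetMs: number,
  nowMs: number
): number {
  const serverNow = nowMs + offsetMs;
  return (serverNow - streamStartServerMs) / 1000;
}
```

In a browser, the computed position could then be used to schedule a buffer via the Web Audio API's sample-accurate timing; the estimation logic itself stays platform-independent.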

2 – AgentStage: 3D Agents as a Mobile Fair Installation

At fairs and events, universities and companies increasingly present themselves digitally — but this usually amounts to screens running slideshows. AgentStage goes a step further: you will build a framework that enables designers to create a 3D character and a 3D scene, link them to an AI agent, and provide the agent with specific context (knowledge about the university, its programmes, current projects). Visitors can then interact with this staged bot via voice or text, receiving information embedded in a spatial, designed experience. Technically, the project sits at the intersection of real-time 3D (WebGL/Three.js or a game engine), speech synthesis, speech recognition, and LLM integration. The central design question: how do you make the authoring — the interplay of 3D scene, character design, and knowledge configuration — accessible enough that a design team without backend experience can set up their own installation?
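One way to keep that authoring accessible is a small declarative configuration that separates design decisions (character, scene, persona, knowledge) from the model plumbing. The sketch below is a possible shape for such a config; all field names are assumptions for illustration, not a fixed schema.

```typescript
// Hypothetical configuration a design team might author.
interface AgentStageConfig {
  character: { model: string; voice: string }; // 3D asset + TTS voice id
  scene: string;                               // 3D scene asset
  persona: string;                             // how the agent should behave
  knowledge: string[];                         // context snippets (university, programmes, projects)
}

// Assemble the system prompt handed to the LLM from the designer's
// config, so designers never touch the model integration directly.
function buildSystemPrompt(cfg: AgentStageConfig): string {
  return [
    cfg.persona,
    "Answer only using the following context:",
    ...cfg.knowledge.map((k, i) => `[${i + 1}] ${k}`),
  ].join("\n");
}
```

The same config object could drive the 3D side (loading `character.model` and `scene`) and the speech side (`character.voice`), so one authored file describes the whole installation.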

3 – MotionBridge: Body-Reactive Middleware for the Motion Hub

The HBKsaar's Motion Hub — a surround projection with markerless motion capture for multiple people — is a powerful system that currently requires custom programming for every use. MotionBridge aims to change that: you will develop a middleware layer that processes mocap data in real time and exposes it to the projection environment through an accessible interface. Designers should be able to define, via a configuration layer (visual editor, node-based system, or declarative configuration), how body movements influence the projected environment — without needing to understand the entire technical pipeline. The result is a platform that turns the Motion Hub into a space where changing interactive installations can be realised without months of lead time. The motion-capture software and projection setup already exist and can be used as building blocks.
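The core of such a configuration layer is a mapping from tracked joints to projection parameters. The sketch below shows one minimal declarative form; the rule fields, joint names, and parameter names are illustrative assumptions, not the Motion Hub's actual API.

```typescript
// Hypothetical declarative mapping rule a designer might author.
interface MappingRule {
  joint: string;                  // e.g. "leftHand"
  axis: "x" | "y" | "z";
  inMin: number; inMax: number;   // expected joint range (metres)
  target: string;                 // projection parameter, e.g. "particles.speed"
  outMin: number; outMax: number; // parameter output range
}

// One frame of mocap data: joint name -> position.
type Pose = Record<string, { x: number; y: number; z: number }>;

// Map a pose frame through the rules into projection parameters,
// clamping each value to the configured output range.
function applyRules(pose: Pose, rules: MappingRule[]): Record<string, number> {
  const out: Record<string, number> = {};
  for (const r of rules) {
    const joint = pose[r.joint];
    if (!joint) continue; // joint not tracked in this frame
    const t = (joint[r.axis] - r.inMin) / (r.inMax - r.inMin);
    const clamped = Math.min(1, Math.max(0, t));
    out[r.target] = r.outMin + clamped * (r.outMax - r.outMin);
  }
  return out;
}
```

A visual or node-based editor would then simply be a front end that produces such rule sets, while the middleware evaluates them once per frame and forwards the resulting parameters to the projection environment.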

 

All three sub-projects follow the same logic: the goal is not a single artistic work, but the infrastructure that enables many future works. Methodologically, we work in agile small teams with regular reviews and usability tests conducted by designers from across the school. Beyond working code, you are expected to deliver technical documentation that ensures the platform can be maintained and extended.

The use of existing technical solutions — preferably open source — is encouraged over building from scratch, as is the responsible and transparent use of AI-assisted coding tools; in both cases, what matters is that you understand and can account for every component in your stack.

For questions, do not hesitate to contact:
Michael Schmitz

 

