Audio Particles Bubble

Audio-reactive particle visualizer
Audio Particles Bubble — screenshot 1
about

An experimental WebGPU-based audio visualizer combining a GPU particle system and a deformable glow sphere. Real-time audio data drives a uniform-based state system, procedural displacement logic, and shader-controlled visual modes.

main technologies

React // Three.js // TSL // WebGPU

project role

Creative Coding, Shader Development, GPU Programming

Designed and developed a WebGPU-powered audio visualization system featuring a GPU particle simulation and a deformable glow sphere. Built a uniform-driven state machine with procedural displacement logic, mapping real-time audio data to shader uniforms to create dynamic generative visuals.

Audio Particles Bubble — screenshot 2
Audio Particles Bubble — screenshot 3
challenges and achievements

What I learned through this project:

Uniform-driven state machine

Implemented a uniform-driven state machine controlling multiple visual modes — Speaking, Thinking, Audio Listening, and Idle. Each state modifies shader uniforms, deformation intensity, glow behavior, and particle motion patterns in real time.
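The core idea can be sketched in plain JavaScript. This is a minimal illustration, not the project's actual code: the state names follow the modes listed above, but the uniform keys (`uDeform`, `uGlow`, `uParticleSpeed`) and easing values are assumptions; in the real project these would be Three.js/TSL uniform nodes rather than a plain object.

```javascript
// Per-state target values for the shader uniforms (illustrative keys).
const STATE_TARGETS = {
  idle:      { uDeform: 0.1, uGlow: 0.3, uParticleSpeed: 0.2 },
  listening: { uDeform: 0.3, uGlow: 0.5, uParticleSpeed: 0.5 },
  thinking:  { uDeform: 0.5, uGlow: 0.7, uParticleSpeed: 0.8 },
  speaking:  { uDeform: 0.9, uGlow: 1.0, uParticleSpeed: 1.0 },
};

function createStateMachine(initial = 'idle') {
  const uniforms = { ...STATE_TARGETS[initial] };
  let current = initial;
  return {
    setState(next) {
      if (STATE_TARGETS[next]) current = next;
    },
    // Called once per frame: ease each uniform toward the current
    // state's target so transitions between modes stay smooth.
    update(dt, easing = 3.0) {
      const t = Math.min(1, dt * easing);
      for (const key of Object.keys(uniforms)) {
        uniforms[key] += (STATE_TARGETS[current][key] - uniforms[key]) * t;
      }
    },
    get uniforms() { return uniforms; },
    get state() { return current; },
  };
}
```

Easing toward per-state targets (rather than snapping) is what makes mode changes read as continuous motion on screen.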

Procedural displacement logic

Custom shader-based procedural displacement logic manipulates vertex positions of both particles and the central sphere. Audio frequency data is mapped to GPU uniforms, driving dynamic deformation, pulsation, and distortion effects.
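For illustration, the displacement idea can be shown on the CPU; the project runs this per-vertex on the GPU in a TSL shader, and the layered-sine "noise" and constants below are assumptions standing in for the actual procedural function.

```javascript
// Displace a sphere vertex along its normal by an audio-scaled wave.
function displaceVertex([x, y, z], time, audioLevel) {
  const len = Math.hypot(x, y, z) || 1;
  // For a sphere centered at the origin, the unit normal of a vertex
  // is its normalized position.
  const n = [x / len, y / len, z / len];
  // Cheap pseudo-noise: layered sines over position and time,
  // scaled by the audio level so louder input deforms more.
  const wave =
    Math.sin(x * 3 + time) * 0.5 +
    Math.sin(y * 5 + time * 1.3) * 0.3 +
    Math.sin(z * 7 + time * 0.7) * 0.2;
  const amount = wave * audioLevel * 0.25;
  return [x + n[0] * amount, y + n[1] * amount, z + n[2] * amount];
}
```

With `audioLevel` at zero the sphere rests undeformed; as the level rises, the surface pulses outward and inward along its normals.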

Dual system: particles + glow sphere

The scene combines a GPU-accelerated particle field and a glowing central sphere. Both systems are synchronized through shared uniform inputs, creating a cohesive audio-reactive visual composition.
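A sketch of that synchronization, with both systems reading one shared input (names are illustrative; in the project these are TSL uniform nodes read by both materials):

```javascript
// One shared uniform set drives both halves of the scene.
const shared = { uAudioLevel: 0, uTime: 0 };

function updateSphere(u) {
  // Sphere deformation scales directly with the audio level.
  return { deform: u.uAudioLevel * 0.8 };
}

function updateParticles(u) {
  // Particles read the same input, so both systems pulse together.
  return { speed: 0.2 + u.uAudioLevel * 0.6 };
}
```

Because neither system samples the audio independently, a single per-frame write to `shared` keeps the particle field and the sphere in lockstep.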

WebGPU rendering & performance optimization

Built on WebGPU with custom TSL shaders, the rendering pipeline is optimized for stable frame rates despite complex deformation, state transitions, and continuous audio input processing.

process

steps of development

Concept & Visual Architecture

  • Designing particle + glow sphere composition
  • Defining multi-state visual behavior architecture
  • Planning uniform-driven animation logic

Audio-to-Shader Mapping

  • Implementing real-time audio frequency analysis
  • Mapping frequency bands to shader uniforms
  • Creating smooth state transitions between visual modes
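The mapping step above can be sketched as follows, assuming Web Audio's `AnalyserNode` byte output (0–255 per frequency bin); the band boundaries are illustrative, not the project's actual values:

```javascript
// Average analyser frequency bins into a few bands, normalized to 0..1
// so the results can be written straight into shader uniforms.
function mapBands(freqData, bands = [[0, 8], [8, 64], [64, 256]]) {
  return bands.map(([start, end]) => {
    let sum = 0;
    const hi = Math.min(end, freqData.length);
    for (let i = start; i < hi; i++) sum += freqData[i];
    const count = Math.max(1, hi - start);
    return sum / count / 255; // normalize byte values to 0..1
  });
}
```

Splitting the spectrum into low/mid/high bands lets bass drive the sphere's pulse while higher frequencies animate finer particle detail.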

WebGPU & Shader Pipeline

  • Building a GPU-accelerated particle system
  • Developing custom TSL shaders
  • Implementing procedural displacement logic
  • Managing uniform-driven state machine
  • Optimizing GPU buffers and rendering flow
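One common buffer optimization of the kind listed above is interleaving per-particle attributes into a single typed array, cutting buffer count and upload overhead. A minimal sketch, assuming a hypothetical layout of position xyz plus velocity xyz:

```javascript
// Pack per-particle positions and velocities into one interleaved
// Float32Array: [x, y, z, vx, vy, vz] per particle.
function packParticles(positions, velocities) {
  const count = positions.length;
  const stride = 6; // floats per particle
  const buf = new Float32Array(count * stride);
  for (let i = 0; i < count; i++) {
    buf.set(positions[i], i * stride);      // x, y, z
    buf.set(velocities[i], i * stride + 3); // vx, vy, vz
  }
  return buf;
}
```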

Performance & Deployment

  • Frame rate stabilization
  • GPU workload optimization
  • Deploying demo on Vercel