In a previous post, I explored how to render 2D metaballs: those smooth, organic shapes that seem to merge and flow together like blobs of liquid. It was a fun dive into the mathematics and techniques behind creating soft, blobby visuals in two dimensions.

But metaballs don’t have to live in flat space. The real magic happens when we extend the concept into three dimensions. Suddenly, we’re not just working with shapes on a plane, but sculpting dynamic, volumetric forms that feel alive. Whether you’re aiming for liquid simulations, abstract art, or special effects, 3D metaballs open up a whole new set of possibilities.

In this post, we’ll explore how to bring metaballs into 3D. We’ll look at the underlying math, the rendering challenges, and some practical techniques to make them come to life.

Metaballs using signed distance fields

Blobs, also known as metaballs, are a modeling technique where simple primitives like spheres are combined into smooth, organic shapes. A sphere on its own is defined by a center and a radius, and no matter how you place multiple spheres they remain separate and rigid. Metaballs, in contrast, are built from spheres whose influence fields overlap. The visible surface is the set of points where the combined field reaches a threshold. When two metaballs come close their fields merge and the surface connects seamlessly, creating the impression of a single blobby object. This property makes metaballs feel more like liquid drops that fuse together, rather than isolated solid balls.

A signed distance field returns the distance from a point in space to the closest surface. For a sphere centered at \(c\) with radius \(r\) the signed distance from point \(p\) is:

\[d(p) = \lVert p - c \rVert - r\]

Negative values are inside, zero lies on the surface, and positive values are outside. Ray marching advances along a ray by sampling the field and stepping forward by the returned distance. When the distance falls below a small threshold \(\epsilon\) it’s treated as a ray intersection with the surface.

To merge the spheres smoothly I used the well-known quadratic polynomial smooth minimum function[01] by Inigo Quilez:

\[h = \operatorname{clamp}\!\left(0.5 + 0.5 \cdot \frac{b - a}{k},\, 0,\, 1\right)\]
\[\operatorname{smin}(a,b,k) = \operatorname{mix}(b, a, h) - k \cdot h \cdot (1 - h)\]

The formula is parameterized by \(k\), which controls how soft the blend is: larger \(k\) yields softer merging.

Ray marching loop

The fragment shader builds a ray from the camera position and marches it until it either hits the surface or exits the scene bounds.

const float EPS_H = 0.001;   // hit threshold
const float FAR_T = 50.0;    // max distance
const float STEP_SAFETY = 0.95;

vec4 traceRay(vec2 uv, vec3 cam, vec3 right, vec3 up, vec3 forward) {
    vec3 rd = normalize(uv.x * right + uv.y * up + forward);
    vec3 ro = cam;

    float t = 0.0;
    for (int i = 0; i < 2000; ++i) {
        vec3 p = ro + rd * t;
        float d = blobDE(p);
        if (d < EPS_H) return vec4(p, 1.0);     // hit
        t += max(d * STEP_SAFETY, 0.01);        // adaptive step
        if (t > FAR_T) break;                   // miss
    }
    return vec4(0.0, 0.0, 0.0, 0.0);            // miss
}

For normals, we would normally compute the gradient using a four-tap tetrahedral sampling strategy. To reduce the cost, I switched to a three-tap forward difference, reusing the center sample already known at the hit point. That saves one field evaluation per shaded hit.

const float EPS_N = 0.0012;

vec3 getNormal(vec3 p) {
    float d0 = distAt(p);   // center sample, shared by all three axes
    float dx = distAt(p + vec3(EPS_N, 0.0,   0.0)) - d0;
    float dy = distAt(p + vec3(0.0,   EPS_N, 0.0)) - d0;
    float dz = distAt(p + vec3(0.0,   0.0,   EPS_N)) - d0;
    return normalize(vec3(dx, dy, dz));
}

Material

I wanted a visual result that feels physical without paying a heavy cost. I used a GGX microfacet specular term with Schlick Fresnel and Smith geometry. The diffuse term is energy preserving for dielectrics. In compact math form:

\[f_s = \frac{D \, G \, F}{4 \, (N \cdot V)(N \cdot L)} \quad,\quad f_d = \frac{(1 - F)(1 - m)}{\pi} \, c \, \max(0, N \cdot L)\]

Here \(m\) is metalness and \(c\) is the diffuse color. The final color is \(f_d + f_s\).

To map linear HDR values to display I apply a simple exposure curve:

\[c_{out} = 1 - e^{-c \cdot e_x}\]

where \(e_x\) is the exposure control.

uniform float exposure;

vec3 applyExposure(vec3 c) {
    return vec3(1.0) - exp(-c * exposure);
}

Soft shadows

I added a cheap penumbra shadow based on sphere tracing. It samples along the light ray and attenuates light when geometry blocks it. I also exposed a quality mode so I can turn shadows off or use a binary hard shadow for speed.

float shadowTerm(vec3 ro, vec3 rd, float maxt) {
    const float k = 12.0;   // penumbra sharpness
    float t = 0.02, res = 1.0;
    for (int i = 0; i < 48; ++i) {
        if (t > maxt) break;
        float h = distAt(ro + rd * t);
        if (h < EPS_H) return 0.0;
        res = min(res, k * h / t);
        t += clamp(h * 1.25, 0.02, 0.6);
    }
    return clamp(res, 0.0, 1.0);
}

Animation of sphere centers

The sphere positions are calculated on the CPU side and passed to the GPU through a 1×N RGBA32F texture. Each texel stores one sphere: xyz for the center and w for the radius.

On the CPU side I compute the animated centers once per frame, write them into a DataTexture, and upload it with nearest filtering. In the fragment shader a single texture2D lookup is used per sphere. That removes a lot of per-pixel ALU work that would otherwise recompute those positions, and keeps the SDF loop simple. I also pass a precomputed scene bound as uSceneRadius so the marcher can skip work far away.

This pattern works well and gives room to scale up the number of metaballs or add per-sphere radii without growing the shader cost.

Performance checklist

These changes made the biggest difference during the development of this demo:

  • Calculate the normal from three taps instead of four taps.
  • Switch to adaptive stepping with a safety factor.
  • Use slightly larger hit epsilon where it does not harm quality.
  • Implement switchable shadows with a cheap soft mode.
  • Calculate the positions of the spheres on the CPU and store them in a texture for the shader to consume.

With those in place I was able to nudge the number of spheres and the max step count to balance quality and speed on different devices.

References / Further Reading