Ray Marching: Signed Distance Fields, Sphere Tracing, and SDF Shading
Ray marching is a rendering method that finds surfaces by advancing along a ray in repeated steps instead of solving an exact geometric intersection formula up front. It is most often taught together with signed distance fields (SDFs), because an SDF tells you how far you can move before you might hit something. That makes the method conceptually simple: start at the camera, ask the scene for the distance to the nearest surface, move forward by that amount, and repeat until you are close enough to count it as a hit.
This article focuses on the core mental model behind ray marching rather than on one specific shader implementation. If you understand why the distance value gives a safe step size, why repeated evaluation converges toward the surface, and how normals come from the field, then most shader code becomes much easier to read. The interactive sections below stay in 2D on purpose so that each geometric idea remains visible. The exact same logic extends to 3D scenes rendered in fragment shaders.
Ray Marching and Sphere Tracing
The term ray marching is broad. It refers to advancing along a ray in increments and checking the scene repeatedly. Those increments could be fixed, adaptive, stochastic, or based on accumulated volume density. In graphics tutorials, though, ray marching usually means sphere tracing over a signed distance field.
A signed distance field returns the shortest distance from a point to the nearest surface, with a sign convention:
- positive outside the object
- zero on the surface
- negative inside the object
If the field is accurate, a value of d means there is no surface closer than d to the current sample point p. That lets you jump forward by exactly d without overshooting the nearest surface. Instead of taking many tiny fixed steps, you take the largest safe step the field permits. That is why ray marching with SDFs can be surprisingly efficient.
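The article's visualizations stay in 2D, so the sign convention can be checked with the simplest possible field: a circle. The following Python sketch (the function name and coordinates are illustrative, not part of the article's shader code) shows the three regimes:

```python
import math

# Sketch of a 2D circle SDF: distance from point p to the circle's
# boundary -- negative inside, zero on the surface, positive outside.
def circle_sdf(p, center, radius):
    dx, dy = p[0] - center[0], p[1] - center[1]
    return math.hypot(dx, dy) - radius

# Sign convention for a unit circle at the origin:
print(circle_sdf((2.0, 0.0), (0.0, 0.0), 1.0))  # 1.0  (outside)
print(circle_sdf((1.0, 0.0), (0.0, 0.0), 1.0))  # 0.0  (on the surface)
print(circle_sdf((0.0, 0.0), (0.0, 0.0), 1.0))  # -1.0 (inside)
```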
Safe Steps from the Distance Field
The first visualization shows one ray and one circle SDF. Each orange ring shows the distance returned at the current sample point. Because the ring touches the nearest surface but does not cross it, the ray can safely move to the edge of that ring.
Distance-Guided Stepping
Move the ray angle and sphere position to see how the marcher advances by safe distances.
When the ray points directly toward the circle, the returned distance shrinks quickly and the steps converge to the boundary. When the ray points away, the distances may grow and the algorithm exits without a hit. That is already most of sphere tracing. The update rule is simply:

p_next = p + d(p) * dir

where dir is the ray direction. The important detail is that the step size is not arbitrary. It comes from the geometry of the field itself. That is what separates SDF ray marching from naive fixed-step marching.
In a shader, you typically stop when one of three conditions happens:
- the distance falls below a small hit threshold such as 0.001
- the accumulated travel distance exceeds a maximum range
- the loop reaches a maximum number of steps
All three limits matter. The hit threshold controls precision, the maximum range bounds work for rays that miss, and the step limit protects you from pathological cases or badly behaved distance estimators.
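The update rule and the three stop conditions fit in a short loop. Here is a 2D Python sketch (the circle scene, thresholds, and function names are illustrative assumptions, not a specific shader):

```python
import math

def circle_sdf(p, center=(3.0, 0.0), radius=1.0):
    return math.hypot(p[0] - center[0], p[1] - center[1]) - radius

def march(origin, direction, sdf, eps=0.001, max_dist=100.0, max_steps=128):
    """Sphere-trace along a ray; returns (hit, t)."""
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + direction[0] * t, origin[1] + direction[1] * t)
        d = sdf(p)
        if d < eps:           # close enough to count as a hit
            return True, t
        t += d                # largest safe step the field permits
        if t > max_dist:      # ray left the scene without hitting anything
            return False, t
    return False, t           # step budget exhausted

# A ray aimed at a circle centered at (3, 0) with radius 1 hits near t = 2:
hit, t = march((0.0, 0.0), (1.0, 0.0), circle_sdf)
```

Note how the loop only ever evaluates the scene function; the step size falls out of the returned distance rather than being chosen in advance.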
Building an SDF Scene
A single sphere is useful for intuition, but ray marching becomes powerful when a whole scene can be described as one distance function. You can do that by combining primitive SDFs. In 2D, a circle's SDF is the distance to its center minus its radius. A line or plane can represent a floor. To combine objects, you usually take the minimum of their distances, because the nearest surface controls the next safe step.
2D Ray Marching Scene
Cast a small fan of rays through circles and a ground line to see how the minimum SDF controls every step.
In the scene explorer, each ray repeatedly queries the minimum of the circle distances and the ground distance. That minimum matters for a simple reason: if one object is close and another is far, the close one defines the largest safe step. Using the wrong larger distance could skip across the nearer surface.
This “minimum of primitives” rule is the basis of constructive SDF modeling. Common combinations are:
- min(a, b) for union
- max(a, b) for intersection
- max(a, -b) for subtraction
Once those combinations are clear, you can build surprisingly complex procedural scenes from a small set of primitive formulas. A shader does not need triangle meshes to describe the scene. It only needs a function that answers: “how far am I from the closest surface right now?”
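Those combinations can be sketched directly in Python (the primitive shapes and positions are arbitrary examples, assumed only for illustration):

```python
import math

def circle(p, c, r):
    return math.hypot(p[0] - c[0], p[1] - c[1]) - r

def ground(p, height=-1.0):
    # distance to a horizontal line acting as the floor
    return p[1] - height

# Union: the nearest of all primitives controls the next safe step.
def scene(p):
    return min(circle(p, (0.0, 0.0), 1.0),
               circle(p, (3.0, 0.5), 0.5),
               ground(p))

# Intersection and subtraction follow the same pattern:
def lens(p):     # region inside both circles
    return max(circle(p, (0.0, 0.0), 1.0), circle(p, (1.0, 0.0), 1.0))

def bitten(p):   # first circle with the second cut out
    return max(circle(p, (0.0, 0.0), 1.0), -circle(p, (1.0, 0.0), 1.0))
```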
Why the Method Converges
A common beginner question is why this process does not skip over thin geometry. The answer is that sphere tracing relies on a conservative distance estimate. If the SDF is exact, the returned value is a guaranteed lower bound on the true distance to the nearest surface. So the next step lands on or before the closest possible surface, never beyond it.
That guarantee weakens when the function is only a distance estimator rather than a perfect signed distance field. Fractal rendering is a classic example. The estimator may still work well, but then your step safety depends on how conservative that estimate is. This is why high-quality ray-marching renderers care about field quality, Lipschitz behavior, and carefully chosen thresholds. The renderer is only as reliable as the distance information it follows.
Surface Normals from the Field
Rasterized meshes usually store vertex normals or derive them from triangles. A ray-marched SDF surface has no explicit mesh, so the normal has to come from the field itself. The usual approach is to sample the field f at tiny offsets ε along each axis and approximate the gradient with central differences:

n ≈ normalize( f(p + ε·ex) − f(p − ε·ex), f(p + ε·ey) − f(p − ε·ey), f(p + ε·ez) − f(p − ε·ez) )

In 2D, you only need the x and y offsets.
After normalizing that gradient, you get a surface normal that can be used for lighting.
Gradient Samples and Surface Normal
Probe the SDF just left, right, above, and below a surface point. Their differences reconstruct the normal direction.
In the visualization, the black point is a position on the surface. The red probes sample the field slightly to the left and right, and the purple probes sample slightly below and above. If the right sample is farther outside the object than the left sample, the gradient points to the right. If the upper sample is farther outside than the lower sample, the gradient points upward. Combining those two differences gives the green normal arrow. This is one of the most useful ideas in SDF rendering: the same field that guides intersection also provides a way to shade the resulting surface. You are not just finding where the surface is. You are also recovering local orientation directly from the function.
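The four-probe idea from the visualization translates almost directly into code. A 2D Python sketch, using an assumed unit-circle SDF for the test point:

```python
import math

def circle_sdf(p):
    # unit circle at the origin, assumed for illustration
    return math.hypot(p[0], p[1]) - 1.0

def estimate_normal(sdf, p, eps=1e-4):
    # Central differences: probe the field right/left and above/below,
    # exactly like the red and purple probes in the visualization.
    gx = sdf((p[0] + eps, p[1])) - sdf((p[0] - eps, p[1]))
    gy = sdf((p[0], p[1] + eps)) - sdf((p[0], p[1] - eps))
    length = math.hypot(gx, gy)
    return (gx / length, gy / length)

# On the right edge of the unit circle, the normal points along +x:
nx, ny = estimate_normal(circle_sdf, (1.0, 0.0))
```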
Typical Shader Loop
A fragment shader version usually looks conceptually like this:
- build a camera ray for the current pixel
- set t = 0
- evaluate sceneSdf(rayOrigin + rayDir * t)
- if the distance is below epsilon, register a hit
- otherwise increase t by that distance and continue
- if the loop ends without a hit, draw the background
- if a hit occurs, estimate a normal and compute lighting
The loop is simple, but the surrounding details determine image quality. Camera setup affects distortion and composition. The epsilon threshold affects banding and self-shadow artifacts. The maximum step count determines whether glancing rays terminate cleanly. Lighting quality depends on normal estimation, shadows, ambient terms, reflections, and whatever material model you build afterward.
Advantages and Tradeoffs
Ray marching is attractive because it makes procedural geometry unusually direct:

- a primitive is just a formula
- scene combination is often a few min and max operations
- smooth blends, repetition, twists, and noise-based deformations can often be expressed analytically without rebuilding mesh topology

That is why the technique became popular in shader art, demoscene production, and procedural rendering experiments.
The tradeoffs are just as real:
- every pixel may require many scene evaluations
- thin or highly detailed features can need many steps or careful thresholds
- soft shadows, ambient occlusion, and reflections add even more marches
- distance estimators that are not conservative can cause misses or artifacts
- large scenes need careful acceleration or domain design to stay fast
So ray marching is not a universal replacement for rasterization or hardware triangle tracing. It is best understood as a powerful procedural rendering tool with a specific strength: geometry that is easier to define as a field than as a mesh, in contrast to the standard raster pipeline described in vertex vs fragment shaders in the graphics pipeline.
Common Extensions
Once the primary ray hit works, many classic effects are built from the same machinery. Shadow rays march from the surface toward the light. Ambient occlusion samples how quickly nearby geometry appears along the normal direction. Reflections spawn another ray from the hit point. Fog and volumetric effects accumulate contributions as the ray travels. Each one reuses the same central pattern of repeated scene evaluation, and many become more visually useful once you modulate shapes or materials with value noise, Perlin noise, and fractal noise.
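As an example of that reuse, a shadow ray is just another march with a different exit condition. A 2D Python sketch under assumed geometry (a single circular blocker; names and thresholds are illustrative):

```python
import math

def circle_sdf(p):
    # single blocker between the surface point and the light
    return math.hypot(p[0] - 2.0, p[1]) - 0.5

def shadow(origin, to_light, dist_to_light, sdf, eps=0.001, max_steps=64):
    """March from a surface point toward the light; 0 = shadowed, 1 = lit."""
    t = 10 * eps           # start slightly off the surface to avoid self-hits
    for _ in range(max_steps):
        p = (origin[0] + to_light[0] * t, origin[1] + to_light[1] * t)
        d = sdf(p)
        if d < eps:
            return 0.0     # blocker found: the point is in shadow
        t += d
        if t >= dist_to_light:
            return 1.0     # reached the light unobstructed
    return 1.0

# Point at the origin, light along +x at distance 5, blocker at x = 2:
print(shadow((0.0, 0.0), (1.0, 0.0), 5.0, circle_sdf))  # 0.0
```

The loop body is the same sphere-tracing pattern as the primary ray; only the start point, direction, and success condition change.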
That reuse is one reason ray marching is so teachable. The renderer stays conceptually consistent even as you add more features. You are still asking the scene for distance information and using that information to decide how far to move next. The complexity grows, but the underlying mental model does not change.
Summary
Ray marching with signed distance fields works because the scene tells each ray how far it can move safely. That turns intersection into an iterative process guided by geometry rather than by fixed increments. If you keep four ideas in mind, the technique becomes much easier to reason about:
- an SDF returns distance to the nearest surface with a sign
- the minimum distance in the scene determines the safe next step
- repeated safe steps converge toward the surface when the field is conservative
- the field gradient gives a usable normal for lighting
Those four pieces are the core of most introductory SDF shader renderers. After that, the work is mainly engineering: choosing thresholds, optimizing scene evaluation, and layering lighting or secondary effects on top.