by Merlin Hughes

Draw textured spheres

how-to
Jun 1, 1998
22 mins

Learn a few basics of computer graphics programming -- texture mapping, shading, and perspective -- and how to display your creation with Java's java.awt.image package

I had originally planned on writing this month’s article espousing the minutiae of implementing MOM with callback over RMI, using HTTP proxies, CGI bouncers, network PDUs, and other TLAs. Then, upon waking from twenty-six days of comatose inertia brought on by a massive overconsumption of purloined candy, I had but one thing on my mind: Easter eggs.

As a result, this article is instead about developing a Java applet that draws textured Easter eggs. The textures are just a tile pattern built from a straightforward mathematical function of sines and cosines. We will transform these planar textures onto a sphere’s surface to produce our finished product. To quickly draw these images to the screen we will render them into Java Image objects using classes from the java.awt.image package, letting the browser take care of any issues involved in actually displaying the resulting pictures. See Resources for the complete source code.

I must admit that the original inspiration for this article comes from Clifford A. Pickover’s Computers, Pattern, Chaos and Beauty: Graphics from an Unseen World (St. Martin’s Press, ISBN: 031206179X). If pretty computer-generated pictures interest you, I recommend you pick up a copy of this book.

Creating a texture

The first issue we encounter when generating our eggs is what texture to use. So as not to unduly restrict ourselves I’m going to start by defining a generic Texture interface that can be supported by a variety of different texture functions.

public interface Texture {
  public RGB getTexel (double i, double j);
}

An implementation of this interface must provide a method that returns the color of the texture element at the specified texture coordinate (i,j). The texture coordinate will be a value in the range 0.0 to 1.0, meaning that a texture function will define a texture over a square domain parameterized by i and j. The texture function should, however, accept values outside this range, clipping, replicating, or extending the texture as appropriate. The value returned from the getTexel() method is of type RGB:

public class RGB {
  double r, g, b;
  public RGB (double r, double g, double b) {
    this.r = r;
    this.g = g;
    this.b = b;
  }
  public RGB (int rgb) {
    r = (double) (rgb >> 16 & 0xff) / 255;
    g = (double) (rgb >> 8 & 0xff) / 255;
    b = (double) (rgb >> 0 & 0xff) / 255;
  }
  public void scale (double scale) {
    r *= scale;
    g *= scale;
    b *= scale;
  }
  public void add (RGB texel) {
    r += texel.r;
    g += texel.g;
    b += texel.b;
  }
  public int toRGB () {
    return 0xff000000 | (int) (r * 255.99) << 16 |
      (int) (g * 255.99) << 8 | (int) (b * 255.99) << 0;
  }
}

Our RGB class is similar to the standard Color class, except that it stores RGB colors in double precision; the color components should have values in the range 0.0 to 1.0. We also provide some helper methods to convert, scale, and combine colors.
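The packed-integer arithmetic in the RGB(int) constructor and toRGB() can be tried in isolation. The following standalone sketch (the class name PackedColorDemo is mine, not part of the applet) repeats the same shifts and masks:

```java
public class PackedColorDemo {
  // Unpack one 8-bit channel from a packed 0xAARRGGBB pixel into 0.0-1.0
  static double channel (int rgb, int shift) {
    return (double) (rgb >> shift & 0xff) / 255;
  }
  // Repack three 0.0-1.0 components into an opaque 0xAARRGGBB pixel; the
  // 255.99 multiplier maps 1.0 to 255 without needing a special case
  static int pack (double r, double g, double b) {
    return 0xff000000 | (int) (r * 255.99) << 16 |
      (int) (g * 255.99) << 8 | (int) (b * 255.99);
  }
}
```

Pure red, for example, unpacks to (1.0, 0.0, 0.0) and repacks to 0xffff0000.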

An image-based texture

This class implements a texture that uses an Image object as a source. We can use this class to map images onto a sphere by first converting the image into an array of integer RGB values (using the java.awt.image.PixelGrabber class) and then using this array to calculate texel values (as pixel is to picture element, so texel is to texture element).

import java.awt.*;
import java.awt.image.*;
public class ImageTexture implements Texture {
  int[] imagePixels;
  int imageWidth, imageHeight;
  public ImageTexture (Image image, int width, int height) throws InterruptedException {
    PixelGrabber grabber = new PixelGrabber (image, 0, 0, width, height, true);
    if (!grabber.grabPixels ())
      throw new IllegalArgumentException ("Invalid image; pixel grab failed.");
    imagePixels = (int[]) grabber.getPixels ();
    imageWidth = grabber.getWidth ();
    imageHeight = grabber.getHeight ();
  }
  public RGB getTexel (double i, double j) {
    return new RGB (imagePixels[(int) (i * imageWidth % imageWidth) +
      imageWidth * (int) (j * imageHeight % imageHeight)]);
  }
}

Note that we simply convert the texture coordinate into an integer location on the surface of the image and then return the image color at that exact point. If the texture is sampled at a greater or lower frequency than the original image, the result will be jagged as pixels are skipped or replicated. Properly addressing this problem requires us to interpolate between colors of the image; however, such a task is difficult to do properly when we don’t know where the texture will finally be displayed. Ideally, we would determine the amount of texture area covered by a single pixel on screen, and would then sample this amount of the actual texture. This approach is not practical, however, so we will not attempt to address it; supersampling, which we will examine later, is a much simpler way to reduce the effects of the problem.
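For reference, interpolating between the four nearest pixels (bilinear filtering) would look something like the sketch below. This is my own illustration, not part of the article's code; it operates on a single channel of gray values for brevity, and a real ImageTexture would apply it to each of red, green, and blue:

```java
public class BilinearSketch {
  // Sample a width*height array of gray values (0.0-1.0) at texture
  // coordinate (i,j) in [0,1], blending the four surrounding pixels
  static double sample (double[] pixels, int width, int height,
                        double i, double j) {
    double x = i * (width - 1), y = j * (height - 1);
    int x0 = (int) x, y0 = (int) y;
    int x1 = Math.min (x0 + 1, width - 1), y1 = Math.min (y0 + 1, height - 1);
    double fx = x - x0, fy = y - y0;
    double top = pixels[y0 * width + x0] * (1 - fx) + pixels[y0 * width + x1] * fx;
    double bottom = pixels[y1 * width + x0] * (1 - fx) + pixels[y1 * width + x1] * fx;
    return top * (1 - fy) + bottom * fy;
  }
}
```

Sampling halfway between a black pixel and a white pixel returns the midpoint gray, which is exactly the smoothing the nearest-pixel lookup lacks.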

An algorithmic texture

We may wish to experiment with an alternate texture, a completely artificial mathematical function. We could go with something like the Mandelbrot set or a Lyapunov function, but we'll instead go with a texture computed from the sin() function (described in Pickover's book).

public class SineTexture implements Texture {
  double multiplier, scale;
  int modFunction;
  public SineTexture (double multiplier, double scale, int modFunction) {
    this.multiplier = multiplier;
    this.scale = scale;
    this.modFunction = modFunction;
  }
  public RGB getTexel (double i, double j) {
    i *= multiplier;
    j *= multiplier;
    double f = scale * (Math.sin (i) + Math.sin (j));
    return ((int) f % modFunction == 0)
      ? new RGB (1.0, 0.0, 0.0)
      : new RGB (0.0, 1.0, 0.0);
  }
}

This class computes a simple sinusoidal function of (i,j). If the result, modulo a certain value, is 0, it returns bright red; otherwise, it returns bright green. The function uses three constants that control details of the resulting texture.
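The red/green decision can be traced by hand. This small sketch (the class name is mine) reproduces the texel test on its own, so you can see which (i,j) values fall into each band:

```java
public class SineTexelDemo {
  // True when SineTexture would return red: the scaled sum of sines,
  // truncated to an int, is an exact multiple of modFunction
  static boolean isRed (double i, double j, double multiplier,
                        double scale, int modFunction) {
    double f = scale * (Math.sin (i * multiplier) + Math.sin (j * multiplier));
    return (int) f % modFunction == 0;
  }
}
```

At the origin the sum of sines is zero, so (0,0) always lands in the red band regardless of the three constants.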

Mapping from sphere-space to texture-space

Now that we have a texture function, we must decide how to map a square, flat texture onto the closed surface of a sphere; or in other words, how to transform a point on the surface of the sphere into an (i,j) texture coordinate.

An obvious transformation is simply from longitude to i and latitude to j. The primary problem with this solution is that near the poles, the i coordinate will be greatly compressed: Walking around the earth at latitude 89° North is a lot quicker than at latitude 0°. In other words, our uniform flat texture will be squashed at the poles.

A nice solution to this problem is to use a 3D texture function, so instead of a function of (i,j) it becomes a function of (i,j,k). We can then simply sample the texture at points on the surface of the sphere within three dimensions. In this way, we get an even application of our texture. A fine solution, except that we are limited to textures that can be expressed in three dimensions. While this is possible for the algorithmic texture described above, it is not generally a valid option; most textures are only described by two variables (for example, an image or the Mandelbrot set) and cannot be mapped into three dimensions.

For the purposes of this article, we will stick with 2D texture functions, which means that we will have to map longitude and latitude directly to (i,j).
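As a concrete sketch (the class and method names are mine, but the arithmetic matches the Sphere class presented later), mapping latitude in [-π/2, π/2] and longitude in [0, π] to texture coordinates is just a scale and a shift:

```java
public class SphereMapSketch {
  // Map latitude [-PI/2, PI/2] and longitude [0, PI] onto the unit
  // texture square: longitude becomes i, latitude becomes j
  static double[] toTexture (double latitude, double longitude) {
    double i = longitude / Math.PI;       // 0.0 .. 1.0 across the hemisphere
    double j = latitude / Math.PI + 0.5;  // 0.0 at the south pole, 1.0 at the north
    return new double[] { i, j };
  }
}
```

The equator at longitude π/2 maps to the center of the texture, (0.5, 0.5).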

Mapping from screen-space to sphere-space

Now that we can transform from points on the surface of a sphere into points on the texture function, we need to be able to transform from points on the screen into points on the surface of a sphere. We’re going to keep things simple here, bypassing many of the complexities of a fully-generalized solution.

One simple way to draw our sphere is to iterate over a large number of latitude and longitude values, computing the corresponding texture values and then placing these on the screen. Transforming from sphere space to screen space is a simple matter of sines and cosines.

y = r * sin(latitude)

x = r * cos(latitude) * cos(longitude)

The problem with this type of transformation is that points are concentrated at the edge of the sphere compared with the center. This can lead to a particular problem: the center of our sphere may have cracks if we use too few latitude/longitude values, but if there are no cracks in the center we will be computing an excess number of values at the edge.

A much better solution is to simply iterate over points in our image, determining whether any point actually hits the sphere, and if it does, determining the corresponding latitude and longitude on the surface of the sphere. This provides us with sphere coordinates that we can use to compute a texture value. For such computations, however, we need some real mathematics.

An egg without perspective

Let us pretend, for a moment, that we’re Bambuti Pygmies. For us, perspective doesn’t exist. Having lived all our lives in a rainforest, we’ve never seen anything far away. As a result, things don’t get smaller as they recede, so our task is quite simple: We iterate over points (x,y) on the screen and compute where on the sphere our point hits.

Let’s look at what happens.

A point hits the sphere if x^2 + y^2 <= r^2. If it hits the sphere, its latitude is latitude = arcsin(y / r).

The earth’s longitudinal radius at this latitude is r * cos(latitude). Therefore, the longitude is longitude = arccos(x / r / cos(latitude))

Got all that?
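In code, the perspective-free mapping might look like the following sketch (my own illustration, not part of the applet's source; it returns null for points that miss the sphere):

```java
public class OrthographicSketch {
  // Invert an orthographic projection: take screen point (x,y) back to
  // latitude/longitude on a sphere of radius r, or null on a miss
  static double[] invert (double x, double y, double r) {
    if (x * x + y * y > r * r)
      return null;  // the point lies outside the sphere's disc
    double latitude = Math.asin (y / r);
    double longitude = Math.acos (x / (r * Math.cos (latitude)));
    return new double[] { latitude, longitude };
  }
}
```

The center of the disc, (0,0), maps to latitude 0 and longitude π/2, exactly as you would expect for the point facing the viewer.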

Unfortunately, owing to the Western military industrial machine that controls our lives, all the rainforests have been cut down to grow cows and coffee and to print paper magazines (yay JavaWorld!), and so we can see things at a distance, and they do get smaller as they recede. If we draw our egg without perspective it looks all wrong. Being accustomed to looking at real Easter eggs, we’re not fooled by this phony.

An egg with perspective

Perspective is our enemy. It complicates things. No longer can we simply take a point on the screen, orthogonally map it to the sphere, and perform an inverse transformation. We must now take the location of the viewer into account, fire a ray from their eye through the screen, and see where on the sphere it lands.

The basic setup for adding perspective is to choose a location for the viewer (we will assume just one eye) and pretend that the computer display is a window in space (every pixel is a point on the window). To see what color any given pixel is, we fire a ray from the viewer’s eye through the pixel’s location on the window and into our scene. We then compute where on what object this ray hits, and from this value we compute the pixel’s color. We define an interface Obj that describes this simple behavior:

public interface Obj {
  public RGB getIntersection (Vec ray);
}

This interface provides a method that should return the color of the object where the specified ray strikes it. Note that this is very simplistic; we don’t provide intersection or distance tests or any other of the features that would be needed by a full raytracer.

public class Vec {
  double x, y, z;
  public Vec (double x, double y, double z) {
    this.x = x;
    this.y = y;
    this.z = z;
  }
}

The Vec class describes a vector in space — a simple triple (x,y,z) that represents a direction in three dimensions. We are assuming, for this article, that the viewer is at the origin (0,0,0).

For our Easter egg, we will be computing the intersection between a ray and a sphere. The mathematics of this computation are fairly straightforward: The definition of the surface of a sphere of radius r, centered at (xc,yc,zc), is:

(x – xc)^2 + (y – yc)^2 + (z – zc)^2 = r^2

The definition of a ray (line) starting from point (x0,y0,z0) and passing through point (x1,y1,z1) is:

x = x0 + (x1 – x0) * t

y = y0 + (y1 – y0) * t

z = z0 + (z1 – z0) * t

To simplify things, our ray will start at the origin (x0=y0=z0=0) and the sphere itself will be located along the Z axis (xc=yc=0).

Now the definition of our sphere is:

x^2 + y^2 + (z – zc)^2 = r^2

And our ray:

x = x1 * t

y = y1 * t

z = z1 * t

To determine the intersection, we substitute the ray into the sphere definition.

x1^2 * t^2 + y1^2 * t^2 + (z1 * t – zc)^2 = r^2

t^2 * (x1^2 + y1^2 + z1^2) – t * 2 * z1 * zc + zc^2 – r^2 = 0

This is a simple quadratic equation, the solution to which is:

a = x1^2 + y1^2 + z1^2

b = – 2 * zc * z1

c = zc^2 – r^2

(t0,t1) = [-b ± sqrt(b^2 – 4 * a * c)] / 2 / a

Here, (t0,t1) are the two values of t for which the ray intersects the sphere. Note that b^2 – 4 * a * c < 0 if the ray does not hit the sphere.

Once we have values for (t0,t1), we determine which point is closer to the viewer (the front side of the sphere) and then compute the location in space of this intersection. We now have an (x,y,z) location on the sphere that we can transform back into latitude and longitude as before, and from this we can compute the appropriate texture color.

public class Sphere implements Obj {
  Texture texture;
  double sphereZ, sphereR;
  double c;
  public Sphere (double sphereZ, double sphereR, Texture texture) {
    this.sphereZ = sphereZ;
    this.sphereR = sphereR;
    this.texture = texture;
    c = sphereZ * sphereZ - sphereR * sphereR;
  }
  public RGB getIntersection (Vec ray) {
    double a = ray.x * ray.x + ray.y * ray.y + ray.z * ray.z;
    double b = -2.0 * sphereZ * ray.z;
    double tmp = b * b - 4.0 * a * c;
    if (tmp < 0) {
      return new RGB (0.0, 0.0, 0.0);
    } else {
      double t = (- b - Math.sqrt (tmp)) / 2.0 / a;
      Loc intersect = new Loc (ray.x * t, ray.y * t, ray.z * t);
      double j = Math.asin (intersect.y / sphereR);
      double i = Math.acos (intersect.x / sphereR / Math.cos (j));
      RGB rgb = texture.getTexel (i / Math.PI, j / Math.PI + 0.5);
      return rgb;
    }
  }
}

This class is a sphere implementation that accepts a Z-location, a radius, and a texture. For a given ray, we first compute the ray-sphere intersection and then compute the latitude and longitude of the intersection. From this we return the texture color.

public class Loc {
  double x, y, z;
  public Loc (double x, double y, double z) {
    this.x = x;
    this.y = y;
    this.z = z;
  }
}

This class simply describes a 3D coordinate in space.

An Easter egg with perspective

The resulting image, once we have added perspective, is much more convincing than before: The texture now distorts over the sphere’s surface in a much more realistic way than it did without perspective.

Light sources

Although our textured egg is now somewhat geometrically correct, the image is still far from convincing. The biggest obvious flaw is that the texture color is uniform across the sphere’s surface — there is no shading. Until the Earth is surrounded by a luminous fog of pollution that uniformly lights all surfaces from all directions, we will expect Easter eggs to be nonuniformly shaded. If the sun is in the sky, we will expect the top of an egg to be brighter than the bottom.

To address this issue we need to introduce the concept of a light source. There are a few different types of light source:

  • A point light source is a point in space that emits light in all directions
  • A directional light source is a light source that emits light over a large area in a single direction; this is, in fact, equivalent to a point light source that is infinitely far away
  • A spotlight emits light in a cone
  • Ambient lighting refers to light that has bounced off so many surfaces that it is essentially directionless; all surfaces of all objects will be uniformly illuminated by ambient light

There are a few other types of light sources, but these four cover the bases most often needed.

Some light sources

For the purpose of this article, we will just look at directional light sources. The sun is a good example of a directional light source; it is sufficiently far away that all rays striking the earth are essentially parallel. Computing the lighting from a directional light source is fairly simple. Handling point light sources would not be significantly more complex; other localized light sources, however, would make our task significantly harder.

We will also use ambient lighting, which serves to illuminate surfaces of objects that have no direct light on them at all. The real world is full of ambient light; without adding it to our computer model we would encounter many completely black surfaces.

To describe a nonambient light source we provide an interface, Light:

public interface Light {
  public Vec getDirection (Loc position);
}

A light implementation must return a vector describing the direction of the light at the specified position in space. For efficiency, this must be a unit-length vector. To help enforce unit length, we add the following methods to Vec:

double getLength () {
  return Math.sqrt (x * x + y * y + z * z);
}
void scale (double scale) {
  x *= scale;
  y *= scale;
  z *= scale;
}

The first method returns the length of a vector, and the second allows us to scale a vector by a given value, such as 1/length.

A directional light implementation has the following form:

public class DirectionalLight implements Light {
  Vec dir;
  public DirectionalLight (double x, double y, double z) {
    dir = new Vec (x, y, z);
    double length = dir.getLength ();
    dir.scale (1.0 / length);
  }
  public Vec getDirection (Loc loc) {
    return dir;
  }
}

Here, the constructor accepts the direction of the light. We store this in a unit-length vector, and the getDirection() method simply returns this value. A point light source would compute a new direction vector for each location in space, and we would have to add support for the light becoming weaker with distance.
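To make the contrast concrete, here is a hedged sketch of the per-point direction computation a point light source would need (PointLightSketch is my own illustration, not part of the article's code, and a full version would also attenuate the light with distance):

```java
public class PointLightSketch {
  // Unit direction of travel for light leaving a point source at
  // (lx,ly,lz) and arriving at surface point (px,py,pz)
  static double[] direction (double lx, double ly, double lz,
                             double px, double py, double pz) {
    double dx = px - lx, dy = py - ly, dz = pz - lz;
    double length = Math.sqrt (dx * dx + dy * dy + dz * dz);
    return new double[] { dx / length, dy / length, dz / length };
  }
}
```

Unlike DirectionalLight, the result depends on the surface point, so it must be recomputed for every intersection.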

Shading models

Once we have a light source, we need a shading model. This determines how the surface of our sphere reflects light that falls upon it. The two most important shading models are diffuse and specular: A diffuse shading model describes a matte object; a specular shading model describes a shiny object.

If a ray of light hits a completely diffuse surface then it is uniformly reflected back by the object in all directions. This means that it does not matter where the viewer is, the same amount of light will be reflected toward him from a particular point on the object’s surface.

The important information needed to calculate diffuse shading is the angle at which the light strikes the surface. If the light strikes the surface very directly, a large amount will be reflected; if the light strikes the surface very obliquely, just a small amount will be reflected. In this manner, surfaces directly facing the light source will be brightly lit, while surfaces facing at an angle will be less-brightly lit, and those facing perpendicular to the light source will not be lit at all.

The amount of diffuse reflected light is in fact proportional to the cosine of the angle that the light makes incident upon the surface. A simple way to compute this value is to compute the negative dot product of the unit-vector light direction and surface normal. The normal of the surface refers to a vector pointing directly up from the surface at a particular point. For a point on the surface of a sphere, the normal is in the same direction as a vector joining the center of the sphere to the particular point on the sphere’s surface.

Light direction: (vx,vy,vz)

Normal direction: (nx,ny,nz) = normalized(px – cx, py – cy, pz – cz)

Lighting: -(vx * nx + vy * ny + vz * nz)
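The three lines above translate directly into code. This standalone sketch (using plain doubles rather than the Vec class, purely for illustration) computes the diffuse term for unit-length inputs, clamping to zero when the surface faces away from the light:

```java
public class DiffuseSketch {
  // Diffuse term: negative dot product of the unit light direction
  // (vx,vy,vz) and the unit surface normal (nx,ny,nz), clamped at zero
  static double diffuse (double vx, double vy, double vz,
                         double nx, double ny, double nz) {
    double d = -(vx * nx + vy * ny + vz * nz);
    return d > 0.0 ? d : 0.0;
  }
}
```

Light shining straight down, (0,-1,0), onto an upward-facing normal, (0,1,0), gives the full value 1.0; the same light hitting the underside gives 0.0.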

The specular shading model works a little differently. When a ray of light hits the object, it is reflected by the object primarily along the direction the ray would take if it bounced off perfectly from the object’s surface. In this model, we need to take into account the location of the viewer in addition to that of the light source and the object. When determining the shading of a point on the surface of the sphere, we must calculate how close the viewer is to the direct ray reflection, and, thus, how bright the point will be.

Specular shading

Most real surfaces are best represented as a combination of both a specular and a diffuse material: Some light is diffusely reflected in all directions, and the rest is specularly reflected along the path of the light reflection. For the purposes of this article, however, we will only consider diffuse shading. Adding specular highlights would be fairly simple, but it would slow down our eggs.

To implement diffuse shading, we first add another method to the Vec class:

double dot (Vec v) {
  return x * v.x + y * v.y + z * v.z;
}

This method returns the dot product of one vector with another. If two vectors have unit length, the dot product is the cosine of the angle they make with each other. The closer they are to parallel, the closer the dot product is to 1 (or -1 if they are parallel but in opposite directions). We use this value to determine how directly the light hits our surface by computing how close to parallel (but opposite) the light direction and surface normal are.

Next, we add lighting code to the Sphere class:

Vec norm = new Vec (intersect.x, intersect.y, intersect.z - sphereZ);
double length = norm.getLength ();
norm.scale (1.0 / length);
Vec dir = light.getDirection (intersect);
double coincidence = - norm.dot (dir);
double lighting = 0.25 + 0.75 * ((coincidence >= 0) ? coincidence : 0.0);
rgb.scale (lighting);

We compute the normal of the sphere and light direction at the intersection location. We then compute the negative dot product of these vectors, and we have the amount of diffuse lighting. If this value is less than 0, the surface is facing away from the light, so we have only ambient lighting; otherwise, we have a combination of diffuse and ambient lighting. We use this to scale the texture color and then return this value.

Supersampling

The final shortcoming of our egg is that the picture is very jagged (aliased). In other words, because we are sampling just a single point in the image for each pixel, our result has very sharp edges; either a point is in the sphere or it is not; either it is one texture color or another. In reality, each pixel represents a small volume of space, so our single sample point is not a very good representation.

To address this problem, we use supersampling. Supersampling refers to firing several rays into the image for each pixel, and averaging the result to be the color of the single pixel. In this way, we get a better approximation of the true color the pixel is representing. The more rays we fire, the better our resulting image.

The code to implement supersampling is quite simple: We compute the color of our scene for several rays within each pixel volume, we sum these colors, and then divide by the number of rays. The result is the color for a single screen pixel.

int sup = 2;
double supInv = 1.0 / sup;
for (int j = 0; j < height; ++ j) {
  for (int i = 0; i < width; ++ i) {
    RGB pixel = new RGB (0.0, 0.0, 0.0);
    for (int k = 0; k < sup; ++ k) {
      for (int l = 0; l < sup; ++ l) {
        Vec ray = new Vec ((i * 2. - width) / 2. + k * supInv,
                           (j * 2. - height) / 2. + l * supInv, 150.0);
        RGB rgb = obj.getIntersection (ray);
        pixel.add (rgb);
      }
    }
    pixel.scale (supInv * supInv);
    imageData[i + width * j] = pixel.toRGB ();
  }
}

Here, we basically iterate over the pixels of our image; for each pixel, we perform an even supersampling of sup * sup rays, summing the color values for each ray and then dividing by the amount of oversampling.

A slight improvement on our simple supersampling implementation is to use adaptive supersampling. In this case, we initially perform no supersampling; if, however, we identify a significant change in the surface color between adjacent pixels, we supersample the two pixels. In this way, the extra work of supersampling is only performed where it is actually needed. The problem with adaptive supersampling is that we may entirely miss some small details on the surface.
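A minimal sketch of the detection step (my own illustration, not the article's code): compare each packed-RGB pixel with its left neighbor and mark both for re-rendering when the summed channel difference exceeds an arbitrary threshold.

```java
public class AdaptiveSketch {
  // Mark pixels whose color differs noticeably from the pixel to their
  // left; a renderer would then fire extra rays only for marked pixels
  static boolean[] needsSupersample (int[] pixels, int width, int height,
                                     int threshold) {
    boolean[] mark = new boolean[pixels.length];
    for (int y = 0; y < height; ++ y) {
      for (int x = 1; x < width; ++ x) {
        int a = pixels[y * width + x], b = pixels[y * width + x - 1];
        int diff = Math.abs ((a >> 16 & 0xff) - (b >> 16 & 0xff)) +
                   Math.abs ((a >> 8 & 0xff) - (b >> 8 & 0xff)) +
                   Math.abs ((a & 0xff) - (b & 0xff));
        if (diff > threshold)
          mark[y * width + x] = mark[y * width + x - 1] = true;
      }
    }
    return mark;
  }
}
```

A complete implementation would also compare vertical neighbors; the threshold is a tuning choice, trading speed against the risk of missing fine detail.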

Producing an image

We now have code to produce supersampled, shaded, textured spheres. Our last task is to render this to the screen. One option would be to provide a paint() method that simply calls setColor() and fillRect() for each pixel of the image. Unfortunately, such a process is inordinately slow. A bigger problem exists on a computer that cannot display true color. In this case, the image would not be dithered for the best result; instead, each pixel would be truncated to the nearest available color.

Instead, we use classes from the java.awt.image package to convert our spheres into Image objects that will take advantage of the Java runtime to optimally draw on screen. On a true-color display, the images will be displayed directly; on a less colorful display, the image will be dithered and displayed as accurately as possible.

int[] imageData = new int[width * height];
MemoryImageSource source = new MemoryImageSource (width, height,
  imageData, 0, width);
source.setAnimated (true);
source.setFullBufferUpdates (true);
tile = createImage (source);
// fill it in
source.newPixels ();

We first create a 24-bit RGB MemoryImageSource from a data array. We set this image source to be animated (which means that we can change the contents of the memory array and re-create the image) and then create an image to display in the paint() method. Whenever the tile image is filled in, we call source.newPixels(), which notifies all listeners (including the applet displaying our image) to redisplay our image.

We use this in the supplied EasterEgg class that displays our rendered egg and provides, for convenience, controls over the texture map that we use.

Conclusion

What we have implemented here is a very simple raytracer. The adaptations necessary for it to be fully generalized and supportive of different objects and configurations are actually comparatively simple. If you want to learn about computer graphics or generate your own pretty pictures, it is quite easy to experiment with this framework. A basic knowledge of mathematics, a good book, and a little inspiration is all you need to generate your own masterpieces.

Merlin’s origin is on this planet. He still belongs here. He’s an agent. Was an agent. He was sent here to accomplish a mission. He has a life to go back to.