Coordinate Systems

Display Agnosticism in p5.js, with Shaders

A look at how a p5.js piece can render consistently across displays of any size and aspect ratio.

*disclaimer: other artists may well have solved this problem in ways different from what is presented in this post. If I learn of better methods used by other artists, this post will be updated!

While developing my Generative Art project titled Flows, I ran into the issue of how best to show the work across a variety of displays with different aspect ratios. Managing aspect ratio, pixel density, and display density became a bit complex, especially as I waded through vertex and fragment shaders, so I organized my thoughts and implemented a relatively simple pattern that I’ll describe in this post.

Base Coordinates

Flows uses some physics-based equations that depend on distances, so I needed some source of “true distance” intrinsic to the piece itself to avoid different physics behavior on different displays. In order to keep my life simple, I decided to use a concept I called “Base Coordinates” to manage particles and flow elements.

Put simply, each piece operates within a 1000×1000 base coordinate system. These coordinates do not correspond to any pixels on a display, but instead are a virtual coordinate system that must be scaled before showing up on a screen.

// base coordinate system is 1000 x 1000
var BASE_SIZE = 1000;
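
As a quick illustration of why this matters, here is a minimal sketch showing that a distance computed in base coordinates is the same on every display (the particle objects and helper below are hypothetical, not code from Flows):

// hypothetical particles stored in base coordinates (0 to 1000)
var a = { x: 250, y: 500 };
var b = { x: 750, y: 500 };

// distance in base units -- identical on every display,
// since no screen dimensions are involved
function baseDistance(p, q) {
  var dx = p.x - q.x;
  var dy = p.y - q.y;
  return Math.sqrt(dx * dx + dy * dy);
}

// baseDistance(a, b) is always 500, no matter the window size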

Window Coordinates

The base coordinates must be scaled to the window size. Because we want this to work with any aspect ratio, we can’t know the scale factor without a little help from the window. Assuming we want the entire base coordinate system to be visible at all times, the following logic is appropriate:

// translate to window size
var WIDTH = window.innerWidth;
var HEIGHT = window.innerHeight;
var DIM = Math.min(WIDTH, HEIGHT);
// calculate scale factor
var S = DIM / BASE_SIZE;

We also want to center the 1000×1000 base coordinate system on the screen. This means when scaling x or y base coordinates to the screen, we want to offset such that the base coordinate (500, 500) is in the center of the screen.

// we center the base coordinate system on the screen
var OFFSETX = (WIDTH - DIM) / 2;
var OFFSETY = (HEIGHT - DIM) / 2;

Finally, we add three utility functions that convert base lengths and coordinates into window lengths and coordinates.

// utility functions to return window coordinate from base coordinate
function M(value) {
  return value * S;
}
function Mx(x) {
  return x * S + OFFSETX;
}
function My(y) {
  return y * S + OFFSETY;
}
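
As a quick sanity check, here is how these helpers behave on a hypothetical 1920×1080 window (the numbers are illustrative only):

// with WIDTH = 1920 and HEIGHT = 1080:
// DIM = 1080, S = 1.08, OFFSETX = 420, OFFSETY = 0
M(1000);  // 1080 -- the full base extent spans the short dimension
Mx(500);  // 960  -- the base center maps to the horizontal center of the window
My(500);  // 540  -- the base center maps to the vertical center of the window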

Draw in p5 – from Base to Window Coordinates

Whenever we are drawing in p5 to the screen, we can use our three scaling functions to properly scale to the screen.

// scale non-coordinate lengths/sizes with M()
screen.strokeWeight(M(2));
// scale coordinates with Mx() and My()
screen.curve(
  Mx(this.xlast - this.ulast),
  My(this.ylast - this.vlast),
  Mx(this.xlast),
  My(this.ylast),
  ...
);

Draw in shader – from Base to Window Coordinates

The code above works within p5.js’s draw() function, but vertex and fragment shaders are often employed to unlock the power of the graphics card in a piece of generative art. How, then, can we scale the base coordinates to the shader window?

First, we can record two scale factors that are the ratio of the window width and height to the base size of 1000.

var SX = WIDTH / BASE_SIZE;
var SY = HEIGHT / BASE_SIZE;

In our p5 setup function, we can pass all of the information required to scale from base coordinates to window coordinates as uniforms.

function setup() {
  stream = createShader(vertexShader, fragmentShader);
  stream.setUniform("M", [S, S]);
  stream.setUniform("Sxy", [SX, SY]);
  // determine display density and set pixelDensity
  let density = displayDensity();
  pixelDensity(density);
  stream.setUniform("D", density);
  stream.setUniform("WIDTH", WIDTH);
  stream.setUniform("HEIGHT", HEIGHT);
  stream.setUniform("OFFSETX", OFFSETX);
  stream.setUniform("OFFSETY", OFFSETY);
}
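
For context, here is a minimal sketch of how such a shader might be applied each frame, assuming a WEBGL canvas and a full-canvas rectangle (the createCanvas call and this draw loop are assumptions for illustration, not code from Flows):

// assumed earlier in setup(): createCanvas(WIDTH, HEIGHT, WEBGL);
function draw() {
  // activate the vertex/fragment shader pair
  shader(stream);
  // draw a rectangle covering the whole canvas so the fragment
  // shader runs for every pixel (WEBGL mode uses a center origin)
  rect(-WIDTH / 2, -HEIGHT / 2, WIDTH, HEIGHT);
}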

We can employ generic vertex shader code to display in 2D.

attribute vec3 aPosition;
attribute vec2 aTexCoord;

varying vec2 pos;

void main() {
  // copy the texcoords
  pos = aTexCoord;

  vec4 positionVec4 = vec4(aPosition, 1.0);
  positionVec4.xy = positionVec4.xy * 2.0 - 1.0;

  gl_Position = positionVec4;
}

In our fragment shader, we must declare the uniforms corresponding to the values set above (the snippet below omits WIDTH and HEIGHT, which it does not use).

varying vec2 pos;
uniform sampler2D texture;

// multiplier to scale to screen size
uniform vec2 M;
// ratios of screen dims to base dims
uniform vec2 Sxy;
// pixel density of display
uniform float D;
// window-pixel offsets used to center the base coordinate system
uniform float OFFSETX;
uniform float OFFSETY;

Finally, in our fragment shader’s main() function, we can translate between the shader’s gl_FragCoord.xy and base coordinates.

void main() {
  // texture coordinate for sampling the virtual screen texture
  vec2 uv = pos;
  // translate gl_FragCoord to intermediate coordinates independent of pixel density
  vec2 ic = gl_FragCoord.xy / M / vec2(D);
  // scale from intermediate to base coordinates
  vec2 bc = vec2(ic);
  bc.x = bc.x - 0.5 - (OFFSETX / (Sxy.y));
  bc.y = bc.y + 0.5 - (OFFSETY / (Sxy.x));

We now have our base coordinate values available from within the fragment shader! Note the cross-use of Sxy: OFFSETX is divided by Sxy.y and OFFSETY by Sxy.x. Only the offset along the longer window dimension is ever non-zero, and in that case the scale of the shorter dimension equals S, so each division reproduces the offset in base units.
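
For debugging, the same mapping can be mirrored on the CPU. The helper below is purely illustrative (not part of Flows); given device-pixel coordinates, it reproduces the bc values the shader computes:

// CPU-side mirror of the shader math: device pixels to base coordinates
function fragToBase(fragX, fragY, density) {
  // intermediate coordinates, independent of pixel density
  var icx = fragX / S / density;
  var icy = fragY / S / density;
  // same offsets as in the fragment shader
  return {
    x: icx - 0.5 - OFFSETX / SY,
    y: icy + 0.5 - OFFSETY / SX,
  };
}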

In my Flows project, this is used to calculate the stream function and velocity potential values in the base coordinate reference frame.

  ...
  // calculate stream function and velocity potential in base coordinates
  vec2 psiPhi = getPsiPhi(bc);
  float psi = psiPhi.x;
  float phi = psiPhi.y;
  ...

Summary

The solution described in this post works well, as far as I have been able to tell. I like how the framing behaves across a substantial range of aspect ratios. Additionally, while precise, the scaling between base coordinates and screen coordinates is not so burdensome that it overloads the GPU on my MacBook. There is certainly further optimization that could be applied beyond what is described in this post, but this solution is intended to strike a balance between performance and code readability.

Follow-On

Perhaps this implementation can be iterated on in the future to let the artist define any pixel density instead of just using the display’s density. Setting a higher pixel density would require more computational effort, but it would also enable ultra-high-resolution outputs to be generated.
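
As a rough sketch of that idea, the density could come from an artist-defined constant rather than from the display (RENDER_DENSITY is a hypothetical name, and the value is only an example):

// hypothetical artist-defined density, e.g. 4 for ultra-high-resolution output
var RENDER_DENSITY = 4;

function setup() {
  stream = createShader(vertexShader, fragmentShader);
  // use the fixed density instead of displayDensity()
  pixelDensity(RENDER_DENSITY);
  stream.setUniform("D", RENDER_DENSITY);
  // ... remaining uniforms as before
}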

