Nikolaus Mayer





Let’s  build  a  Raytracer





Sommercampus 2019
Technische Fakultät
Universität Freiburg

Why this course?

  • Fact: Computer graphics are awesome
  • Fact: OpenGL is a mess (in terms of user-friendliness)
  • Fact: Blender is mostly a black box

  • my own opinion: you can never fully understand something which you have never made yourself

  • consequence: one should develop one's own renderer

  • this course wants to be the next best thing

Overview

  • Introduction
  • 0. An empty image
  • 1. Heaven and earth
  • 2. Triangles
  • 3. Raytracing
  • 4. Ray energy
  • 5. Spheres
  • 6. Light
  • 7. Shadows
  • 8. Random stuff
  • 9. Textures
  • █. Bonus: ████████

Introduction

"Computer graphics"

  • Generating images from model knowledge

Introduction

Raytracing vs Rasterization

  • Rasterization: step-by-step pipeline, whole image is processed in each step

Introduction

Raytracing vs Rasterization

  • Raytracing: pixel-by-pixel, each pixel complete on its own

Introduction

Raytracing vs Rasterization

  • Rasterization:
    • ultra-optimized hardware (GPUs)
    • ultra-optimized engines and tools (OpenGL)
    • (thanks to that) fast
    • many effects need tricks or post-processing
  • Raytracing:
    • reflections, depth-of-field etc. are "all inclusive"
    • can be physically correct
    • is used for simulations
    • (also special hardware (Nvidia RTX, AMD RDNA(?)))
  • both are active research areas!

Introduction

Recommended reading

  • "Graphics Study" blog posts by Adrian Courrèges
  • "Physically Based Rendering" book by Pharr/Humphreys
  • "Graphics Gems" book series
  • www.scratchapixel.com
  • Conferences: SIGGRAPH, SIGGRAPH Asia, EuroGraphics
  • and: GDC (developer conference with talks on e.g. engines)

Introduction

Course structure

  • split into individual "levels", each with
    • motivation
    • content (with theory)
    • coding session
    • inspirational ideas








Level 0 — An empty image

#0 An empty image

What is an image?

  • an ordered collection of pixels
  • pixels are triples (Red, Green, Blue)
  • integer numbers 0–255 per color
  • 16.7 million colors!!!1

#0 An empty image

The PPM image format

  • "Portable PixMap"
  • the simplest possible format
  • (no compression, very inefficient)
  • 3 colors (RGB) per pixel
  • Header: P3 WIDTH HEIGHT MAXIMUM
  • in our case: P3 512 512 255
  • P3 means "ASCII-PPM" → human readable \Ü/
  • P3 512 512 255 255 0 0 255 0 0 255 0 0 [...]
  • P6 512 512 255 ÿ^@^@ÿ^@^@ÿ^@^@[...] (binary PPM)
  • left-to-right, top-to-bottom, separation by spaces

#0 An empty image

The code basis

#include <fstream>

std::ofstream outfile("img.ppm");
outfile << "P3 512 512 255";

for(int y = 0; y < 512; ++y) {    /// Rows
  for(int x = 0; x < 512; ++x) {  /// Columns
    outfile << " " << 255
            << " " << 0
            << " " << 0;
  }
}
  • 512 · 512 = 262144 red pixels
  • (we could write  << " 255 0 0"  instead)

#0 An empty image

Hack time






Dummy image

#0 An empty image

Result



#0 An empty image

Debugging

  • Loop conditions (< / ≤)
  • file size: $ du --bytes img.ppm
    • "P3 512 512 255" = 14 bytes header
    • " 255 0 0" = 8 bytes per pixel (ASCII: 1 byte/character)
    • → 14 + 8 · (512 · 512) = 2097166 bytes
    • file contents: $ head -c 100 img.ppm
    • P3 512 512 255 255 0 0 255 0 0 255 0 0 ... 








Level 1 — Heaven and earth

#1 Heaven and earth

What is a camera?

  • assumption: pinhole cam
  • (no lens effects)
  • attributes:
    • projection center
    • viewing direction
    • "focal length" (=zoom)
    • image width/height
  • assumption: 4×4 pixels



#1 Heaven and earth

World coordinate system

  • Z axis in viewing direction
  • together with X/Y: coordinate base
  • here: left-hand system
  • X RIGHTWARDS
  • Y UPWARDS
  • Z FORWARDS

  • the system does not matter, but
  • one has to make a choice
  • and then be consistent


#1 Heaven and earth

Making rays

  • Z axis in viewing direction
  • together with X/Y: coordinate base
  • here: left-hand system
  • ray = 3D vector
  • ray = vector chain
  • ratio of lengths X/Y to Z determines zoom
  • always normalize all rays to length 1.0!
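The ray construction above can be sketched like this (a minimal sketch: the `Vec3` struct and the `make_ray` name are illustrative, not the course's own classes):

```cpp
#include <cmath>

// Minimal 3D vector for illustration (the course builds its own Vector class)
struct Vec3 {
  double x, y, z;
};

// Build a ray through pixel (px, py) of a camera looking along +Z.
// `zoom` plays the role of the focal length: larger zoom -> narrower view.
Vec3 make_ray(double px, double py, double zoom) {
  Vec3 r{px, py, zoom};
  const double len = std::sqrt(r.x*r.x + r.y*r.y + r.z*r.z);
  // Always normalize rays to length 1
  return {r.x/len, r.y/len, r.z/len};
}
```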




#1 Heaven and earth

Hack time






Vector class

#1 Heaven and earth

Pixel offset

  • two conditions
    • 1. rays go through pixel centers
    • 2. (0,0) is in the image center
  • here: width, height even
  • (0,2,4,... and not 1,3,5,...)
  • → integer coordinates are wrong!
  • pixel centers (0.5,0.5), (0.5,1.5), ...

#1 Heaven and earth

Pixel offset

up to now:

for(int y = 0; y < 512; ++y) {    /// Rows
  for(int x = 0; x < 512; ++x) {  /// Columns
    ...

from now on: pixel centers with correct coordinates

for(int y = 256; y >= -255; --y) {    /// Rows
  for(int x = -255; x <= 256; ++x) {  /// Columns
    ...
    ...(x - 0.5)...
    ...(y - 0.5)...
    ...

#1 Heaven and earth

Hack time






Ray construction

#1 Heaven and earth

Above and below

  • camera is "level"
  • (for more we would need rotation matrices)
  • infinite ground plane, Y=0
  • camera not at Y=0!
  • ray hits ground if
    • ... ray.y < 0 !
  • ray hits sky if
    • ... ray.y ≥ 0 !

#1 Heaven and earth

Hack time






Ground and sky

#1 Heaven and earth

Result




#1 Heaven and earth

Debugging

  • Coordinate system vectors
  • Vector::operator+, Vector::operator*

#1 Heaven and earth

Heaven 2.0

  • old sky color: constant
  • new sky color: depending on ray angle
    • horizontal: blue-ish
    • vertical: black
    • (totally up to you!)
  • horizontal: ray.y = 0, vertical: ray.y = 1
  • interpolation between color and black:
    [color] * std::pow(1-ray_direction.y,2)

#1 Heaven and earth

Hack time






Heaven 2.0

#1 Heaven and earth

Result




#1 Heaven and earth

Earth 2.0

  • just red is boring
  • no sense of space
  • new ground: checkerboard
  • ground coordinate determines color
  • → we need the "observed" point
  • → intersection ray-ground
  • Y = 0 = camera.y + distance · ray.y
  • one equation, one unknown → \Ü/
  • distance := -camera.y / ray.y

#1 Heaven and earth

Earth 2.0

  • distance := -camera.y / ray.y
  • X := camera.x + distance · ray.x
  • Z := camera.z + distance · ray.z
  • checkerboard: X and Z both odd/even
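A sketch of the checkerboard test, using the ground intersection from the previous slide (function name and the choice of which parity is "white" are illustrative):

```cpp
#include <cmath>

// Checkerboard parity at the ground point hit by a ray, following the slides:
// intersect the plane Y=0, then compare the parities of floor(X) and floor(Z).
// Returns true for one set of squares, false for the other.
bool checker(double cam_x, double cam_y, double cam_z,
             double ray_x, double ray_y, double ray_z) {
  const double distance = -cam_y / ray_y;        // Y = 0 = cam.y + d*ray.y
  const double X = cam_x + distance * ray_x;
  const double Z = cam_z + distance * ray_z;
  const int cx = static_cast<int>(std::abs(std::floor(X))) % 2;
  const int cz = static_cast<int>(std::abs(std::floor(Z))) % 2;
  return cx == cz;
}
```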

#1 Heaven and earth

Hack time






Earth 2.0

#1 Heaven and earth

Result




#1 Heaven and earth

Debugging

  • coordinate system vectors
  • ray length normalization
  • std::abs on negative numbers!
  • (int)std::abs(std::floor(...))%2
  • ray hits ground if (and only if) ray.y < 0








Level 2 — Triangles

#2 Triangles

Why triangles

  • the base of most polygons
  • (polygons are the base of computer graphics)
  • easy to compute
  • besides...

#2 Triangles

What is a triangle?

  • defined by 3 corners/vertices $$P_0, P_1, P_2$$
  • spanned by 2 vectors $\mathbf{u}$, $\mathbf{v}$: $$\mathbf{u} = P_1 - P_0$$ $$\mathbf{v} = P_2 - P_0$$
  • triangle area: points $\{p_i\}$ $$p_i = P_0 + u\cdot\mathbf{u} + v\cdot\mathbf{v}$$ for which it holds that $$0 ≤ u,v ≤ 1$$ $$u + v ≤ 1$$
  • (equivalent form:) $$p_i = (1-u-v)\cdot P_0 + u\cdot P_1 + v\cdot P_2$$

#2 Triangles

Intersection ray/triangle

  • camera center $\mathbf{O}$, ray direction $\mathbf{R}$, distance $d$
  • $P_{0} + u\cdot\mathbf{u} + v\cdot\mathbf{v} = \mathbf{O} + d\cdot\mathbf{R}$
  • → linear equation system $$P_{0,x} + u\cdot\mathbf{u}_x + v\cdot\mathbf{v}_x = \mathbf{O}_x + d\cdot\mathbf{R}_x$$ $$P_{0,y} + u\cdot\mathbf{u}_y + v\cdot\mathbf{v}_y = \mathbf{O}_y + d\cdot\mathbf{R}_y$$ $$P_{0,z} + u\cdot\mathbf{u}_z + v\cdot\mathbf{v}_z = \mathbf{O}_z + d\cdot\mathbf{R}_z$$
  • rearranged: $$0 = P_{0,x} + u\cdot\mathbf{u}_x + v\cdot\mathbf{v}_x - \mathbf{O}_x - d\cdot\mathbf{R}_x$$ $$0 = P_{0,y} + u\cdot\mathbf{u}_y + v\cdot\mathbf{v}_y - \mathbf{O}_y - d\cdot\mathbf{R}_y$$ $$0 = P_{0,z} + u\cdot\mathbf{u}_z + v\cdot\mathbf{v}_z - \mathbf{O}_z - d\cdot\mathbf{R}_z$$
  • 3 equations, 3 unknowns → \Ü/
  • but so much work... ಥ_ಥ

#2 Triangles

Lazy solving

SymPy can do symbolic math
from sympy import var, solve
pox,poy,poz,ux,uy,uz,vx,vy,vz,ox,oy,oz,rx,ry,rz,u,v,d = \
  var('pox poy poz ux uy uz vx vy vz ox oy oz rx ry rz u v d')
E1 = pox + u*ux + v*vx - ox - d*rx
E2 = poy + u*uy + v*vy - oy - d*ry
E3 = poz + u*uz + v*vz - oz - d*rz
solutions = solve([E1,E2,E3],[u,v,d])
print(solutions)

{u: [...], v: [...], d: [...]}
not pretty, but correct ¯\_(ツ)_/¯
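Instead of pasting SymPy's output, the same 3×3 system can be solved directly with Cramer's rule. A sketch (the `Vec3` struct and function names are illustrative):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Determinant of the 3x3 matrix with columns a, b, c
double det3(const Vec3& a, const Vec3& b, const Vec3& c) {
  return a.x*(b.y*c.z - b.z*c.y)
       - b.x*(a.y*c.z - a.z*c.y)
       + c.x*(a.y*b.z - a.z*b.y);
}

// Solve  u*U + v*V - d*R = O - P0  for (u, v, d) with Cramer's rule.
// Returns false if the ray is parallel to the triangle plane.
bool solve_uvd(Vec3 U, Vec3 V, Vec3 R, Vec3 O, Vec3 P0,
               double& u, double& v, double& d) {
  const Vec3 negR{-R.x, -R.y, -R.z};
  const Vec3 b{O.x - P0.x, O.y - P0.y, O.z - P0.z};
  const double D = det3(U, V, negR);
  if (std::abs(D) < 1e-12) return false;  // ray parallel to triangle
  u = det3(b, V, negR) / D;
  v = det3(U, b, negR) / D;
  d = det3(U, V, b) / D;
  return true;
}
```

The hit is inside the triangle iff 0 ≤ u, v and u + v ≤ 1, exactly as defined two slides earlier.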

#2 Triangles

Normal vectors

  • we want to distinguish front and back
  • → we need the normal vector $\mathbf{n}$
  • (also later for other things)
  • cross product and left-hand rule: $$\mathbf{n} := \mathbf{v} \times \mathbf{u}$$
  • consequence: vertex order matters!
  • → our triangles are defined CCW
    (counterclockwise)
  • (looking at it from its "front")
$$\begin{pmatrix}\mathbf{a}_x \\ \mathbf{a}_y \\ \mathbf{a}_z\end{pmatrix} \times \begin{pmatrix}\mathbf{b}_x \\ \mathbf{b}_y \\ \mathbf{b}_z \end{pmatrix}$$ $$=$$ $$\begin{pmatrix} \mathbf{a}_y\mathbf{b}_z - \mathbf{a}_z\mathbf{b}_y \\ \mathbf{a}_z\mathbf{b}_x - \mathbf{a}_x\mathbf{b}_z \\ \mathbf{a}_x\mathbf{b}_y - \mathbf{a}_y\mathbf{b}_x \end{pmatrix}$$

#2 Triangles

Back-face culling

  • normal vector determines front side
  • (more important for rasterization than for raytracing)

#2 Triangles

Back-face culling

  • normal vector determines front side
  • (more important for rasterization than for raytracing)
  • scalar product

    $$\begin{pmatrix}\mathbf{a}_x \\ \mathbf{a}_y \\ \mathbf{a}_z\end{pmatrix} \cdot \begin{pmatrix}\mathbf{b}_x \\ \mathbf{b}_y \\ \mathbf{b}_z \end{pmatrix} = \mathbf{a}_x\mathbf{b}_x + \mathbf{a}_y\mathbf{b}_y + \mathbf{a}_z\mathbf{b}_z $$

  • ray "sees" triangle if and only if

    $$\mathbf{n}\cdot\mathbf{R} < 0$$
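The cross product, scalar product, and culling test combine into a few lines (a sketch; `Vec3` and `faces_ray` are illustrative names):

```cpp
struct Vec3 { double x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
  return {a.y*b.z - a.z*b.y,
          a.z*b.x - a.x*b.z,
          a.x*b.y - a.y*b.x};
}

double dot(const Vec3& a, const Vec3& b) {
  return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Triangle normal from spanning vectors u, v (left-hand rule, CCW vertices):
// the ray "sees" the triangle iff dot(n, R) < 0.
bool faces_ray(const Vec3& u, const Vec3& v, const Vec3& R) {
  const Vec3 n = cross(v, u);
  return dot(n, R) < 0.0;
}
```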

#2 Triangles

Implementation

  • abstract Object class for traceable objects
  • an Object does its own intersection test
  • depth test (only render the nearest object)
  • objects have a color
  • ray does not hit any object? → we already have that!
  • all scene objects → std::vector<Object*>

#2 Triangles

Hack time






Object class, triangles

#2 Triangles

Result




#2 Triangles

Debugging

  • CCW vertices
  • $\mathbf{n} := \mathbf{v}\times\mathbf{u}$  and  $\mathbf{n}\cdot\mathbf{R} < 0$
  • depth test (smaller depth → object closer)
  • only positive depths!








Level 3 — Raytracing

#3 Raytracing

Ray reflection

  • monochromatic objects are boring
  • reflections are the strong point of raytracing
  • problem: given object and incoming ray, find the outgoing ray
  • easy solution via geometric construction

#3 Raytracing

Ray reflection

  • important vector:  $\left(\mathbf{n}\cdot\left(-\mathbf{R}\right)\right)\cdot\mathbf{n}$
  • $\mathbf{n}\cdot\left(-\mathbf{R}\right)$  is the length of the projection of  $-\mathbf{R}$  onto  $\mathbf{n}$
  • the scalar product  $\mathbf{n}\cdot\left(-\mathbf{R}\right)$  is a scalar (not a vector!)
  • we need a vector in the direction of  $\mathbf{n}$
  • → scalar multiplication with  $\mathbf{n}$  =  scaling of  $\mathbf{n}$

#3 Raytracing

Implementation

#3 Raytracing

Hack time






Ray reflection

#3 Raytracing

Result




#3 Raytracing

Debugging

  • "CCW" depends on viewing angle
  • Back-face culling
  • triangles' tops leaning towards camera
  • max_hit_bounces limits visibility








Level 4 — Light transport

#4 Light transport

Reflection and object color

  • perfect mirrors are boring
  • materials have different reflectivities

#4 Light transport

Mixing colors

  • our materials have a reflectivity
  • a ray sees many colors on its journey
  • the final pixel color is a mixture
  • (preliminary formulation, no lighting)

#4 Light transport

Hack time






Mixing colors

#4 Light transport

Result












Level 5 — Spheres

#5 Spheres

Parametrized shapes

  • polygons allow for arbitrary geometry

#5 Spheres

Parametrized shapes

  • polygons allow for arbitrary geometry
  • but complexity needs many polygons

#5 Spheres

Parametrized shapes

  • polygons allow for arbitrary geometry
  • but complexity needs many polygons
  • ...maaaany polygons. (Stanford dragon: 200,000 triangles)

#5 Spheres

Parametrized shapes

  • real renderers use hierarchical structures
  • still: more geometry → longer render times
  • idea: some shapes can be computed

#5 Spheres

Intersection test

  • edge case: ray "grazes" sphere
  • $\mathbf{p} = $  sphere center $-$ camera center;  $r = $  radius
  • distance to the grazing hit point  $ = \sqrt{\left(\mathbf{p}\cdot\mathbf{p}\right) - r^2} = \sqrt{||\mathbf{p}||^2 - r^2}$
  • compare against  $b = \mathbf{p} $ $ \cdot$ ray direction
  • (ray has length 1)
  • hit → $b$ is longer
  • no hit → $b$ is shorter
  • → intersection test → \Ü/

#5 Spheres

Hack time






Sphere intersection test

#5 Spheres

Result




#5 Spheres

Debugging

  • $\mathbf{p}$ points from sphere center to camera center

#5 Spheres

Distance computation

  • $\mathbf{p} = $  sphere center $-$ camera center;  $r = $  radius
  • $b = \mathbf{p}$ $\cdot$ ray direction
  • $s = \sqrt{\left(\mathbf{p}\cdot\mathbf{p}\right) - b^2}$
  • $t = \sqrt{r^2 - s^2}$
  • $d = b - t$
  • → distance → \Ü/
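The $b$, $s$, $t$, $d$ construction above translates directly into code. A sketch (`Vec3` and `sphere_distance` are illustrative names; the ray must have unit length):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
  return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Hit distance along a unit-length ray from the camera to a sphere,
// following the slides' construction: b = p.ray, s, t, d = b - t.
// Returns a negative value when the ray misses the sphere.
double sphere_distance(Vec3 cam, Vec3 center, double r, Vec3 ray) {
  const Vec3 p{center.x - cam.x, center.y - cam.y, center.z - cam.z};
  const double b = dot(p, ray);           // projection of p onto the ray
  const double s2 = dot(p, p) - b*b;      // squared distance ray <-> center
  if (s2 > r*r || b < 0.0) return -1.0;   // miss, or sphere behind camera
  const double t = std::sqrt(r*r - s2);
  return b - t;                           // nearer of the two hit points
}
```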

#5 Spheres

Normal vectors

  • $\mathbf{p} = $  sphere center $-$ camera center;  $r = $  radius
  • normal vector  $= -\mathbf{p} + d$ $\cdot$ ray direction
  • (scale normal vector to length 1!)

#5 Spheres

Hack time






Distance, normal vector

#5 Spheres

Result




#5 Spheres

Debugging

  • $\mathbf{p}$ points from sphere center to camera center
  • $b$ is negative (sphere center behind the camera)








Level 6 — Light

#6 Light

Phong illumination model

  • perfect colors are boring
  • light generates shadows, reflections, shading...
  • "real" light simulation is infeasible
  • we use the Phong model (or something similar)
  • (not physically correct, but easy)
  • materials have coefficients per component
ambient + diffuse + specular = Phong

#6 Light

Ambient light

  • constant light everywhere in the scene
  • independent of light sources
  • cheap imitation of global illumination
  • constant factor  $k_\text{ambient}$, here: independent of material
  • ($C_\text{material}$: material color) $$I_\text{ambient} = k_\text{ambient} \cdot C_\text{material}$$
ambient + diffuse + specular = Phong

#6 Light

Diffuse light

  • depends on direction "surface → light source"
  • → local normal vector
  • maximum: surface perpendicular to light → object color
  • minimum: surface parallel to light → black/nothing
  • perfect "Lambertian" material $$I_\text{diffuse} = f_\text{diffuse} \cdot k_\text{diffuse,material} \cdot C_\text{material}$$
ambient + diffuse + specular = Phong

#6 Light

Diffuse light

  • illumination factor depends on surface orientation
  • $\mathbf{L} := $ surface → light source $$f_\text{diffuse} = \mathbf{n} \cdot \mathbf{L}$$
  • never $< 0$ ! (clamp at $0$)
  • (points that "look away" from the light are not lit)
  • no dependency on viewing direction!

#6 Light

Specular light

  • depends on directions surface → light source
    and camera → surface
  • "size" of highlight can be individual
  • → "hardness"/"shininess"
  • light color instead of material color $$I_\text{specular} = f_\text{specular} \cdot k_\text{specular,material} \cdot C_\text{light}$$
ambient + diffuse + specular = Phong

#6 Light

Specular light

  • highlight = light source reflection on surface
  • maximum = "surface → light = reflected ray"
  • "harder"/"shinier" material → highlight more focused

    $$f_\text{specular} = \left(\mathbf{L}\cdot\mathbf{R}\right)^\text{hardness}$$

#6 Light

Full lighting

$$ I = I_\text{ambient} + I_\text{diffuse} + I_\text{specular} $$ $$ I = \left( k_\text{ambient} + f_\text{diffuse} \cdot k_\text{diffuse,material} \right) \cdot C_\text{material} + \\ f_\text{specular} \cdot k_\text{specular,material} \cdot C_\text{light} $$ $$ I = \left( k_\text{ambient} + \left(\mathbf{n}\cdot\mathbf{L}\right) \cdot k_\text{diffuse,material} \right) \cdot C_\text{material} + \\ \left( \mathbf{L}\cdot\mathbf{R} \right)^\text{hardness} \cdot k_\text{specular,material} \cdot C_\text{light} $$
ambient + diffuse + specular = Phong
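The full lighting formula for one color channel, as a sketch (the function and parameter names are illustrative; $f_\text{diffuse}$ and $f_\text{specular}$ are clamped at 0 so surfaces facing away from the light get no diffuse/specular contribution):

```cpp
#include <algorithm>
#include <cmath>

// Phong lighting for a single color channel, assembled from the slides'
// ambient, diffuse, and specular terms.
double phong(double n_dot_L, double L_dot_R, double hardness,
             double k_ambient, double k_diffuse, double k_specular,
             double C_material, double C_light) {
  const double f_diffuse  = std::max(0.0, n_dot_L);
  const double f_specular = std::pow(std::max(0.0, L_dot_R), hardness);
  return k_ambient * C_material
       + f_diffuse * k_diffuse * C_material
       + f_specular * k_specular * C_light;
}
```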

#6 Light

Hack time






Illumination

#6 Light

Result




#6 Light

Debugging

  • $\mathbf{L}$: surface → light
  • $\mathbf{R}$: surface → ...
  • $\mathbf{n}$: surface → ...
  • $0 \leq f_\text{diffuse}, f_\text{specular} \leq 1$
  • → unit length








Level 7 — Shadows

#7 Shadows

Shadows

  • we have shading but no shadows
  • what is in shadow?
    • what is not in the light
    • = points which do not "see" the light
    • = points with an object between them and the light
    • → Raytracing + intersection tests!
  • → we have all of that! \Ü/
  • even in shadow:
    ambient light!
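The shadow test reuses the intersection machinery we already have. A self-contained sketch using spheres as the only occluder type (`Sphere`, `hit`, and `in_shadow` are illustrative stand-ins for the course's own Object classes):

```cpp
#include <cmath>
#include <vector>

struct Sphere { double cx, cy, cz, r; };

// Hit distance of a unit ray from (ox,oy,oz) against a sphere, < 0 on miss
double hit(const Sphere& s, double ox, double oy, double oz,
           double rx, double ry, double rz) {
  const double px = s.cx - ox, py = s.cy - oy, pz = s.cz - oz;
  const double b = px*rx + py*ry + pz*rz;
  const double s2 = (px*px + py*py + pz*pz) - b*b;
  if (s2 > s.r*s.r || b < 0.0) return -1.0;
  return b - std::sqrt(s.r*s.r - s2);
}

// A point is in shadow iff some object lies between it and the light.
bool in_shadow(double px, double py, double pz,
               double lx, double ly, double lz,   // unit dir towards light
               double light_distance,
               const std::vector<Sphere>& scene) {
  for (const Sphere& s : scene) {
    const double d = hit(s, px, py, pz, lx, ly, lz);
    // small epsilon avoids the point shadowing itself
    if (d > 1e-6 && d < light_distance) return true;
  }
  return false;  // lit (ambient light still applies in shadow!)
}
```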

#7 Shadows

Hack time






Shadows

#7 Shadows

Result




#7 Shadows

Debugging

  • (cf. #6 Light)








Level 8 — Random stuff

#8 Random stuff

  • anti-aliasing
  • matte surfaces
  • soft shadows
  • depth of field

#8 Random stuff

Aliasing

  • sensor grid: resolution  $X$
  • real world: resolution  $\infty$
  • imaging = discretization
  • one ray = one measurement
  • single ray → aliasing

#8 Random stuff

Anti-aliasing

  • one ray = one measurement
  • → multiple measurements
  • → average
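A sketch of per-pixel supersampling (`average_samples` and the `sample_color` callback are illustrative; in the real renderer the callback would trace one jittered ray):

```cpp
#include <random>

// Anti-aliasing sketch: average several jittered samples per pixel.
double average_samples(double px, double py, int n,
                       double (*sample_color)(double, double)) {
  std::mt19937 rng(42);                                  // fixed seed: repeatable
  std::uniform_real_distribution<double> offset(-0.5, 0.5);
  double sum = 0.0;
  for (int i = 0; i < n; ++i)
    sum += sample_color(px + offset(rng), py + offset(rng));
  return sum / n;
}
```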

#8 Random stuff

Hack time






Anti-aliasing

#8 Random stuff

Result







#8 Random stuff

Debugging

  • random offset  $\in [-0.5, 0.5]^2$
  • (one of the limits should really be excluded)

#8 Random stuff

Surface structure

  • up to now:
    • perfect reflection → mirroring
    • no reflection → absorbing

#8 Random stuff

Matte surfaces

  • matte objects are like "bad mirrors"
  • (example: polished vs. sandblasted metal)
  • matte surfaces are rough surfaces
  • rough surfaces' normals are random
  • grade of randomness = roughness
  • simulation: take perfect reflection and add random offset vector

#8 Random stuff

Implementation

  • why don't we add to the normal vector?
    • would be more elegant, but turned out to be harder
  • avoid impossible reflections $$\mathbf{n}\cdot\mathbf{R} < 0 \Rightarrow \mathbf{R'} := \mathbf{R} + \mathbf{n}\cdot 2 \cdot\left(\mathbf{n}\cdot-\mathbf{R}\right)$$
  • (same reflection formula as in "#3 Raytracing")
  • after the change: recompute effective normal $$\mathbf{n'} := \mathbf{R'}-\mathbf{V}$$
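A sketch of the roughness perturbation (`Vec3`, `rough_reflection`, and the uniform-offset model are illustrative choices, not the only way to randomize the reflection):

```cpp
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
  return a.x*b.x + a.y*b.y + a.z*b.z;
}

Vec3 normalized(Vec3 v) {
  const double len = std::sqrt(dot(v, v));
  return {v.x/len, v.y/len, v.z/len};
}

// Perturb the perfect reflection by a random offset scaled by roughness,
// then flip the result back above the surface if it points into it
// (same reflection formula as in "#3 Raytracing").
Vec3 rough_reflection(Vec3 reflected, Vec3 n, double roughness,
                      std::mt19937& rng) {
  std::uniform_real_distribution<double> u(-1.0, 1.0);
  Vec3 r{reflected.x + roughness*u(rng),
         reflected.y + roughness*u(rng),
         reflected.z + roughness*u(rng)};
  r = normalized(r);
  if (dot(n, r) < 0.0) {                    // invalid: points into surface
    const double s = 2.0 * dot(n, {-r.x, -r.y, -r.z});
    r = {r.x + s*n.x, r.y + s*n.y, r.z + s*n.z};
  }
  return r;
}
```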

#8 Random stuff

Hack time






Matte surfaces

#8 Random stuff

Result




#8 Random stuff

Debugging

  • reflection, normal, random offset → unit length
  • $\mathbf{R}\cdot\mathbf{n} \leq 0$ → invalid reflection → flip

#8 Random stuff

Ideas

  • matte surfaces
    • better models: e.g. microfacets
    • anisotropic roughness: brushed metal (bottoms of cooking pots)
    • normal mapping: change normals according to a special texture

#8 Random stuff

Soft shadows

  • perfect shadows are boring
  • where do soft shadows come from?
    • atmospheric scattering (→ volume tracing...)
    • non-point light sources
  • → light with random position
  • → we already have that \Ü/

#8 Random stuff

Hack time






Soft shadows

#8 Random stuff

Result




#8 Random stuff

Debugging

  • which position dimensions of the light source are randomized?
  • noisy shadows? → more samples!

#8 Random stuff

Depth-of-field

  • up to now: perfect focus (pinhole camera)
  • → unrealistic and boring
  • real cameras: blur by aperture (diaphragm opening)

#8 Random stuff

Depth-of-field

  • we have neither a lens nor a diaphragm
  • → simulation by "sensor shift"
  • geometric construction
  • focal distance   $d$, camera shift vector  $\mathbf{s}$
    $\Rightarrow$  ray shift vector  $= \frac{1}{d}\cdot \left(-\mathbf{s}\right)$
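A sketch of the "sensor shift" construction (`Vec3` and `dof_ray` are illustrative names): shift the camera by $\mathbf{s}$, tilt the ray by $\frac{1}{d}\cdot(-\mathbf{s})$, and re-normalize.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Depth-of-field sketch: the camera is shifted by s on the sensor plane,
// and the ray is tilted by (1/d) * (-s) so that points at focal
// distance d stay sharp.  The ray must be re-normalized afterwards!
Vec3 dof_ray(Vec3 ray, Vec3 s, double d) {
  Vec3 r{ray.x - s.x/d, ray.y - s.y/d, ray.z - s.z/d};
  const double len = std::sqrt(r.x*r.x + r.y*r.y + r.z*r.z);
  return {r.x/len, r.y/len, r.z/len};
}
```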

#8 Random stuff

Depth-of-field

  • points at distance  $d$   (here: $4$) are rendered "sharp"
  • (rays from different cam positions → same point)
  • closer or farther points are blurry
  • (shifted cams's rays hit different points)
  • → Depth-of-field

#8 Random stuff

Hack time






Depth-of-field

#8 Random stuff

Result




#8 Random stuff

Debugging

  • focal distance  $d$, camera shift vector  $\mathbf{s}$
    $\Rightarrow$  ray shift vector  $= \frac{1}{d}\cdot \left(-\mathbf{s}\right)$
  • cam shift along sensor axes (here: X, Y)
  • ray normalization after shift-addition
  • (ray must always have unit length!)

#8 Random stuff

Ideas

  • anti-aliasing
  • matte surfaces
  • soft shadows
  • depth of field
  • more:
    • motion blur
    • → randomized object or cam position
    • zoom blur
    • → random  RIGHT/UP vectors
    • cheap object transparency
    • → randomized hit test
    • aperture shapes ("bokeh")
    • tilt-shift! ("Scheimpflug principle")

© Studio Incanto via Flickr








Level 9 — Textures

#9 Textures

Concept

  • monochrome objects are boring
  • so are procedural textures (checkerboard)
  • textures
    • like "wallpapers", "gift wrap"
    • image → object color
    • which image pixel → which 3D point?
    • → we need a mapping



image texture

#9 Textures

Texture







(texture.ppm found in this repository)

#9 Textures

Reading PPM images

$ vim texture.ppm
P6
600 400
255
f± jÕËyÙÕ<99>äßI<93><8e> [...]
  • P6 binary PPM (1 byte per color channel, 3 bytes per pixel)
  • 600 400 → resolution 600x400
  • header size: "P6 600 400 255 " → 15 bytes
  • data size: 600 · 400 · 3 = 720000 bytes
  • expected file size: 15 + 720000 = 720015 bytes
  • $ du -b texture.ppm: 720016 bytes!
  • difference?

#9 Textures

Intermezzo: reading binary files

$ xxd texture.ppm | head -n3
0000000: 5036 0a36 3030 2034 3030 0a32 3535 0a66  P6.600 400.255.f
0000010: b1a0 6ad5 cb79 d9d5 99e4 df49 938e 75d7  ..j..y.....I..u.
0000020: d36d d2cc 85db d697 e2dd 6dd4 cc55 c9bf  .m........m..U..
  • 0a is a newline
  • 20 is a whitespace


$ xxd texture.ppm | tail -n3
00afc60: bfad 20cc bb11 b1a0 03b3 9d02 a991 0452  .. ............R
00afc70: 430a 0a09 212e 2818 9c88 16af a241 cac2  C...!.(......A..
00afc80: 3cd3 cb33 d3cd 3fd5 d016 d1ca 30d4 ce0a  <..3..?.....0...
  • extra newline at the end...
  • → can be ignored \Ü/

#9 Textures

Reading PPM images

in get_ground_color(..):
static unsigned char* texture_data{nullptr};
const int tex_w{600};
const int tex_h{400};
if (not texture_data) {
  std::ifstream texture("texture.ppm", std::ios::binary);
  texture_data = new unsigned char[tex_w*tex_h*3];
  texture.read(reinterpret_cast<char*>(texture_data), 15);  /* skip 15-byte header */
  texture.read(reinterpret_cast<char*>(texture_data), tex_w*tex_h*3);
}

#9 Textures

Hack time






Ground texture

#9 Textures

Result




#9 Textures

Debugging

  • ground coordinates in the XZ plane
  • X = right, Z = "forwards"
  • scale up (X,Z) for image pixel coordinates

#9 Textures

Ideas

  • texture for:
    • color ✓
    • normal direction
    • $k_\text{diffuse,material}$, $k_\text{specular,material}$
    • roughness
    • transparency

texturehaven.com




(example maps: diffuse color, specularity, normal, roughness)








BONUS LEVEL UNLOCKED









Level Ω — Rotation matrices

#Ω Rotation matrices

Wait, what?

  • up to now we can only shift objects
  • rotations are harder
  • 2D example:

  • vector  $(x,y)$
  • unit length
  • rotation by angle  $\alpha$

#Ω Rotation matrices

Rotation in 2D

  • basis vectors: $$ \begin{pmatrix} 1 \\ 0 \end{pmatrix} \rightarrow \begin{pmatrix} \text{cos}(\alpha) \\ \text{sin}(\alpha) \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \rightarrow \begin{pmatrix} -\text{sin}(\alpha) \\ \text{cos}(\alpha) \end{pmatrix} $$

#Ω Rotation matrices

Rotation in 2D

  • rotation: $$\begin{pmatrix} x'\\y' \end{pmatrix} = \begin{pmatrix} \text{cos}(\alpha) & -\text{sin}(\alpha) \\ \text{sin}(\alpha) & \text{cos}(\alpha) \end{pmatrix} \cdot \begin{pmatrix} x\\y \end{pmatrix}$$

#Ω Rotation matrices

Rotation in 3D

  • in 3D: rotation around Z-axis:

    $$\begin{pmatrix} x'\\y'\\z' \end{pmatrix} = \begin{pmatrix} \text{cos}(\alpha) & -\text{sin}(\alpha) & 0 \\ \text{sin}(\alpha) & \text{cos}(\alpha) & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} x\\y\\z \end{pmatrix}$$
  • X, Y (signs follow from our world coordinate system):

    $$\begin{pmatrix} 1 & 0 & 0 \\ 0 & \text{cos}(\alpha) & -\text{sin}(\alpha) \\ 0 & \text{sin}(\alpha) & \text{cos}(\alpha) \end{pmatrix}, \begin{pmatrix} \text{cos}(\alpha) & 0 & \text{sin}(\alpha) \\ 0 & 1 & 0 \\ -\text{sin}(\alpha) & 0 & \text{cos}(\alpha) \end{pmatrix}$$
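One of the three rotations as code, here about the Y axis (a sketch; `Vec3` and `rotate_y` are illustrative names, signs match the matrix above):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Rotation about the Y axis, matching the slides' matrix
// [[cos, 0, sin], [0, 1, 0], [-sin, 0, cos]]
Vec3 rotate_y(const Vec3& p, double alpha) {
  const double c = std::cos(alpha), s = std::sin(alpha);
  return {c*p.x + s*p.z,
          p.y,
          -s*p.x + c*p.z};
}
```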

#Ω Rotation matrices

Hack time






Rotation matrices

#Ω Rotation matrices

Result



#Ω Rotation matrices

Debugging

  • order of rotation Z-Y-X == matrices X·Y·Z·point
  • Euler angles → Gimbal lock in critical configurations

That's it!

More ideas

  • chromatic aberration
  • non-pinhole cameras (e.g. fisheye)
  • recursive raytracing
  • Fresnel effect
  • glass material (refraction)
  • wireframe material
  • multiple light sources
  • other illumination models (Blinn-Phong, Cook-Torrance)
  • HDRI illumination
  • more implicit forms (torus (="donut"))

Image sources

  • RGB cube: SharkD (Wikimedia)
  • metal texture: Texture Haven
  • wood texture: Maarten Deckers (Unsplash)