Welcome to the Glitch Gallery, an online exhibition of pretty software bugs! This is a museum of accidental art, an enthusiastic embrace of mistakes, a celebration of emergent beauty. Learn more?

Below you can browse the exhibits in our collection. Keep scrolling down to see them in more detail! If you have ever encountered such an accidental artwork yourself, we invite you to submit it to us!

Pacman Pie
Divyang Deep Tiwari, 2020

Neon Giraffe
Gavin McGough, 2024

Blue View
Atari-Frosch, 2009

Chess in the Matrix
aktaboot, 2024

Angry Fireflies
HistidineDwarf, 2023

Outside In
Vagmi Pathak, 2021

Alteration I
Adrien Limousin, 2023

Alteration II
Adrien Limousin, 2023

Alteration III
Adrien Limousin, 2023

Alteration IV
Adrien Limousin, 2023

Alteration V
Adrien Limousin, 2023

Alteration VI
Adrien Limousin, 2023

Wall Eyed
Yelahneb Unicornucopiax, 2023

View from the Pocket
Brian Lamborn and Adam Albright, 2017

The Demon Core
Chloe Gilbert, 2016

Sequence Symmetry in Asymmetry I
Priyav K Kaneria, 2023

Sequence Symmetry in Asymmetry II
Priyav K Kaneria, 2023

Sequence Symmetry in Asymmetry III
Priyav K Kaneria, 2023

Sequence Symmetry in Asymmetry IV
Priyav K Kaneria, 2023

Sequence Symmetry in Asymmetry V
Priyav K Kaneria, 2023

Sequence Symmetry in Asymmetry VI
Priyav K Kaneria, 2023

Roots of the problem
Marco Bulgarelli, 2021

Ghost in the Frame
Antoine Leblanc, 2023

Bug in the codec
Svjatoslav Agejenko, 2008

Where am I?
mrst0n3, 2021

Itch
Emily Elhoffer, 2022

3D in Isometricism?
JakeOJeff, 2023

Orange Shift
Xan Gregg, 2022

1PRINT
Forg, 2022

Behind Bars
lastfuture, 2010

Void of the Future
lastfuture, 2018

Wait, how big is an int?
polyfloyd, 2014

The Longest Noise
lastfuture, 2017

fontgedoens
pesco, 2008

Fancy Catastrophizations
Zachary Steinberg, 2018

Crumbling Giant
lastfuture, 2012

hard steps
omniskop, 2017

Bézier Pollock
omniskop, 2018

Dishes
lastfuture, 2010

Voronoi Fuzz
Souren Papazian, 2020

Empty Set
Tristan Onek, 2021

Mountains of Sierpnonski
Ithea Piko Nwawa, 2022

Rats!
Jörn Zaefferer, 2022

Ghost Town
Paul R., 2019

The Cheeky Face of Failure
Paul-Jan Pauptit, 2022

Glyphs
Clarity Flowers, 2021

Morisawa Error
linse, 2019

Viewscape
Niko Wilhelm, 2020

Memories Unravelling I
Lucas Werkmeister, 2020

Memories Unravelling II
Lucas Werkmeister, 2020

Memories Unravelling III
Lucas Werkmeister, 2020

Memories Unravelling IV
Lucas Werkmeister, 2020

Memories Unravelling V
Lucas Werkmeister, 2020

Memories Unravelling VI
Lucas Werkmeister, 2020

Memories Unravelling VII
Lucas Werkmeister, 2020

Memories Unravelling VIII
Lucas Werkmeister, 2020

Memories Unravelling IX
Lucas Werkmeister, 2020

Camera Space Voxelization
Matthias Moulin, 2018

A Poem As Lovely
Manuela Malasaña, 2020

Gemstone
ByteHamster, 2019

Lapin Noir
Jeroen Baert, 2015

Squid
Al, 2020

Moment before the infinite loop
Kofi Gumbs, 2019

Texture Recursion
Alex Chen & Andrew Desharnais & Bill Marcy & Jimmy Tang, 2020

Can We See Now?
Brian Davis, 2018

Rainbow Blur
polyfloyd, 2014

You are what you eat
Anonymous, 2018

Exploding Hedgehog
Brian Hicks, 2020

Collapsing Waves
Brian Hicks, 2020

melting a e s t h e t i c
Erty Seidohl, 2018

Tracey's Volumetric Snowstorm
Kirstin "Kiki" Rohwer, 2020

Icosahedral Flower
Hamish Todd, 2014

Hexagonal Big Bang
Hamish Todd, 2015

Geodesic Flower
Hamish Todd, 2014

Fractal Pyramidal Moire
Hamish Todd, 2011

Cylinder Wrapping Gone Great
Hamish Todd, 2017

Bond Ball
Hamish Todd, 2017

Trained Voronois
Topi Tjukanov, 2018

Psychedelic Ride
SPokDev, 2019

Degrees of a Sine Graph I
Dan Anderson, 2018

Degrees of a Sine Graph II
Dan Anderson, 2018

Luchador
2DArray, 2018

Chaos Center
Laura Klünder, 2017

Out of Line I
LambdaTotoro, 2020

Out of Line II
LambdaTotoro, 2020

Out of Line III
LambdaTotoro, 2020

Out of Line IV
LambdaTotoro, 2020

Not Quite Interpolation
blinry, 2012

Subdivision
Emile Hansmaennel, 2018

No Vectors' Land
Emile Hansmaennel, 2018

Psychedelic Rainbow
ePirat, 2018

Traversing Berlin
Daniela Berger, 2018

Galaxy Smoothie
Raphael Michel, 2017

Floating Robots
Raphael Michel, 2017

Displacement Chaos
Michael Pucher, 2017

The Eye of the Cube
Uriopass, 2016

Disco Time
Jakob Runge, 2013

Uncontained I
blinry, 2016

Uncontained II
blinry, 2016

Uncontained III
blinry, 2016

Uncontained IV
blinry, 2016

Uncontained V
blinry, 2016

The Stanford Iceberg
Marc Kassubeck & blinry, 2012

While working on a project, one day I was trying to generate a pie chart using Python.

I don’t remember exactly what went wrong in the code, but the system kind of crashed and didn’t respond for a while. When I opened the Jupyter notebook again, this is what I found in the output cell: it had generated thousands of small pie charts (the Pacman-like structures you might see) in a big pie. Hence the name “Pacman Pie”.

While attempting to plot some lines representing the routes of buses in Edinburgh using some point-based data, I didn’t account for multiple bus routes having the same destination. This earned me some unexpected straight lines in addition to the squiggly lines I was aiming for. The glowing neon style is courtesy of QGIS with some minor tweaks.

This photo, with a view out of my window on the 5th floor, was taken in June 2009 with a Canon EOS 350D. GIMP complains about corrupted data, but opens and saves it anyway. The funny thing is: the thumbnail inside the JPEG file shows the correct image! All photos before and afterwards were OK, and it never happened again before the camera broke in 2018, after only ~30,000 photos.

Chess in the Matrix

aktaboot, 2024

These glitches appeared while running KDE’s Plasma Desktop on a Framework Laptop using the latest Linux kernel.

The author summarizes it like this:

  • Trigger: unknown
  • Reason: unknown
  • Reproducibility: unknown

This was meant to be a simulation of how some species of fireflies sync their flashes in the wild through peer pressure, which I found funny.

I wrote something similar to a cellular automaton, like Conway’s Game of Life, and I found this bug when I first made the fireflies flash. It’s hard to remember now what the issue was, but I believe it was because light did not decay with time, and new flashes overrode the light value at the affected tiles, creating this neat effect.

The program was written in Rust, my beloved. There is a GIF of what the program looks like completed and working as intended on the GitHub page.

Outside In

Vagmi Pathak, 2021

I was editing a video I had shot myself in Premiere Pro. After working for 10 minutes, the software stopped working. I saw a black screen, and then the Premiere Pro editor window popped up again: my original video had lost its colour, the elements in the video were all outlined, and the lower parts of the video were missing.

I was colour-correcting the video, but after the software glitch, there were no colours. But I liked the effect – I hope to encounter this glitch again.

I used an online tool to convert a series of Street View screenshots from the TIFF image format to JPEG and ended up with these altered previews.

I see an interesting symbolism here: Street View is a digitalization of the real world. I wanted to print the Street View images – to materialize the digital, so to speak. But the glitch created a representation of “another dimension” and showed the impossibility of coming back to reality.

Took a photo of my wall with my Pixel 6 to document my floppy disk art plans.

Didn’t notice the result until the next day - absolutely no idea what happened here, but it looks cool.

View from the Pocket

Brian Lamborn and Adam Albright, 2017

My partner, Brian, noticed that his phone had been taking pictures while he had it in his pocket. He showed me a few of them and then started deleting them, which is why I’m taking partial credit as the creator here.

“What are you doing?! Those are amazing!!”

I asked him for the phone and started scrolling through the images. Most of them were gradations of color in the beige/green/gray range, many of a single color, a few of 2-3 colors. This one stood out as it ran the gamut. About half of them have the “ripped” looking edges in them, and maybe 4-5 of them have the black glitches along the edges.

The Demon Core

Chloe Gilbert, 2016

I took this photo with a Sigma DP2 Merrill camera, which crashed as the photo was taken and I had to restart it.

Since then, the photo crashed the camera, Sigma Photo Pro, and my MacBook Pro every single time I tried to open it. Eventually, something got patched somewhere, and the photo no longer crashes my laptop.

I am unsure how the pink noise banding at the top happened – all I can think was that it was a random camera glitch or a particle strike that caused the error whilst the file was being written to the card. All photos taken afterwards were fine, and the card that the ‘demon core’ was on was also fine. The camera still works fine and I’ve been unable to reproduce the problem since.

I was trying to visualize number sequences on a circle using Python’s “turtle” module. The circle was divided into n points. The numbers in the sequences were taken modulo another number, and the result was used to pick the next point on the circle, connecting it to the previous point.

This created beautiful patterns, expressing the fundamental nature of symmetry with modular arithmetic in common math sequences.

You can check out the code here.
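The mapping can be sketched without turtle graphics (a hypothetical reconstruction, not the artist’s actual code): point k sits at angle 2πk/n, and each sequence value picks a point via `value % n`.

```python
import math

def circle_points(n):
    """Divide the unit circle into n evenly spaced points."""
    return [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
            for k in range(n)]

def sequence_segments(sequence, n):
    """Map each sequence value to a circle point via `value % n` and
    connect consecutive points, as in the turtle drawings."""
    points = circle_points(n)
    indices = [value % n for value in sequence]
    return [(points[a], points[b]) for a, b in zip(indices, indices[1:])]

# e.g. the Fibonacci sequence on a circle of 60 points
fib = [0, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])
segments = sequence_segments(fib, 60)
```

Feeding different sequences and moduli into `sequence_segments` produces the symmetric patterns described above.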

Roots of the problem

Marco Bulgarelli, 2021

This happened while working on my image resampling library in C.

I think I was trying to resize the image of a blue and magenta test grid (visible on the left) using the nearest neighbour algorithm.

Sadly, I lost the original lossless image, and only this compressed version is left, which I salvaged from an old WhatsApp chat with a friend.

My editing software (DaVinci Resolve 18) ran out of memory and mangled a frame of the video I was editing. The result was this explosion of reds, purples, and yellows, in which my face is still partially visible.

I was implementing an image codec which uses the diamond-square algorithm.

There was a bug in the code - after the image passed the encoding-decoding stage, something completely different but beautiful appeared.

Where am I?

mrst0n3, 2021

Because of the pandemic, the exam for the course Realtime Computer Graphics was transformed into a large assignment that had to be done at home. For this assignment, we had to implement multiple typical shaders in Vulkan. One of those was the cubemap, for point-light shadows.

When calculating shadows, you typically find the object nearest to a light source and record its distance in a shadow map. When rendering the real image, you compute the distance from the rendered point to the light and compare it against the shadow map: if your distance is larger, your point is in shadow.
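That depth comparison boils down to a single test (a simplified sketch, ignoring cube face selection and filtering; the bias parameter is a common addition to avoid self-shadowing, not something from the original assignment):

```python
def in_shadow(distance_to_light, shadow_map_depth, bias=0.005):
    """A point is in shadow if it is farther from the light than the
    nearest occluder recorded in the shadow map (plus a small bias
    to avoid self-shadowing acne)."""
    return distance_to_light > shadow_map_depth + bias

# nearest occluder at depth 2.0; a point at distance 3.5 is behind it
assert in_shadow(3.5, 2.0)
# the occluder itself (same depth) stays lit thanks to the bias
assert not in_shadow(2.0, 2.0)
```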

When creating a cubemap, you have to create six images, put them into an array, and then define that array as a cubemap via an image view. The problem was that the Vulkan documentation didn’t clearly say which part of the array was responsible for which part of the cube, so I had to try out all the different possibilities manually. To make this easier on me, I mapped the location of the object that was seen onto my cubemap and then used this position as the rendered color. This made it easier to find out which part of the cube was mapped where.

While layering multiple digitally photographed self-portraits in Adobe Photoshop, the program crashed and generated this corrupted file. I’m unable to open it back up in Adobe, but the results are really exciting and fractalized.

I was trying to make a game with isometric graphics. While working on the engine, I accidentally mixed the loading and update functions. This made the game create a new group of tiles every time a rotation event was triggered, resulting in this 3D perspective.

I tried to create a packed circles chart, but neglected to summarize the data first. I had also forgotten that orange labeling was turned on for some points, so all 4300+ dots are shown with the same size, along with 700 colliding labels. This happened when using the JMP statistical visualization software on a data table of Mastodon instance domain names.

I was trying to code a 10PRINT program but I ended up with this weird thing.
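For reference, the intended 10PRINT effect — a maze of random diagonals — is famously tiny. A Python sketch of the classic C64 one-liner `10 PRINT CHR$(205.5+RND(1)); : GOTO 10`:

```python
import random

def ten_print(width=40, height=10, seed=None):
    """Fill a grid with randomly chosen '/' and '\\' characters,
    mimicking the classic 10PRINT maze."""
    rng = random.Random(seed)
    return "\n".join(
        "".join(rng.choice("/\\") for _ in range(width))
        for _ in range(height)
    )

print(ten_print(seed=1))
```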

I was experimenting with digital TV reception with a cheap DVB-T stick if I remember correctly, and EyeTV. At some point I managed to detune and wiggle the antenna just right to get some great glitches. This one stuck out to me especially because the image quality itself is very soft but the glitch artifacts are sharp. Paired with the warm colors this looked so intentional to me that I had to save the capture.

Through a yet-unknown mechanism, while loading data, iOS’s Photos app decided to replace all photo thumbnails in the year view with either light grey or black squares. The distribution, paired with the number of pictures in each year of my photo library from 2007 to 2014, made it appear like a digital void infecting the past and slowly devouring the future, ever growing in the process.

After teaching myself the Go programming language in the year prior, I thought it would be cool to make something with OpenGL. I started out with a nice sphere by tessellating an icosahedron. That worked well. And then I wanted to make it better by using vertex indices… which produced this furry ball.

What went wrong? I pondered and tried a few things for a whole day. And then I remembered that one time I read the Go language spec, specifically the part which states that int, the data type I was using for my indices, is either 32 or 64 bits wide depending on the target architecture. Of course, I had assumed it was 32 bits the whole time. This caused OpenGL to treat a single 64-bit index as two 32-bit indices.
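The width mismatch can be reproduced outside of OpenGL: reinterpreting one little-endian 64-bit integer as two 32-bit integers splits a small index into the index itself plus a stray zero. (A Python sketch of the effect, not the original Go code.)

```python
import struct

# a single vertex index stored as a 64-bit integer ...
packed = struct.pack("<q", 7)

# ... but consumed as two 32-bit integers:
# the GPU would see indices 7 and 0
low, high = struct.unpack("<ii", packed)
assert (low, high) == (7, 0)
```

Half of the resulting indices point at vertex 0, which is exactly the kind of scrambling that turns a sphere into a furry ball.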

This incident was also used in an article about Go I wrote.

This happened when I was experimenting with a combination of 3D rendering and slit scanning. My goal was to create smooth, surreally warping shapes by re-mapping the x-axis to the time axis and the time axis to the x-axis. Unfortunately, I found out the hard way that when you do not randomize everything properly, Blender animations rendered in Cycles will have nearly the same image noise in subsequent frames. Since time is represented on the x-axis in my video, the result ended up having very long horizontal stripes of noise.

For this experiment, I rendered a scene in Blender and used a Processing sketch I had previously created. It takes each frame of a video, cuts the same column of pixels out of each frame, and combines all the pixel columns into a new image that is as many pixels wide as the original video had frames. Repeat that for every pixel column of the original video, and you get a new set of frames for the resulting animation.
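The whole remapping amounts to swapping the x axis with the time axis of the frame stack. In NumPy that is a single transpose (a sketch; the artist used a Processing sketch, not NumPy):

```python
import numpy as np

def slit_scan(frames):
    """frames: array of shape (T, H, W) -- T video frames of H x W
    pixels. Returns an array of shape (W, H, T): output frame x is
    built from pixel column x of every input frame."""
    return np.transpose(frames, (2, 1, 0))

video = np.random.rand(120, 64, 80)   # 120 frames, 64x80 pixels
warped = slit_scan(video)
assert warped.shape == (80, 64, 120)
```

Because output pixel (y, t) of frame x is input pixel (y, x) of frame t, any noise correlated across input frames turns into horizontal stripes in the output — exactly the glitch described above.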

It was supposed to become a text editor or something, but primarily I just wanted to make the smallest bitmap font I could manage and draw it on screen. After the simple solution (glDrawPixels()) exposed a driver bug (underused code path), this was supposed to be the “proper” version using textured quads. Obviously, the vertices got scrambled somehow.

The project ended up on the infinite shelf shortly after I had fixed the issue, so this glitch piece remains its only notable result. :)

Materials used: Haskell, OpenGL

I was attempting to create a node that would repeatedly save the last 100 positions of a given function, traced out in green dots here. Unfortunately, instead of saving into the very last spot in the array, it saved into the second-to-last spot in the array, creating an interesting gap and teleporting lines whenever a new position was saved. It’s always a shame when your creations accidentally learn to artistically teleport.
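The off-by-one can be sketched with a simple trail buffer (hypothetical code, not the original node): writing into the second-to-last slot leaves the last slot permanently stale, which is what produced the gap and the teleporting lines.

```python
def push_buggy(trail, pos):
    """Shift the trail left to make room, then store the newest
    position -- but in the second-to-last slot."""
    trail[:-1] = trail[1:]
    trail[-2] = pos          # bug: should be trail[-1] = pos

trail = [0] * 5
for pos in [1, 2, 3, 4, 5]:
    push_buggy(trail, pos)

# the newest position never reaches the last slot,
# which still holds its stale initial value
assert trail == [2, 3, 4, 5, 0]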

While animating a transition in some animation software (After Effects or Apple Motion) I accidentally deleted my asset which was a transparent pixel graphic of a red letter “x”. All instances of my “x” got immediately replaced by black squares. Unfortunately I forgot why my background was a tilted checkerboard pattern in this screenshot. In any event, the mistake resulted in something I found deeply pleasing on a visual level, which is why I snapped a screenshot of the scene before undoing my mistake and continuing the actual animation.

hard steps

omniskop, 2017

We wrote a raytracer in university and I made a mistake when calculating the angle between the ray and the object that got hit.

Bézier Pollock

omniskop, 2018

I wanted to test the creation of random Bézier curves and accidentally drew the control points of all curves instead of only the last one. This resulted in a picture that kind of looks like a Jackson Pollock painting.

While playing around with a fractal generator that generates flames, I stumbled across this beautiful pattern by randomizing certain parameters.

I was writing some custom Voronoi shaders in GLSL, where I can dynamically change the number of moving cells and their colors. I’m actually still not sure what caused this bug, but it had to do with exceeding the maximum number of cells that my shader allowed.

An unexpected error was encountered during an experiment with OpenCV and Python while looking for new ways to create algorithmic artwork. A collection of randomly selected historical artworks was provided as input, and the objective was to merge these images and apply a visual effect to make the output look more like an oil painting. While the function for creating the oil painting effect was running, some type of memory error was encountered, causing only a small part of the input image set to be processed and leaving a fraction of what would have been the final product. The green and black colors present a palette like a shard from a broken motherboard, which seems appropriate given the context of this failed trial run.

Mountains of Sierpnonski

Ithea Piko Nwawa, 2022

The artist was trying to hack together a simple solution for a programming exercise: Making a Sierpinski triangle in Python with the Turtle Module. They missed one line of code, and it took some time to find the problem. The exact problem is a missing turn at the end of each cycle.

You can find the exercise (all in German), as well as other recursion exercises, here.

While trying to generate an artwork using a Google Colab based software called Disco Diffusion, I tried to get rid of some elements, which I thought were wind turbines. Using a negative weight turned out to have a very large, unintended effect.

The full text prompt was: “A windy coast on a cloudy day, by Caspar Friedrich and Albert Bierstadt, featured on ArtStation:5”, “turbines, towers:-10”

I enabled the RN50x16 model - everything else used the 4.1 default settings.

Ghost Town

Paul R., 2019

After taking a few photos of some buildings in Vancouver, I went to post them on Instagram as a Gallery Post. While processing the images, the app crashed and left me this wonderful piece on my phone.

I have no idea what exactly happened here, but it seems to have combined two of the images during the compression process.

In this animation made in the fantasy console TIC-80, Paul-Jan wanted to use circles and trigonometry to make a cool effect. Instead, this little face appeared to mock his attempts.

I was trying to render font tests: “SPHINX OF BLACK QUARTZ HEED MY VOW” and “the quick brown fox jumps over the lazy dog”. The characters are all positioned correctly, but due to swapping rows and columns, they’re all mirrored and rotated 90 degrees and kind of squished together.

This glitch happened to me in the class “Recreating the Past” by Zach Lieberman at the School for Poetic Computation in 2019. We were supposed to recreate one of John Maeda’s posters from the Morisawa 10 series. I tried to implement it in PostScript, but could not figure out where the word was landing on the paper, because

  • all the measurements were relative,
  • they were in an unknown scale,
  • PostScript is stack-based, and
  • the order of operators is really peculiar.

So I tried to mark the bounding box in red.

Viewscape

Niko Wilhelm, 2020

I am currently implementing a simple OpenGL renderer in Rust as a hobby project, to get more familiar with graphics and the language. Somehow my specular lighting didn’t seem to work properly, and while debugging I decided to visualize the view vector by using it as the fragment color.

The expected result was that the colors would change as I moved, since the view vector is location-dependent… it turns out I had completely forgotten to set the uniform variable for the camera position, so my shader was reading zeroed-out memory, and I effectively colored my scene with the fragment positions. Turns out this looks a lot prettier than the (by now…) working shading :) It gives me a bit of a dreamy, trippy feeling.

The default GNOME desktop background (Adwaita theme, Jakub Steiner, CC BY-SA 3.0) got corrupted somehow – initially with abstract blocky patterns; later, snippets of other things that had been shown on my screen and (apparently) other random graphics-related memory (some font tables?) started to be included, too.

The glitch is, when it happens, remarkably stable. The background does not flicker, and switching to a different background and then back to the default preserves all the artifacts as well. Sometimes GNOME switches from the “day” to the “night” variant of the background, creating a new variant of the glitch in the process (the “night” variant seems to leave more of the artifacts visible) – compare images 1 & 2 or 4 & 5.

Since this bug reappears a few times a month, this is a series which will continue to receive updates – until the GNOME bug that presumably causes it is fixed, I suppose.

During the early development of MAGE, it seemed like a fantastic optimization to perform all surface shading in camera space, and to even push this further by voxelizing the scene’s indirect illumination into an axis-aligned regular 3D grid in camera space. Unfortunately, the global voxelization data structure could only be used by a single camera, required recomputation upon camera movement, resulted in flickering artefacts due to realigning the regular grid with the camera’s changing coordinate system, and resulted in cracks and empty voxels at some camera angles due to the difficulty of conservative voxelization using a non-conservative rasterization pipeline.

The video marked the end of camera space computations and data structures in MAGE, which now uses the more convenient, stable and camera-invariant world-space.

I accidentally dragged a material used for drawing text into the slot for the leaf material on this tree. That caused the flat planes that were supposed to be used for leaves to have letters instead. Because this is an LOD group (multiple meshes are drawn on top of each other, with varying level of detail), leaves are still visible from the other trees in the group.

Additionally, the leaf meshes have vertex color – there is color information kept in the actual mesh itself independent of any material. That color is not meant to be seen as a color, it is actually used as an xyz value for determining wind animation, but the text material applies vertex color on the assumption that it is attached to actual text and vertex color is text color set by the user. This is what caused the text to have the nice gradient seen here.

For a practical course at university, I had to write an OpenGL game from scratch, without using a game engine. One of the first tasks was to load glTF files with 3D models. Actually, this image should display a yellow rubber duck. Instead, it turned out to be a colorful gem. I am not completely sure what went wrong, but my guess is that I accidentally interpreted the normal buffer as the position buffer. With increasing complexity of the loaded model, the displayed object looked more and more like a sphere.

During the development of a visualizer tool for an academic paper I was writing at the time (Out Of Core Construction Of Sparse Voxel Octrees by J. Baert, A. Lagae & P. Dutré), I made an error in the code that descends into the hierarchical voxel structure we saved our models in, resulting in a render that not only omitted 80% of the voxelized triangle faces, but also did a hard cutoff of the normal information that was stored with them, resulting in this weird glitchy noir-ish classic Stanford Bunny. You’re seeing triangles which are actually built up from tiny voxels, in a 1024 x 1024 x 1024 voxel grid.

The renderer was a CPU-based raytracer that used OpenGL to display the final result in a screen-filling quad at a resolution of 512 x 512. Thanks to the inherently parallelizable nature of the raytracing algorithm I could slap OpenMP on it and get it to run at interactive speeds, whilst loading all university-provided cores at 100%, naturally.

Squid

Al, 2020

In an assignment to simulate a control system in MATLAB, I needed to plot its Lyapunov function v(t), a strictly decreasing function consisting of sums of errors in the control system. The function is calculated in part by multiplying the 4x1 error vector e(t) and the 4x4 matrix P together as e(t)’Pe(t) (the apostrophe denotes the transpose), which produces a scalar. In addition to some other problems, instead of assigning e(t) the error vector at time t, I assigned the error at every timestep to e, resulting in a 4x4000 matrix. As a result, plotting v(t) shows 4000 different signals that result from mixing error values from different timesteps. Since the values of e at consecutive timesteps are relatively evenly spaced, the plotted curves are also evenly spaced aesthetically, in a way that reminds me of glassblowing or squids swimming mid-stroke.
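The shape mix-up is easy to reproduce in NumPy (the original was MATLAB; P and the error values here are stand-ins): with a 4x1 error vector, e’Pe is a scalar, but with the full 4x4000 error history it becomes a 4000x4000 matrix, whose rows plot as thousands of mixed-timestep signals.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.eye(4)                        # stand-in for the 4x4 matrix P

e_t = rng.standard_normal((4, 1))    # error at a single timestep
v = e_t.T @ P @ e_t                  # 1x1, i.e. a scalar, as intended
assert v.shape == (1, 1)

E = rng.standard_normal((4, 4000))   # bug: errors at *every* timestep
V = E.T @ P @ E                      # 4000x4000 matrix of mixed signals
assert V.shape == (4000, 4000)
```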

During Char’s shaders workshop, I accidentally left an infinite loop in my vertex shader. Since KodeLife is constantly recompiling and running your program, infinite loops can cause the entire IDE to crash. The resulting canvas visualizes whatever data happened to be in the GPU registers from KodeLife’s final moment.

Texture Recursion

Alex Chen & Andrew Desharnais & Bill Marcy & Jimmy Tang, 2020

Writing an OpenGL game for a game jam, this was our first attempt to render to an offscreen framebuffer object. However, we had forgotten to re-bind the textures in our main loop, and now the active texture contained the framebuffer data. This meant that every textured object was showing the previous frame’s rendered data, creating an interesting hall-of-mirrors effect.

Can We See Now?

Brian Davis, 2018

I was attempting to create an efficient approximation for a line-of-sight test between two boxes. The naive method would be to send straight rays out from the boundary of the “seeing”/source box and check intersections with all other boxes. I thought I could use fewer rays by branching them as they got further apart. This was the visualization which showed me I had not coded what I was envisioning. You can see the darker blue box, which was the source, in the center, and the green test object in the lower right corner.

Bored with my screen saver one day in 2014, I thought it would be awesome to have my screen saver be a blur of my desktop at the moment I locked it.

I did not keep the code that produced this wonderful catastrophe. But if I remember correctly, the faulty blurring operation produced a value that was way larger than it should have been which was subsequently wrapped to cram it into 8 bits, producing these colorful waves.
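The wrap-around is the usual modulo-256 behavior when an oversized value is crammed into 8 bits; a NumPy sketch of the effect (the original code is lost, so the values here are illustrative):

```python
import numpy as np

# a blur that overshoots the valid 0..255 pixel range ...
oversized = np.array([100, 300, 1000])

# ... crammed into 8 bits: values wrap modulo 256 instead of clamping
wrapped = oversized.astype(np.uint8)
assert list(wrapped) == [100, 44, 232]   # 300 % 256 == 44, etc.
```

Clamping (`np.clip(oversized, 0, 255)`) would have produced flat white; wrapping is what produces the psychedelic banding.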

You are what you eat

Anonymous, 2018

This image was produced by a phone camera, where it seems to have combined several different photos, rendering the subject transparent. Taken at a market in southern Spain.

When I was building elm-particle I tried to make some fireworks, and messed up the speed and size in my calculation.

I had some wrong initial ideas when implementing Wave Function Collapse—this is what happens when one only considers adjacent pixels instead of adjacent patterns.

Mixed media (math, HTML tables)

Erty and two friends were working on a generator for pixelart landscapes. They implemented coastline generation and reflections, but while experimenting with color palettes, they encountered this bug.

This image was created while working on extensions for the raytracing software tracey, as part of a university course. When building a volumetric light effect (rays of sunlight reflecting on dust particles in the air), there were various parameters to tweak to make the dust look as intended. Beginning with large amounts of opaque white dust, it looked more like a snowstorm at first – which has its own kind of aesthetic, and makes the sunny church scene look more like ruins on a post-apocalyptic winter’s night.

There is a regular icosahedron in there and I am pretty sure the petals are meant to be a net that “wraps up” to become the icosahedron.

Imagine a grid of equilateral triangles, and imagine that every edge is a spring (which I was working on because I was making a game that involved hexagonal patterns and which I wanted to have some nice juice - I ended up scrapping this bit of juice though). But then, with that springy connectivity, initialize all the points to be at the origin. You should get this insane big bang/stereographically projected grid effect seen here.

This has a similar origin to Icosahedral Flower, but with some spherical projection. I think the projection center was at the origin, or at the corner of each triangle, when it was meant to be somewhere else - not sure though!

This is meant to be multi-colored, volume-rendered god rays – although I certainly didn’t know those fancy words at the time! It uses software rasterization and is written with SDL. At this point it should mostly just have looked black, but I think that a lot of the pixels had a whiteness value that overflowed, and the pattern of overflows is what you see here. I am pretty sure the whiteness value each pixel was getting basically came from the number of rays originating from the center that passed through it.

This (which is screens from a video) was meant to be something so much more boring: essentially just this animation. But I was drawing the circles parametrically, and their centers ended up put in the wrong place.

This was when I was first making a 3D model of a molecule, to go into the software that is my PhD project. What’s happening here is simple to explain: you should be seeing a few hundred spheres, most of which are connected to each other with cylinders. For atoms A and B, the way I was programming those connective cylinders was:

  1. Make a cylinder whose length is the distance between A and B
  2. Align it to be parallel to the line segment connecting A and B
  3. Reposition the cylinder so that one of its ends is touching A (therefore, implicitly, its other end should touch B)

Buuuuut, step 3 wasn’t working – this meant that all of the cylinders were sitting with one end at the origin :)
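The three steps above can be sketched with vectors (a hypothetical NumPy reconstruction, not the original code); skipping step 3 is what left every cylinder anchored at the origin.

```python
import numpy as np

def bond_cylinder(a, b):
    """Return (length, direction, base) for a cylinder connecting
    atoms at positions a and b, following the three steps above."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    length = np.linalg.norm(b - a)      # step 1: length = |AB|
    direction = (b - a) / length        # step 2: align with AB
    base = a                            # step 3: move one end onto A
    return length, direction, base

length, direction, base = bond_cylinder([1, 0, 0], [1, 3, 0])
assert length == 3.0
# the other end implicitly touches B:
assert np.allclose(base + length * direction, [1, 3, 0])
```

Without step 3, `base` stays at the zero vector, so every cylinder radiates from the origin regardless of where A and B are.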

This was part of a data visualization around train positions. The image shows multiple Voronoi analyses, accidentally drawn on top of each other using QGIS. You can learn more about the background and about spatial databases in this article.

Psychedelic Ride

SPokDev, 2019

SPokDev is developing a racing game, and was writing a post-processing effect that maps certain colors to another from a gradient texture. Accidentally using a noise image produced this result.

These graphs were made by mistake: the trigonometric functions were graphed in radian mode, while the rotation about the origin was in degrees, and the conflict between the two yielded interesting gaps and patterns.
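The conflict is easy to demonstrate: like most math libraries, Python’s `math.sin` expects radians, so feeding it degree values samples the sine wave at a wildly different rate (a minimal illustration, not the original graphing setup):

```python
import math

angle_deg = 90

# intended: the sine of 90 degrees is 1
assert math.isclose(math.sin(math.radians(angle_deg)), 1.0)

# the bug: 90 interpreted as *radians* lands somewhere else entirely
assert not math.isclose(math.sin(angle_deg), 1.0)
```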

This happened when 2DArray used a signed distance field technique to render 3D scenes for the fantasy console PICO-8. A broken lighting operation caused the wrong color indices to be picked.

Chaos Center

Laura Klünder, 2017

This was a rendering bug caused by reading the xyrgba property in an OpenGL shader (instead of xyzrgba). Laura encountered this bug while developing c3nav, an indoor navigation system for the 34th Chaos Communication Congress at the Congress Center Leipzig.
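The typo amounts to reading interleaved vertex data one component short. As a hypothetical Python analogy (not c3nav’s actual shader code):

```python
def unpack_vertex(buf):
    """Read one interleaved vertex: x, y, z, then r, g, b, a."""
    x, y, z, r, g, b, a = buf[:7]
    return (x, y, z), (r, g, b, a)

def unpack_vertex_buggy(buf):
    # The "xyrgba" version: z is skipped, so the color is read one
    # float too early and every channel shifts by one component.
    x, y, r, g, b, a = buf[:6]
    return (x, y), (r, g, b, a)

vertex = [1.0, 2.0, 3.0, 0.1, 0.2, 0.3, 1.0]
```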

Out of Line I-IV

LambdaTotoro, 2020

These pieces were created by a buggy version of Variations on Variations, a Python bot for Mastodon that creates images inspired by Max Bill’s Variations series.

So it was always meant to generate art, but it did so very incorrectly. Angles don’t line up, colours are not unique, some polygons escape the boundaries entirely, it’s all a bit messed up. However, I also liked a few of the results for their own sake.

In university, I took a course where we implemented our own ray tracers. For a homework exercise, we had to implement texture support, and were supposed to interpolate large textures, so that they wouldn’t show up pixelated in the result. I don’t remember what went wrong.

While creating a Barnes-Hut tree for a galaxy simulation, the positions of new child cells broke after they were offset in the wrong way: (new center = old center ± width / 2) instead of (new center = old center ± width / 4).
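A minimal sketch of the correct offset in Python (identifiers are made up):

```python
def child_center(cx, cy, width, dx, dy):
    """Center of a child cell in a quadtree node; dx and dy are -1 or +1.

    A child has half the parent's width, so its center sits a quarter of
    the parent's width from the parent's center. The buggy version used
    width / 2, which put every child's center on the parent's boundary.
    """
    return (cx + dx * width / 4, cy + dy * width / 4)
```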

During the simulation of galaxies, the visualization of the overall forces acting on the individual stars failed after the lengths of the vectors were miscalculated. The colors represent the intensity of the acting force, leaving the vector itself to display only its direction.

An accidental image decoding glitch produced these interesting and very colorful patterns. They come from gpu_jpeg2k, a JPEG 2000 codec library that uses CUDA for fast GPU-based encoding and decoding.

Due to a bug in the file reading and parsing code, it ended up reading invalid data, causing colorful glitch art.

Traversing Berlin

Daniela Berger, 2018

I wanted to use the GTFS data of the public transport association VBB to create art about the Berlin public transport system.

I extracted the latitude / longitude of each traffic stop to use them to create paths in a SVG file.
All stops in Berlin lie at around 52°N, 13°E, so I could drop the first three chars (“52.” and “13.”) and use the next three chars as coordinates in a grid.

I poured the data into my SVG generator, and this image was the result :-)
It has been pointed out that it illustrates very exactly what Berlin traffic feels like, but still…

I later found out that I had messed up the substring by throwing away the first four chars and then using the next three chars for the points in the SVG.
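With a made-up stop coordinate like 52.52437°N, 13.36963°E, the intended and accidental substrings compare like this (a Python sketch, not the original generator):

```python
def stop_to_point(lat, lon):
    """Intended: drop the first three chars ("52." / "13."), keep three."""
    return int(lon[3:6]), int(lat[3:6])

def stop_to_point_buggy(lat, lon):
    # Accidental: four chars dropped, shifting everything by one digit.
    return int(lon[4:7]), int(lat[4:7])

stop_to_point("52.52437", "13.36963")        # (369, 524)
stop_to_point_buggy("52.52437", "13.36963")  # (696, 243)
```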

Galaxy Smoothie

Raphael Michel, 2017

This happened while trying to smooth the image of a galaxy using a convolution kernel that turned out to be completely denormalized.
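A convolution kernel is normalized when its weights sum to one; if they don’t, the output brightness is scaled by the sum. A tiny illustration in Python (not the original code):

```python
def box_kernel(n, normalized=True):
    """An n-by-n smoothing kernel; for a proper blur the weights sum to 1."""
    w = 1.0 / (n * n) if normalized else 1.0
    # With normalized=False the weights sum to n * n, so every output
    # pixel is amplified by that factor and the image blows out.
    return [[w] * n for _ in range(n)]
```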

Floating Robots

Raphael Michel, 2017

The coordinate systems for the robot’s movement were mixed up when simulating a walking motion.

Credit for the robot model is due to Yue Hue and ORB.

Displacement Chaos

Michael Pucher, 2017

When performing finite element statics analysis, one aspect of interest is how components of an object deform under load. Each vertex of the model has a corresponding displacement in the x-, y-, and z-direction. The displacements are added to the actual vertex coordinates, and one displacement component is used to color the model. This is where the bug comes in: the nodes are stored in a particular order, with subsequent nodes being fairly close to each other on the model. If there is an off-by-one error in the code that stores the displacements in the vertex buffers, the colors of the nodes will be somewhat correct, while the displacements are applied in the wrong places.
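A toy Python model of the effect (the real code works on GPU vertex buffers; this just shows the index shift):

```python
def displace(vertices, displacements, offset=0):
    """Add displacements to vertices and color each by its x-displacement.

    offset=0 is correct; offset=1 reproduces the off-by-one, applying
    node i+1's displacement to vertex i. Because neighboring nodes are
    stored close together, the colors stay roughly right while the
    geometry warps in the wrong places.
    """
    n = len(displacements)
    out = []
    for i, v in enumerate(vertices):
        d = displacements[(i + offset) % n]
        pos = tuple(v[k] + d[k] for k in range(3))
        out.append((pos, d[0]))  # color taken from the x-component
    return out
```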

I was simulating waves within a bounding box by shooting particles in all directions, and I forgot to convert milliseconds to seconds, making the simulation time steps 1000 times too big, which explains the behavior.
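The forgotten conversion in one line (a minimal Python sketch, not the original simulation):

```python
def step(position, velocity, dt_ms):
    """Advance a particle by one simulation step."""
    dt = dt_ms / 1000.0  # the forgotten conversion: milliseconds -> seconds
    # Without the division, dt stays in milliseconds: every step is
    # 1000 times too large and the particles shoot out of the box.
    return position + velocity * dt
```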

Disco Time

Jakob Runge, 2013

This was the result of a force-directed graph drawing algorithm. Nodes are initially placed randomly and spring-like forces are computed between each pair of nodes. To minimize these forces, nodes are moved, which often results in a cleaner graph picture. When moving the nodes, Jakob forgot to clear the old edges that had been drawn and just added new ones, resulting in series of similarly positioned lines of the same color.

Here’s the intended, final result.
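In essence, the drawing loop was missing a single clear() call per iteration. A toy Python version that just records drawn segments (not the original code):

```python
class Canvas:
    """A toy canvas that records drawn line segments."""
    def __init__(self):
        self.lines = []
    def clear(self):
        self.lines = []
    def draw_line(self, a, b):
        self.lines.append((a, b))

def draw_iteration(canvas, edges, clear_old):
    # The bug: the old frame was never cleared, so each iteration's
    # edges piled up on top of all previous ones.
    if clear_old:
        canvas.clear()
    for a, b in edges:
        canvas.draw_line(a, b)

canvas = Canvas()
for _ in range(3):
    draw_iteration(canvas, [((0, 0), (1, 1))], clear_old=False)
# Three copies of the "same" edge have accumulated as trails.
```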

This series of images was created when I was working on packing problems. The task was to find an algorithm which could always place the provided circles inside of the triangular container without overlapping each other. As you can see, many strategies were unsuccessful! :)

The drawings were done using SageMath.

The Stanford Iceberg

Marc Kassubeck & blinry, 2012

This happened when we were working on an assignment in a real-time computer graphics course. We were supposed to draw the Stanford Bunny using OpenGL’s Vertex Buffer Objects, but it seems we got the indices wrong! You can kind of see the shape of the bunny’s ears and nose, though!