Obviously, the artist still had enough time to document their undertaking.
This series is dedicated to the programming course “clownfish”.
Most likely, the 32-bit floating-point images were interpreted as half/16-bit floats by FFmpeg, causing the doubling and the glitched colors.
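The misinterpretation can be sketched with NumPy (a guess at what happened inside FFmpeg, not its actual code path):

```python
import numpy as np

# four 32-bit float samples, standing in for pixels of the source frames
f32 = np.array([0.25, 0.5, 0.75, 1.0], dtype=np.float32)

# reading the same bytes as 16-bit halfs yields twice as many "pixels",
# with essentially unrelated values -- hence the doubling and color glitches
f16 = np.frombuffer(f32.tobytes(), dtype=np.float16)

print(len(f32), len(f16))  # 4 8
```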
Originally posted on Twitter; the document must have been Warlords and Muslims in Chinese Central Asia.
The graph should have shown the positions of multiple sensors in a motion-tracking system over time in space – so the x axis should have been time (giving the data the format: time, data1, data2, data3, …). But Matplotlib only used time for the first series and then paired the remaining coordinates with each other (assuming the format x, y, x, y, x, … – as if it were time, data1, time, data2, time, data3, …).
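The pairing behavior can be reproduced with a minimal Matplotlib snippet (invented sample data):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0, 1, 50)
d1, d2, d3 = np.sin(t), np.cos(t), t ** 2

# The buggy call: Matplotlib pairs positional args as (x, y), (x, y), ...
# so it draws (t, d1) and (d2, d3) -- only two lines, one of them nonsense.
buggy = plt.plot(t, d1, d2, d3)
plt.figure()

# The intended call: one shared time axis, one column per series.
ok = plt.plot(t, np.column_stack([d1, d2, d3]))

print(len(buggy), len(ok))  # 2 3
```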
The result is a ton of new windows spawned. Today I was closing these windows, and for some reason, as I closed windows in my tiling window manager, they all coalesced into this blob of malformed window borders (yellow is the focused window, grey is unfocused).
Quite the maelstrom of technical quirks - each piece a slightly imperfect tool, interacting unexpectedly with other imperfect tools.
Tasked with rendering graphics for the game League of Legends, it took a more freestyle approach and created this piece of art.
The glVertexAttribPointer() OpenGL API call.
I don’t remember what exactly went wrong in the code, but the system kind of crashed and didn’t respond for a while. When I opened the Jupyter notebook again, this is what I found in the output cell. It had generated thousands of small pie charts (what you might see as Pacman-like structures) inside a big pie. Hence the name “Pacman Pie”.
The author summarizes it like this:
I wrote something similar to a cellular automaton, like Conway’s Game of Life, and I found this bug when I first made the fireflies flash. It’s hard to remember what the issue was now, but I believe light did not decay with time, and new flashes overrode the light value at the affected tiles, creating this neat effect.
The program was written in Rust, my beloved. There is a GIF of what the program looks like completed and working as intended on the GitHub page.
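Since the exact rules are forgotten, here is only a toy reconstruction of the suspected bug (grid size, flash rate, and decay factor are all invented): without decay, every flash permanently saturates its tile, so the grid fills up instead of pulsing.

```python
import numpy as np

rng = np.random.default_rng(0)
light = np.zeros((16, 16))

def step(light, decay):
    flashes = rng.random(light.shape) < 0.05  # a few tiles flash this tick
    if decay:
        # intended behavior: old light fades, fresh flashes reset to full
        light = np.maximum(light * 0.9, flashes.astype(float))
    else:
        # suspected bug: flashes overwrite the tile, and nothing ever fades
        light[flashes] = 1.0
    return light

for _ in range(200):
    light = step(light, decay=False)

# with no decay, lit tiles stay lit forever and the grid saturates
print((light == 1.0).mean())
```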
I was colour-correcting the video, but after the software glitch, there were no colours. But I liked the effect – I hope to encounter this glitch again.
I see an interesting symbolism here: Street View is a digitalization of the real world. I wanted to print the Street View images – to materialize the digital, so to speak. But the glitch created a representation of “another dimension” and showed the impossibility of coming back to reality.
I didn’t notice the result until the next day. I have absolutely no idea what happened here, but it looks cool.
“What are you doing?! Those are amazing!!”
I asked him for the phone and started scrolling through the images. Most of them were gradations of color in the beige/green/gray range, many of a single color, a few of 2-3 colors. This one stood out as it ran the gamut. About half of them have the “ripped” looking edges in them, and maybe 4-5 of them have the black glitches along the edges.
Since then, the photo has crashed the camera, Sigma Photo Pro, and my MacBook Pro every single time I tried to open it. Eventually, something got patched somewhere, and the photo no longer crashes my laptop.
I am unsure how the pink noise banding at the top happened – all I can think is that a random camera glitch or a particle strike caused the error whilst the file was being written to the card. All photos taken afterwards were fine, and the card that the ‘demon core’ was on was also fine. The camera still works fine, and I’ve been unable to reproduce the problem since.
This created beautiful patterns, expressing the fundamental nature of symmetry with modular arithmetic in common math sequences.
You can check out the code here.
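To illustrate the general idea (not the author’s actual code): common integer sequences become periodic once reduced modulo m, and that periodicity is what produces symmetric patterns when the values are laid out on a grid.

```python
m = 8
# triangular numbers n(n+1)/2 reduced mod m
triangular = [n * (n + 1) // 2 % m for n in range(4 * m)]

# T(n + 2m) - T(n) = 2mn + 2m^2 + m, which is divisible by m,
# so the sequence repeats with period (at most) 2m
period = 2 * m
print(triangular[:period] == triangular[period:2 * period])  # True
```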
I think I was trying to resize the image of a blue and magenta test grid (visible on the left) using the nearest neighbour algorithm.
Sadly, I lost the original lossless image, and only this compressed version is left, which I salvaged from an old WhatsApp chat with a friend.
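For reference, a minimal nearest-neighbour resize (a generic sketch, not the original code) looks like this:

```python
import numpy as np

def nearest_neighbour_resize(img, new_h, new_w):
    """Resize a 2-D (or H x W x C) array by nearest-neighbour sampling."""
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h  # source row for each output row
    cols = np.arange(new_w) * w // new_w  # source column for each output column
    return img[rows[:, None], cols]

# a 2x2 "test grid" scaled up 2x: every source pixel becomes a 2x2 block
grid = np.array([[0, 1],
                 [2, 3]])
print(nearest_neighbour_resize(grid, 4, 4))
```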
There was a bug in the code - after the image passed the encoding-decoding stage, something completely different but beautiful appeared.
When calculating shadows, you typically calculate the nearest object to a light source and map this onto a shadow map. When rendering the real image, you check the distance from the rendered object to the light against the shadow map to see whether your distance is larger. If it is, your object is in shadow.
When creating a cubemap, you have to create six images, put them into an array, and then declare that array a cubemap via an image view. The problem here was that the Vulkan documentation didn’t clearly say which part of the array was responsible for which part of the cube. Therefore I had to try out all the different possibilities manually. To make this easier on me, I mapped the location of the object that was seen onto my cubemap and then used this position as the color that was rendered. With this it was easier to find out which part of the cube was mapped where.
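The depth comparison described above, reduced to a sketch (names and the bias value are illustrative, not from the original renderer):

```python
def in_shadow(dist_to_light, shadow_map_depth, bias=1e-3):
    # the shadow map stores the distance of the closest occluder to the light;
    # a fragment farther away than that has something between it and the light
    return dist_to_light > shadow_map_depth + bias

print(in_shadow(5.0, 3.0))  # True: an occluder at depth 3 shadows depth 5
print(in_shadow(3.0, 3.0))  # False: the occluder itself is lit
```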
What went wrong? I pondered and tried a few things for a whole day. And then I remembered that I had once read the Go language spec, specifically the part where it states that int, the data type I was using for my indices, is either 32 or 64 bits wide depending on the target architecture. Of course, I had assumed it was 32 bits the whole time. This caused OpenGL to treat a single 64-bit index as two 32-bit indices.
This incident also made it into an article about Go that I wrote.
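The byte-level mixup is easy to reproduce in any language; here is a Python sketch (the original code was Go):

```python
import struct

# three indices stored as 64-bit integers, as Go's int is on 64-bit targets
indices64 = [0, 1, 2]
buf = struct.pack("<3q", *indices64)

# OpenGL, told to expect 32-bit indices, reads the same 24 bytes like this:
indices32 = struct.unpack("<6i", buf)
print(indices32)  # (0, 0, 1, 0, 2, 0) -- every index split into two
```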
For this experiment I rendered a scene in Blender and used a Processing sketch I had previously created. It takes each frame of a video, cuts the same column of pixels out of each frame, and combines all the pixel columns into a new image that is as many pixels wide as the original video had frames. Repeat that for as many pixel columns as the original video has, and you get a new set of frames for the resulting animation.
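In NumPy terms, the sketch’s transformation is just a swap of the time and x axes (toy data, not the original Processing code):

```python
import numpy as np

# toy "video": 5 frames of 4 x 3 grayscale pixels (frames, height, width)
video = np.arange(5 * 4 * 3).reshape(5, 4, 3)

# swapping the time and x axes turns T frames of H x W into W frames
# of H x T, each built from one pixel column tracked over all frames
slit_scan = video.transpose(2, 1, 0)

print(slit_scan.shape)  # (3, 4, 5): as many frames as the video was wide
```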
After a previous version using the glDrawPixels() OpenGL call exposed a driver bug (an underused code path), this was supposed to be the “proper” version using textured quads. Obviously, the vertices got scrambled somehow.
The project ended up on the infinite shelf shortly after I had fixed the issue, so this glitch piece remains its only notable result. :)
Materials used: Haskell, OpenGL
You can find the exercise (all in German), as well as other recursion exercises, here.
The full text prompt was: “A windy coast on a cloudy day, by Caspar Friedrich and Albert Bierstadt, featured on ArtStation:5”, “turbines, towers:-10”
I enabled the RN50x16 model - everything else used the 4.1 default settings.
I have no idea what exactly happened here, but it seems to have combined two of the images during the compression process.
The expected result was seeing the same colors no matter where I moved, as the vector is location-dependent… It turns out I had completely forgotten to set the uniform variable for the camera position, so my shader was reading zeroed-out memory, and I effectively colored my scene with the fragment positions. This actually looks a lot prettier than the (by now…) working shading :) It gives me a bit of a dreamy, trippy feeling.
The glitch is, when it happens, remarkably stable. The background does not flicker, and switching to a different background and then back to the default preserves all the artifacts as well. Sometimes GNOME switches from the “day” to the “night” variant of the background, creating a new variant of the glitch in the process (the “night” variant seems to leave more of the artifacts visible) – compare images 1 & 2 or 4 & 5.
Since this bug reappears a few times a month, this is a series which will continue to receive updates – until the GNOME bug that presumably causes it is fixed, I suppose.
The video marked the end of camera space computations and data structures in MAGE, which now uses the more convenient, stable and camera-invariant world-space.
Additionally, the leaf meshes have vertex color – color information kept in the mesh itself, independent of any material. That color is not meant to be seen as a color; it is actually used as an xyz value for determining the wind animation. But the text material applies vertex color on the assumption that it is attached to actual text and that the vertex color is a text color set by the user. This is what gave the text the nice gradient seen here.
The renderer was a CPU-based raytracer that used OpenGL to display the final result in a screen-filling quad at a resolution of 512 x 512. Thanks to the inherently parallelizable nature of the raytracing algorithm I could slap OpenMP on it and get it to run at interactive speeds, whilst loading all university-provided cores at 100%, naturally.
I did not keep the code that produced this wonderful catastrophe. But if I remember correctly, the faulty blurring operation produced a value that was way larger than it should have been, which was subsequently wrapped to cram it into 8 bits, producing these colorful waves.
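The wrap-around can be sketched like this (invented values, since the original code is gone):

```python
import numpy as np

# filter outputs far above the 255 a pixel channel can hold
too_bright = np.array([100.0, 300.0, 517.0, 1000.0])

# cramming them into 8 bits wraps modulo 256 instead of clipping at 255
wrapped = too_bright.astype(np.int64).astype(np.uint8)
print(wrapped)  # [100  44   5 232]
```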
Mixed media (math, HTML tables)
Buuuuut, step 3 wasn’t working - this meant that all of the cylinders were sitting with one end at the origin :)
So it was always meant to generate art, but it did so very incorrectly. Angles don’t line up, colours are not unique, some polygons escape the boundaries entirely; it’s all a bit messed up. However, I also liked a few of the results for their own sake.
Due to a bug in the file reading and parsing code, it ended up reading invalid data, causing colorful glitch art.
I extracted the latitude/longitude of each traffic stop to create paths in an SVG file.
All stops in Berlin lie at around 52°N, 13°E, so I could drop the first three characters (“52.” and “13.”) and use the next three as coordinates in a grid.
I poured the data into my SVG generator, and this image was the result :-)
It has been pointed out that it illustrates very exactly what Berlin traffic feels like, but still…
I later found out that I had messed up the substring by throwing away the first four chars and then using the next three chars for the points in the SVG.
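A sketch of the off-by-one with an invented coordinate:

```python
lat = "52.493210"  # an invented Berlin-ish latitude

good = lat[3:6]  # intended: drop "52." (three chars), keep the next three digits
bad = lat[4:7]   # the bug: dropping four chars shifts every coordinate by a digit

print(good, bad)  # 493 932
```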
Here’s the intended, final result.
The drawings were done using SageMath.