As a culminating hurrah to cap off Gray Area’s 2021 Creative Coding Immersive, every student had the opportunity to apply the software and hardware tools they had learned during the course and produce a work for an online gallery. Now that the show has premiered (you can still view the entire exhibit on New Art City here), I feel inclined to reflect on the lessons I learned while making my piece, worlding.earth, which fell into place as an interactive website containing 3D photogrammetry work I had made before the Immersive. Here is a list of the primary macro lessons I gleaned in the process:
- 🔄 It’s easier to iterate than make something new (a story on success)
- 🐢 If there are turtles all the way down, choose the right turtle (a story on failure)
- 🚪 Accessibility and deadlines are interlinked, but not opposed
Iteration and Inspiration
During the immersive, between classes, I discovered a site called pointcloud.garden made by the artist Clement Valla. I had been keen on Valla’s works since I first saw Surface Proxy, where he plays with notions of photogrammetry, surface design, digital image as proxy, and computers as seeing machines in an organic context. Those themes carried over to his latest project pointcloud.garden which is an elegant site that hosts a few of Valla’s 3D pointcloud scans of garden flora: flowers, grasses, shrubs.
The work stood out with its use of subtle movement: a slight z-axis pan of the pointclouds which, combined with what I presume to be a vertex shader that transforms the spherical points into uniform square pixels, created a spectacular firework of shimmering depth. The kinetic activity of the pointclouds was very visceral. Looking at point-cloud flowers felt at once highly abstracted, as only general silhouettes and color-blocked chunks could be discerned and I had to blur my eyes to fully compute the image as “flower”, yet also very intimate, as it was possible to get face-to-face with the flowers and float, like a loose atom, in and out of them. It was an experience that reminded me that even images as non-detailed as a fragmented pointcloud can, despite the limitations of two-dimensional screens, offer the participant a microscopic three-dimensional intimacy with digital images.
When it came time to choose my final Gray Area project, my mind beelined to pointcloud.garden. It used a kind of tool we had learned during the immersive, and I could see myself making something similar. Marc Schroeder had taught a week-long class on A-Frame, a library built on three.js that allows for easy embedding and manipulation of 3D assets on the web; three.js is, in fact, the library pointcloud.garden is built upon. What could I show in this 3D world, in a browser, so that someone somewhere could experience it? I already had a backlog of 3D photogrammetry collages I had been generating that could serve here, I thought. Indeed, as I thought on it more, I realized I had only ever shown these three-dimensional photogrammetry scans as flat 2D raytraced renders, exported and compressed into TIFs. As lossless as these flat images were (.TIF is generally a minimally compressed image format), they were by fact of medium totally lossy: they cut a mere slice from the total Dasein of the massive photogrammetry models.
At the time I really liked the idea of still-image renders flattening an amalgam image into a single file. On many levels it felt proper that sculptures born of flat images, which grew up into masses in three dimensions and expanded their life experiences across X, Y, and Z, found themselves dead and embalmed back into their initial state as a flat image, possibly to be reconstituted as a data point for a photogrammetry model in a later generation.
But as the deadline for the Gray Area exhibition loomed, and with pointcloud.garden front of mind, the flatland photogrammetry renders called out louder and louder to be refactored and brought into the dimension of material depth. More broadly, it felt improper to so harshly limit the experience of people other than myself, denying them the chance to see the models from the side and discover their own angles on the objects. I, somehow special, should not be the only one able to zoom in and out, pan, and explore the crevasses and dark corners of these shapes, not when the joy of exploration I discovered in pointcloud.garden was so foregrounded. There was no logical reason to bracket off the full-spectrum joy of digital images when there were so many tools, A-Frame and three.js both, so easily at hand for creating a shareable 3D experience.
It is for this reason that I consider finding inspiration important, and, in the case of my project worlding.earth, a total success. The experience of pointcloud.garden was so wonderful that my project was immediately framed with it in mind, and from there it became very easy to see what the end product would be like. The vision could take a backseat to the engineering of it. From an art standpoint, then, a lot of the emotional heavy lifting was already taken care of. I knew from experience that software development can be plagued by poor product management, and having a strong inspiration barking orders from the top down was immensely helpful in streamlining necessary features and filtering out nice-to-haves. The inspiration of pointcloud.garden became my product manager, setting a north star for me to work towards.
Choose Your Turtles Wisely
A-Frame was taught in class as an easy-to-use tool for getting 3D graphics onto the web. A-Frame, as mentioned earlier, is built on three.js. What I did not mention is that three.js is built on WebGL, a browser API that taps into a computer’s GPU to efficiently display complex graphics. Should we go a layer deeper, WebGL itself is based on OpenGL ES 2.0, which is itself… etc. Turtles all the way down.
The cost of ease is always control. A-Frame is easier to use than three.js (and three.js easier than WebGL) because there are fewer knobs to turn, which in most cases is fine, but can get in the way when you need more pinpointed fine-tuning. As an analogy: if your goal is to drive from town A to town B, A-Frame is like buying a prebuilt car you can drive off the lot straight to town B. You don’t have much choice in the kind of car; you get what you see. three.js would be like getting a warehouse full of a myriad of great car parts, along with thorough documentation on how to assemble the exact, particular car you want, and then driving to town B. Starting with WebGL would be like designing molds in CAD software and pouring cast iron to birth a custom cylinder engine: total control, at the cost of convenience. In that vein, a code-adventurer brave enough could dive to the depths of assembly and craft any range of 3D graphics out of 1s and 0s, bytes and bits, above and beyond what is currently possible with three.js. But that would be a monumental task, and largely redundant, as three.js and A-Frame are both already phenomenally rich frameworks.
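To make the prebuilt-car end of the spectrum concrete, here is roughly the smallest complete A-Frame scene. Camera, lights, renderer, and render loop all come preassembled, which is exactly the convenience, and exactly the constraint. (The CDN version number here is just an example, not the one my project used.)

```html
<!-- A minimal A-Frame scene: the "prebuilt car". -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- One box and a sky; A-Frame supplies a default camera and lighting. -->
      <a-box position="0 1 -3" color="#4CC3D9"></a-box>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

The equivalent in raw three.js takes dozens of lines of renderer, camera, and animation-loop setup: the warehouse of car parts.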
Over the course of the project, A-Frame’s ease-of-use got in its own way. Despite its great design, I kept stubbing my toe on a few jutting corners. For instance, A-Frame has its own loading mechanism. It loads assets in its own sequence, with basic event handlers, which are useful but don’t always provide enough persistent information. This pained me specifically when trying to set up a convenient loading screen. It also became a pain point when I tried to figure out how to manually sequence the loading of objects in an order dictated by a user’s actions. I never found a way to override A-Frame’s automatic scene-loading behavior with something more manual; in three.js this would be a menial task. This is all to say that, to begin with, I am a beginner developer who doesn’t know what he’s doing, and that A-Frame is designed with a different product target: it was originally conceived as an offshoot of three.js to streamline WebVR development. As you’d expect, it is very good at that task. For my uses, however, it would have been more ideal to use three.js, to choose the turtle not at the penthouse of abstraction but at the level right below, where a few more wires hung from the ceiling for me to tinker with.
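For what it’s worth, the events A-Frame does expose can be stitched into a rough progress counter. A-Frame’s gltf-model component emits a `model-loaded` event on each entity when its model finishes loading, so you can at least count completions. This is a sketch, not what worlding.earth shipped; the wiring comments at the bottom (and the `updateLoadingBar` name) are hypothetical:

```javascript
// Sketch: a rough loading-progress counter that counts finished assets
// against an expected total. It cannot report per-file byte progress --
// that is the persistent information A-Frame's events don't surface.
function createProgressTracker(totalAssets) {
  let loaded = 0;
  return {
    // Call this from each entity's 'model-loaded' listener.
    onAssetLoaded() {
      loaded = Math.min(loaded + 1, totalAssets);
      return this.percent();
    },
    percent() {
      return totalAssets === 0 ? 100 : Math.round((loaded / totalAssets) * 100);
    },
    done() {
      return loaded >= totalAssets;
    },
  };
}

// Hypothetical wiring inside an A-Frame page:
// const entities = document.querySelectorAll('[gltf-model]');
// const tracker = createProgressTracker(entities.length);
// entities.forEach((el) =>
//   el.addEventListener('model-loaded', () =>
//     updateLoadingBar(tracker.onAssetLoaded())));
```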
An actual piece of technical advice amidst this rambling: when exporting .GLTF files from Blender 2.8+ with the ‘Compression’ box checked, and attempting to load them into an A-Frame scene, add this property:
<a-scene gltf-model="dracoDecoderPath: https://www.gstatic.com/draco/v1/decoders/"></a-scene>
Blender uses Draco, an open-source compressor made by Google, to efficiently shrink the file size of 3D graphics. Compression is pretty much a necessity, as loading huge uncompressed 3D files over the internet can take a long time and slow the site down. Blender’s UI does not show that it is using Draco, which is normally not something you need to know; in this case, however, A-Frame cannot parse Draco binary data natively and requires an imported library to decode it properly. This requirement is outlined in A-Frame’s documentation, but it took me a while to figure out, because I had no idea Blender was exporting with a specialized encoder.
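If you are ever unsure whether a given export is Draco-compressed, you can check the glTF’s JSON itself: Draco compression is declared as the `KHR_draco_mesh_compression` extension in the file’s `extensionsUsed`/`extensionsRequired` lists. A small sketch (it assumes you already have the parsed .gltf JSON in hand; a .glb would need its JSON chunk extracted first):

```javascript
// Sketch: detect Draco compression in a parsed glTF JSON object by
// looking for the KHR_draco_mesh_compression extension declaration.
function usesDraco(gltfJson) {
  const required = gltfJson.extensionsRequired || [];
  const used = gltfJson.extensionsUsed || [];
  return (
    required.includes('KHR_draco_mesh_compression') ||
    used.includes('KHR_draco_mesh_compression')
  );
}
```

If this returns true, the scene needs a Draco decoder configured (as in the `dracoDecoderPath` snippet above) before the model will load.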
Accessibility — Hardware and Affordances
One of the struggles I encountered was the design of the loading screen. I hinted earlier that there were a few technical struggles getting the loading screen working at all, but this struggle, I think, is broader and more amorphous: grappling with the accessibility of the project, and dealing with deadlines.
I am fortunate to have a powerful desktop PC, on which I did most of my development. I also had fast gigabit internet, meaning that loading large models did not obstruct the experience. I later tried loading the site on slower cabin-in-the-woods Wi-Fi and realized I had to optimize heavily to get decent loading times. Even after optimizing, loading times were long, and when worlding.earth was running it took a solid 50%+ of a computer’s CPU, 60%+ of its GPU, and swaths of memory. This meant my project could not run “democratically” on just any machine. Someone told me the site was a sort of class signal, because running it at all required an expensive computer. That was never the intent, but given my powerful (and expensive) machine, my framework of normality was imposed subconsciously on the project. This was disappointing, and obviously something I want to make sure doesn’t happen in later projects. I was then left to decide how to deal with it.
I decided to employ a loading screen to cover up this blotch of mine, which I am framing here as a type of “affordance”, a concept now ubiquitous in UX design and popularized by Don Norman in the book The Design of Everyday Things. An affordance is a design decision that provides a hint guiding a user’s behavior. On an industrial door, the affordance can be a handle (affording: PULL) or a flat metal plate (affording: PUSH). The affordance of a loading screen is multi-fold. It affords transparency, telling the participant that stuff is happening and that the site is not frozen. It also affords the other way: it lets the maker of the site hide some fact of it. These affordances are at once transparent with the user about what is happening and opaque about why the loading is needed in the first place. At a fundamental level, a loading period is a bit like a waiting room, and arguably optional, because given enough effort nearly all loading can be shifted downstream or sequestered into the infinite stretches of time when something is not yet needed. For this reason I struggled with implementing a loading screen, seeing it as a kind of defeat. But I soon came to my senses and saw it as necessary given my time constraints: yes, in a perfect world slow loading is unnecessary, but in a world of deadlines, it is completely necessary.
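Mechanically, the simplest version of that affordance is a full-screen overlay that hides itself when A-Frame fires the scene’s `loaded` event. A bare-bones sketch (the `#loader` id and its inline styling are placeholders, not what worlding.earth uses):

```html
<!-- Sketch: a loading overlay that disappears once the scene is ready. -->
<div id="loader"
     style="position:fixed; inset:0; background:#000; color:#fff;
            display:flex; align-items:center; justify-content:center;">
  loading…
</div>
<a-scene>
  <!-- scene entities here -->
</a-scene>
<script>
  // A-Frame emits 'loaded' on the <a-scene> element once its entities
  // have initialized; hide the overlay at that point.
  document.querySelector('a-scene').addEventListener('loaded', () => {
    document.querySelector('#loader').style.display = 'none';
  });
</script>
```

Note that `loaded` covers scene initialization; large models streaming in afterwards may still need the `model-loaded` events mentioned earlier.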
The loading screen now stands for me both as a signifier of giving in to deadlines and as a misstep in making such an inaccessible website (I haven’t even mentioned that the site does not work on mobile!). The loading screen is a reminder not merely of a lack of optimization, but of an opportunity to rethink how I go about projects, making them more truly transparent by making them simpler.
This is the TL;DR.
Taking inspiration from a specific source, in my case Clement Valla’s pointcloud.garden, gave me a solid structure to work from. Development could be focused and streamlined by treating his project as my product manager.
Choosing A-Frame was a choice made out of ease and convenience: it is what I was taught, and while I like it a lot for what it can do, in my scenario the lack of knobs ultimately got in the way of making my bespoke experience. I learned to choose my “turtle” carefully, i.e. to decide what level of manual control I need before diving in head first, because it will define my capabilities down the road.
Lastly, in making a site that required a lot of computing power and a long time to load, I realized what it means to make an inaccessible project. Some people had to wait minutes for the site to open, and others got poor performance on their computers. This was caused by my development machine being very powerful, which embedded a maker’s bias: the assumption that if it worked for me it would work for everyone else, which is false. Deadlines force these faults to the surface, and in my case, accepting the fact of the loading screen keeps me aware of the project’s inaccessibility.
All in all, however, despite an overtone of negativity (as is somewhat demanded of a post-mortem), the project was a ginormous success. It exposed me to countless tools, imposed deadlines, and brought to my attention my biased kind of making, which I will more consciously account for on the next endeavor. I am excited for the next one.