
3d printing, web scraping, displacing images to models, DreamFields and photogrammetry

Location

Gothenburg, Sweden

Date

February 2023

Recently I have been working on ways of representing objective approaches towards heritage. To do this I wrote a series of web scrapers which gathered and downloaded imagery matching the specific requirements defined in each scraper's setup.
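
As a minimal sketch of the kind of scraper involved, the following uses requests and BeautifulSoup to pull image files from a page. The source URL, file-type filter and naming scheme are placeholders, not the project's actual targets or selection criteria.

```python
# Minimal image-scraper sketch. The source URL, file-type filter and
# naming scheme are placeholders, not the project's actual criteria.
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

SOURCE_URL = "https://example.com/archive"  # hypothetical source page
OUT_DIR = "scraped_images"
os.makedirs(OUT_DIR, exist_ok=True)

page = requests.get(SOURCE_URL, timeout=30)
soup = BeautifulSoup(page.text, "html.parser")

for i, img in enumerate(soup.find_all("img")):
    src = img.get("src", "")
    if not src.lower().endswith((".jpg", ".jpeg", ".png")):
        continue  # stands in for the project's real selection criteria
    ext = os.path.splitext(src.lower())[1]
    data = requests.get(urljoin(SOURCE_URL, src), timeout=30).content
    with open(os.path.join(OUT_DIR, f"image_{i:04d}{ext}"), "wb") as f:
        f.write(data)
```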

This imagery was then combined into 3d models through photogrammetry. It took considerable time to understand how best to configure the software to generate functional meshes from the non-uniform content being run through it.
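
The write-up does not name the photogrammetry package used; as one illustration, COLMAP's automatic reconstruction pipeline can be driven from Python roughly like this (all paths are placeholders):

```python
# Driving a photogrammetry reconstruction from Python. COLMAP is used
# purely as an illustration; the write-up does not say which package
# was actually used, and all paths are placeholders.
import subprocess

IMAGE_DIR = "scraped_images"   # folder of web-scraped photos
WORKSPACE = "reconstruction"   # COLMAP writes its outputs here

# automatic_reconstructor runs feature extraction, matching and
# reconstruction in a single call.
subprocess.run(
    [
        "colmap", "automatic_reconstructor",
        "--workspace_path", WORKSPACE,
        "--image_path", IMAGE_DIR,
    ],
    check=True,
)
```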

During this investigation I developed a number of methods for working with non-manifold or incomplete digital models. These procedures meant that I could produce 3d printed models even from the most disparate web-scraped information.
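
The original repair procedures are not detailed here; as a rough sketch of the kind of cleanup involved, the trimesh library can weld vertices, fix normals and patch holes in an imperfect photogrammetry mesh before printing (the model path is a placeholder):

```python
# Basic mesh-repair sketch using trimesh, one possible toolchain;
# the actual repair procedures are not specified in the write-up.
import trimesh

mesh = trimesh.load("reconstruction/model.obj")  # placeholder path

# Photogrammetry output is often non-manifold: holes, flipped faces,
# duplicated geometry. These passes address the common cases.
mesh.merge_vertices()                # weld duplicated vertices
trimesh.repair.fix_normals(mesh)     # make face winding consistent
trimesh.repair.fill_holes(mesh)      # patch small open boundaries

print("watertight:", mesh.is_watertight)
mesh.export("model_repaired.stl")    # ready for slicing and 3d printing
```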

I feel it is crucial to be able to create physical simulacra of artifacts, and given the scarcity of data sometimes available for this process, the approach presented here offered a clear means of utilising computation for design purposes.



Transitioning between the digital and physical realms of representation and manifestation is a constant point of interest within my work. There are more methods than ever at our disposal for moving between alternate means of representation. These can greatly increase our communicative potential and thereby enrich understanding and discourse. Three alternative methods for approaching this process are presented here.

The first method utilizes generative adversarial networks trained on image-based content. These models use a text-to-image approach to content generation. The rate at which they produce imagery has accelerated to the point where it is now possible to create animations at a relatively rapid pace. For the purposes of this exploration, I created a series of animations from text prompts that I then ran through a photogrammetry process to form 3d objects which could be printed in the real world.

The second method I used was the controlled application of seed numbers in combination with alternate prompts to create a series of individual images that could plausibly represent alternate sides of a single object (a sketch of the idea follows below). This proved particularly effective for the creation of symmetrical objects. I deployed this method in combination with a photogrammetry process to make printable models.
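
The specific generator is not named in the write-up; the sketch below uses the Hugging Face diffusers text-to-image stack as a stand-in for whichever network was actually used. The model name, seed and prompts are illustrative. The point is the technique: a fixed seed anchors the underlying noise while the prompt varies, so the outputs can read as different views of one object.

```python
# Seed-control sketch: one fixed seed, several view prompts.
# diffusers is a stand-in for whichever text-to-image network was
# actually used; model name, seed and prompts are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

SEED = 1234  # the same seed anchors the initial noise for every view
views = ["front view", "side view", "back view"]

for view in views:
    generator = torch.Generator("cuda").manual_seed(SEED)
    image = pipe(
        f"an ornate stone urn, {view}, studio lighting",
        generator=generator,
    ).images[0]
    image.save(f"urn_{view.replace(' ', '_')}.png")
```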

The final method discussed here worked with an emergent form of technology using Zero-Shot Text-Guided Object Generation. Instead of converting a text prompt into an image, this network converts it directly into a mesh-based object through a process similar to that used in neural radiance fields. This can then be easily converted into a real-world object through 3d printing.

It is my belief that by increasing the permeability between the digital and physical domains we can accelerate the dissemination of shared meaning and allow for richer communication.

The design experiment presented here is the result of working between Houdini and Blender for the purposes of animation. During my time exploring these programs, I realised how quickly displacement operations could replace my conventional 3d workflow. Instead of having to fully model an object and its details, I could take a simple geometry and apply a texture to its surface, which I could then displace in order to create detail and form.

Starting with simple planes, I applied 2d imagery to their surfaces. I generated this imagery through generative adversarial networks trained specifically on historical and architectural content. The results allowed for the rapid creation of highly detailed ornamental elements.
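
A minimal sketch of this setup in Blender's Python API might look as follows. The texture path is a placeholder, and the subdivision level and displacement strength are illustrative values rather than the project's actual settings.

```python
# Displacement sketch for Blender's Python API (bpy): subdivide a plane
# and displace it by a generated image. Path and values are placeholders.
import bpy

# Add a plane and give it enough geometry for the displacement to act on.
bpy.ops.mesh.primitive_plane_add(size=2.0)
obj = bpy.context.active_object

subdiv = obj.modifiers.new(name="Subdiv", type='SUBSURF')
subdiv.subdivision_type = 'SIMPLE'  # keep the plane flat, just add density
subdiv.levels = 6
subdiv.render_levels = 6

# Load the generated image as a displacement texture.
tex = bpy.data.textures.new(name="OrnamentTex", type='IMAGE')
tex.image = bpy.data.images.load("//ornament.png")  # placeholder path

disp = obj.modifiers.new(name="Displace", type='DISPLACE')
disp.texture = tex
disp.texture_coords = 'UV'   # drive displacement through the UV layout
disp.strength = 0.15         # illustrative value
```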

Working with this process iteratively, I adapted it for application on other geometries and then began to combine the resulting forms to create representations of ever greater complexity. The process drew on knowledge of UV and texture mapping in Blender which I had acquired from previous projects. It presented one of the fastest means of going from 2d to 3d and preserved the maximum character and detail between the two mediums. The results are highly dependent on both the abilities of the modeller and the content at one's disposal.

One of the greatest benefits of this modelling approach is that, unlike other methods of creating 3d representations from 2d content, it can be relatively light on computational requirements. This is because procedural geometry can be used in the initial phases, which places a far smaller load on the system than more conventional mesh-based approaches.

When you are satisfied with the visible results, it is possible to convert the geometry into a mesh and thereby transform it into a usable asset for 3d printing or any other purpose the model could serve.
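
Continuing the bpy sketch above, that final conversion amounts to applying the modifier stack and exporting the baked mesh. The file path is a placeholder, and the export operator shown is the bundled STL add-on from Blender 2.8x-3.x; newer releases ship a differently named built-in exporter.

```python
# Converting the procedural setup into a printable mesh (continuing the
# bpy sketch above; the file path is a placeholder).
import bpy

obj = bpy.context.active_object

# Bake the modifier stack down to real mesh geometry.
for mod in list(obj.modifiers):
    bpy.ops.object.modifier_apply(modifier=mod.name)

# Export for 3d printing. This operator name is from the STL add-on in
# Blender 2.8x-3.x; newer releases use bpy.ops.wm.stl_export instead.
bpy.ops.export_mesh.stl(filepath="//displaced_model.stl")
```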
