Artificial Intelligence-Aided Design (AIAD) for Biodigital Architecture

Abstract

Biodigital architecture can be portrayed as a design archetype that combines biological paradigms and digital technologies to create novel systems. At its core, biodigital architecture seeks to create designs grounded in natural intelligence while leveraging the latest digital technologies to enhance their performance and functionality.

A key aspect of biodigital architecture is biolearning, a design methodology that draws on principles from biology to guide the design process. By looking to nature for inspiration, designers can create concepts that are better adapted to their environment and perform better.

Artificial Intelligence-Aided Design (AIAD) is becoming increasingly relevant across design fields, allowing users to push the boundaries of conceptual thinking and problem-solving. In particular, diffusion models, or AI image generators, can quickly produce realistic images of potential designs from various input criteria. Examples of diffusion models include Midjourney, DALL-E, and Stable Diffusion.

In this article, we examine the fundamentals of biolearning and biodigital architecture, as well as how AI diffusion models can support design development. We also highlight a number of student projects from the Universitat Internacional de Catalunya's Biodigital Architecture Master program and show how AI-generated images can enhance and complement their designs. By the end of the article, readers should have a clear understanding of how AI can be applied to biodigital architecture to produce more thorough and creative designs.

Keywords:

Artificial Intelligence, Biodigital Architecture, Biolearning, Machine Learning, Computational Design, Biomimicry, Parametric Design, Visualization.

Introduction

Through this article, we explore the use of AI as one piece of a larger puzzle that defines methodological practice in architecture and design.

The field of biodigital architecture studies how biology and digital technologies interact. This could fundamentally alter how we perceive architecture, sustainability, and the built environment. The primary methodology of biodigital architecture is biolearning, which involves employing biological systems and intelligence to inform design and construction.

AI diffusion models such as Midjourney, DALL-E, and Stable Diffusion can create intricate, realistic images from a user's concepts and ideas, and they are becoming increasingly significant in architecture and design. By using these image generators, designers can explore new possibilities and experiment with alternative designs more efficiently.

As the world becomes increasingly digital, the architectural domain is rapidly evolving to stay current. Architects are utilizing AI tools as part of this evolution to enhance and broaden their design capabilities. The application of AI in architecture and design is still in its infancy, however, and the tools' full potential has yet to be realized due to a lack of precision and control. As AI advances, it is imperative that architects integrate it as a tool in their design process.

Biodigital architecture requires a deep understanding of biology and computation, and AI diffusion models can play a crucial role in helping architects explore and visualize their concepts, emphasizing their creative potential. The issue is that architects who do not adopt AI technology run the risk of falling behind in a field that is developing quickly.

The main objective here is to introduce and explore the use of Artificial Intelligence-Aided Design (AIAD) in the context of biodigital architecture. Specifically, we aim to investigate how AI image generators can be used alongside biolearning to improve the design process for biodigital architecture. We will showcase how AIAD can be used to generate images based on student project ideas, and how this can facilitate the design process, particularly from a digital standpoint.

Biolearning and Biodigital Architecture

Biolearning is a learning approach that takes inspiration from the natural world. It is based on the principle that organisms and ecosystems have evolved to become highly efficient and adaptive over time. In biodigital architecture, biolearning is used to design buildings that are responsive to their environment and adaptable to changes. This involves studying biological systems and applying the principles learned to different novel designs.[1, pp. 130–133]

The Institute for Biodigital Architecture and Genetics sets an excellent example of biolearning, encouraging research, design approaches, and theory for incorporating science and technology into advanced architecture. Researchers at the Institute, led by Alberto T. Estévez, pose questions such as:

“What role do biology, genetics, and computation play in developing forms and functions from nature for intelligent buildings?”

“What role will AI and instrumented-assisted visualization take in future design studios? And, how do we begin to express genetic and metabolic potential in-studio projects?”[2]

For instance, studying the way a plant grows towards the sun to optimize its energy intake can inspire the design of a building that uses the same principle to maximize natural lighting and energy efficiency. In other words, biolearning is the integration of natural intelligence into advanced architecture through different morphologies and technological applications. Approaching nature in this way demands a design strategy that incorporates natural growth, protection, and, above all, natural intelligence. Understanding this requires not only observation through our eyes, touch, and other senses, but also new technologies that reach beyond our perception and help decode a mixed intelligence system.

In the Biodigital Architecture Master at UIC, the Biolearning studio is structured around one intensive week, during which students complete three design exercises to develop their methodologies for implementing natural intelligence. These exercises aim to build the students' ability to use biolearning as a foundation for design solutions.

The first exercise focuses on hybridization: students gather two or more natural elements and combine them into a holistic pavilion concept that offers a specific kind of performance. The second exercise studies how nature solves connections and asks students to develop a digital model of a connector. The third exercise develops a digital model of a performative panel, focusing on skins, boundaries, or natural surfaces.

Biolearning is an essential tool for designers to create innovative and sustainable design solutions. The methodology provides a systematic approach to using natural intelligence in advanced architecture, enabling designers to create structures that are not only aesthetically pleasing but also environmentally friendly and sustainable. The methodology also allows designers to develop designs that are unique and different from conventional architectural designs.

Biolearning is thus a critical component to be used alongside AIAD in biodigital architecture.

Using AI Diffusion Models in Biodigital Architecture

Recent advances in artificial intelligence have resulted in the development and improvement of several image-generating models, known as diffusion models, such as Midjourney, DALL-E, and Stable Diffusion. These models have been used extensively in digital art and graphic design fields.

Diffusion models aim to capture the underlying structure of a dataset by simulating how data points gradually diffuse through a latent space. In computer vision, this involves training a neural network to progressively remove Gaussian noise from corrupted images, reversing the diffusion process. Several general diffusion modeling frameworks are used in computer vision, including denoising diffusion probabilistic models, noise-conditioned score networks, and stochastic differential equations.[3]
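To make this concrete, the sketch below shows the forward (noising) half of a denoising diffusion probabilistic model in a few lines of Python; the schedule values and image size are illustrative assumptions, and the learned reverse (denoising) network is omitted.

```python
import numpy as np

# Minimal sketch of the DDPM forward (noising) process:
#   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
# A trained network learns to reverse this, step by step, to recover x_0.

T = 1000                              # number of diffusion steps (illustrative)
betas = np.linspace(1e-4, 0.02, T)    # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)       # cumulative signal retention per step

def noise_image(x0, t, rng=None):
    """Jump directly to step t by mixing the clean image with Gaussian noise."""
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

# Example: a dummy 64x64 grayscale "image" pushed halfway through the schedule.
x0 = np.zeros((64, 64))
x_half = noise_image(x0, t=T // 2)
```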

Midjourney is an independent research laboratory that has developed an AI program of the same name. The program generates images from textual descriptions and has been in open beta since July 12, 2022.[4] It can create realistic textures and lighting effects, which helps architects and designers visualize how their designs will look under real-world conditions. One of the model's most useful features is its image prompting capability.

DALL-E, developed by OpenAI, is an image-generating model that can create high-resolution images from textual prompts. It has been used to generate a wide range of images, from simple objects to complex scenes, and can be used to generate images of buildings, landscapes, and other architectural elements.[5]

Stable Diffusion, like the other two, can generate images from text prompts[6], but because it is open source, its community has developed extensions that generate a form of 3D model from 2D images. These use algorithms to estimate depth maps and other data from 2D images, which can then be used to create inpainted 3D models of architectural designs. A depth map is essentially a grayscale image that encodes the proximity of each pixel to the viewpoint: brighter areas represent pixels closer to the viewer, while darker areas represent those further away.
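As a minimal sketch of what such a depth map encodes, the Python snippet below lifts a grayscale depth image into a rough 3D point cloud, treating brighter pixels as closer to the viewer; the file name and depth scale are hypothetical placeholders.

```python
import numpy as np
from PIL import Image

def depth_map_to_points(path, depth_scale=50.0):
    """Lift a grayscale depth map into a rough 3D point cloud.

    Brighter pixels are treated as closer to the viewer, darker ones as further away.
    """
    depth = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]          # pixel grid (image-plane coordinates)
    zs = (1.0 - depth) * depth_scale     # brighter (closer) -> smaller depth value
    return np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1)   # (N, 3) points

# points = depth_map_to_points("pavilion_depth.png")   # placeholder file name
```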

Using AI diffusion models in architectural design has a number of advantages, one of which is the simplicity and speed with which design concepts can be visualized. This may prove especially useful in the field of biodigital architecture, where the integration of organic forms and systems can be intricate and hard to visualize. Architects and designers can explore various design options without having to start over repeatedly by using AI diffusion models to quickly and efficiently generate a broad range of alternative designs.

Since all of these generators rely on some form of prompting to drive image generation, it is worth understanding how prompts work through the practice of prompt engineering. Prompt engineering refers to the process of designing and refining the prompts, or input text, given to an AI model, with the goal of generating more accurate and realistic images. In the context of AI diffusion models, it involves the careful selection and curation of textual descriptions that accurately convey the desired visual output.[7]

This process often involves a trial-and-error approach to finding the most effective prompts, as well as a deep understanding of the capabilities and limitations of the AI model being used. Prompt engineering has been shown to be a powerful tool in generating highly specific and nuanced visual output, and is becoming increasingly important as AI diffusion models continue to advance and improve.
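One simple way to support this trial-and-error loop is to assemble prompt variants programmatically, as in the hedged sketch below; the base concept and descriptor lists are assumptions chosen to echo the pavilion examples discussed later, not part of any model's API.

```python
from itertools import product

# Sketch of a prompt-engineering loop: combine a base concept with material,
# lighting, and viewpoint descriptors to produce prompt variants, each of which
# can be submitted to a diffusion model and compared against the design intent.

base = "architectural pavilion inspired by oyster mushrooms and grape vines"
materials = ["translucent bio-composite", "woven timber lattice"]
lighting = ["soft diffused daylight", "dramatic evening backlighting"]
viewpoints = ["aerial view", "eye-level view"]

prompts = [
    f"{base}, {m}, {l}, {v}, photorealistic render"
    for m, l, v in product(materials, lighting, viewpoints)
]

for p in prompts:
    print(p)   # in practice, each variant would be sent to the image generator
```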

In addition, there is a technique known as "image prompting," in which generation is guided by an input image. Midjourney is distinguished by its advanced image prompting capabilities, as it allows users to upload a reference image to be combined with a textual prompt or with an entirely different image. The latter technique, termed "blending," transfers the style of one image onto another, generating novel imagery.[8]

Potential Applications of AIAD in Biodigital Architecture

As mentioned earlier, biolearning is a methodology for integrating natural intelligence into architecture through the use of morphologies and technological applications. It involves studying and understanding the behavior of natural elements, such as plants and animals, and incorporating that knowledge into the design process.

In contrast, prompt engineering is a process in which carefully crafted prompts are used to have an artificial intelligence system generate design suggestions based on the user's input. These suggestions can then guide the design process by producing specific design elements.

Since the biolearning methodology involves studying the morphologies of natural elements, images of those morphologies can usually be captured along the way. Diffusion models can then be employed iteratively, combining the information inherent in these images with a clear understanding of the researcher's goals, to produce an array of visualizations that serve as a useful foundation for further analysis.
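As one possible way to realize this iteration with open-source tools, the sketch below conditions a Stable Diffusion image-to-image pipeline (here via the Hugging Face diffusers library) on a captured photograph of a studied morphology; the model identifier, file names, prompt, and parameter values are illustrative assumptions rather than a prescribed workflow.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Sketch: condition an open-source diffusion model on a captured photograph of a
# natural morphology and steer it toward an architectural interpretation.
# The model identifier, file names, and prompt are illustrative placeholders.

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

reference = Image.open("mushroom_gills.jpg").convert("RGB").resize((768, 512))

result = pipe(
    prompt="biodigital pavilion roof derived from mushroom gill structure",
    image=reference,
    strength=0.6,          # how far the output may depart from the reference image
    guidance_scale=7.5,    # how strongly the text prompt is followed
).images[0]

result.save("pavilion_iteration_01.png")
```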

Both biolearning and prompt engineering use intelligence in the design process, but their methods differ. Whereas prompt engineering relies on the computational intelligence of the AI system, biolearning emphasizes natural intelligence.

One way to contrast the two methods is to consider their potential benefits and drawbacks. Biolearning may give designers a better understanding of natural systems and behavior, which could lead to more enduring and effective designs. The complexity of natural systems can pose difficulties, however, since it is challenging to translate that knowledge accurately into architectural design.

On the other hand, prompt engineering could provide designers with a quicker and more effective means of generating design ideas and exploring multiple possibilities. Nevertheless, it may be limited by the quality of the generated output as well as the biases inherent in the artificial intelligence system.

Overall, both biolearning and prompt engineering can offer valuable insights and tools for architectural design. The key is to understand their strengths and limitations and to use them in a way that complements and enhances the design process.

Case Studies

Through some of the student projects completed as part of the Biolearning studio, we will illustrate the potential of AI diffusion models when used in tandem with biolearning to enhance or create novel biodigital concepts.

Mushroom Vine Pavilion

Natalia Andrea Alonzo Ramirez's project focused on the hybridization of grape branches and mushrooms to create a pavilion concept. The central structure of the pavilion was formed by the grape branches, which rose up from the ground to form arches. The mushrooms were used as large parasols that extended from the tips of the branches to the ground to supplement the structure.

Elements used to design the Hybridized Pavilion (mushrooms and grape branches). Image author: Natalia Andrea Alonzo Ramirez, post-processed by the Author.

Natalia incorporated intelligence factors from both the mushrooms and the grape branches into her design. The mushroom gills and textures were used to offer good sound insulation, while their porosity helped with heat dissipation. The grape branches were used to allow for the connection of all the services to the mushroom panels. Finally, the mushroom panels scattered both light and sound toward the different spaces that were created under their shadow.

Pavilion elevations. Image author: Natalia Andrea Alonzo Ramirez, post-processed by the Author.

Now let us consider how this exercise might have been approached using AIAD. First, we know what the student wanted to portray: a pavilion imbued with natural intelligence drawn from oyster mushrooms and grape stems.

Running a simple prompt in Midjourney, such as “/imagine: oyster mushroom architectural pavilion, outdoor opera hall”, yields the following result:

Midjourney generated image of a mushroom pavilion. Image by Author
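For readers working with open-source tools rather than Midjourney, a roughly equivalent text prompt could be issued through a Stable Diffusion pipeline, as in the hedged sketch below; the model identifier and output file name are illustrative, and the result will naturally differ from the Midjourney image above.

```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch: issuing a comparable text prompt through an open-source pipeline.
# The model identifier and output file name are illustrative placeholders.

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "oyster mushroom architectural pavilion, outdoor opera hall",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("mushroom_pavilion_sd.png")
```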

Diving deeper, we can use the image the student captured of the chosen elements and their composition. Restating the prompt with this additional information gives the following result:

Second Midjourney iteration of the mushroom pavilion. Image by Author

Based on the student’s composition, we can observe that the proposed layout resembles the Sydney Opera House. Here we can use a blend prompt and obtain the following result:

Image of the Sydney Opera House (Author: Jasper Wilde, unsplash.com) and the newly generated pavilion (image by the Author via Midjourney).

Conclusion

In this article, we explored the potential of Artificial Intelligence-Aided Design (AIAD) in the context of biodigital architecture. We discussed the concept of biodigital architecture and its application in design, and we introduced the idea of biolearning as a methodology for integrating natural intelligence into advanced architectural designs.

We then explored the use of AI diffusion models, such as Midjourney, DALL-E, and Stable Diffusion, in the context of biodigital architecture. We highlighted the benefits of using AIAD in architectural design, including improved efficiency, and the ability to generate innovative design solutions.

Moving forward, we believe that AIAD has significant potential in the field of biodigital architecture. Using biolearning and AI diffusion models together brings a number of benefits, including quicker design cycles, more variation, and better visualization, and these tools can help designers develop their creative thinking and produce more conceptual ideas. They also have drawbacks, however, such as algorithmic bias and technical constraints, so it is important to use them responsibly.

In terms of the ethical and social implications of AIAD, the use of AI in architectural design raises ethical questions, as any novel technology does. One of the main worries is the possibility that AI will usurp human creativity and design skills, which might result in job losses in the industry, especially for those handling more standard design tasks. It is critical to think about how AI can complement human designers rather than completely replace them in the design process.

In conclusion, AIAD represents a significant advancement in the field of biodigital architecture, offering a powerful tool for generating innovative design solutions. We look forward to seeing the continued development and implementation of AIAD in the years to come.

References

[1]      A. T. Estevez, Biodigital Architecture and Genetics: writings. Barcelona: ESARQ (UIC), 2015.

[2]      D. Dollens, Autopoietic Architecture : Can Buildings Think? Independently published, 2015.

[3]      A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M. Chen, “Hierarchical Text-Conditional Image Generation with CLIP Latents,” 2022.

[4]      A. Hertzmann, “Give this AI a few words of description and it produces a stunning image – but is it art?,” 2022. [Online]. Available: https://theconversation.com/give-this-ai-a-few-words-of-description-and-it-produces-a-stunning-image-but-is-it-art-184363. [Accessed: 09-Jan-2023].

[5]      OpenAI, “DALL·E 2.” [Online]. Available: https://openai.com/product/dall-e-2. [Accessed: 09-Jan-2023].

[6]      K. Wiggers, “Stability AI, the startup behind Stable Diffusion, raises $101M | TechCrunch.” [Online]. Available: https://techcrunch.com/2022/10/17/stability-ai-the-startup-behind-stable-diffusion-raises-101m/?guccounter=1&guce_referrer=aHR0cHM6Ly9lbi53aWtpcGVkaWEub3JnLw&guce_referrer_sig=AQAAACUn1NBG_kko-s2MzqZE7XyZHKwrJWCnDr2viiXdW5rrqjGYk2SnMCXsxmpk7FXAUm3VaRP0IE6. [Accessed: 09-Jan-2023].

[7]      A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, “Language Models are Unsupervised Multitask Learners,” 2019.

[8]      “Midjourney Mastery: A Guide to Using Image Prompts - Metaroids.” [Online]. Available: https://metaroids.com/learn/midjourney-mastery-a-guide-to-using-image-prompts/. [Accessed: 15-Jan-2023].

For other case studies, feel free to contact the author.