2018-10-03




How AI Changes the Sensibility of Architecture 

Chengqi Wang
Lise Haarseth
Suqing Yan


Introduction 

-Michael Hansmeyer

When humans design, we cannot do it without references. The references and experiences we carry are part of us, and through our consciousness they become part of what we design. With the new technology of designing through algorithms, we could free ourselves from our own experience. The architect Michael Hansmeyer uses subdivision algorithms to create new forms that can surprise us. In this process he is not designing the form itself but rather the process that makes the form. Hansmeyer's inspiration is nature: the way a cell can split into two cells that are either copies of each other or distinct from each other. This splitting and folding of algorithms opens up many architectural possibilities.


Once the algorithm is coded, it can fold a million times faster. It can fold anything and can yield hundreds of different variations. The process starts with the same volume, but the folding ratio and the location of each fold can be changed, so the same base with a different folding ratio can produce something totally different. With this type of folding, the folds can fold themselves; they can stretch and tear. The folding can also start from an existing form, in which case a set of rules has been established: give the algorithm some boundaries and you will get a family of forms. This design process was used in Digital Grotesque, one of Hansmeyer's projects in collaboration with Benjamin Dillenburger.
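As a rough illustration of the idea, a toy folding routine can be sketched in a few lines of Python. Everything here (the 2D profile standing in for a volume, the folding ratio, the jitter that "folds" each segment outward) is a simplified, invented stand-in for Hansmeyer's 3D process, not his actual code:

```python
import random

def subdivide(points, ratio, depth, jitter=0.1):
    """Recursively split each segment at `ratio` and displace the new
    point: a 2D profile stand-in for subdivision-based folding."""
    if depth == 0:
        return points
    new_points = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        # split the segment at the chosen folding ratio ...
        mx = x0 + (x1 - x0) * ratio
        my = y0 + (y1 - y0) * ratio
        # ... and fold the new point outward by a small random offset
        my += random.uniform(-jitter, jitter)
        new_points.extend([(mx, my), (x1, y1)])
    # each level halves the jitter, so detail appears at multiple scales
    return subdivide(new_points, ratio, depth - 1, jitter * 0.5)

profile = [(0.0, 0.0), (1.0, 0.0)]      # the same simple base every time
variant_a = subdivide(profile, 0.5, 6)  # one folding ratio ...
variant_b = subdivide(profile, 0.3, 6)  # ... another gives a different family
print(len(variant_a))                   # 65 points from 6 folding passes
```

Running the same base through different ratios is exactly the "family of forms" idea: the rules stay fixed, only the parameters change.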

Case studies

-Digital grotesque

This is a room full of ornament and new textures designed by algorithms. According to the designers, it is the first 3D-printed human-scale room. The room is an invitation to appreciate what we cannot even fully take in: there is no single point of focus, and there are so many details that one could discover new forms within the walls for hours. The room weighs 11 tons and measures 16 square meters.


In this project, the algorithms take the surfaces of an existing form and divide them into smaller surfaces, which are then divided again and again. By altering the division ratios, the geometry of the form can be controlled. A simple input form is recursively refined and enriched, culminating in a geometric mesh of 260 million individually specified facets. A single subdivision process thus produces forms that contain information at multiple scales, yielding complex geometries in only a few steps. The subdivision process itself remains the same, but changing the input and the division ratios creates different forms and different rules for the division. What Hansmeyer wanted to achieve in this project was a maximal articulation of the surfaces, creating a sense of depth in the volume. The closer one gets to the form, the more features are discovered; some details are so microscopic that they can barely be seen by the human eye. Such detail and richness can only be designed by algorithms; it would have taken a human a year or longer to draw the sections for this form.
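To get a feeling for the scale, a quick calculation shows how few recursive steps it takes to pass the 260 million facets mentioned above, assuming each facet splits into four children per step (the split factor is an assumption for illustration; the project's actual rules vary per surface):

```python
# Count how many 4-way subdivision passes exceed 260 million facets.
facets = 1
steps = 0
while facets < 260_000_000:
    facets *= 4
    steps += 1
print(steps, facets)  # 14 steps already yield 268,435,456 facets
```

This exponential growth is why "only a few steps" suffice, and also why no human could specify each facet by hand.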



-Digital grotesque II

This project is a full-scale 3D-printed grotto. It explores new relations between designer and computer and inhabits a space between the natural, order and chaos: a world full of surprises to be found by exploring its surfaces. In Digital Grotesque II the design is driven entirely by algorithms, to which the architects gave a set of rules to create a base as different from a plain box as possible. The input was a layered structure that does not reveal itself at first glance, forcing the viewer to discover new aspects of the shape through feeling. The computer learns to calibrate the form with the goal of evoking emotion and interest in the beholder. The architects were not able to see the result before the shape was printed; through this project they wanted to see what the computer was capable of designing. In Digital Grotesque there are 16,386 design variations and 260 million individual surfaces.



‘in digital grotesque II, we sought to develop new design instruments,’ the architects describe. ‘we viewed the computer not as parametric system of control and execution, but rather as a tool for search and exploration. the computer was a partner in design who proposed an endless number of permutations, many of which were unforeseeable and surprising. further, the computer was able to evaluate its generated forms in respect to an observer’s spatial experience. it learned to evolve these forms to maximize their richness of detail and the number of different perspectives they offered.’



This way of designing gives the architect a whole new role. Designing through algorithms yields an overwhelming number of details and possibilities, so the architect's job becomes exploring these different possibilities and variations of form. Hansmeyer designed with the inspiration of cell division, but says the algorithms can be cultivated in many other ways, in terms of population, permutation and crossing, and one can explore what kind of design each of these yields. The role becomes more that of a curator, steering the process and leading the design toward a specific goal. In the making of Digital Grotesque, the computer produced thousands of design proposals at every step, and the architects say they learned something at every step of choosing specific solutions: a new field of design possibilities, an extension of human imagination. The computer has the potential to change the foundation of the discipline of architecture. Advanced, repetitive design processes like these could lead to a new face of the built environment and of how we engage with it, a new architectural phase inspired by novelty and fantasy.



“I think we’re at the point where we’re using the computer to extend our imagination, to try to let the computer find things that perhaps we wouldn’t have thought of, to use the computer as a tool that can surprise us,” Hansmeyer says.



-High Resolution Architecture

The main difference between the way we think and the way computers solve problems is that our brains are not built to hold big data. When we have to deal with too many facts and numbers, we inevitably give some up, or compress them into shorter symbols that we can use more easily. These symbols make us forget details so that we can focus on the essentials. A computer, however, can scan any quantity of letters and numbers at any time and does not need to store anything in any particular order. Sorting letters is a metaphor for our general way of thinking, while computers can search without sorting.



For instance, if humans need to build a house with 1 million different bricks, our natural aversion to data we cannot handle drives us to simplify the bricks. First, we standardize them so that we can assume they all have the same properties. Then we arrange them in regular rows with simple geometric forms so that we can ignore the differences in material and physical shape between individual bricks. Most modern buildings are built with clean, simple outlines, whereas in premodern times a craftsman could build aesthetic structures without a blueprint, following only his talent, imagination and inspiration from nature. Today this time-consuming process can be taken over by the computer: we can calculate, rotate and fabricate each brick one by one with 3D printing or robotic arms. Micro-designing every tiny particle of a building down to a minimum size can save building materials, energy, labor and money, and can produce buildings that perform better. It will also create new types of buildings and aesthetics. Alisa Andrasek works on the integration of design, computer science and fabrication technology, bringing artificial intelligence into architectural design through microstructures.
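The contrast can be made concrete with a small sketch: instead of one standardized orientation for every brick, a simple field assigns each brick in a wall its own rotation angle. The sine-based field here is a made-up example for illustration, not Andrasek's actual method:

```python
import math

def brick_orientations(rows, cols):
    """Give every brick in a rows x cols wall an individual rotation
    angle (degrees), driven by a smooth illustrative field."""
    wall = []
    for r in range(rows):
        for c in range(cols):
            # each brick's angle depends on its position, so no two
            # regions of the wall look identical
            angle = 15.0 * math.sin(0.4 * r) * math.cos(0.3 * c)
            wall.append({"row": r, "col": c, "angle": round(angle, 2)})
    return wall

wall = brick_orientations(20, 50)  # 1,000 individually specified bricks
print(len(wall), wall[0]["angle"])
```

A robotic arm can place each of these 1,000 distinct orientations as easily as 1,000 identical ones, which is precisely the point: the cost of variation collapses.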



-Cloud Pergola

Andrasek's study of high-resolution microstructures shows that these microstructures are information-rich, designed in conjunction with algorithms and AI, and built by robots. The pavilion's cloud structure is formed by voxels oriented along a vector field designed with a multi-agent algorithm; this mathematical cloud resonates with the complexity of real cloud formation. The vectorized swarm is captured by the n-dimensional building structure, creating a dynamic interference pattern that drifts in and out of visibility, pulling visitors through an astonishing experience like an invisible gravity. Through the Wonderlab research, many examples of original microstructures were developed using combinations of various algorithmic strategies. These explore highly complex design and programmable multi-material printing to achieve ultra-high performance (structural, thermal, acoustic and material-saving) and extended aesthetic possibilities. One example is a robotically 3D-extruded lattice structure, designed with micro-precision and suitable for large-scale applications in building and product design.



Bruno Juričić, the curator of Cloud Pergola, states that the pavilion pushes the aesthetic, spatial and structural consequences of the emerging intelligent paradigm of architecture, art and engineering by presenting a full-size pergola structure made with 3D robotic manufacturing and automated design protocols. Cloud Pergola is envisioned as an example of what 21st-century architecture should represent.



Traditionally, architects designed for standard labor-intensive manufacturing methods. Now designers can use robots to produce almost anything. This new manufacturing model opens up the possibility of producing very complex designs driven by data, performance and novel aesthetics. Cloud Pergola is an example of a strong, lightweight structure with an almost immaterial aesthetic quality.





Fig. High-resolution architecture created with Processing code, from Alisa Andrasek's studio. Image: Suqing Yan.



-Dreamcatcher

Unlike conventional design programs, AI is an interconnected, self-designing system that can upgrade itself. AI will work alongside design tools and provide multiple design options: the architect inputs project parameters, much like a design brief, and the AI returns a range of solutions that meet the project's requirements.

Autodesk is working on an experimental CAD system called Project Dreamcatcher. It can generate thousands of design options that all fulfill specific design goals. Dreamcatcher can take advantage of a wide range of design input data, such as formulas, engineering requirements, CAD geometry and sensor information, and the research team is now experimenting with Dreamcatcher's ability to accept sketches and text as input.

Through a dedicated, scalable and parallelized cloud computing framework code-named Saturn, Dreamcatcher is able to generate and evaluate solutions that far exceed the capabilities of traditional systems. Saturn provides the high-performance computing infrastructure necessary to run computationally intensive optimization and analysis engines, including multiphysics simulation.

One can imagine that in the future architects will spend less time drawing and more time specifying the problem, so that they work in closer synchronization with the machines on a project.

In-depth research on AI self-learning

-A brief introduction to the AI self-learning process

Deep learning is the paradigm that changed the artificial-intelligence landscape within only a few years. It considerably pushes the boundary of tasks that can be automated, changes the way products are developed, and is available to virtually everyone.


-1.1 Artificial Neural Networks (ANNs)

There were two milestones in the early development of ANNs. The first came in 1943, when McCulloch and Pitts published the first model of a biological neuron: the cell body weights and sums the inputs X, applies a nonlinear activation function, and finally outputs the result Y.

The second milestone was Rosenblatt's perceptron in 1958, in which the input X is fed through the connections of a feed-forward network to generate the output Y directly.
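The neuron just described fits in a few lines of Python. The step activation and the particular weights (chosen so the unit computes logical AND) are illustrative choices, not part of the original papers' notation:

```python
def neuron(x, w, b):
    """McCulloch-Pitts-style unit: weight and sum the inputs X, then
    apply a nonlinear (here: step) activation to produce Y."""
    s = sum(xi * wi for xi, wi in zip(x, w)) + b
    return 1 if s > 0 else 0

# A single perceptron-like unit with hand-picked weights that happens
# to compute logical AND (toy weights, for illustration only):
w, b = [1.0, 1.0], -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, neuron(x, w, b))  # fires only for (1, 1)
```

In Rosenblatt's perceptron the weights were learned from examples rather than hand-picked; everything else about the forward pass is this simple.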



-1.2 Convolutional Neural networks (CNNs)

In 2012, the winner of the yearly "ImageNet Large Scale Visual Recognition Challenge" (ILSVRC) dropped the error rate from 26% to 16% by using a CNN known as AlexNet.

Deep convolutional networks are built from a sequence of convolutional and pooling layers, followed by one or two fully connected layers that collect the data from the convolution and pooling process to produce the final output Y, much as human learning gains new knowledge from experience.
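The two building blocks named above can be sketched in plain Python on a tiny grayscale image. The 2x2 kernel and the 5x5 input are made-up toy values; real CNNs learn their kernels and stack many such layers:

```python
def conv2d(img, kernel):
    """Valid 2D convolution (strictly, cross-correlation, as in most
    CNN libraries) over a small image given as nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def maxpool2d(img, size=2):
    """Non-overlapping max pooling, halving each spatial dimension."""
    return [[max(img[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(img[0]) - size + 1, size)]
            for i in range(0, len(img) - size + 1, size)]

image = [[1, 0, 0, 0, 1],
         [0, 1, 0, 1, 0],
         [0, 0, 1, 0, 0],
         [0, 1, 0, 1, 0],
         [1, 0, 0, 0, 1]]
edge = [[1, 0], [0, -1]]  # toy 2x2 kernel responding to diagonal change
features = maxpool2d(conv2d(image, edge))
print(features)  # -> [[0, 1], [1, 0]]
```

Convolution detects a local pattern wherever it occurs; pooling then keeps only the strongest response per region, which is what lets deeper layers see the image at coarser and coarser scales.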

Moreover, in recent years, ResNet (2015), an ensemble of CNNs (2016) and Squeeze-and-Excitation networks (2017) brought the error rate down to 3.57%, 2.99% and 2.25% respectively, beyond the human performance of about 5%. This means that these networks, or modifications thereof, can form the basis for more complex tasks such as object detection, or even the detection or generation of aesthetics in more abstract areas, e.g., architecture.


-1.3 Recurrent Neural networks (RNNs)
Besides visual perception tasks, the second field that advanced substantially with deep learning is sequence processing, such as natural-language understanding or automatic translation. Broadly speaking, these networks consist of layers that feed the output back into the input again and again through a hidden state H, which can be unrolled through time. They can perform many different types of tasks, depending on the configuration of X, Y and H.
Sequence-to-vector networks take an input sequence X and output only a final value Y; they can recognize speech or even sentiment. Vector-to-sequence networks are generators, producing an output sequence from a single input vector, which gives them the ability to generate text or speech, or even more abstract things such as orders or logics in architecture. Sequence-to-sequence networks map one sequence onto another, such as translating Chinese into English, or translating a natural phenomenon into an architectural language.
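The X/H/Y loop described above can be shown with a scalar toy cell. The weights are arbitrary illustrative values, not a trained model; a real RNN would use weight matrices and learn them:

```python
import math

def rnn_step(x, h, w_x, w_h, b):
    """One recurrent step: the new hidden state H depends on the
    current input X and the previous hidden state."""
    return math.tanh(w_x * x + w_h * h + b)

def sequence_to_vector(xs, w_x=0.5, w_h=0.8, b=0.0, w_y=1.0):
    """Run the cell over the whole sequence and keep only the final
    output Y, as a sequence-to-vector network does."""
    h = 0.0
    for x in xs:        # the loop IS the network unrolled through time
        h = rnn_step(x, h, w_x, w_h, b)
    return w_y * h      # final value Y

y = sequence_to_vector([1.0, -0.5, 0.25, 1.0])
print(round(y, 3))
```

Because H is carried forward at every step, the final Y depends on the whole sequence and its order, which is exactly what distinguishes an RNN from a feed-forward network.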

Moreover, by using 'Long Short-Term Memory' (LSTM) cells, RNNs can form very deep networks that handle long sequences the way human beings do. An additional long-term state is combined with the normal state, making it possible to build deeper networks that process more complex data without a prohibitive increase in time, while keeping the error rate of the final outcome at an acceptable level.











-1.4 Reinforcement Learning (RL)

This learning method differs from the others in that it uses 'rewards' to learn, which makes it suitable for control tasks such as robot movement or automated driving.

The importance of this deep-learning method is twofold: first, it can form neural-network policies; second, it can estimate the expected rewards from raw input data, as in deep Q-learning.




-1.5 Deep Q-learning

Within reinforcement learning, deep Q-learning estimates expected scores and thereby derives the best approach to fulfill the target. Compared with policy gradients, the deep Q-network (DQN) estimates the Q value as its outcome instead of action probabilities, and the policy is derived from training the critic DQN rather than from gradients supplied by humans. Q-learning is not suitable for all settings, but it often performs faster and more accurately than policy gradients.
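The core of the method is the Q-value update itself, which is easiest to see in its tabular form (the table-based ancestor of deep Q-learning, where a neural network replaces the table). The corridor environment, rewards and hyper-parameters below are all invented for illustration:

```python
import random

# Tabular Q-learning on a 1-D corridor: the agent starts at cell 0 and
# earns a reward of 1 only by reaching the rightmost cell.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: move left / right
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the current Q table, sometimes explore
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda a: Q[(s, a)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward
        # reward + discounted best future value
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

greedy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(greedy)  # the learned policy should move right everywhere
```

A DQN replaces the dictionary `Q` with a network that maps raw input data to Q values, but the update rule it is trained against is this same expression.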


-2 The drawbacks of current Artificial intelligence

This science is not all rosy and is still far from flawless; many challenges remain to be overcome by specialists in the field.



-2.1 Hyper-parameter
Certain parameters have their values defined before the learning process begins. However, any minor change in the value of these parameters can bring significant changes to the model, which makes it difficult to maintain the stability of the whole learning process.



-2.2 Trial-and-Error Learning

Neural networks are by nature a black box, as their operations are opaque to humans (Garnelo et al. 2016). Deep learning builds computational models from multiple layers to learn abstract data, but it is hard to define how much depth is sufficient for a full understanding of a given task.



-2.3 Brittle Nature

A trained network performs well only on the task it was trained for, and performs poorly on most new tasks.



-2.4 Ex post High-Dimensional Path Attribution

Deep-learning systems usually take raw data as the input X to achieve a desirable outcome. However, the attribution of outcomes to the actions that produced them rests on complex temporal relationships and objective functions, which makes it hard to trace after the fact.



Conclusion

-The possible application of those method into Architecture field.

Drawing on the architectural case studies above (Digital Grotesque, Digital Grotesque II, Cloud Pergola and Dreamcatcher) and on the deep-learning research, we propose some ways to combine AI with architecture more closely and make the process more automatic.

1. From Hansmeyer's projects, it is quite obvious that algorithms play the role of an assistant helping architects explore the boundary of their imagination. However, the final outcome cannot be known until the algorithm has finished subdividing the mesh, as the subdivision level is too fine for human beings to follow. Because of this uncertainty, the models generated by those projects tend to be ornamental sculptures without much practical function.

So we ask whether it is possible to add deep Q-learning to this process, so that the complex subdivision is steered toward a specific desired target, e.g., a new kind of structure, or ornament with clear legibility. A deep Q-learning network has a very clear Q value and can generate the most optimized option by using data collected from past outcomes or experimental results to fulfill that value.



2. Can we really make artificial intelligence help architects define a space of high complexity, instead of only generating it?

From Andrasek's study of high-resolution microstructures, it is quite clear that artificial intelligence has the ability to generate spaces with highly complicated characteristics. However, after the generation, architects still need to study the different parts of the generated object in detail to define what each space can be used for, so the process can hardly be called fully automated. Alisa Andrasek's works are examples of a new type of architecture with powerful, lightweight structures and striking aesthetic qualities.

For this reason, we ask whether it is possible to use sequence-to-vector recurrent neural networks to make artificial intelligence help architects define the purpose of a space. First, the generated geometry is sliced into a large number of sections, which form the input X to a pre-trained system able to recognize the characteristic sections of different functional spaces. The RNN then compares the input X with its learned data to produce the outcome Y: the function of that part of the geometry.
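The shape of that pipeline, slice first, classify each section, can be sketched as follows. The voxel model, the "openness" rule and both function names are invented stand-ins; a real implementation would replace the rule-based classifier with the trained sequence-to-vector RNN described above:

```python
def slice_sections(voxels):
    """Split a 3D occupancy grid (a list of z-layers) into 2D sections,
    the input sequence X for the proposed network."""
    return [layer for layer in voxels]

def classify_sections(sections):
    """Toy stand-in for the pre-trained classifier: label a section by
    its open-area ratio (an illustrative rule, not a learned model)."""
    labels = []
    for sec in sections:
        cells = [c for row in sec for c in row]
        openness = 1 - sum(cells) / len(cells)
        labels.append("circulation" if openness > 0.5 else "structure")
    return labels

# Two 3x3 sections of a tiny model: one mostly open, one mostly solid.
model = [
    [[0, 0, 0], [0, 1, 0], [0, 0, 0]],
    [[1, 1, 1], [1, 0, 1], [1, 1, 1]],
]
print(classify_sections(slice_sections(model)))
```

The output labels are the outcome Y per section; assembling them back along the slicing axis would give the architect a first functional reading of the generated geometry.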



Reference

“Croatian Pavilion presents Cloud Pergola as the most complex 3D Printed structure” https://worldarchitecture.org/article-links/ehghn/croatian_pavilion_presents_cloud_pergola_as_the_most_complex_3d_printed_structure_in_venice_biennale.html (September 25, 2018)

“Excessive Resolution: Artificial Intelligence and Machine Learning in Architectural Design”, Mario Carpo, June 1 2018, https://www.architecturalrecord.com/articles/13465-excessive-resolution-artificial-intelligence-and-machine-learning-in-architectural-design (September 25, 2018)

“High Resolution Architecture” https://www.alisaandrasek.com/ (September 25, 2018)

“Project Dreamcatcher”, Autodesk Research https://autodeskresearch.com/projects/dreamcatcher (September 25, 2018)

“Building unimaginable shapes”, La Monnaie / De Munt https://www.lamonnaie.be/en/mmm-online/1088-building-unimaginable-shapes (September 25, 2018)

“This Is The Most Complex Architectural Structure In History”, Fast Company https://www.fastcompany.com/90109358/this-is-the-most-complex-architectural-structure-in-history (September 25, 2018)

“Michael Hansmeyer - Digital Grotesque I”, Michael Hansmeyer - Computational Architecture http://www.michael-hansmeyer.com/digital-grotesque-I (September 25, 2018)

“'digital grotesque II': a 3D printed grotto with 1.35 billion algorithmically-generated surfaces”, designboom|architecture&designmagazine https://www.designboom.com/architecture/digital-grotesque-grotto-2-3d-printed-michael-hansmeyer-benjamin-dillenburger-07-14-2017/ (September 25, 2018)

“Grotto”, Benjamin Dillenburger https://benjamin-dillenburger.com/grotto/ (September 25, 2018)

“Digital grotesque” MAS Context http://www.mascontext.com/tag/digital-grotesque/ (September 25, 2018)

“Deep learning; Evolution and expansion”, Ritika Wason, Available online 31 August 2018 (September 25, 2018)

“Component-based machine learning for performance prediction in building design”, Philipp Geyer, Sundaravelpandian Singaravel, Available online 3 July 2018 (September 25, 2018)

“An overview of deep learning techniques”, Michael Vogt, Available online 13 July 2018 (September 25, 2018)

“Balancing homogeneity and heterogeneity in design exploration by synthesizing novel design alternatives based on genetic algorithm and strategic styling decision”, Kyung Hoon Hyun, Ji-hyun Lee, Available online 20 June 2018 (September 25, 2018)
