2018-09-26

ChristopherLeungCheun_ChowLengYan_YifanZhou_ArtificalAutomation_Final Paper

Architectural Automation
Automate Melbourne Elective

Final Paper

Tutors
Sandra Manninger
Matias Del Campo

Sem 2 2018

Student Names:
Christopher Leung Cheun (s3415469)
Chow Leng Yan (3741522)
Yifan Zhou (s3694987)



Restructuring Our Cities through Deep Neural Networks

Abstract

Deep neural networks (DNNs) have been applied extensively across disciplines and professions, including system identification and control, decision making, pattern recognition, medical diagnosis, finance, data mining, and visualization. The large number of free parameters (the weights and biases between interconnected units) gives a deep neural network the flexibility to fit highly complex data. With advances in computing and networking technology, each profession is expanding its knowledge of neural networks by researching how such tools might shape the future of its line of work. Speculating on how such a tool could affect the architectural profession, we propose a hypothetical model for city planning. Cities around the world have evolved in different ways according to their culture, location, population density and circumstances, and a wide range of data is being captured and stored in order to analyse their potential growth. We suggest that the model complexity of deep neural networks is well suited to taking in huge datasets covering the different design aspects of city planning.

Introduction


In this paper, we identify current applications of deep neural networks and the working principles that informed our speculation. Taking inspiration from Invisible Cities[1] by Gene Kogan and his team, we ask whether we could style-transfer one city model onto another city's context and thereby, potentially, change the lifestyles of that city's people. Using networking technology such as Big Data as the source of inputs to train and implement our speculated model, this paper identifies current uses of deep learning neural networks and how they could change architectural city planning by training the machine on specific areas, so that parts of the city-planning process could perhaps be fully automated.

1 What is a Deep Neural Network?

A deep neural network is, in general, a technology built to simulate the activity of the human brain, specifically pattern recognition and the passage of input through various layers of simulated neural connections[2]. It allows computational models composed of multiple processing layers to learn representations of data with multiple levels of abstraction. Taking particular interest in two existing applications of deep neural networks, AlphaGo and the Conditional Generative Adversarial Network (cGAN), we discuss how concepts from both case studies could be applied in our speculative model to redesign the circulation of cities.

1.1 Current Applications of Deep Neural Network

1.1.1 AlphaGo

AlphaGo is an Artificial Intelligence (AI) system designed by Google's DeepMind team. In May 2017 it beat Ke Jie, then the world's top-ranked player, in a three-match series of Go, an ancient and highly complex strategic board game. An earlier version, set loose on an open online platform against the world's top Go players, kept an undefeated record[3]. AlphaGo uses artificial neural networks that train the machine to become smarter by learning through several generations of failures: the machine gradually learns to estimate the probability of winning by simulating and predicting the opponent's moves in relation to the current situation. The AI is designed within a supervised environment and generates its own Big Data by playing against itself over and over again; initially it has no knowledge of how to play, but through training it eventually learns which moves lead towards a win. In other words, the longer the AI is trained, the more accurate it becomes and the more data it generates.

Incorporating data into the AI is the most important part of the system, as this is the starting point the AI cross-references against, and most often Big Data from other sources is used. Big Data is a term used to refer to the study and application of data sets so big and complex that traditional data-processing software is inadequate to deal with them[4].

AlphaGo is based on deep learning, and early versions of AlphaGo learnt their moves from the games of both amateur and professional players at different levels. AlphaGo Zero had no such help. Instead, it learned purely by playing against itself millions of times. It began by placing stones on the Go board at random but swiftly improved as it discovered winning strategies. With the big data this self-play generates, a machine-learning algorithm can learn to reproduce the behaviour. Collecting the data is far faster than trying to understand and provide for every single eventuality, which means progress on the AI front is accelerating. The AI itself does not reason and deduce the way human minds do; instead, it learns through trial and error from the stored data. Researchers are now trying to apply this knowledge to more complex games played by humans, such as StarCraft II, one of the most complex strategy games, in order to build smarter computer opponents.
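AlphaGo Zero's learning loop (self-play, record outcomes, improve the policy) can be illustrated with a deliberately tiny stand-in. The sketch below is not AlphaGo's actual algorithm (which combines deep networks with tree search); it is a toy game of Nim in which a sensible policy emerges purely from random self-play and win statistics. All names and numbers here are our own illustration.

```python
import random

# Toy stand-in for learning by self-play (NOT AlphaGo's real algorithm):
# Nim with 10 stones, each turn removes 1-3 stones, and whoever takes the
# last stone wins. The agent plays randomly against itself, records which
# moves led to wins, and a greedy read-out of those statistics recovers
# good play, with no rules of strategy ever programmed in.
def self_play_train(episodes=50000, seed=0):
    rng = random.Random(seed)
    wins, plays = {}, {}          # (stones_left, move) -> counts
    for _ in range(episodes):
        stones, player, history = 10, 0, []
        while stones > 0:
            move = rng.randint(1, min(3, stones))
            history.append((player, stones, move))
            stones -= move
            player ^= 1
        winner = history[-1][0]   # the player who took the last stone
        for who, s, m in history:
            plays[(s, m)] = plays.get((s, m), 0) + 1
            if who == winner:
                wins[(s, m)] = wins.get((s, m), 0) + 1

    def policy(stones):           # pick the move with the best win rate
        moves = range(1, min(3, stones) + 1)
        return max(moves, key=lambda m: wins.get((stones, m), 0)
                                        / plays.get((stones, m), 1))
    return policy

policy = self_play_train()
```

Optimal Nim play leaves the opponent a multiple of four stones, and the learned policy converges on exactly that behaviour, mirroring in miniature how AlphaGo Zero's random opening play hardens into strategy.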

While AlphaGo Zero is a step towards a general-purpose AI, it can only work on problems that can be perfectly simulated in a computer, and AI that matches the full range of human skills is still a long way off. Over the next decade, AI could help humans discover new drugs and materials and crack mysteries in particle physics. "I hope that these kinds of algorithms and future versions of AlphaGo-inspired things will be routinely working with us as scientific experts and medical experts on advancing the frontier of science and medicine," Hassabis said[5].

Fig. 1  Example of a designed ecosystem named Eden by Jon McCormack[6]
In the meantime, Eden, an interactive, self-generating artificial ecosystem designed by Jon McCormack, shows how an AI can learn about its environment by itself, living like an ecosystem. Creature-like cells generated by an evolutionary algorithm discover where there is food or a potential mate and pass that knowledge on to the next generation of creatures before they die, recycling what they have learned. This shows the same characteristic as AlphaGo Zero: both create and learn from failures to become better versions of themselves, perfecting their situations and moves[7].

Both technologies are useful references for our speculation. For the city style transfer, we would need an AI to analyse the pedestrian data of Melbourne; with this kind of failure-checking procedure applied to the analysis, the AI could train itself on current human conditions rather than on historical data alone, and design near-optimal proposals, potentially far better than what humans have produced.

1.1.2 Neural Style Transfer (Background)

Art can be identified in many forms, from painting to architecture. Artistic movements have evolved over time, and people have always sought novel ways to communicate their ideology or visions; many have tried to replicate or implement those styles in their own work. However, such work requires a well-trained artist and a great deal of time to develop the necessary skill set.

Since the mid-1990s, the art theories behind famous artworks have attracted not only artists but also many computer-science researchers, and there are plenty of studies exploring how to automatically turn images into counterfeit artworks. Non-Photorealistic Rendering (NPR)[8,9,10] algorithms are designed for a specific artistic style and are not easily transferable to other styles, as doing so requires enormous domain knowledge and coding skill. Style transfer, by contrast, is the process of extracting texture from a source and transferring it to one or more targets[11]. Gatys et al.[12] were the first to study how a Convolutional Neural Network (CNN) could be applied to replicate famous painting styles on natural images. The CNN is a class of machine-learning model most commonly applied to analysing visual imagery, inspired by biological processes in the brain[13]: the connectivity pattern between its neurons resembles the organisation of the animal visual cortex. Such studies have helped computer scientists develop algorithms that enable a machine to learn from input and output data, leading to results that are unpredictable or desired within a monitored environment. These tools help categorise elements in an autonomous process, and more research is being done on how they could be applied to other fields of work.
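The core of the Gatys et al. method is that "style" can be summarised by the correlations (Gram matrices) between a CNN's feature channels, while "content" lives in the feature activations themselves. The sketch below computes just that style statistic with NumPy on toy feature maps; in a real implementation the features would come from a pretrained network such as VGG, and this loss would be minimised by gradient descent on the generated image.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map: the
    channel-by-channel correlations that Gatys et al. use to represent
    'style' independently of the spatial layout of the image."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(generated, style):
    """Mean squared difference between the two Gram matrices."""
    return float(np.mean((gram_matrix(generated) - gram_matrix(style)) ** 2))

# Toy feature maps standing in for CNN activations of two images.
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 4, 4))
b = rng.normal(size=(8, 4, 4))
loss_same = style_loss(a, a)   # identical 'styles' -> zero loss
loss_diff = style_loss(a, b)   # different 'styles' -> positive loss
```

Because the Gram matrix discards spatial positions, two images with very different layouts can still share the same "style" signature, which is exactly what makes style transferable between unrelated contents.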

1.1.3 Invisible Cities - Conditional Generative Adversarial Network (cGAN)


Fig. 2  Example of a neural style transfer algorithm that transfers the learned characteristics of one city onto the figure ground of another through deep learning. The generated images are from "Invisible Cities" by Gene Kogan, Gabriele Gambotto, Ambhika Samsen, Andrej Boleslavsky, Michele Ferretti, Damiano Gui and Fabian Frei [14]


Invisible Cities is a project made during the Machine Learning for Artists workshop at OpenDot, where participants interrogated the use of the Conditional Generative Adversarial Network (cGAN) to generate new, non-existent but realistic images that retain a certain set of features from what the network has seen in the past. The project uses geographical data from online platforms; through a series of conversions, city elements (roads, green space, buildings, water bodies) are labelled with a set of colours, and the neural network processes these aerial images and learns the correspondence between them, eventually generating output quite similar to the original city[15]. This shows that an Artificial Intelligence (AI) can understand input data from different cities and produce something that looks realistic or believable in another context.

In this project, a neural network was trained to translate map tiles into satellite images, with individual models for several cities (Milan, Venice and Los Angeles) in which large amounts of image data were cross-referenced between the figure ground and the image style (context). The satellite images and collected data are used as training input, allowing city-map style transfer: the aerial model of one city is applied to the map tiles of another, and the transfer also works with hand-drawn sketches[16]. The process becomes fast because the trained machine already understands the different input scenarios and can generate output in real time.
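The colour-labelling step described above can be made concrete: before training, each map tile is reduced to a small palette in which every colour stands for one city element. The sketch below decodes such a tile back into class labels by nearest palette colour; the palette itself is invented for illustration and is not the actual Invisible Cities colour scheme.

```python
import numpy as np

# Hypothetical colour scheme (the real Invisible Cities palette may differ):
PALETTE = {
    (0, 0, 0):   "road",
    (0, 255, 0): "green_space",
    (255, 0, 0): "building",
    (0, 0, 255): "water",
}

def decode_labels(rgb_image):
    """Map each pixel of an RGB label tile to the nearest palette colour
    and return a (height, width) array of class names."""
    colours = np.array(list(PALETTE.keys()), dtype=float)   # (k, 3)
    names = list(PALETTE.values())
    pixels = rgb_image.reshape(-1, 3).astype(float)         # (n, 3)
    # Squared distance from every pixel to every palette colour.
    d = ((pixels[:, None, :] - colours[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)
    return np.array(names, dtype=object)[idx].reshape(rgb_image.shape[:2])

# A slightly noisy 2x2 tile, as might come from a compressed map image.
tile = np.array([[[0, 0, 0],   [10, 240, 5]],
                 [[250, 0, 0], [0, 5, 250]]], dtype=np.uint8)
labels = decode_labels(tile)
```

Nearest-colour decoding makes the labelling robust to compression noise in the source tiles, which matters when the tiles are scraped from online map platforms rather than rendered cleanly.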

Rather than training the machine on pixel-based images, could such a methodology be applied to voxel data of the context, where Big Data would improve the accuracy of the generative outcome? As with AlphaGo Zero, which taught itself purely by playing against itself millions of times in simulation, could we train an AI that learns from human behaviour across all the cities of the world and proposes winning strategies to help different cities evolve towards better, more efficient or more optimised versions of themselves? Such a tool could even help humans discover entirely new directions, allowing another perception of designing cities, or help patch the current city with elements that work elsewhere in similar situations. Perhaps in the future we could combine all the competition proposals for a given design brief and let the AI study the characteristics of each entry, proposing a solution that reconciles the distinctive qualities of designs that are each successful on their own but give more value as a whole.

Imagine applying the same idea of style transfer at the city scale, where the culture and program of a city are the key aspects being linked and replicated in another context. Such a sophisticated AI could blend two different cultures from one city into another, where people would slowly adapt to this new perception of living; it could bring new value and solve potential problems by transplanting elements that work in one environment to the proposed location while dealing with the actual situation on the ground. The Invisible Cities project inspired us to propose a design-planning idea that would help restructure the current city through a bottom-up approach.

2 MIT Big Data Research


Big Data is a term used to refer to the study and application of large, complex data sets that traditional data-processing software is inadequate to deal with. Analysing big data can reveal new links that help spot business trends, prevent diseases, combat crime and so on.[17]
By incorporating this kind of big data into digital technology, a group of MIT researchers has helped architects and planners make our cities better, translating different kinds of building information drawn from big data (population density, existing structures, walkable distances and so on) into maps that tell stories people can read easily.[18]
Fig. 3  Example of how BIG Data is being used to map potential cafe location in Brooklyn. Such mapping tool was designed by the Social Computing Group at MIT Media Lab



The map considers different situations: it analyses bicycle crashes, maps existing cafés in several cities and public green space, and uses Google Street View to collect data. All of this contributes a great deal to the work of architects and urban planners.

Speculating about a city that applies this technology: the pressure on city designers would be much lower, because the AI could take over some of their workload; at the same time, the data analysed by the AI would be more precise and detailed, so the resulting architecture would come closer to functioning perfectly.

The same applies to our speculation of designing a city's pedestrian network with AI: only if the input big data is handled well will the analysis be precise for the pedestrians in the city. While the MIT researchers used a 2D mapping and analysis system, we try to push the case a step further by making the mapping system 3D. How? By transferring pixels into voxels, which can carry the same big-data information as pixels, and much more. In this way, traditional 2D mapping becomes a more realistic and more designable 3D mapping system that takes full advantage of big data.
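The pixel-to-voxel step can be sketched very simply: each 2D map cell carries a height attribute (for example, storeys derived from big-data sources), and the map is extruded into a 3D occupancy grid so that data can be attached per level rather than per footprint. All values below are invented for illustration.

```python
import numpy as np

def extrude_to_voxels(height_map, max_levels):
    """height_map: (rows, cols) integer array of storeys per cell.
    Returns a (rows, cols, max_levels) boolean occupancy grid: a voxel
    is occupied when its level index lies below the cell's height."""
    levels = np.arange(max_levels)                 # (max_levels,)
    return height_map[:, :, None] > levels[None, None, :]

# A 2x2 'city': an empty lot, a 2-storey shop, a 5-storey block, a kiosk.
heights = np.array([[0, 2],
                    [5, 1]])
voxels = extrude_to_voxels(heights, max_levels=5)
```

Once the grid is 3D, per-level attributes (pedestrian counts on an elevated walkway versus the street below, for instance) become ordinary array layers instead of annotations squeezed onto a flat map.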

3 Speculation: Making the Future City Circulation or Programs

3.1 Bottom-up Approach in Urban Planning

Urban planning in the late seventies was formally made by the state, with a rational approach enforced from a top-down perspective, supported by planning guides produced by professional planners and by public consultation only at the end of plan-making. Current urban planning has shifted towards a strategic and communicative model working from a bottom-up perspective, and results from this inverted approach have proved successful in many respects. A common concern of architects and urban planners is the difficulty of handling large-scale problems, which are too complex, while smaller-scale problems tend to be ignored as too particular and trivial. This can be tackled by involving local governments or committees formed by local citizens, who have direct experience of urban problems and can identify the most pressing district-level issues to be alleviated or solved in a shorter time. We therefore took Hong Kong as a successful case study of this approach. The study serves two main purposes: first, to analyse how the approach was implemented in the city and how it solves existing urban problems, with a specific focus on circulation and programs as main urban problems in Hong Kong; second, to ask how the concept could be applied and integrated with AI: could AI identify the complex patterns of city and human behaviour and provide solutions for existing cities? AI is already used by governments, healthcare, education, media, insurance and many other fields to gain insights that lead to better strategic decisions. The amount of data collected is not what matters most; what matters is what we do with it.

3.2 Bottom-Up Approach Case Study: Hong Kong


Hong Kong is a city without ground, both because the city is built on steep slopes and because, culturally, there is no concept of ground. Without the concept of ground there are none of the figure-ground relationships that usually shape urban space: axis, edge, centre and even fabric. The city can only be mapped as a three-dimensional circulation network, as in Cities Without Ground[19]. These networks are built by different public and private stakeholders through negotiations that activate their property capital while giving space back to the public. This complex public circulation has three distinct levels (street level, underground and overground), and pedestrians navigate a network of elevated walkways and underground tunnels that has evolved over the past 50 years[20]. As population densities increase around the world, cities will need tools that let them grow both vertically and horizontally, balancing current needs with forecasts of future development drawn from the Big Data being collected. Many researchers are already using Big Data to understand when, how and why crowds form in cities, and AI technology to predict their movements and actions; such information has proved very successful in controlling crowds at events[21,22]. Could such data help predict, and shape, how the current city evolves?



Fig. 4 Example of how complex Hong Kong's circulation is: it cannot be mapped onto a 2D figure ground, only in 3D[23]




What is fascinating about Hong Kong is that these networks were not developed as the result of some grand master plan, but due to the scarcity of usable land and the realisation that space above the ground floor could be just as valuable[24]. The network was the result of government and property developers finding solutions that smooth circulation for people without interrupting the movement of cars. The whole city development was a bottom-up procedure in which people establish and build on top of the current city. Every city in the world holds invisible solutions that could be applied in another context; if we could train an AI to understand the complex patterns relating human behaviour to the elements of the city, it could help develop a better planning-proposal tool, one that still requires the collaboration of the different parties involved.
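Hong Kong's three-level circulation can be modelled as a graph whose nodes carry a level attribute, so that routing naturally moves between street, underground and elevated layers. A minimal sketch follows, with all nodes, links and walking times invented for illustration; a real model would be built from surveyed or crowd-sourced movement data.

```python
import heapq

# Hypothetical fragment of a three-level pedestrian network. Nodes are
# (place, level) pairs; edge weights are walking times in minutes.
edges = {
    ("mall", "street"):         [(("mall", "elevated"), 2),
                                 (("plaza", "street"), 7)],
    ("mall", "elevated"):       [(("tower", "elevated"), 3)],
    ("tower", "elevated"):      [(("plaza", "street"), 1)],
    ("plaza", "street"):        [(("station", "underground"), 4)],
    ("station", "underground"): [],
}

def shortest_time(start, goal):
    """Dijkstra's algorithm over the multi-level graph; returns the
    total walking time in minutes, or None if the goal is unreachable."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue            # stale heap entry
        for nxt, w in edges.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return None
```

In this toy network the elevated route from the mall to the plaza (2 + 3 + 1 = 6 minutes) beats the street-level route (7 minutes), so a planner, or a trained model, would route pedestrians over the walkway, which is exactly the behaviour Hong Kong's network rewards.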




Fig. 5 Examples of the figure grounds of different cities around the world, from "Square-Mile Street Network Visualization" by Geoff Boeing [25]



3.3 Proposal: Redesigning Circulation of Cities with Deep Neural Network

Many cities around the world were planned and designed to accommodate different types of building infrastructure (roads, train or tram lines, rivers, walkways, building functions) and to meet current and future needs based on an understanding of how the city evolved. Nowadays, as most cities are already established, redesigning the whole is not a solution, because people have already adapted to the identity and culture of their city. Our speculation builds on the idea of Invisible Cities, where a neural network trained on one city is applied to another city's context to create imaginary 2D images. We propose a more radical approach: solving current city problems through a bottom-up process in a 3D environment, by understanding how cities evolve in other locations and applying those complex patterns to another city. Adding new elements, or partially redesigning the city's circulation, would create new conditions for the public to adapt to and new patterns of living to learn. Borrowing the working concept of AlphaGo, we speculate on a neural network that grids the city and is trained in a simulation environment. It could be trained to anticipate the current city circulation and to learn from previous proposals by analysing which parts of each proposal did not achieve the expected results. The simulation outcomes could then be used as a tool to help determine plausible solutions for the city.

As Hong Kong deals quite successfully with pedestrian circulation, a neural network could be implemented to understand and analyse a data set of people's daily activities there: how they commute, which route they pick as the best option to their destination, and why that particular route. From the commuting Big Data collected by monitoring people and the built environment, an AI could learn by itself the strengths and weaknesses of Hong Kong's pedestrian circulation and remap that information onto any target city (in this case Melbourne), where an individual AI model of the target city would already be trained to adapt to the new data. A source of circulation data for Melbourne could be OpenStreetMap, although that information is not entirely accurate. The AI would align both cities by examining their different typologies and try to blend them, creating a unique proposal for the city of Melbourne with an exclusive network for its citizens. We propose that the model would eventually generate walkways running through buildings, reorganising the city's pedestrian flow into a more vertically integrated platform spanning both public and private infrastructure. This could be pushed even further, re-adapting the city into a fully automated artificial ecosystem for humans to live in.

Conclusion 


Even though Artificial Intelligence is being used across professions and fields of work to boost productivity, in planning and design such a tool would, for now, mainly help architects understand new techniques for designing what has already been thought through; it would not generate something truly new, though it might create a new perception of space. But if our speculation came true, urban planners would be far less stressed in dealing with big data, because the AI could handle it and deliver satisfying results. Meanwhile, cities would become highly efficient, because every route would be generated from billions of data points and optimised by AI, which means that in the future there could be fewer traffic problems, safety problems, pedestrian-density problems and so on.
However, just like AlphaGo, such a system is a narrow AI: it can excel at playing Go or another strategic board game, but nothing else. We are still far from general AI; it does not think and deduce the way human minds do, and there is a long way to go before AI is as broadly intelligent as humans. In conclusion, AI can replace some simple, repetitive human tasks, but roles like the architect's, which require long experience, design concepts and aesthetic ideas, are not yet replaceable. So what if an AI were fed a data library of humanity's architectural history and, through deep learning, created its own style of architecture? If that happened, would that architecture be for humans or non-humans? How could this new AI style of architecture coexist with human architecture? Or things might not be so stark: the AI might develop designs in search of what humans need and create brand-new types of architecture perfectly suited to us. What is certain is that things will change a great deal after the "singularity".




Footnotes

[1] Gene Kogan, Gabriele Gambotto, Ambhika Samsen, Andrej Boleslavsky, Michele Ferretti, Damiano Gui, Fabian Frei (2016) Implementation | Invisible Cities <https://opendot.github.io/ml4a-invisible-cities/implementation/> [accessed 27 August 2018]

[2] Techopedia.com. (n.d.) What is a Deep Neural Network? Definition from Techopedia <https://www.techopedia.com/definition/32902/deep-neural-network> [accessed 30 September 2018]

[3] Martin Hilbert (2016) The World's Technological Capacity to Store, Communicate, and Compute Information, <http://www.martinhilbert.net/WorldInfoCapacity.html/> [accessed 29 August 2018]

[4] Doug Laney (2001). "3D data management: Controlling data volume, velocity and variety". META Group Research Note.

[5] Ian Sample Science editor (2017) 'It's able to create knowledge itself': Google unveils AI that learns on its own <https://www.theguardian.com/science/2017/oct/18/its-able-to-create-knowledge-itself-google-unveils-ai-learns-all-on-its-own> [accessed 29 August 2018]

[6] All the world is watered with the dregs of eden… <http://jonmccormack.info/artworks/eden/> [accessed 23 September 2018]

[7] Ian Sample Science editor (2017) 'It's able to create knowledge itself': Google unveils AI that learns on its own <https://www.theguardian.com/science/2017/oct/18/its-able-to-create-knowledge-itself-google-unveils-ai-learns-all-on-its-own> [accessed 29 August 2018]

[8] B. Gooch and A. Gooch, Non-Photorealistic Rendering. Natick, MA, USA: A. K. Peters, Ltd., 2001.

[9] T. Strothotte and S. Schlechtweg, Non-Photorealistic Computer Graphics: Modeling, Rendering, and Animation. Morgan Kaufmann, 2002.

[10]P. Rosin and J. Collomosse, Image and video-based artistic stylisation. Springer Science & Business Media, 2012, vol. 42

[11] Y. Jing, Y. Yang, Z. Feng, J. Ye, Y. Yu, M. Song, "Neural Style Transfer: A Review", May 2017, p. 1

[12] L. A. Gatys, A. S. Ecker, and M. Bethge, "A neural algorithm of artistic style," ArXiv e-prints, Aug 2015.

[13] Matsugu, Masakazu; Katsuhiko Mori; Yusuke Mitari; Yuji Kaneda (2003). "Subject independent facial expression recognition with robust face detection using a convolutional neural network". Neural Networks

[14] Ibid. Kogan, 2016

[15] Ibid. Kogan, 2016

[16] Ibid. Kogan, 2016

[17] En.wikipedia.org. (2018) Big Data <https://en.wikipedia.org/wiki/Big_data> [accessed 30 August 2018]

[18] Brian Libby (2018) MIT Researchers Help Architects and Planners Understand the Potential of Big Data <https://www.architectmagazine.com/technology/mit-researchers-help-architects-and-planners-understand-the-potential-of-big-data_o> [accessed 30 August 2018]

[19] Adam Frampton, Jonathan D.Solomon, Clara Wong (2017) “Cities without ground”, <http://citieswithoutground.com/> [accessed 2 September 2018]

[20]David, (2012) Hong Kong City without ground , <https://randomwire.com/hong-kong-city-without-ground/> [accessed 30 August 2018]

[21] Charles Fiori,CFA (2016) #BigData for crowd control. It’s a global thing, <https://www.linkedin.com/pulse/bigdata-crowd-control-its-global-thing-charles-fiori-cfa/> [accessed 1 September 2018]

[22] Daniel Newman (2016) Big Data and The Future of Smart Cities, <https://www.forbes.com/sites/danielnewman/2016/08/15/big-data-and-the-future-of-smart-cities/#7e06b5e526b8> [accessed 29 August 2018]

[23]Adam Frampton and others, 2017.

[24] Ibid. Kogan, 2016

[25] Geoff Boeing (2017) "Square-Mile Street Network Visualization", <https://geoffboeing.com/2017/01/square-mile-street-network-visualization/> [accessed 2 September 2018]








Further references (sources that might be used to improve the accuracy of the data)

https://metro.strava.com/

http://www.executivestyle.com.au/strava-maps-show-where-australian-cyclists-go-gp33hl

https://behavioranalyticsretail.com/technologies-tracking-people/

Map Collections
https://www.dezeen.com/2018/10/01/kerim-bayer-maps-atlas-istanbul-design-biennial/




