DRAFT
Architecture
Elective – Artificial Automations Retrospective text
Part 01 - Luke – Abstract
Introduction / Mind mapping / medical imaging
"On Elysium,
there are many robot servants, and they serve you all day. If you live there,
you never get sick or old." Automation in today’s society has,
and throughout history, always played a role in being a hero in achieving a
utopian world where robotics and automated processes rule our world, making
life easier for the billions of people living on earth. However, there is a lot
to consider when it comes to the automation of our everyday lives, as the
affects have both tremendous negatives and significant positives. These
responses to automation have been heavily scrutinised throughout the 20th
century, primarily through popular culture such as sci-fi films. The
speculation created in these films portrays a futuristic world, not only
positively affected by robots, but a lot of social conditions amongst people.
The 2013 film, Elysium, we see a great example of how automation
portrays the current trend of automation in modern day society. Although the
film epitomises a potential future with how we live with robotics and
automation, many automated processes which exist currently in society greatly
affect the way in which we live and operate from the day to day and this is
present in most industries. Primarily, as seen in the film, the area of medical
imaging significantly plays a crucial role in our lives. To narrow this down
further, more specific to an industry that significantly affects the way in
which we live and occupy space, architecture; we are seeing a subtle shift in
the way automation starts to play a role in the design of buildings and how
architects use automation to aid their designing. The main question raised is
how can architecture recognise and adapt to a human’s physical &
psychological needs?
The 'Med Bed' from the film Elysium.
What can we understand about the brain of the architect as he or she designs a building?
The interface between man and machine replaces the reality of buildings and social ground. Social structures have been deeply affected by technological intensity, where the deportation of people and the elimination of human confrontation bring social concentration to a post-urban, or transnational, world.[1] Novak, for instance, thinks of this new world of potential workspaces as one to be perceived via sensors.
“we are
proceeding from models of the eye and ear to models of thought processes and
conceptual structures in the brain.”[2]
“in order to
communicate [in virtual worlds] one must know how to build structures and
activate processes inside another person’s brain.”[3]
The process of
viewing then progresses from the metaphysical or psychological act towards a
perceptual understanding of physical experience.
Automation is present in all aspects of life in the film, and the film shows how life with automation can be both positive and negative for humans. A few specific examples illustrate its social effects. Starting with the positives, the medical technology in the film is extremely beneficial. Robotic machines essentially replace doctors, covering everything from diagnostics to treatment, so that all disease, from lymphoma to radiation poisoning, is detectable and curable, resulting in effectively unlimited life for the wealthy. This mirrors the use of robotics and technology in modern society, with machines such as MRI scanners that can read brain activity and build 3D models of the human brain, detecting and diagnosing potential illness. Mind-reading technology has also started to play a substantial role today, for example helmets that can decode thoughts syllable by syllable. Access to this medical technology is the key motivation for the protagonist of Elysium to travel to Elysium, since his health is deteriorating; however, it is accessible only to the upper-class population living there. This brings us to the negative side of automation in this world. In the scene with the automated parole officer, a pre-programmed automatic response decides the protagonist's parole regardless of reason or explanation. This highlights how automation can negatively influence decisions for people, because the robots are not programmed with any emotion or capacity for a different train of thought. The overall theme drawn from this involves mind-reading and diagnostic technology, whether medical or in other aspects of life. The robots' programs are determined by their wealthy owners on Elysium, who are essentially the dictators of Earth and make all the decisions about what happens there.
Moving away from a fictional example of automation in society, there are many real-world examples in which automated processes dominate our everyday lives. As mentioned earlier, the biggest shift in automation over the last few decades involves medical imaging, mind mapping and mind-reading technology. The role of social media, which relies entirely on technology, has changed the way we communicate with one another, whether through direct messaging between people or through our connection to the media and the world around us. It has become an indispensable part of life for almost everyone in society, and it is only one example among many.
When it comes to automation in architecture, there are again opposing opinions, carrying both positive and negative connotations for the way architecture is practised in modern society. Much like many other jobs and industries being swallowed by automation, architecture is a heavily debated industry with respect to artificial intelligence. The idea of automation taking over the architecture industry makes us question the role of the architect in the design and construction of a building. The architect, whether a team or an individual, champions the design of space for people, and with today's technology the architect's overall thinking process has yet to be replaced by artificial intelligence; the architectural thinking process remains the mastermind behind the design of space. The aspect of architecture most affected by automation is how an architect executes a design, from the subtle processes mentioned earlier, through the design stage, all the way to the physical construction of the building or space. Every step of the delivery of a building involves automation, and the architect is merely the governing body over the automation at each step. Architects will not simply be eliminated from the industry; rather, the role will be augmented, on a similar scale to Elysium, where the operators of technology prevail in society. For example, software such as Grasshopper substantially increases design productivity through mathematics and algorithmic design, as the sketch below illustrates. Such forms of design have been used in the past, but never at the scale at which they are used today.
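Grasshopper itself is a visual programming environment for Rhino, but the underlying idea, driving geometry from parameters and rules rather than drawing it by hand, can be sketched in a few lines of ordinary Python. The panel counts, spacing and sine-wave rule below are illustrative assumptions, not any real project's logic.

import math

# A minimal sketch of algorithmic design: a facade of panels whose depth is
# driven by a mathematical rule instead of being drawn manually.
# All dimensions and the sine-wave rule are illustrative assumptions.

def facade_panels(columns=20, rows=10, spacing=1.5, max_depth=0.6):
    """Return (x, y, depth) for each panel; depth follows a sine wave."""
    panels = []
    for i in range(columns):
        for j in range(rows):
            x, y = i * spacing, j * spacing
            # The "design intent" lives in a formula the architect can tune.
            depth = max_depth * (0.5 + 0.5 * math.sin(i / columns * 2 * math.pi + j * 0.3))
            panels.append((x, y, round(depth, 3)))
    return panels

if __name__ == "__main__":
    for panel in facade_panels()[:5]:
        print(panel)  # change one parameter and the whole facade regenerates

The productivity gain comes from the last point: adjusting a single parameter regenerates the entire design, which is what algorithmic tools like Grasshopper make routine.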
The next step, with regard to mind-reading technology and medical imaging, would be to implement such technology within the field of architecture. The mind-reading technology mentioned earlier could be used to read the mind of a client who is trying to envision their space. There is usually a disparity between the mind of the architect and the mind of the client and their brief, so this could be a way to resolve the mismatch between each party's thoughts and visions. It also raises questions about the role of the architect within the design process. Does the architect still dictate the design and improve on the client's intended brief, or do they merely operate the mind-reading software the client is using? What can we understand about the brain of the client when the architect is given the role of designing a building based on their brief?
Part 02 – Qing – Sensors We Could Use for Assisting
Mind Reading in Architecture
What are the different types of sensors that can be used, based on data from our brains?
Similar to mind mapping and brain scans, we can read a human in many other ways, for example through facial expressions and body gestures.
First,
how can a building detect what we need when we enter a space through the use of
sensors?
Second, how can our body gestures affect and interact with the environment of the building?
Motion detection
sensors
A motion detector is an electronic device used to detect physical movement in a given area, whether of an object or of a person, and to transform that motion into an electrical signal. Motion detection plays an important role in the security industry: businesses install these sensors in areas where no movement should occur, making any presence easy to notice. They are also used in intrusion detection systems, automatic door control, boom barriers, smart cameras (motion-triggered capture and video recording), toll plazas, automatic parking systems, automated sinks and toilet flushers, hand dryers, and energy management systems (automated lighting, air conditioning, fans and appliance control).
On the other hand, these sensors can also distinguish different types of movement, which makes them useful for communicating with a system by waving a hand or performing a similar gesture. For example, someone can wave to a sensor in a retail store to request assistance with making the right purchase decision.
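As a concrete illustration of the automated-lighting application mentioned above, a passive infrared (PIR) motion sensor can switch a light whenever it detects occupancy. The sketch below assumes a Raspberry Pi with a PIR sensor on GPIO pin 4 and an LED standing in for the room lighting on GPIO pin 17, using the gpiozero library; the pin numbers are assumptions made purely for illustration.

from gpiozero import MotionSensor, LED  # assumes a Raspberry Pi with gpiozero installed
from signal import pause

# Minimal sketch of motion-activated lighting: PIR sensor on GPIO 4,
# an LED standing in for the room lights on GPIO 17 (both pins are assumptions).
pir = MotionSensor(4)
light = LED(17)

pir.when_motion = light.on        # occupant detected: switch the light on
pir.when_no_motion = light.off    # area empty again: switch it off

pause()  # keep the script alive and let the callbacks do the work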
3D Facial Recognition
Sensor
An emerging feature in mobile
communications is to unlock smartphones by 3D face recognition instead of
fingerprint or PIN. Making authentication more convenient and more secure, it
may soon become indispensable for mobile payment applications and mobile ID.
Together with its innovation partner PMD technologies AG, Infineon has
developed a new 3D image sensor in its REAL3™ chip family, based on
Time-of-Flight (ToF) technology. It enables the world’s smallest camera module
for integration in smartphones with a footprint of less than 12 mm x 8 mm,
including the receiving optics and VCSEL (Vertical-Cavity Surface-Emitting
Laser) illumination. Image and video
sensors have thrived in the golden age of smartphones, making steady advances
in areas such as fast auto-focus, low-light sensitivity, and back-illuminated
pixel arrays. And now the powerful combination of image sensors and vision processors is opening up new possibilities in automotive safety, biometrics, and medical applications.
Today’s CMOS image sensors incorporate highly-adaptive pixel designs that can intelligently sense -- rather than merely capture -- the imaging data, while being paired with intelligent vision processors. Take biometric applications like facial recognition. Omron launched its Human Vision Component (HVC) module way back in 2013 and has advanced it since then through better components as well as improving upon its underlying OKAO face-recognition algorithm. It is now at the point where they can be used to track different facial points to interpret micro-expressions and eye movements, as well as recognise human emotions, moods, and even intentions.
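The OKAO algorithm inside Omron's module is proprietary, but the general pipeline, locating a face and then handing the crop to a classifier that scores expressions, can be sketched with OpenCV. The Haar cascade file below is a standard OpenCV asset; the emotion classifier is a hypothetical placeholder, not a real model.

import cv2

# Sketch of a face-then-expression pipeline, loosely analogous to what a
# vision module like Omron's HVC does. The Haar cascade ships with OpenCV;
# `classify_emotion` is a hypothetical placeholder for a trained classifier.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_crop):
    """Placeholder: a real system would run a trained model here."""
    return "neutral"

def read_expressions(frame):
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        face_crop = grey[y:y + h, x:x + w]
        results.append(((x, y, w, h), classify_emotion(face_crop)))
    return results

if __name__ == "__main__":
    capture = cv2.VideoCapture(0)      # default webcam
    ok, frame = capture.read()
    if ok:
        print(read_expressions(frame))
    capture.release()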
Pupil and
Glint Detection Sensor
Human beings acquire roughly 80-90% of information about the outside world through their eyes, and this visual perception can be studied through eye-gaze tracking. With the continuing development of computer and machine vision technology, gaze tracking has been applied more and more widely in medicine, production testing, human-machine interaction, military aviation, and other fields.
As one of the traditional gaze tracking methods, the pupil centre-corneal reflection (PCCR) technique has been developed and improved steadily in recent years. Pupil and glint (corneal reflection) centre detection plays a crucial role in PCCR-based gaze tracking. Interference factors such as eyelashes, eyelids, shadows and natural-light reflections are always present in the images acquired by a CCD camera, and they introduce false boundary points around the pupil contour. To ensure accurate gaze estimation, a robust and accurate method of pupil and glint detection is therefore essential.
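In PCCR the quantity that actually drives gaze estimation is the vector from the glint centre to the pupil centre; a calibration step then maps that vector to a point of regard, often with a low-order polynomial. The sketch below shows only that mapping step, and the calibration data in it are invented purely to make the example run.

import numpy as np

# Sketch of the PCCR mapping step: the pupil-minus-glint vector is mapped to
# screen coordinates with a second-order polynomial fitted during calibration.
# The calibration data below are fabricated for illustration only.

def fit_gaze_mapping(pg_vectors, screen_points):
    """Least-squares fit of [1, x, y, xy, x^2, y^2] -> screen coordinates."""
    x, y = pg_vectors[:, 0], pg_vectors[:, 1]
    design = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(design, screen_points, rcond=None)
    return coeffs

def estimate_gaze(pg_vector, coeffs):
    x, y = pg_vector
    features = np.array([1.0, x, y, x * y, x**2, y**2])
    return features @ coeffs

if __name__ == "__main__":
    # Nine-point calibration: pupil-glint vectors (pixels) vs. screen targets.
    pg = np.random.default_rng(0).uniform(-20, 20, size=(9, 2))
    targets = pg * 40 + np.array([960, 540])   # fabricated relation for the demo
    c = fit_gaze_mapping(pg, targets)
    print(estimate_gaze(pg[0], c), targets[0])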
One proposed approach uses a wearable camera sensor and a near-infrared LED array for the gaze tracking system. Compared with the original Starburst algorithm, the proposed circular ring ray location (CRRL) method offers higher stability, accuracy and real-time performance. It overcomes the uncertainty in locating the initial shooting point of the rays, and the step of shooting rays back towards the start point to collect more pupil boundary points is omitted. RANSAC is also omitted, because interference points can be eliminated effectively, so the pupil centre can be detected accurately even when interference points lie on or around the pupil contour. An improved Otsu method is used to obtain a binary image of the eye. Part of the remaining interference (including eyelashes and eyelids) is eliminated by opening-and-closing operations with structuring elements of different sizes. Projections of the 3D grey-level histogram are used to estimate a rough pupil radius and centre position, and this provisional radius and centre determine a circular ring area. A series of evenly spaced rays is then shot from the inner to the outer ring to detect pupil boundary points by calculating gradient amplitude, and the gradient amplitude of each pixel is used to eliminate false boundary points. Spline interpolation is performed in the neighbourhood of the boundary points to obtain subpixel-precise ones, and an improved total least squares method fits an ellipse, from which the pupil centre position is calculated. Because the grey levels of glint pixels are higher than anywhere else, the rough glint region is estimated by binarisation with a fixed threshold; then, since the glint's illumination intensity approximately follows a Gaussian distribution, a Gaussian function (again solved by improved total least squares) is used to calculate the glint centre.
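The CRRL pipeline above is specialised, but several of its steps, Otsu binarisation, morphological opening-and-closing to suppress eyelashes and eyelids, and an ellipse fit on the remaining boundary, have direct OpenCV equivalents. The sketch below is a simplified stand-in for that part of the pipeline, not a reimplementation of CRRL.

import cv2
import numpy as np

# Simplified pupil-detection sketch echoing a few of the steps described above:
# Otsu thresholding, opening-and-closing to remove eyelash/eyelid clutter,
# then an ellipse fit to the largest dark blob. Not the CRRL method itself.

def detect_pupil(eye_grey):
    # Otsu picks the threshold separating the dark pupil from the iris/sclera.
    _, binary = cv2.threshold(eye_grey, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:          # fitEllipse needs at least 5 points
        return None
    (cx, cy), (w, h), angle = cv2.fitEllipse(largest)
    return (cx, cy), (w / 2, h / 2), angle   # centre, semi-axes, orientation

if __name__ == "__main__":
    # Synthetic eye image: bright background with a dark "pupil" disc.
    eye = np.full((120, 160), 200, dtype=np.uint8)
    cv2.circle(eye, (80, 60), 18, 40, -1)
    print(detect_pupil(eye))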
Human Body
Temperature Sensor
Measuring body temperature is one of the most important physiological measurements, with many applications in physiological studies as well as clinical research. With advances in technology in recent years, numerous observations have been reported and various methods of measurement have been employed. Continuous monitoring of physiological signals can help detect and diagnose several cardiovascular, neurological and pulmonary diseases at their early onset. Some common types of temperature measurement device are described below.
Thermocouple: sometimes abbreviated TC, this is one of the most widely used temperature sensors in manufacturing, machining, and scientific applications. It has the advantage of being robust, low-cost, and self-powered, features which make it a good choice for long-distance applications.
Resistance Temperature Detector (RTD): in a resistance temperature detector, the resistance is proportional to temperature. The main metals used in these sensors are Ni (nickel), Pt (platinum) and Cu (copper). An RTD can measure a wide range of temperatures, roughly -270 °C to +850 °C; however, the measurement current produces heat in the resistive element, which introduces error into the temperature reading. PT100 and PT1000 are well-known RTD sensors. A short resistance-to-temperature conversion sketch for RTDs and thermistors is given after this list.
Thermistor: another type of temperature sensor, whose name is a contraction of "thermally sensitive resistor". It is relatively low-cost, flexible, and easy to use, and it works by changing its resistance in response to temperature. The underlying effect was first observed by Michael Faraday back in 1833, but thermistors were not manufactured until after 1933.
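For the two resistive sensor types above, turning a raw resistance reading into a temperature is a simple calculation. The sketch below uses the standard IEC 60751 Callendar-Van Dusen coefficients for a PT100 (valid for temperatures at or above 0 °C) and the Beta equation for an NTC thermistor; the thermistor's nominal values (10 kOhm at 25 °C, B = 3950 K) are typical datasheet figures assumed purely for illustration.

import math

# Converting resistance readings from a PT100 RTD and an NTC thermistor into
# temperatures. PT100: R = R0*(1 + A*T + B*T^2) with IEC 60751 coefficients.
# Thermistor: 1/T = 1/T0 + ln(R/R0)/BETA, with assumed datasheet values.

R0_RTD = 100.0       # PT100 resistance at 0 degC, in ohms
A = 3.9083e-3        # IEC 60751 coefficient
B = -5.775e-7        # IEC 60751 coefficient

def pt100_temperature(resistance_ohms):
    """Invert R = R0*(1 + A*T + B*T^2) with the quadratic formula (T >= 0 degC)."""
    c = 1.0 - resistance_ohms / R0_RTD
    return (-A + math.sqrt(A * A - 4.0 * B * c)) / (2.0 * B)

R0_NTC = 10_000.0    # thermistor resistance at the reference temperature (assumed)
T0 = 298.15          # reference temperature (25 degC) in kelvin
BETA = 3950.0        # B-parameter in kelvin (assumed typical value)

def thermistor_temperature(resistance_ohms):
    """Beta equation: 1/T = 1/T0 + ln(R/R0)/BETA, temperatures in kelvin."""
    inv_t = 1.0 / T0 + math.log(resistance_ohms / R0_NTC) / BETA
    return 1.0 / inv_t - 273.15

if __name__ == "__main__":
    print(round(pt100_temperature(109.73), 1))        # about 25 degC
    print(round(thermistor_temperature(5_000.0), 1))  # warmer than 25 degC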
Part 03 – Boyu – The Effect on Architecture
The use of sensors in contemporary architectural practice and research falls under 'responsive architecture'. The sensor's role is to collect information about surrounding conditions and then drive the architecture to adapt its form, shape and colour in response. This architectural field was first proposed by Nicholas Negroponte in the late 1960s. In his work, Negroponte proposes that responsive architecture is the natural product of the integration of computing power into built spaces and structures. He also extends this belief to include the concepts of recognition, intention, contextual variation, and meaning in computed responses and their successful and ubiquitous integration into architecture. This cross-fertilisation of ideas lasted for about eight years.
Since Negroponte's contribution, new works of responsive architecture have also emerged, but as aesthetic creations rather than functional ones. The works of Diller & Scofidio (Blur), dECOi (Aegis Hypo-Surface), and NOX (The Freshwater Pavilion) are all classifiable as types of responsive architecture.
monitors fluctuations in the environment and alters its form in response to
these changes. The Blur project by Diller & Scofidio relies upon the
responsive characteristics of a cloud to change its form while blowing in the
wind. In the work of dECOi, responsiveness is enabled by a programmable façade,
and finally in the work of NOX, a programmable audio–visual interior.
Looking at these precedents, they remain worthy targets for design efforts, but they do not take into account more recent developments in robotics and artificial intelligence that are used within responsive systems today. Combined with the mind-reading technology emerging in medical research, architecture may gain the ability to respond to a human's psychological needs. Psychological states, such as the need for different functions of space or for a particular quality of space and indoor environment, could be detected by sensors and used to drive the adaptation of the architecture. Sensors that detect facial expression, body movement, gesture and brainwave signals may grasp a user's subconscious thoughts, perhaps better than the users understand themselves; they can transmit these signals to a computer system, which analyses the user's preferences, predicts their next psychological needs, and gives quick feedback so that the architecture changes before that feeling is even fully formed.
When responsive architecture develops to that stage, the role of the architect will change correspondingly. Architects will be able to understand what their clients really want without holding many meetings to reach the same consensus, and the early stages of work will make it easier to collect useful information from clients. The later stages, however, bring new tasks. Architecture does not only serve its occupants; it must also respond to social, urban and economic conditions. To comply with the trend towards responsive systems, architects need to integrate more changeable factors into their designs. How to balance the importance of each element, and how to design an ever-changing prototype that satisfies the different needs of different users, are the problems architects will need to work hardest on.
Responsive architectures are those that measure actual environmental
conditions (via sensors) to enable buildings to adapt their form, shape, colour or character responsively (via
actuators).
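Concretely, a responsive system is a sense-decide-actuate loop. The sketch below is a toy version of that loop for a single room: the sensor fields, comfort rules and actuator commands are all invented for illustration, but the structure, sensors feeding a controller that drives actuators, is the one described above.

from dataclasses import dataclass

# Toy sense-decide-actuate loop for a responsive room. Every threshold,
# sensor field and actuator command here is an illustrative assumption.

@dataclass
class Reading:
    occupancy: bool        # from a motion sensor
    body_temp_c: float     # from a body-temperature sensor
    expression: str        # e.g. "squinting", "neutral" from a vision module

def decide(reading):
    """Map sensed conditions to actuator commands."""
    commands = []
    if not reading.occupancy:
        return ["lights: off", "hvac: standby"]
    if reading.body_temp_c > 37.5:
        commands.append("hvac: cool")
    if reading.expression == "squinting":
        commands.append("blinds: lower")   # infer glare from the user's face
    return commands or ["hold current state"]

if __name__ == "__main__":
    sample = Reading(occupancy=True, body_temp_c=37.8, expression="squinting")
    for command in decide(sample):
        print(command)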
How do a human's psychological feelings affect the design?
If
the architecture needs to satisfy multiple needs from different people, how can
we balance those needs?
What is the role of the architect? To integrate urban conditions, such as culture and aesthetics, with a human's psychological needs.
[1] Virilio, “The
Overexposed City,” Pages 276-83.
[2] Bill Viola, “Will
There Be Condominiums in Data Space?” Multimedia: From Wagner to Virtual
Reality, eds. Randall Packer and Ken Jordan (New York: W.W. Norton &
Company, 2001), Pages 287-98.
[3] Marvin Minsky, “The Future
Merging of Science, Art, and Psychology,” Ars Electronica: Facing the Future,
eds.
Timothy
Druckrey with Ars Electronica (Cambridge: The MIT Press, 1999), Pages 229-33.
References
1. Sterk, T.: 'Thoughts for Gen X— Speculating about the Rise of Continuous Measurement in Architecture' in Sterk, Loveridge, Pancoast "Building A Better Tomorrow" Proceedings of the 29th annual conference of the Association of Computer Aided Design in Architecture, The Art Institute of Chicago, 2009. ISBN 978-0-9842705-0-7
2. Building Upon Negroponte: A Hybridized Model of Control Suitable for A Responsive Architecture , Tristan d’Estrée Sterk (2003)
3. Toward Responsive Architectures By Philip Beesley, Sachiko Hirosue and Jim Ruxton
4. “The Architecture Machine”, 1970; “The Soft Architecture Machine”, 1975; and his multiple papers entitled “The Semantics of Architecture Machines”, of 1970
5. Virilio, “The Overexposed City,” Pages 276-83.
6. Bill Viola, “Will There Be Condominiums in Data Space?” Multimedia: From Wagner to Virtual Reality, eds. Randall Packer and Ken Jordan (New York: W.W. Norton & Company, 2001), Pages 287-98.
7. Marvin Minsky, “The Future Merging of Science, Art, and Psychology,” Ars Electronica: Facing the Future, eds. Timothy Druckrey with Ars Electronica (Cambridge: The MIT Press, 1999), Pages 229-33.