Abstract (eng)
Within the built environment, the term hostile architecture refers to designs intended to prevent individuals from using a space beyond its desired purpose, or in manners deemed undesirable. This often manifests as physical barriers or protrusions that hinder common uses such as rest or activity for everyone, out of apprehension about the perceived misbehaviors of a few. A similarly adversarial approach is used to train machine learning algorithms, where negative examples are supplied to prove the quality of positive ones. Just as the communities around hostile architectural designs find ways to move in and around them, so too do these algorithms find their way toward a relative or desired truth. This research envisions hostile/adversarial software architectures as a design foundation to shape performance, forcing or directing performers into new patterns by preventing, modifying, or penalizing the behaviors common to them.