Abstract: This paper presents a new way of producing original representations of architectures well integrated into their environment, departing from generation supervised by textual semantics. It illustrates the compositional nature of deep learning, a striking example of the organized-complexity theories popularized by Christopher Alexander and Nikos Salingaros. The user first assembles a collection of images chosen according to their preferences for style, structure, environment, and rendering type. The challenge is to take these properties into account in a single generative chain, without using textual descriptors of the architecture. The concept developed combines two complementary approaches to Generative AI: structural (framework) and interpretive (detail and finish). It replaces and complements the classic notion of style transfer (global, often derived from a single image, with a fairly fixed rendering) with organic interpretation (multi-scale parts combined to reveal a local hidden style). The generative abundance enabled by this coupling is illustrated on a small learned dataset of 4,300 images. It reinforces our favorable impression of this kind of exercise, which, in our view, has been unduly neglected since the appearance of GANs in 2014.