Co-Designing Object Shapes with
Artificial Intelligence
Kevin German1, Marco Limm2, Matthias Wölfel3, and Silke Helmerdig2
1School of Engineering, Pforzheim University, Pforzheim, Germany
2School of Design, Pforzheim University, Pforzheim, Germany
3Faculty of Computer Science and Business Information Systems,
Karlsruhe University of Applied Sciences, Karlsruhe, Germany
kevin.german@hs-pforzheim.de, limmmarm@hs-pforzheim.de,
matthias.woelfel@hs-karlsruhe.de, kontakt@helmerdig.de
Abstract. The promise of artificial intelligence (AI), in particular its
latest developments in deep learning, has been influencing all kinds of dis-
ciplines such as engineering, business, agriculture, and humanities. More
recently it also includes disciplines that were “reserved” to humans such
as art and design. While there is a strong debate going on if creativity is
profoundly human, we want to investigate if creativity can be supported
or fostered by AI—not replaced. This paper investigates if AI is capable
of (a) inspiring designers by suggesting unexpected design variations, (b)
learning the designer’s taste or (c) being a co-creation partner.
To do so we adopted AI algorithms, which can be trained by a small
sample set of shapes of a given object, to propose novel shapes. The eval-
uation of our proposed methods revealed that it can be used by trained
designers as well as non-designers to support the design process in differ-
ent phases and that it could lead to novel designs not intended/foreseen
by designers.
Keywords: inspirational AI · human-machine co-design · artificial neural network · genetic algorithm · design process
1 Introduction
3D printing promised to revolutionize production processes and to enable any-
body to make their own products on the fly. However, this promise has not yet
been fulfilled. We believe that one of the main reasons is that the design pro-
cess is still laborious and that it simply cannot be realized by non-designers due
to time or skill constraints. To really liberate the production process, beyond
easy and accessible 3D printing, novel methods/tools for the design process are
required which permit everybody to design a product even with very limited de-
sign skills. Novel developments in artificial intelligence (AI) have demonstrated
that they are capable of doing things which in the past were restricted to humans.
Artificial neural networks (ANN) and genetic algorithms (GA) are tools to make
work easier for humans, for example through automatic speech translations (for
instance simultaneous lecture translation has been demonstrated feasible already
in 2008 by Kolss et al. [15]) or are even able to come up with solutions humans would hardly arrive at on their own; see for instance the design of an “evolved
antenna” using evolutionary algorithms published by Hornby et al. already in
2006 [9]. With further technological development of such processes, there is a gradual transfer of competence from human beings to technical devices, namely,
they serve as [27]:
1. tools: transfer of mechanics (material) from the human being to the device
2. machines: transfer of energy from the human being to the device
3. automatic machines (called Automat in German or automate in French): transfer of information from the human being to the device
4. assistants: transfer of decisions from the human being to the device
We want to exemplify this concept with the field of mobility:
1. bicycle: feet are replaced by wheels
2. motor vehicle: propulsion is replaced by engine
3. self-driving rail vehicle: control is replaced by sensors and signal processing
4. autonomous vehicle: route planning or search for a parking space are replaced
by artificial intelligence
Similarly, we can give an example from the field of art and design:
1. potter’s kick-wheel: a tool used in the shaping of round ceramic ware driven
by kicking a fly-wheel into motion
2. potter’s electric-wheel: the kicking of the fly-wheel is replaced by a motor
3. construction & 3D printing: the object is constructed with a CAD-software
according to given parameters and 3D printed
4. generated & 3D printing: the object is generated by an optimization process
given particular constraints and 3D printed
In the coming years we are in the process of moving from Step 3 to Step 4, which raises—as was the case when moving from Step 1 to Step 2 as well as from Step 2 to Step 3—discussions, rejections, ethical issues (for instance, see the trolley problem [20]), and even fears. Our particular interest in this process lies in investigating the following questions:
– Can AI be used to assist the design process to support the designer and/or the non-designer?
– Can AI inspire designers by suggesting unexpected design variations?
– Can AI learn the designer's taste to suggest only design variations the designer favors?
– Can AI be a co-creation partner just like other humans or serve as a muse?
– How is this development perceived by designers and non-designers?
In the literature, some approaches to use AI in the design process have been
presented. We review those approaches in the following section. Because the existing approaches are either not available or did not fulfill our requirements, it was necessary to adapt existing methods to intervene in the design process, either partially or fully. The investigated algorithms include genetic algorithms and different types of neural networks, namely convolutional neural networks, generative adversarial networks, and variational autoencoders. The developed algorithms can semi- or fully automate the research, brainstorming and concept phases of the design process.
To evaluate and compare our different proposed approaches, the entire development process was carried through to a finished product for each approach.
The approaches have been introduced within the School of Design at Pforzheim
University, Germany and to visitors of the Salone del Mobile in Milan, Italy
where we showcased our approach. On these occasions, we were able to demon-
strate that our proposed methods can be used by trained designers as well as
non-designers to design semi-complex shapes with minimal user feedback.
2 Related Work
The idea of using algorithms to support the design process and aesthetic expe-
rience is well established and frequently referred to as generative design or pro-
cedural generation. It is used to generate geometric patterns, textures, shapes,
meshes, terrain or plants. The generation processes may include, but are not limited to, self-organization, swarm systems, ant colonies, evolutionary systems,
fractal geometry, and generative grammars. McCormack et al. [18] review some
generative design approaches and discuss how design as a discipline can benefit
from those applications. While older approaches rely on generative algorithms which are usually realized by program code, the introduction of AI changed this process, because AI can learn patterns from (labeled) examples or by reinforcement. AI, or more precisely ANNs, has more recently been introduced to support the design process. Leading software companies in engineering and design have al-
ready included AI-driven generative design paradigms which let humans input
design goals. For instance, Project Dreamcatcher [2] is an engineering-based gen-
erative design program that enables designers to generate computer-aided design
(CAD) models based on their goals and constraints. It takes into account how
the forces will be directed best in the product and defines the best production
method. Autodesk states the benefits of generative design as [1]:
– explore a wider range of design options
– make impossible designs possible
– optimize for materials and manufacturing methods
Most popular (at least in the mass media) are probably different variations of
image-to-image translation. The most prominent example is style transfer—the
capability to transfer the style of one image to draw the content of another. But
mapping an input image to an output image is also possible for a variety of other
applications such as object transfiguration (e.g. horse-to-zebra, apple-to-orange), season transfer (e.g. summer-to-winter) or photo enhancement [30]. While some of the systems just mentioned may seem to be toy applications, AI tools are taking
over and gradually automate design processes which used to be time-consuming
manual processes. Indeed, the greatest potential for AI in art and design is seen
in its application to tedious, uncreative tasks such as coloring black-and-white
images [29]. Cluzel et al. have proposed an interactive GA to progressively sketch
the desired side-view of a car profile [3]. For this, the user has taken on the role of a fitness function (also referred to as an objective function) through interaction with the system. The chAIr Project [23]
is a series of four chairs co-designed by AI and human designers. The project
explores a collaborative creative process between humans and computers. It used
a generative adversarial network (GAN) to propose new chairs which then have
been ‘interpreted’ by trained designers to resemble a chair. It thus replaced the
designer in the brainstorming and concept phase (see Section 3). DeepWear [12]
is a method using deep convolutional GANs for clothes design. The GAN is
trained on features of brand clothes and can generate images that are similar to
actual clothes. A human interprets the generated images and tries to manually
draw the corresponding pattern which is needed to make the finished product. Li
et al. [17] introduced a neural network architecture for encoding and synthesizing
the structure of 3D shapes which—according to their findings—are effectively
characterized by their hierarchical organization. Daniel Wikström discusses the
implementation of AI into the UX design process [25]. He mentions that many
designers do not yet know technology well enough and therefore perceive it as
“magic”. But he also explains how an intelligent assistant is perceived and would
interact. Roman Lipski uses an AI muse (developed by Florian Dohmann et al.) to foster his inspiration. Because the AI muse is trained only on the artist's previous drawings and fed with the current work in progress, it suggests image variations in line with the artist's taste.
Most of the related work is not yet ready to be used without a thorough understanding of the technology and is more an engineering approach using neural networks than a commonly usable technology. What we are aiming for is different:
The whole design process—not its development—should be applicable to naive
users without any profound understanding of design or engineering. The user only has to rely on his/her taste to cherry-pick examples he/she likes in an iterative
process until he/she ends up with the final design.
3 Design Process
Considering several common definitions of the design process, it can be simplified
into five general phases [10, 26, 11, 8].
1. The briefing in which, e.g., the specifications and the project plan are cre-
ated.
2. The research phase in which project-relevant aspects such as already existing
products as well as tendencies in the market are analyzed and domain-specific
knowledge is gained.
3. The brainstorming and concept phase, in which new ideas for the design
problem are to be conceived or already existing ones improved. Countless
sketches and concepts are often created and discarded iteratively.
4. The design phase in which a concept is worked out in more detail, taking
into account the technical requirements.
5. The production phase in which the concept is elaborated in accordance with production constraints and a prototype based on it is created.
The research and brainstorming phases are very time-consuming for the de-
signer. Since the majority of sketches are often discarded in the conception phase,
only a few of them find their way directly into the end product. This is ineffi-
cient from an economic point of view because the designer invests most of his/her
working time into the basic concept. He/she then has problems perfecting it due
to a lack of time in the design and production phase. This is particularly relevant
in product design, where fine-tuned appearance can determine sales success. Es-
pecially products that justify their selling price by their appearance are affected.
The water bottle is an example of this. The content of different bottles is almost
the same, the function is the same, but the design justifies the price difference
between a cheap and an expensive product.
Fig. 1. The three layers of product design, from left to right: silhouette, surface, and
graphics. Own representation in accordance with [21].
Another problem that becomes visible in this example is the patterns that people memorize throughout their lives. In the brainstorming phase, designing water bottles that do not correspond to the prototypical or expected image requires a special degree of creativity and inspiration. Two problems thus exist:
– The designer puts many resources into the rough design and thus has fewer resources for its perfection.
– Designing new patterns that break with the old ones requires a lot of creativity and inspiration.
The product design process can be divided into three layers [21]:
1. silhouette, which reflects the proportions of the product regardless of color,
logo or surface finish.
2. surface, which includes, for example, curves, bulges or corners of the product.
3. graphics, showing logos and color.
An example of the three layers for a bottle is given in Fig. 1. The focus of this work has been on the silhouette, as the first and most important layer of product design. To demonstrate the ability of the algorithms presented here,
attempts were made to produce bottles (semi-)automatically. The simple rota-
tionally symmetric shape is intended to simplify the learning process as well as
the later implementation of 3D models.
Fig. 2. Using randomly generated images and an objective function that measures their similarity to a bottle: one image is assigned a confidence of 96.7% while the other has a confidence of 0.0%.
4 Semi-Automatic Development of Shape Representation
The first approach we investigated was to automatically generate bottles using a GA. To this end, we simulate an evolutionary process with a population of objects. Each object has a genome that encodes, e.g., polygons or polylines. Through targeted selection, mating, recombination and mutation, a population is created that adapts optimally to an objective function (an equation to be optimized given certain constraints, with variables that need to be minimized or maximized). This function is in our case an ANN called MobileNet that has been pre-trained on the ImageNet dataset [4] and can already classify objects well, including different types of bottles [16].
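As an illustration of this setup, the following minimal sketch (an assumption, not the implementation used in our experiments) shows how MobileNet's classification confidence could serve as the GA's objective function. The helper render_silhouette() and the list of ImageNet class indices are hypothetical and would have to be adapted to the actual rendering code and label map.

import numpy as np
import tensorflow as tf

# pre-trained classifier used as the objective function
model = tf.keras.applications.MobileNet(weights="imagenet")

# hypothetical ImageNet indices for bottle-like classes; verify against the label map
BOTTLE_CLASSES = [440, 720, 737, 898, 907]

def fitness(genome):
    # render_silhouette() is a hypothetical helper that rasterizes a genome
    # (e.g. polyline control points) into a 224x224x3 uint8 image
    img = render_silhouette(genome)
    x = tf.keras.applications.mobilenet.preprocess_input(
        img[np.newaxis].astype("float32"))
    probs = model.predict(x, verbose=0)[0]
    return float(probs[BOTTLE_CLASSES].sum())  # summed "bottle" confidence in [0, 1]

The GA then applies selection, recombination and mutation to the genomes, keeping those with the highest fitness values.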
While the GA works and even manages to create populations that are clas-
sified by the net as bottles, the results, as shown in Fig. 2, are largely not in
line with the generally accepted definition of a bottle. Although MobileNet achieves high accuracy on real images, it appears that the GA has found a vulnerability in the ANN while solving this optimization problem. This is a known weakness of ANNs, referred to as adversarial examples [6]: inputs that can be clearly classified by humans but specifically deceive ANNs [28]. This approach
using a simple objective function to decide if the shape is similar to a bottle led
to unsatisfactory results. Fig. 2 demonstrates the evaluation of the classifier for
two randomly generated images. Even though both represent patterns without
any obvious similarity to a bottle, one image is assigned a confidence of 96.7%
while the other has a confidence of 0.0%. Therefore, we had to use a different
approach which better separates implausible from plausible shapes.
Fig. 3. Flow chart of the generative adversarial network and different instances according to the different steps (diagram components: Training Data, Generator, Fake, Discriminator, Original).
4.1 Plausible Shape Representation
As can be seen from our first experiments, a “naive” approach does not lead to satisfying results. Therefore, an approach is required which guarantees that the
produced shapes are similar to the shape of bottles. In 2014 Goodfellow et al.
proposed the special ANN architecture GAN which we have already mentioned
before [5]. The main idea of their proposal is to use two ANNs that compete
with each other. Fig. 3 demonstrates the basic principle and components: The
generator tries to generate data from latent variables that are as similar as pos-
sible to the training data. The discriminator tries to distinguish the generated data from the original training data. Both networks play a zero-sum game: as the system progresses, both the generator and the discriminator improve.
This process continues until the discriminator can no longer distinguish between
forgery and original. This is achieved when the discriminator is only correct in
50% of the cases.
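The paper does not spell out the network architecture; the following DCGAN-style sketch in TensorFlow/Keras, assuming 64x64 grayscale silhouettes and a 100-dimensional latent vector, illustrates the two competing networks described above.

import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100  # size of the latent vector fed to the generator

def make_generator():
    return tf.keras.Sequential([
        layers.Dense(8 * 8 * 128, input_shape=(LATENT_DIM,)),
        layers.Reshape((8, 8, 128)),
        layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),
        layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu"),
        layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
    ])  # outputs a 64x64x1 silhouette in [-1, 1]

def make_discriminator():
    return tf.keras.Sequential([
        layers.Conv2D(32, 4, strides=2, padding="same", input_shape=(64, 64, 1)),
        layers.LeakyReLU(0.2),
        layers.Conv2D(64, 4, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),  # 1 = original, 0 = fake
    ])

Both networks are trained alternately with a binary cross-entropy loss until the discriminator's accuracy approaches 50%.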
Fig. 4. Black and white silhouettes of bottles.
Since the generator learns to generate data as similar as possible to the train-
ing data, it requires a training data set that corresponds as closely as possible
to the desired output [5]. In our case we were interested in generating different
variations of shapes resembling bottles. To train our system we converted 200
images of bottles into black and white silhouettes (see Fig. 4). As automatic
segmentation did not lead to satisfactory results, the conversion was done by hand. Since the data volume is small and GANs normally use data sets on the order of several thousand images, there is a risk of overfitting [24]. To reduce overfitting, data augmentation is used by auto-
matically generating variations of the available training data including shearing,
enlarging, rotating and cropping.
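Such an augmentation pipeline could, for instance, look like the following Keras sketch; the parameter values are illustrative assumptions rather than the settings used in our experiments.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

augment = ImageDataGenerator(
    shear_range=10,          # shearing (degrees)
    zoom_range=0.15,         # enlarging / shrinking
    rotation_range=5,        # small rotations keep the bottles upright
    width_shift_range=0.1,   # cropping-like horizontal shifts
    height_shift_range=0.1,  # cropping-like vertical shifts
    fill_mode="constant", cval=255)  # pad with a white background

# silhouettes: array of shape (N, 64, 64, 1); yields batches of augmented variants
# batches = augment.flow(silhouettes, batch_size=32)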
Fig. 5 shows that the training loss in the first few iterations quickly ap-
proaches zero. This is due to the fact that the network initially roughly maps
the basic form of the input data. In higher epochs many bottles of an epoch have
similar characteristics. This is a well-known problem in GAN architectures and
is called mode collapse. The generator limits itself to generating only a few ex-
amples that the discriminator classifies as original. In the worst case, all images
generated by the generator are almost identical [19]. Although in our example
we see variations, the problem is still visible. Sampling from different epochs can help
to create more diverse bottles because the point of mode collapse shifts with
each epoch. Although the training data set only consists of symmetrical bottles,
the architecture is capable of generating asymmetric bottles. This is interesting
because the net is able to generate a property it has never seen in the training data, e.g. asymmetry. It is up to the designer to incorporate these unusual features such
as asymmetrical elements into the product design or to rate them as a mistake
and to correct them manually based on his/her taste.
Fig. 5. Different iterations of the learning process. From left to right, iteration 50, 100,
steps of 100 until 1000. Four different examples are shown for each iteration.
Due to the required minimum complexity of the GAN architectures and
the need for sharp high-resolution images in combination with the low amount
of training data, overfitting inevitably occurs. However, subjective comparisons
with the training data set did not rate the overfitting as critical, as the majority of the bottles are unique. Instead of treating the shape as a whole, it might be advantageous to separate the shape into different parts.
4.2 Semantic Shape Representation
The shape of an object can be decomposed into different features that can be assigned particular “meanings” and semantically annotated (semantic annotation is the process of attaching additional information to concepts so that it can be used by machines). In our particular application of a bottle, the semantic shape representation can be separated and annotated into: lid, neck, wall, wall-to-neck transition and bottom (in preliminary tests, this division turned out to be the most effective variant); see Fig. 6. The classification was done manually by cutting the existing 200 images into individual parts.
One conceivable option for creating new shapes of bottles is the random permutation of the semantic parts, which overcomes the limiting characteristic of the former approach, where many generated bottles shared similar characteristics. For this purpose, an ANN was conceived which receives random features and assembles them into a new object. During the training phase, the network learned the relationship between the semantic features and the actual bottle. After this phase, the network is able to merge features seamlessly and to produce the shape of a consistent bottle. The features were drawn from a discrete uniform distribution; new permutations of features generated by the trained ANN are shown in Fig. 7.
Fig. 6. Decomposition of a semantic shape representation of a bottle.
It can be observed that the features are transferred and combined successfully.
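One way such a merging network could be set up (an assumption, since the exact architecture is not specified here) is to stack the five part silhouettes as input channels of a small encoder-decoder that is trained to reproduce the complete bottle the parts were cut from.

import tensorflow as tf
from tensorflow.keras import layers

def make_part_merger(img_size=64, n_parts=5):
    # each semantic part (lid, neck, transition, wall, bottom) enters as its own channel
    inp = layers.Input(shape=(img_size, img_size, n_parts))
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)  # merged silhouette
    return tf.keras.Model(inp, out)

# training pairs: (randomly drawn parts stacked as channels, the full bottle they belong to);
# at generation time, parts from different bottles are combined at random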
4.3 Introducing Personal Taste in Shape Representation
So far we have described the process of how to fully automatically generate
plausible shapes by varying different features of the bottle. Now it’s time to bring
back the designer by having him/her intervene in the design process: The shape
should advance iteratively towards the taste of the designer. For optimization
problems in which a solution approaches an optimum step by step, GA has
already proven to be an appropriate tool [9], which is also why a GA was used
in this procedure. The basic idea is that you have a population of objects where
each object is defined by its genes. Each gene represents a semantic feature, in
this case, e.g., the bottleneck. To transform the genes into visible features, the
ANN of the semantic shape representation is used.
Similar to the biological model, the population gradually adapts to the en-
vironment through selection, mating, gene recombination and mutation [7]. To
introduce the designer into the automatic algorithm, the random permutations of features have to be evaluated by the designer instead of by an automatic objective function. The designer thus takes up the position of the fitness/objective function, a role previously played by the ANN MobileNet, by sitting in front of the computer and evaluating each instance individually; see Fig. 8. The basic idea here is that the
population gradually approaches the taste of the user until his/her ideal bottle
is created. Therefore, each of the 20 individuals in the population is assigned a
fitness value between zero and one by the user. The higher the fitness value, the higher the probability of survival of an individual. Combined with the previously mentioned operators such as mutation, this results in a population which is more precisely adapted to the taste of the designer. To cover a large search space, the population is initialized using a discrete uniform distribution. Over a number of iterations, the final shape is found.
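A sketch of this interactive loop (an assumption, not the exact code used here) is given below: the user's scores replace the automatic objective function, while roulette-wheel selection, one-point recombination and mutation produce the next population of 20 genomes, each gene indexing one variant of a semantic feature.

import random

POP_SIZE, N_GENES, N_VARIANTS, MUTATION_RATE = 20, 5, 40, 0.1

def random_genome():
    return [random.randrange(N_VARIANTS) for _ in range(N_GENES)]

def evolve(population, user_scores):
    # fitness-proportional ("roulette wheel") selection; small epsilon avoids all-zero weights
    weights = [s + 1e-3 for s in user_scores]
    def pick():
        return random.choices(population, weights=weights, k=1)[0]
    children = []
    while len(children) < POP_SIZE:
        a, b = pick(), pick()
        cut = random.randrange(1, N_GENES)            # one-point recombination
        child = a[:cut] + b[cut:]
        child = [random.randrange(N_VARIANTS) if random.random() < MUTATION_RATE else g
                 for g in child]                      # mutation
        children.append(child)
    return children

population = [random_genome() for _ in range(POP_SIZE)]
# Each iteration: render the genomes with the semantic-merger network, show them to
# the user, collect scores in [0, 1] and call evolve(population, scores).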
4.4 Democratizing Shape Representation
To be able to democratize the design process, we have to vary the approaches proposed so far so that we can do some arithmetic, for example calculate the arithmetic mean of a set of bottles designed by different persons.
Fig. 7. Three variations of bottle shapes as generated by merging the decomposed
parts as given by the semantic shape representation approach.
To do so we use a variational autoencoder (VAE) [13]. It is an ANN that learns to produce the
same output as input. A special feature here is that the network topology has a
bottleneck between the input and the output layers. This bottleneck stores the
compressed information as a vector of real numbers called latent variables (LV).
As a result, the autoencoder must compress information of the input into the
LV and then decompress it after the bottleneck. The VAE learns to extract the
most relevant information from an input image as LV so that it can be used to
regenerate the output image as correctly as possible [13].
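A minimal VAE sketch with eight latent variables, assuming 64x64 grayscale silhouettes, could look as follows; layer sizes are illustrative and the training loop (reconstruction loss plus KL divergence) is omitted for brevity.

import tensorflow as tf
from tensorflow.keras import layers

LATENT = 8  # number of latent variables, one per slider

enc_in = layers.Input(shape=(64, 64, 1))
h = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(enc_in)
h = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(h)
h = layers.Flatten()(h)
z_mean = layers.Dense(LATENT)(h)
z_log_var = layers.Dense(LATENT)(h)

def sample(args):
    mean, log_var = args
    eps = tf.random.normal(tf.shape(mean))
    return mean + tf.exp(0.5 * log_var) * eps  # reparameterization trick

z = layers.Lambda(sample)([z_mean, z_log_var])
encoder = tf.keras.Model(enc_in, [z_mean, z_log_var, z])

dec_in = layers.Input(shape=(LATENT,))
h = layers.Dense(16 * 16 * 64, activation="relu")(dec_in)
h = layers.Reshape((16, 16, 64))(h)
h = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(h)
h = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(h)
dec_out = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(h)
decoder = tf.keras.Model(dec_in, dec_out)
# loss = binary cross-entropy reconstruction + KL(N(z_mean, exp(z_log_var)) || N(0, 1))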
The basic idea is this: After training, the LV can be accessed directly through
sliders. The trained decoder would then convert the LV into a corresponding
bottle. This would allow the non-designer with limited design skills to design an
object in a playful way (Fig. 9). Eight LVs delivered satisfactory results in our trials. A smaller number of LVs leads to less detailed and more similar images. More LVs, on the other hand, did not achieve any significant improvement in quality, but worsened the user experience due to the larger number of sliders.
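In a notebook environment, the slider interface could be sketched with ipywidgets as follows, reusing the decoder from the VAE sketch above; the slider range of [-3, 3] is an assumption based on the approximately standard-normal latent space.

import numpy as np
import ipywidgets as widgets
import matplotlib.pyplot as plt

def show(**lv):
    # collect the eight slider values into one latent vector and decode it
    z = np.array([[lv[f"z{i}"] for i in range(8)]], dtype="float32")
    plt.imshow(decoder.predict(z, verbose=0)[0, :, :, 0], cmap="gray")
    plt.axis("off")
    plt.show()

widgets.interact(show, **{f"z{i}": widgets.FloatSlider(min=-3, max=3, step=0.1)
                          for i in range(8)})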
It can be seen that by moving individual sliders, the bottle can be transformed into other shapes.
Fig. 8. Flow chart of the genetic algorithm and different instances according to the different steps (diagram components: Create Start Population, Evaluation by User with example fitness values 0.3, 0.9, 0.8, Selection, Recombination, Mutation, Result).
Fig. 9. Transforming the bottles using eight parameters. Each slider corresponds to
one LV.
The transformation happens simultaneously with the slider movement, giving the user direct and intuitive feedback. A complete disentanglement of the LVs could not be achieved. Consequently, an LV and thus the corresponding slider can be responsible for several semantic features of the object.
Because there are vectors behind the bottles, we can do bottle arithmetic
with them [22]. This makes it possible to calculate the arithmetic mean of a set
of bottles. This allows several individuals to democratically design a bottle by
first creating a bottle for each individual using the sliders and then averaging all
created bottles. There are two main pillars of democratic design: first, anyone can now design objects even without design skills, and second, the taste of each individual can equally influence the final product.
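Because each bottle is represented by its latent vector, the bottle arithmetic reduces to averaging those vectors and decoding the mean; a sketch under the same assumptions as above:

import numpy as np

def democratic_bottle(decoder, latent_vectors):
    # latent_vectors: one 8-dimensional vector per participant, set via the sliders
    z_mean = np.mean(np.stack(latent_vectors), axis=0, keepdims=True)  # shape (1, 8)
    return decoder.predict(z_mean, verbose=0)[0]  # decoded collective silhouette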
5 Results, Evaluation & Limitations
Using the plausible shape representation method, it was shown that parts of the
design process can be partially automated and thus sped up using ANNs. This
architecture typically provides a good image quality. However, the algorithm
does not allow direct access by the designer, so the output is heavily dependent
on the training data. For instance, to specifically design a classic beer bottle, the
designer would have to explicitly look for the shape in the output or just specify
beer bottles as training data. Although novel bottle shapes are created, these
usually do not deviate much from the training data set. For the design process,
the user has received some suggestions from the algorithms and has decided on
one of these in several iterations. For our experiments the final selection was
then loaded into a CAD program, the shape manually traced, refined, rotated
(the latter three steps can be fully automated) and then 3D printed;
see Fig. 10a and 11a. This allows parts of the brainstorming and concept phase
to be automated.
Using the semantic shape representation, new bottles could also be created
automatically. In comparison to the previous method, these objects are more
diverse and creative looking; see Fig. 10b and 11b. At present, there must be
a database of the specific objects and their associated features available, which
is not ideal. The image quality is slightly worse than in the plausible shape
representation, but still at a very high level. In addition, as with the plausible
shape representation, the problem is that the suggestions are not adapted to the
user.
To tackle the latter problem, personal taste was introduced into the semantic
shape representation. The bottles successfully adapted to the taste of the user
through evolution; see Fig. 10c and 11c. A selection from a large amount of
output data as in the last two algorithms is thereby eliminated (apart from the
fitness score evaluation). In our opinion this is one of the most promising ways
to liberate design processes in the future because designing personalized objects according to one's own taste becomes possible for everybody. Through the direct feedback, the algorithm also adapts dynamically to changes in the user's taste, for instance over a lifetime.
Fig. 10. 3D print of generated bottles using a. plausible shape representation, b. seman-
tic shape representation, c. personal taste in shape representation, and d. democratized
shape representation
Since the architecture is based on the semantic shape representation, the image quality is at the same level and a database of associated features is also needed.
Fig. 11. Rendering of generated bottles using a. plausible shape representation, b.
semantic shape representation, c. personal taste in shape representation, and d. de-
mocratized shape representation
Through the democratic approach, see Fig. 10d and 11d, collectives can de-
sign objects together. With the introduction of variable parameters (sliders), ev-
ery human being is able to design things, whether talented or not. This bypasses
the designer and allows the end-user to take on the role of a designer directly.
Secondly, the opinion of each individual can be incorporated into a final prod-
uct. There’s no need for a central design instance anymore. The zeitgeist of the
collective can (anonymously) create something together, on which the majority
can agree. Also, the manual sketches of the concept phase were eliminated.
Within a few seconds, countless new variants could be created, for which oth-
erwise individual manual sketches would be needed. However, the image quality
and diversity are worse compared to the previous algorithms.
Table 1. Comparison of the different methods presented here.
                 plausible    semantic     personal     democratic
                 shape        shape        taste        approach
Affordance       medium       high         medium       medium
Automation       full         full         semi         semi
Shape quality    very good    good         good         medium
Creativity       medium       very good    very good    medium
Personalization  low          low          high         medium
In Table 1 we compare the different approaches according to the parameters
described next:
– Affordance (in data preparation) describes how much time has to be spent to prepare the data to train the ANN.
– Automation describes how much of the process is automated and how much has to be done by the designer.
– Shape quality describes the subjective quality of the shape including detail density, image sharpness, resolution and number of image artifacts.
– Creativity describes to what extent the automatically generated results have a creative or inspiring effect on the designer.
– Personalization describes how much individuality is kept in the design process and how much of the personal taste is represented in the outcome.
As previously mentioned, all variants shown here were trained with a well-defined data set consisting of 200 relatively simple 2D images. This procedure was sufficient to analyze the process. Whether the same procedures can be applied to more complex shapes and higher dimensionality is unclear because these variants
might encounter additional problems. A possible solution in the future would be
the use of voxels or a polygon mesh, which allows a 3D representation. However,
experience shows that the necessary amount of training data increases with
increasing complexity. A manually created data set is therefore no longer a valid
option.
Automatically created 2D data sets, e.g. obtained by web scraping, lead to problems because these images often lack the quality needed for this application, for instance by containing other objects or image artifacts (which, however, is desirable for image classification because it improves generalization). For 3D objects, this is not the case to the same extent, e.g. CAD files in most cases only depict the desired object. To get such data, there are already large databases of high quality [14]. Because CAD is an industry standard, companies can also use their existing data sets. The disadvantage of the increased complexity
due to the 3D representation can potentially be partly compensated by the high
quality and quantity of the training data.
6 The Doom of the Designer or a New Beginning
Today, designers explore solutions concerning the semiotic, the aesthetic and the
dynamic realm, as well as confronting corporate, industrial, cultural and politi-
cal aspects. The relationship between the designer and the designed is directly
connected through their intentions, although currently mediated by third parties and media tools. In addition to the traditional design process, generative methods now ap-
pear, which utilize the concept of creating and modifying interacting rules and
systems to autonomously generate a finished design, rather than the designer
manipulating/altering the artifact itself. Therefore, the designer orchestrates the
rules and systems involved in the process of creating designs (through AI), re-
sulting in the emergent properties of the newly interconnected and constantly
self-enhancing scheme. The skill here is to master the neither formalized nor
instruction-based methodology as well as to control the relationship between
process specifiers, the environment and the generated artifact. As in conven-
tional design, the human designer remains at the center of the design process.
Do we still need a designer in times of AI and automation? Not only is this
the first question that crosses the minds of non-designers, but it is an even more
important question for the design world. Designers are not the only ones to feel
the threat of AI. For instance, translators are concerned that they could be replaced by machine translation and truck drivers fear losing their jobs
because of autonomous driving.
Many people associate AI with machines taking over and completely replac-
ing everything, in our case especially the design process. Instead of encouraging
the thought of AI as a threat, one should consider the opportunity to explore and question the core and root of design. So instead of getting rid of it, we fully embrace it and find new ways of creating with it, just like a potter's wheel helps the potter to create more symmetric shapes.
One of the leading questions was how the time of the designer can be utilized
more efficiently. What if AI, within the creative process, can support the work of
the designer? Is it possible to implement the 80/20 principle within the creative
process, where the computer takes over 80% of the necessary work? What else
can the designer do with his/her time, when suddenly 80% of the work is done by a computer that generates a result as good as before?
After testing the algorithms extensively, the results confirmed the previously
proposed idea. Paired with AI, the computer can fulfill the majority of time-
consuming work, while the designer’s sole responsibility lies in determining spec-
ifications and adjusting the system's final result one last time. It is the designer who teaches the computer about good and bad designs by feeding the system with information about personal needs and a more or less subjective aesthetic
point of view. The computer learns about a specific taste and proposes individual
solutions.
Now that there is a proof of concept that it is possible to teach AI the form and shape of a product, and that it is able to reproduce it even more efficiently than
humans, the question is: What do we need human designers for anymore? One
reason why people might keep asking this question is that the recognition factor
of a designer comes from the “creative” gene. And thus, people are most surprised about the creation of something so intangible solely by logic and numbers. Is there an equation for design and creativity? Or is there an option where both can coexist?
What will the job of a designer look like in the future? One thing you can
count on in human evolution is that as soon as someone creates something that makes a task at hand more efficient, that approach will prevail. You can look back at all the industrial improvements that were created and, in the end, they all have improved our living standards. Consequently, the job of a designer will change and become more diverse. The need for creative and new approaches to the problems we face now and in the future has never been greater.
7 Conclusion & Outlook
In this work, we set out to prove that most of the design process could be automated or at least semi-automated and that a workflow from the first sketches
to the final product could be significantly streamlined. In particular, the brain-
storming phase of the design process could be automated and it was possible
to go directly from the technical drawing into the 3D model of a bottle. This
became possible by generating design proposals from different algorithms includ-
ing ANN and GA. This drastically accelerated the design process and saved the
designer tedious labor time. The algorithms have also provided inspiration for
the designer. Also, the end-user and collectives can now act as designers without
having the appropriate abilities, which means individualized as well as collab-
orative design is now easier than ever. We chose a simple object—a bottle—to
prove our concept. Any other object could, in principle, be designed the same
way. It should also be possible to extend our proposed approach to include a third dimension. More complex shapes and higher dimensionality, however, raise the complexity, and therefore more data and other solutions might need to be introduced.
We live in an era of accelerating technological progress which is already influ-
encing our daily lives. We cannot ignore technological developments and pretend
these changes are not happening. Instead, we should embrace the development—
but also reflect on its impact—and see it as a new set of opportunities for us to
explore and prosper. We have to reflect on what makes us human and remem-
ber that we are still the ones who are conceiving something that we think of as
beautiful and therefore value it. “Successful designs are not necessarily ‘made’:
new functionality may ‘evolve’ through the use and interpretation of artifacts by
an audience” [18]. There are many examples today where AI has influenced the
creative process, letting the designer cherry-pick and approve adjustments based
on the proposed variations. Let us start exploring these possibilities today and
see where they can take us.
References
1. Autodesk Research: Generative design (2019),
https://www.autodesk.com/solutions/generative-design, accessed: 2019-05-21
2. Autodesk Research: Project dreamcatcher (2019),
https://autodeskresearch.com/projects/dreamcatcher, accessed: 2019-05-17
3. Cluzel, F., Yannou, B., Dihlmann, M.: Using evolutionary design to interactively
sketch car silhouettes and stimulate designer’s creativity. Engineering Applications
of Artificial Intelligence 25(7), 1413–1424 (2012)
4. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-
scale hierarchical image database. In: Conference on computer vision and pattern
recognition. pp. 248–255. IEEE (2009)
5. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair,
S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in neural
information processing systems. pp. 2672–2680 (2014)
6. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial
examples (2015)
7. Gupta, D., Ghafir, S.: An overview of methods maintaining diversity in genetic
algorithms. International journal of emerging technology and advanced engineering
2(5), 56–60 (2012)
8. Haik, Y., Sivaloganathan, S., Shahin, T.M.: Engineering design process. Nelson
Education (2018)
9. Hornby, G., Globus, A., Linden, D., Lohn, J.: Automated antenna design with
evolutionary algorithms. In: American Institute of Aeronautics and Astronautics
Conference on Space, San Jose, CA. pp. 19–21 (2006)
10. Howard, T., Culley, S., Dekoninck, E.: Creativity in the engineering design process.
In: 16th International Conference on Engineering Design, ICED (2007)
11. Howard, T.J., Culley, S.J., Dekoninck, E.: Describing the creative design process
by the integration of engineering design and cognitive psychology literature. Design
studies 29(2), 160–180 (2008)
12. Kato, N., Osone, H., Sato, D., Muramatsu, N., Ochiai, Y.: Deepwear: a case study
of collaborative design between human and artificial intelligence. In: Twelfth In-
ternational Conference on Tangible, Embedded, and Embodied Interaction. pp.
529–536. ACM (2018)
13. Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint
arXiv:1312.6114 (2013)
14. Koch, S., Matveev, A., Jiang, Z., Williams, F., Artemov, A., Burnaev, E., Alexa,
M., Zorin, D., Panozzo, D.: ABC: A big CAD model dataset for geometric deep
learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition. pp. 9601–9611 (2019)
15. Kolss, M., Wölfel, M., Kraft, F., Niehues, J., Paulik, M., Waibel, A.: Simultaneous German-English lecture translation. In: International Workshop on Spoken Language Translation (2008)
16. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep con-
volutional neural networks. In: Advances in neural information processing systems.
pp. 1097–1105 (2012)
17. Li, J., Xu, K., Chaudhuri, S., Yumer, E., Zhang, H., Guibas, L.: Grass: Genera-
tive recursive autoencoders for shape structures. Transactions on Graphics (TOG)
36(4), 52 (2017)
18. McCormack, J., Dorin, A., Innocent, T., et al.: Generative design: a paradigm for
design research. Proceedings of Futureground, Design Research Society, Melbourne
(2004)
19. Metz, L., Poole, B., Pfau, D., Sohl-Dickstein, J.: Unrolled generative adversarial
networks. In: Proceedings of 5th International Conference on Learning Represen-
tations (2017)
20. Nyholm, S., Smids, J.: The ethics of accident-algorithms for self-driving cars: An
applied trolley problem? Ethical theory and moral practice 19(5), 1275–1289 (2016)
21. Of, J.: Brand Formative Design - Development and Assessment of Product Design from a Future, Brand and Consumer Perspective. Ph.D. thesis, Universitätsbibliothek Mainz (2014)
22. Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep
convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434
(2015)
23. Schmitt, P., Weiss, S.: The chair project—four classics (2018),
https://philippschmitt.com/work/chair, accessed: 2019-05-17
24. Wang, J., Perez, L.: The effectiveness of data augmentation in image classification
using deep learning. Convolutional Neural Networks Vis. Recognit (2017)
25. Wikström, D.: Me, myself, and AI. Case study: Human-machine co-creation explored in design (2018)
26. Wilson, N., Thomson, A., Riches, P.: Development and presentation of the first
design process model for sports equipment design. Research in Engineering Design
28(4), 495–509 (2017)
27. Wölfel, M.: Der smarte Assistent. In: Ruf, O. (ed.): Smartphone-Ästhetik: Zur Philosophie und Gestaltung mobiler Medien, pp. 269–288. Transcript (2018)
28. Yuan, X., He, P., Zhu, Q., Li, X.: Adversarial examples: Attacks and defenses for
deep learning. Transactions on neural networks and learning systems (2019)
29. Zhang, G., Qu, M., Jin, Y., Song, Q.: Colorization for anime sketches with cycle-
consistent adversarial network. International Journal of Performability Engineering
15(3) (2019)
30. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation
using cycle-consistent adversarial networks. In: Computer Vision (ICCV), 2017
IEEE International Conference on (2017)