Figure 2 - uploaded by Frieder Nake
Frieder Nake: 13/9/65 Nr. 2. Also called “Hommage à Paul Klee”. 1965

Source publication
Conference Paper
Full-text available
The story of some early computer art drawings in 1965 is told. It is a story of randomness. Computer art is viewed here as the programming of classes of aesthetic objects. In the mid 1960s, information aesthetics was a powerful and radical theory that had some influence on constructive and concrete forms of art in Europe. A connection is drawn to c...

Contexts in source publication

Context 1
... distribution function and which random number generator to use. Needless to say, the system’s clock was read upon start of the generators. That random time then determined the start value for the random sequence. In consequence, hardly anybody would ever be able to repeat exactly the same drawing sequence.

Almost all early experimenters in visual computer art made use of random numbers. In the publications by Noll, Nees, or Nake you will find examples that look very similar. If we were talking about natural art (the opposite of artificial art), we would interpret such an event as style: the common manner by a group of artists to draw or paint. In our case, however, a pattern of a variety like “random polygon” would be attributed to its simplicity. Irrespective of the details of the program, its results look much like one another. This indicates that style and program may have some commonalities. On the other hand, Fig. 1 shows a sample of random polygons of distinctly different visual appearance. The macro-aesthetics of such drawings consist of two components: the overall geometry and the set of probability distributions. Appropriate definition of the distributions may have a great influence on the micro-aesthetics. A polygon with alternating vertical and horizontal edges differs from one with all possible edge directions only in the choice of the next direction. Otherwise, the pattern is the same. The descriptive power gained in writing executable concepts of pictures (also known as programs) is enormous. It is where computer art superseded concept art.

In Fig. 1, we see two large and many small random polygons. The central one is by Michael Noll. Only horizontals and verticals were allowed. Everyone in the mid 1960s tried this pattern. The panel by Georg Nees is a matrix of repeatedly connecting 23 vertices by horizontals and verticals, and closing the polygon by an oblique line. The third example allows for a larger but finite selection of directions. This requires a discrete probability distribution.

Is it far-fetched to draw some sort of analogy between the artist’s intuitive decisions and the realizations of random processes, which – as such – are governed by probability distributions? Such an analogy would definitely be far-fetched if we claimed that the artist’s intuition was equal to a set of probability distributions. It would still be far-fetched (and thus wrong) if we claimed that the artist’s decision process was simulated by the random process. We could, however, justify saying that the place of intuition in a human creative process was taken by an ordered set of probability distributions in a computer generative process. As with any analogy, the comparison contains an element that is equal on both sides, and other elements that are different. There is just nothing equal between a human being and a computer. Only on rather abstract levels do similarities show up, or may they be constructed. If intuition is what is immediately clear to us, what we understand in a single moment ever so short, what needs no further reason other than being exposed to the situation, then an artist’s decision process is governed by intuitive as well as discursive steps. (The same, by the way, is true for the scientist.) In the aesthetic program, the set of probability distributions takes exactly that place of immediacy – if there is anything like that.
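To make the interplay of macro-aesthetics (the overall geometry) and micro-aesthetics (the governing probability distributions) concrete, here is a minimal Python sketch. It is an illustration only, not the original 1965 program; every function name, parameter value, and distribution choice below is an assumption. It draws a random polygon whose edge directions come from a discrete distribution and whose edge lengths come from an interchangeable distribution, with the generator seeded from the system clock as described above.

import math
import random
import time

def random_polygon(n_edges, directions, weights, edge_length):
    # Macro-aesthetics: the overall geometry, a connected polygon of n_edges segments.
    # Micro-aesthetics: each direction is drawn from a discrete distribution
    # (directions/weights); each length comes from the callable edge_length().
    x, y = 0.0, 0.0
    vertices = [(x, y)]
    for _ in range(n_edges):
        angle = random.choices(directions, weights=weights)[0]
        length = edge_length()
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        vertices.append((x, y))
    return vertices

# Seeding from the system clock, as described above, makes exact repetition of a drawing unlikely.
random.seed(time.time())

# Variant 1: only horizontal and vertical edges, the pattern everyone tried in the mid 1960s.
hv_polygon = random_polygon(
    n_edges=20,
    directions=[0.0, math.pi / 2, math.pi, 3 * math.pi / 2],
    weights=[1, 1, 1, 1],
    edge_length=lambda: random.uniform(5, 40),
)

# Variant 2: a larger but finite selection of directions, with exponentially
# distributed edge lengths; only the probability distributions have changed.
wide_polygon = random_polygon(
    n_edges=20,
    directions=[k * math.pi / 8 for k in range(16)],
    weights=[1] * 16,
    edge_length=lambda: random.expovariate(1 / 20),
)

Swapping the direction set or the length distribution changes the micro-aesthetics while the macro-level scheme, a single connected polygon, stays the same.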
When we look at a work of computer art, and analyze its overt structure, we may in many cases fairly well describe the global geometry, which here I call macro-aesthetics. We are at odds, however, with the details that, as we know, are decided by random numbers. These details on the local scale constitute the micro-aesthetics. They remain hidden to us. The same is true for the artist’s intuition.

We take a second look at the distinction of macro- and micro-aesthetics, introduced in the previous section. Consider Fig. 2. It depicts one of the better-known examples of early computer art, the graphic “13/9/65 Nr. 2”. In an attempt to avoid any connotations and allow the name of a picture only to identify it, my naming schema gave the date of production and the running number during that day. The successful artist, Manfred Mohr, has been using a similarly simple, clear, and context-poor naming device for decades. But the picture of Fig. 2 is also called Hommage à Paul Klee. It has been described as incorporating some stylistic rules gained from analyzing drawings by Paul Klee and putting them into computable form such that they could become part of an automatic process. I myself didn’t do enough against this misinterpretation, although it is clear from reading [Nake 1974:214ff] that we see something totally different from anything an artist like Paul Klee would probably have done. In actual fact, Fig. 2 is related to some of Klee’s art in so far as it clearly distinguishes micro- from macro-structures.

The first impression many observers have of Fig. 2 is its horizontal orientation. The broken lines running from the left to the right boundary have a definite tendency of “going there”. Their bends add to this impression. They are main roads. As you proceed along one of them, several events may occur. You may get along freely and fast because nothing is preventing you. But you may also get to some place with a lot going on in criss-crossing short paths, which at this place determine everything. Or you may have to cross roads of a secondary kind running perpendicular to your direction. Although the bulk of the graphic line material is of the second, local kind, those local parts align with the global structure. These are the micro- and macro-aesthetics. There are two components that combine the two. One consists of the vertical local roads that may run across not only one, but two of the horizontal stretches. They thereby create connections between micro and macro levels. The other combining component is the set of circles. Their visual appearance distinguishes them sharply from all the straight lines. Therefore, they counterbalance the drawing’s general impression. They introduce, on the macro-level, a contradiction between straight and circular, or between open and closed forms. The lesson learned from this exercise was that it could be an easy task to generate graphics in a layered design – a lesson well known to anyone in the field of design and art (a schematic sketch follows at the end of this excerpt).

“To encourage explorations in this new artistic domain, Computers and Automation will hold an informal contest for similar examples of visual creativity in which a computer plays a dominant role. We invite any reader to submit to us examples – which we shall consider for publication in Computers and Automation.” So read a short announcement on page 21 of the February 1963 issue of the magazine Computers and Automation. The editors, mainly Edmund C. Berkeley himself, had been motivated to take such a step by the front cover of their January 1963 issue. It had shown the output of a mathematical transform of some optical data. The August issue singled out two plotter drawings that were visualizations of certain physical processes. Their titles were “Splatter Pattern” and “Stained Glass Window”. We are told that they both were automatically graphed by a dataplotter of Electronic Associates. The names of the authors are not given. The August issue in 1964 again has, on its cover, a drawing of some mathematical function. The 1965 award went to A. Michael Noll for his “Computer Composition with Lines”. It had been included in the April show at Howard Wise, and was Noll’s famous simulation of a Mondrian painting. Four more graphics were reprinted inside the magazine, two more of them by Noll, the other two simple mathematical patterns.

Twelve examples were displayed as award winners or honorary mentions of the 1966 contest. My “Distributions of Elementary Signs” appeared on the cover page only partially. Its blue part had vanished, most likely due to reproduction problems. Otherwise, some speculation about color in early computer graphics could have been settled already then. Other contributors were Maughan S. Mason, Petar Milojedic, C.K. Messinger, L.W. Barnum, and D.K. Robbins.

Far more entries from far more people caused Ed Berkeley in 1967 to announce “Computer art: the turning point”. The award went to Charles Csuri for his famous Sine Curve Man (Fig. 3). Six more of his works, most of them made with his programmer, James Shaffer, were reprinted in the August issue. Other authors represented were Leslie Mezei, Petar Milojedic, Darel Eschbach Jr., Stan Vanderbeek and Ken Knowlton, Donald K. Robbins, Lloyd Sumner, Frieder Nake, D.J. DiLeonardo, Maughan S. Mason, and Craig Sullivan. “Art in the future will be as profoundly influenced by the computer as by any other medium of expression” is the thrust of Berkeley’s more elaborate comment on the contest’s results. He goes on to praise to artists the technical advantages of using a computer. He raises the question whether the human being will be superseded. He answers with a clear “No”. At that turning point, the figurative element had clearly entered the scene, through Csuri and Mezei. The genre of immediate visualization of mathematical things and relations was still prevailing. But Csuri had a background in the arts, as had Mezei, and in Europe several galleries had put up shows of large constructivist repertoires of art that involved computers.

Once more, in 1968, the number of entries increased, the breadth of visual expression increased, and the background of people contributing became more diverse. Color was no big deal any more. The work by Evan Harris Walker, constructivist in character, depended much on color. Leslie Mezei contributed his Tower of Babel series. The Japanese Computer Technique Group appeared. The award went to two representatives of Calcomp Plotters. A total of 165 entries reached the editor for the 1969 contest, a year ...
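As a concrete illustration of the layered design noted above for “13/9/65 Nr. 2” (macro-level main roads, micro-level criss-crossing segments, and circles as a counterweight), the following Python sketch generates the layers as plain geometric data. It is a hypothetical reconstruction for illustration only, not the program behind the drawing; all counts, ranges, and distributions are assumptions.

import random

WIDTH, HEIGHT = 400, 300

def main_roads(n_rows):
    # Macro layer: broken horizontal polylines running from the left to the right boundary.
    roads = []
    for row in range(n_rows):
        y = (row + 1) * HEIGHT / (n_rows + 1)
        points, x = [(0.0, y)], 0.0
        while x < WIDTH:
            x += random.uniform(20, 60)
            y += random.gauss(0, 5)  # small bends add to the horizontal tendency
            points.append((min(x, WIDTH), y))
        roads.append(points)
    return roads

def local_crisscross(roads, n_clusters):
    # Micro layer: clusters of short criss-crossing segments placed along the main roads.
    segments = []
    for _ in range(n_clusters):
        px, py = random.choice(random.choice(roads))
        for _ in range(random.randint(5, 15)):
            x0 = px + random.uniform(-15, 15)
            y0 = py + random.uniform(-15, 15)
            segments.append(((x0, y0), (x0 + random.uniform(-10, 10), y0 + random.uniform(-10, 10))))
    return segments

def circles(n):
    # Counterweight layer: closed circular forms against the open, straight lines.
    return [(random.uniform(0, WIDTH), random.uniform(0, HEIGHT), random.uniform(3, 12)) for _ in range(n)]

roads = main_roads(n_rows=4)
layers = {
    "macro": roads,
    "micro": local_crisscross(roads, n_clusters=12),
    "circles": circles(10),
}

Rendering the layers in order (macro roads first, then the local criss-crossing, then the circles) yields the kind of layered construction the excerpt describes.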
Context 2
... line-segments in varying directions and of varying lengths. If the number of segments, the direction, and the length in each case were chosen at random, the result would be a polygon unpredictable in all its detail but known in advance in general form. The program would allow drawing all possible polygons of finitely many segments. It would constitute the definition of that infinite set, and the device to generate each of its members. So I started to force the machine to draw like mad and produce small, later larger, formats of drawings consisting of random polygons. Because of my knowledge of pseudo-random number generators, I used uniform, exponential, Gaussian, Poisson, and arbitrary discrete probability distribution functions to control the myriads of random decisions needed for each of the drawings. As one further source of variability, I included in my palette of random number generators various methods for the basic uniformly distributed numbers. My computer art programs soon developed to a point where they first had to decide which distribution function and which random number generator to use. ...

Similar publications

Article
Full-text available
This article examines the tendency of lowsumerism as directed towards a new discourse in advertising campaigns, one that approaches sustainable consumption. It criticizes the unsustainability of advertising, which is often lost between ethics and aesthetics. The Minas Gerais brand Green Co., which operates in the fashion segment, is an empiric...
Conference Paper
Full-text available
The purpose of this study is to determine visual aesthetic attributes for user experience. As interactive digital media and their associated content have diversified, there are difficulties in finding universal visual aesthetic guidelines. While previous studies look into each unique user experience, few focus on meta-analysis of vis...

Citations

... However, these two concepts are not necessarily in conflict, and in some cases they can intersect. In fact, computers have been used to produce art since the early 1960s by computer scientists [32] as well as by artists who treated algorithms as a new artistic medium, such as Vera Molnar [31] or Harold Cohen [29]. Being able to produce geometric shapes and lines using programming languages was the gateway to producing more complex agents such as Cohen's AARON [29], the Painting Fool [9] or, more recently, Paul, the drawing robot by Patrick Tresset [43], or D.O.U.G., the collaborative system created by Sougwen Chung. ...
Chapter
This paper presents a study conducted in a naturalistic setting with data collected from an interactive art installation. The audience is challenged in a Turing Test for artistic creativity involving recognising human-made versus AI-generated drawing strokes. In most cases, people were able to differentiate human-made strokes above chance. An analysis conducted on the images at the pixel level shows a significant difference between the symmetry of the AI-generated strokes and the human-made ones. However, we argue that this feature alone was not key for the differentiation. Further behavioural analysis indicates that people judging more quickly were able to differentiate human-made strokes significantly better than the slower ones. We point to theories of embodiment as a possible explanation of our results.
... Almost as soon as computers became available, they were used to create generative digital art. While artists like Frieder Nake (Nake 2005) and Vera Molnar (Roe-Dale 2019) created algorithmic art in the sixties, these works did not employ techniques commonly thought of as AI. Among the first AI-based artworks was AARON by Harold Cohen (Cohen 2016). ...
Article
Full-text available
Intelligent environments combine the promise of ubiquitous computing with artificial intelligence and are increasingly being used in public art. The agent-based approach to artificial intelligence (AI) uses the intelligence function to characterize agent-based behavior. The inputs to the intelligence function (perception of the environment and the agent's internal state), combined with the outputs of the function (actuation and changes in internal state), provide a lens with which to categorize AI-based public art. Such works can be classified as generative, reactive, interactive, learning, or static. To illustrate this taxonomy, this paper gives examples of public artworks that fit into each of the five categories and uses the taxonomy to suggest new areas of creative inquiry.
... Classic examples include Conway's Game of Life (Conway, 1970) and reaction diffusion equations (Turing, 1952). Generative systems have been used by a number of artists, from pioneering early work by Algorists such as Manfred Mohr (Mohr and Rosen, 2014), Frieder Nake (Nake, 2005), Ernest Edmonds (Franco, 2017) and Paul Brown (DAM, 2009a), to more recent work by artists such as William Latham (Todd and Latham, 1992), Yoichiro Kawaguchi (DAM, 2009b), Casey Reas (Reas, 2018), and Ryoji Ikeda (Ikeda, 2018). ...
... Classic examples include Conway's Game of Life (Conway 1970) and reaction diffusion equations (Turing 1952). Generative systems have been used by a number of artists, from pioneering early work by Algorists such as Manfred Mohr (Mohr and Rosen 2014), Frieder Nake (Nake 2005), Ernest Edmonds (Franco 2017), and Paul Brown (Digital Art Museum 2009a) to more recent work by artists such as William Latham (Todd and Latham 1992), Yoichiro Kawaguchi (Digital Art Museum 2009b), Casey Reas (Reas 2018), and Ryoji Ikeda (Ikeda 2018). ...
Article
Full-text available
This article reviews the development of the author’s computational art practice, where the computer is used both as a device that provides the medium for the generation of art (‘computer as art’) and as an active assistant in the process of creating art (‘computer as artist’s assistant’), helping explore the space of possibilities afforded by generative systems. Drawing analogies with Kasparov’s Advanced Chess and the deliberate development of unstable aircraft using fly-by-wire technology, the article argues for a collaborative relationship with the computer that can free the artist to more fearlessly engage with the challenges of working with emergent systems that exhibit complex, unpredictable behavior. The article also describes ‘Species Explorer’, the system the author has created in response to these challenges to assist exploration of the possibilities afforded by parametrically driven generative systems. This system provides a framework that allows the user to apply a number of different techniques to explore new parameter combinations, including genetic algorithms and machine learning methods. As the system learns the artist’s preferences, the relationship with the computer can be considered to change from one of assistance to collaboration.
... The three main traditions focus on signs as a social system (European), the individual use of a sign (American), or a philosophical inquiry in which language is one sign type among others (Peircean) [25]. It has been suggested that software is to be understood as an algorithmic sign [26]. Because signs in general stand for something else, they need to be interpreted. ...
... Because signs in general stand for something else, they need to be interpreted. What is unique about the algorithmic sign is that it is not interpreted by humans alone, but also by the machine [26]. It is important to note that the word 'interpretation', when concerned with computers, does not share the nature of the human act of interpretation. ...
Article
Full-text available
Computational Thinking has gained popularity in recent years within educational and political discourses. It is more crucial than ever to discuss the term itself and what it means. In June 2017, Denning articulated that computational thinking can be viewed as either “traditional” or “new”. New computational thinking highlights certain skills as desired in solving problems, whereas traditional computational thinking is a skill set resulting from engaging in traditional computing activities. By looking at computational thinking through the perspective of semiotics, it is possible to dissolve the traditional vs. new distinction and concentrate on computational thinking having both an explicit and an implicit nature. In this perspective, a computer program becomes an algorithmic sign which can be interpreted both by humans and by machines. The double interpretation allows for a dialectic relationship between computing activities and Computational Thinking instead of the dualistic traditional vs. new approach.
... As an alternative, we introduce the algorithmic sign, developed within computer semiotics [1, 4, 5], as a framework for developing a dialectic perception of CT. This enables CT to take on an implicit or explicit nature depending on the context in which it is used. ...
... Software can be understood as an algorithmic sign [4]. What is special about the algorithmic sign is that it is not only interpreted by humans, but also by machines [4]. It is important to note that the word 'interpretation', when concerned with computers, does not share the nature of the human act of interpretation. ...
... An early example is 'Hommage à Paul Klee' by Frieder Nake. To create this work, Nake programmed randomly specified instances of variables, which allowed the computer to make formal choices based on probability theory (Nake, 2005). At the same time, Noll created a computer-generated Mondrian-like artwork which, when shown next to a reproduction of a real Mondrian, "Composition with Lines" (1917), was indistinguishable and often preferred over the true Mondrian (Noll, 1966). ...
Article
Full-text available
As artificial intelligence (AI) technology increasingly becomes a feature of everyday life, it is important to understand how creative acts, regarded as uniquely human, can be valued if produced by a machine. The current studies sought to investigate how observers respond to works of visual art created either by humans or by computers. Study 1 tested observers' ability to discriminate between computer-generated and man-made art, and then examined how categorization of art works impacted perceived aesthetic value, revealing a bias against computer-generated art. In Study 2 this bias was reproduced in the context of robotic art; however, it was found to be reversed when observers were given the opportunity to see robotic artists in action. These findings reveal an explicit prejudice against computer-generated art, driven largely by the kind of art observers believe computer algorithms are capable of producing. These prejudices can be overridden in circumstances in which observers are able to infer anthropomorphic characteristics in the computer programs, a finding which has implications for the future of artistic AI.
... The points in evidence by way of the illustrations above are that, first, the coding of programs is laden with meanings that can tell us interesting things about the programmer's frame of mind, and that, second, a semiotic analysis combining internal and external program signs can be further explored in certain domains. For example, in Computer Art there are interesting discussions about the conceptual dimensions of images produced by a computer program (Boden & Edmonds, 2009; Nake, 2005). The kind of semiotic analysis carried out by PoliFacets with AgentSheets suggests that there may be room to develop exciting EUD art tools based on semiotic dimensions advanced by Semiotic Engineering and other related theories. ...
Chapter
Theories have an important role to play in research areas whose application faces rapid technological changes. They can provide longer-term intellectual references that shape deeper investigations and contribute to consolidate the identity of such research areas. A recent survey of EUD-related work published between 2004 and 2013 suggests that our field is remarkably techno-centered and could increase its scientific impact by diversifying some of its research approaches and practices. In this paper we show concrete examples of how Semiotic Engineering, originally a semiotic theory of human-computer interaction, can provide a unified theoretical framing for various EUD-related topics of investigation. Our contribution to the collection of chapters in this book is to demonstrate this particular theory’s potential as a catalyst of new kinds of transdisciplinary debate, as well as a source of inspiration for new breeds of technological developments.
... The terms "generative art" and "computer art" have been used in tandem, and more or less interchangeably, since the very earliest days. For the first exhibition of computer art was called "Generative Computergraphik" (see the description of the event in Nake 2005). It was held in Stuttgart in February 1965 and showed the work of Georg Nees. ...
Article
Full-text available
There are various forms of what's sometimes called generative art, or computer art. This paper distinguishes the major categories and asks whether the appropriate aesthetic criteria—and the locus of creativity—are the same in each case.
... Nake [Nak05], one of the pioneers of computer or algorithmic art (i.e., art explicitly generated by an algorithm), considers a painting as a hierarchy of signs, where at each level of the hierarchy the statistical information content could be determined. He conceived the computer as a Universal Picture Generator capable of "creating every possible picture out of a combination of available picture elements and colors" [Nak74].