Making Art with Artificial Intelligence: Artists in Conversation (Google I/O’19)

[MUSIC PLAYING] KENRIC MCDOWELL: If you’ve
been at the conference, maybe you had a little
chance this morning, or you were here
yesterday, you’ve probably seen some of the
artwork that’s around, some video installations. Or if you were at
presentations and performances last night in this
very room, you would have seen some
of the programming that Alex Czetwertynski and
I curated for the I/O Arts program at I/O this year. So we’re going to have a
little conversation today, and I just wanted to introduce
myself and talk a little bit about the– just set up some
context for the program. And then we’ll get
to some conversation with artists, which I know
we’re all looking forward to. So as I mentioned, there
are some installations and performances that were
curated for the I/O Arts program. A lot of the work has
to do with artists that use artificial intelligence. And we have two of them here
with us today, Cedric Kiefer from onformative,
whose installation is out here on the boardwalk– you may have seen it,
“Meandering River”– and Sougwen Chung, whose
performance was last night, and whose work is hanging here. So please, a round of
applause for both of them. [APPLAUSE] Cedric and Sougwen are going
to give some presentations on their work,
and then we’ll get into conversation about that. CEDRIC KIEFER: OK. KENRIC MCDOWELL:
Thank you, guys. CEDRIC KIEFER: Thank you. Yeah, I will start right away. Thanks for having me. And it’s a great
pleasure to speak in front of such an audience. And I want to talk a little bit
about our work, our practice. And when I talk about our
practice, what I mean by that– or if I have to describe
it in one short sentence, I would usually say
that we are searching for new ways of
creative expression. And we do that by
using technology. And to give a little
bit of a background where I’m coming from, or where
onformative– the studio I’m running– is coming
from, about 10 years ago, we co-published the
“Generative Design” book. And for us, that has always
been the starting point. And since then,
our work has always been focusing
around digital art, using code and using technology
to create art and design. And what I want to talk
about today, though, is more what inspiration
and improvisation actually mean in such a setup– in a technology-driven
setup, but also in general. Because if you look
back, improvisation plays an important role
in arts in general, in totally different
disciplines. So for example, looking at graphic scores in classical music composition is a really good example of how people started to think about how they can foster improvisation by creating visually abstract graphic scores that leave a lot of room for interpretation. So they started to develop
tools to actually come up with something new to
create moments of chance. And I think that’s
really interesting. But it is not only happening
in classical music. Performance and dance is another
case where such a thing exists. For example,
“Improvisation Technologies” by William Forsythe,
developed in the ’90s, where he started
to actually come up with an imaginary
system of rules. It’s kind of like a toolset
for him to improvise. So he was thinking
about lines and boxes– imaginary ones. And he started to
interact with them by either avoiding
them or following them. I think that’s a really
interesting thought. And we have been using
technology in a similar way. We use it to get inspiration
and to do things differently, because I think that’s
what it’s about. When you want to search for
new ways of visual expression, you have to do
things differently. And as a studio, we are not really focused on specific
media, which means we create work that ranges from
interactive art installations to kinetic sculptures, but also
work that is purely visual. This is a project that I want
to talk a little bit about more. It’s “Collide,” something
that we did a few years ago, a digital art piece
which is based on the idea of visualizing
dance and motion. And as we do with a lot of projects, we started with researching,
exploring, writing software, writing code and
tools, and trying to understand how, actually,
energy flows through the body. And during the work, we spent a lot of time with the dancers and performers that we tracked– we used motion tracking to actually detect their movement and used that as an input for the art piece.
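To make that concrete, here is a minimal sketch of the idea– turning tracked joint positions into a per-joint “energy” signal that a renderer could map onto visuals. The joint count, frame rate, and NumPy representation are assumptions for illustration; this is not onformative’s actual software.

```python
# Illustrative sketch only -- not onformative's actual pipeline.
# Given tracked joint positions for two consecutive frames (from any
# motion-capture or pose-estimation system), estimate a per-joint
# "energy" as speed, which a visual system could map to a field.
import numpy as np

def joint_energy(prev_joints: np.ndarray, joints: np.ndarray, dt: float) -> np.ndarray:
    """Speed of each joint between two frames; shape (n_joints,)."""
    velocity = (joints - prev_joints) / dt      # (n_joints, 3), units per second
    return np.linalg.norm(velocity, axis=1)     # one energy value per joint

# Example: 17 joints in 3D, two frames captured at 60 fps.
rng = np.random.default_rng(0)
frame_a = rng.normal(size=(17, 3))
frame_b = frame_a + rng.normal(scale=0.01, size=(17, 3))
print(joint_energy(frame_a, frame_b, dt=1 / 60).round(3))
```

And one thing that they always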
mentioned and brought up during the rehearsals is
the fact that, at one point, as a performer, you get
lost in what you do. You get lost in the performance. You are in the zone. Everything blurs. And we really like the
idea of actually exploring how energy actually expands
and flows into the space. And at this point, we
started to interpret– [VIDEO PLAYBACK] [SLOW CELLO MUSIC] –the screens
actually as a window into that abstract world. So what you see here
is our interpretation of what it feels like if you
get lost in a performance, in a creative process. And yeah. You see parts of bodies
coming towards the screen, becoming a little bit more
concrete, and then also quickly disappearing again. But I don’t want to talk
much about the visual side of this project, but I want
to talk about the music side, because for this piece,
we also wanted to create a powerful soundscape. [END PLAYBACK] And we thought a lot
about what the best way would be to do that. So instead of
handing the visuals over to a sound
designer and just asking him to do
some sound on top, we wanted to make
sure that we actually stay true to the concept,
which is actually being immersed, and
being totally focused. And at the same
time, we also wanted to actually foster this moment
of creativity, inspiration, improvisation. And for that
purpose, we actually invited three cello players– [VIDEO PLAYBACK] –and put VR glasses
on their heads, and put them into
that virtual world. So they were actually being
present in that virtual world at different positions and
were actually improvising and reacting to everything that
happened in front of their eyes the moment it happened. And we recorded
that and used that as the base for the soundscape. [SLOW CELLO MUSIC] So this is obviously not the
easiest way to actually create
music, but it’s a way that actually leads to
some new, unexpected results. And this is what we
always try to do. We try to push boundaries. We try to actually
look for new things. And this is a
pretty good example of how we incorporate
technology to do that. [END PLAYBACK] The last project I want to
talk about a little bit more in detail is the one that
Kenric already mentioned, the one that you can see
outside at the boardwalk. And as with a lot of our projects, it started with some inspiration. In this case, satellite
images of rivers. So for another project,
I did some research and came across these
beautiful river landscapes. And I was totally fascinated
by how they actually leave these patterns. And I wanted to– I needed to understand
how it actually works. And it’s really interesting,
because you might not know it, but in reality,
they’re really moving. They’re just moving
really, really slowly. [VIDEO PLAYBACK] So if you look at a Google Earth time lapse of such a river, over the course
of 20 or 30 years, you can really see how the river
carves through the landscape. And this is happening
because there’s some sediment being taken
from one side where the water pressure is higher and
left on the other side. And through decades,
the river really moves through the landscape. And these are the
moments where we feel that there’s
something in such a system that we need to understand, that
we need to recreate, that we– yeah, we need to understand. And at that point,
we start, again, researching,
exploring, and building software representations
of such a system. So this is our river simulation– one of quite a few, because we also need to explore these things. And it’s a combination of different algorithms actually simulating this behavior.
what is the visual aesthetic that I put on top of that. And in that sense, you
can use such a river as a digital brush
on your canvas making [INAUDIBLE]
beautiful, abstract images. But we also felt that looking
at all those satellite images as inspiration
references, there’s such a beauty
in these images already which has a really painterly
aesthetic purely on its own. And we felt that we need to
stay close to that aesthetic in some ways. [END PLAYBACK] So on top of this simulation
that I just showed, we developed some
new visual shaders. And all these
shaders were actually representing different aspects,
like vegetation, or erosion, and so on. And in the end,
we developed a lot of different visual
aesthetics– about 20, and these are just
four of them– that were representing abstract
interpretations of rivers influencing and
changing landscapes. And while I talked a lot about
the algorithms and the logic behind the visuals,
there’s another story about the sound creation again. Because one of the biggest
struggles or questions that we asked ourselves
is, how do you actually create a soundscape or music
for an endlessly-moving visual? Because what you see there
is actually a real-time piece which is constantly moving. It’s a lot about change
and unpredictability, so you don’t really
know what happens next. So theoretically,
it can run forever. And we like the idea of actually
having somebody improvising or interpreting
what is happening. But you can’t do that forever. So this is where, actually, the
idea of using an AI came in. So for this part, we
collaborated with a– [VIDEO PLAYBACK] –studio called kling klang
klong, and great sound designers we’ve been working
with in the past as well. And we came up with the
idea to actually have human piano players– we had four of them
interpreting all the different visual aesthetics
in a lot of different styles. So we had four
people interpreting about eight different aesthetics
in four different styles and three different tempos. And we ended up with about
350 different interpretations, which was our training
data set for the AI. And for that, we
used Google Magenta to create the musical score. And what we did was just
feeding all the data in there, and training it over
and over and over again, leading to a lot of different
outputs in terms of music. And they differ a lot
in terms of complexity. And you can see that, depending
on the training iterations– like 200, 500, 1,000, 6,000. It was getting more and more complex, and more and more– I wouldn’t say interesting, but at one point we also reached some overfitting. It was too much. [END PLAYBACK]
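The exact Magenta configuration isn’t specified in the talk; as a stand-in for the general idea– fit a sequence model to recorded performances, sample new material, and watch the output shade from loose to faithful to memorized– here is a toy Markov-chain version. Raising the order plays a role loosely analogous to more training iterations.

```python
# Toy stand-in for the idea above (not the actual Magenta pipeline):
# fit a sequence model to a melody corpus and sample new material.
# A higher Markov order mimics more training: the output tracks the
# corpus ever more closely until it simply replays it (overfitting).
import random
from collections import defaultdict

def train(corpus, order):
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        model[tuple(corpus[i:i + order])].append(corpus[i + order])
    return model

def sample(model, seed, length, order):
    out = list(seed)
    for _ in range(length):
        choices = model.get(tuple(out[-order:]))
        if not choices:
            break
        out.append(random.choice(choices))
    return out

melody = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72]  # MIDI pitches
for order in (1, 2, 4):        # low order: loose; high order: memorized
    m = train(melody, order)
    print(order, sample(m, melody[:order], 12, order))
```

But these were then actually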
used to create the soundscape. And I really liked the idea of
actually having this multi-view where you have the visual
side of this piece interpreted by the human. [VIDEO PLAYBACK] And the machine learns
from that and does its own interpretation
based on what the machine understands,
which is the raw data. And this led to this
beautiful combination of sound and visuals. And this is the
final piece the way we showed it for the first time at our
exhibition at Funkhaus Berlin, and quite similar
to how we have it executed here at the boardwalk. [LIGHT PIANO MUSIC] SOUGWEN CHUNG: It’s really nice. CEDRIC KIEFER: So thank you. I’m going to hand over to you. SOUGWEN CHUNG: Great. KENRIC MCDOWELL:
Thank you, Cedric. [END PLAYBACK] CEDRIC KIEFER: Thank you. KENRIC MCDOWELL: Fascinating. [APPLAUSE] SOUGWEN CHUNG: Good
morning, everyone. Thank you for
coming, by the way. It’s so early, so I
really appreciate it. And it’s a pleasure to
be up here with everyone. Actually, I’ll go back. I did a performance
yesterday in this room where I showcased a
duet performance of some of the things I’m going
to be talking about. I’m going to try
to speed through it really quickly, though, because
I think the heart of this is the discussion. I’ll do my best. I make no promises. So this is an
interesting premise that I like to bring out
when I speak about my work. And the question is, if
technology is the answer, what is the question? It’s really cool to be
able to ask that at I/O. So in my work, I use the mark
made by hand and the mark made by machine, handmade
and digital approaches to understanding systems. I’ve been working on this body
of work for about five years now, exploring the realm of
human and robot collaboration. And it’s in many phases of
generations of features, essentially, but these
are the three that I’m going to discuss today. The first one involves
mimicry, the second one memory, and the third multiplicity,
and a collaboration with a multi-agent body. So it started off really simply. I had a background as a performer and a violinist. So I’ve been really– I’ve thought a lot about gesture and how that might relate to cognition and improvisation, and that’s always been at the heart of the work. It’s almost like the control against which I test a lot of different digital approaches. The first generation started with mimicry, where I programmed a robotic arm to mimic my gesture in real time using very simple computer vision software.
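As a hedged sketch of what such a mimicry loop can look like– a camera watches the drawing surface, simple color tracking finds the pen tip, and the coordinates are mirrored to a drawing machine– here is a minimal version. The green-marker threshold and the commented-out RobotArm client are hypothetical placeholders, not Sougwen Chung’s actual software.

```python
# Minimal mimicry-loop sketch: track a color-marked pen tip with
# OpenCV and mirror its position to a drawing machine. The HSV range
# and the RobotArm client below are hypothetical placeholders.
import cv2

def pen_tip(frame):
    """Centroid of a green-marked pen tip in image coords, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

cap = cv2.VideoCapture(0)
# robot = RobotArm(...)  # hypothetical client mapping image -> arm coords
while True:
    ok, frame = cap.read()
    if not ok:
        break
    tip = pen_tip(frame)
    if tip is not None:
        print("mirror to arm:", tip)  # e.g. robot.move_to(*tip)
    if cv2.waitKey(1) == 27:          # Esc quits
        break
cap.release()
```

And I found it became a really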
interesting performative moment of human and
machine co-creation that I’ve been exploring
now with machine learning over the past few years. This is a generation that’s trained– just the very early iterations– on my own drawing style, in an attempt to speculate about robotic memory. This third piece is a
really interesting project called “Omnia per
Omnia,” where I designed a multi-robotic
system that was linked to the flow of the city. I just flew in from New York, and we trained the robot’s motions on publicly available surveillance feeds in New York City.
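One plausible way to read motion out of such feeds– dense optical flow averaged into a drift vector that could steer a robot’s strokes– is sketched below. The file name is a stand-in, and this illustrates the general technique, not the actual “Omnia per Omnia” pipeline.

```python
# Hedged sketch: estimate a coarse "flow of the city" from a video
# with dense optical flow. The file name is a placeholder; this is
# not the actual "Omnia per Omnia" pipeline.
import cv2

cap = cv2.VideoCapture("street_feed.mp4")   # hypothetical local recording
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    dx, dy = flow[..., 0].mean(), flow[..., 1].mean()   # average drift
    print(f"city flow vector: ({dx:+.2f}, {dy:+.2f})")  # could steer strokes
    prev_gray = gray
cap.release()
```

So all this comes from a few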
different sparks of curiosity that we’re going to talk about
today, which is this idea of– and something that I read
in Cade Metz’s documentation of the DeepMind,
AlphaGo, Lee Sedol moment, which I’m sure most
people in this room know about. But when DeepMind and AlphaGo
beat Lee Sedol, the top Go player in the world, he said
that move 37– move 37– represented a real watershed moment in how he thought about his
practice and his game, which is that the system outputted a
non-human move that he deemed impossibly beautiful. So a lot of what
I do is an attempt to find that impossibly
beautiful non-human move. Because I think
there’s a lot of– this is a really tired and clichéd binary of man and machine that ends up being quite dystopian, and that I think we can really
imagine beyond now. So I can show– [VIDEO PLAYBACK] –some of the videos for it. This is the first video
that shows the Generation 1. In the performance
yesterday, I performed with two robotic arms,
Generation 1, Generation 2, that showed the process of me
working with these evolving machines and their behavior. [MUSIC – ERIK SATIE,
“GNOSSIENE NO. 1”] It was so experimental. This is just actually a
really mediocre webcam feed that I just had up in
my performance area, so. [CHUCKLES] But I
thought this showed this interesting tension of
agency and making that really, really sparked a lot of
ideas in me creatively. [END PLAYBACK] I’ll go through to
the next one too. [VIDEO PLAYBACK] And then shortly after
I released that project, I got really interested
in the idea of generating new types of machine behaviors. Not just a simple relay
of the positional data that I was inputting,
but something that could function
a little bit more as a creative catalyst
for my own drawing style. [LIGHT MUSIC] [END PLAYBACK] I’ll just go through that. And a lot of this,
I think we have to think a lot
about how language and how the words we use
to describe these systems end up shaping how
we think about it. And in English, the etymological
origin for the word “computer” is that it’s a system meant
to produce automation. And I’m Chinese, and I always
found it really interesting to compare it to the Chinese
interpretation of computer, which is 电脑,
which is electric brain. So I think that speaks a lot to
the possibilities and promises and a little bit of
where we’re heading in working with these
systems, and thinking about ways we can align– I’ve been reading
a little bit more about this idea
of cosmotechnics, and the origin of some
of these technologies, and how they can inform
our way moving forward. So this is actually a
diagram of the I Ching. But doesn’t it look a little
bit like an ML architecture? I thought there’s some
interesting parallels there, and something that I’m exploring
in my forthcoming works. I’m going to really run
through these, because I know we have a lot to talk about. But I think this comes from
this idea of hybrid seeing. We see in our own
unique spectrum. But also, the systems that we
work with see in their own way. And that presents interesting
particularities and biases. You could even describe
an artist’s artistic style through landscape painting
as a type of visual bias. That, of course, we know is
quite a topic of discussion in a lot of how we think
about these systems of data and how they’re trained. So I’ve always found
an interesting parallel between how a human
sees a landscape and how a computer
might, and how we might be training it to do so. This is the screencap of
the project– hold on. Anyway, so the project that
I’m going to show after this is inspired by that idea. Thinking about how
I would interpret the flow of a city versus how
the algorithms that were used interpret the flow
of the city, and how that might be able to combine
to a human and machine agency that’s performed in real time. This is inspired by some of
the work of Akira Kanayama, from the Gutai collective
in 1960s Japan, thinking about ways of expanding
our idea of collaboration. [VIDEO PLAYBACK] [MUSIC PLAYING] We did a short film around
this project that’s available online for you to see. I’m going to skip
through because we don’t have much time left. [END PLAYBACK] And it’s all
informed by this idea that we’re part of a larger
creative ecosystem, which I’m sure is a sentiment
that many of you share here. So I just guess I’ll
leave it at that. Thank you. KENRIC MCDOWELL: Thank you. [APPLAUSE] So it’s so exciting
to see this work. The group that I lead at Google
is called Artists + Machine Intelligence, and we put on the
first exhibition and auction of neural net-based artworks
in San Francisco’s Gray Area in 2016 after Deep
Dream came out. And I feel like these practices
that you’ve shared with us– and thank you so much
for sharing them– really represent a maturation
of the relationship of artists with machine learning
and with AI in the sense that we’re really looking at
not just what the AI can produce visually, but how it can be part
of a actual artistic practice in a really integrated
way, and what types of– how those integrations might
not necessarily appear visually, whether it’s through
music, or it’s through the process of
creation in the gesture itself. So one of the things
that came up early on when we started doing this
work with Artists + Machine Intelligence is this question– when people learn that
AI and machine learning can be used to make
art, they often go straight to this
question of automation. And they say, like, does
this mean that artists are going to be replaced? Are there going to be robots
that make all the art and then there’s no need
for human artists anymore? So I don’t think
that that’s the case. And I think we can
clearly see that there are other alternatives. I don’t think we need
to reiterate that, but I would like to ask,
having seen somewhat the role that automation does play
in both of your practices, how did you come to an
interest in integrating that into your practice? Whoever wants to– CEDRIC KIEFER: What
you just mentioned– I think integrating,
that’s the important part. You raised the question if AI
is actually replacing artists at one point– or
it’s not about art. It’s about replacing
everything, right? I definitely disagree,
specifically for art. But I think it’s also the
question of how you actually use it in your art,
in your practice. And I think it’s
really about– at least for me– integrating it. I like to actually
be responsible, and I like to define the
starting point, which in this case, could
be the training data, define the variables that
are actually responsible. But I also would like
to actually take over. I rarely like to hand the
end result fully to the AI or to the algorithm. But I like to include
it in the process. I think that’s important,
because it also means that I start to
rethink how I do things, and that helps me to actually
come up with new results. I know that there are artists
who actually do it differently, and they hand things
over to the machine, and the output is the piece. That’s not necessarily
what I like to do, because I like to
have control over what happens to a certain
degree, definitely. But it’s an important topic,
because that question, as you said, comes up quite often. It’s like, oh, do you just
write a little bit of code and then press art,
art, art, more art. And that’s not
exactly how it is. SOUGWEN CHUNG: [CHUCKLES] KENRIC MCDOWELL: Is that
what you do, Sougwen? Do you just– SOUGWEN CHUNG: That would
be a great art project. KENRIC MCDOWELL: You don’t
have a button either? SOUGWEN CHUNG: Oh, I don’t
yet, but I want to now. When I think of the conversation
around automation and agency, and how I employ
my robots, for me, it’s very much about
re-contextualizing the visual metaphor of the robotic
arm– which is a symbol of the Industrial Revolution
and of automation– into something that’s a
little bit more collaborative. I’m really excited
by this idea that AI can serve as a creative catalyst
to take my practice, even in its simplest form, to places
that I wouldn’t otherwise be comfortable going. I think, in that way,
showing this project as a co-creation and a
collaborative process reminds us that
the human hand is always present in these systems anyway. So there’s no such thing as
a true relinquishing of it to this AI deity
that sometimes I think we’re tempted
to really align to. But again, that collaborative
co-creative agent is something that I– it draws me to the work. And yeah, and that’s what I
want to continue to evolve. KENRIC MCDOWELL: So one logical
step that’s often taken– I think this is what
you’re speaking to– with the word “integration,”
but also augmentation. The idea that we
might be, rather than automating or replacing
aspects of ourselves with machine learning and AI, we
could augment our capabilities, or produce new types
of capabilities, new types of imagination. So I wanted to ask,
have you experienced– well, first, how has
working with these systems changed your conception
or imagination in the process of making work? And have you experienced
moments of emergence, or serendipity, or
co-creation, co-thinking with these tools that might not
have been possible otherwise? CEDRIC KIEFER: In my
specific case, definitely, because we started
with code-based design from the very beginning. And one of the things
that drew my attention is just the way that the
creation process actually differs from the, I would
say, like the classical design process, where
you have a vision, and you just try to
execute it, and then get closer and closer step by step. But in this case, you start
to actually form and define a system, a rule set. But there are also some parts
that are actually left open. And this is what you just
described, this lucky accident, and this little bit of chance. And then this is the same with
the system that I mentioned, the William Forsythe
improvisation technologies. It just leads to
a point when you start to rethink
what you usually do, and it is some kind of a
trigger to actually do things differently. And I really like to
foster these moments and make them part of
the creation process, because that’s when
beautiful things happen. KENRIC MCDOWELL: Mm. Mm-hm. SOUGWEN CHUNG: I think my
response to that would be, when– and what I’m really
trying to do is I’m trying to create an
artistic behavior that’s really quite linked
to my control, which is a drawing style. I think when that
happens, it’s one way to think about that as
automation or augmentation, but it’s also– for me, it’s a
different way of being able to look at my own practice. It becomes a bed of
not only introspection, but also a way to create
behaviors in a time capsule. So it’s a way to engage with
my different drawing behaviors over time, and to see how
that’s evolved in a really physical and embodied way. And also, it’s something that– a new project that
I’m working on– something that I would be able
to share with other people. So it becomes this
polygeographic artistic agent that I have a lot of control
and agency and design over, and it’s a very
consensual system. But it’s a way to
really expand my process and bring other people into the
work in a way that would not be possible if it was just
me and a stick of graphite. So I find that really exciting. KENRIC MCDOWELL: Yeah. This is interesting
that you bring up these notions of training sets,
personalizing the training data as well as the
notion of consent, and who is part of the system. Along with the
question of agency, this is, I think,
an important point where we can imagine that these
practices are, in some ways, experiments in how to
relate with technology. And so in the past,
artists did work with different tool
makers, and their influence was felt in the
work in some way. But I think we can look
at machine learning and say that working
with Magenta, or creating a data set versus
using an already-existing data set, or a trained neural net,
are moments where artists are making choices
about which partners they’re bringing into their
technological practice. And this is something that
is probably new, or at least relatively new, with media art. So I wanted to really ask the question about collaboration,
collaboration, whether it’s with a machine learning
system that you describe as an independent
entity, or a machine learning system that’s
a sort of extrusion of Google, or other open
source research, with whom does the agency lie when
we’re using these systems? And what does it
mean to you to use a tool that is
likely not created for an artistic purpose? Maybe Magenta
might be different, but a lot of neural
nets are not. How does that play into your
thinking about your practice? CEDRIC KIEFER: Good question. A tricky one. But I think it’s also partly
the answer to the question that you raised
in the beginning. Are computers or AI
replacing artists? And my answer to that would be,
again, no, because there’s– as you said– an intention
that artists have. They really have the will
to produce something. I think that’s something
that machines are missing. I’m not saying they
will miss that forever. Might change, for sure. But at least in
the beginning, it’s always like the artist that
comes first that actually tells the machine to create something,
that even creates the machine to create art. So until that changes, I think there’s no replacement happening. And it’s also, still, as you
said, the artist who actually defines what the input
is, which could be input that you created yourself,
like you do in your pieces, but we also did with the music. But also other artists, other
creatives actually taking input from totally different
sources, which, again, leads to different results. But what this source is is,
again, defined by the artist. So there’s still this agency
missing that you mentioned, I feel. KENRIC MCDOWELL: Mm-hm, mm-hm. SOUGWEN CHUNG: I
think one reason why we’re so interested
in talking about AI and art is because it’s
about this idea of agency. And agency has been
something that I think artists have historically
addressed– and the meta-narrative of it as well– over time. But when I think about that,
I get into my head a lot about whether– I sort of question
my own agency. If I am to question the
agency of a machine, I question my own, and
that gets into notions of free will and all this. But I think it’s made me
think a little bit more about what actually motivates
an individual’s decision. And is it the microbiome? Is it the gut bacteria
in your system? Is that a driving force? I think our ideas of agency
have become really pluralistic. And that, alongside machines,
alongside our own physiology, has been a really
unexpected, creative “a-ha” moment in a lot of the work
that I’ve been trying to bring in through these kinetic robotic
sculptures as a way of really confusing and making
that ambiguous. That’s a very long-winded way
of answering your question. KENRIC MCDOWELL:
[CHUCKLES] Yeah. If we could figure out
free will, then maybe we can answer that question. So I wanted to address
questions individually for the last couple questions,
and for the last part of the talk. And I think this– I’ll address this
question to you, Sougwen. If we imagine that
cities in the future are managed in part by local
intelligence systems for– a mundane example, self-driving
cars that interact with, say, an intelligent traffic light– then we can view your
practice as a prototype for a co-creative relationship
with our built environment. What do you think
designers, architects, and urban planners should
know about working creatively with intelligent systems? SOUGWEN CHUNG: That’s a
really big question, actually. I think it’s important to
view the intelligent system, the entity that we’re
collaborating with, as something that’s
already happening. Actually, we have
these layered systems, like Uber and all those places. Those already exist in
our architectures now. I think there’s ways to
integrate that in a way that– when I think about this,
I think about music. Something with a
rhythm, and something that responds to the broadest
spectrum of human need. Because there’s
obviously a desire for efficiency in urban flow. But there’s also moments of
pause that need to be baked in. And even in the
design of this space, there’s places for digital
detox and places for meditation. And obviously, I
wouldn’t be the one to speak to about
optimization of traffic flow. But I think to think about
it as a whole organism that represents the best
interests of the collective, I think is a way to set goals
for these intelligent systems in urban environments. KENRIC MCDOWELL:
Well, thank you. And Cedric, your piece
with onformative, “Meandering River,”
as well as many of the works at I/O Arts,
including Jenna Sutela’s film, “nimiia cetii,” which
will be screened tonight, and Anna Ridler’s
“Mosaic Virus,” which is also on the
boardwalk, reflect not only a co-creative relationship
with machines, but also a co-creative relationship
with plants and animals. And in the case of
“Meandering River,” I think a new vision of
how to see natural systems through technology. As we collaborate with
machines, can we not also collaborate with plants
and animals, or even the Earth itself? What can you share
with us about relating to these natural systems
through the tools that you’ve used, for example
with “Meandering River?” CEDRIC KIEFER:
Yeah, good question. What we usually like
to do in our work is we like to look a little
bit beyond what we usually do. And I find that in a lot
of different disciplines. Could be from dance
to classical music. But again, could also be nature. Inspiration comes
from everywhere. And especially with the
“Meandering River” piece, I think one of the
interesting things that resonate with us so strongly
is the fact that it’s heavily about change and
unpredictability. And you find that
a lot in nature. So if we talk about, if there
could be a collaborative work with nature directly, then
that might be hard to answer, because a collaborative process
requires some kind of feedback, and the feedback is a
little more indirect. Not saying that nature doesn’t
react to what we do as humans. We definitely know that’s true. Not always for the best. But definitely. Our work is not really
focused on nature, so I think there are different
artists who can answer that better, and who have much
stronger focus on actually including– as you said– animals, plants,
or just the world surrounding them into their art. For us, it’s a little bit
more about making the process visible, making change visible. And for that reason, nature
is always an important topic. But to answer the
question, I think, yeah, that’s definitely possible. And machine learning
is just one example of how such a collaborator
could look like, but you can find it everywhere. KENRIC MCDOWELL: Wow. Well, I want to thank you both
for contributing your work to this experience for everyone
at I/O. I think it’s added a lot to the experience here. It has for me. And for sharing with
us today your practice and this conversation. Thank you so much. Everyone, a round of applause,
please, for Sougwen and Cedric. CEDRIC KIEFER: Yeah, thank you. SOUGWEN CHUNG: Thank you. [MUSIC PLAYING]
