On an evening in late May last year, towards the end of summer in Bengaluru, Harshit Agrawal started thinking about some of Rembrandt’s works. One piece interested him in particular: The Anatomy Lesson of Dr. Nicolaes Tulp, an early oil-on-canvas masterpiece painted by a young Rembrandt in 1632.
Rembrandt’s painting was done at a time when medical technology, especially surgery, was nascent, and the general public viewed it with fear and fascination. The portrait planted the germ of an idea in Agrawal’s mind.
Some days later, from his apartment in Bellandur, now famous for its burning lake (not the title of a pretty work of art, I assure you), Agrawal started trawling the web for images and videos of human surgeries being performed. He was slipping out of the artist’s frame of mind and into the engineer’s.
Over the next few days, between stints at his day job as a human-computer interaction designer, the feverish 26-year-old curated a dataset of 60,000 images of human surgery, fed it to an algorithm he jerry-rigged on remote servers somewhere in the cloud, and waited for the machine to do its magic. Agrawal’s labour of love was a set of disquieting images he named “The anatomy lessons of Dr Algorithm.” It is, in many ways, the work of artificial intelligence algorithms too.
Welcome to the world of Cyborg Artists: an emerging set of artists who use AI to create art. By one estimate, fewer than 100 such artists exist in the world, and just a handful are in India. AI-powered art is all the buzz in the art world these days, and that makes Agrawal special and his set of images an early work.
“There’s an interesting back and forth between an artist and a technologist,” Agrawal says as he explains the creative process. “You have to think possibilities and how that may translate into an algorithm and the data that feeds into it.” For Agrawal, the datasets and algorithms are art materials, like colour and canvas are to a traditional artist.
On August 17 last year, Agrawal’s work was showcased at Gradient Descent, a first-of-its-kind exhibition at Nature Morte, a contemporary art gallery in Delhi. Curated by 64/1, an art collective founded by brothers K K Raghava and Karthik Kalyanaraman, the exhibition showcased art created using AI and was in many ways genre-defining.
“We’re really in a very, very new space. Art history is since the beginning of man. Here we’re only three years old. It’s really a baby. Only now we are hearing of shows,” says Aparajita Jain, the co-director of Nature Morte.
It was the world’s first exhibition at a mainstream art gallery to feature work created only with AI and featured the works of seven artists – Anna Ridler, Mario Klingemann, Jake Elwes, Memo Akten, Nao Tokui and Tom White, besides Agrawal – among the pioneers in this emerging space.
“We call this the photography moment of our time. This is the second existential crisis that art is facing. This is the second moment in history where art is being challenged,” says Raghava, 38.
Photography challenged the purpose of art at a time when artists were mostly employed to represent real-life imagery. “Suddenly you had this new media which was amazing at perspective, representation which had been the main goal of art since the Renaissance,” says Kalyanaraman, 40, a former professor of econometrics at University College London and University of Maryland, and Raghava’s elder sibling.
Early photographs mimicked art. Perfect frames, symmetry, portraits and so on. But then, photography developed as its own medium and in turn, it influenced artists. “Painters started to change. We had the impressionists, modern movement, where it became less about representation but more about creating presence,” says Kalyanaraman.
Brothers on a quest
In December 2017, after many years in New York, artist KK Raghava and his brother, econometrician Karthik Kalyanaraman, had come back to live in the house their parents built in Bengaluru. Both were going through breakups of long relationships (15 and 17 years respectively).
“I was going through a crisis. Now that my assumptions about certain things in life were broken, how do I rebuild myself?” Raghava asked himself. The brothers, both obsessed with art, began talking of working together. In the process, the duo came up with a thesis that laid the groundwork for much of what they do now.
For over a decade, Raghava has been pursuing a line of thought: how does technology manifest a consciousness? “Like if you have a feeling, you use brush, pencil, hands, it doesn’t matter what you use, and I think of technology like that,” says Raghava.
Raghava’s early works involved using the iPad as a medium, or sensors to tap into brainwaves and such to create art. For the last few years, however, the brothers have been focused on using AI to create art.
Their work is underpinned by a belief that the definition of what it means to be human needs to change. This is also the genesis of a thesis they begin to propose: a new way of thinking about human evolution. “We’re already augmented humans. We’re not just humans. We think of ourselves as cyborgs,” Raghava tells me over filter coffee at his family home in Jayamahal, a tony central Bengaluru neighbourhood.
Their thesis, published in the August 2018 edition of art journal Critical Collective, takes issue with art’s obsession with the past and proposes that artists must find ways of co-creating the future, “like the Soviet modernists of the pre-Stalin era who collectively dreamed of the future”.
“They felt integrated with the society because they were contributing to the future of that society and not alienated individuals sitting in their cubby holes reacting to the past,” says Kalyanaraman.
Raghava and Kalyanaraman outline six key shifts in their thesis. The first shift came when Newton said that the world is nothing but a machine. The second, when Darwin told us that man is nothing but an animal. The third, with the discovery of DNA, told us that man is nothing but a gene. “We call these three shifts the fall of man,” says Raghava.
Now comes the “rise of the cyborg.” Shift one: machines are now becoming our natural environment. Shift two: man is not the only agent; larger systems have a life of their own. Final shift: machines are replacing humans, starting with repetitive tasks and moving on to tasks that require cognitive abilities.
Using art as a metaphor for the highest human faculty, the brothers ask: if AI has started to replace creative labour, can it also replace spiritual labour? They also delve into why the affective, emotional and ethical aspects of art can’t be replaced by AI. “The role of art has changed from reactive to life to a creator of life using new tools. We’ve embraced AI as a tool of the future,” says Raghava.
Powered by an abundance of data and computing power, AI has started augmenting humans in many areas. Sentient robots aren’t here yet, but if you’re a believer, futurist Ray Kurzweil predicts that we’ll reach the singularity (the point in time at which computers will overtake humans) by 2045. The early signs are there for all of us to see. Driverless cars, and robots running back offices, making decisions, trading stocks and dispensing medical advice, already exist among us.
The world of art isn’t any different.
The inner workings of Dr Algorithm
Creating art with AI started off as a fun project as early as 2014. It was mostly technologists rigging up their own algorithms to create art or using Google’s DeepDream. But then came artists who were comfortable with technology. The movement is now at the cusp of a major expansion with more and more artists starting to use the medium.
How are artists creating algorithms, training them and managing the output? In the early days, many people used a technique called style transfer: train a neural network on images of a particular style, then use it to redraw another image in that style. Think of it as applying a learned filter. “These become boring after you see them a few times,” says Kalyanaraman.
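For the technically curious, here is roughly what that looks like in code. There are several style transfer recipes; the sketch below, in Python with PyTorch, follows the classic optimisation-based variant, which reuses a pretrained network rather than training a new one. The image paths, sizes and layer choices are illustrative assumptions, not the setup of any artist mentioned in this story.

```python
# Minimal style transfer sketch (optimisation-based, after Gatys et al.).
# Assumes PyTorch, torchvision and Pillow are installed; paths are placeholders.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet statistics expected by the pretrained VGG network.
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def load_image(path):
    return preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load_image("content.jpg")  # hypothetical paths: any photo
style = load_image("style.jpg")      # and any painting will do

# A pretrained VGG19 acts as a fixed feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}  # shallow-to-deep conv layers: texture statistics
CONTENT_LAYER = 21                 # a deep conv layer: scene layout

def features(x):
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat

def gram(feat):
    # The Gram matrix of a layer's activations captures its "style".
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

style_targets = [gram(f) for f in features(style)[0]]
_, content_target = features(content)

img = content.clone().requires_grad_(True)  # start from the content photo
optimizer = torch.optim.Adam([img], lr=0.02)

for step in range(300):
    optimizer.zero_grad()
    s_feats, c_feat = features(img)
    style_loss = sum(F.mse_loss(gram(f), t) for f, t in zip(s_feats, style_targets))
    content_loss = F.mse_loss(c_feat, content_target)
    (1e6 * style_loss + content_loss).backward()
    optimizer.step()

# `img` now approximates the photo redrawn with the painting's textures.
```

Swap in a different style image and you get a different filter, which is partly why, as Kalyanaraman says, the results start to feel repetitive.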
But then came Generative Adversarial Networks, or GANs. These pit two neural networks against each other: one generates outputs from a training set, the other judges whether they look real. The generative network’s job is to fool the judging network, and that contest pushes it towards completely new results. The method was first introduced by research scientist Ian Goodfellow in 2014.
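Again as a rough illustration, the toy sketch below shows that adversarial tug-of-war in code. The tiny fully connected networks and the random placeholder data are assumptions made for brevity; a real image GAN of the kind these artists use would swap in convolutional networks and a curated image dataset.

```python
# Toy GAN training loop sketch in PyTorch; not any artist's actual setup.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # hypothetical sizes for a toy example

# Generator: turns random noise into a candidate sample (for an artist, an image).
G = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: scores how likely a sample is to have come from the training set.
D = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def real_batch(n=32):
    # Placeholder for the artist's curated dataset (e.g. scraped portraits).
    return torch.randn(n, data_dim).clamp(-1, 1)

for step in range(1000):
    real = real_batch()
    fake = G(torch.randn(real.size(0), latent_dim))
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)

    # 1) Train the discriminator to tell real samples from generated ones.
    opt_d.zero_grad()
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator into scoring fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), ones)
    g_loss.backward()
    opt_g.step()

# After training, G(torch.randn(1, latent_dim)) yields a new, never-seen sample.
```

Every choice in this loop, from what goes into the training set to how long to train and which outputs to keep, is a creative decision, which is where the artist comes in.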
Just like a painter mixes colours or changes his brush, a cyborg artist can manipulate the output by various means: by picking a certain type of training data, by restricting the training data, by creating a consortium of algorithms, by picking the final works based on their judgement and so on. Klingemann, for instance, chose portraits of old masters to train the algorithm for his exhibit at Gradient Descent.
Many of these algorithms, including the one that formed the basis of the AI art that was sold by Christie’s for a six-figure sum recently, can be found on GitHub. (See: We made our own artificial intelligence art, so can you)
The commerce of AI art
Feingold chuckled as he turned to the robot.
“Andrew, are you pleased that you have money?”
“Yes, sir.”
“What do you plan to do with it?”
“Pay for things, sir, which otherwise Sir would have to pay for. It would save him expense, sir.”
Andrew is a robot, the central character of Isaac Asimov’s 1976 sci-fi classic The Bicentennial Man. Gerald Martin (Sir) owns Andrew. John Feingold is his lawyer. The robot had been making chairs and carving wood. Gerald had sold nearly $200,000 worth of Andrew’s works and put away half of that in the name of Andrew Martin. Now he wanted to know if it was legal.
“There are no precedents, Gerald. How did your robot sign the necessary papers?” Feingold suggested setting up a trust to manage the finances and insulate Andrew from the “hostile world.” “If anyone objects, let him bring a suit,” Feingold advised.
There still aren’t any precedents of robots getting paid. Who gets paid is a big question in the AI-art world.
In October 2018, New York-based Christie’s sold Edmond de Belamy, from La Famille de Belamy, an AI-created portrait, for $432,500. The piece was created by an art collective called Obvious using code uploaded by a 19-year-old programmer, Robbie Barrat, on GitHub and training data scraped from WikiArt.
But the big moment for AI in art was tainted by questions about originality and ownership.
“You couldn’t have pulled off a show like that three years ago. It is only now that there is the aesthetic and conceptual richness that could actually be called out. That encounter with art which alters you,” Kalyanaraman says. “Peter Nagy and @naturemorte_delhi already did a great show of this sort of work,” Pulitzer-winning art critic Jerry Saltz wrote on his Instagram feed.
Until recently, much of AI art was made by technologists who were trying to be creative. That meant the works weren’t really appreciated in the art world for their aesthetic or conceptual quality. That’s changing now. “There are these newer artists who are coming from the art world so they get what makes their work more conceptually rich,” says Raghava.
Moreover, Barrat isn’t likely to make any money from the sale and he wasn’t credited, as per this Wired story. This sparked another debate in the art world. Who makes money from the art created by AI? Ian Goodfellow, the inventor of Generative Adversarial Networks? WikiArt? Barrat, who posted the code online?
Raghava and Kalyanaraman try to answer some of these questions in their exhibition. “We came up with something that felt fair for the first exhibition,” says Raghava. At the show, different pieces were priced differently, starting from $1,000 and going up to $25,000 depending on various factors. Jain says that the gallery has sold four or five pieces from the exhibition but declined to share more details.
The idea, when it comes to pricing, was to avoid complications, such as copyrighted images in training sets. The brothers also decided not to sell the code used to create a piece. “We decided we’re going to think about it as selling limited editions of an experience. Because art is an encounter with the artefact,” says Raghava. The plan, then, was to limit the number of pieces a GAN can create and to number each one. Contracts were drafted and signed to that effect.
All eyes on AI
There’s a lot of attention on AI art right now. But the market for this kind of art isn’t yet proven. New York-based artist Nitin Mukul, for instance, doesn’t see it disrupting the stability of existing mediums in the market. Like Raghava and Kalyanaraman, Mukul believes that the legacy will rely more on the vision of the artist using AI than on the “sensational charms intrinsic to the medium”. However, he has opposing views when it comes to the value of AI art. “I don’t think it will ever have the commodity value of handmade art objects like a painting or a printed photograph,” he says. That has mostly to do with the physicality and uniqueness of a painting or sculpture. To quote Saltz: “GAN is not an END. GAN is a paintbrush, a ruler, etc. The rest is what the artist does with the tools. Artists use materials; GAN is a material; digital files are materials. USE them in an original way. Book it.”
Klingemann’s new work, Memories of Passersby I, goes under the hammer in London on March 6. Sotheby’s, as per this Bloomberg report, estimates it could sell for about 30,000 to 40,000 pounds. The response to the sale by Sotheby’s will tell if art buyers, who make up a $63.7 billion market, are warming up to Cyborg artists.
“We have sold a few pieces. I’m not sure where it is going to go but in the next four years, you’ll see it becoming a definitive market,” says Jain of Nature Morte.
Later this month, Raghava and Kalyanaraman plan to invite collectors from across India to address their fears: how does a work retain its value? Will it be replicated or copied?
“This requires a different mindset. We’re grooming an entire generation of art investors who are looking at it not from the perspective of a regular art investor but from the perspective of its impact on humanity, and the defining moment of the cyborg nation,” says Raghava. The brothers believe that a blockchain-based solution can address many of these concerns.
“We’re inventing this as we go. It’s still not fully solved,” says Raghava.
Lead image: From the Machinic Situatedness Series by Harshit Agrawal.