{"id":150554,"date":"2022-12-07T07:40:10","date_gmt":"2022-12-07T07:40:10","guid":{"rendered":"https:\/\/harchi90.com\/lensa-reignites-discussion-among-artists-over-the-ethics-of-ai-art\/"},"modified":"2022-12-07T07:40:10","modified_gmt":"2022-12-07T07:40:10","slug":"lensa-reignites-discussion-among-artists-over-the-ethics-of-ai-art","status":"publish","type":"post","link":"https:\/\/harchi90.com\/lensa-reignites-discussion-among-artists-over-the-ethics-of-ai-art\/","title":{"rendered":"Lensa reignites discussion among artists over the ethics of AI art"},"content":{"rendered":"
\n

For many online, Lensa AI is a cheap, accessible profile picture generator. But in digital art circles, the popularity of artificial intelligence-generated art has raised major privacy and ethics concerns.<\/p>\n

Lensa, which launched as a photo editing app in 2018, went viral last month after releasing its \u201cmagic avatars\u201d feature. It uses a minimum of 10 user-uploaded images and the neural network Stable Diffusion to generate portraits in a variety of digital art styles. Social media has been flooded with Lensa AI portraits, from photorealistic paintings to more abstract illustrations. The app claimed the No. 1 spot in the iOS App Store’s \u201cPhoto & Video\u201d category earlier this month. <\/p>\n

But the app’s growth \u2014 and the rise of AI-generated art in recent months \u2014 has reignited discussion over the ethics of creating images with models that have been trained using other people’s original work. <\/p>\n

Lensa is tinged with controversy \u2014 multiple artists have accused Stable Diffusion of using their art without permission. Many in the digital art space have also expressed qualms over AI models producing images en masse for so cheap, especially if those images imitate styles that actual artists have spent years refining. <\/p>\n

For a $7.99 service fee, users receive 50 unique avatars \u2014 which artists said is a fraction of what a single portrait commission normally costs. <\/p>\n

Companies like Lensa say they’re \u201cbringing art to the masses,\u201d said artist Karla Ortiz. \u201cBut really what they’re bringing is forgery, art theft [and] copying to the masses.\u201d <\/p>\n

Prisma Labs, the company behind Lensa, did not respond to requests for comment.<\/p>\n

In a lengthy Twitter thread posted Tuesday morning, Prisma addressed concerns of AI art replacing art by actual artists. <\/p>\n

\n

\u201cAs cinema didn’t kill theater and accounting software hasn’t eradicated the profession, AI won’t replace artists but can become a great assisting tool,\u201d the company tweeted<\/a>. \u201cWe also believe that the growing accessibility of AI-powered tools would only make man-made art in its creative excellence more valued and appreciated, since any industrialization brings more value to handcrafted works.\u201d<\/p>\n

The company said that AI-generated images \u201ccan’t be described as exact replicas of any particular artwork.\u201d The thread did not address accusations that many artists didn’t consent to the use of their work for AI training. <\/p>\n

For some artists, AI models are a creative tool. Several have pointed out that the models are helpful for generating reference images that are otherwise difficult to find online. Writers have posted about using the models to visualize scenes in their screenplays and novels. While the value of art is subjective, the crux of the AI art controversy is the right to privacy. <\/p>\n

Ortiz, who is known for designing concept art for movies like \u201cDoctor Strange,\u201d also paints fine art portraits. When she realized that her art was included in a dataset used to train the AI model that Lensa uses to generate avatars, she said it felt like a \u201cviolation of identity.\u201d<\/p>\n

Prisma Labs deletes user photos from the cloud services it uses to process the images after it uses them to train its AI, the company told TechCrunch. The company’s user agreement states that Lensa can use the photos, videos and other user content for \u201coperating or improving Lensa\u201d without compensation. <\/p>\n

In its Twitter thread, Lensa said that it uses a \u201cseparate model for each user, not a one-size-fits-all monstrous neural network trained to reproduce any face.\u201d The company also stated that each user’s photos and \u201cassociated model\u201d are permanently erased from its servers as soon as the user’s avatars are generated. <\/p>\n

The fact that Lensa uses user content to further train its AI model, as stated in the app’s user agreement, should alarm the public, artists who spoke with NBC News said. <\/p>\n

\u201cWe’re learning that even if you’re using it for your own inspiration, you’re still training it with other people’s data,\u201d said Jon Lam, a storyboard artist at Riot Games. \u201cAnytime people use it more, this thing just keeps learning. Anytime anyone uses it, it just gets worse and worse for everybody.\u201d <\/p>\n

Image synthesis models like Google Imagen, DALL-E and Stable Diffusion are trained using datasets of millions of images. The models learn associations between the arrangement of pixels in an image and the image’s metadata, which typically includes text descriptions of the image subject and artistic style. <\/p>\n
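The association step described above can be illustrated with a deliberately tiny sketch. This is invented toy code in plain Python, not how Stable Diffusion or any real model is implemented: each caption word is mapped to the average of the image feature vectors it appeared with, and a \u201cgenerated\u201d vector for a new prompt is the average of what its words were associated with during training.

```python
# Toy sketch of "learning associations" between image data and caption text.
# Illustration only -- real systems like Stable Diffusion use deep neural
# networks and a diffusion process, not word-vector averaging.

def train(pairs):
    """pairs: list of (image_feature_vector, caption_string) tuples."""
    sums, counts = {}, {}
    for features, caption in pairs:
        for word in caption.lower().split():
            if word not in sums:
                sums[word] = [0.0] * len(features)
                counts[word] = 0
            sums[word] = [s + f for s, f in zip(sums[word], features)]
            counts[word] += 1
    # Each word maps to the average image features it co-occurred with.
    return {w: [v / counts[w] for v in vec] for w, vec in sums.items()}

def generate(model, prompt):
    """Average the learned vectors of the prompt words the model knows."""
    known = [model[w] for w in prompt.lower().split() if w in model]
    if not known:
        return None
    dim = len(known[0])
    return [sum(vec[i] for vec in known) / len(known) for i in range(dim)]
```

For example, after `train([([1.0, 0.0], "red square"), ([0.0, 1.0], "blue circle")])`, the prompt `"red"` yields `[1.0, 0.0]`: the model can only reproduce combinations of what it was trained on, which is why the provenance of the training data matters.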

The model can then generate new images based on the associations it has learned. When fed the prompt \u201cbiologically accurate anatomical description of a birthday cake,\u201d for example, the model Midjourney generated unsettling images that looked like actual medical textbook material. Reddit users described the images as \u201cbrilliantly weird\u201d and \u201clike something straight out of a dream.\u201d <\/p>\n

\n

The San Francisco Ballet even used images generated by Midjourney to promote this season’s production of the Nutcracker. In a press release earlier this year, the San Francisco Ballet’s chief marketing officer Kim Lundgren said that pairing the traditional live performance with AI-generated art was the \u201cperfect way to add an unexpected twist to a holiday classic.\u201d The campaign was widely criticized by artist advocates. A spokesperson for the ballet did not immediately respond to a request for comment.<\/p>\n

\u201cThe reason those images look so good is due to the nonconsensual data they gathered from artists and the public,\u201d Ortiz said. <\/p>\n

Ortiz is referring to the Large-scale Artificial Intelligence Open Network (LAION), a nonprofit organization that releases free datasets for AI research and development. LAION-5B, one of the datasets used to train Stable Diffusion and Google Imagen, includes publicly available images scraped from sites like DeviantArt, Getty Images and Pinterest. <\/p>\n

Many artists have spoken out against models that have been trained with LAION because their art was used in the set without their knowledge or permission. When an artist used the site Have I Been Trained, which allows users to check if their images were included in LAION-5B, she found her own face and medical records. Ars Technica reported that \u201cthousands of similar patient medical record photos\u201d were also included in the dataset. <\/p>\n

\n

\u201cAnd now we are facing the same problem the music industry faced with websites like Napster, which was maybe made with good intentions or without thinking about the moral implications.\u201d<\/p>\n

Artist Mateusz Urbanowicz<\/cite><\/p>\n<\/div>\n

Artist Mateusz Urbanowicz, whose work was also included in LAION-5B, said that fans have sent him AI-generated images that bear striking similarities to his watercolor illustrations. <\/p>\n

It’s clear that LAION is \u201cnot just a research project that someone put on the internet for everyone to enjoy,\u201d he said, now that companies like Prisma Labs are using it for commercial products. <\/p>\n

\u201cAnd now we are facing the same problem the music industry faced with websites like Napster, which was maybe made with good intentions or without thinking about the moral implications.\u201d<\/p>\n

The art and music industries are protected by stringent copyright laws in the United States, but the use of copyrighted material in AI is legally murky. Using copyrighted material to train AI models might fall under fair use laws, The Verge reported. The status of the content that AI models generate is even more complicated, and enforcement is difficult, which leaves artists with little recourse. <\/p>\n

\u201cThey just take everything because it’s a legal gray zone and just exploiting it,\u201d Lam said. \u201cBecause tech always moves faster than law, and law is always trying to catch up with it.\u201d <\/p>\n

There’s also little precedent for pursuing legal action against commercial products that use AI trained on publicly available material. Lam and others in the digital art space say they hope that a pending class action lawsuit against GitHub Copilot, a Microsoft product that uses an AI system trained on public code from GitHub, will pave the way for artists to protect their work. Until then, Lam said he’s wary of sharing his work online at all. <\/p>\n

Lam isn’t the only artist worried about posting his art. After his recent posts<\/a> calling out AI art went viral on Instagram and Twitter, Lam said that he received \u201can overwhelming amount\u201d of messages from students and early career artists asking for advice. <\/p>\n

The internet \u201cdemocratized\u201d art, Ortiz said, by allowing artists to promote their work and connect with other artists. For artists like Lam, who has been hired for most of his jobs because of his social media presence, posting online is vital for landing career opportunities. Putting a portfolio of work samples on a password-protected site doesn’t compare to the exposure gained from sharing it publicly.<\/p>\n

\u201cIf no one knows your art, they’re not going to go to your website,\u201d Lam added. \u201cAnd it’s going to be increasingly difficult for students to get their foot in the door.\u201d <\/p>\n

Adding a watermark may not be enough to protect artists \u2014 in a recent Twitter thread<\/a>, graphic designer Lauryn Ipsum listed examples of the \u201cmangled remains\u201d of artists’ signatures in Lensa AI portraits. <\/p>\n

\n

Some argue that AI art generators are no different from an aspiring artist who emulates another’s style, which has become a point of contention within art circles. <\/p>\n

Days after illustrator Kim Jung Gi died in October, a former game developer created an AI model that generates images in the artist’s unique ink and brush style. The creator said<\/a> the model was an homage to Kim’s work, but it received immediate backlash from other artists. Ortiz, who was friends with Kim, said that the artist’s \u201cwhole thing was teaching people how to draw,\u201d and to feed his life’s work into an AI model was \u201creally disrespectful.\u201d <\/p>\n

Urbanowicz said he’s less bothered by an actual artist who’s inspired by his illustrations. An AI model, however, can churn out an image that he would \u201cnever make\u201d and hurt his brand \u2014 like if a model was prompted to generate \u201ca store painted with watercolors that sells drugs or weapons\u201d in his illustration style, and the image was posted with his name attached.<\/p>\n

\u201cIf someone makes art based on my style, and makes a new piece, it’s their piece. It’s something they made. They learned from me as I learned from other artists,\u201d he continued. \u201cIf you type in my name and store [in a prompt] to make a new piece of art, it’s forcing the AI to make art that I don’t want to make.\u201d <\/p>\n

Many artists and advocates also question if AI art will devalue work created by human artists. <\/p>\n

Lam worries that companies will cancel artist contracts in favor of faster, cheaper AI-generated images.<\/p>\n

Urbanowicz pointed out that AI models can be trained to replicate an artist’s previous work, but will never be able to create the art that an artist hasn’t made yet. Without decades of examples to learn from, he said, the AI images that looked just like his illustrations would never exist. Even if the future of visual art is uncertain as apps like Lensa AI become more common, he’s hopeful that aspiring artists will continue to pursue careers in creative fields.<\/p>\n

\u201cOnly that person can make their unique art,\u201d Urbanowicz said. \u201cAI cannot make the art that they will make in 20 years.\u201d <\/p>\n


<\/div>\n