{"id":55303,"date":"2022-08-24T20:51:05","date_gmt":"2022-08-24T20:51:05","guid":{"rendered":"https:\/\/harchi90.com\/uncensored-ai-art-model-prompts-ethics-questions-techcrunch\/"},"modified":"2022-08-24T20:51:05","modified_gmt":"2022-08-24T20:51:05","slug":"uncensored-ai-art-model-prompts-ethics-questions-techcrunch","status":"publish","type":"post","link":"https:\/\/harchi90.com\/uncensored-ai-art-model-prompts-ethics-questions-techcrunch\/","title":{"rendered":"Uncensored AI art model prompts ethics questions \u2013 TechCrunch"},"content":{"rendered":"
\n

A new open source AI image generator capable of producing realistic pictures from any text prompt has seen stunningly swift uptake in its first week. Stability AI’s Stable Diffusion, high fidelity but capable of being run on off-the-shelf consumer hardware, is now in use by art generator services like Artbreeder, Pixelz.ai and more. But the model’s unfiltered nature means not all of its use has been completely above board.

For the most part, the use cases have been above board. For example, NovelAI has been experimenting with Stable Diffusion to produce art that can accompany the AI-generated stories created by users on its platform. Midjourney has launched a beta that taps Stable Diffusion for greater photorealism.

But Stable Diffusion has also been used for less savory purposes. On the infamous discussion board 4chan, where the model leaked early, several threads are dedicated to AI-generated art of nude celebrities and other forms of generated pornography.

Emad Mostaque, the CEO of Stability AI, called it “unfortunate” that the model leaked on 4chan and stressed that the company was working with “leading ethicists and technologists” on safety and other mechanisms around responsible release. One of these mechanisms is an adjustable AI tool, Safety Classifier, included in the overall Stable Diffusion software package, which attempts to detect and block offensive or undesirable images.

However, Safety Classifier, while on by default, can be disabled.
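Because the model and its filter both run locally on the user’s machine, switching the check off takes one line. Here is a minimal sketch of the idea, assuming the Hugging Face diffusers wrapper for Stable Diffusion; the model ID and the notion that the checker is an ordinary, swappable pipeline attribute are illustrative assumptions, not a description of Stability AI’s own release scripts.

```python
# A minimal sketch, assuming the Hugging Face diffusers wrapper for
# Stable Diffusion. Model ID and attribute names are assumptions
# for illustration only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# By default, a safety checker screens every generated image.
image = pipe("an astronaut riding a horse, photorealistic").images[0]

# But since everything runs on the user's own hardware, nothing stops
# them from removing the component entirely:
pipe.safety_checker = None  # outputs are now unscreened
```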

Stable Diffusion is very much new territory. Other AI art-generating systems, like OpenAI’s DALL-E 2, have implemented strict filters for pornographic material. (The license for the open source Stable Diffusion prohibits certain applications, like exploiting minors, but the model itself isn’t restricted at the technical level.) Moreover, many other systems, unlike Stable Diffusion, can’t create art of public figures. Those two capabilities could be risky when combined, allowing bad actors to create pornographic “deepfakes” that, in a worst-case scenario, might perpetuate abuse or implicate someone in a crime they didn’t commit.

Women, unfortunately, are by far the most likely victims of this. A study carried out in 2019 found that between 90% and 95% of deepfakes are non-consensual, and about 90% of those depict women. That bodes poorly for the future of these AI systems, according to Ravit Dotan, VP of responsible AI at Mission Control.

“I worry about other effects of synthetic images of illegal content — that it will exacerbate the illegal behaviors that are portrayed,” Dotan told TechCrunch via email. “Eg, will synthetic child [exploitation] increase the creation of authentic child [exploitation]? Will it increase the number of pedophiles’ attacks?”

Montreal AI Ethics Institute principal researcher Abhishek Gupta shares this view. “We really need to think about the lifecycle of the AI system which includes post-deployment use and monitoring, and think about how we can envision controls that can minimize harms even in worst-case scenarios,” he said. “This is particularly true when a powerful capability [like Stable Diffusion] gets into the wild that can cause real trauma to those against whom such a system might be used, for example, by creating objectionable content in the victim’s likeness.”

Something of a preview played out over the past year when, on the advice of a nurse, a father took pictures of his young child’s swollen genital area and texted them to the nurse’s iPhone. The photo was automatically backed up to Google Photos and flagged by the company’s AI filters as child sexual abuse material, which resulted in the man’s account being disabled and an investigation by the San Francisco Police Department.

If a legitimate photo could trip such a detection system, experts like Dotan say, there’s no reason deepfakes generated by a system like Stable Diffusion couldn’t, and at scale.

“The AI systems that people create, even when they have the best intentions, can be used in harmful ways that they don’t anticipate and can’t prevent,” Dotan said. “I think that developers and researchers often underappreciate this point.”

Of course, the technology to create deepfakes has existed for some time, AI-powered or otherwise. A 2020 report from deepfake detection company Sensity found that hundreds of explicit deepfake videos featuring female celebrities were being uploaded to the world’s biggest pornography websites every month; the report estimated the total number of deepfakes online at around 49,000, over 95% of which were porn. Actresses including Emma Watson, Natalie Portman, Billie Eilish and Taylor Swift have been the targets of deepfakes since AI-powered face-swapping tools entered the mainstream several years ago, and some, including Kristen Bell, have spoken out against what they view as sexual exploitation.

But Stable Diffusion represents a newer generation of systems that can create incredibly, if not perfectly, convincing fake images with minimal work by the user. It’s also easy to install, requiring no more than a few setup files and a graphics card costing several hundred dollars on the high end. Work is underway on even more efficient versions of the system that can run on an M1 MacBook.
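To give a sense of how low that bar sits, here is a hedged sketch of the kind of memory-saving switches the Hugging Face diffusers wrapper exposes; the specific method and model names are assumptions about that third-party library, not details from Stability AI’s release.

```python
# A minimal sketch, assuming the Hugging Face diffusers wrapper.
# Method and model names are assumptions for illustration only.
import torch
from diffusers import StableDiffusionPipeline

# Half-precision weights roughly halve GPU memory use, putting the
# model within reach of mid-range consumer graphics cards.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# Computing attention in slices trades speed for a smaller memory
# footprint, the kind of trick laptop ports tend to rely on.
pipe.enable_attention_slicing()

image = pipe("a watercolor painting of a lighthouse").images[0]
image.save("lighthouse.png")
```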

Sebastian Berns, a Ph.D. researcher in the AI group at Queen Mary University of London, thinks the automation and the possibility of scaling up customized image generation are the big differences between systems like Stable Diffusion and what came before, and the main problems. “Most harmful imagery can already be produced with conventional methods but is manual and requires a lot of effort,” he said. “A model that can produce near-photorealistic footage may give way to personalized blackmail attacks on individuals.”

Berns fears that personal photos scraped from social media could be used to condition Stable Diffusion or any such model to generate targeted pornographic imagery or images depicting illegal acts. There’s certainly precedent. After reporting on the rape of an eight-year-old Kashmiri girl in 2018, Indian investigative journalist Rana Ayyub became the target of Indian nationalist trolls, some of whom created deepfake porn with her face on another person’s body. The deepfake was shared by the leader of the nationalist political party BJP, and the harassment Ayyub received as a result became so bad the United Nations had to intervene.

“Stable Diffusion offers enough customization to send out automated threats against individuals to either pay or risk having fake but potentially damaging footage being published,” Berns continued. “We already see people being extorted after their webcam was accessed remotely. That infiltration step might not be necessary anymore.”

With Stable Diffusion out in the wild and already being used to generate pornography, some of it non-consensual, it might become incumbent on image hosts to take action. TechCrunch reached out to one of the major adult content platforms, OnlyFans, but didn’t hear back as of publication time. A spokesperson for Patreon, which also allows adult content, noted that the company has a policy against deepfakes and disallows images that “repurpose celebrities’ likenesses and place non-adult content into an adult context.”

If history is any indication, however, enforcement will likely be uneven, in part because few laws specifically protect against deepfaking as it relates to pornography. And even if the threat of legal action pulls some sites dedicated to objectionable AI-generated content offline, there’s nothing to prevent new ones from popping up.

In other words, Gupta says, it’s a brave new world.

“Creative and malicious users can abuse the capabilities [of Stable Diffusion] to generate subjectively objectionable content at scale, using minimal resources to run inference — which is cheaper than training the entire model — and then publish them in venues like 4chan to drive traffic and hack attention,” Gupta said. “There is a lot at stake when such capabilities escape out ‘into the wild’ where controls such as API rate limits, safety controls on the kinds of outputs returned from the system are no longer applicable.”

Editor’s note: An earlier version of this article included images depicting some of the celebrity deepfakes in question, but those have since been removed.
