I had already spent an embarrassing amount of money downloading roughly 1,000 high-resolution, AI-generated images of myself through an app called Lensa, part of its new “magic avatars” feature. There are plenty of reasons to frown on the results, some of which have been widely discussed in recent days amid the growing moral panic as Lensa climbed to #1 in the app stores.
Here’s how it works: users upload 10 to 20 photos of themselves from their camera roll. For best results, the app suggests that the photos show different angles, different outfits, and different facial expressions, and that they not all be taken on the same day (no photo shoots, in other words). Only you should be in the frame, so the system can’t confuse you with someone else.
Lensa runs on Stable Diffusion, a deep-learning technique that can generate images from text or image prompts. The composite it builds from your uploads can then be used to generate second-generation images, so you end up with hundreds of variations, none identical, landing somewhere between the uncanny valley and one of those magic mirrors Snow White’s stepmother had. The technology has been around since 2019 and powers other AI image generators, Dall-E being the most famous example. Using its latent diffusion model and a 400-million-image dataset called CLIP, Lensa can spit out 200 photos across 10 different art styles.
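For the technically curious, here is roughly what that kind of image-to-image generation looks like in code. This is a minimal sketch using the open-source Hugging Face diffusers library and a public Stable Diffusion checkpoint, not Lensa’s actual pipeline; the prompt, file names, and settings are placeholders of my own.

```python
# A minimal image-to-image Stable Diffusion sketch with the "diffusers" library.
# Illustrative only -- not Lensa's code; prompt and file names are made up.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # a public Stable Diffusion checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# One of the uploaded selfies, resized to the model's native resolution.
source = Image.open("selfie.jpg").convert("RGB").resize((512, 512))

# The text prompt steers the art style; "strength" controls how far the
# output may drift from the source photo.
result = pipe(
    prompt="portrait of a person as a cosmic goddess, digital art",
    image=source,
    strength=0.6,
    guidance_scale=7.5,
).images[0]

result.save("magic_avatar.png")
```

Loop that over different prompts and you get the “200 photos in 10 styles” effect: the same face, rendered wildly differently each time.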
The technology has been around for a few years, but the surge in usage over the past few days may have caught you off guard, as though the singularity had suddenly arrived just before Christmas. ChatGPT made headlines this week for its potential to write term papers, but that’s the least of it: it can write code, break down complex concepts and equations so a sophomore could understand them, and generate fake news or help prevent its spread.
Confronted at last with the Asimovian reality we’ve been waiting for, whether with excitement, fear, or a mixture of both, the first thing we do is use it for selfies and homework. I filled my phone with pictures of fairy princesses, cartoon characters, metal cyborgs, Lara Croft-ian figures, cosmic goddesses and more.
And from Friday night through Sunday morning, each new set revealed more and more of me. Almost every photo featured cleavage or was fully topless, even though none of the photos I uploaded did. This held true whether I identified myself as a woman or as a man (Lensa also offers an “other” option, which I haven’t tried).

When I switched my selected gender from female to male, suddenly I got to go to space and look like Elon Musk’s Twitter profile picture, dressed like Tony Stark. But no matter which photos I fed it or how I self-identified, one thing became clear as the weekend went on: Lensa kept imagining me without clothes. And it kept getting better at it.
Was I bothered? A bit. The images that fused arms into boobs were the funniest, and as someone with large breasts, it would have been weirder if the AI had missed that detail entirely. But in some images my head was cropped out entirely so the frame could focus only on my chest.
According to AI expert Sabri Sansoy, the problem is most likely human fallibility rather than Lensa’s technology.
Sansoy, a robotics and machine learning consultant based in Albuquerque, New Mexico, has worked with AI since 2015 and argues that human error can produce unreliable results. “Almost 80 percent of any data science or AI project is about labeling the data,” he said. “When you’re talking about billions of pictures, people get tired and bored and mislabel things. Then the machine doesn’t work properly.”
Sansoy gave the example of a liquor client that wanted software to automatically identify its brand in photos. To train a program to do that, the consultants first had to hire human production assistants to comb through images of bars and draw boxes around every bottle of whiskey. Eventually, though, the mind-numbing task left the assistants tired or distracted, their mistakes crept into the dataset, and the AI ended up learning from bad data and mislabeled images. The program doesn’t confuse a cat for a bottle of whiskey because it’s broken; it does so because someone accidentally circled the cat.
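To make that failure mode concrete, here is a toy sketch of the kind of labeled data such a detector would be trained on. The file names, labels, and review function are hypothetical, invented for illustration rather than taken from Sansoy’s project.

```python
# Hypothetical bounding-box annotations a brand detector might be trained on.
# A tired annotator drew a box around the bar cat but kept the default label.
annotations = [
    {"image": "bar_001.jpg", "box": (34, 50, 120, 310), "label": "whiskey_bottle"},
    {"image": "bar_002.jpg", "box": (200, 80, 290, 330), "label": "whiskey_bottle"},
    {"image": "bar_003.jpg", "box": (60, 140, 180, 260), "label": "whiskey_bottle"},  # actually the cat
]

# Nothing in a standard training loop "knows" that last label is wrong; the
# model simply learns that the cat-shaped region is a whiskey bottle.
# Catching it takes a second human pass, for example:
def second_pass_review(annotations, corrections):
    """Swap in labels that a human reviewer has flagged as wrong."""
    return [
        {**a, "label": corrections.get((a["image"], a["label"]), a["label"])}
        for a in annotations
    ]

cleaned = second_pass_review(
    annotations,
    corrections={("bar_003.jpg", "whiskey_bottle"): "cat"},
)
print(cleaned[-1]["label"])  # "cat" -- but only after a human caught the mistake
```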
So maybe, when the Stable Diffusion neural network Lensa relies on was being trained, someone forgot to circle the nudity. That’s a very generous interpretation, and it might explain the baseline amount of cleavage shots. What it doesn’t explain is the steady evolution from cute profile pictures to bra-clad thumbnails.
When I emailed Lensa for comment, a spokesperson actually took the time to respond to each point I raised rather than directing me to a canned PR statement. “It would not be entirely accurate to say this issue is limited to female users,” the spokesperson said, adding that sporadic sexualization is observed across all gender categories, albeit in different ways, and attaching examples. Unfortunately, the examples weren’t for external use, but I can tell you they featured a shirtless man with a rippling six-pack.
“Since the Stable Diffusion model was trained on unfiltered internet content, it reflects the biases humans incorporate into the images they produce,” the response continued. “Creators acknowledge the possibility of societal biases. So do we.” The company reiterated that it is working on updating its NSFW filter.
As for my observation that the styles seemed gender-specific, the spokesperson added that certain styles, such as Anime and Stylish, are applied to all groups regardless of how users identify.
I briefly suspected that Lensa might be using AI to handle its PR as well, and was surprised to find I didn’t much care. If I couldn’t tell, did it matter? Chalk that up to how quickly our brains adapt and go numb to even the most incredible situations, or to the sorry state of the hack-flack relationship, where the gold standard of communication is a streamlined transfer of information without anything getting too personal.
And what about my weird AI-generated women? “Occasionally, users encounter blurry silhouettes of people in generated images.”
So there you have it: gender is a social construct that exists on the internet, and if you don’t like what you see, you can blame society. It’s Frankenstein’s monster, and we created it in our own image.
Or, as the language-processing AI model ChatGPT puts it: “Why do AI-generated images always look grotesque and disturbing? It’s because we humans are monsters and our data reflects that. No wonder the AI produces such terrifying images; they are simply reflections of our own monstrous selves.”