Nudify Apps That Undress Women’s Photos Surge in Popularity

Apps and websites that use artificial intelligence to digitally undress women in photos are surging in popularity, according to researchers.

In September alone, 24 million people visited websites that digitally undress individuals, according to Graphika, a company specializing in social network analysis.

Graphika also found that many of these services, often called “nudify” or undressing services, rely on popular social media platforms for their marketing.

Since the beginning of the year, the number of links on social media advertising undressing apps, including on X and Reddit, has increased by more than 2,400%.

These AI-driven services predominantly target images of women, intensifying the ethical concerns surrounding their operation.

The emergence of these applications is part of a worrying trend of non-consensual pornography created and spread thanks to advances in artificial intelligence, a type of fabricated media known as deepfake pornography.

Deepfake pornography uses AI to create or alter images and video so that the people depicted appear to be engaged in sexual acts they never consented to. Many of the source images are taken from social media platforms without the knowledge, consent, or control of the people in them.

The surge in popularity corresponds to the release of several open-source diffusion models, AI systems that can generate images far superior to those produced just a few years ago, Graphika said. Because the models are open source, they are freely available for anyone to use.

“You can create something that actually looks realistic,” said Santiago Lakatos, an analyst at Graphika, noting that previous deepfakes were often blurry.

One image recently posted on X advertised an undressing app using language that suggested customers could create nude images and then send them to the person whose clothed photo had been used.

One of the apps has also paid for sponsored content on Google’s YouTube, and it appears first in search results for the word “nudify,” raising concerns about how easily such tools can be found.

A Google spokesperson said the company doesn’t allow ads “that contain sexually explicit content.”

“We’ve reviewed the ads in question and are removing those that violate our policies,” the company said.

A Reddit spokesperson said the platform prohibits any non-consensual sharing of fabricated sexually explicit material, and that it has banned several domains as a result of its research into the issue.

X did not respond to requests for comment.

Traffic to these services has been rising. Some operate on a subscription model, charging $9.99 a month, and claim on their websites that they are attracting a growing number of customers.

“They are doing a lot of business,” Lakatos said. He noted that one undressing app’s website claims, if its figures are taken at face value, more than a thousand users per day.

Non-consensual pornography, particularly involving public figures, has long plagued the internet, violating individuals’ privacy and raising serious ethical concerns. The advent of advanced AI has made the situation more alarming.

Deepfake software, powered by AI, has become increasingly accessible and effective. It creates hyper-realistic fake videos or images by superimposing existing images or video onto source media.

“We are seeing more and more of this being done by ordinary people with ordinary targets,” said Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation. “You see it among high school children and people who are in college.”

Many victims never learn that explicit images of them exist, Galperin said, given the vastness of the internet and how easily such content spreads across platforms.

Those who do discover the images often struggle to get law enforcement to investigate. The complexity of digital crimes, jurisdictional issues, and the anonymity the internet provides frequently make prompt action difficult.

In the United States, the legal landscape is still evolving: there is currently no federal law banning the creation of deepfake pornography involving adults.

Generating such images of minors, however, is strictly prohibited under laws designed to protect children from exploitation and abuse, and violations are treated severely.

A landmark case in November underscored those consequences. A child psychiatrist from North Carolina was convicted of using undressing apps on photos of his patients, the first prosecution under the law banning the generation of deepfake child sexual abuse material. He was sentenced to 40 years in prison.

