Hacker News
DOJ: Man sentenced to 14 years for possession of deepfake CSAM (justice.gov)
59 points by popcalc 14 days ago | 96 comments



Somewhat controversial, but honestly, if someone is going to consume this sort of material, I'd rather it be a deepfake than the real thing. I think it's pretty authoritarian for the state to outlaw, and jail someone for 14 years, because he rendered the “wrong arrangement of pixels” on his computer. In the whole way society deals with the consumption of this material, there's a lot of moral panic going on, and folks are using it to increase government overreach.


I was about to comment that use of the newish term "CSAM" over the old fashioned "CP" seemed out of place here, given that the argument to prefer "Child Sexual Abuse Material" over "Child Porn" is to emphasise the sexual abuse of a child that took place to create it. So, at first glance, my immediate thought was that deepfakes would more accurately be described as child pornography than child sexual abuse material by their very nature.

However, it isn't clear to me from the press release which was the origin of the "nude bodies and bodies engaged in sex acts" onto which the convicted man superimposed the faces of child celebrities. Were those bodies from consenting adults, or were they from non-celebrity children?

If it's the first case I would argue that the images possessed by this man were CP, not CSAM, and you may have a good point to raise a debate. If on the other hand the original body images were indeed from children then this instance can accurately be described as CSAM (but I'm an old fogey and will continue to call it CP) and it wouldn't be a case of government overreach — the fact that the man used deepfakes to replace the faces of the children being exploited is a mere curiosity.


> use of the newish term "CSAM" over the old fashioned "CP" seemed out of place here

Training data though.


I think a secondary question is: how is this illegal but someone doing the same with a children’s fashion magazine, a Playboy and some glue isn’t?

Which is a wider question for deepfakes in general: how is it possible that I can grab a pencil and legally draw, say, Halle Berry in some sort of pornographic pose, but if I do the same via deep learning, it’s suddenly illegal.

It doesn’t make sense, and makes me feel like the arguments behind it are on very shaky legal ground.

Note that this is separate from sharing these images, where the real harm happens.


The legislative and executive branches have always wanted that type of drawn child porn to be illegal. Going by the laws on the books, it still is.

The only reason it is not illegal is the Supreme Court ruling such laws unconstitutional under the first amendment.

Deepfakes are sufficiently novel that the old precedent does not completely apply, so the executive is free to enforce the law until the appeals courts start ruling to the contrary.

And with the current makeup of the Supreme Court, I'm not sure which way the constitutional interpretation will end up going on this issue.


There are men sitting in prison for decades in the US for drawings.


From the article it seems the content wasn't entirely AI generated; it “digitally superimposed the faces of child actors onto nude bodies”, which makes this a lot worse than purely AI generated content.


Why specifically? The child actors aren’t harmed, nor is anyone else as far as I can see. So what exactly makes this worse than say AI art or a painting?


It's not explicitly stated, but it also isn't ruled out, that the nude bodies were real photos of real children, in which case the deepfake element would be irrelevant.


Sure, that’s a reasonable reaction assuming it applies. But people seem to have a strong reaction even if it didn’t, which is more what I was trying to understand.

I could see treating the use of adult models for source material as a sign of mental illness, and proactively institutionalizing someone for that might be on the table, but prison IMO implies some kind of harm.


> But people seem to have a strong reaction even if it didn’t, which is more what I was trying to understand.

There's some images and concepts where humans generally have a strong disgust reflex; I'm not sure how much any of these various reflexes are innate vs. learned, but in either case overcoming them is likely to also have severe negative consequences.

One of the common tropes: in every debate about human sexual desires and what should be forbidden, there's always someone who treats it as a slippery slope. I don't think they've been right, but I can see the possibilities for how they might be.

But we may have to take that risk, despite those possibilities.

Unfortunately, it is now very easy to fake such material, and this is absolutely going to impact elections going forward, because people are going to use the tech to create fake images of politicians.

20-odd years ago I saw a Photoshopped picture where someone had put the faces of George W. Bush and Osama bin Laden onto a gay porn photo, I think it was to protest the American invasion of Afghanistan; more recently, we've already had a newsworthy generated image of Trump resisting arrest, and I'd be shocked if nobody's yet put him into an image with Stormy Daniels.

There's a possibly apocryphal but widely believed quote from Trump, "If Ivanka weren't my daughter, perhaps I'd be dating her", and at the (supposed) time of the quote Ivanka was 13 — and remember, this is believed because, to a first approximation, someone put some text on a photo and tweeted it — someone is almost certainly going to use AI to generate that scene.

As Trump has (despite the legal battles, which needed a much higher standard for their evidence even to get started) a reasonable chance of winning the next election, there's a real chance of people seeing that generated image, taking it seriously, and repeating the Jan 6, 2021 attempt to prevent the transfer of power. Only this time, without needing a charismatic leader to lead them.

And because that Ivanka quote isn't well-evidenced yet is widely believed, that's also a problem that almost every democracy will have to face forever, no matter what you think of Trump himself.


> There's a possibly apocryphal but widely believed quote from Trump, "If Ivanka weren't my daughter, perhaps I'd be dating her"

Not even slightly apocryphal. You can see the clip on YouTube, e.g. [1] But she was 25 at the time, not 13.

« When Trump was the star of the reality TV show “The Apprentice,” he appeared on the ABC talk show “The View” with his daughter in 2006 and said, “If Ivanka weren’t my daughter, perhaps I’d be dating her. Isn’t that terrible? How terrible? Is that terrible?” »

> And because that Ivanka quote isn't well-evidenced

I would dispute that.

[1] https://www.youtube.com/watch?v=diMp241gAcw


The claim that is disputable is specifically that he said it when she was 13, not that he also said it when she was 25.

Half-truths have always been a fantastic method for propaganda.


I've never heard that subclaim before despite the quote coming up many times so maybe it's not as widespread as you think.

And you specifically said the quote was "believed".


I've heard a different quote that was allegedly made when she was 13 (I guess taking a proven quote and transposing the age onto another unproven quote is an easy step, possibly even an accidental one), but Snopes currently rates that as "unproven".

(Given the things he's actually on record as saying, it wouldn't surprise me though.)

https://www.snopes.com/fact-check/trump-sexually-attracted-i...


The face actors weren’t harmed; the body actors (unidentifiable, but in all likelihood youths) were.


How do you know it's not bodies of willing adult pornographic actors that merely appear child-like?

Why would you swap faces when you already had the pictures, if not to hide the adult facial features?


By that argument, deep fake porn shouldn’t exist at all. But it does, so clearly that’s not the way these people see the world. Some faces are simply “better” than others, at least in so far as those with perverted pornographic desires are concerned.

Regardless, child faces in adult bodies is in fact still illegal, see this detailed court report for more info: https://casetext.com/case/united-states-v-smelko-2

To amend my prior comment, the law says that the face actors in fact are harmed as well, reputationally. Fair enough; I’m not here to play “defend the pedo”.


It exists because the face is what visibly ages on people, even when the body isn't easily distinguishable from a child's. Why is it called "child sexual abuse material" if no child was abused in its creation?

As for reputational damage: OK, I agree that if they distribute it, that's fair. But what if they don't distribute it and the police find it only after taking their hard drives?


It was trained on macabre images. So there is real CP involved.


Not to mention that good quality deep fakes (once legalized) would completely destroy any economic incentive for real CP production.


We don’t know that. There’s a perfectly sane argument that it’ll amplify it by increasing availability and allowing more people to “dip their toes” so to speak.

I think this is something researchers should try to figure out rather urgently. If it can destroy incentives then great, if it amplifies then, obviously, not great.


If people can "dip their toes" into something and then develop a proclivity towards it, doesn't that imply sexuality is not innate? If it is innate then this is not a concern, and if it is not innate, the prevailing views we have on this topic are going to challenge that conclusion.


You're wondering whether preferences and tastes can develop or shift over time and with exposure? Obviously they can and do.

Presumably you’re pointing to “ahh so we CAN pray the gay away!” or whatever other ridiculous intervention you can imagine. The pushback on this mode of thinking has little to do with whether people’s preferences can hypothetically change and more to do with 1) who the fuck cares outside of theological arguments which have no standing in developed legal systems, and 2) the “interventions” are generally barbaric and across the board not shown to be effective.

Obviously people who want to voluntarily attempt these interventions are more than welcome to (and do). But that’s never what the conversation is about.


I said nothing about praying.

It is commonly accepted in our time that you're born with your sexual proclivities, and it is a sensitive topic; saying otherwise can get you into serious social trouble. So the idea that "preferences and tastes", as you put it, can change with regard to sexual orientation is widely taboo in modern western society. To me, this directly conflicts with the concept that someone can develop a sexual attraction via exposure to pornography; the two beliefs are mutually exclusive. I'm just pointing out the difficult social consequences of investigating and confirming whether exposure can alter someone's attractions in that way.


> I’m just pointing out the difficult social consequences of investigating and confirming whether exposure can alter someone’s attractions in that way

Not true. You’re mistaking that for “the difficult social consequences of forcing other people to be studied to confirm whether exposure can alter someone’s attraction that way.”

You’re more than welcome to conduct such experiments on yourself. And sure, some people might disagree with it, but some people will disagree with anything someone does.


> preferences and tastes can develop or shift over time and with exposure

You'd be surprised how many progressive(?) people would balk and call you out if you suggest that...


Nope, as explained they’re balking at the social and political implications of someone who is hostile to LGBT existence suggesting that.

Justifiably, given that historically people trying to “enact” this belief through political and social means have reliably resorted to incredibly awful (and again, ineffective) “interventions.”


Many (not all, not most) perceive the notion as a threat because it goes against the belief that it is a fixed quality.

People still buy real diamonds.


Do you expect a huge marketing campaign from BigCSAM focused on convincing pedophiles that only real CSAM proves their husband/wife truly cares about them?


If you think that people prefer real to artificial anything only because of marketing, you may be acoustic.


This particular case is very well known and proven to be caused directly by the marketing, yes.

There is no justifiable difference like beauty, durability or function between "real" and "artificial" diamonds. Only an expert can tell, and only because the "real" ones are less perfect. And of course - both are real diamonds.


I'm not disputing that.


Do they come along with lifelong sentences as opposed to artificial diamonds that are legal?


One of the problems with this kind of thing, in general, is that it tends to escalate.

We may start off reading raunchy comics, but the need for "more" grows, until we find that only The Real Thing will suffice.

This doesn't happen to everyone, but it happens to enough folks that it can be a serious problem (and also serious money; pr0n companies bank on getting folks to go deeper down the rabbit hole).

I should add that, in the US, sexual offense convictions are really a life sentence. Offenders need to register as sex offenders on public databases, for the rest of their lives, and, quite often, are not allowed to live near schools.

I know that in one city, there was a bridge that a whole bunch of sex offenders lived under, because there was nowhere else they could rent apartments.


This is the same argument they used in the 90s to ban Mortal Kombat and other violent games. It wasn't a good argument then and it isn't a good argument today.


I don’t remember what the fallacy is called, but there’s one where folks take something that belongs to one instance and apply it to another instance where the veracity is tenuous.

An example is Marie Nyswander’s “broken brain” theory, which she invented to push methadone. It’s never even come close to being proven, but sounds downright sensible, especially when someone wearing a lab coat spouts it. It’s the one where they tell you that your brain is “broken” because of drug use, and can no longer produce endorphins (or other brain chemicals; it can vary). There is a significant element of shame involved, telling the user that it’s “their fault” that their “brain is broken,” but they are in luck, because this pill will fix it.

It was pretty much B. S., anyway, even for opiate drug use, but I have also seen it applied to alcoholism, pot smoking, kleptomania, sexual compulsion, gambling, anorexia, cults, video games, violent behavior, and habitual criminal behavior.

It’s one of those theories that makes sense, if you don’t sweat the details, but has never actually been proven, by anyone reputable.

Lot of CBD snake oil uses similar stuff.

Like I said, the progression only actually seems to happen to a fairly small number of people. The crimes they commit, though, are pretty bad. A risk management approach needs to score both impact and probability. People tend to get caught up in only one axis; usually the one that supports their preconceptions.

I have an acquaintance that is a passionate pool safety advocate. That’s because she had a nephew die in a pool. If you talk to her, pool safety is the #1 national priority, and all other issues are secondary.


It seems like there are two conflicting theories:

Either A - it's a gateway to worse behaviour, and thus criminalizing this prevents future victims.

Or B - it gives pedophiles a release not involving real people, thus preventing victimization of real people that would otherwise have happened.

What laws we should make as a society depends on which of these opposing theories is true. Is there any actual research on this? Surely psychologists must have studied this.


>Is there any actual research on this? Surely psychologists must have studied this.

That's not an easy thing to study, not many pedophiles would be willing to participate in studies. Existing research mostly examines just the ones who have offended. There's some research about non-contact offenders though: https://sci-hub.se/https://journals.sagepub.com/doi/abs/10.1...

TLDR

> while some have argued that exposure to child pornography may promote contact sexual offending by validating and reinforcing attitudes surrounding the sexualization of children (Bourke & Hernandez, 2009), others have argued that child pornography acts as a substitute for contact offending, thereby preventing the direct sexual victimization of children (Riegel, 2004). Although plausible, such causative positions are yet to be directly examined or established within the existing empirical literature base, limiting the strength of these arguments.

> Nonetheless, the available evidence does not appear to support the idea of a direct causal relationship between child pornography and contact sexual offending, at least in the short-term. This is consistent with the findings of McCarthy (2010), who reported that the majority of dual offenders in her sample (84%) had committed contact sexual offenses prior to, rather than following, their involvement with child pornography. Furthermore, if child pornography directly promoted contact sexual offending, one would reasonably expect rates of contact sexual offending to have similarly increased over the last two decades (Glasgow, 2010). Fortunately, official crime statistics indicate that this has not been the case (Brennan, 2012; Motivans & Kyckelhahn, 2007; Victoria Police, 2014).

> Taken together, these findings suggest that although some CPOs do go on to commit sexual offenses against children, engaging in child pornography offending does not inevitably lead to the direct sexual victimization of children.


There’s also the possibility that both are true.

Some people stop at weed, but some people go on and get hooked on bad drugs.

Maybe, some day, they’ll develop an unaddiction drug as well as an unpervert drug or treatment.


I agree that they are both probably true, but I don't think that fully addresses bawolff's point which I would interpret as harm reduction. If, on net, the B class dominates the other by a big margin then the net harm would be reduced and laws shouldn't be harsh - even if the A scenario plays out sometimes. The law should try to figure out who is actually causing abuse to happen and then target them.

My bet is it would be a net reduction because if you hit someone with dopamine when they are staring at a screen they spend more time staring at screens (I'm pretty sure we found that was how violent video games work out, most gamers are a bit more of the weedy and harmless style in real life). But some stats would be helpful.


There is the possibility, as with weed legalization, that you now get more users who would otherwise have avoided it. An increased user base means additional users going for the hard stuff.


Yeah, but that sort of assumes there are latent paedophiles out there, which just seems a stretch. Everyone encounters children, usually quite regularly. Most people have their own, in fact. If someone has an attraction to them that is so strong they're going to go off the rails, it has probably already been triggered.

The paedophiles already know who they are. It is quite plausible that easy access to such pornography would produce a net reduction in offending, with negligible other effects. The stats don't suggest traditional pornography availability has led to more sex - quite the reverse, in fact, based on my cursory readings.

But, as every comment in the thread points out, there really needs to be some effort put in to figuring out what would happen if it hasn't already been done. I'm not sure if you're looking for something here, we all agree there are multiple ways any change could play out.


If the deep fake is of a real person, there is still reputation damage to the person featured.

That said, you're touching on something important. These people appear to be incurable, and it's about time we started talking about giving them some kind of outlet that mitigates the harm they might cause under the current regime of simply telling them they just have to restrain themselves. I don't know what the right answer is: VR? Sex dolls? Anything that will reduce the number of attempted incidents against actual children should be considered.


> If the deep fake is of a real person, there is still reputation damage to the person featured.

Only if the material was distributed by the individual. If the material remained on the individual's computer, I just don't understand how someone can argue that harm has occurred.

Like, maybe there is an argument that some non-zero level of "harm" occurred due to perceived normalization of untoward behavior, but is this worth allowing a slippery slope with no immediately apparent bottom? Where will our benevolent government draw the line on what kind of thought crime is permissible?

It's getting really tiring that the only people routinely going to bat over these kinds of issues are cybersecurity professionals and privacy advocates; the average person still seems to buy into this propaganda, and it harms everyone in the end, as our collective ability to effectively fight bureaucratic tyranny becomes systematically reduced.


> VR? Sex dolls?

Until now, VR child porn and child sized sex dolls were not on my mind when I thought of state sponsored healthcare.

I say this half jokingly, I am not really sure how I would want society to deal with that kind of mental illness. It's hard to have a nuanced perception of it, yet without nuance we are no better than the people we judge...


Especially since we’ve witnessed, after 20 years of porn on the internet, that far from resulting in youths being unfathomably promiscuous, it rather results in youths not going out enough.


Isn't imagination enough?


I think this new dimension makes things even worse. Soon everyone will be able to steal your image to depict you doing whatever they can think up that's disturbing to me. I know that you can do this with photo editing skills already but AI will be like the iPhone for abuse.


I can already imagine you in drag and induce others to as well. Stop morally panicking before we accidentally cede even more liberty to the State!


If everyone can fake, no one can. Maybe it will sabotage video as viable evidence and eliminate the surveillance apparatchik of cameras everywhere as useful for proving anything.


That everyone can libel doesn't mean no one can.

That everyone can perjure themselves by libelling someone while under oath doesn't mean no one can.

And if anything, it would strengthen rather than eliminate the surveillance apparatchik of cameras everywhere — as the term "apparatchik" has its origins in the Soviet government, in this context it would therefore be exactly the subset of the surveillance apparatus[0] that the government deemed trustworthy.

[0] and now I'm wondering if "apparatchik" was an auto-corrupt of "apparatus"?


You’re aware some people like having security cameras available to prove things, like who broke into their homes or jumped them on the street, right?

“Finally! We’ll have no way to prove anything at all! This is a big win!” Makes zero sense.


Thankfully the legions of scammers have made many non-techies distrustful by default in some settings (e.g. personal finance, computer security). I think something like it will happen with gen AI stuff too. If everything can be fabricated at scale, doubt will be at the top of the mind and trusted channels will form.


I'm not sure that's true, though. It's always been trivial to doctor a screenshot, even just by editing a webpage in dev tools before taking it.

And yet people reshare and react to screenshots of tweets and similar all the time. The authenticity is only really questioned if it strongly goes against whatever narrative one already believes. And then that ability to fake them also gets used to dismiss legitimate honest information because it could have been faked.


Hard agree, I like to keep up with the newest models by throwing new haircuts on friends, making bad AI memes, just random stuff - and there's tons of models strictly for generating NSFW content. I saw a LoRA last night that was straight up called "facesitting", on the most popular hub to post/download models.

This is going to probably sound insane, but my primary concern is tech not being able to bridge the gap fast enough for this to be used to enhance our ability instead of enfeebling it.

But hey, maybe this is our century's calculator and I'm just crazy.


> pictures that digitally superimposed the faces of child actors onto nude bodies and bodies engaged in sex acts

The question is whether these bodies were those of grown adults engaged in consensual activities. The answer is, in all likelihood, no. Many folks trying to give him the benefit of the doubt here, not sure why he deserves it (given he’s a repeat offender too).


I don't think people are trying to give him the benefit of the doubt as much as this story is misleading. It only really makes sense if the pictures are of adults because if it was child porn it wouldn't be a news story. "Man goes to jail for having edited CP" isn't newsworthy in the same way as "man goes to jail for having AI generated CP". People's bias here is to just assume that they are being told something interesting because otherwise there is no reason to share this information.


This isn’t a news story as much as a DOJ press release, of which there are many thousands available. News agencies have picked it up to get clicks; that doesn’t make the guy less guilty. News agencies do the same when a mass murderer goes to jail; should we assume that is unexpected too?

Anyways, I’m not interested in playing this game of “defend the pedo”, but this site has way more background on the case: https://casetext.com/case/united-states-v-smelko-2

But I was actually incorrect, child faces on adult bodies is still unlawful, the linked article goes into more detail and prior casework on the matter.


There are many adult actors explicitly cashing in on their child-like appearance, especially in Japan. How do you determine whether the picture is of a child or of an adult with a child-like appearance?


This is partly why so much hentai in places like Canada and Japan features adult women with massive, unrealistic breasts; better safe than sorry.


I don't know about hentai, but the number of Japanese actors I have to skip because I don't like them appearing like children suggests they're not that interested in the safety thing...


It's a large industry with many different opinions, but also it's different with live actors as proving their age is simple.

Drawn characters don't have government IDs and birth certificates, so caution is often exercised.


This article does not make it clear if the material was actually deepfaked, or just photoshopped; it simply mentions superimposing a face onto a naked body.

Not that it matters; the possession of either is clearly protected by the first amendment of the US Constitution, as no individual was harmed during the production of such material. This applies to any drawn, rendered or otherwise synthetic image, regardless of the subject content. You cannot make the mere possession of a synthetic image illegal.

I'll entertain arguments about the need to limit distribution of such material as that can cause real harm to minors. But possession of synthetic images of ethical provenance? Give me a fucking break.

Just setting precedent for the further erosion of all citizens' rights, via the thoroughly beaten and dead trojan horse of Think of The Children.

Make no mistake. This is all to create a trail of precedent in order to restrict access to foundational generative models. We cannot allow our government to restrict our access to these new generative technologies while simultaneously wielding them against us.


>Not that it matters; the possession of either is clearly protected by the first amendment of the US Constitution, as no individual was harmed during the production of such material.

Wrong. Men are sitting in prison for decades for pen-and-paper art.

I read more details; Smelko was caught up in someone else's search warrant. Arguably Smelko did nothing directly other than being in the wrong place at the wrong time to set off the chain of events which led to his 15 year sentence.

This is why I eagerly await jury duty notices, had I been on Smelko's trial I would have no-reason acquitted (known as jury nullification).


Not wrong. Reread the post. I said "clearly protected by the first amendment". That doesn't mean that our government hasn't enacted laws which go against the constitution. They do this all the time, it just goes unchallenged in the higher courts. And even the higher courts can be compromised.

I'm speaking on legal consistency. I know people are sitting in jail for drawings, why do you think I'm making this case in the first place?


>I know people are sitting in jail for drawings

People are taking this for granted but I have yet to see an actual case. Every time this topic appeared on HN it also turned out that the offender had real CSAM on his devices as well.


I don't personally know of any specific cases but I think it's sufficient to recognize that some countries, such as Canada, outright criminalize lolicon and other drawn material, and that the chilling effect alone is enough to push back.

I've personally brushed against the effects of these laws while operating and moderating international discussion forums and boards, because it's easiest just to outright ban the stuff than deal with a myriad of laws across each country. And that doesn't even get into individual state laws.


Notable cases:

John R. Farrar 2015 (the specific case alluded to)

Dwight Whorley 2005

Christopher Handley 2008

Steven Kutzner 2010

Christian Bee 2012

David R. Buie 2017

Elmer E. Eychaner, III 2018

Thomas Alan Arthur 2022

https://en.wikipedia.org/wiki/Legal_status_of_fictional_porn...


This is a misinformed and incorrect take. The PROTECT Act of 2003 [0] makes it illegal to possess CSAM that is generated by superimposing faces of minors onto sexually explicit imagery, or vice versa.

This bill predates generative AI models by decades. There is no need to engage in conspiracy theories here — the law is clear that this kind of imagery is illegal.

[0] https://www.congress.gov/bill/108th-congress/senate-bill/151


It's not misinformed or incorrect. I'm aware of the current law and how it stands in opposition to the first amendment. Reread my post.

> There is no need to engage in conspiracy theories here

Please do not mischaracterize my post in such a light. There are no conspiracy theories here.

There are ongoing, completely public, campaigns by both the Executive and Legislative branch to regulate access to generative models. There is a reason this press release vaguely uses the term "deepfake", which is completely distinct from dragging and dropping a minor's face onto an adult's body. Whether that reason is deliberate or negligent, it still serves the greater purpose.

The debate over access to generative models with respect to CSAM has been hot for a while now, to ignore that debate and characterize my post as perpetuating conspiracy theories is just disingenuous.


1. The PROTECT Act provisions have repeatedly been upheld by both appellate courts and the Supreme Court as constitutional, as long as the CSAM in question meets the Miller or Ferber standards. Either the law is constitutional, or you’re proposing that the courts are illegitimate, the latter of which is conspiratorial.

2. You are right that there is a campaign to limit access to open source generative AI models, but it is not an initiative led by the government. Companies such as OpenAI, Anthropic, and Google are leading the charge when it comes to emphasizing the danger of open source models and are lobbying every day to limit access. The executive and legislative branches are following suit with what industry executives tell them, because those executives are deferred to as experts.

Industry policy teams have invented vague, ill-defined terms such as “frontier models” and equate these models with nuclear weapons in terms of their power. They have a vested interest in being the sole controllers of this technology.

If you want to counter governmental efforts to limit access to such models, start by countering the FUD pushed by industry in this space.


> The PROTECT Act provisions have repeatedly been upheld by both appellate courts and the Supreme Court as constitutional, as long as the CSAM in question meets the Miller or Ferber standards. Either the law is constitutional, or you’re proposing that the courts are illegitimate, the latter of which is conspiratorial.

I'm sorry, but it is absolutely not "conspiratorial" to suggest that the judicial system is compromised. You have to be living under a rock to not understand that all three branches of government are effectively compromised to party-line political agendas backed by corporations, NGOs, etc.

Our forefathers would have spat in disgust at the idea of the Bill of Rights being perverted to the point that drawing the wrong lines on a piece of paper and keeping it in one's own home calls for stripping someone of their liberty and placing them in prison. And this extends to modern technology such as image editors. And I stress, we are talking about production/possession, not distribution. There is actually a case for restricting distribution of such material.

Generally speaking, only a fool could look around at the state of the US government and say, "the laws are just and anyone who questions their justness is conspiratorial." That is textbook gaslighting, whether you intend for it or not.

> You are right that there is a campaign to limit access to open source generative AI models, but it is not an initiative led by the government.

Again, calling bullshit. [0]

Our government is in the business of staying in business, at the expense of individual liberty. This is well established going back decades. I'm not even going to argue that point with you. And because of this, they will absolutely treat foundational models with the same playbook as cryptography in the 90's if they feel like it's necessary. [1]

The government already did all of this with cryptography, and it was a war hard won. So you have to make the airtight case that they won't do it again. Not the other way around. You have to prove that they have changed for the better. I don't have to prove anything because history is on my side.

Please, I beg of you, do not delude yourself that the US government wants what is best for you, while they are spending billions bombing hospitals overseas with tax dollars that could greatly benefit our own citizens. Do not delude yourself that it is just corporations, or just the government. It is CorpGov. They are, in the end, one and the same, in that they play ball when it suits them, and play against each other when it suits them. Don't be a sucker. And please don't accidentally gaslight others by throwing out accusations of conspiratorialism the moment they question the credibility of the US government.

I expect any followup response to dispense with the gaslighting and ad hominem, and focus on fostering a constructive debate. If you can't do that, just end the conversation here and do some hard thinking.

[0]: https://www.whitehouse.gov/briefing-room/statements-releases...

[1]: https://en.wikipedia.org/wiki/Crypto_Wars


Deepfake sounds a lot more serious than photoshopping a face onto a picture. The case does not give details on what "nude bodies" means but I sure hope it was known CSAM otherwise this sets a ridiculous precedent with image generation.

Were they focusing on the intent, or on the "I know it when I see it" definition? There is a reason you sometimes have to add to the negative prompts of these models: you never know what you are going to get. Does this mean that someone running a long batch of images may accidentally get something that can land him in jail?


> this sets a ridiculous precedent with image generation

No need for 'precedents', 'slippery slope', 'This will lead to...', 'I can imagine a future where...'.

It's here now and it's here to stay.


Poor guy :/ Got bad genes but didn't harm anyone, still went to prison.


"Recidivist sex offender" strongly implies that this man is not in the "didn't harm anyone" bracket, no? It means there's a conviction before 2023.


"Recidivist sex offender" is an extremely broad category that doesn't imply anything about whether he harmed someone. It is a category that includes everything from vile monsters to Assange, who has secured his place in the history books and should be celebrated as a hero. And a bunch of sex offence crimes are harmless depending on the jurisdiction.

In this case maybe he'd done more of the harmless stuff they're going to charge him with based on the article. Presumably if he'd harmed someone they'd be leading with it.


The great thing about the internet is you can look people up, and you can read the indictments.

He has an unusual name. The current indictment is on the internet. And so is an indictment from 2002.


So long as possession can be prosecuted, any one of us ITT could be set up by malware. Possessing even the real stuff has to be legalized if this overpowered weapon of spycraft is to be broken. This political weapon is a legit concern in some circles (see Tucker's Rogan interview).


You are grossly under-estimating the ability of the FBI’s cyber forensics teams to discern whether or not data was planted maliciously or produced overtly, as well as under-estimating the ability of the courts and a jury to understand when someone is willingly producing CSAM versus accidentally being in possession.

This prosecution is the first of its kind for the DOJ. It is highly unlikely that they would pick this case to take to trial if there was not certainty about the actions the perpetrator engaged in.


This case is not about production, only possession.

There is a good reason why possession and creation of artificially created abuse material is punishable.

Perpetrators often have a history of ever escalating offences. Consuming material usually precedes actively harming another person.

From a cognitive behavioural therapy angle, that makes sense: repetitively envisioning an action can reduce inhibitions.

Hence we outlaw such material, to prevent people from training themselves to be abusive.


Right, but we have to acknowledge that while some of us watched Minority Report as a cautionary tale, others clearly see it as a blueprint :)

We can have a meaningful discussion of CBT and which way exposure leans; but I'll put in my vote for us to be a society that punishes actions that harm others, not the thoughts and preferences that may or may not one day lead to any actual actions that may harm others.

It is easy to point out just... an enormous library of movies and video games that use some form of violence. I have a 5 year old kid and already have to explain to him that even the cartoon violence he's inevitably exposed to does not mean that violence is OK in real life. But while I may as a parent try hard to guide and even limit what he's exposed to and when, I am nowhere near the point where I'd lobby to illegalize such violence... LET ALONE JAIL SOMEBODY FOR WATCHING A SCHWARZENEGGER MOVIE! I will absolutely be seemingly unfair and ask: if we feel strongly that watching CSAM will lead to action, what about violent movies? And I'll present a duality of options - is the next step criminalizing violent movies, or are we in a double standard guided by something other than seemingly scientific studies of exposure?

(gawd, I hate to even appear to be in a place to even seems to be anywhere near condoning even AI generated CP; I cannot underline enough how much that's not my kink! but I am also genuinely terrified by the seemingly rational-presenting suggestions that "bad thoughts lead to bad actions so we need to outlaw bad thoughts" - there's just too much gnarly catholic dogma disguised in 'cognitive therapy angle' in there for my level of comfort :O )


The State relies on that cycle to consolidate power. These problems circle back to what people think the role of the State should be, and all these Wars on X (like crime).

There was a time you would have to bring a criminal action to court, there was no bs "The People of the State of X" charging. Also, mere possession of things being criminal is a very recent invention.


That logic would ban a lot lot lot more things. Including good old violent video games.


The difference is that there is evidence of one and not of the other.

https://smart.ojp.gov/somapi/chapter-4-internet-facilitated-...


Can you be more specific? That's a big document and I couldn't find such a comparison quickly reading through.

Though this is interesting in the introduction: "The increase in internet sexual offending has been paralleled by a decrease in the number of reported child sexual abuse cases, and a decrease in violent crime more broadly"


Which it probably should at a certain level of realism.

It doesn’t seem obvious to me why we can’t/won’t eventually make 100% completely convincing video games and then how our brain will handle that differently than experiences that really do distort people’s psychology like war, domestic violence, etc.


It's a lot harder to get PTSD when you have a pause button and nothing can actually harm you.


probably, but for example a horror movie is successful to the extent that it induces actual fear in people. I don’t see why we can’t or won’t produce thoroughly convincing games that can actually produce those feelings.

As another data point, it is absolutely possible to actually traumatize yourself with a hallucinogenic drug despite the complete absence of “actual danger”.


We’re talking about “thought crime” and punishing crimes that haven’t been committed here and as usual, we’re using “think of the kids” arguments to justify it. I’m not a fan.


> usually precedes

Do you remember your source? I’m suspicious of generalizations.

I’d especially like to check whether most people who acted upon it had been previously caught, whether there is an overlap between people who acted upon it and people who merely consumed, and what the loss rate is between consumers and the next step, because surely it is not 100%.


Sounds reasonable at first glance. But it's no different from thought policing.

Say that a person has an immense crush on some celebrity, fantasizes about doing <unspeakable things> to said celeb, but never acts on it. Would that deserve punishment?

Imho, not. And CP material is essentially no different.

Now as for people actually abusing children, or doing so to produce such material: go ahead & do <unspeakable things> to them.

A -much- stronger argument is when someone is preparing to commit crimes: buying supplies, terrorists making bombs, someone preparing a basement as a prison cell for a yet-to-be-grabbed victim, etc. Or when sharing fake images causes actual harm to those depicted.

Yeah, the 'slippery slope' argument counts. But so does the distinction between fantasizing about X and actually doing it.


Do you think GTA and such should be illegal? You don't only watch; you literally practice crime, including much worse stuff (running people over, mass murder with assault rifles, etc.)

Certain films come to mind too



