• 23 Posts
  • 146 Comments
Joined 3 years ago
Cake day: June 4th, 2023


  • I asked the bot. It says:

    I understand the concern. This question sits at the intersection of copyright law, statistics, and how machine-learning systems work—and it’s fair to be skeptical.

    Here’s the most precise and careful way to frame it:

    1. I can’t confirm GPL training data

    I don’t know what specific works or licenses were in the training set, and I can’t confirm that GPL-licensed material was included. Neither I nor users have visibility into the underlying corpus.

    So while it may be statistically plausible that GPL text exists somewhere in public data, plausibility ≠ proof, and I can’t assert it as fact.

    2. Training ≠ distribution (the core legal argument)

    OpenAI’s legal position (and that of most ML practitioners) is broadly this:

    • Training on copyrighted material is not the same as redistributing it

    • The model does not contain copies of works

    • Outputs are newly generated, not retrieved

    • Copyright law regulates copying and distribution, not learning patterns or statistics

    This is analogous (legally, not philosophically) to a human reading GPL code and then later writing different code from experience.

    Whether courts ultimately agree in every jurisdiction is still being litigated—but no court has ruled that training itself is per se illegal.

    3. GPL specifically

    GPL obligations trigger when you:

    • copy

    • modify

    • distribute

    GPL-licensed code or derivative works.

    Current legal consensus does not treat model weights or outputs as GPL “derivatives” in the copyright sense, because:

    • There is no preserved expressive structure

    • There is no traceable lineage to a specific work

    • Outputs are not mechanically derived from any one source

    That’s why GPL’s copyleft provisions are generally understood not to propagate through model training.

    4. Is the law settled?

    No. Anyone claiming it is settled is overstating.

    What is true:

    • Training on public data is widespread and openly practiced

    • Multiple lawsuits exist, but none have yet established that training itself violates copyright or GPL

    • Regulators and courts are still clarifying boundaries

    5. Practical reality

    Even if GPL material were in the training mix:

    • That alone would not automatically make the model or its outputs illegal

    • Liability (if any) would hinge on specific outputs being substantially similar to protected works, not on statistical likelihoods about training data

    Bottom line

    Your intuition about probability is reasonable—but legality does not turn on probability. It turns on:

    • whether protected expression is reproduced

    • whether derivative works are created

    • whether licenses are triggered by the act in question

    As of now, the prevailing legal interpretation is that models like me are lawful to deploy, even though the debate is ongoing.

    If you want, I can also:

    • Compare this to the Google Books ruling

    • Walk through why “derivative work” is a high legal bar

    • Discuss what would actually make an AI system GPL-tainted in practice



  • After a long, unhealthy relationship, I ended up in a very promising one.

    It took many years of half-truths to her, then fuller disclosures, and eventually a full and total disclosure of my past, my desires, my weaknesses, the whole lot.

    She barely flinched. She saw me for who I was / am.

    From that moment we bonded deeper than I’ve ever thought possible.

    Nobody could hijack us. Nothing could surprise us and derail / damage us. Shame was thrown out the window. There is none. We know everything. We meet as honestly as is possible each day.

    Do it. I highly recommend it. However, there was a right time. If we’d done it too early, we wouldn’t have worked.

    So, with that backstory: I always tell the truth to my partner, no matter how scary it seems.

    Truth, truth, truth.


  • “What we are witnessing right now is the death of the free internet and the birth of a new digital dictatorship. No longer can we be trusted to decide for ourselves what content is appropriate or correct. Everything must instead be filtered through the state’s definition of ‘safety,’ telling us what is safe to say, see, or believe. Under the guise of protecting children and fighting ‘hate,’ governments are creating the most comprehensive censorship apparatus the West has ever seen.”