More dramas for Wizards Of The Coast?

If I take a video and shrink it down from 4K to 192x108 then it's still a derived copy of the original.
Is it still a violation if you reduce it to a single pixel and keep only every fourth frame? That's the level of compression we're talking about.
 
Well, to reiterate, Open AI owns both Stable Diffusion & Dance Diffusion. They also directly fund and use datasets like the various LAION datasets which contain copyrighted information (& are now being used for commercial purposes). This data is being used without the consent of artists, and they receive no compensation or credit for their work being scraped. This needs to change.

Dance Diffusion has major restrictions as to what is allowed into its datasets and training models, going so far as to say that their earlier diffusion models do in fact violate copyright laws. The fact that they changed Stable Diffusion 2.1 to not allow the use of artists' names should tell you that they are aware that what they are doing could potentially result in litigation.

I'll post Open AI's direct quote again because you seem to have missed it...



Why is it ok to scrape billions of copyrighted works into a dataset, but the same company will adamantly state they are creating new datasets composed entirely of copyright free material?

They have admitted that their own diffusion models are prone to the very issues I am trying to address.

You keep saying that the final output is OK, and you keep saying the datasets are not in violation of copyright.

Due to the fact that this is being disputed in courts as we speak, I think you have to be a little bit more convincing than simply stating that "the dataset is not itself a copyright violation." I think some kind of concrete proof should exist to better reinforce your claim that everything is kosher.


Just because someone takes steps to avoid litigation doesn't mean they've broken the law, or that the law is for or against their side.
To me, they are simply answering a call from users to use different types of data.
There's also the question of whether I need to pay someone to learn their art style. As a human, I don't. I can look at their works and copy the style; at least in drawing that's true. Music has a weird thing going on right now where folks are trying to copyright the equivalent of combined brush strokes and say no one else can use that. That I don't get.
 
Well, to reiterate, Open AI owns both Stable Diffusion & Dance Diffusion.
OpenAI owns ChatGPT and DALL-E. They don't own Stable Diffusion and Dance Diffusion, which are both open source. A good rule of thumb is that if an AI is open source, it's not owned by OpenAI.
 
Just because someone takes steps to avoid litigation doesn't mean they've broken the law, or that the law is for or against their side.
To me, they are simply answering a call from users to use different types of data.
There's also the question of whether I need to pay someone to learn their art style. As a human, I don't. I can look at their works and copy the style; at least in drawing that's true. Music has a weird thing going on right now where folks are trying to copyright the equivalent of combined brush strokes and say no one else can use that. That I don't get.
I agree there is a lot of confusion because there are a variety of different issues at play. I'm also not talking about style (because that's another issue), I am specifically asking about the use of datasets that contain copyrighted information.

Here is a website called Have I Been Trained? where you can put in the name of any artist and it shows you the images scraped and present in the current LAION-5B dataset.

Here is a screenshot of a search for Frank Frazetta, whose work, to the best of my knowledge, is not currently in the public domain. Both his artwork and his personal signature are in the LAION-5B dataset.
 

Attachments

[Attachment: LAION 5B dataset.jpg, 1.4 MB]
OpenAI owns ChatGPT and DALL-E. They don't own Stable Diffusion and Dance Diffusion, which are both open source. A good rule of thumb is that if an AI is open source, it's not owned by OpenAI.
Oops! my apologies, I meant to say Stability AI, not Open AI, I will edit my post to correct the error. Thanks!
 
Why is it ok to scrape billions of copyrighted works into a dataset, but the same company will adamantly state they are creating new datasets composed entirely of copyright free material?
I wasn't clear about this in my posts. So I am correcting it now. I am talking about models, not datasets. I see now that I conflated the two terms. I will make sure I am consistent in the future. My apologies for the confusion.

Datasets contain either the actual works themselves or links to the works. Once a model has been trained, it no longer needs the dataset, and trained models can be distributed independently of a dataset.
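The dataset-vs-model distinction can be shown with a toy sketch. Everything here is invented for illustration: the URLs, the field names, and the "training" itself (a real model would be a tensor checkpoint, not word counts); the only point is that training produces a new artifact that ships without the dataset rows.

```python
import json

# A links-style dataset: metadata rows (address + caption), not the images.
dataset = [
    {"url": "https://example.com/painting1.jpg", "caption": "a dragon over a castle"},
    {"url": "https://example.com/painting2.jpg", "caption": "a knight in the rain"},
]

def train(rows):
    """Stand-in for training: derive 'weights' (here, caption word counts).
    The point is only that the output is a new artifact, not the rows."""
    weights = {}
    for row in rows:
        for word in row["caption"].split():
            weights[word] = weights.get(word, 0) + 1
    return weights

model = train(dataset)

# Once trained, the model can be serialized and distributed on its own;
# the dataset rows (and their URLs) are not inside it.
checkpoint = json.dumps(model)
assert "url" not in checkpoint
```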

Hope that clarifies my point.

As for this

Dance Diffusion is also built on datasets composed entirely of copyright-free and voluntarily provided music and audio samples. Because diffusion models are prone to memorization and overfitting, releasing a model trained on copyrighted data could potentially result in legal issues. In honoring the intellectual property of artists while also complying to the best of their ability with the often strict copyright standards of the music industry, keeping any kind of copyrighted material out of training data was a must.


releasing a model trained on copyrighted data could potentially result in legal issues
Of course, a model trained on copyrighted data could lead to legal issues. Publishers and authors have been suing people who make indices and concordances for decades, despite being slapped down most of the time. Why?

Sometimes, as in the Harry Potter Lexicon case from the 2000s, the indexer or writer goes too far and excerpts too much. That case held that RDR Books and the author did go too far in copying excerpts, but also that people have the right to prepare works of this type without the author's permission.

One way to forestall a lawsuit is to make sure an artist or music publisher's work is not included. This means they lack standing to sue, which makes any suit far cheaper to defend. Given the research focus of the project, it makes sense that they want to be cautious. But their statement is a far cry from saying a model based on copyrighted works is an infringing use.
 
I agree there is a lot of confusion because there are a variety of different issues at play. I'm also not talking about style (because that's another issue), I am specifically asking about the use of datasets that contain copyrighted information.

Here is a website called Have I Been Trained? where you can put in the name of any artist and it shows you the images scraped and present in the current LAION-5B dataset.
Actually, what LAION-5B has are links to where images of his art have been publicly shared, along with the associated ALT text.

[attached screenshot]

Harvesting links has been the subject of lawsuits in the United States. It has been held, in a series of lawsuits involving search engines, that using links and their associated content is fair use. What you can't do is harvest a bunch of text, videos, or images and share them as a collection, which is the trouble that many datasets (as opposed to models) run into. LAION is careful to provide only links, plus a tool that allows researchers to download those images themselves.
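To make the "only links" point concrete, here is a rough sketch of what a row in such a dataset amounts to, and where the researcher-side download fits in. The field names and URL are invented, loosely echoing the URL-plus-ALT-text structure described above.

```python
# One row of a hypothetical links-only dataset: an address and a caption.
rows = [
    {"URL": "https://example.org/some_painting.jpg",
     "TEXT": "oil painting of a barbarian, shared with this ALT text"},
]

def fetch_image(url):
    """The separate, researcher-side step: actually downloading the image.
    Deliberately unimplemented in this sketch; the dataset itself never
    ships pixels, only the address of where they were publicly posted."""
    raise NotImplementedError("download step left to the researcher")

# Distributing `rows` shares only addresses and captions:
for row in rows:
    assert set(row) == {"URL", "TEXT"}  # no image bytes present
```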
 
I wasn't clear about this in my posts. So I am correcting it now. I am talking about models, not datasets. I see now that I conflated the two terms. I will make sure I am consistent in the future. My apologies for the confusion.

Datasets contain either the actual works themselves or links to the works. Once a model has been trained, it no longer needs the dataset, and trained models can be distributed independently of a dataset.

Hope that clarifies my point.
Cool thank you for the update I will check it out.

Of course, a model trained on copyrighted data could lead to legal issues. Publishers and authors have been suing people who make indices and concordances for decades, despite being slapped down most of the time. Why?

Sometimes, as in the Harry Potter Lexicon case from the 2000s, the indexer or writer goes too far and excerpts too much. That case held that RDR Books and the author did go too far in copying excerpts, but also that people have the right to prepare works of this type without the author's permission.

One way to forestall a lawsuit is to make sure an artist or music publisher's work is not included. This means they lack standing to sue, which makes any suit far cheaper to defend. Given the research focus of the project, it makes sense that they want to be cautious. But their statement is a far cry from saying a model based on copyrighted works is an infringing use.
As to this part I will refer you to my above post #683, where you can see that Frank Frazetta's art and signature are currently present in the LAION-5B dataset.

A vast array of artists' works (all copyrighted) is still present in the database, which should at least seem problematic for building ethical datasets going forward.
 
As to this part I will refer you to my above post #683, where you can see that Frank Frazetta's art and signature are currently present in the LAION-5B dataset.

A vast array of artists' works (all copyrighted) is still present in the database, which should at least seem problematic for building ethical datasets going forward.
I just addressed this in #686

But to be clear, there are datasets out there that are pirated collections of copyrighted works. In the specific case of LAION-5B, though, it is a collection of links to copyrighted images that have been made publicly available.
 
Actually, what LAION-5B has are links to where images of his art have been publicly shared, along with the associated ALT text.
This is still not a valid explanation, given that the database took images directly from websites like DeviantArt & ArtStation before artists had an option to opt out in any way. IMPO, it's the greatest data heist of the early 21st century, and will likely be seen as such by future generations.

Regardless of their own statement on the matter, their database includes the entire collected work of artists like Greg Rutkowski; just put his name into the search engine and it will show you hundreds of images from his own ArtStation page. Search for Masamune Shirow and you will get direct scans of physical books along with his entire life's work.

Not to mention that hiding behind "research purposes" is an attempt to find a legal loophole (which they are currently trying to beat in court & in public opinion). Look up AI Data Laundering, and see how they use these datasets to skirt copyright laws. I don't want to post a direct link because it might be considered against the rules (not sure, so I will err on the side of caution).
 
But to be clear, there are datasets out there that are pirated collections of copyrighted works. In the specific case of LAION-5B, though, it is a collection of links to copyrighted images that have been made publicly available.
Made publicly available in places like Pinterest, which illegally reposts anything. "Publicly available on the internet" is a slippery slope: it implies that anything anyone posts on the internet is fair game, and I would beg to differ.

You are again ignoring that the dataset includes art taken directly from ArtStation & DeviantArt before they had an option in place for artists to opt out, which would effectively place their copyrighted works & their IP into the public domain (by your argument). This could be considered a serious violation of artists' rights.
 
This is still not a valid explanation, given that the database took images directly from websites like DeviantArt & ArtStation before artists had an option to opt out in any way.
Opt out in what way? From a computer gathering a URL and getting the ALT text, which can be done on any computer by a person, and in fact is done with identifiers that make it very difficult to identify the activity? They functionally did exactly what a user does.

Not to mention that hiding behind "research purposes" is an attempt to find a legal loophole (which they are currently trying to beat in court & in public opinion)
As a researcher who does not work with ML or AI but does work with large scale network activity, I can assure you that most of us (the broader internet community) are not interested in breaking laws. There are some folks who are, but they are, by far, the smallest of minorities. To the point where I rely on it in my research as an indicator of bad.

Made publicly available in places like Pinterest, which illegally reposts anything.
A link is not the work. It’s an address of where to get the work. If I can go and get it at that address, then that is another step, if I use it and enjoy it legally, then no laws were broken. If I then claim it is mine or republish it, then that is a different story too. But it’s not the work. That’s an important distinction.

You are again ignoring that the dataset includes art taken directly from ArtStation & DeviantArt before they had an option in place for artists to opt out, which would effectively place their copyrighted works & their IP into the public domain (by your argument).
Opting out of what, exactly? An address book with ALT text? That's not even the artists' to own; it belongs to ArtStation or DeviantArt. Edit: the index was not even theirs. They looked at the ALT text and the picture to make sure they matched and were appropriate. See the attached screenshot.
 
This is still not a valid explanation, given that the database took images directly from websites like DeviantArt & ArtStation before artists had an option to opt out in any way. IMPO, it's the greatest data heist of the early 21st century, and will likely be seen as such by future generations.

Regardless of their own statement on the matter, their database includes the entire collected work of artists like Greg Rutkowski; just put his name into the search engine and it will show you hundreds of images from his own ArtStation page. Search for Masamune Shirow and you will get direct scans of physical books along with his entire life's work.

Not to mention that hiding behind "research purposes" is an attempt to find a legal loophole (which they are currently trying to beat in court & in public opinion). Look up AI Data Laundering, and see how they use these datasets to skirt copyright laws. I don't want to post a direct link because it might be considered against the rules (not sure, so I will err on the side of caution).
While the full LAION-5B database is very large (750 TB), you can see what it looks like from smaller subsets.


For example

[attached screenshot]
Again, this particular dataset doesn't contain an actual copy of the image.

Moreover, the author of the website you linked to explained how it works:

HaveIBeenTrained is a tool that uses clip retrieval to search the largest public text-to-image datasets, Laion-5B and Laion-400M, to remove links to images that artists want to opt-out from being used to train generative AI systems.

These datasets are typically shared as files that contain links to images on the internet and captions that describe them. Stability and Laion partner to remove links that have been flagged for removal, ensuring that future models will not be trained with the opted-out work.
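Mechanically, the opt-out removal the quote describes amounts to filtering flagged links out of the metadata before the next training run. A minimal sketch, with URLs and field names invented for illustration:

```python
# Hypothetical dataset rows and an opt-out list of flagged URLs.
rows = [
    {"url": "https://example.com/keep.jpg",   "caption": "landscape study"},
    {"url": "https://example.com/remove.jpg", "caption": "portrait study"},
]
opted_out = {"https://example.com/remove.jpg"}  # flagged via the opt-out tool

# Future training runs would be fed only the filtered rows.
filtered = [row for row in rows if row["url"] not in opted_out]

assert [row["url"] for row in filtered] == ["https://example.com/keep.jpg"]
```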

As for the legal loophole in the United States, there are strict limits to what copyright covers and doesn't cover. In another thread, I posted the relevant references from the US Code and the Copyright Office.

Post One (What is covered by copyright)
Post Two (What is not covered by copyright)
 
As a researcher who does not work with ML or AI but does work with large scale network activity, I can assure you that most of us (the broader internet community) are not interested in breaking laws. There are some folks who are, but they are, by far, the smallest of minorities. To the point where I rely on it in my research as an indicator of bad.
OK, to be sure, I am not trying to accuse engineers and programmers of doing this deliberately; the big corporations are the ones calling the shots, not the programmers. There are surely conflicts of interest here, and part of that relates to the unfortunate fact that there are bad actors in every facet of life.

It is a pretty delicate and complicated subject, but can we at least agree that the issue artists have is that their work is being included in datasets without their consent?

Yes, opting out hasn't been presented as an option until now; that's why it's easy to argue that implementing it will be hard, because it's already too late, I guess?

To restate my position: I am not 100% against the idea of ai-tools, but I am against using ai-tools from companies that can't or won't agree to use datasets which are opt-out friendly and ethically sourced.
 
The author explained how it works


HaveIBeenTrained is a tool that uses clip retrieval to search the largest public text-to-image datasets, Laion-5B and Laion-400M, to remove links to images that artists want to opt-out from being used to train generative AI systems.

These datasets are typically shared as files that contain links to images on the internet and captions that describe them. Stability and Laion partner to remove links that have been flagged for removal, ensuring that future models will not be trained with the opted-out work.

ok that's actually cool.
 
OK, to be sure, I am not trying to accuse engineers and programmers of doing this deliberately; the big corporations are the ones calling the shots, not the programmers. There are surely conflicts of interest here, and part of that relates to the unfortunate fact that there are bad actors in every facet of life.
Actually no, the big corporations are late to the party. The bulk of the work was driven by independent researchers using open-source software, which is why you have so many pirated datasets floating around. If big corporations had started this, we would not even see their data.

But now that it is a "thing" with potential, big corporations are in on it. However, they have the resources to build their own datasets independently of what is publicly available.

The effect of restrictive rules and lawsuits will just ensure that the little guys are cut out of AI research and the big corporation will dominate.





It is a pretty delicate and complicated subject, but can we at least agree that the issue artists have is that their work is being included in datasets without their consent.
I think there is no debate over the use of datasets of pirated works. They are still floating around but are largely untouchable now that generative AI is a hot topic of research and debate. Which is a good thing.

It is a problem that artists don't understand that newer datasets like LAION-5B are just a collection of links, and that the issue of linking to a publicly available image, while not 100% settled, has not gone well in court for those trying to restrict the use of links.



To restate my position: I am not 100% against the idea of ai-tools, but I am against using ai-tools from companies that can't or won't agree to use datasets which are opt-out friendly and ethically sourced.
There are many who think, myself included, that the current state of copyright is a case of extreme overreach that has strayed far from its original purpose of promoting the arts and sciences. There is no issue with folks having the exclusive right to copy their work for 50 or so years. But when it starts to extend to trying to control stuff like secondary sales, criticism, commentary, analysis, and an overzealous sense of what is derivative, then there is going to be pushback. Stuff that doesn't involve the copying of the original work.
 
There are many who think, myself included, that the current state of copyright is a case of extreme overreach that has strayed far from its original purpose of promoting the arts and sciences. There is no issue with folks having the exclusive right to copy their work for 50 or so years. But when it starts to extend to trying to control stuff like secondary sales, criticism, commentary, analysis, and an overzealous sense of what is derivative, then there is going to be pushback. Stuff that doesn't involve the copying of the original work.
On this we actually agree. In college I was an intern at the electronic music studio for three years, and I used my fair share of samplers and sampling to create my own music, but I never intended it to be used commercially or even publicly.

I get that there are valid concerns of overreach, but this is a two-way street.

From the perspective of artists, who are the affected party in this situation, most people don't care at all, and are telling us to live with the fact that every working artist will now have to compete with countless ai imitations & iterations into oblivion. By devaluing artists we are devaluing the very thing that makes us human. Just my opinion I guess.

The effect of restrictive rules and lawsuits will just ensure that the little guys are cut out of AI research and the big corporation will dominate.
That is often the case, yes, but we are currently in uncharted territory regarding the future of the law on ai-tools and how it will affect both companies and artists. No one knows at this point.

What artists DO deserve is the chance for their voices to be heard on this subject. Just because visual artists don't have a nice SAG union to protect them does not mean they don't deserve the same rights as other creatives.

The fact that corporations are in the position to benefit the most from laws changing does not mean that they shouldn't be changed if people are still being exploited as a result.

Furthermore, many of these companies investing in ai are also investing in NFTs, crypto, and blockchain technology, which have been nothing but rotten to the core.
 
and I used my fair share of samplers and sampling to create my own music, but I never intended it to be used commercially or even publicly.

By devaluing artists we are devaluing the very thing that makes us human. Just my opinion I guess.

That is often the case, yes, but we are currently in uncharted territory regarding the future of the law on ai-tools and how it will affect both companies and artists. No one knows at this point.
Yes, this is why it is important to make sure terms and definitions are well defined and well understood, so we are in the same reality.
What artists DO deserve is the chance for their voices to be heard on this subject. Just because visual artists don't have a nice SAG union to protect them does not mean they don't deserve the same rights as other creatives.
I don’t think anyone here is arguing that. A corporation might, but I don’t think anyone here would. We all benefit far too much from artists.
The fact that corporations are in the position to benefit the most from laws changing does not mean that they shouldn't be changed if people are still being exploited as a result.
Of course. Thus, the law is at an early state.
Furthermore, many of these companies investing in ai are also investing in NFTs, crypto, and blockchain technology, which have been nothing but rotten to the core.
So… this is a spurious characterization and really an ad hominem attack. It doesn’t help. There are MANY more companies investing in ML and AI (and I have a presentation on the differences I can get from a colleague, but really, you want to use ML), far more than are in those fields.

I absolutely understand it's a passionate and life-altering subject for you, but I ask that you take a step back and just make sure your facts and understanding are correct. It is a very complex subject and very new territory. We have a new and powerful tool, and there should be rules around its use. It just doesn't help to swing the bat wildly :smile:
 
Yes, this is why it is important to make sure terms and definitions are well defined and well understood, so we are in the same reality.

I don’t think anyone here is arguing that. A corporation might, but I don’t think anyone here would. We all benefit far too much from artists.

Of course. Thus, the law is at an early state.

So… this is a spurious characterization and really an ad hominem attack. It doesn’t help. There are MANY more companies investing in ML and AI (and I have a presentation on the differences I can get from a colleague, but really, you want to use ML), far more than are in those fields.

I absolutely understand it's a passionate and life-altering subject for you, but I ask that you take a step back and just make sure your facts and understanding are correct. It is a very complex subject and very new territory. We have a new and powerful tool, and there should be rules around its use. It just doesn't help to swing the bat wildly :smile:
Cool no prob. I will happily take a step back for the sake of civility. I guess I have said enough on the subject so apologies to all.
 
Cool no prob. I will happily take a step back for the sake of civility. I guess I have said enough on the subject so apologies to all.
No apologies necessary, and it was not a mod statement (I do this color for mod statements). It was more of a request for objectivity, which I can absolutely understand is hard here.
 
What artists DO deserve is the chance for their voices to be heard on this subject. Just because visual artists don't have a nice SAG union to protect them does not mean they don't deserve the same rights as other creatives.
They do have a chance to be heard. In a fit of sanity, Congress passed an act establishing a copyright small claims court.

There is also the DMCA route, which I have taken advantage of whenever a link to one of my works on a pirate site appears too high in Google search results.

In either case, if a publicly accessible dataset actually has a copy of one of their images, then it is as slam-dunk a case as it gets.


The fact that corporations are in the position to benefit the most from laws changing does not mean that they shouldn't be changed if people are still being exploited as a result.

One way to stop exploitation is to give the small guy a figurative gun that makes the armor and swords of the big and strong irrelevant. Support open content, open source, and open data. Open source also means artists can contribute by participating in the discussions and helping shape the overall flow of the project. Open-source projects require more than just coding to be useful.


Furthermore, many of these companies investing in ai are also investing in NFTs, crypto, and blockchain technology, which have been nothing but rotten to the core.
NFTs, crypto, and blockchain (to an extent) are just the latest version of the Tulip Mania, not a paradigm shift in life as we know it. Scammers are making AI a target as well, but then again the point of their grift is to latch on to whatever is the hot topic of the moment, e.g. biotech.
 
A Song of Ice and Fire...completed by AI.
User - “Chatbot, write a conclusion to A Song of Ice and Fire in the style of George R. R. Martin.”

Chatbot - “Working…”

[20 years later]

Chatbot - “I’m just about done. Promise. Just wait a little longer.”

User - “I REALLY should have been more specific with that prompt.”
 
Yeah, everyone got fat
I WAS fat when I started high school. I’m in much better shape now that I’m in my 50’s (a lot stronger, better endurance, etc.).

Amusingly, I was talking to one of the younger guys at the dojo one evening who is crazy strong, fit, fast, and very, very good at fighting. I told him to enjoy it now, because he did it backwards. When I was his age, I was overweight, smoked, could barely lift any weight, and would pass out if I ran 20 feet. But the year I turned 50, I earned my first black belt.

Sure, he’s in fantastic shape NOW, but there’s nowhere for him to go but down.
 
I WAS fat when I started high school. I’m in much better shape now that I’m in my 50’s (a lot stronger, better endurance, etc.).

Amusingly, I was talking to one of the younger guys at the dojo one evening who is crazy strong, fit, fast, and very, very good at fighting. I told him to enjoy it now, because he did it backwards. When I was his age, I was overweight, smoked, could barely lift any weight, and would pass out if I ran 20 feet. But the year I turned 50, I earned my first black belt.

Sure, he’s in fantastic shape NOW, but there’s nowhere for him to go but down.
I have never been an exercise guy. I've been between 179 - 207 lbs most of my adult life. 185-190 is where my body seems to be happy. At 5'11" that puts me Overweight by a little bit. I watch my much fitter and more active friends get older and deal with knee trouble, back issues, etc. I don't have any of those. I keep expecting a hammer to drop on my health but so far it keeps hitting everyone else.
 
No apologies necessary, and it was not a mod statement (I do this color for mod statements). It was more of a request for objectivity, which I can absolutely understand is hard here.
Thank you for letting me know. I know it's a tender subject at the moment, so I will do my best to remain objective with respect to other pubbers.

I spent a bit of time this morning writing about how its affected my work, so I'll just post this here and leave it at that. Thank you to everyone for their patience and consideration.

---------------------------------------------------------------------------------------------------------------------------------------

One concern that I have as an analog visual artist is lead times and the production time for actual art.

Real art, analog or digital (& I do both) takes time. A lot of time, effort, and a lot of trial and error to get to the point where something feels unique.

Some easier pieces can take up to two weeks, involved pieces a few months, and some of my art has literally gone through multiple stages over the course of years to reach a state of completion.

ai-tools can do all that in a few short moments, and as an analog artist I find it pretty depressing that people only care about the final output, not the work it takes to get there.

I have devoted years to producing my own ttrpg, commissioning multiple artists over the past two years. I am still commissioning more art; I am currently about halfway through the process with the artist I am working with now, with realistically at least another five to six months to go for that step in the process.

The OGL debacle also set me back by at least a year, as I have had to abandon my previous OGL-based project completely to work on my own in-house system. With all these setbacks, I don't see how it's possible to compete with a creator who decides to cut corners and use ai-tools. I also can't afford to make a full-color interior book, so it's going to be black and white (with possibly a bit of color), another mark against the project when trying to compete with full-color ai-generated images.

While I try to do right by the artists I work with, I don't know if the project will succeed on its own merits. I am largely unknown, and this is my first ttrpg project, so I don't have the same support as an established creator. I'm cool with that; I am just expressing the difficulty of getting your work out there and finding an audience.

-Final Note: As an artist with over 23 years of experience, I can tell you that ATM it's almost impossible to get your work noticed if you aren't making mainstream pop art (both due to public preferences and algorithms). If it's impossible now, how are working artists expected to compete with the potential deluge of ai-generated content in the future?
 
My hope is that the law/public opinion will work out such that commercial uses of AI art are discouraged, but that there's a thriving artistic scene of people doing it noncommercially and without fear of harassment. Fanfiction, GMing, and a number of other scenes are proof that you can have nearly completely noncommercial scenes, and while I don't begrudge the jobs of anybody who's managed to get them, I prefer the vibe in the spaces where professional status is basically off the table.

(I once tried to find advice for voice acting for GMing purposes and - though I'm sure that this is a result of doing a very shallow search - I found very little on doing voices but a whole lot of material on making it professionally as a voice actor, which just isn't my goal.)
NFTs, crypto, and blockchain (to an extent) are just the latest version of the Tulip Mania, not a paradigm shift in life as we know it. Scammers are making AI a target as well, but then again the point of their grift is to latch on to whatever is the hot topic of the moment, e.g. biotech.
This, but also a lot of generative AI inference requires the same chips that were required for bitcoin mining, so there's a natural fit in the same way that a field you bought to raise tulips can also be used to grow, say, pumpkins.

(NFTs and crypto were technologies searching for a use case; the bubble appeared when "sell them to someone else for more" appeared as the most meaningful use case, which was unsustainable without a primary use case. The primary use cases of generative AI are pretty obvious.)
 
The Onion has weighed in on the ai art in D&D issue:
 