“AI Art” (at the time of this writing) has gone viral on the web and there is a lot of controversy regarding the ethics surrounding it, how data is acquired, and what it could mean for the future. I’ll do my best to explain both sides. If, after consuming this content, you would like to have a civil discussion on the topic, or on how we might solve these problems, please comment below. At the end of the day there isn’t much we can do to stop AI, other than writing to the people who house these models asking for changes, not using their products, and bringing awareness to the issue as a whole. Eventually laws may get put into place. All of this is SO new and is changing constantly.
1. Claim: AI Generators are stealing art
First, let’s look at the accusation artists are making against the LAION database/Stable Diffusion so we can see the problem as a whole. From my understanding, artists are claiming that because AI generators can mimic the style of their art (because the AI was potentially trained on copyrighted material without their consent), the generators are stealing it. Keep in mind that some AI generators could have been trained only on open source or royalty-free stock images.
In a nutshell, AI generators create products, such as images, based on the material they are trained with. They do this by identifying similarities (which might or might not make sense to humans) across the image itself, the tags associated with the image, and other related information. As an example, if an AI has been trained to differentiate between cats and dogs, it may be able to accurately identify a cat or a dog in an image. However, if the AI is shown a picture of a duck, it may not be able to recognize the specific type of animal, and may instead classify it as a cat or a dog based on the similarities it has identified. In other words, the AI may not be able to say “duck” when shown a picture of a duck, but it will still sort it into one of the categories it knows. Likewise, if an image AI is trained exclusively on Van Gogh’s work, it will produce Van Gogh-like images, but if the AI is trained on a variety of styles, the resulting images may draw on any of them.
I’m going to describe how diffusion models like DALL·E 2, Stable Diffusion, and Google’s Imagen work. Training images begin as they are, and over a certain number of steps Gaussian noise is added to them. This takes each one from looking like a coherent image to something that resembles the static on your TV. Because the noise is added gradually, it leaves a path that can be followed back to reverse the process. So obviously that just leaves you with the same image, right? How do we get new images?
Enter the noise predictor, or model. I’m going to describe how it works on one image; then imagine that applied to billions. First we take the image, generate some random noise, pick an amount of noise we want (say 0 is no noise and 10 is all noise), and add that much noise to the image. We then repeat this process multiple times, choosing different noise amounts for every image. Imagine these as varying snapshots of a memory, where 0 is a memory that just happened and 10 is one from many years ago whose details you can’t recall. Now let’s scale this up. During generation the denoising process starts from pure noise, and at each step the AI refers to the text prompt (which we get into below), predicting and removing noise based on what it learned from images carrying those “alt tags.” Over time, step by step, it settles on what does or doesn’t match the text prompt until it has made an image. All images start the same, as random noise, but at each step take a different path to become a new image. Here is a simplified infographic from reddit user PhyrexianSpaghetti that explains the process.
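The forward (noising) half of the process above can be sketched in a few lines. This is only a toy illustration, assuming a simple linear noise schedule; the names and numbers here are mine, not Stable Diffusion’s actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(image, step, total_steps):
    """Return the image after `step` of `total_steps` noising steps."""
    # alpha shrinks from 1 (no noise) toward 0 (pure noise)
    alpha = 1.0 - step / total_steps
    noise = rng.standard_normal(image.shape)
    return np.sqrt(alpha) * image + np.sqrt(1.0 - alpha) * noise

image = rng.uniform(0.0, 1.0, size=(8, 8))   # stand-in for a real image

early = forward_diffuse(image, step=1, total_steps=1000)
late = forward_diffuse(image, step=999, total_steps=1000)

# Early steps stay close to the original; late steps are almost pure static.
print(np.abs(early - image).mean())
print(np.abs(late - image).mean())
```

The noise predictor is then trained to run this in reverse: given a noisy snapshot, guess the noise that was added so it can be subtracted back out.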
So how does the text come into play? Stable Diffusion uses OpenCLIP, and CLIP is trained on a dataset of images and their captions. Most likely CLIP was trained on images crawled from the web along with their “alt” tags.
CLIP is a combination of an image encoder and a text encoder. In this process we are pairing captions with images and embedding both into the same space. Until they are embedded they won’t relate to each other, and this process takes many iterations. Eventually we have embeddings that can take the image of a dog and tie it to the words “a picture of a dog.” The process also includes negative examples, mismatched images and captions, which are pushed apart to make sure they aren’t similar.
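The matched/mismatched idea can be sketched with cosine similarity. The vectors below are hand-made stand-ins for what the trained encoders would actually output; this is a toy illustration of the goal, not CLIP itself.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend embeddings after training: each image lands near its caption.
image_of_dog = np.array([0.9, 0.1, 0.0])
caption_dog = np.array([1.0, 0.0, 0.1])   # "a picture of a dog"
caption_cat = np.array([0.0, 1.0, 0.1])   # "a picture of a cat" (negative example)

print(cosine(image_of_dog, caption_dog))  # high: matched pair pulled together
print(cosine(image_of_dog, caption_cat))  # low: mismatched pair pushed apart
```

Training adjusts both encoders until matched pairs score high and mismatched pairs score low, which is what lets a text prompt steer the denoising process.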
Now that text has been tied to those images, we simply add it as an additional factor in the image creation process described above, which allows us as humans to interact with the generator and tell it what to do.
Now the issue becomes that if the data the generator was trained on is less varied, its output will be less varied too; taken to an extreme, this is called overfitting. Overfitting can happen when the AI/ML is trained on materials that are too similar to each other, when the model is allowed too many degrees of freedom or parameters, and/or when the model is trained for too many steps or iterations; the images that result will be less varied or not wide-ranging enough. With this being the case it’s possible that a generated image will be close to the material the model was trained on instead of being new. Our goal here is not to generate images that are the same; we are going for the style. To read more about this and see visuals, go here.
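Overfitting is easy to demonstrate outside of image generation. A minimal sketch, assuming nothing about diffusion models: a polynomial with as many parameters as data points memorizes its training data almost exactly but does badly on points it hasn’t seen, which is analogous to a generator reproducing its training images instead of generalizing.

```python
import numpy as np

rng = np.random.default_rng(0)

# 10 noisy training points sampled from a sine curve
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)

# unseen points from the same underlying curve
x_test = np.linspace(0.05, 0.95, 10)
y_test = np.sin(2 * np.pi * x_test)

# degree 9 = as many parameters as points: the model can memorize
coeffs = np.polyfit(x_train, y_train, deg=9)

train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(train_err)  # near zero: the training data is memorized
print(test_err)   # much larger: the fit doesn't generalize
```

Fewer parameters, more varied data, or fewer training iterations all push the model back toward learning the general shape instead of the individual points.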
The reason this isn’t illegal, per se, is because a “style” can’t be copyrighted. On top of that, copyright only applies to humans and not AI. Now, does that apply to the humans who program the AI? I don’t know. In time this might change via extension, but for now it’s completely legal.
In Dave Grossman Designs v. Bortin, the court said that:
“The law of copyright is clear that only specific expressions of an idea may be copyrighted, that other parties may copy that idea, but that other parties may not copy that specific expression of the idea or portions thereof. For example, Picasso may be entitled to a copyright on his portrait of three women painted in his Cubist motif. Any artist, however, may paint a picture of any subject in the Cubist motif, including a portrait of three women, and not violate Picasso’s copyright so long as the second artist does not substantially copy Picasso’s specific expression of his idea.”
The part missing from this quote is that it doesn’t mention the intent or action to distribute and/or monetarily gain. Simply copying something doesn’t infringe on copyright; it’s what you do with it afterwards. Again, this only applies to humans and not AI, but I think it’s important that we have some context on what has been set in court previously, which we will be referring to later.
Now does that make it right? Just because something is legal doesn’t make it OK. Humans can do everything that machines can do (in essence). Machines can just do it faster. I could do the exact same thing the machine does looking at thousands of images in a style and copying it, it would just take me a lot longer, if that is what we trained the machine to do. Now copying someone’s work isn’t going to set me apart from the crowd by any means and people still talk. Does this apply to AI? With enough time it’s easy to tell what is made with AI, and with Stable Diffusion 2.0 they have invisible watermarks that will help distinguish what is human and machine.
When it comes to image transformation or deviations the legal system is still pretty weird on the matter. When we look at Andy Warhol and the work he created in the majority of cases he didn’t commit copyright infringement. The biggest difference between this and using copyrighted art for AI is that it could impact artists in a negative way, if the copyrighted material wasn’t given with consent.
– What is stealing?
On the point of stealing art. I would argue that the majority of people have illegally downloaded or streamed music, movies, tv shows, video games, etc.. Would this not be considered stealing art? If this person had illegally downloaded or streamed art, and has an issue with AI “stealing” art would that not be hypocritical? It doesn’t matter if they are making a profit or not. Stealing is still stealing. I just want to make sure we have a level playing field and we aren’t lording ourselves as “better.”
On the same notion, what about the artists at Comic-Con who draw characters that are not their intellectual property and still make a profit off of it? Or fan artists/cosplayers who have a Patreon and charge for images of their fan art or cosplay? This could also apply to Instagram Reels and making money off of fan art/cosplay content there. In my opinion, from what I can tell, that is no different and is illegal: using something that isn’t yours to make profit/art. On the other hand, the style of caricature was created by someone a long time ago, and we don’t consider working in that style stealing.
Now, I’ve had people argue that if the IP holder says it’s OK then it’s OK to infringe copyright. Unless there is a blanket statement out there from the IP holder saying it’s OK, you should ask permission, correct? I believe (and could be wrong) that most people in this field ask for forgiveness rather than permission. Note that the IP holder has the ability to withdraw that permission at any time, and even then it often only holds up until the person infringing makes too much money or gets too big. Regardless, it’s still breaking the law. I’m not saying I’m any better, as I’ve enabled and benefited from this behavior, but it’s important I’m fair to parties on all sides.
In recent news there is the case of an artist taking someone’s photograph, making an artistic version of it that in my opinion is 95% identical minus a few details, and making a profit. With how I was brought up to understand plagiarism, this is a blatant example of it. However, the judge ruling the case said that the photograph itself, or rather the pose, wasn’t unique enough to warrant a copyright. So the artist, whom the photographer accused of plagiarizing, won.
At first glance it’s pretty easy to say “they obviously plagiarized” but consider this. If the photographer and the artist are in the same place at the same time and the photographer and artist had the same POV, and the images they created are 95% the same is it still plagiarism? All of this to say the artist didn’t take the photograph itself and transform it, but instead used it as a source material. The artist began with a blank canvas and with their artistic vision used different tools to create something. Whether you agree with this or not is up to you. I know I personally feel icky about it but I’m not an expert in law either.
The photographer in question is going to try to appeal the decision, but this is our most current information. For artists claiming AI art is stealing their work, this does not bode well for that argument, at least in copyright law. See it for yourself here. It’s important to note that regardless of the tool used, whether a person uses an AI generator or pen and paper to reproduce something that could be a plagiarized work, it’s not the tool that is at fault but the individual.
If one person steals does that make it OK for other people to steal? Of course not, but it seems that when it starts to affect people personally that it becomes an issue. We see this time and time again throughout history.
So the question I have to ask is why is it that we’ve decided it’s OK to steal art in some ways but not other ways?
References on how selling fanart or cosplay can be considered infringing on copyright.
2. The Dance Diffusion Problem
The Dance Diffusion problem is what brought me from being pro Stable Diffusion to being against its current practices. With how I believed Stable Diffusion worked, I thought the models were fine, but you’ll see why I feel differently now. Here is a quote from Harmonai, which operates under Stability AI.
So it sounds like they’re afraid of copyright in music because it’s more restrictive than in art. When the dataset for Stable Diffusion was first made, they may have made the oversight of using copyrighted materials when they shouldn’t have, and are avoiding making that same mistake with music. It’s an easy jump to say it’s a double standard in this instance (and it very well could be), but is it for certain? I don’t know.
Now, while using Stable Diffusion for profit wasn’t the original intention of the platform, Lensa allegedly took advantage of it, and instead of training their own model on non-copyrighted material went with what was already there. Do we know this for sure? Nope. Generally those who create with no intention to profit or cause harm are not considered bad actors, and Stability AI didn’t create Stable Diffusion intending the use Lensa allegedly made of it. This brings us back to the Comic-Con artists selling work of copyrighted characters, because it’s similar (not identical) to the actions Lensa allegedly took.
3. Claim: AI Generators are going to put people out of a job
Yes, maybe. With every new technology jobs are going to change. This same argument was made when photography came about: painters were up in arms because now their job could be done with the click of a button. To experience these arguments yourself and see the similarities, check out the references below. Or when the ATM was introduced in the 1970s, people thought it would lead to unemployment for bank tellers; in fact it did the opposite and increased the number of jobs. Albeit in our day and age the change can happen much quicker, but the principle is the same.
That doesn’t mean we have to like it, and there will always be work for things crafted by human hands. People still want legitimate paintings of themselves. AI (as of the time of this writing) can’t compete with the majority of what photography can do. It can do individual portraits at best (and only close up); anything else requires a lot more work by human hands, and it has no consistency. If you want your subject to be holding any kind of instrument, in most cases AI can’t make heads or tails of it. And a full body image or hands? Forget having a coherent face or the right number of fingers.
What I am going to explain next is adapted from Frank C. who has worked in the video game development world since 1996. I found his comments in a thread on Lensa and thought they provided a unique outlook I hadn’t seen yet. I have shortened the name for privacy sake. Also keep in mind this is just one example and is not applicable to everything.
Any tools that deal with automation have never made the milestones that go with video game development complete any faster. Once those that produce the game see the work happen in less time they just add more work on top, so that it makes the time automation saved, moot.
Most game and movie studios struggle to create content at the same rate that customers consume it.
He thinks that concept artists, in the realm of video games at the very least, will be fine. Sure AI images can be created quickly, yet let’s ponder this likely scenario. You generate a helmet using AI, awesome that’s great. Do you just submit it and be done? Not at all.
The client may ask questions such as: Can we see this helmet in 15 different material styles or see how the helmet animates and the straps function? What about a bunch of different skins for DLC? Oh we will need a model of said helmet can you create that and send it to our 3D animators? We will also require HUD elements, and themes that feel unique for guilds within the game, and the need for renders for UI to make icons of the helmet. How will it look on our website? Make sure you check with Quality Assurance that it reads well on desktop and mobile.
This example shows how “image generation” is just a very small part of a much bigger job.
There are loads of other skills that are needed as a concept designer that deal with communication with humans, design with graphics, mechanics, ergonomics, world building, etc. Those are all still skills that are needed, just being able to prompt well won’t get you a job if you don’t have those.
Another great example in a similar vein is when Jurassic Park was created. Originally the entire movie was going to be done with stop motion and practical effects for the dinosaurs. CG was still very much an infant at that time, and some of the CG artists decided to try and see if it would be possible to create the motion of dinosaurs with CG, after hours on their own time. They ended up making a set of T. rex bones that moved and walked. Producers came up to see something else, saw it playing on a monitor, and from then on everything changed.
It’s important to note that in the film practical effects were still used just not to the extent that they were originally planned. With every advance in technology jobs are going to change and CG shook the cinema world to its core. Watch part of a documentary on The Jurassic Park process here.
References on painters reaction to photography
UKEssays. (November 2018). Reception of photography. Retrieved from https://www.ukessays.com/essays/photography/reception-of-photography.php?vref=1
4. Claim: AI Art isn’t art
This one doesn’t make sense to me because art is subjective. Always has been, always will be. When photography was invented it wasn’t considered art until 100+ years later. Does that mean it was art at one point and not art at another? I don’t believe that is how it works.
There are multiple websites that are outright banning AI Art which is completely in their right to do, it is their platform. How they can tell that exactly I don’t know other than “look at it.” In their own way they are defining art, whether that is good or bad is up to you, the individual. Art being subjective is what makes it beautiful as it allows a child’s drawing for their parent to be art to them, but isn’t art for anyone else.
In light of that, if we as humans aren’t constantly challenging “What is art?”, are we attempting to explore any further? A good example of this is Duchamp, who submitted a urinal, upside-down, with the words “R. Mutt, 1917” written on it, titled “Fountain.”
This quote, believed to be from Beatrice Atwood though published in an anonymous editorial, said this:
Another example is “Untitled” (Portrait of Ross in L.A.) by Felix Gonzalez-Torres. When you look at this piece you simply see a bunch of candy, in a pile, in the corner of a room. What are your first thoughts? Probably “that isn’t art” right? Now what if we add context.
The weight of the candy is 175 pounds, the ideal weight of the piece, whether that was the weight of the average man of the time or that of the Ross in the portrait. Ross Laycock, the person in this portrait and the artist’s partner, died of AIDS complications in 1991. As visitors participate in the art piece by taking candy, the configuration of the piece changes, emulating the act of loss. See it for yourself here.
I find that if we apply this to AI Art it’s no different. After all, if a urinal with writing on it can be art, I’m pretty sure that whatever AI can make should be art too. That’s just my opinion though.
5. Claim: The LAION Database has pictures of people from medical records, violent images, and non-consensual sex.
This is partially accurate from what I can find as of writing this. Because LAION uses URL scraping (which in the USA is completely legal), images can be found and cataloged from all over the web; the database stores URLs and captions, not the images themselves. If you use the website Have I Been Trained, people have found medical record pictures of themselves that were illegally obtained after their doctor died and uploaded to the web. Obviously this isn’t OK (and apparently there is no existing legal precedent stating that it’s wrong; one needs to be made), but those images shouldn’t have been uploaded to the web to begin with.
According to this article from VICE (I don’t know where they got their information from whether it was a direct interview or not) Stable Diffusion which uses the LAION database stated:
“Stable Diffusion does not regurgitate images but instead crafts wholly new images of its own just like a person learns to paint their own images after studying for many years,” a Stability AI spokesperson told Motherboard. “The Stable Diffusion model, created by the University of Heidelberg, was not trained on the entire LAION dataset. It was created from a subset of images from the LAION database that were pre-classified by our systems to filter out extreme content from training. In addition to removing extreme content from the training set, the team trained the model on a synthetic dataset to further reduce potential harms.”
When presented with specific images of ISIS executions and non-consensual pornography, Stability AI said it could not say whether Stable Diffusion was trained on them, and distanced itself from the LAION dataset.
With Stable Diffusion moving away from that dataset I think this solves the issue. Google’s Imagen, which is still private, has put measures in place to avoid this data as well.
The real issue with these large datasets is creating one that is appropriate versus simply obtaining one. Do you start totally fresh with an empty folder and input images one at a time (to make sure copyright isn’t infringed), or find a folder that has millions of images and check them one at a time to make sure they hit the standard you’ve set? If we assign this task to a computer, how accurate will it be? If a human does it, will they be unerring in their judgment?
6. Claim: LAION procured every piece of art legally through Common Crawl
First, what is Common Crawl? Common Crawl is a nonprofit organization that maintains an open repository of web crawl data. This data is collected by periodically crawling the web and storing the resulting data in a distributed and open format. The goal of Common Crawl is to make this data easily accessible and useful for research, education, and other purposes.
Now how do we know it was legally procured? Websites such as ArtStation have it listed in their terms of service stating:
“Shared” is the key word here, and it’s a pretty big gray area what that even means. Most of the time (myself included) we skip over these Terms of Service and don’t give them a second thought. Even if I did understand the legalese, I don’t know what the implications of the rights I’m granting could turn out to be.
One way to fight against Common Crawl, if you host your own website, is to block its crawler so it won’t crawl your site for information. If you find your work in Common Crawl you can submit a copyright claim. Your success in that situation depends on how well you can demonstrate that your images are being redistributed without being transformed.
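For a self-hosted site, the blocking mechanism is a robots.txt file: Common Crawl documents that its crawler identifies itself as CCBot and respects robots.txt rules. A minimal example:

```
# robots.txt at the root of your site.
# CCBot is the user agent Common Crawl's crawler identifies itself as.
User-agent: CCBot
Disallow: /
```

Note this only stops future crawls; it doesn’t remove anything already in the archive, which is where the copyright claim route comes in.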
If creating art is part of your livelihood (and even if it’s not) we need to be adamant about reading and researching the Terms of Service. This is a prime example of how the public is potentially being taken advantage of without even knowing it.
Update: ArtStation now allows artists to opt in or out of being part of an AI database. The process has been met with mixed reviews.
I learned about this first from this Twitter thread by Gothlytical Art.
7. The Lensa App, Midjourney, Dalle 2 and others
Lensa uses Stable Diffusion, but we don’t know which model they used. As for Midjourney, DALL·E 2, and others, the datasets they use for training aren’t available to the public, so there isn’t any way to tell what is being used. Now, people/companies/whatever are innocent until proven guilty (at least here in the USA); if we are going to hold Stable Diffusion accountable, then shouldn’t we force(?) the other AI generators to make their datasets public as well? Or does that infringe on some other right? I don’t know.
If we are going to hold them accountable we also have to look at their original intent. Did Stable Diffusion remove the ability to reference contemporary artists because they were wrong or because people were pointing fingers? Did they take this action to improve their public image or was using contemporary artists actually wrong for them to do?
8. Claim: The signatures that are generated on AI Art are proof that the art was stolen.
While on the surface this argument makes sense, let’s dive in and see how valid it is. Since we now know how AI Art works it’s inevitable that when it’s learning that signatures may arise because of the dataset it was based on. Many artists in our current age will sign their work as a way of authentication, and those artists who do obviously have copyright to their work, but does the signature mean they have copyright?
No.
If we look back in history, the act of signing your work isn’t new. Under current copyright law, you own the copyright until 70 years after your death. Thanks, Disney. After that it becomes public domain. So if we have evidence of painters signing their work since the 17th century, and the AI mimics that and creates its own version to make something that feels authentic, does that mean it’s stealing? If the source images are public domain, I don’t see how it can be.
Secondly, if we end up having an opt-in option for artists who do want their images to be trained on by AI or the images used are public domain and the signatures still arise, is that stealing? No it isn’t because they opted in. So I find this argument doesn’t hold water very well. No matter the intellectual property, if you have signed away your rights the person who legally obtained the IP can do whatever they want as long as it is to the letter of the contract.
Lastly, it’s not uncommon for the words in your prompt themselves to become transposed into the AI image. The AI has learned that artists in general sign their work, so to emulate that practice it will generate some kind of gibberish squiggle where a signature would go.
Examples of paintings with signatures in the 17th century and others can be found here.
9. People claiming AI art as something they made by hand
Just be honest about how your art was created. I see multiple artists who use a combination of Stable Diffusion, Midjourney and Photoshop in order to create something they are happy with. They still have a human hand involved. If you have an issue with that then we need to discuss where the line is drawn.
10. Why is it that people are upset about AI Art but not all the other avenues that AI is used in?
AI copywriting has been around for quite a while now, able to write entire blogs or articles. It was trained on millions of literary sources online, and yet I haven’t seen backlash at the level AI art has received. Would that not also affect the jobs of copywriters? Or what about AI being used for rotoscoping or tracking in videos? That is taking the man-hours someone would spend by hand and putting them into the “hands” of a machine. Does that make it wrong? Maybe AI art is simply more tangible than AI copywriting, and this is the straw that finally broke the camel’s back. I hope that the zeal for ethics in AI art carries over to the others.
11. Why is it that we have ethics about AI Art and not other things?
I realize this question is a huge can of worms. At first glance it appears, at least to me, that AI art is the hot new thing people are allowed to be upset about. You have every right to feel the way you do, and that’s fine. But what about the other aspects of our lives? Do you research where the materials were sourced and how they were made for everything you consume or obtain? Phones, clothing, cars, food, entertainment: all of these have their own processes, and the majority of people do not research the practices behind them. Correct me if I’m wrong, but if we are going to have ethics about one thing, we should have them about all things, yes?
Now you can argue that we have a chance to change the ethics here because it’s so new and we can potentially shape it before it gets out of hand. On that point I agree. We have already seen changes in Dalle2, and Stable Diffusion 2.0 has already helped move in the right direction.
I hope that there is some way to take the rush and zeal in this issue and transfer it to other problems like the fashion industry for example. It’s very easy for companies like Shein to be just a flash in the pan in drama and nothing actually changes.
12. For the sake of argument let’s say that copying styles and using artists’ work for reference is not OK, it is currently completely legal, how do we change that?
Now of course we have the artists who can have the opt in/out option, and if that works then that’s amazing! If it doesn’t though, what are our other options? If we copyright a style what does that entail? How will that affect future artists? If my art uses a specific tool and I copyright the use of that tool because it’s part of my style does that mean no one else can use it? Now we are on a slippery slope. I realize this could be going into semantics but I would rather cover all my bases and make sure it’s discussed than to leave it alone.
13. If we as humans use another artist’s work for reference (much like how I understand AI does, in the overall theme) should we not credit artists that inspire us?
If creating AI art requires crediting whoever was referenced in order to make it, then shouldn’t all human art do the same, out of respect? Art inspires art. If someone creates a mood board as inspiration for their work, should every artist in that mood board also be credited? If we are going to hold AI to this level, we should also hold humans to the same level. On top of that, humans are inspired by things through everyday life and will forget the little moments or people that inspired them; we’re human, we forget things. So if we do hold humans to the same level of responsibility, we should allow them more grace than a machine that doesn’t forget.
14. In what way can AI art (with the Stable Diffusion 2.0 data set) still be used? If I don’t reference any artist and just describe my scene, is that valid?
In much of my own art currently I’ve been using it as a way to make AI photographs of friends and people who have given their consent or to animate my own work where I describe the scene, use my photo as the reference, and not reference any artists. Does that make it OK? Or maybe I need to make my own models and then that solves the issue. The tool itself isn’t bad, it’s just the copyrighted material that was used is, and or if it’s used for profit, which goes back to again the fanart/cosplayer problem. If you disagree, please expound.
15. How much human interaction does there need to be for AI Art to become copyrightable or “art?”
There is no right answer to this question. We know that AI art itself doesn’t have copyright according to USA law. Yet, AI Art when created does have some human interaction if even only typing words into a prompt box. Otherwise AI Art wouldn’t get created at all, it needs a human at some point. So how much of a “percentage” does a human need to interact with AI Art to make it copyrightable? Does it need to be mixed with other pieces of art (AI generated or not) to become something new by a human hand? Do parts need to be changed significantly from the original output?
There is a currently trending case of a graphic novel whose images were made by AI, but whose speech bubbles, layout design, and story were created by a human. Does that constitute enough human involvement? We simply don’t have the precedent currently to determine this in a court of law.
16. What does AI mean for the future?
When it comes to AI and advancements in technology we need to ask the question, who is it going to benefit the most?
Big Tech and a few wealthy individuals currently control the advancements in AI. In this system, each advancement will likely bring maximum profit to the already wealthy few, while the rest of the world suffers. Many people share the same fears about AI and the technocrats who have amassed so much wealth and power. However, we (the public) have become divided through many avenues, including that of the “left” and “right,” and this division has created a stalemate in democratic governments when it comes to breaking up the monopolies Big Tech is building.
If this continues I imagine we will continue towards the age of dystopia similar to that of CyberPunk Edge Runners where only a select few are able to innovate and benefit from technology, while the rest of humanity relies on whatever scraps the elites decide to give them.
This doesn’t have to come to pass however.
If we can unify on important issues, overcoming what divides us, we can determine the direction this technology goes and allow it to serve all of humanity and not just the wealthy. Machines have the ability and potential to do everything a human does at a fraction of the cost when it comes to food, water, medicine, etc., but if the public doesn’t take ownership and control, prices will continue rising and resources will be hoarded by the wealthy elite.
Conclusion
There is a lot of info to go through here and it’s important that you make the decision that you think is right. I’ve done my best to be impartial but I’m also not perfect. I really love this technology and it’s given me the opportunity to work with people I didn’t think I ever would. At the end of the day I believe the issue is using copyrighted material for profit without permission. We are entering an AI Renaissance and nobody really knows what it holds, but hopefully we can remember to keep ethics at the forefront. If the majority on an issue believe something is ethically wrong how does that majority make that into law? When laws in our current time come about it’s because those in power and money either are affected by it or can benefit from it. We can’t trust Big Tech to be ethical, we’ve seen them fail time and time again. It is up to those passionate about the situation to make a difference.
Huge shout out to those who helped me edit this article and help educate me on those areas I was misinformed on.