Popular iPad design app Procreate is coming out against generative AI, and has vowed never to introduce generative AI features into its products. The company said on its website that although machine learning is a “compelling technology with a lot of merit,” the current path that generative AI is on is wrong for its platform.
Procreate goes on to say that it’s not chasing a technology that is a threat to human creativity, even though this may make the company “seem at risk of being left behind.”
Procreate CEO James Cuda released an even stronger statement against the technology in a video posted to X on Monday.
I agree, but as long as we still have capitalism I support measures that at least slow down the destructiveness of capitalism.
AI is like a new power tool in capitalism's arsenal to dismantle our humanity.
Sure, we can use it for cool things as well. But right now it's mostly used to automate the stuff that makes us human - art, music and so on.
Not useful stuff like loading the dishwasher for me. More like writing a letter for me to invite my friends to my birthday. Very cool. But maybe the work I put in doing this myself is making my friends feel appreciated?
Edit:
It's also nice to at least have an app that takes this maximalist approach. Then people can choose. If they're half-assing it there will be more and more ai-features creeping in over time. One compromise after the next until it's like all the other apps.
It's also important to have a maximalist stand like this as a reference point, so you can gauge where everything else sits on the scale.
… In a world of adequate distribution and a form of universal income, we should all relish automation.
That doesn’t preclude capitalism (investing for profit, the use of currency, interest rates etc), however, just needs a state with guts and capability to force redistribution.
Procreate is amazing. I bought it for my neurodivergent daughter and used it as a non-destructive coloring book.
I’d grab a line drawing of a character that she wanted to color from a Google image search, add it to the background layer, lock the background so she can’t accidentally move or erase it, then have her color on the layer above it using a multiply blend mode so the black lines can’t be painted over. She got to the point where she prefers to have the colorized version alongside the black and white, so she can grab the colors from the original and do fun stuff like mimic its shading and copy-paste in elements that might have been too difficult for her to render. Honestly, she barely speaks, but in that program she’s better than most adults already, even at age 8. Her work looks utterly perfect, and she knows a lot of advanced blending and cloning techniques that traditional-media artists don’t usually know.
No doubt his decision was helped by the fact that you can't really fit full image generation AI on iPads - for example Stable Diffusion needs at the very least 6GB of GPU memory to work.
That said, since what they sell is a design app, I applaud him for siding with the interests of at least some of his users.
PS: Is it just me who finds it funny that the guy's last name is "Cuda", and CUDA is the Nvidia technology for running computation on their GPUs, and hence widely used for this kind of AI?
you can't really fit full image generation AI on iPads - for example Stable Diffusion needs at the very least 6GB of GPU memory to work.
You can currently run Stable Diffusion and Flux on iPads and iPhones with the Draw Things app. Including LoRAs and TIs and ControlNet and a whole bunch of other options I'm too green to understand.
Technically the app even runs on relatively old devices, though I imagine only at lower resolutions and probably takes ages.
But in my limited experience it works quite well on an iPad Pro and an iPhone 13 Pro.
Does it? I worked on training a classifier and a generative model on freely available galaxy images taken by Hubble and labelled in a citizen science approach. Where's the theft?
Hard to say. Training a model from scratch is costly, so most models are fine-tuned from a pretrained base. Your input may not infringe copyright, but the data that went in before or after yours may have.
The way I understand it, generative AI training is more like a single person analyzing art at impossibly fast speeds, then using said art as inspiration to create new art at impossibly fast speeds.
The art isn't being made btw so much as being copy and pasted in a way that might convince you it was new.
Since the AI cannot create a new style or genre on its own, without source material that already exists to train it, and that source material is often scraped up off of databases, often against the will and intent of the original creators, it is seen as theft.
Especially if the artists were in no way compensated.
By this logic, a photograph is a painting painted at an impossibly high speed - but for some reason we draw a distinction between what humans make and what machines make.
That's a blanket statement. While I understand the sentiment, what about the thousands of "AIs" trained on private, proprietary data for personal or internal use by organizations that own said data? It's not the technology, it's the lack of regulation and misaligned incentives.
Is it really not true? How many companies have trained their models on art scraped straight off the Internet, completely disregarding creative licenses and without asking anyone for permission? How many times have people gotten a result from a GenAI model that broke IP rights, or looked extremely similar to an already existing piece of art, and would probably get them sued? And how many of these models have been made available for commercial purposes?
The only logical conclusion is that GenAI steals art because it has been constantly "fed" with stolen art.
Where do you think it ingests all its content from? The problem isn't the AI itself, it's the companies that operate it, but it's not inaccurate to conflate the two things.
Ironically, I think AI may prove to be most useful in video games.
Not to outright replace writers, but so they instead focus on feeding backstory to AI so it essentially becomes the characters they’ve created.
I just think it’s going to be inevitable and the only possible option for a game where the player truly chooses the story.
I just can’t be interested in multiple choice games where you know that your choice doesn’t matter. If a character dies from option a, then option b, c, and d kill them as well.
Realising that as a kid instantly ruined telltale games for me, but I think AI used in the right way could solve that problem, to at least some degree.
Yeah, ultimately a lot of devs are trying to make "story generators" that rely on the user's imagination to fill in the blanks, which is why RimWorld is so popular.
There's a business/technical model where "local" LLMs would kinda work for this too, if you set it up like the Kobold Horde. The dev hosts a few GPU instances for players whose hardware can't handle the local LLM, but users with beefy PCs also generate responses for other users (optionally, at a low priority) in a self-hosted horde.
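The shared-queue idea behind a horde can be sketched roughly like this. Everything here is made up for illustration and simulated in memory; the real Kobold/AI Horde works over HTTP with registered workers and a kudos system, none of which is modeled.

```python
# Rough in-memory sketch of a horde-style dispatcher (hypothetical names, not
# the real AI Horde protocol): players whose hardware can't run the model
# submit jobs to a shared queue; dev-hosted fallback GPUs and volunteer PCs
# both pull from it, with urgent jobs (lower number) served first.
from dataclasses import dataclass, field
from queue import PriorityQueue

@dataclass(order=True)
class Job:
    priority: int                       # only priority is used for ordering
    prompt: str = field(compare=False)  # payload, excluded from comparisons

class Horde:
    def __init__(self):
        self.queue = PriorityQueue()
        self.results = {}

    def submit(self, prompt, priority=1):
        self.queue.put(Job(priority, prompt))

    def work(self, worker_name):
        """One worker pulls one job and 'generates' a response."""
        if self.queue.empty():
            return None
        job = self.queue.get()
        # A real worker would run the LLM here; we fake the generation step.
        self.results[job.prompt] = f"{worker_name}: reply to {job.prompt!r}"
        return job.prompt

horde = Horde()
horde.submit("quest intro", priority=0)  # urgent, player is waiting
horde.submit("npc banter")               # background chatter, low priority

horde.work("dev-hosted-gpu")  # fallback instance takes the urgent job first
horde.work("volunteer-pc")    # a beefy player PC picks up the rest
```

The priority field is what lets volunteer capacity soak up background work while the dev-hosted instances stay free for latency-sensitive requests.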
Something like using an LLM to make actually unique side quests in a Skyrim-esque game could be interesting.
The side quest/bounty quest shit in something like Starfield was fucking awful because it was like 5 of the same damn things. Something capable of making at least unique-sounding quests would be a shockingly good use of the tech.
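As a toy illustration of the "5 of the same damn things" problem: whatever actually writes the quest text (the LLM call is out of scope here), the game can keep a signature of every accepted quest and reject structural repeats. All names below are invented for the example.

```python
# Hypothetical sketch: normalize each generated quest down to a structural
# signature (verb, target, location) and refuse near-duplicates, so the
# player doesn't get five copies of "clear the pirate base".
seen_signatures = set()

def accept_quest(verb, target, location):
    """Return True if this quest is structurally new, else reject it."""
    signature = (verb.lower(), target.lower(), location.lower())
    if signature in seen_signatures:
        return False
    seen_signatures.add(signature)
    return True

accept_quest("Clear", "pirate base", "Kreet")    # accepted: first of its kind
accept_quest("clear", "Pirate Base", "kreet")    # rejected: same quest, re-rolled
accept_quest("Escort", "trade convoy", "Volii")  # accepted: structurally new
```

A real implementation would want fuzzier matching than exact tuples (synonyms, similar locations), but even this crude filter forces the generator to vary its output.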
Very good news for artists. AI image generation is founded upon art theft, and art theft is something that artists are not fond of, so it's really nice to see the developer being open about his respect to the artists who use the app!
Perhaps the most stupid take on this subject I have seen. Nothing will stop humans creating, definitely not a new creative medium! That's all it is, by the way: a new medium, like photography a hundred-some years ago, or digital painting more recently. Most of the same arguments were made against pre-mixed paints - Turner was dragged for using them, for example!
It is problematic though. People start relying on content generation more and more and stop learning how to do it properly. Once they start relying on AI shit, that's when capitalism does its thing and locks you into monthly subscription costs. Just look at what Adobe is doing. They create a dependency and then start changing their business model. Cloud this and cloud that is the same kind of problem.
Plus, AI-generated content often looks alike. You kind of take away the signature looks of creators.
I'm not entirely against AI-generated content. A friend of mine hates social media, but his small business relies on it. Most of his posts are AI-generated just so he doesn't have to deal with that cancer.
The “they stop learning how to do it properly” is as old as time itself!
How many of today’s Illustrator artists know how to blend oil colours and layer them on cloth? How many software developers could build what they do in pure assembler?
We stand on the shoulders of giants, and have since the Stone Age. Specialisation and advancement have meant we don’t need to start from first principles. You could argue that is what “progress” is: being able to get a little bit further because your parents got a little bit further, because their parents got a little bit further.
I’m super concerned about what the future holds for humanity and I worry that AI will leave millions and millions without an income and further concentrate wealth towards the few.
That said this is clearly a “we can’t compete, let’s make a press release to say ‘this is all wrong and we choose not to compete’”-statement.
There's plenty of AI out there that's not built on theft. You can train a model solely on your own data if you want to. There are open-source models out there trained only on data they were expressly given consent to use.
You can get machine learning algorithms to learn how to play basic games completely on their own, etc.
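The "learns basic games completely on its own" point is easy to demonstrate: below is a minimal tabular Q-learning agent (a standard algorithm; the five-cell corridor game is invented for the example) that learns to reach the goal from scratch, with no training data from anywhere.

```python
# Tabular Q-learning on a toy corridor: cells 0..4, reward only at cell 4.
# The agent starts knowing nothing and learns purely from its own trial
# and error - no external dataset, scraped or otherwise.
import random

random.seed(0)  # make the run reproducible

N = 5                                 # corridor cells 0..4; goal is cell 4
Q = [[0.0, 0.0] for _ in range(N)]    # Q[state][action]; 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.3

for _ in range(300):                  # training episodes
    s = 0
    while s != N - 1:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.randint(0, 1)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else s + 1   # walls clamp at cell 0
        r = 1.0 if s2 == N - 1 else 0.0           # reward only at the goal
        # standard Q-learning update
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The greedy policy after training: one action per non-terminal state.
policy = [0 if Q[s][0] >= Q[s][1] else 1 for s in range(N - 1)]
```

After training, the greedy policy moves right in every cell, and the learned values decay geometrically with distance from the goal, which is exactly what the Bellman equation predicts for this game.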