NaNoWriMo (National Novel Writing Month) started in 1999 to get writers to spend their November writing a 50,000-word novel. The idea is that quality doesn’t matter – you get into the rhythm …
I joined a writing meetup here in Amsterdam that gathers every week in a bar to write, talk about our writing, bounce ideas around, etc. I kinda got tired of going because a worrying number of people were using ChatGPT to generate ideas. I was the only one trying to write non-fiction, and most of what I was writing was criticism of tech (sometimes of genAI), so talking about my writing was always fun. But their use of ChatGPT seemed extra weird because we were there, together, to write and support each other, for free.
It's strange to point to solidarity, support, and the general helpfulness of others as the way AI supposedly opens writing up to people of different classes or abilities, when that kind of support is probably one of the top things that social media (and pre-social-media social media) gave us on the internet.
A while back one of their reps did say somewhere on Reddit that they have no intention of adding any LLM features to Scrivener. Granted, they said that in the context of moving toward a subscription model and discussing features that don't fit their current business model, but still. Unless something has changed recently, they seem to want to stick with being a one-time purchase without any cloud-based services whatsoever, including AI, for their next major version too.
Our position on this is that we don't include any AI tools in our apps, and we let users choose where to back up their work, so they can pick services that don't allow AI access. Thanks :)
I use NovelAI myself. But you gotta provide good context, since it mimics your own writing and isn't an instruct model. It's more of a "yes, and—" for brief passages.
I'd rather have a little discourse than an echo chamber. I know people hate that term for good reason, but as we've seen many times, it can happen anywhere. We should always be vigilant to prevent that.
I'll step up and say, I think this is fine, and I support your use. I get it. I think that there are valid use cases for AI where the unethical labor practices become unnecessary, and where ultimately the work still starts and ends with you.
In a world, maybe not too far in the future, where copyright law is strengthened, where artist and writer consent is respected, and it becomes cheap and easy to use a smaller model trained on licensed data and your own inputs, I can definitely see how a contextual autocomplete that follows your style and makes suggestions is totally useful and ethical.
But I understand people's visceral reaction to the current world. I'd say it's okay to stay the course.