
[Help] Trying to run a local storytelling model with KoboldCpp

Hi,

Just like the title says:

I'm trying to run:

With:

  • koboldcpp:v1.43 using HIPBLAS on a 7900XTX / Arch Linux

Running:

--stream --unbantokens --threads 8 --usecublas normal

I get very limited output with lots of repetition.

Illustration

I mostly didn't touch the default settings:

Settings

Does anyone know how I can make things run better?

EDIT: Sorry for multiple posts, Fediverse bugged out.


16 comments
  • What I'd advise you to do is something like this:

    python3 koboldcpp.py --unbantokens --contextsize 8192 --stream models/mythomax-l2-13b.Q4_K_M.gguf
    

    If you're unsure about your settings, just use mirostat:

    python3 koboldcpp.py --unbantokens --contextsize 8192 --usemirostat 2 5.0 0.1 --stream models/mythomax-l2-13b.Q4_K_M.gguf
    

    You can download MythoMax 13b, and there's also a 33b. It's probably not 'the best' model out there, but I think it's a pretty solid one. It isn't super picky about the settings and does a variety of tasks well. [Edit: Watch out when using the 33b. I think it's broken. It generates lots of repetitive words for me.]

    Note: You probably need to add your --usecublas normal to these.

    Fiddling with the contextsize comes with many caveats. Skip this parameter if you don't need it, or learn how scaling works and which model needs which of the two methods with what kinds of numbers. Letting koboldcpp do the ropeconfig itself makes it assume a few things that are probably wrong. You often have to set the 'ropeconfig' explicitly when doing anything unusual. (Use a normal model and have it in gguf format; that's most likely to get the automatic configuration doing something sane.)

    Regarding --unbantokens: If you delete it from your command line, the EOS token gets banned again, making the model unable to stop generating text. But then it will still write an end to the story, following it up with "In the end ...", then another "And together, they lived...", and then more and more repetitive stuff, because the story is over yet you've made it unable to stop.

    With 'unbantokens' and once it won't continue on its own: Tick "Allow Editing", append a line break (or two) and hit 'Generate' again.

    You can set the 'Amount to Generate' to something you like. You don't need to put in my value.

    Getting your model to work: It is a LLaMA 1-based model, so it originally had 2048 tokens of context. It is in the old ggml format, so koboldcpp probably doesn't know that. And judging by the name, I guess it was trained to have an 8k context with linear scaling. So I'd guess the following parameters are correct, and you absolutely need to specify them explicitly when using a model like this:

    --contextsize 8192 --ropeconfig 0.25 10000
    
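    To make the arithmetic explicit: with linear scaling, the first --ropeconfig number is just the model's native context divided by the stretched context. A tiny sketch of that (my own illustration, not something koboldcpp ships):

```python
# Linear RoPE scaling: scale = native context / target context.
# For a LLaMA 1 model (2048 native) stretched to 8192 tokens, that's 0.25,
# which is where the "0.25" in --ropeconfig 0.25 10000 comes from.
# (10000 is simply the stock rope base, left unchanged for linear scaling.)

def linear_rope_scale(native_ctx: int, target_ctx: int) -> float:
    """First --ropeconfig argument for a linearly-scaled model."""
    return native_ctx / target_ctx

print(linear_rope_scale(2048, 8192))  # 0.25
```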

    I don't have the time to fiddle with the parameters to get them right, so I just used mirostat. Feel free to instead try a few of the presets and play with the numbers. Maybe see if someone commented a good configuration for your favorite model somewhere.

    python3 koboldcpp.py --unbantokens --contextsize 8192 --ropeconfig 0.25 10000 --usemirostat 2 5.0 0.1 --stream models/WizardLM-Uncensored-SuperCOT-StoryTelling-30b-superhot-8k.ggmlv3.q4_1.bin
    

    Regarding the prompt format: I'm not sure what a 'storywrite' model expects. You're giving it an instruction, but don't write it in any of the usual formats. That's most likely wrong in any case. If these models are trained with plain stories (which I don't know), you have to start with the story and let it autocomplete. You can't just give it an instruction and hope for the best; it'll lead to subpar results. If you give it instructions, do it correctly. Find out the correct prompt format for that specific model.

    But maybe just don't use this model. These SuperHOT or 8k/16k/... models are from a time where we didn't have NTK-aware scaling yet. Nowadays you might just use the latter together with a 'normal' model. And as long as there isn't a reason against it, maybe use a model based on Llama2 which has twice the context to begin with (compared to the old LLaMA).
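    For reference, NTK-aware scaling leaves the scale factor at 1.0 and raises the rope base instead. A rough sketch of the commonly used rule of thumb (the formula comes from the original NTK-aware scaling write-up, not from this thread, and in practice people round the result generously):

```python
# NTK-aware scaling: keep --ropeconfig's scale at 1.0 and raise the base.
# Rule of thumb: base' = base * alpha ** (d / (d - 2)), where alpha is the
# context multiplier and d is the head dimension (128 for Llama models).

def ntk_rope_base(alpha: float, base: float = 10000.0, head_dim: int = 128) -> float:
    """Approximate rope base for stretching context by a factor of alpha."""
    return base * alpha ** (head_dim / (head_dim - 2))

# Doubling a Llama 2 model's 4096-token context (alpha = 2) suggests a base
# of roughly 20000; rounder numbers like 32000 also get used in the wild.
print(round(ntk_rope_base(2)))
```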

    There are many other models like that out there. In fact there are many, many models out there, and a lot of them aren't very good or are just obsolete. Don't pick one based on the fancy name. Choose one other people use. Rely on word of mouth. If it's good, other people will use it and maybe also talk about it, recommend good settings and how to prompt it. For popular models, you'll get that information on top.

    https://huggingface.co/models?sort=modified&search=storywrite

    Judging by the output you can see on my screenshots, MythoMax writes way better stories anyway.

    Here are some more prompt ideas:

    ### Instruction:
    Write an award-winning children's book about a stray cat finding a new home in a small European village. Make sure the narration and characters are vivid, detailed and generally warm and likable.
    
    ### Response:
    
    ### Instruction:
    You are the world's greatest writer. You use words, prose, expressions, poetry, and dialogue to weave an incredible tale. Write in the style of book chapters. Write a fairy tale for children about a stray cat finding a new home in a small European village.
    
    ### Response:
    
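    If you script your prompts (for example against KoboldCpp's API) instead of typing them into the UI, Alpaca-style blocks like the ones above are trivial to assemble. A minimal helper (the function name is my own):

```python
def alpaca_prompt(instruction: str, response: str = "") -> str:
    """Build an Alpaca-style Instruction/Response prompt like the examples above."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n{response}"

prompt = alpaca_prompt(
    "Write a fairy tale for children about a stray cat "
    "finding a new home in a small European village."
)
print(prompt)
```

    Leaving the response empty ends the prompt right after "### Response:", which is where you want the model to start generating.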
    • Thanks a lot for your input. It's a lot to stomach, but very descriptive, which is what I need.

      I run this Koboldcpp in a container.

      What I ended up doing and which was semi-working is:

      • --model "/app/models/mythomax-l2-13b.ggmlv3.q5_0.bin" --port 80 --stream --unbantokens --threads 8 --contextsize 4096 --useclblas 0 0

      In the KoboldCpp UI, I set max response tokens to 512, switched to an Instruction/Response mode, and kept prompting with "continue the writing", with the MythoMax model.

      But I'll be re-checking your way of doing it because the SuperCOT model seemed less streamlined and more qualitative in its story writing.

    • I like to do my storywriting a bit differently, though. I had better results guiding it a bit more. Have a look at the following output. The process is as follows: I give it an instruction and let it generate the answer. Then I append the instruction for the next step underneath. During each step I obviously let it regenerate replies until I like the result, and also generously edit things to my liking before resuming.

      The approach has several steps. I let it write a synopsis that I like. Let it come up with interesting main characters, the story arc, maybe a scenario. I split the actual writing into parts/acts/chapters. Doing it this way seems to generate better results than just instructing it to write something; that results in a very basic arc of suspense (if at all), random pacing, and characters entering and exiting the stage at random. YMMV.

      Remember, you can always let the LLM help you. Let it come up with 10 interesting titles for a book. Tell it to "Come up with some deep questions regarding ...", or challenging issues to confront your main character with that'll advance the story.

      You can also give it extra instructions if you don't like the results. For example I sometimes write "Write chapter 1 ... of the story. Introduce the main character and set the atmosphere for the story". You'll have to do lots of trial and error. One thing: It'll pick up on the style of writing, the pacing and so on. So it'll get easier once you get past the first few paragraphs.
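      The loop I'm describing can be sketched like this (generate() is a stand-in for whatever backend you use, e.g. a call to KoboldCpp's API; it's a placeholder, not a real function):

```python
def write_story(generate, instructions):
    """Grow one long transcript: instruct, generate, append, repeat.
    `generate` is a placeholder for your backend of choice."""
    transcript = ""
    for instruction in instructions:
        transcript += f"### Instruction:\n{instruction}\n\n### Response:\n"
        reply = generate(transcript)
        # In practice: regenerate and hand-edit `reply` until you like it.
        transcript += reply + "\n\n"
    return transcript

steps = [
    "Write a character card for private investigator cat.",
    "Imagine a summary for a children's book.",
    "Write the table of contents for this children's book.",
    "Write chapter 1 of the story.",
]
```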

      The following is only an example. The way I word things isn't necessarily the best choice. Also feel free to tell it to write a short story (these often result in several pages being generated) or a novel. Or a 5000-word story. You can word it like a school assignment or tell it to write in the style of Tolkien. (Regarding the 5000 words: LLMs can't count. It'll only give it a rough idea of something long or short. Don't expect it to do maths or give you something of a precise length. That is a frequent problem. Just make sure it roughly knows what you expect it to do.)

      ### Instruction:
      Write a character card for private investigator cat. She is a female, anthropomorphic cat who recently moved to a small European village.
      
      ### Response:
      Name: Private Investigator Cat
      Species: Anthropomorphic Cat
      Age: 28 (appears younger due to her feline agelessness)
      Occupation: Private Investigator
      Personality: Curious, observant, independent, and slightly aloof. Despite being new to the village, she maintains a professional demeanor in her work as a private investigator. She's quick-witted and resourceful when solving cases but can also be stubborn at times. Her feline nature sometimes shows through in her curiosity about small details and her tendency to groom herself compulsively during intense moments.
      Appearance: A sleek black cat with striking green eyes, she stands at around 5'4" tall on her hind legs. Her tail swishes confidently behind her as she walks. She dresses professionally in a fitted suit jacket, slacks, and heels that accentuate her feline grace. A pair of cat-eye glasses perches on her nose, adding an air of intelligence to her appearance.
      Background: Originally from a big city, Private Investigator Cat decided to move to this small European village for a change of pace and to escape the hustle and bustle of city life. She has a knack for solving mysteries and enjoys the slower pace of the village, which allows her to focus on her work without distractions. However, her unique appearance often raises eyebrows among the locals, making it challenging for her to blend in completely.
      Skills: Excellent observation skills, sharp intellect, proficient in martial arts, fluent in multiple languages including French and German.
      Case Files: Recently, she was hired by a local farmer to investigate strange occurrences on his property - livestock disappearing without a trace. Using her keen senses and knowledge of animal behavior, she discovered that a group of thieves were responsible for the thefts. Another case involved tracking down a missing heirloom necklace for a wealthy family in town. Her success rate is high, but some villagers still view her with suspicion due to her unconventional methods and appearance.
      
      ### Instruction:
      Imagine a summary for a children's book called "Detective Cat and the Case of the Stolen Necklace".
      
      ### Response:
      Detective Cat, a clever and curious anthropomorphic feline detective, arrives in a quaint European village determined to solve mysteries. When a valuable family heirloom disappears, she springs into action using her sharp observation skills and feline agility to track down clues. With the help of new friends in the village, she uncovers a daring jewel thief's plan and retrieves the stolen treasure, proving that even in a place where everyone knows each other, crime doesn't pay. Will Detective Cat be able to solve more mysteries and earn the trust of the villagers? Find out in this exciting adventure!
      
      ### Instruction:
      Write the table of contents for this children's book.
      
      ### Response:
      Table of Contents:
      1. Introduction to Detective Cat
      2. The Mysterious Disappearance of the Heirloom Necklace
      3. Gathering Clues and Making Connections
      4. Following the Thief's Trail
      5. A Daring Rescue Mission
      6. Unmasking the Culprit
      7. The Aftermath and New Beginnings
      
      ### Instruction:
      Write chapter 1 of the story.
      
      ### Response:
      Chapter 1: Introduction to Detective Cat
      In a small European village nestled between rolling hills and picturesque vineyards, there lived an unusual character named Private Investigator Cat. She was not your typical resident; she stood tall at 5'4" on her hind legs and had striking green eyes that matched her sleek black fur. Despite being new to the area, she decided to set up shop as a private investigator after moving from the big city. The locals found her fascinating, but they also wondered what kind of trouble someone like her might bring to their peaceful community.
      One day, Madame Dupont, a wealthy woman in town, approached Detective Cat with a peculiar request. Her family's priceless necklace had gone missing, and she wanted it back. The necklace held great sentimental value and was passed down through generations. It was made of precious gems and gold, making it irresistible to any thief. Detective Cat took on the case, eager to prove herself to the villagers. As she began her investigation, she knew that solving this mystery would be no easy feat.
      
      [...]
      

      Generated with MythoMax-L2 13b (Q4_K_M), KoboldCPP on Linux. Settings are in the screenshot in the comment before. CLI arguments in my case were:

      python3 koboldcpp.py --threads 2 --unbantokens --contextsize 8192 --ropeconfig 1.0 32000 --port 5001 --smartcontext --stream models/mythomax-l2-13b.Q4_K_M.gguf
      

      Instructions are my words, everything after the ### Response: to the next instruction is what the LLM came up with. I haven't edited the output to move the story in any direction in this case. And I often just use the 'Story Mode', write all of the prompt myself and edit the text when doing things like this. You can set up Instruct Mode if you like.

      I'm not an expert. There may be better ways to do it.
