It finds the “most similar” tokens in the list. Similarity is measured by the angle θ between the token vectors.
The angle is expressed as cosine similarity, where 1 = 100% similarity (parallel vectors) and 0 = 0% similarity (perpendicular vectors).
Negative similarity (vectors pointing in opposing directions) is also possible.
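In code, the score boils down to the dot product of the two vectors divided by the product of their lengths. A minimal sketch with toy 2D vectors (the notebook uses 1x768 CLIP token vectors, but the formula is identical):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 = parallel, 0.0 = perpendicular, negative = opposing directions
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([1.0, 0.0])
print(cosine_similarity(a, np.array([2.0, 0.0])))   # 1.0  (parallel)
print(cosine_similarity(a, np.array([0.0, 3.0])))   # 0.0  (perpendicular)
print(cosine_similarity(a, np.array([-1.0, 0.0])))  # -1.0 (opposing)
```

Note that magnitude cancels out: a vector and the same vector scaled by 2 still score 1.0.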
How can I use it?
If you are bored of prompting “girl” and want something similar, you can run this notebook and use the “chick” token at 21.88% similarity, for example.
You can also run a mixed search, like “cute+girl”/2, where for example “kpop” has a 16.71% similarity.
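A mixed search like “cute+girl”/2 can be read as: average the token vectors, then rank the whole vocabulary by cosine similarity to that average. A toy sketch of that idea (random stand-in vectors here, not real CLIP embeddings, and the token names are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in vocabulary: in the notebook this would be the ~49k x 768
# CLIP token-embedding matrix loaded from the model; here, toy vectors.
vocab = {name: rng.normal(size=768) for name in ["cute", "girl", "kpop", "banana"]}

def mixed_search(tokens, top_k=2):
    # "cute+girl"/2 : average the token vectors...
    query = np.mean([vocab[t] for t in tokens], axis=0)
    q = query / np.linalg.norm(query)
    # ...then rank every vocab entry by cosine similarity to the average.
    scores = {name: float(v @ q / np.linalg.norm(v)) for name, v in vocab.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

print(mixed_search(["cute", "girl"]))
```

With real CLIP embeddings the interesting results are the third-party tokens (like “kpop”) that land high on this list.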
The further down the list you go, the stranger the tokens become. Example: tokens similar to the token "pewdiepie</w>" (yes, this is an actual token that exists in CLIP).
Each of these corresponds to a unique 1x768 token vector.
The higher the ID value, the less often the token appeared in the CLIP training data.
To reiterate: this is the CLIP model's training data, not the SD model's training data.
So for certain models, tokens with high IDs can give very consistent results, if the SD model was trained to handle them.
An example of this is anime models, where Japanese artist names can affect the output greatly.
Tokens with high IDs will often give the "fun" output when used in very short prompts.
What about token vector length?
If you are wondering about token magnitude:
Prompt weights like (banana:1.2) scale the magnitude of the corresponding 1x768 tensor(s) by 1.2. That is how prompt token magnitude works.
TL;DR: vector direction = “what to generate”, vector magnitude = “prompt weights”.
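The direction/magnitude split above can be sketched in a few lines. This is a toy illustration of the weighting scheme as described here (a stand-in vector, not a real CLIP embedding, and `apply_prompt_weight` is a hypothetical helper, not an actual SD API):

```python
import numpy as np

# Toy 1x768 vector standing in for the "banana" token embedding.
banana = np.ones(768) * 0.02

def apply_prompt_weight(vec: np.ndarray, weight: float) -> np.ndarray:
    # (banana:1.2) -> scale the tensor by 1.2; direction is untouched.
    return vec * weight

weighted = apply_prompt_weight(banana, 1.2)

# Magnitude changes by exactly the prompt weight...
print(np.linalg.norm(weighted) / np.linalg.norm(banana))  # 1.2
# ...while cosine similarity to the original stays 1.0 (same direction).
cos = weighted @ banana / (np.linalg.norm(weighted) * np.linalg.norm(banana))
print(round(cos, 6))  # 1.0
```

This also explains why cosine-similarity search ignores prompt weights entirely: scaling never changes the angle.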
How prompting works (technical summary)
There is no correct way to prompt.
Stable Diffusion reads your prompt left to right, one token at a time, finding associations from the previous token to the current token and to the image generated thus far (the Cross Attention Rule).
Stable Diffusion is an optimization problem that seeks to maximize similarity to the prompt and minimize similarity to the negatives (the Optimization Rule).
For every step (20 in total by default) in SD1.5:

Prompt text => (tokenizer) =>
Nx768 token vectors => (CLIP model) =>
1x768 encoding => (the SD model / Unet) =>
desired image per Rule 3 => (sampler) =>
paint a section of the image => (image)
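The loop above can be sketched in code. Everything here is a toy stand-in (hypothetical function names, random arrays); the real components live in an actual SD pipeline such as diffusers:

```python
import numpy as np

rng = np.random.default_rng(0)

def tokenizer(prompt):             # prompt text -> Nx768 token vectors
    return rng.normal(size=(len(prompt.split()), 768))

def clip_model(token_vectors):     # Nx768 -> 1x768 text encoding
    return token_vectors.mean(axis=0, keepdims=True)

def unet(latent, encoding, step):  # predict an update toward the prompt
    return 0.1 * (encoding.sum() - latent)

def sampler(latent, prediction):   # apply one denoising step
    return latent + prediction

latent = rng.normal(size=(64, 64))           # start from noise
encoding = clip_model(tokenizer("photo of a banana"))
for step in range(20):                       # 20 steps by default in SD1.5
    latent = sampler(latent, unet(latent, encoding, step))
print(latent.shape)  # (64, 64)
```

The real Unet and sampler are vastly more complex, but the control flow — encode once, then repeatedly predict and step — matches the chain above.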
Disclaimer /Trivia
This notebook should be seen as a "dictionary search tool" for the vocab.json , which is the same for SD1.5 , SDXL and FLUX. Feel free to verify this by checking the 'tokenizer' folder under each model.
(FLUX additionally uses a T5 text encoder. I have 0% understanding of the FLUX T5 text_encoder.)
//---//
If you want them, feel free to include them yourself and share the results (cuz I probably won't) :)!
That being said, being an encoding, I reckon the CLIP Nx768 => 1x768 mapping should be "linear" (or whatever one might call it).
So if you exchange a few tokens in the Nx768 for something similar, the resulting 1x768 ought to be fairly similar to the 1x768 we had earlier. Hopefully.
I feel it's important to mention this, in case some wonder why the token-to-token similarity doesn't match the text-encoding-to-text-encoding similarity.
Note regarding text encoding vs. token
To make this disclaimer clear: token-to-token similarity is not the same as text_encoding similarity.
I have to say this, since it will otherwise get (even more) confusing, as both the individual tokens and the text_encoding have dimensions 1x768.
They are separate things. Separate results. etc.
As such, you will not get anything useful if you start comparing similarity between a token and a text-encoding. So don't do that :)!
If you spot errors or have ideas for improvements, feel free to fix the code in your own notebook and post the results.
I'd appreciate that over people saying "your math is wrong you n00b!" with no constructive feedback.
//---//
Regarding output
What are the </w> symbols?
The </w> symbol indicates whether the tokenized item ends with whitespace (the suffix "banana</w>" => "banana ") or not (the prefix "post" in "post-apocalyptic").
For ease of reference, I call them prefix tokens and suffix tokens.
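Splitting the vocabulary along this marker is a one-liner. A sketch using a toy slice of vocab.json (the real file maps ~49k BPE strings to IDs, and these example IDs are made up):

```python
# Toy slice of vocab.json: {token_string: token_id}
vocab = {"banana</w>": 25996, "post": 1234, "apocalyptic</w>": 5678}

# Suffix tokens end a word (trailing whitespace); prefix tokens
# run straight into the next token.
suffix_tokens = [t for t in vocab if t.endswith("</w>")]
prefix_tokens = [t for t in vocab if not t.endswith("</w>")]

print(suffix_tokens)  # ['banana</w>', 'apocalyptic</w>']
print(prefix_tokens)  # ['post']
```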
Sidenote:
Prefix tokens have the unique property that they "mutate" suffix tokens.
Example: "photo of a #prefix#-banana",
where #prefix# is a randomly selected prefix token from the vocab.json
What are those gibberish tokens that show up?
Gibberish tokens like "ðŁĺħ</w>" are actually emojis!
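The gibberish comes from CLIP's byte-level BPE: the tokenizer maps each raw byte to a printable unicode character (the same mapping GPT-2 uses), so multi-byte UTF-8 emoji turn into strings like "ðŁĺħ". Reversing that mapping recovers the emoji:

```python
def bytes_to_unicode():
    # GPT-2 / CLIP byte-level BPE table: every one of the 256 byte
    # values gets a printable unicode character so tokens stay printable.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, map(chr, cs)))

# Invert the table: printable character -> original byte value.
byte_decoder = {c: b for b, c in bytes_to_unicode().items()}

def decode_token(token: str) -> str:
    # Strip the end-of-word marker, map chars back to bytes, decode UTF-8.
    raw = bytes(byte_decoder[c] for c in token.replace("</w>", ""))
    return raw.decode("utf-8", errors="replace")

print(decode_token("ðŁĺħ</w>"))  # -> 😅
```

So "ðŁĺħ" is just the four UTF-8 bytes of 😅 rendered through that byte-to-character table.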