MythoMax 13B settings
MythoMax 13B settings. Run it with Kobold.cpp. For temperature, values greater than 1 encourage more diversity.

Mythalion 13B is a merge of Pygmalion-2 13B and MythoMax 13B; the long-awaited release of our new models based on Llama-2 is finally here. Use it with SillyTavern on top, with the Roleplay and Mancer settings noted above. Way more people should be using 7Bs now.

Component 2: merge Xwin and Hermes with a SLERP gradient. Finally, both Component 1 and Component 2 were merged with SLERP.

I am aware this is something of a tall order, but if anyone reading this is using MythoMax 13B on their local PC with oobabooga, it would be immensely helpful if you could upload screenshots of your relevant tabs, so that I could simply copy your settings. Consider these guidelines: experiment with temperature and top-k values, balancing creative chaos with control.

Holomax 13B by KoboldAI (Adventure): an expansion merge of the well-praised MythoMax model from Gryphe (60%) using MrSeeker's KoboldAI Holodeck model (40%). It's uncensored already; beyond that, any MythoMax model or mix (especially MythoMax-Kimiko for NSFW) is good for any purpose.

Click "Save settings for this model" and then "Reload the Model" in the top right. In the Koboldcpp notebook, enter your model below and then click the cell to start Koboldcpp.

Aug 10, 2023: The adjusted repetition penalty settings also fixed issues I experienced recently with other models, like MythoMax-L2-13B talking and acting as the user from the start, so it's not just Vicuna 13B v1.5 16K that benefited.

Nous-Hermes 2 13B (main) / 7B (budget) -> assistant and roleplay -> oobabooga. (Despite Nous-Hermes being my assistant model, it's also good for roleplay/storywriting; I just prefer MythoMax.) For character roleplay or anything requiring creativity, I use Mirostat settings. For 20B, I use the default context template and the Alpaca instruct template.
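The guideline above, balancing temperature against top-k, can be sketched in code. This is a minimal, illustrative sampler in pure Python; the function name and logits are made up for the example and are not tied to any particular backend:

```python
import math
import random

def sample_top_k(logits, k=50, temperature=0.7, rng=random):
    """Temperature + top-k sampling over a list of logits.

    Temperature below 1 sharpens the distribution (more deterministic);
    above 1 it flattens it (more diverse). Top-k keeps only the k most
    likely tokens before renormalizing and sampling.
    """
    # Keep the k highest-logit candidates
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    scaled = [logits[i] / temperature for i in top]
    # Softmax over the surviving candidates (shifted for stability)
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    # Draw one candidate proportionally to its weight
    r = rng.random() * total
    for idx, w in zip(top, weights):
        r -= w
        if r <= 0:
            return idx
    return top[-1]
```

With k=1 this degenerates to greedy decoding; very low temperatures behave almost the same way, which is the "control" end of the trade-off.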
Step 8: Generate Text. Dec 19, 2023: MythoMax is a Llama 2 13B (twice as large as the current Griffin model) and was specifically optimized for storytelling.

Edit: I just checked out LM Studio; you can open up a chat, go to Settings in the top-right corner, then scroll. In the Model dropdown, choose the model you just downloaded: MythoMix-L2-13B-GPTQ. The model will automatically load, and is now ready for use! If you want any custom settings, set them and then click Save settings for this model followed by Reload the Model in the top right.

Mancer.tech offers a free model called Mytholite, which is MythoMax but with lower context (2.5k, I think, instead of 4k).

EDIT 2: Rough fix: editing the response, replacing everything with a period and a line break, then using Continue gets a different response.

Under Download custom model or LoRA, enter TheBloke/MythoMax-L2-Kimiko-v2-13B-GPTQ.

Tiefighter 13B is freaking amazing; the model is really fine-tuned for general chat and highly detailed narrative. I've been having good results with Best Guess 6B (if it isn't available for you to choose, go into SillyTavern's public folder and copy it from the KoboldAI settings to the textgen settings) with Mirostat at 2, 5, 0.1, with SillyTavern's Roleplay preset.

For a 13B model, that is; comparing it to other writing/RP-oriented 13B models.

For example, if you have 8GB VRAM, you can set up to 31 layers maximum for an older 13B model like MythoMax with 4k context.

Big Model Comparison/Test (13 models tested) Winner: Nous-Hermes-Llama2. SillyTavern's Roleplay preset vs. model-specific prompt format.

So here's my setup. Literally the first generation and the model already misgendered my character twice, and there was some weirdness going on with coherency (I don't know how best to explain it, but I've seen text that contextually makes sense yet feels off, in an "unhuman" way).
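The 8GB-VRAM rule of thumb above can be turned into a rough calculator. All the constants below are illustrative assumptions tuned to reproduce the figures mentioned in this document (31 layers at 8 GB, full offload of a 13B at 12 GB); they are not measurements, and real layer sizes depend on the quant and context length:

```python
def max_gpu_layers(vram_gb, n_layers=43, layer_gb=0.21, overhead_gb=1.4):
    """Rough rule of thumb for how many layers of a quantized 13B
    model fit on the GPU.

    Assumptions (illustrative only): ~43 offloadable layers of roughly
    0.21 GB each, plus a fixed overhead for KV cache and scratch buffers.
    """
    usable = vram_gb - overhead_gb
    if usable <= 0:
        return 0
    return min(n_layers, int(usable / layer_gb))
```

Under these assumptions, 8 GB yields 31 layers and 12 GB yields a full offload; in practice you find the real ceiling by raising the layer count until you run out of VRAM.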
Still trying to find settings I like for MythoMax, but it's been well tuned for uncensored creative storytelling and roleplay in my experience.

LLaMA 2 Holomax 13B - the writer's version of MythoMax. To be specific, the ratios are: Component 1: merge of Mythospice and Xwin with a SLERP gradient. migtissera/SynthIA-70B-v1.

Yeah, 13B is likely the sweet spot for your rig. Stick with 13B for now. The difference is noticeable, but I find 13B good enough as well. You can probably also get it (or other 7B or 13B models) running on your local system, unless you're using a potato or something. If you want to use MythoMax, use MythoMax 13B.

Knowledge about drugs and super dark stuff is even disturbing, like you are talking with someone working in a drugstore or hospital.

Adjust specific configurations if needed. For the CPU inference (GGML/GGUF) format, having enough RAM is key. You want as many GPU layers as possible.

I've tried writing various instructions for it. There's probably some sort of magic phrase that'll make it stop happening, but I haven't found it yet.

Sep 5, 2023: Click Download. The model will start downloading. To download from a specific branch, enter for example TheBloke/MythoMax-L2-13B-GPTQ:main.

Whereas, at this point, MythoMax will usually understand when a subject is done and transition to the next scene. For context and instruct templates, I used two different ones. Please use it with caution and with best intentions.

Feb 4, 2024: Model MythoMax 13B. Easily beats my previous favorites MythoMax and Mythalion, and is on par with the best Mistral 7B models (like OpenHermes 2) concerning knowledge and reasoning, while surpassing them regarding instruction following and understanding.
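The SLERP merges described in these recipes interpolate along the arc between two weight tensors rather than linearly, which tends to preserve the magnitude of the weights. A toy, self-contained sketch, with plain Python lists standing in for tensors; this is an illustration of the math, not the actual merge tooling:

```python
import math

def slerp(v0, v1, t):
    """Spherical linear interpolation between two vectors.

    t=0 returns v0, t=1 returns v1; intermediate t follows the
    great-circle arc between the two directions.
    """
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    cos_theta = max(-1.0, min(1.0, dot / (n0 * n1)))
    theta = math.acos(cos_theta)
    if theta < 1e-6:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

A "SLERP gradient" in these recipes means a list of different t values applied to different layer groups, e.g. blending more of the second model into the middle layers than the ends.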
Feb 26, 2024: These are the settings I used on my 12GB VRAM graphics card (RTX 3060) for full offload of a 13B Q4_K_M model.

This is an expansion merge to the well-praised MythoMax model from Gryphe (60%) using MrSeeker's KoboldAI Holodeck model (40%). The goal of this model is to enhance story-writing capabilities while preserving the desirable traits of the MythoMax model as much as possible (it does limit chat reply length).

Set gpu-memory in MiB for device :0.

All Synthia models are uncensored. SynthIA (Synthetic Intelligent Agent) is a LLama-2-70B model trained on Orca-style datasets. See the blog post (including suggested generation parameters for SillyTavern).

A heavily dumbed-down explanation is that a model is the AI that generates text. The format also supports metadata and is designed to be extensible. Temperature range: 0 ≤ temperature ≤ 100.

The merging process was heavily inspired by Undi95's approach in Undi95/MXLewdMini-L2-13B.

In the Model dropdown, choose the model you just downloaded: MythoMax-L2-13B-GPTQ. The model will automatically load, and is now ready for use! If you want any custom settings, set them and then click Save settings for this model followed by Reload the Model in the top right. To download from a specific branch, enter for example TheBloke/MythoMax-L2-13B-GPTQ:main.

For Airoboros L2 13B, use TFS-with-Top-A and raise Top-A to taste. For Mirostat: mode is set to 2, tau to 3-5 (usually 5), and eta to 0.1.

Mythalion is a merge between Pygmalion 2 and Gryphe's MythoMax. Pygmalion 2 is the successor of the original Pygmalion models used for RP, based on Llama 2.

For Llama 2 there are no specific erotic models, so if you want Erebus specifically, where it's highly NSFW on its own, there isn't a model out there.
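The Mirostat settings mentioned here (mode 2, tau 3-5, eta 0.1) target a constant level of "surprise" instead of fixing top-k or top-p. A simplified sketch of what one Mirostat v2 step does; this is an illustration of the idea, not the llama.cpp implementation, and the function name is made up:

```python
import math
import random

def mirostat_v2_step(probs, mu, tau=5.0, eta=0.1, rng=random):
    """One simplified Mirostat-v2 sampling step.

    Tokens whose surprise (-log2 p) exceeds the cutoff mu are discarded,
    one survivor is sampled, and mu is nudged so that the observed
    surprise tracks the target tau (eta is the learning rate).
    """
    candidates = [(i, p) for i, p in enumerate(probs)
                  if p > 0 and -math.log2(p) <= mu]
    if not candidates:  # fall back to the single most likely token
        candidates = [max(enumerate(probs), key=lambda ip: ip[1])]
    total = sum(p for _, p in candidates)
    r = rng.random() * total
    for i, p in candidates:
        r -= p
        if r <= 0:
            break
    surprise = -math.log2(p)
    mu -= eta * (surprise - tau)
    return i, mu
```

A higher tau permits more surprising (more creative) tokens; mu is usually initialized to 2*tau and then self-adjusts each step.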
The GGML format has now been superseded by GGUF. In the top left, click the refresh icon next to Model.

Frequency_penalty: 0.70. A temperature of 0 means the output is deterministic.

I have yet to find a decent 70B model; the one I've tried (Airoboros) was extremely underwhelming and honestly felt dumber while being much slower. Holomax feels smarter than MythoMax to me.

Links to other models can be found in the index at the bottom. Summarizing; Vectorizing; World Info / Lorebooks; Randomization macros.

People have free access to Mytholite, which encourages a sub to MythoMax; credits from that sub would allow testing of other comparable models, letting people test out the "work horse" 13B models and experiment with large models.

As of August 21st 2023, llama.cpp no longer supports GGML models. The models are trained on top of pre-existing text and then released, often as base models (e.g. LLaMA) of specific sizes (e.g. 7B, 13B, 33B, and 70B). Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters.

Aug 27, 2023: MythoMax (L2) 13B is a merge model among large language models (LLMs), specialized for roleplaying and storywriting.

You will need to set the GPU layers count depending on how much VRAM you have. It's one step more complicated to set up than a regular model, but I saw good feedback about this combination, on par with or better than some 33B models.

Setting compress_pos_emb to 2 should give 8k context.
python -m vllm.entrypoints.api_server --model TheBloke/Mythalion-13B-AWQ --quantization awq

2023-08-19: After extensive testing, I've switched to Repetition Penalty 1.18, Range 2048, Slope 0.

We're excited for you to try it and see how it compares! As a callout, we want to thank the amazing open-source developers and creators responsible for sharing these models with the broader AI community. Finer details of the merge are available in our blogpost.

For MythoMax (and probably others like Chronos-Hermes, but I haven't tested yet): Space Alien, and raise Top-P if the rerolls are too samey; Titanic if it doesn't follow instructions well enough.

It will be MUCH, MUCH better than an ancient Erebus.

Describe the problem: I don't normally have problems with any other model; I've run stock llama-2-70b and it didn't take nearly as long as this model does. The thing is that this one only appears on very rare occasions in the list of models Horde is proposing. So my question is: is there a way to use this model without Horde?

UPDATES: 2023-08-30: new SillyTavern 1.x release, with improved Roleplay and even a proxy preset.

New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B) Winner: Mythalion-13B. New Model RP Comparison/Test (7 models tested) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K.

This means it can process and understand information on a scale that was previously unimaginable. But IME, it can't really instruct or reason to save itself.

AMD 6900 XT, RTX 2060 12GB, RTX 3060 12GB, or RTX 3080 would do the trick. MythoMax being very SFW. I'm getting around 1.8 T/s.

It handles storywriting and roleplay excellently, is uncensored, and can do most instruct tasks as well. UPDATE: There's an improved version now; check out MythoMax! A requested variant of MythoLogic-L2 and Huginn using a highly experimental tensor type merge technique.
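The repetition penalty being tuned here scales down the logits of recently generated tokens. A minimal sketch of the common CTRL-style rule, where "Range" corresponds to how far back in the history tokens are still penalized; this is an illustration, not any specific backend's code:

```python
def apply_repetition_penalty(logits, history, penalty=1.18, penalty_range=2048):
    """Penalize logits of tokens seen in the last `penalty_range` outputs.

    CTRL-style rule: positive logits are divided by the penalty and
    negative logits are multiplied by it, so a repeated token always
    becomes less likely regardless of the logit's sign.
    """
    out = list(logits)
    for tok in set(history[-penalty_range:]):
        out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out
```

Values in the 1.1-1.2 band are mild nudges; much higher values start to forbid legitimate repetition (names, pronouns), which is why careful testing across models matters.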
If you don't know what it is: an improved, potentially even perfected variant of MythoMix, my MythoLogic-L2 and Huginn merge using a highly experimental tensor type merge technique.

Pygmalion 2 (7B & 13B) and Mythalion 13B released! (News)

temperature (number): the temperature to use for sampling.

MythoMax-L2-13b runs surprisingly well on my RX 580 and overclocked processor and RAM. Gryphe_Mythomax-13b set up on RunPod on an A100. KoboldAI/LLaMA2-13B-Tiefighter-GGUF.

Mythomax-L2-13b: users of this model prefer a mix of creativity and coherence, with settings that favor slightly varied outputs without straying too far from relevant content.

The model will automatically load, and is now ready for use! If you're using the GPTQ version, you'll want a strong GPU with at least 10 GB of VRAM.

I haven't used Vicuna personally, but I second MythoMax and Nous-Hermes. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format.

When using vLLM from Python code, pass the quantization=awq parameter. For vanilla Llama 2 13B: Mirostat 2 and the Godlike preset.

EDIT: Ahh, that's from the MythoMax description. So far I think MythoMax 13B blows everything out of the water, even 30B models (Chronoboros 33B was barely coherent for me). Play with response length and context size to achieve a harmonious flow.

These are the same Repetition Penalty 1.18, Range 2048, Slope 0 settings that simple-proxy-for-tavern has been using for months.

MythoMax L2 Kimiko v2 13B - GGML. Model creator: Undi95; Original model: MythoMax L2 Kimiko v2 13B. Description: This repo contains GGML format model files for Undi95's MythoMax L2 Kimiko v2 13B.
I don't have a deep understanding, but a 20B is basically smarter, so it follows instructions better and coherency is better.

q5_K_M running on koboldcpp_nocuda. Top_a: 1.00; Top_k: 50.

You can also use AI21's Jurassic-2 model or Google PaLM; both are only recommended if you wanna generate …

I also personally like Chronos-Hermes 13B. But what comes close is indeed, as others suggested, models like Holomax.

Aug 11, 2023: mythomax-l2-13b. Both perform great in NSFW and SFW scenarios, but I can't come to a conclusion about which is better; I mean, higher parameter counts don't always mean greater, and MythoMax 13B beats some 30B models. The 30B and 22B models of everything are rather new, and a bit experimental.

Knowledge for a 13B model is mindblowing; it possesses knowledge about almost any question you ask, but it likes to talk about drug and alcohol abuse.

If you're in the mood for exploring new models, you might want to try the new Tiefighter 13B model, which is comparable if not better than MythoMax for me. But I've used none so far and am open to suggestions.

It is strongly recommended to use the text-generation-webui one-click installers unless you're sure you know how to do a manual install.

I've been using SillyTavern on my mobile with KoboldAI Horde for quite some time now, and I am happy with the model called "koboldcpp/mythomax-l2-13b.Q5_K_M".
These settings worked fine for me. However, I think there's an even better one.

"WARNING:models\Gryphe_MythoMax-L2-13b\special_tokens_map.json is different from the original LlamaTokenizer file."

Settings: Model loader: Transformers.

In the Model dropdown, choose the model you just downloaded: "LLaMA2-13B-Tiefighter-GPTQ" (or, for the other builds, L2-MythoMax22b-Instruct-Falseblock-GPTQ or MythoMax-L2-13B-GPTQ). The model will automatically load for use! Step 7: Set Custom Settings.

Griffin v2.1 @ August 31, 2023: This update features a new fine-tune for general improvements to the overall coherence and quality of AI-generated text during gameplay.

MythoMax-13B with Kimiko LoRA: MythoMax appears to be the smartest 13B so far; meanwhile, Kimiko should steer it toward your special needs.

The MythoMax settings hold the key to unleashing your creativity. I've recently tested a whole load of models with these settings: Big Model Comparison/Test (13 models tested) : LocalLLaMA.

With 13B parameters, MythoMax boasts a brain bigger than any other language model currently in existence.

By the way, my repetition/looping issues have completely disappeared since using MythoMax-L2-13B with SillyTavern's "Deterministic" generation settings preset and the new "Roleplay" instruct mode preset with these settings and the adjusted repetition penalty.

Lastly, a brand-new model, Mistral 7B, came out a couple of days ago; it has beaten Llama 2 13B in all benchmarks, and Llama 1 33B in some math and coding benchmarks.

Click the refresh icon next to Model in the top left. Model MythoMax L2 13B available again.
Unleashing the Bard Within: craft captivating stories. Right now, my top three are probably Xwin-Mlewd 13B, my old faithful MythoMax 13B, and a hot new model in town: MergeMonster from Gryphe (who also made MythoMax), which is based on a new dynamic merging system where software selects from various possible models and datasets to achieve a goal (reduced censorship, fewer GPT-isms, etc.).

To download from a specific branch, enter for example TheBloke/MythoMax-L2-Kimiko-v2-13B-GPTQ:main. On the command line, you can download multiple files at once.

Stable Diffusion was a lot easier to get running. You are responsible for how you use Synthia. Maybe give that a try, too.

Stock text-generation-webui setup with the base Transformers model.

It's a mix of Nous-Hermes (very good) + Chronos (to make it more creative, in theory). At this point they can be thought of as completely independent programs.

Tiefighter: a new and excellent 13B-parameter model. It has been fine-tuned for instruction following as well as long-form conversations. Strangely, it was a net gain in performance.

I'm always using SillyTavern with its "Deterministic" generation settings preset and the new "Roleplay" instruct mode preset with these settings.

When using vLLM as a server, pass the --quantization awq parameter, for example: python3 -m vllm.entrypoints.api_server --model TheBloke/Mythalion-13B-AWQ --quantization awq

SillyTavern is a fork of TavernAI 1.2.8, which is under more active development and has added many major features.
I now consider vicuna-13B-v1.5-16K one of my favorites, because the 16K context is outstanding and it even works with complex character cards.

Mythalion is an official mix of MythoMax and Pygmalion made by the PygmalionAI team.

Tried Pyg 13B 2 (Q5_K_M running via koboldcpp, using the recommended settings found on Pygmalion's website).

Run the following cell; it takes ~5 min (you may need to confirm to proceed by typing "Y"). In Chat settings, set Instruction Template: Custom.

Documentation on installing and using vLLM can be found here. I updated my recommended proxy replacement settings accordingly (see above link).

I use Gryphe/MythoMax-L2-13b with KoboldAI, and no matter what I try it insists on using words like "member," "womanhood," "delightfully supple rear-end"... like, wtf. It sometimes gives words like cock and pussy, but it's almost always something like "member" or "sensitive feminine folds." Kinda jank, but works as a temporary fix! EDIT 3: I am dumb.

Important note regarding GGML files: GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It is a replacement for GGML, which is no longer supported by llama.cpp.

Suddenly, we're getting used to Mistral 7Bs giving those 13B models a run for their money, and then Yi-34B 200K and Yi-34B Chat appear out of nowhere.

Also fixed MythoMax-L2-13B's "started talking/acting as User" issue as well.
Engage "Do Sample" and "Add BOS Token" for a seamless generation flow.

Regarding 2: MythoMax is a Llama 2 model, so it should have 4k context by default (at compress_pos_emb = 1).

MythoMax-L2-13B Q8_0: MythoMax-L2-13B-GGUF, WebUI. Then you can download any individual model file to the current directory, at high speed, with a command like this: huggingface-cli download TheBloke/MythoMax-L2-Kimiko-v2-13B-GGUF mythomax-l2-kimiko

Could someone please screencap their settings page for any good RP models? Because I understand the configuration part zero. Thank you for the help.

Use Kobold Horde with Xwin 70B and Emerhyst 20B. Its prose is quite nice, even its dialogue. It's very good at writing, and it follows instructions very well.

When I use KoboldCpp (Lite), ContextShifting works near flawlessly. When I use SillyTavern, despite nothing changing, ContextShifting doesn't kick in and it simply processes the entire prompt. And it's only this model that does it, which is very strange.

MythoMax-L2-Kimiko-v2-13B-GGML, developed by TheBloke, is part of the MythoMax series, which is an improved variant of the MythoMix series.

Quantized models are available from TheBloke: GGML - GPTQ (you're the best!).

One moment, we seem to be agreed MythoMax is the bee's knees; then suddenly we've got Mythalion and a bunch of REMM variants. I found Goliath incoherent and incapable of story logic compared to lzlv 70B, or heck, even Xwin 13B (at least at my complexity of storytelling: multi-character, detailed sci-fi and fantasy settings with complex sex scenarios).

The Bloke has quantized a great merge based on Mistral called "Synthia-7B-v1.3" that is SUPER impressive for a 7B model.

To use this language model locally with Kobold.cpp, download the following GGML (Q5_K_M). Aug 31, 2023: For beefier models like the MythoMax-L2-13B-GPTQ, you'll need more powerful hardware.
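compress_pos_emb implements linear RoPE scaling: every token position is divided by the compression factor before the rotary embedding, so twice as many tokens fit into the position range the model was trained on. A tiny illustration of that arithmetic (the function is illustrative, not webui code):

```python
def scaled_position(position, compress_pos_emb=1):
    """Linear RoPE scaling as done by compress_pos_emb: positions are
    divided by the compression factor before computing the embedding."""
    return position / compress_pos_emb

TRAINED_CTX = 4096  # Llama 2's native context window

# With compress_pos_emb=2, the last token of an 8k prompt is embedded
# at effective position 4095.5, still inside the trained 0-4095 range.
last = scaled_position(8191, compress_pos_emb=2)
```

The trade-off is resolution: positions get packed twice as densely, which usually costs a little quality compared to a model natively trained at the longer context.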
I recommend using the huggingface-hub Python library: pip3 install "huggingface-hub>=0.17"

Congrats to the Pygmalion team; their previous models never worked for me, but this one finally does and is a real winner in my opinion! Kudos also for providing their own official SillyTavern setup recommendations for this model. My experience was that both the Roleplay preset and their settings worked equally well.

In my experience I have had extremely immersive roleplay with Mythalion 13B (8tgi-fp16, 8k context size) from Kobold Horde (with an average response time of 13-20 seconds and no more than 50), and I must admit that it knows how to recognize the anatomy of the characters in a decent way without the need to use formats such as Ali:Chat + PList.

Serving this model from vLLM. New Model RP Comparison/Test (7 models tested) : LocalLLaMA.

For 13B, on its Hugging Face page, there are downloads for both a context and an instruct template.

The model will automatically load for use! Step 7: Set Custom Settings. This is the best 13B I've ever used and tested. Temperature default value: 0.7.

Dec 27, 2023: MythoMax 13B is your personal myth maker. Weave epic sagas of gods and heroes, craft enchanting folklore, or spin modern myths that resonate with our times.

Also, this model (LLaMA2-13B-Erebus-v3-GGUF, or the Mistral version, which is mostly faster and smaller in size, but you need to check on your PC) is very good, because it was trained especially on erotic books and novels.

Although I am curious about Mythical Destroyer, as they put in a little coding and instruct data to try to increase its logic/coherency (MythoMax's coherency isn't great). If only there was a way to get shorter outputs; it would be great for RP.

Aug 30, 2023: Mastering the Art of Settings. As an alternative, I'd highly recommend Noromaid-Mixtral-8x7b-instruct v3.
According to this chart, it seems like q5_K_M GGML loses a negligible amount of quality compared to the higher-bit quants, with much better performance. Import and use them when using 13B; they perform much better on average. Once it's finished, it will say "Done".

"WARNING:models\Gryphe_MythoMax-L2-13b\tokenizer_config.json is different from the original LlamaTokenizer file. It is either customized or outdated."

Decent, out-of-the-box RP mixes and fine-tunes of that surely won't …

Packaging also encourages using more models, and the higher the discount, the more that is encouraged instead of punished.

I was hoping to try Airoboros 33B, WizardLM 30B, Nous Hermes L2 13B.

The real MythoMax 33B does not exist, because MythoMax is a mix of various Llama 2 fine-tunes, and there is no 33B version of those ingredients. Whoever made the "Mythomax L2 33B" is an asshole hijacking the name, fully knowing he is tricking people into thinking they are getting a 33B version of MythoMax, while in reality they are getting something else.

Things move fast and the focus is on 7B or Mixtral, so recent 7Bs now are much better than most of the popular 13Bs. MythoMax-L2-13b q5 GGUF ** recommended most.

Note that you do not need to, and should not, set manual GPTQ parameters any more.

LLaMA 2 Holomax 13B - the writer's version of MythoMax. The main difference with MythoMix is that I allowed more of Huginn to intermingle with the single tensors located at the front and end of the model, resulting in …

MythoMax is good; it was a favorite for a bit for prose, RP, and creative writing tasks.

cpu-memory in MiB: 0.

EDIT: Specifically, I am using TheBloke's mythomax-l2-13b.q6_K.gguf. YMMV, of course, but I wanted to report back my latest experiences and conclusions.
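The "negligible loss" claim can be illustrated with a toy experiment: uniformly quantizing random weights at 5 vs 8 bits and comparing the round-trip error. Note this is plain uniform quantization for illustration, not the actual k-quant (q5_K_M) scheme, which uses per-block scales:

```python
import random

def quantize_dequantize(values, bits):
    """Round values to a uniform grid with 2**bits levels over their range."""
    lo, hi = min(values), max(values)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    return [lo + round((v - lo) / scale) * scale for v in values]

def rms_error(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

rng = random.Random(0)
weights = [rng.gauss(0, 1) for _ in range(10_000)]
err5 = rms_error(weights, quantize_dequantize(weights, 5))
err8 = rms_error(weights, quantize_dequantize(weights, 8))
```

Each extra bit halves the grid spacing, so the 8-bit error is roughly 8x smaller than the 5-bit error; the point of the chart is that even the 5-bit error is already small relative to what the model can tolerate.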
(I.e., it might summarize that you two spent the rest of the day talking and then, afterwards, you went to your chambers.) Or it will put in a SUDDENLY and cause something dramatic to happen.

Settings: Model loader: Transformers. Aug 11, 2023: MythoMax-L2-13B-GPTQ.

Explore ancient Greek epics, delve into Norse sagas, or invent your own mythologies; the possibilities are as boundless as your imagination.

Xwin, MythoMax (and its variants: Mythalion, MythoMax-Kimiko, etc.), Athena, and many of Undi95's merges all seem to perform well.

These base models are simple and often referred to as "glorified autocomplete" or "text completion" models. Effectively, Noromaid 20B is smarter and heftier. In terms of models, there's nothing making waves at the moment, but there are some very solid 13B options.

vicuna-13B-v1.5-16K is one of my favorites because the 16K context is outstanding and it even works with complex character cards! I've done a lot of testing with repetition penalty values 1.1, 1.15, 1.18, and 1.2 across 15 different LLaMA (1) and Llama 2 models.