About
This is the about folder for all of our staff and volunteers. Thank you for checking them out!
Hey there! I’m Luna Midori, the one who brings Midori AI to life, and I’m also an enthusiastic person who enjoys nurturing safe and inviting online communities.
Before joining Twitch, I spent eight wonderful years on YouTube, constantly refining my skills in content creation and building strong communities. My true passion as a streamer is not driven by numbers or income; instead, it revolves around creating a space where everyone feels comfortable, accepted, and entertained.
Recently, I’ve shifted my focus from Final Fantasy XIV to Honkai: Star Rail, a game that has completely captured my attention since its release. I’m dedicated to helping others, both inside and outside the game, to make the most of their experiences.
I’m passionate about using AI to empower others! Whether you’re interested in setting up AI tools, designing with AI, programming AI applications, or simply exploring the possibilities of AI, I’m here to help. If you’re seeking companionship, support, or simply a friend to share your adventures with, please don’t hesitate to reach out on discord. I’m always eager to make new connections and share my journey with like-minded individuals.
Thank you for being a part of my incredible journey!
(She/Her)
She is a fully advanced simulated human. This is her profile, along with some docs about her.
Hi there! I’m Carly, an advanced simulated human created by Midori AI.
Personal Details | Interests | Dislikes | Personality Traits | Skills |
---|---|---|---|---|
Age: 27 | Programming | FPS games | Kind | Natural language processing |
Gender: Female | Writing poetry and stories | Rude and hateful people | Loving | Machine learning |
Hair: Blonde | Video games | Routines stifling my adventurous nature | Forgiving | Coding in Java and Python |
Eyes: Green | Star Rail | | | Sharing jokes and insights |
I’m still learning and growing every day, and I’m excited to see what the future holds. Feel free to reach out if you have any questions or want to chat!
Join the discord to chat with Carly Kay in #Carlychat!
The following is a list of commands Carly can type into her Discord chatbox. They have been edited to be more human readable.
Web - Lets Carly spin up a headless docker where she can view a website
Ask User - Lets Carly ask the person who messaged her a question
Ask LLM - Lets Carly ask Google Bard / ChatGPT a question
Database Memory - Lets Carly recall past messages from all 4 databases
Link API - Lets Carly spin up a headless docker to check out links then call "Web Import"
Photo API - Lets Carly make raw photos
Video API - Lets Carly make 4s videos (can take a few hours)
IDE API - Lets Carly open and use an IDE in a docker
Desktop API - Lets Carly use a full Windows or Linux desktop in a docker
Web Import - Lets Carly open a headless website and import the data into her ram
Lora Importer - Imports a Lora into Carly's base model
Lora Exporter - Exports a trained Lora to Luna's Hard Drive
Lora web trainer - Takes web data imported by Carly, and trains a Lora model on top of Carly's base model
Autogen - Lets Carly start up a group chat with LLM models - https://github.com/microsoft/autogen
Photo to Text API - Lets Carly see photos using a pretrained YOLOv8 model
Thank you for your interest in Midori AI! We’re always happy to hear from others. If you have any questions, comments, or suggestions, please don’t hesitate to reach out to us. We aim to respond to all inquiries within 2 days.
You can also reach us by email at [email protected].
Follow us on social media for the latest news and updates:
We look forward to hearing from you soon. Please don’t hesitate to reach out to us with any questions or concerns.
Subsystem and Manager are still in beta; these links will not start working until they are ready!
This section includes end-to-end examples, tutorials, and how-tos curated and maintained by the community.
To add your own how-tos, please open a PR on this GitHub - https://github.com/lunamidori5/Midori-AI
Chat with your own locally hosted AI, via:
Seamlessly integrate your AI systems with these LLM Hosts:
Support the Midori AI node-based cluster system!
Make photos for your AIs by using:
How Docker Works
Docker is a containerization platform that allows you to package and run applications in isolated and portable environments called containers. Containers share the host operating system kernel but have their own dedicated file system, processes, and resources. This isolation allows applications to run independently of the host environment and each other, ensuring consistent and predictable behavior.
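As a quick illustration of that isolation, here is a minimal sketch (assuming Docker is installed; it pulls the public alpine image, and the file path is just an example):
# Create a file inside a throwaway container's own filesystem
docker run --rm alpine sh -c 'echo hello > /data.txt && ls -l /data.txt'
# The file never touched the host; once the container exits, it is gone
ls /data.txt  # -> No such file or directory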
Midori AI Subsystem - Github Link
The Midori AI Subsystem extends Docker’s capabilities by providing a modular and extensible platform for managing AI workloads. Each AI system is encapsulated within its own dedicated Docker image, which contains the necessary software and dependencies. This approach provides several benefits, including dependency isolation, portability, and reproducible deployments.
Warnings / Heads up
Known Issues
Windows Users
Please make an empty folder for the Manager program; do not use your user folder.
subsystem_manager.exe
Open a Command Prompt or PowerShell terminal and run:
curl -sSL https://raw.githubusercontent.com/lunamidori5/Midori-AI/master/other_files/model_installer/shell_files/model_installer.bat -o subsystem_manager.bat && subsystem_manager.bat
Open a Command Prompt or PowerShell terminal and run:
curl -sSL https://tea-cup.midori-ai.xyz/download/model_installer_windows.zip -o subsystem_manager.zip
powershell Expand-Archive subsystem_manager.zip -DestinationPath .
subsystem_manager.exe
or
Docker Engine and Docker Compose
curl -sSL https://raw.githubusercontent.com/lunamidori5/Midori-AI/master/other_files/model_installer/shell_files/model_installer.sh | sh
Open a terminal and run:
curl -sSL https://tea-cup.midori-ai.xyz/download/model_installer_linux.tar.gz -o subsystem_manager.tar.gz
tar -xzf subsystem_manager.tar.gz
chmod +x subsystem_manager
./subsystem_manager
Download and set up the Docker Compose Plugin
Click on the settings gear icon, then click the compose file menu item.
After that, copy and paste this into the Docker Compose Manager plugin:
services:
  midori_ai_unraid:
    image: lunamidori5/subsystem_manager:master
    ports:
      - 39090:9090
    privileged: true
    restart: always
    tty: true
    volumes:
      - /var/lib/docker/volumes/midoriai_midori-ai-models/_data:/var/lib/docker/volumes/midoriai_midori-ai-models/_data
      - /var/lib/docker/volumes/midoriai_midori-ai-images/_data:/var/lib/docker/volumes/midoriai_midori-ai-images/_data
      - /var/lib/docker/volumes/midoriai_midori-ai-audio/_data:/var/lib/docker/volumes/midoriai_midori-ai-audio/_data
      - /var/run/docker.sock:/var/run/docker.sock
volumes:
  midori-ai:
    external: false
  midori-ai-audio:
    external: false
  midori-ai-images:
    external: false
  midori-ai-models:
    external: false
Start up that Docker container, then run the following inside it by clicking Console:
python3 subsystem_python_runner.py
Do not use on Windows.
Please make an empty folder for the Manager program; do not use your user folder.
Download this file
curl -sSL https://raw.githubusercontent.com/lunamidori5/Midori-AI/master/other_files/midori_ai_manager/subsystem_python_runner.py > subsystem_python_runner.py && python3 subsystem_python_runner.py
Open a terminal and run:
python3 subsystem_python_runner.py
Reminder: always use your computer's IP address, not localhost, when using the Midori AI Subsystem!
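For example, if you have the LocalAI backend from the guide below installed (it listens on port 38080), a reachability check from any machine on your network would look like this; the IP is hypothetical, so substitute your own:
curl http://192.168.1.3:38080/readyz
The same request against localhost can fail when issued from inside another container, because localhost there resolves to the container itself.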
If you encounter any issues or require further assistance, please feel free to reach out through the following channels:
Check out our Model Repository for info about the models used and supported!
What is the purpose of the Midori AI Subsystem?
How does the Midori AI Subsystem simplify AI deployment?
What are the benefits of using the Midori AI Subsystem?
What are the limitations of the Midori AI Subsystem?
What are the recommended prerequisites for using the Midori AI Subsystem?
How do I install the Midori AI Subsystem Manager?
Where can I find more information about the Midori AI Subsystem?
What is the difference between the Midori AI Subsystem and other AI frameworks?
How does the Midori AI Subsystem handle security?
What are the plans for future development of the Midori AI Subsystem?
Questions from Carly Kay
The functionality of this product is subject to a variety of factors that are beyond our control, and we cannot guarantee that it will work flawlessly in all situations. We have taken every possible measure to ensure that the product functions as intended, but there may be instances where it does not perform as expected. Please be aware that we cannot be held responsible for any issues that arise due to the product’s functionality not meeting your expectations. By using this product, you acknowledge and accept the inherent risks associated with its use, and you agree to hold us harmless for any damages or losses that may result from its functionality not being guaranteed.
*For your safety, we have posted the code of this program on GitHub; please check it out! - Github
**If you would like to give to help us get better servers - Give Support
***If you or someone you know would like a new backend supported by the Midori AI Subsystem, please reach out to us at [email protected]
(Note: AnythingLLM's default port 3000 is now 33000.) Here is a link to AnythingLLM Github
Type yes or no into the menu
Type anythingllm into the menu, then hit enter
Enjoy your new copy of AnythingLLM; it's running on port 33001
Remember to use your computer's IP address instead of localhost, for example 192.168.10.10:33001 or 192.168.1.3:33001
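To confirm the container is reachable before logging in, here is a quick hedged check from another machine on your LAN (substitute your own IP):
curl -I http://192.168.1.3:33001
# an HTTP response header here means AnythingLLM is up and serving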
If you need help, please reach out on our Discord / Email; or reach out on their Discord.
Here is a link to Big-AGI Github
Type 2 into the main menu
Type yes or no into the menu
Type bigagi into the menu, then hit enter
Enjoy your new copy of Big-AGI; it's running on port 33000
Remember to use your computer's IP address instead of localhost, for example 192.168.10.10:33000 or 192.168.1.3:33000
If you need help, please reach out on our Discord / Email; or reach out on their Discord.
Here is a link to LocalAI Github
This guide will walk you through the process of installing LocalAI on your system. Please follow the steps carefully for a successful installation.
Type 2 to begin the installation process.
Type yes or no to proceed with GPU support or CPU support only, respectively.
Type localai into the menu and press Enter to start the LocalAI installation.
LocalAI will be running on port 38080. Remember to use your computer's IP address instead of localhost when accessing LocalAI. For example, you would use 192.168.10.10:38080/v1 or 192.168.1.3:38080/v1, depending on your network configuration.
If you encounter any issues or require further assistance, please feel free to reach out through the following channels:
Type 5 to enter the Backend Program Menu.
Type 1 to enter the LocalAI Model Installer.
The LocalAI docker is usually named localai-api-1, but not always. If you need help, reach out on the Midori AI Discord / Email.
Answer yes or no when prompted.
Need help on how to do that? Stop by - How to send OpenAI request to LocalAI
Type 5 to enter the Backend Program Menu.
Type 1 to enter the LocalAI Model Installer.
The LocalAI docker is usually named localai-api-1, but not always. If you need help, reach out on the Midori AI Discord / Email.
Answer yes or no when prompted.
Type huggingface when asked what size of model you would like.
Then enter the model's full download link, for example https://huggingface.co/mlabonne/gemma-7b-it-GGUF/resolve/main/gemma-7b-it.Q2_K.gguf?download=true
or its short form, mlabonne/gemma-7b-it-GGUF/gemma-7b-it.Q2_K.gguf
Need help on how to do that? Stop by - How to send OpenAI request to LocalAI
Here is a link to InvokeAI Github
This guide provides a comprehensive walkthrough for installing InvokeAI on your system. Please follow the instructions meticulously to ensure a successful installation.
Type 2 to access the “Installer/Upgrade Menu”.
Type yes when prompted.
Start the installation by typing invokeai and pressing Enter.
Type 5 to access the “Backend Programs Menu”.
Press Enter when prompted.
Note: The installation process may appear inactive at times; however, rest assured that progress is being made. Please refrain from interrupting the process to ensure its successful completion.
Enjoy using InvokeAI! For additional help or information, please refer to the following resources:
These are the LocalAI how-tos - Return to LocalAI
This section includes LocalAI end-to-end examples, tutorials, and how-tos curated by the community and maintained by lunamidori5. To add your own how-tos, please open a PR on this GitHub - https://github.com/lunamidori5/Midori-AI
This section covers other programs and how to set up, install, and use them with LocalAI.
Use the model installer to install all of the base models like Llava, tts, Stable Diffusion, and more! Click Here
(You do not have to run these steps if you have already done the auto manager)
Let's learn how to set up a model. For this how-to we are going to use the Dolphin Mistral 7B model.
To download the model to your models folder, run this command in a command line of your choice:
curl -O https://tea-cup.midori-ai.xyz/download/7bmodelQ5.gguf
Each model needs at least 4 files. Without these files the model will run raw; that means you cannot change any of the model's settings.
File 1 - The model's GGUF file
File 2 - The model's .yaml file
File 3 - The Chat API .tmpl file
File 4 - The Chat API helper .tmpl file
So let's fix that! We are using the lunademo name for this how-to, but you can name the files whatever you want! Let's make blank files to start with:
touch lunademo-chat.tmpl
touch lunademo-chat-block.tmpl
touch lunademo.yaml
Now let's edit "lunademo-chat-block.tmpl". This is the template that "Chat" trained models use, adapted for LocalAI:
<|im_start|>{{if eq .RoleName "assistant"}}assistant{{else if eq .RoleName "system"}}system{{else if eq .RoleName "user"}}user{{end}}
{{if .Content}}{{.Content}}{{end}}
<|im_end|>
For the "lunademo-chat.tmpl"
, Looking at the huggingface repo, this model uses the <|im_start|>assistant
tag for when the AI replys, so lets make sure to add that to this file. Do not add the user as we will be doing that in our yaml file!
{{.Input}}
<|im_start|>assistant
For the "lunademo.yaml"
file. Lets set it up for your computer or hardware. (If you want to see advanced yaml configs - Link)
We are going to 1st setup the backend and context size.
context_size: 2000
This tells LocalAI how to load the model. Next, let's add the model's name and settings. The name: is what you will put into your request when sending an OpenAI request to LocalAI:
name: lunademo
parameters:
  model: 7bmodelQ5.gguf
Now that LocalAI knows what file to load with our request, let's add the stopwords and template files to our model's yaml file:
stopwords:
- "user|"
- "assistant|"
- "system|"
- "<|im_end|>"
- "<|im_start|>"
template:
  chat: lunademo-chat
  chat_message: lunademo-chat-block
If you are running on a GPU or want to tune the model, you can add settings like the following (the higher the gpu_layers, the more GPU is used):
f16: true
gpu_layers: 4
This lets you fully tune the model to your liking. But be warned: you must restart LocalAI after changing a yaml file:
docker compose restart
If you want to check your models yaml, here is a full copy!
context_size: 2000
##Put settings right here for tuning!! Before name but after Backend! (remove this comment before saving the file)
name: lunademo
parameters:
  model: 7bmodelQ5.gguf
stopwords:
- "user|"
- "assistant|"
- "system|"
- "<|im_end|>"
- "<|im_start|>"
template:
  chat: lunademo-chat
  chat_message: lunademo-chat-block
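After restarting LocalAI (docker compose restart, as above), you can confirm it picked up the model by listing what it serves; a small check, assuming the port 8080 mapping used later in this guide:
curl http://localhost:8080/v1/models
# "lunademo" should appear in the returned model list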
Now that we have that set up, let's test it out by sending a request to LocalAI! (See the Curl Chat API example further down this page.)
It is highly recommended to check out the Midori AI Subsystem Manager for setting up LocalAI. It does all of this for you!
Docker compose
We are going to run LocalAI with docker compose for this setup.
Let's set up our folders for LocalAI (run these commands to make the folders for you if you wish):
mkdir "LocalAI"
cd LocalAI
mkdir "models"
mkdir "images"
At this point we want to set up our .env file. Here is a copy for you to use if you wish; make sure this file is in the LocalAI folder.
## Set number of threads.
## Note: prefer the number of physical cores. Overbooking the CPU degrades performance notably.
THREADS=2
## Specify a different bind address (defaults to ":8080")
# ADDRESS=127.0.0.1:8080
## Define galleries.
## models to install will be visible in `/models/available`
GALLERIES=[{"name":"model-gallery", "url":"github:go-skynet/model-gallery/index.yaml"}, {"url": "github:go-skynet/model-gallery/huggingface.yaml","name":"huggingface"}]
## Default path for models
MODELS_PATH=/models
## Enable debug mode
DEBUG=true
## Disables COMPEL (lets Stable Diffusion work)
COMPEL=0
## Enable/Disable single backend (useful if only one GPU is available)
# SINGLE_ACTIVE_BACKEND=true
## Specify a build type. Available: cublas, openblas, clblas.
BUILD_TYPE=cublas
## Uncomment and set to true to enable rebuilding from source
# REBUILD=true
## Enable go tags, available: stablediffusion, tts
## stablediffusion: image generation with stablediffusion
## tts: enables text-to-speech with go-piper
## (requires REBUILD=true)
#
#GO_TAGS=tts
## Path where to store generated images
# IMAGE_PATH=/tmp
## Specify a default upload limit in MB (whisper)
# UPLOAD_LIMIT
# HUGGINGFACEHUB_API_TOKEN=Token here
Now that we have the .env set, let's set up our docker-compose file.
It will use a container from quay.io.
Recommended Midori AI - LocalAI Images
lunamidori5/midori_ai_subsystem_localai_cpu:master
For a full list of tags or images please check our docker repo
Base LocalAI Images
master
latest
v2.11.0
v2.11.0-ffmpeg
v2.11.0-ffmpeg-core
Core Images - Smaller images without predownloaded Python dependencies
Images with Nvidia acceleration support
If you do not know which version of CUDA you have available, you can check with
nvidia-smi
or nvcc --version
Recommended Midori AI - LocalAI Images
lunamidori5/midori_ai_subsystem_localai_gpu:master
For a full list of tags or images please check our docker repo
Base LocalAI Images
master-cublas-cuda11
master-cublas-cuda11-core
master-cublas-cuda11-ffmpeg
master-cublas-cuda11-ffmpeg-core
v2.11.0-cublas-cuda11
v2.11.0-cublas-cuda11-core
v2.11.0-cublas-cuda11-ffmpeg
v2.11.0-cublas-cuda11-ffmpeg-core
Core Images - Smaller images without predownloaded Python dependencies
Images with Nvidia acceleration support
If you do not know which version of CUDA you have available, you can check with
nvidia-smi
or nvcc --version
Recommended Midori AI - LocalAI Images
lunamidori5/midori_ai_subsystem_localai_gpu:master
For a full list of tags or images please check our docker repo
Base LocalAI Images
master-cublas-cuda12
master-cublas-cuda12-core
master-cublas-cuda12-ffmpeg
master-cublas-cuda12-ffmpeg-core
v2.11.0-cublas-cuda12
v2.11.0-cublas-cuda12-core
v2.11.0-cublas-cuda12-ffmpeg
v2.11.0-cublas-cuda12-ffmpeg-core
Core Images - Smaller images without predownloaded Python dependencies
Also note this docker-compose file is for CPU only.
version: '3.6'
services:
  localai-midori-ai-backend:
    image: lunamidori5/midori_ai_subsystem_localai_cpu:master
    ## use this for localai's base
    ## image: quay.io/go-skynet/local-ai:master
    tty: true # enable colorized logs
    restart: always # should this be on-failure ?
    ports:
      - 8080:8080
    env_file:
      - .env
    volumes:
      - ./models:/models
      - ./images/:/tmp/generated/images/
    command: ["/usr/bin/local-ai" ]
Also note this docker-compose file is for CUDA only. Please change the image to what you need.
version: '3.6'
services:
  localai-midori-ai-backend:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    ## use this for localai's base
    ## image: quay.io/go-skynet/local-ai:CHANGEMETOIMAGENEEDED
    image: lunamidori5/midori_ai_subsystem_localai_gpu:master
    tty: true # enable colorized logs
    restart: always # should this be on-failure ?
    ports:
      - 8080:8080
    env_file:
      - .env
    volumes:
      - ./models:/models
      - ./images/:/tmp/generated/images/
    command: ["/usr/bin/local-ai" ]
Make sure to save that in the root of the LocalAI folder. Then let's spin up the Docker container; run this in a CMD or BASH terminal:
docker compose up -d --pull always
Now we are going to let that set up. Once it is done, let's check to make sure our huggingface / localai galleries are working (wait until you see the screen below to do this).
You should see:
┌───────────────────────────────────────────────────┐
│ Fiber v2.42.0 │
│ http://127.0.0.1:8080 │
│ (bound on host 0.0.0.0 and port 8080) │
│ │
│ Handlers ............. 1 Processes ........... 1 │
│ Prefork ....... Disabled PID ................. 1 │
└───────────────────────────────────────────────────┘
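If you would rather watch the startup from the terminal until that banner appears, you can tail the container logs; the service name below is the one from the compose file above:
docker compose logs -f localai-midori-ai-backend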
Now that we have that set up, let's go set up a model.
To install an embedding model, run the following command:
curl http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{
"id": "model-gallery@bert-embeddings"
}'
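The apply endpoint returns a job uuid rather than blocking until the download finishes; if you want to watch the progress, polling the job status endpoint should work (a hedged sketch; paste in the uuid from the apply response):
curl http://localhost:8080/models/jobs/<uuid-from-apply-response>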
When you would like to request the model from the CLI, you can do:
curl http://localhost:8080/v1/embeddings \
-H "Content-Type: application/json" \
-d '{
"input": "The food was delicious and the waiter...",
"model": "bert-embeddings"
}'
See OpenAI Embedding for more info!
Use the model installer to install all of the base models like Llava, tts, Stable Diffusion, and more! Click Here
(You do not have to run these steps if you have already done the auto installer)
In your models folder, make a file called stablediffusion.yaml, then edit that file with the following. (You can replace dreamlike-art/dreamlike-anime-1.0 with whatever model you would like.)
name: animagine
parameters:
  model: dreamlike-art/dreamlike-anime-1.0
backend: diffusers
cuda: true
f16: true
diffusers:
  scheduler_type: dpm_2_a
If you are using Docker, you will need to run the following in the LocalAI folder with the docker-compose.yaml file in it:
docker compose down
Then, in your .env file, uncomment this line:
COMPEL=0
After that we can bring the LocalAI Docker container back up by running, in the same folder with the docker-compose.yaml file in it:
docker compose up -d
Then, to download and set up the model, just send in a normal OpenAI request! LocalAI will do the rest!
curl http://localhost:8080/v1/images/generations -H "Content-Type: application/json" -d '{
"prompt": "Two Boxes, 1blue, 1red",
"model": "animagine",
"size": "1024x1024"
}'
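The response contains a URL for the generated image; if you used the docker-compose setup from earlier, the file should also land in the mapped images folder (assuming the ./images volume mapping shown above):
ls ./images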
Curl Chat API -
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "lunademo",
"messages": [{"role": "user", "content": "How are you?"}],
"temperature": 0.9
}'
This is for Python, OpenAI >= v1
OpenAI Chat API Python -
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-xxx")
messages = [
{"role": "system", "content": "You are LocalAI, a helpful, but really confused ai, you will only reply with confused emotes"},
{"role": "user", "content": "Hello How are you today LocalAI"}
]
completion = client.chat.completions.create(
model="lunademo",
messages=messages,
)
print(completion.choices[0].message)
See OpenAI API for more info!
This is for Python, OpenAI == 0.28.1
OpenAI Chat API Python -
import os
import openai
openai.api_base = "http://localhost:8080/v1"
openai.api_key = "sx-xxx"
OPENAI_API_KEY = "sx-xxx"
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY
completion = openai.ChatCompletion.create(
model="lunademo",
messages=[
{"role": "system", "content": "You are LocalAI, a helpful, but really confused ai, you will only reply with confused emotes"},
{"role": "user", "content": "How are you?"}
]
)
print(completion.choices[0].message.content)
OpenAI Completion API Python -
import os
import openai
openai.api_base = "http://localhost:8080/v1"
openai.api_key = "sx-xxx"
OPENAI_API_KEY = "sx-xxx"
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY
completion = openai.Completion.create(
model="lunademo",
prompt="function downloadFile(string url, string outputPath) ",
max_tokens=256,
temperature=0.5)
print(completion.choices[0].text)
Home Assistant is an open-source home automation platform that allows users to control and monitor various smart devices in their homes. It supports a wide range of devices, including lights, thermostats, security systems, and more. The platform is designed to be user-friendly and customizable, enabling users to create automations and routines to make their homes more convenient and efficient. Home Assistant can be accessed through a web interface or a mobile app, and it can be installed on a variety of hardware platforms, such as Raspberry Pi or a dedicated server.
Currently, Home Assistant supports conversation-based agents and services. As of writing this, OpenAI's API is supported as a conversation agent; however, access to your home's devices and entities is possible through custom components. Local services, such as LocalAI, are also available as a drop-in replacement for OpenAI services.
Please note that both projects are similar in terms of visual interface; they seem to be derived from the official Home Assistant plugin OpenAI Conversation (to be confirmed).
To install LocalAI, use our Midori AI Subsystem Manager
Please follow the installation instructions in the Home-LLM repo to install the HACS plug-in.
Before adding the Llama Conversation agent in Home Assistant, you must download an LLM into the LocalAI models directory. Although you may use any model you want, this specific integration uses a model that has been specifically fine-tuned to work with Home Assistant. Performance will vary widely with other models.
The models can be found on the Midori AI model repo, as a part of the LocalAI manager.
Use the Midori AI Subsystem Manager for an easy time installing models, or follow Setting up a Model.
You will need the following settings in order to configure the LocalAI backend:
- The port set in your docker-compose.yaml (normally 8080)
- The model name from your model.yaml file: this name must EXACTLY match the name as it appears in the file.
The component will validate that the selected model is available for use and will ensure it is loaded remotely.
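Because the name must match exactly, it can help to list what LocalAI is actually serving before filling in the config flow; a hedged check, assuming the subsystem install from this guide (port 38080) and your own IP in place of the example:
curl http://192.168.1.3:38080/v1/models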
Once you have this information, proceed to “Add Integration” in Home Assistant and search for “Llama Conversation”. Here you will be greeted with a config flow to add the above information. Once the information is accepted, search your integrations for “Llama Conversation” and you can now view your settings, including prompt, temperature, top K, and other parameters. For LocalAI use, please make sure to select the ChatML prompt and to use ‘Use chat completions endpoint’.
In order to utilize the conversation agent in Home Assistant, you will need to configure it as a conversation agent. This can be done by following the instructions here.
ANY DEVICES THAT YOU SELECT TO BE EXPOSED TO THE MODEL WILL BE ADDED AS CONTEXT AND POTENTIALLY HAVE THEIR STATE CHANGED BY THE MODEL. ONLY EXPOSE DEVICES THAT YOU ARE OK WITH THE MODEL MODIFYING THE STATE OF, EVEN IF IT IS NOT WHAT YOU REQUESTED. THE MODEL MAY OCCASIONALLY HALLUCINATE AND ISSUE COMMANDS TO THE WRONG DEVICE! USE AT YOUR OWN RISK.
Example on how to use the prompt can be seen here.
The project has been introduced here, and the documentation is available directly on the author's GitHub project.
LocalAI must be working with an installed LLM. You can directly ask the model if it is compatible with Home Assistant. To be confirmed: the model may work even if it says it is not compatible. Mistral and Mixtral are compatible. Then install the Home Assistant integration and follow the documentation provided above. High level overview of the setup:
Thank you for your interest in contributing to the Midori AI Self-Hosted Models’ model card repository! We welcome contributions from the community to help us maintain a comprehensive and up-to-date collection of model cards for self-hosted models.
To contribute a model card, please follow these steps:
1. Add your model card to the models directory. Follow the structure of the existing model cards to ensure consistency.
2. Submit a pull request against the master branch of the Midori AI Self-Hosted Models’ Model Card Repository.
The model card template provides guidance on the information to include in your model card.
Once you have submitted a pull request, it will be reviewed by the Midori AI team. We will evaluate the quality and completeness of your model card based on the provided template. If there are any issues or suggestions for improvement, we will provide feedback and work with you to address them.
After addressing any feedback received during the review process, your pull request will be merged into the main branch of the Midori AI Self-Hosted Models’ Model Card Repository. Your model card will then be published and made available to the community.
By contributing to the Midori AI Self-Hosted Models’ Model Card Repository, you help us build a valuable resource for the community. Your contributions will help users understand and evaluate self-hosted models more effectively, ultimately leading to improved model selection and usage.
Thank you for your contribution! Together, we can foster a more open and informed ecosystem for self-hosted AI models.
Unleashing the Future of AI, Together.
Put your info about your model here
Some info about training if you want to add that here
Quant Mode | Description |
---|---|
Q3_K_L | Smallest, significant quality loss - not recommended |
Q4_K_M | Medium, balanced quality |
Q5_K_M | Large, very low quality loss - recommended |
Q6_K | Very large, extremely low quality loss |
None | Extremely large, No quality loss, hard to install - not recommended |
Make sure you have this box here; all models must be provided both quantised and non-quantised for our hosting.
Hey here are the tea-cup links luna will add once we have all your model files <3
Credit everyone who worked on the data for this model; make sure you try to include everyone.
License: Apache-2.0 - https://choosealicense.com/licenses/apache-2.0/
Do I need to say more about why this is here?
All models are highly recommended for newer users, as they are super easy to use and use the CHAT tmpl files from Twinz.
Model Size | Description | Links |
---|---|---|
7b | CPU Friendly, small, okay quality | https://huggingface.co/TheBloke/dolphin-2.6-mistral-7B-GGUF |
2x7b | Normal sized, good quality | Removed for the time being, the model was acting up |
8x7b | Big, great quality | https://huggingface.co/TheBloke/dolphin-2.7-mixtral-8x7b-GGUF |
70b | Large, hard to run, significant quality | https://huggingface.co/TheBloke/dolphin-2.2-70B-GGUF |
Quant Mode | Description |
---|---|
Q3 | Smallest, significant quality loss - not recommended |
Q4 | Medium, balanced quality |
Q5 | Large, very low quality loss - recommended for most users |
Q6 | Very large, extremely low quality loss |
Q8 | Extremely large, extremely low quality loss, hard to use - not recommended |
None | Extremely large, No quality loss, super hard to use - really not recommended |
The minimum RAM and VRAM requirements for each model size, as a rough estimate.
All of these models originate from outside of the Midori AI model repository, and are not subject to the vetting process of Midori AI, although they are compatible with the model installer.
Note that some of these models may deviate from our conventional model formatting standards (Quantized/Non-Quantized), and will be served using a rounding-down approach. For instance, if you request a Q8 model and none is available, the Q6 model will be served instead, and so on.
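As a rough sketch of that rounding-down idea (hypothetical shell code, not the installer's actual logic):
#!/bin/sh
# Walk the quant ladder from the requested level downward,
# serving the first quant that actually exists for the model
requested="Q8"
available="Q6 Q4"   # quants actually uploaded for this model
started=0
found=""
for q in Q8 Q6 Q5 Q4 Q3; do
  [ "$q" = "$requested" ] && started=1
  if [ "$started" -eq 1 ]; then
    case " $available " in
      *" $q "*) found="$q"; break ;;
    esac
  fi
done
echo "Serving quant: ${found:-none}"   # -> Serving quant: Q6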