Midori AI Subsystem


The Subsystem and Manager are still in beta; these links will not start working until they are ready!

Integrations

This section includes end-to-end examples, tutorials, and how-tos curated and maintained by the community.

To add your own how-tos, please open a PR on GitHub - https://github.com/lunamidori5/Midori-AI

Chat UIs

Chat with your own locally hosted AI, via:

LLM Backends / Hosts

Seamlessly integrate your AI systems with these LLM Hosts:

Cluster Based AI

Support the Midori AI node-based cluster system!

Image AI

Make photos for your AIs using:

Subsections of Midori AI Subsystem

Midori AI Subsystem Manager


How Docker Works

Docker is a containerization platform that allows you to package and run applications in isolated and portable environments called containers. Containers share the host operating system kernel but have their own dedicated file system, processes, and resources. This isolation allows applications to run independently of the host environment and each other, ensuring consistent and predictable behavior.
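
As a quick illustration of that isolation (a generic Docker example, not a Midori AI command), a throwaway container reports its own file system rather than the host's:

docker run --rm alpine cat /etc/os-release

The output names Alpine no matter which distribution the host runs, because the container ships its own userland while sharing the host kernel.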

Midori AI Subsystem - GitHub Link

The Midori AI Subsystem extends Docker’s capabilities by providing a modular and extensible platform for managing AI workloads. Each AI system is encapsulated within its own dedicated Docker image, which contains the necessary software and dependencies. This approach provides several benefits:

  • Simplified Deployment: The Midori AI Subsystem provides a streamlined and efficient way to deploy AI systems using Docker container technology.
  • Eliminates Guesswork: Standardized configurations and settings reduce complexities, enabling seamless setup and management of AI programs.
Notice

Warnings / Heads up

  • This program is in beta! By using it you take on risk; please see the disclaimer in the footnotes
  • The webserver should be back up; sorry for the outage

Known Issues

  • Server Rework is underway! Thank you for giving us lots of room to grow!
  • Report Issues -> GitHub Issue

Windows Users

Install Midori AI Subsystem Manager

Notice
  • As we are in beta, we have implemented telemetry to enhance bug discovery and resolution. This data is anonymized and will be configurable when out of beta.

Prerequisites

Docker Desktop Windows

Please make an empty folder for the Manager program; do not use your user folder.
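
For example, from a Command Prompt (the folder name below is only an example; any new, empty folder outside your user folder will do):

mkdir C:\MidoriAI
cd C:\MidoriAI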

Quick install

  1. Download - https://tea-cup.midori-ai.xyz/download/model_installer_windows.zip
  2. Unzip it into the folder you made for the Manager program
  3. Run subsystem_manager.exe

Quick install with script

Open a Command Prompt or PowerShell terminal and run:

curl -sSL https://raw.githubusercontent.com/lunamidori5/Midori-AI/master/other_files/model_installer/shell_files/model_installer.bat -o subsystem_manager.bat && subsystem_manager.bat

Manual download and installation

Open a Command Prompt or PowerShell terminal and run:

curl -sSL https://tea-cup.midori-ai.xyz/download/model_installer_windows.zip -o subsystem_manager.zip
powershell Expand-Archive subsystem_manager.zip -DestinationPath .
subsystem_manager.exe

Prerequisites

Docker Desktop Linux

or

Docker Engine and Docker Compose

Quick install with script

curl -sSL https://raw.githubusercontent.com/lunamidori5/Midori-AI/master/other_files/model_installer/shell_files/model_installer.sh | sh

Manual download and installation

Open a terminal and run:

curl -sSL https://tea-cup.midori-ai.xyz/download/model_installer_linux.tar.gz -o subsystem_manager.tar.gz
tar -xzf subsystem_manager.tar.gz
chmod +x subsystem_manager
./subsystem_manager

Prerequisites

Download and set up Docker Compose Plugin

Manual download and installation

Click on the settings gear icon, then click the Compose File menu item.

After that, copy and paste this into the Docker Compose Manager plugin:

services:
  midori_ai_unraid:
    image: lunamidori5/subsystem_manager:master
    ports:
    - 39090:9090
    privileged: true
    restart: always
    tty: true
    volumes:
    - /var/lib/docker/volumes/midoriai_midori-ai-models/_data:/var/lib/docker/volumes/midoriai_midori-ai-models/_data
    - /var/lib/docker/volumes/midoriai_midori-ai-images/_data:/var/lib/docker/volumes/midoriai_midori-ai-images/_data
    - /var/lib/docker/volumes/midoriai_midori-ai-audio/_data:/var/lib/docker/volumes/midoriai_midori-ai-audio/_data
    - /var/run/docker.sock:/var/run/docker.sock
volumes:
  midori-ai:
    external: false
  midori-ai-audio:
    external: false
  midori-ai-images:
    external: false
  midori-ai-models:
    external: false

Running the program

Start up that Docker container, then run the following inside it by clicking Console:

python3 subsystem_python_runner.py
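
If you prefer a terminal over the web console, something like the following should also work from the Unraid shell; the container name here is only a guess, so check docker ps for the actual name of the container in your stack:

docker exec -it midori_ai_unraid python3 subsystem_python_runner.py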

Prerequisites

Do not use this method on Windows.

Please make an empty folder for the Manager program; do not use your user folder.

Quick install with script

Download and run this file:

curl -sSL https://raw.githubusercontent.com/lunamidori5/Midori-AI/master/other_files/midori_ai_manager/subsystem_python_runner.py > subsystem_python_runner.py && python3 subsystem_python_runner.py

Running the program

Open a terminal and run:

python3 subsystem_python_runner.py


Notice

Reminder: always use your computer's IP address, not localhost, when using the Midori AI Subsystem!
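
If you are not sure what your computer's IP address is, these standard commands will print it (the first is for Windows, the second for Linux):

ipconfig
hostname -I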

Support and Assistance

If you encounter any issues or require further assistance, please feel free to reach out through the following channels:

----- Install Backends -----

  • LocalAI - For LLM inference, Embedding, and more
  • AnythingLLM - For chatting with your docs using LocalAI or other LLM hosts
  • Big-AGI - For chatting with your docs using LocalAI or other LLM hosts
  • InvokeAI - For making photos using AI models
  • Axolotl - For training your own fine-tuned LLMs

Check out our Model Repository for info about the models used and supported!

----- FAQs about the subsystem -----

  1. What is the purpose of the Midori AI Subsystem?

    • The Midori AI Subsystem is a modular and extensible platform for managing AI workloads, providing simplified deployment, standardized configurations, and isolation for AI systems.
  2. How does the Midori AI Subsystem simplify AI deployment?

    • The Midori AI Subsystem simplifies AI deployment by providing a streamlined and efficient way to deploy AI systems using Docker container technology, reducing complexities and ensuring consistent and predictable behavior.
  3. What are the benefits of using the Midori AI Subsystem?

    • The benefits of using the Midori AI Subsystem include simplified deployment, standardized configurations, isolation for AI systems, and a growing library of supported backends and tools.
  4. What are the limitations of the Midori AI Subsystem?

    • The limitations of the Midori AI Subsystem include its current beta status, potential for bugs, and reliance on Docker container technology.
  5. What are the recommended prerequisites for using the Midori AI Subsystem?

    • The recommended prerequisites for using the Midori AI Subsystem include Docker Desktop Windows or Docker installed on other operating systems, and a dedicated folder for the Manager program.
  6. How do I install the Midori AI Subsystem Manager?

    • Follow the install steps for your operating system earlier on this page (Windows, Linux, Unraid, or the Python runner for other OSes).
  7. Where can I find more information about the Midori AI Subsystem?

    • You can find more information about the Midori AI Subsystem on the Midori AI Subsystem website, which includes documentation, tutorials, and a community Discord.
  8. What is the difference between the Midori AI Subsystem and other AI frameworks?

    • The Midori AI Subsystem differs from other AI frameworks by providing a modular and extensible platform specifically designed for managing AI workloads, offering features such as simplified deployment, standardized configurations, and isolation for AI systems.
  9. How does the Midori AI Subsystem handle security?

    • The Midori AI Subsystem does not handle security directly, but it relies on the security features provided by the underlying Docker container technology and the specific AI backends and tools being used.
  10. What are the plans for future development of the Midori AI Subsystem?

    • The plans for future development of the Midori AI Subsystem include adding new features, improving stability and performance, and expanding the library of supported backends and tools.

Questions from Carly Kay

----- Disclaimer -----

The functionality of this product is subject to a variety of factors that are beyond our control, and we cannot guarantee that it will work flawlessly in all situations. We have taken every possible measure to ensure that the product functions as intended, but there may be instances where it does not perform as expected. Please be aware that we cannot be held responsible for any issues that arise due to the product’s functionality not meeting your expectations. By using this product, you acknowledge and accept the inherent risks associated with its use, and you agree to hold us harmless for any damages or losses that may result from its functionality not being guaranteed.

----- Footnotes -----

*For your safety we have posted the code of this program on GitHub; please check it out! - GitHub

**If you would like to donate to help us get better servers - Give Support

***If you or someone you know would like a new backend supported by the Midori AI Subsystem, please reach out to us at [email protected]

Subsystem Update Log


5/10/2024

  • Update: Planned changes for LocalAI’s Gallery API
  • Bug Fix: Fixed a loading bug with how we get Carly loaded
  • Update: Moved Carly’s loading to the Carly help file
  • Update: Updated the news page
  • Update: Added InvokeAI model support
  • Update: Added Docker to the InvokeAI install
  • Update: A few more text changes and an action rename
  • Update: Cleans up after itself and deletes the installer / old files
  • Update: More text cleanup for the backends menu
  • Update: Added better error codes for the InvokeAI system runner
  • Update: Added support for running InvokeAI on the system
  • Bug Fix: Fixed the news menu
  • Update: Added a new “run InvokeAI” menu for running the InvokeAI program
  • Bug Fix: Did some bug fixes

5/7/2024

  • Update: Added a way for “other os” type to auto-update
  • Update: Added a yay-or-nay prompt for purging the venv at the end of the “other OS” install
  • Update: Added a new UI/UX menu
  • Bug Fix: Fixed the news menu
  • Bug Fix: Fixed naming on the GitHub actions
  • Update: Added a way to get the local IP address
  • Update: Fully redid some actions that make the Docker images
  • Update: Reworked the subsystem docker files and the new news post

5/5/2024

  • Update: Fixed some of Ollama’s support
  • Update: Action updates
  • Bug Fix: Fixed some server version bugs
  • Bug Fix: Fixed a few more bugs
  • Update: Removed version locking
  • Update: More fixes
  • Update: Added a new way to deal with the Python env
  • Update: Code clean up and fixed a socket error

4/22/2024

  • Update: Fully reworked how we package the executables for all OSes
  • Update: Fully redid our linting actions on GitHub to run better
  • Update: macOS support should be “working”
  • Bug Fix: Fixed an odd bug with VER
  • Bug Fix: Fixed a bug with WSL purging Docker for no reason

4/20/2024

  • Update: Added new “WSL Docker Data” backend program (in testing)
  • Update: Added more GPU checks to make sure we know if you have a GPU
  • Update: Better logging for debugging
  • Bug Fix: Fixed a few bugs and made the subsystem Docker image 200 MB smaller
  • Update: Removed some outdated code
  • Update: Added new git actions thanks to - Cryptk
  • Update: Subsystem Manager builds are now on github actions, check them out - Actions

4/13/2024

  • Known Bug: Upstream changes to LocalAI are making API keys not work. I am working on a temp fix; please use an outdated image for now.

4/13/2024

  • Update: Added InvokeAI Backend Program (Host installer)
  • Update: Added InvokeAI Backend Program (Subsystem installer)
  • Update: Site-wide updates, added Big-AGI
  • Update: Updated LocalAI Page
  • Update: Updated InvokeAI Page
  • Update: Fixed Port on Big-AGI (server side, was 3000 now 33000)
  • Update: Removed Home Assistant links
  • Update: Removed Oobabooga links
  • Update: Removed Ollama link
  • Update: Full remake of the Subsystem index page to have better working links

4/12/2024

  • Bug Fix: Fixed the GPU question to only show up if you have a GPU installed
  • Update: Getting ready for InvokeAI backend program to install on host

4/10/2024

  • Bug Fix: Fixed a bug that was making the user hit enter 3 times after an update
  • Bug Fix: Fixed the system message on the 14B AI that helps in the program (she can now help uninstall the subsystem if needed)
  • Update: Added new functions to the server for new function-based commands for the helper AI
  • Update: Updated the InvokeAI installer (if it’s bugged let Luna or Carly know)

4/9/2024

  • Bug Fix: Fixed a loop in the help context
  • Bug Fix: Fixed the Huggingface downloader (Now runs as root and is its own program)
  • Bug Fix: Fixed LocalAI image being out of date
  • Bug Fix: Fixed LocalAI AIO image looping endlessly
  • Update: Added LocalAI x Midori AI AIO images to github actions
  • Update: Added more context to the 14B model used for the help menu

4/7/2024

  • Bug Fix: AnythingLLM docker image is now fixed server side. Thank you for your help testers!

4/6/2024

  • Bug Fix: Removed a lot of old text
  • Bug Fix: Fixed a lot of outdated text
  • Bug Fix: Removed the GitHub heartbeat check (why were we checking if GitHub was up??)
  • Known Bug Update: The Huggingface downloader seems to be bugged on LocalAI master… will be working on a fix
  • Known Bug Update: The AnythingLLM docker image seems to be bugged; will be remaking its download / setup from scratch

4/3/2024

  • New Backend: Added Big-AGI to the subsystem!
  • Update: Added better huggingface downloader commands server side
  • Update: Redid how the server sends models to the subsystem
  • Bug Fix: Fixed a bug with Ollama not starting with the subsystem
  • Bug Fix: Fixed a bug with endlessly installing backends

4/2/2024

  • Update: Added a menu to fork into non-subsystem images for installing models
  • Update: Added a way to install Huggingface-based models into LocalAI using Midori AI’s model repo
  • Bug Fix: Fixed some typos and bad text in a few places that were confusing users
  • Bug Fix: Fixed a bug when some links were used with Huggingface
  • Update: Server upgrades to our model repo api

4/1/2024

  • Update 1: Added a new safety check to make sure the subsystem manager is not in the Windows folder or in system32
  • Update 2: Added more prompting for the baked-in Carly model for when you are asked whether you have a GPU with CUDA

3/30/2024

  • Update 1: Fixed a bug with the subsystem version not matching the manager version and endlessly updating the subsystem

3/29/2024

  • Update 1: Fixed a big bug if the user put the subsystem manager in a folder not named “midoriai”
  • Update 2: Fixed the new LocalAI image to only download the models one time
  • Update 3: Added server side checks to make sure models are ready for packing to end user
  • Update 4: Better logging added to help debug the manager, thank you all for your help!

3/27/2024

  • Update 1: Fixed a bug that let the user use the subsystem manager without installing the subsystem (oops)
  • Update 2: LocalAI images are now from the Midori AI repo and are up to date with LocalAI’s master images*
  • Update 3: Added the start for “auto update of docker images” to the subsystem using hashes

AnythingLLM


Here is a link to the AnythingLLM GitHub

Installing AnythingLLM

Step 1

Type 2 into the main menu

Step 2

Type yes or no into the menu

Step 3

Type anythingllm into the menu, then hit enter

Step 4

Enjoy your new copy of AnythingLLM; it's running on port 33001

Notice
  • Reminder to always use your computer's IP address, not localhost
  • e.g., 192.168.10.10:33001 or 192.168.1.3:33001

If you need help, please reach out on our Discord / Email; or reach out on their Discord.

Big-AGI


Here is a link to the Big-AGI GitHub

Installing Big-AGI

Step 1

Type 2 into the main menu

Step 2

Type yes or no into the menu

Step 3

Type bigagi into the menu, then hit enter

Step 4

Enjoy your new copy of Big-AGI; it's running on port 33000

Notice
  • Reminder to always use your computer's IP address, not localhost
  • e.g., 192.168.10.10:33000 or 192.168.1.3:33000

If you need help, please reach out on our Discord / Email; or reach out on their Discord.

LocalAI


Here is a link to the LocalAI GitHub

Installing LocalAI: A Step-by-Step Guide

This guide will walk you through the process of installing LocalAI on your system. Please follow the steps carefully for a successful installation.

Step 1: Initiate Installation

  1. From the main menu, enter option 2 to begin the installation process.
  2. You will be prompted with a visual confirmation.

Step 2: Confirm GPU Backend

  1. Respond to the prompt with either yes or no to proceed with GPU support or CPU support only, respectively.

Step 3: Confirm LocalAI installation

  1. Type localai into the menu and press Enter to start the LocalAI installation.

Step 4: Wait for Setup Completion

  1. LocalAI will now automatically configure itself. This process may take approximately 10 to 30 minutes.
  2. Important: Please do not restart your system or attempt to send requests to LocalAI during this setup phase.

Step 5: Access LocalAI

  1. Once the setup is complete, you can access LocalAI on port 38080.
Important Notes
  • Remember to use your computer’s IP address instead of localhost when accessing LocalAI. For example, you would use 192.168.10.10:38080/v1 or 192.168.1.3:38080/v1 depending on your network configuration.
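
As a quick sanity check from another machine on your network, you can list the models LocalAI is serving through its OpenAI-compatible API (substitute your own IP address):

curl http://192.168.10.10:38080/v1/models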

Support and Assistance

If you encounter any issues or require further assistance, please feel free to reach out through the following channels:

Subsections of LocalAI

Install LocalAI Models


Install a Model from the Midori AI Model Repo

Step 1:

  • Start the Midori AI Subsystem

Step 2:

  • On the Main Menu, Type 5 to Enter the Backend Program Menu

Step 3:

  • On the Backend Program Menu, Type 1 to Enter the LocalAI Model Installer

Step 4a:

  • If you have LocalAI installed in the subsystem, skip this step.
  • If you do not have LocalAI installed in the subsystem, the program will ask you to enter the LocalAI docker’s name. It will look something like localai-api-1, but not always. If you need help, reach out on the Midori AI Discord / Email.

Step 4b:

  • If you have GPU support installed in that image, type yes.
  • If you do not have GPU support installed in that image, type no.

Step 5:

  • Type in the size you would like for your LLM and then follow the prompts in the manager!

Step 6:

  • Sit Back and Let the Model Download from Midori AI’s Model Repo
  • Don’t forget to note the name of the model you just installed so you can request it for OpenAI V1 later.

Need help on how to do that? Stop by - How to send OpenAI request to LocalAI
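
As a rough sketch of such a request, assuming the default subsystem port of 38080 and substituting your own IP address and the model name you noted in Step 6 (7b-chat here is only a placeholder):

curl http://192.168.10.10:38080/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "7b-chat", "messages": [{"role": "user", "content": "Hello!"}]}'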

Install a Hugging Face Model from the Midori AI Model Repo

Step 1:

  • Start the Midori AI Subsystem

Step 2:

  • On the Main Menu, Type 5 to Enter the Backend Program Menu

Step 3:

  • On the Backend Program Menu, Type 1 to Enter the LocalAI Model Installer

Step 4a:

  • If you have LocalAI installed in the subsystem, skip this step.
  • If you do not have LocalAI installed in the subsystem, the program will ask you to enter the LocalAI docker’s name. It will look something like localai-api-1, but not always. If you need help, reach out on the Midori AI Discord / Email.

Step 4b:

  • If you have GPU support installed in that image, type yes.
  • If you do not have GPU support installed in that image, type no.

Step 5:

  • Type huggingface when asked what size of model you would like.

Step 6:

  • Copy and Paste the Hugging Face Download URL That You Wish to Use
  • For example: https://huggingface.co/mlabonne/gemma-7b-it-GGUF/resolve/main/gemma-7b-it.Q2_K.gguf?download=true
  • Or you can use the Huggingface naming from their API
  • For example: mlabonne/gemma-7b-it-GGUF/gemma-7b-it.Q2_K.gguf

Step 7:

  • Sit Back and Let the Model Download from Midori AI’s Model Repo
  • Don’t forget to note the name of the model you just installed so you can request it for OpenAI V1 later.

Need help on how to do that? Stop by - How to send OpenAI request to LocalAI

InvokeAI


Here is a link to the InvokeAI GitHub

InvokeAI Installation Guide

This guide provides a comprehensive walkthrough for installing InvokeAI on your system. Please follow the instructions meticulously to ensure a successful installation.

Accessing the Installation Menu

  1. From the main menu, enter option 2 to access the “Installer/Upgrade Menu”.

Initiating InvokeAI Installation

  1. Within the “Installer/Upgrade Menu”, if requested to type something to proceed, type yes.
  2. Initiate the download process by typing invokeai and pressing Enter.
  3. Return to the main menu and select option 5 to access the “Backend Programs Menu”.

Selecting Installation Method

  1. Choose the appropriate installation method based on your hardware configuration (a quick GPU check is shown after this list):
    • Option 5: Recommended for systems with Nvidia GPUs.
    • Option 6: Recommended for systems without Nvidia GPUs.
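
If you are unsure whether you have a working Nvidia GPU, nvidia-smi is the usual check; it prints your GPU details when the Nvidia driver is installed and errors out otherwise:

nvidia-smi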

Executing the Installation Script

  1. The installer will be executed after you press Enter.

Installation Process

  1. The InvokeAI installer will guide you through the remaining steps. Should you require assistance, our support channels are available:

Note: The installation process may appear inactive at times; however, rest assured that progress is being made. Please refrain from interrupting the process to ensure its successful completion.

Support and Resources

Enjoy using InvokeAI! For additional help or information, please refer to the following resources: