Docker is a containerization platform that allows you to package and run applications in isolated and portable environments called containers. Containers share the host operating system kernel but have their own dedicated file system, processes, and resources. This isolation allows applications to run independently of the host environment and each other, ensuring consistent and predictable behavior.
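To make the isolation concrete, here is a minimal sketch using the Docker SDK for Python (`pip install docker`). It assumes a local Docker daemon is running; the image and command are illustrative and not part of the Midori AI Subsystem itself.

```python
# Minimal sketch: run a throwaway container and show that it sees its own
# filesystem, not the host's. Requires the Docker SDK for Python and a
# running Docker daemon.
import docker

client = docker.from_env()

# The container reports Alpine Linux regardless of the host OS, because it
# carries its own filesystem and only shares the host kernel.
output = client.containers.run("alpine", "cat /etc/os-release", remove=True)
print(output.decode())
```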
The Midori AI Subsystem extends Docker’s capabilities by providing a modular and extensible platform for managing AI workloads. Each AI system is encapsulated within its own dedicated Docker image, which contains the necessary software and dependencies. This approach provides several benefits:
Simplified Deployment: The Midori AI Subsystem provides a streamlined and efficient way to deploy AI systems using Docker container technology.
Eliminates Guesswork: Standardized configurations and settings reduce complexities, enabling seamless setup and management of AI programs.
Notice
Warnings / Heads up
This program is in beta! By using it you take on risk; please see the disclaimer in the footnotes.
The webserver should be back up; sorry for the outage.
Known Issues
Server Rework is underway! Thank you for giving us lots of room to grow!
This seems to be a widely known bug with Google Chrome, Edge, and other browsers; here are our virus scans from a few websites. We will try other ways of packing the files.
Install Midori AI Subsystem Manager
Notice
As we are in beta, we have implemented telemetry to enhance bug discovery and resolution. This data is anonymized and will be configurable when out of beta.
Check out our Model Repository for info about the models used and supported!
----- FAQs about the subsystem -----
What is the purpose of the Midori AI Subsystem?
The Midori AI Subsystem is a modular and extensible platform for managing AI workloads, providing simplified deployment, standardized configurations, and isolation for AI systems.
How does the Midori AI Subsystem simplify AI deployment?
The Midori AI Subsystem simplifies AI deployment by providing a streamlined and efficient way to deploy AI systems using Docker container technology, reducing complexities and ensuring consistent and predictable behavior.
What are the benefits of using the Midori AI Subsystem?
The benefits of using the Midori AI Subsystem include simplified deployment, standardized configurations, isolation for AI systems, and a growing library of supported backends and tools.
What are the limitations of the Midori AI Subsystem?
The limitations of the Midori AI Subsystem include its current beta status, potential for bugs, and reliance on Docker container technology.
What are the recommended prerequisites for using the Midori AI Subsystem?
The recommended prerequisites for using the Midori AI Subsystem include Docker Desktop on Windows (or Docker installed on other operating systems) and a dedicated folder for the Manager program.
How do I install the Midori AI Subsystem Manager?
You can install the Midori AI Subsystem Manager by downloading the appropriate package for your operating system from the Midori AI Subsystem website and following the installation instructions there.
Where can I find more information about the Midori AI Subsystem?
You can find more information about the Midori AI Subsystem on the Midori AI Subsystem website, which includes documentation, tutorials, and a community Discord.
What is the difference between the Midori AI Subsystem and other AI frameworks?
The Midori AI Subsystem differs from other AI frameworks by providing a modular and extensible platform specifically designed for managing AI workloads, offering features such as simplified deployment, standardized configurations, and isolation for AI systems.
How does the Midori AI Subsystem handle security?
The Midori AI Subsystem does not handle security directly, but it relies on the security features provided by the underlying Docker container technology and the specific AI backends and tools being used.
What are the plans for future development of the Midori AI Subsystem?
The plans for future development of the Midori AI Subsystem include adding new features, improving stability and performance, and expanding the library of supported backends and tools.
The functionality of this product is subject to a variety of factors that are beyond our control, and we cannot guarantee that it will work flawlessly in all situations. We have taken every possible measure to ensure that the product functions as intended, but there may be instances where it does not perform as expected. Please be aware that we cannot be held responsible for any issues that arise due to the product’s functionality not meeting your expectations. By using this product, you acknowledge and accept the inherent risks associated with its use, and you agree to hold us harmless for any damages or losses that may result from its functionality not being guaranteed.
----- Footnotes -----
*For your safety, we have posted the code of this program on GitHub; please check it out! - GitHub
**If you would like to donate to help us get better servers - Give Support
***If you or someone you know would like a new backend supported by Midori AI Subsystem please reach out to us at [email protected]
Subsystem Update Log
5/10/2024
Update: Planned changes for LocalAI’s Gallery API
Bug Fix: Fixed a loading bug in how Carly is loaded
Update: Moved Carly’s loading to the Carly help file
Update: Updated the news page
Update: Added InvokeAI model support
Update: Added Docker to the InvokeAI install
Update: A few more text changes and an action rename
Update: The installer now cleans up after itself and deletes old files
Update: More text cleanup for the backends menu
Update: Added better error codes for the InvokeAI system runner
Update: Added support for running InvokeAI on the system
Bug Fix: Fixed the news menu
Update: Added a new “run InvokeAI” menu for running the InvokeAI program
Bug Fix: Assorted bug fixes
5/7/2024
Update: Added a way for the “other os” install type to auto-update
Update: Added a yes/no prompt for purging the venv at the end of the “other os” install
Update: Added a new UI/UX menu
Bug Fix: Fixed the news menu
Bug Fix: Fixed naming in the GitHub Actions
Update: Added a way to get the local IP address
Update: Fully redid some of the GitHub Actions that build the Docker images
Update: Reworked the subsystem Docker files and added a new news post
5/5/2024
Update: Fixed some of Ollama’s support
Update: GitHub Actions updates
Bug Fix: Fixed some server version bugs
Bug Fix: Fixed a few more bugs
Update: Removed version locking
Update: More fixes
Update: Added a new way to manage the Python environment
Update: Code cleanup and fixed a socket error
4/22/2024
Update: Fully reworked how we pack the executable for all operating systems
Update: Fully redid our linting actions on GitHub to run better
Update: macOS support should be “working”
Bug Fix: Fixed an odd bug with VER
Bug Fix: Fixed a bug where WSL purged Docker for no reason
4/20/2024
Update: Added a new “WSL Docker Data” backend program (in testing)
Update: Added more GPU checks to reliably detect whether you have a GPU
Update: Better logging for debugging
Bug Fix: Fixed a few bugs and made the subsystem Docker image 200 MB smaller
This guide will walk you through the process of installing LocalAI on your system. Please follow the steps carefully for a successful installation.
Step 1: Initiate Installation
From the main menu, enter option 2 to begin the installation process.
You will be prompted with a visual confirmation.
Step 2: Confirm GPU Backend
Respond to the prompt with yes for GPU support or no for CPU-only support.
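If you are unsure whether your machine has a usable Nvidia GPU, one rough check is to look for the nvidia-smi tool, which ships with Nvidia's drivers. This sketch is illustrative and is not part of the installer:

```python
# Rough GPU check: nvidia-smi is installed alongside Nvidia drivers, so its
# presence is a good (though not perfect) sign that GPU support will work.
import shutil

if shutil.which("nvidia-smi"):
    print("nvidia-smi found - answering 'yes' for GPU support is likely safe")
else:
    print("nvidia-smi not found - consider answering 'no' (CPU support only)")
```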
Step 3: Confirm LocalAI installation
Type localai into the menu and press Enter to start the LocalAI installation.
Step 4: Wait for Setup Completion
LocalAI will now automatically configure itself. This process may take approximately 10 to 30 minutes.
Important: Please do not restart your system or attempt to send requests to LocalAI during this setup phase.
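If you want to know when setup has finished without guessing, you can poll the OpenAI-compatible models endpoint until it answers. This is a sketch rather than part of the manager; it assumes port 38080 from this guide, and the IP address is a placeholder:

```python
# Poll LocalAI's /v1/models endpoint until it responds, then stop.
import time

import requests

URL = "http://192.168.10.10:38080/v1/models"  # replace with your machine's IP

while True:
    try:
        if requests.get(URL, timeout=5).status_code == 200:
            print("LocalAI is up and answering requests")
            break
    except requests.RequestException:
        pass  # still setting up; keep waiting
    time.sleep(30)
```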
Step 5: Access LocalAI
Once the setup is complete, you can access LocalAI on port 38080.
Important Notes
Remember to use your computer’s IP address instead of localhost when accessing LocalAI. For example, you would use 192.168.10.10:38080/v1 or 192.168.1.3:38080/v1 depending on your network configuration.
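Any OpenAI-compatible client can then be pointed at that address. Here is a minimal sketch with the official openai Python package (v1.x); the IP address is a placeholder, and the API key is a dummy value since LocalAI does not require a real one by default:

```python
# List the models LocalAI is serving, via the OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://192.168.10.10:38080/v1",  # your machine's IP, not localhost
    api_key="not-needed",  # placeholder; LocalAI ignores it by default
)

for model in client.models.list():
    print(model.id)
```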
Support and Assistance
If you encounter any issues or require further assistance, please reach out on the Midori AI Discord or by email.
Install a Model from the Midori AI Model Repo
Step 1:
Start the Midori AI Subsystem
Step 2:
On the Main Menu, Type 5 to Enter the Backend Program Menu
Step 3:
On the Backend Program Menu, Type 1 to Enter the LocalAI Model Installer
Step 4a:
If you have LocalAI installed in the subsystem, skip this step.
If you do not have LocalAI installed in the subsystem, the program will ask you to enter the LocalAI Docker container’s name. It will look something like localai-api-1, but not always. If you need help, reach out on the Midori AI Discord or by email.
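If you cannot remember the container name, one way to find it is to list running container names with the Docker SDK for Python (the same information `docker ps` shows). A minimal sketch, assuming the SDK is installed and the daemon is running:

```python
# Print the names of all running containers; look for the LocalAI one,
# which usually resembles localai-api-1.
import docker

client = docker.from_env()
for container in client.containers.list():
    print(container.name)
```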
Step 4b:
If you have GPU support installed in that image, type yes.
If you do not have GPU support installed in that image, type no.
Step 5:
Type in the size you would like for your LLM and then follow the prompts in the manager!
Step 6:
Sit Back and Let the Model Download from Midori AI’s Model Repo
Don’t forget to note the name of the model you just installed so you can request it for OpenAI V1 later.
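When you later want to use the model, pass that noted name as the model field of an OpenAI V1 request. A sketch with the openai Python package; the IP address and model name are placeholders:

```python
# Send a chat completion to LocalAI using the model name you noted.
from openai import OpenAI

client = OpenAI(
    base_url="http://192.168.10.10:38080/v1",  # your machine's IP, not localhost
    api_key="not-needed",  # placeholder; LocalAI ignores it by default
)

response = client.chat.completions.create(
    model="your-noted-model-name",  # the name you noted after installing
    messages=[{"role": "user", "content": "Hello from the Midori AI Subsystem!"}],
)
print(response.choices[0].message.content)
```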
Install a Hugging Face Model from the Midori AI Model Repo
Step 1:
Start the Midori AI Subsystem
Step 2:
On the Main Menu, Type 5 to Enter the Backend Program Menu
Step 3:
On the Backend Program Menu, Type 1 to Enter the LocalAI Model Installer
Step 4a:
If you have LocalAI installed in the subsystem, skip this step.
If you do not have LocalAI installed in the subsystem, the program will ask you to enter the LocalAI Docker container’s name. It will look something like localai-api-1, but not always. If you need help, reach out on the Midori AI Discord or by email.
Step 4b:
If you have GPU support installed in that image, type yes.
If you do not have GPU support installed in that image, type no.
Step 5:
Type huggingface when asked what size of model you would like.
Step 6:
Copy and Paste the Hugging Face Download URL That You Wish to Use
For example: https://huggingface.co/mlabonne/gemma-7b-it-GGUF/resolve/main/gemma-7b-it.Q2_K.gguf?download=true
Or you can use the Hugging Face naming from their API
For example: mlabonne/gemma-7b-it-GGUF/gemma-7b-it.Q2_K.gguf
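The two forms carry the same information. As a sketch, this is how the short owner/repo/file name maps onto the full download URL, using the example values above:

```python
# Convert the short Hugging Face name into the full resolve URL.
name = "mlabonne/gemma-7b-it-GGUF/gemma-7b-it.Q2_K.gguf"
owner, repo, filename = name.split("/", 2)
url = f"https://huggingface.co/{owner}/{repo}/resolve/main/{filename}?download=true"
print(url)  # matches the example URL in Step 6
```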
Step 7:
Sit Back and Let the Model Download from Midori AI’s Model Repo
Don’t forget to note the name of the model you just installed so you can request it for OpenAI V1 later.
This guide provides a comprehensive walkthrough for installing InvokeAI on your system. Please follow the instructions meticulously to ensure a successful installation.
Accessing the Installation Menu
From the main menu, enter option 2 to access the “Installer/Upgrade Menu”.
Initiating InvokeAI Installation
Within the “Installer/Upgrade Menu”, type yes if you are prompted to proceed.
Initiate the download process by typing invokeai and pressing Enter.
Navigating to Backend Programs
Return to the main menu and select option 5 to access the “Backend Programs Menu”.
Selecting Installation Method
Choose the appropriate installation method based on your hardware configuration:
Option 5: Recommended for systems with Nvidia GPUs.
Option 6: Recommended for systems without Nvidia GPUs.
Executing the Installation Script
The installer will run after you press Enter.
Installation Process
The InvokeAI installer will guide you through the remaining steps. Should you require assistance, please reach out on the Midori AI Discord or by email.
Note: The installation process may appear inactive at times; however, rest assured that progress is being made. Please refrain from interrupting the process to ensure its successful completion.
Support and Resources
Enjoy using InvokeAI! For additional help or information, please refer to the Midori AI documentation and the community Discord.