Models Repository

Midori AI Self-Hosted Models Repository

Thank you for your interest in contributing to the Midori AI Self-Hosted Models repository! We welcome contributions from the community to help us maintain a comprehensive, up-to-date collection of model cards for self-hosted models.

How to Contribute

To contribute a model card, please follow these steps:

  1. Fork the Midori AI Repository to your GitHub account.
  2. Create a new branch in your forked repository where you will make your changes.
  3. Add your model card to the models directory. Follow the structure of the existing model cards to ensure consistency.
  4. Commit your changes and push them to your forked repository.
  5. Open a pull request from your forked repository to the main branch of the Midori AI Self-Hosted Models’ Model Card Repository.
  6. In the pull request, provide a clear and concise description of the changes you have made.

Model Card Template

The model card template provides guidance on the information to include in your model card. It covers aspects such as:

  • Model Name: The name of the model you are describing.
  • Model Description: A brief overview of the model’s purpose, architecture, and key features.
  • Intended Use: Specify the tasks or applications for which the model is designed.
  • Training Data: Describe the dataset(s) used to train the model, including their size, composition, and any relevant characteristics.
  • Limitations and Biases: Discuss any known limitations or potential biases in the model, as well as steps taken to mitigate them.
  • Ethical Considerations: Address any ethical implications or considerations related to the model’s use, such as privacy concerns or potential for discrimination.
  • Deployment Details: If the model is deployed, provide information about the deployment environment, serving infrastructure, and any specific considerations for real-world usage.

Review Process

Once you have submitted a pull request, it will be reviewed by the Midori AI team. We will evaluate the quality and completeness of your model card based on the provided template. If there are any issues or suggestions for improvement, we will provide feedback and work with you to address them.

Merging the Pull Request

After addressing any feedback received during the review process, your pull request will be merged into the main branch of the Midori AI Self-Hosted Models’ Model Card Repository. Your model card will then be published and made available to the community.

Conclusion

By contributing to the Midori AI Self-Hosted Models’ Model Card Repository, you help us build a valuable resource for the community. Your contributions will help users understand and evaluate self-hosted models more effectively, ultimately leading to improved model selection and usage.

Thank you for your contribution! Together, we can foster a more open and informed ecosystem for self-hosted AI models.

Unleashing the Future of AI, Together.

Subsections of Models Repository

Model Template

Model Card for [model name here]

Put your info about your model here

Training

Some info about training if you want to add that here

Models (Quantised / Non Quantised)

Quant Mode   Description
Q3_K_L       Smallest, significant quality loss - not recommended
Q4_K_M       Medium, balanced quality
Q5_K_M       Large, very low quality loss - recommended
Q6_K         Very large, extremely low quality loss
None         Extremely large, no quality loss, hard to install - not recommended

Make sure you include this table; all models must be provided in both quantised and non-quantised form for our hosting.

Download / Install

Hey, here are the tea-cup links! Luna will add them once we have all of your model files <3

Authors

List everyone who worked on the data for this model; make sure to credit everyone involved.

License

License: Apache-2.0 - https://choosealicense.com/licenses/apache-2.0/

Do I need to say more about why this is here?

Recommended Models

All of these models are highly recommended for newer users, as they are easy to use and rely on the CHAT template files from Twinz.

Model Size   Description                               Links
7b           CPU friendly, small, okay quality         https://huggingface.co/TheBloke/dolphin-2.6-mistral-7B-GGUF
2x7b         Normal sized, good quality                Removed for the time being; the model was acting up
8x7b         Big, great quality                        https://huggingface.co/TheBloke/dolphin-2.7-mixtral-8x7b-GGUF
70b          Large, hard to run, significant quality   https://huggingface.co/TheBloke/dolphin-2.2-70B-GGUF
Quant Mode   Description
Q3           Smallest, significant quality loss - not recommended
Q4           Medium, balanced quality
Q5           Large, very low quality loss - recommended for most users
Q6           Very large, extremely low quality loss
Q8           Extremely large, extremely low quality loss, hard to use - not recommended
None         Extremely large, no quality loss, super hard to use - really not recommended

Rough estimates of the minimum RAM and VRAM requirements for each model size:

  • 7b: System RAM: 10 GB / VRAM: 2 GB
  • 2x7b: System RAM: 25 GB / VRAM: 8 GB
  • 8x7b: System RAM: 55 GB / VRAM: 28 GB
  • 70b: System RAM: 105 GB / VRAM: AI Card or better
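To make these estimates concrete, here is a minimal sketch (hypothetical helper names, not part of the Midori AI tooling) of picking the largest model size whose rough minimums fit a given machine, using the numbers from the list above:

```python
# Rough minimums from the list above, largest model first.
# (model size, min system RAM in GB, min VRAM in GB)
REQUIREMENTS = [
    ("70b", 105, None),  # VRAM listed as "AI Card or better", so not checked here
    ("8x7b", 55, 28),
    ("2x7b", 25, 8),
    ("7b", 10, 2),
]

def largest_fitting_model(ram_gb, vram_gb):
    """Return the largest model size that fits, or None if none do."""
    for name, min_ram, min_vram in REQUIREMENTS:
        if ram_gb >= min_ram and (min_vram is None or vram_gb >= min_vram):
            return name
    return None

print(largest_fitting_model(32, 12))  # -> 2x7b
```

For example, a machine with 32 GB of system RAM and 12 GB of VRAM clears the 2x7b minimums but not the 8x7b ones, so 2x7b is the suggested ceiling.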

Offsite Supported Models

All of these models originate from outside of the Midori AI model repository, and are not subject to the vetting process of Midori AI, although they are compatible with the model installer.

Note that some of these models may deviate from our conventional model formatting standards (Quantized/Non-Quantized), and will be served using a rounding-down approach. For instance, if you request a Q8 model and none is available, the Q6 model will be served instead, and so on.
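The rounding-down behaviour described above can be sketched as follows (assumed function and quant names for illustration; this is not the actual installer code):

```python
# Quant levels ordered from largest to smallest, matching the table above.
QUANT_ORDER = ["Q8", "Q6", "Q5", "Q4", "Q3"]

def resolve_quant(requested, available):
    """Serve `requested` if offered, else round down to the next
    smaller quant level that is available."""
    start = QUANT_ORDER.index(requested)
    for quant in QUANT_ORDER[start:]:
        if quant in available:
            return quant
    raise LookupError(f"no quant at or below {requested} is available")

print(resolve_quant("Q8", {"Q6", "Q4"}))  # -> Q6
```

Note that the fallback only ever moves downward: requesting Q4 from a repository that only offers Q8 would fail rather than serve a larger file than asked for.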

  • 3b-homellm-v1: 3BV1
  • 3b-homellm-v2: 3BV2
  • 1b-homellm-v1: 1BV1