We use mid-sized (7–8 billion parameter) large language models derived from the Llama 3.1 8B and Mistral 7B base models.
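As a rough sketch, such a base model can be loaded via the Hugging Face transformers library; the Hub model IDs and dtype below are illustrative assumptions, not details from the project (the Meta and Mistral checkpoints are gated and require accepting a license):

```python
# Minimal sketch: loading one of the ~7-8B base models.
# Model IDs are assumptions; access to gated repos is required.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B"  # or "mistralai/Mistral-7B-v0.3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # half precision fits a ~8B model on one GPU
    device_map="auto",           # let accelerate place the weights
)
```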

On top of these base models, we train task-specific adapters using the Low-Rank Adaptation (LoRA) method.
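A minimal sketch of attaching a LoRA adapter with the PEFT library is shown below; the rank, alpha, and target modules are illustrative assumptions, not the values actually used in the project:

```python
# Minimal LoRA sketch with PEFT: only the small low-rank matrices
# injected into the attention projections are trained, while the
# base model weights stay frozen. Hyperparameters are assumptions.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update
    lora_alpha=32,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a small fraction of total parameters
```

Because each adapter is only a few megabytes, several specialized adapters can be kept and swapped on top of a single shared base model.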

Additionally, Retrieval-Augmented Generation (RAG) is deployed to increase response accuracy.
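The following sketch illustrates the basic RAG loop under stated assumptions: a small corpus is embedded, the top-k passages most similar to the query are retrieved, and they are prepended to the prompt before generation. The embedding model and prompt template are assumptions, not the project's actual pipeline:

```python
# Minimal RAG sketch: embed a corpus, retrieve top-k passages by
# cosine similarity, and prepend them to the generation prompt.
# Embedding model and prompt format are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")
corpus = ["passage one ...", "passage two ...", "passage three ..."]
corpus_emb = embedder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages whose embeddings are closest to the query."""
    q = embedder.encode([query], normalize_embeddings=True)
    scores = corpus_emb @ q[0]  # cosine similarity on unit vectors
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

query = "What are the adapters trained for?"
context = "\n".join(retrieve(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Grounding generation in retrieved passages lets the model answer from up-to-date or domain-specific documents rather than from its frozen parametric knowledge alone.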