We use mid-sized large language models (≤ 8 billion parameters) derived from the Llama 3.1 8B and Mistral 7B base models.

On top of these base models, we train task-specific adapters using Low-Rank Adaptation (LoRA).
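
A minimal sketch of this adapter training using Hugging Face PEFT is shown below; the model identifier, rank, and other hyperparameters are illustrative assumptions, not the project's actual configuration.

```python
# Minimal LoRA adapter sketch with Hugging Face PEFT.
# Model name and hyperparameters are assumed for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-3.1-8B"  # or "mistralai/Mistral-7B-v0.3"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA injects small trainable low-rank matrices into selected
# weight matrices; the base model's weights stay frozen.
lora_config = LoraConfig(
    r=16,                                 # rank of the update (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights train
```

Because only the low-rank adapter weights are updated, several specialized adapters can be trained and swapped on top of a single frozen base model.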

Additionally, Retrieval-Augmented Generation (RAG) is deployed to increase response accuracy.
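
The sketch below illustrates the core RAG loop under stated assumptions: embed the query, retrieve the closest passages by cosine similarity, and prepend them to the prompt. The corpus, embedding model, and prompt template are all assumed for illustration.

```python
# Minimal RAG sketch: retrieve supporting passages, then build the
# prompt for the LoRA-adapted model. Corpus and embedder are assumed.
from sentence_transformers import SentenceTransformer
import numpy as np

corpus = [
    "Passage one about the domain...",
    "Passage two about the domain...",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedder
corpus_emb = embedder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages whose embeddings are closest to the query."""
    q_emb = embedder.encode([query], normalize_embeddings=True)
    scores = corpus_emb @ q_emb[0]       # cosine similarity (normalized)
    top = np.argsort(scores)[::-1][:k]   # indices of the best matches
    return [corpus[i] for i in top]

query = "What does the project do?"
context = "\n".join(retrieve(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
# `prompt` is then passed to the adapted model for generation.
```

Grounding the model's answer in retrieved passages reduces reliance on what the model memorized during training, which is the mechanism by which RAG improves response accuracy.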