We use mid-sized large language models (under 8 billion parameters) derived from the Llama 3.1 8B and Mistral 7B base models.

On top of these base models, we train task-specific adapters using Low-Rank Adaptation (LoRA).
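The core idea behind LoRA can be sketched in a few lines: instead of updating a full weight matrix, the adapter learns two small low-rank factors whose product is added to the frozen weight. The following is a minimal NumPy illustration with toy dimensions (the matrix sizes, rank, and scaling factor here are illustrative assumptions, not the values used in our training runs):

```python
import numpy as np

# LoRA sketch: the frozen weight W (d_out x d_in) stays fixed; only the
# low-rank factors A (r x d_in) and B (d_out x r) are trained, with r << d.
# The adapted weight is W + (alpha / r) * B @ A.

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8  # toy dimensions for illustration

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def adapted_forward(x):
    # Frozen path plus the scaled low-rank update.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapter is a no-op before training begins.
assert np.allclose(adapted_forward(x), W @ x)

# The adapter trains r * (d_in + d_out) parameters instead of d_in * d_out.
print(r * (d_in + d_out), "adapter params vs", d_in * d_out, "full params")
```

The zero initialization of `B` is the standard LoRA trick that makes the adapted model exactly match the base model at the start of fine-tuning; at typical ranks the adapter holds a small fraction of the full matrix's parameters.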

Additionally, Retrieval-Augmented Generation (RAG) is deployed to increase response accuracy.
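The retrieval step at the heart of RAG can be sketched as follows: rank a document corpus by similarity to the query and prepend the best match to the prompt before generation. This toy example (the corpus, the bag-of-words scoring, and the `retrieve` helper are all illustrative assumptions; a production system would use a neural embedder and a vector index) shows the mechanism:

```python
import math
from collections import Counter

# Toy in-memory corpus; real deployments index many documents.
corpus = [
    "LoRA trains low-rank adapters on top of a frozen base model.",
    "Retrieval augmented generation grounds answers in retrieved documents.",
    "Llama 3.1 8B and Mistral 7B are mid-sized open-weight language models.",
]

def bow(text):
    # Lowercase bag-of-words term counts.
    return Counter(text.lower().replace(".", "").split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank documents by similarity to the query, return the top k.
    q = bow(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

# The retrieved passage would be prepended to the LLM prompt as context,
# so the model can ground its answer in it instead of relying on memory.
context = retrieve("how does retrieval augmented generation work")
print(context[0])
```

Grounding generation in retrieved text is what lets a mid-sized model answer accurately about material it was never fine-tuned on.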