👂 🎴 🕸️
We use mid-sized (<8 billion parameters) large language models derived from the Llama 3.1 8B and Mistral 7B base models.

On top of these models, we subsequently train task-specific adapters using the Low-Rank Adaptation (LoRA) methodology.
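As a rough illustration of what such an adapter setup can look like, here is a minimal Python sketch using the Hugging Face transformers and peft libraries; the target modules and hyperparameters (r, lora_alpha, dropout) are illustrative assumptions, not our exact configuration.

```python
# Minimal LoRA adapter setup (sketch; hyperparameters are assumed values).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "meta-llama/Llama-3.1-8B"  # or "mistralai/Mistral-7B-v0.3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA freezes the base weights and learns low-rank update matrices
# (W + BA, with rank r much smaller than the weight dimensions).
lora_config = LoraConfig(
    r=16,                                 # rank of the update matrices (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Because only the small adapter matrices are trained, a separate adapter can be kept per task while the 7B–8B base weights are shared.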
Additionally, Retrieval-Augmented Generation (RAG) is deployed to increase response accuracy.
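The retrieval step can be sketched as follows, assuming a sentence-transformers embedder; the corpus, embedding model name, and prompt template are hypothetical placeholders rather than our production pipeline.

```python
# Minimal RAG retrieval sketch: embed a corpus, fetch the top-k passages
# for a query, and prepend them to the prompt before generation.
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "Passage one ...",  # placeholder documents
    "Passage two ...",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = embedder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity on normalized vectors
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

query = "..."
context = "\n".join(retrieve(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
# The prompt is then passed to the LoRA-adapted model for generation.
```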