👂 🎴 🕸️
We use mid-sized (≤8 billion parameters) large language models derived from the Llama 3.1 8B and Mistral 7B base models.
On top of these models, we subsequently train task-specific adapters by means of the Low-Rank Adaptation (LoRA) methodology.
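For illustration, here is a minimal sketch of how such an adapter could be set up with the Hugging Face peft library; the model identifier, target modules, and hyperparameters below are assumptions for demonstration, not our actual training configuration:

```python
# Minimal LoRA setup sketch (illustrative hyperparameters, not our real config).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-3.1-8B"  # assumed; "mistralai/Mistral-7B-v0.3" works the same way

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# LoRA freezes the base weights and injects small trainable low-rank
# matrices into selected projections, so only a tiny fraction of
# parameters is updated during adapter training.
lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],   # assumed attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```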
Additionally, Retrieval-Augmented Generation (RAG) is deployed to increase response accuracy.
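A minimal sketch of the RAG pattern follows, assuming a sentence-transformers embedder and a toy in-memory corpus (both are illustrative placeholders, not our deployed stack): passages most similar to the query are retrieved and prepended to the prompt so the model can ground its answer in them.

```python
# Minimal RAG sketch: embed a corpus, retrieve relevant passages,
# and build a context-grounded prompt. Corpus and model are assumptions.
from sentence_transformers import SentenceTransformer, util

corpus = [
    "Llama 3.1 8B is an open-weight language model released by Meta.",
    "LoRA trains small low-rank adapter matrices on top of frozen base weights.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
corpus_embeddings = embedder.encode(corpus, convert_to_tensor=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    query_embedding = embedder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=k)[0]
    return [corpus[hit["corpus_id"]] for hit in hits]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the LLM answers from the documents."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What does LoRA train?"))
```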