Essentially, you have three options:
fine-tuning (every training step updates billions and billions of parameters)
training a LoRA (instead of updating billions of parameters, you update just a few million)
Retrieval-Augmented Generation (RAG)
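The parameter-count contrast between the first two options can be sketched numerically. The snippet below is a toy illustration (hypothetical layer sizes, not a real training loop) of the LoRA idea: a frozen pretrained weight matrix W is adapted as W + B·A, where only the two small low-rank matrices A and B are trained.

```python
import numpy as np

# Hypothetical layer dimensions and LoRA rank, chosen for illustration only.
d_in, d_out, rank = 4096, 4096, 8

full_params = d_in * d_out                 # parameters touched by full fine-tuning
lora_params = rank * d_in + d_out * rank   # parameters trained by LoRA

W = np.zeros((d_out, d_in))                # pretrained weights stay frozen
A = np.random.randn(rank, d_in) * 0.01     # low-rank "down" projection (trainable)
B = np.zeros((d_out, rank))                # low-rank "up" projection, init to zero
                                           # so training starts from the base model

W_adapted = W + B @ A                      # effective weight used at inference

print(full_params, lora_params)            # 16777216 vs 65536 trainable parameters
```

For this one layer, LoRA trains roughly 0.4% of the parameters that full fine-tuning would; across a model with billions of parameters, that is the "few million" mentioned above.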
A Mid-Sized Language Model (MLM) is a generative language model: an advanced AI system comprising at most 10 billion parameters, organized into multiple layers with attention mechanisms. These layers process and interpret vast amounts of text data, while the attention mechanisms allow the model to focus on relevant parts of the input. This architecture enables the model to understand and generate human-like language and to perform nuanced tasks such as answering complex questions, writing detailed texts, and engaging in sophisticated conversations, leveraging its deep learning capabilities.
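The attention mechanism mentioned above can be shown in a minimal sketch. This is single-head scaled dot-product attention with toy random inputs (the token count and embedding size are made up for illustration); real MLMs stack many such heads and layers.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weight each value vector by how well its key matches the query."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # query-key similarity, scaled
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                     # weighted mix of value vectors

# Hypothetical toy input: 3 tokens, embedding dimension 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))

out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per input token
```

Each output row is a convex combination of the value vectors, which is what lets the model "focus" on the most relevant tokens of the input.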
We define an Extended Educational Environment (EEE or E3) as an immersive and interactive XR learning environment enriched with AI-driven artifacts and avatars. These avatars and artifacts, developed using tools like UnrealEngine and MetaHuman, each possess distinct "personalities" or "characters" reflecting their underlying knowledge bases and machine learning models.
"I"-Avatarization is the process whereby a living human H consciously creates, develops, fine tunes and optimizes (his|her) own generative AI avatar datasets & models.
That is, using datasets (mails, chat transcripts, etc.) to create a generative AI copy of oneself (an "I-Avatar") which could provide information in situations when H (him|her)self is no longer alive.