No direct guidance :: Learner L must identify patterns or relationships from raw input on her own.
Pattern discovery :: Observation often involves recognizing and understanding patterns in the environment, which mirrors what happens in unsupervised learning.
Learning from the environment :: L derives insights from the world or data without external labels or supervision.
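These three traits map directly onto clustering, the textbook unsupervised-learning task. As a minimal sketch (scikit-learn and the synthetic data are assumptions for illustration, not part of the slides), k-means discovers group structure without ever seeing a label:

```python
# Minimal sketch of unsupervised pattern discovery: k-means finds
# groups in unlabeled data. The two "natural" clusters are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.5, (50, 2)),   # group A, never labeled
                  rng.normal(3.0, 0.5, (50, 2))])  # group B, never labeled

model = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = model.fit_predict(data)       # structure inferred, no supervision
print(model.cluster_centers_)          # the discovered patterns
```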
Imitation in humans, especially in children, is the process of learning by observing and replicating the actions, behaviors, or expressions of others. It is a fundamental mechanism for acquiring social, cognitive, and motor skills, allowing children to mimic gestures, language, or problem-solving techniques. Through imitation, children learn cultural norms, communication patterns, and even complex tasks without explicit instruction. It is crucial for early development, as it helps children integrate into their social environment and build understanding by modeling behaviors they see in parents, peers, and others.
Learning is building A model of THE world.
Experiential learning is a process of learning through direct experience, where individuals engage in activities, reflect on their actions, and apply what they’ve learned to new situations. Rather than solely reading or listening, learners actively participate, often experimenting, making mistakes, and adapting.
In learning and problem-solving, a "trial" is a single attempt or effort to reach a goal or find a solution. Each trial involves testing an idea or making a change, then observing the result. If the trial doesn’t succeed, adjustments are made based on what was learned, and a new trial begins. This process, called "trial and error," continues until the desired outcome is achieved. Trials are key in learning because they allow us to explore, adapt, and improve with each attempt, reducing mistakes and moving closer to success over time.
In learning and problem-solving, an "error" is the difference between what we aimed to achieve and the actual result. It shows how far we are from the desired outcome and helps us understand what needs adjusting. Errors aren’t failures; they’re valuable feedback, guiding us to make improvements in each new attempt. By identifying and reducing errors through practice and adjustments, we gradually get closer to the goal. In this way, errors are essential to learning, as they highlight what doesn’t work and point us toward what might.
In learning, skill-building, and habit formation, repetition is the process of repeatedly practicing or performing a task. This continuous repetition reinforces memory, builds familiarity, and over time, transforms skills into habits or reflexes, making actions more automatic and effortless. Through repetition, connections in the brain are strengthened, allowing tasks to be completed with less conscious effort.
A game provides a closed system whereby we are allowed to commit errors (& learn from them) without major consequences for real life.
What is Your favorite game / way of playing?
Is there any game You would like to bring & play with Your colleagues during the Congress?
Supervised learning is a type of machine learning where a model is trained on labeled data to learn the mapping between input features and corresponding outputs. The goal is to enable the model to make accurate predictions or classifications on unseen data by minimizing the error between its predictions and the true labels. Common tasks include regression (predicting continuous values) and classification (assigning categories). Supervised learning relies on a training dataset with known inputs and outputs and evaluates performance using a separate test dataset. Examples include spam email detection, image recognition, and speech-to-text systems.
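A minimal sketch of that workflow, assuming scikit-learn and its bundled iris dataset purely for illustration:

```python
# Supervised learning in miniature: fit a model on labeled training
# data, then measure how well it predicts labels of held-out test data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)               # features and true labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)       # separate test dataset

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))   # error on unseen data
```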
Features are observable and measurable properties or characteristics used to describe data in both machine learning and human experience.
In ML, features are input variables—raw (e.g., pixel intensities, audio waveforms) or engineered (e.g., embeddings, statistical summaries)—that models use to make predictions.
In human experience, features represent sensory or cognitive details like color, texture, pitch, or emotional tone, helping interpret and navigate the world.
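A toy contrast between the two kinds of ML features, with an invented signal standing in for raw data:

```python
# Raw vs. engineered features for the same audio-like signal.
# The signal and the chosen statistics are invented for illustration.
import numpy as np

signal = np.sin(np.linspace(0, 8 * np.pi, 1000))   # "raw" waveform samples

raw_features = signal                              # fed directly to a model
engineered = np.array([
    signal.mean(),                                 # statistical summary
    signal.std(),                                  # statistical summary
    np.abs(np.fft.rfft(signal)).argmax(),          # dominant frequency bin
])
print(engineered)
```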
Supervised learning parallels human learning through its reliance on guidance from labeled examples, similar to how humans learn with feedback. For instance, when a child learns to identify objects, they receive input (the object) and a corresponding label (e.g., "dog" or "apple") from a teacher or parent. Mistakes are corrected, reinforcing the connection between input and label, much like how supervised learning algorithms adjust their predictions based on errors.
A classifier in machine learning is a model or algorithm designed to categorize data into predefined groups or labels. It takes input data, analyzes its features, and assigns it to a specific class based on learned patterns from training data. For example, a classifier might identify whether an email is spam or not spam, or recognize handwritten digits. Classifiers are essential in supervised learning tasks and operate by minimizing errors in predictions through training on labeled datasets. Common types include neural networks, support vector machines, decision trees etc.
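To make the idea concrete without committing to any particular library, here is a deliberately minimal nearest-centroid classifier in plain NumPy; it is a sketch of the concept, not of any method named above:

```python
# A classifier reduced to its essence: learn one pattern (centroid)
# per class from labeled data, then assign new points to the nearest.
import numpy as np

def train(X, y):
    # one centroid per class, learned from the training data
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(centroids, x):
    # assign x to the class whose centroid is closest
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

X = np.array([[1.0, 1.0], [1.2, 0.8], [4.0, 4.2], [3.8, 4.0]])
y = np.array([0, 0, 1, 1])
centroids = train(X, y)
print(classify(centroids, np.array([4.1, 3.9])))   # -> 1
```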
In supervised machine learning, training is the process of teaching a model, like a classifier, to make accurate predictions by learning patterns from labeled data. Each data point in the training set includes features (characteristics or inputs that describe the data, like size or color) and a corresponding label (the correct output or category). The model uses this data to adjust its internal parameters, minimizing the error between its predictions and the actual labels. This is done through algorithms like gradient descent. The goal is to generalize from the training data, enabling the classifier to make accurate predictions on new, unseen data.
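A hedged sketch of that loop, using linear regression on synthetic data because its gradient is simple to write down; all numbers are invented:

```python
# Training by gradient descent: repeatedly nudge the parameters (w, b)
# to shrink the error between predictions and labels.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(0.0, 0.1, 100)   # labeled data

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    pred = w * X[:, 0] + b
    err = pred - y                        # prediction minus true label
    w -= lr * (err * X[:, 0]).mean()      # MSE gradient (up to a constant)
    b -= lr * err.mean()
print(w, b)                               # converges near 3.0 and 1.0
```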
In supervised machine learning, *testing* (or *inference*) is the process of evaluating a trained model's ability to make accurate predictions on new, unseen data. During this phase, the model is given data points with *features* (inputs like size or color) but without the labels it was trained on. The model uses the patterns it learned during training to predict the labels for this data. The results are then compared to the actual labels (if available) to measure the model's performance using metrics like accuracy or precision. Inference is the final application of the model to make real-world predictions.
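A small illustrative sketch of the training/inference separation (scikit-learn and the toy numbers are assumptions):

```python
# Inference: the trained model sees only features of new data and
# outputs predicted labels; true labels, if any, are used only to score it.
from sklearn.tree import DecisionTreeClassifier

X_train = [[0], [1], [2], [3]]
y_train = [0, 0, 1, 1]                 # labels seen only during training

model = DecisionTreeClassifier().fit(X_train, y_train)
X_new = [[1.0], [2.5]]                 # unseen inputs, no labels supplied
print(model.predict(X_new))            # -> [0 1]
```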
Binary classifiers are evaluated by comparing their predictions to the actual outcomes using a confusion matrix. This is a table with four categories: True Positives (TP), where the classifier correctly predicts a positive outcome; True Negatives (TN), where it correctly predicts a negative outcome; False Positives (FP), where it wrongly predicts a positive; and False Negatives (FN), where it misses a positive case. Metrics like accuracy (overall correctness), precision (focus on positives), and recall (how well positives are found) are calculated from this matrix, helping to assess the classifier’s performance.
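These quantities are simple enough to compute directly; the two label vectors below are invented for illustration:

```python
# Confusion-matrix cells and derived metrics for a binary classifier.
import numpy as np

actual    = np.array([1, 0, 1, 1, 0, 0, 1, 0])
predicted = np.array([1, 0, 0, 1, 0, 1, 1, 0])

tp = ((predicted == 1) & (actual == 1)).sum()   # true positives
tn = ((predicted == 0) & (actual == 0)).sum()   # true negatives
fp = ((predicted == 1) & (actual == 0)).sum()   # false positives
fn = ((predicted == 0) & (actual == 1)).sum()   # false negatives

accuracy  = (tp + tn) / len(actual)   # overall correctness
precision = tp / (tp + fp)            # of predicted positives, share correct
recall    = tp / (tp + fn)            # of real positives, share found
print(tp, tn, fp, fn, accuracy, precision, recall)
```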
In supervised machine learning, *validating* is the process of fine-tuning and assessing a model's performance during training to ensure it generalizes well to unseen data. Unlike testing, validation occurs on a separate *validation set*, distinct from both training and testing data. The model uses the *features* of this set to make predictions, which are compared to the actual labels to calculate metrics like accuracy or loss. This helps monitor overfitting or underfitting and guides adjustments to model parameters or hyperparameters (e.g., learning rate or regularization). Validation ensures the classifier is optimized before its final evaluation on the test set.
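A sketch of a validation-driven hyperparameter search, assuming scikit-learn; the candidate values of the regularization parameter C are arbitrary:

```python
# Three-way split: train / validation / test. The validation set picks
# the hyperparameter; the test set is touched only once, at the end.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)

best_C, best_acc = None, 0.0
for C in [0.01, 0.1, 1.0, 10.0]:                 # hyperparameter candidates
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    acc = model.score(X_val, y_val)              # validation, not test
    if acc > best_acc:
        best_C, best_acc = C, acc

final = LogisticRegression(C=best_C, max_iter=1000).fit(X_train, y_train)
print("chosen C:", best_C, "test accuracy:", final.score(X_test, y_test))
```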
Reinforcement learning (RL) is a machine learning paradigm where an agent learns to make decisions by interacting with an environment. Instead of being told what to do, the agent takes actions and receives feedback in the form of rewards or penalties. The goal is to maximize cumulative rewards over time by discovering an optimal strategy, known as a policy. RL is inspired by trial-and-error learning in humans and animals, where behavior improves through experience. It’s particularly useful for tasks with sequential decision-making, such as robotics, game playing, and autonomous systems, where actions impact not only immediate rewards but also future outcomes.
Experiential learning, Unsupervised learning, Supervised learning, Classifiers & Machine Learning ...
Supervised learning resembles a structured classroom environment, where explicit feedback is given for each example (e.g., a teacher correcting a student's answers). In contrast, reinforcement learning mirrors experiential learning, where feedback comes as rewards or penalties after actions, guiding behavior toward long-term goals. For instance, a child learning to ride a bike might fall (penalty) or stay balanced (reward), gradually improving through trial and error.
“It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used.”
G.W. Leibniz (Describing, in 1685, the value to astronomers of the hand-cranked calculating machine he had invented in 1673.)
In machines, reinforcement learning (RL) is implemented using an agent-environment framework. The agent interacts with an environment by taking actions based on a policy (a strategy for decision-making). The environment provides feedback in the form of rewards or penalties, guiding the agent to improve its actions. Key components include a reward function to evaluate outcomes, a value function to estimate long-term benefits of actions, and exploration strategies to balance learning new behaviors versus exploiting known rewards.
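Stripped to its bones, the interaction loop might look like the following sketch; the two-state toy environment and all constants are invented for illustration:

```python
# Agent-environment loop: act under a policy, receive reward, update
# value estimates, and balance exploration against exploitation.
import random

def step(state, action):
    # toy environment: the action chooses the next state directly,
    # and only state 1 pays a reward
    next_state = action
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

value = {0: 0.0, 1: 0.0}        # value estimate per action (state ignored)
state, epsilon = 0, 0.2
for _ in range(100):
    if random.random() < epsilon:
        action = random.choice([0, 1])        # explore a new behavior
    else:
        action = max(value, key=value.get)    # exploit known rewards
    state, reward = step(state, action)
    value[action] += 0.1 * (reward - value[action])   # running average
print(value)                    # action 1 emerges as the better choice
```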
When satisfaction follows association, it is more likely to be repeated.
Q-learning is a model-free reinforcement learning algorithm that enables an agent to learn an optimal policy for decision-making. It works by estimating the Q-values (action-value function), which represent the expected cumulative reward for taking an action in a given state and following the best future actions. The agent updates Q-values iteratively using the formula:
$$Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]$$

where $\alpha$ is the learning rate, $\gamma$ the discount factor, $r$ the reward received after taking action $a$ in state $s$, and $s'$ the resulting next state.
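A tabular implementation of this update on a tiny invented chain world (move left or right along five cells, with reward 1 at the right end) might look like this sketch:

```python
# Tabular Q-learning on a 5-cell chain: a minimal, illustrative
# implementation of the update rule above.
import random

N_STATES, ACTIONS = 5, (0, 1)          # action 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(500):                   # episodes
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[s][act])
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # the Q-learning update from the formula above
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])   # values grow toward the goal state
```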
Deep reinforcement learning (DRL) is a type of machine learning where an agent learns to make decisions by trial and error, guided by rewards or penalties, using deep neural networks. Unlike traditional methods, which struggle with complex environments, DRL allows machines to learn directly from raw data, such as images or game screens. The neural network helps the agent recognize patterns and improve its decisions over time. DRL has achieved impressive results in tasks like playing games (e.g., Atari video games, Go with AlphaGo), controlling robots, and developing self-driving cars, making it a powerful tool for solving real-world problems involving sequential decision-making.
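The step from tabular Q-learning to DRL can be suggested in a few lines: the Q-table becomes a network that maps a raw state vector to one Q-value per action. PyTorch, the layer sizes, and the 4-dimensional state below are illustrative assumptions, and no training loop is shown:

```python
# Deep RL in one gesture: a neural network replaces the Q-table.
import torch
import torch.nn as nn

q_net = nn.Sequential(                 # raw state -> Q-value per action
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

state = torch.randn(4)                 # raw observation (e.g., sensor values)
q_values = q_net(state)
action = int(torch.argmax(q_values))   # greedy action from the network
print(q_values, action)
```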
"Cells that fire together, wire together."
And it came to pass in those days, that there went out a decree from Caesar Augustus, that all the world should be taxed.
And this taxing was first made when Cyrenius was governor of Syria.
And all went to be taxed, every one into his own city.
And Joseph also went up from Galilee, out of the city of Nazareth, unto the city of David, which is called Bethlehem, because he was of the house and lineage of David, to be taxed with Mary his espoused wife, being great with child ...
Social learning
Peer learning
Machine Learning
Human-Machine Peer Learning
Teaching, Pedagogy, Didactics
Artificial Teacher Avatars
Educational Systems
Extended Educational Environments
The Congress
- You start the recording with a start button (You can click on it, touch it, or simply move Your finger/cursor over it).
- Subsequently, You move the finger/cursor over the first syllable/word. You pronounce the segment only when its background is green.
- You gradually progress from syllable to syllable and segment to segment. You always start pronouncing a segment only when it is green.
- Note that accentuated & long syllables are marked in blue, while short/non-accentuated syllables are marked in green.
- After You are done with the last syllable, You move Your finger towards the stop button. Then You can play back the whole recording; when You are satisfied, You click on "Upload" and the next text appears.
In order to be fully compliant with European data-protection law, we need Your explicit consent regarding the use of Your voice data. Please choose one among the following consent types:
- Do not upload :: You do not give us Your consent. Thus, no data will be uploaded from Your browser to our server. But You can still use the interface for testing purposes.
- Only speech-to-text models :: Your recordings will become part of the corpus from which automatic speech recognition (ASR) models will be trained. The corpus itself will not be published, but the final model will be.
- Only text-to-speech :: Similar to the previous option, but this time the resulting model will not be used for ASR but for the synthesis of artificial voices.
- STT & TTS :: Both ASR and voice synthesis models could be trained from datasets containing Your recordings. Again, the recordings themselves will not be published.
- Public Dataset :: Your recordings will become part of a publicly available dataset. This is the most permissive option.
Note that in all cases, Your recordings will be anonymized, and aside from voluntary gender / age / zodiac sign / mother-language information, no metadata is collected.