👂🎴 🕸️
Each student randomly picks up one sticker. Each sticker type has an associated point value. You have 15 minutes to do whatever You want (e.g. deception, corruption, seduction etc.*).
After 15 minutes, the person (or student group) that managed to collect the stickers with the highest sum of points will be allowed to establish one rule which the whole class will follow during the rest of the semester.

* The only thing prohibited is violence.
All those who have a UdK account, log in here*:

<a href="https://medienhaus.udk-berlin.de/login" target="_blank" rel="noopener">https://medienhaus.udk-berlin.de/login</a>
and subsequently join the course (#edu-intelligence) room:

<a href="https://medienhaus.udk-berlin.de/classroom/#/room/#edu-art-cognition:medienhaus.udk-berlin.de" target="_blank" rel="noopener">https://medienhaus.udk-berlin.de/classroom/#/room/#edu-art-cognition:medienhaus.udk-berlin.de</a>

(or install a Matrix client app such as Element or FluffyChat and set medienhaus.udk-berlin.de as the homeserver)
<p class="fragment">who am I</p>
<p class="fragment">who are You</p>
<p class="fragment">is this a course for You?</p>
<p class="fragment">credits (2 ECTS for >75% attendance, +1 for Referat / Congress contribution, +2 for Hausarbeit)</p>
<p class="fragment">Leistungsnachweis</p>
<p class="fragment">signature-related issues</p>
<p class="fragment">Feedback box</p>
<p class="fragment">Congress</p>
Please answer (anonymously), on the piece of paper, at least one of the following questions:
<p class="fragment">1. What did You learn?</p>
<p class="fragment">2. What did You like?</p>
<p class="fragment">3. What disturbed You?</p>
<p class="fragment">4. What did You not like?</p>
and throw it into the Feedback box.
<p class="fragment">TAKEN Chapter 3 (Babies' Invisible Knowledge) and 4 (The Birth of a Brain) from Dehaene's How We Learn</p>
<p class="fragment">TAKEN Chapter 5 (Nurture's Share) and 6 (Recycle Your Brain) from Dehaene's How We Learn</p>
<p class="fragment">Chapter 7 (Attention) and 8 (Active Engagement) from Dehaene's How We Learn</p>
<p class="fragment">Chapter 9 (Error Feedback) and 10 (Consolidation) from Dehaene's How We Learn</p>
<p class="fragment">AI unplugged activity - Classification with Decision Trees</p>
<p class="fragment">AI unplugged activity - #deeplearning</p>
<p class="fragment">AI unplugged activity - Reinforcement learning</p>
<p class="fragment">Non-human learning (plants, animals etc.)</p>
<p class="fragment">Un-learning & altered learning.</p>
Man is a 'homo discens,' a learning being. People learn as long as they live. Life is inseparably connected with learning.

Horst Siebert
From Proto-Italic *diskō, from earlier *dikskō, from Proto-Indo-European *di-dḱ-ské-ti, derived from the root *deḱ- (“<strong>to take</strong>”). From the same root as doceō and discipline; unrelated to discipulus.
<p class="fragment">What did You learn today?</p>
<p class="fragment">What was the most important thing You learned this year?</p>
Implicit learning is the process of acquiring knowledge or skills unconsciously, without intentional effort or explicit awareness of what is being learned. It typically occurs through repeated exposure to patterns, stimuli, or behaviors, allowing individuals to internalize rules or structures without being able to articulate them directly.
Observation in the context of learning refers to the process of acquiring knowledge, skills, or behaviors by perceiving the actions of others or the dynamics within an environment. This learning occurs without direct engagement but through experiencing external stimuli, events, or behaviors.
Unsupervised learning is a type of machine learning where a model is trained on data without labeled outcomes. The system analyzes and identifies patterns or structures in the data, such as clustering or associations, without explicit guidance on what to look for. It is often used for tasks like anomaly detection, clustering, and dimensionality reduction.
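To make this concrete, below is a minimal sketch (plain Python with made-up toy data; nothing here is part of the course tooling) of one classic unsupervised method, k-means clustering, which groups unlabeled points purely by similarity:

<pre><code>
import random

def kmeans(points, k, iterations=20):
    """Group unlabeled 2-D points into k clusters by proximity alone."""
    centers = random.sample(points, k)          # start from k random points
    for _ in range(iterations):
        # assignment step: attach every point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            distances = [(p[0]-c[0])**2 + (p[1]-c[1])**2 for c in centers]
            clusters[distances.index(min(distances))].append(p)
        # update step: move each center to the mean of its cluster
        for i, cluster in enumerate(clusters):
            if cluster:
                centers[i] = (sum(p[0] for p in cluster) / len(cluster),
                              sum(p[1] for p in cluster) / len(cluster))
    return centers, clusters

# two unlabeled "blobs"; the algorithm discovers the grouping without any labels
data = [(0.1, 0.2), (0.0, 0.4), (0.3, 0.1), (5.0, 5.2), (5.3, 4.9), (4.8, 5.1)]
centers, clusters = kmeans(data, k=2)
print(centers)
</code></pre>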
<p class="fragment"><strong>No direct guidance</strong>: Learner L must identify patterns or relationships from raw input on her own.</p>
<p class="fragment"><strong>Pattern discovery</strong>: Observation often involves recognizing and understanding patterns in the environment, which mirrors what happens in unsupervised learning.</p>
<p class="fragment"><strong>Learning from the environment</strong>: L derives insights from the world or data without external labels or supervision.</p>
Imitation in humans, especially in children, is the process of learning by observing and replicating the actions, behaviors, or expressions of others. It is a fundamental mechanism for acquiring social, cognitive, and motor skills, allowing children to mimic gestures, language, or problem-solving techniques. Through imitation, children learn cultural norms, communication patterns, and even complex tasks without explicit instruction. It is crucial for early development, as it helps children integrate into their social environment and build understanding by modeling behaviors they see in parents, peers, and others.
Learning is building A model of THE world.
Experiential learning is a process of learning through direct experience, where individuals engage in activities, reflect on their actions, and apply what they've learned to new situations. Rather than solely reading or listening, learners actively participate, often experimenting, making mistakes, and adapting.
In learning and problem-solving, a trial is a single attempt or effort to reach a goal or find a solution. Each trial involves testing an idea or making a change, then observing the result. If the trial doesn't succeed, adjustments are made based on what was learned, and a new trial begins. This process, called trial and error, continues until the desired outcome is achieved. Trials are key in learning because they allow us to explore, adapt, and improve with each attempt, reducing mistakes and moving closer to success over time.
In learning and problem-solving, an error is the difference between what we aimed to achieve and the actual result. It shows how far we are from the desired outcome and helps us understand what needs adjusting. Errors aren't failures; they're valuable feedback, guiding us to make improvements in each new attempt. By identifying and reducing errors through practice and adjustments, we gradually get closer to the goal. In this way, errors are essential to learning, as they highlight what doesn't work and point us toward what might.
In learning, skill-building, and habit formation, repetition is the process of repeatedly practicing or performing a task. This continuous repetition reinforces memory, builds familiarity, and, over time, transforms skills into habits or reflexes, making actions more automatic and effortless. Through repetition, connections in the brain are strengthened, allowing tasks to be completed with less conscious effort.
A game provides a closed system whereby we are allowed to commit errors (and learn from them) without major consequences for real life.
<p class="fragment">What is Your favorite game / way of playing?</p>
<p class="fragment">Any game You would like to bring & play with Your colleagues during the Congress?</p>
Supervised learning is a type of machine learning where a model is trained on labeled data to learn the mapping between input features and corresponding outputs. The goal is to enable the model to make accurate predictions or classifications on unseen data by minimizing the error between its predictions and the true labels. Common tasks include regression (predicting continuous values) and classification (assigning categories). Supervised learning relies on a training dataset with known inputs and outputs and evaluates performance using a separate test dataset. Examples include spam email detection, image recognition, and speech-to-text systems.
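As a toy illustration of that training/prediction cycle (the fruit data and function names are invented for this sketch, not taken from the course), a nearest-centroid classifier learned from a handful of labeled examples could look like this:

<pre><code>
def train(examples):
    """examples: list of (features, label); returns one centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the new feature vector."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# labeled training data: (weight in g, diameter in cm) paired with a fruit label
training = [((150, 7), "apple"), ((160, 8), "apple"), ((10, 1), "cherry"), ((12, 1.2), "cherry")]
model = train(training)
print(predict(model, (140, 6.5)))   # expected: "apple"
</code></pre>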
Features are observable and measurable properties or characteristics used to describe data in both machine learning and human experience.

In ML, features are input variables, either raw (e.g., pixel intensities, audio waveforms) or engineered (e.g., embeddings, statistical summaries), that models use to make predictions.

In human experience, features represent sensory or cognitive details like color, texture, pitch, or emotional tone, helping interpret and navigate the world.
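A small sketch of that raw-versus-engineered distinction, with made-up numbers: the raw feature vector is just a list of pixel intensities, while the engineered features summarize the same signal statistically:

<pre><code>
# raw features: pixel intensities of a tiny grayscale image, straight from the sensor
raw = [0.1, 0.9, 0.8, 0.2, 0.7, 0.9]

# engineered features: hand-crafted summaries of the same raw signal
mean = sum(raw) / len(raw)
variance = sum((x - mean) ** 2 for x in raw) / len(raw)
brightness_ratio = sum(1 for x in raw if x > 0.5) / len(raw)

engineered = [mean, variance, brightness_ratio]
print(engineered)
</code></pre>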
Supervised learning parallels human learning through its reliance on guidance from labeled examples, similar to how humans learn with feedback. For instance, when a child learns to identify objects, they receive input (the object) and a corresponding label (e.g., "dog" or "apple") from a teacher or parent. Mistakes are corrected, reinforcing the connection between input and label, much like how supervised learning algorithms adjust their predictions based on errors.
A classifier in machine learning is a model or algorithm designed to categorize data into predefined groups or labels. It takes input data, analyzes its features, and assigns it to a specific class based on learned patterns from training data. For example, a classifier might identify whether an email is spam or not spam, or recognize handwritten digits. Classifiers are essential in supervised learning tasks and operate by minimizing errors in predictions through training on labeled datasets. Common types include neural networks, support vector machines, decision trees, etc.
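For instance, the simplest decision-tree-style classifier is a single threshold rule learned from labeled data; a toy sketch (the spam/ham data are invented for illustration):

<pre><code>
def fit_stump(examples):
    """examples: list of (value, label); learn the threshold that separates labels best."""
    best = None
    for threshold in sorted(v for v, _ in examples):
        # candidate rule: predict "spam" if value is above the threshold, else "ham"
        correct = sum(1 for v, label in examples
                      if (label == "spam") == (v > threshold))
        if best is None or correct > best[1]:
            best = (threshold, correct)
    return best[0]

# feature: number of exclamation marks in an e-mail, label: spam / ham
data = [(0, "ham"), (1, "ham"), (5, "spam"), (7, "spam"), (2, "ham"), (6, "spam")]
threshold = fit_stump(data)
print("spam" if 4 > threshold else "ham")   # classify a new e-mail with 4 marks
</code></pre>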
In supervised machine learning, training is the process of teaching a model, like a classifier, to make accurate predictions by learning patterns from labeled data. Each data point in the training set includes features (characteristics or inputs that describe the data, like size or color) and a corresponding label (the correct output or category). The model uses this data to adjust its internal parameters, minimizing the error between its predictions and the actual labels. This is done through algorithms like gradient descent. The goal is to generalize from the training data, enabling the classifier to make accurate predictions on new, unseen data.
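Since the paragraph mentions gradient descent, here is a minimal sketch of the idea, assuming a one-parameter linear model and invented toy data: the weight is nudged against the gradient of the error until the predictions match the labels.

<pre><code>
# toy training set: inputs x with labels y that follow y = 3 * x (the model must discover the 3)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

w = 0.0                 # internal parameter, initialized arbitrarily
learning_rate = 0.01

for step in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad        # move the parameter against the gradient

print(round(w, 3))       # approaches 3.0 as the error is minimized
</code></pre>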
In supervised machine learning, *testing* (or *inference*) is the process of evaluating a trained model's ability to make accurate predictions on new, unseen data. During this phase, the model is given data points with *features* (inputs like size or color) but without the labels it was trained on. The model uses the patterns it learned during training to predict the labels for this data. The results are then compared to the actual labels (if available) to measure the model's performance using metrics like accuracy or precision. Inference is the final application of the model to make real-world predictions.
Binary classifiers are evaluated by comparing their predictions to the actual outcomes using a confusion matrix. This is a table with four categories: True Positives (TP), where the classifier correctly predicts a positive outcome; True Negatives (TN), where it correctly predicts a negative outcome; False Positives (FP), where it wrongly predicts a positive; and False Negatives (FN), where it misses a positive case. Metrics like accuracy (overall correctness), precision (focus on positives), and recall (how well positives are found) are calculated from this matrix, helping to assess the classifier's performance.
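Those metrics follow directly from the four counts; a short sketch with made-up numbers:

<pre><code>
def metrics(tp, tn, fp, fn):
    """Accuracy, precision and recall from a binary confusion matrix."""
    accuracy  = (tp + tn) / (tp + tn + fp + fn)   # overall correctness
    precision = tp / (tp + fp)                    # of predicted positives, how many were right
    recall    = tp / (tp + fn)                    # of actual positives, how many were found
    return accuracy, precision, recall

print(metrics(tp=40, tn=45, fp=5, fn=10))   # e.g. (0.85, 0.888..., 0.8)
</code></pre>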
In supervised machine learning, *validating* is the process of fine-tuning and assessing a model's performance during training to ensure it generalizes well to unseen data. Unlike testing, validation occurs on a separate *validation set*, distinct from both training and testing data. The model uses the *features* of this set to make predictions, which are compared to the actual labels to calculate metrics like accuracy or loss. This helps monitor overfitting or underfitting and guides adjustments to model parameters or hyperparameters (e.g., learning rate or regularization). Validation ensures the classifier is optimized before its final evaluation on the test set.
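A sketch of how the three datasets are usually separated before any training starts (the proportions and the helper name are illustrative, not prescribed by the course):

<pre><code>
import random

def split(dataset, train_frac=0.7, val_frac=0.15):
    """Shuffle and cut a labeled dataset into training, validation and test parts."""
    data = dataset[:]
    random.shuffle(data)
    n_train = int(len(data) * train_frac)
    n_val = int(len(data) * val_frac)
    train = data[:n_train]                      # used to fit the parameters
    validation = data[n_train:n_train + n_val]  # used to tune hyperparameters / detect overfitting
    test = data[n_train + n_val:]               # touched only once, for the final evaluation
    return train, validation, test

dataset = [(i, i % 2) for i in range(100)]      # 100 toy (features, label) pairs
train, validation, test = split(dataset)
print(len(train), len(validation), len(test))   # 70 15 15
</code></pre>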
Reinforcement learning (RL) is a machine learning paradigm where an agent learns to make decisions by interacting with an environment. Instead of being told what to do, the agent takes actions and receives feedback in the form of rewards or penalties. The goal is to maximize cumulative rewards over time by discovering an optimal strategy, known as a policy. RL is inspired by trial-and-error learning in humans and animals, where behavior improves through experience. It's particularly useful for tasks with sequential decision-making, such as robotics, game playing, and autonomous systems, where actions impact not only immediate rewards but also future outcomes.
Experiential learning, Unsupervised learning, Supervised learning, Classifiers & Machine Learning ...
Supervised learning resembles a structured classroom environment, where explicit feedback is given for each example (e.g., a teacher correcting a student's answers). In contrast, reinforcement learning mirrors experiential learning, where feedback comes as rewards or penalties after actions, guiding behavior toward long-term goals. For instance, a child learning to ride a bike might fall (penalty) or stay balanced (reward), gradually improving through trial and error.
“It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used.”

G. W. Leibniz (describing, in 1685, the value to astronomers of the hand-cranked calculating machine he had invented in 1673)
In machines, reinforcement learning (RL) is implemented using an agent-environment framework. The agent interacts with an environment by taking actions based on a policy (a strategy for decision-making). The environment provides feedback in the form of rewards or penalties, guiding the agent to improve its actions. Key components include a reward function to evaluate outcomes, a value function to estimate long-term benefits of actions, and exploration strategies to balance learning new behaviors versus exploiting known rewards.
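A skeleton of that agent-environment loop, reduced to a one-step (bandit-style) toy problem with an invented environment and an epsilon-greedy exploration strategy:

<pre><code>
import random

actions = ["left", "right"]
values = {a: 0.0 for a in actions}     # the agent's running estimate of each action's value
counts = {a: 0 for a in actions}
epsilon = 0.2                          # exploration rate: how often to try something new

def environment(action):
    """Toy environment: 'right' pays better on average."""
    return random.gauss(1.0, 0.1) if action == "right" else random.gauss(0.2, 0.1)

for step in range(500):
    # exploration vs. exploitation
    if random.random() > epsilon:
        action = max(values, key=values.get)   # exploit the best-known action
    else:
        action = random.choice(actions)        # explore
    reward = environment(action)               # feedback from the environment
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # incremental mean

print(values)   # the estimate for "right" should end up close to 1.0
</code></pre>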
When satisfaction follows association, it is more likely to be repeated.
<p>Q-learning is a model-free reinforcement learning algorithm that enables an agent to learn an optimal policy for decision-making. It works by estimating the <strong>Q-values</strong> (action-value function), which represent the expected cumulative reward for taking an action in a given state and following the best future actions. The agent updates Q-values iteratively using the formula:</p>
<img src="https://miro.medium.com/v2/resize:fit:1043/1*vTMQI14ls9lWzRXzJGi4sg.jpeg" />
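In case the image does not load, the update it typically renders is the standard Q-learning rule; a sketch of a single update step on a small tabular Q-table (alpha is the learning rate, gamma the discount factor; states and actions are invented):

<pre><code>
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning step:
       Q(s,a) := Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))"""
    best_next = max(Q[next_state].values())    # value of the best future action
    td_target = reward + gamma * best_next     # what the current estimate should move toward
    Q[state][action] += alpha * (td_target - Q[state][action])

# toy table with two states and two actions, all estimates initialized to zero
Q = {"s0": {"a0": 0.0, "a1": 0.0}, "s1": {"a0": 0.0, "a1": 0.0}}
q_update(Q, "s0", "a1", reward=1.0, next_state="s1")
print(Q["s0"]["a1"])   # 0.1 after a single update
</code></pre>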
DRL (deep reinforcement learning) is a type of machine learning where an agent learns to make decisions by trial and error, guided by rewards or penalties, using deep neural networks. Unlike traditional methods, which struggle with complex environments, DRL allows machines to learn directly from raw data, like images or game screens. The neural network helps the agent recognize patterns and improve its decisions over time. DRL has achieved impressive results in tasks like playing video games (e.g., Atari, AlphaGo), controlling robots, and developing self-driving cars, making it a powerful tool for solving real-world problems involving sequential decision-making.

Cells that fire together, wire together.
And it came to pass in those days, that there went out a decree from Caesar Augustus, that all the world should be taxed.

And this taxing was first made when Cyrenius was governor of Syria.

And all went to be taxed, every one into his own city.

And Joseph also went up from Galilee, out of the city of Nazareth, into Judaea, unto the city of David, which is called Bethlehem (because he was of the house and lineage of David), to be taxed with Mary his espoused wife, who was with child ...
Social learning
Peer learning
Machine Learning
Human-Machine Peer Learning
Teaching, Pedagogy, Didactics
Artificial Teacher Avatars
Educational Systems
Extended Educational Environments
The Congress