Finished "Neuroscience" lecture series 🥳
started on "Manifold learning", same teacher
aka "dimensionality reduction"
I always thought that in ML, when we say "this data has 1000s of dimensions", those are the accurate dimensions of its representation, and dimensionality reduction is just an approximation so our eyes can perceive it at some level
But actually, the true representation of 1000-dimensional data might be 50 dimensions, or even 3 or 2
so those 1000s of dimensions are just the space the data sits in, but that doesn't mean it's the most meaningful representation
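A minimal sketch of that idea, using synthetic data I made up for illustration: 500 points embedded in 1000-dimensional space that actually live on a 3-dimensional subspace. PCA (here via a plain SVD, no sklearn needed) recovers that nearly all the variance sits in just 3 components.

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 3))       # the "true" 3-D coordinates
mixing = rng.normal(size=(3, 1000))      # embed them into 1000-D space
X = latent @ mixing                      # 500 points in R^1000

# PCA via SVD on the centered data
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)  # singular values
var_ratio = s**2 / np.sum(s**2)          # fraction of variance per component

# nearly all the variance is explained by the first 3 components
print(var_ratio[:5].round(4))
```

Real data is rarely this clean (the low-dimensional structure is usually curved, which is where methods like t-SNE and UMAP come in), but it shows why 1000 ambient dimensions can hide a much smaller intrinsic dimension.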
This is a quick introductory course and I highly recommend it: only 7 short lectures
She teaches really well, and the lectures also cover methods like t-SNE, UMAP, PCA etc
I will put a link in comment
Joy of life-long learning day 77...
Almost done with Neuroscience. But will pick up another course on it once finished
Intro to Neuroscience: 95% - Today: 6%
LLM training-Stanford Lecture: 8% - Today: 3%
--------
Fourier Analysis: 22% - Today: 0%
Discrete Mathematics: 33% - Today: 0%
Fundamentals of Physics 1: 44% - Today: 0%
Vector Calculus & PDEs: 40% - Today: 0%
Dynamical systems: 36% - Today: 0%
Probability & Statistics: 16% - Today: 0%
Linear Algebra 3/3: 30% - Today: 0%
Information Theory: 6% - Today: 0%
Applied Calculus with Python: 32% - Today: 0%
HF LLM training book: 0% - Today: 0%
-------
✅Calculus 1: 100%
✅Calculus 2: 100%
✅Ordinary Differential Equations: 100%
✅Linear Algebra 1/3: Systems and Matrix Equations 100%
✅Linear Algebra 2/3: Matrix Algebra, Determinants, & Eigenvectors 100%
✅Introduction to probability: 100%