Home page:

Brought to you by you:

Additional funding provided by Amplify Partners

For any early-stage ML entrepreneurs, Amplify would love to hear from you: 3blue1brown@amplifypartners.com

Full playlist:

Typo correction: At 14:45, the last index on the bias vector reads n, when it should in fact be k. Thanks to the sharp eyes that caught that!
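In case it helps to see why the index should be k, here is a minimal NumPy sketch of the layer computation from the video. The sizes n = 784 and k = 16 match the example network; the random values are just placeholders:

```python
import numpy as np

# Hypothetical sizes matching the video's example network:
# n = 784 input pixels feeding k = 16 neurons in the next layer.
n, k = 784, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((k, n))   # one row of weights per neuron: k x n
a0 = rng.random(n)                # activations of the previous layer: n entries
b = rng.standard_normal(k)        # one bias per neuron: k entries, not n

# Weighted sums plus biases, squished through a sigmoid.
a1 = 1.0 / (1.0 + np.exp(-(W @ a0 + b)))
print(a1.shape)  # (16,)
```

Since W @ a0 has k entries, the bias vector added to it must also have k entries, which is why the last bias index is k rather than n.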

For those who want to learn more, I highly recommend the book by Michael Nielsen introducing neural networks and deep learning:

There are two neat things about this book. First, it’s available for free, so consider joining me in making a donation Nielsen’s way if you get something out of it. And second, it’s centered around walking through some code and data which you can download yourself, and which covers the same example that I introduce in this video. Yay for active learning!

I also highly recommend Chris Olah’s blog:

For more videos, Welch Labs also has some great series on machine learning:

For those of you looking to go *even* deeper, check out the text “Deep Learning” by Goodfellow, Bengio, and Courville.

Also, the publication Distill is just utterly beautiful:

Lion photo by Kevin Pluck

——————

Animations largely made using manim, a scrappy open-source Python library.

If you want to check it out, I feel compelled to warn you that it’s not the most well-documented tool, and has many other quirks you might expect in a library someone wrote with only their own use in mind.

Music by Vincent Rubinetti.

Download the music on Bandcamp:

Stream the music on Spotify:

If you want to contribute translated subtitles or to help review those that have already been made by others and need approval, you can click the gear icon in the video and go to subtitles/cc, then “add subtitles/cc”. I really appreciate those who do this, as it helps make the lessons accessible to more people.

——————

3blue1brown is a channel about animating math, in all senses of the word animate. And you know the drill with YouTube, if you want to stay posted on new videos, subscribe, and click the bell to receive notifications (if you’re into that).

If you are new to this channel and want to see more, a good place to start is this playlist:

Various social media stuffs:

Website:

Twitter:

Patreon:

Facebook:

Reddit:

i just want to say THANK YOU SO MUCH for this video. this really helped my understanding of neural networks.

10:57 An alternative way of thinking about it is how certain that neuron is that that region of pixels has that specific shape, based on the weights assigned to each pixel there: if it's more certain, the number will be higher; if it's less certain, the number will be lower or negative.
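That intuition can be sketched numerically: the weighted sum is largest when the lit pixels line up with the positive weights. A toy example (the weight patch and pixel patterns below are made up for illustration):

```python
import numpy as np

# Hypothetical 3x3 weight patch that "looks for" a vertical stroke:
# positive weights down the middle, negative weights on either side.
weights = np.array([[-1.0, 2.0, -1.0],
                    [-1.0, 2.0, -1.0],
                    [-1.0, 2.0, -1.0]])

matching = np.array([[0.0, 1.0, 0.0],   # a vertical stroke of lit pixels
                     [0.0, 1.0, 0.0],
                     [0.0, 1.0, 0.0]])

uniform = np.ones((3, 3))               # a flat blob of lit pixels

# Weighted sums: high when the region matches the pattern, lower otherwise.
print(np.sum(weights * matching))  # 6.0
print(np.sum(weights * uniform))   # 0.0
```

The "certainty" the comment describes is exactly this score: a strong match with the weight pattern yields a high sum, while a mismatch yields a low or negative one.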

❤❤❤❤

Hi Pi, Pi ,Pi, Pi and AI!

Nice presentation; you are a very good orator. I never heard you breathe, do you even need air? 🙂 I was able to focus so well on the content and not the presentation.

Thanks for sharing this great video!

How do you determine the minimum number of layers needed so that a 28×28 matrix of real numbers can recognize digits from 0 to 9?

How do you determine the minimum number of neurons in each layer so that a 28×28 matrix of real numbers can recognize digits from 0 to 9?

Why do the Korean subtitles keep appearing and disappearing? ㅠㅠ

I like living under a rock.

@3blue1Brown At 14:38, isn't the bias matrix supposed to be [k x 1] instead of [n x 1]? I'm not sure if I'm right, but since there are k neurons in layer 1, I think the number of biases should also be k.

So inside one neuron, does each input have its own weight value, or is the same weight applied to every input of that neuron?

you are a God Gifted Teacher! please accept my respect master!!!

Random thought, imagine watching this when you have trypophobia

holy crap thank you so much for this video it helped so much

That's the best subscribe request I've ever seen at the end of a video: subscribe so the AI can take in positive data, on an AI video. Noice.

Can anyone help me understand the 9:50 explanation on weights depicting pixels?

Can’t believe they got Peter Gregory to narrate.

There is an error in the matrix indices at 16:00: the last column index in the first row is n, while in the other rows it's k. They should be consistent.

At 10:05 you say that adding the negative weights around the edge will increase the weighted sum, but surely, since the activation value is between 0 and 1, it will decrease the weighted sum and thus change which neurons are activated in the next layer?
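This worry can be checked with a toy example: activations sit between 0 and 1, so a pixel under a negative weight only drags the sum down when that pixel is actually lit. Around a genuine edge, the pixels under the negative weights are dark (activation near 0), so the sum stays high. All numbers below are made up:

```python
import numpy as np

# Hypothetical weights: positive on the edge pixels themselves,
# negative on the row just beneath them.
w_center = np.full(4, 1.0)    # weights on the edge row
w_below = np.full(4, -1.0)    # negative weights on the row beneath

edge_lit = np.ones(4)         # the edge row is lit
below_dark = np.zeros(4)      # row beneath is dark: a genuine edge
below_lit = np.ones(4)        # row beneath also lit: a filled region, not an edge

# Dark pixels (activation 0) contribute nothing, so the negative
# weights only lower the sum when there is no real edge.
print(w_center @ edge_lit + w_below @ below_dark)  # 4.0
print(w_center @ edge_lit + w_below @ below_lit)   # 0.0
```

So the negative weights don't suppress real edges; they suppress regions where the bright area bleeds past the edge.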

Very informative and interesting.

Typo: At 14:40 the vector of biases that is added should be [b0…bk] and not [b0…bn] because the size of the matrix that you get from the multiplication is [k+1] x 1 and not [n+1] x 1.

Hi, I have a doubt: what is the difference between the parameters a and w?

Can you tell me about the hidden layers, and what the number of hidden layers depends upon?

I think there was a mistake at 14:42. The matrix for the bias should go from b0 to bk, not bn.

Awesome video regardless.

At 00:56 you probably misspoke; it should be 0 to 9, not 0 to 10. Anyway, that doesn't change the concept though 😉

great~~~

If only he had better morals, he might actually accomplish something for humanity…….

Thank you for all these beautiful videos 🙌

Very beautifully explained neural networks !

8:50 Here's where I lost track and got confused.

Dude you are probably the most impressive teacher I have ever seen…congratulations you are a blessing

Great content!

I can't believe how good and easy this video is! Well done, very helpful.

Why do I need that weighted sum as a mathematical framework for activation functions, or for determining the activation value?

This man makes calculus seem like prealgebra

I wanted to learn about machine learning before, I’m really interested in AI. This is really helpful, thanks!

The astral plane is more than real; it's entirely artificial, connected to the universal matrix.

Great job !

Thanks a lot. Excellent video. Inspiring me for a proposal for my research director. Greetings from Popayan, Colombia.

Simple and clear.

this was shockingly similar to a projection matrix. didn't see that coming

I watched this video when it came out and understood like a third of it. 2 years of math classes later, and now I understand the first 14 minutes of it. Then he hit me with that linear algebra.

<3

Instructions unclear. I now have dyscalculia.