
Activation Functions & Non-Linearity: Neural Networks 101 | by Egor Howell | Oct, 2023



Explaining why neural networks can learn (almost) anything and everything

Towards Data Science
Photo by Google DeepMind: https://www.pexels.com/picture/an-artist-s-illustration-of-artificial-intelligence-ai-this-image-was-inspired-by-neural-networks-used-in-deep-learning-it-was-created-by-novoto-studio-as-part-of-the-visualising-ai-pr-17483874/

In my previous article, we introduced the multi-layer perceptron (MLP), which is simply a set of stacked, interconnected perceptrons. I highly recommend you check out my previous post if you're unfamiliar with the perceptron and MLP, as we will discuss them quite a bit in this article:

An example MLP with two hidden layers is shown below:

A basic multi-layer perceptron with two hidden layers. Diagram by author.

However, the problem with the MLP is that it can only fit a linear classifier. This is because the individual perceptrons have a step function as their activation function, which is linear:

The perceptron, the simplest neural network. Diagram by author.

So, despite stacking our perceptrons to look like a modern-day neural network, the model is still a linear classifier and not that much different from ordinary linear regression!
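To make this concrete, here is a minimal sketch (in NumPy; the layer sizes and random weights are illustrative choices, not values from the article) showing that two stacked layers with no non-linear activation collapse into a single equivalent linear layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "stacked" layers with no non-linear activation between them.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)  # layer 1: R^3 -> R^4
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)  # layer 2: R^4 -> R^2

x = rng.normal(size=3)
stacked = W2 @ (W1 @ x + b1) + b2

# The equivalent single layer: W = W2 W1, b = W2 b1 + b2.
W, b = W2 @ W1, W2 @ b1 + b2
single = W @ x + b

print(np.allclose(stacked, single))  # True: the stack is still one linear map
```

No matter how many such layers we stack, the composition is always just another linear map, so the decision boundary stays linear.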

Another problem is that the step function isn't fully differentiable over its entire domain, which is an issue for gradient-based training.

So, what can we do about it?

Non-Linear Activation Functions!
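As a quick illustration of what such functions look like, here is a minimal sketch (in NumPy; sigmoid, tanh, and ReLU are common examples chosen for illustration, not a list taken from the article):

```python
import numpy as np

# Three widely used non-linear activation functions.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)

x = np.linspace(-3.0, 3.0, 7)
print(sigmoid(x))  # smoothly squashes inputs into (0, 1)
print(tanh(x))     # smoothly squashes inputs into (-1, 1)
print(relu(x))     # clips negatives to 0, keeps positives unchanged
```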

What’s Linearity?

Let’s quickly state what linearity means to build some context. Mathematically, a function is considered linear if it satisfies the following condition, known as additivity:

f(x + y) = f(x) + f(y)

There is also another condition, known as homogeneity:

f(ax) = a f(x)

However, we will work with the first equation, additivity, for this demonstration.

Take a very simple case, the linear function f(x) = 10x. Checking the additivity condition with x = 2 and y = 3:

f(2 + 3) = f(5) = 50

f(2) + f(3) = 20 + 30 = 50

Both sides agree, so the condition holds.
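We can also verify this numerically and contrast it with a non-linear function such as ReLU (a quick illustrative check in NumPy, using the f(x) = 10x example above):

```python
import numpy as np

f_linear = lambda x: 10 * x          # the linear f(x) = 10x example above
f_relu = lambda x: np.maximum(0, x)  # ReLU, a non-linear activation

x, y = 2.0, -3.0

print(f_linear(x + y), f_linear(x) + f_linear(y))  # -10.0 -10.0: additivity holds
print(f_relu(x + y), f_relu(x) + f_relu(y))        # 0.0 2.0: additivity fails
```

ReLU fails the additivity condition, and that failure is exactly what makes functions like it useful as non-linear activations.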