You know what, this is the most popular question in any machine learning / data science discussion. Newbies (and some specialists as well) tend to overcomplicate the relationship between this trio.
I will keep the boring formal definitions of AI/ML for last. Let us start with an exciting character named Deep Blue (built by IBM), which became famous in 1997 when it beat the chess champion Garry Kasparov. For each 3-minute move, Deep Blue could analyze about 50 billion positions and make a decision based on pre-programmed software rules. This was an example of AI without ML. Two decades later, the spotlight shifted to Seoul in 2016, where an even more interesting character named AlphaGo, created by Google DeepMind, defeated the Go world champion Lee Sedol. This was an example of AI with ML. No rules were pre-programmed into AlphaGo! Not even AlphaGo's development team could pinpoint exactly which final set of rules AlphaGo uses to make its moves, or why!
AI without ML - Humans provide the rules to the machines.
AI with ML - Humans provide only the data; the machines learn the rules themselves.
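To make that distinction concrete, here is a minimal sketch (a toy spam filter with made-up messages, nothing to do with Deep Blue or AlphaGo) contrasting a hand-written rule with a scikit-learn model that learns its own rule from labelled data:

```python
# AI without ML: a human writes the rule explicitly.
def is_spam_rule_based(message: str) -> bool:
    return "win a prize" in message.lower()

# AI with ML: humans provide only labelled data; the model learns the rule itself.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy, made-up training data for illustration only.
messages = ["Win a prize now!!!", "Meeting moved to 3pm",
            "Claim your free prize", "Lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)                   # the "rules" are learned from the data
print(model.predict(["free prize inside"]))   # most likely predicts [1], i.e. spam
```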
You may have seen the familiar circular diagrams where ML is displayed as a subsection of AI and DL is shown as a subdivision of ML.

Well, the fact is that in recent times ML has taken over almost all of the AI space, and there is very little non-ML AI development happening. So you can say that most of the AI systems today run using ML.
AI began in 1953 when Claude Shannon at Bell Labs hired two assistants named Marvin Minsky and John McCarthy, setting in motion a chain of events that was to have wide-ranging implications for humankind. They had a common interest in a quaint scientific field of those times called 'thinking machines'. Turing had, a couple of years earlier, proposed his now-famous Turing test - ‘a computer can be said to be intelligent if a human judge can't tell whether he is interacting with a human or a machine’ - and it was a hot subject in those days. Anyway, in 1955 the three of them came up with an interesting proposal requesting a break from regular work for 8 weeks and funding for a 'series of brain-storming sessions' to discuss this new field, which they formally titled 'Artificial Intelligence'. While I am sure that in the modern day this kind of proposal would raise eyebrows, it did get approved (the workshop was held at Dartmouth in the summer of 1956), and the rest is history!
While John McCarthy gave the general
definition of AI as “the science & engineering of making intelligent
machines or machines that think the way humans think”, Arthur Samuel in 1959
defined Machine Learning (ML) as - “a field of study that gives computers the
ability to learn without being explicitly programmed”.
In those days, and for several decades afterward, ML was one of the (several) techniques by which AI (“making intelligent machines”) could be achieved. Starting in the late 90s, the face of AI changed as never before! The Internet era threw in an abundance of DATA. This fueled ML like never before because, as we discussed, in ML systems humans provide only the data; the machines learn the rules themselves. They don't need explicit programming.
So, most of the AI systems today are based on ML. As for what DL is, it is a subset of ML inspired by biological systems like the human brain, which uses multiple layers to progressively extract higher-level features from the raw input. For example, when we see something, data is passed from our eyes to the brain to be interpreted. The brain identifies the object through several layers of processing: the first layers identify edges and corners, subsequent layers extract higher-level features, and finally we recognize whole objects like digits, letters, or faces. The adjective "deep" in deep learning comes from the use of multiple layers in the network.
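If you are curious what those "multiple layers" look like in code, here is a minimal sketch in Keras; the MNIST digits dataset, layer sizes, and activations are illustrative choices of mine, not a prescription:

```python
import tensorflow as tf

# A small stack of layers: each layer transforms the output of the previous one.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # raw pixel input
    tf.keras.layers.Dense(128, activation="relu"),    # earlier layer: low-level patterns (think edges, strokes)
    tf.keras.layers.Dense(64, activation="relu"),     # deeper layer: higher-level combinations of those patterns
    tf.keras.layers.Dense(10, activation="softmax"),  # final layer: which digit (0-9) it is
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The network is given only images and labels; it learns the features itself.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
model.fit(x_train / 255.0, y_train, epochs=1)
```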
This is a crisp explanation meant for the layman, without going too deep.