Learning Machines 101
By Richard M. Golden, Ph.D., M.S.E.E., B.S.E.E.
About this podcast
Smart machines based upon the principles of artificial intelligence and machine learning are now prevalent in our everyday life. For example, artificially intelligent systems recognize our voices, sort our pictures, make purchasing suggestions, and can automatically fly planes and drive cars. In this podcast series, we examine questions such as: How do these devices work? Where do they come from? And how can we make them even smarter and more human-like?
Podcasts like "Learning Machines 101"
AI exposed (Audio) - Channel 9
By Microsoft
ProHuddle, a Podcast about Enterprise IT, Software, Oracle, Databases and The Cloud
By Yalim K. Gerger
Impact in Five
By Impact in Five
LEVERTON Podcast
By LEVERTON Podcast
Chatty con Leche
By Chatty con Leche
Episodes (Total: 10)
Sept. 26, 2017 · 00:21:49
This 68th episode of Learning Machines 101 discusses a broad class of unsupervised, supervised, and reinforcement machine learning algorithms which iteratively update their parameter vector by adding a perturbation computed from all of the training data. This process is repeated until a parameter vector is generated which exhibits improved predictive performance. The magnitude of the perturbation at each learning iteration is called the “stepsize” or “learning rate”, and the direction of the perturbation vector is called the “search direction”. Simple mathematical formulas are presented, based upon research from the late 1960s by Philip Wolfe and G. Zoutendijk, that ensure convergence of the generated sequence of parameter vectors. These formulas may be used as the basis for designing artificially intelligent automatic learning rate selection algorithms. For more information, please visit the official website: www.learningmachines101.com
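As an illustration only (not code from the episode), here is a minimal sketch of a gradient-descent update whose stepsize is chosen by a backtracking line search that enforces a sufficient-decrease condition, one ingredient of the Wolfe-style convergence conditions mentioned above. The objective f, its gradient grad_f, and all constants are illustrative assumptions.

    import numpy as np

    def backtracking_step(f, grad_f, w, c=1e-4, rho=0.5, eta0=1.0):
        # Shrink the stepsize until the sufficient-decrease (Armijo) condition holds.
        g = grad_f(w)
        d = -g                     # search direction: steepest descent
        eta = eta0                 # initial stepsize ("learning rate")
        while f(w + eta * d) > f(w) + c * eta * g.dot(d):
            eta *= rho             # perturbation too large; shrink it
        return w + eta * d

    # Illustrative objective: mean squared error over all of the training data.
    X = np.random.randn(100, 3)
    y = X @ np.array([1.0, -2.0, 0.5])
    f = lambda w: np.mean((X @ w - y) ** 2)
    grad_f = lambda w: 2.0 * X.T @ (X @ w - y) / len(y)

    w = np.zeros(3)
    for _ in range(50):
        w = backtracking_step(f, grad_f, w)
    print(w)    # approaches [1.0, -2.0, 0.5]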
Aug. 21, 2017 · 00:25:40
In this episode we discuss how to learn to solve constraint satisfaction inference problems, where the goal of the inference process is to infer the most probable values of the unobservable variables. The probabilistic constraints themselves, however, can be learned from experience. Specifically, Expectation Maximization, an important machine learning method for handling unobservable components of the data, is introduced. Check it out at: www.learningmachines101.com
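A minimal sketch (an illustrative assumption, not code from the episode) of Expectation-Maximization for a mixture of two unit-variance Gaussians, where the identity of the component that generated each observation is the unobservable part of the data:

    import numpy as np

    def em_two_gaussians(x, iters=50):
        # EM for a mixture of two unit-variance Gaussians with unobserved component labels.
        mu = np.array([x.min(), x.max()])     # initial guesses for the two means
        pi = 0.5                              # mixing proportion of component 1
        for _ in range(iters):
            # E-step: probability that each point was generated by component 1.
            p1 = pi * np.exp(-0.5 * (x - mu[0]) ** 2)
            p2 = (1 - pi) * np.exp(-0.5 * (x - mu[1]) ** 2)
            r = p1 / (p1 + p2)
            # M-step: re-estimate the parameters from the "soft" guesses.
            mu[0] = np.sum(r * x) / np.sum(r)
            mu[1] = np.sum((1 - r) * x) / np.sum(1 - r)
            pi = r.mean()
        return mu, pi

    x = np.concatenate([np.random.normal(-2, 1, 200), np.random.normal(3, 1, 300)])
    print(em_two_gaussians(x))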
July 17, 2017 · 00:34:00
In this episode of Learning Machines 101 (www.learningmachines101.com) we discuss how to solve constraint satisfaction inference problems where knowledge is represented as a large unordered collection of complicated probabilistic constraints among a collection of variables. The goal of the inference process is to infer the most probable values of the unobservable variables given the observable variables. Specifically, Markov chain Monte Carlo (MCMC) methods are discussed.
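For illustration only (the target distribution and proposal are assumptions, not from the episode), a minimal Metropolis-style MCMC sampler that explores a distribution specified through an energy (negative log-probability) function:

    import numpy as np

    def metropolis(neg_log_p, x0, steps=5000, scale=0.5):
        # Propose a small random change; accept it with probability min(1, p(new)/p(old)).
        x, samples = x0, []
        for _ in range(steps):
            proposal = x + np.random.normal(scale=scale, size=x.shape)
            if np.log(np.random.rand()) < neg_log_p(x) - neg_log_p(proposal):
                x = proposal
            samples.append(x)
        return np.array(samples)

    # Illustrative energy function: a standard Gaussian over two variables.
    neg_log_p = lambda x: 0.5 * np.sum(x ** 2)
    samples = metropolis(neg_log_p, np.zeros(2))
    print(samples.mean(axis=0))   # should be close to zero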
June 19, 2017 · 00:30:00
In this episode rerun we introduce the concept of gradient descent, which is the fundamental principle underlying learning in the majority of deep learning and neural network learning algorithms. Check out the website www.learningmachines101.com to obtain a transcript of this episode!
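A bare-bones illustrative sketch of gradient descent on a one-dimensional quadratic (the function and learning rate are assumptions chosen for clarity, not taken from the episode):

    # Minimize f(w) = (w - 3)^2 by repeatedly stepping against the gradient.
    w = 0.0
    learning_rate = 0.1
    for _ in range(100):
        gradient = 2.0 * (w - 3.0)       # derivative of (w - 3)^2 with respect to w
        w = w - learning_rate * gradient
    print(w)                              # converges toward 3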
May 15, 2017 · 00:28:04
In this rerun of episode 24 we explore the concept of evolutionary learning machines: learning machines that reproduce themselves in the hope of evolving into smarter, more intelligent learning machines. This leads us to the topic of stochastic model search and evaluation. Check out the blog with additional technical references at: www.learningmachines101.com
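As a rough illustrative sketch of stochastic model search (an assumption, not the episode's algorithm), a simple (1+1) evolutionary search in which a parent parameter vector "reproduces" with random mutation and the fitter of parent and child survives:

    import numpy as np

    def evolve(fitness, dim=5, generations=200, mutation_scale=0.1):
        # (1+1) evolutionary search: mutate the parent, keep the fitter of the two.
        parent = np.random.randn(dim)
        for _ in range(generations):
            child = parent + mutation_scale * np.random.randn(dim)   # reproduction with mutation
            if fitness(child) > fitness(parent):                     # evaluation and selection
                parent = child
        return parent

    # Illustrative fitness: prefer parameter vectors close to an unknown target model.
    target = np.array([1.0, -1.0, 0.5, 0.0, 2.0])
    best = evolve(lambda w: -np.sum((w - target) ** 2))
    print(best)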
April 20, 2017 · 00:22:04
This 63rd episode of Learning Machines 101 explains how to build reinforcement learning machines whose behavior evolves as they become increasingly smarter, in contrast with machines which become smarter with experience but do not use this acquired knowledge to modify their actions and behaviors. The essential idea for the construction of such reinforcement learning machines is to first develop a supervised learning machine. The supervised learning machine then “guesses” the desired response and updates its parameters using its guess for the desired response! Although the reasoning seems circular, this approach is in fact a variation of the important and widely used machine learning method of Expectation-Maximization. Some applications to learning to play video games, controlling walking robots, and developing optimal trading strategies for the stock market are briefly mentioned as well. Check us out at: www.learningmachines101.com
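A minimal self-training sketch of the "guess the desired response" idea (the data, the model choice, and the use of scikit-learn are illustrative assumptions, not the episode's construction):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Start with a small labeled set; the remaining data has no observed desired response.
    X_labeled = np.random.randn(20, 2)
    y_labeled = (X_labeled[:, 0] > 0).astype(int)
    X_unlabeled = np.random.randn(200, 2)

    model = LogisticRegression().fit(X_labeled, y_labeled)
    for _ in range(5):
        # The learning machine "guesses" the desired responses for the unlabeled data...
        y_guess = model.predict(X_unlabeled)
        # ...and then updates its parameters as if those guesses were the true responses.
        model = LogisticRegression().fit(np.vstack([X_labeled, X_unlabeled]),
                                         np.concatenate([y_labeled, y_guess]))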
March 19, 2017 · 00:31:05
This 62nd episode of Learning Machines 101 (www.learningmachines101.com) discusses how to design reinforcement learning machines using your knowledge of how to build supervised learning machines! Specifically, we focus on Value Function Reinforcement Learning Machines, which estimate the unobservable total penalty associated with an episode when only the beginning of the episode is observable. This estimated value function can then be used by the learning machine to select a particular action in a given situation so as to minimize the total future penalties that will be received. Applications include building your own robot, automatic aircraft lander, automated stock market trading system, and self-driving car!
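A toy illustrative sketch (the environment and update rule are assumptions, not the episode's design) of learning an estimate of the total future penalty for each state-action pair and then selecting the action with the smallest estimated penalty:

    import numpy as np

    n_states, n_actions = 5, 2
    Q = np.zeros((n_states, n_actions))   # estimated total future penalty per (state, action)
    alpha, gamma, explore = 0.1, 0.9, 0.1

    def step(state, action):
        # Toy environment (an assumption): action 1 moves right; the last state ends the episode.
        next_state = min(state + action, n_states - 1)
        penalty = 0.0 if next_state == n_states - 1 else 1.0
        return next_state, penalty

    for _ in range(500):
        state = 0
        while state != n_states - 1:
            # Pick the action with the smallest estimated future penalty (with some exploration).
            action = np.random.randint(n_actions) if np.random.rand() < explore else int(np.argmin(Q[state]))
            next_state, penalty = step(state, action)
            # Move the estimate toward the observed penalty plus the estimated future penalty.
            target = penalty + gamma * np.min(Q[next_state])
            Q[state, action] += alpha * (target - Q[state, action])
            state = next_state
    print(Q)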
Feb. 23, 2017 · 00:29:15
This is the third in a short sequence of podcasts providing a summary of events associated with Dr. Golden’s recent visit to the 2015 Neural Information Processing Systems Conference, one of the top conferences in the field of Machine Learning. This episode reviews and discusses topics associated with the Introduction to Reinforcement Learning with Function Approximation Tutorial presented by Professor Richard Sutton on the first day of the conference. This episode is a RERUN of an episode originally presented in January 2016 and lays the groundwork for future episodes on the topic of reinforcement learning! Check out www.learningmachines101.com for more info!
Jan. 23, 2017 · 00:29:32
This 60th episode of Learning Machines 101 discusses how novelty detection or anomaly detection machine learning algorithms can be used to monitor the performance of other machine learning algorithms deployed in real-world environments. The episode is based upon a review of a talk by Chief Data Scientist Ira Cohen of Anodot presented at the 2016 Berlin Buzzwords Data Science Conference. Check out www.learningmachines101.com to hear the podcast or read a transcription of the podcast!
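An illustrative sketch (the monitored metric and threshold are assumptions, not from the talk) of a simple anomaly detector that flags days on which a deployed model's error rate deviates sharply from its recent rolling behavior:

    import numpy as np

    def flag_anomalies(metric, window=30, threshold=3.0):
        # Flag points that deviate sharply from the recent rolling behavior of the metric.
        flags = np.zeros(len(metric), dtype=bool)
        for t in range(window, len(metric)):
            recent = metric[t - window:t]
            z = (metric[t] - recent.mean()) / (recent.std() + 1e-8)
            flags[t] = abs(z) > threshold
        return flags

    # Illustrative monitored quantity: a deployed model's daily error rate with a sudden jump.
    error_rate = np.concatenate([np.random.normal(0.05, 0.005, 100),
                                 np.random.normal(0.20, 0.005, 10)])
    print(np.where(flag_anomalies(error_rate))[0])   # should flag the days after the jump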
Dec. 21, 2016 · 00:29:56
I discuss the concept of a “neural network” by providing some examples of recent successes in neural network machine learning algorithms and providing a historical perspective on the evolution of the neural network concept from its biological origins. For more details visit us at: www.learningmachines101.com  