Adaptive Method for Neural Nets

Transcription:

Neural Nets: Something You Can Use and Something to Think About
Cris Koutsougeras

Outline: what neural nets are; what they are good for; pointers to some models and formulations; methods that you can use; fun and open issues.

What are they good for?
We have a system of interest that we treat as a black box. If I have a finite set of observations (samples of its input-output behavior), can I figure out what function the box performs? This covers system identification, prediction and forecasting, controls, and auto-generating a simulator: in short, dealing with cases where only samples of the function of interest are known.

Why neural nets?
Because the target function is unknown except through samples; or the target is known but very hard to describe in finite terms (e.g., there is no closed-form expression); or the target function is non-deterministic. Something to think about.

General Structure of a NN
Function approximation: the output is a composition of the various node functions. The output is parametric on the inputs; the weights and thresholds are the parameters.
Bottom line: nets are function approximators that perform functional compositions. For instance,

    Net output = f1(O2*W21 + O3*W31)
               = f1( f2(O4*W42 + O5*W52)*W21 + f3(O5*W53)*W31 )

The output is a complex function of the inputs, and the complexity comes from the deep nesting of the typical neuron functions:

    Y = f( f( f( ... f(x1, x2, ..., xn) ... ) ) )
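To make the nesting concrete, here is a minimal NumPy sketch (not from the talk; the layer sizes and the tanh node function are illustrative assumptions) of a net whose output is exactly such a composition:

    import numpy as np

    def f(z):
        # a typical smooth node function; tanh is an arbitrary choice
        return np.tanh(z)

    def net(x, W1, W2, w3):
        # the output is the nested composition f(f(f(x W1) W2) w3)
        return f(f(f(x @ W1) @ W2) @ w3)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 4))    # inputs x1..x4
    W1 = rng.normal(size=(4, 3))   # first hidden layer weights
    W2 = rng.normal(size=(3, 2))   # second hidden layer weights
    w3 = rng.normal(size=(2, 1))   # output weights
    print(net(x, W1, W2, w3))      # Y, one deeply nested function of the inputs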
The net's function is an elastic curve: by adjusting the weights (the curve's parameters), the curve is made to fit the samples. Adjusting the weights is the key issue.

How does it work?
We have a sample set S = {(x1, t1), (x2, t2), ..., (xn, tn)}.
We have the net producing yi = f(xi, W). We define a quality measure Q(W) that involves f and the targets ti, and we adjust W iteratively,

    W <- W - a * grad_W Q

until Q is optimized. A convenient Q is usually the mean squared error. Something you can use.
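As an illustration, here is a minimal NumPy sketch of this loop for a single linear node, where grad_W Q has a closed form; the synthetic data, the learning rate a, and the iteration count are assumptions made for the example:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 3))            # samples x_i
    t = X @ np.array([1.0, -2.0, 0.5])      # targets t_i (synthetic)

    W = np.zeros(3)
    a = 0.05                                # learning rate a
    for _ in range(500):
        y = X @ W                           # net output y_i = f(x_i, W)
        grad = 2 * X.T @ (y - t) / len(t)   # grad_W of the mean squared error Q
        W = W - a * grad                    # W <- W - a * grad_W Q
    print(W)                                # approaches [1, -2, 0.5]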
Nonlinear Regression
The quality function is the sum of the individual errors, so minimizing the error is like stretching the curve to fit the samples. Problem: how do we know that we are done?

Two difficulties arise: either there is not enough nonlinearity to fit, or we overfit; we need the minimal nonlinearity that can accomplish the fitting. Moreover, gradient descent can get stuck in local minima of the weight space.

Simulated Annealing
    W <- W - a(t) * grad_W Q

Turn the step size a into a function of time: start with very large values and gradually reduce it. Theorem: if a is reduced at a slow enough rate, the probability of landing at the global minimum asymptotically tends to 1. Something you can use.

Intuitively, by starting with lots of energy and reducing it slowly enough, the probe will eventually have enough energy to jump out of local minima but not out of the global one. If it remains long enough in that energy range, it gets trapped in the global minimum's area.
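Here is a rough one-dimensional sketch of the idea; the injected noise term, shrinking along with a(t), is a stand-in for the "energy" that lets the probe jump out of local minima, and the toy Q is an arbitrary assumption:

    import numpy as np

    def Q(w):
        # a toy quality function with local minima next to the global one
        return np.sin(5 * w) + w**2

    def dQ(w):
        return 5 * np.cos(5 * w) + 2 * w

    rng = np.random.default_rng(2)
    w = 2.0
    for k in range(1, 2001):
        a = 1.0 / k                        # slowly decreasing step size a(t)
        noise = rng.normal() * np.sqrt(a)  # random kicks that shrink over time
        w = w - a * dQ(w) + noise          # W <- W - a(t) * grad Q, plus noise
    print(w, Q(w))                         # typically lands near the global minimum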
Let's have some fun: what network structure do we need? In particular, how many nodes?

With inputs Xij feeding a hidden layer whose outputs are Vrj, each output node computes Yj = F(sum_r Wrj * Vrj), so sum_r Wrj * Vrj = F^-1(Yj): a system of linear equations in the output weights.
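A small NumPy illustration of why this matters: if the targets are realizable by the output layer, inverting F turns output-layer training into plain linear least squares; F = tanh and the random hidden outputs V are assumptions made for the example:

    import numpy as np

    rng = np.random.default_rng(3)
    V = rng.normal(size=(20, 5))       # hidden-layer outputs V_r for 20 samples
    W_true = rng.normal(size=5)        # output weights to be recovered
    Y = np.tanh(V @ W_true)            # realizable targets Y_j = F(sum_r W_r V_rj)

    # F^-1(Y_j) = sum_r W_r V_rj is linear in W: solve by least squares
    W, *_ = np.linalg.lstsq(V, np.arctanh(Y), rcond=None)
    print(np.allclose(W, W_true))      # True: the weights are recovered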
Our Framework: A New Class of Training Algorithms
We conclude that after proper training, by any method, all intermediate normalized vectors Y project to the same point in the direction of W. Thus all the Y's are aligned on a plane that is perpendicular to W. This yields a new class of algorithms: find weights for the hidden layer that align all the Y's on a plane; the W for the output layer is the normal to that plane.
One such algorithm: let di be the distance of Yi from that plane, which is parametric on all the weights. Use as the quality function Q = sum_i di^2 and perform gradient descent. Something you can use.
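A sketch of such a quality function alone (the plane offset b and the name alignment_Q are hypothetical); the descent over all weights would then proceed as in the earlier training loop:

    import numpy as np

    def alignment_Q(Y, W, b):
        # d_i = signed distance of each normalized hidden vector Y_i
        # to the plane {y : W . y = b}; Q is the sum of squared distances
        W = W / np.linalg.norm(W)
        d = Y @ W - b
        return np.sum(d**2)

    rng = np.random.default_rng(4)
    Y = rng.normal(size=(10, 5))                   # hidden vectors Y_i
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # normalized
    print(alignment_Q(Y, rng.normal(size=5), 0.5)) # Q = 0 iff all Y_i are aligned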
Open Questions
What is the minimum number of neurons? What is the minimum nontrivial rank that the system can assume? This determines the number of neurons in the intermediate layer.

Interesting results: the local activation functions must be nonlinear for the hidden layer but not for the output layer; we thus arrive at the same result as Kolmogorov's. The solvability of the system Wj * Yi = 1 proves universal approximation with only one hidden layer necessary. The minimum nontrivial rank of the matrix [Yi] provides the number of hidden-layer neurons necessary for proper fitting. Problem: the matrix is parametric, and we have no effective method for computing its lowest nontrivial rank. We came up with other characterizations based on the Vapnik-Chervonenkis dimension and PAC learning; however, the problem of a precise optimum number for the hidden layer is by and large still open. Something to think about.
Clustering Models (pattern recognition, classification)
Neuron functions represent discriminant functions that can be used to construct borders among clusters.

Linear neurons (thresholding):

    output 1 if F = w1*x1 + w2*x2 + ... + wn*xn >= T
    output 0 if F = w1*x1 + w2*x2 + ... + wn*xn <  T

Radial basis:

    output 1 if (w1-x1)^2 + (w2-x2)^2 + ... + (wn-xn)^2 <= R^2
    output 0 if (w1-x1)^2 + (w2-x2)^2 + ... + (wn-xn)^2 >  R^2