The project describes the teaching process of a multi-layer neural network employing the backpropagation algorithm. To illustrate this process, a three-layer neural network with two inputs and one output, shown in the picture below, is used.

Each neuron is composed of two units. The first unit adds the products of the weight coefficients and the input signals. The second unit realises a nonlinear function, called the neuron activation function. Signal e is the adder output signal, and y = f(e) is the output signal of the nonlinear element. Signal y is also the output signal of the neuron.

To teach the neural network we need a training data set. The training data set consists of input signals (x1 and x2) assigned with the corresponding target (desired output) z. Network training is an iterative process. In each iteration the weight coefficients of the nodes are modified using new data from the training data set. The modification is calculated using the algorithm described below.

Each teaching step starts with forcing both input signals from the training set. After this stage we can determine the output signal values for each neuron in each network layer. The pictures below illustrate how the signal propagates through the network. Symbols w(xm)n represent the weights of the connections between network input xm and neuron n in the input layer. Symbols yn represent the output signal of neuron n.

Propagation of signals through the hidden layer: symbols wmn represent the weights of the connections between the output of neuron m and the input of neuron n in the next layer.

Propagation of signals through the output layer: in the next algorithm step, the output signal of the network y is compared with the desired output value (the target), which is found in the training data set. The difference is called the error signal of the output-layer neuron.

It is impossible to compute the error signal for internal neurons directly, because the output values of these neurons are unknown. For many years an effective method for training multi-layer networks was unknown. Only in the mid-eighties was the backpropagation algorithm worked out. The idea is to propagate the error signal (computed in a single teaching step) back to all neurons whose output signals were inputs for the neuron in question. The weight coefficients wmn used to propagate the errors back are the same as those used during computing the output value; only the direction of data flow is changed (signals are propagated from outputs to inputs, one after the other). This technique is used for all network layers. If the propagated errors come from several neurons, they are added. The illustration is below.

When the error signal for each neuron has been computed, the weight coefficients of each neuron's input node may be modified. In the formulas below, df(e)/de represents the derivative of the activation function of the neuron whose weights are modified. For example, by the chain rule:

df1(e)/dw(x1)1 = [df1(e)/de] * [de/dw(x1)1] = [df1(e)/de] * x1

The learning coefficient η affects the network teaching speed. There are a few techniques for selecting this parameter. The first method is to start the teaching process with a large value of the parameter; while the weight coefficients are being established, the parameter is gradually decreased. The second, more complicated, method starts teaching with a small parameter value; during the teaching process the parameter is increased as the teaching advances, and then decreased again in the final stage. Starting the teaching process with a low parameter value makes it possible to determine the signs of the weight coefficients.

References

Ryszard Tadeusiewicz, Sieci neuronowe (Neural Networks), Kraków 1992.
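The teaching procedure described above (forward signal propagation, backward propagation of the error through the same weights wmn, and the weight update scaled by the learning coefficient) can be sketched in Python. This is a minimal illustrative sketch, not the project's own code: the 2-2-1 layer sizes, the sigmoid activation, the bias inputs (an extra input fixed at 1, a detail the text does not discuss), the XOR training set, and the value of η are all assumptions made for the example.

```python
import math
import random

random.seed(1)

def f(e):
    """Neuron activation function: the logistic sigmoid y = f(e)."""
    return 1.0 / (1.0 + math.exp(-e))

def df_de(y):
    """Derivative df(e)/de of the sigmoid, written in terms of y = f(e)."""
    return y * (1.0 - y)

# Hypothetical 2-2-1 network; each weight list carries one extra bias weight.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

eta = 0.5  # learning coefficient controlling teaching speed (assumed value)

# Training set: input signals (x1, x2) with the desired output z (XOR task).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    """First stage: propagate the input signals through each layer."""
    xb = list(x) + [1]  # append the bias input
    y_hidden = [f(sum(w * s for w, s in zip(wn, xb))) for wn in w_hidden]
    hb = y_hidden + [1]
    y = f(sum(w * s for w, s in zip(w_out, hb)))
    return y_hidden, y

def mse():
    """Mean squared difference between network output y and target z."""
    return sum((z - forward(x)[1]) ** 2 for x, z in data) / len(data)

mse_before = mse()

for _ in range(5000):  # iterative teaching process
    for x, z in data:
        y_hidden, y = forward(x)
        delta = z - y  # error signal of the output-layer neuron
        # Propagate the error back through the same weights w_mn.
        delta_hidden = [w_out[n] * delta for n in range(2)]
        # Weight update: w' = w + eta * delta * df(e)/de * input signal.
        hb = y_hidden + [1]
        for n in range(3):
            w_out[n] += eta * delta * df_de(y) * hb[n]
        xb = list(x) + [1]
        for n in range(2):
            for m in range(3):
                w_hidden[n][m] += eta * delta_hidden[n] * df_de(y_hidden[n]) * xb[m]

mse_after = mse()
```

After training, mse_after should be smaller than mse_before: each update moves the weights against the error signal, which is exactly the behaviour the algorithm above describes. A decaying η schedule, as mentioned in the text, could be added by shrinking eta between epochs.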