The Difference Between SNNs and Deep Learning

Short answer: Strictly speaking, "deep" and "spiking" refer to two different aspects of a neural network: "spiking" refers to the activation of individual neurons, while "deep" refers to the overall network architecture. Thus in principle there is nothing contradictory about a spiking, deep neural network (in fact, the brain is arguably such a system). However, in practice the current approaches to deep learning and SNNs don't work well together. Specifically, deep learning as currently practiced typically relies on a differentiable activation function and thus doesn't handle discrete spike trains well.

Further details: Real neurons communicate via discrete spikes of voltage. When building hardware, spiking has some advantages in power consumption, and you can route spikes like data packets (Address Event Representation, or AER) to emulate the connectivity found in the brain. However, spiking is a noisy process; generally a single spike doesn't mean much, so it is common in software to abstract away the spiking details and model a single scalar spike rate. This simplifies a lot of things, especially if your goal is machine learning and not biological modeling.

The key idea of deep learning is to have multiple layers of neurons, with each layer learning increasingly complex features based on the previous layer. For example, in a vision setting, the lowest level learns simple patterns like lines and edges, the next layer may learn compositions of the lines and edges (corners and curves), the next layer may learn simple shapes, and so on up the hierarchy. Upper levels then learn complex categories (people, cats, cars) or even specific instances (your boss, your cat, the Batmobile). One advantage of this is that the lowest-level features are generic enough to apply to lots of situations, while the upper levels can get very specific.

The canonical way to train spiking networks is some form of spike-timing-dependent plasticity (STDP), which locally reinforces connections based on correlated activity. The canonical way to train a deep neural network is some form of gradient-descent back-propagation, which adjusts all weights based on the global behavior of the network. Gradient descent has problems with non-differentiable activation functions (like discrete stochastic spikes). If you don't care about learning, it should be easier to combine the approaches.
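The rate abstraction and the problematic hard threshold can be sketched with a toy leaky integrate-and-fire neuron. This is an illustrative sketch, not any particular library's model; all parameter names and values are made up for the example:

```python
def lif_rate(input_current, t_steps=1000, dt=1e-3,
             tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Firing rate (spikes/s) of a toy leaky integrate-and-fire neuron
    driven by a constant input current. All parameters are illustrative."""
    v = v_reset
    spikes = 0
    for _ in range(t_steps):
        # Leaky integration: dv/dt = (input_current - v) / tau
        v += dt * (input_current - v) / tau
        if v >= v_thresh:   # hard threshold: zero gradient almost everywhere,
            spikes += 1     # which is what breaks plain back-propagation
            v = v_reset     # emit a discrete spike and reset
    return spikes / (t_steps * dt)

# Many discrete spikes collapse into one scalar, the rate, and a
# stronger input drives a higher rate (rate coding).
low = lif_rate(1.5)
high = lif_rate(3.0)
```

The `if v >= v_thresh` branch is the crux: it is a step function of the membrane potential, so its derivative is zero almost everywhere and undefined at the threshold, which is why gradient descent has nothing useful to propagate through it.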
One could presumably take a pre-trained deep network and implement just the feed-forward part (no further learning) as a spiking neural net (perhaps to put it on a chip). The resulting chip would not learn from new data, but should implement whatever function the original network had been trained to do.
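That "convert, then run feed-forward only" idea can be sketched in a few lines: take a pre-trained ReLU network (the weights here are hypothetical stand-ins) and replace each unit with a non-leaky integrate-and-fire neuron whose spike count over a simulation window approximates the ReLU activation. This is a minimal sketch of rate-based conversion, not a production pipeline:

```python
import numpy as np

def relu_forward(x, weights):
    """Feed-forward pass of a pre-trained ReLU network (weights assumed
    given; biases omitted to keep the sketch short)."""
    for W in weights:
        x = np.maximum(0.0, x @ W)
    return x

def spiking_forward(x, weights, t_steps=4000, v_thresh=0.005):
    """Rate-based spiking approximation of the same feed-forward pass.
    Each unit is a non-leaky integrate-and-fire neuron; its spike count
    over the window, times the threshold, approximates ReLU(input)."""
    rates = np.asarray(x, dtype=float)   # inputs encoded as firing rates
    for W in weights:
        drive = rates @ W                # constant input current per neuron
        v = np.zeros_like(drive)
        counts = np.zeros_like(drive)
        for _ in range(t_steps):
            v += drive / t_steps         # integrate the input
            fired = v >= v_thresh        # discrete spike events
            counts += fired
            v[fired] -= v_thresh         # reset by subtraction
        rates = counts * v_thresh        # spike count -> approximate rate
    return rates
```

Running both functions on the same weights gives near-identical outputs (up to quantization from counting discrete spikes), which is the sense in which such a chip implements the trained function without doing any further learning.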
