Introduction to Machine Learning


Introduction to machine learning
by Quentin de Laroussilhe - @Underflow404

Machine learning

A machine learning algorithm is an algorithm learning to accomplish a task by observing data.
● Used on complex tasks where it is hard to develop algorithms with handcrafted rules
● Exploits patterns in observed data and extracts rules automatically

Fields of application

● Computer vision
● Speech recognition
● Financial analysis
● Search engines
● Ads targeting
● Content suggestion
● Self-driving cars
● Assistants
● etc.

Example: object detection

Big variation in visual features:
● Shape
● Background
● Size / position
Classifying an object in a picture is not an easy task.

● Learn from an annotated corpus of examples (a dataset) to classify unknown images among different object types
● Observe images to learn patterns
● Lots of data available (i.e. the ImageNet dataset)
● Very good error rates (CNNs)

General concepts

Types of ML algorithms

Supervised: learn a function by observing examples containing the input and the expected output.
● Classification
● Regression

Unsupervised: find underlying relations in the data by observing the raw data only (without the expected output).
● Clustering
● Dimensionality reduction

Training set

Classification vs regression

Regression: learn a function mapping an input element to a real value.
i.e.: predict the temperature of tomorrow given some meteorological signals.

Classification: learn a function mapping an input element to a class (within a finite set of possible classes).
i.e.: predict the weather of tomorrow, {sunny, cloudy, rainy}, given some meteorological signals.

(Figure: regression vs. classification examples)

Clustering

A clustering algorithm separates the observed data points into similar groups (clusters). We do not know the labels during training.

(Figure: three clusters: Cluster 1, Cluster 2, Cluster 3)

Reinforcement learning

Learn the optimal behavior for an agent in an environment to maximize a given goal.
Examples:
● Drive a car on a road and minimize the collision risk
● Play video games
● Choose the position of ads on a website to maximize the number of clicks

Feature extraction

The first step in a machine learning process is to extract useful values from the data (called features). The goal is to extract the information useful for the task we want to learn.
Examples:
● Stock market time series → [opening price, closing price, lowest, highest]
● Image → image with edges filtered
● Document → bag-of-words

Modeling process

k-nearest neighbors

● Classification and regression model
● Supervised learning: we have annotated examples
● We classify a new example based on the labels of its "nearest neighbors"
● k is the number of neighbors taken into consideration

To classify a point, we look at the k nearest neighbors (here k = 5) and we do a majority vote. A point with 3 red neighbors and 2 blue neighbors will be classified as red.

● N data points
● Requires a distance function between points
● Regression: average the values of the k nearest neighbors
● Classification: majority vote of the k nearest neighbors

k-nearest neighbors: effect of k

● If k = 1:
○ The accuracy on the training set is 100%
○ It might not generalize to new data
● If k > 1:
○ The accuracy on the training set might not be 100%
○ It might generalize better on unseen data

k-nearest neighbors: weighted version

In the case of an unbalanced repartition between classes, we can give weights to the examples.
● The weight of an underrepresented class will be set high.
● The weight of an overrepresented class will be set low.
When we do the majority vote, we take the weights into consideration:
● For classification we do a weighted vote.
● For regression we do a weighted average.

Decision trees, random forests

Decision tree

● Decision trees partition the feature space by splitting the data
● Learning the decision tree consists in finding the order and the split criterion for each node
● Decision tree learning is parametrized by the method for choosing the splits and by the maximum height
● If the maximum height is big enough, all the examples of the training data will be correctly classified: overfitting.

Decision tree: entropy metric

● S: the dataset before the split
● X: the set of existing classes
● p(x): the proportion of elements in class x relative to the number of elements in S
● A: the split criterion
● T: the datasets created by the split

Entropy:
H(S) = − Σ_{x ∈ X} p(x) log₂ p(x)

Information gain of a split A:
IG(A, S) = H(S) − Σ_{t ∈ T} (|t| / |S|) H(t)

At each step we create the node by splitting with the criterion that has the highest information gain.

Random forests

● When the depth of a decision tree grows, the error on validation data tends to increase a lot: high variance
● One way to exploit a lot of data is to train multiple decision trees and average them

Algorithm:
● Select N points in the training data and k features (usually sqrt(p))
● Learn a new decision tree
● Stop when we have enough trees

Clustering: k-means

● Clustering algorithm
● Requires a distance function between points
● k is the number of clusters the algorithm will find

Objective: divide the dataset into k sets by minimizing the within-cluster sum of squares (the sum of the distances of each point of a cluster to the center of that cluster):

argmin_S Σ_{i=1}^{k} Σ_{x ∈ S_i} ‖x − μ_i‖²

where S = {S_1, …, S_k} are the sets we are learning and μ_i is the mean of set S_i.

Gradient descent

1. Define a model depending on W (the parameters of the model)
2. Define a loss function that quantifies the error the model makes on the training data
3. Iteratively update W by following the negative gradient of the loss, and stop when:
○ convergence
○ the model is good enough
○ you spent all your money

Linear regression

Linear regression: introduction
In supervised learning, we have examples of l…
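The k-NN classification described above can be sketched in a few lines of plain Python. `knn_classify` is a hypothetical helper name, and Euclidean distance is an assumption (the slides only require "a distance function between points"):

```python
import math
from collections import Counter

def knn_classify(train, query, k=5):
    """Classify `query` by a majority vote among its k nearest training points.

    `train` is a list of (point, label) pairs; points are tuples of floats.
    Euclidean distance is assumed here.
    """
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Sort the training points by distance to the query and keep the k closest.
    neighbors = sorted(train, key=lambda pl: dist(pl[0], query))[:k]
    # Majority vote over the neighbors' labels.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Mirrors the slide's example: 3 red neighbors and 2 blue neighbors -> "red".
train = [((0.0, 0.0), "red"), ((0.1, 0.0), "red"), ((0.0, 0.1), "red"),
         ((1.0, 1.0), "blue"), ((1.1, 1.0), "blue")]
print(knn_classify(train, (0.2, 0.2), k=5))  # -> red
```

For the weighted version from the slides, the `Counter` vote would accumulate per-class weights instead of raw counts.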
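The entropy metric and information gain used to choose decision-tree splits can be computed directly from the definitions above. A minimal sketch, assuming the usual ID3-style weighting of each branch by its share of the examples (`entropy` and `information_gain` are illustrative names):

```python
import math
from collections import Counter

def entropy(labels):
    """H(S) = -sum over classes x of p(x) * log2 p(x)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, split):
    """IG(A, S) = H(S) - sum over branches t of (|t| / |S|) * H(t).

    `split` gives, for each example, the branch the criterion A sends it to.
    """
    n = len(labels)
    branches = {}
    for lab, branch in zip(labels, split):
        branches.setdefault(branch, []).append(lab)
    return entropy(labels) - sum(len(t) / n * entropy(t) for t in branches.values())

labels = ["a", "a", "b", "b"]
print(entropy(labels))                          # 1.0: two perfectly mixed classes
print(information_gain(labels, [0, 0, 1, 1]))   # 1.0: this split separates them fully
```

A split that leaves both branches as mixed as the original set would score an information gain of 0, so the tree-building step picks the criterion maximizing this value.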
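The k-means objective (minimizing the within-cluster sum of squares) is typically optimized with Lloyd's algorithm: assign each point to its nearest center, recompute each center as the mean of its cluster, repeat. A toy sketch; initializing centers with the first k points is a simplification (random initialization is more common):

```python
def kmeans(points, k, iters=20):
    """Lloyd's algorithm for k-means on tuples of floats."""
    centers = [points[i] for i in range(k)]  # naive initialization
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two well-separated groups of three points each.
points = [(0.0, 0.0), (0.2, 0.0), (0.0, 0.2),
          (5.0, 5.0), (5.2, 5.0), (5.0, 5.2)]
centers, clusters = kmeans(points, k=2)
```

On this data the algorithm converges in a couple of iterations to one center near the origin and one near (5, 5).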
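The gradient-descent loop (define a model on parameters W, define a loss, follow the negative gradient until a stopping condition) can be illustrated on the simplest case, a one-variable linear regression fit by mean squared error. `fit_line`, the learning rate, and the step count are illustrative choices, not from the slides:

```python
def fit_line(xs, ys, lr=0.05, steps=2000):
    """Gradient descent on the MSE loss of the model y ≈ w*x + b.

    W = (w, b) are the model parameters; the gradients below are the
    analytic derivatives of Loss = (1/n) * sum((w*x + b - y)**2).
    """
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        grad_w = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        grad_b = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
        # Step in the direction that decreases the loss.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data drawn from y = 2x + 1; gradient descent should recover w ≈ 2, b ≈ 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
w, b = fit_line(xs, ys)
```

Here the loop simply runs a fixed number of steps; in practice one of the stopping conditions from the slides (convergence, "good enough", or an exhausted budget) ends the loop.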
