CS 189 Spring 2015 Introduction to Machine Learning Final

• You have 2 hours 50 minutes for the exam.
• The exam is closed book, closed notes except your one-page (two-sided) cheat sheet.
• No calculators or electronic items.
• Mark your answers ON THE EXAM ITSELF. If you are not sure of your answer you may wish to provide a brief explanation and state your assumptions.
• For true/false questions, fill in the True/False bubble.
• For multiple-choice questions, fill in the bubble for EXACTLY ONE choice that represents the best answer to the question.

First name: ____________
Last name: ____________
SID: ____________
First and last name of student to your left: ____________
First and last name of student to your right: ____________

For staff use only:
  Q1. True or False                         /44
  Q2. Multiple Choice                       /33
  Q3. Decision Theory                       /9
  Q4. Parameter Estimation                  /8
  Q5. Locally Weighted Logistic Regression  /14
  Q6. Decision Trees                        /7
  Q7. Convolutional Neural Nets             /11
  Q8. Streaming k-means                     /9
  Q9. Low Dimensional Decompositions        /15
  Total                                     /150

Q1. [44 pts] True or False

(a) [2 pts] A neural network with multiple hidden layers and sigmoid nodes can form non-linear decision boundaries.  ○ True  ○ False
(b) [2 pts] All neural networks compute non-convex functions of their parameters.  ○ True  ○ False
(c) [2 pts] For logistic regression, with parameters optimized using a stochastic gradient method, setting parameters to 0 is an acceptable initialization.  ○ True  ○ False
(d) [2 pts] For arbitrary neural networks, with weights optimized using a stochastic gradient method, setting weights to 0 is an acceptable initialization.  ○ True  ○ False
(e) [2 pts] Given a design matrix X ∈ ℝ^(n×d), where d ≤ n, if we project our data onto a k-dimensional subspace using PCA where k equals the rank of X, we recreate a perfect representation of our data with no loss.  ○ True  ○ False
(f) [2 pts] Hierarchical clustering methods require a predefined number of clusters, much like k-means.  ○ True  ○ False
(g) [2 pts] Given a predefined number of clusters k, globally minimizing the k-means objective function is NP-hard.  ○ True  ○ False
(h) [2 pts] Using cross validation to select hyperparameters will guarantee that our model does not overfit.  ○ True  ○ False
(i) [2 pts] A random forest is an ensemble learning method that attempts to lower the bias error of decision trees.  ○ True  ○ False
(j) [2 pts] Bagging algorithms attach weights w1 ... wn to a set of N weak learners. They re-weight the learners and convert them into strong ones. Boosting algorithms draw N sample distributions (usually with replacement) from an original dataset for learners to train on.  ○ True  ○ False
(k) [2 pts] Given any matrix X, its singular values are the eigenvalues of XXᵀ and XᵀX.  ○ True  ○ False
(l) [2 pts] Given any matrix X, (XᵀX + λI)⁻¹ for λ ≠ 0 always exists.  ○ True  ○ False
(m) [2 pts] Backpropagation is motivated by utilizing the Chain Rule and Dynamic Programming to conserve mathematical calculations.  ○ True  ○ False
(n) [2 pts] An infinite-depth binary Decision Tree can always achieve 100% training accuracy, provided that no point is mislabeled in the training set.  ○ True  ○ False
(o) [2 pts] In One vs All Multi-Class Classification in SVM, we are trying to classify an input data point X as one of the N classes (C1 ... Cn), each of which has a parameter vector w1 ... wn. We classify point X as the class Ci which maximizes the inner product of X and wi.  ○ True  ○ False
(p) [2 pts] The number of parameters in a parametric model is fixed, while the number of parameters in a non-parametric model grows with the amount of training data.  ○ True  ○ False
(q) [2 pts] As model complexity increases, bias will decrease while variance will increase.  ○ True  ○ False
(r) [2 pts] Consider a cancer diagnosis classification problem where almost all of the people being diagnosed don't have cancer. The probability of correct classification is the most important metric to optimize.  ○ True  ○ False
(s) [2 pts] For the 1-Nearest Neighbors algorithm, as the number of data points in our dataset increases to infinity, the error of our algorithm is guaranteed to be bounded by twice the Bayes Risk.  ○ True  ○ False
(t) [2 pts] Increasing the dimensionality of our data always decreases our misclassification rate.  ○ True  ○ False
(u) [2 pts] It is possible to represent a XOR function with a neural network without a hidden layer.  ○ True  ○ False
(v) [2 pts] At high dimensionality, the KD-tree speedup to the nearest neighbor can be slower than the naive nearest neighbor implementation.  ○ True  ○ False
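Item (e) above is easy to check numerically. The following is a minimal NumPy sketch (the synthetic data, sizes, and variable names are illustrative, not from the exam) showing that projecting centered data onto k = rank(X) principal components reconstructs it exactly:

    import numpy as np

    # Q1(e): with k = rank(X), PCA reconstructs the centered data exactly.
    # Synthetic low-rank data; sizes and names are illustrative.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 20))  # n=100, d=20, rank <= 5
    Xc = X - X.mean(axis=0)                                   # center before PCA

    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)         # rows of Vt = principal directions
    k = np.linalg.matrix_rank(Xc)
    Z = Xc @ Vt[:k].T                                         # project onto the top k components
    X_hat = Z @ Vt[:k]                                        # map back to d dimensions

    print(np.allclose(Xc, X_hat))                             # True: lossless at k = rank(X)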
Q2. [33 pts] Multiple Choice

(a) [3 pts] Given a Neural Net with N input nodes, no hidden layers, one output node, with Entropy Loss and Sigmoid Activation Functions, which of the following algorithms (with the proper hyper-parameters and initialization) can be used to find the global optimum?
○ Simulated Annealing (Gradient Descent with restarts)
○ Stochastic Gradient Descent
○ Mini-Batch Gradient Descent
○ Batch Gradient Descent
○ All of the above
○ None of the above

(b) [3 pts] Given the function f(x) = |x² + 3| − 1 defined on ℝ:
○ Newton's Method on minimizing gradients will always converge to the global optimum in one iteration from any starting location
○ Stochastic Gradient Descent will always converge to the global optimum in one iteration
○ The problem is nonconvex, so it is not feasible to find a solution.
○ All of the above
○ None of the above

(c) [3 pts] Daniel wants to minimize a convex loss function f(x) using stochastic gradient descent. Given a random starting point, mark the condition that would guarantee that stochastic gradient descent will converge to the global optimum. Let αt = the step size at iteration t, with αt > 0.
○ Constant step size αt
○ Decreasing step size αt = 1/√t
○ Decreasing step size αt = 1/t²
○ All of the above
○ None of the above

(d) [3 pts] Which of the following is true of logistic regression?
○ It can be motivated by log odds
○ The optimal weight vector can be found using MLE.
○ It can be used with L1 regularization
○ All of the above
○ None of the above

(e) [3 pts] You've just finished training a decision tree for spam classification, and it is getting abnormally bad performance on both your training and test sets. You know that y…
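Question (a) describes exactly logistic regression: with a sigmoid output unit and cross-entropy loss, the training loss is convex in the weights, so batch gradient descent with a suitable step size can reach the global optimum, and a zero initialization is fine (cf. Q1(c)). A minimal sketch on synthetic data (the data, step size, and iteration count are illustrative):

    import numpy as np

    # Q2(a)/(d): a sigmoid output with cross-entropy loss is logistic
    # regression; the loss is convex in w, so batch gradient descent with a
    # suitable step size reaches the global optimum.
    rng = np.random.default_rng(1)
    n, d = 200, 3
    X = rng.normal(size=(n, d))
    w_true = np.array([1.5, -2.0, 0.5])
    y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

    w = np.zeros(d)                      # zero init is acceptable here (cf. Q1(c))
    alpha = 0.1                          # constant step size (illustrative)
    for _ in range(2000):
        p = 1 / (1 + np.exp(-X @ w))     # sigmoid predictions
        w -= alpha * X.T @ (p - y) / n   # gradient of mean cross-entropy loss

    print(w)                             # approaches the MLE (near w_true)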

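For (b), assuming the reconstructed function f(x) = |x² + 3| − 1 (a best guess from the garbled source): since x² + 3 > 0 everywhere, f(x) = x² + 2 is a convex quadratic, so one Newton step on the gradient reaches the minimizer from any starting point. A quick check:

    # Q2(b), assuming f(x) = |x^2 + 3| - 1 = x^2 + 2 (a convex quadratic):
    # one Newton step on the gradient, x <- x - f'(x)/f''(x), hits the
    # minimizer x* = 0 from any starting point.
    def fprime(x):
        return 2.0 * x        # f'(x)

    def fsecond(x):
        return 2.0            # f''(x)

    for x0 in (-10.0, 0.5, 100.0):
        print(x0 - fprime(x0) / fsecond(x0))  # prints 0.0 each time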