Machine Learning - Exam - finals13


CS 189 Spring 2013 Introduction to Machine Learning Final

• You have 3 hours for the exam.
• The exam is closed book, closed notes except your one-page (two sides) or two-page (one side) crib sheet.
• Please use non-programmable calculators only.
• Mark your answers ON THE EXAM ITSELF. If you are not sure of your answer you may wish to provide a brief explanation. All short answer sections can be successfully answered in a few sentences AT MOST.
• For true/false questions, fill in the True/False bubble.
• For multiple-choice questions, fill in the bubbles for ALL CORRECT CHOICES (in some cases, there may be more than one). For a question with p points and k choices, every false positive will incur a penalty of p/(k − 1) points.
• For short answer questions, unnecessarily long explanations and extraneous data will be penalized. Please try to be terse and precise and do the side calculations on the scratch papers provided.
• Please draw a bounding box around your answer in the Short Answers section. A missed answer without a bounding box will not be regraded.

First name _______ Last name _______ SID _______

For staff use only:
Q1. True/False /23
Q2. Multiple Choice Questions /36
Q3. Short Answers /26
Total /85

Q1. [23 pts] True/False

(a) [1 pt] Solving a nonlinear separation problem with a hard margin kernelized SVM (Gaussian RBF kernel) might lead to overfitting. True / False
(b) [1 pt] In SVMs, the sum of the Lagrange multipliers corresponding to the positive examples is equal to the sum of the Lagrange multipliers corresponding to the negative examples. True / False
(c) [1 pt] SVMs directly give us the posterior probabilities P(y = 1 | x) and P(y = −1 | x). True / False
(d) [1 pt] V(X) = E[X]² − E[X²]. True / False
(e) [1 pt] In the discriminative approach to solving classification problems, we model the conditional probability of the labels given the observations. True / False
(f) [1 pt] In a two-class classification problem, a point x* on the Bayes optimal decision boundary always satisfies P(y = 1 | x*) = P(y = 0 | x*). True / False
(g) [1 pt] Any linear combination of the components of a multivariate Gaussian is a univariate Gaussian. True / False
(h) [1 pt] For any two random variables X ∼ N(μ₁, σ₁²) and Y ∼ N(μ₂, σ₂²), X + Y ∼ N(μ₁ + μ₂, σ₁² + σ₂²). True / False
(i) [1 pt] Stanford and Berkeley students are trying to solve the same logistic regression problem for a dataset. The Stanford group claims that their initialization point will lead to a much better optimum than Berkeley's initialization point. Stanford is correct. True / False
(j) [1 pt] In logistic regression, we model the odds ratio p/(1 − p) as a linear function. True / False
(k) [1 pt] Random forests can be used to classify infinite dimensional data. True / False
(l) [1 pt] In boosting we start with a Gaussian weight distribution over the training samples. True / False
(m) [1 pt] In AdaBoost, the error of each hypothesis is calculated by the ratio of misclassified examples to the total number of examples. True / False
(n) [1 pt] When k = 1 and N → ∞, the kNN classification error rate is bounded above by twice the Bayes error rate. True / False
(o) [1 pt] A single layer neural network with a sigmoid activation for binary classification with the cross entropy loss is exactly equivalent to logistic regression. True / False
(p) [1 pt] The loss function for LeNet5 (the convolutional neural network by LeCun et al.) is convex. True / False
(q) [1 pt] Convolution is a linear operation, i.e., (αf₁ + βf₂) ∗ g = αf₁ ∗ g + βf₂ ∗ g. True / False
(r) [1 pt] The k-means algorithm does coordinate descent on a non-convex objective function. True / False
(s) [1 pt] A 1-NN classifier has higher variance than a 3-NN classifier. True / False
(t) [1 pt] The single link agglomerative clustering algorithm groups two clusters on the basis of the maximum distance between points in the two clusters. True / False
(u) [1 pt] The largest eigenvector of the covariance matrix is the direction of minimum variance in the data. True / False
(v) [1 pt] The eigenvectors of AAᵀ and AᵀA are the same. True / False
(w) [1 pt] The non-zero eigenvalues of AAᵀ and AᵀA are the same. True / False
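A quick numerical check of items (v) and (w): for a rectangular matrix the nonzero eigenvalues of AAᵀ and AᵀA coincide (both are the squared singular values of A), while the eigenvectors live in spaces of different dimensions and so cannot be the same. Below is a minimal NumPy sketch; the 2×3 example matrix is an arbitrary choice of mine, not from the exam.

```python
import numpy as np

# Arbitrary rectangular matrix: A A^T is 2x2 while A^T A is 3x3.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Both products are symmetric, so eigvalsh (for symmetric matrices) applies.
eig_aat = np.linalg.eigvalsh(A @ A.T)
eig_ata = np.linalg.eigvalsh(A.T @ A)

print(np.round(eig_aat, 6))  # two nonzero eigenvalues
print(np.round(eig_ata, 6))  # the same two nonzero eigenvalues, plus a zero
```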
Q2. [36 pts] Multiple Choice Questions

(a) [4 pts] In linear regression, we model P(y | x) ∼ N(wᵀx + w₀, σ²). The irreducible error in this model is ____.
○ σ²
○ E[(y − E[y | x])² | x]
○ E[(y − E[y | x]) | x]
○ E[y | x]

(b) [4 pts] Let S₁ and S₂ be the sets of support vectors and w₁ and w₂ be the learnt weight vectors for a linearly separable problem using hard and soft margin linear SVMs respectively. Which of the following are correct?
○ S₁ ⊆ S₂
○ w₁ = w₂
○ S₁ may not be a subset of S₂
○ w₁ may not be equal to w₂

(c) [4 pts] Ordinary least-squares regression is equivalent to assuming that each data point is generated according to a linear function of the input plus zero-mean, constant-variance Gaussian noise. In many systems, however, the noise variance is itself a positive linear function of the input (which is assumed to be non-negative, i.e., x ≥ 0). Which of the following families of probability models correctly describes this situation in the univariate case?
○ P(y | x) = 1/(σ√(2πx)) · exp(−(y − (w₀ + w₁x))² / (2xσ²))
○ P(y | x) = 1/(σ√(2π)) · exp(−(y − (w₀ + w₁x))² / (2σ²))
○ P(y | x) = 1/(σ√(2πx)) · exp(−(y − (w₀ + (w₁ + σ²)x))² / (2σ²))
○ P(y | x) = 1/(xσ√(2π)) · exp(−(y − (w₀ + w₁x))² / (2x²σ²))

(d) [3 pts] The left singular vectors of a matrix A can be found in ____.
○ Eigenvectors of AAᵀ
○ Eigenvectors of AᵀA
○ Eigenvectors of A²
○ Eigenvalues of AAᵀ

(e) [3 pts] Averaging the output of multiple decision trees helps ____.
○ Increase bias
○ Decrease bias
○ Increase variance
○ Decrease variance

(f) [4 pts] Let A be a symmetric matrix, S the matrix containing its eigenvectors as column vectors, and D a diagonal matrix containing the corresponding eigenvalues on the diagonal. Which of the following are true:
○ AS = SD
○ AS = DS
○ SA = DS
○ AS = DSᵀ

(g) [4 pts] Consider the following dataset: A = (0, 2), B = (0, 1) and C = (1, 0). The k-means algorithm is initialized with centers at A and B. Upon convergence, the two centers will be at ____.
○ A and C
○ A and the midpoint of BC
○ C and the midpoint of AB
○ A and B

(h) [3 pts] Which of the following loss functions are convex?
○ Misclassification …
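Question (g) is small enough to verify with a short simulation. Below is a minimal sketch of Lloyd's k-means in plain NumPy (the variable names and loop structure are mine, not the exam's), run on the three points with the stated initialization:

```python
import numpy as np

# Data points and initial centers from Q2(g).
X = np.array([[0.0, 2.0],    # A
              [0.0, 1.0],    # B
              [1.0, 0.0]])   # C
centers = np.array([[0.0, 2.0],    # center initialized at A
                    [0.0, 1.0]])   # center initialized at B

while True:
    # Assignment step: each point joins its nearest center.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: each center moves to the mean of its assigned points.
    new_centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])
    if np.allclose(new_centers, centers):  # no center moved: converged
        break
    centers = new_centers

print(centers)  # [[0.  2. ] [0.5 0.5]] -> A and the midpoint of BC
```

The first assignment sends C to the center at B (distance √2, versus √5 to A), that center then moves to the midpoint of BC at (0.5, 0.5), and the next pass changes no assignments, matching the choice "A and the midpoint of BC".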
