J Math Imaging Vis
DOI 10.1007/s10851-010-0251-1

A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging

Antonin Chambolle · Thomas Pock

© Springer Science+Business Media, LLC 2010

A. Chambolle
CMAP, Ecole Polytechnique, CNRS, 91128 Palaiseau, France
e-mail: antonin.chambolle@cmap.polytechnique.fr

T. Pock
Institute for Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria
e-mail: pock@icg.tugraz.at

Abstract  In this paper we study a first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure. We prove convergence to a saddle-point with rate O(1/N) in finite dimensions for the complete class of problems. We further show accelerations of the proposed algorithm to yield improved rates on problems with some degree of smoothness. In particular we show that we can achieve O(1/N²) convergence on problems where the primal or the dual objective is uniformly convex, and we can show linear convergence, i.e. O(ω^N) for some ω ∈ (0,1), on smooth problems. The wide applicability of the proposed algorithm is demonstrated on several imaging problems such as image denoising, image deconvolution, image inpainting, motion estimation and multi-label image segmentation.

Keywords  Convex optimization · Dual approaches · Total variation · Inverse problems · Image reconstruction

1 Introduction

Variational methods have proven to be particularly useful for solving a number of ill-posed inverse imaging problems. They can be divided into two fundamentally different classes: convex and non-convex problems. The advantage of convex problems over non-convex problems is that a global optimum can be computed, in many cases with good precision and in a reasonable time, independently of the initialization. Hence, the quality of the solution depends solely on the accuracy of the model. On the other hand, non-convex problems often have the ability to model the image acquisition process more precisely, but have the drawback that the quality of the solution is more sensitive to the initialization and to the optimization algorithm.

Total variation minimization plays an important role in convex variational methods for imaging. The major advantage of the total variation is that it allows for sharp discontinuities in the solution. This is of vital interest for many imaging problems, since edges represent important features, e.g. object boundaries or motion boundaries. However, it is also well known that variational methods incorporating total variation regularization are difficult to minimize due to the non-smoothness of the total variation. The aim of this paper is therefore to provide a flexible algorithm which is particularly suitable for such non-smooth convex optimization problems in imaging.

In Sect. 2 we revisit a primal-dual algorithm proposed by Pock, Bischof, Cremers and Chambolle in [29] for minimizing a convex relaxation of the Mumford-Shah functional. In subsequent work [12], Esser et al. studied the same algorithm in a more general framework and established connections to other known algorithms.

We show in this paper that this algorithm has a convergence rate of O(1/N); more precisely, a partial primal-dual gap¹ decays like one over the total number of iterations. This rate equals the rates recently proven by Nesterov in [27] and more recently by Nemirovski in [23] on (almost) the same class of problems as in this paper. We further show in Sect. 5 that for certain problems, the theoretical rate of convergence can be further improved. In particular we show that the proposed algorithm can be modified to yield a rate of convergence O(1/N²) for problems which have some regularity in the primal or in the dual objective, and is linearly convergent (O(ω^N), ω < 1) for smooth problems.

¹ The partial primal-dual gap is a weaker measure than the commonly used primal-dual gap, but it can be applied to functions with unbounded domain.

The primal-dual algorithm proposed in this paper can be easily adapted to different problems, is easy to implement and can be effectively accelerated on parallel hardware such as graphics processing units (GPUs).
This is particularly appealing for imaging problems, where real-time applications play an important role. This is demonstrated in Sect. 6 on several variational problems such as deconvolution, zooming, inpainting, motion estimation and segmentation. We end the paper with a short discussion.

2 The General Problem

Let X, Y be two finite-dimensional real vector spaces² equipped with an inner product ⟨·,·⟩ and norm ‖·‖ = ⟨·,·⟩^(1/2). The map K : X → Y is a continuous linear operator with induced norm

    ‖K‖ = max { ‖Kx‖ : x ∈ X with ‖x‖ ≤ 1 }.                                (1)

The general problem we consider in this paper is the generic saddle-point problem

    min_{x∈X} max_{y∈Y} ⟨Kx, y⟩ + G(x) − F*(y),                             (2)

where G : X → [0,+∞] and F* : Y → [0,+∞] are proper, convex, lower-semicontinuous (l.s.c.) functions, F* being itself the convex conjugate of a convex l.s.c. function F. Let us observe that this saddle-point problem is a primal-dual formulation of the nonlinear primal problem

    min_{x∈X} F(Kx) + G(x),                                                 (3)

or of the corresponding dual problem

    max_{y∈Y} −(G*(−K*y) + F*(y)).                                          (4)

We refer to [32] for more details. We assume that these problems have at least one solution (x̂, ŷ) ∈ X × Y, which therefore satisfies

    Kx̂ ∈ ∂F*(ŷ),   −(K*ŷ) ∈ ∂G(x̂),                                         (5)

where ∂F* and ∂G are the subgradients of the convex functions F* and G. See again [32] for details.

² In fact, in most of the paper X and Y could be general real Hilbert spaces with infinite dimension; however, in that case, the assumption that K has finite norm is very restrictive. We will emphasize when the assumption of finite dimensionality is really needed.

Throughout the paper we will assume that F and G are "simple", in the sense that their resolvent operator, defined through

    x = (I + τ∂F)^{-1}(y) = arg min_x { ‖x − y‖² / (2τ) + F(x) },

has a closed-form representation.
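To make the notion of a "simple" function concrete, here is a minimal NumPy sketch (not part of the original paper; the specific choices of F and G are illustrative assumptions) that evaluates the resolvent in two cases where it has a closed form: F(x) = ‖x‖₁, whose resolvent is componentwise soft-thresholding, and a quadratic G(x) = (λ/2)‖x − g‖², whose resolvent follows from setting the gradient inside the arg min to zero.

```python
import numpy as np

def prox_l1(y, tau):
    """Resolvent x = (I + tau*dF)^{-1}(y) for the illustrative choice F(x) = ||x||_1.
    Minimizing ||x - y||^2/(2*tau) + ||x||_1 decouples per component and gives
    the soft-thresholding (shrinkage) formula."""
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

def prox_quadratic(y, tau, g, lam):
    """Resolvent for the illustrative quadratic G(x) = (lam/2)*||x - g||^2:
    setting the gradient of ||x - y||^2/(2*tau) + G(x) to zero yields
    x = (y + tau*lam*g) / (1 + tau*lam)."""
    return (y + tau * lam * g) / (1.0 + tau * lam)

# Tiny usage example on synthetic vectors.
rng = np.random.default_rng(0)
y = rng.standard_normal(6)
g = rng.standard_normal(6)
print(prox_l1(y, tau=0.3))
print(prox_quadratic(y, tau=0.3, g=g, lam=4.0))
```

In both of these examples the resolvent costs only one pass over the unknowns, which is the practical content of the word "simple" for large-scale imaging problems.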
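As a sanity check on the relation between the primal problem (3) and the dual problem (4), the short script below (again an illustrative sketch, not from the paper; it ignores the nonnegativity normalization of G and F* stated above) takes quadratic F and G, for which both problems are solvable in closed form, and verifies numerically that their optimal values coincide.

```python
import numpy as np

# Illustrative choices: F(z) = 1/2 ||z - b||^2 and G(x) = (lam/2) ||x||^2, so that
# G*(w) = ||w||^2 / (2*lam) and F*(y) = <y, b> + 1/2 ||y||^2.
rng = np.random.default_rng(1)
K = rng.standard_normal((4, 3))   # a generic linear operator K : X -> Y
b = rng.standard_normal(4)
lam = 0.7

# Primal (3): min_x F(Kx) + G(x) reduces to the normal equations (K^T K + lam I) x = K^T b.
x = np.linalg.solve(K.T @ K + lam * np.eye(3), K.T @ b)
primal = 0.5 * np.linalg.norm(K @ x - b) ** 2 + 0.5 * lam * np.linalg.norm(x) ** 2

# Dual (4): max_y -(G*(-K^T y) + F*(y)); its optimality condition gives y = -(K K^T/lam + I)^{-1} b.
y = -np.linalg.solve(K @ K.T / lam + np.eye(4), b)
dual = -(np.linalg.norm(-K.T @ y) ** 2 / (2 * lam) + y @ b + 0.5 * np.linalg.norm(y) ** 2)

print(primal, dual)   # the two optimal values agree up to rounding error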