HBase, BigTable, Hadoop, MapReduce, ZooKeeper, etc.: tools used by today's large-scale websites


大型網站所使用的工具Perlbal-多個網頁伺服器的負載平衡MogileFS-分散式檔案系統有公司認為MogileFS比起Hadoop適合拿來處理小檔案memcached-共享記憶體??把資料庫或其他需要經常讀取的部分,用記憶體快取(Cache)方式存放Moxi-Memcache的PROXYMoreResource:::::王耀聰陳威宇jazz@nchc.org.twwaue@nchc.org.tw教育訓練課程HBaseisadistributedcolumn-orienteddatabasebuiltontopofHDFS.HBaseis..Adistributeddatastorethatcanscalehorizontallyto1,000sofcommodityserversandpetabytesofindexedstorage.DesignedtooperateontopoftheHadoopdistributedfilesystem(HDFS)orKosmosFileSystem(KFS,akaCloudstore)forscalability,faulttolerance,andhighavailability.IntegratedintotheHadoopmap-reduceplatformandparadigm.BenefitsDistributedstorageTable-likeindatastructuremulti-dimensionalmapHighscalabilityHighavailabilityHighperformanceWhouseHBaseAdobe–內部使用(Structuredata)Kalooga–圖片搜尋引擎Meetup–社群聚會網站Streamy–成功從MySQL移轉到HbaseTrendMicro–雲端掃毒架構Yahoo!–儲存文件fingerprint避免重複More-StartedtowardbyChadWaltersandJim2006.11GooglereleasespaperonBigTable2007.2InitialHBaseprototypecreatedasHadoopcontrib.2007.10FirstuseableHBase2008.1HadoopbecomeApachetop-levelprojectandHBasebecomessubproject2008.10~HBase0.18,0.19releasedHBaseIsNot…Tableshaveoneprimaryindex,therowkey.Nojoinoperators.Scansandqueriescanselectasubsetofavailablecolumns,perhapsbyusingawildcard.Therearethreetypesoflookups:Fastlookupusingrowkeyandoptionaltimestamp.FulltablescanRangescanfromregionstarttoend.HBaseIsNot…(2)Limitedatomicityandtransactionsupport.HBasesupportsmultiplebatchedmutationsofsinglerowsonly.Dataisunstructuredanduntyped.NoaccessedormanipulatedviaSQL.ProgrammaticaccessviaJava,REST,orThriftAPIs.ScriptingviaJRuby.WhyBigtable?PerformanceofRDBMSsystemisgoodfortransactionprocessingbutforverylargescaleanalyticprocessing,thesolutionsarecommercial,expensive,andspecialized.VerylargescaleanalyticprocessingBigqueries–typicallyrangeortablescans.Bigdatabases(100sofTB)WhyBigtable?(2)MapreduceonBigtablewithoptionallyCascadingontoptosupportsomerelationalalgebrasmaybeacosteffectivesolution.ShardingisnotasolutiontoscaleopensourceRDBMSplatformsApplicationspecificLaborintensive(re)partitionaingWhyHBase?HBaseisaBigtableclone.ItisopensourceIthasagoodcommunityandpromiseforthefutureItisdevelopedontopofandhasgoodintegrationfortheHadoopplatform,ifyouareusingHadoopalready.IthasaCascadingconnector.HBasebenefitsthanRDBMSNorealindexesAutomaticpartitioningScalelinearlyandautomaticallywithnewnodesCommodityhardwareFaulttoleranceBatchprocessingDataModelTablesaresortedbyRowTableschemaonlydefineit’scolumnfamilies.EachfamilyconsistsofanynumberofcolumnsEachcolumnconsistsofanynumberofversionsColumnsonlyexistwheninserted,NULLsarefree.ColumnswithinafamilyaresortedandstoredtogetherEverythingexcepttablenamesarebyte[](Row,Family:Column,Timestamp)ValueRowkeyColumnFamilyvalueTimeStampMembersMasterResponsibleformonitoringregionserversLoadbalancingforregionsRedirectclienttocorrectregionserversThecurrentSPOFregionserverslavesServingrequests(Write/Read/Scan)ofClientSendHeartBeattoMasterThroughputandRegionnumbersarescalablebyregionserversRegions表格是由一或多個region所構成Region是由其startKey與endKey所指定每個region可能會存在於多個不同節點上,而且是由數個HDFS檔案與區塊所構成,這類region是由Hadoop負責複製實際個案討論–部落格邏輯資料模型一篇Blogentry由title,date,author,type,text欄位所組成。一位User由username,password等欄位所組成。每一篇的Blogentry可有許多Comments。每一則comment由title,author,與text組成。ERD部落格–HBaseTableSchemaRowkeytype(以2個字元的縮寫代表)與timestamp組合而成。因此rows會先後依type及timestamp排序好。方便用scan()來存取Table的資料。BLOGENTRY與COMMENT的”一對多”關係由comment_title,comment_author,comment_text等columnfamilies內的動態數量的column來表示每個Column的名稱是由每則comment的timestamp來表示,因

Members
- Master
  - responsible for monitoring the region servers
  - load balancing for regions
  - redirects clients to the correct region server
  - currently a single point of failure (SPOF)
- Region servers (slaves)
  - serve client requests (write/read/scan)
  - send heartbeats to the Master
  - throughput and the number of regions scale with the number of region servers

Regions
- A table is made up of one or more regions.
- A region is identified by its startKey and endKey.
- Each region may exist on several different nodes and is made up of several HDFS files and blocks, which are replicated by Hadoop.

Case study: a blog
Logical data model
- A blog entry consists of the fields title, date, author, type, and text.
- A user consists of fields such as username and password.
- Each blog entry can have many comments.
- Each comment consists of title, author, and text.
- ERD (entity-relationship diagram)

Blog: HBase table schema
- The row key combines the type (as a two-character abbreviation) with a timestamp, so rows are sorted first by type and then by timestamp, which makes it convenient to read the table with scan().
- The one-to-many relationship between BLOGENTRY and COMMENT is represented by a dynamic number of columns inside the comment_title, comment_author, and comment_text column families.
- The name of each such column is the timestamp of the corresponding comment, so the columns within each column family are automatically sorted by time.
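
A minimal sketch of this schema through the same 0.20-era Java API follows. Only the comment_title, comment_author, and comment_text families and the type-plus-timestamp row key come from the slides; the table name "blogentry", the extra "info" family for the entry's own fields, and the two-character prefix "bp" are illustrative assumptions.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class BlogSchemaExample {
    public static void main(String[] args) throws Exception {
        HBaseConfiguration conf = new HBaseConfiguration();

        // One column family per comment attribute, plus an assumed
        // "info" family for the blog entry's own fields.
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor desc = new HTableDescriptor("blogentry");
        desc.addFamily(new HColumnDescriptor("info"));
        desc.addFamily(new HColumnDescriptor("comment_title"));
        desc.addFamily(new HColumnDescriptor("comment_author"));
        desc.addFamily(new HColumnDescriptor("comment_text"));
        admin.createTable(desc);

        // Row key = 2-character type abbreviation + timestamp, so rows
        // sort by type first and timestamp second.
        byte[] rowKey = Bytes.toBytes("bp" + System.currentTimeMillis());

        HTable table = new HTable(conf, "blogentry");
        Put put = new Put(rowKey);
        put.add(Bytes.toBytes("info"), Bytes.toBytes("title"), Bytes.toBytes("Hello HBase"));

        // One comment: the column qualifier is the comment's timestamp,
        // so columns inside each family stay sorted by time.
        byte[] commentTs = Bytes.toBytes(Long.toString(System.currentTimeMillis()));
        put.add(Bytes.toBytes("comment_title"), commentTs, Bytes.toBytes("Nice post"));
        put.add(Bytes.toBytes("comment_author"), commentTs, Bytes.toBytes("alice"));
        put.add(Bytes.toBytes("comment_text"), commentTs, Bytes.toBytes("Thanks for sharing."));
        table.put(put);

        // Range scan over all rows of one type, exploiting the key ordering.
        Scan scan = new Scan(Bytes.toBytes("bp"), Bytes.toBytes("bq"));
        ResultScanner scanner = table.getScanner(scan);
        for (Result r : scanner) {
            System.out.println(Bytes.toString(r.getRow()));
        }
        scanner.close();
    }
}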

Architecture: ZooKeeper
- HBase depends on ZooKeeper (Chapter 13) and by default it manages a ZooKeeper instance as the authority on cluster state.

Operation
- The -ROOT- table holds the list of .META. table regions.
- The .META. table holds the list of all user-space regions.

Installation (1)
$ wget ...
$ sudo tar -zxvf *.tar.gz -C /opt/
$ sudo ln -sf /opt/hbase-0.20.3 /opt/hbase
$ sudo chown -R $USER:$USER /opt/hbase
$ sudo mkdir /var/hadoop/
$ sudo chmod 777 /var/hadoop
Then start Hadoop ...

Setup (1)
$ vim /opt/hbase/conf/hbase-env.sh
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HADOOP_CONF_DIR=/opt/hadoop/conf
export HBASE_HOME=/opt/hbase
export HBASE_