Recently, after copying a production configuration over to an offline test machine, the Hadoop DataNode would not start and kept throwing this exception:
2014-03-11 10:38:44,238 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-1337291857-192.168.2.50-1394505472069 (storage id DS-1593966629-192.168.2.50-50010-1394505524173) service to search002050.sqa.cm4/192.168.2.50:9000
org.apache.hadoop.util.DiskChecker$DiskErrorException: Invalid volume failure  config value: 1
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:183)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:920)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:882)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:308)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:218)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:660)
        at java.lang.Thread.run(Thread.java:662)
The cause:
The parameter dfs.datanode.failed.volumes.tolerated was copied straight from the production configuration, where it is set to 1.
Its meaning: "The number of volumes that are allowed to fail before a datanode stops offering service. By default any volume failure will cause a datanode to shutdown." In other words, it is the number of failed disks a DataNode can tolerate.
In a Hadoop cluster, disks frequently become read-only or fail outright. On startup, the DataNode uses the directories configured under dfs.datanode.data.dir to store blocks; if the number of unusable directories exceeds the value configured above, the DataNode fails to start.
In the production environment, dfs.datanode.data.dir is configured with 10 disks, so dfs.datanode.failed.volumes.tolerated is set to 1, allowing one disk to be bad. The offline machine has only one disk, so volFailuresTolerated and volsConfigured are both 1, which makes the validity check in the code fail.
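Since the check requires dfs.datanode.failed.volumes.tolerated to be strictly less than the number of configured data directories, the fix on a single-disk machine is to set it back to the default of 0. A sketch of the relevant hdfs-site.xml fragment (the data directory path here is illustrative, not from the original setup):

```xml
<!-- hdfs-site.xml on the single-disk offline machine -->
<property>
  <!-- Only one data dir is configured, so no disk failure can be tolerated. -->
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>0</value>
</property>
<property>
  <!-- Illustrative path; use the machine's actual data directory. -->
  <name>dfs.datanode.data.dir</name>
  <value>/data/hadoop/dfs/data</value>
</property>
```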
See line 182 of FsDatasetImpl.java in the Hadoop source:
// The number of volumes required for operation is the total number
// of volumes minus the number of failed volumes we can tolerate.
final int volFailuresTolerated =
    conf.getInt(DFSConfigKeys.DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY,
                DFSConfigKeys.DFS_DATANODE_FAILED_VOLUMES_TOLERATED_DEFAULT);

String[] dataDirs = conf.getTrimmedStrings(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY);

int volsConfigured = (dataDirs == null) ? 0 : dataDirs.length;
int volsFailed = volsConfigured - storage.getNumStorageDirs();
this.validVolsRequired = volsConfigured - volFailuresTolerated;

if (volFailuresTolerated < 0 || volFailuresTolerated >= volsConfigured) {
  throw new DiskErrorException("Invalid volume failure "
      + " config value: " + volFailuresTolerated);
}
if (volsFailed > volFailuresTolerated) {
  throw new DiskErrorException("Too many failed volumes - "
      + "current valid volumes: " + storage.getNumStorageDirs()
      + ", volumes configured: " + volsConfigured
      + ", volumes failed: " + volsFailed
      + ", volume failures tolerated: " + volFailuresTolerated);
}
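The branch that fires here is the first throw: with one volume configured and tolerated set to 1, the condition volFailuresTolerated >= volsConfigured holds, so the DataNode aborts before it even looks at which disks are healthy. A minimal standalone sketch of that condition (the class and method names are ours, not Hadoop's):

```java
public class VolumeCheck {
    // Mirrors the validity check in FsDatasetImpl: the tolerated count must
    // be non-negative and strictly less than the number of configured volumes.
    static boolean isValidConfig(int volsConfigured, int volFailuresTolerated) {
        return volFailuresTolerated >= 0 && volFailuresTolerated < volsConfigured;
    }

    public static void main(String[] args) {
        // Offline box: one data dir, tolerated = 1 -> rejected at startup.
        System.out.println("1 volume,  tolerated 1: " + isValidConfig(1, 1));
        // Production box: ten data dirs, tolerated = 1 -> accepted.
        System.out.println("10 volumes, tolerated 1: " + isValidConfig(10, 1));
    }
}
```

Running it prints false for the offline configuration and true for the production one, which matches the "Invalid volume failure config value: 1" failure above.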
Original article: hadoop集群DataNode起不來(lái):“DiskChecker$DiskErrorExceptio — thanks to the original author for sharing.