Seafile Pro edition log problem


#1

I installed Seafile Pro with Docker and it works well. But one day I noticed the logs folder had grown to 1 TB.
After cleaning it out, it grew back to 300 GB within two days.
What on earth is going on here? Any help appreciated.


#2

It's elasticsearch.log that is the problem; it grows at a visibly fast pace. Where can I disable it or cap its size?
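A quick way to confirm which file under the logs directory is responsible is to sort them by size. The path below is an assumption; point it at the logs directory your Seafile container actually mounts:

```shell
# List log files by size, largest last, to see which one is eating the disk.
# /opt/seafile/logs is an assumed mount point -- adjust to your setup.
du -h /opt/seafile/logs/* | sort -h
```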


#3

I'd suggest setting up a crontab job to clean it up periodically; right now there is no configuration option to turn this log off.
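The cron job suggested above could be sketched like this; the log path is an assumption, so adjust it to wherever your Docker volume maps the logs directory:

```shell
# Root crontab entry (install with `crontab -e`): empty elasticsearch.log
# every night at 03:00. The path /opt/seafile/logs/elasticsearch.log is an
# assumption -- use the host path your container's logs volume maps to.
0 3 * * * truncate -s 0 /opt/seafile/logs/elasticsearch.log
```

Truncating in place rather than deleting matters here: the Elasticsearch process keeps the file open, and deleting an open file does not free the space until the process closes it, whereas `truncate -s 0` frees it immediately.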


#4

You should read the log itself and see what is making it grow so fast.


#5

This is the Pro edition deployed with Docker. Here is an excerpt of the log, please take a look.

[2019-05-07 17:16:17,438][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
[2019-05-07 17:16:17,448][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: your kernel is buggy and you should upgrade
[2019-05-07 17:16:17,615][INFO ][node ] [Coldblood] version[2.4.5], pid[203], build[c849dd1/2017-04-24T16:18:17Z]
[2019-05-07 17:16:17,615][INFO ][node ] [Coldblood] initializing …
[2019-05-07 17:16:18,184][INFO ][plugins ] [Coldblood] modules [reindex, lang-expression, lang-groovy], plugins [analysis-ik], sites []
[2019-05-07 17:16:18,242][INFO ][env ] [Coldblood] using [1] data paths, mounts [[/shared (shfs)]], net usable_space [10.7tb], net total_space [14.5tb], spins? [possibly], types [fuse.shfs]
[2019-05-07 17:16:18,242][INFO ][env ] [Coldblood] heap size [989.8mb], compressed ordinary object pointers [true]
[2019-05-07 17:16:18,242][WARN ][env ] [Coldblood] max file descriptors [40960] for elasticsearch process likely too low, consider increasing to at least [65536]
[2019-05-07 17:16:19,453][INFO ][ik-analyzer ] try load config from /opt/seafile/seafile-pro-server-6.3.13/pro/elasticsearch/config/analysis-ik/IKAnalyzer.cfg.xml
[2019-05-07 17:16:19,761][INFO ][ik-analyzer ] [Dict Loading] custom/mydict.dic
[2019-05-07 17:16:19,762][INFO ][ik-analyzer ] [Dict Loading] custom/single_word_low_freq.dic
[2019-05-07 17:16:19,765][INFO ][ik-analyzer ] [Dict Loading] custom/ext_stopword.dic
[2019-05-07 17:16:20,111][INFO ][node ] [Coldblood] initialized
[2019-05-07 17:16:20,111][INFO ][node ] [Coldblood] starting …
[2019-05-07 17:16:20,114][INFO ][transport ] [Coldblood] publish_address {local[1]}, bound_addresses {local[1]}
[2019-05-07 17:16:20,116][INFO ][discovery ] [Coldblood] elasticsearch/v9ZK6BxKQgmSMJymTxz_-g
[2019-05-07 17:16:20,122][INFO ][cluster.service ] [Coldblood] new_master {Coldblood}{v9ZK6BxKQgmSMJymTxz_-g}{local}{local[1]}{local=true}, reason: local-disco-initial_connect(master)
[2019-05-07 17:16:20,213][INFO ][http ] [Coldblood] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2019-05-07 17:16:20,213][INFO ][node ] [Coldblood] started
[2019-05-07 17:16:20,415][INFO ][gateway ] [Coldblood] recovered [2] indices into cluster_state
[2019-05-07 17:16:20,684][WARN ][indices.cluster ] [Coldblood] [[repofiles][3]] marking and sending shard failed due to [failed recovery]
[repofiles][[repofiles][3]] IndexShardRecoveryException[failed to fetch index version after copying it over]; nested: IndexShardRecoveryException[shard allocated for local recovery (post api), should exist, but doesn't, current files: [write.lock, segments_e, segments_50, _1dt_Lucene50_0.tim, _1dt.nvm, _1dt_Lucene50_0.tip, _1dt.fnm, _1dt.fdx, _1dt.fdt, _1dt_Lucene50_0.doc, _1dt_Lucene54_0.dvd, _1dt.si, _1dt_Lucene50_0.pos, _1dt_Lucene54_0.dvm, _1dt.nvd]]; nested: NoSuchFileException[/shared/seafile/pro-data/search/data/elasticsearch/nodes/0/indices/repofiles/3/index/_6x.si];
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:224)
at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: [repofiles][[repofiles][3]] IndexShardRecoveryException[shard allocated for local recovery (post api), should exist, but doesn't, current files: [write.lock, segments_e, segments_50, _1dt_Lucene50_0.tim, _1dt.nvm, _1dt_Lucene50_0.tip, _1dt.fnm, _1dt.fdx, _1dt.fdt, _1dt_Lucene50_0.doc, _1dt_Lucene54_0.dvd, _1dt.si, _1dt_Lucene50_0.pos, _1dt_Lucene54_0.dvm, _1dt.nvd]]; nested: NoSuchFileException[/shared/seafile/pro-data/search/data/elasticsearch/nodes/0/indices/repofiles/3/index/_6x.si];
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:208)
… 5 more
Caused by: java.nio.file.NoSuchFileException: /shared/seafile/pro-data/search/data/elasticsearch/nodes/0/indices/repofiles/3/index/_6x.si
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:335)
at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:81)
at org.apache.lucene.store.FileSwitchDirectory.openInput(FileSwitchDirectory.java:186)
at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109)
at org.apache.lucene.codecs.lucene50.Lucene50SegmentInfoFormat.read(Lucene50SegmentInfoFormat.java:82)
at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:362)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:493)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:490)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:731)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:683)
at org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:490)
at org.elasticsearch.common.lucene.Lucene.readSegmentInfos(Lucene.java:95)
at org.elasticsearch.index.store.Store.readSegmentsInfo(Store.java:164)
at org.elasticsearch.index.store.Store.readLastCommittedSegmentsInfo(Store.java:149)
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:199)
… 5 more
[2019-05-07 17:16:20,689][WARN ][cluster.action.shard ] [Coldblood] [repofiles][3] received shard failed for target shard [[repofiles][3], node[v9ZK6BxKQgmSMJymTxz_-g], [P], v[187], s[INITIALIZING], a[id=qb0Xcx01TZ-dXoMdBI5WLg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2019-05-07T09:16:20.145Z]]], indexUUID [sbppLxssTICHYeFA6xv9Gg], message [failed recovery], failure [IndexShardRecoveryException[failed to fetch index version after copying it over]; nested: IndexShardRecoveryException[shard allocated for local recovery (post api), should exist, but doesn't, current files: [write.lock, segments_e, segments_50, _1dt_Lucene50_0.tim, _1dt.nvm, _1dt_Lucene50_0.tip, _1dt.fnm, _1dt.fdx, _1dt.fdt, _1dt_Lucene50_0.doc, _1dt_Lucene54_0.dvd, _1dt.si, _1dt_Lucene50_0.pos, _1dt_Lucene54_0.dvm, _1dt.nvd]]; nested: NoSuchFileException[/shared/seafile/pro-data/search/data/elasticsearch/nodes/0/indices/repofiles/3/index/_6x.si]; ]
[repofiles][[repofiles][3]] IndexShardRecoveryException[failed to fetch index version after copying it over]; nested: IndexShardRecoveryException[shard allocated for local recovery (post api), should exist, but doesn't, current files: [write.lock, segments_e, segments_50, _1dt_Lucene50_0.tim, _1dt.nvm, _1dt_Lucene50_0.tip, _1dt.fnm, _1dt.fdx, _1dt.fdt, _1dt_Lucene50_0.doc, _1dt_Lucene54_0.dvd, _1dt.si, _1dt_Lucene50_0.pos, _1dt_Lucene54_0.dvm, _1dt.nvd]]; nested: NoSuchFileException[/shared/seafile/pro-data/search/data/elasticsearch/nodes/0/indices/repofiles/3/index/_6x.si];
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:224)
at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: [repofiles][[repofiles][3]] IndexShardRecoveryException[shard allocated for local recovery (post api), should exist, but doesn't, current files: [write.lock, segments_e, segments_50, _1dt_Lucene50_0.tim, _1dt.nvm, _1dt_Lucene50_0.tip, _1dt.fnm, _1dt.fdx, _1dt.fdt, _1dt_Lucene50_0.doc, _1dt_Lucene54_0.dvd, _1dt.si, _1dt_Lucene50_0.pos, _1dt_Lucene54_0.dvm, _1dt.nvd]]; nested: NoSuchFileException[/shared/seafile/pro-data/search/data/elasticsearch/nodes/0/indices/repofiles/3/index/_6x.si];
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:208)
… 5 more
Caused by: java.nio.file.NoSuchFileException: /shared/seafile/pro-data/search/data/elasticsearch/nodes/0/indices/repofiles/3/index/_6x.si
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:335)
at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:81)
at org.apache.lucene.store.FileSwitchDirectory.openInput(FileSwitchDirectory.java:186)
at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109)
at org.apache.lucene.codecs.lucene50.Lucene50SegmentInfoFormat.read(Lucene50SegmentInfoFormat.java:82)
at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:362)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:493)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:490)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:731)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:683)
at org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:490)
at org.elasticsearch.common.lucene.Lucene.readSegmentInfos(Lucene.java:95)
at org.elasticsearch.index.store.Store.readSegmentsInfo(Store.java:164)
at org.elasticsearch.index.store.Store.readLastCommittedSegmentsInfo(Store.java:149)
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:199)
… 5 more


#6

You can try deleting the ES index data directory, and then give Docker enough memory. The log fills up because Elasticsearch keeps retrying, and failing, recovery of shard [repofiles][3], whose data file is missing; removing the index data lets it rebuild the search index from scratch.
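A sketch of that cleanup, where the data path is taken from the log above but the container name and exact layout are assumptions that depend on your deployment:

```shell
#!/bin/sh
# Stop Seafile, remove the Elasticsearch index data that the failed shard
# recovery loop keeps complaining about, then restart so the search index
# can be rebuilt from scratch.
ES_DATA_DIR="${ES_DATA_DIR:-/shared/seafile/pro-data/search/data}"  # path from the log above

docker stop seafile        # container name "seafile" is an assumption
rm -rf "$ES_DATA_DIR"
docker start seafile
```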