1
iyiluo 131 days ago
You should first suspect that the disk is dying; that is far more likely than running into a filesystem fault.
2
barrysj 131 days ago
Agreed, a disk problem is the more likely cause.
Do you have monitoring for CPU iowait, disk read/write latency, and the like?
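For later readers without that monitoring in place: a minimal sketch of the kind of spot check suggested here, assuming sysstat's `iostat` is available. The function name and the 20 ms threshold are placeholders, not anything from this thread.

```shell
# flag_slow_io: reads "iostat -dxk"-style lines on stdin and prints devices
# whose await (column 10 on RHEL 7's sysstat, avg ms per request) exceeds
# a threshold in ms. Threshold defaults to a placeholder value of 20.
flag_slow_io() {
  awk -v limit="${1:-20}" '$1 ~ /^(sd|dm-|nvme|vd)/ && $10+0 > limit {
    print $1, "await=" $10 "ms"
  }'
}

# Intended use on the affected host (placeholder threshold):
#   iostat -dxk 1 | flag_slow_io 20
```

`%iowait` itself shows up in `iostat -c 1` or `sar -u 1`; sustained high `await` alongside low throughput is the classic failing-disk signature.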
3
zzlyzq 131 days ago
The disk may be faulty, rather than this being a filesystem problem.
4
zhoudaiyu OP
5
hefish 131 days ago
The OP already has an answer in mind.
6
liuchao719 131 days ago
Is there some requirement forcing you onto such an old kernel? Developers generally pay less attention to old versions, so I'd use a newer one whenever possible. On past projects, a lot of problems turned out to be caused by outdated versions and simply went away after an upgrade.
7
zhoudaiyu OP |
8
Hormazed 131 days ago
We also run Red Hat Enterprise Linux Server release 7.9 (Maipo). To avoid kernel problems we have been gradually upgrading the kernel to 6.6.8; about 20 machines so far.
Linux version 6.6.8 (root@VTW12NET) (gcc (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9), GNU ld version 2.27-44.base.el7_9.1) #1 SMP PREEMPT_DYNAMIC Thu Dec 20 12:01:06 CST 2023
In our experience there have been no problems, with an average of about 300 GB of I/O per week.
9
Hormazed 131 days ago
Resolution
Red Hat Enterprise Linux 7: this issue has been resolved with the errata RHSA-2020:1016 for the package(s) kernel-3.10.0-1127.el7 or later.
Red Hat Enterprise Linux 8: this issue has been resolved with the errata RHSA-2019:3517 for the package(s) kernel-4.18.0-147.el8 or later.
Root Cause
Ext4/jbd2 deadlock involving the jbd2 checkpoint thread and the jbd2 commit thread. Each thread is waiting for the other to move forward: the checkpoint thread acquires the j_checkpoint_mutex and waits for the commit thread to finish (j_wait_done_commit waitqueue), but the commit thread cannot progress because it is trying to acquire the same mutex (j_checkpoint_mutex) held by the checkpoint thread, leading to the deadlock.
The deadlock is resolved by upstream commit 53cf978457325d8fb2cdecd7981b31a8229e446e ("jbd2: fix deadlock while checkpoint thread waits commit thread to finish").
On Red Hat Enterprise Linux 7, the patch resolving this issue has been backported as part of a more general update in (private) BZ 1747387 - ext4 jbd2: stable update for 7.8.
On Red Hat Enterprise Linux 8, the patch has been backported as part of a more general update in (private) BZ 1698815 - [ext4][jbd2] Stable update for rhel8.1.
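For anyone checking their own hosts against the errata above: a small sketch that compares the running kernel release with the first RHEL 7 kernel said to carry the fix, using `sort -V` for version ordering. The function name is made up for illustration.

```shell
# kernel_has_fix: succeeds when the given kernel release string is at or
# above 3.10.0-1127.el7, the first RHEL 7 kernel shipping the jbd2
# deadlock fix per RHSA-2020:1016.
kernel_has_fix() {
  fixed="3.10.0-1127.el7"
  # sort -V orders version strings; the fix is present when "fixed" does
  # not sort strictly after the running release.
  [ "$(printf '%s\n%s\n' "$fixed" "$1" | sort -V | head -n1)" = "$fixed" ]
}

# On the affected host:
#   kernel_has_fix "$(uname -r)" && echo patched || echo "needs RHSA-2020:1016"
```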
10
Kumo31 131 days ago
There's too little information here to tell where the problem is. But don't put blind faith in the kernel: a kernel this old has plenty of bugs. Doing storage work, we've run into a whole basket of assorted kernel bugs.
11
JackSlowFcck 131 days ago
How about swapping in a different disk and seeing what happens?
12
zhoudaiyu OP @Hormazed #8 We don't quite dare to run the elrepo kernels on production systems, much as I'd like something newer.
@Hormazed #9 Our kernel is already newer than that one; it's 1160.
@JackSlowFcck #11 The turnaround is too long; the machines are colocated in another data center, and some of the workloads are currently single points of failure, so we have to keep running on this for now.
@Kumo31 #10 Most likely we'll be stuck on 3.10 indefinitely. We also have domestic ("xinchuang") machines on a 4.19 kernel; we reported bugs there too and the vendor never followed up.
13
lrvy 131 days ago
If it's RHEL, just go straight to support 😁
14
ruidoBlanco 131 days ago
You're on Red Hat with a kernel problem: the first move is to upgrade to the latest kernel for your release and see whether the problem persists, and the second is to contact Red Hat. How much can the community really help?
And even when asking the community, you post no logs at all and expect people to guess?
15
zhoudaiyu OP
16
julyclyde 131 days ago
Run a long self-test (-t long) to diagnose the disk.
I doubt it will come back clean.
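Assuming `-t long` above refers to smartmontools' `smartctl` long self-test, here is a hedged sketch; `/dev/sda` is a placeholder, and the attribute list is just the usual failure predictors, not anything confirmed for this host.

```shell
# Start a long SMART self-test and read the result once it completes
# (smartctl is from smartmontools; /dev/sda is a placeholder device):
#   smartctl -t long /dev/sda
#   smartctl -l selftest /dev/sda
#
# check_smart_attrs: scans "smartctl -A" output on stdin for attributes
# that commonly precede disk failure; prints any with a non-zero raw
# value and exits 0, otherwise exits 1.
check_smart_attrs() {
  awk '$2 ~ /^(Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable)$/ && $NF+0 > 0 {
         bad = 1; print $2 "=" $NF
       }
       END { exit !bad }'
}

# Intended use:  smartctl -A /dev/sda | check_smart_attrs
```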
17
msg7086 131 days ago
If you don't dare run the elrepo kernels, how about taking a look at the UEK kernel?
18
zhoudaiyu OP An excerpt of the kernel log from when it hung:
Aug 15 09:33:53 node16 kernel: INFO: task jbd2/dm-2-8:1839 blocked for more than 120 seconds.
Aug 15 09:33:53 node16 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 15 09:33:53 node16 kernel: jbd2/dm-2-8 D ffff8e7efea1acc0 0 1839 2 0x00000000
Aug 15 09:33:53 node16 kernel: Call Trace:
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8d2ba0>] ? task_rq_unlock+0x20/0x20
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef87169>] schedule+0x29/0x70
Aug 15 09:33:53 node16 kernel: [<ffffffffc118433c>] jbd2_journal_commit_transaction+0x23c/0x19c0 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8e111e>] ? account_entity_dequeue+0xae/0xd0
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8e48bc>] ? dequeue_entity+0x11c/0x5c0
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8e5ec1>] ? put_prev_entity+0x31/0x400
Aug 15 09:33:53 node16 kernel: [<ffffffff8e82b59e>] ? __switch_to+0xce/0x580
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c6d10>] ? wake_up_atomic_t+0x30/0x30
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef86c8f>] ? __schedule+0x3af/0x860
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8adf0e>] ? try_to_del_timer_sync+0x5e/0x90
Aug 15 09:33:53 node16 kernel: [<ffffffffc118af89>] kjournald2+0xc9/0x260 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c6d10>] ? wake_up_atomic_t+0x30/0x30
Aug 15 09:33:53 node16 kernel: [<ffffffffc118aec0>] ? commit_timeout+0x10/0x10 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c5c21>] kthread+0xd1/0xe0
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c5b50>] ? insert_kthread_work+0x40/0x40
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef93ddd>] ret_from_fork_nospec_begin+0x7/0x21
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c5b50>] ? insert_kthread_work+0x40/0x40
Aug 15 09:33:53 node16 kernel: INFO: task containerd:225811 blocked for more than 120 seconds.
Aug 15 09:33:53 node16 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 15 09:33:53 node16 kernel: containerd D ffff8e7efef1acc0 0 225811 1 0x00000080
Aug 15 09:33:53 node16 kernel: Call Trace:
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef87169>] schedule+0x29/0x70
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181085>] wait_transaction_locked+0x85/0xd0 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c6d10>] ? wake_up_atomic_t+0x30/0x30
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181378>] add_transaction_credits+0x278/0x310 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea2830b>] ? __kmalloc+0x1eb/0x230
Aug 15 09:33:53 node16 kernel: [<ffffffffc11dd8c4>] ? ext4_htree_store_dirent+0x34/0x120 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181601>] start_this_handle+0x1a1/0x430 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea287c2>] ? kmem_cache_alloc+0x1c2/0x1f0
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181ab3>] jbd2__journal_start+0xf3/0x1f0 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffffc11eb0ba>] ? ext4_dirty_inode+0x2a/0x60 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffffc1217759>] __ext4_journal_start_sb+0x69/0xe0 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffffc11eb0ba>] ext4_dirty_inode+0x2a/0x60 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea7f00d>] __mark_inode_dirty+0x15d/0x270
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea6b8e9>] update_time+0x89/0xd0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea6bdfa>] touch_atime+0x10a/0x220
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea63694>] iterate_dir+0xe4/0x130
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea63c8c>] SyS_getdents64+0x9c/0x120
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea63900>] ? fillonedir+0x110/0x110
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef93f92>] system_call_fastpath+0x25/0x2a
Aug 15 09:33:53 node16 kernel: INFO: task containerd:2700571 blocked for more than 120 seconds.
Aug 15 09:33:53 node16 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 15 09:33:53 node16 kernel: containerd D ffff8e3eff79acc0 0 2700571 1 0x00000080
Aug 15 09:33:53 node16 kernel: Call Trace:
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef87169>] schedule+0x29/0x70
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181085>] wait_transaction_locked+0x85/0xd0 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c6d10>] ? wake_up_atomic_t+0x30/0x30
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181378>] add_transaction_credits+0x278/0x310 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef86c8f>] ? __schedule+0x3af/0x860
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181601>] start_this_handle+0x1a1/0x430 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef87169>] ? schedule+0x29/0x70
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef84c51>] ? schedule_timeout+0x221/0x2d0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea287c2>] ? kmem_cache_alloc+0x1c2/0x1f0
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181ab3>] jbd2__journal_start+0xf3/0x1f0 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffffc11eb0ba>] ? ext4_dirty_inode+0x2a/0x60 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffffc1217759>] __ext4_journal_start_sb+0x69/0xe0 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffffc11eb0ba>] ext4_dirty_inode+0x2a/0x60 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea7f00d>] __mark_inode_dirty+0x15d/0x270
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea6b8e9>] update_time+0x89/0xd0
Aug 15 09:33:53 node16 kernel: [<ffffffff8eb8cfe4>] ? __radix_tree_lookup+0x84/0xf0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea6b9d0>] file_update_time+0xa0/0xf0
Aug 15 09:33:53 node16 kernel: [<ffffffff8e9c03d8>] __generic_file_aio_write+0x198/0x400
Aug 15 09:33:53 node16 kernel: [<ffffffff8e9c0699>] generic_file_aio_write+0x59/0xa0
Aug 15 09:33:53 node16 kernel: [<ffffffffc11de5c8>] ext4_file_write+0x348/0x600 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea010bc>] ? page_add_file_rmap+0x8c/0xc0
Aug 15 09:33:53 node16 kernel: [<ffffffff8e9f339e>] ? do_numa_page+0x1be/0x250
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea4d063>] do_sync_write+0x93/0xe0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea4db50>] vfs_write+0xc0/0x1f0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea4eaf2>] SyS_pwrite64+0x92/0xc0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef93f92>] system_call_fastpath+0x25/0x2a
Aug 15 09:33:53 node16 kernel: INFO: task dcgm-exporter:68381 blocked for more than 120 seconds.
Aug 15 09:33:53 node16 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 15 09:33:53 node16 kernel: dcgm-exporter D ffff8e3eff81acc0 0 68381 57193 0x00000080
Aug 15 09:33:53 node16 kernel: Call Trace:
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef87169>] schedule+0x29/0x70
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181085>] wait_transaction_locked+0x85/0xd0 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c6d10>] ? wake_up_atomic_t+0x30/0x30
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181378>] add_transaction_credits+0x278/0x310 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181601>] start_this_handle+0x1a1/0x430 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8e8a43>] ? load_balance+0x1a3/0xa10
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea287c2>] ? kmem_cache_alloc+0x1c2/0x1f0
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181ab3>] jbd2__journal_start+0xf3/0x1f0 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffffc11eb0ba>] ? ext4_dirty_inode+0x2a/0x60 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffffc1217759>] __ext4_journal_start_sb+0x69/0xe0 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffffc11eb0ba>] ext4_dirty_inode+0x2a/0x60 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea7f00d>] __mark_inode_dirty+0x15d/0x270
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea6b8e9>] update_time+0x89/0xd0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea6b9d0>] file_update_time+0xa0/0xf0
Aug 15 09:33:53 node16 kernel: [<ffffffff8e9c03d8>] __generic_file_aio_write+0x198/0x400
Aug 15 09:33:53 node16 kernel: [<ffffffff8e9c0699>] generic_file_aio_write+0x59/0xa0
Aug 15 09:33:53 node16 kernel: [<ffffffffc11de5c8>] ext4_file_write+0x348/0x600 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e9f339e>] ? do_numa_page+0x1be/0x250
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea4d063>] do_sync_write+0x93/0xe0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea4db50>] vfs_write+0xc0/0x1f0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea4e92f>] SyS_write+0x7f/0xf0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef93f92>] system_call_fastpath+0x25/0x2a
Aug 15 09:33:53 node16 kernel: INFO: task gcs_server:21032 blocked for more than 120 seconds.
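When tasks pile up behind jbd2 like this, it helps to list every task name the hung-task detector flagged, to see the full blast radius. A small sed sketch over kernel-log lines in the format above (the function name is made up):

```shell
# blocked_tasks: prints the unique names of tasks reported by the kernel's
# hung-task detector, given "INFO: task <name>:<pid> blocked ..." log
# lines on stdin.
blocked_tasks() {
  sed -n 's/.*INFO: task \(.*\):[0-9]\{1,\} blocked for more than.*/\1/p' | sort -u
}

# Intended use:  blocked_tasks < /var/log/messages
# To capture fresh stacks for all blocked tasks during the next hang:
#   echo w > /proc/sysrq-trigger   # dumps blocked-task stacks to the kernel log
```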
19
ruidoBlanco 131 days ago
@zhoudaiyu Upgrade the kernel. This doesn't look like something with a simple fix. Even if you manage to pin down the bug, the answer in the end will still be a kernel upgrade.
20
zhoudaiyu OP @ruidoBlanco #19 We're already on 1160, the latest kernel for Red Hat 7. Going any newer means elrepo or the UEK kernel mentioned above, and management probably won't support that upgrade.
22
blackeeper 130 days ago
It might be a faulty memory stick. Try cp-ing a large file, larger than RAM, and see whether the system hangs.
I once had two memory sticks and hit exactly your symptoms; my filesystem was xfs. After running for a while, every process suddenly hung: some shell commands still worked, but the disk couldn't be written. I assumed it was the filesystem or the disk, but it eventually turned out one of the memory sticks was bad.
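A sketch of this experiment, assuming a scratch path with enough free space; the path and sizes below are placeholders. Writing and copying roughly twice MemTotal forces the data through all of RAM rather than staying in the page cache.

```shell
# mem_total_kb: read MemTotal (in kB) from a /proc/meminfo-format file.
mem_total_kb() { awk '/^MemTotal:/ { print $2 }' "$1"; }

# Intended experiment on the affected host (paths and multiplier are
# placeholders; watch for hung processes during the copy):
#   size_mb=$(( $(mem_total_kb /proc/meminfo) * 2 / 1024 ))
#   dd if=/dev/zero of=/data/bigfile bs=1M count="$size_mb"
#   cp /data/bigfile /data/bigfile.copy
```

A dedicated tool like memtester exercises RAM more directly, but the big-file copy has the advantage of reproducing the original workload's I/O path.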
23
mdeche101644 129 days ago
Marking this to see whether someone can explain it later. A coredump/vmcore should have been generated; you may need to analyze that, and look at which process or behavior dirtied the inode in the log. Isn't the log saying dcgm deleted some node, and then hit trouble doing further operations after the delete? Corrections from the experts welcome.
24
zhoudaiyu OP
25
Emiya1208 127 days ago
0.1 A failing disk would show up in dmesg.
0.2 For memory, you may need to add mcelog monitoring; a DIMM could be bad.
The above addresses the guesses in the earlier replies. In reality, this is most likely still a kernel bug.
1. Red Hat's answer is already clear and comes with diagnostic steps; use those steps to confirm.
2. Because JBD2's locking and synchronization can deadlock under certain conditions, and xfs does not use JBD2, switching to XFS would very likely resolve the problem.
3. The jbd2 journal thread is waiting to acquire the j_checkpoint_mutex. That mutex is currently held by some task (e.g. a java process). The task holding the lock is waiting to be woken via the j_wait_done_commit queue, while progress on j_wait_done_commit in turn requires the j_checkpoint_mutex: this forms a deadlock.
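To act on points 0.1 and 0.2 above, a hedged sketch that filters a captured kernel log for common disk and memory error signatures; the pattern list is a rough, non-exhaustive starting point, not a complete diagnostic.

```shell
# hw_error_lines: filter kernel-log text on stdin for typical disk I/O and
# memory (MCE/EDAC) error signatures. Patterns are a rough starting point.
hw_error_lines() {
  grep -E 'I/O error|Medium Error|blk_update_request|Hardware Error|Machine check|EDAC'
}

# Intended use:
#   dmesg | hw_error_lines
#   mcelog --client    # query the mcelog daemon, if it is running, for memory errors
```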