Repairing a corrupted filesystem on a Linux disk partition

In OpenStack, detaching a volume and then re-attaching it can occasionally leave the partition unmountable; a manual mount then fails with an unrecognized-filesystem error.
One symptom is that lsblk and df report different sizes for the disk. Explicitly specifying ext4 and mounting again still fails, at which point the filesystem has to be repaired by hand.

  1. First unmount the disk. /dev/vdb is used in all the examples below.

    # umount /dev/vdb

    If that fails, comment out the disk's entry in /etc/fstab and reboot the OS.

  2. Check the disk with fsck

    # e2fsck /dev/vdb
    e2fsck 1.41.12 (17-May-2010)
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    ext4-1: 11/131072 files (0.0% non-contiguous), 27050/524128 blocks
  3. Restore the filesystem with resize2fs

    # resize2fs /dev/vdb
    resize2fs 1.41.12 (17-May-2010)
    Resizing the filesystem on /dev/vdb to 524128 (1k) blocks.
    The filesystem on /dev/vdb is now 524128 blocks long.
  4. Mount it again and check the files.

    # mount /dev/vdb /data0
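For repeated use the four steps above can be wrapped in a small script. A minimal Python sketch (`repair_ext_filesystem` is my own name; the injectable `run` parameter exists only so the command sequence can be exercised without touching a real device):

```python
import subprocess

def repair_ext_filesystem(device, mountpoint, run=subprocess.run):
    """Unmount, check, resize, and remount an ext2/3/4 filesystem.

    `run` defaults to subprocess.run; it is a parameter only so the
    sequence can be dry-run or tested without a real block device.
    """
    commands = [
        ["umount", device],
        ["e2fsck", "-y", device],   # -y answers yes to all repair prompts
        ["resize2fs", device],      # resize the fs to match the block count
        ["mount", device, mountpoint],
    ]
    for cmd in commands:
        run(cmd, check=True)        # check=True raises on non-zero exit
    return commands
```

Note that `e2fsck -y` auto-confirms every repair, which is convenient in a script but hides the individual prompts; run it interactively first if the data matters.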

Switched to Hexo

Installing themes for Jekyll was too much hassle, though maybe I just never found the right approach.
I never managed to set up the Ruby environment Jekyll depends on locally; after several attempts I gave up.

So I gave Hexo a try. It installs via npm, themes are easy to install, and they're simple to tweak.
The one regret: with Jekyll you push all the source to GitHub and GitHub builds it automatically, whereas Hexo generates the static files locally and you push all of those to GitHub instead.
Still, I'm quite happy with the overall result, so I switched.

Compiling Python to .pyc

# Recursively compile all code under the current directory to .pyc;
# -b writes each .pyc next to its source instead of into __pycache__
python3 -m compileall -b ./
# Delete all the .py source files
find . -name "*.py" | xargs rm -rf
# Delete the bytecode cache directories
find . -name "__pycache__" | xargs rm -rf

Checking whether disk IO performance is fast enough for etcd

https://www.ibm.com/cloud/blog/using-fio-to-tell-whether-your-storage-is-fast-enough-for-etcd

# mkdir test-data
# fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest
mytest: (g=0): rw=write, bs=(R) 2300B-2300B, (W) 2300B-2300B, (T) 2300B-2300B, ioengine=sync, iodepth=1
fio-3.7
Starting 1 process
mytest: Laying out IO file (1 file / 22MiB)
Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=85KiB/s][r=0,w=38 IOPS][eta 00m:00s]
mytest: (groupid=0, jobs=1): err= 0: pid=23832: Thu May 28 10:04:19 2020
write: IOPS=36, BW=81.9KiB/s (83.9kB/s)(21.0MiB/274912msec)
clat (usec): min=11, max=12185, avg=33.60, stdev=182.66
lat (usec): min=12, max=12187, avg=36.18, stdev=182.68
clat percentiles (usec):
| 1.00th=[ 17], 5.00th=[ 21], 10.00th=[ 22], 20.00th=[ 23],
| 30.00th=[ 24], 40.00th=[ 27], 50.00th=[ 30], 60.00th=[ 31],
| 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 42], 95.00th=[ 51],
| 99.00th=[ 69], 99.50th=[ 80], 99.90th=[ 125], 99.95th=[ 775],
| 99.99th=[11207]
bw ( KiB/s): min= 4, max= 166, per=100.00%, avg=81.37, stdev=39.59, samples=549
iops : min= 2, max= 74, avg=36.43, stdev=17.59, samples=549
lat (usec) : 20=5.02%, 50=89.91%, 100=4.93%, 250=0.07%, 500=0.01%
lat (usec) : 750=0.01%, 1000=0.03%
lat (msec) : 10=0.01%, 20=0.02%
fsync/fdatasync/sync_file_range:
sync (msec): min=2, max=464, avg=27.36, stdev=31.73
sync percentiles (msec):
| 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8],
| 30.00th=[ 11], 40.00th=[ 19], 50.00th=[ 22], 60.00th=[ 24],
| 70.00th=[ 26], 80.00th=[ 34], 90.00th=[ 57], 95.00th=[ 84],
| 99.00th=[ 159], 99.50th=[ 220], 99.90th=[ 313], 99.95th=[ 330],
| 99.99th=[ 414]
cpu : usr=0.10%, sys=0.40%, ctx=24233, majf=0, minf=13
IO depths : 1=200.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,10029,0,0 short=10029,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: bw=81.9KiB/s (83.9kB/s), 81.9KiB/s-81.9KiB/s (83.9kB/s-83.9kB/s), io=21.0MiB (23.1MB), run=274912-274912msec

Disk stats (read/write):
vda: ios=0/26445, merge=0/12218, ticks=0/359055, in_queue=343720, util=9.48%


All you have to do then is look at the output and check whether the 99th percentile of fdatasync durations is less than 10 ms. If that is the case, the storage is fast enough.

The part to look at is the fsync/fdatasync/sync_file_range block, in milliseconds: if the 99.00th percentile is under 10 ms, the disk is fine; otherwise etcd reads and writes are likely to suffer.
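With `--output-format=json` the check can be scripted. A sketch, assuming fio 3.x's JSON layout where fdatasync percentiles appear (in nanoseconds) under `jobs[0].sync.lat_ns.percentile`; the function names are mine, and the key names should be verified against your fio version:

```python
import json
import subprocess

def fdatasync_p99_ms(report):
    """99th-percentile fdatasync latency in milliseconds from a parsed
    fio JSON report (assumed fio 3.x layout, latencies in nanoseconds)."""
    pct = report["jobs"][0]["sync"]["lat_ns"]["percentile"]
    return pct["99.000000"] / 1e6

def disk_fast_enough_for_etcd(report, threshold_ms=10.0):
    """Apply the rule of thumb: p99 fdatasync under 10 ms."""
    return fdatasync_p99_ms(report) < threshold_ms

def run_fio_check(directory="test-data"):
    """Run the same fio job as above, but with JSON output, and evaluate it."""
    out = subprocess.run(
        ["fio", "--rw=write", "--ioengine=sync", "--fdatasync=1",
         "--directory=" + directory, "--size=22m", "--bs=2300",
         "--name=mytest", "--output-format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return disk_fast_enough_for_etcd(json.loads(out))
```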

SQLAlchemy cannot create tables automatically

If you write Python model files defining your tables the way the tutorials show, they are normally mapped to database tables automatically, but there is one situation in which some tables never get created.

For example, say the code below lives at ssms/utils/db_tool.py:

# -*- coding: utf-8 -*-


from contextlib import contextmanager

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from ssms.utils.db_setting import SSMS_DB_Base

SQLALCHEMY_DATABASE_URI = 'sqlite:///db/accounts.db'

engine = create_engine(SQLALCHEMY_DATABASE_URI, echo=True)  # Connect to database
# engine = create_engine(SQLALCHEMY_DATABASE_URI)  # Connect to database
SSMS_DB_Base.metadata.create_all(engine)  # Create tables for all loaded models


@contextmanager
def session_scope():
    """Provide a transactional scope around a series of operations."""
    s = sessionmaker(bind=engine)()
    s.expire_on_commit = False
    try:
        yield s
        s.commit()
    except Exception:
        s.rollback()
        raise
    finally:
        s.close()

All the model files live under ssms/beans/*.py.

You then find that some tables get mapped and some do not, which is baffling. My guess was that Python's module load order was responsible, so I added a print statement to each model and to db_tool.py. That confirmed it: models loaded before db_tool.py are mapped, and anything loaded after it is not. So how do you fix it?

Import all the models in ssms/utils/__init__.py. That guarantees that, under ssms/utils/, every model is already loaded before db_tool.py loads, so every table gets mapped.

Tested, and it works!
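The mechanism can be illustrated with a stdlib-only sketch. FakeBase and FakeMetadata are stand-ins I invented for SQLAlchemy's declarative base; the point is that create_all only sees tables whose model classes were imported, and therefore defined, before it runs:

```python
class FakeMetadata:
    """Stand-in for SQLAlchemy MetaData: a registry filled at class-definition time."""
    def __init__(self):
        self.tables = {}

class FakeBase:
    """Stand-in for a declarative base: subclassing registers the table."""
    metadata = FakeMetadata()

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        FakeBase.metadata.tables[cls.__tablename__] = cls

def create_all(metadata):
    """Stand-in for metadata.create_all(engine): it can only create
    tables that are already in the registry at call time."""
    return sorted(metadata.tables)

# A model whose module was imported before db_tool.py runs -> visible
class Account(FakeBase):
    __tablename__ = "accounts"

created = create_all(FakeBase.metadata)   # only 'accounts' exists here

# A model whose module is imported only later -> missed by that create_all
class Order(FakeBase):
    __tablename__ = "orders"
```

Importing every model in ssms/utils/__init__.py simply forces all the class definitions, and hence all the registrations, to happen before create_all runs.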

Update at 2021-04-08 09:41:44:
Flask-Migrate looks easier and more powerful.
https://flask-migrate.readthedocs.io/en/latest/

