The first test showed that rebooting the cluster nodes with the current storage configuration causes data corruption: the virtual machine image ends up damaged. I think the problem is in the GlusterFS synchronization and self-heal mechanism. My other thought is that the damage may be caused by the use of "thin provisioning"; I'll check that option if the new configuration does not work correctly.
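For reference, in this generation of AFR the self-heal is driven from the client side: files are repaired when they are accessed through the mount point after a node comes back. A minimal sketch of forcing a full check (the mount point /mnt/glusterfs is my assumption here):
# walk the whole tree from the client mount to trigger AFR self-heal
# after a node has been down (mount point is an assumption)
ls -lR /mnt/glusterfs > /dev/null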
Now I will change the setup of my GlusterFS server and client to use the AFR translator:
Server config on NAS01-NODE01 (172.16.0.1):
##############################################
### GlusterFS Server Volume Specification ##
##############################################
# dataspace on node1
volume gfs-ds
type storage/posix
option directory /data
end-volume
# posix locks
volume gfs-ds-locks
type features/posix-locks
subvolumes gfs-ds
end-volume
# dataspace on node2
volume gfs-node2-ds
type protocol/client
option transport-type tcp/client
option remote-host 172.16.0.2 # storage network
option remote-subvolume gfs-ds-locks
option transport-timeout 10 # value in seconds; it should be set relatively low
end-volume
# automatic file replication translator for dataspace
volume gfs-ds-afr
type cluster/afr
subvolumes gfs-ds-locks gfs-node2-ds # local and remote dataspaces
end-volume
# the actual exported volume
volume gfs
type performance/io-threads
option thread-count 8
option cache-size 64MB
subvolumes gfs-ds-afr
end-volume
# finally, the server declaration
volume server
type protocol/server
option transport-type tcp/server
subvolumes gfs
# storage network access only
option auth.ip.gfs-ds-locks.allow 172.16.0.*,127.0.0.1
option auth.ip.gfs.allow 172.16.0.*
end-volume
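Assuming the spec above is saved on NAS01-NODE01 as /etc/glusterfs/glusterfs-server.vol (the file and log paths here are my assumptions), the server daemon can be started like this:
# start the GlusterFS server daemon on NAS01-NODE01 with the spec above
glusterfsd -f /etc/glusterfs/glusterfs-server.vol -l /var/log/glusterfs/glusterfsd.log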
Server config on NAS01-NODE02 (172.16.0.2):
##############################################
### GlusterFS Server Volume Specification ##
##############################################
# dataspace on node2
volume gfs-ds
type storage/posix
option directory /data
end-volume
# posix locks
volume gfs-ds-locks
type features/posix-locks
subvolumes gfs-ds
end-volume
# dataspace on node1
volume gfs-node1-ds
type protocol/client
option transport-type tcp/client
option remote-host 172.16.0.1 # storage network
option remote-subvolume gfs-ds-locks
option transport-timeout 10 # value in seconds; it should be set relatively low
end-volume
# automatic file replication translator for dataspace
volume gfs-ds-afr
type cluster/afr
subvolumes gfs-ds-locks gfs-node1-ds # local and remote dataspaces
end-volume
# the actual exported volume
volume gfs
type performance/io-threads
option thread-count 8
option cache-size 64MB
subvolumes gfs-ds-afr
end-volume
# finally, the server declaration
volume server
type protocol/server
option transport-type tcp/server
subvolumes gfs
# storage network access only
option auth.ip.gfs-ds-locks.allow 172.16.0.*,127.0.0.1
option auth.ip.gfs.allow 172.16.0.*
end-volume
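NAS01-NODE02 is started the same way with its own spec file; after both daemons are up I check the log to make sure each node has connected to the remote gfs-ds-locks subvolume (paths are again assumptions):
# start the server daemon on NAS01-NODE02 and watch the log for the
# connection to the remote dataspace on 172.16.0.1
glusterfsd -f /etc/glusterfs/glusterfs-server.vol -l /var/log/glusterfs/glusterfsd.log
tail -f /var/log/glusterfs/glusterfsd.log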
Client config on both nodes:
#############################################
## GlusterFS Client Volume Specification ##
#############################################
# the exported volume to mount # required!
volume cluster
type protocol/client
option transport-type tcp/client
option remote-host 172.16.0.1 # use 172.16.0.2 on node2
option remote-subvolume gfs # exported volume
option transport-timeout 10 # value in seconds, should be relatively low
end-volume
# performance block for cluster # optional!
volume writeback
type performance/write-behind
option aggregate-size 131072
subvolumes cluster
end-volume
# performance block for cluster # optional!
volume readahead
type performance/read-ahead
option page-size 65536
option page-count 16
subvolumes writeback
end-volume
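With the client spec saved as /etc/glusterfs/glusterfs-client.vol (the path and mount point are my assumptions), the volume is mounted on each node through the glusterfs client binary:
# mount the replicated volume on the local node
mkdir -p /mnt/glusterfs
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs
The same mount can go into /etc/fstab with the spec file as the device and glusterfs as the filesystem type, so the volume comes up at boot.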