A discussion on toolbox.com gave me the idea for this article. The subject was a full root (/) filesystem and how to extend it. Most of the participants did not believe this is possible, but I will demonstrate how to do it. My demonstration is based on Oracle Solaris 10 Generic_142910-17 i386.
1. What is my OS
# showrev
Hostname: sun02
Hostid: 10b69b13
Release: 5.10
Kernel architecture: i86pc
Application architecture: i386
Hardware provider:
Domain:
Kernel version: SunOS 5.10 Generic_142910-17
2. Check the partition map of the hard disk
# prtvtoc /dev/dsk/c1t0d0s2
* /dev/dsk/c1t0d0s2 partition map
<snip>
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 16065 2104515 2120579 /
1 4 00 2570400 6297480 8867879 /usr
2 5 00 0 33495525 33495524
3 3 01 11309760 1060290 12370049
8 1 01 0 16065 16064
As you can see, there are some unallocated cylinders after each partition (/, /usr and swap); this was arranged at installation time for the purpose of this demonstration.
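The free space can be read directly off the prtvtoc numbers: it is the gap between one slice's last sector and the next slice's first sector. A minimal sketch, using the sector numbers from the output above and assuming 512-byte sectors:

```shell
# Free sectors between two slices: next slice's first sector minus
# (previous slice's last sector + 1).
gap_sectors() {
  echo $(( $2 - $1 - 1 ))
}

# Gap between slice 0 (last sector 2120579) and slice 1 (first sector 2570400):
g=$(gap_sectors 2120579 2570400)
echo "$g free sectors ($(( g / 16065 )) cylinders, ~$(( g * 512 / 1024 / 1024 )) MB)"
```

So there are 28 whole free cylinders behind the root slice, which is what makes the extension below possible.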
3. Check the exact sizes of filesystems
# df -k / /usr
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c1t0d0s0 1019856 377425 581240 40% /
/dev/dsk/c1t0d0s1 3100362 2251708 786647 75% /usr
# swap -l
swapfile dev swaplo blocks free
/dev/dsk/c1t0d0s3 30,3 8 1060280 1060280
4. Create two control files filled with random bytes, to verify the integrity of the filesystems later
# dd if=/dev/urandom of=/checkfileroot bs=1024 count=10240
10240+0 records in
10240+0 records out
# dd if=/dev/urandom of=/usr/checkfileusr bs=1024 count=10240
10240+0 records in
10240+0 records out
5. And get checksums of the files
# digest -a sha1 /checkfileroot /usr/checkfileusr
(/checkfileroot) = c5ee33c68b147c58e6190a99a647a9baf35581a8
(/usr/checkfileusr) = 77bb739b734ab01a43578479ec4a3abe92e6c4bd
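The before/after checksum technique used here can be sketched in portable form (digest(1M) is Solaris-specific; cksum is used below only so the sketch runs anywhere):

```shell
# Create a control file of random bytes, record its checksum, and verify
# later that the data survived the resize unchanged.
f=/tmp/checkfile.$$
dd if=/dev/urandom of="$f" bs=1024 count=64 2>/dev/null
before=$(cksum < "$f")

# ... grow the filesystem here ...

after=$(cksum < "$f")
[ "$before" = "$after" ] && echo "control file intact"
rm -f "$f"
```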
6. Extend slices 0, 1 and 3 by some number of cylinders
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
/pci@0,0/pci1000,30@10/sd@0,0
Specify disk (enter its number): 0
selecting c1t0d0
[disk formatted]
Warning: Current Disk has mounted partitions.
/dev/dsk/c1t0d0s0 is currently mounted on /. Please see umount(1M).
/dev/dsk/c1t0d0s1 is currently mounted on /usr. Please see umount(1M).
/dev/dsk/c1t0d0s3 is currently used by swap. Please see swap(1M).
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> p
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> 0
Part Tag Flag Cylinders Size Blocks
0 root wm 1 - 131 1.00GB (131/0/0) 2104515
Enter partition id tag[root]:
Enter partition permission flags[wm]:
Enter new starting cyl[1]:
Enter partition size[2104515b, 131c, 131e, 1027.60mb, 1.00gb]: 140c
partition> 1
Part Tag Flag Cylinders Size Blocks
1 usr wm 160 - 551 3.00GB (392/0/0) 6297480
Enter partition id tag[usr]:
Enter partition permission flags[wm]:
Enter new starting cyl[160]:
Enter partition size[6297480b, 392c, 551e, 3074.94mb, 3.00gb]: 400c
partition> 3
Part Tag Flag Cylinders Size Blocks
3 swap wu 704 - 769 517.72MB (66/0/0) 1060290
Enter partition id tag[swap]:
Enter partition permission flags[wu]:
Enter new starting cyl[704]:
Enter partition size[1060290b, 66c, 769e, 517.72mb, 0.51gb]: 80c
partition> la
Ready to label disk, continue? y
partition> q
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> q
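The cylinder counts entered in format translate directly into sectors via the disk geometry shown earlier (255 heads x 63 sectors per track, i.e. 16065 sectors per cylinder):

```shell
SPC=$(( 255 * 63 ))   # sectors per cylinder = 16065
echo "slice 0 (140c): $(( 140 * SPC )) sectors"   # 2249100
echo "slice 1 (400c): $(( 400 * SPC )) sectors"   # 6426000
echo "slice 3 (80c):  $((  80 * SPC )) sectors"   # 1285200
```

The first two figures match the sector counts that growfs reports for the resized slices.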
7. And the moment of truth: extend the root (/) filesystem
# growfs -M / /dev/rdsk/c1t0d0s0
Warning: 5748 sector(s) in last cylinder unallocated
/dev/rdsk/c1t0d0s0: 2249100 sectors in 367 cylinders of 48 tracks, 128 sectors
1098.2MB in 23 cyl groups (16 c/g, 48.00MB/g, 11648 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
1279648, 1378080, 1476512, 1574944, 1673376, 1771808, 1870240, 1968672,
2067104, 2165536
8. Then grow /usr
# growfs -M /usr /dev/rdsk/c1t0d0s1
Warning: 624 sector(s) in last cylinder unallocated
/dev/rdsk/c1t0d0s1: 6426000 sectors in 1046 cylinders of 48 tracks, 128 sectors
3137.7MB in 66 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
5512224, 5610656, 5709088, 5807520, 5905952, 6004384, 6102816, 6201248,
6291488, 6389920
9. And swap
# swap -d /dev/dsk/c1t0d0s3
/dev/dsk/c1t0d0s3 was dump device --
invoking dumpadm(1M) -d swap to select new dump device
dumpadm: no swap devices are available
# swap -a /dev/dsk/c1t0d0s3
operating system crash dump was previously disabled --
invoking dumpadm(1M) -d swap to select new dump device
# dumpadm -d swap
Dump content: kernel pages
Dump device: /dev/dsk/c1t0d0s3 (swap)
Savecore directory: /var/crash/sun02
Savecore enabled: yes
Save compressed: on
For the swap I simply deleted it and added it again, and then updated the dump device, which is important. Of course, it is not always possible to just delete virtual memory on a production system, but you can play it safe: create a new swap device, delete the old one, re-add the old one, then delete the new one. This can take a long time on a production system, but it is a relatively safe operation.
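The new swap figure is consistent with the resized slice: 80 cylinders at 16065 sectors each, minus the 8 reserved sectors shown in the swaplo column of swap -l:

```shell
slice_sectors=$(( 80 * 16065 ))   # 1285200 sectors in the resized slice 3
swaplo=8                          # first usable block offset, from swap -l
echo "$(( slice_sectors - swaplo )) usable blocks"   # 1285192
```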
10. So, let's check the sizes of the filesystems again
# df -k / /usr
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c1t0d0s0 1090677 387745 641741 38% /
/dev/dsk/c1t0d0s1 3163878 2262020 839851 73% /usr
# swap -l
swapfile dev swaplo blocks free
/dev/dsk/c1t0d0s3 30,3 8 1285192 1285192
As you can see, they are bigger than they were a few minutes ago.
11. But what about the control files?
# digest -a sha1 /checkfileroot /usr/checkfileusr
(/checkfileroot) = c5ee33c68b147c58e6190a99a647a9baf35581a8
(/usr/checkfileusr) = 77bb739b734ab01a43578479ec4a3abe92e6c4bd
As you can see, the checksums are unchanged.
12. Et voila, we successfully extended our filesystems on the fly. By the way, in the official Oracle Solaris documentation you can find this:
--------------
LIMITATIONS
Only UFS file systems (either mounted or unmounted) can be expanded using the growfs command. Once a file system is expanded, it cannot be decreased in size. The following conditions prevent you from expanding file systems:
- When acct is activated and the accounting file is on the target device.
- When C2 security is activated and the logging file is on the target file system.
- When there is a local swap file in the target file system.
- When the file system is root (/), /usr, or swap.
Solaris x86 root filesystem mirroring
Preamble
This document aims to show how to mirror the root filesystem on Solaris x86 with the help of Solaris Volume Manager. An already installed OS will be used, and all the work will be done without any need to reinstall.
Prerequisites
1. First, make sure there are two identical hard disks in the server
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d0 <DEFAULT cyl 1563 alt 2 hd 255 sec 63>
/pci@0,0/pci-ide@7,1/ide@0/cmdk@0,0
1. c0d1 <DEFAULT cyl 1563 alt 2 hd 255 sec 63>
/pci@0,0/pci-ide@7,1/ide@0/cmdk@1,0
Specify disk (enter its number): ^C
2. A small slice is needed for the metadb information (usually slice 7), like:
partition> p
Current partition table (unnamed):
Total disk cylinders available: 1563 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 518 - 1562 8.01GB (1045/0/0) 16787925
1 swap wu 3 - 133 1.00GB (131/0/0) 2104515
2 backup wm 0 - 1562 11.97GB (1563/0/0) 25109595
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 134 - 135 15.69MB (2/0/0) 32130
8 boot wu 0 - 0 7.84MB (1/0/0) 16065
9 alternates wu 1 - 2 15.69MB (2/0/0) 32130
3. The next step is to create the same partitions on the second disk. To avoid human error it is much better to use some automation:
prtvtoc /dev/rdsk/c0d0s2 > /tmp/c0d0s2.toc
fmthard -s /tmp/c0d0s2.toc /dev/rdsk/c0d1s2
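After fmthard, it is worth verifying that both disks now carry identical slice tables. prtvtoc prefixes comment lines (including the device path) with '*', so those should be stripped before comparing; strip_comments below is a hypothetical helper, and the on-system prtvtoc/diff commands are shown commented out as a sketch:

```shell
# Keep only the slice table rows, dropping '*' comment lines (which
# contain the device path and would always differ between the disks).
strip_comments() { grep -v '^\*' "$1"; }

# On the live system (bash):
# prtvtoc /dev/rdsk/c0d0s2 > /tmp/c0d0s2.toc
# prtvtoc /dev/rdsk/c0d1s2 > /tmp/c0d1s2.toc
# diff <(strip_comments /tmp/c0d0s2.toc) <(strip_comments /tmp/c0d1s2.toc) \
#   && echo "slice tables match"
```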
4. Then we should identify the partitions that need to be mirrored:
# egrep "ufs|swap" /etc/vfstab|grep "/dev/dsk"
/dev/dsk/c0d0s1 - - swap - no -
/dev/dsk/c0d0s0 /dev/rdsk/c0d0s0 / ufs 1 no -
LVM
1. Let’s create a few copies of the metadb on the partitions we created for this purpose:
# metadb -a -f -c 2 c0d0s7 c0d1s7
2. It’s time to put the disks where the OS resides under the management of SVM
# metainit -f d10 1 1 c0d0s0
d10: Concat/Stripe is setup
# metainit -f d11 1 1 c0d0s1
d11: Concat/Stripe is setup
# metainit d0 -m d10
d0: Mirror is setup
# metainit d1 -m d11
d1: Mirror is setup
# metaroot d0
3. Check newly created devices:
# ls -l /dev/md/rdsk
total 8
lrwxrwxrwx 1 root root 36 Aug 30 18:29 d0 -> ../../../devices/pseudo/md@0:0,0,raw
lrwxrwxrwx 1 root root 36 Aug 30 18:29 d1 -> ../../../devices/pseudo/md@0:0,1,raw
lrwxrwxrwx 1 root root 37 Aug 30 18:28 d10 -> ../../../devices/pseudo/md@0:0,10,raw
lrwxrwxrwx 1 root root 37 Aug 30 18:28 d11 -> ../../../devices/pseudo/md@0:0,11,raw
# ls -l /dev/md/dsk
total 8
lrwxrwxrwx 1 root root 36 Aug 30 18:29 d0 -> ../../../devices/pseudo/md@0:0,0,blk
lrwxrwxrwx 1 root root 36 Aug 30 18:29 d1 -> ../../../devices/pseudo/md@0:0,1,blk
lrwxrwxrwx 1 root root 37 Aug 30 18:28 d10 -> ../../../devices/pseudo/md@0:0,10,blk
lrwxrwxrwx 1 root root 37 Aug 30 18:28 d11 -> ../../../devices/pseudo/md@0:0,11,blk
4. Make the appropriate changes in /etc/vfstab to boot from the mirrors instead of the physical disks
# egrep "ufs|swap" /etc/vfstab|grep "/dev/md/dsk"
/dev/md/dsk/d1 - - swap - no -
/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -
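metaroot(1M) already rewrites the root entry in /etc/vfstab; the swap line can be switched the same way with sed. A sketch on a sample line, assuming the d1 mirror name used above (back up /etc/vfstab before editing it in place):

```shell
# Swap the physical swap device for its SVM mirror in a vfstab line.
line='/dev/dsk/c0d0s1 - - swap - no -'
echo "$line" | sed 's|/dev/dsk/c0d0s1|/dev/md/dsk/d1|'
# -> /dev/md/dsk/d1 - - swap - no -
```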
5. The next step is to flush cache buffers and reboot
# sync;sync;sync
# init 6
6. It is time to put the second disk under the management of SVM
# metainit -f d20 1 1 c0d1s0
d20: Concat/Stripe is setup
# metainit -f d21 1 1 c0d1s1
d21: Concat/Stripe is setup
7. And attach them to the previously created mirrors. Be aware that the synchronisation process continues in the background; you can check its progress
# metattach d0 d20
d0: submirror d20 is attached
# metattach d1 d21
d1: submirror d21 is attached
8. Check the progress of building the mirrors and wait until they finish
# metastat
d1: Mirror
Submirror 0: d11
State: Okay
Submirror 1: d21
State: Resyncing
Resync in progress: 96 % done
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 2104515 blocks (1.0 GB)
d11: Submirror of d1
State: Okay
Size: 2104515 blocks (1.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0d0s1 0 No Okay Yes
d21: Submirror of d1
State: Resyncing
Size: 2104515 blocks (1.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0d1s1 0 No Okay Yes
d0: Mirror
Submirror 0: d10
State: Okay
Submirror 1: d20
State: Resyncing
Resync in progress: 13 % done
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 16787925 blocks (8.0 GB)
d10: Submirror of d0
State: Okay
Size: 16787925 blocks (8.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0d0s0 0 No Okay Yes
d20: Submirror of d0
State: Resyncing
Size: 16787925 blocks (8.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0d1s0 0 No Okay Yes
Device Relocation Information:
Device Reloc Device ID
c0d1 Yes id1,cmdk@AVMware_Virtual_IDE_Hard_Drive=01000000000000000001
c0d0 Yes id1,cmdk@AVMware_Virtual_IDE_Hard_Drive=00000000000000000001
Boot
1. The next step is to check whether the partition on the second disk is active
# fdisk /dev/rdsk/c0d1p0
Total disk size is 1566 cylinders
Cylinder size is 16065 (512 byte) blocks
Cylinders
Partition Status Type Start End Length %
========= ====== ============ ===== === ====== ===
1 Active Solaris2 1 1565 1565 100
SELECT ONE OF THE FOLLOWING:
1. Create a partition
2. Specify the active partition
3. Delete a partition
4. Change between Solaris and Solaris2 Partition IDs
5. Exit (update disk configuration and exit)
6. Cancel (exit without updating disk configuration)
Enter Selection: 5
2. And add a boot record to the second disk to make it bootable
# /sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d1s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 233 sectors starting at 50 (abs 16115)
3. Add a new item to the boot menu (/boot/grub/menu.lst) to have an alternative way to boot
title Alternate boot
root (hd1,0,a)
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive
4. Check if the new item is added to the boot menu
# bootadm list-menu
The location for the active GRUB menu is: /boot/grub/menu.lst
default 0
timeout 10
0 Solaris 10 5/08 s10x_u5wos_10 X86
1 Solaris failsafe
2 Alternate boot
5. That’s all: the root partition (plus swap) is now mirrored
Conclusion
This document does not cover all the options and possibilities of SVM, only the short set needed to get the work done. For further information, please consult the official Oracle documentation.
Oracle ASM - yet another LVM (management)
6. The next step is to start the ASM instance. Do not forget to set the SID to +ASM when you log in to the instance with sqlplus
$ export ORACLE_SID=+ASM
$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Mon Apr 5 09:28:21 2010
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ASM instance started
Total System Global Area 284565504 bytes
Fixed Size 1336036 bytes
Variable Size 258063644 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
SQL>
7. Let's check the status of our diskgroups
SQL> SELECT name, type, total_mb, free_mb FROM V$ASM_DISKGROUP;
DATA NORMAL 32756 32575
8. Check the list of available disks
SQL> select name, path from V$ASM_DISK;
NAME
------------------------------
PATH
--------------------------------------------------------------------
ORCL:DFDISK4
ORCL:DFDISK5
DFDISK0
ORCL:DFDISK0
DFDISK1
ORCL:DFDISK1
DFDISK2
ORCL:DFDISK2
DFDISK3
ORCL:DFDISK3
FBDISK0
ORCL:FBDISK0
FBDISK1
ORCL:FBDISK1
8 rows selected.
9. And create new diskgroup for flashback
SQL> CREATE DISKGROUP FBA NORMAL REDUNDANCY FAILGROUP fba_fb1 disk 'ORCL:FBDISK0' FAILGROUP fba_fb2 disk 'ORCL:FBDISK1';
Diskgroup created.
10. And see the new list of diskgroups
SQL> select name,total_mb from V$ASM_DISKGROUP;
NAME TOTAL_MB
------------------------------ ----------
DATA 32756
FBA 16378
11. Add two new disks to diskgroup DATA
SQL> ALTER DISKGROUP data ADD DISK 'ORCL:DFDISK4', 'ORCL:DFDISK5';
Diskgroup altered.
SQL> select name,total_mb from V$ASM_DISKGROUP where name='DATA';
NAME TOTAL_MB
------------------------------ ----------
DATA 49134
12. Remove one of the disks from diskgroup
SQL> ALTER DISKGROUP data DROP DISK DFDISK5;
Diskgroup altered.
SQL> select name,total_mb from V$ASM_DISKGROUP where name='DATA';
NAME TOTAL_MB
------------------------------ ----------
DATA 40945
and the size is already about 40 GB.
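The TOTAL_MB figures above are consistent with each other: each DFDISK contributes about 8189 MB (note that the FBA group of two FBDISKs totals 16378 MB, i.e. 2 x 8189):

```shell
disk_mb=8189
echo $(( 32756 + 2 * disk_mb ))            # after adding two disks: 49134
echo $(( 32756 + 2 * disk_mb - disk_mb )) # after dropping one: 40945
```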
For further information about migration of database to ASM, filepaths, etc, please consult official Oracle documentation about ASM: Oracle® Database Storage Administrator's Guide
Oracle ASM - yet another LVM (system configuration)
4. It's time for some system administrator tasks. Oracle ASM needs the disks it will work with to be specially marked
4.1. For some reason (I don't know why) ASM can work only with partitions, not with entire disks, so we need to create one big partition on each disk
[root@rh-asm-ora ~]# fdisk /dev/sda
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 1044.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1044, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1044, default 1044):
Using default value 1044
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Do the same for the rest of the disks: /dev/sdb, /dev/sdc, /dev/sdd, /dev/sde and /dev/sdf
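The same fdisk dialogue can be repeated for the remaining disks by feeding the answers (n, p, 1, two empty defaults, w) on stdin. The loop below only prints the commands it would run; removing the echo would execute them, which is destructive, so double-check the device list first:

```shell
# fdisk answers: new, primary, partition 1, default first/last cylinder, write.
answers='n\np\n1\n\n\nw\n'
for d in sdb sdc sdd sde sdf; do
  echo "printf '$answers' | fdisk /dev/$d"
done
```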
4.2. Next step is to configure the ASMlib. This is done via init script
[root@rh-asm-ora ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
If you do not get OK on the last line, check /var/log/messages. The usual reason is that the correct version of ASMLib is not installed; check the Oracle ASMLib site. If you can't find modules for your kernel version, you should compile them from the source you can get here: http://oss.oracle.com/projects/oracleasm/
4.3. And check if the kernel module is loaded
[root@rh-asm-ora ~]# lsmod |grep ora
oracleasm 46356 1
4.4. The module is loaded, so let's label the disks to be recognized by Oracle ASM. I will dedicate four disks to tablespaces and two disks to flashback
[root@rh-asm-ora ~]# /etc/init.d/oracleasm createdisk dfdisk0 /dev/sda1
Marking disk "dfdisk0" as an ASM disk: [ OK ]
[root@rh-asm-ora ~]# /etc/init.d/oracleasm createdisk dfdisk1 /dev/sdb1
Marking disk "dfdisk1" as an ASM disk: [ OK ]
[root@rh-asm-ora ~]# /etc/init.d/oracleasm createdisk dfdisk2 /dev/sdc1
Marking disk "dfdisk2" as an ASM disk: [ OK ]
[root@rh-asm-ora ~]# /etc/init.d/oracleasm createdisk dfdisk3 /dev/sdd1
Marking disk "dfdisk3" as an ASM disk: [ OK ]
[root@rh-asm-ora ~]# /etc/init.d/oracleasm createdisk fbdisk0 /dev/sde1
Marking disk "fbdisk0" as an ASM disk: [ OK ]
[root@rh-asm-ora ~]# /etc/init.d/oracleasm createdisk fbdisk1 /dev/sdf1
Marking disk "fbdisk1" as an ASM disk: [ OK ]
4.5. Check the ASM volumes
[root@rh-asm-ora ~]# /etc/init.d/oracleasm listdisks
DFDISK0
DFDISK1
DFDISK2
DFDISK3
FBDISK0
FBDISK1
5. The next step is to install the Oracle ASM software. This is a mostly straightforward process, so just read the installation guide and do it. Do not forget that in version 11gR2 Oracle ASM is part of the Grid Infrastructure installation package, not the standard Oracle Database installation package.
Play with soft partitions on Solaris part 4
Grow a soft partition on the fly
1. Create a random file and calculate its checksum
# cd /oradata
# dd if=/dev/urandom of=file bs=1024 count=1024
1024+0 records in
1024+0 records out
# digest -a md5 file
f0252c61cab0d92ae0b91206af72f85d
2. Grow the d51 partition by 1 GB and grow the filesystem
# metattach d51 1g
d51: Soft Partition has been grown
# growfs -M /oradata /dev/md/rdsk/d51
/dev/md/rdsk/d51: 8388608 sectors in 2048 cylinders of 128 tracks, 32 sectors
4096.0MB in 82 cyl groups (25 c/g, 50.00MB/g, 8192 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 102464, 204896, 307328, 409760, 512192, 614624, 717056, 819488, 921920,
7375136, 7477568, 7580000, 7682432, 7784864, 7887296, 7989728, 8092160,
8194592, 8297024
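The growfs output can be sanity-checked: 8388608 sectors of 512 bytes is exactly 4096 MB, matching the new size after the 1 GB extension (3 GB + 1 GB):

```shell
# 8388608 sectors * 512 bytes per sector, expressed in MB (1 MB = 1048576 bytes)
sectors=8388608
mb=$((sectors * 512 / 1048576))
echo "${mb} MB"    # 4096 MB
```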
3. List the mounted soft partitions and check that the file is unchanged
# df -k|grep md
/dev/md/dsk/d51 4109006 5145 4042237 1% /oradata
/dev/md/dsk/d52 1488991 1521 1427911 1% /oralogs
# digest -a md5 file
f0252c61cab0d92ae0b91206af72f85d
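The same create/grow/verify pattern can be reproduced on any system. A hedged sketch using md5sum in place of the Solaris digest command (the /tmp/checkfile path is illustrative):

```shell
# Create a random control file and record its checksum (md5sum stands in
# for Solaris's "digest -a md5"; /tmp/checkfile is an illustrative path)
dd if=/dev/urandom of=/tmp/checkfile bs=1024 count=1024 2>/dev/null
before=$(md5sum /tmp/checkfile | awk '{print $1}')
# ... grow the filesystem here (metattach + growfs on Solaris) ...
after=$(md5sum /tmp/checkfile | awk '{print $1}')
[ "$before" = "$after" ] && echo "checksum unchanged"
```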
Conclusion
In a few simple steps we created volumes and filesystems and grew them without disrupting the normal work of the system and its applications. This example shows the power of Solaris Volume Manager, but of course it does not cover many of the options and details you will need in daily work. For more information on the subject, it is strongly recommended to read the official SUN documentation and related papers.
Play with soft partitions on Solaris part 3
Creation of soft partitions
1. Create one soft partition of 3 GB named d51 and another of 1.5 GB named d52
# metainit d51 -p d5 3g
d51: Soft Partition is setup
# metainit d52 -p d5 1500m
d52: Soft Partition is setup
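The sizes given to metainit map directly to 512-byte blocks, which is what metastat reports later: 3g is 3 × 1024 × 2048 = 6291456 blocks, and 1500m is 1500 × 2048 = 3072000 blocks. As arithmetic:

```shell
# 1 MB = 2048 blocks of 512 bytes; 1 GB = 2097152 blocks
d51_blocks=$((3 * 1024 * 2048))    # metainit d51 -p d5 3g
d52_blocks=$((1500 * 2048))        # metainit d52 -p d5 1500m
echo "$d51_blocks $d52_blocks"     # 6291456 3072000
```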
2. Check the status of d52, for example
# metastat d52
d52: Soft Partition
Device: d5
State: Okay
Size: 3072000 blocks (1.5 GB)
Extent Start Block Block count
0 6291520 3072000
d5: RAID
State: Okay
Interlace: 32 blocks
Size: 33091584 blocks (15 GB)
Original device:
Size: 33094272 blocks (15 GB)
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s0 330 No Okay Yes
c1t1d0s0 330 No Okay Yes
c1t2d0s0 330 No Okay Yes
c1t3d0s0 330 No Okay Yes
c1t4d0s0 330 No Okay Yes
Device Relocation Information:
Device Reloc Device ID
c1t0d0 Yes id1,sd@SATA_____VBOX_HARDDISK____VB302359a4-9d8f686a
c1t1d0 Yes id1,sd@SATA_____VBOX_HARDDISK____VB5a844d32-bae9afe7
c1t2d0 Yes id1,sd@SATA_____VBOX_HARDDISK____VBb0b0a9e8-72b6d075
c1t3d0 Yes id1,sd@SATA_____VBOX_HARDDISK____VBc7fe9534-5a7df964
c1t4d0 Yes id1,sd@SATA_____VBOX_HARDDISK____VB09918bf7-59adb880
3. Create filesystems
# newfs /dev/md/rdsk/d51
Warning: setting rpm to 60
newfs: construct a new file system /dev/md/rdsk/d51: (y/n)? y
/dev/md/rdsk/d51: 6291456 sectors in 1536 cylinders of 128 tracks, 32 sectors
3072.0MB in 62 cyl groups (25 c/g, 50.00MB/g, 8192 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 102464, 204896, 307328, 409760, 512192, 614624, 717056, 819488, 921920,
5326496, 5428928, 5531360, 5633792, 5736224, 5838656, 5941088, 6043520,
6145952, 6248384
# newfs /dev/md/rdsk/d52
Warning: setting rpm to 60
newfs: construct a new file system /dev/md/rdsk/d52: (y/n)? y
/dev/md/rdsk/d52: 3072000 sectors in 750 cylinders of 128 tracks, 32 sectors
1500.0MB in 33 cyl groups (23 c/g, 46.00MB/g, 11264 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 94272, 188512, 282752, 376992, 471232, 565472, 659712, 753952, 848192,
2167552, 2261792, 2356032, 2450272, 2544512, 2638752, 2732992, 2827232,
2921472, 3015712
4. Create the mountpoints, mount the filesystems, and verify the mounts
# mkdir /oradata /oralogs
# mount /dev/md/dsk/d51 /oradata
# mount /dev/md/dsk/d52 /oralogs
# df -k|grep md
/dev/md/dsk/d51 3081231 3089 3016518 1% /oradata
/dev/md/dsk/d52 1488991 1521 1427911 1% /oralogs
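The mounts above do not survive a reboot. To make them persistent, matching lines can be added to /etc/vfstab (a sketch, assuming the same devices and mountpoints):

```
#device to mount   device to fsck     mount point  FS type  fsck pass  mount at boot  options
/dev/md/dsk/d51    /dev/md/rdsk/d51   /oradata     ufs      2          yes            -
/dev/md/dsk/d52    /dev/md/rdsk/d52   /oralogs     ufs      2          yes            -
```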
Play with soft partitions on Solaris part 2
2. Replicate the Solaris fdisk partition across the disks: save the table of the first disk with -W and write it to the others with -F
# fdisk -W /tmp/partition /dev/rdsk/c1t0d0p0
# fdisk -F /tmp/partition /dev/rdsk/c1t1d0p0
# fdisk -F /tmp/partition /dev/rdsk/c1t2d0p0
# fdisk -F /tmp/partition /dev/rdsk/c1t3d0p0
# fdisk -F /tmp/partition /dev/rdsk/c1t4d0p0
3. Replicate the slice map (VTOC) of the first disk over the rest of the disks automatically, to avoid human error
# prtvtoc /dev/rdsk/c1t0d0s2 > /tmp/c1t0d0s2.toc
# fmthard -s /tmp/c1t0d0s2.toc /dev/rdsk/c1t1d0s2
fmthard: New volume table of contents now in place
# fmthard -s /tmp/c1t0d0s2.toc /dev/rdsk/c1t2d0s2
fmthard: New volume table of contents now in place
# fmthard -s /tmp/c1t0d0s2.toc /dev/rdsk/c1t3d0s2
fmthard: New volume table of contents now in place
# fmthard -s /tmp/c1t0d0s2.toc /dev/rdsk/c1t4d0s2
fmthard: New volume table of contents now in place
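The repeated fmthard invocations above can be scripted. A dry-run sketch that just prints one command per remaining disk (drop the echo to execute for real on the Solaris host):

```shell
# Generate one fmthard command per target disk, reusing the VTOC saved
# from c1t0d0. echo keeps this a dry run.
for disk in c1t1d0 c1t2d0 c1t3d0 c1t4d0; do
    echo "fmthard -s /tmp/c1t0d0s2.toc /dev/rdsk/${disk}s2"
done
```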
4. Create two metadb replicas on each of the five disks
# metadb -a -f -c 2 c1t0d0s7 c1t1d0s7 c1t2d0s7 c1t3d0s7 c1t4d0s7
5. Create the RAID-5 array
# metainit d5 -r c1t0d0s0 c1t1d0s0 c1t2d0s0 c1t3d0s0 c1t4d0s0
d5: RAID is setup
6. Check the status of the array
# metastat d5
d5: RAID
State: Okay
Interlace: 32 blocks
Size: 33091584 blocks (15 GB)
Original device:
Size: 33094272 blocks (15 GB)
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s0 330 No Okay Yes
c1t1d0s0 330 No Okay Yes
c1t2d0s0 330 No Okay Yes
c1t3d0s0 330 No Okay Yes
c1t4d0s0 330 No Okay Yes
Device Relocation Information:
Device Reloc Device ID
c1t0d0 Yes id1,sd@SATA_____VBOX_HARDDISK____VB302359a4-9d8f686a
c1t1d0 Yes id1,sd@SATA_____VBOX_HARDDISK____VB5a844d32-bae9afe7
c1t2d0 Yes id1,sd@SATA_____VBOX_HARDDISK____VBb0b0a9e8-72b6d075
c1t3d0 Yes id1,sd@SATA_____VBOX_HARDDISK____VBc7fe9534-5a7df964
c1t4d0 Yes id1,sd@SATA_____VBOX_HARDDISK____VB09918bf7-59adb880
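The RAID-5 size reported by metastat adds up: with five columns, one column's worth of space goes to parity, so usable capacity is 4/5 of the raw total, and 33091584 blocks of 512 bytes is roughly 15.8 GB, which metastat rounds to 15 GB:

```shell
# Usable RAID-5 space = (columns - 1) * per-column blocks; metastat reports
# the total in 512-byte blocks. Numbers taken from the metastat output above.
total_blocks=33091584
per_column=$((total_blocks / 4))          # 5 columns, 4 carry data
gb=$((total_blocks * 512 / 1073741824))   # integer GB (binary)
echo "$per_column $gb"                    # 8272896 15
```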