Unix Profession Webcast October 2007

Transcript Unix Profession Webcast October 2007

Solaris Virtualization Methods (with Practical Exercise in Containers)
Dusan Baljevic
Sydney, Australia
© 2007 Dusan Baljevic
The information contained herein is subject to change without notice
Solaris - Four Types of Virtualization
• Solaris Resource Management
• Solaris Containers
• Sun Fire Dynamic System Domains
• Logical Domains
Four Types of Virtualization - Diagram
Sun Partitioning
Software  - Solaris Containers (single OS image)
Firmware  - Logical Domains (CoolThreads servers only)
Hardware  - Dynamic Domains (high-end and midrange servers only)
Solaris Resource Management
Solaris Resource Management can control the
CPU
shares, operating parameters, and other aspects of
each process, thereby allowing many applications
to coexist in the same operating system
environment
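For illustration, resource controls can be inspected and adjusted on a live system with prctl(1); the zone.cpu-shares value below is only an assumed example:
# prctl -n zone.cpu-shares -i zone global
# prctl -n zone.cpu-shares -r -v 20 -i zone global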
Solaris Containers
• Solaris Containers can create multiple virtualized environments (Solaris Zones) within one Solaris kernel structure, keeping the memory footprint low
• Solaris Containers can be used with Solaris Resource Management to provide flexibility in fine-grained resource controls (good for consolidating dynamically resource-controlled environments within a single kernel or version of Solaris)
• Only one OS instance - fewer instances to administer
• Best efficiency
• Built into the O/S, no hypervisor
• Physical storage management done in the global zone
• Minimal overhead (typically < 1-2 %)
• Unified TCP/IP stack for all zones
• Up to 8191 non-global zones are supported within a single OS image
Sun Fire Dynamic System Domains
Sun Fire Dynamic System Domains can create
electrically isolated domains on high-end Sun Fire systems,
while offering the maximum security isolation and
availability
in a single chassis and combining many redundant
hardware features for high availability
Dynamic System Domains are great for consolidating a
smaller number of mission critical services with security and
availability
Logical Domains (LDom)
• Logical Domains fit somewhere between Containers and Dynamic System Domains
• Logical Domains provide isolation between the various domains (achieved through a firmware layer), reducing the required hardware infrastructure
• They are available on CoolThreads servers only (Sun Fire T1000 and T2000)
• Each guest domain can be created, destroyed, reconfigured, and rebooted independently
• Virtual console, Ethernet, disk, and cryptographic acceleration
• Live dynamic reconfiguration of virtual CPUs
• Fault Management Architecture (FMA) diagnosis for each logical domain
Solaris Containers – File System Models
• Before creating any zones on a Solaris 10 server, all processes run in the global zone
• After a zone is created, it has processes which are associated with it
• Any zone which is not the global zone is called a non-global zone. Some call non-global zones "zones". Others call them "local zones" (this is not recommended)
• The default zone file system model is called sparse-root. It trades some configuration flexibility for efficiency. Sparse-root zones optimize physical memory and disk space usage by sharing directories like /usr, /sbin, /platform, and /lib. Sparse-root zones have their own private file areas for directories like /etc and /var
• Whole-root zones increase configuration flexibility but increase resource usage. They do not use shared file systems for /usr, /lib, and a few others
Zone types
• global zone (~ 4 GB): the full Solaris installation on its own root device (for example /dev/dsk/c0t0d0s0), including /, /etc, /opt, /var, /dev, /devices and /kernel
• sparse root zone (~ 100 MB): private /etc, /opt and /var; /lib, /platform, /sbin and /usr are inherited read-only from the global zone via inherit-pkg-dir (lofs mounts)
• whole root zone (~ 4 GB): private copies of all file systems (/etc, /opt, /var, /lib, /platform, /sbin, /usr); no inherit-pkg-dir entries
Solaris Containers – States
Solaris Containers – Storage
• Direct Attached Storage (simple, but limits flexibility when moving a container or its workload to a different server)
• Network Attached Storage (NAS). Currently the root directory of a container cannot be stored on NAS. However, NAS can be used to centralize zone and application storage
• Storage Area Networks (SAN)
Solaris Containers – File Systems
Storage can be assigned to containers through
several methods:
• Loopback File System (LOFS) - see the sketch below
• Zettabyte File System (ZFS)
• Unix File System (UFS)
• Direct device
• Network File System (NFS)
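As a sketch, a directory from the global zone could be loaned to a zone via LOFS like this (the paths shown are illustrative assumptions only):
host-global# zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set type=lofs
zonecfg:zone1:fs> set special=/export/shared
zonecfg:zone1:fs> set dir=/shared
zonecfg:zone1:fs> end
zonecfg:zone1> exit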
Solaris Containers – Firewalls and
Networking
• Currently, IP filters cannot be used to filter traffic passing between zones, since that traffic remains inside the system and never reaches firewalls and filters
• IP Multi-Pathing (IPMP) and Sun Trunking can be used to improve network bandwidth and availability
• VLANs are available through IPMP too
Solaris Containers – Subnet Masks
• A zone’s network interfaces are configured by the global zone – hence, netmask information must be stored in the global zone
• If non-default subnet masks are used for non-global zones, ensure that the mask information is stored in the global zone’s /etc/netmasks file (example below)
• The subnet mask may also be specified via the zonecfg(1) command using CIDR notation (10.99.64.0/28)
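For example, the 10.99.64.0/28 network above would need an entry along these lines in the global zone's /etc/netmasks:
10.99.64.0      255.255.255.240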
Solaris Containers – Product Registry
Database
• Very important warning: ensure packages are registered correctly in the global zone:
# zoneadm -z zone1 install
Preparing to install zone <zone1>.
ERROR: the product registry database </> does not exist
ERROR: cannot determine the product registry implementation used by the global zone at </>
ERROR: cannot create zone boot environment <zone1>
zoneadm: zone 'zone1': '/usr/lib/lu/lucreatezone' failed with exit code 74
• Solution:
# pkgchk -l
Software contents file initialized
Solaris Containers – Dynamic Resource
Pools
Processor sets, Pools and Projects
• DRP are collections of resources reserved for
exclusive use by an application or set of
applications
• Processor set is a grouping of processors. One or
more workloads can be assigned to a processor
set via DRP
• Commands: pooladm(1), poolcfg(1), poolstat(1),
poolbind(1), psrset(1), prctl(1), and others:
# prctl -n zone.max-swap -v 1g -t privileged -r -e deny -i zone zfszone1
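A minimal resource pool sketch, assuming a two-CPU processor set; the pool and pset names are chosen only for illustration:
# pooladm -e
# pooladm -s
# poolcfg -c 'create pset web-pset (uint pset.min = 2; uint pset.max = 2)'
# poolcfg -c 'create pool web-pool (string pool.scheduler = "FSS")'
# poolcfg -c 'associate pool web-pool (pset web-pset)'
# pooladm -c
A zone can then be bound to the pool with "set pool=web-pool" in zonecfg(1M)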
Solaris Containers – Resource Capping
Daemon
Major improvement in Solaris 10 Update 4
• The cap can be modified while the container is running:
# rcapadm -E
# rcapadm -z loczone3 -m 300m
• Because the cap does not reserve RAM, one can over-subscribe RAM usage. The drawback is the possibility of paging
• The cap can be defined when the container is set up, via the zonecfg(1) command and the “add capped-memory” option (see the sketch below)
• Virtual memory (swap) can also be capped
• A third new memory cap is “locked memory” (memory prevented from being paged out)
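A sketch of the zonecfg-based memory caps; the values and the loczone3 zone name are illustrative only:
# zonecfg -z loczone3
zonecfg:loczone3> add capped-memory
zonecfg:loczone3:capped-memory> set physical=300m
zonecfg:loczone3:capped-memory> set swap=512m
zonecfg:loczone3:capped-memory> set locked=64m
zonecfg:loczone3:capped-memory> end
zonecfg:loczone3> exit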
Solaris Containers – Fair Share
Scheduler
A commonly used method to prevent "CPU abuse" is to
assign a number of “CPU shares” to each container. The
relative number of shares assigned per zone guarantees
a relative minimum amount of CPU power. This is less
wasteful than dedicating CPUs to a container that will not
completely utilize them
• In Solaris 10 Update 4 only two steps are needed:
a) The system must use FSS as the default scheduler
(command tells the system to use FSS as the default
scheduler the next time it boots):
# dispadmin -d FSS
b) The container must be assigned some shares:
# zonecfg -z myzonex
zonecfg:myzonex> set cpu-shares=100
zonecfg:myzonex> exit
Solaris Containers – O/S Patching
Methods
• When only sparse-root zones are used (default) and zones provide one or a few types of services, install all packages and patches in all zones
• In a system where zones provide many different types of services (Web development, database testing, proxy services), install packages and patches directly into the zones which need them
Solaris Containers – patchadd(1)
patchadd(1) has the “-G” option:
Add patch(es) to packages in the current zone only. When
used in the global zone, the patch is added to packages
in the global zone only and is not propagated to packages
in any existing or yet-to-be-created non-global zone
When used in a non-global zone, the patch is added to
packages in the non-global zone only
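For example (the patch directory and patch ID below are only placeholders):
# patchadd -G /var/spool/patch/123456-01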
Solaris Containers – O/S Patching
Methods (continued)
• Solaris 10 Update 4 adds the ability to use Live Upgrade tools on a system with containers. It is possible to apply an update to a zoned system and drastically reduce the downtime necessary to apply patches
• Live Upgrade can create an Alternate Boot Environment (ABE). The ABE can be patched while the Original Boot Environment (OBE) is still running its containers. After the patches have been applied, the system can be rebooted into the ABE. Downtime is limited to the time it takes to reboot the system (see the sketch below)
• Additional benefit: if there is a problem with the patch, instead of backing it out, the system can be rebooted into the OBE while the problem is investigated
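A minimal Live Upgrade patching sketch; the ABE name, patch directory and patch ID are illustrative assumptions:
host-global# lucreate -n s10u4-patched
host-global# luupgrade -t -n s10u4-patched -s /var/tmp/patches 127111-05
host-global# luactivate s10u4-patched
host-global# init 6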
Solaris Containers – Flash Archives
• All zones must be stopped when the flash archive is made from the global zone
• If the source and target systems use different hardware configurations, device assignments must be changed after the flash archive is installed
• Soft partitions in SVM cannot be flash archived yet
Solaris Containers and ZFS – Third-Party
Backups
• EMC Networker 7.3.2 backs up and restores ZFS file systems, including ZFS ACLs
• Veritas will provide full ZFS support in NetBackup 6.5, which is scheduled for release in the second half of 2007. Current versions of NetBackup can back up and restore ZFS file systems, but ZFS ACLs are not preserved
• IBM Tivoli Storage Manager client software (5.4.1.2) backs up and restores ZFS file systems with both the CLI and the GUI. ZFS ACLs are also preserved
• Computer Associates' BrightStor ARCserve product backs up and restores ZFS file systems, but ZFS ACLs are not preserved
Solaris Containers, ZFS and Data
Protector
• Backup Agents (Disk Agents):
  Data Protector 6.0 - ZFS support on Solaris 10 (including ACL support), planned CY Q4’07
  Data Protector 5.5 - ZFS support on Solaris 10 (excluding ACL support), planned CY Q4’07
• Non-global zone/container support is still not in the planning for Data Protector
Solaris Containers – zoneadm(1)
• zoneadm(1M) administers zones and has many options
• As part of a security policy, prevent someone from running a DoS attack in a non-global zone. To do so, we typically add the following to a zone's configuration, using zonecfg(1M):
add rctl
set name=zone.max-lwps
add value (priv=privileged,limit=1000,action=deny)
end
Solaris Containers – Root File System
• Create a sparse-root Container:
host-global# zonecfg -z zone1
zonecfg:zone1> create
zonecfg:zone1> set zonepath=/zones/roots/zone1
zonecfg:zone1> set autoboot=false
zonecfg:zone1> add inherit-pkg-dir
zonecfg:zone1:inherit-pkg-dir> set dir=/opt
zonecfg:zone1:inherit-pkg-dir> end
zonecfg:zone1> add rctl
zonecfg:zone1:rctl> set name=zone.cpu-shares
zonecfg:zone1:rctl> add value (priv=privileged,limit=40,action=none)
zonecfg:zone1:rctl> end
zonecfg:zone1> add net
zonecfg:zone1:net> set physical=qfe0
zonecfg:zone1:net> set address=192.168.1.200
zonecfg:zone1:net> end
zonecfg:zone1> exit
Solaris Containers – Root File System
(continued)
• Install the Container:
host-global# zoneadm -z zone1 install
• Create the sysidcfg file:
host-global# cat /zones/roots/zone1/root/etc/sysidcfg
system_locale=en_AU.ISO8859-1
terminal=xterm
network_interface=primary { hostname=zone1 }
timeserver=192.168.1.73
security_policy=NONE
name_service=DNS {domain_name=mydom.com
name_server=10.99.66.44,192.168.1.10}
timezone=Australia/NSW
root_password=Mxpy/32z032
Solaris Containers – Root File System
(continued)
• Create this file (if using JumpStart):
host-global# touch /zones/roots/zone1/root/etc/.NFS4inst_state.domain
• Boot the Container:
host-global# zoneadm -z zone1 boot
• To log into the console, use zlogin(1):
host-global# zlogin -C zone1
Traditional File Systems and ZFS
Global vs Non-global Zone (ZFS)
Solaris Containers – ZFS
• Use the zpool(1) command:
host-global# zpool create zoneroot c0t0d0 c1t0d1
• Create a new ZFS file system:
host-global# zfs create zoneroot/zfszone1
host-global# chmod 700 /zoneroot/zfszone1
Solaris Containers – ZFS (continued)
• Set the quota on the file system:
host-global# zfs set quota=1024m zoneroot/zfszone1
• Create a sparse-root zone:
host-global# zonecfg -z zfszone1
zonecfg:zfszone1> create
zonecfg:zfszone1> set zonepath=/zoneroot/zfszone1
zonecfg:zfszone1> add net
zonecfg:zfszone1:net> set physical=hme2
zonecfg:zfszone1:net> set address=192.168.7.40
zonecfg:zfszone1:net> end
zonecfg:zfszone1> exit
Solaris Containers – ZFS (continued)
host-global# zoneadm -z zfszone1 install
host-global# cat /zoneroot/zfszone1/root/etc/sysidcfg
system_locale=C
terminal=dtterm
network_interface=primary { hostname=zfszone1 }
timeserver=localhost
security_policy=NONE
name_service=NONE
timezone=US/Eastern
root_password=""
host-global# zoneadm -z zfszone1 boot
host-global# zlogin -C zfszone1
Solaris Containers – UFS with SVM
The example assumes the following disk
layout:
c1t2d0s0   20MB   Metadata DB
c1t2d0s3   5GB    Data partition
c2t4d0s0   20MB   Metadata DB
c2t4d0s3   5GB    Data partition
Solaris Containers – UFS with SVM
(continued)
• Create the SVM state database and its replicas:
host-global# metadb -a -c 2 -f c1t2d0s0 c2t4d0s0
• Create two metadisks (virtual devices):
host-global# metainit d11 1 1 c1t2d0s3
host-global# metainit d12 1 1 c2t4d0s3
Solaris Containers – UFS with SVM
(continued)
• Create the first part of the mirror:
host-global# metainit d10 -m d11
• Add the second metadisk to the mirror:
host-global# metattach d10 d12
Solaris Containers – UFS with SVM
(continued)
• Create a new soft partition. A "soft partition" is an SVM feature which allows the creation of multiple virtual partitions in one metadisk (requires the "-p" option to metainit(1)):
host-global# metainit d100 -p d10 524M
• Create the new UFS file system:
host-global# mkdir -p /zones/roots/ufszone1
host-global# newfs /dev/md/dsk/d100
host-global# mount /dev/md/dsk/d100 /zones/roots/ufszone1
host-global# chmod 700 /zones/roots/ufszone1
Solaris Containers – UFS with SVM
(continued)
• Create a sparse-root zone:
host-global# zonecfg -z ufszone1
zonecfg:ufszone1> create
zonecfg:ufszone1> set zonepath=/zones/roots/ufszone1
zonecfg:ufszone1> add net
zonecfg:ufszone1:net> set physical=ipge1
zonecfg:ufszone1:net> set address=10.99.64.12/28
zonecfg:ufszone1:net> end
zonecfg:ufszone1> exit
host-global# zoneadm -z ufszone1 install
Solaris Containers – UFS with SVM
(continued)
host-global# cat /zones/roots/ufszone1/root/etc/sysidcfg
system_locale=C
terminal=vt100
network_interface=primary { hostname=ufszone1 }
timeserver=localhost
security_policy=NONE
name_service=NONE
timezone=Europe/Berlin
root_password=""
Solaris Containers – UFS with SVM
(continued)
host-global# zoneadm -z ufszone1 boot
host-global# zlogin -C ufszone1
Containers – Hostname Caveat
• There does not seem to be any special requirement for zone naming
• One caveat though: zones should use a naming convention that enables them to be seen individually with ps(1) commands (for example, "ps -elfyZ")
• To do this, one would want the first eight characters of each zone name to be unique
Containers – Hostname Caveat
(continued)
# zoneadm list -v
  ID NAME         STATUS   PATH
   0 global       running  /
   1 longhost-z1  running  /zones/longhost-z1
   2 longhost-z2  running  /zones/longhost-z2
   3 longhost-z3  running  /zones/longhost-z3
   4 longhost-z4  running  /zones/longhost-z4
Containers – Hostname Caveat
(continued)
When a less experienced Unix admin runs a command to check which
processes run in each zone, they get the following type of results...
The following example is for daemon inetd, which runs in each zone:
# ps -efZ | grep inetd
global    root   256     1  0   Mar 13 ?  0:39 /usr/lib/inet/inetd start
longhost  root   792     1  0   Mar 13 ?  0:39 /usr/lib/inet/inetd start
longhost  root  1129     1  0   Mar 13 ?  0:38 /usr/lib/inet/inetd start
longhost  root  1144     1  0   Mar 13 ?  0:39 /usr/lib/inet/inetd start
longhost  root  1394     1  0   Mar 13 ?  0:39 /usr/lib/inet/inetd start
Containers – Username Caveat
• The namespace is consistent across all zones. The UID which gets assigned with the "-u" option is visible in the global zone. These users are not visible in peer non-global zones
• Define unique UIDs across all zones (including the global zone):
# useradd -u 501 -g mygid -d /export/home myuser1
# useradd -u 502 -g mygid -d /export/home myuser2
# useradd -u 503 -g mygid -d /export/home myuser3
• These processes can now be viewed in the global zone:
# prstat -Z
Using Project in Containers
• Instead of defining global parameters in /etc/system, use project-based resource management. For example, to allocate 4 GB of shared memory to user oracle:
# projadd -U oracle -K "project.max-shm-memory=(priv,4096MB,deny)" user.oracle
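The new project can then be verified (prctl reports the control once a process is running in the project):
# projects -l user.oracle
# prctl -n project.max-shm-memory -i project user.oracle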
Migrating Container – Root File System
Method
hostA-global# zlogin zone1 shutdown -y -i 0
hostA-global# zoneadm -z zone1 detach
hostA-global# cd /zones/roots/zone1
hostA-global# pax -w@f /tmp/zone1.pax -p e *
hostA-global# scp /tmp/zone1.pax root@hostB:/tmp/zone1.pax
hostB-global# mkdir -m 700 -p /zones/roots/zone2
hostB-global# cd /zones/roots/zone2
hostB-global# pax -r@f /tmp/zone1.pax -p e
hostB-global# zonecfg -z zone2
zonecfg:zone2> create -a /zones/roots/zone2
zonecfg:zone2> exit
hostB-global# zoneadm -z zone2 attach
hostB-global# zoneadm -z zone2 boot
Migrating Container – ZFS Method
hostA-global# zlogin zfszone1 shutdown -y -i 0
hostA-global# zoneadm -z zfszone1 detach
hostA-global# zfs snapshot zoneroot/zfszone1@Snap1
hostA-global# zfs send zoneroot/zfszone1@Snap1 > /tmp/zfszone1.Bck1
hostA-global# scp /tmp/zfszone1.Bck1 root@hostB:/tmp/zfszone1.Bck1
hostB-global# zpool create zoneroot c1t0d0 c1t0d1
hostB-global# zfs receive zoneroot/zfszone2 < /tmp/zfszone1.Bck1
hostB-global# zonecfg -z zfszone2
zonecfg:zfszone2> create -a /zoneroot/zfszone2
zonecfg:zfszone2> exit
hostB-global# zoneadm -z zfszone2 attach
hostB-global# zoneadm -z zfszone2 boot
Migrating Container – UFS Method
hostA-global# zlogin ufszone1 shutdown -y -i 0
hostA-global# zoneadm -z ufszone1 detach
hostA-global# cd /zones/roots/ufszone1
hostA-global# pax -w@f /tmp/ufszone1.pax -p e *
hostA-global# scp /tmp/ufszone1.pax root@hostB:/tmp/ufszone1.pax
hostB-global# cd /zones/roots/ufszone2
hostB-global# pax -r@f /tmp/ufszone1.pax -p e
hostB-global# zonecfg -z ufszone2
zonecfg:ufszone2> create -a /zones/roots/ufszone2
zonecfg:ufszone2> exit
hostB-global# zoneadm -z ufszone2 attach
hostB-global# zoneadm -z ufszone2 boot
Simpler Container Management
http://opensolaris.org/os/project/zonemgr/
zonemgr -a add -n zone11 -t w -z "/zones" -P "mypass" -R /root \
-I "192.16.250.43|ipge1|24|zone1-mgt" \
-I "10.2.17.43|ipge2|24|zone1-app" \
-I "172.16.123.87|ipge3|24|zone1-web" -D "mydomain.dom" \
-d "172.16.123.11,10.2.17.2,192.168.1.32" \
-s "netservices|limited" -s "basic|lock" -S ssh \
-C /etc/ssh/sshd_config -C /etc/resolv.conf \
-C /etc/nsswitch.conf -C /etc/inetd.conf \
-C /etc/motd -C /etc/issue -C /etc/default/login \
-C /etc/default/syslogd
Examples of Container Management
• To log in to the zone with a user name, use:
host-global# zlogin -l user zonename
To log in as user root, use the zlogin command without options
• If a login problem occurs and one cannot use the zlogin(1) command, an alternative is provided: enter the zone by using the zlogin(1) command with the "-S" (safe) option. Only use this mode to recover a damaged zone when other forms of login are failing. In this environment, it might be possible to diagnose why the zone login fails
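For example (zone1 stands in for any zone name; a single command can also be run non-interactively inside the zone):
host-global# zlogin -S zone1
host-global# zlogin zone1 svcs -xv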
Adding ZFS File Systems to a Non-Global Zone
• One can add a ZFS file system as a generic file system when the goal is to share space with the global zone. A ZFS file system that is added to a non-global zone must have its mountpoint property set to legacy
• A ZFS file system is added to a non-global zone by a global administrator in the global zone:
# zonecfg -z zfszone2
zonecfg:zfszone2> add fs
zonecfg:zfszone2:fs> set type=zfs
zonecfg:zfszone2:fs> set special=zoneroot/dusanfs
zonecfg:zfszone2:fs> set dir=/export/myfs
zonecfg:zfszone2:fs> end
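Before booting the zone, the dataset's mountpoint property is set to legacy in the global zone, for example:
host-global# zfs set mountpoint=legacy zoneroot/dusanfs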
Delegating Datasets to a Non-Global
Zone
• If the primary goal is to delegate the administration of storage to a zone, then ZFS supports adding datasets to a non-global zone
• A ZFS file system is delegated to a non-global zone by a global administrator in the global zone:
# zonecfg -z myzone
zonecfg:myzone> add dataset
zonecfg:myzone:dataset> set name=myzpool/zoneroot/myds
zonecfg:myzone:dataset> end
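After the zone is rebooted, the zone administrator can manage the delegated dataset from inside the zone; a minimal sketch using the dataset configured above:
myzone# zfs create myzpool/zoneroot/myds/data
myzone# zfs set compression=on myzpool/zoneroot/myds/data
myzone# zfs list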
Adding ZFS Volumes to a Non-Global
Zone
• ZFS volumes cannot be added to a non-global zone by using the zonecfg command's add dataset subcommand. If one adds a ZFS volume that way, the zone cannot boot. Volumes can, however, be added to a zone as devices:
# zonecfg -z zfszone4
zonecfg:zfszone4> add device
zonecfg:zfszone4:device> set match=/dev/zvol/dsk/zoneroot/specfs
zonecfg:zfszone4:device> end
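The volume itself is created beforehand in the global zone; a sketch (the 10g size is an assumption for illustration):
host-global# zfs create -V 10g zoneroot/specfs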
ZFS and Containers – Current
Status
• Solaris 10 8/07 (Update 4) supports the use of ZFS file systems
• It is possible to upgrade the Solaris O/S when non-global zones are installed, without most of the limitations found in releases prior to Solaris 10 8/07 (the only remaining limitation concerns Solaris Flash archives)
• Do not use "flar create" to create a Solaris Flash archive in these instances:
  in any non-global zone,
  in the global zone if there are any non-global zones installed
  There is no workaround yet (Bug ID 6246943)
• "zpool history" (ZFS automatically logs successful zfs(1) and zpool(1) commands)
• "zpool status -v" (displays a list of files with persistent errors)
ZFS and Containers – Current
Status (continued)
• ZFS can create block devices too. They are called zvols. Since Nevada build 54, they are fully integrated into the Solaris iSCSI infrastructure
• ZFS snapshots - Chris Gerhard once had over 60,000 snapshots (he was snapshotting file systems by the minute). Since snapshots in ZFS only take up the space that actually changes between snapshots, there is no reason not to take snapshots all the time
Containers – Current Status
• Starting with Solaris Express SNV-49 and Solaris 10 Update 4 there is a new feature called Branded Zones (BrandZ). It enables the user to run zones for operating systems other than Solaris. In the first release, there is support for Linux-based O/S through a branded zone type called "lx" (see the sketch below). The original zones are now termed "native" zones
• Tim Foster has written an SMF service that will snapshot ZFS file systems on a regular basis. It is fully automatic, configurable and integrated with SMF
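A minimal lx branded zone sketch; the zone name, zonepath and the location of the Linux image are illustrative assumptions, and the install step requires a supported Linux distribution image:
global# zonecfg -z lxzone
zonecfg:lxzone> create -t SUNWlx
zonecfg:lxzone> set zonepath=/zones/lxzone
zonecfg:lxzone> exit
global# zoneadm -z lxzone install -d /export/images/centos_fs_image.tar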
Containers – Current Status
• The following boot arguments can now be used:
-s                  Boot to the single-user milestone
-m <milestone>      Boot to the specified milestone
-i </path/to/init>  Boot the specified program as 'init' (only useful with branded zones)
• Allowed syntax includes:
global# zoneadm -z myzone1 boot -- -s
global# zoneadm -z zfszone2 reboot -- -i /sbin/myinit
myzone3# reboot -- -m verbose
• These boot arguments can be stored with zonecfg(1), for later boots:
set bootargs="-m verbose"
Containers – Current Status
• Containers can now have exclusive access to one or more network interfaces. No other container, not even the global zone, can send or receive packets on that interface (good for routing, IP Filter, DHCP and other features):
global# zonecfg -z myzone1
zonecfg:myzone1> set ip-type=exclusive
zonecfg:myzone1> add net
zonecfg:myzone1:net> set physical=ipge0
zonecfg:myzone1:net> end
zonecfg:myzone1> exit
ZFS Mountroot (currently X86 only)
• ZFS Mountroot provides the capability of configuring a ZFS root file system. It is not a complete boot solution - it relies on the existence of a small UFS boot environment
• ZFS Mountroot was integrated in Solaris Nevada Build 37 - OpenSolaris release (disabled by default)
• ZFS Mountroot does not work on SPARC currently (the ZFS Boot Project plans to have it in Solaris 10 Update 5)
NetApp vs Sun in Open Source
Case
6th of September 2007:
Network Appliance (NetApp) has filed a lawsuit in District Court in Lufkin, Texas, today claiming that Sun infringed on seven NetApp patents for the Write Anywhere File Layout (WAFL) and RAID
http://www.computerworlduk.com/technology/stoage/software/news/index.cfm?newsid=5018
Appendix
Container zfszone1 (sparse-root):
host-global# zonecfg -z zfszone1
zfszone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zfszone1> create
zonecfg:zfszone1> set zonepath=/zoneroot/zfszone1
zonecfg:zfszone1> add net
zonecfg:zfszone1:net> set physical=eri0
zonecfg:zfszone1:net> set address=16.112.222.192
zonecfg:zfszone1:net> end
zonecfg:zfszone1> set autoboot=true
zonecfg:zfszone1> verify
zonecfg:zfszone1> commit
zonecfg:zfszone1> exit
Appendix
Container zfszone1 (continued):
host-global# time zoneadm -z zfszone1 install
Preparing to install zone <zfszone1>.
Creating list of files to copy from the global zone.
Copying <2625> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1104> packages on the zone.
Initialized <1104> packages on zone.
Zone <zfszone1> is initialized.
Installation of <2> packages was skipped.
The file </zoneroot/zfszone1/root/var/sadm/system/logs/install_log> contains a log of the
zone installation.
real 12m6.405s
user 3m57.545s
sys 5m27.698s
Appendix
Container zfszone1 (continued):
host-global# cat /zoneroot/zfszone1/root/etc/sysidcfg
system_locale=C
terminal=vt100
network_interface=primary { hostname=zfszone1 }
timeserver=localhost
security_policy=NONE
name_service=NONE
timezone=US/Mountain
root_password=EluHgsjkioMF2
Appendix
Container zfszone2 (whole-root):
host-global# zonecfg -z zfszone2
zonecfg:zfszone2> create -b
zonecfg:zfszone2> set zonepath=/zoneroot/zfszone2
zonecfg:zfszone2> set autoboot=true
zonecfg:zfszone2> add fs
zonecfg:zfszone2:fs> set dir=/export/myfs
zonecfg:zfszone2:fs> set special=zoneroot/dusanfs
zonecfg:zfszone2:fs> set type=zfs
zonecfg:zfszone2:fs> end
zonecfg:zfszone2> add net
zonecfg:zfszone2:net> set address=16.112.222.193
zonecfg:zfszone2:net> set physical=eri0
zonecfg:zfszone2:net> end
zonecfg:zfszone2> add rctl
zonecfg:zfszone2:rctl> set name=zone.max-lwps
zonecfg:zfszone2:rctl> add value (priv=privileged,limit=1000,action=deny)
zonecfg:zfszone2:rctl> end
zonecfg:zfszone2> add attr
zonecfg:zfszone2:attr> set name=comment
zonecfg:zfszone2:attr> set type=string
zonecfg:zfszone2:attr> set value="Zone zfszone2 by Dusan Baljevic"
zonecfg:zfszone2:attr> end
zonecfg:zfszone2> exit
Appendix
Container zfszone2 (continued):
host-global# time zoneadm -z zfszone2 install
Preparing to install zone <zfszone2>.
Creating list of files to copy from the global zone.
Copying <127441> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1104> packages on the zone.
Initialized <1104> packages on zone.
Zone <zfszone2> is initialized.
Installation of <2> packages was skipped.
The file </zoneroot/zfszone2/root/var/sadm/system/logs/install_log> contains a log of the
zone installation.
real 36m59.150s
user 5m42.744s
sys 10m22.442s
Appendix
Container zfszone2 (continued):
host-global# cat /zoneroot/zfszone2/root/etc/sysidcfg
system_locale=C
system_locale=en_US.ISO8859-15
terminal=xterm
network_interface=primary { hostname=zfszone2 }
timeserver=localhost
security_policy=NONE
name_service=NONE
timezone=Europe/Berlin
root_password=WPmEHkx4Nk/hc
Appendix
Container loczone3 (sparse-root):
host-global# zonecfg -z loczone3
loczone3: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:loczone3> create
zonecfg:loczone3> set zonepath=/zones/loczone3
zonecfg:loczone3> set autoboot=true
zonecfg:loczone3> add net
zonecfg:loczone3:net> set physical=eri0
zonecfg:loczone3:net> set address=192.168.222.223
zonecfg:loczone3:net> end
zonecfg:loczone3> add inherit-pkg-dir
zonecfg:loczone3:inherit-pkg-dir> set dir=/opt
zonecfg:loczone3:inherit-pkg-dir> end
zonecfg:loczone3> verify
zonecfg:loczone3> commit
zonecfg:loczone3> exit
Appendix
Container loczone3 (sparse-root):
host-global# zoneadm -z loczone3 boot
zoneadm: zone 'loczone3': WARNING: eri0:3: no
matching subnet found in netmasks(4) for
192.168.222.223; using default of
255.255.255.0..
Appendix ZFS Pools
host-global# zfs list
NAME                USED   AVAIL  REFER  MOUNTPOINT
myraid1pool          86K   9.78G  24.5K  /myraid1pool
zoneroot            6.81G  12.0G  3.73G  /zoneroot
zoneroot/dusanfs    24.5K  1024M  24.5K  legacy
zoneroot/specfs     22.5K  15.0G  22.5K  -
zoneroot/zfszone1   75.8M  12.0G  75.8M  /zoneroot/zfszone1
zoneroot2           76.0M   324M  75.8M  /zoneroot2
Appendix Zpool Status
host-global# zpool status -v
  pool: myraid1pool
 state: ONLINE
 scrub: none requested
config:

        NAME                         STATE   READ WRITE CKSUM
        myraid1pool                  ONLINE     0     0     0
          mirror                     ONLINE     0     0     0
            c2t50001FE15004134Cd3s5  ONLINE     0     0     0
            c2t50001FE15004134Cd2s5  ONLINE     0     0     0

errors: No known data errors
Appendix Zpool Status (continued)
  pool: zoneroot
 state: ONLINE
 scrub: none requested
config:

        NAME                         STATE   READ WRITE CKSUM
        zoneroot                     ONLINE     0     0     0
          c1t50001FE15004134Dd1s4    ONLINE     0     0     0
          c2t50001FE15004134Cd1s6    ONLINE     0     0     0

errors: No known data errors
Appendix Zpool Status (continued)
  pool: zoneroot2
 state: ONLINE
 scrub: none requested
config:

        NAME                         STATE   READ WRITE CKSUM
        zoneroot2                    ONLINE     0     0     0
          raidz1                     ONLINE     0     0     0
            c1t50001FE15004134Dd1s5  ONLINE     0     0     0
            c2t50001FE15004134Ed1s7  ONLINE     0     0     0

errors: No known data errors
Appendix ZFS Pool Properties
host-global# zfs get all myraid1pool
NAME         PROPERTY     VALUE                  SOURCE
myraid1pool  type         filesystem             -
myraid1pool  creation     Thu Sep 20  1:45 2007  -
myraid1pool  used         86K                    -
…
myraid1pool  mountpoint   /myraid1pool           default
myraid1pool  sharenfs     off                    default
myraid1pool  checksum     on                     default
myraid1pool  compression  off                    default
…
myraid1pool  shareiscsi   off                    default
myraid1pool  xattr        on                     default
Appendix Zone Listing
host-global# zoneadm list -cv
  ID NAME      STATUS   PATH                 BRAND   IP
   0 global    running  /                    native  shared
   4 zfszone1  running  /zoneroot/zfszone1   native  shared
   7 zfszone2  running  /zoneroot/zfszone2   native  shared
   9 loczone3  running  /zones/loczone3      native  shared
  11 zfszone4  running  /zoneroot2/zfszone4  native  shared
Appendix Zone Network Status
host-global# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
zone zfszone1
inet 127.0.0.1 netmask ff000000
lo0:2: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
zone zfszone2
inet 127.0.0.1 netmask ff000000
lo0:3: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
zone loczone3
inet 127.0.0.1 netmask ff000000
lo0:4: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
zone zfszone4
inet 127.0.0.1 netmask ff000000
eri0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 16.112.222.71 netmask fffffc00 broadcast 16.112.223.255
ether 0:3:ba:16:dc:a1
eri0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
zone zfszone1
inet 16.112.222.192 netmask fffffc00 broadcast 16.112.223.255
eri0:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
zone zfszone2
inet 16.112.222.193 netmask fffffc00 broadcast 16.112.223.255
eri0:3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
zone loczone3
inet 192.168.222.223 netmask ffffff00 broadcast 192.168.222.255
eri0:4: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
zone zfszone4
inet 192.168.222.249 netmask ffffff00 broadcast 192.168.222.255
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
eri0: flags=2004841<UP,RUNNING,MULTICAST,DHCP,IPv6> mtu 1500 index 2
inet6 fe80::203:baff:fe16:dca1/10
ether 0:3:ba:16:dc:a1
Thank You!
© 2007 Dusan Baljevic
The information contained herein is subject to change without notice