
EMC Power Path – Solaris

• Multipathing is a fault-tolerance and performance-enhancement technique in which there is more than one physical path between a computer and its storage devices, through the buses, controllers, and switches. The product released by EMC for this purpose is EMC PowerPath.

To use this software it must first be installed; it can be downloaded from the EMC Powerlink website. Once it is installed and configured, the following commands are useful for administration.

• When new LUNs are added, to check the newly added LUNs:

#/etc/powermt display

#/etc/powermt display dev=all

If the new LUNs are not recognized, then run:

#devfsadm

This brings the new LUNs under OS control.

To apply the configuration changes:

#/etc/powermt config

To save the changes:

#/etc/powermt save

To see all the devices and the logical device IDs of the disks:

#/etc/powermt display dev=all | more

To remove failed devices and old device entries:

#/etc/powermt check

This shows the failed devices and asks whether to delete each one. For example:

Warning: xxxxxxx device path c25t7d6 is currently dead.

Do you want to remove it (y/n/a/q)? y
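The check-and-remove workflow above can also be scripted. Since powermt itself requires PowerPath to be installed, the sketch below parses a captured sample of `powermt display dev=all` output instead; the sample lines and field positions are illustrative assumptions, not the official output format.

```shell
#!/bin/sh
# Sketch: flag dead paths in saved `powermt display dev=all` output.
# The sample data and its column layout are assumptions for illustration.
cat > /tmp/powermt_sample.txt <<'EOF'
3072 pci@1f/fibre-channel@1 c25t7d5s0 SP_A0 active alive 0 0
3073 pci@1f/fibre-channel@1 c25t7d6s0 SP_A1 active dead  0 1
3074 pci@1f/fibre-channel@2 c26t7d6s0 SP_B1 active alive 0 0
EOF

# Print the c#t#d# path name (field 3) of every path reported dead
awk '$6 == "dead" { print $3 }' /tmp/powermt_sample.txt
```

On a live system you would pipe `powermt display dev=all` straight into the awk filter, then clean up the dead entries with `powermt check`.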

11) What happens if vxconfigd is disabled?

Ans:- vxconfigd is the Veritas Volume Manager configuration daemon. It maintains disk configurations and disk groups in Veritas Volume Manager. Whenever vxconfigd is disabled, it stops taking requests from other Veritas Volume Manager utilities for configuration changes, and it also stops updating the kernel and the configuration information stored on disk. So whenever it is disabled, we cannot make configuration changes in Veritas Volume Manager.

Q) What is HA?

ANSWER) HA (High Availability) is a technology to achieve failover with very low latency. It is a practical requirement of data centers these days, when customers expect servers to be running 24 hours a day, 7 days a week, 365 days a year, usually referred to as 24x7x365. To achieve this, a redundant infrastructure is created so that if one database server or one app server fails, a replica database or app server is ready to take over operations. The end customer never experiences an outage when there is an HA network infrastructure.


Q) What is an Array?

A) An array is a group of independent physical disks used to configure volumes or RAID.



Q) What is the highest and the lowest priority SCSI ID?

A) There are 16 different IDs which can be assigned to SCSI devices; in priority order they are 7, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8.

The highest-priority SCSI ID is 7 and the lowest is 8.
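The ordering can be demonstrated with a small sketch that ranks an ID by walking the arbitration order given above (rank 0 = highest priority):

```shell
#!/bin/sh
# Rank a SCSI ID by arbitration priority: IDs 7..0 outrank 15..8,
# because the low-byte data lines carry higher arbitration weight.
rank() {
  i=0
  for id in 7 6 5 4 3 2 1 0 15 14 13 12 11 10 9 8; do
    if [ "$id" -eq "$1" ]; then echo "$i"; return; fi
    i=$((i + 1))
  done
}

rank 7   # prints 0  (highest priority)
rank 8   # prints 15 (lowest priority)
```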

Q) How to find the WWN (World Wide Name) in Solaris?

A) #fcinfo hba-port | grep WWN

To see the model and firmware details:

#fcinfo hba-port

Note: World Wide Names (WWNs) are unique 8-byte identifiers in Fibre Channel, similar to the MAC addresses on a network interface card (NIC).

• World Wide Port Name (WWPN) – a WWN assigned to a port on a fabric

• World Wide Node Name (WWNN) – a WWN assigned to a node/device on a Fibre Channel fabric
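To pull just the WWN values out of the fcinfo output, a small filter works. The sketch below runs against a captured sample, since fcinfo needs real FC hardware; the labels ("HBA Port WWN", "Node WWN") match typical Solaris 10 output but should be treated as an assumption on other releases.

```shell
#!/bin/sh
# Sketch: extract port and node WWNs from saved `fcinfo hba-port` output.
# The sample below is illustrative, not guaranteed to match every release.
cat > /tmp/fcinfo_sample.txt <<'EOF'
HBA Port WWN: 21000003ba9b1234
        OS Device Name: /dev/cfg/c1
        Manufacturer: QLogic Corp.
        Model: 375-3108
        Firmware Version: 3.3.117
        Node WWN: 20000003ba9b1234
EOF

# Port WWN (what you zone on the fabric switch)
awk '/HBA Port WWN:/ { print $4 }' /tmp/fcinfo_sample.txt
# Node WWN (identifies the HBA node itself)
awk '/Node WWN:/ { print $3 }' /tmp/fcinfo_sample.txt
```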


Q) Which one is the Default ID for SCSI HBA?

A) Generally the default ID for a SCSI HBA is 7.
SCSI – Small Computer System Interface
HBA – Host Bus Adapter

Q)How is a SAN managed?

A) There are many management software packages used for managing SANs; to name a few:
- SANtricity
- IBM Tivoli Storage Manager
- CA Unicenter
- Veritas Volume Manager

Q) Can you briefly explain each of these Storage area components?

A) Fabric Switch: a device which interconnects multiple network devices. There are switches ranging from 16 to 32 ports, connecting 16 or 32 machine nodes, and so on. Vendors who manufacture this kind of switch include Brocade and McData.

Q) What does a typical storage area network consist of, if we consider implementing it in a small business setup?

ANS) For any small business, the following are the essential components of a SAN:
- Fabric switch
- FC controllers
- JBODs

Q) What is a HBA?

A) Host bus adapters (HBAs) are needed to connect the server (host) to the storage.

Q) What are the advantages of SAN?

A) Massively extended scalability

Greatly enhanced device connectivity
Storage consolidation
LAN-free backup
Server-less (active-fabric) backup
Server clustering
Heterogeneous data sharing
Disaster recovery (remote mirroring)

When answering, people often do not state clearly what each of these means, which advantages each one offers, which are cost-effective, and which fit the client's requirements.

Q) What is the difference b/w SAN and NAS?

A) The basic difference is that SAN is fabric-based and NAS is Ethernet-based.
SAN – Storage Area Network

It accesses data at block level and presents space to the host in the form of a disk.
NAS – Network Attached Storage

It accesses data at file level and presents space to the host in the form of a shared network folder.

Q) In which two ways can the syntax of the VCS configuration file (main.cf) be verified?

Answer) 1) Manually, with the hacf utility:

#hacf -verify /etc/VRTSvcs/conf/config

2) Automatically, at VCS startup.

Jeopardy (VCS)
Q) There are three heartbeat connections, two private and one low-priority, configured and operational in a VCS cluster.

What happens if both of the private heartbeat connections are unplugged?

Answer) The cluster enters Jeopardy state.

Display Locked User Accounts

Q) I have 200 user accounts. How can I get the list of locked user accounts?

A) Use the following command (the asterisks must be escaped, otherwise grep treats them as pattern metacharacters):

grep '\*LK\*' /etc/shadow
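A more precise version matches *LK* only in the password field, so a stray match elsewhere on the line cannot produce a false positive. The sketch below runs against a sample shadow-format file (the account names and hashes are made up); on a real system, point it at /etc/shadow as root.

```shell
#!/bin/sh
# Sketch: list locked accounts from a shadow-format file.
# Solaris marks a locked account with *LK* in the password field.
SHADOW=/tmp/shadow_sample
cat > "$SHADOW" <<'EOF'
root:ab8xKq9szLp2w:6445::::::
alice:*LK*:6445::::::
bob:cd9yLr0taMq3x:6445::::::
carol:*LK*:6445::::::
EOF

# Match *LK* only in the second (password) field, print the username
awk -F: '$2 ~ /\*LK\*/ { print $1 }' "$SHADOW"
```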

Q) What is the SCSI target ID of the gauss disk?

Ans: 5

Q: What is a zone?

A: A zone is a virtual operating system abstraction that provides a protected environment

in which applications run. The applications are protected from each other to provide
software fault isolation. To ease the labor of managing multiple applications and their
environments, they co-exist within one operating system instance, and are usually managed
as one entity.


Q: What is a container?

A: A zone which also uses the operating system's resource management facility is called a container. Many people use the two words "zone" and "container" interchangeably.


Q: What types of zones are available?

A: It is possible to create non-global zones that run the same OS as the global zone, which is
the OS running on the system. It is also possible to create a non-global zone that runs a
different operating environment from the global zone. The branded zone (BrandZ)
framework extends the Solaris Zones infrastructure to include the creation of brands that
contain alternative sets of runtime behaviors. The following types of non-global zones are available:

• native:
The default SX CE and Solaris 10 non-global zone is the native zone. It has the same
characteristics as the Solaris 10 Operating System or SX release that is running in
the global zone.
If you have configured your system with Solaris Trusted Extensions, each non-
global zone is associated with a level of security, or label. Labeled zones can be
configured starting with the Solaris 10 11/06 release. For more information, see
Solaris Trusted Extensions Installation and Configuration.
• ipkg:
The ipkg non-global zone is the default on the OpenSolaris release. It has the same
characteristics as the OpenSolaris release that is running in the global zone.
• Branded zones that run an environment different from the OS release on the system:
o The lx branded zone introduced in the SX DE and Solaris 10 8/07 releases
provides a Linux environment for your applications and runs on x86 and
x64 machines. For more information, visit the OpenSolaris Community website.
o The solaris8 and solaris9 branded zones enable you to migrate a Solaris 8 or
Solaris 9 system to a Solaris 8 or Solaris 9 container on a host running the
Solaris 10 8/07 Operating System or later S10 release. The solaris8 zone is an
environment for Solaris 8 applications on SPARC machines. The solaris9
zone is an environment for Solaris 9 applications on SPARC machines.

Q: What is a global zone? Sparse-root zone? Whole-root zone? Local zone?

A: After installing Solaris 10 on a system, but before creating any zones, all processes run in
the global zone. After you create a zone, it has processes that are associated with that zone
and no other zone. Any process created by a process in a non-global zone is also associated
with that non-global zone.

Any zone which is not the global zone is called a non-global zone. Some people call non-
global zones simply “zones.” Others call them “local zones” but this is discouraged.

The default native zone filesystem model is called “sparse-root.” This model emphasizes
efficiency at the cost of some configuration flexibility. Sparse-root zones optimize physical
memory and disk space usage by sharing some directories, like /usr and /lib. Sparse-root
zones have their own private file areas for directories like /etc and /var. Whole-root zones
increase configuration flexibility but increase resource usage. They do not use shared
filesystems for /usr, /lib, and a few others.
There is no supported way to convert an existing sparse-root zone to a whole-root zone.
Creating a new zone is required.

Q: Can I create a zone which shares ("inherits") some, but not all, of /usr, /lib, /platform, and /sbin?

A: The original design of Solaris Containers assumes that those four directories are either
all shared (“inherited”) or all not shared. Sharing some and not others will lead to
undefined and/or unpredictable behavior.


Q: How do I get zones or containers?

A: Operating systems based on the OpenSolaris code base may elect to include support for zones. Sun provides Solaris 10 and Solaris Express, each of which includes complete support for zones.


Q: What hardware can utilize zones or containers?

A: Zones and resource management are software features of OpenSolaris and, by extension, Solaris and other operating systems based on OpenSolaris. As software features, they do not depend upon any specific hardware platform. Any hardware that runs OpenSolaris or one of its distros, e.g. Solaris 10, is able to offer these features.


Q: Will my software run in a zone or container?

A: Most Solaris software will run unmodified in a zone, without needing to be re-compiled. Unprivileged software (programs that run neither as root nor with specific privileges) typically runs unmodified in a zone once it can be successfully installed. Installation software must not assume that it can write into shared, read-only filesystems, e.g. /usr. This can be circumvented by adding a writable filesystem to the zone (e.g. at /usr/local) or using a whole-root zone.
However, there are a few applications which need non-default privileges to run, privileges not normally available in a zone, such as the ability to set the system's time-of-day clock. For these situations, the feature named "configurable privileges" has been added. This feature allows the global zone administrator, the person who manages zones on a system, to assign additional, non-default privileges to a zone. The zone's administrator can then allow individual users to use those non-default privileges.
An application that requires privileges which cannot be added to a zone may need modification to run properly in a zone.
Here are some guidelines:

• An application that accesses the network and files, and performs no other I/O,
should work correctly.
• Applications which require direct access to certain devices, e.g., a disk partition, will
usually work if the zone is configured correctly. However, in some cases this may
increase security risks.
• Applications which require direct access to the following devices must be modified to work in a zone:
o /dev/kmem
o a network device
1. Starting with OpenSolaris build 37 and Solaris 10 8/07, a zone can be
configured as an "exclusive-IP zone," which gives it exclusive access
to the NIC(s) that the zone has been assigned. Applications in such a
zone can communicate directly with the NIC(s) available to the zone.
2. Applications running in shared-IP zones should instead use one of
the many IP services.

For more details, read the white paper "Bringing Your Application Into the Zone". Note that changes have been made to privileges, IP types, and other areas used with zones since this paper was published. For current information, also see the administration guide.

Q: What features are new in Solaris 10 10/08?

A: New features include the following:

1. Support has been added for using ZFS clones when cloning a zone. If the source and
the target zonepaths reside on ZFS and both are in the same pool, a snapshot of the
source zonepath is taken and zoneadm clone uses ZFS to clone the zone. You can
still specify that a ZFS zonepath be copied instead. If neither the source nor the
target zonepath is on ZFS, or if one is on ZFS and the other is not on ZFS, the clone
process uses the existing copy technique. In all cases, the system copies the data
from a source zonepath to a target zonepath if using a ZFS clone is not possible.
2. A new -b option to zoneadm attach has also been added. Use this option to specify
official or Interim Diagnostics Relief (IDR) patches to be backed out of a zone
during the attach. This option applies only to zone brands that use SVr4 packaging.

Q: How “big” is a zone?

A: If configured with default parameters, a zone requires about 85MB of free disk space
per zone when the global zone has been installed with the “All” metacluster of Solaris
packages. Additional packages installed in the global zone will require additional space in
the non-global zones. SVM soft partitions can be used to divide disk slices and enforce per-zone disk space constraints. When performing capacity planning, 40MB of additional RAM per zone is suggested. Applications do not use any "extra" RAM merely because they are running in a zone.
A zone installed using the whole-root model will take up as much space as the initial Solaris 10 installation, which will be more than 500MB in most cases.
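The figures above lend themselves to simple back-of-envelope planning. The sketch below multiplies out disk and RAM for a hypothetical zone count; the 85MB and 40MB figures are the rules of thumb from the text, not guarantees, so real workloads should be measured.

```shell
#!/bin/sh
# Back-of-envelope sizing: ~85MB disk per sparse-root zone and
# ~40MB extra RAM per zone, per the guidance above. Illustrative only.
ZONES=10
echo "disk: $((ZONES * 85)) MB"
echo "ram:  $((ZONES * 40)) MB"
```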


Q: How many containers can one copy of Solaris have?

A: While the theoretical limit is over 8,000, the practical limit depends on:

• The amount of hardware resources used by the applications versus the amount
available in the system. This includes the number and processing power of CPUs,
memory size, NICs, HBAs, etc.
• What portion of the installed zones are actually in use. For example, you can create
100 zones, each ready to offer a web service, but only boot the 10 that you need this
month. The unbooted zones take up disk space, but do not cause the use of any extra
CPU power, RAM, or I/O.

Consider this example, which worked:

• 40 zones, each running five copies of the Apache web service, on an E250 with two
300MHz CPUs, 512MB RAM, and three hard disk drives totalling 40GB. With all
zones running and a load consisting of multiple simultaneous HTTP requests to
each zone, the overhead of using zones was so small it wasn’t measurable (<5%).


Q: Can each zone run a different Solaris version?

A: No. All of the zones use a single underlying kernel. The version of the kernel determines the version of every container on that system.


Q: What types of re-configurations require a non-global zone re-boot?


A:

• Adding a device to a non-global zone.

• Binding a zone to a pool.


Q: Can containers be clustered?

A: Yes, but not without adding additional cluster-management software. As of this writing, Sun is developing extensions to its Sun Cluster software so that resource groups can be placed within non-global zones. Veritas/Symantec has also announced support for zones in the Veritas Cluster product.


Q: Can I use SysV shared memory between containers?

A: No. This would violate several security principles.

Q: Can a zone include multiple zones (aka “is the containment model hierarchical”)?

A: No, the model is strictly two-level: one global zone and one or more non-global zones. Only the global zone can create non-global zones, and each non-global zone must be contained within the global zone.


Q: Can I automate the process of entering system information, e.g. with sysidcfg?

A: Yes, after a zone has been installed, copy a sysidcfg(4) file to the zone’s /etc/sysidcfg
before the first boot of that zone.


Q: Can some local zones be in different time zones?

A: Yes. Each non-global zone has its own copy of /etc/default/init, which contains the
timezone setting. You can change the line starting with “TZ=”. The recognized names of
timezones are in /usr/share/lib/zoneinfo. For example, Eastern Standard Time in the USA is
defined in the file /usr/share/lib/zoneinfo/US/Eastern. To set a non-global zone’s timezone to
that timezone, the line in /etc/default/init would look like this:

TZ=US/Eastern
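A quick way to confirm that the TZ setting only changes how the shared clock is rendered (this assumes standard zoneinfo names are installed on the system):

```shell
#!/bin/sh
# Same kernel clock, two renderings. %Z prints the timezone
# abbreviation; the instant in time is identical for both commands.
TZ=UTC date '+%Z'
TZ=US/Eastern date '+%Z'
```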


Q: Can some non-global zones have different date and/or time settings (i.e. different clocks)?

A: Although different zones can "be" in different time zones, each zone gets its date and time from the same clock source. This means that the time-zone setting is applied after the current time is obtained from the kernel.
If you would like the ability to have different clock sources per zone, please add a call
record to RFE 5033497. [August 2005]


Q: Can I label my terminal windows with the name of the zone I’m logged into?

A: Yes. After logging into the zone, enter this command:

zone% /bin/echo "\033]0;Zone `/bin/zonename`\007\c"


Q: How can I add a filesystem to an existing zone?

A: There are four methods. The following list uses UFS examples, but other types of file
systems, such as HSFS and VxFS, can be used in the zonecfg “fs” resource type property or
attached by mount(1M).
1. Create and mount the filesystem in the global zone and use LOFS to mount it into
the non-global zone (very safe)
2. Create the filesystem in the global zone and use zonecfg to mount the filesystem into
the zone as a UFS filesystem (very safe)
3. Export the device associated with the disk partition to the non-global zone, create
the filesystem in the non-global zone and mount it. Security consideration: If a
_block_ device is present in the zone, a malicious user could create a corrupt
filesystem image on that device, and mount a filesystem. This might cause the
system to panic. The problem is less acute with raw (character) devices. Disk devices
should only be placed into a zone that is part of a relatively trusted infrastructure.
4. Mount a UFS filesystem directly into the non-global zone’s directory structure
(allows dynamic modifications to the mount without rebooting the non-global zone)


Q: How can I make a writeable /usr/local in a sparse-root zone?

A: Use one of the methods above, for example:

global# mkdir -p /path/to/some/storage/local/twilight

global# zonecfg -z twilight

zonecfg:twilight> add fs

zonecfg:twilight:fs> set dir=/usr/local

zonecfg:twilight:fs> set special=/path/to/some/storage/local/twilight

zonecfg:twilight:fs> set type=lofs

zonecfg:twilight:fs> end

zonecfg:twilight> commit

zonecfg:twilight> exit



Q: Can I assign an SVM meta-device, or a Veritas Volume, to a non-global zone?

A: With Solaris 10 1/06, you can directly assign an SVM meta-device into a non-global zone,
using the same method you would with most other devices.
Symantec supports the assignment of a Veritas Volume into a non-global zone.

Q: Can I, and should I, import raw devices into a non-global zone?

A: The Solaris Zones feature set provides the global zone administrator with the ability to
allow a non-global zone to access a raw device. There are many situations where this will be
the best approach to solve a problem. There are even situations which require such use.

First, however, it is important to stress that there are usually other solutions that do not
require direct device access. Let’s discuss this first.

With regard to importing VxVM devices into a zone, this is possible with VxVM 5.0MP3
and up. For earlier versions, your options depend on the goal. If the goal is to make a
filesystem available in the zone, the solution is to create the filesystem in the global zone,
and LOFS or direct mount the filesystem in the zone. On the other hand, if the goal is to
make a mirrored block device available in the zone, the only solution is to upgrade to
VxVM 5.0MP3 or higher.

In any situation, if direct device access is required within a zone, you must perform careful failure analysis and evaluate the possible outcomes of catastrophic application failure. If the non-global zone will use COTS software, and will be managed by trustworthy people, then the risks will be small. Fortunately, in most cases there are also other solutions which do not use direct device access from a zone.

Here are two extreme examples:

1. A zone will be created for the purpose of training students on basic Unix commands.
The root account will only be used by the global zone administrator. The system will
be attached to a LAN which is not connected to any other networks. The instructor
needs access to the sound device. There are very few risks associated with such
access – it would be very difficult for the sound device to suffer a failure, and even if
it did it would be unlikely to affect other zones.
The zone can be given access to this via the zonecfg sub-commands:

global# zonecfg -z zonename

zonecfg:zonename> add device

zonecfg:zonename:device> set match=/dev/sound/*

zonecfg:zonename:device> end

zonecfg:zonename> exit

The zone will have access to sound devices, but will not have access to any other devices.
2. A zone will be created for the purpose of teaching students about a database program
that requires access to raw disk partitions. The instructor knows how to use Unix, but does
not have a background in Unix system administration. Further, the instructor will require
use of the root account to assist students. It is possible that the instructor could make a
mistake, or a malicious student could abuse the raw disk access, leading to a crash of the
kernel. This would also stop all of the other non-global zones, as well as the global zone. If
the other zones are running production software, this request for raw disk access in a zone
should not be fulfilled. Other solutions should be pursued, such as creating an RBAC role for the instructor which gives only the necessary privileges to the instructor's Unix account.

Other examples must be judged by their particulars, e.g. a production database program
which needs raw access. Factors to consider include:

• Who will login to the zone? How trustworthy are they?

• Is this system protected from unauthorized access by a firewall?
• What level of availability is required by applications running in this zone and in
other zones?


Q: Can I share an I/O resource (e.g. NIC, HBA) between containers?

A: Yes, in fact, that is the default model. Each container is assigned its own IP address, but
usually multiple containers will share one NIC. Further, multiple zones may be assigned
separate filesystems accessed through one HBA.


Q: Can zones in one computer communicate via the network?

A: Both shared-IP and exclusive-IP zones can communicate via the network. In general, a
zone is assigned to use one or more network ports (aka NICs), and network traffic to or
from other computers uses the assigned NIC(s), following standard IP rules.
Network traffic between two zones on the same system may require extra planning. If a
zone is an “exclusive-IP” zone, its network packets will always leave the computer, and
inbound packets will always come from outside the computer. Further, an exclusive-IP zone
performs all of its own network configuration, including routing and IP filtering.
Before Solaris 10 10/08, network traffic between two shared-IP zones always stayed in the
computer, i.e. it didn’t traverse the physical network. This provided very high bandwidth,
low latency transmission. However, starting with Solaris 10 10/08, traffic between two
shared-IP zones stays in the computer unless a default router is used for one or both zones.
Traffic from a zone with a default router will go out to the router before coming back to the
destination zone. For more information on default routers for zones, see the documentation
and Steffen’s blog.
Full IP-level functionality is available in an exclusive-IP zone. Exclusive-IP zones always communicate with each other over the physical network. That communication can be restricted using IP Filter from within such zones, just as it can for a separate system.

For shared-IP zones in one computer that communicate using IP networking, the following apply:
• Inter-zone network latency is extremely small, and bandwidth is extremely high
• Solaris IP Filter can be enabled in non-global zones by turning on loopback filtering
as described in System Administration Guide: IP Services. Filter rules are still
configured in the global zone.

It is possible to configure routing to block traffic between specific zones completely.


Q: How do I modify the network configuration of a running zone?

A: For shared-IP zones, the ifconfig(1M) command can be used in the global zone to modify that zone's existing network configuration or to add new logical interfaces to a zone. Here are some examples (using 192.168.0.10 as a placeholder address) that add, and then delete, a logical interface assigned to a zone:

global# ifconfig bge0 addif 192.168.0.10 zone myzone up

global# ifconfig bge0 removeif 192.168.0.10


Q: Can IP Multipathing (IPMP) be used with zones?

A: Yes.
Exclusive-IP zones can use IPMP. IPMP is configured the same way in an exclusive-IP zone
as it is on a system not using zones.
For shared-IP zones, IPMP can be configured in the global zone. Failover of a network link
(e.g. hme0) that is protected by IPMP will bring the associated logical interfaces (e.g.
hme0:3) for the zones over to the secondary link (e.g. bge0).
For more information, see the section “Using IP Network Multipathing on a Solaris System
With Zones Installed” in System Administration Guide: Solaris Containers-Resource
Management and Solaris Zones.


Q: Can IP Filter be used with zones?

A: You have the same IP Filter functionality that you have in the global zone in an
exclusive-IP zone. IP Filter is also configured the same way in exclusive-IP zones and the
global zone.
For shared-IP zones, the IPFilter features in Solaris 10 can be used to filter traffic passing
between one non-global zone and other computers on the network. This includes the ability
to use NAT features, i.e., redirect traffic destined for the global zone to non-global zones.


Q: Can I prevent a zone from using the network?

A: Yes. A zone does not need a network interface in order to operate. If you don’t specify a
network interface when you create the zone, it will still boot correctly. If an existing zone
has been given access to a network interface, you can use zonecfg(1M) to remove that
access, but if the zone is running you must also either re-boot the zone or use ifconfig(1M)
to remove access until the next re-boot.
It is also possible to allow a shared-IP zone to access the network, but not communicate with
other zones on the same system. One method is to set up a pair of routes using the “-reject”
argument to the route(1) command. For example, if one zone has an IP address of <Addr1>
and the second zone has an address of <Addr2>, then the following commands will prevent
network traffic from passing between the two zones. [July 2006]

global# route add <Addr1> <Addr2> -interface -reject

global# route add <Addr2> <Addr1> -interface -reject


Q: Are VLANs supported in zones?

A: Yes. For a shared-IP zone, the VLAN interface must be plumbed in the global zone. LAN
and VLAN separation are available in an exclusive-IP non-global zone.


Q: How do I configure a default route in a container?

A: For a shared-IP configuration: All routes, including default routes, must be configured
by the global zone administrator. By default, such zones use the global zone’s default
router. Starting with Solaris 10 10/08, each shared-IP zone can be assigned its own default
router with the “defrouter” setting. For more information on default routers for zones, see
the documentation and Steffen’s blog.
For an exclusive-IP configuration: The zone administrator can configure IP on those data-
links with the same flexibility and options as in the global zone.


Q: How can I restrict a zone (or a few zones) to one NIC (network connector)?

A: The global zone administrator configures each zone’s access to zero or more NICs. A
shared-IP zone can be the only zone using a NIC.
Exclusive-IP zones have more separation which reaches down to the data-link layer. One or
more data-link names, which can be a NIC or a VLAN on a NIC, are assigned to an
exclusive-IP zone by the global administrator. The zone administrator can configure IP on
those data-links with the same options as in the global zone.


Q: When I tried to mount a file system into a non-global zone, an error message displayed
stating that the mount point was busy. Why?
A: All accesses to entries in lofs mounted file systems map to their underlying file system.
Therefore, if a mount point is made available in multiple locations via lofs and it is in use in
any of those locations (as a mount point, a current working directory, etc.), an attempt to
mount a file system at that mount point will fail unless the overlay flag has been specified.


Q: How can I mount a filesystem into two or more different zones safely?

A: Create a directory in the global zone, and mount it into each non-global zone using lofs. This allows reading and writing from all zones without corruption. It is the same mechanism used by the automounter in certain cases.


Q: How can I create a zone with its own /usr or root file system (a ’whole root file system’)?

A: By default a zone shares /usr and a few other directories with the global zone. If a zone needs its own separate copy of /usr, et al., you must tell zonecfg not to use the default configuration. To do this, use the "-b" option on the "create" sub-command of the zonecfg(1M) command.
If you do this, you must specify each existing file system that you do want to share with this
new zone.


Q: How can I restrict a zone (or a few zones) to one HBA (storage connector)?

A: Each zone uses space in at least one disk partition; its root directory and several others (e.g. /etc) live there. All of these files are part of Solaris. In addition, each zone can be given
access to one or more file systems and/or one or more raw disks. By planning carefully, you
can configure one zone so that all of its files and devices are accessible through one HBA,
and all of the storage of another zone is accessible through a different HBA.


Q: Can a non-global zone NFS-mount a file system that has been shared from its own global zone?

A: No. This may be addressed in the future. However, the filesystem can be LOFS-mounted
into the local zone, and, if necessary, the global zone can export the same filesystem via NFS
so that other computers can also access those files.


Q: Can a zone’s root directory be on a ZFS file system?

A: Solaris 10 release:
Placing a zone's root directory (i.e. its zonepath) on ZFS is supported starting with Solaris 10 10/08, and you can then upgrade with Live Upgrade going forward. There are still issues with placing a zone on ZFS on a release prior to Solaris 10 10/08 and then trying to upgrade.
Solaris Express Release


Q: Can a zone be an NFS server?

A: A global zone can be an NFS server. A non-global zone cannot use the Solaris NFS server features. This issue may be addressed in the future. See RFE 5102011.
However, non-Solaris NFS server software (i.e. “userland” NFS server software) has been
shown to work correctly in a non-global zone. Such software works because it does not run
in the kernel, unlike the Solaris NFS server software which runs in the Solaris kernel.


Q: Can a zone be a DHCP server?

A: A global zone can be a DHCP server.

Starting with Solaris 10 11/06, a non-global zone can be a DHCP server. This ability became
more flexible with Solaris 10 8/07, which added a feature called IP Instances.


Q: Can a zone be a DNS server?

A: Yes.


Q: Can a zone be an NTP client or server?

A1: A zone can be an NTP server.

A2: The NTP client software sets the system time clock shared by all zones, including the
global zone. By default, non-global zones cannot do this. However, the global zone
administrator can give a zone the ability to change the system time clock with the
“sys_time” privilege. Be aware that this changes the time clock for all zones.
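As a sketch of granting that privilege (the zone name is an example; the limitpriv property assumes Solaris 10 11/06 or later):

```shell
# Allow my-zone to set the system clock. Remember: this changes the
# clock for ALL zones on the system.
zonecfg -z my-zone
zonecfg:my-zone> set limitpriv=default,sys_time
zonecfg:my-zone> exit

# The new privilege set takes effect after the zone is rebooted:
zoneadm -z my-zone reboot
```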


Q: Can a zone be a NIS (aka yp), NIS+, or LDAP server?

A: Yes, yes, and yes.


Q: Can a zone provide network login via telnet, rlogin, rsh or ssh?
A: Yes, yes, yes, and yes.


Q: Can a zone be an ftp server?

A: A zone can be an ftp server, but it is not possible to use ftpconfig(1M) to set up a zone to
be an anonymous ftp server. This is because ftpconfig attempts to set up certain device
special files, and a zone does not have the necessary privileges.


Q: Can a zone run sendmail?

A: Yes.


Q: Can I use X windows in a zone?

A: There are a few different methods to use X windows with zones:

1. On the system console: at the login screen, you can choose “Remote Host” and enter
the hostname of the zone. The X windows login screen should be replaced with an X
windows remote login screen.
2. At the console, logged into the global zone: you can tell X to allow remote
connections from the non-global zone, telnet to that zone, and set the appropriate
environment variable so that X sessions go to the global zone’s X windows session,
e.g. “setenv DISPLAY my-global-zone:0”.
3. At another system, you can login directly to the non-global zone, and perform
steps similar to the previous method.


Q: How can I prevent one container from consuming all of the CPU power?

A: Use the resource management features of Containers. This requires using some
combination of the Fair Share Scheduler, CPU caps, assigned (’dedicated’) CPUs, and/or
[Dynamic] Resource Pools features.

Web Links:
Non-Global Zone Configuration (Overview)
Fair Share Scheduler (Overview)
CPU Caps
Dynamic Resource Pools (Overview)
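A minimal sketch of one of these options, the Fair Share Scheduler — the zone name and share count are examples, and the cpu-shares property assumes Solaris 10 8/07 or later (earlier releases use an explicit zone.cpu-shares rctl instead):

```shell
# Make FSS the default scheduler (takes effect at next boot) ...
dispadmin -d FSS
# ... or move all existing processes to the FSS class right now:
priocntl -s -c FSS -i all

# Give my-zone 20 CPU shares:
zonecfg -z my-zone
zonecfg:my-zone> set cpu-shares=20
zonecfg:my-zone> exit
```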


Q: What is the resource granularity for CPU assignment to a container?

A: Fair Share Scheduler: Arbitrary. FSS guarantees a minimum amount of CPU utilization,
so it doesn’t waste CPU cycles. Excessive CPU use is only prevented if there is contention
for CPU resources. Minima are specified by “shares” and enforced by the Fair Share
Scheduler. For example, CPU share assignments could be 1, 1000, 999, resulting in
utilization minima of 0.05%, 50%, and (practically speaking) 50%.
CPU Cap: number of CPUs, in hundredths of a CPU. One zone can be capped at 4.01
CPUs, and another can be capped at 4.02 CPUs. Dedicated CPU: CPU range, in integer
number of CPUs. On an x86 system, Solaris considers every CPU core to be a “CPU.” On
SPARC CMT systems, every hardware thread is a “CPU” so a four-socket T5440 has 256
“CPUs.” On other SPARC systems, every CPU core is a “CPU.”


Q: How can I limit (cap) the CPU usage of an application?

A: In OpenSolaris, and starting with Solaris 10 5/08, use the capped-cpu resource type. In
OpenSolaris and starting with Solaris 10 8/07, you can use the dedicated-cpu resource type
to automatically create a temporary pool when the zone boots. See Non-Global Zone
Configuration (Overview).
Alternatively, you can create a processor set with one or more CPUs and bind it to a
resource pool. Then create a zone and bind it to the same resource pool. Run the application
in that zone. The application will only “see” that set of processors.
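A sketch of the capped-cpu resource type mentioned above (zone name and cap value are examples; assumes Solaris 10 5/08 or later):

```shell
# Cap my-zone at 1.5 CPUs worth of execution time, regardless of how
# many CPUs the system has.
zonecfg -z my-zone
zonecfg:my-zone> add capped-cpu
zonecfg:my-zone:capped-cpu> set ncpus=1.5
zonecfg:my-zone:capped-cpu> end
zonecfg:my-zone> exit
```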


Q: How can I limit the memory used by a container?

A: You can use the Resource Capping Daemon (rcapd) for all releases. In OpenSolaris, and
starting with Solaris 10 8/07, you can use the capped-memory resource to set limits for
physical, swap, and locked memory. Determine values for this resource if you plan to cap
memory for the zone by using rcapd from the global zone. The physical property of the
capped-memory resource is used by rcapd as the max-rss value for the zone.

Web Links:
Non-Global Zone Configuration (Overview)
Administering the Resource Capping Daemon
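A sketch of the capped-memory resource described above — the zone name and all three cap values are illustrative examples (assumes Solaris 10 8/07 or later):

```shell
# Cap physical, swap and locked memory for my-zone.
zonecfg -z my-zone
zonecfg:my-zone> add capped-memory
zonecfg:my-zone:capped-memory> set physical=2g  # enforced by rcapd (max-rss)
zonecfg:my-zone:capped-memory> set swap=4g      # sets the zone.max-swap rctl
zonecfg:my-zone:capped-memory> set locked=512m  # sets zone.max-locked-memory
zonecfg:my-zone:capped-memory> end
zonecfg:my-zone> exit
```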


Q: Can I dynamically change the quantity of a resource (CPU, memory, network bandwidth)
assigned to a container?

A: To change the number of CPU shares associated with a container without re-booting it,
use the prctl command, e.g.

prctl -n zone.cpu-shares -r -v $SHARES `pgrep -z $ZONENAME init`

where $SHARES is the new number of shares and $ZONENAME is the name of the zone.
In OpenSolaris and Solaris 10 (starting with 5/08) similar methods can be used to change
the CPU cap, RAM cap, VM cap and shared memory cap.

Web Links:
Resource Controls
Using the prctl Command
Fair Share Scheduler (Overview)


Q: Can swap space usage be managed?

A: The entire swap partition is treated as a single global resource to processes running in
both global and non-global zones. Before Solaris 10 8/07, you couldn’t limit the amount of
swap used by a zone on a per-zone basis. You can globally limit the size of the swap-based
filesystems (e.g. /tmp) by using the “size” mount option in the container’s /etc/vfstab file,
e.g. “size=200m”. This allows you to decrease the effect of many and/or large files created in /tmp.
Starting with Solaris 10 8/07, you can use the capped-memory resource to cap the amount
of virtual memory (VM) that a zone uses. This can also be set dynamically with the resource
control zone.max-swap.
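The vfstab-based approach above can be sketched as follows (the 200 MB limit is an example value):

```shell
# A line for the zone's /etc/vfstab capping its swap-backed /tmp at 200 MB.
# device   device   mount  FS     fsck  mount    mount
# to mount to fsck  point  type   pass  at boot  options
swap       -        /tmp   tmpfs  -     yes      size=200m
```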


Q: Can I limit the network bandwidth used by a zone?

A: Yes, use the IPQoS features in Solaris 10. You must manage this from the global zone for
the containers.


Q: Do containers use up a lot of CPU power?

A: CPU overhead of containers is hardly measurable (i.e. <1%) for a few zones or even
dozens of zones, depending somewhat on the applications.


Q: Can the share value for a running project or zone be changed?

A: Yes. Here is an example:

prctl -n project.cpu-shares -v 10 -r -i project group.staff

The prctl utility allows the examination and modification of the resource controls associated
with an active process, task or project on the system. It allows access to the basic and
privileged limits on the specified entity.
-n specifies the name of the resource to get or set
-r specifies a replace operation
-v specifies the new value for the resource
-i specifies the owning process, task or project of the resource.


Q: Can I bind a zone to a pool?

A: Yes, but in OpenSolaris and Solaris 10 8/07 and later, it’s much easier to use the
’dedicated-cpus’ feature.
To bind a zone’s processes to a pool, first create the pool, then use zonecfg(1M) to bind a
zone to it.

1. Enable resource pools on your system using either svcadm or pooladm -e.
2. Use pooladm -s to create the pool configuration.
3. Use pooladm -c to commit the configuration at /etc/pooladm.conf.
4. Use poolcfg -c to modify the configuration.

poolcfg -c 'create pset pset_zone (uint pset.min = 3; uint pset.max = 3)'

poolcfg -c 'create pool pool_zone (string pool.scheduler="FSS")'
poolcfg -c 'associate pool pool_zone (pset pset_zone)'

5. Use pooladm -c to commit the configuration at /etc/pooladm.conf.

See the administration guide.
The command to perform the binding, from the global zone, would be:

zonecfg -z zone1 set pool=pool_zone

If the zone was running, you must re-boot it for the binding to take effect, unless you also
dynamically assign the zone to the pool.


Q: Can projects/zones be reassigned to a different resource pool while they are running?

A: Yes. Here is an example:

poolbind -p web_app -i zoneid myzone

The poolbind command binds zones, projects, tasks and processes to a pool.

-p is the name of the pool to bind

-i specifies the process id, zone id, task id or project id to be bound to the pool.


Q: Can you move processors between processor sets while the system is running?
A: Yes, you can. Here is the command(s) you would use:

• If you don’t care which CPUs you move from a processor set, the command would be:
poolcfg -dc "transfer 2 from pset pset1 to pset2"
which will move any two processors from pset1 to pset2.
-d operate directly on the kernel state
-c this signifies the command

If you want to move a specific CPU(s) here is the command:

poolcfg -dc "transfer to pset pset2 (CPU 0, CPU 1)"
which will move CPUs 0 and 1 to pset2.


Q: How can I prevent one zone from using all the swap space by filling up /tmp?

A: For manual mounts, use the option “-o size=sz” where sz is the size limit you want.
Ending the size in ’k’ means kilobytes, ending it in ’m’ means megabytes. Example: “-o
size=500m”. This option can also be added into /etc/vfstab. For more details, view the man
pages for mount_tmpfs(1M) and vfstab(4).

With Solaris 10 8/07, you can use the resource control, zone.max-swap. (The swap property
of the capped-memory resource is the preferred way to set this control.)

Also, note that RFE 1177209 will give the global zone administrator the ability to control
the amount of swap space used by one zone.


Q: Do I need to set a locked memory cap for a zone? If so, what value should I set?

A: A locked memory cap in a zone can be set using the zonecfg capped-memory resource.
Applications generally do not lock significant amounts of memory, but you might decide to
set locked memory if the zone’s applications are known to lock memory.

If the zone administrator is less than trusted or if DOS exploits are of concern, you can also
consider setting the locked memory cap to 10% of the system’s physical memory or to the
zone’s physical memory cap.


Q: What software can manage zones?

A: Here are just a few of the software tools – some free, some not free – which will help you
manage Solaris Zones:

• SunMC (Sun Management Center) GUI

• WebMin GUI has a Solaris Zones module
• Xone Control GUI
• The Zone Manager Command
• Zonestat command reports on resource usage and caps


Q: How do I create a zone?

A: First gather some information, then use the Solaris Container Manager GUI or the
commands shown below. This is the simplest possible creation of a zone that has network
access. You will need this information (example values in parentheses):

1. Name that you choose for the zone (my-zone)

2. Hostname that you choose for the zone (my-zone)
3. Name of the directory in the global zone where all of the zone’s operating system
files will be (/zones/zone_roots/my-zone)
4. IP address of the zone
5. Name of the network device that the zone should use (hme0)

Using the sample information in the appropriate commands, which will take about 10
minutes on a small system with a new installation of OpenSolaris or Solaris 10:

global# zonecfg -z my-zone

zonecfg:my-zone> create

zonecfg:my-zone> set zonepath=/zones/zone_roots/my-zone

zonecfg:my-zone> add net

zonecfg:my-zone:net> set address=

zonecfg:my-zone:net> set physical=hme0

zonecfg:my-zone:net> end

zonecfg:my-zone> commit

zonecfg:my-zone> exit

global# zoneadm -z my-zone install

global# zoneadm -z my-zone boot


Q: How do I remove a zone?

A: Use these commands, substituting the correct names for <bracketed> text.
global# zoneadm -z <zonename> uninstall

global# zonecfg -z <zonename> delete


Q: Is the maximum number of exclusive-IP zones limited to the number of physical ethernet ports?

A: No, if you use VLANs you can have one per VLAN per port. To use the same base ’bge0’
for multiple dhcp zones, in the case of VLANs you would assign bge1000 to zoneA, bge2000
to zoneB, etc. The VNIC component of Crossbow allows multiple virtual NICs on a port
without any VLANs. You can try this out at Crossbow project.


Q: Are there any recent changes for exclusive-IP zones in OpenSolaris?

A: Prior to build 83, the data-link used with exclusive-IP zones must be GLDv3. Note that
there is a patch [patch ID 118777-12] that allows the legacy ce device to be used with
exclusive-IP zones with build 80-82. In OpenSolaris build 83 and later, the data-link used
with exclusive-IP zones need not be GLDv3 since the Nemo unification provides a way to
present legacy device drivers as GLDv3 using a shim module. Hence, no patch to ce is needed.

Q: Can each container be a different Solaris patch level, so I can test patches in a “test”
container before applying them to a “production” container?

A: There are two parts to the answer: 1) There is only one kernel running on the system, so
all zones must be at the same patch level with respect to the kernel and core system
components. Such patches can only be applied from the global zone, and they affect the
global and all local zones equally. The KU (Kernel Update) patch is an example of such a patch.
2) Middleware such as Java Enterprise System can be patched on a per-zone basis. If the
software can be installed in the local zone then it must be patchable from the local zone as
well, regardless of the zone type, whole-root or sparse-root.


Q: Can I move a zone from one computer/domain to another?

A: Yes. See Migrating a Non-Global Zone to a Different Machine. For information on
migrating a Solaris 8 or Solaris 9 container, see System Administration Guide: Solaris 8
Containers and System Administration Guide: Solaris 9 Containers.

Q: Is there a way to correlate audit records from multiple containers?

A: Yes, the global zone sees all audit records. Each non-global zone only sees its own audit
records.

Q: I created a zone and booted it, but it doesn’t work. What should I do?

A: The most common problem is that the zone doesn’t have its system identification
information yet. You can determine if this is the problem by running “ps -fz <zonename>” in the global
zone. If the output only shows zsched, init, and a few (3-6) processes related to SMF (/lib/svc/
…, /usr/sbin/svccfg), then system identification is not complete. To complete it, attach to
the zone’s console by running “zlogin -C <zonename>” in the global zone, pressing Enter once, and
following the instructions.


Q: Can I add packages to just the global zone (for example, SRS netConnect)?

A: Yes, use pkgadd -G. Note that if the SUNW_PKG_THISZONE package parameter is set
to true, you do not have to use the -G option.


Q: Do zones boot automatically, or must I boot each one manually every time the system
boots?

A: The zones autoboot property determines whether the zone is booted when the system
boots. The global zone administrator can set the autoboot property to “true” or “false.” The
zones service svc:/system/zones:default must also be enabled.
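As a sketch (the zone name is an example):

```shell
# Have my-zone boot automatically whenever the system boots:
zonecfg -z my-zone set autoboot=true

# Verify the zones service is online, and enable it if it is not:
svcs svc:/system/zones:default
svcadm enable svc:/system/zones:default
```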


Q: Should I halt a system’s zones before applying patches?

A: There is no need to do this. In fact, the package and patch tools will perform their
operations on all zones that are running, as well as all zones that are not currently running
but are capable of being booted (e.g. they are at least in the “installed” state). The running
zones are operated on first, and then for each zone that is not running but can be booted,
the zone is booted, the operation is performed, and the zone is then halted.


Q: Where does a zone’s syslog output go?

A: By default the syslog output from a zone goes only into the zone’s syslog file. If you
would like the output to also appear in the global zone’s log files, configure the non-global
zone’s loghost to be the global zone.
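One way to configure this, run inside the non-global zone — the global zone’s IP address (192.168.0.1) is a placeholder assumption:

```shell
# Map "loghost" to the global zone in the zone's hosts file. The default
# /etc/syslog.conf already forwards selected facilities to @loghost.
echo "192.168.0.1  loghost" >> /etc/inet/hosts

# Restart syslogd so it picks up the new loghost mapping:
svcadm restart svc:/system/system-log:default
```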


Q: I removed a device from a zone, but it’s still there. Why, and how do I get rid of it?
A: This is bug 4963368. The current (Feb 2005) workaround is: after using zonecfg to
remove the device, manually remove the corresponding entry in {ZONEPATH}/dev.
If you’re running Solaris Express, this bug is corrected in builds 46 and higher. If you are
running Solaris 10, this bug is corrected in Solaris 10 8/07.


Q: Are there any special guidelines for using Live Upgrade with zones?

A: There are a number of considerations when using Live Upgrade (LU) on a system with
zones installed. It is critical to avoid zone state transitions during lucreate and lumount
operations:
• When you lucreate an alternate boot environment (ABE), if a zone is not running,
then it cannot be booted until the lucreate has completed.
• When you lucreate an ABE, if a zone is running, it should not be halted or rebooted
until the lucreate has completed.
• When an ABE is lumounted, you cannot boot zones or reboot them, although zones
that were running before the lumount can continue to run.

Because a non-global zone can be controlled by a non-global zone administrator as well as
the global zone administrator, it is best to have all zones halted during lucreate or lumount.

It is important to note that when LU operations are underway, non-global zone
administrator involvement is critical. The upgrade affects their work as administrators, and
they will be dealing with the changes that occur as a result of the upgrade. They should
make sure that any local packages are stable throughout the sequence, handle any post-
upgrade tasks (such as configuration file tweaking), and generally schedule around the
system outage.

Here is an example of a problem that could occur if these guidelines are not followed. If this
sequence of actions takes place:

1. In global zone: lucreate -n new

2. In non-global zone: pkgadd FooBar
3. In global zone: luupgrade -n new, luactivate -n new, init 6

When the system comes back up, the non-global zone users will notice that they no longer
have the FooBar feature added by the package.


Q: Are Solaris 10 zones configured on ZFS prior to the Solaris 10 10/08 release upgradeable
using Live Upgrade?

A: Not yet, but it is being investigated. Live Upgrade can be used on Solaris 10 10/08
systems that have zones configured with the zonepath on ZFS.

Q: What is the default networking service configuration of a non-global zone when it is
installed?

A: On Solaris 10 systems, the traditional open configuration is installed. On Solaris
Express (SX) systems, the limited networking configuration is installed.
You can switch the zone to either networking configuration by using the netservices
command, or enable and disable specific services by using SMF commands.


Q: How do I clear a hung non-global zone?

A: Reboot the global zone.


Q: Can I access one zone from another zone?

A: Only through IP connections, e.g. telnet, rlogin.


Q: Can I ’su’ from one zone to another?

A: No, this would violate the security implementation of zones. In this context, think of
zones as separate computers – you can’t ’su’ from one Unix computer to another.
You can use the zlogin(1) command to login to a non-global zone from the global zone. You
must have all privileges(5) to use zlogin.


Q: Can I prevent the root account in one zone from affecting other zones?

A: Because each container has its own namespace, each container has its own root account.
Each zone’s root account is unable to access other containers in any way.


Q: Can programs running in one zone change the operation of programs running in another zone?

A: A great deal of design work was done to prevent containers from affecting each other. By
default it is very difficult for one local zone to affect another zone, but it is possible. It is also
easy for the global zone administrator to configure containers unsafely. Consider these factors:

• First, there are no known methods for one user (even root) in one local zone to
’break into’ another zone (global or non-global).
However, a modern computer has many resources, some of them real, some virtual.
Denial of Service attacks often attempt to use all of the instances of a virtual
resource. One early attack on Unix systems was creating so many processes that all
of the PIDs were in use, preventing the creation of new processes. There are now
methods to prevent those attacks, and those methods automatically apply, or have
been applied to, zones. In some cases the method of prevention includes the manual
use of Solaris features, e.g. projects.
• By default it is difficult to disrupt operation of zones. However, the global zone
administrator can make it easier for a non-global zone user to impact operation of
one or more other zones, even the global zone. Try to avoid assigning disk devices
directly to non-global zones: the root user of that zone might be able to take
advantage of this to cause a SCSI bus reset or even panic the kernel. Also, avoid
assigning the same device or file system to multiple zones unless needed to achieve a
specific goal. If that is necessary, ensure that all of the software in those two zones
will obey a synchronization mechanism when using the device or file system.


Q: How do I prevent a ’fork bomb’ from affecting all of the zones?

A: A ’fork bomb’ is a process which creates (forks) as many child processes as possible,
attempting to use up all of the virtual memory or PIDs in a system, resulting in a Denial of
Service to other users. If you would like to prevent someone from doing this in a non-global
zone, add this to a zone’s configuration, using zonecfg(1M):

add rctl

set name=zone.max-lwps

add value (priv=privileged,limit=1000,action=deny)

end


That will prevent a zone’s processes from having a total of more than 1000 LWPs.

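Once the zone is running with that configuration, the cap can be inspected and even changed on the fly with prctl (the zone name is an example):

```shell
# Show the current zone.max-lwps value for a running zone:
prctl -n zone.max-lwps -i zone my-zone

# Raise the limit to 2000 without rebooting the zone:
prctl -n zone.max-lwps -r -v 2000 -i zone my-zone
```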

Q: Can Oracle use shared memory in a Container?

A: In Solaris, Oracle uses ISM (Intimate Shared Memory) or DISM (Dynamic ISM). DISM
is preferred because it provides more flexibility.

ISM can be used in a Solaris Container, for any release of Solaris 10.

Because we keep improving Containers, there are slightly different answers to the question
“can DISM be used,” depending on the particular release of Solaris 10.

1. Solaris 10 8/07 and newer: Yes, Oracle can use DISM in a Container. Because the
Solaris privilege ’proc_lock_memory’ is in a zone’s default set of privileges, you
should limit the amount of RAM that a particular zone can lock. If you don’t do
this, that zone could lock down enough memory that the global zone – including
platform management tools – cannot function properly.
In Solaris 10 5/08 and later, you should set that limit with the following command:

global# zonecfg -z myzone

add capped-memory

set locked=4g

end


Note that common memory-size suffixes can be used: k or K (kilobytes), m or M (MB), g or

G (GB), etc. See zonecfg(1M) for more details.
In Solaris 10 8/07 you should set that limit with the following command:

global# zonecfg -z myzone

set max-locked-memory=4g


2. Solaris 10 11/06: Yes, Oracle can use DISM in a Container. To enable the use of DISM,
the global zone administrator must add the privilege “proc_lock_memory” to the
Container. To do this, use zonecfg(1M) to add the line

set limitpriv=default,proc_lock_memory

to the Container’s configuration.

3. Solaris 10, Releases 3/05, 1/06, 6/06: A Container can only use ISM. It cannot use DISM.
This is a side-effect of the implementation of the security boundary which protects zones
from each other.


Q: Can I use the Solaris 10 FSS (Fair Share Scheduler) with Oracle in a Solaris Container?

A: There are currently (June 2006) two distinct concerns regarding the use of FSS in a
Container when running Oracle databases:

1. In testing – Oracle processes use internal methods to prioritize themselves to

improve efficiency. It is possible that these methods might not work well in
conjunction with the Solaris FSS. Although there are no known problems with non-
RAC configurations, Sun and Oracle are testing this type of configuration to
discover any negative interactions. This testing should be completed soon.
2. It is not possible to use the Solaris FSS with Oracle RAC in a Container. A Solaris
patch is being tested that fixes this problem.

Q: What are zone’s strengths compared to other server virtualization solutions?

A: Solaris Zones have many strengths relative to other server virtualization solutions, including:

• Cost: zones are a feature of the operating system. There is no extra charge for using them.
• Integration: Zones are integrated into the operating system, providing seamless
functionality and a smooth upgrade path.
• Portability: Zones are not tied to any one hardware platform. As a device-
independent feature set of OpenSolaris, their functionality is exactly the same on all
hardware to which OpenSolaris has been ported.
• Observability: The Global Zone has visibility into all activity in all zones, including
viewing process and network activity, system-wide accounting and auditing, etc.
This makes it possible to find performance problems and resolve inter-zone
conflicts, both of which are extremely difficult problems on most other SV solutions.
It is even possible to re-host applications typically found on different systems (e.g.
web server and app server) on different zones in the same system, and then use
DTrace to analyze their interactions.
• Manageability: You can manage all of the zones on one system as one collection,
rather than as separate servers. This includes adding packages and patches once per
system, not once per zone.


Q: Are containers like VMware?

A: They are only vaguely similar. Both technologies are very useful for consolidating
servers. However, the basic model is different: Containers form isolated application
environments that share one OS instance, while VMware hosts multiple OS instances. The
differences also include:

• Containers are only available for Solaris 10 and SX Nevada. VMware supports
Solaris, Microsoft Windows and Linux clients, simultaneously.
• VMware uses a great deal of CPU capacity managing the multiple environments.
CPU overhead of containers is hardly measurable (typically <1%) for a few zones or
even dozens of zones, depending somewhat on the applications.
• Containers do not have any financial cost beyond Solaris license and/or support
costs. VMware for production environments costs thousands of dollars, and a
license is necessary for each Windows or RH instance hosted on top of VMware.

Q)How to find Global zone name from local Zone?

A) From the Local Zone Run The following command

# arp -a | grep SP