Note: This web page was automatically created from a PalmOS "pedit32" memo.

Solaris notes

Did you try putting /usr/local/lib into the system runtime linking environment with "crle" first? In other words, something like this (Solaris 8/9/10 only):

crle -c /var/ld/ld.config -l /usr/local/lib:/usr/lib
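The -l argument to crle is a colon-separated search path. The list manipulation involved can be sketched in plain shell — prepend_libdir is a hypothetical helper for illustration only; crle itself just takes the whole list as shown above:

```shell
# Prepend a directory to a colon-separated library search path,
# leaving the path unchanged if the directory is already present.
# (Hypothetical helper, for illustration only.)
prepend_libdir() {
    dir=$1
    path=$2
    case ":$path:" in
        *":$dir:"*) printf '%s\n' "$path" ;;   # already present
        "::")       printf '%s\n' "$dir" ;;    # empty path
        *)          printf '%s\n' "$dir:$path" ;;
    esac
}

prepend_libdir /usr/local/lib /usr/lib   # -> /usr/local/lib:/usr/lib
```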
Checking a binary's library version requirements with pvs:

meter-ferrante) pvs /bin/chmod
        (SUNW_1.1);
meter-ferrante)
According to Francisco, the fix is to change mounting of dcslib from /dcspdb/sun4-5 to /dcspdb/solaris-2.6
Just wanted to share the info below with the group. Using the info below, I've just expanded the /usr filesystem on a test box while it was still mounted (single-user mode), but it should work in multiuser mode as well. Let me know if you have any questions on this. Will be using this to resolve a small /var filesystem on a client box without having to re-install or repartition/reformat.

On Tue, Feb 08, 2005 at 01:40:36PM -0800, Tri H. Tran wrote:
> Thanks Dan. That was the hint that I needed. Googling for more
> info on this resulted in the below link for further detail:
>
> On Tue, Feb 08, 2005 at 12:26:06PM -0800, Dan Stromberg wrote:
> > For UFS - use mkfs -M on a mounted filesystem, or mkfs -G on an
> > unmounted filesystem. (These flags are undocumented, by the way.)
> >
> > On 9 it's growfs; on earlier releases of Solaris, one of these flags may do it.
Mounting individual partitions of a Solaris CD-ROM on a Linux system:

seki-root> fdisk -l /dev/hda
Note: sector size is 2048 (not 512)
Disk /dev/hda (Sun disk label): 1 heads, 640 sectors, 2048 cylinders
Units = cylinders of 640 * 512 bytes

   Device Flag    Start       End    Blocks   Id  System
/dev/hda1  r          0       752    240640    4  SunOS usr
/dev/hda2  r        752      1984    394240    2  SunOS root
/dev/hda3          1984      1988      1280    0  Empty
/dev/hda4          1988      1992      1280    0  Empty
/dev/hda5          1992      1996      1280    0  Empty
/dev/hda6          1996      2000      1280    0  Empty

seki-root> bc
640*512
327680
327680*752
246415360

seki-root> mount /dev/hda /mnt/root
seki-root> mount -o offset=246415360 /dev/hda /mnt/usr
seki-root> cd /mnt/usr
seki-root> d
./  ../  a/  bin@  cdrom/  dev/  devices/  etc/  .java@  kernel/  lib/  mnt/
opt/  platform/  proc/  reconfigure  sbin/  .swapinfo@  .swappart@  system/
tmp/  .tmp_proto/  usr/  var@
seki-root>
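The bc arithmetic in the session above (cylinder size times starting cylinder) can be folded into the mount invocation. A small sketch, using the same numbers as the fdisk output:

```shell
# Byte offset of the partition starting at cylinder 752, where units
# are cylinders of 640 * 512 bytes (numbers from the fdisk output above).
bytes_per_cyl=$((640 * 512))
offset=$((bytes_per_cyl * 752))
echo "$offset"   # 246415360, as computed with bc in the session above
# then: mount -o offset="$offset" /dev/hda /mnt/usr
```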
> What I heard is that the SPARCs require some minimum number of MHz for
> Solaris 10 to run. Is there a similar minimum requirement?

That's because of a CPU bug; what we really require is UltraSPARC-II or later (which means no UltraSPARC <= 200MHz qualifies).
Ultrasparc I's do not run Solaris 10 - due to the CPU bug they have, they were declared not supported. ISTR Ultra 1's came in 137 and 167 MHz.
For more on NFS v4: nfs(4) nfsmapid
It appears that Solaris 10 defaults to silently dropping ("bit bucketing") connections to closed ports rather than giving a nice "connection refused".
sshd trouble on Solaris 10: This supposedly fixes it:

svcadm enable svc:/network/ssh:default

Sun apparently sees it as a bug (only in the CD install), but we're getting it when autoinstalling the FCS :-S

This gave a "next state" of online:

svcadm clear ssh

This script fixed it:

#!/bin/sh
SSHD_DIR=/etc/ssh
export SSHD_DIR
if [ -f "$SSHD_DIR/ssh_host_key" ]
then
    :
else
    maxtime 180 /usr/bin/ssh-keygen -trsa1 -b 1024 -f "$SSHD_DIR/ssh_host_key" -N ''
fi
if [ -f "$SSHD_DIR/ssh_host_dsa_key" ]
then
    :
else
    maxtime 180 /usr/bin/ssh-keygen -d -f "$SSHD_DIR/ssh_host_dsa_key" -N ""
fi
if [ -f "$SSHD_DIR/ssh_host_rsa_key" ]
then
    :
else
    maxtime 180 /usr/bin/ssh-keygen -t rsa -f "$SSHD_DIR/ssh_host_rsa_key" -N ""
fi
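The if/else blocks in that script all repeat one pattern: generate a host key only when it is missing. A minimal, hedged sketch of the pattern — ensure_host_key is a hypothetical helper name, KEYGEN is parameterized so the sketch isn't tied to a particular ssh-keygen path, and the `maxtime` wrapper from the original script is omitted:

```shell
# Sketch of "generate a host key only if it's absent", as in the
# script above. KEYGEN defaults to ssh-keygen but can be overridden
# (an assumption for illustration, not part of the original script).
KEYGEN=${KEYGEN:-ssh-keygen}

ensure_host_key() {
    # $1 = key type, $2 = key file; do nothing if the key already exists
    [ -f "$2" ] || "$KEYGEN" -q -t "$1" -f "$2" -N ''
}
```

On Solaris 10 this would be invoked as, e.g., ensure_host_key rsa /etc/ssh/ssh_host_rsa_key, once per key type.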
"root -X" benefits from "X11UseLocalhost no" in /etc/ssh/sshd_config. Also helpful with X11 forwarding? (Analogous to the fix required on Fedora Core 3.)

# IPv4 only
ListenAddress
# IPv4 & IPv6
# ListenAddress ::

A poster on comp.unix.solaris suggests I may be able to add sshd flags in /lib/svc/method/sshd.
Interesting command: svcs -l ssh
Automounting has changed somehow - auto_dcslib isn't working. Statically mounting dcslib on for the time being.
/dcs/bin/man is making a mess of solaris 10 man pages
Interesting URL:
> - sshd could use different PAM service names for the different auth types.
>   (eg "sshd-public-key", "sshd-password", "sshd-gssapi-with-mic" and fall
>   back to "sshd" if these don't exist. This would probably be tricky to
>   write because you'd have to stop and start PAM for each auth attempt.)

Solaris 10's sshd does this. See:

The service names it uses are:
- sshd-none
- sshd-password
- sshd-kbdint
- sshd-pubkey
- sshd-hostbased
- sshd-gssapi (for both gssapi-keyex and gssapi-with-mic)
Need to google for "bootenv.rc" sometime
On first autoinstall:
1) ypbind isn't communicating with ypcat. ypbind not running.
2) sshd didn't get started up
3) /dcs/bin/man is broken on Sun manual pages
4) First reboot after an autoinstall asks for an NFSv4 domain
5) lsof in dcslib isn't working
6) scsiinfo in dcslib isn't working (may need Sol 9 too)
7) top seems fine
8) nfswatch seems fine
9) memconf might like to be upgraded, though prtconf and/or prtdiag may replace it
10) libtool may be broken on Solaris 10 (2005-02-24)
11) /etc/hosts is coming up misconfigured. May also need to do something with /etc/inet/ipnodes:
    A new feature of Solaris 10 is sourcing the "ipnodes" name service prior to "hosts" for IPv4 addresses. Reference "/etc/nsswitch.conf".
    /\/\/\/\/\
    This will cause problems when updating the IP address of your IPv4 interfaces by only updating "/etc/hosts" aka "/etc/inet/hosts". Updating "/etc/inet/ipnodes" will also be necessary.
    /\/\/\/\/\
    You can't have IPv6 addresses in hosts. You can put IPv4 addresses into ipnodes, but you don't have to. If ipnodes will not give an answer, hosts will be used. BTW: it's a feature of all IPv6-enabled Solaris versions.
    /\/\/\/\/\
    > you can't have IPv6 addresses in hosts.
    Sure you can. The parser just ignores them.
12) srsh isn't bootstrapping itself correctly
13) NIS client needed to be enabled with "svcadm enable network/nis/client"
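Given item 11 (ipnodes consulted before hosts for IPv4), a hedged sketch of keeping an address entry present in both files. sync_host_entry is a hypothetical helper; HOSTS_FILE and IPNODES_FILE are parameterized for illustration — on a real Solaris 10 box they'd be /etc/inet/hosts and /etc/inet/ipnodes:

```shell
# Hypothetical helper: ensure "addr name" appears in both the hosts and
# ipnodes files, since Solaris 10 consults ipnodes before hosts for IPv4
# lookups. HOSTS_FILE/IPNODES_FILE are parameterized for illustration.
sync_host_entry() {
    addr=$1
    name=$2
    for f in "$HOSTS_FILE" "$IPNODES_FILE"; do
        # append the entry only if the name isn't already present
        grep -q "[[:space:]]$name\$" "$f" || printf '%s\t%s\n' "$addr" "$name" >> "$f"
    done
}
```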
There's only one Solaris/x86 version; it can be booted in both 32- and 64-bit mode. Most programs are 32-bit and shared between the 32- and 64-bit runtimes. The first publicly available build of S10 that supported a 64-bit kernel was build 72. The GA build is 74L2a (cat /etc/release). If it doesn't boot in 64-bit mode after install, you need to run:

eeprom boot-file=''

(empty means: take the default)
svc:/system/fmd:default (Solaris Fault Manager) (another thing to google for)
Changing the community string on solaris 9: Go to /etc/*/snmpd.conf Edit read-community field to have the value you want
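A hedged sketch of that edit with sed. set_read_community is a hypothetical helper, and the default SNMPD_CONF path is an assumption — the note above says to look for the file matching /etc/*/snmpd.conf:

```shell
# Sketch: rewrite the read-community value in an snmpd.conf-style file.
# SNMPD_CONF is parameterized for illustration; the default path is an
# assumption (find yours via /etc/*/snmpd.conf as noted above).
SNMPD_CONF=${SNMPD_CONF:-/etc/snmp/conf/snmpd.conf}

set_read_community() {
    # $1 = new community string; rewrite any existing read-community line
    sed "s/^read-community[[:space:]].*/read-community $1/" "$SNMPD_CONF" > "$SNMPD_CONF.new" &&
        mv "$SNMPD_CONF.new" "$SNMPD_CONF"
}
```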
Solaris 10: smpatch analyze ...along the lines of yum/apt/up2date
SunPCI version for Sol 7/8/9 reportedly works fine on Solaris 10
New in Solaris 10:
1) ipfilter. Solaris 10 ships with IP Filter (ipf) and does NAT as well. Put config files in /etc/ipf. Remember to enable the network interface in "pfil.ap" and restart. ipfstat -hio
2) Configuring CUPS on Solaris 10: /usr/sadm/admin/bin/printmgr
3) fmstat. A server running a late build of Solaris 10 detected a CPU fault and offlined the CPU on the fly - no impact to the system at all. In order to test it, FMA was disabled, and the CPU was re-enabled.
4) inetd is "legacy"? svc.startd
5) If you remove SUNWftpr, the ftp service will be automatically deleted from the SMF repository.
6) As Casper pointed out, in Solaris 10 every process is multithreaded, in principle. That is, all of the functionality of libthread has been moved into libc, and the system-reserved %g7 register is used to point to the internal thread structure and the thread-local storage of each thread.
   /\/\/\/\
   Traditional single-threaded processes are still single-threaded in the sense that only one thread of control is present, so unless you mess with %g7, it is really not possible to detect the difference, unless you try really hard. No behavioral changes are incurred.
7) Restart a network interface: svcadm clear network/physical
8) Fine-grained privileges system? Have a look at 'ppriv -l -v' - I think you want proc_setid.
9) Using NFS v4 (which is available in Solaris 10), "mountd" and "lockd" are obsolete. NFS v4 uses the well-defined port 2049, thus improving firewall support.
10) These functions (gethostbyname and friends) have been superseded by getipnodebyname(3SOCKET), getipnodebyaddr(3SOCKET), and getaddrinfo(3SOCKET), which provide greater portability to applications when multithreading is performed or technologies such as IPv6 are used. For example, the gethostbyname family cannot be used with applications targeted to work with IPv6.
11) Another interesting command: svcprop -p config rpc/bind
12) libsqlite comes with the OS? Does this mean ndbm is finally deprecated, and NIS won't have silly limits?
13) We appear to be getting Tomcat on port 898/tcp
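As a side note on item 10: from the shell, getent consults the name service switch (nsswitch.conf) the same way the newer resolver calls do, so it reflects the hosts/ipnodes ordering discussed elsewhere in these notes. A trivial illustrative check (not from the original note):

```shell
# Look up a host through the name service switch rather than by
# reading /etc/hosts directly.
getent hosts localhost
```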
Items for further study:
1) Also go to and search on 'smf'
/\/\/\/\/\
The Internet services daemon, inetd(1M), has been rewritten as part of SMF. It stores all of its configuration data in the SMF database, rather than /etc/inet/inetd.conf, allowing the SMF tools to be used to control and observe inetd-based services.
Experiments to get NIS working on Solaris 10:

jesus-root /etc) ypcat passwd
can't communicate with ypbind
jesus-root /etc) svcs network/nis/client
STATE          STIME    FMRI
disabled       14:51:03 svc:/network/nis/client:default
jesus-root /etc) svcadm enable network/nis/client
jesus-root /etc) svcs network/nis/client
STATE          STIME    FMRI
online         17:33:58 svc:/network/nis/client:default
jesus-root /etc) ypwhich
Domain not bound on

The problem was largely due to /etc/hosts breakage (?), noted previously.
NFSv4, Inspect domain: cat /var/run/nfs4_domain
Nice article about Solaris 10. Mostly covers DTrace (very interesting system information it can provide), Zones (halfway between Xen/UML and chroot in terms of isolation - only one kernel runs, but more effective than chroot), and SMF (svcs, svcadm - very like AIX's SRC). Solaris 10 features an entirely new network stack named FireEngine. The BSD tools are installed in /usr/ucb, the GNU tools in /usr/sfw, the Solaris development tools in /usr/ccs; /usr/X11 contains Xorg and /usr/X contains openwin.
Checking why an SMF (Service Management Facility) service is having problems:

bash-3.00# svcs -x ssh
svc:/network/ssh:default (SSH server)
 State: maintenance since Fri 11 Mar 2005 08:43:23 PM PST
Reason: Start method failed repeatedly, last exited with status 1.
   See:
   See: sshd(1M)
   See: /var/svc/log/network-ssh:default.log
Impact: This service is not running.
bash-3.00#
Enabling NIS on Solaris 10 went like (shell history):

15  svcs -a | less -sc
16  svcadm enable svc:/network/nis/client
17  svcs -x svc:/network/nis/client
18  ypcat passwd
Trying to enable NIS and sshd using:

jesus-root> cat site.xml
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='profile' name='site'>
  <service name='network/rpc/keyserv' version='1' type='service'>
    <instance name='default' enabled='true'/>
  </service>
  <service name='network/nis/client' version='1' type='service'>
    <instance name='default' enabled='true'/>
  </service>
  <service name='network/ssh' version='1' type='service'>
    <instance name='default' enabled='true'/>
  </service>
</service_bundle>
jesus-root> pwd
/var/svc/profile

This method appears to work pretty well. :)
svcprop looks interesting. As does svccfg.
Useful info on SMF (which is the generic name for this svcadm, svcs, svcprop, svccfg stuff) man 5 smf_bootstrap SEE ALSO pkgadd(1M), pkgrm(1M), svcadm(1M), svccfg(1M), svc.startd(1M), libscf(3LIB), service_bundle(4), attri- butes(5), smf(5), smf_security(5)
From a comment in /etc/inittab, which is now very small in Solaris 10:

# For modifying parameters passed to ttymon, use svccfg(1m) to modify
# the SMF repository. For example:
#
#       # svccfg
#       svc:> select system/console-login
#       svc:/system/console-login> setprop ttymon/terminal_type = "xterm"
#       svc:/system/console-login> exit
About SMF rebuilding its repository after a cloning:

> QUESTION: Is that what happened -- the SMF rebuilt the repository? And,
> since all the files from the original disk were transferred to the new disk
> via the ufsdump/ufsrestore clone operation, why did the SMF rebuild the
> repository?

Yes, that is what happened. The hashes used by the automatic import process to detect manifest file changes include the inode number of the file, which wouldn't be retained by ufsrestore.
> I think you can just toss any odd such command into:
>
>   /a/var/svc/profile/upgrade
>
> and they will be executed on next boot (and the file is then renamed
> away)
>
> Casper

I don't believe we've committed that interface at any public level, so I would not encourage its use. Writing your own one-time service is quite easy and would be preferable.

Dave
/\/\/\/\/\/
This is interesting. So far I've implemented such post-installation modifications and additions through a one-time manifest in /var/svc/site/ which would run a script to do these modifications and then disable itself at the end. I modelled it after the inetd-upgrade.xml manifest. But it would be much simpler to use the upgrade profile. At which point in the reboot process is it run, i.e., is the network and NFS client functionality available?
/\/\/\/\/\/\
(Casper confirms that we need a service, not the upgrade script.)
Removing an SMF service from the repository and re-adding it, in case of a strange SMF problem. This example uses rpcbind, but it could be applied to other fault management resource identifiers (FMRIs):

# svcadm disable network/rpc/bind
# svccfg delete network/rpc/bind
# svccfg import /var/svc/manifest/network/rpc/bind.xml
# svcadm enable network/rpc/bind

Is there any more information in /var/svc/log/network-rpc-bind:default.log? Do you get output that looks something like this from this command?

# svcprop -p config rpc/bind
config/allow_indirect boolean true
config/enable_tcpwrappers boolean false
config/verbose_logging boolean false
Solaris 10's rpcbind at least supports tcp wrappers. However, there is no warm start option (2005-03-16).
The -d and -D switches are generally helpful to check dependencies - e.g. what multi-user-server is dependent on:

svcs -d svc:/milestone/multi-user-server:default
STATE          STIME    FMRI
disabled       Jan_14   svc:/network/rpc/bootparams:default
disabled       Jan_14   svc:/network/nfs/server:default
disabled       Jan_14   svc:/network/rarp:default
disabled       Jan_14   svc:/network/dhcp-server:default
online         Jan_14   svc:/network/ssh:default
online         Jan_14   svc:/milestone/multi-user:default
I've only skimmed this so far, but it appears to be a very good URL about defining your own SMF services:
On debugging memory problems, using libumem and MDB:
A PDF about DynFS, AKA DFS, which I believe is new in Solaris 10:
/usr/ccs/bin/what /usr/bin/ssh Shows ssh's revision history
Solaris 10, SMF stuff:
Run all services: boot -m milestone=all
Go to multiuser (not ^D anymore): svcadm milestone -t all
Verbose boot: boot -m verbose
SMF Guide by Sun:
Native SMF facilities should not modify inittab or inetd.conf
Nice inittab doc:
Nice blog with lots of SMF-related things:
Profile application: The first time the existence of each of the three service profiles listed below is detected, svc.startd(1M) automatically applies the profile.

/var/svc/profile/generic.xml
/var/svc/profile/platform.xml
/var/svc/profile/site.xml
2005-03-30
As a means of getting ssh to come up enabled out of the box after an autoinstall on Solaris 10, I've:
1) Created an "ssh-keys.xml" under manifest/network, which should be able to create the keys using Sun's provided sshd startup script. This manifest is derived from Sun's ssh.xml, which strangely doesn't seem to be setting up the keys itself, even though it knows how - nothing is calling that code, at least not until I set up ssh-keys.xml.
2) Added an enabling of ssh-keys from the profiles directory, in site.xml. site.xml should be used once and then ignored by the system after an install.
nfsstat -m gives nice statistics
2005-04-01
OK, now we've got sshd and NIS coming up upon an initial install. But:
1) limited-fingerd is coming up misconfigured in /etc/inetd.conf
2) /etc/inetd.conf is a bit of a hack in Solaris 10; may want to move these things to SMF
3) Need a complement of patches for Solaris 10
4) We're getting two lines for the hostname of the machine in /etc/hosts
5) sendmail needs an overhaul; should use the vendor daemon and m4
6) Take a crack at making sshd use a machine's hostname instead of localhost in $DISPLAY
7) Xprint configured, but no printers configured? Using "native" printsystem
8) /etc/rc2.d/S74autofs no longer exists - may want a different way of adding -DTYPE=sun4-5
9) We could probably reenable the rpcbind restriction on Solaris 10...
10) srsh is not enabled on first boot; it's configured via inetd.conf; will a second boot enable it?
11) stel -is- enabled, even though it's configured via inetd.conf
12) Took a crack at $ROOT'ing 600-tcpiss
13) yup isn't running; it's in inetd.conf; another reboot?
14) telnet isn't disabled, and it isn't giving a "don't use this" banner
15) The limited-fingerd problem is a tail problem in set-nth-inetd-field:
    + set-nth-inetd-field finger 7 /dcslib/allsys/etc/limited-fingerd
    + usage: tail [+/-[n][lbc][f]] [file]
    +        tail [+/-[n][l][r|f]] [file]
16) "native" printsystem is doing:
    + make-link ../init.d/spc S80spc
    + make-link ../init.d/lp S80lp
17) 994-harden is wanting to look in inetd.conf still
Despite enabling X11 forwarding in sshd_config, when I try to run an X program over ssh from an FC3 system to a Solaris 10 system, I'm getting no $DISPLAY. truss'ing the top-level sshd shows that it's trying to bind to a large number of IPv4 ports, and failing each time, although the IPv6 binds appear to be working.

I've modified sshd_config to use only IPv4 and ignore IPv6, hoping that'll help matters. The change was:

# IPv4 only
ListenAddress
# IPv4 & IPv6
#ListenAddress ::

Despite this change and a reboot, sshd is still trying both IPv4 and IPv6. Now I'm trying:

# If port forwarding is enabled, specify if the server can bind to INADDR_ANY.
# This allows the local port forwarding to work when connections are received
# from any remote host.
GatewayPorts yes

I found a better way around this: I've adapted root and groot to work with localhost:* $DISPLAYs.
gnome keyboard doesn't work out of the box, even in a VNC session. This URL: ...has what appears to be a good way around the problem, which I've put in each/600-gnome-keyboard
Looks like apache is in /usr/apache/bin/httpd on Sol 10
Sol 10 includes a BSD listener, in.lpd, as well as an IPP listener. I'm not 100% sure yet, but the IPP listener appears to be hung off of Apache. :) The print service is apparently still "lpsched", so that IPP/Apache thing is probably just a configuration GUI. The rumors were that Sol 10 would include CUPS; however, I'm seeing only vestiges of CUPS so far.
Looks like e-mail is broken with our autoinstall config, which gives all the more impetus to modernize it. OK, maybe it's not 100% broken:
1) I sent a test message via the "mail" program, and it never arrived
2) I sent another test message via /usr/lib/sendmail -bs, and that worked fine
It may turn out that /bin/mail is invoking sendmail with an option our old sendmail daemon doesn't understand, or similar.
Solaris 9 and 10 have /usr/sadm/bin/smpatch. However, 9 doesn't appear to have the relevant daemon running out of the box, while 10 does.
smpatch: Located in /usr/sadm/bin/smpatch
Useful commands:
smpatch analyze - show what's needed
smpatch update - apply what's needed
Sol 9's smpatch help is terse; Sol 10's is quite long.
Nice article on Zones: Includes a list of links to other articles about zones.
From: Toomas Soome <Toomas dot Soome at microlink dot ee>
Newsgroups: comp.unix.solaris
Date: Tue, 12 Apr 2005 16:54:00 +0300

Roland Mainz wrote:
> Dan Stromberg wrote:
>> Yesterday at the Solaris 10 bootcamp, the instructor told us that ZFS is a
>> 128 bit filesystem.
>>
>> My question is, "Is ZFS 2^128 blocks, 2^128 bytes, or 2^128 bits, or what?"

hm, I have numbers like:
3*10^26 TB per file system
2^48 snapshots
2^48 files per file system
2^56 attributes per file
2^56 entries per directory
16 exabytes max fs size
16 exabytes max file size
16 exabytes max attribute size
2^62 devices per pool
2^62 pools per system
2^62 fs per pool

I hope those numbers are correct :)
Relationship between ZFS and SVM: ZFS can handle huge filesystems, as well as mirroring. No word yet on whether it'll do RAID 5 though. SVM is still good for mirroring the root filesystem, as well as for mirroring or RAID 5'ing UFS filesystems, swap partitions and database partitions.
Solaris and offlining individual RAM pages and CPUs:

>> At the "Solaris 10 bootcamp" I attended yesterday, the lecturer was going
>> kind of fast, but I semi-thought that he said that Solaris 10 can offline
>> individual pages of RAM when a memory error is detected in a page.
>>
>> Is this true, or was I filling in the bits I didn't hear with my own
>> wishful thinking?
>
> It's true. What's more, Solaris 8 and 9 do the same (dependent on kernel
> patch). Same story for CPUs.

While Solaris 8 and 9 can retire individual pages of memory etc., they do so without the benefit of the Solaris 10 self-healing framework. Among other things this means that page retirements are not persistent across reboot on 8 and 9.
/\/\/\/\/\/\
1. A page will be removed if a given threshold of ECC-recoverable errors is exceeded - so far no data is lost, so you can safely "remove" the page.
2. If an uncorrectable error occurs - then I guess it depends. If the page contains kernel data, then probably a system panic will be forced; if in userland, then killing the application would probably be enough, and if the app was under SMF, it will be properly restarted. I think I saw a document explaining this but can't find it right now.
clear_locks can release NFS locks for a given host
If you are looking for additional information about Sun's Solaris 10 OS, then please visit:

Other helpful sites include:
-- A portal designed for Sun users and administrators.
-- Includes all Sun documentation.
-- Presentation
I've modified /dcslib/allsys/etc/check-Inetd.conf in a way that I anticipate will work with Solaris 10. 2005-04-21
Very unhelpful:

bash-3.00# /usr/sbin/svccfg import site.xml
svccfg: couldn't parse document

Somewhat helpful:

bash-3.00# xmllint --valid site.xml
site.xml:5: parser error : error parsing attribute name
<service name='network/yup-tcp' version='1' type='service'>
^
site.xml:5: parser error : attributes construct error
<service name='network/yup-tcp' version='1' type='service'>
^
site.xml:5: parser error : internal error
<service name='network/yup-tcp' version='1' type='service'>
^
site.xml:5: parser error : Couldn't find end of Start Tag service_bundle line 4
<service name='network/yup-tcp' version='1' type='service'>
^
site.xml:5: parser error : Extra content at the end of the document
<service name='network/yup-tcp' version='1' type='service'>
^
bash-3.00#
Very good URL about SMF (Greenfield): Uses setting up Postfix with SpamAssassin and ClamAV as an example, and covers methods; manifests; dependencies (both file and service); checking a service's status; disabling, deleting, editing and recreating a service to get past problems; and creating a service based on inetd information, without needing to create an inetd.conf.
Another good URL on SMF, this time using samba as an example:
When upgrading the CPUs in an Ultra 2, things you may need to do:
1) Upgrade firmware
2) Add dongle to indicate you have one rather than 2 CPUs
3) Make sure CPU module(s) are seated correctly - the levers do NOT ensure this
4) Change jumper J2301 to indicate the new MHz of the CPU(s)
The major concern in current Solaris (also 10) is that with UFS you can get no more than 16TB in one FS. After crossing the 1TB limit, certain restrictions apply: for example, you cannot have more than 1,000,000 inodes per TB. This results in a rather large minimum average file size. If that's a problem, you will have to buy a filesystem like Veritas or wait for ZFS to be part of S10.

For pure performance, getting about 64MB/s isn't hard to do if you read/write larger chunks of data. Almost any disk setup using stripes with 4+ disks should be able to deliver that. We currently run our servers on top of inexpensive hardware RAID-5 boxes using SATA drives internally. The major thing to consider is a battery-backed-up large cache (1GB), which really helps a lot if you read/write small files/chunks.

Thomas
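A quick sanity check of that inode restriction (assuming the limit is counted against 1 TB = 2^40 bytes — that interpretation is my assumption): with at most 1,000,000 inodes per TB, files must average at least about 1 MB for the filesystem to fill up.

```shell
# Minimum average file size implied by the 1,000,000-inodes-per-TB limit,
# taking 1 TB as 2^40 bytes (an assumption about how the limit is counted).
tb=$((1024 * 1024 * 1024 * 1024))
min_avg=$((tb / 1000000))
echo "$min_avg bytes"   # about 1.1 MB per file
```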
New storage product from Sun in the pipeline, likely going to use Opteron, Solaris 10 and ZFS:
Creating a root mirror on a SunFire V440:

esmft1-root /opt) format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
   /pci@1f,700000/scsi@2/sd@0,0
1. c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
   /pci@1f,700000/scsi@2/sd@1,0
2. c1t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
   /pci@1f,700000/scsi@2/sd@2,0
3. c1t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
   /pci@1f,700000/scsi@2/sd@3,0
Specify disk (enter its number): ^D

esmft1-root /opt) raidctl -c c1t0d0 c0t1d0
Disks must be on the same controller.
esmft1-root /opt) raidctl -c c1t0d0 c1t1d0
Volume 'c1t0d0' created

esmft1-root /opt) format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
   /pci@1f,700000/scsi@2/sd@0,0
1. c1t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
   /pci@1f,700000/scsi@2/sd@2,0
2. c1t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
   /pci@1f,700000/scsi@2/sd@3,0
Specify disk (enter its number): ^D

esmft1-root /opt) raidctl
RAID        RAID        RAID        Disk
Volume      Status      Disk        Status
c1t0d0      RESYNCING   c1t0d0      OK
                        c1t1d0      OK
A cron job for a V440's hardware RAID monitoring:

#!/bin/bash
expected=3
result="$(raidctl | tr '\t' '\012' | grep -c OK)"
if [ "$result" != "$expected" ]
then
    echo "Got $result OKs, should have gotten $expected" | /bin/mailx -s "RAID problem on $(uname -n)"
fi
Checking a device definition in the NVRAM: The solution was to edit the nvaliases to include the IDE CD-ROM.

Procedure:
1) Get to the ok prompt
2) show-disks command; select the IDE cdrom
3) nvalias cdrom ^Y - and add the specific device 2,0:f to the end of the ^Y
4) nvstore - to write it away

Then you can boot cdrom from the IDE CD-ROM.
Two articles on Solaris (and others?) password aging:
Enabling a serial console on an x86 Sun:

1. Enable console redirection in the BIOS:
   A. Boot or reboot the server.
   B. When prompted, press <F2> to enter BIOS setup.
   C. Select the Advanced menu from the category selections along the top.
   D. Select Console Redirection.
2. Modify some settings in the OS:
   A. Edit /etc/grub.conf (to allow the OS messages to pass through the console):
      - comment out the 'splashimage' line
      - add 'serial --unit=1 --speed=9600'
      - add 'terminal --timeout=1 console serial'
      - append 'console=tty0 console=ttyS0,9600n8' to the kernel line
      - See InfoDoc 71430 for additional explanation
   B. Edit the inittab (to allow console login):
      - add 'co:2345:respawn:/sbin/agetty -t 60 ttyS0 9600 vt100' to the end of the second-to-last section of the config
      - See InfoDoc 74452 for additional explanation
   C. Make sure that the /etc/securetty file contains:
      - ttyS0
3. Reboot the platform OS, so that these changes will take effect.
Thanks to John Malick, Rob McMahon and Darren Dunham. To find the FMRI for a specific process, one can:

# svcs -a | grep -i xfs
which returns:
online Jan_17 svc:/application/x11/xfs:default

# svcs -l x11/xfs
which returns info on the FMRI.

Or:
/bin/ps -o ctid -p <PID>
to get the contract ID, and then:
svcs -o ctid,fmri | fgrep <contract id>

The reason the process was not being killed when I disabled it was that this service is historically an inetd-controlled service, and there was no stop method in the manifest. I was also directed to the inetadm command.
