Tuesday, September 19, 2017

Oracle - Renaming disk group seg faults

If you are attempting to use the "renamedg" command to rename the disk group in the headers of a cloned Oracle ASM volume and are getting a segmentation fault, look no further.  The key is to make sure that the user running the renamedg command (normally grid or oracle) has ORACLE_HOME set in its environment.

For example, I was running this command:

[grid@server] /u01/app/grid/12.2/bin/renamedg dgname=OLD_NAME newdgname="CLONED_NAME" asm_diskstring='/dev/oracleasm/disks/DISKNAME'

clscfpinit: Failed clsdinitx [-1] ecode [64]
2017-09-19 13:23:43.533 [4160639488] gipclibInitializeClsd: clscfpinit failed with -1.

Segmentation fault

After setting ORACLE_HOME for the grid user, the command worked.  There was not much out on the Internets about this, so hopefully this helps someone.
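The fix, as a minimal sketch (the ORACLE_HOME path here is inferred from the renamedg path above; adjust it to wherever your grid home actually lives):

[grid@server] export ORACLE_HOME=/u01/app/grid/12.2
[grid@server] $ORACLE_HOME/bin/renamedg dgname=OLD_NAME newdgname="CLONED_NAME" asm_diskstring='/dev/oracleasm/disks/DISKNAME'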

Thursday, December 22, 2016

Red Hat Satellite 6.2 - Automatic configuration of Apache vhosts with Puppet

Overview

I have a need to deploy multiple RHEL 7.3 servers with Apache, PHP and mod_ssl, fronted by an external load balancer. Each server runs the same PHP script, which accepts a URL string and does stuff with the parameters that are passed. Nothing too fancy.  There are multiple 3rd parties sourcing the data over the internet.

The existing setup of this "app" is a separate VM for each of the 3rd party sources, and every time another source is added, a new server is built.  This does not scale, is not sustainable and is not a good use of our resources.

I'm writing this post with the assumption that the reader already has a working Satellite environment running and knows how to do basic tasks within Satellite.

Warning!
Doing this on a server that already has any Apache configuration on it will cause all of that configuration to be deleted and replaced with what Puppet thinks should be there.
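(As far as I know, this is the apache module's default behavior: it purges anything in the Apache conf directories that it does not manage. I believe the module has a purge_configs parameter to turn that off, but I left the default alone.)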

The Plan

My plan is to:

  • Use Satellite to deploy and configure the new servers. 
  • I want to set up a vhost for each of the 3rd party sources.
  • I do not want to have to make a new puppet module every time we add a new source.
  • Vhosts need to be defined in a smart class parameter
  • I want to have very minimal configuration happening during the kickstart/provisioning. I'd like puppet to ensure the correct things are installed, running and configured.

Starting Out

TLDR: My first attempt failed miserably and a custom puppet module wrapper is needed.

I found the official apache puppet module on the Puppet Forge ( https://forge.puppet.com/puppetlabs/apache ) and downloaded the tar.gz.  The Dependencies tab shows that this module requires the concat and stdlib modules; I already had those downloaded for other puppet modules I'm using.

I uploaded those modules into a custom puppet repository that I have set up in my Satellite, added the 3 modules to a content view I called "web servers" and published a new version. Once that was complete, I promoted it to the "dev" lifecycle environment for the web servers content view.

The next step is to create new host groups for the web servers. In the host group, I select the puppet classes I need: apache and stdlib. (concat is apparently not selectable, but it just needs to be in the content view.)

Now I'm ready to provision a server to my new host group and see what happens.  After the install was complete, puppet errored out and did not apply any configuration. Not shocking, since just about every puppet module downloaded from the Forge needs to be modified or messed with in some way before it'll work in Satellite.  It was also not shocking that there is just about zero information on configuring Apache with Puppet through Satellite anywhere on the internet or on the Red Hat Customer Portal.

I put my 15 years of Google-fu experience to the test and was able to find some similar things people were doing, which got me on my way.

The Solution

The apache module from the Puppet Forge is not designed to run in an environment like Satellite. Because of this, I had to create a custom puppet module that acts as a "wrapper": it gets the vhost information from Satellite (as a smart class parameter) and passes it to the apache module.

  • To start off, sign into somewhere that has puppet installed and create a new module. This command will ask you questions and will create an empty module folder structure with a metadata file.
puppet module generate <yourname>-httpdwrapper
  • A new directory should be created, called <yourname>-httpdwrapper.  cd into it and then into the manifests directory. Edit the init.pp file and add the class definition:
class httpdwrapper {
        class { '::httpdwrapper::install': }
}


  • Save the file. Then create a new file called install.pp and put the following in it. (create_resources takes a resource type and a hash of hashes: each top-level key becomes the title of a new resource of that type, and its nested keys become that resource's parameters; this is what lets the vhost list live in a Satellite smart class parameter instead of in code.)

class httpdwrapper::install (
  # Hash of vhosts; Satellite overrides this via the smart class parameter
  $myvhosts = {},
) {
  # Declare one apache::vhost resource per entry in the hash
  create_resources('apache::vhost', $myvhosts)
}


  • Save the file. Then go back up two directories so you're in the parent of the <yourname>-httpdwrapper directory and build the module.

puppet module build <yourname>-httpdwrapper


  • A new tar.gz file will show up in <yourname>-httpdwrapper/pkg. Download it and upload it to the custom puppet repository in Satellite.
  • In the Satellite UI, browse to Configure > Smart Class Parameters
  • Search for myvhosts and select it
  • Select Override
  • Change the Key type to hash
  • Put some default vhost information into the Default value field. The text field can be dragged taller so it's easier to read.

host1:
  priority: '10'
  vhost_name: testvhost.domain.com
  port: '80'
  docroot: "/var/www/testvhost"

  • Submit the changes.
  • Go to the content view for the web servers, add the httpdwrapper module to the view, publish and promote.
  • Go to the host group for the web servers, add httpdwrapper::install and submit.
  • Edit the host group again, this time selecting the Parameters tab. myvhosts should be listed there. Override the default value. Now you can put in as many vhosts as you want, as well as any other vhost config options. See https://github.com/puppetlabs/puppetlabs-apache#defined-type-apachevhost for those options. Example:

bobsbait:
  priority: '10'
  vhost_name: bobbait.example.com
  port: '443'
  docroot: "/var/www/bobsbait"
  docroot_owner: apache
  docroot_group: apache
  ssl: true
somewebsite:
  priority: '10'
  vhost_name: somewebsite.example.com
  port: '80'
  docroot: "/var/www/somewebsite"
underconstruction:
  priority: '10'
  vhost_name: underconstruction.example.com
  port: '80'
  docroot: "/var/www/underconstruction"


  • Click submit. The next time the puppet agent runs on the server, it should create a new conf file for each site in /etc/httpd/conf.d and create the web roots.  I'm doing the devops!


I think the create_resources puppet function can be used to pass information in a similar fashion to the other classes within the apache module, so I should be able to just add new parameters to the httpdwrapper module and then change them in the UI.
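Untested, but a sketch of that idea ($apache_settings is a hypothetical parameter name I made up; any keys passed in would have to be valid parameters of the apache base class):

class httpdwrapper::install (
  $myvhosts        = {},
  $apache_settings = {},
) {
  # Declare the apache base class with whatever parameters came from Satellite
  create_resources('class', { 'apache' => $apache_settings })
  # One apache::vhost per entry in the hash
  create_resources('apache::vhost', $myvhosts)
}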

Hope this helps someone figure some stuff out.

Thursday, February 4, 2016

VMware - Get the UUID of every host in vSphere

I used virt-who to scan my vSphere environment and populate Satellite 6 with which guests are running on which hosts, as well as to enter the ESX hosts into the Content Hosts section so I could assign them subscriptions.

When virt-who adds an ESX host, it names it with its UUID in Content Hosts, which can get confusing later.  I wanted to rename the content host entries to the actual ESX host names. To do this, I had to figure out the UUID of each ESX host, and there's no easy way to do that in the vSphere client.

PowerCLI to the rescue!


  1. Open PowerCLI
  2. Connect to the vCenter
    1. Connect-VIServer <vcenter name>
  3. You will get prompted by Windows for authentication
  4. Run this command to see the UUIDs for every host in that vCenter
    1. Get-View -ViewType HostSystem -Property Name, hardware.systeminfo | Select Name, @{N="UUID" ; E={$_.hardware.systeminfo.uuid } }
  5. Match up the UUIDs from the command output with the UUID entries in Content Hosts and rename them with their real names (see the tip below).
  6. Done!
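If you have a lot of hosts, something like this should dump the list to a CSV to make the matching easier (an untested sketch; the output path is just an example):

Get-View -ViewType HostSystem -Property Name, hardware.systeminfo |
  Select Name, @{N="UUID" ; E={$_.hardware.systeminfo.uuid } } |
  Export-Csv C:\temp\esx-uuids.csv -NoTypeInformation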

Friday, April 10, 2015

FlexPod - Setting up a UCS Mini with Direct Attached Storage (DAS)

Overview
This entry documents what we did to set up a UCS Mini FlexPod with direct attached storage, using iSCSI, for a remote office.

This document will not cover step-by-step instructions for common UCS or NetApp tasks (such as creating service profiles or a storage volume), only the things specific to the UCS Mini configuration.

Equipment
The hardware used:

Type                  Model                           Firmware / OS
Chassis               UCS 5108 AC2                    3.0(1c)
Fabric Interconnects  UCS 6324                        3.0(1c)
Blades                UCS B200 M3                     3.0(1c)
Storage               FAS 2552 (no external storage)  Switchless cDOT 8.3

Reference
For information on how to set up a UCS chassis from scratch, please see the awesome Speak Virtual UCS setup guide.

I have used this guide to set up two standard UCS chassis, as well as a framework for setting up the UCS Mini.

Assumptions
I am assuming:
  • The equipment is unboxed and racked
    • Do not turn anything on yet, just get it in the racks
  • Proper network infrastructure exists (Cisco switches)
    • I am not a network guy, so sadly this guide will not include any network configuration besides setting IP addresses.
Storage Configuration
The storage will be set up first.  My example is a FAS 2552 with dual controllers and no external storage, in a switchless cDOT 8.3 cluster.

  1. Download the official NetApp setup program and guide from the NOW site
    • http://mysupport.netapp.com/NOW/public/system_setup/
    • I am using the FAS 2552 setup guide
  2. Plug in the power, but do not turn it on. Leave the power supply switch off.
  3. Using the supplied network cable, connect the two ACP ports (marked with a picture of a wrench with a padlock on it).
  4. Using SAS cables, connect the left SAS port on controller 1 into the right SAS port on controller 2.
  5. Connect the right SAS port on controller 1 into the left SAS port on controller 2.
  6. Using 10 gig SFP cables, connect e0e on controller 1 to e0e on controller 2
  7. Connect e0f on controller 1 to e0f on controller 2
  8. Connect Remote Management port (wrench icon) to your management network.
  9. When finished, it should look something like this: [photo of the cabled FAS 2552]
    • Forgive the cable mess; this was a temporary setup before we shipped it to the remote site.
    • Ports e0c and e0d are going to the Fabric Interconnects; we'll get to that later.
  10. Cable a Windows workstation to the same network that the NetApp's management ports are connected to.
  11. Install and run the NetApp_SystemStartup.exe
  12. Press the Discover button
  13. Turn on the controller power switches when directed
  14. Follow the onscreen instructions to configure cluster information, authentication, disk, licenses, etc.
    • Unfortunately, I did not take any screen shots during this part or take good notes like I should have.
  15. If everything is cabled and configured correctly, the NetApp can be accessed over SSH or using OnCommand.
  16. Create an SVM with iSCSI enabled (see the sketch below)
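A rough sketch of step 16 from the cluster shell (the SVM, root volume and aggregate names here are made up; use your own):

vserver create -vserver iscsi_svm01 -rootvolume svm_root -aggregate aggr1 -rootvolume-security-style unix
vserver iscsi create -vserver iscsi_svm01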

UCS Mini Configuration
Please follow the Speak Virtual UCS setup guide to get the UCS Mini mostly set up.  I am only going to document what was different here. Also remember that this guide configures everything for use with iSCSI.

The first thing that was different was configuring the Unified Ports. On each Fabric Interconnect:
  1. On LAN > Appliances > Fab a/b > VLANs
    • Create a vlan. I called mine "iSCSI-App-A/B"
      • Type in a vlan ID that is not in use on your network
      • Select Fab A or B, not both
      • Not native vlan
  2. On LAN > Policies > Appliances > Network Control Policies
    • Create a policy called "AppModeNCP"
      • CDP Disabled
      • MAC Register Mode: Only native vlan
      • Action on Uplink Fail: Warning
  3. On Equipment > Fab Interconnects > fab a/b > Select Configure Unified Ports
    • Set ports 1 and 2 to Appliance Ports. 
      • Set the VLAN to the VLAN created in step 1
      • Set the network control policy to AppModeNCP
      • The mode is Access
    • Set ports 3 and 4 to Network port
      • Mode is Trunk
  4. Wait for the Interconnect to reboot. Then change the other Fabric Interconnect.
  5. On LAN > Policies  > root > vNIC Templates
    1. Create a vNIC Template for iSCSI_A
      1. Select Fab A
      2. Select Adapter
      3. Select iSCSI-App-A for vlan
      4. Updating Template
      5. 9000 MTU
      6. ISCSI_A MAC pool (see the Speak Virtual doc on how to create a MAC pool)
      7. QoS Policy VMWare
      8. Network Control Policy: AppModeNCP
      9. Dynamic vNIC
    2. Create a vNIC Template for iSCSI_B
      1. Select Fab B
      2. Select Adapter
      3. Select iSCSI-App-B for vlan
      4. Updating Template
      5. 9000 MTU
      6. ISCSI_B MAC pool (see the Speak Virtual doc on how to create a MAC pool)
      7. QoS Policy VMWare
      8. Network Control Policy: AppModeNCP
      9. Dynamic vNIC

Next is cabling.
  1. On Fab A
    • Connect the top two ports to e0c on both NetApp controllers 1 and 2
    • Connect bottom two ports to 10 gig ports on the network switches.
  2. On Fab B
    • Connect the top two ports to e0d on both NetApp controllers 1 and 2.
    • Connect bottom two ports to 10 gig ports on the network switches.
There's some more stuff; I will update this page when I figure out what I did.

Friday, March 13, 2015

NetApp - Using SnapMirror to Migrate a 7mode NAS Volume to cDOT

At work, we're in the process of migrating from two FAS3250s running 7mode to two FAS8040s running clustered Data ONTAP (cDOT). Both sets of controllers are running side by side, and we need to migrate everything from the 3250s to the 8040s.

First hurdle: we have a bunch of ESXi hosts that boot from SAN, and it turns out that it is impossible to SnapMirror SAN (FCP or iSCSI) volumes between 7mode and cDOT.  We had to rebuild all our ESXi hosts and do a full restore of our SQL cluster database, since it was also on SAN volumes.

Next challenge: getting our many NFS volumes transferred over.  NetApp offers a transition tool to help with this process, but I did not try it. I found a bunch of guides on the NOW site and random blogs, but they all missed a couple of important steps, which I wanted to document.

Here is the process I followed for migrating a 7mode NAS (NFS) volume to cDOT. It DOES NOT WORK FOR SAN (FCP, iSCSI) VOLUMES! I'm assuming the reader already has their cDOT system up and running with SVMs and LIFs configured.

CLUSTER = cDOT cluster name
NODE = individual controller node in the cDOT cluster
VSERVER = NAS SVM name
7MODE = 7mode controller with the source volume on it

Tab completion is your friend!

  1. SSH into the cluster using your favorite SSH client. Sign in with the admin account.
  2. Set up a peer relationship
    1. vserver peer transition create -local-vserver VSERVER -src-filer-name 7MODE
  3. Create destination volume to snap mirror to
    1. volume create -vserver VSERVER -volume <volume name> -aggregate <aggrname> -size 100GB -type DP
  4. Create an Intercluster LIF
    1. network interface create -vserver NODE -lif intclust01 -role intercluster -home-node NODE -home-port <port that has connectivity to 7MODE> -address <ip address> -netmask <netmask>
    2. network routing-groups route create -vserver NODE -routing-group <name> -destination 0.0.0.0/0 -gateway <gateway of the intercluster LIF IP>
  5. Verify communication
    1. network ping -lif intclust01 -lif-owner NODE -destination 7MODE
      1. It should say that it is alive.
  6. IMPORTANT! This is where the other guides I found on the Internet failed.
  7. On 7MODE, edit the /etc/snapmirror.allow file.
  8. On a new line, put
    1. interclusterLIF:<volume to SnapMirror>
    2. example: fas8040-01:nfs_volume_01
    3. The intercluster LIF name needs to resolve, on the 7MODE system, to the intercluster LIF IP exactly as you typed it here. I had to edit the /etc/hosts file on the 7MODE system and add an entry mapping my intercluster IP address to fas8040-01 before it would work for me (for example: 10.10.10.50 fas8040-01, using your intercluster LIF IP).
  9. Back on the CLUSTER shell. Create the SnapMirror relationship
    1. snapmirror create -source-path 7MODE:<src_volume> -destination-path VSERVER:<volume created in step 3> -type TDP
  10. Initialize the SnapMirror
    1. snapmirror initialize -destination-path VSERVER:<volume>
  11. View progress
    1. snapmirror show -destination-path VSERVER:<volume>
  12. Status should be transferring. It took a few minutes for it to actually start transferring data for me.
  13. If it has failed, log onto the 7MODE node and try to initialize the SnapMirror again from the cDOT system. The 7MODE console might give you a clue as to what is wrong; you might have to go mess with /etc/snapmirror.allow or /etc/hosts.
  14. The volume should now be SnapMirrored
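Not covered above, but when you're ready to cut over, the usual pattern (hedged, since I have not written up our cutover yet) is to stop writes on the 7MODE side, run a final transfer, and then break the mirror so the cDOT volume becomes writable:

snapmirror update -destination-path VSERVER:<volume>
snapmirror break -destination-path VSERVER:<volume>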

Thursday, February 19, 2015

Red Hat Satellite - 5.7 - Kickstart hangs on running post installation scripts

After I upgraded from Satellite 5.6 to 5.7, I was unable to kickstart any new servers using my existing kickstart profiles.  The kickstart would start, do all my partitioning and package installation, and then hang on "Running post installation scripts."  I let it sit for over 30 minutes; no go.

The server had not registered with my Satellite yet, so I knew it had not processed the entire kickstart file.  I started making a new kickstart profile to see if it was something that didn't convert well in the upgrade.  When I got to the scripts part, I saw a new post script listed called "Registration and server actions".

I could not do anything to the script besides reorder it. I reordered the scripts so that the new script was on the bottom, updated the profile and tried to kickstart again. This time, it worked.

TLDR: Put the "Registration and server actions" post script on the bottom of your kickstart script listing.

Monday, February 2, 2015

Red Hat Satellite 5 - Ran out of disk after 5.7 update

I'm still using Satellite 5 in production because Satellite 6 is just not ready for prime time yet. At least, not in my opinion.  The recent Satellite 5.7 update changed the entire interface to look a lot more modern and included a bunch of bug fixes.

The 5.6 to 5.7 update went perfectly for me. Nothing broke and I did not lose any functionality.  However, today my Satellite 5 server stopped working. I could not get to the web interface and client servers could not install packages.  I signed into the server and saw that the / partition was full.

Some searching turned up that one of the 5.7 changes moved the location where Postgres keeps its data. The upgrade process moved all of the data out of the old location (where I had a separate mount point) and into a new location in /opt, where I don't have enough disk.

  • Old location in 5.6
    • /var/lib/pgsql
  • New location in 5.7
    • /opt/rh/postgresql92/root/var/lib/pgsql
I made sure Satellite was fully stopped:
  • rhn-satellite stop
Then I copied the data directory from the new location back to the old location (the separate disk still mounted at /var/lib/pgsql). After that, I unmounted /var/lib/pgsql, changed the mount point in fstab and remounted the disk at the new location.  Then I started Satellite up:
  • rhn-satellite start
Once I verified that Satellite was back and happy, I stopped Satellite, unmounted the partition and deleted everything that was left in the underlying data directory so my / partition would get its space back. Then I remounted pgsql and started Satellite. Everything is happy.
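For reference, roughly the whole sequence as commands (a sketch from memory; the fstab edit itself is manual, and your device and paths may differ):

rhn-satellite stop
# Copy the data from / onto the separate disk, which is still mounted at the old path
cp -a /opt/rh/postgresql92/root/var/lib/pgsql/. /var/lib/pgsql/
umount /var/lib/pgsql
# Edit /etc/fstab: change the mount point from /var/lib/pgsql to /opt/rh/postgresql92/root/var/lib/pgsql
mount /opt/rh/postgresql92/root/var/lib/pgsql
rhn-satellite start
# Once everything checks out, reclaim the space on /
rhn-satellite stop
umount /opt/rh/postgresql92/root/var/lib/pgsql
rm -rf /opt/rh/postgresql92/root/var/lib/pgsql/*
mount /opt/rh/postgresql92/root/var/lib/pgsql
rhn-satellite start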