Wednesday, August 21, 2013

Openstack sqlalchemy db console

Coming from the Rails world, I strongly missed a "rails console" equivalent for Python in OpenStack. I wanted to open a Python console and call the SQLAlchemy db API methods to interact with the database and to debug complex SQLAlchemy queries.

I finally figured out a way to do it from a plain Python console.

Open a Python console and type the lines below:

import os
import sys

from oslo.config import cfg
from nova import config
from nova import context
from nova import db
from nova import version

CONF = cfg.CONF

#Load up the default configuration
#project is the project you are working on (nova, glance etc.)
#version comes from the project's version_string()
#default_config_files: leave it as None; oslo.config will then look for a file
#named <project>.conf in the default locations (/etc, /etc/<project>, ~/.<project>)
cfg.CONF([], project='nova', version=version.version_string(),
         default_config_files=None)
 
db.instance_get_all(context.get_admin_context())
#Now, start accessing sqlalchemy methods
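From here, the rest of the db api is available in the same session. A quick sketch of what that might look like (the uuid string below is just a placeholder; replace it with a real one from your database):

ctxt = context.get_admin_context()

#Print a couple of columns for every instance
for instance in db.instance_get_all(ctxt):
    print instance.uuid, instance.vm_state

#Fetch a single instance by its uuid
db.instance_get_by_uuid(ctxt, 'replace-with-a-real-uuid')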


Happy debugging......

Thursday, July 11, 2013

Installing Ubuntu Precise Server On A Xen Server Hypervisor

Recently, I had to install a VM on a XenServer hypervisor that I had locally. I intended to use this VM as the platform for my devstack setup with XenServer.

Along the way, I had to tweak a lot of parameters on the VM to get this working, so I thought of scripting the entire process out. Here is the script I used.

#!/bin/bash
if [ $# -ne 2 ]; then
   echo "Usage: $0 <vm-name-label> <ram-size-in-bytes>"
   exit 1
fi

#Create ubuntu precise template if does not exist

TEMPLATE_EXISTS=`xe template-list name-label="Ubuntu Precise (64-bit)" params=uuid --minimal`

if [ "$TEMPLATE_EXISTS" == "" ]; then

    TEMPLATE_UUID=`xe template-list \
            name-label="Ubuntu Lucid Lynx 10.04 (64-bit)" params=uuid --minimal`
    NEW_TEMPLATE_UUID=`xe vm-clone uuid=$TEMPLATE_UUID \
            new-name-label="Ubuntu Precise (64-bit)"`
    xe template-param-set other-config:default_template=true \
            other-config:debian-release=precise uuid=$NEW_TEMPLATE_UUID

fi

#Create a VM with the new template

VM_UUID=`xe vm-install new-name-label="$1" template="Ubuntu Precise (64-bit)"`

#Create network interfaces for this VM

XEN_NET_UUID=`xe network-list params=uuid bridge=xenbr0 --minimal`
xe vif-create network-uuid=${XEN_NET_UUID} vm-uuid=${VM_UUID} device=0
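
#Point the installer at the Ubuntu archive, disable the PV VNC console
#and pin the VM's memory to the requested size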
xe vm-param-set other-config:install-repository="http://archive.ubuntu.com/ubuntu/" uuid=${VM_UUID}
xe vm-param-set other-config:disable_pv_vnc=true uuid=${VM_UUID}
xe vm-memory-limits-set dynamic-max=$2 dynamic-min=$2 static-max=$2 static-min=$2 uuid=${VM_UUID}

echo "Starting VM with uuid ${VM_UUID}"
xe vm-start uuid=${VM_UUID}
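
To use it, pass a VM name label and the RAM size in bytes. For example (assuming the script was saved as create-precise-vm.sh, a name I made up), this creates a VM with 4 GB of RAM:

./create-precise-vm.sh precise-devstack 4294967296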

Thursday, July 4, 2013

Creating a cell environment using devstack

Recently, I stumbled upon a task to create an OpenStack setup using devstack in a cells environment.

I am assuming that the reader is familiar with OpenStack and devstack.

The idea I started with was to have one VM running the nova-api, nova-cell, glance and keystone services.

I created two more VMs on two different XenServer hypervisors to act as child cells.

Each child runs the nova-cell, nova-compute, nova-network, nova-scheduler and nova-conductor services.

Below are the steps I followed on the parent and the children to get this working.

Parent

1. Download Devstack

git clone https://github.com/openstack-dev/devstack
 
2. I had to tweak localrc a bit before stacking up to enable nova-cells. My localrc contained:

enable_service n-cell n-api-meta
disable_service n-cpu n-net n-sch

3. Run ./stack.sh. Once stack.sh completed, I tweaked /etc/nova/nova.conf to include

[cells]
enable=True
name=api
cell_type=api


4. Restart nova-api.
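
In a stock devstack setup the services run inside a screen session (named stack by default), so one way to do this is to attach to it, switch to the n-api window, stop the process with Ctrl-C and re-run it from the shell history:

screen -x stack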


Children

1. Download devstack

git clone https://github.com/openstack-dev/devstack

2. Tweak localrc to contain the content below:

enable_service n-cell
disable_service n-api key g-api

3. Run ./stack.sh

4. Tweak /etc/nova/nova.conf

[DEFAULT]
#Disable quota checking in child cells. Let API cell do it exclusively.
quota_driver=nova.quota.NoopQuotaDriver
glance_api_servers=<glance server of parent>

[cells]
enable=True
name=cell1
cell_type=compute

Notice that I am telling the child to use the glance server running in the parent.
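
For example, if the parent's IP were 192.168.1.10 (a made-up address), the entry would look like:

glance_api_servers=192.168.1.10:9292

9292 being glance's default API port.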

On a side note, these are the essential settings in nova.conf for a compute node on XenServer:

compute_driver = xenapi.XenAPIDriver
firewall_driver = nova.virt.firewall.IptablesFirewallDriver 

xenapi_connection_password = ROOT PASSWORD FOR HYPERVISOR
xenapi_connection_username = root
xenapi_connection_url = http://HYPERVISOR_IP
 


Cell Creation 

After this setup, I had to hook the parent cell and the child cells together.

On the parent, run this:

1. nova-manage cell create --name=cell1 --cell_type=child --username=guest --password=password --hostname=<child ip> --port=5672 --virtual_host=/ --woffset=1.0 --wscale=1.0 

2. Restart n-cell

On the child, run this:

1. nova-manage cell create --name=parent --cell_type=parent --username=guest --password=password --hostname=<parent ip> --port=5672 --virtual_host=/ --woffset=1.0 --wscale=1.0

2. Restart n-cell.

After this, if you tail the n-cell logs on the parent and the children, you should see the children periodically updating the parent with their capabilities. This concludes your cell setup.