Tuesday, December 5, 2017

Posted on

  • Continued to help several users with connecting to the new Controller Clearquest Database for Vinh's users
  • Added Version Tree button to the bin_merge prompt dialog box.
  • Resolved problem with forking from PerlTk

Forking in PerlTk

The issue here is that calling fork(2) on Windows in general just doesn't work well. Clearcase's Perl is based on ActiveState Perl, which is not the best implementation of Perl. Cygwin's Perl is by far a better implementation, but it works from a true POSIX environment, which is specifically what Cygwin is all about. However, Cygwin's Perl does not support PerlTk (I posted on Cygwin's mailing list about this - it would be great if Cygwin would finally support PerlTk), and Cygwin would need to be present on all Windows systems.

In general, while fork(2) does work under ActiveState, it does not work if you have Tk objects created. Chris had suggested using "start <program>", which is fine - for Windows - but would fail in Unix. I like to write my code such that it works on both Windows and Unix, so I tried in vain to get fork to work. However, ActiveState's fork(2) call is horribly broken with Tk in the mix, so I had to code around it like this:

sub VersionTree {
  my $file = shift;

  my $cmd = "cleartool lsvtree -graphical $file";

  if ($^O =~ /mswin|cygwin/i) {
    # Windows: let cmd.exe's start launch the GUI in the background
    system "start /b $cmd";
  } else {
    # Unix: fork and let the child run the command; the parent returns
    my $pid = fork;

    return if $pid;

    system $cmd;
    exit;
  } # if
} # VersionTree

Friday, December 1, 2017

Configuring Linux to Authenticate to Active Directory using Winbind

Posted on 
Under Linux, you can use winbind from the Samba suite of tools to authenticate with Windows Active Directory. Refer to Setup CentOS to authenticate via Active Directory for how to set up CentOS to authenticate to Active Directory. Windows uses Kerberos to perform authentication, so you'll need to set that up. The above link talks about running authconfig with lots of parameters to set it all up. That may be a better way in the end, but I got it working by starting with authconfig and then tweaking. Here are my resultant files that seem to work. Later I might figure out how to do it entirely with authconfig.
  1. First you'll need some software if it was not previously installed. The following installs all you need for CentOS (Ubuntu still needs to be investigated for the corresponding apt-get installation):

    Install software
    $ yum -y install authconfig krb5-workstation pam_krb5 samba-common
  2. Edit /etc/krb5.conf to look like:
    /etc/krb5.conf (Audience)
[libdefaults]
default_realm = AUDIENCE.LOCAL
dns_lookup_realm = true
dns_lookup_kdc = true
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true

[realms]
audience.com = {
  kdc = dc1.audience.local
  admin_server = dc1.audience.local
}

    /etc/krb5.conf (Knowles)
[libdefaults]
default_realm = KNOWLES.COM
dns_lookup_realm = true
dns_lookup_kdc = true
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true

[realms]
knowles.com = {
  kdc = dc1.knowles.com
  admin_server = dc1.knowles.com
}
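As a quick sanity check on these files, you can pull the default realm back out with awk. This is just a sketch - the sample path and contents below are illustrative, not from a real system:

```shell
# Write a sample krb5.conf fragment, then extract its default_realm.
cat > /tmp/krb5.conf.sample <<'EOF'
[libdefaults]
default_realm = AUDIENCE.LOCAL
dns_lookup_realm = true
EOF

awk -F' = ' '$1 == "default_realm" { print $2 }' /tmp/krb5.conf.sample
# → AUDIENCE.LOCAL
```

From there, kinit <user>@AUDIENCE.LOCAL followed by klist is the usual way to confirm the Kerberos side actually works against the KDC.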

Syntactical sugar

Posted on 
Many people write code specifying all kinds of what I call "syntactic sugar". This comes largely in the form of punctuation characters, especially in Perl. Such things are usually included for the benefit of the compiler, or more correctly the parser. They are there to avoid ambiguity, but a coding style of specifying all of the defaults just so that it's clear is IMHO ridiculous. Human understanding of languages uses shortcuts, context and implied meaning. Write your code for the next human - not the parser.

So, for example, I encountered the following line of Perl code the other day:

if (scalar(@ARGV != 1)) { die "\nUsage:\nAD_auth.pl \%username\%\n" }

Here is some of the unnecessary syntactic sugar:
  • scalar is unnecessary. An array evaluated in a scalar context returns the number of entries in the array. Comparing an array to a number like 1 evaluates the array in a scalar context.
  • The () around @ARGV != 1. Parentheses used to specify correct precedence in mathematical expressions - sure, but only as many as you need. Here, however, the parentheses are unnecessary. Sure, some say "it's a function call therefore its parms should be enclosed in ()". I ask "why?". Do you likewise always do print ("a string"), or do you do print "a string"?
  • The () around the boolean expression for if. It's required, unless the if appears at the end of the statement...
  • The {} around the then portion of the if statement. Technically not needed, as without them the die statement would be the only statement in the then block. However, in practice I pretty much always use {} even for one statement. I find that way too often I need to stick more statements in there, and if so the {} are already there.
  • The needless escape of %: there is no need to specify \%. Oh, and this is a bad, non-portable practice: if they meant %username% as in "expand the environment variable username", that would only work in Windows.
Here's how I think this would be better written:

die "Usage: $FindBin::Script <username>\n" unless @ARGV == 1;

I believe the advantages are:
  • Dynamically obtaining the script's name instead of hard coding it
  • Specifying <username> is a more standard way of indicating that the user should fill in this parameter
  • Positive logic, with the help of unless. unless is like if not, but I find too many nots become confusing. It took a little bit of time for me to feel comfortable with unless, to trust it was the right thing. But it's pretty much English - die unless the array @ARGV has exactly 1 element. unless also reads better at the end of a line.
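You can compare the two checks from the shell. A quick sketch, assuming a perl on PATH (the script name is hard-coded here rather than pulled from $FindBin::Script):

```shell
# Both forms die without exactly one argument; the terse form reads better.
perl -e 'die "Usage: AD_auth.pl <username>\n" unless @ARGV == 1' adefaria \
  && echo 'one arg: OK'
perl -e 'die "Usage: AD_auth.pl <username>\n" unless @ARGV == 1' 2>/dev/null \
  || echo 'no args: usage error'
```

This prints "one arg: OK" for the first call and "no args: usage error" for the second.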

Setting up PuTTY to do passwordless logins using pre-shared key

Posted on 


Seems I've been here before: http://cygwin.com/ml/cygwin/2012-01/msg00284.html
This site seems helpful: https://support.hostgator.com/articles/specialized-help/technical/ssh-keying-through-putty-on-windows-or-linux

Generating your ssh keys

You need to use PuTTYgen to generate your ssh keys to share. One problem is that PuTTY does its own form of ssh keys, which is non-standard, or at least non-Unix-like. Once you install PuTTYgen you should generate your key. SSH-2 DSA is more secure than the default SSH-2 RSA keys, so toggle that on, click Generate, then move the cursor around the blank area. PuTTYgen uses this movement to generate the key's randomness.



Once this is generated, you can set a key comment, or a passphrase. The comment isn't important, as it's just a label, but the passphrase will require that you enter this passphrase when using the key. If you're trying to have a "passwordless login" then this should be left blank.



Now click Save Public Key, and save this file to a name of your choosing somewhere safe on your hard drive. Please remember the location of this file, as it is needed later. Then do the same for Save Private Key.

Installing your ssh keys into the server

Now that we have the keys generated and saved, we need to get the keys onto the server. Copy the data inside the PuTTYgen window under "Public key for pasting into OpenSSH authorized_keys file". The key appears to be to put these keys into your ~/.ssh/authorized_keys2 file, not your ~/.ssh/authorized_keys file. You want to put this into your NFS home directory, not your Windows home directory. Why we maintain two different sets of home directories is unknown.
Note: If you don't have a .ssh directory on your Unix/Linux machines then execute ssh-keygen -t dsa on Linux to create that and your DSA keys.
Note 2: If a Linux machine does not use your NFS mounted home directory then you'll have to duplicate your home environment and things like ~/.ssh on the machines that do not share your one home directory.
Make sure your ~/.ssh/authorized_keys2 is set to 600.
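Putting the server-side steps together, the setup amounts to the following minimal sketch (the key text itself is whatever PuTTYgen displayed; appending it is left as the paste step):

```shell
# Create ~/.ssh with the permissions sshd insists on, then prepare
# authorized_keys2; the PuTTYgen "public key for pasting" text gets
# appended to that file.
SSHDIR="$HOME/.ssh"
mkdir -p "$SSHDIR"
chmod 700 "$SSHDIR"
touch "$SSHDIR/authorized_keys2"
chmod 600 "$SSHDIR/authorized_keys2"
```

sshd is picky about these modes: a group- or world-writable .ssh directory or key file will silently disable public key authentication.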

Setting up colored sessions for PuTTY and tying that to shortcuts

It's best to set up sessions in PuTTY. A session is merely a set of configuration parameters tied to a name. We will create sessions for different types or categories of machines, then invoke them with different machine names. We will set up sessions for dev/test/prod environments.
In putty do the following:
  • Window: Lines of scrollback - you might want to set this to something larger than 200 like maybe 2000.
  • Window: Colours: Set Default Background and Default Bold Background to some color. You may want to use a theme of dev blue, test 3D orange and prod red, for example. I also change Default Foreground to a solid white and Default Bold Foreground to a bright yellow. This setting will be the main setting to change between dev/test and prod.
  • Connection: Data: Auto-login username. Set this to your username (e.g. adefaria)
  • Connection: SSH: Auth: Private key file for authentication: Browse to where you put your generated Putty Private Key.
  • Connection: SSH: X11 - toggle on Enable X11 forwarding. Consider installing Cygwin's Xorg server
Then go back to the Session page, enter a name for your Saved Session, and click Save. Next you can change that name, go to Window: Colours, set up your color scheme for test or prod, and save those sessions. Now you have dev/test and prod sessions colored to your liking.

Executing PuTTY sessions

Now you can set up shortcuts to use these saved session parameters but apply them to different machines like so:
"C:\Program Files\Putty\Putty.exe" -load dev cm-job-ldev01
"C:\Program Files\Putty\Putty.exe" -load test cm-job-ltest01

Bugzilla::Webservice

Posted on 
On mozilla.support.bugzilla Thorsten Schöning wrote:
Still, how does one get this Bugzilla::Webservice::Bug?
There are different ways of course, one is BZ::Client, others depending on your environment may exist or you need to create your own client or whatever.
Name a way then. You see, when I say BZ::Client I can do cpan BZ::Client and I will get actual downloaded code that I can run (it's pretty clear I'm working in Perl, no?). However, when I say Bugzilla::Webservice::Bug, I cannot do cpan Bugzilla::Webservice::Bug and get code, because cpan says it can't find anything listed under Bugzilla::Webservice::Bug. Now I know that it's not part of CPAN, so what is it part of and how do I get the code?

'Cause you just got through telling me that BZ::Client is not an official Bugzilla project, and yet you pointed me specifically to http://www.bugzilla.org/docs/4.4/en/html/api/Bugzilla/WebService/Bug.html#search which is on bugzilla.org. Was I not to think that that would be "an official Bugzilla project"? Yet there's nothing there that I can see about getting that code!
Looking at https://wiki.mozilla.org/Bugzilla:Addons#Client_libraries_for_the_Bugzilla_Webservices.2FREST_API I see it telling me to use BZ::Client (and not even mentioning BZ::Client::Bug, which is much more to the point)
Because the wiki can't know your use case. :-)
Looking at CPAN for BZ::Client (http://search.cpan.org/~jwied/BZ-Client-1.04/lib/BZ/Client.pm) we have the following methods:

new
url
user
password
login
api_call

Not exactly a bastion of functionality! The Wiki could anticipate that I might need a bit more functionality for any functional use case! Just my opinion of course :-).

Looking at BZ::Client::Bug (http://search.cpan.org/~jwied/BZ-Client-1.04/lib/BZ/Client/Bug.pm) I see:

CLASS METHODS
get
new
create
search
INSTANCE METHODS
id
alias
assigned_to
component
creation_time
dupe_of
is_open
last_change_time
priority
product
resolution
severity
status
summary

At the very least, couldn't the wiki mention both?
for a client side solution. I guess I didn't mention I'm working client side. Actually I see little difference between client and server side APIs here.
That sounds like one of the design goals of BZ::Client, but I'm only guessing.
You mean a design goal to work on more than one machine? I'd say it's basic functionality but perhaps that's just me.
I want the API that can get me the Bugzilla data from wherever I happen to run it. Oh and I'm working in Perl.
I don't think I understand this paragraph, BZ::Client works wherever your client is able to access your Bugzilla from. But your problem is totally different, you want functionality that may or may not be provided currently by the WebService-API. Check the docs and if it's not sufficient check the code and maybe provide patches for the docs for anyone else with a similar problem.
It looked like you were telling me that BZ::Client was not an official project and that I shouldn't be using it. I realize you were saying that I'd be better off contacting that module's owner, and I have (have not heard back yet). It's just I thought there would be some sort of official or at least full featured way to access the functionality of Bugzilla from scripts, specifically Perl scripts as from what I can tell Bugzilla is largely written in Perl. And as it is quite old in Internet terms I find it surprising there isn't a full featured Perl module that everybody's using already.

So far I've found WWW::Bugzilla and things like WWW::Bugzilla::Bugtree and WWW::Bugzilla::Search. These use some sort of WWW::Mechanize thingy and seem horribly inefficient. For example, the new method requires a bugid and takes a while to instantiate. If a script had to process 1000 bugs and instantiate each one of them at 4-5 seconds per instantiation, it would take way too long to get anything done. The Search module's no better. If you have a couple of thousand bugs qualifying, then it gathers all of the data, returns a huge structure, and takes tens of minutes to return from the search call. I emailed the author about this module but haven't received a response.

I then found BZ::Client and BZ::Client::Bug, which work fairly decently. It's quicker and pretty flexible but got stopped at the lack of search capability.

I see this Bugzilla::Webservice::Bug but have no idea on how to download the code.

I also see this BzAPI thing but it appears to be server side only.

One thing I'm trying to accomplish: We have a number of saved searches. One process we have here executes a number of these and saves the results to .csv files. This is done manually through the web UI. Another Perl script gathers a bunch of these .csv files and produces another .csv file in the form of a report that is useful to management. I'd like to change this Perl process to simply interrogate the Bugzilla database directly, possibly utilizing the saved searches directly and produce the final .csv file directly (or perhaps make a web page instead). There are also many other scripts I could imagine Perl doing by having full functioned access to Bugzilla.

Oh, and I checked BZ::Client::Bug::search's code. It descends off into a plethora of web-related technologies that sufficiently obscures what's going on, what's expected and what works WRT search.
I for example simply don't use the WebServices and therefore can't tell you if your needed behavior is implemented or not.
I don't particularly care if it's WebServices or not. I just want to interact with the functionality of Bugzilla programmatically through Perl. The backend can be WebServices, client/server socket stuff or a direct API.
Indeed. I've worked with 2 REST APIs so far and they both shared the characteristic of 1) being poorly documented
I don't find the WebServices API of Bugzilla documented that poorly,
Starting from http://www.bugzilla.org/docs/4.4/en/html/api/index.html it's not immediately clear that this is a server side only technology. If it's not a server side only technology then it's not at all clear how to get the client side code. And there's an appalling lack of real world examples.
if it is you are in a happy position because you just need to check the implementation and provide patches to improve the docs, because your behavior is already implemented.
You make the incorrect assumption that 1) I understand enough about the code base to make contributions, 2) I have that time and 3) I wish to work for free. I'm a consultant and I need to get stuff done for my client. My client does not pay me to work on open source projects. I'm not totally opposed to working on open source projects nor contributing to them but I don't have the time nor inclination to do so here. Sorry.
If it is not, you have a bigger problem because your functionality needs implementation. :-)
Not necessarily. Somebody else could have already implemented this or there may be another solution. That's why I'm posting here. I'm shocked that such a mature technology has such a lack of an interface, supported or not...
and 2) not supporting full search. What's up with that?
As often it's a matter of available resources.
I see it as more a matter of professionalism. I know that I would not release code, open source or not, unless it was at least reasonably feature complete.

REST APIs seem, in my experience, to use so many web technologies, and so much vagueness in attempting to be language agnostic, that they read as if they say nothing and you need to go elsewhere for answers to simple questions. There are never any good examples of real code; they often don't describe what data is returned other than to say that it's in XML or JSON format. How are multiple records indicated? How are empty fields handled? Different field data types? How are exceptions handled? These questions are often not answered. And usually the search portion is "not totally implemented". I think this is because representing search conditions appears to be difficult to do in a REST scenario, largely because they try to represent the search in terms of data structures like XML instead of simply strings that must be parsed - REST APIs seem to eschew conditional parsers.

File this one under Paid Support vs Open Source

Posted on 
I use both proprietary software as well as open source software. One would think that when you pay for your software and pay a lot for support, then obviously you must be in a better situation should something not work correctly. But my experience has been the opposite. Not always but often. I can only attribute this to the fact that when dealing with OSS you often are talking directly with the developer who has pride in his work and wants it to work correctly. He is bothered when people report problems in his software and motivated to try and fix it.

On the other hand we've all had our "experiences" with so called front line support people who sometimes barely know how the software they support operates or even how to spell its name correctly, who ask their customers to reboot their Linux server that's been up for the last 3 years to see if that will "help".

IBM/Rational Support is far from that bad - often they are excellent. But it does seem that sometimes when the problem is a little thorny they will punt and say this is "outside of scope" - whatever that means.

I must admit my process is slightly complicated - a CQPerl script which serves as a multiprocess server, forking off a copy of itself to handle each request to process Clearquest data. For anybody who has written such server processes, they can be tricky at first to program and get right, but soon turn into just another programming task like any other.

The problem arises in an odd way in which a request comes in to add a record. BuildEntity is called and the record is successfully added. But when a second process later attempts to do a similar thing - add a record - the BuildEntity fails stating:
Status: 1 unknown exception from CQSession_BuildEntity in CQPerlExt at cqserver.pl line 31.
The support engineer eventually responded with:
On 1/25/2013 10:40 AM, Naomi Guerrero wrote:

Hi Andrew,

I'm following up on escalated PMR#16866,227,000. After escalating this PMR to L3 support, and Development having discussions about this issue, this request goes outside the scope of support. This is not something we can assist you with in support. Instead, I would recommend you reach out to your Sales agent at IBM (or I can) so that someone from the Rational Services team can further assist you.
To which I responded:
On 1/25/2013 11:00 AM, Andrew DeFaria wrote: 
How can you possibly say that this goes outside the scope of support?!? We have a situation here where your software returns the words "unknown exception", fails to do what it's advertised to do (build an entity) and even stops my script from continuing! This is clearly an error in IBM's software. I have a reproducible test case (you'll need our schema, which I supplied). There is nothing in my code that is outside of a supported situation - I'm using regular CQPerl stuff and every call is supported. It's on supported hardware, with supported versions of OS, Clearquest, CQPerl, etc. Why is BuildEntity returning "unknown exception"? Surely this is in the code for BuildEntity. Somebody should examine it and report back! This is clearly an error and I fail to see how it goes outside the scope of support at all. If the problem is difficult to solve, that does not put it into the realm of "outside of support".
My client pays IBM big $$$ for support every year, if I remember how IBM support contracts go. We want our money's worth. While I fail to see how a "Sales" agent will be able to assist (I personally think a knowledgeable software developer, like the guy who's responsible for the BuildEntity code - you do have somebody like that, no? - should look into the code and see exactly what circumstances cause BuildEntity to emit such an error), if that's the next step then by all means take it and reach out to whoever is next in line to assist. But from where I sit this is indeed a bug and is not outside the scope of support. If you believe it is, then please explain yourself. Why is this "outside the scope of support"?
Now, granted, it appears that this happens only with our schema (it works fine with the SAMPL database), but that seems to point either to a problem somewhere with action hook code being executed (which would also be deemed a bug, as action hook code should never cause unknown exceptions to happen) or to some corruption in my client's database - something that should be pursued, not dropped to "Sales"!

Problem report 16866,227 000: unknown exception from CQSession_BuildEntity

Shebang and script interpreters

Posted on 
Turns out that you cannot put a script as the interpreter on your #! line; it must be a binary. Also, many IT departments tasked with supporting various Unix/Linux flavors often have a set of scripts that "do the right thing(tm)" to set up an environment for the target architecture and then execute the architecturally appropriate binary. I did this way back with the /app server.

So what do you do when you are, say, writing an expect script and wish to use #!/app/expect? The trick is to use something like #!/usr/bin/env /app/expect. Most people are familiar with using env(1) to print out the environment, and it turns out it does - if you don't give it any other parameters. But its real purpose is to "run a program in a modified environment". So if you wish to use an interpreter that is a script, use #!/usr/bin/env /path/to/script as your shebang line.
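Here's a small, self-contained demonstration of the trick; the file names under /tmp are made up for the example:

```shell
# A "script interpreter": itself a shell script, which a #! line
# cannot point at directly.
cat > /tmp/myinterp <<'EOF'
#!/bin/sh
echo "interpreting $1"
EOF
chmod +x /tmp/myinterp

# A script whose shebang reaches the script interpreter via env.
cat > /tmp/hello <<'EOF'
#!/usr/bin/env /tmp/myinterp
EOF
chmod +x /tmp/hello

/tmp/hello   # env execs /tmp/myinterp with /tmp/hello as its argument
# → interpreting /tmp/hello
```

The kernel passes everything after /usr/bin/env on the shebang line as a single argument, so env finds /tmp/myinterp, and since that file has its own #!/bin/sh line, the kernel can exec it normally.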

Speed of network reads as opposed to network writes

Posted on 
I was asked to test the difference in speed between network reads and network writes. Now, of course, a lot of this is highly tunable and depends on various things: the protocol used (NFS vs SMB), whether you are writing over a LAN or a WAN, the rated speed of those links (1G vs 100M vs 10M or less), as well as the options used (for NFS, things like rsize and wsize, to name a few). However, as currently configured, the following test was done:

I created a file of some size (336M) which I copied between local and remote file systems using a push strategy and a pull strategy. Lacking the root capability needed to mount filesystems via NFS between, say, San Jose and Irvine, or to play around with SMB, I decided to use my home directory, which is NFS mounted, and the local file system of /tmp. By push I mean that cp was copying the file from /tmp to my home directory, which is NFS mounted, thus over the network. By pull I mean that cp was copying the file from my NFS mounted home directory and writing it to /tmp. Therefore push = local reads with network writes, and pull = network reads with local writes. Here are the results...

First I did a little loop:
Xl-irv-05:$ i=0; while [ $i -lt 100 ]; do
  /usr/bin/time -f %E -a -o pull.csv cp ~/336megfile /tmp/336megfile
  let i=i+1
done
This pulls this 336megfile 100 times from my home directory to the local /tmp directory. The GNU time command is used to capture the time each of these takes. Network conditions and system workloads can cause this to vary so I take 100 samples.

Similarly this loop does the push:
Xl-irv-05:$ i=0; while [ $i -lt 100 ]; do
  /usr/bin/time -f %E -a -o push.csv cp /tmp/336megfile ~/336megfile
  let i=i+1
done
Doing a little Excel yields:



Bottom line:

          Pull    Push    Diff
Average   0.79    4.29    5.45
Pulling data, where the writes are local, took on average 0.79 seconds and is 5.45 times quicker than pushing data, where the writes are over the network, which took, on average, 4.29 seconds.
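The Excel step isn't strictly necessary; the averages can be computed with a little awk. A sketch with made-up sample times standing in for the 100 real ones (GNU time's %E prints elapsed time as m:ss.cs):

```shell
# Three sample %E values standing in for the contents of pull.csv.
printf '0:00.79\n0:00.81\n0:00.77\n' > /tmp/pull.sample.csv

# Split m:ss.cs on ':', convert each entry to seconds, and average.
awk -F: '{ sum += $1 * 60 + $2; n++ } END { printf "%.2f\n", sum / n }' \
  /tmp/pull.sample.csv
# → 0.79
```

Running the same one-liner over the real pull.csv and push.csv gives the two averages in the table above.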

Moral: If you have to work over a LAN or WAN, try to make your writes local...

Eliminating Perl Syntactic Sugar

Posted on 
Programming complex systems is... well... complicated. You need to focus on the task at hand and be able to see the forest for the trees, as they say. That's why I like to eliminate as much of what I call syntactic sugar from the language as possible. Syntactic sugar consists of ornaments, special characters, and restatements of the obvious or the default that make your mind drift from the problem at hand to how to formulate the syntactic sugar. Additionally, such sugar takes time to type in, is prone to errors if you don't get it right, and usually comprises characters that, as a touch typist, you often have to stretch to reach on the keyboard.

Here's an example. Recently I came across the following code:

delete($self->{'cqc'}->{'fields'}->{'PCP_ID'}) if (exists $self->{'cqc'}->{'fields'}->{'PCP_ID'}); # PCP_ID is read-only and should not be passed through.
delete($self->{'cqc'}->{'fields'}->{'Approved_by_CCB'}) if (exists $self->{'cqc'}->{'fields'}->{'Approved_by_CCB'});
delete($self->{'cqc'}->{'fields'}->{'record_type'}) if (exists $self->{'cqc'}->{'fields'}->{'record_type'});

Now that takes some time to parse... (Did you see the comment buried in there?) And here's the same code with the syntactic sugar removed:

delete $self->{cqc}{fields}{PCP_ID};
delete $self->{cqc}{fields}{Approved_by_CCB};
delete $self->{cqc}{fields}{record_type};

which do you find easier to read?

Notes:
  • Surrounding hash keys with '' is unnecessary unless you use special characters in the key name, like spaces or '-' (the Perl interpreter views '-' as a possible subtraction). The '_' character is OK. Use '' for hash keys only when necessary.
  • The additional '->' operators are unnecessary. The first one is necessary since $self is a hashref, but between subscripts you don't need them - why type them?
  • The delete call need not have (). Granted, delete is a function, and some people feel all functions should have () even if there are no parameters. And yet even people who feel this way rarely use () on Perl builtins like print and die. If you define your subroutines before they are used, you can call them without ().
  • There's no reason to include "if (exists $self->{'cqc'}->{'fields'}->{'PCP_ID'})". First off, if clauses that follow the statement do not need (). Even the exists is unnecessary, as the if statement works the same without it. Finally, delete $hashref->{unknown} will not error out if $hashref->{unknown} doesn't exist.

Creating Development Schema Repositories

Posted on 
When you have multiple Clearquest Designers you quickly realize that you cannot easily do parallel development of the schema. The best way to do this is to work through a development schema repository and to create development schemas for each schema designer. To create a development schema repository you should first create an empty database for Clearquest to work in. You can create an empty database by following the instructions for Creating a Test Database. Next you must use the Clearquest Maintenance Tool to create a new Schema Repository:
  • Start the Clearquest Maintenance Tool
  • Select Schema Repository: Create
  • The Maintenance Tool then asks you to fill out information regarding the location of your schema database. Fill in information about the database server and Administrator Name and password. Do not create a sample database at this time. The Clearquest Maintenance tool will take some time to set up the new schema repository.

Exporting a CQProfile.ini for this new Development Schema Repository

In order to see this new development schema repository, you should export the schema repository profile from the Clearquest Maintenance Tool. The exported cqprofile.ini can be shared with other Schema Developers, who would import the .ini file into their environment using the Clearquest Maintenance Tool. You export the cqprofile.ini by selecting File: Export Profile. You need only select the new repository you created.



Click on the "..." button to select where to store the .ini file and what its name will be - I used C:\Cygwin\tmp\MPSDev.ini - and click Finish. This file can be passed to your fellow Schema Developers.

Importing Users

Your new development schema repo has no users in it except a default set, including the "admin" user (with no password). Run the User Administration tool on production (MCBU) and export all of the users. Run the User Administration tool again on the new development schema repo (e.g. MPSDev) and import the users.

Clean up Unnecessary Schemas

Now's a good time to remove additional default schemas in your development schema repo like Common, DefectTracking, Enterprise, etc. You cannot delete the Blank schema.

Seeding the Development Schema Repo with the Latest Version from Production

One trick to seed your new development schema repo with a recent version of the production schema is to create a new development schema in the production schema repo based off of the latest version of the production schema. Then export that whole new development schema and import it into the new development schema repo. You will only have the history of the latest version of the schema, but that's OK for development purposes. Make sure you specify an appropriate Schema Name and Comment when you export the schema from production:



Note: We are giving this schema the name MPSDev because that's what we want it to be called in the new development schema repo. Also, the comment is appropriate when we will be looking at it in the MPSDev development schema repo.

Do not associate a database with this schema; there's no reason to. We don't care, as we are only using this to export the schema from production -> development schema repo. We'll create databases there. We don't need to check out this schema either.

Exporting the Schema from Production

We assume you have created a new development schema in the production schema repo based off of the tip of the production database. You need to export that with:

$ cqload exportschema -dbset MCBU admin <password> <schemaname> <path to <schemaname>.full.schema>

Where <schemaname> is the name of the development schema you created in the production schema repo.

Importing the Schema into the Development Schema Repo

Next we import this full.schema of only the tip of production to seed a development schema in the development schema repo.


$ cqload importschema -dbset MPSDev admin <password> <path to <schemaname>.full.schema>
*********************************************************
Starting importschema
*********************************************************
CRMMD1264E The import file ".\MPSDev.full.schema" is invalid:
CRMMD1422E The schema requires the following package(s), which is(are) not currently installed in the database...
  revision '2.1' of package 'EmailPlus'
  revision '1.2' of package 'Resolution'
  revision '2.1' of package 'Attachments'.
*********************************************************
ERROR: importschema FAILED!
*********************************************************

Oops! We need to install these packages into our new schema repo. Right click on your new development schema repo in Clearquest Designer and select Install Package. Expand EmailPlus and select 2.1 to install it. Repeat this for Resolution and Attachments, then rerun the cqload importschema command.
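The rerun of the import, sketched the same way as the export (MPSDev and "changeme" are placeholders for your development schema repo dbset and admin password):

```shell
#!/bin/sh
# Sketch of rerunning the import after the required packages have
# been installed. MPSDev and "changeme" are placeholders. With
# DRY_RUN=1 (the default) the command is only printed, not executed.
DRY_RUN=${DRY_RUN:-1}
DBSET=MPSDev
PASSWORD=changeme

IMPORT_CMD="cqload importschema -dbset $DBSET admin $PASSWORD ./MPSDev.full.schema"

if [ "$DRY_RUN" = 1 ]; then
  echo "$IMPORT_CMD"
else
  $IMPORT_CMD
fi
```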

Remove Old Schema in Production Repo

You can remove the schema you created above in the Production repo as it is no longer needed.

Create Dev Schemas

You are now free to create development schemas in the new development schema repo as described in Creating a Development Schema, as well as create test databases and seed them.

Creating a Development Schema

Creating a dev schema is not that difficult. Note that you can create a dev schema in the production schema repo or in a dev schema repo. The latter is a bit safer as it is more isolated.

To create a dev schema from the CQ Designer, right click on the schema repo and select New: Schema. We are creating a new schema by basing it off of an existing schema. Expand the + sign and select the version that you wish to base your new schema off of. Right now there is only Version 1. Select Next and name your schema. I suggest that you use your username (e.g. adefaria), indicating that you are the owner of this dev schema. Enter comments if you like and then Finish.

After the schema is created, you'll be asked if you want to associate it with a database. You could select Yes and then go through naming your database and connecting it to an existing user database, but chances are you don't have one of those yet. So select No for now. You now have a development schema.

Creating a Test Database

You must create an empty database on the database server. Use RDP to get a remote desktop on that server, run SQL Server Management Studio, and connect to the database engine.

Next, right click on Databases and select New Database. Name your database; the convention for test databases is <schemaname>_<id>. For our personal test databases I again suggest using your user ID, so I would create <schemaname>_adefaria for me.
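If you prefer the command line to Management Studio, the same empty database can be created with the sqlcmd utility. DBSERVER and MobDev_adefaria are placeholders for your database server and <schemaname>_<id> database name:

```shell
#!/bin/sh
# Sketch: create the empty test database with sqlcmd instead of
# Management Studio. DBSERVER and MobDev_adefaria are placeholders
# for your database server and <schemaname>_<id> database name.
# With DRY_RUN=1 (the default) the command is only printed.
DRY_RUN=${DRY_RUN:-1}
SERVER=DBSERVER
DB=MobDev_adefaria
SQL="CREATE DATABASE $DB"

if [ "$DRY_RUN" = 1 ]; then
  echo sqlcmd -S "$SERVER" -Q "$SQL"
else
  sqlcmd -S "$SERVER" -Q "$SQL"
fi
```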

Setting the db_owner and schema owner

In MSSQL we need to set the db_owner and the schema for this new database. Expand the folder tree (+) on your newly created database (MobDev_adefaria), then right click on Security and select New: User. Type <dbadmin> in the User Name edit box, then select and copy this string; we'll need it several more times in this process. Paste it into Login name and Default Schema, toggle on db_owner in both the Schemas owned by this user and the Database role membership boxes, then select OK.

Next right click on Security again and select New: Schema. Paste <dbadmin> into Schema Name and Schema Owner and click OK.
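The same security setup can be sketched in T-SQL (ALTER ROLE ... ADD MEMBER needs SQL Server 2012 or later; older servers use sp_addrolemember instead). Here "dbadmin" and MobDev_adefaria are placeholders for your <dbadmin> login and test database; this just prints the script so you can feed it to sqlcmd yourself:

```shell
#!/bin/sh
# Sketch: the GUI security steps expressed as T-SQL. "dbadmin" and
# MobDev_adefaria are placeholders. The script is only printed; run
# it yourself, e.g.:  sqlcmd -S DBSERVER -i setup.sql
SQL_SCRIPT=$(cat <<'EOF'
USE MobDev_adefaria;
CREATE USER dbadmin FOR LOGIN dbadmin WITH DEFAULT_SCHEMA = dbadmin;
ALTER ROLE db_owner ADD MEMBER dbadmin;
GO
-- CREATE SCHEMA must be the only statement in its batch.
CREATE SCHEMA dbadmin AUTHORIZATION dbadmin;
GO
EOF
)

echo "$SQL_SCRIPT"
```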

Now you have an empty database that you can associate with your schema.

Associating your new test database with your dev schema

Go back to CQ Designer and right click on your development schema again and select Show: User Databases. You should see a list of databases. Right click on an empty area and select Create Database. It seems odd to call it Create Database when the database has already been created - it really means "take this schema and its definition of what should be in the user database and apply that definition/schema to my newly created empty database".

Give this database a Logical Database Name. Alas, we only have 5 characters. I just use my initials - apd - short and simple. Add comments if you like. We use MSSQL for the database vendor. Toggle Database Type to Test Database (we already have a production database in our dev schema repo), then select Next.

Now we fill in Physical Database Name with the name of the database (<schemaname>_adefaria) and the Database Server. Then paste that <dbadmin> into Administrator User and Administrator Password (see, I told you you'd need it!), then Next and Next again.

Now expand adefaria (your dev schema) and select the version you want this new test database to start with. Select Version 1 and Finish.

Clearquest Designer now goes out and creates all the necessary tables and transfers all of the necessary data, hook scripts, etc. from Version 1 of the dev schema adefaria into your database. Get coffee...

After the database is created follow the steps at Seeding a test database to seed your test database with some test data.
