
Red Hat Enterprise MRG 2

MRG Release Notes

Release Notes for the Red Hat Enterprise MRG 2.0 Release

Edition 1


Alison Young

Red Hat Engineering Content Services

Misha Husnain Ali

Red Hat Engineering Content Services

Legal Notice

Copyright © 2011 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
All other trademarks are the property of their respective owners.


1801 Varsity Drive
 Raleigh, NC 27606-2072 USA
 Phone: +1 919 754 3700
 Phone: 888 733 4281
 Fax: +1 919 754 3701

Abstract
These Release Notes contain important information available at the time of release of Red Hat Enterprise MRG 2.0. Known problems, resources, and other issues are discussed here. Read this document before beginning to use the Red Hat Enterprise MRG distributed computing platform.

1. System Requirements
1.1. Supported Hardware and Platforms
1.2. Installing and Configuring Red Hat Enterprise MRG
2. MRG Messaging
2.1. About MRG Messaging
2.2. MRG Messaging Update Notes
3. MRG Realtime
3.1. About MRG Realtime
3.2. MRG Realtime Update Notes
4. MRG Grid
4.1. About MRG Grid
4.2. MRG Grid Update Notes
5. MRG Management Console
5.1. About the MRG Management Console
5.2. MRG Management Console Update Notes
A. Red Hat Enterprise MRG 2.0 Package Manifest
A.1. Red Hat Enterprise MRG 2.0 Messaging Package Manifest
A.2. Red Hat Enterprise MRG 2.0 Realtime Package Manifest
A.3. Red Hat Enterprise MRG 2.0 Grid Package Manifest
A.4. Red Hat Enterprise MRG 2.0 Management Package Manifest
B. upgrade-wallaby-db Tool
C. Revision History

Chapter 1. System Requirements

This section contains information related to installing Red Hat Enterprise MRG, including hardware and platform requirements.

1.1. Supported Hardware and Platforms

Because it includes MRG Realtime, Red Hat Enterprise MRG is highly optimized to run on Red Hat Enterprise Linux 6.1 and later. The MRG Messaging and MRG Grid capabilities can also run on other platforms, but without the full benefits available when running on Red Hat Enterprise Linux 5.6 and later.
Table 1.1. Supported Hardware and Platforms
Platforms: Red Hat Enterprise Linux 5.6 (32-bit and 64-bit); Red Hat Enterprise Linux 6.1 (32-bit and 64-bit); Windows XP SP3+ (32-bit); Windows Server 2003+ (32-bit and 64-bit); Windows Server 2008 (32-bit and 64-bit); Windows Server 2008 R2 (64-bit); Windows 7 (32-bit and 64-bit)
MRG Messaging Native Linux Broker: Red Hat Enterprise Linux 5.6 and 6.1
MRG Messaging Client - Java/JMS[a]: Red Hat Enterprise Linux 5.6 and 6.1
MRG Messaging Client - C++: all listed platforms
MRG Messaging Client - Python: all listed platforms
MRG Messaging Client - Ruby (preview): Red Hat Enterprise Linux 5.6 and 6.1
MRG Grid Scheduler: Red Hat Enterprise Linux 5.6 and 6.1
MRG Grid Execute Node: all listed platforms
MRG Realtime: Red Hat Enterprise Linux 6.1 (64-bit only)
[a] The Java and JMS MRG Messaging Clients are supported for use with Java 1.5 and Java 6 JVMs. For Sun JVMs, it is recommended to use Java 1.5.15 or later, or Java 1.6.06 or later.

1.2. Installing and Configuring Red Hat Enterprise MRG

To download and install Red Hat Enterprise MRG 2.0 on your system, you must subscribe to the appropriate channels on the Red Hat Network (RHN).
Table 1.2. Red Hat Network Channels
Channel Name | Platform | Architecture
MRG Grid | RHEL-5 Server | 32-bit, 64-bit
MRG Grid | RHEL-6 Server | 32-bit, 64-bit
MRG Grid | non-Linux | 32-bit
MRG Grid Execute Node | RHEL-5 Server | 32-bit, 64-bit
MRG Grid Execute Node | RHEL-6 Server, ComputeNode, Workstation | 32-bit, 64-bit
MRG Management Console | RHEL-5 Server | 32-bit, 64-bit
MRG Messaging | RHEL-5 Server | 32-bit, 64-bit
MRG Messaging | non-Linux | 32-bit
MRG Messaging Base | RHEL-5 Server | 32-bit, 64-bit
MRG Realtime | RHEL-6 Server | 64-bit
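
For example, on a registered system the rhn-channel utility can list current subscriptions and add the appropriate child channel. This is a sketch only; the channel label is a placeholder, as labels vary by product, platform, and architecture:

$ rhn-channel --list
# rhn-channel --add --channel=<mrg-channel-label> --user=<rhn-username> --password=<rhn-password>

Channels can also be added through the Red Hat Network web interface.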

Chapter 2. MRG Messaging

2.1. About MRG Messaging

The 2.0 release of MRG Messaging contains several new features and enhancements, including:
  • Queue thresholds are now available to alert the user when a queue grows too long.
  • Message priority is now considered when messages are delivered.
  • A delay is now added between the time a queue is no longer attached to any session and the time it is automatically deleted.
  • LVQ is enhanced with improved handling of incoming messages.
  • Flow control is added to measure the amount of data in each queue.
  • Statistics are enhanced to track the number of messages transferred instead of just the number of frames and bytes transferred over the connection.
  • AMQP now allows inspection of exclusive queues that were previously inaccessible.
  • The level of details being logged can now be altered at runtime and applied without requiring a restart.
  • The Python API now includes a tcp_nodelay option (see the sketch following this list).
  • The flow control mechanism is now on by default.
  • Improved handling of the exception code 530.
  • Erroneous message cancellations are now handled with a 404 error.
  • Invalid arguments now result in a rejected queue-declare where previously they were ignored.
  • QMFv2 event broadcast is now enabled by default. QMFv1 is also enabled by default and is now independently toggled (previously either QMFv1 or QMFv2 had to be selected).
  • The C++ client now recognizes the same connection option names as the Python client. Old names are still supported, and an exception occurs if an unrecognized option is encountered.
  • Performance enhancements are added for synchronous transfers (the default for C++ and Python clients) with durable messages. The transfer call blocks until the sent message is saved to the store. For messages less than 4 kilobytes in size, this process can remain pending until the one-second flush timer expires.
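
As an illustration of two of the items above, the following minimal python-qpid sketch opens a connection with the new tcp_nodelay option and sends a message with a priority set; the broker URL and queue name are illustrative, and the priority is only acted on if the target queue is configured to support priorities:

from qpid.messaging import Connection, Message

# tcp_nodelay disables Nagle's algorithm on the client socket
conn = Connection("localhost:5672", tcp_nodelay=True)
try:
    conn.open()
    session = conn.session()
    sender = session.sender("example-queue; {create: always}")
    # Message priority is considered during delivery on suitably configured queues
    sender.send(Message(content="hello", priority=9))
    session.sync()
finally:
    conn.close()
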
Installation information for MRG Messaging is available in the MRG Messaging Installation Guide. For use and configuration details, see the MRG Messaging User Guide. For information on developing your own programs for MRG Messaging, start with Programming in Apache Qpid.
Red Hat Enterprise MRG documentation is available for download at the Red Hat Documentation Website.

2.2. MRG Messaging Update Notes

The following is a list of issues to be aware of when upgrading to the 2.0 release of MRG Messaging. This is not a complete list of bugs fixed in this release. For a complete list, please refer to the Red Hat Enterprise MRG Technical Notes for this release.
Table 2.1. MRG Messaging Update Notes
Description Workaround Reference
Previously, using a java.util.UUID value as a Map message value, or as part of a Map or List used within a Map message value, resulted in an exception. Additionally, the UUID from an incoming map message could not be read. A UUID is now correctly handled from an incoming Map message, and the JMS client now allows a UUID to be set as a Map message value or as a Map or List within a Map message value.
No action required.
BZ#632396
Red Hat Enterprise MRG now includes the tcp_nodelay option for the Python API.
No action required.
BZ#667463
Previously, a client could connect to a broker using SASL with SSL as the external SASL mechanism, but a similar connection between two brokers of a federation was not possible. Changes have been made to qpid-config and to the location of SASL-related code to allow one federation broker to act as a SASL server while the other acts as a SASL client. Federated links can now be connected with SASL, using the external mechanism of SSL. A test demonstrating this new connectivity is available at cpp/src/tests/sasl_fed_ex.
No action required.
BZ#500430
A flow control mechanism is now added, allowing the broker to measure the current level of data in each queue using high_watermark and low_watermark flags. This flow control mechanism allows credit to be used to prevent a queue overflow event and to provide information to a client about data levels in a queue.
No action required.
BZ#660291
AMQP now allows the user to inspect exclusive queues that previously could not be browsed.
No action required.
BZ#624793
Previously, queues marked for automatic deletion were deleted immediately after being released from a session and, in the case of failover, queues were permanently lost. A delay has now been introduced between the time a queue becomes eligible for automatic deletion and the time it is actually deleted. If this delay period is longer than the failover time, the queue survives the failover and is then automatically deleted if it is no longer required.
No action required.
BZ#585844
Previously, the Messaging Broker did not consider signaled message priority during message delivery. The Messaging Broker can now be configured to recognize higher priority messages and adjust delivery accordingly.
No action required.
BZ#453538
Previously, applications were forced to use older APIs and workarounds to dynamically create and delete broker entities as the messaging API was unable to perform these actions on entities such as queues, exchanges and their bindings. QMF can now deal with the creation and deletion of broker entities and QMFv2 can perform the same actions by sending a message of a defined format to a specified address.
No action required.
BZ#547743
When the Messaging component runs out of space, it must remove older messages to make space for new incoming messages. Messages to be deleted first are selected using an algorithm that assesses both the priority and the age of the message. This algorithm allows the oldest of the low priority messages to be considered expendable while the high priority messages are preserved.
No action required.
BZ#606357
Previously, browsing an LVQ prevented the messages browsed from being replaced, resulting in a queue that continued to grow as updates were added. Message equivalence is now determined by the qpid.last_value_queue_key parameter, ensuring that the LVQ receives the latest updates and that new messages correctly replace their older versions (see the example following this table).
No action required.
BZ#632000
Previously, the level of logging could not be altered at runtime without restarting the broker. A management method now allows the user to change the level of logging while the program runs, without requiring a restart. This allows users to get detailed logs during troubleshooting and return to normal logging settings to prevent excessive logs.
No action required.
BZ#657398
Previously, messaging was unable to monitor growing queue depth without constant polling or waiting until the maximum level was reached to issue a warning. The broker now allows QMF events to be generated when the queue depth reaches a previously configured threshold, providing an early warning for queues that are growing too long.
No action required.
BZ#660289
A new feature is added that tracks statistics about the number of messages transferred over the connection, instead of tracking only the number of frames and bytes transferred across the connection.
No action required.
BZ#667970
Previously, the RDMA protocol transport for Qpid supported only InfiniBand network interfaces. As a consequence, when Qpid RDMA was used with an iWARP network interface, the client process was unable to transmit more than 30-40 messages on a single connection due to lost flow control messages. Qpid's RDMA support has now been changed to also support iWARP network interfaces.
Current users of RDMA must upgrade any brokers before upgrading their clients if the upgrade is staged. This upgrade order is necessary as new brokers can detect both old and new protocols and switch automatically, but new clients will only use the new protocol.
BZ#484691
When there are multiple subscriptions on an AMQP queue, messages are distributed among subscribers. For example, two queue routes being fed from the same queue will each receive a load-balanced number of messages. If fanout behavior is required instead of load-balancing, use an exchange route.
No action required.
BZ#656226
A journal that is in use can quit with a fatal error if its message store is full.
A full journal can be resized using a utility that stores and transfers all active records to a new, larger journal. The resize can only be performed while the broker is not running. The resize utility is located in /usr/libexec/qpid/, and to use it the Python path must include this directory.
The resize command resizes the message store and then transfers all outstanding records from the old message store to the new one. An error message is displayed if the records do not fit in the new file, and the old store is preserved in a subdirectory. The store_chk command analyzes a store and shows the outstanding records and transactions.
BZ#617488
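
As a companion to the LVQ note above (BZ#632000), the following python-qpid sketch declares a last value queue keyed on a message property; the queue name and key are illustrative:

from qpid.messaging import Connection, Message

conn = Connection.establish("localhost:5672")
session = conn.session()
# The x-declare arguments set qpid.last_value_queue_key, so a new message whose
# key property matches an earlier message replaces it instead of accumulating
addr = "lvq-example; {create: always, node: {x-declare: {arguments: {'qpid.last_value_queue_key': 'stock-symbol'}}}}"
sender = session.sender(addr)
sender.send(Message(content="101.5", properties={"stock-symbol": "RHT"}))
sender.send(Message(content="102.1", properties={"stock-symbol": "RHT"}))  # replaces the first
conn.close()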

Chapter 3. MRG Realtime

3.1. About MRG Realtime

As MRG Realtime provides an updated Linux kernel, it is certified for use on a subset of the hardware systems certified for Red Hat Enterprise Linux. MRG Realtime is certified on x86_64 architectures only. Red Hat works with hardware vendors to certify systems for use with MRG Realtime based on customer demand. For an updated list of certified systems, see the Red Hat Hardware Catalog.
MRG Realtime is incompatible with the Xen-based virtualization features in Red Hat Enterprise Linux 6.1 and is not supported for use with any virtualization technology.
The MRG Realtime kernel may be rebased over the lifetime of a Red Hat Enterprise MRG release; however, there are no guarantees of a stable kernel Application Binary Interface (kABI) over the life of Red Hat Enterprise MRG.
Installation information for MRG Realtime is available in the MRG Realtime Installation Guide. For information on tuning MRG Realtime, see the MRG Realtime Tuning Guide and the MRG Realtime Reference Guide.
Red Hat Enterprise MRG documentation is available for download at the Red Hat Documentation Website.

3.2. MRG Realtime Update Notes

The following is a list of issues to be aware of when upgrading to the 2.0 release of MRG Realtime. This is not a complete list of bugs fixed in this release. For a complete list, please refer to the Red Hat Enterprise MRG Technical Notes for this release.
Table 3.1. MRG Realtime Update Notes
Description Workaround Reference
The MRG Realtime Kernel uses one PCI bridge by default, despite modern hardware being commonly equipped with multiple PCI host bridges. Due to this default setting, some devices may be inaccessible to the kernel.
The following line, if present in the kernel boot output (located at /var/log/dmesg), indicates an inconsistency between the number of PCI host bridges in the hardware and the number recognized by the MRG Realtime kernel:
pci_root PNP0A03:00: ignoring host bridge windows from ACPI; boot with "pci=use_crs" to use them
If this message appears, edit the corresponding boot entry in /etc/grub.conf and add the following text to the kernel line (a sample entry follows this table):
pci=use_crs
This edit enables the use of the ACPI property Current Resource Settings to enumerate all available Host Bridges.
None
The system display may not function correctly if an ATI Radeon display adapter is in use.
Add the following boot parameter to the grub entry for the kernel:
radeon.hw_i2c=0
None
Due to overlapping file paths, the kernel-rt-doc package conflicts with the kernel-doc package in Red Hat Enterprise Linux 6.
To avoid this conflict, install only one of the two packages at a time.
None
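
For reference, a boot entry in /etc/grub.conf with the pci=use_crs workaround applied might look like the following; the kernel version, root device, and other parameters are illustrative:

title Red Hat Enterprise MRG Realtime (2.6.33.9-rt31.64.el6rt.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.33.9-rt31.64.el6rt.x86_64 ro root=/dev/VolGroup00/LogVol00 rhgb quiet pci=use_crs
        initrd /initramfs-2.6.33.9-rt31.64.el6rt.x86_64.img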

Chapter 4. MRG Grid

4.1. About MRG Grid

The 2.0 release of MRG Grid contains several new features and enhancements, including:
  • Elastic IP binding for Condor EC2 jobs is now supported (see the submit file sketch following this list).
  • A customizable Power Management feature is now added.
  • Condor now imports validation for power management features to support IDLE machine hibernation.
  • Condor's wallaby database now includes the AviaryPlugin, QueryServer and Axis2Home features.
  • Condor is now able to appropriately handle SIGHUP.
  • A simpler web service interface called Aviary is now included.
  • The Aviary API now replaces the SOAP API.
  • Condor's condor_dagman now recognizes its schedd and caches its own Condor schedd address file.
  • The administrator can now control the ads forwarded to a condor_view_host.
  • Configurations are now validated for syntax to prevent crashes due to invalid configuration parameters.
  • The PreJobPrio1, PreJobPrio2, PostJobPrio1, PostJobPrio2 job ad attributes are now included.
  • The scheduler now gathers additional statistics for detailed statistics presentation.
  • Condor now supports multiplexing among multiple view servers.
  • In a group quota scenario, the negotiator now includes submitter names for enhanced monitoring of group quota limits.
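
As a sketch of the elastic IP support mentioned above, a Condor EC2 grid-universe submit file might look like the following; every identifier, path, and address is illustrative:

# Submit description file for an EC2 job bound to an elastic IP
universe              = grid
grid_resource         = ec2 https://ec2.amazonaws.com/
executable            = ec2-example-job
ec2_access_key_id     = /home/user/ec2-access-key
ec2_secret_access_key = /home/user/ec2-secret-key
ec2_ami_id            = ami-00000000
ec2_instance_type     = m1.small
ec2_elastic_ip        = 203.0.113.10
queue
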
MRG Grid includes the ability to schedule workloads to Amazon EC2. For Red Hat Enterprise MRG 2.0, Red Hat is transitioning the ability to purchase this capability at Amazon from Amazon's Dev Pay system to Red Hat's Cloud Access program. For more information, refer to the Cloud Access website at http://www.redhat.com/solutions/cloud/access/. Purchasing via Cloud Access will be enabled shortly after the Red Hat Enterprise MRG 2.0 release date.
Installation and configuration information for MRG Grid is available in the MRG Grid Installation Guide. For user and configuration details, see the MRG Grid User Guide.
Red Hat Enterprise MRG documentation is available for download at the Red Hat Documentation Website.

4.2. MRG Grid Update Notes

The following is a list of issues to be aware of when upgrading to the 2.0 release of MRG Grid. This is not a complete list of bugs fixed in this release. For a complete list, please refer to the Red Hat Enterprise MRG Technical Notes for this release.
Table 4.1. MRG Grid Update Notes
Description Workaround Reference
Previously, running the wallaby feature-import command without supplying a file name resulted in an unclear error message. The wallaby feature-import command can now trap the error to display a clearer error message when run without a file name.
No action required.
BZ#673502
Previously, certain wallaby utility subcommands caused wallaby to exit with an unintuitive exit code; for example, 0 (zero), which indicates success, where a non-zero exit code (indicating failure) was expected. With this update, wallaby exits with more intuitive exit codes that reflect the success or failure of the underlying operation.
No action required.
BZ#673520
Validation for Condor's power management features is now imported to allow support for hibernation of IDLE machines.
No action required.
BZ#674161
Previously, Condor EC2 jobs could not be bound to an elastic IP, forcing a dynamic IP to be created for each instance. Now the ec2_elastic_ip parameter supports elastic IP binding for Condor EC2 jobs.
No action required.
BZ#621899
Previously, Condor did not support the EC2 RESTful API, which enables support for providers other than Amazon. Condor now provides validation and bug fixes for ec2_gahp, which replaces amazon_gahp, enabling support for all cloud providers that implement the EC2 RESTful API.
No action required.
BZ#679553
Red Hat Enterprise MRG 2.0 includes a Power Management feature, configurable manually via the wallaby component. Power Management is configured through the remote configuration feature.
No action required.
BZ#678394
Condor's wallaby database now includes the AviaryPlugin, QueryServer and Axis2Home features with their respective parameters, as well as the subsystem query_server.
No action required.
BZ#692801
Previously, when condor_configure_pool was used to add a feature containing a must_change parameter without a value, condor_configure_pool did not prompt the user to supply the missing value. condor_configure_pool now uses a new API call to detect must_change parameters and prompts the user if a must_change parameter value is missing.
No action required.
BZ#627957
Previously, when a reconfigure signal from Red Hat Enterprise MRG Grid or a SIGHUP was sent to condor_configd, condor_configd would unexpectedly fail and quit. condor_configd is now able to handle SIGHUP on Linux, UNIX, and similar operating systems and then exit gracefully.
No action required.
BZ#680518
Previously, condor_triggerd's C++ Console interface in Condor could not detect and report absent nodes because ENABLE_ABSENT_NODES_DETECTION was set to FALSE by default. ENABLE_ABSENT_NODES_DETECTION is now set to TRUE by default in Condor, allowing condor_triggerd to raise an event for each node in wallaby without a corresponding master QMF object.
No action required.
BZ#705325 and BZ#602766
Red Hat Enterprise MRG Grid 2.0 offers a simpler web service interface called Aviary, created using Axis2/C and WSO2.
No action required.
BZ#674349
The gSOAP-backed SOAP API has been replaced by the Aviary API.
No action required.
BZ#674384
The -schedd-daemon-ad-file and -schedd-address-file flags are now added to condor_submit_dag, allowing a DAG to be targeted at a specific Schedd and all its operations bound to that Schedd. This was previously possible using -remote, but with a performance impact due to collector queries.
No action required.
BZ#584562
The newly added CONDOR_VIEW_CLASSAD_TYPES configuration parameter allows an administrator to control the ads that are forwarded to a CONDOR_VIEW_HOST. The CONDOR_VIEW_CLASSAD_TYPES parameter can be changed with a reconfiguration.
No action required.
BZ#610258
Previously, when runtime reconfiguration was enabled, an authorized user could cause a daemon to crash by providing a faulty configuration. This configuration was accepted because neither condor_config_val -set/rset nor the daemon being reconfigured validated the input. Both condor_config_val -set/rset and the target daemon now validate the configuration provided, preventing crashes during runtime.
No action required.
BZ#668038
Condor now includes the PreJobPrio1, PreJobPrio2, PostJobPrio1 and PostJobPrio2 job ad attributes, which allow jobs to be ordered beyond what the existing JobPrio attribute allows.
No action required.
BZ#674659
Previously, the negotiator ran out of file descriptors and crashed when assigned a large number of jobs.
The NEGOTIATOR.MAX_FILE_DESCRIPTORS value can be edited to a number larger than the expected number of jobs for the negotiation cycle. The recommended value is double the number of jobs per negotiation cycle (see the configuration sketch following this table).
BZ#603663
The scheduler was updated to collect the following additional statistics:
  • WindowedStatWidth: value of configuration parameter WINDOWED_STAT_WIDTH at the time the target ad was published.
  • UpdateInterval: number of seconds between current schedd ad publish time and previous ad.
  • JobsSubmitted: number of jobs submitted over the most recent sampling window.
  • JobSubmissionRate: rate of job submissions (jobs/sec) over the sampling window.
  • JobsStartedCum: number of jobs initiated over the schedd's lifetime.
  • JobsStarted: number of jobs started in the stat window (WINDOWED_STAT_WIDTH).
  • JobStartRate: rate (jobs/sec) of jobs starting in the stat window.
  • JobsCompleted: number of jobs successfully completed in the sampling window.
  • JobCompletionRate: rate of successful job completions in the sampling window.
  • JobsExited: number of jobs that exited (successfully or otherwise) in the sampling window.
  • ShadowExceptions: number of shadow exceptions in the sampling window.
  • ExitCodeXXX: number of jobs exited with code XXX (100, 115, etc.) in the sampling window.
  • JobsSubmittedCum: number of jobs submitted over the schedd's lifetime.
  • JobsCompletedCum: number of jobs successfully completed over the schedd's lifetime.
  • JobsExitedCum: number of jobs exited (successfully or otherwise) over the schedd's lifetime.
  • ShadowExceptionsCum: number of shadow exceptions over the schedd's lifetime.
  • ExitCodeCumXXX: number of jobs exited with code XXX over the schedd's lifetime.
  • MeanTimeToStartCum: mean time a job waits in the schedd until first started, in the schedd's lifetime.
  • MeanRunningTimeCum: mean running time for jobs in the schedd (wall-clock time), over the schedd's lifetime.
  • SumTimeToStartCum: sum of job wait times to first start, over the schedd's lifetime (intended for consumption by software like cumin).
  • SumRunningTimeCum: sum of job running times over the schedd's lifetime.
  • MeanTimeToStart: mean time a job waits in schedd until first start, over stat window.
  • MeanRunningTime: mean running (wall-clock) time of jobs in schedd, over stat window.
The following are now published to all daemon ads:
  • DetectedMemory: detected machine RAM.
  • DetectedCpus: detected machine CPUs/cores.
No action required.
BZ#678025
Previously, Condor allowed declaration of only a single view server, which prevented multiplexing among multiple view servers. The Condor collector now supports a list of multiple view servers declared using CONDOR_VIEW_HOST, for improved scalability.
No action required.
BZ#610251
Previously, the LastNegotiationCycleSubmittersShareLimitN negotiator classad stat attribute did not account for a submitter reaching the share limits in a group-quota scenario. The negotiator now includes submitter names in the attribute when any submitter reaches the submitter limit, including group quota limits.
No action required.
BZ#674669
Due to case folding of submitter names in the Accountant, multiple submitter name entries would be created in condor_userprio (one entirely in lower case and one with the correct mix of upper and lower case characters) if the submitter name contained upper case letters. Explicit case folding has now been removed from the Accountant, and data maps are updated with a case-insensitive sorting function. As a result, submitter names with upper case letters no longer appear as multiple entries, accounting group entries now match updated entries by case, and full submitter entries are case sensitive.
No action required.
BZ#675703
The top-level package condor-classads replaces the deprecated package classads in Condor version 7.6.1-0.1 and later. The old and new packages share debuginfo packages, which requires the user to manually remove classads-debuginfo packages.
If old and new packages share debuginfo packages, manually remove classads-debuginfo packages.
BZ#696324
The remote configuration database requires an update as loading a database snapshot destroys pool configuration.
Use the upgrade-wallaby-db tool to upgrade an existing deployment's database. See Appendix B, upgrade-wallaby-db Tool.
BZ#688344
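
As a companion to the notes above, the following Condor configuration sketch shows the negotiator file descriptor limit (BZ#603663), a multiple view server list (BZ#610251), and the view forwarding filter (BZ#610258); all values are illustrative and should be sized for the pool:

# Allow the negotiator roughly twice as many descriptors as jobs per cycle
NEGOTIATOR.MAX_FILE_DESCRIPTORS = 20000
# Forward ads to more than one view server
CONDOR_VIEW_HOST = view1.example.com, view2.example.com
# Restrict which ad types are forwarded to the view hosts
CONDOR_VIEW_CLASSAD_TYPES = Machine, Submitter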

Chapter 5. MRG Management Console

5.1. About the MRG Management Console

Installation and configuration information for the MRG Management Console is available in the MRG Management Console Installation Guide.
The 2.0 release of MRG Management Console contains several new features and enhancements, including:
  • The QMFv2 C++ library and the QMFv2 Ruby binding are added.
  • QMFv2 Python is now available.
  • A persona feature allows customizable views (see the configuration sketch following this list).
  • Version information can now be viewed in the User Interface.
  • Qpid-tool's Python clients can now select their authentication mechanism.
  • The User Interface displays elements to assist in selecting and sorting columns.
  • Tables that receive data from brokers now provide a search feature.
  • The console now informs the user when an action requesting data from a broker is pending.
  • Tables can be exported to a comma separated value format file.
  • Statistics involving the overall health of the grid are now available.
  • Slot icons are now simply grouped as Busy, Transitioning, Owner, or Unclaimed.
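
As a sketch of the persona feature mentioned above, a grid-only view can be selected in the web section of the cumin configuration file; the exact option name and value here are assumed from the persona description in Table 5.1 below:

# /etc/cumin/cumin.conf
[web]
persona: grid
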
Red Hat Enterprise MRG documentation is available for download at the Red Hat Documentation Website.

5.2. MRG Management Console Update Notes

The following is a list of issues to be aware of when upgrading to the 2.0 release of the MRG Management Console. This is not a complete list of bugs fixed in this release. For a complete list, please refer to the Red Hat Enterprise MRG Technical Notes for this release.
Table 5.1. MRG Management Console Update Notes
Description Workaround Reference
The QMFv2 C++ library and the QMFv2 Ruby binding have been added to the qpid-cpp component.
No action required.
BZ#659093
QMFv2 Python has been added to the python-qmf component.
No action required.
BZ#659095
Previously, users could not customize the content displayed in the GUI for a grid-only or messaging-only view based on deployment requirements. A persona feature is now added to the web section of the cumin configuration file, which allows the user to select grid-only or messaging-only views. The default view displays a mixture of both grid and messaging views.
No action required.
BZ#678029
Previously, the Max Allowance value in Cumin was incorrectly displayed as an integer despite being a float value, and unlimited was assigned as the default value rather than a specific limit. The Max Allowance value is now correctly displayed as a float, and the unlimited value is only assigned to values over 1,000,000.
No action required.
BZ#635197
Previously, the version information for Cumin could not be viewed from the web user interface, requiring users to log into the server host and use rpm commands to view the installed package information. The Cumin UI now has an About the console tab under the Your Account page, which displays the version information stored in $CUMIN_HOME/version.
No action required.
BZ#630544
File conflicts as a result of package reorganization (the QMF code was moved from the qpid-mrg set of packages to the qpid-qmf set of packages) caused direct upgrades of debuginfo from version 1.3.2 to a later version to fail if debug symbols were automatically upgraded.
The workaround for this problem is to manually uninstall the previous version of the debuginfo package before an installation or upgrade of the newer messaging and qmf packages is done, such as:
$ rpm -ev qpid-cpp-mrg-debuginfo
This workaround does not introduce any limitations and is simple to execute for users of the debuginfo packages.
BZ#684182
Qpid-tool's Python clients (such as qpid-config, qpid-queue-stats, qpid-route, qpid-stat and qpid-printevents) are now able to select the mechanism they use for authentication.
No action required.
BZ#604149
The MRG Management Console sorts data in ascending or descending order when the user clicks on the relevant column header. Previously in the GUI, there was no indication of which column was used to sort rows, or whether the sort order was ascending or descending. An arrow now appears on the relevant column header to indicate the sort order, and a pop-up tooltip appears when the mouse cursor hovers over the column header, describing how the column will be resorted if selected.
No action required.
BZ#673178
The database schema used by the MRG Management Console has changed in version 2.0.
It is necessary to rebuild the database after the MRG Management Console is installed. This procedure deletes all database data except for user and password information. As the data the console stores is largely dynamic, this should not present a problem. After the MRG Management Console is restarted, it will gather information from the Qpid messaging broker about systems in the MRG deployment and resume calculating statistics as part of its normal operation. The first step is to preserve the console user information. This procedure exports a list of users, roles, and encrypted passwords to a text file. After the database is rebuilt, this file can be imported by cumin-admin to recreate the user data.
Become the root user. Stop the cumin service if it is running.
# /sbin/service cumin stop
Export the user list to a file (here the file chosen is users.bak).
# cumin-admin export-users users.bak
Remove the existing cumin database. Enter yes when prompted.
# cumin-database drop
Recreate the database.
# cumin-database create
Recreate the user list.
# cumin-admin import-users users.bak
Restart the cumin service.
# /sbin/service cumin start
BZ#683975
Previously, the MRG Management Console displayed tables that receive data directly from the broker, but users were unable to search them for desired records. Tables that receive their data directly from the broker can now be searched for specific records.
No action required.
BZ#673180
Certain actions in the MRG Management Console, such as displaying the job summary information or the group quotas, get data directly from the broker, a process that takes a few seconds to complete. Previously, no feedback was displayed to inform the user that the action was pending while data retrieval occurred. The MRG Management Console now includes a mechanism to inform the user that an action is pending when data is being requested from the broker.
No action required.
BZ#673183
Cumin displays data in tables of 100 records per page. Previously, when more than 100 records were present in a table, there was no easy method to save all the records to a file. Cumin now allows a user to save all records in a table to a comma separated value file.
No action required.
BZ#673187
Previously, the Red Hat Enterprise MRG Cumin console presented a list of pools under the Grid tab. Generally, only one pool would display under the Grid tab, making a dedicated page for a single-entry list unnecessary. The Cumin console no longer displays the list of pools; if more than one broker is listed in the brokers= line of the Cumin configuration file, the first broker is used as the default.
No action required.
BZ#673189
An overview page was added to show the overall health of the grid and provide access to various grid statistics at a glance.
No action required.
BZ#673194
The MRG Management Console now pulls data displayed for Limits, Quotas and Job summaries directly from the broker instead of the internal database. Tables that display these fields can now be exported whole into comma separated value format files.
No action required.
BZ#642405
The Red Hat Enterprise MRG Cumin console displays slot icons grouped by the slot's state and activity. The slot state is indicated by the icon color and the activity is indicated by the icon shape. To prevent confusion, slots are now displayed in four groups: Busy, Transitioning, Owner, and Unclaimed.
No action required.
BZ#647500
The max_fsm_pages parameter in /var/lib/pgsql/data/postgresql.conf affects PostgreSQL's ability to reclaim free space. Free space is reclaimed when the MRG Management Console runs the VACUUM command on the database (the vacuum interval can be set in /etc/cumin/cumin.conf). The default value for max_fsm_pages is 20,000. In medium scale deployments, it is recommended that max_fsm_pages be set to at least 64K (see the sample settings following this table). PostgreSQL must be restarted for changes to max_fsm_pages to take effect, and Cumin should be restarted whenever PostgreSQL is restarted. Verify that max_fsm_pages is adequate using the following procedure. Start an interactive PostgreSQL shell:
$ psql -d cumin -U cumin -h localhost
Run the following command from the PostgreSQL prompt. This will produce a large amount of output and may take quite a while to complete.
cumin=# VACUUM ANALYZE VERBOSE;
Set max_fsm_pages to at least the indicated value in /var/lib/pgsql/data/postgresql.conf. Restart the PostgreSQL service and perform this process again, repeating until PostgreSQL indicates that free space tracking is adequate:
DETAIL:  A total of 25712 page slots are in use (including overhead).
25712 page slots are required to track all free space.
Current limits are:  32000 page slots, 1000 relations, using 292 KB.
VACUUM
BZ#699859
Depending on the configuration and number of cumin users, the default max_connections configuration value in the PostgreSQL database can be changed to accommodate a large number of users.
The max_connections parameter in /var/lib/pgsql/data/postgresql.conf specifies the maximum number of concurrent connections allowed by the PostgreSQL server. This value is set to 100 by default.
The maximum number of concurrent connections needed by cumin can be estimated with the following formula:
(cumin-web instances * 36) + (cumin-data instances) + 2
For a default cumin configuration this number will be 43, but with multiple cumin-web instances the number increases significantly. Ensure that the max_connections parameter is set to a value that can accommodate the number of connections required by cumin and any other applications that connect to the PostgreSQL server (see the sample settings following this table).
The text OperationalError: FATAL: sorry, too many clients already, displayed in the UI or contained in a Cumin log file, indicates that available database connections were exhausted and a Cumin operation failed.
BZ#702482
The args attribute of a job accommodates numeric values by default. Non-numeric values require special formatting in the attributes form.
To set the args attribute of a job to a non-numeric value, encapsulate the value and non-numeric characters within quotation marks.
For example, to change the args attribute from 60s to 60m, encapsulate 60m within quotation marks ("60m"). If a non-numeric value is provided without quotation marks, no error is displayed, but the value remains unchanged.
BZ#705819
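
The following postgresql.conf sketch gathers the two tuning notes above; both numbers are illustrative and must be derived from the VACUUM output and the connection formula for the actual deployment:

# /var/lib/pgsql/data/postgresql.conf
max_fsm_pages   = 65536    # at least the page slot count reported by VACUUM ANALYZE VERBOSE
max_connections = 150      # e.g. (2 cumin-web instances * 36) + 4 cumin-data instances + 2 = 78, plus headroom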

Appendix A. Red Hat Enterprise MRG 2.0 Package Manifest

This section contains a complete list of packages released in Red Hat Enterprise MRG 2.0.
Core packages are provided with higher levels of support than non-core packages. To view details about the differences in support levels, refer to this page.

A.1. Red Hat Enterprise MRG 2.0 Messaging Package Manifest

The following is a list of packages and comments for each package for MRG Messaging.
Table A.1. Red Hat Enterprise MRG Messaging Package Manifest
Package Name Core Package Comments
mrg-release Yes
python-qpid Yes
python-saslwrapper No
qpid-cpp-server-ssl Yes
qpid-cpp-server-store Yes
qpid-cpp-server Yes
qpid-tools Yes
saslwrapper No
qpid-cpp-client-devel-docs Yes
qpid-cpp-client-devel Yes
qpid-cpp-client-ssl Yes
qpid-cpp-client Yes
qpid-cpp-server-cluster Yes
qpid-cpp-server-devel Yes
qpid-cpp-server-xml Yes
qpid-java-client Yes
qpid-java-common Yes
qpid-java-example Yes
rhm-docs Yes
ruby-saslwrapper No
saslwrapper-devel No
sesame Yes
sesame-debuginfo Yes
xerces-c-devel No
xerces-c-doc No
xerces-c No
xqilla-devel No
xqilla No
qpid-cpp-client-rdma Yes
qpid-cpp-server-rdma Yes
qpid-tests Yes
rh-qpid-cpp-tests No
ruby-qpid No

A.2. Red Hat Enterprise MRG 2.0 Realtime Package Manifest

The following is a list of packages and comments for each package for MRG Realtime.
Table A.2. Red Hat Enterprise MRG Realtime Package Manifest
Package Name Core Package Comments
kernel-rt Yes
mrg-realtime-docs Yes
rt-setup Yes
rtcheck Yes
rtctl Yes
tuna Yes
ibm-prtm Yes
kernel-rt-debug-devel Yes
kernel-rt-debug Yes
kernel-rt-devel Yes
kernel-rt-doc Yes
kernel-rt-firmware Yes
kernel-rt-trace-devel Yes
kernel-rt-trace Yes
kernel-rt-vanilla-devel Yes
kernel-rt-vanilla Yes
oscilloscope No Sub-package of tuna
python-numeric No Required by tuna
python-linux-procfs No Required by tuna
python-schedutils No Required by tuna
rt-tests Yes
rteval-loads Yes
rteval Yes

A.3. Red Hat Enterprise MRG 2.0 Grid Package Manifest

The following is a list of packages and comments for each package of MRG Grid.
Table A.3. Red Hat Enterprise MRG Grid Package Manifest
Package Name Core Package Comments
condor Yes
condor-debuginfo Yes
condor-qmf Yes
mrg-release Yes
condor-aviary Yes
condor-classads Yes
condor-ec2-enhanced-hooks Yes
condor-ec2-enhanced Yes
condor-job-hooks Yes
condor-kbdd Yes
condor-low-latency Yes
condor-vm-gahp Yes
condor-wallaby-base-db Yes
condor-wallaby-client Yes
condor-wallaby-tools Yes
libyaml No Produced by libyaml spec and used by PyYAML
libyaml-devel No Produced by libyaml spec and used by PyYAML
libyaml-debuginfo No Produced by libyaml spec and used by PyYAML
PyYAML No Produced by the PyYAML spec and used by python-wallabyclient
PyYAML-debuginfo No Produced by the PyYAML spec and used by python-wallabyclient
python-boto No Produced by python-boto spec and used by condor-ec2-enhanced and condor-ec2-enhanced-hooks
python-condorec2e No Produced from condor-ec2-enhanced-hooks spec and used by condor-ec2-enhanced and condor-ec2-enhanced-hooks
python-condorutils No Produced by condor-job-hooks and used by condor-job-hooks, condor-low-latency, condor-ec2-enhanced and condor-ec2-enhanced-hooks
python-wallabyclient No Produced by condor-wallaby spec and used by condor-wallaby-client and condor-wallaby-tools
ruby-rhubarb No Produced by ruby-rhubarb spec and used by wallaby
ruby-spqr No Produced by ruby-spqr spec and used by wallaby
ruby-sqlite3 No Produced by ruby-sqlite3 spec and used by ruby-rhubarb
ruby-sqlite3-debuginfo No Produced by ruby-sqlite3 spec and used by ruby-rhubarb
ruby-wallaby No Produced from wallaby spec and used by wallaby and wallaby-utils
spqr-gen No Produced by ruby-spqr spec and used by wallaby
wallaby-utils Yes
wallaby Yes
wso2-rampart No Produced by wso2-wsf-cpp and used by condor-aviary
wso2-rampart-devel No Produced by wso2-wsf-cpp and used by condor-aviary
wso2-axis2 No Produced by wso2-wsf-cpp and used by condor-aviary
wso2-axis2-devel No Produced by wso2-wsf-cpp and used by condor-aviary
wso2-wsf-cpp No Produced by wso2-wsf-cpp and used by condor-aviary
wso2-wsf-cpp-devel No Produced by wso2-wsf-cpp and used by condor-aviary
wso2-wsf-cpp-debuginfo No Produced by wso2-wsf-cpp and used by condor-aviary

A.4. Red Hat Enterprise MRG 2.0 Management Package Manifest

The following is a list of packages and comments for each package for MRG Management Console.
Table A.4. Red Hat Enterprise MRG Management Package Manifest
Package Name Core Package Comments
cumin Yes
python-psycopg2-doc Yes
python-psycopg2 Yes

Appendix B. upgrade-wallaby-db Tool

This section contains the full contents of the upgrade-wallaby-db tool used to upgrade the remote configuration database of an existing deployment.
#!/usr/bin/ruby
# upgrade_wallaby_db:  
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'mrg/grid/config/shell'

module Mrg
  module Grid
    module Config
      class UpgradeWallabyDb < ::Mrg::Grid::Config::Shell::Command
        # opname returns the operation name; for "wallaby foo", it
        # would return "foo".
        def self.opname
          "upgrade_wallaby_db"
        end
      
        # description returns a short description of this command, suitable 
        # for use in the output of "wallaby help commands".
        def self.description
          ""
        end
      
        def init_option_parser
          # Edit this method to generate a method that parses your command-line options.
          OptionParser.new do |opts|
            opts.banner = "Usage:  wallaby #{self.class.opname}\n#{self.class.description}"
      
            opts.on("-h", "--help", "displays this message") do
              puts @oparser
              exit
            end
          end
        end
      
        def act
          update_features = {"Master"=>{"cmd"=>"REMOVE",
                                        "params"=>{"SEC_DEFAULT_INTEGRITY"=>0, "SEC_DEFAULT_ENCRYPTION"=>0}},
                            "HACentralManager"=>{"cmd"=>"ADD",
                                                 "params"=>{"CONDOR_HOST"=>0}}, 
                            "TriggerService"=>{"cmd"=>"ADD",
                                               "params"=>{"ENABLE_ABSENT_NODES_DETECTION"=>"True", "DC_DAEMON_LIST"=>">= TRIGGERD"}},
                            "EC2"=>{"cmd"=>"ADD",
                                    "params"=>{"EC2_GAHP_LOG"=>"/tmp/EC2GahpLog.$(USERNAME)", "GRIDMANAGER_MAX_SUBMITTED_JOBS_PER_RESOURCE_EC2"=>"20", "EC2_GAHP"=>"$(SBIN)/ec2_gahp", "GRIDMANAGER_MAX_SUBMITTED_JOBS_PER_RESOURCE_AMAZON"=>"20"}}
          }

          fobj = store.getFeature("BaseDBVersion")
          if fobj != nil:
            db_ver = (fobj.params["BaseDBVersion"].to_f rescue 0)
          else
            db_ver = 0
          end

          if db_ver >= 1.9
            puts "The database is up to date"
          else
            t = Time.now.utc
            @snap_name = "Database upgrade automatically generated snapshot at #{t} -- #{((t.tv_sec * 1000000) + t.tv_usec).to_s(16)}"

            puts "Creating pre-upgrade snapshot named #{@snap_name}"
            if store.makeSnapshot(@snap_name) == nil
               exit!(1, "Failed to create pre-upgrade snapshot.  Database upgrade aborted")
            end

            puts "Upgrading database"

            # Add new params
            puts "Adding new Parameters"
            add_param("EC2_GAHP_LOG", 0, false, "Location of the EC2 Gahp log files", "String", false, "/tmp/EC2GahpLog.$(USERNAME)")

            add_param("GRIDMANAGER_MAX_SUBMITTED_JOBS_PER_RESOURCE_EC2", 0, false, "Amazon EC2 has a hard limit of 20 concurrently running instances.  This parameter limits the number of EC2 resources", "Integer", false, "20")

            add_param("EC2_GAHP", 0, false, "The location of the EC2 Gahp binary", "String", false, "$(SBIN)/ec2_gahp")

            add_param("GRIDMANAGER_MAX_SUBMITTED_JOBS_PER_RESOURCE_AMAZON", 0, false, "Amazon EC2 has a hard limit of 20 concurrently running instances.  This parameter limits the number of amazon resources", "Integer", false, "20")

            add_param("TimeToWait", 0, false, "Expression used to indicate the time since the last job was run", "String", false, "(2 * $(HOUR))")

            add_param("ShouldHibernate", 0, false, "Expression used to determine if the machine should hibernate due to inactivity", "String", false, "( (KeyboardIdle > $(StartIdleTime)) && $(CPUIdle) && ($(StateTimer) > $(TimeToWait)) )")

            add_param("HIBERNATE", 0, false, "An expression that represents a lower power state. When this state name evaluates to a valid non-NONE state, the machine will be put into the specified low power state", "String", false, 'ifThenElse( $(ShouldHibernate), "RAM", "NONE" )')

            add_param("OFFLINE_LOG", 0, false, "The full path and file name of a file that stores machine ClassAds for every hibernating machine", "String", false, "$(SPOOL)/OfflineLog")

            add_param("OFFLINE_EXPIRE_ADS_AFTER", 0, false, "The number of seconds specifying the lifetime of the persistent machine ClassAd representing a hibernating machine", "Integer", false, "28800")

            add_param("UNHIBERNATE", 0, false, "A boolean expression that specifies when an offline machine should be woken up", "String", false, "MachineLastMatchTime =!= UNDEFINED")

            add_param("ROOSTER", 0, true, "The location of the Rooster binary", "String", false, "$(LIBEXEC)/condor_rooster")

            add_param("ROOSTER_INTERVAL", 0, false, "The number of seconds between checks for offline machines that should be woken up", "Integer", false, "300")

            add_param("ROOSTER_MAX_UNHIBERNATE", 0, false, "The maximum number of machines to wake up per cycle.  A value of 0 means unlimited", "Integer", false, "0")

            add_param("ROOSTER_UNHIBERNATE_RANK", 0, false, "A ClassAd expression specifying which machines should be woken up first in a given cycle. Higher ranked machines are woken first", "String", false, "Mips*Cpus")

            add_param("ROOSTER_UNHIBERNATE", 0, false, "A boolean expression that specifies which machines should be woken up", "String", false, "Offline && Unhibernate")

            add_param("ROOSTER_WAKEUP_CMD", 0, false, "A string representing the command line to invoke by condor_rooster in order to wake up a machine", "String", false, "\"$(BIN)/condor_power -d -i -s 255.255.255.255\"")

            add_param("HIBERNATE_CHECK_INTERVAL", 0, false, "The number of seconds specifying how often the condor_startd checks to see if the machine is ready to enter a low power state", "Integer", false, "0")

            add_param("ROOSTER_SUBNET_MASK", 0, false, "The subnet used by condor_rooster when waking up a machine", "String", true, "")

            add_param("ENABLE_ABSENT_NODES_DETECTION", 0, true, "Determines whether the condor_triggerd will look for absent nodes", "Boolean", false, "TRUE")

            add_param("QMF_BROKER_AUTH_MECH", 0, true, "The mechanism to use when authenticating with a QMF broker", "String", true, "")

            add_param("QMF_BROKER_USERNAME", 0, true, "The username to use when authenticating with a QMF broker", "String", true, "")

            add_param("QMF_BROKER_PASSWORD_FILE", 0, true, "The location of a file containing a password to use when authenticating with a QMF broker", "String", true, "")

            add_param("WALLABY_FORCE_RESTART", 0, true, "A dummy param used to force all daemons to restart", "String", false, "")

            add_param("WALLABY_FORCE_CONFIG_PULL", 0, false, "A dummy param used to force a configuration pull", "String", false, "")

            add_param("SHARED_PORT", 0, true, "The Shared Port binary", "String", false, "$(LIBEXEC)/condor_shared_port")

            add_param("USE_SHARED_PORT", 0, false, "Specifies whether a condor process should rely on the Shared Port for receiving incoming connections", "Boolean", false, "False")

            add_param("SHARED_PORT_DEBUG", 0, false, "The debugging output that the Shared Port will produce in its log", "String", false, "")

            add_param("DAEMON_SOCKET_DIR", 0, false, "Specifies the directory where Unix versions of condor daemons will create named sockets so that incoming connections can be forwarded to them by the Shared Port.  Write access to this directory grants permission to receive connections through the Shared Port", "String", false, "$(RUN)")

            if db_ver < 1.5
              add_param("BaseDBVersion", 0, false, "The version of the base database", "String", false, "0")
            end

            # Add new features
            puts "Adding new Features"
            add_feature("PowerManagementNode",
                        {"ShouldHibernate"=>"( (KeyboardIdle > $(StartIdleTime)) && $(CPUIdle) && ($(StateTimer) > $(TimeToWait)) )",
                         "HIBERNATE_CHECK_INTERVAL"=>"300",
                         "HIBERNATE"=>'ifThenElse( $(ShouldHibernate), "RAM", "NONE" )',
                         "TimeToWait"=>"(2 * $(HOUR))"},
                         ["ExecuteNode"], [], ["Collector", "Negotiator", "Scheduler"])

            add_feature("PowerManagementCollector",
                        {"OFFLINE_LOG"=>"$(SPOOL)/OfflineLog",
                         "OFFLINE_EXPIRE_ADS_AFTER"=>"28800",
                         "VALID_SPOOL_FILES"=>"$(VALID_SPOOL_FILES), $(OFFLINE_LOG)"},
                         ["Collector"], [], [])

            add_feature("PowerManagementSubnetManager",
                        {"ROOSTER_MAX_UNHIBERNATE"=>"0",
                         "ROOSTER"=>"$(LIBEXEC)/condor_rooster",
                         "UNHIBERNATE"=>"MachineLastMatchTime =!= UNDEFINED",
                         "DAEMON_LIST"=>">= ROOSTER",
                         "ROOSTER_UNHIBERNATE_RANK"=>"Mips*Cpus",
                         "ROOSTER_UNHIBERNATE"=>"Offline && Unhibernate",
                         "ROOSTER_SUBNET_MASK"=>0,
                         "ROOSTER_INTERVAL"=>"300",
                         "ROOSTER_WAKEUP_CMD"=>"\"$(BIN)/condor_power -d -i $(ROOSTER_SUBNET_MASK)\""},
                         [], [], ["PowerManagementNode"])

            add_feature("SharedPort",
                        {"SHARED_PORT"=>"$(LIBEXEC)/condor_shared_port",
                         "USE_SHARED_PORT"=>"True",
                         "DAEMON_LIST"=>">= SHARED_PORT",
                         "DAEMON_SOCKET_DIR"=>"$(RUN)",
                         "SHARED_PORT_DEBUG"=>""},
                         [], [], [])

            if fobj == nil
              add_feature("BaseDBVersion", {"BaseDBVersion"=>"1.9"}, [], [], [""])
            else
              obj = store.getFeature("BaseDBVersion")
              obj.modifyParams("REPLACE", {"BaseDBVersion"=>"1.9"}, {})
            end

            # Add new subsystems
            puts "Adding new Subsystems"
            add_subsystem("rooster", ["ROOSTER", "ROOSTER_INTERVAL", "ROOSTER_MAX_UNHIBERNATE", "ROOSTER_UNHIBERNATE", "ROOSTER_UNHIBERNATE_RANK", "ROOSTER_WAKEUP_CMD"])
            add_subsystem("shared_port", ["SHARED_PORT", "USE_SHARED_PORT", "SHARED_PORT_DEBUG", "DAEMON_SOCKET_DIR"])

            # Update existing features
            puts "Updating existing Features"
            update_features.each_pair do |key, value|
              obj = store.getFeature(key)
              if obj != nil
                obj.modifyParams(value["cmd"], value["params"], {})
              else
                puts "Error updating Feature #{key}"
                upgrade_failed
              end
            end

#            obj = store.getFeature("Master")
#            if obj != nil
#              obj.modifyParams("REMOVE", {"SEC_DEFAULT_INTEGRITY"=>0, "SEC_DEFAULT_ENCRYPTION"=>0}, {})
#            else
#              puts "Error updating Feature Master"
#              upgrade_failed
#            end
#
#            obj = store.getFeature("HACentralManager")
#            obj.modifyParams("ADD", {"CONDOR_HOST"=>0}, {})
#
#            obj = store.getFeature("TriggerService")
#            obj.modifyParams("ADD", {"ENABLE_ABSENT_NODES_DETECTION"=>"True", "DC_DAEMON_LIST"=>">= TRIGGERD"}, {})
#
#            obj = store.getFeature("EC2")
#            obj.modifyParams("ADD", {"EC2_GAHP_LOG"=>"/tmp/EC2GahpLog.$(USERNAME)", "GRIDMANAGER_MAX_SUBMITTED_JOBS_PER_RESOURCE_EC2"=>"20", "EC2_GAHP"=>"$(SBIN)/ec2_gahp", "GRIDMANAGER_MAX_SUBMITTED_JOBS_PER_RESOURCE_AMAZON"=>"20"}, {})

            # Update existing params
            puts "Updating existing Parameters"
            obj = store.getParam("AMAZON_GAHP")
            if obj != nil
              obj.setRequiresRestart(false)
            else
                puts "Error updating Parameter AMAZON_GAHP"
                upgrade_failed
            end

            # Update existing subsystems
            puts "Updating existing Subsystems"
            subsys_list = ["collector", "job_server", "master", "negotiator", "schedd", "startd", "triggerd"]
            subsys_list.each do |sn|
              obj = store.getSubsys(sn)
              if obj != nil
                if sn == "triggerd"
                  obj.modifyParams("ADD", %w{QMF_BROKER_AUTH_MECH QMF_BROKER_USERNAME QMF_BROKER_PASSWORD_FILE ENABLE_ABSENT_NODES_DETECTION}, {})
                elsif sn == "master"
                  obj.modifyParams("ADD", %w{QMF_BROKER_AUTH_MECH QMF_BROKER_USERNAME QMF_BROKER_PASSWORD_FILE WALLABY_FORCE_CONFIG_PULL WALLABY_FORCE_RESTART}, {})
                else
                  obj.modifyParams("ADD", %w{QMF_BROKER_AUTH_MECH QMF_BROKER_USERNAME QMF_BROKER_PASSWORD_FILE}, {})
                end
              else
                puts "Error updating Subsystem #{sn}"
                upgrade_failed
              end
            end
            puts "Database upgraded successfully"
          end
    
          return 0
        end

        def add_param(name, level, restart, desc, kind, change, default)
          obj = store.addParam(name)
          if obj != nil
            obj.setVisibilityLevel(level)
            obj.setRequiresRestart(restart)
            obj.setDescription(desc)
            obj.setKind(kind)
            obj.setMustChange(change)
            obj.setDefault(default)
          else
            puts "Error adding Parameter #{name}.  Reverting database"
            upgrade_failed
          end
        end

        def add_feature(name, params, inc, dep, con)
          obj = store.addFeature(name)
          if obj != nil
            obj.modifyParams('REPLACE', params, {})
            obj.modifyIncludedFeatures('REPLACE', inc, {})
            obj.modifyDepends('REPLACE', dep, {})
            obj.modifyConflicts('REPLACE', con, {})
          else
            puts "Error adding Feature #{name}.  Reverting database"
            upgrade_failed
          end
        end

        def add_subsystem(name, params)
          obj = store.addSubsys(name)
          if obj != nil
            obj.modifyParams('REPLACE', params, {})
          else
            puts "Error adding Subsystem #{name}.  Reverting database"
            upgrade_failed
          end
        end

        def upgrade_failed
          store.loadSnapshot(@snap_name)
          exit!(1, "Database upgrade failed")
        end
      end
    end
  end
end

::Mrg::Grid::Config::Shell::register_command(::Mrg::Grid::Config::UpgradeWallabyDb)
::Mrg::Grid::Config::Shell::main(ARGV + ["upgrade_wallaby_db"])
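
Because the final line passes the command name to the wallaby shell mainline, the tool can be run directly once saved to disk; -h prints the usage banner defined in init_option_parser above:

$ ruby upgrade-wallaby-db

The tool snapshots the database before making changes and, as shown in upgrade_failed, reverts to that snapshot if any step fails.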

Appendix C. Revision History

Revision 1-1, Thu Sep 22 2011, Alison Young
Version numbering change
Revision 1-0, Thu Jun 23 2011, Alison Young
Prepared for publishing
Revision 0.1-17, Thu Jun 23 2011, Alison Young
BZ#674351 - minor update
Revision 0.1-16, Wed Jun 22 2011, Alison Young
rebuilt for docs-stage
Revision 0.1-15, Wed Jun 22 2011, Alison Young
BZ#696324 - minor fix
Revision 0.1-14, Fri Jun 17 2011, Alison Young
BZ#674351 - platform support update
Revision 0.1-13, Thu Jun 16 2011, Alison Young
BZ#688344 - remote configuration database procedure update
Revision 0.1-12, Tue Jun 14 2011, Alison Young
BZ#712092 - add a note about Amazon EC2 Support
Revision 0.1-11, Tue Jun 14 2011, Alison Young
BZ#702482 - setting max_connections configuration for cumin postgresql database
BZ#705819 - minor XML fix
BZ#712092 - add a note about Amazon EC2 Support
Revision 0.1-10, Thu Jun 09 2011, Alison Young
Updated RHN Channels
BZ#688344 - remote configuration database procedure
Revision 0.1-09, Wed Jun 08 2011, Misha Husnain Ali
Minor Updates
Revision 0.1-08, Wed Jun 08 2011, Misha Husnain Ali
Minor Updates
Revision 0.1-07, Wed Jun 08 2011, Misha Husnain Ali
Minor Updates
Revision 0.1-06, Wed Jun 08 2011, Alison Young
Updated RHN Channels
Revision 0.1-05, Tue Jun 07 2011, Misha Husnain Ali
Minor edits and additions.
Revision 0.1-04, Tue Jun 07 2011, Misha Husnain Ali
Minor edits and additions.
Revision 0.1-03, Mon Jun 06 2011, Misha Husnain Ali
Minor edits and supported platform updates.
Revision 0.1-02, Fri Jun 03 2011, Misha Husnain Ali
First draft.
Revision 0.1-01, Tue Feb 22 2011, Alison Young
Fork from 1.3