Commit 01f283ac authored by cvs2svn's avatar cvs2svn

This commit was manufactured by cvs2svn to create tag 'COOL_2_6-patches'.

git-svn-id: file:///git/lcgcool.svndb/cool/tags/COOL_2_6-patches@15146 4525493e-7705-40b1-a816-d608a930855b
parent f58c67fb
// $Id: BoostThread_headers.h,v 1.1 2008-08-07 08:28:36 avalassi Exp $
#ifndef BOOSTTHREAD_HEADERS_H
#define BOOSTTHREAD_HEADERS_H 1
// TEMPORARY! This will be fixed in Boost 1.36.0
// See http://svn.boost.org/trac/boost/ticket/1556
// See http://www.nabble.com/-thread--Missing-initializer-warnings-from-gcc-in-1.35.0-td16447467.html
// Disable warnings triggered by the Boost 1.35.0 boost/thread.hpp header
// See http://wiki.services.openoffice.org/wiki/Writing_warning-free_code
// See also http://www.artima.com/cppsource/codestandards.html
// See also http://gcc.gnu.org/onlinedocs/gcc-4.1.1/cpp/System-Headers.html
// See also http://gcc.gnu.org/ml/gcc-help/2007-01/msg00172.html
#if defined __GNUC__
#pragma GCC system_header
#endif
// Include files
#include <boost/thread.hpp>
#endif // BOOSTTHREAD_HEADERS_H
<use name=CoolApplication>
# Dependency on CORAL RelationalAccess
# [Package granularity dropped: inherit dependency on all CORAL via CoolKernel]
###<external ref=CORAL use=RelationalAccess></external>
COOL VerificationClient
=======================
==== This document is not updated to reflect latest code changes. ====
==== In particular, some 'essential' options have been added ====
==== As of 12 April 2007, the most updated example test is: ====
==== examples/athenaReadWi.py ====
Getting started (for the impatient):
---------------
- Follow the 'Prerequisites'
- To run examples/small.py:
cd <basedir>/examples
# Follow examples/READMEexamples, in particular:
# Edit examples/small.py, assigning your values for options: 'testVars', 'aliasesDict', 'connectionString', 'dbName', 'dbUser', 'dbSchema'
source setPath
python small.py
# - The location of your results is printed on the screen when running small.py
# - As explained in 'examples/small.py', compare your results file reportSummary_small_<date>.txt
#   to examples/example_reportSummary_small.txt
# - Further check the results, as explained under 'Results'.
Description:
------------
VerificationClient is a framework to run a configurable suite of (COOL stress) tests
(via Oracle, Frontier or squid),
gather client (and optionally Oracle server) statistics into a DB,
and present them as tables (and graphs) via a web page.
A COOL-specific C++ client is driven by a set of client-independent python and php scripts.
A cx_Oracle python client (cx_OracleClient.py) can run arbitrary SQL stress tests on Oracle
(the current script supports COOL read queries only).
(cx_Oracle is a python module to connect to Oracle.)
By setting the values of configuration options, a 'testsSuiteFile' specifies what tests to run and what metrics to report.
Many options have default values and need not be specified if the default is the desired value.
Clients are invoked, possibly remotely, via ssh.
A testsSuite can:
- write to a DB,
- read from a DB.
Each client may either create, write to, or read from a (COOL) DB.
Writing/reading may either be done at a given rate, to simulate a given flow of information,
or continuously, to test the maximum capacity of a system.
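The two modes can be pictured with the following minimal python sketch; the write_one_iov()
helper, the parameter names and the pacing logic are illustrative assumptions, not the actual
client implementation:

  # Hypothetical sketch of rate-limited vs. continuous writing (not the real client)
  import time
  def write_loop(write_one_iov, duration_sec, target_writes_per_sec=None):
      start = time.time()
      done = 0
      while time.time() - start < duration_sec:
          write_one_iov()                      # one COOL write (assumed helper)
          done += 1
          if target_writes_per_sec:            # rate-limited mode: pace the writes
              next_due = start + done / float(target_writes_per_sec)
              time.sleep(max(0.0, next_due - time.time()))
          # else: continuous mode, write as fast as possible
      return done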
Results: (Based on client log files,) a DB of results is created.
The results of some queries are saved as txt and excel files and as php data files for graphs.
Clients that sense client machine 'distress' (the load is too high (>14) or the remaining swap space is low)
automatically stop.
Extending functionality:
Extending the framework functionality usually involves adding configuration options.
(Configuration options may be used by the python scripts, and/or passed to the client.)
Host selection: The host to run a client on is selected (from the 'hosts' option list) in a round-robin manner
(a sketch follows after the feature list below).
More client features:
- Sharing write work:
WRITING clients share work - each client takes a share of the folders to be written to.
READING clients do not share work; they read from all folders.
- COOL block operations:
As default, writing and reading are done for all channels of a folder in one COOL API command.
- Folder selection:
Clients select folders to work upon in a round robin manner.
- COOL payload:
The payload schema is configurable.
Payload content consists of pseudo-random digits.
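As a minimal python sketch only, round-robin selection (of hosts or folders) amounts to
cycling over the configured list; the host names below are examples, not a prescribed setup:

  # Illustrative round-robin selection over the 'hosts' option list
  import itertools
  hosts = ['lxb0676', 'lxb0677', 'lxb0678']   # example host list
  rr = itertools.cycle(hosts)
  for clientNum in range(5):
      print 'client', clientNum, 'runs on', rr.next()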
Notation: <basedir> denotes the VerificationClient directory where this SW (and its logs) reside
Prerequisites:
--------------
- COOL should be installed and available for use with Oracle. In particular:
-- The user should have an Oracle account to use.
-- An authentication.xml file is needed.
- Use csh rather than sh, because this SW has been tested on csh only.
- sqlite should be installed (http://www.sqlite.org/index.html).
- All machines that run clients must have (afs) read/write access to the main VerificationClient directory.
- The user that runs VerificationClient must be able to ssh to all hosts to be used without typing a password.
- VerificationClient code should be available:
-- The VerificationClient directory should be checked out from cvs (under cool/contrib)
and be moved to <COOL version>/Utilities.
This is required in order to allow compilation of the cpp code
-- Compile the cpp code: cd <basedir> ; source setPath ; scram b
- cx_Oracle: server statistics (and optional non C++ client) access Oracle via cx_Oracle python module.
In order to use these features, cx_Oracle should be installed.
- jpgraph: In order to see results as graphs, php and jpgraph-source should be available
(php is installed on CERN afs and jpgraph-source is available to be linked (from <basedir>)
to ~dfront/public/cool/jpgraph*.)
Optional: In order to flush the Oracle cache before each test (a configurable option), an appropriate privilege is required.
Usage, per testsSuite:
----------------------
- Select/create/edit a testsSuiteFile:
-- testsSuiteFiles reside at <basedir>/tests directory.
-- <basedir>/examples contains a few example testSuites.
-- Either use one of the existing files, or create a new one (e.g. by copying an existing one).
-- Edit the testsSuiteFile according to the following 'Configuring a testsSuiteFile' section.
- Run tests suite:
cd <basedir>/tests
source setPath
python <testsSuiteFile>
Configuring a testsSuiteFile:
-----------------------------
Before running a testsSuite, set the desired values of options:
File <basedir>/testOpts.py lists all supported options, and option-specific constraints such as:
default value, type, allowed values, minVal, required (i.e. must be specified in the testsSuite file).
In a testsSuite file, one only has to specify 'required' options and options for which the default values are not desired.
The following required options must exist in a testsSuite file (a minimal sketch follows below): "dbName", "dbUser", "dbSchema", "aliasesDict", "connectionString", "testVars"
In particular:
- "testVars" is a list of two names of options to be used for result graphs:
-- The options pointed to by "testVars" must be lists, and may have one or more values.
-- A test is run for each combination of values of first and second testVar.
-- The first option denotes the X axis of graphs and the second one has a graph per value.
-- If the first option has only one value, no graphs will appear.
In this case, results may be still seen:
- at .txt files at 'summary' and 'details' columns of the web display
- or as 'xls' files under 'logs directory' column of the web display
-- Config variables that belong to config key 'noTestPerListValueKeys' cannot belong to "testVars"
- "aliasesDict" has an entry per connection, mapping it to its alias.
Results:
--------
Web page:
The results web page defaults to <basedir>/web/indexNew.html.
Each run of a testsSuiteFile adds one line to a web page table.
Each table row has a name and links to: graphs, reports, configuration, run-specific logs directory.
See also: Appendix: creating a web site to allow others browse your results
Details: Results reside under the <basedir>/logs directory:
The results of each invocation of a testsSuiteFile reside in its run-specific <logsdir>:
<basedir>/logs/<testsSuiteName>/[<optional sub name>]/<Start date and time>
Example: <basedir>/logs/60vs240IOVstest12/1ClientRunning/06-03-03_09-37-06
The following files exist at each <logsdir>:
- coolTest.db: An sqlite DB with the collected/analyzed data in one table: coolTest.
- Two queries are done on this DB, resulting in files:
-- report<test name and date>.txt: All collected data.
-- reportSummary<test name and date>.txt: The results of a query that averages some frequently used parameters,
such as elapsed time, over clients, for each test, together with server statistics.
In addition to the .txt files, related .xls files are created as well.
- Various log files
Result examples:
My result directories may be read at ~dfront/public/www/cool/coolLogs/
See also appendix 'Results database and queries'
Limitations:
------------
- COOL VerificationClient is specific to COOL over Oracle
- Some metrics of the client host monitoring are Linux specific
- When the script that runs a client crashes, related logs may be lost
Client limitations:
- Only single version folders are supported.
- All intervals are closed - no open intervals are written.
Contact:
--------
David Front
Weizmann Institute, CERN IT/LCG
dfront@cern.ch
+97226719644
Problem reports, enhancement suggestions and questions are welcome
About:
------
Document and SW updated at 13/10/6
cvs version: verificationClient_13_10_6
Tested on COOL_1_3_3
Access this document from the web
http://cool.cvs.cern.ch/cgi-bin/cool.cgi/cool/contrib/VerificationClient/README
Appendixes:
===========
** The appendixes are not up to date. Some recent small changes are missing.
appendix: 'Results database and queries'
----------------------------------------
Results database (table coolTest at <logsdir>/coolTest.db):
- Fields:
# for full list, see <basedir>/statsConfig.py, directory: opts
time - Time of test start
testName - Name of test: a name of configuration variable that has multiple values
(not 'hosts')
testValue - the value under test of the configuration variable
testNum - The numeric location of testValue within all values
readOrWrite - 'write' or 'read'
client - Number of client
runNum - Number of run (usually there is only one run)
Phase - Relevant only if option maxClientsRunning is used:
phase 0 - ramp up, phase 1 - steady
report - (Not in use:) Number of report within client.
elapsed - (Important:) Actual net elapsed time for write or read in seconds
duration - Expected duration for test
ActualWoR - Actual amount of data written or read in MB
ExpectedWoR - Expected amount of data written or read in MB
percWoR - Percent of data written or read from the expected amount
numPerSecWoR - Number of COOL writes or reads per second
MBWoR - Mega bytes, write or read
expectedMBWoR - Expected mega bytes, write or read
MBpMinWoR - Throughput: Number of MB per minute written or read
expectedMBpMinWoR - Expected throughput
load5 - The average load of client for last 5 minutes (from uptime)
clientsCPUperc - The CPU percent that the client consumed
clientsCPUsec - The CPU time in seconds that the client consumed
clientMemPerc - The memory percent that the client consumed
clientMemRss - The rss memory that the client consumed
Resident set size (the amount of physical memory), in kilobytes
clientMemRsz - The rsz memory that the client consumed
Total virtual memory in use by the process
<Optional: Oracle server statistics>
- Queries for *.xls and *.txt files:
File           Query
-------------- --------------------------------------------------------------------
report         select * from coolTest
reportSummary  select min(time), testName, testValue, testNum, readOrWrite,
               round(avg(elapsed)), round(min(elapsed)), round(max(elapsed)),
               round(avg(load5)), runNum, round(avg(clientsCPUperc)),
               round(avg(clientMemPerc)), <optional: Oracle server statistics>
               from coolTest group by testName, testValue, testNum, readOrWrite,
               runNum order by readOrWrite, testName, testNum, runNum
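The same queries can also be run by hand against <logsdir>/coolTest.db; a minimal python
sketch using the standard sqlite3 module (the file path is illustrative, use your own <logsdir>):

  # Run a reportSummary-style query against coolTest.db by hand
  import sqlite3
  conn = sqlite3.connect('coolTest.db')
  rows = conn.execute(
      "select min(time), testName, testValue, testNum, readOrWrite, "
      "round(avg(elapsed)), round(min(elapsed)), round(max(elapsed)) "
      "from coolTest group by testName, testValue, testNum, readOrWrite "
      "order by readOrWrite, testName, testNum")
  for row in rows:
      print row
  conn.close()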
Appendix: Running a coolVerificationClient client stand-alone:
--------------------------------------------------------------
coolVerificationClient may be run stand alone (after setting the appropriate environment). For example:
coolVerificationClient --dbName=test1_lb --numFieldsStr=1 --reportPeriod=3600 --lengtFieldStr=1000
--oracleGatherTableStatsLast=1 --sinceDelta=1 --iterations=1 --numIOVs=1 --numObjPerWriteOrRead=1
--totNumFolders=100 --testDuration=1 --numFieldsInt=0 --string --commitAfterIOVs=1 --numClients=1
--totNumChanns=1000 --readPeriod=1 --chVisitPeriod=5 --sleepTime=-1 --oracleGatherTableStats1=0
--firstSince=0 --dbUser=dfront --MCSVBS=True --noCool=0 --openAllFolders=1 --numFieldsFlt=0
--dbKind=oracle --CooldbName=LUCA --create3rdIndex=0 --dbSchema=dfront --runNum=0 --clientNum=0 --read
However, since there are so many options, and the defaults may not be what you want,
I find it preferable to run the client via the scripts, as explained above under 'Usage, per testsSuite'.
appendix: FILES:
----------------
General configuration files:
----------------------------
Optional: Edit the following configuration files to fit your testing needs:
- coolVersion.py - The COOL version and SCRAM_ARCH to test against
- statsConfig.py - The client (and optional Oracle server session) statistics to be monitored
- php/graphsConfig.php - Parameters that control results graphs
Configuration per tests suite run:
----------------------------------
testsSuiteFile - <basedir>/tests/<testsSuiteName>.py - The superset of a tests suite configuration + code to run tests
Scripts that run tests:
-----------------------
testOpts.py - Defines and applies constraints on test options, such as:
default values, allowed/min/max values, types, isRequired
<logsdir>/coolVerificationClient_run - Runs one client. Created automatically according to the contents of the testsSuiteFile
multi_client_tests.py - Runs multiple tests. For each test, populates <logsdir>/config*.py and runs multi_client_test_.py.
multi_client_test_.py - Runs a test according to a <logsdir>/config*.py configuration,
potentially with multiple clients, potentially on multiple hosts.
multi_client_utilities.py - Utilities
gengetoptt - Script to generate the C options-parsing code (via gengetopt)
toDb.py - Analyzes logs of tests that ran, creating the files mentioned above under 'Results'.
Invoked automatically by multi_client_tests.py.
Manual usage: follow the logs of the automatic invocation under (previous) <logsdir>/multi_client_tests_out_ logs.
args2opts.py - Utility to change argv options into a python dictionary
gdbb - Utility to run a client via gdb
Scripts that stop tests:
------------------------
killCoolMClient - Utility to kill local client processes on one machine.
In order to kill client processes on multiple machines (see the sketch below):
foreachh <list of hosts> ssh \$value `pwd`/killCoolMClient
foreachh - One-liner foreach: for each value of $1, apply the rest of the arguments
(can also be used for 'less aggressive' purposes)
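The same effect can be sketched in python (the host names are examples and the script
path is an assumption; the real foreachh/killCoolMClient scripts may behave differently):

  # Illustrative sketch: ssh to each host and run killCoolMClient there
  import os, subprocess
  hosts = ['lxb0676', 'lxb0677', 'lxb0678']
  script = os.path.join(os.getcwd(), 'killCoolMClient')
  for host in hosts:
      subprocess.call(['ssh', host, script])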
C++ COOL client:
----------------
coolVerificationClient.cpp - The COOL client code. Writes or reads. For writing, it first recreates the given COOL DB.
BuildFile
distress.cppp, distress.h, distress1.h - handle distress:
In case of machine 'distress', when the load is too high (>14) or the remaining swap space is low, tests stop.
coolVerificationClientOpt.ggo, coolVerificationClientOpt_.h, coolVerificationClientOpt_.c - options parsing for coolVerificationClient.cpp
cx_Oracle related:
------------------
cx_Oracle_base.py - Common SW for cx_Oracle usage
cx_server_stat.py - Formats statistics reports and optionally collects Oracle server statistics
cx_OracleClient.py - An example cx_Oracle (COOL read) client.
opts.py - Processes argv options (an enhancement of args2opts.py), currently used by the cx_*.py scripts.
Documentation:
--------------
README - this file
Various:
--------
seal.opts - seal options
passw - Envelope to call coolAuthentication to get password
setPath (also tests/setPath, examples/setPath) - Sets the environment.
Usage: source setPath; python <testsSuite script>
sql related:
-----------
flushcache.sql
flushcache_test1_lb.sql
sqlpluss - Uses 'passw' to extract the password, calls sqlplus, and runs cmdsFile with args
createIndexIOVsinceUntil.sql - create ('3rd') index on (IOV_since,IOV_until)
oracleGatherTableStats.sql
Web related:
------------
web/indexNew.html - Automatically created and updated per tests suite run
web/index.html - To be derived manually (after removing wrong test results, if any)
The following 3 files are concatenated to create the web results file (web/indexNew.html):
resultsHead - (Constant) header
web/resultsRows - Web results table rows. One is added per tests suite run
resultsTail - (Constant) tail
resultsPage.py - Creates/updates a web page with verification client testing results
resultsPageOptsCreate.py - Creates a config file to be used by resultsPage.py
Graphs related:
---------------
jpgraph-1.20.2 - symbolic link to jpgraph SW
php/graphs.php - The script that creates graphs
php/graphsConfig.php - Configuration for php/graphs.php
<logsdir>/dataArrays.php - tests suite data results formatted for graphs creation
Tests:
------
tests/ - Directory to put your tests
examples - Directory with example tests:
READMEexamples - Explains how to run the example tests
small.py - Small example: writes and reads data
example_reportSummary_small.txt - Example report of small.py
smallRead.py - A small read test that creates result graphs
smallcx_OracleRead.py - An example that runs a cx_Oracle COOL client
Appendix: creating a web site to allow others browse your results
-----------------------------------------------------------------
YOU may view your VerificationClient results web page by pointing your browser to the appropriate html files.
In order to allow OTHERS, in addition, to browse your results (assuming that you run from afs at CERN), do:
- Ask for a personal web site to be created for your home directory:
See: http://webredirect.web.cern.ch/WebServices/Docs/AuthDoc/CreateDeleteWeb/
As a result, http://<your user>.home.cern.ch/ will point to your ~/public/www.
- Setting up a VerificationClient web page:
ln -s <baseDir> ~/public/www/VerificationClient
ln -s <baseDir>/web/indexNew.html <baseDir>/index.html
# or do alternative commands that will cause <baseDir>/index.html to point to the results you want to present
# The following commands should be issued CAREFULLY, once per directory, before running tests,
# giving others permission to view your result files.
At <baseDir> and at <basedir>/logs do the following:
-- create a file called .htaccess with one line: Options +Indexes
-- fs setacl -dir . -acl system:anyuser rl
-- fs setacl -dir * -acl system:anyuser rl
Appendix: links
---------------
- Partial description of collected Oracle statistics:
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/stats002.htm
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_2087.htm#sthref3982
# use killl to do things like: kill -9 `ps -efw | grep 'arg string' | awk '{print $2}'`
#setenv HOSTS 'lxb0676 lxb0677 lxb0678 lxb0679 lxb0680'
#foreachh ${HOSTS} ssh \$value `pwd`/killCoolMClient
#foreachh ${HOSTS} ssh \$value ps -efw\| grep killCoolMClient
~/vc133/killCoolVerificationClient
#ssh lxb0677 ~/vc133/killCoolVerificationClient
#ssh lxb0678 ~/vc133/killCoolVerificationClient
#ssh lxb0679 ~/vc133/killCoolVerificationClient
#ssh lxb0680 ~/vc133/killCoolVerificationClient
#! /usr/bin/env python
import sys
# TBD: Change this to be a class
def helpp():
    """My help.
    Using arg dictionary and invoking optional arg command/s, create a script help
    """

# TBD: Use eval or exec to get numbers as numbers rather than strings
# TBD: enhance to handle numbers as (part of) option names!
# TBD: Enhance to specify what options are allowed
def opts():
    """Parse '--name=value' and '--flag' arguments from sys.argv into the global
    dictionary optsDictt (flags without a value are stored as 'NoVal')."""
    verbose = False
    global optsDictt
    optsDictt = {}
    for opt in sys.argv:
        if verbose: print opt
        if opt.startswith('--'):  # An option
            eq = opt.find('=')
            if eq > 0:  # Option has a value
                optionn = opt[2:eq]
                valuee = opt[eq+1:]
                if verbose: print 'optionn:', optionn, '=', valuee
                optsDictt[optionn] = valuee
            else:  # A flag without a value
                if verbose: print 'optionn:', opt[2:]
                optsDictt[opt[2:]] = 'NoVal'
    if verbose: print optsDictt

def checkRequired(required):
    """Return True if all names in 'required' are present in optsDictt;
    otherwise print the missing ones and return False."""
    isOk = True
    missing = ''
    for aRequired in required:
        if aRequired not in optsDictt.keys():
            missing = missing + str(aRequired) + ', '
            isOk = False
    if not isOk:
        print ' Missing required option/s:', missing
    return isOk

opts()
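
# Example (hypothetical) usage from another script, assuming this file is
# importable as args2opts; the option strings below are illustrative only:
#   import sys, args2opts
#   sys.argv = ['prog', '--dbName=test1_lb', '--read']
#   args2opts.opts()                        # fills the global optsDictt
#   args2opts.checkRequired(['dbName'])     # True when all required options are present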
#echo "athenaCoolClient args: $*"
set MBWoR=10 # Estimated
set LOGG=CLT
set startTime=`date +%y-%m-%d_%H-%M-%S`
echo "${LOGG} start time: ${startTime}"
@ beginTime = `date +%s`;
source ${HOME}/cmthome/setup.csh -tag=12.3.0;
setenv AthenaDBTestRecDir ${HOME}/w0/AtlasOffline-12.3.0/AtlasTest/DatabaseTest/AthenaDBTestRec;
source ${AthenaDBTestRecDir}/cmt/setup.csh;
setenv runDir ${AthenaDBTestRecDir}/run;
cd ${runDir};
# Set environment after cmt and before running athena
setenv CORAL_DBLOOKUP_PATH ${runDir};
setenv CORAL_AUTH_PATH ${HOME}/private
# Try to tell frontier/squid to log. This may not be the correct place:
#setenv FRONTIER_LOG_LEVEL debug
#setenv CORAL_OUTMSG_LEVEL D
#setenv FRONTIER_LOG_FILE frontier_client.log
rm -f core.*
# Run athena
athena.py ../share/ReadRefDBExample.py;
rm -f core.*
# 1 + is a kludge to avoid division by 0
# Enhancement: for csh floating-point arithmetic, pipe the expression through bc, e.g.: echo "<expr>" | bc -l
@ elapsed = 1 + `date +%s` - $beginTime;
set endTime=`date +%y-%m-%d_%H-%M-%S`
echo "${LOGG} end time: `date`"
echo "${LOGG} endTime: ${endTime}"
echo "${LOGG} elapsed: $elapsed";
if ( $elapsed == 0 ) then
@ MBpMinWoR = 99999
else
@ MBpMinWoR = ${MBWoR} * 60 / $elapsed
endif;
echo "MBpMinWoR: $MBpMinWoR";
cd ${baseDir} ;
setenv PYTHONPATH ${PYTHONPATH}:${logsPath} ;
#echo "athenaCoolClient args before cx_server_stat.py: $*"
python cx_server_stat.py $* --startTime=${startTime} --endTime=${endTime} --sidStats="(1, 1)" --elapsed=${elapsed} --MBpMinWoR=${MBpMinWoR} --expectedMBWoR=${MBWoR} --MBWoR=${MBWoR} --report=0 --duration=0 --ExpectedWoR=0 --percWoR=0 --numPerSecWoR=0 --expectedMBpMinWoR=0 --CombinedMbPerSec=0 --clientPid=0;
# Connections:
set oraXX="<dbConnection>impl=cool;techno=logical;schema=ATLAS_FRONTIER_ORACLE;X:REFC1301</dbConnection>"
set squidXX="<dbConnection>impl=cool;techno=logical;schema=ATLAS_SQUID;X:REFC1301</dbConnection>"
set frontierXX="<dbConnection>impl=cool;techno=logical;schema=ATLAS_FRONTIER;X:REFC1301</dbConnection>"
set squid3d2XX="<dbConnection>impl=cool;techno=logical;schema=ATLAS_SQUID3D2;X:REFC1301</dbConnection>"
set wisquid3d2XX="<dbConnection>impl=cool;techno=logical;schema=WI_SQUID3D2;X:REFC1301</dbConnection>"
set frontier3d2XX="<dbConnection>impl=cool;techno=logical;schema=ATLAS_FRONTIER3D2;X:REFC1301</dbConnection>"
set sqliteXX="<dbConnection>impl=cool;techno=logical;schema=ATLAS_SQLITE;X:REFC1301</dbConnection>" #TBD: Fake naming
#setenv CondDBCoolSchema0 wisquid3d2XX
#echo "athenaCoolClient args: $*"
set MBWoR=10 # Estimated
set LOGG=CLT
set homee=/srv01/agrp/dfront/panfshome/
set startTime=`date +%y-%m-%d_%H-%M-%S`
echo "${LOGG} start time: ${startTime}"
@ beginTime = `date +%s`;
#source ${HOME}/cmthome/setup.csh -tag=12.3.0;
source ${homee}/setup.csh
setenv AthenaDBTestRecDir ${homee}/atlasRead/w0/AtlasOffline-12.3.0/AtlasTest/DatabaseTest/AthenaDBTestRec;
source ${AthenaDBTestRecDir}/cmt/setup.csh;
setenv runDir ${AthenaDBTestRecDir}/run;
cd ${runDir};
# Set environment after cmt and before running athena
setenv CORAL_DBLOOKUP_PATH ${runDir};
setenv CORAL_AUTH_PATH ${homee}/atlasRead/private
# Try to tell frontier/squid to log. This may not be the correct place:
setenv FRONTIER_LOG_LEVEL debug
setenv CORAL_OUTMSG_LEVEL D
setenv FRONTIER_LOG_FILE frontier_client.log
rm -f core.*
# Run athena
athena.py ../share/ReadRefDBExample.py;
rm -f core.*
# 1 + is a kludge to avoid division by 0
# Enhancement: for csh floating-point arithmetic, pipe the expression through bc, e.g.: echo "<expr>" | bc -l
@ elapsed = 1 + `date +%s` - $beginTime;
set endTime=`date +%y-%m-%d_%H-%M-%S`
echo "${LOGG} end time: `date`"
echo "${LOGG} endTime: ${endTime}"
echo "${LOGG} elapsed: $elapsed";
if ( $elapsed == 0 ) then
@ MBpMinWoR = 99999
else
@ MBpMinWoR = ${MBWoR} * 60 / $elapsed
endif;
echo "MBpMinWoR: $MBpMinWoR";
cd ${baseDir} ;
setenv PYTHONPATH ${PYTHONPATH}:${logsPath} ;
#echo "athenaCoolClient args before cx_server_stat.py: $*"
python cx_server_stat.py $* --startTime=${startTime} --endTime=${endTime} --sidStats="(1, 1)" --elapsed=${elapsed} --MBpMinWoR=${MBpMinWoR} --expectedMBWoR=${MBWoR} --MBWoR=${MBWoR} --report=0 --duration=0 --ExpectedWoR=0 --percWoR=0 --numPerSecWoR=0 --expectedMBpMinWoR=0 --CombinedMbPerSec=0 --clientPid=0;
#============================================================================
# $ Id: requirements,v 1.4 2005/08/24 17:31:07 marcocle Exp $
#============================================================================
package VerificationClient
#============================================================================
#============================================================================
# Public dependencies
#============================================================================
use CoolKernel v*
use CoolApplication v*
include_path none
#============================================================================
private
#============================================================================
macro_append SEAL_linkopts ' -llcg_SealServices' \
WIN32 ' lcg_SealServices.lib'
macro_append use_linkopts ' $(Boost_linkopts_thread) $(CORAL_libs)'
#============================================================================
# Build rules
#============================================================================
application coolVerificationClient ../*.cpp
# Fake target for tests
action tests "echo No tests in this package"
macro_remove cmt_actions_constituents "tests"
# Fake target for examples
action examples "echo No examples in this package"
macro_remove cmt_actions_constituents "examples"
# gengetopt (See: http://www.gnu.org/software/gengetopt/gengetopt.html)