The CA Agile Central Database Connector retrieves Agile Central (AC) entity item information and writes it to a CSV file or a Postgres database table. The connector is classified as a one-way, one-time mechanism: information in Agile Central is never altered. Information is only written to CSV files, which are then used to populate tables in your target database. The connector runs on Windows, Linux, or Mac.

The connector consists of software that you run on your platform according to your desired schedule. Its configuration is expressed in a YAML text file. A configuration file can list one or more Agile Central entities, as long as each entity specified is a leaf entity in the Agile Central Data Model. Common entities are story, defect, task, release, and user. See the AC Help documentation, in particular the sub-document concerning the WSAPI, for more information on AC entities and the data model.
Database Requirements and Considerations
- You must already have a database installed, set up, and operational; this connector does not create a database. The connector does not need to be installed or run on the same host on which your database resides, but the database host must be reachable over the network from the host on which you install the connector. The user you specify in the DB section of the configuration must have permission to create database tables.
- The connector emits CSV files for the purpose of bulk loading tables in your target database and saves them to the CSV/<database_name> directory. The database name is set by the Name property in the DB section of the configuration file. Do not name your database default, since the CSV/default directory is reserved for CSV files created during CSV-only mode.
- If you want the data retrieved and written to the CSV file to be loaded into your target database, you must include the DB section in the configuration file with all required entries and any optional entries you need. By default, in this mode the CSV files are removed once the database tables are loaded with the information from the CSV files. You can alter this behavior with a setting in the DB section of the configuration file.
- By default, when the connector is run, after the CSV file for an entity is written and before the database table is populated with that data, the connector drops and recreates the target table. You can alter this behavior in the DB section of the configuration file. Be aware that this has implications for the suitability and coherence of data in the database tables.
- Most database implementations allow table aliasing. This capability is useful when you configure timestamped table names, so that each run of the connector creates and populates an entirely new set of tables with the AC data retrieved by the connector. You employ an alias (typically the name of the AC entity, for example, Task) that refers to the database table Task_<timestamp_value> with the most recent <timestamp_value>. This lets your queries and reports use just the entity names.
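For example, a minimal sketch of such an alias in Postgres using a view (the table name task_20180125_090000 is a hypothetical timestamped name; views are one common way to implement the alias, but any mechanism your database supports will do):

```sql
-- Point the stable name "task" at the most recently created timestamped table.
-- Re-run this after each connector run, substituting the newest timestamp.
CREATE OR REPLACE VIEW task AS
    SELECT * FROM task_20180125_090000;

-- Reports can now query the alias instead of the timestamped table:
-- SELECT formattedid, name FROM task;
```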
- The connector requires Bulk Insert permissions for the database. Consult your database administrator for assistance with this.
This guide includes:
- Software Requirements
- Connector Download
- CSV-Only Mode
- Agile Central Considerations
- Appendix A: Configuration File Editing
- Appendix B: Agile Central Project Specification
- Appendix C: Postgres Database User Privileges
- Revision History
Software Requirements
- Installed Postgres database.
- Python 3.5 or 3.6 (the 64-bit version is recommended for Windows). Download the installer from www.python.org.
Connector Download
To download the connector, follow the steps here.
LICENSE                  # CA Copyright file
README-Postgres.txt      # this file
ac2db                    # connector initiation script; this is what you will run
configs                  # holds any configuration files used with this connector
    sample_postgres.yml  # a sample configuration to use as a base reference
csv                      # holds CSV files produced by running the connector
dbeif                    # dbeif module root directory
log                      # holds log files produced by running the connector
Configure the CA Agile Central Database Connector software to write entity item information to a CSV file or Postgres database table.
- Specify only one or two entities to process.
- Specify a minimal number of fields of interest.
- Use the Query configuration item to limit the number of Agile Central entity records retrieved and written.
- Use the Preview: True setting in the Service section to confirm successful operation and realistic record counts.
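Putting these tips together, a minimal starter configuration might look like the sketch below. The server, APIKey, workspace, and project values are placeholders for your environment, and BaseQuery is the query property used by the sample configurations in this guide:

```yaml
DatabaseConnector:
    AgileCentral:
        Server     : rally1.rallydev.com
        APIKey     : _abc123x
        Workspaces:
            - Workspace: Jazz
              Projects:
                  - Deep Cats
        Entities   : Defect
        Fetch      : ObjectID,FormattedID,Name,State
        BaseQuery  : (CreationDate > 2018-01-01)
        ResolveUser: False
    Service:
        Preview : True   # show record counts without creating or loading tables
```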
Decide on the database table naming policy:
- It is simplest to use the AC entity names as the name of the database tables.
- You can specify that the table names will contain a common timestamp from each run, but this means you need to ensure adequate capacity in your database. Unless you cull older tables, your database will add n new tables each run, where n is the number of different AC entities you specify in the configuration file.
- Once you have gotten a simple configuration to work, you can incrementally add entities, fields, selectors, and more as needed to accomplish your intended objectives.
Review the log files while you are in the initial stages to see if there are any errors or warnings. Once you have confirmed proper operation, you may want to set up automated monitoring of the log files.
Set Up the Software
Follow these steps:
- Locate the configs subdirectory.
- Copy the sample_postgres.yml file to a file named suitably for your environment, for example: cp sample_postgres.yml product_x.yml (or some other suitably named file with a .yml suffix).
- Edit your product_x.yml file. Change the sample values for credentials, Workspace, Project, Server, Name to values that are relevant and valid for your environment.
See Appendix A for details on configuration file syntax.
You can run the Agile Central Database Connector software on your platform either manually or on a desired schedule. This software requires that the configuration file reside in the configs subdirectory. You specify the base name of the file on the command line. Do not specify the subdirectory in the command line argument.
Using a terminal window or console:
- cd to the installation root directory, for example /opt/local/sw/dbeif-1.1.0
- Run the connector with the base name of your configuration file, for example: python3.6 ac2db acdata.yml
Follow these steps:
- Use cron, launchctl, or Windows Task Scheduler.
- Make sure the environment in effect when running this software is appropriately set so that you can run:
python3.6 $DBEIF/ac2db your_config_file_name
$DBEIF is the reference to an environment variable containing the fully qualified path to the directory where the software is installed. For example, if you unzipped the package in /opt/local/sw, then your DBEIF would be set like this:
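Continuing the example above (the path comes from the earlier installation step; the syntax shown is for a POSIX shell — on Windows, use setx or the System Properties dialog instead):

```shell
# Point DBEIF at the connector installation root (example path from this guide).
export DBEIF=/opt/local/sw/dbeif-1.1.0
echo "$DBEIF"   # prints /opt/local/sw/dbeif-1.1.0
```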
The connector always writes a log file name based on the configuration file name. The log file is written into the log subdirectory under the base installation directory. Within the configuration file, you can set the LogLevel which determines the amount of logging information written to the log file. When you set the LogLevel to DEBUG, you will get the full range of logging messages that can be helpful in pinpointing where a problem occurred and what information was being processed at the time of the anomaly.
Run the connector in Preview mode when setting things up for the first time. This allows you to get the connections to Agile Central and your database to initialize and validate correctly without creating and populating database tables. This mode can show you counts of records on a per Agile Central entity basis that would be written to the destination CSV file or the destination database tables.
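For example, a hedged sketch of a Service section for a first run (this assumes, per the sample configuration file, that Preview and LogLevel both live in the Service section; set Preview to False once validation succeeds):

```yaml
Service:
    Preview  : True    # validate connections and report record counts only
    LogLevel : DEBUG   # maximum logging detail while setting things up
```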
- Error when a field is listed twice in the Fetch
Solution: Remove one of the STATE fields from the fetch.
..... DEBUG: AgileCentralConnection.createFieldCategories - ACConn.createFieldCategories has pi_state_fields:  and field_categories['pi_state'] of:  ERROR: DBConnector.run - RallyResponseError in DBConnector.run processing entity: Story -- Unable to retrieve 10 chunks of data .....
- Error using an APIKey with read only access
Solution: Use an APIKey with full access.
..... FATAL: AgileCentralConnection.connect(128) - <ConfigurationError> Unable to connect to Agile Central at rally1.rallydev.com: 403 b'<html>\n<head>\n<meta http-equiv=Content-Type content=text/html; charset=UTF-8' .....
- EnvironmentalKey ident_vector not valid error
Solution: Re-enter the userID and password for both DB and CA Agile Central.
..... FATAL: SecurityManager.decrypt(51) - <ConfigurationError> EnvironmentalKey ident_vector not valid for decryption target value. Reset all credential values to clear text in config file! .....
CSV-Only Mode
If you do not intend to use a database, you can export data to CSV files only. In this case, you do not have to install a database, and you omit the DB section of the configuration file. The connector writes a CSV file (whose name contains a timestamp) for each entity processed into the CSV/default folder. When run in this mode, the connector never removes the CSV files from the CSV subdirectory.
For example, in the config below the DB section is omitted since we are not interested in populating a database. The Service section is optional: in the absence of Service section, its properties are set to default values.
AgileCentral:
    Server : rally1.rallydev.com
    APIKey : _abc123x
    Workspaces:
        - Workspace: Jazz
          Projects:
              - Deep Cats
    Entities: HierarchicalRequirement,Defect
    Fetch: ObjectID,FormattedID,Name,ScheduleState,State,Project,Workspace
    ResolveUser: False
There are some database table settings you can configure to control how your data is stored.
Time Stamp Files
(Postgres only) Controls whether the CSV files (and the corresponding database tables) are named for the entity processed with a common timestamp for the run appended, or just for the entity. With timestamping, each run produces an entirely new set of files and tables; without it, only the entity name is used.
TimeStampFiles: True or False
Drop Existing Tables
Controls whether any existing database table named for the entity being processed is dropped prior to processing, after which a table named for the entity is created and populated. (Postgres only; relevant only when TimeStampFiles is False.)
DropExistingTables: True or False
Agile Central Considerations
There is additional configuration you can perform to retrieve data from Agile Central.
Setting the ResolveUser property to True in the configuration file's AgileCentral section instructs the connector to represent users by their _refObjectName, which in Agile Central defaults to DisplayName as long as DisplayName is not empty. If ResolveUser is set to True, the Owner column of a work item table is populated with display names, such as Wallace, Gromit. If ResolveUser is set to False, the Owner column is populated with ObjectIDs. Note that the DisplayName field on the User object in Agile Central does not have a unique constraint: an Agile Central subscription may have two different users with the same DisplayName, Wallace.
Instead of resolving a user to a DisplayName, you may use this approach:
- Set ResolveUser to False
- Export AgileCentral users to Users table in your database
- Export AgileCentral work items to respective tables in the same database, such as Task, Story tables
- Consider using a Join between two tables, such as Task and Users after the connector run completes
Here is an example of a JOIN for task and users table that combines columns from both tables:
SELECT task.objectid, task.formattedid, task.owner,
       users.username AS "owner username",
       users.officelocation AS "owner officelocation"
  FROM task, users
 WHERE task.owner = users.objectid;

  objectid   | formattedid |    owner    |       owner username        | owner officelocation
-------------+-------------+-------------+-----------------------------+----------------------
 86253669480 | TA93985     | 32429145056 | [email protected] | 62 West Wallaby St
 86253670264 | TA93986     | 32429145056 | [email protected] | 62 West Wallaby St
 86253670816 | TA93987     | 32429145000 | [email protected]    | 62 West Wallaby St
 86253670827 | TA93999     | 32429147147 | [email protected]  | Mossy Bottom Farm
- In the Agile Central WebServices API, a query on the User entity with no query criteria returns only the user associated with the credentials in use. To get more than that single user, the query (expressed either as a BaseQuery or a Selector for the User entity) must not be empty.
Example: BaseQuery: ((Disabled = True) OR (Disabled != True))
While it is possible to use a single configuration file, in some scenarios it is more convenient to use multiple configuration files for the same database. For example, suppose you want to create Task, Story, Feature, and Users tables in a single database. You may use one configuration file, but combining dissimilar objects in a single configuration file makes it harder to come up with an Agile Central BaseQuery (which serves as a lowest common denominator) and may also make the Fetch list unwieldy. Alternatively, you may create two configuration files: one whose Entities property lists Task, Story, and Feature, and another whose Entities property lists only User. Note that since User is a reserved table name in a Postgres database, the connector creates a Users table to avoid a conflict. This happens automatically: the Entities property of the configuration file must still use the valid Agile Central entity name, User.
- When listing field names in the Fetch specification, you must use the attribute names as they appear in the Agile Central WSAPI documentation. For most non-custom fields this means the first letter is capitalized; for compound-word attributes like ScheduleState, each word is capitalized. Use the c_ prefix when specifying a custom field; for example, a hypothetical custom field displayed as Risk would be fetched as c_Risk.
- There is no need to duplicate a field in the Fetch even if it appears in more than one entity. For example, if your entities list includes both HierarchicalRequirement and Defect, one mention of ScheduleState will suffice.
- The connector supports both standard and custom portfolio item types. Portfolio item entities must be listed in the Entities property of the configuration file's AgileCentral section without the PortfolioItem/ prefix. In the example below, both a standard portfolio item type Feature and a custom portfolio item type Strategy are included:
Entities: Feature, Strategy
The connector does not support the ambiguous case where a custom portfolio item type's name matches a standard entity name; for example, the regular Agile Central entity Task and a similarly named portfolio item type PortfolioItem/Task. This constraint applies even if you never reference the ambiguously named item: if a custom portfolio item type has the same name as an existing entity in the Agile Central system, you cannot use this connector.
Retrieving Revision History
It is not possible to query the RevisionHistory table directly (as can be seen in the WSAPI, it supports only a read operation, not query), so you must get the information through the Revision entity.
This requires creating two database tables and then performing a join on them: one table for Story, Defect, and so on, and one for Revision. To populate the Revision table, query by CreationDate, and do so in small increments so the WSAPI query does not time out.
Here is an example configuration for Story:

DatabaseConnector:
    Entities  : HierarchicalRequirement
    Fetch     : ObjectID,Name,RevisionHistory,FormattedID
    BaseQuery : (FormattedID = US2)
    ...

Here is an example query for Revision:

DatabaseConnector:
    Entities  : Revision
    Fetch     : ObjectID,Description,RevisionNumber,RevisionHistory
    BaseQuery : (CreationDate > 2018-12-02T19:08:42.035Z)
    ...
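Once both tables are loaded, they can be correlated on the RevisionHistory reference, since each story row and each revision row carries the ObjectID of the same RevisionHistory. A hedged sketch (table and column names assume the connector's lower-cased naming, as in the task/users join shown earlier):

```sql
-- Join stories to their revisions via the shared RevisionHistory reference.
SELECT hierarchicalrequirement.formattedid,
       revision.revisionnumber,
       revision.description
  FROM hierarchicalrequirement, revision
 WHERE hierarchicalrequirement.revisionhistory = revision.revisionhistory
 ORDER BY revision.revisionnumber;
```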
Appendix A: Configuration File Editing
The Agile Central Database Connector uses a text file in the YAML format. For brevity, this document mentions several of the most relevant syntax items and covers the three sections of a valid YAML configuration that can be used with the connector.
Refer to the Download Samples section at the bottom of this page for a sample configuration file with explanatory notes.
- Use a text editor (not MS Word or Google Doc) to edit the file
- Never use a tab character in the file; YAML does not allow or recognize tab characters
- Save the file in UTF-8 format
- Use a monospace font
- Be consistent with the number of spaces you use to indent (four spaces is a common indent level in YAML docs)
- On a line, the first occurrence of a non-quoted # character indicates a comment, the # character and all characters to the right are ignored in processing
- Keep the sections in the same order as is present in the sample.yml file
- Be aware that the colon character (:) is significant, it separates a key from the value
- Be aware that the dash character (-) is significant, it denotes the start of a list which may have one or more key value pairs that constitute the list item
- You usually do not have to quote values if they have spaces in them; you will have to quote the value if it contains an embedded # character
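A few of the rules above, illustrated (the key names here are purely illustrative, not connector properties):

```yaml
Plain  : this value has spaces and needs no quotes   # from here on is a comment
Quoted : "a value with an embedded # must be quoted"
Fruits:                                              # a dash starts each list item
    - apple
    - pear
```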
For complete information on YAML syntax, consult the web page at www.yaml.org/start.html.
Here is a skeleton of the template_config.yml file.
DatabaseConnector:
    AgileCentral:
        ...  # several key value pairs are relevant for this section
    DB:
        ...  # several key value pairs are relevant for this section
    Service:
        ...  # a few key value pairs relevant for the overall operation of the connector appear in this section
The AgileCentral section specifies values used to obtain a connection with Agile Central. The DB section specifies values used to obtain a connection with the target destination database and sets the policies governing time stamping of tables and whether dropping existing tables is allowed. The Service section controls some aspects of the connector's behavior on an overall basis.
Appendix B: Agile Central Project Specification
In Agile Central, project names do not have to be unique. The connector provides a mechanism for distinguishing projects with the same name.
For example, suppose a fragment of a project tree looks like this:

Biospheric
    Salamandra
    Corral
        Salamandra
Use a // sequence to separate the path components.
To specify the first occurrence of the Salamandra project you would have:
Biospheric // Salamandra
To specify the second occurrence of the Salamandra project you would have:
Biospheric // Corral // Salamandra
By default, the connector does not scope to a project's parent or children. A /* appended to a project name scopes down to the project's child projects, as in the example of a configuration file's AgileCentral section below:
AgileCentral:
    Server : rally1.rallydev.com
    APIKey : _abc123x
    Workspaces:
        - Workspace: Jazz
          Projects:
              - Deep Cats/*
    Entities : HierarchicalRequirement
    Fetch    : ObjectID,FormattedID,Name,Project,Workspace
    ResolveUser: False
Some scenarios require only workspace scope, in which case the Project specification can be entirely omitted from the AgileCentral section. Here is a config file example for exporting Agile Central user data to a rallyusers database:
AgileCentral:
    Server : rally1.rallydev.com
    APIKey : _abc123x
    Workspaces:
        - Workspace: N
    BaseQuery: (LastLoginDate > 2017-05-01)
    Fetch: ObjectID,UserName,LastLoginDate
    Entities: User
    ResolveUser: False
DB:
    Type: Postgres
    Name: rallyusers
    User : postgres
    Password : postgres
    Server: 127.0.0.1
    Port: 5432
    SaveCSV: True
    Populate: True
    DropExistingTables: True
    TimeStampTables: False
Appendix C: Postgres Database User Privileges
If the Superuser property of the DB section is set to True, the connector executes an SQL COPY command to bulk copy CSV rows into the database. Set the Superuser property to False if you receive the following error: must be superuser to COPY to or from a file. Hint: Anyone can COPY to stdout or from stdin. Psql's \copy command also works for anyone. With Superuser set to False, the connector attempts to bulk copy using psql's \copy command.
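For example, a hedged DB section fragment for a non-superuser database account (acreader is a hypothetical account name; the other properties follow the earlier rallyusers example):

```yaml
DB:
    Type      : Postgres
    Name      : rallyusers
    User      : acreader   # a hypothetical non-superuser account
    Password  : secret
    Server    : 127.0.0.1
    Port      : 5432
    Superuser : False      # fall back to psql's \copy for bulk loading
```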
Revision History
- 1.1.0-master --- 25-Jan-2018
- Supports MS SQL Server.
- 1.0.1-master --- 2-Dec-2017
- Accounted for inconsistencies of spacing in the schema.
- 1.0.0-master -- 22-June-2017
- Initial Release - works with Postgres DB.