DBWorker configuration
DBWorker is configured through the XML file
/opt/ptv-optima-vv.n.xxxxptv-optima-as/standalone/configuration/dbworker.xml.
In particular, the <jobs> section lists 9 distinct jobs.

<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?> <configuration> <dbConf> <isDataSource>true</isDataSource> <ds>java:jboss/datasources/optima</ds> </dbConf> <jobs> <!-- #=============================================================================# This job is intended to delete from the system all events of the day before. Note that these queries can't be changed without asking SW production because they are strictly related to the internal DB model of the system. #=============================================================================# --> <job id='1'> <transaction>false</transaction> <name>cleaning_evnt</name> <schedule> <second>0</second> <minute>30</minute> <hour>0</hour> <dayOfMonth>*</dayOfMonth> <month>*</month> <dayOfWeek>*</dayOfWeek> <year>*</year> </schedule> <query id='1'>delete from evnt where ldat < now();</query> <query id='2'>delete from locations where id not in (select eloc from evnt);</query> <query id='3'>delete from situations where id not in (select situ from evnt);</query> <query id='4'>delete from evnt_dval where idno not in (select publication_dval from evnt) and idno not in (select dval from evnt);</query> </job> <!-- #=============================================================================# This job is intended to delete from the system all data related to trafficstate, harmonized trafficstate. #=============================================================================# --> <job id='2'> <transaction>false</transaction> <name>cleaning TrafficState and Harmonized TrafficState</name> <schedule> <second>0</second> <minute>1</minute> <hour>0</hour> <dayOfMonth>*</dayOfMonth> <month>*</month> <dayOfWeek>*</dayOfWeek> <year>*</year> </schedule> <!-- *EDIT* add the trafficstate_* table used in the project --> <query id='1'>TRUNCATE TABLE trafficstate</query> <query id='2'>TRUNCATE TABLE tsta</query> <!-- #=============================================================================# This query is just a template if you want to make a backup of raw trafficstate tables #=============================================================================# <query>DO $$ BEGIN EXECUTE format('CREATE TABLE %I AS SELECT * FROM trafficstate', 'trafficstate_' || to_char(CURRENT_DATE - interval '1 day','yyyy_mm_dd')); END; $$ LANGUAGE plpgsql;</query> --> </job> <!-- #=============================================================================# This job is intended to clean KPI results. Disabled by default WARNING: IF OPTIMA-PLANNING IS deployed, with the following query the DBWORKER will erase also optima planning kpi results. Usually the user wants to keep into the database OPTIMA PLANNING results! #=============================================================================# --> <job id='3'> <transaction>false</transaction> <name>cleaning_rkpi</name> <enabled>false</enabled> <schedule> <second>0</second> <minute>1</minute> <hour>0</hour> <dayOfMonth>*</dayOfMonth> <month>*</month> <dayOfWeek>*</dayOfWeek> <year>*</year> </schedule> <query id='1'>DELETE FROM rkpi WHERE start_time < now() - interval '2 day'</query> </job> <!-- #=============================================================================# This job is intended to clean simu table. Enabled by default to avoid that the dimension of the table continuosly growing and create a performance problem or a space disk problem. 
Evalaute into the project for how long store the result of the simu table #=============================================================================# --> <job id='4'> <transaction>false</transaction> <name>cleaning_simu</name> <enabled>true</enabled> <schedule> <second>0</second> <minute>1</minute> <hour>0</hour> <dayOfMonth>*</dayOfMonth> <month>*</month> <dayOfWeek>*</dayOfWeek> <year>*</year> </schedule> <query id='1'>DELETE FROM simu WHERE (inst < to_char(now() - interval '1 day', 'YYYYMMDDHH24MISS')::bigint AND not exists (select simu.idno FROM simulation_object so where so.simulation = simu.idno))</query> <query id='2'>DELETE FROM simu USING simulation_object so WHERE (inst < to_char(now() - interval '4 day', 'YYYYMMDDHH24MISS')::bigint AND (simu.idno = so.simulation AND so.object_type = 'MICRO'))</query> <query id='3'>VACUUM simu</query> <query id='4'>DELETE FROM simg WHERE not EXISTS (select simu.idno from simu where simg.idno = simu.simg)</query> </job> <!-- #=============================================================================# This job is intended to clean vehicle_xsharing table. Disabled by default to avoid that the index of the table grows too much. #=============================================================================# --> <job id='7'> <transaction>false</transaction> <name>cleaning vehicle_xsharing table</name> <enabled>false</enabled> <schedule> <second>1</second> <minute>0</minute> <hour>0</hour> <dayOfMonth>*</dayOfMonth> <month>*</month> <dayOfWeek>*</dayOfWeek> <year>*</year> </schedule> <query id='1'>DELETE FROM vehicle_xsharing</query> <query id='2'>ALTER SEQUENCE xsharing_id_seq RESTART</query> </job> <!-- #=============================================================================# This job is intended to periodical cleanup simulation groups #=============================================================================# --> <job id='8'> <transaction>false</transaction> <name>cleaning_simg</name> <enabled>true</enabled> <schedule> <second>1</second> <minute>1</minute> <hour>0</hour> <dayOfMonth>*</dayOfMonth> <month>*</month> <dayOfWeek>*</dayOfWeek> <year>*</year> </schedule> <query id='1'>select delete_simg_older_than(interval '24 hours')</query> </job> </jobs> </configuration>
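
As the *EDIT* comment in job 2 indicates, each project should add one TRUNCATE query per project-specific trafficstate_* table. A minimal sketch of such an addition, assuming a hypothetical project table named trafficstate_fcd (not part of the standard schema):

```xml
<!-- Sketch only: trafficstate_fcd is a hypothetical project-specific table name. -->
<query id='3'>TRUNCATE TABLE trafficstate_fcd</query>
```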

The dbConf node contains the database configuration in standard SISTeMA DBConf format.
Parameter | Description |
---|---|
dbConf.isDataSource | Boolean value. If TRUE, the connection is obtained from the JNDI data source configured in dbConf.ds; if FALSE, a direct JDBC connection is built from dbConf.url, dbConf.user, dbConf.pass and dbConf.jdbcClass. |
dbConf.url | Only relevant if dbConf.isDataSource = FALSE. JDBC URL of the connection. |
dbConf.user | Only relevant if dbConf.isDataSource = FALSE. Username for the connection. |
dbConf.pass | Only relevant if dbConf.isDataSource = FALSE. Password for the connection. |
dbConf.jdbcClass | Only relevant if dbConf.isDataSource = FALSE. JDBC class name for the connection. |
dbConf.ds | Only relevant if dbConf.isDataSource = TRUE. JNDI name of the data source. For details on how to configure a data source connection, please refer to the WildFly AS documentation and to the Release Document. |
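
For installations that do not use a WildFly data source, a dbConf node with a direct JDBC connection might look as follows. This is a sketch only: the URL, credentials and driver class below are placeholder values, not taken from an actual installation.

```xml
<!-- Sketch of a dbConf node using a direct JDBC connection instead of a data source.
     All values (host, database name, credentials, driver class) are placeholders
     and must be replaced with the project-specific settings. -->
<dbConf>
  <isDataSource>false</isDataSource>
  <url>jdbc:postgresql://localhost:5432/optima</url>  <!-- placeholder JDBC URL -->
  <user>optima</user>                                 <!-- placeholder username -->
  <pass>secret</pass>                                 <!-- placeholder password -->
  <jdbcClass>org.postgresql.Driver</jdbcClass>        <!-- placeholder driver class -->
</dbConf>
```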

The jobs node contains the list of jobs to be executed. A job is a set of queries scheduled to run at a specified time.
Parameter | Description |
---|---|
jobs.job | Configures a single job. |
jobs.job.name | Name of the job (for debugging and monitoring purposes). |
jobs.job.enabled | Enables or disables the job. Default is TRUE. |
jobs.job.transaction | Specifies whether to execute the queries in a single transaction. |
jobs.job.schedule | Job schedule. Each child element configures the corresponding attribute of a standard Java EE ScheduleExpression; a complete example job is sketched after this table. For details, please refer to the ScheduleExpression documentation at: http://docs.oracle.com/javaee/6/api/javax/ejb/ScheduleExpression.html. |
jobs.job.schedule.second | Value of the second attribute in the ScheduleExpression corresponding to this job. |
jobs.job.schedule.minute | Value of the minute attribute in the ScheduleExpression corresponding to this job. |
jobs.job.schedule.hour | Value of the hour attribute in the ScheduleExpression corresponding to this job. |
jobs.job.schedule.dayOfMonth | Value of the dayOfMonth attribute in the ScheduleExpression corresponding to this job. |
jobs.job.schedule.month | Value of the month attribute in the ScheduleExpression corresponding to this job. |
jobs.job.schedule.dayOfWeek | Value of the dayOfWeek attribute in the ScheduleExpression corresponding to this job. |
jobs.job.schedule.year | Value of the year attribute in the ScheduleExpression corresponding to this job. |
jobs.job.group | This node can contain a single query or multiple queries. In case of multiple queries, if one of the queries fails, the subsequent queries are not executed. |
jobs.job.query | Each query element contains a single query to be executed. |
jobs.trigger | Optional. Configures a query trigger. Jobs can be launched by name by inserting their name into the table specified by the parameter jobs.trigger.table (see the sketch after this table). |
jobs.trigger.schedule | Trigger schedule. This parameter defines how often the jobs triggers table is to be checked. For configuration, see jobs.job.schedule. |
jobs.trigger.table | Name of the table that contains the triggers. The table must have the following columns: |
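
To illustrate how the job parameters fit together, here is a sketch of an additional, project-specific job that runs every Sunday at 02:15 and chains two queries in a group so that the second one is skipped if the first fails. The job id, job name, table name and queries are illustrative only and do not refer to the standard Optima schema; the assumption that <group> wraps its <query> elements follows from the jobs.job.group description above.

```xml
<!-- Sketch of a hypothetical project-specific job (not part of the standard dbworker.xml).
     Runs every Sunday at 02:15; the <group> chains the queries so that the second
     query is skipped if the first fails. "my_project_log" is a placeholder table name. -->
<job id='10'>
  <transaction>false</transaction>
  <name>cleaning_my_project_log</name>
  <enabled>true</enabled>
  <schedule>
    <second>0</second>
    <minute>15</minute>
    <hour>2</hour>
    <dayOfMonth>*</dayOfMonth>
    <month>*</month>
    <dayOfWeek>Sun</dayOfWeek>
    <year>*</year>
  </schedule>
  <!-- Assumed nesting: the group contains the queries it chains (see jobs.job.group). -->
  <group>
    <query id='1'>DELETE FROM my_project_log WHERE created &lt; now() - interval '30 day'</query>
    <query id='2'>VACUUM my_project_log</query>
  </group>
</job>
```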
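
A query trigger, if used, could be configured along these lines. The element layout is inferred from the jobs.trigger.* parameter paths above, the table name dbworker_triggers is a placeholder, and the required column layout of that table is not reproduced here; check it against the project's DBWorker version before use.

```xml
<!-- Sketch of an optional trigger configuration (layout inferred from jobs.trigger.*).
     "dbworker_triggers" is a placeholder table name; the table must provide the
     column layout required by DBWorker. -->
<trigger>
  <schedule>
    <second>0</second>
    <minute>*/5</minute>  <!-- check the trigger table every 5 minutes -->
    <hour>*</hour>
    <dayOfMonth>*</dayOfMonth>
    <month>*</month>
    <dayOfWeek>*</dayOfWeek>
    <year>*</year>
  </schedule>
  <table>dbworker_triggers</table>
</trigger>
```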