The Web API is a component which makes it possible for external systems to access and manipulate data stored in an instance of DHIS2. More precisely, it provides a programmatic interface to a wide range of exposed data and service methods for applications such as third-party software clients, web portals and internal DHIS2 modules.
The Web API adheres to many of the principles behind the REST architectural style. To mention a few important ones:
The fundamental building blocks are referred to as resources. A resource can be anything exposed to the Web, from a document to a business process - anything a client might want to interact with. The information aspects of a resource can be retrieved or exchanged through resource representations. A representation is a view of a resource’s state at any given time. For instance, the reportTable resource in DHIS2 represents a tabular report of aggregated data for a certain set of parameters. This resource can be retrieved in a variety of representation formats including HTML, PDF, and MS Excel.
All resources can be uniquely identified by a URI (also referred to as a URL). All resources have a default representation. You can indicate that you are interested in a specific representation by supplying an Accept HTTP header, a file extension or a format query parameter. So in order to retrieve the PDF representation of a report table you can supply an Accept: application/pdf header or append .pdf or ?format=pdf to your request URL.
Interactions with the API require correct use of HTTP methods or verbs. This implies that for a resource you must issue a GET request when you want to retrieve it, a POST request when you want to create one, PUT when you want to update it and DELETE when you want to remove it. So if you want to retrieve the default representation of a report table you can send a GET request to e.g. /reportTable/iu8j/hYgF6t, where the last part is the report table identifier.
Resource representations are linkable, meaning that representations advertise other resources which are relevant to the current one by embedding links into themselves (please be aware that you need to request href in your field filter for this to work). This feature greatly improves the usability and robustness of the API as we will see later. For instance, you can easily navigate from the reportTable resource to the indicators which are associated with a report table through the embedded links, using your preferred representation format.
While all of this might sound complicated, the Web API is actually very simple to use. We will proceed with a few practical examples in a minute.
The DHIS2 Web API supports two protocols for authentication: Basic Authentication and OAuth2. You can verify and get information about the currently authenticated user by making a GET request to the following URL:
You can get more information about authorities (and whether the current user has a certain authority) by using the following endpoints:
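The endpoints in question follow the conventions used elsewhere in this chapter (the /26 version prefix is optional, and F_CONSTANT_ADD is just an example authority name):

```
GET /api/26/me
GET /api/26/me/authorities
GET /api/26/me/authorities/F_CONSTANT_ADD
```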
1.2.1 Basic Authentication
The DHIS2 Web API supports Basic authentication, a technique for clients to send login credentials over HTTP to a web server. Technically speaking, the username appended with a colon and the password is Base64-encoded, prefixed with Basic and supplied as the value of the Authorization HTTP header. More formally, that is Authorization: Basic base64encode(username:password). Most network-aware development frameworks provide support for Basic authentication, such as Apache HttpClient, Spring RestTemplate and C# WebClient. An important note is that this authentication scheme provides no security on its own since the username and password are sent in plain text and can be easily decoded. Using it is recommended only if the server is using SSL/TLS (HTTPS) to encrypt communication between itself and the client. Consider that a hard requirement for secure interactions with the Web API.
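As a concrete sketch, the header value can be constructed in a shell. The credentials here are the well-known demo defaults admin/district, not anything from your own instance:

```shell
# Base64-encode "username:password" and prefix with "Basic".
auth="Basic $(printf '%s' 'admin:district' | base64)"
echo "$auth"   # Basic YWRtaW46ZGlzdHJpY3Q=
```

The resulting value is then sent as the Authorization header on each request.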
1.2.2 OAuth2

DHIS2 supports the OAuth2 authorization protocol. OAuth2 is an open standard which allows third-party clients to connect on behalf of a DHIS2 user and get a reusable bearer token for subsequent requests to the Web API. DHIS2 does not support fine-grained OAuth2 roles but rather provides applications access based on the user roles of the DHIS2 user.
Each client for which you want to allow OAuth 2 authentication must be registered in DHIS2. To add a new OAuth2 client go to Apps > Settings > OAuth2 Clients, click add new and enter the desired client name and the grant types.
1.2.2.1 Adding a client using the Web API
An OAuth2 client can also be added through the Web API. As an example we can send a payload like this:
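A sketch of such a payload, POSTed to the /api/26/oAuth2Clients endpoint (the name, cid, secret and redirect URI here are illustrative values, not defaults):

```json
{
  "name": "OAuth2 Demo Client",
  "cid": "demo",
  "secret": "1e6db50c-0fee-11e5-98d0-3c15c2c6caf6",
  "grantTypes": ["password", "refresh_token", "authorization_code"],
  "redirectUris": ["http://www.example.org"]
}
```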
We will use this client as the basis for our next grant type examples.
1.2.2.2 Grant type password
The simplest of all grant types is the password grant type. This grant type is similar to basic authentication in the sense that it requires the client to collect the user's username and password. As an example we can use our demo server:
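A sketch of the token request against the demo server: the registered client authenticates with its own cid and secret as Basic credentials, while the user's credentials go in the form body (use the values of your own client registration):

```
POST https://play.dhis2.org/demo/uaa/oauth/token
Authorization: Basic base64(cid:secret)
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=admin&password=district
```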
This will give you a response similar to this:
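The token values below are illustrative; only the field names matter:

```json
{
  "access_token": "07fc551c-806c-41a4-9a8c-10658bd15435",
  "token_type": "bearer",
  "refresh_token": "a4e4de45-4743-481d-9345-2cfe34732fcc",
  "expires_in": 43175,
  "scope": "ALL"
}
```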
For now, we will concentrate on the access_token, which is what we will use as our authentication (bearer) token. As an example we will get all data elements using our token:
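A sketch of a request authenticated with the bearer token (token value illustrative):

```
GET https://play.dhis2.org/demo/api/26/dataElements.json
Authorization: Bearer 07fc551c-806c-41a4-9a8c-10658bd15435
```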
1.2.2.3 Grant type refresh_token
In general the access tokens have limited validity. You can have a look at the expires_in property of the response in the previous example to understand when a token expires. To get a fresh access_token you can make another round trip to the server and use refresh_token which allows you to get an updated token without needing to ask for the user credentials one more time.
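A sketch of the refresh request; as with the password grant, the client authenticates with its own cid and secret while the refresh token goes in the form body:

```
POST https://play.dhis2.org/demo/uaa/oauth/token
Authorization: Basic base64(cid:secret)
Content-Type: application/x-www-form-urlencoded

grant_type=refresh_token&refresh_token=<refresh-token>
```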
The response will be exactly the same as when you get a token to start with.
1.2.2.4 Grant type authorization_code
The authorization_code grant type is the recommended approach if you don't want to store the user credentials externally. It allows DHIS2 to collect the username/password directly from the user instead of the client collecting them and then authenticating on behalf of the user. Please be aware that this approach uses the redirectUris part of the client payload.
Step 1: Using a browser, visit this URL (if you have more than one redirect URI registered, you might want to add &redirect_uri=http://www.example.org): /uaa/oauth/authorize?client_id=<client-id>&response_type=code
Step 2: After the user has successfully logged in and accepted your client's access request, the browser is redirected back to your redirect URI, e.g. http://www.example.org/?code=XYZ, where the code query parameter carries the authorization code.
Step 3: This step is similar to what we did in the password grant type; using the given code, we will now ask for an access token:
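A sketch of the token request for this grant type; depending on the client registration you may also have to repeat the redirect_uri parameter:

```
POST https://play.dhis2.org/demo/uaa/oauth/token
Authorization: Basic base64(cid:secret)
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code=<code-from-redirect>
```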
1.3 Error and info messages
The Web API uses a consistent format for all error/warning and informational messages:
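A sketch of such a message, matching the 403 example discussed below and the field table that follows (the message text will vary):

```json
{
  "httpStatus": "Forbidden",
  "httpStatusCode": 403,
  "status": "ERROR",
  "message": "You don't have the proper permissions to read objects of this type."
}
```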
Here we can see from the message that the user tried to access a resource it did not have access to. It uses the HTTP status code 403, the HTTP status message Forbidden and a descriptive message.
| Name || Description |
| httpStatus || HTTP Status message for this response, see RFC 2616 (Section 10) for more information. |
| httpStatusCode || HTTP Status code for this response, see RFC 2616 (Section 10) for more information. |
| status || DHIS2 status, possible values are OK | WARNING | ERROR, where OK means everything was successful, ERROR means the operation did not complete and WARNING means the operation was partially successful. If the message contains a response property, please look there for more information. |
| message || A user friendly message telling whether the operation was a success or not. |
| devMessage || A more technical, developer-friendly message (not currently in use). |
| response || Extension point for future extension to the WebMessage format. This will be documented when it starts being used. |
1.4 Date and period format
Throughout the Web API we refer to dates and periods. The date format is yyyy-MM-dd. For instance, if you want to express March 20, 2014 you must use 2014-03-20.
The period format is described in the following table (also available on the API endpoint /api/periodTypes).
| Interval || Format || Example || Description |
| Day ||yyyyMMdd|| 20040315 || March 15 2004 |
| Week ||yyyyWn|| 2004W10 || Week 10 2004 |
| Week Wednesday ||yyyyWedWn|| 2015WedW5 || Week 5 with start Wednesday |
| Week Thursday ||yyyyThuWn|| 2015ThuW6 || Week 6 with start Thursday |
| Week Saturday ||yyyySatWn|| 2015SatW7 || Week 7 with start Saturday |
| Week Sunday ||yyyySunWn|| 2015SunW8 || Week 8 with start Sunday |
| Bi-week ||yyyyBiWn|| 2015BiW1 || Weeks 1-2 2015 |
| Month ||yyyyMM|| 200403 || March 2004 |
| Bi-month ||yyyyMMB || 200401B || January-February 2004 |
| Quarter ||yyyyQn|| 2004Q1 || January-March 2004 |
| Six-month ||yyyySn|| 2004S1 || January-June 2004 |
| Six-month April ||yyyyAprilSn || 2004AprilS1 || April-September 2004 |
| Year || yyyy || 2004 || 2004 |
| Financial Year April || yyyyApril || 2004April || Apr 2004-Mar 2005 |
| Financial Year July || yyyyJuly || 2004July || July 2004-June 2005 |
| Financial Year Oct || yyyyOct || 2004Oct || Oct 2004-Sep 2005 |
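The fixed period identifiers above are plain string templates, so they are easy to generate from a date. A minimal sketch using GNU date (on macOS/BSD, substitute date -j -f '%Y-%m-%d' "$d" for date -d "$d"):

```shell
# Build DHIS2 period identifiers for a sample date.
d="2004-03-15"
day=$(date -d "$d" +%Y%m%d)      # Day period, e.g. 20040315
month=$(date -d "$d" +%Y%m)      # Month period, e.g. 200403
year=$(date -d "$d" +%Y)         # Year period, e.g. 2004
m=$(date -d "$d" +%m)
quarter="${year}Q$(( (10#$m - 1) / 3 + 1 ))"   # Quarter period, e.g. 2004Q1
echo "$day $month $year $quarter"
```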
In some parts of the API, like the analytics resource, you can utilize relative periods in addition to the fixed periods defined above. The relative periods are relative to the current date and allow e.g. for creating dynamic reports. The available relative period values are:
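A non-exhaustive sample of relative period values (the full list depends on the DHIS2 version; consult the analytics documentation of your instance):

```
THIS_WEEK, LAST_WEEK, THIS_MONTH, LAST_MONTH, THIS_BIMONTH, LAST_BIMONTH,
THIS_QUARTER, LAST_QUARTER, THIS_SIX_MONTH, LAST_SIX_MONTH, THIS_YEAR,
LAST_YEAR, MONTHS_THIS_YEAR, QUARTERS_THIS_YEAR, LAST_12_MONTHS,
LAST_4_QUARTERS, LAST_52_WEEKS, LAST_5_YEARS
```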
1.5 Identifier schemes
This section provides an explanation of the identifier scheme concept. Identifier schemes are used to map metadata objects to other metadata during import, and to render metadata as part of exports. Please note that not all schemes work for all Web API calls, and not all schemes can be used for both input and output (this is outlined in the sections explaining the various Web APIs).
The full set of identifier scheme object types available are listed below, using the name of the property to use in queries:
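A sample of the scheme properties commonly available (the exact set depends on the endpoint and DHIS2 version):

```
idScheme
dataElementIdScheme
categoryOptionComboIdScheme
orgUnitIdScheme
programIdScheme
programStageIdScheme
trackedEntityIdScheme
trackedEntityAttributeIdScheme
```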
The general idScheme applies to all types of objects. It can be overridden by specific object types.
The default scheme for all parameters is UID (stable DHIS 2 identifiers). The supported identifier schemes are described in the table below.
| Scheme || Description |
| ID, UID || Match on DHIS2 stable Identifier, this is the default id scheme. |
| CODE || Match on DHIS2 Code, mainly used to exchange data with an external system. |
| NAME || Match on DHIS2 Name. Please note that this uses what is available as object.name, and not the translated name. Also note that names are not always unique, and in that case they cannot be used. |
| ATTRIBUTE:ID || Match on a metadata attribute. This attribute needs to be assigned to the type you are matching on, and its unique property must be set to true. The main usage of this is also to exchange data with external systems; it has some advantages over CODE since multiple attributes can be added, so it can be used to synchronize with more than one system. |
Note that identifier schemes are not an independent feature but need to be used in combination with resources such as data value import and metadata import.
As an example, to specify CODE as the general id scheme and override with UID for the organisation unit id scheme you can use these query parameters: ?idScheme=CODE&orgUnitIdScheme=UID
As another example, to specify an attribute for the organisation unit id scheme, code for the data element id scheme and the default UID id scheme for all other objects, you can use these parameters: ?orgUnitIdScheme=ATTRIBUTE:<attribute-uid>&dataElementIdScheme=CODE
1.6 Browsing the Web API
The entry point for browsing the Web API is /api/. This resource provides links to all available resources. Four resource representation formats are consistently available for all resources: HTML, XML, JSON and JSONP. Some resources have other formats available, like MS Excel, PDF, CSV and PNG. To explore the API from a web browser, navigate to the /api/ entry point and follow the links to your desired resource, for instance /api/dataElements. For all resources which return a list of elements, certain query parameters can be used to modify the response:
| Param || Option values || Default option || Description |
| paging || true | false || true || Indicates whether to return lists of elements in pages. |
| page || number || 1 || Defines which page number to return. |
| pageSize || number || 50 || Defines the number of elements to return for each page. |
| order || property:asc/iasc/desc/idesc || (none) || Orders the output by the specified property; only properties that are both persisted and simple (no collections, idObjects etc.) are supported. iasc and idesc give case-insensitive sorting. |
An example of how these parameters can be used to get a full list of data element groups in XML response format is: /api/26/dataElementGroups.xml?paging=false
You can query for elements on the name property, instead of returning the full list of elements, using the query parameter. In this example we query for all data elements with the word “anaemia” in the name: /api/26/dataElements.json?query=anaemia
You can get specific pages and page sizes of objects like this: /api/26/dataElements.json?page=2&pageSize=20
You can completely disable paging like this: /api/26/dataElements.json?paging=false
To order the result based on a specific property: /api/26/indicators.json?order=shortName:desc
You can find an object based on its ID across all object types through the identifiableObjects resource: /api/26/identifiableObjects/<id>
1.6.1 Translation

DHIS2 supports translations of database content, such as data elements, indicators and programs. All metadata objects in the Web API have properties meant to be used for display / UI purposes, which include displayName, displayShortName and displayDescription.
| Parameter || Values || Description |
| translate || true | false || Translate display* properties in metadata output (displayName, displayShortName, displayDescription, and displayFormName for data elements). Default value is true. |
| locale || Locale to use || Translate metadata output using a specified locale (requires translate=true). |
1.6.2 Translation API
The translations for an object are rendered as part of the object itself in the translations array. Note that the translations array in the JSON/XML payloads is normally pre-filtered for you, which means it cannot directly be used to import/export translations (as that would normally overwrite locales other than the current user's).
Example of data element with translation array filtered on user locale:
Example of data element with translations turned off:
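A sketch of a data element rendered with its translations array filtered to the user locale (the identifier is reused from the translations example further down; name and translation values are illustrative). With translate=false, the display* properties simply fall back to the untranslated values:

```json
{
  "id": "FTRrcoaog83",
  "name": "Accute Flaccid Paralysis (Deaths < 5 yrs)",
  "displayName": "Paralysie flasque aiguë (décès < 5 ans)",
  "translations": [
    {
      "property": "NAME",
      "locale": "fr",
      "value": "Paralysie flasque aiguë (décès < 5 ans)"
    }
  ]
}
```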
Note that even if you get the unfiltered result, and are using the appropriate type endpoint, i.e. /api/26/dataElements, we do not allow updates there, as it would be too easy to make mistakes and overwrite the other available locales.
To read and update translations you can use the special translations endpoint for each object resource. These can be accessed by GET or PUT on the appropriate /api/26/<object-type>/<object-id>/translations endpoint. As an example, for a data element with identifier FTRrcoaog83 you could use /api/26/dataElements/FTRrcoaog83/translations to get and update translations. The fields available are property with options NAME, SHORT_NAME, DESCRIPTION, the locale which supports any valid locale ID, and the value itself.
Example of NAME property for French locale:
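A sketch of a single translation entry (the French value is illustrative):

```json
{
  "property": "NAME",
  "locale": "fr",
  "value": "Paralysie flasque aiguë"
}
```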
This payload would then be added to a translation array, and sent back to the appropriate endpoint:
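A sketch of the full payload; note that it carries every translation for the object, not just the one being changed (values illustrative):

```json
{
  "translations": [
    {
      "property": "NAME",
      "locale": "fr",
      "value": "Paralysie flasque aiguë"
    },
    {
      "property": "SHORT_NAME",
      "locale": "fr",
      "value": "PFA"
    }
  ]
}
```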
For a data element with ID FTRrcoaog83 you can PUT this to /api/26/dataElements/FTRrcoaog83/translations. Make sure to send all translations for the specific object and not just those for a single locale (otherwise you may overwrite existing translations for other locales).
1.6.3 Web API versions
The Web API is versioned starting from DHIS 2.25. The API versioning follows the DHIS 2 major version numbering. As an example, the API version for DHIS 2.25 is 25.
You can access a specific API version by including the version number after the /api component, for example: /api/26/dataElements
If you omit the version part of the URL, the system will use the current API version. As an example, for DHIS 2.25, when omitting the API part, the system will use API version 25. When developing API clients it is recommended to use explicit API versions (rather than omitting the API version), as this will protect the client from unforeseen API changes.
The last three API versions will be supported. As an example, DHIS version 2.27 will support API version 27, 26 and 25.
Note that the metadata model is not versioned, and that you might experience changes e.g. in associations between objects. These changes will be documented in the DHIS2 major version release notes.
1.7 Metadata object filter
To filter the metadata there are several filter operations that can be applied to the returned list of metadata. The format of the filter itself is straightforward and follows the pattern property:operator:value, where property is the property on the metadata you want to filter on, operator is the comparison operator you want to perform and value is the value to check against (not all operators require a value). Please see the schema section to discover which properties are available. Recursive filtering, i.e. filtering on associated objects or collections of objects, is supported as well.
| Operator || Types || Value required || Description |
| eq || string | boolean | integer | float | enum | collection (checks for size) | date || true || Equality |
| !eq || string | boolean | integer | float | enum | collection (checks for size) | date || true || Inequality |
| ne || string | boolean | integer | float | enum | collection (checks for size) | date || true || Inequality |
| like || string || true || Case sensitive string, match anywhere |
| !like || string || true || Case sensitive string, not match anywhere |
| $like || string || true || Case sensitive string, match start |
| !$like || string || true || Case sensitive string, not match start |
| like$ || string || true || Case sensitive string, match end |
| !like$ || string || true || Case sensitive string, not match end |
| ilike || string || true || Case insensitive string, match anywhere |
| !ilike || string || true || Case insensitive string, not match anywhere |
| $ilike || string || true || Case insensitive string, match start |
| !$ilike || string || true || Case insensitive string, not match start |
| ilike$ || string || true || Case insensitive string, match end |
| !ilike$ || string || true || Case insensitive string, not match end |
| gt || string | boolean | integer | float | collection (checks for size) | date || true || Greater than |
| ge || string | boolean | integer | float | collection (checks for size) | date || true || Greater than or equal |
| lt || string | boolean | integer | float | collection (checks for size) | date || true || Less than |
| le || string | boolean | integer | float | collection (checks for size) | date || true || Less than or equal |
| null || all || false || Property is null |
| !null || all || false || Property is not null |
| empty || collection || false || Collection is empty |
| token || string || true || Match on multiple tokens in search property |
| !token || string || true || Not match on multiple tokens in search property |
| in || string | boolean | integer | float | date || true || Find objects matching 1 or more values |
| !in || string | boolean | integer | float | date || true || Find objects not matching 1 or more values |
Operators will be applied as a logical AND query; if you need an OR query, have a look at our in filter (also see the section below). The filtering mechanism allows for recursion. See below for some examples.
Get data elements with id property ID1 or ID2: /api/26/dataElements.json?filter=id:in:[ID1,ID2]
Get all data elements which have the dataSet with id ID1: /api/26/dataElements.json?filter=dataSets.id:eq:ID1
Get all data elements with aggregation operator “sum” and value type “int” (the exact property names are listed in the schema): /api/26/dataElements.json?filter=aggregationOperator:eq:sum&filter=valueType:eq:int
You can do filtering within collections, e.g. to get data elements which are members of the “ANC” data element group you can use the following query using the id property of the associated data element groups: /api/26/dataElements.json?filter=dataElementGroups.id:eq:<group-id>
Since all filters are combined with AND by default, you can't find a data element matching more than one id that way; for that purpose you can use the in operator.
1.7.1 Logical operators
As mentioned in the previous section, the default logical operator applied to the filters is AND, which means that all object filters must be matched. There are however cases where you want to match on one of several filters (say, the id and code fields) and in those cases it is possible to switch the root logical operator from AND to OR using the rootJunction parameter.
Example: Normal filtering where both id and code must match to have a result returned: /api/26/dataElements.json?filter=id:eq:ID1&filter=code:eq:CODE1
Example: Filtering where the logical operator has been switched to OR, so only one of the filters must match to have a result returned: /api/26/dataElements.json?rootJunction=OR&filter=id:eq:ID1&filter=code:eq:CODE1
1.8 Metadata field filter
In certain situations the default views of the metadata can be too verbose. A client might only need a few fields from each object and want to remove unnecessary fields from the response. To discover which fields are available for each object please see the schema section.
The format for include/exclude is very simple and allows for infinite recursion. To filter at the “root” level you can just use the name of the field, i.e. ?fields=id,name which would only display the id and name for every object. For objects that are either collections or complex objects with properties of their own you can use the format ?fields=id,name,dataSets[id,name] which would return id and name of the root, and the id and name of every data set on that object. Negation can be done with the exclamation operator, and we have a set of field-selection presets (see below). Both XML and JSON are supported.
Example: Get id and name on the indicators resource: /api/26/indicators?fields=id,name
Example: Get id and name from dataElements, and id and name from the dataSets on dataElements: /api/26/dataElements?fields=id,name,dataSets[id,name]
To exclude a field from the output you can use the exclamation (!) operator. This is allowed anywhere in the query and will simply not include that property (as it might have been inserted in some of the presets).
A few presets (selected fields groups) are available and can be applied using the ‘:’ operator.
| Operator || Description |
| <field-name> || Include property with name, if it exists. |
| <object>[<field-name>, …] || Includes a field within either a collection (will be applied to every object in that collection), or just on a single object. |
| !<field-name>, <object>[!<field-name>] || Do not include this field name; also works inside objects/collections. Useful when you use a preset to include fields. |
| *, <object>[*] || Include all fields on a certain object, if applied to a collection, it will include all fields on all objects on that collection. |
| :<preset> || Alias to select multiple fields. Three presets are currently available, see table below for descriptions. |
| Preset || Description |
| all || All fields of the object |
| * || Alias for all |
| identifiable || Includes id, name, code, created and lastUpdated fields |
| nameable || Includes id, name, shortName, code, description, created and lastUpdated fields |
| persisted || Returns all persisted properties on an object; does not take into consideration whether the object is the owner of the relation. |
| owner || Returns all persisted properties on an object where the object is the owner of the properties; this payload can be used to update through the Web API. |
Example: Include all fields from dataSets except organisationUnits: /api/26/dataSets?fields=*,!organisationUnits
Example: Include only id, name and the collection of organisation units from a data set, but exclude the id from organisation units: /api/26/dataSets?fields=id,name,organisationUnits[:all,!id]
Example: Include nameable properties from all indicators: /api/26/indicators.json?fields=:nameable
1.8.1 Field transformers
In DHIS 2.17 we introduced field transformers; the idea is to allow further customization of the properties on the server side.
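A sketch of transformer syntax applied to the fields parameter (the ~ separator is the transformer convention; ID stands in for a real object identifier):

```
/api/26/dataElements/ID?fields=id~rename(i),name~rename(n)
```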
This will rename the id property to i and name property to n.
Multiple transformers can be applied to the same field by repeating the transformer syntax, e.g. /api/26/dataElementGroups.json?fields=id,displayName,dataElements~isNotEmpty~rename(hasDataElements)
| Name || Arguments || Description |
| size || (none) || Gives sizes of strings (length) and collections |
| isEmpty || (none) || Is string or collection empty |
| isNotEmpty || (none) || Is string or collection not empty |
| rename || Arg1: name || Renames the property |
| paging || Arg1: page, Arg2: pageSize || Pages a collection, default pageSize is 50. |
Examples of transformer usage.
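A few illustrative invocations (identifiers are placeholders):

```
/api/26/dataElements?fields=id,name,dataSets~size
/api/26/dataElements?fields=id,name,dataSets~isEmpty
/api/26/dataElements/ID?fields=id~rename(i),name~rename(n)
/api/26/dataElementGroups?fields=id,displayName,dataElements~paging(1;20)
```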
1.9 Metadata create, read, update, delete, validate
While some of the Web API endpoints already contained support for CRUD (create, read, update, delete), from version 2.15 this is supported on all endpoints. It should work as you expect, and the subsections below give more detailed information about create, update, and delete (read is covered elsewhere, and has been supported for a long time).
1.9.1 Create / update parameters
The following query parameters are available for customizing your request.
| Param || Type || Required || Options (default first) || Description |
| preheatCache || boolean || false || true | false || Turn cache-map preheating on/off. This is on by default, turning this off will make initial load time for importer much shorter (but will make the import itself slower). This is mostly used for cases where you have a small XML/JSON file you want to import, and don’t want to wait for cache-map preheating. |
| strategy || enum || false || CREATE_AND_UPDATE | CREATE | UPDATE | DELETE || Import strategy to use, see below for more information. |
| mergeMode || enum || false || REPLACE, MERGE || Strategy for merging of objects when doing updates. REPLACE will just overwrite the property with the new value provided, MERGE will only set the property if it is not null (only if the property was provided). |
1.9.2 Creating and updating objects
For creating new objects you will need to know the endpoint and the type format, and make sure that you have the required authorities. As an example, we will create and update a constant. To figure out the format, we can use the schema endpoint to get a format description. So we will start by getting that info: /api/26/schemas/constant.json
From the output, you can see that the required authorities for create are F_CONSTANT_ADD, and the important properties are: name and value. From this we can create a JSON payload and save it as a file called constant.json:
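A sketch of such a payload (values illustrative; name and value are the required properties identified above):

```json
{
  "name": "Pi",
  "shortName": "Pi",
  "value": 3.14159265359
}
```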
The same content as an XML payload:
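The same content as an XML sketch, using the DXF 2 namespace:

```xml
<constant xmlns="http://dhis2.org/schema/dxf/2.0"
  name="Pi" shortName="Pi" value="3.14159265359"/>
```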
We are now ready to create the new constant by sending a POST request to the constants endpoint with the JSON payload using curl:
A specific example of posting the constant to the demo server:
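A sketch of the curl invocation against the public demo server (requires network access; admin/district are the demo credentials):

```
curl -X POST -u admin:district \
  -H "Content-Type: application/json" \
  -d @constant.json "https://play.dhis2.org/demo/api/26/constants"
```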
If everything went well, you should see an output similar to:
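A sketch of a successful response, using the WebMessage fields described earlier (the generated uid will differ):

```json
{
  "httpStatus": "Created",
  "httpStatusCode": 201,
  "status": "OK",
  "response": {
    "responseType": "ObjectReport",
    "uid": "<generated-uid>"
  }
}
```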
The process is exactly the same for updating: make your changes to the JSON/XML payload, find the ID of the constant, and then send a PUT request to the endpoint including the ID, e.g. /api/26/constants/<constant-id>.
1.9.3 Deleting objects
Deleting objects is very straightforward: you need to know the ID and the endpoint of the type you want to delete. Let's continue our example from the last section and use a constant. Assuming the id is abc123, all you need to do is send a DELETE request to the endpoint plus the id, e.g. DELETE /api/26/constants/abc123.
A successful delete should return HTTP status 204 (no content).
1.9.4 Adding and removing objects in collections
The collections resource lets you modify collections of objects.
1.9.4.1 Adding or removing single objects
In order to add or remove objects to or from a collection of objects you can use the following pattern: /api/26/{collection-object}/{collection-object-id}/{collection-name}/{object-id}
You should use the POST method to add, and the DELETE method to remove an object. When there is a many-to-many relationship between objects, you must first determine which object owns the relationship. If it isn’t clear which object this is, try the call both ways to see which works.
The components of the pattern are:
collection object: The type of objects that owns the collection you want to modify.
collection object id: The identifier of the object that owns the collection you want to modify.
collection name: The name of the collection you want to modify.
object id: The identifier of the object you want to add or remove from the collection.
As an example, in order to remove a data element with identifier IDB from a data element group with identifier IDA you can send a DELETE request to /api/26/dataElementGroups/IDA/dataElements/IDB.
To add a category option with identifier IDB to a category with identifier IDA you can send a POST request to /api/26/categories/IDA/categoryOptions/IDB.
1.9.4.2 Adding or removing multiple objects
You can add or remove multiple objects from a collection in one request with a payload like this:
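A sketch of the payload (IDA, IDB and IDC stand in for real object identifiers):

```json
{
  "identifiableObjects": [
    { "id": "IDA" },
    { "id": "IDB" },
    { "id": "IDC" }
  ]
}
```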
Using this payload you can add, replace or delete items:
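Using a category's categoryOptions collection as an example, the HTTP verb decides the operation (this mapping is a sketch of the convention; check the documentation of your version):

```
POST   /api/26/categories/<category-id>/categoryOptions   (add the items to the collection)
PUT    /api/26/categories/<category-id>/categoryOptions   (replace the collection with the items)
DELETE /api/26/categories/<category-id>/categoryOptions   (remove the items from the collection)
```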
1.9.4.3 Adding and removing objects in a single request
You can both add and remove objects from a collection in a single POST request with the following type of payload:
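A sketch of the combined payload (identifiers are placeholders):

```json
{
  "additions": [
    { "id": "IDA" },
    { "id": "IDB" }
  ],
  "deletions": [
    { "id": "IDC" }
  ]
}
```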
1.9.5 Validating payloads
System-wide validation of metadata payloads is enabled from the 2.19 release, which means that create/update operations on the Web API endpoints will be checked for a valid payload before changes are allowed to be made. To find out which validations are in place for an endpoint, please have a look at the /api/schemas endpoint; i.e. to figure out which constraints a data element has, you would go to /api/schemas/dataElement.
You can also validate your payload manually by sending it to the proper schema endpoint. If you wanted to validate the constant from the create section before, you would POST it to /api/26/schemas/constant:
A simple (non-validating) example would be:
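For instance, posting a constant that is missing its required value property to the schema endpoint (payload illustrative):

```json
{
  "name": "Pi"
}
```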
Which would yield the result:
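The exact shape of the validation report varies between DHIS2 versions; a typical result looks something like:

```json
[
  {
    "message": "Missing required property `value`.",
    "errorCode": "E4000",
    "errorProperty": "value"
  }
]
```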
1.9.6 Partial updates
For cases where you don't want or need to update all properties on an object (which would mean downloading a potentially huge payload, changing one property, then uploading again) we now support partial updates of one or more properties.
The payload for doing partial updates is the same as when you are doing a full update; the only difference is that you only include the properties you want to update, i.e.:
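For example, to change only the name and the zeroIsSignificant flag of a data element, a payload like this could be sent with the PATCH method to /api/26/dataElements/<id> (property names illustrative; check the schema of the type you are updating):

```json
{
  "name": "New name",
  "zeroIsSignificant": true
}
```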
1.10 Metadata export
This section explains the metadata API which is available at the /api/23/metadata and /api/26/metadata endpoints. XML and JSON resource representations are supported.
The most common parameters are described below in the “Export Parameter” table. You can also apply field and object filters to all available types by using type:fields=<filter> and type:filter=<filter>. You can also enable/disable export of certain types by setting type=true/false.
| Name || Options || Description |
| fields || Same as metadata field filter || Default field filter to apply for all types, default is :owner. |
| filter || Same as metadata object filter || Default object filter to apply for all types, default is none. |
| order || Same as metadata order || Default order to apply to all types, default is name if available, or created if not. |
| translate || false/true || Enable translations. Be aware that this is turned off by default (in other endpoints this is on by default). |
| locale || <locale> || Change from user locale, to your own custom locale. |
| defaults || INCLUDE/EXCLUDE || Should auto-generated category objects be included in the payload or not. If you are moving metadata between 2 non-synced instances, it might make sense to set this to EXCLUDE to ease the handling of these generated objects. |
| skipSharing || false/true || Enabling this will strip the sharing properties from the exported objects. This includes user, publicAccess, userGroupAccesses, userAccesses, and externalAccess. |
1.10.1 Metadata export examples
Export all metadata: /api/26/metadata.json
Export all metadata ordered by lastUpdated descending: /api/26/metadata.json?order=lastUpdated:desc
Export id and displayName for all data elements, ordered by displayName: /api/26/metadata.json?dataElements:fields=id,displayName&dataElements:order=displayName:asc
Export data elements and indicators where name starts with “ANC”: /api/26/metadata.json?dataElements:filter=name:$like:ANC&indicators:filter=name:$like:ANC
1.10.2 Metadata export with dependencies
When you want to move a whole set of data set, program or category combo metadata from one server to another (possibly empty) server, we have three special endpoints for just that purpose:
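The three endpoints follow the same pattern (identifiers are placeholders):

```
/api/26/dataSets/<dataset-id>/metadata.json
/api/26/programs/<program-id>/metadata.json
/api/26/categoryCombos/<category-combo-id>/metadata.json
```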
These exports can then be imported using /api/26/metadata.
1.11 Metadata import
This section explains the metadata API which is available at the /api/23/metadata and /api/26/metadata endpoints. XML and JSON resource representations are supported.
The importer allows you to import metadata exported with the new exporter. The various parameters are listed below.
| Name || Options (first is default) || Description |
| importMode || COMMIT, VALIDATE || Sets overall import mode, decides whether or not to only VALIDATE or also COMMIT the metadata, this has similar functionality as our old dryRun flag. |
| identifier || UID, CODE, AUTO || Sets the identifier scheme to use for reference matching. AUTO means try UID first, then CODE. |
| importReportMode || ERRORS, FULL, DEBUG || Sets the ImportReport mode, which controls how much is reported back after the import is done. ERRORS only includes ObjectReports for objects which have errors. FULL returns an ObjectReport for all objects imported, and DEBUG returns the same plus a name for the object (if available). |
| preheatMode || REFERENCE, ALL, NONE || Sets the preheater mode, used to signal whether preheating should be done for ALL objects (as before with preheatCache=true) or whether a more intelligent scan of the objects should decide what to preheat (now the default). Setting this to NONE is not recommended. |
| importStrategy || CREATE_AND_UPDATE, CREATE, UPDATE, DELETE || Sets the import strategy. CREATE_AND_UPDATE will try to match on identifier; if no match exists, it will create the object. |
| atomicMode || ALL, NONE || Sets the atomic mode. The old importer always did a best-effort import, meaning that even if some references did not exist we would still import (e.g. missing data elements on a data element group import). The default for the new importer is to not allow this, and it similarly rejects any validation errors. Setting the NONE mode emulates the old behavior. |
| mergeMode || MERGE, REPLACE || Sets the merge mode. When doing updates there are two ways of merging the old object with the new one: MERGE mode will only overwrite an old property if the new one is not null, while in REPLACE mode all properties are overwritten regardless. |
| flushMode || AUTO, OBJECT || Sets the flush mode, which controls when to flush the internal cache. It is strongly recommended to keep this at AUTO (the default). Only use OBJECT for debugging purposes, when you are seeing Hibernate exceptions and want to pinpoint exactly where the exception happens (Hibernate will only throw when flushing, so it can be hard to know which object had issues). |
| skipSharing || false, true || Skip sharing properties, does not merge sharing when doing updates, and does not add user group access when creating new objects. |
| skipValidation || false, true || Skip validation for import. NOT RECOMMENDED. |
| async || false, true || Asynchronous import; returns immediately with a Location header pointing to the location of the importReport. The payload also contains a JSON object describing the job created. |
| inclusionStrategy || NON_NULL, ALWAYS, NON_EMPTY || NON_NULL includes properties which are not null, ALWAYS includes all properties, and NON_EMPTY includes non-empty properties (will not include strings of length 0, collections of size 0, etc.). |
| userOverrideMode || NONE, CURRENT, SELECTED || Allows you to override the user property of every object you are importing. The options are NONE (do nothing), CURRENT (use the importing user) and SELECTED (select a specific user using overrideUser=X). |
| overrideUser || User ID || If userOverrideMode is SELECTED, use this parameter to select the user you want override with. |
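Putting some of these parameters together, a validate-only import request might look like the following sketch (the parameter choices are illustrative; the request body would carry the exported metadata payload):

```
POST /api/26/metadata?importMode=VALIDATE&atomicMode=ALL&importReportMode=ERRORS
Content-Type: application/json
```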
1.12 Metadata audit
If you need information about who created, edited, or deleted DHIS2 metadata objects you can enable metadata audit. There are two configuration options (dhis.conf) you can enable to support this:
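In dhis.conf the two options are commonly written as follows (property names as found in recent DHIS2 releases; verify against your version):

```
# Log metadata audit events to the servlet container log
metadata.audit.log = on

# Persist metadata audit events to the metadataaudit database table
metadata.audit.persist = on
```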
This enables additional log output in your servlet container (e.g. Tomcat's catalina.log) which contains full information about the object created, edited or deleted, including the full JSON payload, the date of the audit event, and the user who performed the action.
This enables persisted audits, i.e. audits saved to the database. The information stored is the same as with the audit log; however, it is placed in the metadataaudit table in the database.
We do not recommend enabling these options on an empty database if you intend to bootstrap your system, as it slows down the import and the audit might not be that useful.
1.12.1 Metadata audit query
If you have enabled persisted metadata audits on your DHIS2 instance, you can access metadata audits at the following endpoint:
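Assuming the metadataAudits resource name (verify against your DHIS2 version), the endpoint looks like:

```
GET /api/metadataAudits.json
```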
The endpoint supports the following query parameters:
| Name || Values || Description |
| uid || <uid> || Object uid to query by (can be more than one) |
| code || <code> || Object code to query by (can be more than one) |
| klass || <class name> || Object class to query by (can be more than one); note that the full Java package name is required here (to avoid name collisions) |
| createdAt || <date> || Query by creation date |
| createdBy || <username> || Query by who made the change (username) |
| type || CREATE, UPDATE, DELETE || Query by audit type |
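Combining these parameters, a query for all UPDATE audits on a data element might look like the sketch below (the uid is a placeholder; note the full Java class name required for klass):

```
GET /api/metadataAudits.json?uid=<object-uid>&klass=org.hisp.dhis.dataelement.DataElement&type=UPDATE
```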
1.13 Render type (Experimental)
Some metadata types have a property named renderType. The render type property is a map between a device and a renderingType. Applications can use this information as a hint on how the object should be rendered on a specific device. For example, a mobile device might want to render a data element differently than a desktop computer.
There are currently two different kinds of renderingTypes available:
Value type rendering
Program stage section rendering
There are also two device types available:
The following table lists the metadata and rendering types available. The value type rendering has additional constraints based on the metadata configuration, which will be shown in a second table.
| Metadata type || Available RenderingTypes |
| Program Stage Section |
| Data element |