4.2.3. Deployment configuration

See also

The architecture of the software solution, the sizing elements and the deployment principles are defined in the DAT.

4.2.3.1. Deployment files

The deployment files are provided with the delivered VITAM release, in the deployment subdirectory. For installation purposes, they fall into two parts:

  • the Ansible deployment playbooks, located in the ansible-vitam subdirectory, which is independent of the environment to be deployed; these files normally do not need to be modified to perform an installation.
  • the inventory tree; sample files are available in the environments subdirectory. This tree applies to the deployment of one environment and must be duplicated when installing additional environments. The files in this tree must be adapted before deployment, as explained in the following paragraphs.

4.2.3.2. Platform information

4.2.3.2.1. Inventory

To configure the deployment, a new inventory file must be created in the environments directory (hereafter commonly referred to as hosts.<environnement>). This file must follow the structure found in the hosts.example file (and in particular strictly respect the Ansible group tree). The comments in this file provide the explanations needed to adapt it to the target environment:

# Group definition ; DO NOT MODIFY
[hosts]

# Group definition ; DO NOT MODIFY
[hosts:children]
vitam
prometheus
grafana
reverse
hosts_dev_tools
ldap


########### Tests environments specifics ###########

# EXTRA : Front reverse-proxy (test environments ONLY) ; add machine name after
[reverse]
# optional : after machine, if this machine is different from VITAM machines, you can specify another become user
# Example
# vitam-centos-01.vitam ansible_ssh_user=centos

########### Extra VITAM applications ###########
[prometheus:children]
hosts_prometheus
hosts_alertmanager

[hosts_prometheus]
# TODO: Put here server where this service will be deployed : prometheus server

[hosts_alertmanager]
# TODO: Put here servers where this service will be deployed : alertmanager

[grafana]
# TODO: Put here servers where this service will be deployed : grafana

[ldap] # Extra : OpenLDAP server
# LDAP server !!! NOT FOR PRODUCTION !!! Test only

[library]
# TODO: Put here servers where this service will be deployed : library

[hosts_dev_tools]
# TODO: Put here servers where this service will be deployed : mongo-express, elasticsearch-head

[elasticsearch:children] # EXTRA : elasticsearch
hosts_elasticsearch_data
hosts_elasticsearch_log

########### VITAM services ###########

# Group definition ; DO NOT MODIFY
[vitam:children]
zone_external
zone_access
zone_applicative
zone_storage
zone_data
zone_admin
library


##### Zone externe


[zone_external:children]
hosts_ihm_demo
hosts_ihm_recette


[hosts_ihm_demo]
# TODO: Put here servers where this service will be deployed : ihm-demo. If you own another frontend, it is recommended to leave this group blank
# If you don't need consul for ihm-demo, you can set this var after each hostname :
# consul_disabled=true

[hosts_ihm_recette]
# TODO: Put here servers where this service will be deployed : ihm-recette (extra feature)


##### Zone access

# Group definition ; DO NOT MODIFY
[zone_access:children]
hosts_ingest_external
hosts_access_external

[hosts_ingest_external]
# TODO: Put here servers where this service will be deployed : ingest-external


[hosts_access_external]
# TODO: Put here servers where this service will be deployed : access-external


##### Zone applicative

# Group definition ; DO NOT MODIFY
[zone_applicative:children]
hosts_ingest_internal
hosts_processing
hosts_batch_report
hosts_worker
hosts_access_internal
hosts_metadata
hosts_functional_administration
hosts_logbook
hosts_workspace
hosts_storage_engine
hosts_security_internal

[hosts_security_internal]
# TODO: Put here servers where this service will be deployed : security-internal


[hosts_logbook]
# TODO: Put here servers where this service will be deployed : logbook


[hosts_workspace]
# TODO: Put the server where this service will be deployed : workspace
# WARNING: put only one server for this service, not more !


[hosts_ingest_internal]
# TODO: Put here servers where this service will be deployed : ingest-internal


[hosts_access_internal]
# TODO: Put here servers where this service will be deployed : access-internal


[hosts_metadata]
# TODO: Put here servers where this service will be deployed : metadata


[hosts_functional_administration]
# TODO: Put here servers where this service will be deployed : functional-administration


[hosts_processing]
# TODO: Put the server where this service will be deployed : processing
# WARNING: put only one server for this service, not more !


[hosts_storage_engine]
# TODO: Put here servers where this service will be deployed : storage-engine

[hosts_batch_report]
# TODO: Put here servers where this service will be deployed : batch-report

[hosts_worker]
# TODO: Put here servers where this service will be deployed : worker
# Optional parameter after each host : vitam_worker_capacity=<integer> ; please refer to your infrastructure for defining this number ; default is ansible_processor_vcpus value (cpu number in /proc/cpuinfo file)


##### Zone storage

[zone_storage:children] # DO NOT MODIFY
hosts_storage_offer_default
hosts_mongodb_offer

[hosts_storage_offer_default]
# TODO: Put here servers where this service will be deployed : storage-offer-default
# LIMIT : only 1 offer per machine 
# LIMIT and 1 machine per offer when filesystem or filesystem-hash provider
# Possibility to declare multiple machines with same provider only when provider is s3 or swift.
# Mandatory param for each offer is offer_conf and points to offer_opts.yml & vault-vitam.yml (with same tree)
# for swift
# hostname-offre-1.vitam offer_conf=offer-swift-1
# hostname-offre-2.vitam offer_conf=offer-swift-1
# for filesystem
# hostname-offre-2.vitam offer_conf=offer-fs-1
# for s3
# hostname-offre-3.vitam offer_conf=offer-s3-1
# hostname-offre-4.vitam offer_conf=offer-s3-1

[hosts_mongodb_offer:children]
hosts_mongos_offer
hosts_mongoc_offer
hosts_mongod_offer

[hosts_mongos_offer]
# WARNING : DO NOT COLLOCATE WITH [hosts_mongos_data]
# TODO: put here servers where this service will be deployed : mongos cluster for storage offers
# Mandatory param : mongo_cluster_name : name of the cluster (should exist in the offer_conf configuration)
# The recommended practice is to install the mongos instance on the same servers as the mongoc instances
# Example (for a more complete one, see the one in the group hosts_mongos_data) :
# vitam-mongo-swift-offer-01   mongo_cluster_name=offer-swift-1
# vitam-mongo-swift-offer-02   mongo_cluster_name=offer-swift-1
# vitam-mongo-fs-offer-01      mongo_cluster_name=offer-fs-1
# vitam-mongo-fs-offer-02      mongo_cluster_name=offer-fs-1
# vitam-mongo-s3-offer-01      mongo_cluster_name=offer-s3-1
# vitam-mongo-s3-offer-02      mongo_cluster_name=offer-s3-1

[hosts_mongoc_offer]
# WARNING : DO NOT COLLOCATE WITH [hosts_mongoc_data]
# TODO: put here servers where this service will be deployed : mongoc cluster for storage offers
# Mandatory param : mongo_cluster_name : name of the cluster (should exist in the offer_conf configuration)
# Optional param : mandatory for 1 node of the shard, some init commands will be executed on it
# Optional param : mongo_arbiter=true : the node will be only an arbiter ; do not add this paramter on a mongo_rs_bootstrap node
# Recommended practice in production: use 3 instances
# Example :
# vitam-mongo-swift-offer-01   mongo_cluster_name=offer-swift-1                       mongo_rs_bootstrap=true
# vitam-mongo-swift-offer-02   mongo_cluster_name=offer-swift-1
# vitam-swift-offer            mongo_cluster_name=offer-swift-1                       mongo_arbiter=true
# vitam-mongo-fs-offer-01      mongo_cluster_name=offer-fs-1                          mongo_rs_bootstrap=true
# vitam-mongo-fs-offer-02      mongo_cluster_name=offer-fs-1
# vitam-fs-offer               mongo_cluster_name=offer-fs-1                          mongo_arbiter=true
# vitam-mongo-s3-offer-01      mongo_cluster_name=offer-s3-1                       mongo_rs_bootstrap=true
# vitam-mongo-s3-offer-02      mongo_cluster_name=offer-s3-1
# vitam-s3-offer               mongo_cluster_name=offer-s3-1                       mongo_arbiter=true

[hosts_mongod_offer]
# WARNING : DO NOT COLLOCATE WITH [hosts_mongod_data]
# TODO: put here servers where this service will be deployed : mongod cluster for storage offers
# Mandatory param : mongo_cluster_name : name of the cluster (should exist in the offer_conf configuration)
# Mandatory param : id of the current shard, increment by 1 from 0 to n
# Optional param : mandatory for 1 node of the shard, some init commands will be executed on it
# Optional param : mongo_arbiter=true : the node will be only an arbiter ; do not add this paramter on a mongo_rs_bootstrap node
# Optional param : mongod_memory=x ; this will force the wiredtiger cache size to x (unit is GB) ; can be usefull when colocalization with elasticsearch
# Optional param : is_small=true ; this will force the priority for this server to be lower when electing master ; hardware can be downgraded for this machine
# Recommended practice in production: use 3 instances per shard
# Example :
# vitam-mongo-swift-offer-01   mongo_cluster_name=offer-swift-1    mongo_shard_id=0                   mongo_rs_bootstrap=true
# vitam-mongo-swift-offer-02   mongo_cluster_name=offer-swift-1    mongo_shard_id=0
# vitam-swift-offer            mongo_cluster_name=offer-swift-1    mongo_shard_id=0                   mongo_arbiter=true
# vitam-mongo-fs-offer-01      mongo_cluster_name=offer-fs-1       mongo_shard_id=0                   mongo_rs_bootstrap=true
# vitam-mongo-fs-offer-02      mongo_cluster_name=offer-fs-1       mongo_shard_id=0
# vitam-fs-offer               mongo_cluster_name=offer-fs-1       mongo_shard_id=0                   mongo_arbiter=true
# vitam-mongo-s3-offer-01      mongo_cluster_name=offer-s3-1       mongo_shard_id=0                   mongo_rs_bootstrap=true
# vitam-mongo-s3-offer-02      mongo_cluster_name=offer-s3-1       mongo_shard_id=0                   is_small=true # PSsmin, this machine needs less hardware
# vitam-s3-offer               mongo_cluster_name=offer-s3-1       mongo_shard_id=0                   mongo_arbiter=true

##### Zone data

# Group definition ; DO NOT MODIFY
[zone_data:children]
hosts_elasticsearch_data
hosts_mongodb_data

[hosts_elasticsearch_data]
# TODO: Put here servers where this service will be deployed : elasticsearch-data cluster
# 2 params available for huge environments (parameter to be declared after each server) :
#    is_data=true/false
#    is_master=true/false
#    for site/room balancing : is_balancing=<whatever> so replica can be applied on all sites/rooms ; default is vitam_site_name
#    other options are not handled yet
# defaults are set to true, if undefined. If defined, at least one server MUST be is_data=true
# Examples :
# server1 is_master=true is_data=false
# server2 is_master=false is_data=true
# More explanation here : https://www.elastic.co/guide/en/elasticsearch/reference/5.6/modules-node.html


# Group definition ; DO NOT MODIFY
[hosts_mongodb_data:children]
hosts_mongos_data
hosts_mongoc_data
hosts_mongod_data

[hosts_mongos_data]
# WARNING : DO NOT COLLOCATE WITH [hosts_mongos_offer]
# TODO: Put here servers where this service will be deployed : mongos cluster
# Mandatory param : mongo_cluster_name=mongo-data  ("mongo-data" is mandatory)
# The recommended practice is to install the mongos instance on the same servers as the mongoc instances
# Example :
# vitam-mdbs-01   mongo_cluster_name=mongo-data
# vitam-mdbs-02   mongo_cluster_name=mongo-data
# vitam-mdbs-03   mongo_cluster_name=mongo-data

[hosts_mongoc_data]
# WARNING : DO NOT COLLOCATE WITH [hosts_mongoc_offer]
# TODO: Put here servers where this service will be deployed : mongoc cluster
# Mandatory param : mongo_cluster_name=mongo-data  ("mongo-data" is mandatory)
# Optional param : mandatory for 1 node of the shard, some init commands will be executed on it
# Recommended practice in production: use 3 instances
# Example :
# vitam-mdbc-01   mongo_cluster_name=mongo-data                     mongo_rs_bootstrap=true
# vitam-mdbc-02   mongo_cluster_name=mongo-data
# vitam-mdbc-03   mongo_cluster_name=mongo-data

[hosts_mongod_data]
# WARNING : DO NOT COLLOCATE WITH [hosts_mongod_offer]
# TODO: Put here servers where this service will be deployed : mongod cluster
# Each replica_set should have an odd number of members (2n + 1)
# Reminder: For Vitam, one mongodb shard is using one replica_set
# Mandatory param : mongo_cluster_name=mongo-data ("mongo-data" is mandatory)
# Mandatory param : id of the current shard, increment by 1 from 0 to n
# Optional param : mandatory for 1 node of the shard, some init commands will be executed on it
# Optional param : mongod_memory=x ; this will force the wiredtiger cache size to x (unit is GB) ; can be usefull when colocalization with elasticsearch
# Recommended practice in production: use 3 instances per shard
# Example:
# vitam-mdbd-01  mongo_cluster_name=mongo-data   mongo_shard_id=0  mongo_rs_bootstrap=true
# vitam-mdbd-02  mongo_cluster_name=mongo-data   mongo_shard_id=0
# vitam-mdbd-03  mongo_cluster_name=mongo-data   mongo_shard_id=0
# vitam-mdbd-04  mongo_cluster_name=mongo-data   mongo_shard_id=1  mongo_rs_bootstrap=true
# vitam-mdbd-05  mongo_cluster_name=mongo-data   mongo_shard_id=1
# vitam-mdbd-06  mongo_cluster_name=mongo-data   mongo_shard_id=1

###### Zone admin

# Group definition ; DO NOT MODIFY
[zone_admin:children]
hosts_cerebro
hosts_consul_server
hosts_kibana_data
log_servers
hosts_elasticsearch_log

[hosts_cerebro]
# TODO: Put here servers where this service will be deployed : vitam-elasticsearch-cerebro

[hosts_consul_server]
# TODO: Put here servers where this service will be deployed : consul

[hosts_kibana_data]
# TODO: Put here servers where this service will be deployed : kibana (for data cluster)

[log_servers:children]
hosts_kibana_log
hosts_logstash


[hosts_kibana_log]
# TODO: Put here servers where this service will be deployed : kibana (for log cluster)

[hosts_logstash]
# TODO: Put here servers where this service will be deployed : logstash
# IF you connect VITAM to external SIEM, DO NOT FILL THE SECTION


[hosts_elasticsearch_log]
# TODO: Put here servers where this service will be deployed : elasticsearch-log cluster
# IF you connect VITAM to external SIEM, DO NOT FILL THE SECTION

########### Global vars ###########

[hosts:vars]

# ===============================
# VITAM
# ===============================

# Declare user for ansible on target machines
ansible_ssh_user=
# Can target user become as root ? ; true is required by VITAM (usage of a sudoer is mandatory)
ansible_become=true
# How can ansible switch to root ?
# See https://docs.ansible.com/ansible/latest/user_guide/become.html

# Related to Consul ; apply in a table your DNS server(s)
# Example : dns_servers=["8.8.8.8","8.8.4.4"]
# If no recursors, use : dns_servers=
dns_servers=

### Logback configuration ###
# Days before deleting logback log files (java & access logs for vitam components)
days_to_delete_logback_logfiles=

# Define local Consul datacenter name
# CAUTION !!! Only alphanumeric characters when using s3 as offer backend !!!
vitam_site_name=prod-dc1
# On offer, value is the prefix for all containers' names. If upgrading from R8, you MUST UNCOMMENT this parameter AS IS !!!
#vitam_prefix_offer=""
# EXAMPLE : vitam_site_name = prod-dc1
# check whether on primary site (true) or secondary (false)
primary_site=true


# ===============================
# EXTRA
# ===============================
# Environment (defines title in extra on reverse homepage). Variable is DEPRECATED !
#environnement=

### vitam-itest repository ###
vitam_tests_branch=master
vitam_tests_gitrepo_protocol=
vitam_tests_gitrepo_baseurl=
vitam_tests_gitrepo_url=

# Used when VITAM is behind a reverse proxy (provides configuration for reverse proxy && displayed in header page)
vitam_reverse_external_dns=
# For reverse proxy use
reverse_proxy_port=443
vitam_reverse_external_protocol=https
# http_proxy env var to use ; has to be declared even if empty
http_proxy_environnement=

For each host type, list the server(s) assigned to each function. Components may be colocated (see the relevant paragraph of the DAT).

Note

For the hosts_consul_server group, at least 3 machines must be declared.

Warning

The MongoDB data and offer clusters cannot be colocated.

Warning

kibana-data and kibana-log cannot be colocated.

Note

For components that the operator considers to be "outside VITAM" (typically the ihm-demo component), the creation of the associated Consul service can be disabled. To do so, add the following directive after each hostname concerned: consul_disabled=true.

Caution

For the value of vitam_site_name, only alphanumeric characters and the hyphen ("-") are allowed.

Note

The "storage-offer-default" component can be instantiated multiple times when the provider is of object type (s3, swift). In that case, add offer_conf=<name> after each host, as illustrated below.
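
As an illustration, here is a hedged inventory excerpt (hostnames are hypothetical) declaring two instances of the same s3 offer; as stated in the hosts.example comments, multi-instantiation is only possible for s3 or swift providers:

[hosts_storage_offer_default]
# Two machines serving the same s3 offer (only allowed when the provider is s3 or swift)
offer-s3-01.vitam    offer_conf=offer-s3-1
offer-s3-02.vitam    offer_conf=offer-s3-1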

4.2.3.2.2. The vitam_security.yml file

Access rights to VITAM are configured in the environments/group_vars/all/vitam_security.yml file, as follows:

---

hide_passwords_during_deploy: true

### Admin context name and tenants ###
admin_context_name: "admin-context"
admin_context_tenants: "{{ vitam_tenant_ids }}"
# Indicate context certificates relative paths under {{ inventory_dir }}/certs/client-external/clients
# vitam-admin-int is mandatory for internal use (PRONOM upload)
admin_context_certs: [ "ihm-demo/ihm-demo.crt", "ihm-recette/ihm-recette.crt", "reverse/reverse.crt", "vitam-admin-int/vitam-admin-int.crt" ]
# Indicate here all the personal certificates relative paths under {{ inventory_dir }}/certs/client-vitam-users/clients
admin_personal_certs: [ "userOK.crt" ]

# Admin security profile name
admin_security_profile: "admin-security-profile"

admin_basic_auth_user: "adminUser"

# SElinux state, can be: enforcing, permissive, disabled
selinux_state: "disabled"
# SELinux Policy, can be: targeted, minimum, mls
selinux_policy: "targeted"
# If needed, reboot the VM to enable SELinux
selinux_reboot: True
# Relabel the entire filesystem ?
selinux_relabel: False

Note

For the admin_context_certs directive, which covers the integration of SIA certificates at deployment time, refer to the section on integrating an external (client) application.

Note

For the admin_personal_certs directive, which covers the integration of personal (personae) certificates at deployment time, refer to the section on integrating a personal (personae) certificate.

4.2.3.2.3. The offers_opts.yml file

Hint

This file must be created from offers_opts.yml.example and adjusted as needed.

The configuration of the associated storage offers is declared in the environments/group_vars/all/offers_opts.yml file:

# This is the default vitam strategy ('default'). It is mandatory and must define a referent offer.
# This list of offers is ordered. It can and has to be completed if more offers are necessary
# Strategy order (1st has to be the preferred one)
vitam_strategy:
  - name: offer-fs-1
    referent: true
#    status: ACTIVE # status : enable (value=ACTIVE, default value) or disable (value=INACTIVE) this offer
#    vitam_site_name: prod-dc2 # optional, should be related to vitam_site_name if local ; remote vitam_site_name if distant
#  - name: offer-swift-1
# Example distant:
#  - name: distant
#    referent: true
#    status: INACTIVE
#    vitam_site_name: distant-dc2
#    distant: true # Only add this parameter when distant offer (not on same platform)

# WARNING : multi-strategy is a BETA functionality
# More strategies can be added but are optional
# Strategy name must only use [a-z][a-z0-9-]* pattern
# Any strategy must contain at least one offer
# This list of offers is ordered. It can and has to be completed if more offers are necessary
# Every strategy can define at most one referent offer.
# other_strategies:
#  metadata:
#    - name: offer-fs-1
#      referent: true
#    - name: offer-fs-2
#      referent: false
#  binary:
#    - name: offer-fs-2
#      referent: false
#    - name: offer-s3-1
#      referent: false

# DON'T forget to add associated passwords in vault-vitam.yml with same tree when using provider openstack-swift*
# ATTENTION !!! Each offer has to have a distinct name, except for clusters binding a same physical storage
# WARNING : for offer names, please only use [a-z][a-z0-9-]* pattern
vitam_offers:
  offer-tape-1:
    provider: tape-library
    tapeLibraryConfiguration:
      maxTarEntrySize: 100000
      maxTarFileSize: 1000000
      # Enable overriding non empty cartridges
      # WARNING : FOR DEV/TEST ONLY. DO NOT ENABLE IN PRODUCTION.
      forceOverrideNonEmptyCartridges: false
      # Archive (Tar) file expire time for retention in local FS
      archiveRetentionCacheTimeoutInMinutes: 30

      useSudo: false
    topology:
      buckets:
        -
          name: test
          tenants: [0]
          tarBufferingTimeoutInMinutes: 60
        -
          name: admin
          tenants: [1]
          tarBufferingTimeoutInMinutes: 60
        -
          name: prod
          tenants: [2,3,4,5,6,7,8,9]
          tarBufferingTimeoutInMinutes: 60
    tapeLibraries:
      -
        name: TAPE_LIB_1
        robots:
          -
            device: /dev/tape/by-id/scsi-1QUANTUM_10F73224E6664C84A1D00000
            mtxPath: "/usr/sbin/mtx"
            timeoutInMilliseconds: 3600000
        drives:
          -
            index: 0
            device: /dev/tape/by-id/scsi-1IBM_ULT3580-TD6_1235308739-nst
            mtPath: "/bin/mt"
            ddPath: "/bin/dd"
            tarPath: "/bin/tar"
            timeoutInMilliseconds: 3600000
            readWritePriority: BACKUP
          -
            index: 1
            device: /dev/tape/by-id/scsi-1IBM_ULT3580-TD6_0951859786-nst
            mtPath: "/bin/mt"
            ddPath: "/bin/dd"
            tarPath: "/bin/tar"
            timeoutInMilliseconds: 3600000
            readWritePriority: READ
          -
            index: 2
            device: /dev/tape/by-id/scsi-1IBM_ULT3580-TD6_0269493808-nst
            mtPath: "/bin/mt"
            ddPath: "/bin/dd"
            tarPath: "/bin/tar"
            timeoutInMilliseconds: 3600000
          -
            index: 3
            device: /dev/tape/by-id/scsi-1IBM_ULT3580-TD6_0566471858-nst
            mtPath: "/bin/mt"
            ddPath: "/bin/dd"
            tarPath: "/bin/tar"
            readWritePriority: READ
            timeoutInMilliseconds: 3600000
    offer_log_compaction:
      ## Expiration, here offer logs 21 days old will be compacted
      expiration_value: 21
      ## Choose one of "MILLENNIA", "HALF_DAYS", "MILLIS", "FOREVER", "MICROS", "CENTURIES", "DECADES", "YEARS", "DAYS", "SECONDS", "HOURS", "MONTHS", "WEEKS", "NANOS", "MINUTES", "ERAS"
      expiration_unit: "DAYS"
      ## Compaction bulk size here 10 000 offers logs (at most) will be compacted (Expected value between 1 000 and 200 000)
      compaction_size: 10000
  offer-fs-1:
    # param can be filesystem-hash (recomended) or filesystem (not recomended)
    provider: filesystem-hash
    # Offer log compaction
    offer_log_compaction:
      ## Expiration, here offer logs 21 days old will be compacted
      expiration_value: 21
      ## Choose one of "MILLENNIA", "HALF_DAYS", "MILLIS", "FOREVER", "MICROS", "CENTURIES", "DECADES", "YEARS", "DAYS", "SECONDS", "HOURS", "MONTHS", "WEEKS", "NANOS", "MINUTES", "ERAS"
      expiration_unit: "DAYS"
      ## Compaction bulk size here 10 000 offers logs (at most) will be compacted (Expected value between 1 000 and 200 000)
      compaction_size: 10000
  offer-swift-1:
    # provider : openstack-swift for v1 or openstack-swift-v3 for v3
    provider: openstack-swift-v3
    # swiftKeystoneAuthUrl : URL de connexion à keystone
    swiftKeystoneAuthUrl: https://openstack-hostname:port/auth/1.0
    # swiftDomain : domaine OpenStack dans lequel l'utilisateur est enregistré
    swiftDomain: domaine
    # swiftUser : identifiant de l'utilisateur
    swiftUser: utilisateur
    # swiftPassword: has to be set in vault-vitam.yml (encrypted) with same structure => DO NOT COMMENT OUT
    # swiftProjectName : nom du projet openstack
    swiftProjectName: monTenant
    # swiftUrl: optional variable to force the swift URL
    # swiftUrl: https://swift-hostname:port/swift/v1
    #SSL TrustStore
    swiftTrustStore: /chemin_vers_mon_fichier/monSwiftTrustStore.jks
    #Max connection (concurrent connections), per route, to keep in pool (if a pooling ConnectionManager is used) (by default 2 for Apache HttpClient)
    swiftMaxConnectionsPerRoute: 200
    #Max total connection (concurrent connections) to keep in pool (if a pooling ConnectionManager is used) (by default 20 for Apache HttpClient)
    swiftMaxConnections: 1000
    #Max time (in milliseconds) for waiting to establish connection
    swiftConnectionTimeout: 200000
    #Max time (in milliseconds) waiting for a data from the server (socket)
    swiftReadTimeout: 60000
    #Time (in seconds) to renew a token before expiration occurs (blocking)
    swiftHardRenewTokenDelayBeforeExpireTime: 60
    offer_log_compaction:
      ## Expiration, here offer logs 21 days old will be compacted
      expiration_value: 21
      ## Choose one of "MILLENNIA", "HALF_DAYS", "MILLIS", "FOREVER", "MICROS", "CENTURIES", "DECADES", "YEARS", "DAYS", "SECONDS", "HOURS", "MONTHS", "WEEKS", "NANOS", "MINUTES", "ERAS"
      expiration_unit: "DAYS"
      ## Compaction bulk size here 10 000 offers logs (at most) will be compacted (Expected value between 1 000 and 200 000)
      compaction_size: 10000
  offer-s3-1:
    # provider : can only be amazon-s3-v1 for Amazon SDK S3 V1
    provider: 'amazon-s3-v1'
    # s3Endpoint :  : URL of connection to S3
    s3Endpoint: https://s3.domain/
    # s3RegionName (optional):  Region name (default value us-east-1)
    s3RegionName: us-east-1
    # s3SignerType (optional):  Signing algorithm.
    #     - signature V4 : 'AWSS3V4SignerType' (default value)
    #     - signature V2 : 'S3SignerType'
    s3SignerType: AWSS3V4SignerType
    # s3PathStyleAccessEnabled (optional):  'true' to access bucket in "path-style", else "virtual-hosted-style" (false by default in java client, true by default in ansible scripts) 
    s3PathStyleAccessEnabled: true
    # s3MaxConnections (optional): Max total connection (concurrent connections) (50 by default)
    s3MaxConnections: 50
    # s3ConnectionTimeout (optional): Max time (in milliseconds) for waiting to establish connection (10000 by default)
    s3ConnectionTimeout: 10000
    # s3SocketTimeout (optional): Max time (in milliseconds) for reading from a connected socket (50000 by default)
    s3SocketTimeout: 50000
    # s3RequestTimeout (optional): Max time (in milliseconds) for a request (0 by default, disabled)
    s3RequestTimeout: 0
    # s3ClientExecutionTimeout (optional): Max time (in milliseconds) for a request by java client (0 by default, disabled)
    s3ClientExecutionTimeout: 0

    #Time (in seconds) to renew a token before expiration occurs
    swiftSoftRenewTokenDelayBeforeExpireTime: 300
    offer_log_compaction:
      ## Expiration, here offer logs 21 days old will be compacted
      expiration_value: 21
      ## Choose one of "MILLENNIA", "HALF_DAYS", "MILLIS", "FOREVER", "MICROS", "CENTURIES", "DECADES", "YEARS", "DAYS", "SECONDS", "HOURS", "MONTHS", "WEEKS", "NANOS", "MINUTES", "ERAS"
      expiration_unit: "DAYS"
      ## Compaction bulk size here 10 000 offers logs (at most) will be compacted (Expected value between 1 000 and 200 000)
      compaction_size: 10000

  # example_swift_v1:
  #    provider: openstack-swift
  #    swiftKeystoneAuthUrl: https://keystone/auth/1.0
  #    swiftDomain: domain
  #    swiftUser: user
  #    swiftPassword: has to be set in vault-vitam.yml (encrypted) with same structure => DO NOT COMMENT OUT
  # example_swift_v3:
  #    provider: openstack-swift-v3
  #    swiftKeystoneAuthUrl: https://keystone/v3
  #    swiftDomain: domaine
  #    swiftUser: user
  #    swiftPassword: has to be set in vault-vitam.yml (encrypted) with same structure => DO NOT COMMENT OUT
  #    swiftProjectName: monTenant
  #    projectName: monTenant
  # swiftTrustStore: /chemin_vers_mon_fichier/monSwiftTrustStore.jks
  # swiftMaxConnectionsPerRoute: 200
  # swiftMaxConnections: 1000
  # swiftConnectionTimeout: 200000
  # swiftReadTimeout: 60000
  # Time (in seconds) to renew a token before expiration occurs
  # swiftHardRenewTokenDelayBeforeExpireTime: 60
  # swiftSoftRenewTokenDelayBeforeExpireTime: 300

Refer to the comments in the file to fill it in correctly.

Note

In a multi-site deployment, within the vitam_strategy section, the vitam_site_name directive defines, for the associated offer, the name of the Consul datacenter. If it is not set, the value of the vitam_site_name variable defined in the inventory is used by default.
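
As a hedged sketch only (offer and site names are illustrative and must match the inventories of both sites), a vitam_strategy section for a two-site deployment could look like:

vitam_strategy:
  - name: offer-fs-1            # local offer, referent for this site
    referent: true
  - name: offer-fs-2            # offer hosted by the remote site
    referent: false
    vitam_site_name: prod-dc2   # Consul datacenter name of the remote site
    distant: true               # only set for an offer that is not on this platform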

Warning

Consistency between the inventory and the vitam_strategy section (and other_strategies when using multiple strategies) is critical for the correct deployment and operation of the VITAM software solution. In particular, the list of offers in vitam_strategy must match exactly the offer names declared in the inventory (or in the inventories of each datacenter, in a multi-site setup).

Warning

When connecting to a keystone over https, do not forget to import the public key of the keystone CA into the PKI.

4.2.3.2.4. The cots_vars.yml file

Configuration is done in the environments/group_vars/all/cots_vars.yml file:

---

consul:
    retry_interval: 10 # in seconds
    check_internal: 10 # in seconds
    check_timeout: 5 # in seconds
    network: "ip_admin" # Which network to use for consul communications ? ip_admin or ip_service ?

consul_remote_sites:
    # wan contains the wan addresses of the consul server instances of the external vitam sites
    # Exemple, if our local dc is dc1, we will need to set dc2 & dc3 wan conf:
    # - dc2:
    #   wan: ["10.10.10.10","1.1.1.1"]
    # - dc3:
    #   wan: ["10.10.10.11","1.1.1.1"]
# Please uncomment and fill values if you want to connect VITAM to external SIEM
# external_siem:
#     host:
#     port:

elasticsearch:
    log:
        host: "elasticsearch-log.service.{{ consul_domain }}"
        port_http: "9201"
        groupe: "log"
        baseuri: "elasticsearch-log"
        cluster_name: "elasticsearch-log"
        consul_check_http: 10 # in seconds
        consul_check_tcp: 10 # in seconds
        action_log_level: error
        https_enabled: false
        indices_fielddata_cache_size: '30%' # related to https://www.elastic.co/guide/en/elasticsearch/reference/7.6/modules-fielddata.html
        indices_breaker_fielddata_limit: '40%' # related to https://www.elastic.co/guide/en/elasticsearch/reference/7.6/circuit-breaker.html#fielddata-circuit-breaker
        dynamic_timeout: 30s
        # default index template
        index_templates:
            default:
                shards: 1
                replica: 1
            packetbeat:
                shards: 5
        log_appenders:
            root:
                log_level: "info"
            rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "5GB"
                max_files: "50"
            deprecation_rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "1GB"
                max_files: "10"
                log_level: "warn"
            index_search_slowlog_rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "1GB"
                max_files: "10"
                log_level: "warn"
            index_indexing_slowlog_rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "1GB"
                max_files: "10"
                log_level: "warn"
        # By default, is commented. Should be uncommented if ansible computes badly vCPUs number ;  values are associated vCPUs numbers ; please adapt to your configuration
        # thread_pool:
        #     index:
        #         size: 2
        #     get:
        #         size: 2
        #     search:
        #         size: 2
        #     write:
        #         size: 2
        #     warmer:
        #         max: 2
    data:
        host: "elasticsearch-data.service.{{ consul_domain }}"
        # default is 0.1 (10%) and should be quite enough in most cases
        #index_buffer_size_ratio: "0.15"
        port_http: "9200"
        groupe: "data"
        baseuri: "elasticsearch-data"
        cluster_name: "elasticsearch-data"
        consul_check_http: 10 # in seconds
        consul_check_tcp: 10 # in seconds
        action_log_level: debug
        https_enabled: false
        # discovery_zen_minimum_master_nodes: 2 # comented by default ; by default, value is half the length of ansible associated group whose racks have the same number of machine. If it is not the case, this value have to be set with the smallest rack (if using param is_balancing). ONLY EXISTS FOR DATA CLUSTER !!!! DO NOT FORGET TO APPLY PARAMETER WITH REPLICA NUMBER !!!!
        indices_fielddata_cache_size: '30%' # related to https://www.elastic.co/guide/en/elasticsearch/reference/6.5/modules-fielddata.html
        indices_breaker_fielddata_limit: '40%' # related to https://www.elastic.co/guide/en/elasticsearch/reference/6.5/circuit-breaker.html#fielddata-circuit-breaker
        dynamic_timeout: 30s
        # default index template
        index_templates:
            default:
                shards: 10
                replica: 2
        log_appenders:
            root:
                log_level: "info"
            rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "5GB"
                max_files: "50"
            deprecation_rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "5GB"
                max_files: "50"
                log_level: "warn"
            index_search_slowlog_rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "5GB"
                max_files: "50"
                log_level: "warn"
            index_indexing_slowlog_rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "5GB"
                max_files: "50"
                log_level: "warn"
        # By default, is commented. Should be uncommented if ansible computes badly vCPUs number ;  values are associated vCPUs numbers ; please adapt to your configuration
        # thread_pool:
        #     index:
        #         size: 2
        #     get:
        #         size: 2
        #     search:
        #         size: 2
        #     write:
        #         size: 2
        #     warmer:
        #         max: 2

mongodb:
    mongos_port: 27017
    mongoc_port: 27018
    mongod_port: 27019
    mongo_authentication: "true"
    host: "mongos.service.{{ consul_domain }}"
    check_consul: 10 # in seconds
    drop_info_log: false # Drop mongo (I)nformational log, for Verbosity Level of 0

logstash:
    host: "logstash.service.{{ consul_domain }}"
    user: logstash
    port: 10514
    rest_port: 20514
    check_consul: 10 # in seconds
    # logstash xms & xmx in Megabytes
    # jvm_xms: 2048
    # jvm_xmx: 2048
    # workers_number: 4
    log_appenders:
        rolling:
            max_log_file_size: "100MB"
            max_total_log_size: "5GB"
        json_rolling:
            max_log_file_size: "100MB"
            max_total_log_size: "5GB"

# Prometheus params
prometheus:
    metrics_path: /admin/v1/metrics
    check_consul: 10 # in seconds
    prometheus_config_file_target_directory: # Set path where "prometheus.yml" file will be generated. Example: /tmp/
    server:
        enabled: false
        port: 19090
    node_exporter:
        enabled: true
        port: 19100
        metrics_path: /metrics
    alertmanager:
        enabled: false
        api_port: 19093
        cluster_port: 19094
grafana:
    enabled: false
    check_consul: 10 # in seconds
    http_port: 13000

# Curator units: days
curator:
    log:
        metrics:
            close: 5
            delete: 30
        logstash:
            close: 5
            delete: 30
        metricbeat:
            close: 5
            delete: 30
        packetbeat:
            close: 5
            delete: 30

kibana:
    header_value: "reporting"
    import_delay: 10
    import_retries: 10
    log:
        baseuri: "kibana_log"
        api_call_timeout: 120
        groupe: "log"
        port: 5601
        default_index_pattern: "logstash-vitam*"
        check_consul: 10 # in seconds
        # default shards & replica
        shards: 5
        replica: 1
        # pour index logstash-*
        metrics:
            shards: 5
            replica: 1
        # pour index metrics-vitam-*
        logs:
            shards: 5
            replica: 1
        # pour index metricbeat-*
        metricbeat:
            shards: 5 # must be a factor of 30
            replica: 1
    data:
        baseuri: "kibana_data"
        # OMA : bugdette : api_call_timeout is used for retries ; should ceate a separate variable rather than this one
        api_call_timeout: 120
        groupe: "data"
        port: 5601
        default_index_pattern: "logbookoperation_*"
        check_consul: 10 # in seconds
        # index template for .kibana
        shards: 1
        replica: 1

syslog:
    # value can be syslog-ng or rsyslog
    name: "rsyslog"

cerebro:
    baseuri: "cerebro"
    port: 9000
    check_consul: 10 # in seconds

siegfried:
    port: 19000
    consul_check: 10 # in seconds

clamav:
    port: 3310
    # frequency freshclam for database update per day (from 0 to 24 - 24 meaning hourly check)
    db_update_periodicity: 1

mongo_express:
    baseuri: "mongo-express"

ldap_authentification:
    ldap_protocol: "ldap"
    ldap_server: "{% if groups['ldap']|length > 0 %}{{ groups['ldap']|first }}{% endif %}"
    ldap_port: "389"
    ldap_base: "dc=programmevitam,dc=fr"
    ldap_login: "cn=Manager,dc=programmevitam,dc=fr"
    uid_field: "uid"
    ldap_userDn_Template: "uid={0},ou=people,dc=programmevitam,dc=fr"
    ldap_group_request: "(&(objectClass=groupOfNames)(member={0}))"
    ldap_admin_group: "cn=admin,ou=groups,dc=programmevitam, dc=fr"
    ldap_user_group: "cn=user,ou=groups,dc=programmevitam, dc=fr"
    ldap_guest_group: "cn=guest,ou=groups,dc=programmevitam, dc=fr"

java_prerequisites:
    debian: "openjdk-11-jre-headless"
    redhat: "java-11-openjdk-headless"

When choosing the COTS component used to forward syslog messages to logstash, you can pick either syslog-ng or rsyslog. To do so, change the value of the syslog.name directive; the default value is rsyslog.
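
For example, to use syslog-ng instead of the default:

syslog:
    # value can be syslog-ng or rsyslog
    name: "syslog-ng"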

Note

If you uncomment and fill in the values of the external_siem block, the messages will be sent (by rsyslog or syslog-ng, depending on your deployment choice) to a SIEM external to the VITAM software solution, using the values given in that block; in that case it is not necessary to populate the Ansible groups [hosts_logstash] and [hosts_elasticsearch_log].
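
A minimal sketch, assuming a hypothetical SIEM reachable at siem.example.org on port 514 (adapt both values to your own target):

external_siem:
    host: siem.example.org
    port: 514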

4.2.3.2.5. The tenants_vars.yml file

Hint

This file must be created from tenants_vars.yml.example and adjusted as needed.

The environments/group_vars/all/tenants_vars.yml file manages the tenant-specific configuration of the platform (list of tenants, tenant grouping, number of shards and replicas, etc.).

### tenants ###
# List of active tenants
vitam_tenant_ids: [0,1,2,3,4,5,6,7,8,9]
# List of dead / removed tenants that should never be reused / present in vitam_tenant_ids
vitam_removed_tenants: []
# Administration tenant
vitam_tenant_admin: 1

###
# Elasticsearch tenant indexation
# ===============================
#
# Elastic search index configuration settings :
# - 'number_of_shards' : number of shards per index. Every ES shard is stored as a lucene index.
# - 'number_of_replicas': number of additional copies of primary shards
# The total number of shards : number_of_shards * (1 primary + M number_of_replicas)
#
# Default settings should be okay for most use cases.
# For more data-intensive workloads or deployments with high number of tenants, custom tenant and/or collection configuration might be specified.
#
# Tenant list may be specified as :
# - A specific tenant                                                 : eg. '1'
# - A tenant range                                                    : eg. '10-19'
# - A comma-separated combination of specific tenants & tenant ranges : eg. '1, 5, 10-19, 50-59'
#
# Masterdata collections (accesscontract, filerules...) are indexed as single elasticsearch indexes :
# - Index name format : {collection}_{date_time_of_creation}. e.g. accesscontract_20200415_042011
# - Index alias name : {collection}. e.g. accesscontract
#
# Metadata collections (unit & objectgroup), and logbook operation collections are stored on a per-tenant index basis :
# - Index name       : {collection}_{tenant}_{date_time_of_creation}. e.g. unit_1_20200517_025041
# - Index alias name : {collection}_{tenant}. e.g. unit_1
#
# Very small tenants (1-100K entries) may be grouped in a "tenant group", and hence, stored in a single elasticsearch index.
# This allows reducing the number of indexes & shards that the elasticsearch cluster need to manage :
# - Index name       : {collection}_{tenant_group_name}_{date_time_of_creation}. e.g. logbookoperation_grp5_20200517_025041
# - Index alias name : {collection}_{tenant_group_name}. e.g. logbookoperation_grp5
#
# Tenant list can be wide ranges (eg: 100-199), and may contain non-existing (yet) tenants. i.e. tenant lists might be wider that 'vitam_tenant_ids' section
# This allows specifying predefined tenant families (whether normal tenants ranges, or tenant groups) to which tenants can be added in the future.
# However, tenant lists may not intersect (i.e. a single tenant cannot belong to 2 configuration sections).
#
# Sizing recommendations :
#  - 1 shard per 5-10M records for small documents (eg. masterdata collections)
#  - 1 shard per 1-2M records for larger documents (eg. metadata & logbook collections)
#  - As a general rule, shard size should not exceed 30GB per shard
#  - A single ES node should not handle > 200 shards (be it a primary or a replica)
#  - It is recommended to start small and add more shards when needed (re-sharding requires a re-indexation operation)
#
# /!\ IMPORTANT :
# Changing the configuration of an existing tenant requires re-indexation of the tenants and/or tenant groups
#
# Please refer to documentation for more details.
#
###
vitam_elasticsearch_tenant_indexation:

  default_config:
    # Default settings for masterdata collections (1 index per collection)
    masterdata:
      number_of_shards: 1
      number_of_replicas: 0
    # Default settings for unit indexes (1 index per tenant)
    unit:
      number_of_shards: 3
      number_of_replicas: 0
    # Default settings for object group indexes (1 index per tenant)
    objectgroup:
      number_of_shards: 3
      number_of_replicas: 0
    # Default settings for logbook operation indexes (1 index per tenant)
    logbookoperation:
      number_of_shards: 2
      number_of_replicas: 0

  ###
  # Default masterdata collection indexation settings (default_config section) apply for all master data collections
  # Custom settings can be defined for the following masterdata collections:
  #   - accesscontract
  #   - accessionregisterdetail
  #   - accessionregistersummary
  #   - accessionregistersymbolic
  #   - agencies
  #   - archiveunitprofile
  #   - context
  #   - fileformat
  #   - filerules
  #   - griffin
  #   - ingestcontract
  #   - managementcontract
  #   - ontology
  #   - preservationscenario
  #   - profile
  #   - securityprofile
  ###
  masterdata:
  #  {collection}:
  #    number_of_shards: 1
  #    number_of_replicas: 2
  #  ...


  ###
  # Custom index settings for regular tenants.
  ###
  dedicated_tenants:
  #  - tenants: '1, 3, 11-20'
  #    unit:
  #      number_of_shards: 4
  #      number_of_replicas: 0
  #    objectgroup:
  #      number_of_shards: 5
  #      number_of_replicas: 0
  #    logbookoperation:
  #      number_of_shards: 3
  #      number_of_replicas: 0
  #  ...




  ###
  # Custom index settings for grouped tenants.
  # Group name must meet the following criteria:
  #  - alphanumeric characters
  #  - lowercase only
  #  - not start with a number
  #  - be less than 64 characters long.
  #  - NO special characters - / _ | ...
  ###
  grouped_tenants:
  #  - name: 'grp1'
  #    tenants: '5-10'
  #    unit:
  #      number_of_shards: 5
  #      number_of_replicas: 0
  #    objectgroup:
  #      number_of_shards: 6
  #      number_of_replicas: 0
  #    logbookoperation:
  #      number_of_shards: 7
  #      number_of_replicas: 0
  #  ...

Refer to the comments in the file to fill it in correctly.

Particular attention must be paid to the number of shards and replicas configured under the vitam_elasticsearch_tenant_indexation.default_config parameter (the tenants_vars.yml.example file contains the values recommended by Vitam for a production deployment). This parameter is mandatory.
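
For instance, a hedged override for a hypothetical large tenant (the tenant number and the shard and replica counts are illustrative only) could be declared alongside default_config in the same file:

vitam_elasticsearch_tenant_indexation:
  dedicated_tenants:
    - tenants: '2'
      unit:
        number_of_shards: 6
        number_of_replicas: 1
      objectgroup:
        number_of_shards: 6
        number_of_replicas: 1
      logbookoperation:
        number_of_shards: 3
        number_of_replicas: 1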

See also

Refer to the DEX chapter on managing Elasticsearch indexes in a massively multi-tenant context for more information on this feature.

Warning

If the tenant distribution is changed, a reindexation of the elasticsearch-data cluster is required. This procedure is the responsibility of the operations team and requires a service shutdown of the platform. The execution time of this reindexation depends on the amount of data to process.

See also

Refer to the DEX chapter on reindexation for more information.

4.2.3.3. Declaring secrets

Warning

All of the passwords provided below are default values and must be changed!

4.2.3.3.1. vitam

Warning

This section describes files containing sensitive data. It is important to enforce a strong password policy, in line with ANSSI recommendations, for example: do not reuse the same password for every service, renew passwords regularly, use upper-case and lower-case letters, digits and special characters (refer to the ANSSI documentation: https://www.ssi.gouv.fr/guide/mot-de-passe). If a password file (vault-password-file) is used, store this password as the content of that file and do not forget to secure or delete the file once the installation is complete.

The secrets used by the software solution (apart from certificates, which are covered in a later section) are defined in files encrypted with ansible-vault.

Important

All of the vaults present in the inventory tree must be protected by the same password!

The first step is to change the passwords of all the vaults present in the deployment tree (the default password is contained in the vault_pass.txt file), using the command ansible-vault rekey <vault file> (see the example after the list below).

Here is the list of vaults whose password must be changed:

  • environments/group_vars/all/vault-vitam.yml
  • environments/group_vars/all/vault-keystores.yml
  • environments/group_vars/all/vault-extra.yml
  • environments/certs/vault-certs.yml

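For example, the rekey operation can be run on each of these vaults from the deployment directory; ansible-vault then prompts for the current password (the default one from vault_pass.txt, unless a vault password file is configured) and for the new password:

ansible-vault rekey environments/group_vars/all/vault-vitam.yml
ansible-vault rekey environments/group_vars/all/vault-keystores.yml
ansible-vault rekey environments/group_vars/all/vault-extra.yml
ansible-vault rekey environments/certs/vault-certs.yml
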
Two vaults are mainly used when deploying a release:

Warning

Their content must therefore be modified before any deployment.

  • The environments/group_vars/all/vault-vitam.yml file contains the general secrets:

    ---
    # Vitam platform secret key
    plateforme_secret: vitamsecret
    
    # The consul key must be 16-bytes, Base64 encoded: https://www.consul.io/docs/agent/encryption.html
    # You can generate it with the "consul keygen" command
    # Or you can use this script: deployment/pki/scripts/generate_consul_key.sh
    consul_encrypt: Biz14ohqN4HtvZmrXp3N4A==
    
    mongodb:
      mongo-data:
        passphrase: changeitkM4L6zBgK527tWBb
        admin:
          user: vitamdb-admin
          password: change_it_1MpG22m2MywvKW5E
        localadmin:
          user: vitamdb-localadmin
          password: change_it_HycFEVD74g397iRe
        system:
          user: vitamdb-system
          password: change_it_HycFEVD74g397iRe
        metadata:
          user: metadata
          password: change_it_37b97KVaDV8YbCwt
        logbook:
          user: logbook
          password: change_it_jVi6q8eX4H1Ce8UC
        report:
          user: report
          password: change_it_jb7TASZbU6n85t8L
        functionalAdmin:
          user: functional-admin
          password: change_it_9eA2zMCL6tm6KF1e
        securityInternal:
          user: security-internal
          password: change_it_m39XvRQWixyDX566
      offer-fs-1:
        passphrase: changeitmB5rnk1M5TY61PqZ
        admin:
          user: vitamdb-admin
          password: change_it_FLkM5emt63N73EcN
        localadmin:
          user: vitamdb-localadmin
          password: change_it_QeH8q4e16ah4QKXS
        system:
          user: vitamdb-system
          password: change_it_HycFEVD74g397iRe
        offer:
          user: offer
          password: change_it_pQi1T1yT9LAF8au8
      offer-fs-2:
        passphrase: changeiteSY1By57qZr4MX2s
        admin:
          user: vitamdb-admin
          password: change_it_84aTMFZ7h8e2NgMe
        localadmin:
          user: vitamdb-localadmin
          password: change_it_Am1B37tGY1w5VfvX
        system:
          user: vitamdb-system
          password: change_it_HycFEVD74g397iRe
        offer:
          user: offer
          password: change_it_mLDYds957sNQ53mA
      offer-tape-1:
        passphrase: changeitmB5rnk1M5TY61PqZ
        admin:
          user: vitamdb-admin
          password: change_it_FLkM5emt63N73EcN
        localadmin:
          user: vitamdb-localadmin
          password: change_it_QeH8q4e16ah4QKXS
        system:
          user: vitamdb-system
          password: change_it_HycFEVD74g397iRe
        offer:
          user: offer
          password: change_it_pQi1T1yT9LAF8au8
      offer-swift-1:
        passphrase: changeitgYvt42M2pKL6Zx3T
        admin:
          user: vitamdb-admin
          password: change_it_e21hLp51WNa4sJFS
        localadmin:
          user: vitamdb-localadmin
          password: change_it_QB8857SJrGrQh2yu
        system:
          user: vitamdb-system
          password: change_it_HycFEVD74g397iRe
        offer:
          user: offer
          password: change_it_AWJg2Bp3s69P6nMe
      offer-s3-1:
        passphrase: changeituF1jVdR9NqdTG625
        admin:
          user: vitamdb-admin
          password: change_it_5b7cSWcS5M1NF4kv
        localadmin:
          user: vitamdb-localadmin
          password: change_it_S9jE24rxHwUZP6y5
        system:
          user: vitamdb-system
          password: change_it_HycFEVD74g397iRe
        offer:
          user: offer
          password: change_it_TuTB1i2k7iQW3zL2
      offer-tape-1:
        passphrase: changeituF1jghT9NqdTG625
        admin:
          user: vitamdb-admin
          password: change_it_5b7cSWcab91NF4kv
        localadmin:
          user: vitamdb-localadmin
          password: change_it_S9jE24rxHwUZP5a6
        system:
          user: vitamdb-system
          password: change_it_HycFEVD74g397iRe
        offer:
          user: offer
          password: change_it_TuTB1i2k7iQW3c2a
    
    vitam_users:
      - vitam_aadmin:
        login: aadmin
        password: change_it_z5MP7GC4qnR8nL9t
        role: admin
      - vitam_uuser:
        login: uuser
        password: change_it_w94Q3jPAT2aJYm8b
        role: user
      - vitam_gguest:
        login: gguest
        password: change_it_E5v7Tr4h6tYaQG2W
        role: guest
      - techadmin:
        login: techadmin
        password: change_it_K29E1uHcPZ8zXji8
        role: admin
    
    ldap_authentification:
        ldap_pwd: "change_it_t69Rn5NdUv39EYkC"
    
    admin_basic_auth_password: change_it_5Yn74JgXwbQ9KdP8
    
    vitam_offers:
        offer-swift-1:
            swiftPassword: change_it_m44j57aYeRPnPXQ2
        offer-s3-1:
            s3AccessKey: accessKey_change_grLS8372Uga5EJSx
            s3SecretKey: secretKey_change_p97es2m2CHXPJA1m
    

Caution

Only alphanumeric characters are valid for the passphrase directives.
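
As an illustration only (a minimal sketch, assuming openssl and GNU coreutils are available on the deployment host; adapt the lengths to your password policy), the following commands generate an alphanumeric secret as well as the 16-byte Base64 Consul key mentioned in the example above:

# Alphanumeric-only secret (24 characters), e.g. for a passphrase entry
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24; echo

# 16-byte Base64-encoded key, equivalent to "consul keygen", for consul_encrypt
openssl rand -base64 16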

Warning

The configuration of the user authentication mode for the IHM démo (demo GUI) is managed in the deployment/environments/group_vars/all/vitam_vars.yml file. Several authentication modes are available in the authentication_realms section. When authentication relies on the iniRealm mechanism (default shiro configuration), the passwords declared in the vitam_users section must follow a strong password policy, as indicated at the beginning of this chapter. It is also possible to choose an authentication mode based on an external LDAP directory (ldapRealm in the authentication_realms section).

Note

For an installation with at least one swift offer, the name of each offer and the associated swift connection password, as defined in the offers_opts.yml file, must be declared in the vitam_offers section. The example above shows the password declaration for the swift offer offer-swift-1.

Note

For an installation with at least one s3 offer, the name of each offer and the associated s3 access key and secret key, as defined in the offers_opts.yml file, must be declared in the vitam_offers section. The example above shows this declaration for the s3 offer offer-s3-1.

  • The environments/group_vars/all/vault-keystores.yml file contains the passwords of the certificate stores (keystores) used in VITAM:

    # NO UNDERSCORE ALLOWED IN VALUES
    keystores:
      server:
        offer: changeit817NR75vWsZtgAgJ
        access_external: changeitMZFD2YM4279miitu
        ingest_external: changeita2C74cQhy84BLWCr
        ihm_recette: changeit4FWYVK1347mxjGfe
        ihm_demo: changeit6kQ16eyDY7QPS9fy
      client_external:
        ihm_demo: changeitGT38hhTiA32x1PLy
        gatling: changeit2sBC5ac7NfGF9Qj7
        ihm_recette: changeitdAZ9Eq65UhDZd9p4
        reverse: changeite5XTzb5yVPcEX464
        vitam_admin_int: changeitz6xZe5gDu7nhDZd9
      client_storage:
        storage: changeit647D7LWiyM6qYMnm
      timestamping:
        secure_logbook: changeitMn9Skuyx87VYU62U
        secure_storage: changeite5gDu9Skuy84BLW9
    truststores:
      server: changeitxNe4JLfn528PVHj7
      client_external: changeitJ2eS93DcPH1v4jAp
      client_storage: changeitHpSCa31aG8ttB87S
    grantedstores:
      client_external: changeitLL22HkmDCA2e2vj7
      client_storage: changeitR3wwp5C8KQS76Vcu
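
As a quick sanity check before encrypting this file (a hedged sketch; the pattern simply looks for an underscore appearing after the ": " separator, i.e. inside a value), forbidden underscores can be spotted with:

grep -nE ': .*_' environments/group_vars/all/vault-keystores.yml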
    

Warning

Secure your environment by defining strong passwords.

4.2.3.3.2. Extras

  • The environments/group_vars/all/vault-extra.yml file contains the credentials for the optional extra components used in VITAM:

    # Example for git lfs ; uncomment & use if needed
    #vitam_gitlab_itest_login: "account"
    #vitam_gitlab_itest_password: "change_it_4DU42JVf2x2xmPBs"
    

Note

The installation of the prometheus and grafana stack can be enabled or disabled from the cots_var.yml file. Co-locating the prometheus and grafana stack is strongly recommended.

Note

The vitam.yml playbook includes steps marked with no_log so that sensitive values, such as certificate passwords, are not displayed in clear text. In case of error, this line can be removed from the file to allow a finer analysis of a potential problem on one of these steps.
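
For instance, the tasks concerned can be located with a simple search before temporarily commenting out their no_log line (sketch, to be run from the deployment directory):

grep -rn "no_log" ansible-vitam/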

4.2.3.3.3. The ansible-vault command

The files under environments/group_vars/all whose names start with vault- must be protected (encrypted) with the ansible-vault utility.
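
An encrypted file starts with the $ANSIBLE_VAULT header; as a quick check (sketch, to be run from environments/group_vars/all), the following command lists the vault- files that are still in clear text:

grep -L '^\$ANSIBLE_VAULT' vault-*.yml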

Note

Do not forget to update the vault_pass.txt file accordingly.
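
As a reminder (a hedged sketch, assuming vault_pass.txt sits in the deployment directory and simply contains the vault password on a single line), the same password must allow decryption of all the vault- files, for example:

echo 'myStrongVaultPassword' > vault_pass.txt
chmod 600 vault_pass.txt
ansible-vault view --vault-password-file vault_pass.txt environments/group_vars/all/vault-vitam.yml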

4.2.3.3.3.1. Generating vaulted files from plain-text files

Example with the vault-cots.example file

cp vault-cots.example vault-cots.yml
ansible-vault encrypt vault-cots.yml

4.2.3.3.3.2. Rekeying a vaulted file

Example with the vault-cots.yml file

ansible-vault rekey vault-cots.yml
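
To check the content after a rekey, the view sub-command can be used, for example:

ansible-vault view vault-cots.yml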

4.2.3.4. The Elasticsearch mapping for Unit and ObjectGroup

The elasticsearch index mappings for the Unit and ObjectGroup metadata collections can be configured externally, more specifically in the deployment/ansible-vitam/roles/elasticsearch-mapping/files/ directory, which contains:

  • deployment/ansible-vitam/roles/elasticsearch-mapping/files/unit-es-mapping.json
  • deployment/ansible-vitam/roles/elasticsearch-mapping/files/og-es-mapping.json

Example of the mapping file for the ObjectGroup collection:

{
  "dynamic_templates": [
    {
      "object": {
        "match_mapping_type": "object",
        "mapping": {
          "type": "object"
        }
      }
    },
    {
      "all_string": {
        "match": "*",
        "mapping": {
          "type": "text"
        }
      }
    }
  ],
  "properties": {
    "FileInfo": {
      "properties": {
        "CreatingApplicationName": {
          "type": "text"
        },
        "CreatingApplicationVersion": {
          "type": "text"
        },
        "CreatingOs": {
          "type": "text"
        },
        "CreatingOsVersion": {
          "type": "text"
        },
        "DateCreatedByApplication": {
          "type": "date",
          "format": "strict_date_optional_time"
        },
        "Filename": {
          "type": "text"
        },
        "LastModified": {
          "type": "date",
          "format": "strict_date_optional_time"
        }
      }
    },
    "Metadata": {
      "properties": {
        "Text": {
          "type": "object"
        },
        "Document": {
          "type": "object"
        },
        "Image": {
          "type": "object"
        },
        "Audio": {
          "type": "object"
        },
        "Video": {
          "type": "object"
        }
      }
    },
    "OtherMetadata": {
      "type": "object",
      "properties": {
        "RawMetadata": {
          "type": "object"
        }
      }
    },
    "_profil": {
      "type": "keyword"
    },
    "_qualifiers": {
      "properties": {
        "_nbc": {
          "type": "long"
        },
        "qualifier": {
          "type": "keyword"
        },
        "versions": {
          "type": "nested",
          "properties": {
            "Compressed": {
              "type": "text"
            },
            "DataObjectGroupId": {
              "type": "keyword"
            },
            "DataObjectVersion": {
              "type": "keyword"
            },
            "DataObjectSystemId": {
              "type": "keyword"
            },
            "DataObjectGroupSystemId": {
              "type": "keyword"
            },
            "_opi": {
              "type": "keyword"
            },
            "FileInfo": {
              "properties": {
                "CreatingApplicationName": {
                  "type": "text"
                },
                "CreatingApplicationVersion": {
                  "type": "text"
                },
                "CreatingOs": {
                  "type": "text"
                },
                "CreatingOsVersion": {
                  "type": "text"
                },
                "DateCreatedByApplication": {
                  "type": "date",
                  "format": "strict_date_optional_time"
                },
                "Filename": {
                  "type": "text"
                },
                "LastModified": {
                  "type": "date",
                  "format": "strict_date_optional_time"
                }
              }
            },
            "FormatIdentification": {
              "properties": {
                "FormatId": {
                  "type": "keyword"
                },
                "FormatLitteral": {
                  "type": "keyword"
                },
                "MimeType": {
                  "type": "keyword"
                },
                "Encoding": {
                  "type": "keyword"
                }
              }
            },
            "MessageDigest": {
              "type": "keyword"
            },
            "Algorithm": {
              "type": "keyword"
            },
            "PhysicalDimensions": {
              "properties": {
                "Diameter": {
                  "properties": {
                    "unit": {
                      "type": "keyword"
                    },
                    "dValue": {
                      "type": "double"
                    }
                  }
                },
                "Height": {
                  "properties": {
                    "unit": {
                      "type": "keyword"
                    },
                    "dValue": {
                      "type": "double"
                    }
                  }
                },
                "Depth": {
                  "properties": {
                    "unit": {
                      "type": "keyword"
                    },
                    "dValue": {
                      "type": "double"
                    }
                  }
                },
                "Shape": {
                  "type": "keyword"
                },
                "Thickness": {
                  "properties": {
                    "unit": {
                      "type": "keyword"
                    },
                    "dValue": {
                      "type": "double"
                    }
                  }
                },
                "Length": {
                  "properties": {
                    "unit": {
                      "type": "keyword"
                    },
                    "dValue": {
                      "type": "double"
                    }
                  }
                },
                "NumberOfPage": {
                  "type": "long"
                },
                "Weight": {
                  "properties": {
                    "unit": {
                      "type": "keyword"
                    },
                    "dValue": {
                      "type": "double"
                    }
                  }
                },
                "Width": {
                  "properties": {
                    "unit": {
                      "type": "keyword"
                    },
                    "dValue": {
                      "type": "double"
                    }
                  }
                }
              }
            },
            "PhysicalId": {
              "type": "keyword"
            },
            "Size": {
              "type": "long"
            },
            "Uri": {
              "type": "keyword"
            },
            "_id": {
              "type": "keyword"
            },
            "_storage": {
              "properties": {
                "_nbc": {
                  "type": "long"
                },
                "offerIds": {
                  "type": "keyword"
                },
                "strategyId": {
                  "type": "keyword"
                }
              }
            }
          }
        }
      }
    },
    "_v": {
      "type": "long"
    },
    "_av": {
      "type": "long"
    },
    "_nbc": {
      "type": "long"
    },
    "_ops": {
      "type": "keyword"
    },
    "_opi": {
      "type": "keyword"
    },
    "_sp": {
      "type": "keyword"
    },
    "_sps": {
      "type": "keyword"
    },
    "_tenant": {
      "type": "long"
    },
    "_up": {
      "type": "keyword"
    },
    "_uds": {
      "type": "object",
      "enabled": false
    },
    "_us": {
      "type": "keyword"
    },
    "_storage": {
      "properties": {
        "_nbc": {
          "type": "long"
        },
        "offerIds": {
          "type": "keyword"
        },
        "strategyId": {
          "type": "keyword"
        }
      }
    },
    "_glpd": {
      "enabled": false
    }
  }
}

Note

This mapping is configured on two components: Metadata and the extra component Ihm Recette.

Caution

When changing the mapping, make sure that the update remains consistent with the VITAM ontology.

The mapping is taken into account when the indexes are first created. For a new VITAM installation, the mappings will be applied automatically. However, if VITAM is already installed, modifying the mappings requires a reindexation through the dedicated API.
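
Before triggering such a reindexation, a basic syntax check of the modified mapping files may help (a minimal sketch, assuming python3 is available on the machine where the files are edited):

python3 -m json.tool deployment/ansible-vitam/roles/elasticsearch-mapping/files/unit-es-mapping.json > /dev/null && echo "unit mapping: valid JSON"
python3 -m json.tool deployment/ansible-vitam/roles/elasticsearch-mapping/files/og-es-mapping.json > /dev/null && echo "og mapping: valid JSON"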