4.2.3. Deployment configuration

See also

The architecture of the software solution, the sizing elements and the deployment principles are defined in the DAT.

4.2.3.1. Deployment files

The deployment files are provided with the delivered VITAM release, in the deployment/ subdirectory. For the installation, they come in two parts:

  • the ansible deployment playbooks, located in the ansible-vitam/ subdirectory, which is independent of the environment to be deployed; these files normally do not need to be modified to perform an installation.
  • the inventory tree; sample files are available in the environments/ subdirectory. This tree applies to the deployment of one environment and must be duplicated when installing further environments. The files in this tree must be adapted before deployment, as explained in the following paragraphs.

4.2.3.2. Platform information

4.2.3.2.1. Inventory

To configure the deployment, a new inventory file must be created in the environments/ directory (in the rest of this document, this file is commonly referred to as hosts.<environnement>). This file must conform to the structure found in the hosts.example file (and, in particular, scrupulously respect the ansible group tree). The comments in this file provide the explanations needed to adapt it to the target environment:

# Group definition ; DO NOT MODIFY
[hosts]

# Group definition ; DO NOT MODIFY
[hosts:children]
vitam
reverse
hosts_dev_tools
ldap

########### Tests environments specifics ###########

# EXTRA : Front reverse-proxy (test environments ONLY) ; add machine name after
[reverse]
# optional : after machine, if this machine is different from VITAM machines, you can specify another become user
# Example
# vitam-centos-01.vitam ansible_ssh_user=centos


[ldap] # Extra : OpenLDAP server
# LDAP server !!! NOT FOR PRODUCTION !!! Test only


[library]
# TODO: Put here servers where this service will be deployed : library


[hosts_dev_tools]
# TODO: Put here servers where this service will be deployed : mongo-express, elasticsearch-head
# /!\ WARNING !!! NOT FOR PRODUCTION


[elasticsearch:children] # EXTRA : elasticsearch
hosts_elasticsearch_data
hosts_elasticsearch_log

########### VITAM services ###########

# Group definition ; DO NOT MODIFY
[vitam:children]
zone_external
zone_access
zone_applicative
zone_storage
zone_data
zone_admin
library

##### Zone externe
[zone_external:children]
hosts_ihm_demo
hosts_ihm_recette

[hosts_ihm_demo]
# TODO: Put here servers where this service will be deployed : ihm-demo. If you use vitam-ui or your own frontend, it is recommended to leave this group blank
# If you don't need consul for ihm-demo, you can set this var after each hostname :
# consul_disabled=true
# DEPRECATED / Will soon be removed. Please consider using vitam-ui or your own front-end
# /!\ WARNING !!! NOT recommended for PRODUCTION


[hosts_ihm_recette]
# TODO: Put here servers where this service will be deployed : ihm-recette (extra feature)
# DEPRECATED / Will soon be removed.
# /!\ WARNING !!! NOT FOR PRODUCTION


##### Zone access

# Group definition ; DO NOT MODIFY
[zone_access:children]
hosts_ingest_external
hosts_access_external
hosts_collect_external

[hosts_ingest_external]
# TODO: Put here servers where this service will be deployed : ingest-external


[hosts_access_external]
# TODO: Put here servers where this service will be deployed : access-external


[hosts_collect_external]
# TODO: Put here servers where this service will be deployed : collect-external


##### Zone applicative

# Group definition ; DO NOT MODIFY
[zone_applicative:children]
hosts_ingest_internal
hosts_processing
hosts_batch_report
hosts_worker
hosts_access_internal
hosts_metadata
hosts_functional_administration
hosts_scheduler
hosts_logbook
hosts_workspace
hosts_storage_engine
hosts_security_internal
hosts_collect_internal
hosts_metadata_collect
hosts_workspace_collect


[hosts_security_internal]
# TODO: Put here servers where this service will be deployed : security-internal


[hosts_logbook]
# TODO: Put here servers where this service will be deployed : logbook


[hosts_workspace]
# TODO: Put the server where this service will be deployed : workspace
# WARNING: put only ONE server for this service, not more !


[hosts_ingest_internal]
# TODO: Put here servers where this service will be deployed : ingest-internal


[hosts_access_internal]
# TODO: Put here servers where this service will be deployed : access-internal


[hosts_metadata]
# TODO: Put here servers where this service will be deployed : metadata


[hosts_functional_administration]
# TODO: Put here servers where this service will be deployed : functional-administration


[hosts_scheduler]
# TODO: Put here servers where this service will be deployed : scheduler
# Optional parameter after each host : vitam_scheduler_thread_count=<integer> ; This is the number of threads that are available for concurrent execution of jobs. ; default is 3 threads


[hosts_processing]
# TODO: Put the server where this service will be deployed : processing
# WARNING: put only one server for this service, not more !


[hosts_storage_engine]
# TODO: Put here servers where this service will be deployed : storage-engine


[hosts_batch_report]
# TODO: Put here servers where this service will be deployed : batch-report


[hosts_worker]
# TODO: Put here servers where this service will be deployed : worker
# Optional parameter after each host : vitam_worker_capacity=<integer> ; please refer to your infrastructure for defining this number ; default is ansible_processor_vcpus value (cpu number in /proc/cpuinfo file)


[hosts_collect_internal]
# TODO: Put here servers where this service will be deployed : collect_internal


[hosts_metadata_collect]
# TODO: Put here servers where this service will be deployed : metadata_collect


[hosts_workspace_collect]
# TODO: Put the server where this service will be deployed : workspace_collect
# WARNING: put only ONE server for this service, not more !



##### Zone storage

[zone_storage:children] # DO NOT MODIFY
hosts_storage_offer_default
hosts_mongodb_offer

[hosts_storage_offer_default]
# TODO: Put here servers where this service will be deployed : storage-offer-default
# LIMIT : only 1 offer per machine
# LIMIT : only 1 machine per offer when using the filesystem or filesystem-hash provider
# Possibility to declare multiple machines with same provider only when provider is s3 or swift.
# Mandatory param for each offer is offer_conf and points to offer_opts.yml & vault-vitam.yml (with same tree)
# Optional parameter: restic_enabled=true (only 1 per offer_conf) available for providers filesystem*, openstack-swift-v3 & amazon-s3-v1
# for swift
# hostname-offre-1.vitam offer_conf=offer-swift-1 restic_enabled=true
# hostname-offre-2.vitam offer_conf=offer-swift-1
# for filesystem
# hostname-offre-2.vitam offer_conf=offer-fs-1 restic_enabled=true
# for s3
# hostname-offre-3.vitam offer_conf=offer-s3-1 restic_enabled=true
# hostname-offre-4.vitam offer_conf=offer-s3-1


[hosts_mongodb_offer:children]
hosts_mongos_offer
hosts_mongoc_offer
hosts_mongod_offer

[hosts_mongos_offer]
# WARNING : DO NOT COLLOCATE WITH [hosts_mongos_data]
# TODO: put here servers where this service will be deployed : mongos cluster for storage offers
# Mandatory params
#  - mongo_cluster_name=<offer_name> ; name of the cluster (should exist on vitam_strategy configuration in offer_opts.yml)
# The recommended practice is to install the mongos instance on the same servers as the mongoc instances
# Example
# vitam-mongo-swift-offer-01   mongo_cluster_name=offer-swift-1
# vitam-mongo-swift-offer-02   mongo_cluster_name=offer-swift-1
# vitam-mongo-fs-offer-01      mongo_cluster_name=offer-fs-1
# vitam-mongo-fs-offer-02      mongo_cluster_name=offer-fs-1
# vitam-mongo-s3-offer-01      mongo_cluster_name=offer-s3-1
# vitam-mongo-s3-offer-02      mongo_cluster_name=offer-s3-1


[hosts_mongoc_offer]
# WARNING : DO NOT COLLOCATE WITH [hosts_mongoc_data]
# TODO: put here servers where this service will be deployed : mongoc cluster for storage offers
# Mandatory params
#  - mongo_cluster_name=<offer_name> ; name of the cluster (should exist on vitam_strategy configuration in offer_opts.yml)
# Optional params
#  - mongo_rs_bootstrap=true ; mandatory for 1 node, some init commands will be executed on it
# The recommended practice is to install the mongoc instance on the same servers as the mongos instances
# Recommended practice in production: use 3 instances
# IMPORTANT : Updating cluster configuration is NOT supported. Do NOT add/remove a host to an existing replica set.
# Example :
# vitam-mongo-swift-offer-01   mongo_cluster_name=offer-swift-1   mongo_rs_bootstrap=true
# vitam-mongo-swift-offer-02   mongo_cluster_name=offer-swift-1
# vitam-swift-offer            mongo_cluster_name=offer-swift-1
# vitam-mongo-fs-offer-01      mongo_cluster_name=offer-fs-1      mongo_rs_bootstrap=true
# vitam-mongo-fs-offer-02      mongo_cluster_name=offer-fs-1
# vitam-fs-offer               mongo_cluster_name=offer-fs-1
# vitam-mongo-s3-offer-01      mongo_cluster_name=offer-s3-1      mongo_rs_bootstrap=true
# vitam-mongo-s3-offer-02      mongo_cluster_name=offer-s3-1
# vitam-s3-offer               mongo_cluster_name=offer-s3-1


[hosts_mongod_offer]
# WARNING : DO NOT COLLOCATE WITH [hosts_mongod_data]
# TODO: put here servers where this service will be deployed : mongod cluster for storage offers
# Mandatory params
#  - mongo_cluster_name=<offer_name> ; name of the cluster (should exist on vitam_strategy configuration in offer_opts.yml)
#  - mongo_shard_id=x ; increment by 1 from 0 to n to create multiple shards
# Optional params
#  - mongo_rs_bootstrap=true (default: false); mandatory for 1 node of the shard, some init commands will be executed on it
#  - mongo_arbiter=true (default: false); the node will be only an arbiter, it will not store data ; do not add this parameter on a mongo_rs_bootstrap node, maximum 1 node per shard
#  - mongod_memory=x (default: unset); this will force the wiredtiger cache size to x (unit is GB)
#  - is_small=true (default: false); this will force the priority for this server to be lower when electing master ; hardware can be downgraded for this machine
# Recommended practice in production: use 3 instances per shard
# IMPORTANT : Updating cluster configuration is NOT supported. Do NOT add/remove a host to an existing replica set, update shard id, arbiter mode or PSSmin configuration.
# Example :
# vitam-mongo-swift-offer-01   mongo_cluster_name=offer-swift-1   mongo_shard_id=0   mongo_rs_bootstrap=true
# vitam-mongo-swift-offer-02   mongo_cluster_name=offer-swift-1   mongo_shard_id=0
# vitam-swift-offer            mongo_cluster_name=offer-swift-1   mongo_shard_id=0   mongo_arbiter=true
# vitam-mongo-fs-offer-01      mongo_cluster_name=offer-fs-1      mongo_shard_id=0   mongo_rs_bootstrap=true
# vitam-mongo-fs-offer-02      mongo_cluster_name=offer-fs-1      mongo_shard_id=0
# vitam-fs-offer               mongo_cluster_name=offer-fs-1      mongo_shard_id=0   mongo_arbiter=true
# vitam-mongo-s3-offer-01      mongo_cluster_name=offer-s3-1      mongo_shard_id=0   mongo_rs_bootstrap=true
# vitam-mongo-s3-offer-02      mongo_cluster_name=offer-s3-1      mongo_shard_id=0   is_small=true # PSSmin, this machine needs less hardware
# vitam-s3-offer               mongo_cluster_name=offer-s3-1      mongo_shard_id=0   mongo_arbiter=true


##### Zone data

# Group definition ; DO NOT MODIFY
[zone_data:children]
hosts_elasticsearch_data
hosts_mongodb_data

[hosts_elasticsearch_data]
# TODO: Put here servers where this service will be deployed : elasticsearch-data cluster
# 2 params available for huge environments (parameter to be declared after each server) :
#    is_data=true/false
#    is_master=true/false
#    for site/room balancing : is_balancing=<whatever> so replica can be applied on all sites/rooms ; default is vitam_site_name
#    other options are not handled yet
# defaults are set to true, if undefined. If defined, at least one server MUST be is_data=true
# Examples :
# server1 is_master=true is_data=false
# server2 is_master=false is_data=true
# More explanation here : https://www.elastic.co/guide/en/elasticsearch/reference/5.6/modules-node.html


# Group definition ; DO NOT MODIFY
[hosts_mongodb_data:children]
hosts_mongos_data
hosts_mongoc_data
hosts_mongod_data

[hosts_mongos_data]
# WARNING : DO NOT COLLOCATE WITH [hosts_mongos_offer]
# TODO: Put here servers where this service will be deployed : mongos_data cluster
# Mandatory params
#  - mongo_cluster_name=mongo-data ; "mongo-data" is mandatory
# The recommended practice is to install the mongos instance on the same servers as the mongoc instances
# Example :
# vitam-mdbs-01   mongo_cluster_name=mongo-data
# vitam-mdbs-02   mongo_cluster_name=mongo-data
# vitam-mdbs-03   mongo_cluster_name=mongo-data


[hosts_mongoc_data]
# WARNING : DO NOT COLLOCATE WITH [hosts_mongoc_offer]
# TODO: Put here servers where this service will be deployed : mongoc_data cluster
# Mandatory params
#  - mongo_cluster_name=mongo-data ; "mongo-data" is mandatory
# Optional params
#  - mongo_rs_bootstrap=true ; mandatory for 1 node, some init commands will be executed on it
# The recommended practice is to install the mongoc instance on the same servers as the mongos instances
# Recommended practice in production: use 3 instances
# IMPORTANT : Updating cluster configuration is NOT supported. Do NOT add/remove a host to an existing replica set.
# Example :
# vitam-mdbs-01   mongo_cluster_name=mongo-data   mongo_rs_bootstrap=true
# vitam-mdbs-02   mongo_cluster_name=mongo-data
# vitam-mdbs-03   mongo_cluster_name=mongo-data


[hosts_mongod_data]
# WARNING : DO NOT COLLOCATE WITH [hosts_mongod_offer]
# TODO: Put here servers where this service will be deployed : mongod_data cluster
# Each replica_set should have an odd number of members (2n + 1)
# Reminder: For Vitam, one mongodb shard is using one replica_set
# Mandatory params
#  - mongo_cluster_name=mongo-data ; "mongo-data" is mandatory
#  - mongo_shard_id=x ; increment by 1 from 0 to n to create multiple shards
# Optional params
#  - mongo_rs_bootstrap=true (default: false); mandatory for 1 node of the shard, some init commands will be executed on it
#  - mongo_arbiter=true (default: false); the node will be only an arbiter, it will not store data ; do not add this parameter on a mongo_rs_bootstrap node, maximum 1 node per shard
#  - mongod_memory=x (default: unset); this will force the wiredtiger cache size to x (unit is GB) ; can be useful when collocated with elasticsearch
#  - is_small=true (default: false); this will force the priority for this server to be lower when electing master ; hardware can be downgraded for this machine
# Recommended practice in production: use 3 instances per shard
# IMPORTANT : Updating cluster configuration is NOT supported. Do NOT add/remove a host to an existing replica set, update shard id, arbiter mode or PSSmin configuration.
# Example:
# vitam-mdbd-01  mongo_cluster_name=mongo-data   mongo_shard_id=0   mongo_rs_bootstrap=true
# vitam-mdbd-02  mongo_cluster_name=mongo-data   mongo_shard_id=0
# vitam-mdbd-03  mongo_cluster_name=mongo-data   mongo_shard_id=0   is_small=true # PSSmin, this machine needs less hardware
# vitam-mdbd-04  mongo_cluster_name=mongo-data   mongo_shard_id=1   mongo_rs_bootstrap=true
# vitam-mdbd-05  mongo_cluster_name=mongo-data   mongo_shard_id=1
# vitam-mdbd-06  mongo_cluster_name=mongo-data   mongo_shard_id=1   mongo_arbiter=true


###### Zone admin

# Group definition ; DO NOT MODIFY
[zone_admin:children]
hosts_cerebro
hosts_consul_server
hosts_kibana_data
log_servers
hosts_elasticsearch_log
prometheus
hosts_grafana

[hosts_cerebro]
# TODO: Put here servers where this service will be deployed : vitam-elasticsearch-cerebro
# /!\ WARNING !!! NOT recommended for PRODUCTION


[hosts_consul_server]
# TODO: Put here servers where this service will be deployed : consul
# Recommended practice in production: use 3 instances


[hosts_kibana_data]
# TODO: Put here servers where this service will be deployed : kibana (for data cluster)
# WARNING : DEPRECATED / Will soon be removed.
# /!\ WARNING !!! NOT FOR PRODUCTION


[log_servers:children]
hosts_kibana_log
hosts_logstash

[hosts_kibana_log]
# TODO: Put here servers where this service will be deployed : kibana (for log cluster)


[hosts_logstash]
# TODO: Put here servers where this service will be deployed : logstash
# IF you connect VITAM to external SIEM, DO NOT FILL THE SECTION


[hosts_elasticsearch_log]
# TODO: Put here servers where this service will be deployed : elasticsearch-log cluster
# IF you connect VITAM to external SIEM, DO NOT FILL THE SECTION


########### Extra VITAM applications ###########
[prometheus:children]
hosts_prometheus
hosts_alertmanager

[hosts_prometheus]
# TODO: Put here server where this service will be deployed : prometheus server


[hosts_alertmanager]
# TODO: Put here servers where this service will be deployed : alertmanager


[hosts_grafana]
# TODO: Put here servers where this service will be deployed : grafana-server


########### Global vars ###########

[hosts:vars]

# ===============================
# VITAM
# ===============================

# Declare user for ansible on target machines
ansible_ssh_user=
# Can target user become as root ? ; true is required by VITAM (usage of a sudoer is mandatory)
ansible_become=true
# How can ansible switch to root ?
# See https://docs.ansible.com/ansible/latest/user_guide/become.html

# Related to Consul ; apply in a table your DNS server(s)
# Example : dns_servers=["8.8.8.8","8.8.4.4"]
# If no dns recursors are available, leave this value empty.
dns_servers=

# Define local Consul datacenter name
# CAUTION !!! Only alphanumeric characters when using s3 as offer backend !!!
vitam_site_name=prod-dc1

# On offer, value is the prefix for all container's names. If upgrading from R8, you MUST UNCOMMENT this parameter AS IS !!!
#vitam_prefix_offer=""

# check whether on primary site (true) or secondary (false)
primary_site=true

# ===============================
# EXTRA
# ===============================

### vitam-itest repository ###
vitam_tests_branch=master
vitam_tests_gitrepo_protocol=
vitam_tests_gitrepo_baseurl=
vitam_tests_gitrepo_url=

# Used when VITAM is behind a reverse proxy (provides configuration for reverse proxy && displayed in header page)
vitam_reverse_external_dns=
# For reverse proxy use
reverse_proxy_port=443
vitam_reverse_external_protocol=https
# http_proxy env var to use ; has to be declared even if empty
http_proxy_environnement=

For each host type, list the server(s) assigned to each function. Components may be collocated (see the relevant paragraph of the DAT).

Note

For the hosts_consul_server group, at least 3 machines must be declared.
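
As an illustration, a minimal sketch of this group could be the following (the hostnames are hypothetical and must be replaced by the machines of the target environment):

[hosts_consul_server]
vitam-consul-01.vitam
vitam-consul-02.vitam
vitam-consul-03.vitam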

Warning

It is not possible to collocate the MongoDB data and offer clusters.

Warning

It is not possible to collocate kibana-data and kibana-log.

Note

For components considered by the operator as being "outside VITAM" (typically, the ihm-demo component), it is possible to disable the creation of the associated Consul service. To do so, add the following directive after each hostname involved: consul_disabled=true.
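
For example, assuming a hypothetical hostname vitam-ihm-demo-01.vitam, the inventory entry would read:

[hosts_ihm_demo]
vitam-ihm-demo-01.vitam consul_disabled=true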

Caution

Regarding the value of vitam_site_name, only alphanumeric characters and the hyphen ("-") are allowed (regexp: [A-Za-z0-9-]).

Note

The "storage-offer-default" component can be instantiated multiple times when using an object storage provider (s3, swift). To do so, add offer_conf=<name> after each hostname.
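
As a sketch (hypothetical hostnames), two machines backing the same s3 offer would be declared as follows; the offer_conf value must match an offer name defined in offer_opts.yml:

[hosts_storage_offer_default]
vitam-s3-offer-01.vitam offer_conf=offer-s3-1 restic_enabled=true
vitam-s3-offer-02.vitam offer_conf=offer-s3-1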

4.2.3.2.2. The main.yml file

The main parameters are configured in the |repertoire_inventory|``group_vars/all/main/main.yml`` file, as follows:

---

# TENANTS
# List of active tenants
vitam_tenant_ids: [0,1,2,3,4,5,6,7,8,9]
# For functional-administration, manage master/slave tenant configuration
# http://www.programmevitam.fr/ressources/DocCourante/html/installation/installation/21-addons.html#passage-des-identifiants-des-referentiels-en-mode-esclave
vitam_tenants_usage_external:
  - name: 0
    identifiers:
      - INGEST_CONTRACT
      - ACCESS_CONTRACT
      - MANAGEMENT_CONTRACT
      - ARCHIVE_UNIT_PROFILE
  - name: 1
    identifiers:
      - INGEST_CONTRACT
      - ACCESS_CONTRACT
      - MANAGEMENT_CONTRACT
      - PROFILE
      - SECURITY_PROFILE
      - CONTEXT

# GRIFFINS
# Vitam griffins required to launch preservation scenario
# Example:
# vitam_griffins: ["vitam-imagemagick-griffin", "vitam-libreoffice-griffin", "vitam-jhove-griffin", "vitam-odfvalidator-griffin", "vitam-siegfried-griffin", "vitam-tesseract-griffin", "vitam-verapdf-griffin", "vitam-ffmpeg-griffin"]
vitam_griffins: []

# CONSUL
consul:
  network: "ip_admin" # Which network to use for consul communications ? ip_admin or ip_service ?
consul_remote_sites:
#  wan contains the wan addresses of the consul server instances of the external vitam sites
#  Example: if our local dc is dc1, we will need to set dc2 & dc3 wan conf:
#   - dc2:
#     wan: ["10.10.10.10","1.1.1.1"]
#   - dc3:
#     wan: ["10.10.10.11","1.1.1.1"]

# LOGGING
# vitam_defaults:
#   access_retention_days: 30 # Number of days for file retention
#   access_total_size_cap: "10GB" # total acceptable size
#   logback_max_file_size: "10MB"
#   logback_total_size_cap:
#     file:
#       history_days: 30
#       totalsize: "5GB"
#     security:
#       history_days: 30
#       totalsize: "5GB"

# ELASTICSEARCH
# 'number_of_shards': number of shards per index, every ES shard is stored as a lucene index
# 'number_of_replicas': number of additional copies of primary shards
# Total number of shards: number_of_shards * (1 primary + M number_of_replicas)
# CAUTION: The total number of shards should be lower than or equal to the number of elasticsearch-data instances in the cluster
# More details in group_vars/all/advanced/tenants_vars.yml file
vitam_elasticsearch_tenant_indexation:
  default_config:
    # Default settings for masterdata collections (1 index per collection)
    masterdata:
      number_of_shards: 1
      number_of_replicas: 2
    # Default settings for unit indexes (1 index per tenant)
    unit:
      number_of_shards: 1
      number_of_replicas: 2
    # Default settings for object group indexes (1 index per tenant)
    objectgroup:
      number_of_shards: 1
      number_of_replicas: 2
    # Default settings for logbook operation indexes (1 index per tenant)
    logbookoperation:
      number_of_shards: 1
      number_of_replicas: 2
    # Default settings for collect_unit indexes
    collect_unit:
      number_of_shards: 1
      number_of_replicas: 2
    # Default settings for collect_objectgroup indexes
    collect_objectgroup:
      number_of_shards: 1
      number_of_replicas: 2

  collect_grouped_tenants:
  - name: 'all'
    # Group all tenants for collect's indexes (collect_unit & collect_objectgroup)
    tenants: "{{ vitam_tenant_ids | join(',') }}"

elasticsearch:
  log:
    index_templates:
      default:
        shards: 1
        replica: 1
  data:
    index_templates:
      default:
        shards: 1
        replica: 2
curator:
  log:
    metrics:
      close: 7
      delete: 30
    logstash:
      close: 7
      delete: 30

# PACKAGES
disable_internet_repositories_install: true # Disable EPEL or Debian backports repositories install

Particular attention must be paid to the configuration of the number of shards and replicas in the vitam_elasticsearch_tenant_indexation.default_config parameter.
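
As a worked example: with the default settings (number_of_shards: 1, number_of_replicas: 2), each index uses 1 * (1 + 2) = 3 shards in total, which requires at least 3 elasticsearch-data instances. On a hypothetical 2-node elasticsearch-data cluster, the replica count would have to be lowered, for instance for the unit indexes:

vitam_elasticsearch_tenant_indexation:
  default_config:
    unit:
      number_of_shards: 1
      number_of_replicas: 1 # 1 * (1 + 1) = 2 total shards <= 2 elasticsearch-data instances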

See also

Refer to the chapter "Gestion des indexes Elasticsearch dans un contexte massivement multi-tenants" of the DEX for more information on this feature.

Warning

If the distribution of tenants is modified, a reindexing procedure of the elasticsearch-data database is required. This procedure is the responsibility of the operations team and requires a service outage on the platform. The execution time of this reindexing depends on the amount of data to be processed.

See also

Refer to the "Réindexation" chapter of the DEX for more information.

4.2.3.2.3. The vitam_security.yml file

Access rights to VITAM are configured in the |repertoire_inventory|``group_vars/all/advanced/vitam_security.yml`` file, as follows:

---

hide_passwords_during_deploy: true

### Admin context name and tenants ###
admin_context_name: "admin-context"
admin_context_tenants: "{{ vitam_tenant_ids }}"

# Indicate context certificates relative paths under {{ inventory_dir }}/certs/client-external/clients
# vitam-admin-int is mandatory for internal use (PRONOM upload)
admin_context_certs:
  - "{{ 'collect-external/collect-external.crt' if groups['hosts_collect_external'] | default([]) | length > 0 else '' }}"
  - "{{ 'ihm-demo/ihm-demo.crt' if groups['hosts_ihm_demo'] | default([]) | length > 0 else '' }}"
  - "{{ 'ihm-recette/ihm-recette.crt' if groups['hosts_ihm_recette'] | default([]) | length > 0 else '' }}"
  - "vitam-admin-int/vitam-admin-int.crt"

# Indicate here all the personal certificates relative paths under {{ inventory_dir }}/certs/client-vitam-users/clients
admin_personal_certs: [ ]

# Admin security profile name
admin_security_profile: "admin-security-profile"

admin_basic_auth_user: "adminUser"

# SElinux state, can be: enforcing, permissive, disabled
selinux_state: "disabled"
# SELinux Policy, can be: targeted, minimum, mls
selinux_policy: "targeted"
# If needed, reboot the VM to enable SELinux
selinux_reboot: True
# Relabel the entire filesystem ?
selinux_relabel: False

Note

For the admin_context_certs directive, regarding the integration of SIA certificates at deployment time, refer to the section "Intégration d'une application externe (cliente)".

Note

For the admin_personal_certs directive, regarding the integration of personal (personae) certificates at deployment time, refer to the section "Intégration d'un certificat personnel (personae)".

4.2.3.2.4. The offers_opts.yml file

The configuration of the associated storage offers is declared in the |repertoire_inventory|``group_vars/all/main/offers_opts.yml`` file:

# This is the default vitam strategy ('default'). It is mandatory and must define a referent offer.
# This list of offers will be ordered by the property rank. It has to be completed if more offers are necessary
# The property rank indicates the rank of the offer in the strategy. The ranking is done in ASC order and should be different for all declared offers
vitam_strategy:
  - name: offer-fs-1
    referent: true
    rank: 0

# Optional params for each offers in vitam_strategy. If not set, the default values are applied.
#    referent: false              # true / false (default), only one per site must be referent
#    status: ACTIVE               # ACTIVE (default) / INACTIVE
#    vitam_site_name: distant-dc2 # default is the value of vitam_site_name defined in your local inventory file, should be specified with the vitam_site_name defined for the distant offer
#    distant: false               # true / false (default). If set to true, it will not check if the provider for this offer is correctly set
#    id: idoffre                  # OPTIONAL, but IF ACTIVATED, MUST BE UNIQUE & SAME if on another site
#    asyncRead: false             # true / false (default). Should be set to true for tape offer only
#    rank: 0                      # Integer that indicates in ascending order the priority of the offer in the strategy

# Example for tape offer:
# Tape offer mustn't be referent (referent: false) and should be configured as asynchronous read (asyncRead: true)
#  - name: offer-tape-1
#    referent: false
#    asyncRead: true
#    rank: 0

# Example distant offer:
#  - name: distant
#    referent: false
#    vitam_site_name: distant-dc2
#    distant: true # Only add this parameter when distant offer (not on same platform)
#    rank: 1

# WARNING : multi-strategy is a BETA functionality
# More strategies can be added but are optional
# Strategy name must only use [a-z][a-z0-9-]* pattern
# Any strategy must contain at least one offer
# This list of offers is ordered. It can and has to be completed if more offers are necessary
# Every strategy can define at most one referent offer.
# other_strategies:
#  metadata:
#    - name: offer-fs-1
#      referent: true
#      rank: 0
#    - name: offer-fs-2
#      referent: false
#      rank: 1
#  binary:
#    - name: offer-fs-2
#      referent: false
#      rank: 0
#    - name: offer-s3-1
#      referent: false
#      rank: 1

# DON'T forget to add associated passwords in vault-vitam.yml with same tree when using provider openstack-swift*
# ATTENTION !!! Each offer has to have a distinct name, except for clusters binding a same physical storage
# WARNING : for offer names, please only use [a-z][a-z0-9-]* pattern
vitam_offers:
  offer-fs-1:
    # param can be filesystem-hash (recommended) or filesystem (not recommended)
    provider: filesystem-hash
    ### Optional parameters
    # Offer log compaction
    offer_log_compaction:
      ## Expiration, here offer logs 21 days old will be compacted
      expiration_value: 21
      ## Choose one of "MILLENNIA", "HALF_DAYS", "MILLIS", "FOREVER", "MICROS", "CENTURIES", "DECADES", "YEARS", "DAYS", "SECONDS", "HOURS", "MONTHS", "WEEKS", "NANOS", "MINUTES", "ERAS"
      expiration_unit: "DAYS"
      ## Compaction bulk size here 10 000 offers logs (at most) will be compacted (Expected value between 1 000 and 200 000)
      compaction_size: 10000
    # Batch processing thread pool size
    maxBatchThreadPoolSize: 32
    # Batch metadata computation timeout in seconds
    batchMetadataComputationTimeout: 600
################################################################################
  offer-swift-1:
    # provider : openstack-swift for v1 or openstack-swift-v3 for v3
    provider: openstack-swift-v3
    # swiftKeystoneAuthUrl: keystone connection URL
    swiftKeystoneAuthUrl: https://openstack-hostname:port/auth/1.0
    # swiftDomain: OpenStack domain in which the user is registered
    swiftDomain: domaine
    # swiftUser: has to be set in vault-vitam.yml (encrypted) with same structure => DO NOT COMMENT OUT
    # swiftPassword: has to be set in vault-vitam.yml (encrypted) with same structure => DO NOT COMMENT OUT
    # swiftProjectName: OpenStack project name
    swiftProjectName: monTenant
    ### Optional parameters
    # swiftUrl: optional variable to force the swift URL
    # swiftUrl: https://swift-hostname:port/swift/v1
    #SSL TrustStore
    swiftTrustStore: /chemin_vers_mon_fichier/monSwiftTrustStore.jks
    #Max connection (concurrent connections), per route, to keep in pool (if a pooling ConnectionManager is used) (optional, 200 by default)
    swiftMaxConnectionsPerRoute: 200
    #Max total connection (concurrent connections) to keep in pool (if a pooling ConnectionManager is used) (optional, 1000 by default)
    swiftMaxConnections: 1000
    #Max time (in milliseconds) for waiting to establish connection (optional, 200000 by default)
    swiftConnectionTimeout: 200000
    #Max time (in milliseconds) waiting for a data from the server (socket) (optional, 60000 by default)
    swiftReadTimeout: 60000
    #Default number of retries on errors
    swiftNbRetries: 3
    #Time (in seconds) to renew a token before expiration occurs (blocking) (optional, 60 by default)
    swiftHardRenewTokenDelayBeforeExpireTime: 60
    #Time (in seconds) to renew a token before expiration occurs (optional, 300 by default)
    swiftSoftRenewTokenDelayBeforeExpireTime: 300
    # Offer log compaction
    offer_log_compaction:
      ## Expiration, here offer logs 21 days old will be compacted
      expiration_value: 21
      ## Choose one of "MILLENNIA", "HALF_DAYS", "MILLIS", "FOREVER", "MICROS", "CENTURIES", "DECADES", "YEARS", "DAYS", "SECONDS", "HOURS", "MONTHS", "WEEKS", "NANOS", "MINUTES", "ERAS"
      expiration_unit: "DAYS"
      ## Compaction bulk size here 10 000 offers logs (at most) will be compacted (Expected value between 1 000 and 200 000)
      compaction_size: 10000
    # Batch processing thread pool size
    maxBatchThreadPoolSize: 32
    # Batch metadata computation timeout in seconds
    batchMetadataComputationTimeout: 600
    # Enable / Disable use of vitam custom headers for offer requests
    enableCustomHeaders: false
    # List of vitam custom headers used by offer requests
    #customHeaders:
    #  - key: 'Cookie'
    #    value: 'Origin=vitam'
################################################################################
  offer-s3-1:
    # provider : can only be amazon-s3-v1 for Amazon SDK S3 V1
    provider: 'amazon-s3-v1'
    # s3Endpoint : URL of connection to S3
    s3Endpoint: http://172.17.0.2:6007
    ### Optional parameters
    # s3RegionName (optional): Region name (default value us-east-1)
    s3RegionName: us-west-1
    # s3SignerType (optional): Signing algorithm.
    #     - signature V4 : 'AWSS3V4SignerType' (default value)
    #     - signature V2 : 'S3SignerType'
    s3SignerType: AWSS3V4SignerType
    # s3PathStyleAccessEnabled (optional): 'true' to access bucket in "path-style", else "virtual-hosted-style" (true by default)
    s3PathStyleAccessEnabled: true
    # s3MaxConnections (optional): Max total connection (concurrent connections) (50 by default)
    s3MaxConnections: 1000
    # s3ConnectionTimeout (optional): Max time (in milliseconds) for waiting to establish connection (10000 by default)
    s3ConnectionTimeout: 200000
    # s3SocketTimeout (optional): Max time (in milliseconds) for reading from a connected socket (50000 by default)
    s3SocketTimeout: 50000
    # s3RequestTimeout (optional): Max time (in milliseconds) for a request (0 by default, disabled)
    s3RequestTimeout: 0
    # s3ClientExecutionTimeout (optional): Max time (in milliseconds) for a request by java client (0 by default, disabled)
    s3ClientExecutionTimeout: 0
    # Offer log compaction
    offer_log_compaction:
      ## Expiration, here offer logs 21 days old will be compacted
      expiration_value: 21
      ## Choose one of "MILLENNIA", "HALF_DAYS", "MILLIS", "FOREVER", "MICROS", "CENTURIES", "DECADES", "YEARS", "DAYS", "SECONDS", "HOURS", "MONTHS", "WEEKS", "NANOS", "MINUTES", "ERAS"
      expiration_unit: "DAYS"
      ## Compaction bulk size here 10 000 offers logs (at most) will be compacted (Expected value between 1 000 and 200 000)
      compaction_size: 10000
    # Batch processing thread pool size
    maxBatchThreadPoolSize: 32
    # Batch metadata computation timeout in seconds
    batchMetadataComputationTimeout: 600
################################################################################
  offer-tape-1:
    provider: tape-library
    # tapeLibraryConfiguration:
    #   ...
    # topology:
    #   ...
    # tapeLibraries:
    #   ...
    # Offer log compaction
    offer_log_compaction:
      ## Expiration, here offer logs 21 days old will be compacted
      expiration_value: 21
      ## Choose one of "MILLENNIA", "HALF_DAYS", "MILLIS", "FOREVER", "MICROS", "CENTURIES", "DECADES", "YEARS", "DAYS", "SECONDS", "HOURS", "MONTHS", "WEEKS", "NANOS", "MINUTES", "ERAS"
      expiration_unit: "DAYS"
      ## Compaction bulk size here 10 000 offers logs (at most) will be compacted (Expected value between 1 000 and 200 000)
      compaction_size: 10000
    # Batch processing thread pool size
    maxBatchThreadPoolSize: 32
    # Batch metadata computation timeout in seconds
    batchMetadataComputationTimeout: 600
################################################################################
  # WARNING: Swift V1 is deprecated
  # example_swift_v1:
  #    provider: openstack-swift
  #    swiftKeystoneAuthUrl: https://keystone/auth/1.0
  #    swiftDomain: domain
  #    swiftUser: has to be set in vault-vitam.yml (encrypted) with same structure => DO NOT COMMENT OUT
  #    swiftPassword: has to be set in vault-vitam.yml (encrypted) with same structure => DO NOT COMMENT OUT
  # THIS PART IS ONLY FOR CLEANING (and mandatory for this use case)
  #    swiftProjectId: related to OS_PROJECT_ID
  #    swiftRegionName: related to OS_REGION_NAME
  #    swiftInterface: related to OS_INTERFACE
  # example_swift_v3:
  #    provider: openstack-swift-v3
  #    swiftKeystoneAuthUrl: https://keystone/v3
  #    swiftDomain: domaine
  #    swiftUser: has to be set in vault-vitam.yml (encrypted) with same structure => DO NOT COMMENT OUT
  #    swiftPassword: has to be set in vault-vitam.yml (encrypted) with same structure => DO NOT COMMENT OUT
  #    swiftProjectName: monTenant
  #    projectName: monTenant
  # THIS PART IS ONLY FOR CLEANING (and mandatory for this use case)
  #    swiftProjectId: related to OS_PROJECT_ID
  #    swiftRegionName: related to OS_REGION_NAME
  #    swiftInterface: related to OS_INTERFACE

  #    swiftTrustStore: /chemin_vers_mon_fichier/monSwiftTrustStore.jks
  #    swiftMaxConnectionsPerRoute: 200
  #    swiftMaxConnections: 1000
  #    swiftConnectionTimeout: 200000
  #    swiftReadTimeout: 60000
  #    Time (in seconds) to renew a token before expiration occurs
  #    swiftHardRenewTokenDelayBeforeExpireTime: 60
  #    swiftSoftRenewTokenDelayBeforeExpireTime: 300
  #    enableCustomHeaders: false
  #    customHeaders:
  #      - key: 'Cookie'
  #        value: 'Origin=vitam'

Refer to the comments in the file to fill it in correctly.

Note

In a multi-site deployment, within the vitam_strategy section, the vitam_site_name directive defines, for the associated offer, the name of the Consul datacenter. By default, if it is not defined, the value of the vitam_site_name variable defined in the inventory is used.

Warning

Consistency between the inventory and the vitam_strategy section (and other_strategies if multiple strategies are used) is critical for the proper deployment and operation of the VITAM software solution. In particular, the list of offers in vitam_strategy must match exactly the offer names declared in the inventory (or in the inventories of each datacenter, in a multi-site setup).
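
For instance (hypothetical hostname), an inventory entry such as:

[hosts_storage_offer_default]
vitam-fs-offer-01.vitam offer_conf=offer-fs-1

must go together with a strategy referencing exactly the same offer name in offers_opts.yml:

vitam_strategy:
  - name: offer-fs-1
    referent: true
    rank: 0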

Warning

When connecting to a keystone over https, do not forget to add the public key of the keystone's CA to the PKI.

4.2.3.2.5. The cots_vars.yml file

The configuration is done in the |repertoire_inventory|``group_vars/all/advanced/cots_vars.yml`` file:

---

consul:
    retry_interval: 10 # in seconds
    check_interval: 10 # in seconds
    check_timeout: 5 # in seconds
    log_level: WARN # Available log_level are: TRACE, DEBUG, INFO, WARN or ERR

# Please uncomment and fill values if you want to connect VITAM to external SIEM
# external_siem:
#     host:
#     port:

elasticsearch:
    log:
        host: "elasticsearch-log.service.{{ consul_domain }}"
        port_http: "9201"
        groupe: "log"
        baseuri: "elasticsearch-log"
        cluster_name: "elasticsearch-log"
        consul_check_http: 10 # in seconds
        consul_check_tcp: 10 # in seconds
        action_log_level: error
        https_enabled: false
        indices_fielddata_cache_size: '30%' # related to https://www.elastic.co/guide/en/elasticsearch/reference/7.6/modules-fielddata.html
        indices_breaker_fielddata_limit: '40%' # related to https://www.elastic.co/guide/en/elasticsearch/reference/7.6/circuit-breaker.html#fielddata-circuit-breaker
        dynamic_timeout: 30s
        # default index template
        index_templates:
            packetbeat:
                shards: 5
        log_appenders:
            root:
                log_level: "info"
            rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "5GB"
                max_files: "50"
            deprecation_rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "1GB"
                max_files: "10"
                log_level: "warn"
            index_search_slowlog_rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "1GB"
                max_files: "10"
                log_level: "warn"
            index_indexing_slowlog_rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "1GB"
                max_files: "10"
                log_level: "warn"
        # By default, is commented. Should be uncommented if ansible computes badly vCPUs number ;  values are associated vCPUs numbers ; please adapt to your configuration
        # thread_pool:
        #     index:
        #         size: 2
        #     get:
        #         size: 2
        #     search:
        #         size: 2
        #     write:
        #         size: 2
        #     warmer:
        #         max: 2
    data:
        host: "elasticsearch-data.service.{{ consul_domain }}"
        # default is 0.1 (10%) and should be quite enough in most cases
        #index_buffer_size_ratio: "0.15"
        port_http: "9200"
        groupe: "data"
        baseuri: "elasticsearch-data"
        cluster_name: "elasticsearch-data"
        consul_check_http: 10 # in seconds
        consul_check_tcp: 10 # in seconds
        action_log_level: debug
        https_enabled: false
        indices_fielddata_cache_size: '30%' # related to https://www.elastic.co/guide/en/elasticsearch/reference/6.5/modules-fielddata.html
        indices_breaker_fielddata_limit: '40%' # related to https://www.elastic.co/guide/en/elasticsearch/reference/6.5/circuit-breaker.html#fielddata-circuit-breaker
        dynamic_timeout: 30s
        # default index template
        index_templates:
        log_appenders:
            root:
                log_level: "info"
            rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "5GB"
                max_files: "50"
            deprecation_rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "5GB"
                max_files: "50"
                log_level: "warn"
            index_search_slowlog_rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "5GB"
                max_files: "50"
                log_level: "warn"
            index_indexing_slowlog_rolling:
                max_log_file_size: "100MB"
                max_total_log_size: "5GB"
                max_files: "50"
                log_level: "warn"
        # By default, is commented. Should be uncommented if ansible computes badly vCPUs number ;  values are associated vCPUs numbers ; please adapt to your configuration
        # thread_pool:
        #     index:
        #         size: 2
        #     get:
        #         size: 2
        #     search:
        #         size: 2
        #     write:
        #         size: 2
        #     warmer:
        #         max: 2

mongodb:
    mongos_port: 27017
    mongoc_port: 27018
    mongod_port: 27019
    mongo_authentication: "true"
    host: "mongos.service.{{ consul_domain }}"
    check_consul: 10 # in seconds
    drop_info_log: false # Drop mongo (I)nformational log, for Verbosity Level of 0
    # logs configuration
    logrotate: enabled # or disabled
    history_days: 30 # How many days to store logs if logrotate is set to 'enabled'

logstash:
    host: "logstash.service.{{ consul_domain }}"
    user: logstash
    port: 10514
    rest_port: 20514
    check_consul: 10 # in seconds
    # logstash xms & xmx in Megabytes
    # jvm_xms: 2048
    # jvm_xmx: 2048
    # workers_number: 4
    log_appenders:
        rolling:
            max_log_file_size: "100MB"
            max_total_log_size: "5GB"
        json_rolling:
            max_log_file_size: "100MB"
            max_total_log_size: "5GB"

# Prometheus params
prometheus:
    metrics_path: /admin/v1/metrics
    check_consul: 10 # in seconds
    prometheus_config_file_target_directory: # Set path where "prometheus.yml" file will be generated. Example: /tmp/
    server:
        port: 9090
        tsdb_retention_time: "7d"
        tsdb_retention_size: "5GB"
    node_exporter:
        enabled: true
        port: 9101
        metrics_path: /metrics
        log_level: "warn"
        logrotate: enabled # or disabled
        history_days: 30 # How many days to store logs if logrotate is set to 'enabled'
    consul_exporter:
        enabled: true
        port: 9107
        metrics_path: /metrics
    elasticsearch_exporter:
        enabled: true
        port: 9114
        metrics_path: /metrics
        log_level: "warn"
        logrotate: enabled # or disabled
        history_days: 30 # How many days to store logs if logrotate is set to 'enabled'
    alertmanager:
        api_port: 9093
        cluster_port: 9094
        #receivers: # https://grafana.com/blog/2020/02/25/step-by-step-guide-to-setting-up-prometheus-alertmanager-with-slack-pagerduty-and-gmail/
        #- name: "slack_alert"
        #  slack_configs:
        #  - api_url: "https://hooks.slack.com/services/xxxxxxx/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        #    channel: '#your_alert_channel'
        #    send_resolved: true

grafana:
    check_consul: 10 # in seconds
    http_port: 3000
    proxy: false
    grafana_datasources:
      - name: "Prometheus"
        type: "prometheus"
        access: "proxy"
        url: "http://prometheus-server.service.{{ consul_domain }}:{{ prometheus.server.port | default(9090) }}/prometheus"
        basicAuth: false
        editable: true
      - name: "Prometheus AlertManager"
        type: "camptocamp-prometheus-alertmanager-datasource"
        access: "proxy"
        url: "http://prometheus-alertmanager.service.{{ consul_domain }}:{{ prometheus.alertmanager.api_port | default(9093) }}"
        basicAuth: false
        editable: true
        jsonData:
          keepCookies: []
          severity_critical: "4"
          severity_high: "3"
          severity_warning: "2"
          severity_info: "1"
    grafana_dashboards:
      - name: 'vitam-dashboard'
        orgId: 1
        folder: ''
        folderUid: ''
        type: file
        disableDeletion: false
        updateIntervalSeconds: 10
        allowUiUpdates: true
        options:
          path: "/etc/grafana/provisioning/dashboards"

# Curator units: days
curator:
    log:
        metricbeat:
            close: 5
            delete: 10
        packetbeat:
            close: 5
            delete: 10

kibana:
    header_value: "reporting"
    import_delay: 10
    import_retries: 10
    # logs configuration
    logrotate: enabled # or disabled
    history_days: 30 # How many days to store logs if logrotate is set to 'enabled'
    log:
        baseuri: "kibana_log"
        api_call_timeout: 120
        groupe: "log"
        port: 5601
        default_index_pattern: "logstash-vitam*"
        check_consul: 10 # in seconds
        # default shards & replica
        shards: 1
        replica: 1
        # for the logstash-* index
        metrics:
            shards: 1
            replica: 1
        # for the metricbeat-* index
        metricbeat:
            shards: 3 # must be a factor of 30
            replica: 1
    data:
        baseuri: "kibana_data"
        # Note: api_call_timeout is also used for retries ; a separate variable should be created rather than reusing this one
        api_call_timeout: 120
        groupe: "data"
        port: 5601
        default_index_pattern: "logbookoperation_*"
        check_consul: 10 # in seconds
        # index template for .kibana
        shards: 1
        replica: 1

syslog:
    # value can be syslog-ng or rsyslog
    name: "rsyslog"

cerebro:
    baseuri: "cerebro"
    port: 9000
    check_consul: 10 # in seconds
    # logs configuration
    logrotate: enabled # or disabled
    history_days: 30 # How many days to store logs if logrotate is set to 'enabled'

siegfried:
    port: 19000
    consul_check: 10 # in seconds

clamav:
    port: 3310
    # logs configuration
    logrotate: enabled # or disabled
    history_days: 30 # How many days to store logs if logrotate is set to 'enabled'
    freshclam:
        # frequency freshclam for database update per day (from 0 to 24 - 24 meaning hourly check)
        db_update_periodicity: 1
        private_mirror_address:
        use_proxy: "no"

## Avast Business Antivirus for Linux
## if undefined, the following default values are applied.
# avast:
#     # logs configuration
#     logrotate: enabled # or disabled
#     history_days: 30 # How many days to store logs if logrotate is set to 'enabled'
#     manage_repository: true
#     repository:
#         state: present
#         # For CentOS
#         baseurl: http://rpm.avast.com/lin/repo/dists/rhel/release
#         gpgcheck: no
#         proxy: _none_
#         # For Debian
#         baseurl: 'deb http://deb.avast.com/lin/repo debian-buster release'
#     vps_repository: http://linux-av.u.avcdn.net/linux-av/avast/x86_64
#     ## List of sha256 hash of excluded files from antivirus. Useful for test environments.
#     whitelist:
#         - xxxxxx
#         - yyyyyy

mongo_express:
    baseuri: "mongo-express"

ldap_authentification:
    ldap_protocol: "ldap"
    ldap_server: "{% if groups['ldap']|length > 0 %}{{ groups['ldap']|first }}{% endif %}"
    ldap_port: "389"
    ldap_base: "dc=programmevitam,dc=fr"
    ldap_login: "cn=Manager,dc=programmevitam,dc=fr"
    uid_field: "uid"
    ldap_userDn_Template: "uid={0},ou=people,dc=programmevitam,dc=fr"
    ldap_group_request: "(&(objectClass=groupOfNames)(member={0}))"
    ldap_admin_group: "cn=admin,ou=groups,dc=programmevitam, dc=fr"
    ldap_user_group: "cn=user,ou=groups,dc=programmevitam, dc=fr"
    ldap_guest_group: "cn=guest,ou=groups,dc=programmevitam, dc=fr"

# Backup tool on storage-offer
restic:
    snapshot_retention: 30 # number of snapshots to keep
    # default run backup at 23:00 everydays
    cron:
        minute: '00'
        hour: '23'
        day: '*'
        month: '*'
        weekday: '*'
    # [hosts_storage_offer_default] must be able to connect to the listed databases below to properly backup.
    backup:
        # mongo-offer
        - name: "{{ offer_conf }}"
          type: mongodb
          host: "{{ offer_conf }}-mongos.service.consul:{{ mongodb.mongos_port }}"
          user: "{{ mongodb[offer_conf].admin.user }}"
          password: "{{ mongodb[offer_conf].admin.password }}"
        # # mongo-data (only if mono-sharded cluster)
        # - name: mongo-data
        #   type: mongodb
        #   host: "mongo-data-mongos.service.consul:{{ mongodb.mongos_port }}"
        #   user: "{{ mongodb['mongo-data'].admin.user }}"
        #   password: "{{ mongodb['mongo-data'].admin.password }}"
        # # mongo-vitamui (only if vitamui is deployed)
        # - name: mongo-vitamui
        #   type: mongodb
        #   host: mongo-vitamui-mongod.service.consul:{{ mongodb.mongod_port }}
        #   # Add the following params on environments/group_vars/all/main/vault-vitam.yml
        #   # They can be found under vitamui's deployment sources on environments/group_vars/all/vault-mongodb.yml
        #   user: "{{ mongodb['mongo-vitamui'].admin.user }}"
        #   password: "{{ mongodb['mongo-vitamui'].admin.password }}"

Regarding the choice of the COTS component used to send syslog messages to logstash, it is possible to choose between syslog-ng and rsyslog. To do so, change the value of the syslog.name directive; the default value is rsyslog.
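
For example, to switch to syslog-ng, set the following in cots_vars.yml:

syslog:
    # value can be syslog-ng or rsyslog
    name: "syslog-ng"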

Note

If you uncomment and fill in the values of the external_siem block, the messages will be sent (by rsyslog or syslog-ng, depending on your deployment choice) to a SIEM external to the VITAM software solution, using the values specified in the block; in that case, it is not necessary to declare any hosts for the ansible groups [hosts_logstash] and [hosts_elasticsearch_log].
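
A minimal sketch, assuming a hypothetical SIEM reachable at siem.example.org on port 514, would be to uncomment and fill in the external_siem block of cots_vars.yml as follows:

external_siem:
    host: siem.example.org
    port: 514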

4.2.3.2.6. The tenants_vars.yml file

The file |repertoire_inventory|``group_vars/all/advanced/tenants_vars.yml`` manages the tenant-specific configuration of the platform (list of tenants, tenant grouping, number of shards and replicas, etc.).

### tenants ###
# List of dead / removed tenants that should never be reused / present in vitam_tenant_ids
vitam_removed_tenants: []
# Administration tenant
vitam_tenant_admin: 1

###
# Elasticsearch tenant indexation
# ===============================
#
# Elastic search index configuration settings :
# - 'number_of_shards' : number of shards per index. Every ES shard is stored as a lucene index.
# - 'number_of_replicas': number of additional copies of primary shards
# The total number of shards per index is: number_of_shards * (1 primary + number_of_replicas replicas)
#
# CAUTION : The total number of shards should be lower than or equal to the number of elasticsearch-data instances in the cluster
#
# Default settings should be okay for most use cases.
# For more data-intensive workloads or deployments with high number of tenants, custom tenant and/or collection configuration might be specified.
#
# Tenant list may be specified as :
# - A specific tenant                                                 : eg. '1'
# - A tenant range                                                    : eg. '10-19'
# - A comma-separated combination of specific tenants & tenant ranges : eg. '1, 5, 10-19, 50-59'
#
# Masterdata collections (accesscontract, filerules...) are indexed as single elasticsearch indexes :
# - Index name format : {collection}_{date_time_of_creation}. e.g. accesscontract_20200415_042011
# - Index alias name : {collection}. e.g. accesscontract
#
# Metadata collections (unit & objectgroup), and logbook operation collections are stored on a per-tenant index basis :
# - Index name       : {collection}_{tenant}_{date_time_of_creation}. e.g. unit_1_20200517_025041
# - Index alias name : {collection}_{tenant}. e.g. unit_1
#
# Very small tenants (1-100K entries) may be grouped in a "tenant group", and hence, stored in a single elasticsearch index.
# This allows reducing the number of indexes & shards that the elasticsearch cluster needs to manage :
# - Index name       : {collection}_{tenant_group_name}_{date_time_of_creation}. e.g. logbookoperation_grp5_20200517_025041
# - Index alias name : {collection}_{tenant_group_name}. e.g. logbookoperation_grp5
#
# Tenant lists can be wide ranges (eg: 100-199), and may contain non-existing (yet) tenants. i.e. tenant lists might be wider than the 'vitam_tenant_ids' section
# This allows specifying predefined tenant families (whether normal tenants ranges, or tenant groups) to which tenants can be added in the future.
# However, tenant lists may not intersect (i.e. a single tenant cannot belong to 2 configuration sections).
#
# Sizing recommendations :
#  - 1 shard per 5-10M records for small documents (eg. masterdata collections)
#  - 1 shard per 1-2M records for larger documents (eg. metadata & logbook collections)
#  - As a general rule, shard size should not exceed 30GB per shard
#  - A single ES node should not handle > 200 shards (be it a primary or a replica)
#  - It is recommended to start small and add more shards when needed (re-sharding requires a re-indexation operation)
#
# /!\ IMPORTANT :
# Changing the configuration of an existing tenant requires re-indexation of the tenants and/or tenant groups
#
# Please refer to documentation for more details.
#
###
vitam_elasticsearch_tenant_indexation:

  ###
  # Default masterdata collection indexation settings (default_config section) apply for all master data collections
  # Custom settings can be defined for the following masterdata collections:
  #   - accesscontract
  #   - accessionregisterdetail
  #   - accessionregistersummary
  #   - accessionregistersymbolic
  #   - agencies
  #   - archiveunitprofile
  #   - context
  #   - fileformat
  #   - filerules
  #   - griffin
  #   - ingestcontract
  #   - managementcontract
  #   - ontology
  #   - preservationscenario
  #   - profile
  #   - securityprofile
  ###
  masterdata:
  #  {collection}:
  #    number_of_shards: 1
  #    number_of_replicas: 2
  #  ...


  ###
  # Custom index settings for regular tenants.
  ###
  dedicated_tenants:
  #  - tenants: '1, 3, 11-20'
  #    unit:
  #      number_of_shards: 4
  #      number_of_replicas: 0
  #    objectgroup:
  #      number_of_shards: 5
  #      number_of_replicas: 0
  #    logbookoperation:
  #      number_of_shards: 3
  #      number_of_replicas: 0
  #  ...




  ###
  # Custom index settings for grouped tenants.
  # Group name must meet the following criteria:
  #  - alphanumeric characters
  #  - lowercase only
  #  - not start with a number
  #  - be less than 64 characters long.
  #  - NO special characters - / _ | ...
  ###
  grouped_tenants:
  #  - name: 'grp1'
  #    tenants: '5-10'
  #    unit:
  #      number_of_shards: 5
  #      number_of_replicas: 0
  #    objectgroup:
  #      number_of_shards: 6
  #      number_of_replicas: 0
  #    logbookoperation:
  #      number_of_shards: 7
  #      number_of_replicas: 0
  #  ...

extendedConfiguration:
  default:
    eliminationReportExtraFields: [ ]
    objectGroupBlackListedFields: ['Filename']
  custom:
  # The `eliminationReportExtraFields` configuration option specifies the metadata keys that should be included in the report when performing an elimination.
  #   It determines which additional metadata fields should be retained and displayed in the elimination report.
  #   You can include any of the following extra fields: "#id", "#version", "#unitups", "#originating_agency", "#approximate_creation_date", "#approximate_update_date", "FilePlanPosition", "SystemId", "OriginatingSystemId", "ArchivalAgencyArchiveUnitIdentifier", "OriginatingAgencyArchiveUnitIdentifier", "TransferringAgencyArchiveUnitIdentifier"
  #
  # The `objectGroupBlackListedFields` configuration option specifies the fields that should not be reported by access-external.
  #
  # Example for tenant 0 :
  #   0:
  #     eliminationReportExtraFields: ["#id", "FilePlanPosition", "SystemId"]
  #     objectGroupBlackListedFields: ['Filename']

Refer to the comments in the file to fill it in correctly.
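
For instance, a purely illustrative configuration dedicating a larger index to tenant 1 and grouping a range of small tenants could look like the following (shard and replica counts must be sized for your own cluster):

    vitam_elasticsearch_tenant_indexation:
      dedicated_tenants:
        - tenants: '1'
          unit:
            number_of_shards: 4
            number_of_replicas: 1
      grouped_tenants:
        - name: 'grptest'
          tenants: '100-199'
          unit:
            number_of_shards: 1
            number_of_replicas: 1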

See also

Refer to the chapter « Gestion des indexes Elasticsearch dans un contexte massivement multi-tenants » of the DEX for more information about this feature.

Warning

If the tenant distribution is modified, a reindexation of the elasticsearch-data cluster is required. This procedure is the responsibility of the operations team and requires a service shutdown on the platform. The duration of this reindexation depends on the amount of data to process.

See also

Refer to the « Réindexation » chapter of the DEX for more information.

4.2.3.3. Declaring the secrets

Warning

All the passwords given below are default values and must be changed!

4.2.3.3.1. vitam

Warning

This section describes files containing sensitive data. It is important to implement a strong password policy compliant with the ANSSI recommendations, for example: do not reuse the same password for every service, renew passwords regularly, and use upper-case letters, lower-case letters, digits and special characters (see the ANSSI documentation: https://www.ssi.gouv.fr/guide/mot-de-passe). If a password file is used (vault-password-file), this password must be stored as the content of that file; do not forget to secure or delete the file once the installation is complete.
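
As a purely illustrative example (these commands are a suggestion, not part of the installer), strong random secrets can be generated and the vault password file locked down as follows:

    # Generate a strong random secret (one possibility among many)
    openssl rand -base64 24

    # If a vault-password-file is used, restrict its permissions during the installation...
    chmod 600 vault_pass.txt
    # ...and remove it securely once the installation is finished
    shred -u vault_pass.txt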

The secrets used by the software solution (apart from the certificates, which are covered in a later section) are defined in files encrypted with ansible-vault.

Important

All the vaults present in the inventory tree must be protected with the same password!

The first step is to change the passwords of all the vaults present in the deployment tree (the default password is stored in the vault_pass.txt file), using the command ansible-vault rekey <fichier vault>.

Here is the list of vaults whose password must be changed (a command sketch is given after the list):

  • environments/group_vars/all/main/vault-vitam.yml
  • environments/group_vars/all/main/vault-keystores.yml
  • environments/group_vars/all/main/vault-extra.yml
  • environments/certs/vault-certs.yml
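
As an illustration, assuming the inventory tree shown above, the four vaults can be rekeyed in one go from the deployment/ directory (ansible-vault prompts for the old and new passwords for each file):

    for f in environments/group_vars/all/main/vault-vitam.yml \
             environments/group_vars/all/main/vault-keystores.yml \
             environments/group_vars/all/main/vault-extra.yml \
             environments/certs/vault-certs.yml; do
        ansible-vault rekey "$f"
    done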

Two vaults are mainly used when deploying a version:

Warning

Their content must therefore be modified before any deployment.

  • The file |repertoire_inventory|``group_vars/all/main/vault-vitam.yml`` contains the general secrets:

    ---
    # Vitam platform secret key
    # Note: It has to be the same on all sites
    plateforme_secret: change_it_vitamsecret
    
    # The consul key must be 16-bytes, Base64 encoded: https://www.consul.io/docs/agent/encryption.html
    # You can generate it with the "consul keygen" command
    # Or you can use this script: deployment/pki/scripts/generate_consul_key.sh
    # Note: It has to be the same on all sites
    consul_encrypt: Biz14ohqN4HtvZmrXp3N4A==
    
    mongodb:
      mongo-data:
        passphrase: changeitkM4L6zBgK527tWBb
        admin:
          user: vitamdb-admin
          password: change_it_1MpG22m2MywvKW5E
        localadmin:
          user: vitamdb-localadmin
          password: change_it_HycFEVD74g397iRe
        system:
          user: vitamdb-system
          password: change_it_HycFEVD74g397iRe
        metadata:
          user: metadata
          password: change_it_37b97KVaDV8YbCwt
        logbook:
          user: logbook
          password: change_it_jVi6q8eX4H1Ce8UC
        report:
          user: report
          password: change_it_jb7TASZbU6n85t8L
        functionalAdmin:
          user: functional-admin
          password: change_it_9eA2zMCL6tm6KF1e
        securityInternal:
          user: security-internal
          password: change_it_m39XvRQWixyDX566
        scheduler:
          user: scheduler
          password: change_it_Q8WEdxhXXOe2NEhp
        collect:
          user: collect
          password: change_it_m39XvRQWixyDX566
        metadataCollect:
          user: metadata-collect
          password: change_it_37b97KVaDV8YbCwt
      offer-fs-1:
        passphrase: changeitmB5rnk1M5TY61PqZ
        admin:
          user: vitamdb-admin
          password: change_it_FLkM5emt63N73EcN
        localadmin:
          user: vitamdb-localadmin
          password: change_it_QeH8q4e16ah4QKXS
        system:
          user: vitamdb-system
          password: change_it_HycFEVD74g397iRe
        offer:
          user: offer
          password: change_it_pQi1T1yT9LAF8au8
      offer-fs-2:
        passphrase: changeiteSY1By57qZr4MX2s
        admin:
          user: vitamdb-admin
          password: change_it_84aTMFZ7h8e2NgMe
        localadmin:
          user: vitamdb-localadmin
          password: change_it_Am1B37tGY1w5VfvX
        system:
          user: vitamdb-system
          password: change_it_HycFEVD74g397iRe
        offer:
          user: offer
          password: change_it_mLDYds957sNQ53mA
      offer-tape-1:
        passphrase: changeitmB5rnk1M5TY61PqZ
        admin:
          user: vitamdb-admin
          password: change_it_FLkM5emt63N73EcN
        localadmin:
          user: vitamdb-localadmin
          password: change_it_QeH8q4e16ah4QKXS
        system:
          user: vitamdb-system
          password: change_it_HycFEVD74g397iRe
        offer:
          user: offer
          password: change_it_pQi1T1yT9LAF8au8
      offer-swift-1:
        passphrase: changeitgYvt42M2pKL6Zx3T
        admin:
          user: vitamdb-admin
          password: change_it_e21hLp51WNa4sJFS
        localadmin:
          user: vitamdb-localadmin
          password: change_it_QB8857SJrGrQh2yu
        system:
          user: vitamdb-system
          password: change_it_HycFEVD74g397iRe
        offer:
          user: offer
          password: change_it_AWJg2Bp3s69P6nMe
      offer-s3-1:
        passphrase: changeituF1jVdR9NqdTG625
        admin:
          user: vitamdb-admin
          password: change_it_5b7cSWcS5M1NF4kv
        localadmin:
          user: vitamdb-localadmin
          password: change_it_S9jE24rxHwUZP6y5
        system:
          user: vitamdb-system
          password: change_it_HycFEVD74g397iRe
        offer:
          user: offer
          password: change_it_TuTB1i2k7iQW3zL2
    
    vitam_users:
      - vitam_aadmin:
        login: aadmin
        password: change_it_z5MP7GC4qnR8nL9t
        role: admin
      - vitam_uuser:
        login: uuser
        password: change_it_w94Q3jPAT2aJYm8b
        role: user
      - vitam_gguest:
        login: gguest
        password: change_it_E5v7Tr4h6tYaQG2W
        role: guest
      - techadmin:
        login: techadmin
        password: change_it_K29E1uHcPZ8zXji8
        role: admin
    
    ldap_authentification:
        ldap_pwd: "change_it_t69Rn5NdUv39EYkC"
    
    admin_basic_auth_password: change_it_5Yn74JgXwbQ9KdP8
    
    vitam_offers:
        offer-swift-1:
            swiftUser: swift_user
            swiftPassword: password_change_m44j57aYeRPnPXQ2
        offer-s3-1:
            s3AccessKey: accessKey_change_grLS8372Uga5EJSx
            s3SecretKey: secretKey_change_p97es2m2CHXPJA1m
    

Caution

Only alphanumeric characters are valid for the passphrase directives.

Warning

The authentication mode of the demo GUI users is configured in the file deployment/environments/group_vars/all/advanced/vitam_vars.yml. Several authentication modes are available in the authentication_realms section. When authentication relies on the iniRealm mechanism (default shiro configuration), the passwords declared in the vitam_users section must follow a strong password policy, as stated at the beginning of this chapter. It is also possible to choose an authentication mode relying on an external LDAP directory (ldapRealm in the authentication_realms section).
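
For illustration, the realm selection could look like the sketch below; the exact location of the list inside vitam_vars.yml and the available realm identifiers must be checked against the comments of that file:

    authentication_realms:
      - iniRealm       # local users declared in the vitam_users section (default)
    #  - ldapRealm     # delegate authentication to an external LDAP directory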

Note

When installing with at least one swift offer, the name of each offer and the associated swift connection password, as defined in the offers_opts.yml file, must be declared in the vitam_offers section. The example above shows the password declaration for the swift offer offer-swift-1.

Note

When installing with at least one s3 offer, the name of each offer and the associated s3 secret access key, as defined in the offers_opts.yml file, must be declared in the vitam_offers section. The example above shows the declaration for the s3 offer offer-s3-1.

  • The file |repertoire_inventory|``group_vars/all/main/vault-keystores.yml`` contains the passwords of the certificate stores used by VITAM:

    # NO UNDERSCORE ALLOWED IN VALUES
    keystores:
      server:
        offer: changeit817NR75vWsZtgAgJ
        access_external: changeitMZFD2YM4279miitu
        ingest_external: changeita2C74cQhy84BLWCr
        ihm_recette: changeit4FWYVK1347mxjGfe
        ihm_demo: changeit6kQ16eyDY7QPS9fy
        collect_external: changeit6kQ16eyDYAoPS9fy
      client_external:
        ihm_demo: changeitGT38hhTiA32x1PLy
        gatling: changeit2sBC5ac7NfGF9Qj7
        ihm_recette: changeitdAZ9Eq65UhDZd9p4
        reverse: changeite5XTzb5yVPcEX464
        vitam_admin_int: changeitz6xZe5gDu7nhDZd9
        collect_external: changeitz6xZe5gDu7nhDZA12
      client_storage:
        storage: changeit647D7LWiyM6qYMnm
      timestamping:
        secure_logbook: changeitMn9Skuyx87VYU62U
        secure_storage: changeite5gDu9Skuy84BLW9
    truststores:
      server: changeitxNe4JLfn528PVHj7
      client_external: changeitJ2eS93DcPH1v4jAp
      client_storage: changeitHpSCa31aG8ttB87S
    grantedstores:
      client_external: changeitLL22HkmDCA2e2vj7
      client_storage: changeitR3wwp5C8KQS76Vcu
    

Warning

Secure your environment by defining strong passwords.

4.2.3.3.2. Extras

  • The file |repertoire_inventory|``group_vars/all/main/vault-extra.yml`` contains the secrets of the optional extra components (for example the git account used for git lfs):

    # Example for git lfs ; uncomment & use if needed
    #vitam_gitlab_itest_login: "account"
    #vitam_gitlab_itest_password: "change_it_4DU42JVf2x2xmPBs"
    

Note

The vitam.yml playbook contains steps flagged with no_log, so that sensitive values such as certificate passwords are not displayed in clear text. In case of error, this line can be removed from the file to allow a finer analysis of a possible problem on one of these steps.

4.2.3.3.3. The ansible-vault command

Some of the files located under |repertoire_inventory|``group_vars/all`` whose name starts with vault- must be protected (encrypted) with the ansible-vault utility.

Note

Do not forget to keep the vault_pass.txt file consistent with the chosen vault password.

4.2.3.3.3.1. Generating vaulted files from plaintext files

Example with the vault-cots.yml file

cp vault-cots.yml.plain vault-cots.yml
ansible-vault encrypt vault-cots.yml
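
To check that the file is indeed protected, an encrypted vault starts with an $ANSIBLE_VAULT header and can be opened with the vault password (illustrative commands):

    head -1 vault-cots.yml         # an encrypted vault starts with: $ANSIBLE_VAULT;1.1;AES256
    ansible-vault view vault-cots.yml   # prompts for the vault password and displays the clear content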

4.2.3.3.3.2. Re-encrypting a vaulted file with a new password

Example with the vault-cots.yml file

ansible-vault rekey vault-cots.yml

4.2.3.4. The Elasticsearch mapping for Unit and ObjectGroup

The elasticsearch index mappings for the Unit and ObjectGroup metadata collections can be configured from outside the packaged software, more specifically in the directory deployment/ansible-vitam/roles/elasticsearch-mapping/files/, which contains:

  • deployment/ansible-vitam/roles/elasticsearch-mapping/files/unit-es-mapping.json
  • deployment/ansible-vitam/roles/elasticsearch-mapping/files/og-es-mapping.json
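
Before deploying a modified mapping, a quick syntax check helps avoid indexation errors (illustrative command, assuming python3 is available on the deployment workstation):

    # Validate the JSON syntax of a modified mapping file
    python3 -m json.tool deployment/ansible-vitam/roles/elasticsearch-mapping/files/og-es-mapping.json > /dev/null \
      && echo "og-es-mapping.json: valid JSON"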

Example of the mapping file for the ObjectGroup collection:

{
  "dynamic_templates": [
    {
      "object": {
        "match_mapping_type": "object",
        "mapping": {
          "type": "object"
        }
      }
    },
    {
      "all_string": {
        "match": "*",
        "mapping": {
          "type": "text"
        }
      }
    }
  ],
  "properties": {
    "FileInfo": {
      "properties": {
        "CreatingApplicationName": {
          "type": "text"
        },
        "CreatingApplicationVersion": {
          "type": "text"
        },
        "CreatingOs": {
          "type": "text"
        },
        "CreatingOsVersion": {
          "type": "text"
        },
        "DateCreatedByApplication": {
          "type": "date",
          "format": "strict_date_optional_time"
        },
        "Filename": {
          "type": "text"
        },
        "LastModified": {
          "type": "date",
          "format": "strict_date_optional_time"
        }
      }
    },
    "Metadata": {
      "properties": {
        "Text": {
          "type": "object"
        },
        "Document": {
          "type": "object"
        },
        "Image": {
          "type": "object"
        },
        "Audio": {
          "type": "object"
        },
        "Video": {
          "type": "object"
        }
      }
    },
    "OtherMetadata": {
      "type": "object",
      "properties": {
        "RawMetadata": {
          "type": "object"
        }
      }
    },
    "_profil": {
      "type": "keyword"
    },
    "_qualifiers": {
      "properties": {
        "_nbc": {
          "type": "long"
        },
        "qualifier": {
          "type": "keyword"
        },
        "versions": {
          "type": "nested",
          "properties": {
            "Compressed": {
              "type": "text"
            },
            "DataObjectGroupId": {
              "type": "keyword"
            },
            "DataObjectVersion": {
              "type": "keyword"
            },
            "DataObjectProfile": {
              "type": "keyword"
            },
            "DataObjectSystemId": {
              "type": "keyword"
            },
            "DataObjectGroupSystemId": {
              "type": "keyword"
            },
            "_opi": {
              "type": "keyword"
            },
            "FileInfo": {
              "properties": {
                "CreatingApplicationName": {
                  "type": "text"
                },
                "CreatingApplicationVersion": {
                  "type": "text"
                },
                "CreatingOs": {
                  "type": "text"
                },
                "CreatingOsVersion": {
                  "type": "text"
                },
                "DateCreatedByApplication": {
                  "type": "date",
                  "format": "strict_date_optional_time"
                },
                "Filename": {
                  "type": "text"
                },
                "LastModified": {
                  "type": "date",
                  "format": "strict_date_optional_time"
                }
              }
            },
            "FormatIdentification": {
              "properties": {
                "FormatId": {
                  "type": "keyword"
                },
                "FormatLitteral": {
                  "type": "keyword"
                },
                "MimeType": {
                  "type": "keyword"
                },
                "Encoding": {
                  "type": "keyword"
                }
              }
            },
            "MessageDigest": {
              "type": "keyword"
            },
            "Algorithm": {
              "type": "keyword"
            },
            "PhysicalDimensions": {
              "properties": {
                "Diameter": {
                  "properties": {
                    "unit": {
                      "type": "keyword"
                    },
                    "dValue": {
                      "type": "double"
                    }
                  }
                },
                "Height": {
                  "properties": {
                    "unit": {
                      "type": "keyword"
                    },
                    "dValue": {
                      "type": "double"
                    }
                  }
                },
                "Depth": {
                  "properties": {
                    "unit": {
                      "type": "keyword"
                    },
                    "dValue": {
                      "type": "double"
                    }
                  }
                },
                "Shape": {
                  "type": "keyword"
                },
                "Thickness": {
                  "properties": {
                    "unit": {
                      "type": "keyword"
                    },
                    "dValue": {
                      "type": "double"
                    }
                  }
                },
                "Length": {
                  "properties": {
                    "unit": {
                      "type": "keyword"
                    },
                    "dValue": {
                      "type": "double"
                    }
                  }
                },
                "NumberOfPage": {
                  "type": "long"
                },
                "Weight": {
                  "properties": {
                    "unit": {
                      "type": "keyword"
                    },
                    "dValue": {
                      "type": "double"
                    }
                  }
                },
                "Width": {
                  "properties": {
                    "unit": {
                      "type": "keyword"
                    },
                    "dValue": {
                      "type": "double"
                    }
                  }
                }
              }
            },
            "PhysicalId": {
              "type": "keyword"
            },
            "Size": {
              "type": "long"
            },
            "Uri": {
              "type": "keyword"
            },
            "_id": {
              "type": "keyword"
            },
            "_storage": {
              "properties": {
                "_nbc": {
                  "type": "long"
                },
                "offerIds": {
                  "type": "keyword"
                },
                "strategyId": {
                  "type": "keyword"
                }
              }
            },
            "PersistentIdentifier": {
              "properties": {
                "PersistentIdentifierType": {
                  "type": "keyword"
                },
                "PersistentIdentifierOrigin": {
                  "type": "keyword"
                },
                "PersistentIdentifierReference": {
                  "type": "keyword"
                },
                "PersistentIdentifierContent": {
                  "type": "keyword"
                }
              }
            },
            "DataObjectUse": {
              "type": "keyword"
            },
            "DataObjectNumber": {
              "type": "long"
            }
          }
        }
      }
    },
    "_v": {
      "type": "long"
    },
    "_av": {
      "type": "long"
    },
    "_nbc": {
      "type": "long"
    },
    "_ops": {
      "type": "keyword"
    },
    "_opi": {
      "type": "keyword"
    },
    "_sp": {
      "type": "keyword"
    },
    "_sps": {
      "type": "keyword"
    },
    "_tenant": {
      "type": "long"
    },
    "_up": {
      "type": "keyword"
    },
    "_uds": {
      "type": "object",
      "enabled": false
    },
    "_us": {
      "type": "keyword"
    },
    "_storage": {
      "properties": {
        "_nbc": {
          "type": "long"
        },
        "offerIds": {
          "type": "keyword"
        },
        "strategyId": {
          "type": "keyword"
        }
      }
    },
    "_glpd": {
      "enabled": false
    },
    "_acd": {
      "type": "date",
      "format": "strict_date_optional_time"
    },
    "_aud": {
      "type": "date",
      "format": "strict_date_optional_time"
    }
  }
}

Note

This mapping is configured on both metadata components and on the extra component ihm-recette.

Caution

If the mapping is changed, make sure the update remains consistent with the VITAM ontology.

The mapping is taken into account when the indexes are first created. For a new VITAM installation, the mappings are therefore applied automatically. However, if VITAM is already installed, modifying the mappings requires a reindexation through the dedicated API.
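
Once the reindexation has been performed, the resulting indexes and their aliases can be checked directly on the elasticsearch-data cluster; the sketch below is an assumption based on the consul service naming used elsewhere in this configuration (host name and port may differ on your platform):

    # List the unit/objectgroup indexes and their aliases (illustrative)
    curl -s "http://elasticsearch-data.service.consul:9200/_cat/aliases/unit*,objectgroup*?v"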