4.2.2. Deployment configuration

See also

The architecture of the software solution, the sizing elements and the deployment principles are defined in the DAT.

4.2.2.1. Deployment files

The deployment files are shipped with the VITAM release, in the deployment subdirectory. For installation purposes, they consist of two parts:

  • the Ansible deployment playbooks, located in the ansible-vitam subdirectory, which is independent of the environment being deployed; these files normally do not need to be modified to perform an installation.
  • the inventory tree; sample files are provided in the environments subdirectory. This tree covers the deployment of a single environment and must be duplicated when installing additional environments. The files it contains must be adapted before deployment, as explained in the following paragraphs.
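As a reference point, the expected layout is sketched below (the directory and file names are those referenced in this section; the exact content may vary between releases):

deployment/
├── ansible-vitam/                 # deployment playbooks (do not modify)
└── environments/                  # inventory tree (duplicate per environment)
    ├── hosts.example              # sample inventory, basis for hosts.<environnement>
    └── group_vars/all/
        ├── vitam_security.yml     # access rights configuration
        ├── offers_opts.yml        # storage offers configuration
        ├── vault-vitam.yml        # general secrets (ansible-vault)
        ├── vault-keystores.yml    # keystore passwords (ansible-vault)
        └── vault-extra.yml        # extra components secrets (ansible-vault)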

4.2.2.2. “Platform” information

To configure the deployment, a new inventory file must be created in the environments directory (in the remainder of this document, this file is referred to as hosts.<environnement>). It must follow the structure of the hosts.example file (and, in particular, strictly respect the Ansible group hierarchy); the comments in that file explain how to adapt it to the target environment:

# Group definition ; DO NOT MODIFY
[hosts]

# Group definition ; DO NOT MODIFY
[hosts:children]
vitam
reverse
library
hosts-dev-tools
ldap


########### Test environments specifics ###########

# EXTRA : Front reverse-proxy (test environments ONLY) ; add machine name after
[reverse]
# optional : after machine, if this machine is different from VITAM machines, you can specify another become user
# Example
# vitam-centos-01.vitam ansible_ssh_user=centos

########### Extra VITAM applications ###########

[ldap] # Extra : OpenLDAP server
# LDAP server !!! NOT FOR PRODUCTION !!! Test only

[library]
# TODO: Put here servers where this service will be deployed : library

[hosts-dev-tools]
# TODO: Put here servers where this service will be deployed : mongo-express, elasticsearch-head

[elasticsearch:children] # EXTRA : elasticsearch
hosts-elasticsearch-data
hosts-elasticsearch-log

########### VITAM services ###########

# Group definition ; DO NOT MODIFY
[vitam:children]
zone-external
zone-access
zone-applicative
zone-storage
zone-data
zone-admin


##### Zone externe


[zone-external:children]
hosts-ihm-demo
hosts-cerebro
hosts-ihm-recette

[hosts-ihm-demo]
# TODO: Put here servers where this service will be deployed : ihm-demo

[hosts-ihm-recette]
# TODO: Put here servers where this service will be deployed : ihm-recette (extra feature)

[hosts-cerebro]
# TODO: Put here servers where this service will be deployed : vitam-elasticsearch-cerebro


##### Zone access

# Group definition ; DO NOT MODIFY
[zone-access:children]
hosts-ingest-external
hosts-access-external

[hosts-ingest-external]
# TODO: Put here servers where this service will be deployed : ingest-external


[hosts-access-external]
# TODO: Put here servers where this service will be deployed : access-external


##### Zone applicative

# Group definition ; DO NOT MODIFY
[zone-applicative:children]
hosts-ingest-internal
hosts-processing
hosts-worker
hosts-access-internal
hosts-metadata
hosts-functional-administration
hosts-logbook
hosts-workspace
hosts-storage-engine
hosts-security-internal

[hosts-security-internal]
# TODO: Put here servers where this service will be deployed : security-internal


[hosts-logbook]
# TODO: Put here servers where this service will be deployed : logbook


[hosts-workspace]
# TODO: Put here servers where this service will be deployed : workspace


[hosts-ingest-internal]
# TODO: Put here servers where this service will be deployed : ingest-internal


[hosts-access-internal]
# TODO: Put here servers where this service will be deployed : access-internal


[hosts-metadata]
# TODO: Put here servers where this service will be deployed : metadata


[hosts-functional-administration]
# TODO: Put here servers where this service will be deployed : functional-administration


[hosts-processing]
# TODO: Put here servers where this service will be deployed : processing


[hosts-storage-engine]
# TODO: Put here servers where this service will be deployed : storage-engine


[hosts-worker]
# TODO: Put here servers where this service will be deployed : worker
# Optional parameter after each host : vitam_worker_capacity=<integer> ; please refer to your infrastructure for defining this number ; default is 1


##### Zone storage

[zone-storage:children] # DO NOT MODIFY
hosts-storage-offer-default


[hosts-storage-offer-default]
# TODO: Put here servers where this service will be deployed : storage-offer-default
# LIMIT : only 1 offer per machine and 1 machine per offer
# Mandatory param for each offer is offer_conf and points to offer_opts.yml & vault-vitam.yml (with same tree)
# Example for swift :
# hostname-offre-1.vitam offer_conf=offer-swift-1
# Example for filesystem :
# hostname-offre-2.vitam offer_conf=offer-fs-1

[hosts-mongodb-offer:children]
hosts-mongos-offer
hosts-mongoc-offer
hosts-mongod-offer

[hosts-mongos-offer]
# TODO: put here servers where this service will be deployed : mongos cluster for storage offers
# Mandatory param : mongo_cluster_name : name of the cluster (should exist in the offer_conf configuration)
# Example (for a more complete one, see the one in the group hosts-mongos-data) :
# vitam-mongo-swift-offer-01   mongo_cluster_name=offer-swift-1
# vitam-mongo-swift-offer-02   mongo_cluster_name=offer-swift-1
# vitam-mongo-fs-offer-01      mongo_cluster_name=offer-fs-1
# vitam-mongo-fs-offer-02      mongo_cluster_name=offer-fs-1

[hosts-mongoc-offer]
# TODO: put here servers where this service will be deployed : mongoc cluster for storage offers
# Mandatory param : mongo_cluster_name : name of the cluster (should exist in the offer_conf configuration)
# Optional param : mongo_rs_bootstrap=true : mandatory for exactly 1 node of the replica set ; some init commands will be executed on it
# Optional param : mongo_arbiter=true : the node will only be an arbiter ; do not add this parameter on a mongo_rs_bootstrap node
# Example :
# vitam-mongo-swift-offer-01   mongo_cluster_name=offer-swift-1                       mongo_rs_bootstrap=true
# vitam-mongo-swift-offer-02   mongo_cluster_name=offer-swift-1
# vitam-swift-offer            mongo_cluster_name=offer-swift-1                       mongo_arbiter=true
# vitam-mongo-fs-offer-01      mongo_cluster_name=offer-fs-1                          mongo_rs_bootstrap=true
# vitam-mongo-fs-offer-02      mongo_cluster_name=offer-fs-1
# vitam-fs-offer               mongo_cluster_name=offer-fs-1                          mongo_arbiter=true

[hosts-mongod-offer]
# TODO: put here servers where this service will be deployed : mongod cluster for storage offers
# Mandatory param : mongo_cluster_name : name of the cluster (should exist in the offer_conf configuration)
# Mandatory param : mongo_shard_id=<n> : id of the current shard, incremented by 1 from 0 to n
# Optional param : mongo_rs_bootstrap=true : mandatory for exactly 1 node of the replica set ; some init commands will be executed on it
# Optional param : mongo_arbiter=true : the node will only be an arbiter ; do not add this parameter on a mongo_rs_bootstrap node
# Example :
# vitam-mongo-swift-offer-01   mongo_cluster_name=offer-swift-1    mongo_shard_id=0                   mongo_rs_bootstrap=true
# vitam-mongo-swift-offer-02   mongo_cluster_name=offer-swift-1    mongo_shard_id=0
# vitam-swift-offer            mongo_cluster_name=offer-swift-1    mongo_shard_id=0                   mongo_arbiter=true
# vitam-mongo-fs-offer-01      mongo_cluster_name=offer-fs-1       mongo_shard_id=0                   mongo_rs_bootstrap=true
# vitam-mongo-fs-offer-02      mongo_cluster_name=offer-fs-1       mongo_shard_id=0
# vitam-fs-offer               mongo_cluster_name=offer-fs-1       mongo_shard_id=0                   mongo_arbiter=true

##### Zone data

# Group definition ; DO NOT MODIFY
[zone-data:children]
hosts-elasticsearch-data
hosts-mongodb-data

[hosts-elasticsearch-data]
# TODO: Put here servers where this service will be deployed : elasticsearch-data cluster
# 2 params available for huge environments (parameter to be declared after each server) :
#    is_data=true/false
#    is_master=true/false
#    other options are not handled yet
# defaults are set to true
# Examples :
# server1 is_master=true is_data=false
# server2 is_master=false is_data=true
# More explanation here : https://www.elastic.co/guide/en/elasticsearch/reference/5.6/modules-node.html


# Group definition ; DO NOT MODIFY
[hosts-mongodb-data:children]
hosts-mongos-data
hosts-mongoc-data
hosts-mongod-data

[hosts-mongos-data]
# TODO: Put here servers where this service will be deployed : mongos cluster
# Mandatory param : mongo_cluster_name=mongo-data  ("mongo-data" is mandatory)
# Example :
# vitam-mdbs-01   mongo_cluster_name=mongo-data
# vitam-mdbs-02   mongo_cluster_name=mongo-data
# vitam-mdbs-03   mongo_cluster_name=mongo-data

[hosts-mongoc-data]
# TODO: Put here servers where this service will be deployed : mongoc cluster
# Mandatory param : mongo_cluster_name=mongo-data  ("mongo-data" is mandatory)
# Optional param : mongo_rs_bootstrap=true : mandatory for exactly 1 node of the replica set ; some init commands will be executed on it
# Example :
# vitam-mdbc-01   mongo_cluster_name=mongo-data                     mongo_rs_bootstrap=true
# vitam-mdbc-02   mongo_cluster_name=mongo-data
# vitam-mdbc-03   mongo_cluster_name=mongo-data

[hosts-mongod-data]
# TODO: Put here servers where this service will be deployed : mongod cluster
# Each replica_set should have an odd number of members (2n + 1)
# Reminder: For Vitam, one mongodb shard is using one replica_set
# Mandatory param : mongo_cluster_name=mongo-data ("mongo-data" is mandatory)
# Mandatory param : mongo_shard_id=<n> : id of the current shard, incremented by 1 from 0 to n
# Optional param : mongo_rs_bootstrap=true : mandatory for exactly 1 node of the replica set ; some init commands will be executed on it
# Example:
# vitam-mdbd-01  mongo_cluster_name=mongo-data   mongo_shard_id=0  mongo_rs_bootstrap=true
# vitam-mdbd-02  mongo_cluster_name=mongo-data   mongo_shard_id=0
# vitam-mdbd-03  mongo_cluster_name=mongo-data   mongo_shard_id=0
# vitam-mdbd-04  mongo_cluster_name=mongo-data   mongo_shard_id=1  mongo_rs_bootstrap=true
# vitam-mdbd-05  mongo_cluster_name=mongo-data   mongo_shard_id=1
# vitam-mdbd-06  mongo_cluster_name=mongo-data   mongo_shard_id=1

##### Zone admin

# Group definition ; DO NOT MODIFY
[zone-admin:children]
hosts-consul-server
hosts-kibana-data
log-servers
hosts-elasticsearch-log

[hosts-consul-server]
# TODO: Put here servers where this service will be deployed : consul

[hosts-kibana-data]
# TODO: Put here servers where this service will be deployed : kibana (for data cluster)

[log-servers:children]
hosts-kibana-log
hosts-logstash


[hosts-kibana-log]
# TODO: Put here servers where this service will be deployed : kibana (for log cluster)

[hosts-logstash]
# TODO: Put here servers where this service will be deployed : logstash


[hosts-elasticsearch-log]
# TODO: Put here servers where this service will be deployed : elasticsearch-log cluster

########### Global vars ###########

[hosts:vars]

# ===============================
# VITAM
# ===============================

# Declare user for ansible on target machines
ansible_ssh_user=
# Can the target user become root ? true is required by VITAM (use of a sudoer account is mandatory)
ansible_become=true

# Related to Consul ; declare your DNS server(s) as a list
# Example : dns_servers=["8.8.8.8","8.8.4.4"]
dns_servers=

# Vitam tenants to create
vitam_tenant_ids=[0,1,2]
vitam_tenant_admin=1

### Logback configuration ###
# Days before deleting logback log files (java & access logs for vitam components)
days_to_delete_logback_logfiles=

# Configuration for Curator
#	Days before deletion on log management cluster; 365 for production environment
days_to_delete_logstash_indexes=
#	Days before closing "old" indexes on log management cluster; 30 for production environment
days_to_close_logstash_indexes=

# Define local Consul datacenter name
vitam_site_name=prod-dc1
# EXAMPLE : vitam_site_name = prod-dc1
# check whether on primary site (true) or secondary (false)
primary_site=true


# ===============================
# EXTRA
# ===============================
# Environment (defines title in extra on reverse homepage). Variable is DEPRECATED !
#environnement=

### vitam-itest repository ###
vitam_tests_branch=master
vitam_tests_gitrepo_protocol=
vitam_tests_gitrepo_baseurl=
vitam_tests_gitrepo_url=

# Curator configuration
#	Days before deletion for packetbeat index only on log management cluster
days_to_delete_packetbeat_indexes=5
#	Days before deletion for metricbeat index only on log management cluster; 30 for production environment
days_to_delete_metricbeat_indexes=30
# Days before closing metrics elasticsearch indexes
days_to_close_metrics_indexes=7
# Days before deleting metrics elasticsearch indexes
days_to_delete_metrics_indexes=30

# Used when VITAM is behind a reverse proxy (provides configuration for reverse proxy && displayed in header page)
vitam_reverse_external_dns=
# For reverse proxy use
reverse_proxy_port=80
# http_proxy env var to use ; has to be declared even if empty
http_proxy_environnement=

For each “host” type, declare the server(s) assigned to each function. Components may be colocated on the same machine (see the relevant paragraph of the DAT); an illustrative excerpt is given after the note below.

Note

For the “hosts-consul-server” group, a minimum of 3 machines must be declared.
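A minimal, hypothetical excerpt of a filled-in inventory (machine names are invented for the example; the group layout is the one from hosts.example above):

# e.g. created with : cp environments/hosts.example environments/hosts.<environnement>
[hosts-consul-server]
vitam-admin-01.vitam
vitam-admin-02.vitam
vitam-admin-03.vitam

# Colocation : the same machine may appear in several groups (Cf. the DAT)
[hosts-logbook]
vitam-app-01.vitam

[hosts-metadata]
vitam-app-01.vitam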

Access rights to VITAM are configured in the environments/group_vars/all/vitam_security.yml file, as follows:

---

# Business vars

### Admin context name and tenants ###
admin_context_name: "admin-context"
admin_context_tenants: "{{vitam_tenant_ids}}"
# Indicate context certificates relative paths under {{inventory_dir}}/certs/client-external/clients
# vitam-admin-int is mandatory for internal use (PRONOM upload)
admin_context_certs: [ "ihm-demo/ihm-demo.crt", "ihm-recette/ihm-recette.crt", "reverse/reverse.crt", "vitam-admin-int/vitam-admin-int.crt" ]
# Indicate here all the personal certificates relative paths under {{inventory_dir}}/certs/client-vitam-users/clients
admin_personal_certs: [ "userOK.crt" ]

# Admin security profile name
admin_security_profile: "admin-security-profile"

admin_basic_auth_user: "adminUser"
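The certificate paths above are relative to the inventory directory. Assuming the default layout, the expected tree would therefore look like the following sketch (built from the relative paths listed above):

environments/certs/
├── client-external/clients/
│   ├── ihm-demo/ihm-demo.crt
│   ├── ihm-recette/ihm-recette.crt
│   ├── reverse/reverse.crt
│   └── vitam-admin-int/vitam-admin-int.crt
└── client-vitam-users/clients/
    └── userOK.crt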

Finally, the storage offers are declared and configured in the environments/group_vars/all/offers_opts.yml file:

# This list is ordered. It can and has to be completed if more offers are necessary
# Strategy order (the 1st one has to be the preferred one)
vitam_strategy:
  - name: offer-fs-1
    referent: true
#    vitam_site_name: prod-dc2
#  - name: offer-swift-1
# Example :
#  - name: distant
#    referent: true
#    vitam_site_name: distant-dc2

# DON'T forget to add associated passwords in vault-vitam.yml with same tree when using provider openstack-swift*
# ATTENTION !!! Each offer has to have a distinct name, except for clusters exposing the same physical storage
# WARNING : for offer names, please only use [a-z][a-z0-9-]* pattern
vitam_offers:
  offer-fs-1:
    # param can be filesystem or filesystem-hash
    provider: filesystem
  offer-swift-1:
    # provider : openstack-swift for v1 or openstack-swift-v3 for v3
    provider: openstack-swift
# keystoneEndPoint : keystone connection URL
keystoneEndPoint: http://hostname-rados-gw:port/auth/1.0
# deprecated
keystone_auth_url: http://hostname-rados-gw:port/auth/1.0
# swiftUid : OpenStack domain in which the user is registered
swift_uid: domaine
# swiftSubUser : user identifier
swift_subuser: utilisateur
# cephMode : must be false for a v3 offer ; true for a v1 offer
cephMode: false
# projectName : openstack tenant
    projectName: monTenant


    
  # example_swift_v1:
  #   provider: openstack-swift
  #   keystoneEndPoint: https://keystone/auth/1.0
  #   swift_uid: tenant$user # <tenant>$<user>
  #   swift_subuser: subuser
  #   cephMode: true
  # example_swift_v3:
  #   provider: openstack-swift-v3
  #   keystoneEndPoint: https://keystone/v3
  #   swift_uid: domaine
  #   swift_subuser: user
  #   cephMode: false
  #   projectName: monTenant

Refer to the comments in the file to fill it in correctly.

Note

In a multi-site deployment, the vitam_site_name directive in the vitam_strategy section defines the Consul datacenter name for the associated offer. If it is not set, the value of the vitam_site_name variable defined in the inventory is used by default.
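As an illustration, a hypothetical two-site strategy (offer and datacenter names are modeled on the commented example above and are purely illustrative):

vitam_strategy:
  - name: offer-fs-1
    referent: true
  - name: distant
    vitam_site_name: distant-dc2   # Consul datacenter name of the remote site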

Warning

When connecting to a keystone server over https, do not forget to add the public key of the keystone CA to the PKI.

4.2.2.3. Declaring the secrets

Warning

This section describes files containing sensitive data; these files must be secured with a “strong” password. If a password file (“vault-password-file”) is used, this password must be set as the content of that file; do not forget to secure or delete the file once the installation is complete.

The secrets used by the software solution (apart from the certificates, which are covered in a later section) are defined in files encrypted with ansible-vault.

Important

All the vaults present in the inventory tree must be protected by the same password!

The first step is to change the passwords of all the vaults present in the deployment tree (the default password is contained in the vault_pass.txt file), using the command ansible-vault rekey <vault file>.
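A minimal sketch of this step, assuming only the three vault files referenced in this section (add any other vault present in your inventory tree):

cd deployment
for f in environments/group_vars/all/vault-vitam.yml \
         environments/group_vars/all/vault-keystores.yml \
         environments/group_vars/all/vault-extra.yml; do
    # Prompts for the current password (default : see vault_pass.txt), then the new one ;
    # use the SAME new password for every vault
    ansible-vault rekey "$f"
done
# If you keep a vault_pass.txt password file, update it with the new password,
# then secure or delete it once the installation is complete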

Two vaults are mainly used when deploying a release; their content must therefore be modified before any deployment:

  • The environments/group_vars/all/vault-vitam.yml file contains the general secrets:

    # Vitam platform secret key
    plateforme_secret: vitamsecret
    
    # Cerebro key
    cerebro_secret_key: tGz28hJkiW[p@a34G
    
    # The consul key must be 16-bytes, Base64 encoded: https://www.consul.io/docs/agent/encryption.html
    # You can generate it with the "consul keygen" command
    # Or you can use this script: deployment/pki/scripts/generate_consul_key.sh
    consul_encrypt: Biz14ohqN4HtvZmrXp3N4A==
    
    mongodb:
      mongo-data:
        passphrase: mongogo
        admin:
          user: vitamdb-admin
          password: azerty
        localadmin:
          user: vitamdb-localadmin
          password: qwerty
        metadata:
          user: metadata
          password: azerty1
        logbook:
          user: logbook
          password: azerty2
        functionalAdmin:
          user: functional-admin
          password: azerty3
        securityInternal:
          user: security-internal
          password: azerty4
      offer-fs-1:
        passphrase: mongogo
        admin:
          user: vitamdb-admin
          password: azerty
        localadmin:
          user: vitamdb-localadmin
          password: qwerty
        offer:
          user: offer
          password: azerty5
      offer-fs-2:
        passphrase: mongogo
        admin:
          user: vitamdb-admin
          password: azerty
        localadmin:
          user: vitamdb-localadmin
          password: qwerty
        offer:
          user: offer
          password: azerty5
      offer-swift-1:
        passphrase: mongogo
        admin:
          user: vitamdb-admin
          password: azerty
        localadmin:
          user: vitamdb-localadmin
          password: qwerty
        offer:
          user: offer
          password: azerty5
    
    vitam_users:
      - vitam_aadmin:
        login: aadmin
        password: aadmin1234
        role: admin
      - vitam_uuser:
        login: uuser
        password: uuser1234
        role: user
      - vitam_gguest:
        login: gguest
        password: gguest1234
        role: guest
      - techadmin:
        login: techadmin
        password: techadmin1234
        role: admin
    ldap_authentification:
        ldap_pwd: "admin"
    
    admin_basic_auth_password: adminPassword
    
  • The environments/group_vars/all/vault-keystores.yml file contains the passwords of the certificate stores (keystores) used in VITAM:

    keystores:
      server:
        offer: azerty1
        access_external: azerty2
        ingest_external: azerty3
        ihm_recette: azerty16
        ihm_demo: azerty17
      client_external:
        ihm_demo: azerty4
        gatling: azerty4
        ihm_recette: azerty5
        reverse: azerty6
      client_storage:
        storage: azerty7
      timestamping:
        secure_logbook: azerty8
    truststores:
      server: azerty9
      client_external: azerty10
      client_storage: azerty11
    grantedstores:
      client_external: azerty12
      client_storage: azerty13
    

Warning

Secure your environment by defining “strong” passwords.
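To update the content of these vaults, ansible-vault can be used directly (a sketch; the file paths are the ones given above):

ansible-vault edit environments/group_vars/all/vault-vitam.yml
# or, to simply display the decrypted content :
ansible-vault view environments/group_vars/all/vault-keystores.yml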

4.2.2.3.1. Extra components

  • The environments/group_vars/all/vault-extra.yml file contains the secrets of the extra components (for example, the access credentials for the vitam-itest repository):

    # Example for git lfs ; uncomment & use if needed
    #vitam_gitlab_itest_login: "account"
    #vitam_gitlab_itest_password: "password"
    

Note

The vitam.yml playbook includes steps marked no_log, so that sensitive values such as certificate passwords are not displayed in clear text. In case of error, this line can be removed from the playbook to allow finer analysis of a possible problem in one of these steps.
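For reference, a hypothetical invocation of this playbook (paths and inventory name follow the conventions of this section; adjust to your environment):

ansible-playbook ansible-vitam/vitam.yml \
    -i environments/hosts.<environnement> \
    --ask-vault-pass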