Modification_du_shell_Bash__english_

Modification of the Bash shell

To improve productivity, it can be useful to extend Bash. There are two main ways to do this:

  • creating aliases,
  • creating functions.

Creating an alias

Let's work in a test directory that contains two files:

  • a regular file myfile.txt,
  • a hidden file .myhiddenfile.

To define a command alias, use the alias command. For example, let’s define an alias myls for the command ls -ah.

Let’s make sure we only see the non-hidden file with the ls command:

ls
myfile.txt

Let’s check that the command myls doesn’t exist:

myls
bash: myls: command not found

Let’s create the alias:

echo 'alias myls="ls -ah"' >> ~/.bashrc

Let’s tell Bash that we want to use this alias in the current session:

. ~/.bashrc

Let’s check that the Bash environment now recognizes this alias:

myls
.  ..  myfile.txt  .myhiddenfile

So the alias myls does exist.
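
To double-check how Bash resolves the name, the type builtin reports whether it is an alias, a function, or an external command:

type myls
# prints something like: myls is aliased to `ls -ah'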

We could also have created an alias for the ls command itself:

echo 'alias ls="ls -ah"' >> ~/.bashrc

Reload the alias:

. ~/.bashrc

Check that the ls command now runs with the -ah options by default:

ls
.  ..  myfile.txt  .myhiddenfile

To display the list of aliases, you can use the command alias:

alias
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l.='ls -d .* --color=auto'
alias ll='ls -l --color=auto'
alias ls='ls -ah'
alias myls='ls -ah'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'
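
Since ls itself is now aliased, note that an alias can be bypassed for a single invocation or removed from the current session (deleting its line from ~/.bashrc makes the removal permanent); a minimal sketch:

\ls            # the leading backslash bypasses the alias for this call only
command ls     # same effect, using the command builtin
unalias myls   # removes the alias from the current session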

Creating a function

It is also possible to create a function that accepts the command arguments as parameters. This function must be defined in the ~/.bashrc file. For example, you can define a dpull function to run the docker pull command:

cat <<EOF>> ~/.bashrc
function dpull()
{
    docker pull \$1;
}
EOF

Let’s source the file .bashrc:

. ~/.bashrc

Let’s check that the dpull alpine command has the same effect as the docker pull alpine command:

dpull alpine
Using default tag: latest
latest: Pulling from library/alpine

Digest: sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a
Status: Downloaded newer image for alpine:latest
docker.io/library/alpine:latest
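
Unlike an alias, a function can place its arguments anywhere on the command line and accept several of them at once. As a sketch, here is a hypothetical dimages helper (not part of the walkthrough above) that forwards all of its arguments to docker images:

cat <<EOF>> ~/.bashrc
function dimages()
{
    # "\$@" forwards every argument as-is to docker images
    docker images "\$@";
}
EOF
. ~/.bashrc
dimages alpine   # e.g. list only the images whose repository is alpine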

So we’ve seen two ways to extend Bash to improve our productivity.

Installation_de_CouchDB__centos7___english_

Installing CouchDB [centos7]

CouchDB is a document-oriented NoSQL database. We will install CouchDB natively on CentOS 7.

Add a new yum repository for CouchDB:

sudo bash -c 'cat <<EOF> /etc/yum.repos.d/apache-couchdb.repo
[bintray--apache-couchdb-rpm]
name=bintray--apache-couchdb-rpm
baseurl=http://apache.bintray.com/couchdb-rpm/el\$releasever/\$basearch/
gpgcheck=0
repo_gpgcheck=0
enabled=1
EOF'
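
Optionally, confirm that yum sees the new repository before installing; a quick check:

yum repolist | grep -i couchdb   # should list bintray--apache-couchdb-rpm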

Install couchdb:

sudo yum -y install couchdb
Failed to set locale, defaulting to C.UTF-8
Last metadata expiration check: 0:20:39 ago on Fri Oct 23 08:06:05 2020.
Dependencies resolved.
================================================================================
 Package     Arch       Version           Repository                       Size
================================================================================
Installing:
 couchdb     x86_64     3.1.1-1.el8       bintray--apache-couchdb-rpm      24 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 24 M
Installed size: 51 M
Downloading Packages:
couchdb-3.1.1-1.el8.x86_64.rpm                  2.6 MB/s |  24 MB     00:09    
--------------------------------------------------------------------------------
Total                                           2.6 MB/s |  24 MB     00:09     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1 
  Running scriptlet: couchdb-3.1.1-1.el8.x86_64                             1/1 
  Installing       : couchdb-3.1.1-1.el8.x86_64                             1/1 
  Running scriptlet: couchdb-3.1.1-1.el8.x86_64                             1/1 
  Verifying        : couchdb-3.1.1-1.el8.x86_64                             1/1 

Installed:
  couchdb-3.1.1-1.el8.x86_64                                                    

Complete!
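
The installation can be confirmed by querying the RPM database:

rpm -q couchdb   # prints the installed package name and version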

Start the couchdb service:

sudo systemctl start couchdb

Enable it:

sudo systemctl enable couchdb
Created symlink /etc/systemd/system/multi-user.target.wants/couchdb.service → /usr/lib/systemd/system/couchdb.service.

Check that it is running and that it is enabled:

sudo systemctl status couchdb
● couchdb.service - Apache CouchDB
   Loaded: loaded (/usr/lib/systemd/system/couchdb.service; enabled; vendor pre>
   Active: active (running) since Fri 2020-10-23 08:08:57 UTC; 1s ago
 Main PID: 9011 (beam.smp)
    Tasks: 36 (limit: 12523)
   Memory: 28.1M
   CGroup: /system.slice/couchdb.service
           ├─9011 /opt/couchdb/bin/../erts-9.3.3.14/bin/beam.smp -K true -A 16 >
           ├─9023 /opt/couchdb/bin/../erts-9.3.3.14/bin/epmd -daemon
           └─9042 erl_child_setup 65536

Oct 23 08:08:57 centos systemd[1]: Started Apache CouchDB.
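
For a scripted check without the systemctl pager, the two states can also be queried directly:

systemctl is-active couchdb    # prints "active" if the service is running
systemctl is-enabled couchdb   # prints "enabled" if it starts at boot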

Show its configuration:

  • the default configuration (default.ini),
cat /opt/couchdb/etc/default.ini
; Upgrading CouchDB will overwrite this file.
[vendor]
name = The Apache Software Foundation

[couchdb]
uuid = 
database_dir = ./data
view_index_dir = ./data
; util_driver_dir =
; plugin_dir =
os_process_timeout = 5000 ; 5 seconds. for view servers.
max_dbs_open = 500
; Method used to compress everything that is appended to database and view index files, except
; for attachments (see the attachments section). Available methods are:
;
; none         - no compression
; snappy       - use google snappy, a very fast compressor/decompressor
; deflate_N    - use zlib's deflate, N is the compression level which ranges from 1 (fastest,
;                lowest compression ratio) to 9 (slowest, highest compression ratio)
file_compression = snappy
; Higher values may give better read performance due to less read operations
; and/or more OS page cache hits, but they can also increase overall response
; time for writes when there are many attachment write requests in parallel.
attachment_stream_buffer_size = 4096
; Default security object for databases if not explicitly set
; everyone - same as couchdb 1.0, everyone can read/write
; admin_only - only admins can read/write
; admin_local - sharded dbs on :5984 are read/write for everyone,
;               local dbs on :5986 are read/write for admins only
default_security = admin_only
; btree_chunk_size = 1279
; maintenance_mode = false
; stem_interactive_updates = true
; uri_file =
; The speed of processing the _changes feed with doc_ids filter can be
; influenced directly with this setting - increase for faster processing at the
; expense of more memory usage.
changes_doc_ids_optimization_threshold = 100
; Maximum document ID length. Can be set to an integer or 'infinity'.
;max_document_id_length = infinity
;
; Limit maximum document size. Requests to create / update documents with a body
; size larger than this will fail with a 413 http error. This limit applies to
; requests which update a single document as well as individual documents from
; a _bulk_docs request. Since there is no canonical size of json encoded data,
; due to variabiliy in what is escaped or how floats are encoded, this limit is
; applied conservatively. For example 1.0e+16 could be encoded as 1e16, so 4 used
; for size calculation instead of 7.
max_document_size = 8000000 ; bytes
;
; Maximum attachment size.
; max_attachment_size = infinity
;
; Do not update the least recently used DB cache on reads, only writes
;update_lru_on_read = false
;
; The default storage engine to use when creating databases
; is set as a key into the [couchdb_engines] section.
default_engine = couch
;
; Enable this to only "soft-delete" databases when DELETE /{db} requests are
; made. This will place a .recovery directory in your data directory and
; move deleted databases/shards there instead. You can then manually delete
; these files later, as desired.
;enable_database_recovery = false
;
; Set the maximum size allowed for a partition. This helps users avoid
; inadvertently abusing partitions resulting in hot shards. The default
; is 10GiB. A value of 0 or less will disable partition size checks.
;max_partition_size = 10737418240
;
; When true, system databases _users and _replicator are created immediately
; on startup if not present.
;single_node = false

; Allow edits on the _security object in the user db. By default, it's disabled.
users_db_security_editable = false

[purge]
; Allowed maximum number of documents in one purge request
;max_document_id_number = 100
;
; Allowed maximum number of accumulated revisions in one purge request
;max_revisions_number = 1000
;
; Allowed durations when index is not updated for local purge checkpoint
; document. Default is 24 hours.
;index_lag_warn_seconds = 86400

[couchdb_engines]
; The keys in this section are the filename extension that
; the specified engine module will use. This is important so
; that couch_server is able to find an existing database without
; having to ask every configured engine.
couch = couch_bt_engine

[process_priority]
; Selectively disable altering process priorities for modules that request it.
; * NOTE: couch_server priority has been shown to lead to CouchDB hangs and
;     failures on Erlang releases 21.0 - 21.3.8.12 and 22.0 -> 22.2.4. Do not
;     enable when running with those versions.
;couch_server = false

[cluster]
q=2
n=3
; placement = metro-dc-a:2,metro-dc-b:1

; Supply a comma-delimited list of node names that this node should
; contact in order to join a cluster. If a seedlist is configured the ``_up``
; endpoint will return a 404 until the node has successfully contacted at
; least one of the members of the seedlist and replicated an up-to-date copy
; of the ``_nodes``, ``_dbs``, and ``_users`` system databases.
; seedlist = couchdb@node1.example.com,couchdb@node2.example.com

[chttpd]
; These settings affect the main, clustered port (5984 by default).
port = 5984
bind_address = 127.0.0.1
backlog = 512
socket_options = [{sndbuf, 262144}, {nodelay, true}]
server_options = [{recbuf, undefined}]
require_valid_user = false
; require_valid_user_except_for_up = false
; List of headers that will be kept when the header Prefer: return=minimal is included in a request.
; If Server header is left out, Mochiweb will add its own one in.
prefer_minimal = Cache-Control, Content-Length, Content-Range, Content-Type, ETag, Server, Transfer-Encoding, Vary
;
; Limit maximum number of databases when tying to get detailed information using
; _dbs_info in a request
max_db_number_for_dbs_info_req = 100

; set to true to delay the start of a response until the end has been calculated
;buffer_response = false

; authentication handlers
; authentication_handlers = {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}
; uncomment the next line to enable proxy authentication
; authentication_handlers = {chttpd_auth, proxy_authentication_handler}, {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}
; uncomment the next line to enable JWT authentication
; authentication_handlers = {chttpd_auth, jwt_authentication_handler}, {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}

; prevent non-admins from accessing /_all_dbs
; admin_only_all_dbs = true

;[jwt_auth]
; List of claims to validate
; required_claims =
;
; [jwt_keys]
; Configure at least one key here if using the JWT auth handler.
; If your JWT tokens do not include a "kid" attribute, use "_default"
; as the config key, otherwise use the kid as the config key.
; Examples
; hmac:_default = aGVsbG8=
; hmac:foo = aGVsbG8=
; The config values can represent symmetric and asymmetrics keys.
; For symmetrics keys, the value is base64 encoded;
; hmac:_default = aGVsbG8= # base64-encoded form of "hello"
; For asymmetric keys, the value is the PEM encoding of the public
; key with newlines replaced with the escape sequence \n.
; rsa:foo = -----BEGIN PUBLIC KEY-----\nMIIBIjAN...IDAQAB\n-----END PUBLIC KEY-----\n
; ec:bar = -----BEGIN PUBLIC KEY-----\nMHYwEAYHK...AzztRs\n-----END PUBLIC KEY-----\n

[couch_peruser]
; If enabled, couch_peruser ensures that a private per-user database
; exists for each document in _users. These databases are writable only
; by the corresponding user. Databases are in the following form:
; userdb-{hex encoded username}
enable = false
; If set to true and a user is deleted, the respective database gets
; deleted as well.
delete_dbs = false
; Set a default q value for peruser-created databases that is different from
; cluster / q
;q = 1
; prefix for user databases. If you change this after user dbs have been
; created, the existing databases won't get deleted if the associated user
; gets deleted because of the then prefix mismatch.
database_prefix = userdb-

[httpd]
port = 5986
bind_address = 127.0.0.1
authentication_handlers = {couch_httpd_auth, cookie_authentication_handler}, {couch_httpd_auth, default_authentication_handler}
secure_rewrites = true
allow_jsonp = false
; Options for the MochiWeb HTTP server.
;server_options = [{backlog, 128}, {acceptor_pool_size, 16}]
; For more socket options, consult Erlang's module 'inet' man page.
;socket_options = [{recbuf, undefined}, {sndbuf, 262144}, {nodelay, true}]
socket_options = [{sndbuf, 262144}]
enable_cors = false
enable_xframe_options = false
; CouchDB can optionally enforce a maximum uri length;
; max_uri_length = 8000
; changes_timeout = 60000
; config_whitelist = 
; max_uri_length = 
; rewrite_limit = 100
; x_forwarded_host = X-Forwarded-Host
; x_forwarded_proto = X-Forwarded-Proto
; x_forwarded_ssl = X-Forwarded-Ssl
; Maximum allowed http request size. Applies to both clustered and local port.
max_http_request_size = 4294967296 ; 4GB

; [httpd_design_handlers]
; _view = 

; [ioq]
; concurrency = 10
; ratio = 0.01

[ssl]
port = 6984

; [chttpd_auth]
; authentication_db = _users

; [chttpd_auth_cache]
; max_lifetime = 600000
; max_objects = 
; max_size = 104857600

; [mem3]
; nodes_db = _nodes
; shard_cache_size = 25000
; shards_db = _dbs
; sync_concurrency = 10

; [fabric]
; all_docs_concurrency = 10
; changes_duration = 
; shard_timeout_factor = 2
; uuid_prefix_len = 7
; request_timeout = 60000
; all_docs_timeout = 10000
; attachments_timeout = 60000
; view_timeout = 3600000
; partition_view_timeout = 3600000

; [rexi]
; buffer_count = 2000
; server_per_node = true
; stream_limit = 5
;
; Use a single message to kill a group of remote workers This is
; mostly is an upgrade clause to allow operating in a mixed cluster of
; 2.x and 3.x nodes. After upgrading switch to true to save some
; network bandwidth
;use_kill_all = false

; [global_changes]
; max_event_delay = 25
; max_write_delay = 500
; update_db = true

; [view_updater]
; min_writer_items = 100
; min_writer_size = 16777216

[couch_httpd_auth]
; WARNING! This only affects the node-local port (5986 by default).
; You probably want the settings under [chttpd].
authentication_db = _users
authentication_redirect = /_utils/session.html
require_valid_user = false
timeout = 600 ; number of seconds before automatic logout
auth_cache_size = 50 ; size is number of cache entries
allow_persistent_cookies = true ; set to false to disallow persistent cookies
iterations = 10 ; iterations for password hashing
; min_iterations = 1
; max_iterations = 1000000000
; password_scheme = pbkdf2
; proxy_use_secret = false
; comma-separated list of public fields, 404 if empty
; public_fields =
; secret = 
; users_db_public = false
; cookie_domain = example.com
; Set the SameSite cookie property for the auth cookie. If empty, the SameSite property is not set.
; same_site =

; CSP (Content Security Policy) Support for _utils
[csp]
enable = true
; header_value = default-src 'self'; img-src 'self'; font-src *; script-src 'self' 'unsafe-eval'; style-src 'self' 'unsafe-inline';

[cors]
credentials = false
; List of origins separated by a comma, * means accept all
; Origins must include the scheme: http://example.com
; You can't set origins: * and credentials = true at the same time.
;origins = *
; List of accepted headers separated by a comma
; headers =
; List of accepted methods
; methods =

; Configuration for a vhost
;[cors:http://example.com]
; credentials = false
; List of origins separated by a comma
; Origins must include the scheme: http://example.com
; You can't set origins: * and credentials = true at the same time.
;origins =
; List of accepted headers separated by a comma
; headers =
; List of accepted methods
; methods =

; Configuration for the design document cache
;[ddoc_cache]
; The maximum size of the cache in bytes
;max_size = 104857600 ; 100MiB
; The period each cache entry should wait before
; automatically refreshing in milliseconds
;refresh_timeout = 67000

[x_frame_options]
; Settings same-origin will return X-Frame-Options: SAMEORIGIN.
; If same origin is set, it will ignore the hosts setting
; same_origin = true
; Settings hosts will return X-Frame-Options: ALLOW-FROM https://example.com/
; List of hosts separated by a comma. * means accept all
; hosts =

[native_query_servers]
; erlang query server
; enable_erlang_query_server = false

; Changing reduce_limit to false will disable reduce_limit.
; If you think you're hitting reduce_limit with a "good" reduce function,
; please let us know on the mailing list so we can fine tune the heuristic.
[query_server_config]
; commit_freq = 5
reduce_limit = true
os_process_limit = 100
; os_process_idle_limit = 300
; os_process_soft_limit = 100
; Timeout for how long a response from a busy view group server can take.
; "infinity" is also a valid configuration value.
;group_info_timeout = 5000
;query_limit = 268435456
;partition_query_limit = 268435456

[mango]
; Set to true to disable the "index all fields" text index, which can lead
; to out of memory issues when users have documents with nested array fields.
;index_all_disabled = false
; Default limit value for mango _find queries.
;default_limit = 25
; Ratio between documents scanned and results matched that will
; generate a warning in the _find response. Setting this to 0 disables
; the warning.
;index_scan_warning_threshold = 10

[indexers]
couch_mrview = true

[feature_flags]
; This enables any database to be created as a partitioned databases (except system db's). 
; Setting this to false will stop the creation of paritioned databases.
; paritioned||allowed* = true will scope the creation of partitioned databases
; to databases with 'allowed' prefix.
partitioned||* = true

[uuids]
; Known algorithms:
;   random - 128 bits of random awesome
;     All awesome, all the time.
;   sequential - monotonically increasing ids with random increments
;     First 26 hex characters are random. Last 6 increment in
;     random amounts until an overflow occurs. On overflow, the
;     random prefix is regenerated and the process starts over.
;   utc_random - Time since Jan 1, 1970 UTC with microseconds
;     First 14 characters are the time in hex. Last 18 are random.
;   utc_id - Time since Jan 1, 1970 UTC with microseconds, plus utc_id_suffix string
;     First 14 characters are the time in hex. uuids/utc_id_suffix string value is appended to these.
algorithm = sequential
; The utc_id_suffix value will be appended to uuids generated by the utc_id algorithm.
; Replicating instances should have unique utc_id_suffix values to ensure uniqueness of utc_id ids.
utc_id_suffix =
# Maximum number of UUIDs retrievable from /_uuids in a single request
max_count = 1000

[attachments]
compression_level = 8 ; from 1 (lowest, fastest) to 9 (highest, slowest), 0 to disable compression
compressible_types = text/*, application/javascript, application/json, application/xml

[replicator]
; Random jitter applied on replication job startup (milliseconds)
startup_jitter = 5000
; Number of actively running replications
max_jobs = 500
;Scheduling interval in milliseconds. During each reschedule cycle
interval = 60000
; Maximum number of replications to start and stop during rescheduling.
max_churn = 20
; More worker processes can give higher network throughput but can also
; imply more disk and network IO.
worker_processes = 4
; With lower batch sizes checkpoints are done more frequently. Lower batch sizes
; also reduce the total amount of used RAM memory.
worker_batch_size = 500
; Maximum number of HTTP connections per replication.
http_connections = 20
; HTTP connection timeout per replication.
; Even for very fast/reliable networks it might need to be increased if a remote
; database is too busy.
connection_timeout = 30000
; Request timeout
;request_timeout = infinity
; If a request fails, the replicator will retry it up to N times.
retries_per_request = 5
; Use checkpoints
;use_checkpoints = true
; Checkpoint interval
;checkpoint_interval = 30000
; Some socket options that might boost performance in some scenarios:
;       {nodelay, boolean()}
;       {sndbuf, integer()}
;       {recbuf, integer()}
;       {priority, integer()}
; See the `inet` Erlang module's man page for the full list of options.
socket_options = [{keepalive, true}, {nodelay, false}]
; Path to a file containing the user's certificate.
;cert_file = /full/path/to/server_cert.pem
; Path to file containing user's private PEM encoded key.
;key_file = /full/path/to/server_key.pem
; String containing the user's password. Only used if the private keyfile is password protected.
;password = somepassword
; Set to true to validate peer certificates.
verify_ssl_certificates = false
; File containing a list of peer trusted certificates (in the PEM format).
;ssl_trusted_certificates_file = /etc/ssl/certs/ca-certificates.crt
; Maximum peer certificate depth (must be set even if certificate validation is off).
ssl_certificate_max_depth = 3
; Maximum document ID length for replication.
;max_document_id_length = infinity
; How much time to wait before retrying after a missing doc exception. This
; exception happens if the document was seen in the changes feed, but internal
; replication hasn't caught up yet, and fetching document's revisions
; fails. This a common scenario when source is updated while continous
; replication is running. The retry period would depend on how quickly internal
; replication is expected to catch up. In general this is an optimisation to
; avoid crashing the whole replication job, which would consume more resources
; and add log noise.
;missing_doc_retry_msec = 2000
; Wait this many seconds after startup before attaching changes listeners
; cluster_start_period = 5
; Re-check cluster state at least every cluster_quiet_period seconds
; cluster_quiet_period = 60

; List of replicator client authentication plugins to try. Plugins will be
; tried in order. The first to initialize successfully will be used for that
; particular endpoint (source or target). Normally couch_replicator_auth_noop
; would be used at the end of the list as a "catch-all". It doesn't do anything
; and effectively implements the previous behavior of using basic auth.
; There are currently two plugins available:
;   couch_replicator_auth_session - use _session cookie authentication
;   couch_replicator_auth_noop - use basic authentication (previous default)
; Currently, the new _session cookie authentication is tried first, before
; falling back to the old basic authenticaion default:
;auth_plugins = couch_replicator_auth_session,couch_replicator_auth_noop
; To restore the old behaviour, use the following value:
;auth_plugins = couch_replicator_auth_noop

; Force couch_replicator_auth_session plugin to refresh the session
; periodically if max-age is not present in the cookie. This is mostly to
; handle the case where anonymous writes are allowed to the database and a VDU
; function is used to forbid writes based on the authenticated user name. In
; that case this value should be adjusted based on the expected minimum session
; expiry timeout on replication endpoints. If session expiry results in a 401
; or 403 response this setting is not needed.
;session_refresh_interval_sec = 550

[log]
; Possible log levels:
;  debug
;  info
;  notice
;  warning, warn
;  error, err
;  critical, crit
;  alert
;  emergency, emerg
;  none
;
level = info
;
; Set the maximum log message length in bytes that will be
; passed through the writer
;
; max_message_size = 16000
;
;
; There are four different log writers that can be configured
; to write log messages. The default writes to stderr of the
; Erlang VM which is useful for debugging/development as well
; as a lot of container deployments.
;
; There's also a file writer that works with logrotate, a
; rsyslog writer for deployments that need to have logs sent
; over the network, and a journald writer that's more suitable
; when using systemd journald.
;
writer = stderr
; Journald Writer notes:
;
; The journald writer doesn't have any options. It still writes
; the logs to stderr, but without the timestamp prepended, since
; the journal will add it automatically, and with the log level
; formated as per
; https://www.freedesktop.org/software/systemd/man/sd-daemon.html
;
;
; File Writer Options:
;
; The file writer will check every 30s to see if it needs
; to reopen its file. This is useful for people that configure
; logrotate to move log files periodically.
;
; file = ./couch.log ; Path name to write logs to
;
; Write operations will happen either every write_buffer bytes
; or write_delay milliseconds. These are passed directly to the
; Erlang file module with the write_delay option documented here:
;
;     http://erlang.org/doc/man/file.html
;
; write_buffer = 0
; write_delay = 0
;
;
; Syslog Writer Options:
;
; The syslog writer options all correspond to their obvious
; counter parts in rsyslog nomenclature.
;
; syslog_host =
; syslog_port = 514
; syslog_appid = couchdb
; syslog_facility = local2

[stats]
; Stats collection interval in seconds. Default 10 seconds.
;interval = 10

[smoosh]
;
; More documentation on these is in the Automatic Compaction
; section of the documentation.
;
;db_channels = upgrade_dbs,ratio_dbs,slack_dbs
;view_channels = upgrade_views,ratio_views,slack_views
;
;[smoosh.ratio_dbs]
;priority = ratio
;min_priority = 2.0
;
;[smoosh.ratio_views]
;priority = ratio
;min_priority = 2.0
;
;[smoosh.slack_dbs]
;priority = slack
;min_priority = 16777216
;
;[smoosh.slack_views]
;priority = slack
;min_priority = 16777216

[ioq]
; The maximum number of concurrent in-flight IO requests that
concurrency = 10

; The fraction of the time that a background IO request will be selected
; over an interactive IO request when both queues are non-empty
ratio = 0.01

[ioq.bypass]
; System administrators can choose to submit specific classes of IO directly
; to the underlying file descriptor or OS process, bypassing the queues
; altogether. Installing a bypass can yield higher throughput and lower
; latency, but relinquishes some control over prioritization. The following
; classes are recognized with the following defaults:

; Messages on their way to an external process (e.g., couchjs) are bypassed
os_process = true

; Disk IO fulfilling interactive read requests is bypassed
read = true

; Disk IO required to update a database is bypassed
write = true

; Disk IO required to update views and other secondary indexes is bypassed
view_update = true

; Disk IO issued by the background replication processes that fix any
; inconsistencies between shard copies is queued
shard_sync = false

; Disk IO issued by compaction jobs is queued
compaction = false

[dreyfus]
; The name and location of the Clouseau Java service required to
; enable Search functionality.
; name = clouseau@127.0.0.1

; CouchDB will try to re-connect to Clouseau using a bounded
; exponential backoff with the following number of iterations.
; retry_limit = 5

; The default number of results returned from a global search query.
; limit = 25

; The default number of results returned from a search on a partition
; of a database.
; limit_partitions = 2000

; The maximum number of results that can be returned from a global
; search query (or any search query on a database without user-defined
; partitions). Attempts to set ?limit=N higher than this value will
; be rejected.
; max_limit = 200

; The maximum number of results that can be returned when searching
; a partition of a database. Attempts to set ?limit=N higher than this
; value will be rejected. If this config setting is not defined,
; CouchDB will use the value of `max_limit` instead. If neither is
; defined, the default is 2000 as stated here.
; max_limit_partitions = 2000

[reshard]
;max_jobs = 48
;max_history = 20
;max_retries = 1
;retry_interval_sec = 10
;delete_source = true
;update_shard_map_timeout_sec = 60
;source_close_timeout_sec = 600
;require_node_param = false
;require_range_param = false
  • the local configuration overrides (local.ini).
sudo cat /opt/couchdb/etc/local.ini
; CouchDB Configuration Settings

; Custom settings should be made in this file. They will override settings
; in default.ini, but unlike changes made to default.ini, this file won't be
; overwritten on server upgrade.

[couchdb]
;max_document_size = 4294967296 ; bytes
;os_process_timeout = 5000

[couch_peruser]
; If enabled, couch_peruser ensures that a private per-user database
; exists for each document in _users. These databases are writable only
; by the corresponding user. Databases are in the following form:
; userdb-{hex encoded username}
;enable = true
; If set to true and a user is deleted, the respective database gets
; deleted as well.
;delete_dbs = true
; Set a default q value for peruser-created databases that is different from
; cluster / q
;q = 1

[chttpd]
;port = 5984
;bind_address = 127.0.0.1
; Options for the MochiWeb HTTP server.
;server_options = [{backlog, 128}, {acceptor_pool_size, 16}]
; For more socket options, consult Erlang's module 'inet' man page.
;socket_options = [{sndbuf, 262144}, {nodelay, true}]

[httpd]
; NOTE that this only configures the "backend" node-local port, not the
; "frontend" clustered port. You probably don't want to change anything in
; this section.
; Uncomment next line to trigger basic-auth popup on unauthorized requests.
;WWW-Authenticate = Basic realm="administrator"

; Uncomment next line to set the configuration modification whitelist. Only
; whitelisted values may be changed via the /_config URLs. To allow the admin
; to change this value over HTTP, remember to include {httpd,config_whitelist}
; itself. Excluding it from the list would require editing this file to update
; the whitelist.
;config_whitelist = [{httpd,config_whitelist}, {log,level}, {etc,etc}]

[couch_httpd_auth]
; If you set this to true, you should also uncomment the WWW-Authenticate line
; above. If you don't configure a WWW-Authenticate header, CouchDB will send
; Basic realm="server" in order to prevent you getting logged out.
; require_valid_user = false

[ssl]
;enable = true
;cert_file = /full/path/to/server_cert.pem
;key_file = /full/path/to/server_key.pem
;password = somepassword
; set to true to validate peer certificates
;verify_ssl_certificates = false
; Set to true to fail if the client does not send a certificate. Only used if verify_ssl_certificates is true.
;fail_if_no_peer_cert = false
; Path to file containing PEM encoded CA certificates (trusted
; certificates used for verifying a peer certificate). May be omitted if
; you do not want to verify the peer.
;cacert_file = /full/path/to/cacertf
; The verification fun (optional) if not specified, the default
; verification fun will be used.
;verify_fun = {Module, VerifyFun}
; maximum peer certificate depth
;ssl_certificate_max_depth = 1
;
; Reject renegotiations that do not live up to RFC 5746.
;secure_renegotiate = true
; The cipher suites that should be supported.
; Can be specified in erlang format "{ecdhe_ecdsa,aes_128_cbc,sha256}"
; or in OpenSSL format "ECDHE-ECDSA-AES128-SHA256".
;ciphers = ["ECDHE-ECDSA-AES128-SHA256", "ECDHE-ECDSA-AES128-SHA"]
; The SSL/TLS versions to support
;tls_versions = [tlsv1, 'tlsv1.1', 'tlsv1.2']

; To enable Virtual Hosts in CouchDB, add a vhost = path directive. All requests to
; the Virual Host will be redirected to the path. In the example below all requests
; to http://example.com/ are redirected to /database.
; If you run CouchDB on a specific port, include the port number in the vhost:
; example.com:5984 = /database
[vhosts]
;example.com = /database/

; To create an admin account uncomment the '[admins]' section below and add a
; line in the format 'username = password'. When you next start CouchDB, it
; will change the password to a hash (so that your passwords don't linger
; around in plain-text files). You can add more admin accounts with more
; 'username = password' lines. Don't forget to restart CouchDB after
; changing this.
[admins]
;admin = mysecretpassword

Set a password for the admin account (this uncomments the admin entry at the end of local.ini):

sudo sed -i 's/;admin = mysecretpassword/admin = mypassword/g' /opt/couchdb/etc/local.ini
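
To confirm the substitution before restarting, you can grep the file; the password appears in clear text here until CouchDB hashes it on the next start, as the comments above explain:

sudo grep '^admin' /opt/couchdb/etc/local.ini   # should print: admin = mypassword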

Restart the couchdb service:

sudo systemctl restart couchdb

Call the CouchDB REST API with authentication:

curl -u admin:mypassword -X GET http://127.0.0.1:5984/_utils/
<!--
// Licensed under the Apache License, Version 2.0 (the "License"); you may not
// use this file except in compliance with the License. You may obtain a copy of
// the License at
//
//   http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations under
// the License.
-->
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <meta http-equiv="Content-Language" content="en" />
  <link rel="shortcut icon" type="image/png" href="dashboard.assets/img/couchdb-logo.png"/>
  <title>Project Fauxton</title>

  <!-- Application styles. -->
  <style>
    .noscript-warning {
      font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
      padding: 1px 30px 10px 30px;
      color: #fff;
      background: @brandHighlight;
      margin: 100px;
      box-shadow: 2px 2px 5px #989898;
    }
  </style>

<link href="dashboard.assets/css/styles.2ca2557452a177700f4c.css" rel="stylesheet"><link href="dashboard.assets/css/styles.bdfacd9ba862d16e41b9.css" rel="stylesheet"></head>

<body id="home">

  <noscript>
    <div class="noscript-warning">
      <h1>Please turn on JavaScript</h1>
      <p>Fauxton <strong>requires</strong> JavaScript to be enabled.</p>
    </div>
  </noscript>


  <div id="app"></div>

 <!-- Fauxton Release : 2020-09-11T22:46:54.927Z -->
<script type="text/javascript" src="dashboard.assets/js/manifest.583577db79221d5ae84e.js"></script><script type="text/javascript" src="dashboard.assets/js/vendor.2ca2557452a177700f4c.js"></script><script type="text/javascript" src="dashboard.assets/js/bundle.bdfacd9ba862d16e41b9.js"></script></body>
</html>
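
As a complementary check with the same credentials, the root endpoint returns a short JSON banner that includes the server version instead of the Fauxton HTML page:

curl -u admin:mypassword -X GET http://127.0.0.1:5984/
# expected to contain something like {"couchdb":"Welcome","version":"3.1.1",...}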

Now you know how to install CouchDB on CentOS 7 and verify its installation.

Installation_de_CouchDB__centos7___french_

Installing CouchDB [centos7]

CouchDB is a document-oriented NoSQL database. We will install CouchDB natively on CentOS 7.

Add a new yum repository for CouchDB:

sudo bash -c 'cat <<EOF> /etc/yum.repos.d/apache-couchdb.repo
[bintray--apache-couchdb-rpm]
name=bintray--apache-couchdb-rpm
baseurl=http://apache.bintray.com/couchdb-rpm/el\$releasever/\$basearch/
gpgcheck=0
repo_gpgcheck=0
enabled=1
EOF'
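
Optionally, confirm that yum sees the new repository before installing; a quick check:

yum repolist | grep -i couchdb   # should list bintray--apache-couchdb-rpm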

Install couchdb:

sudo yum -y install couchdb
Failed to set locale, defaulting to C.UTF-8
Last metadata expiration check: 0:20:39 ago on Fri Oct 23 08:06:05 2020.
Dependencies resolved.
================================================================================
 Package     Arch       Version           Repository                       Size
================================================================================
Installing:
 couchdb     x86_64     3.1.1-1.el8       bintray--apache-couchdb-rpm      24 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 24 M
Installed size: 51 M
Downloading Packages:
couchdb-3.1.1-1.el8.x86_64.rpm                  2.6 MB/s |  24 MB     00:09    
--------------------------------------------------------------------------------
Total                                           2.6 MB/s |  24 MB     00:09     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1 
  Running scriptlet: couchdb-3.1.1-1.el8.x86_64                             1/1 
  Installing       : couchdb-3.1.1-1.el8.x86_64                             1/1 
  Running scriptlet: couchdb-3.1.1-1.el8.x86_64                             1/1 
  Verifying        : couchdb-3.1.1-1.el8.x86_64                             1/1 

Installed:
  couchdb-3.1.1-1.el8.x86_64                                                    

Complete!

Start the couchdb service:

sudo systemctl start couchdb

Enable it:

sudo systemctl enable couchdb
Created symlink /etc/systemd/system/multi-user.target.wants/couchdb.service → /usr/lib/systemd/system/couchdb.service.

Check that it is running and that it is enabled:

sudo systemctl status couchdb
● couchdb.service - Apache CouchDB
   Loaded: loaded (/usr/lib/systemd/system/couchdb.service; enabled; vendor pre>
   Active: active (running) since Fri 2020-10-23 08:08:57 UTC; 1s ago
 Main PID: 9011 (beam.smp)
    Tasks: 36 (limit: 12523)
   Memory: 28.1M
   CGroup: /system.slice/couchdb.service
           ├─9011 /opt/couchdb/bin/../erts-9.3.3.14/bin/beam.smp -K true -A 16 >
           ├─9023 /opt/couchdb/bin/../erts-9.3.3.14/bin/epmd -daemon
           └─9042 erl_child_setup 65536

Oct 23 08:08:57 centos systemd[1]: Started Apache CouchDB.

Show its configuration:

  • the default configuration (default.ini),
cat /opt/couchdb/etc/default.ini
; Upgrading CouchDB will overwrite this file.
[vendor]
name = The Apache Software Foundation

[couchdb]
uuid = 
database_dir = ./data
view_index_dir = ./data
; util_driver_dir =
; plugin_dir =
os_process_timeout = 5000 ; 5 seconds. for view servers.
max_dbs_open = 500
; Method used to compress everything that is appended to database and view index files, except
; for attachments (see the attachments section). Available methods are:
;
; none         - no compression
; snappy       - use google snappy, a very fast compressor/decompressor
; deflate_N    - use zlib's deflate, N is the compression level which ranges from 1 (fastest,
;                lowest compression ratio) to 9 (slowest, highest compression ratio)
file_compression = snappy
; Higher values may give better read performance due to less read operations
; and/or more OS page cache hits, but they can also increase overall response
; time for writes when there are many attachment write requests in parallel.
attachment_stream_buffer_size = 4096
; Default security object for databases if not explicitly set
; everyone - same as couchdb 1.0, everyone can read/write
; admin_only - only admins can read/write
; admin_local - sharded dbs on :5984 are read/write for everyone,
;               local dbs on :5986 are read/write for admins only
default_security = admin_only
; btree_chunk_size = 1279
; maintenance_mode = false
; stem_interactive_updates = true
; uri_file =
; The speed of processing the _changes feed with doc_ids filter can be
; influenced directly with this setting - increase for faster processing at the
; expense of more memory usage.
changes_doc_ids_optimization_threshold = 100
; Maximum document ID length. Can be set to an integer or 'infinity'.
;max_document_id_length = infinity
;
; Limit maximum document size. Requests to create / update documents with a body
; size larger than this will fail with a 413 http error. This limit applies to
; requests which update a single document as well as individual documents from
; a _bulk_docs request. Since there is no canonical size of json encoded data,
; due to variabiliy in what is escaped or how floats are encoded, this limit is
; applied conservatively. For example 1.0e+16 could be encoded as 1e16, so 4 used
; for size calculation instead of 7.
max_document_size = 8000000 ; bytes
;
; Maximum attachment size.
; max_attachment_size = infinity
;
; Do not update the least recently used DB cache on reads, only writes
;update_lru_on_read = false
;
; The default storage engine to use when creating databases
; is set as a key into the [couchdb_engines] section.
default_engine = couch
;
; Enable this to only "soft-delete" databases when DELETE /{db} requests are
; made. This will place a .recovery directory in your data directory and
; move deleted databases/shards there instead. You can then manually delete
; these files later, as desired.
;enable_database_recovery = false
;
; Set the maximum size allowed for a partition. This helps users avoid
; inadvertently abusing partitions resulting in hot shards. The default
; is 10GiB. A value of 0 or less will disable partition size checks.
;max_partition_size = 10737418240
;
; When true, system databases _users and _replicator are created immediately
; on startup if not present.
;single_node = false

; Allow edits on the _security object in the user db. By default, it's disabled.
users_db_security_editable = false

[purge]
; Allowed maximum number of documents in one purge request
;max_document_id_number = 100
;
; Allowed maximum number of accumulated revisions in one purge request
;max_revisions_number = 1000
;
; Allowed durations when index is not updated for local purge checkpoint
; document. Default is 24 hours.
;index_lag_warn_seconds = 86400

[couchdb_engines]
; The keys in this section are the filename extension that
; the specified engine module will use. This is important so
; that couch_server is able to find an existing database without
; having to ask every configured engine.
couch = couch_bt_engine

[process_priority]
; Selectively disable altering process priorities for modules that request it.
; * NOTE: couch_server priority has been shown to lead to CouchDB hangs and
;     failures on Erlang releases 21.0 - 21.3.8.12 and 22.0 -> 22.2.4. Do not
;     enable when running with those versions.
;couch_server = false

[cluster]
q=2
n=3
; placement = metro-dc-a:2,metro-dc-b:1

; Supply a comma-delimited list of node names that this node should
; contact in order to join a cluster. If a seedlist is configured the ``_up``
; endpoint will return a 404 until the node has successfully contacted at
; least one of the members of the seedlist and replicated an up-to-date copy
; of the ``_nodes``, ``_dbs``, and ``_users`` system databases.
; seedlist = couchdb@node1.example.com,couchdb@node2.example.com

[chttpd]
; These settings affect the main, clustered port (5984 by default).
port = 5984
bind_address = 127.0.0.1
backlog = 512
socket_options = [{sndbuf, 262144}, {nodelay, true}]
server_options = [{recbuf, undefined}]
require_valid_user = false
; require_valid_user_except_for_up = false
; List of headers that will be kept when the header Prefer: return=minimal is included in a request.
; If Server header is left out, Mochiweb will add its own one in.
prefer_minimal = Cache-Control, Content-Length, Content-Range, Content-Type, ETag, Server, Transfer-Encoding, Vary
;
; Limit maximum number of databases when tying to get detailed information using
; _dbs_info in a request
max_db_number_for_dbs_info_req = 100

; set to true to delay the start of a response until the end has been calculated
;buffer_response = false

; authentication handlers
; authentication_handlers = {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}
; uncomment the next line to enable proxy authentication
; authentication_handlers = {chttpd_auth, proxy_authentication_handler}, {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}
; uncomment the next line to enable JWT authentication
; authentication_handlers = {chttpd_auth, jwt_authentication_handler}, {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}

; prevent non-admins from accessing /_all_dbs
; admin_only_all_dbs = true

;[jwt_auth]
; List of claims to validate
; required_claims =
;
; [jwt_keys]
; Configure at least one key here if using the JWT auth handler.
; If your JWT tokens do not include a "kid" attribute, use "_default"
; as the config key, otherwise use the kid as the config key.
; Examples
; hmac:_default = aGVsbG8=
; hmac:foo = aGVsbG8=
; The config values can represent symmetric and asymmetrics keys.
; For symmetrics keys, the value is base64 encoded;
; hmac:_default = aGVsbG8= # base64-encoded form of "hello"
; For asymmetric keys, the value is the PEM encoding of the public
; key with newlines replaced with the escape sequence \n.
; rsa:foo = -----BEGIN PUBLIC KEY-----\nMIIBIjAN...IDAQAB\n-----END PUBLIC KEY-----\n
; ec:bar = -----BEGIN PUBLIC KEY-----\nMHYwEAYHK...AzztRs\n-----END PUBLIC KEY-----\n

[couch_peruser]
; If enabled, couch_peruser ensures that a private per-user database
; exists for each document in _users. These databases are writable only
; by the corresponding user. Databases are in the following form:
; userdb-{hex encoded username}
enable = false
; If set to true and a user is deleted, the respective database gets
; deleted as well.
delete_dbs = false
; Set a default q value for peruser-created databases that is different from
; cluster / q
;q = 1
; prefix for user databases. If you change this after user dbs have been
; created, the existing databases won't get deleted if the associated user
; gets deleted because of the then prefix mismatch.
database_prefix = userdb-

[httpd]
port = 5986
bind_address = 127.0.0.1
authentication_handlers = {couch_httpd_auth, cookie_authentication_handler}, {couch_httpd_auth, default_authentication_handler}
secure_rewrites = true
allow_jsonp = false
; Options for the MochiWeb HTTP server.
;server_options = [{backlog, 128}, {acceptor_pool_size, 16}]
; For more socket options, consult Erlang's module 'inet' man page.
;socket_options = [{recbuf, undefined}, {sndbuf, 262144}, {nodelay, true}]
socket_options = [{sndbuf, 262144}]
enable_cors = false
enable_xframe_options = false
; CouchDB can optionally enforce a maximum uri length;
; max_uri_length = 8000
; changes_timeout = 60000
; config_whitelist = 
; max_uri_length = 
; rewrite_limit = 100
; x_forwarded_host = X-Forwarded-Host
; x_forwarded_proto = X-Forwarded-Proto
; x_forwarded_ssl = X-Forwarded-Ssl
; Maximum allowed http request size. Applies to both clustered and local port.
max_http_request_size = 4294967296 ; 4GB

; [httpd_design_handlers]
; _view = 

; [ioq]
; concurrency = 10
; ratio = 0.01

[ssl]
port = 6984

; [chttpd_auth]
; authentication_db = _users

; [chttpd_auth_cache]
; max_lifetime = 600000
; max_objects = 
; max_size = 104857600

; [mem3]
; nodes_db = _nodes
; shard_cache_size = 25000
; shards_db = _dbs
; sync_concurrency = 10

; [fabric]
; all_docs_concurrency = 10
; changes_duration = 
; shard_timeout_factor = 2
; uuid_prefix_len = 7
; request_timeout = 60000
; all_docs_timeout = 10000
; attachments_timeout = 60000
; view_timeout = 3600000
; partition_view_timeout = 3600000

; [rexi]
; buffer_count = 2000
; server_per_node = true
; stream_limit = 5
;
; Use a single message to kill a group of remote workers This is
; mostly is an upgrade clause to allow operating in a mixed cluster of
; 2.x and 3.x nodes. After upgrading switch to true to save some
; network bandwidth
;use_kill_all = false

; [global_changes]
; max_event_delay = 25
; max_write_delay = 500
; update_db = true

; [view_updater]
; min_writer_items = 100
; min_writer_size = 16777216

[couch_httpd_auth]
; WARNING! This only affects the node-local port (5986 by default).
; You probably want the settings under [chttpd].
authentication_db = _users
authentication_redirect = /_utils/session.html
require_valid_user = false
timeout = 600 ; number of seconds before automatic logout
auth_cache_size = 50 ; size is number of cache entries
allow_persistent_cookies = true ; set to false to disallow persistent cookies
iterations = 10 ; iterations for password hashing
; min_iterations = 1
; max_iterations = 1000000000
; password_scheme = pbkdf2
; proxy_use_secret = false
; comma-separated list of public fields, 404 if empty
; public_fields =
; secret = 
; users_db_public = false
; cookie_domain = example.com
; Set the SameSite cookie property for the auth cookie. If empty, the SameSite property is not set.
; same_site =

; CSP (Content Security Policy) Support for _utils
[csp]
enable = true
; header_value = default-src 'self'; img-src 'self'; font-src *; script-src 'self' 'unsafe-eval'; style-src 'self' 'unsafe-inline';

[cors]
credentials = false
; List of origins separated by a comma, * means accept all
; Origins must include the scheme: http://example.com
; You can't set origins: * and credentials = true at the same time.
;origins = *
; List of accepted headers separated by a comma
; headers =
; List of accepted methods
; methods =

; Configuration for a vhost
;[cors:http://example.com]
; credentials = false
; List of origins separated by a comma
; Origins must include the scheme: http://example.com
; You can't set origins: * and credentials = true at the same time.
;origins =
; List of accepted headers separated by a comma
; headers =
; List of accepted methods
; methods =

; Configuration for the design document cache
;[ddoc_cache]
; The maximum size of the cache in bytes
;max_size = 104857600 ; 100MiB
; The period each cache entry should wait before
; automatically refreshing in milliseconds
;refresh_timeout = 67000

[x_frame_options]
; Settings same-origin will return X-Frame-Options: SAMEORIGIN.
; If same origin is set, it will ignore the hosts setting
; same_origin = true
; Settings hosts will return X-Frame-Options: ALLOW-FROM https://example.com/
; List of hosts separated by a comma. * means accept all
; hosts =

[native_query_servers]
; erlang query server
; enable_erlang_query_server = false

; Changing reduce_limit to false will disable reduce_limit.
; If you think you're hitting reduce_limit with a "good" reduce function,
; please let us know on the mailing list so we can fine tune the heuristic.
[query_server_config]
; commit_freq = 5
reduce_limit = true
os_process_limit = 100
; os_process_idle_limit = 300
; os_process_soft_limit = 100
; Timeout for how long a response from a busy view group server can take.
; "infinity" is also a valid configuration value.
;group_info_timeout = 5000
;query_limit = 268435456
;partition_query_limit = 268435456

[mango]
; Set to true to disable the "index all fields" text index, which can lead
; to out of memory issues when users have documents with nested array fields.
;index_all_disabled = false
; Default limit value for mango _find queries.
;default_limit = 25
; Ratio between documents scanned and results matched that will
; generate a warning in the _find response. Setting this to 0 disables
; the warning.
;index_scan_warning_threshold = 10

[indexers]
couch_mrview = true

[feature_flags]
; This enables any database to be created as a partitioned databases (except system db's). 
; Setting this to false will stop the creation of paritioned databases.
; paritioned||allowed* = true will scope the creation of partitioned databases
; to databases with 'allowed' prefix.
partitioned||* = true

[uuids]
; Known algorithms:
;   random - 128 bits of random awesome
;     All awesome, all the time.
;   sequential - monotonically increasing ids with random increments
;     First 26 hex characters are random. Last 6 increment in
;     random amounts until an overflow occurs. On overflow, the
;     random prefix is regenerated and the process starts over.
;   utc_random - Time since Jan 1, 1970 UTC with microseconds
;     First 14 characters are the time in hex. Last 18 are random.
;   utc_id - Time since Jan 1, 1970 UTC with microseconds, plus utc_id_suffix string
;     First 14 characters are the time in hex. uuids/utc_id_suffix string value is appended to these.
algorithm = sequential
; The utc_id_suffix value will be appended to uuids generated by the utc_id algorithm.
; Replicating instances should have unique utc_id_suffix values to ensure uniqueness of utc_id ids.
utc_id_suffix =
# Maximum number of UUIDs retrievable from /_uuids in a single request
max_count = 1000

[attachments]
compression_level = 8 ; from 1 (lowest, fastest) to 9 (highest, slowest), 0 to disable compression
compressible_types = text/*, application/javascript, application/json, application/xml

[replicator]
; Random jitter applied on replication job startup (milliseconds)
startup_jitter = 5000
; Number of actively running replications
max_jobs = 500
;Scheduling interval in milliseconds. During each reschedule cycle
interval = 60000
; Maximum number of replications to start and stop during rescheduling.
max_churn = 20
; More worker processes can give higher network throughput but can also
; imply more disk and network IO.
worker_processes = 4
; With lower batch sizes checkpoints are done more frequently. Lower batch sizes
; also reduce the total amount of used RAM memory.
worker_batch_size = 500
; Maximum number of HTTP connections per replication.
http_connections = 20
; HTTP connection timeout per replication.
; Even for very fast/reliable networks it might need to be increased if a remote
; database is too busy.
connection_timeout = 30000
; Request timeout
;request_timeout = infinity
; If a request fails, the replicator will retry it up to N times.
retries_per_request = 5
; Use checkpoints
;use_checkpoints = true
; Checkpoint interval
;checkpoint_interval = 30000
; Some socket options that might boost performance in some scenarios:
;       {nodelay, boolean()}
;       {sndbuf, integer()}
;       {recbuf, integer()}
;       {priority, integer()}
; See the `inet` Erlang module's man page for the full list of options.
socket_options = [{keepalive, true}, {nodelay, false}]
; Path to a file containing the user's certificate.
;cert_file = /full/path/to/server_cert.pem
; Path to file containing user's private PEM encoded key.
;key_file = /full/path/to/server_key.pem
; String containing the user's password. Only used if the private keyfile is password protected.
;password = somepassword
; Set to true to validate peer certificates.
verify_ssl_certificates = false
; File containing a list of peer trusted certificates (in the PEM format).
;ssl_trusted_certificates_file = /etc/ssl/certs/ca-certificates.crt
; Maximum peer certificate depth (must be set even if certificate validation is off).
ssl_certificate_max_depth = 3
; Maximum document ID length for replication.
;max_document_id_length = infinity
; How much time to wait before retrying after a missing doc exception. This
; exception happens if the document was seen in the changes feed, but internal
; replication hasn't caught up yet, and fetching document's revisions
; fails. This a common scenario when source is updated while continous
; replication is running. The retry period would depend on how quickly internal
; replication is expected to catch up. In general this is an optimisation to
; avoid crashing the whole replication job, which would consume more resources
; and add log noise.
;missing_doc_retry_msec = 2000
; Wait this many seconds after startup before attaching changes listeners
; cluster_start_period = 5
; Re-check cluster state at least every cluster_quiet_period seconds
; cluster_quiet_period = 60

; List of replicator client authentication plugins to try. Plugins will be
; tried in order. The first to initialize successfully will be used for that
; particular endpoint (source or target). Normally couch_replicator_auth_noop
; would be used at the end of the list as a "catch-all". It doesn't do anything
; and effectively implements the previous behavior of using basic auth.
; There are currently two plugins available:
;   couch_replicator_auth_session - use _session cookie authentication
;   couch_replicator_auth_noop - use basic authentication (previous default)
; Currently, the new _session cookie authentication is tried first, before
; falling back to the old basic authenticaion default:
;auth_plugins = couch_replicator_auth_session,couch_replicator_auth_noop
; To restore the old behaviour, use the following value:
;auth_plugins = couch_replicator_auth_noop

; Force couch_replicator_auth_session plugin to refresh the session
; periodically if max-age is not present in the cookie. This is mostly to
; handle the case where anonymous writes are allowed to the database and a VDU
; function is used to forbid writes based on the authenticated user name. In
; that case this value should be adjusted based on the expected minimum session
; expiry timeout on replication endpoints. If session expiry results in a 401
; or 403 response this setting is not needed.
;session_refresh_interval_sec = 550

[log]
; Possible log levels:
;  debug
;  info
;  notice
;  warning, warn
;  error, err
;  critical, crit
;  alert
;  emergency, emerg
;  none
;
level = info
;
; Set the maximum log message length in bytes that will be
; passed through the writer
;
; max_message_size = 16000
;
;
; There are four different log writers that can be configured
; to write log messages. The default writes to stderr of the
; Erlang VM which is useful for debugging/development as well
; as a lot of container deployments.
;
; There's also a file writer that works with logrotate, a
; rsyslog writer for deployments that need to have logs sent
; over the network, and a journald writer that's more suitable
; when using systemd journald.
;
writer = stderr
; Journald Writer notes:
;
; The journald writer doesn't have any options. It still writes
; the logs to stderr, but without the timestamp prepended, since
; the journal will add it automatically, and with the log level
; formated as per
; https://www.freedesktop.org/software/systemd/man/sd-daemon.html
;
;
; File Writer Options:
;
; The file writer will check every 30s to see if it needs
; to reopen its file. This is useful for people that configure
; logrotate to move log files periodically.
;
; file = ./couch.log ; Path name to write logs to
;
; Write operations will happen either every write_buffer bytes
; or write_delay milliseconds. These are passed directly to the
; Erlang file module with the write_delay option documented here:
;
;     http://erlang.org/doc/man/file.html
;
; write_buffer = 0
; write_delay = 0
;
;
; Syslog Writer Options:
;
; The syslog writer options all correspond to their obvious
; counter parts in rsyslog nomenclature.
;
; syslog_host =
; syslog_port = 514
; syslog_appid = couchdb
; syslog_facility = local2

[stats]
; Stats collection interval in seconds. Default 10 seconds.
;interval = 10

[smoosh]
;
; More documentation on these is in the Automatic Compaction
; section of the documentation.
;
;db_channels = upgrade_dbs,ratio_dbs,slack_dbs
;view_channels = upgrade_views,ratio_views,slack_views
;
;[smoosh.ratio_dbs]
;priority = ratio
;min_priority = 2.0
;
;[smoosh.ratio_views]
;priority = ratio
;min_priority = 2.0
;
;[smoosh.slack_dbs]
;priority = slack
;min_priority = 16777216
;
;[smoosh.slack_views]
;priority = slack
;min_priority = 16777216

[ioq]
; The maximum number of concurrent in-flight IO requests that
concurrency = 10

; The fraction of the time that a background IO request will be selected
; over an interactive IO request when both queues are non-empty
ratio = 0.01

[ioq.bypass]
; System administrators can choose to submit specific classes of IO directly
; to the underlying file descriptor or OS process, bypassing the queues
; altogether. Installing a bypass can yield higher throughput and lower
; latency, but relinquishes some control over prioritization. The following
; classes are recognized with the following defaults:

; Messages on their way to an external process (e.g., couchjs) are bypassed
os_process = true

; Disk IO fulfilling interactive read requests is bypassed
read = true

; Disk IO required to update a database is bypassed
write = true

; Disk IO required to update views and other secondary indexes is bypassed
view_update = true

; Disk IO issued by the background replication processes that fix any
; inconsistencies between shard copies is queued
shard_sync = false

; Disk IO issued by compaction jobs is queued
compaction = false

[dreyfus]
; The name and location of the Clouseau Java service required to
; enable Search functionality.
; name = clouseau@127.0.0.1

; CouchDB will try to re-connect to Clouseau using a bounded
; exponential backoff with the following number of iterations.
; retry_limit = 5

; The default number of results returned from a global search query.
; limit = 25

; The default number of results returned from a search on a partition
; of a database.
; limit_partitions = 2000

; The maximum number of results that can be returned from a global
; search query (or any search query on a database without user-defined
; partitions). Attempts to set ?limit=N higher than this value will
; be rejected.
; max_limit = 200

; The maximum number of results that can be returned when searching
; a partition of a database. Attempts to set ?limit=N higher than this
; value will be rejected. If this config setting is not defined,
; CouchDB will use the value of `max_limit` instead. If neither is
; defined, the default is 2000 as stated here.
; max_limit_partitions = 2000

[reshard]
;max_jobs = 48
;max_history = 20
;max_retries = 1
;retry_interval_sec = 10
;delete_source = true
;update_shard_map_timeout_sec = 60
;source_close_timeout_sec = 600
;require_node_param = false
;require_range_param = false
  • the local configuration changes (local.ini).
sudo cat /opt/couchdb/etc/local.ini
; CouchDB Configuration Settings

; Custom settings should be made in this file. They will override settings
; in default.ini, but unlike changes made to default.ini, this file won't be
; overwritten on server upgrade.

[couchdb]
;max_document_size = 4294967296 ; bytes
;os_process_timeout = 5000

[couch_peruser]
; If enabled, couch_peruser ensures that a private per-user database
; exists for each document in _users. These databases are writable only
; by the corresponding user. Databases are in the following form:
; userdb-{hex encoded username}
;enable = true
; If set to true and a user is deleted, the respective database gets
; deleted as well.
;delete_dbs = true
; Set a default q value for peruser-created databases that is different from
; cluster / q
;q = 1

[chttpd]
;port = 5984
;bind_address = 127.0.0.1
; Options for the MochiWeb HTTP server.
;server_options = [{backlog, 128}, {acceptor_pool_size, 16}]
; For more socket options, consult Erlang's module 'inet' man page.
;socket_options = [{sndbuf, 262144}, {nodelay, true}]

[httpd]
; NOTE that this only configures the "backend" node-local port, not the
; "frontend" clustered port. You probably don't want to change anything in
; this section.
; Uncomment next line to trigger basic-auth popup on unauthorized requests.
;WWW-Authenticate = Basic realm="administrator"

; Uncomment next line to set the configuration modification whitelist. Only
; whitelisted values may be changed via the /_config URLs. To allow the admin
; to change this value over HTTP, remember to include {httpd,config_whitelist}
; itself. Excluding it from the list would require editing this file to update
; the whitelist.
;config_whitelist = [{httpd,config_whitelist}, {log,level}, {etc,etc}]

[couch_httpd_auth]
; If you set this to true, you should also uncomment the WWW-Authenticate line
; above. If you don't configure a WWW-Authenticate header, CouchDB will send
; Basic realm="server" in order to prevent you getting logged out.
; require_valid_user = false

[ssl]
;enable = true
;cert_file = /full/path/to/server_cert.pem
;key_file = /full/path/to/server_key.pem
;password = somepassword
; set to true to validate peer certificates
;verify_ssl_certificates = false
; Set to true to fail if the client does not send a certificate. Only used if verify_ssl_certificates is true.
;fail_if_no_peer_cert = false
; Path to file containing PEM encoded CA certificates (trusted
; certificates used for verifying a peer certificate). May be omitted if
; you do not want to verify the peer.
;cacert_file = /full/path/to/cacertf
; The verification fun (optional) if not specified, the default
; verification fun will be used.
;verify_fun = {Module, VerifyFun}
; maximum peer certificate depth
;ssl_certificate_max_depth = 1
;
; Reject renegotiations that do not live up to RFC 5746.
;secure_renegotiate = true
; The cipher suites that should be supported.
; Can be specified in erlang format "{ecdhe_ecdsa,aes_128_cbc,sha256}"
; or in OpenSSL format "ECDHE-ECDSA-AES128-SHA256".
;ciphers = ["ECDHE-ECDSA-AES128-SHA256", "ECDHE-ECDSA-AES128-SHA"]
; The SSL/TLS versions to support
;tls_versions = [tlsv1, 'tlsv1.1', 'tlsv1.2']

; To enable Virtual Hosts in CouchDB, add a vhost = path directive. All requests to
; the Virual Host will be redirected to the path. In the example below all requests
; to http://example.com/ are redirected to /database.
; If you run CouchDB on a specific port, include the port number in the vhost:
; example.com:5984 = /database
[vhosts]
;example.com = /database/

; To create an admin account uncomment the '[admins]' section below and add a
; line in the format 'username = password'. When you next start CouchDB, it
; will change the password to a hash (so that your passwords don't linger
; around in plain-text files). You can add more admin accounts with more
; 'username = password' lines. Don't forget to restart CouchDB after
; changing this.
[admins]
;admin = mysecretpassword

Change the default password of the admin account:

sudo sed -i 's/;admin = mysecretpassword/admin = mypassword/g' /opt/couchdb/etc/local.ini
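
To double-check that the substitution worked, you can display the modified line:

sudo grep '^admin' /opt/couchdb/etc/local.ini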

Restart the couchdb service:

sudo systemctl restart couchdb
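
Before calling the API, you can make sure the service came back up by checking its status and recent logs:

sudo systemctl status couchdb --no-pager
sudo journalctl -u couchdb --no-pager -n 20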

Call the CouchDB REST API with authentication:

curl -u admin:mypassword -X GET http://127.0.0.1:5984/_utils/
<!--
// Licensed under the Apache License, Version 2.0 (the "License"); you may not
// use this file except in compliance with the License. You may obtain a copy of
// the License at
//
//   http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations under
// the License.
-->
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <meta http-equiv="Content-Language" content="en" />
  <link rel="shortcut icon" type="image/png" href="dashboard.assets/img/couchdb-logo.png"/>
  <title>Project Fauxton</title>

  <!-- Application styles. -->
  <style>
    .noscript-warning {
      font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
      padding: 1px 30px 10px 30px;
      color: #fff;
      background: @brandHighlight;
      margin: 100px;
      box-shadow: 2px 2px 5px #989898;
    }
  </style>

<link href="dashboard.assets/css/styles.2ca2557452a177700f4c.css" rel="stylesheet"><link href="dashboard.assets/css/styles.bdfacd9ba862d16e41b9.css" rel="stylesheet"></head>

<body id="home">

  <noscript>
    <div class="noscript-warning">
      <h1>Please turn on JavaScript</h1>
      <p>Fauxton <strong>requires</strong> JavaScript to be enabled.</p>
    </div>
  </noscript>


  <div id="app"></div>

 <!-- Fauxton Release : 2020-09-11T22:46:54.927Z -->
<script type="text/javascript" src="dashboard.assets/js/manifest.583577db79221d5ae84e.js"></script><script type="text/javascript" src="dashboard.assets/js/vendor.2ca2557452a177700f4c.js"></script><script type="text/javascript" src="dashboard.assets/js/bundle.bdfacd9ba862d16e41b9.js"></script></body>
</html>
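
The _utils/ path returns the Fauxton HTML page shown above. For scripted checks, the JSON endpoints are usually more convenient; for example, with the same admin credentials (testdb is just an example database name):

curl -u admin:mypassword -X GET http://127.0.0.1:5984/
curl -u admin:mypassword -X PUT http://127.0.0.1:5984/testdb
curl -u admin:mypassword -X GET http://127.0.0.1:5984/_all_dbs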

You now know how to install CouchDB on Centos 7 and check its installation.

Surveiller_les_performances_des_conteneurs_Docker_avec_cAdvisor__english_

Monitor the performance of Docker containers with cAdvisor

cAdvisor (https://github.com/google/cadvisor) allows you to monitor the performance of a container. It displays information on CPU and memory usage. cAdvisor is written in Go. It captures the metrics that the docker stats command returns and aggregates them.
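
To see the kind of raw metrics that cAdvisor aggregates, you can run docker stats yourself, for example as a one-shot snapshot:

docker stats --no-stream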

Let’s launch cAdvisor in a Docker container:

docker run \
  -d \
  -v=/:/rootfs:ro \
  -v=/var/run:/var/run:rw \
  -v=/sys:/sys:ro \
  -v=/var/lib/docker/:/var/lib/docker:ro \
  -p=8080:8080 \
  --privileged \
  --name=cadvisor \
  google/cadvisor:latest
Unable to find image 'google/cadvisor:latest' locally
latest: Pulling from google/cadvisor

5c916c92: Pulling fs layer 
5bb65cdf: Pulling fs layer 
Digest: sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04
Status: Downloaded newer image for google/cadvisor:latest
f9c03c73404c04bf371fd501e5a9dab999ebddaf18084a91b4aafbf26459d37b

In order to work, cAdvisor must be given access to the necessary resources on the host. To give it full access to the host devices, the container is launched with the --privileged option.

To access cAdvisor, go to http://0.0.0.0:8080/ in a web browser.
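
cAdvisor also exposes its data over HTTP, which is handy for scripts; a minimal sketch, assuming the endpoints of the image version used here (check the cAdvisor documentation for your release):

curl http://localhost:8080/healthz
curl http://localhost:8080/api/v1.3/machine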

cAdvisor displays the following sections:

  • Overview,
  • Processes,
  • CPU,
  • Memory.

cAdvisor

The summary page

This web page displays the following sections:

  • The Overview section displays gauges to indicate if the resources have reached their limits.
  • The Processes section displays, in tabular form, information similar to that of the ps aux, docker ps and docker top commands; to sort the processes, click on the header of the corresponding column. The columns are:
    • User: this is the user who runs the process,
    • PID: this is the ID of the process,
    • PPID: this is the PID of the parent process,
    • Start Time: this is the time at which the process started,
    • CPU %: this is the percentage of CPU consumed,
    • MEM %: this is the percentage of RAM consumed,
    • RSS: this is the amount of main memory consumed,
    • Virtual Size: this is the amount of virtual memory consumed,
    • Status: this is the current state of the process (standard Linux status codes),
    • Running Time: this is the process execution time,
    • Command: this is the command executed by the process,
    • Container: this is the container to which the process is attached.
  • The CPU section shows the CPU usage with the columns:
    • Total Usage: this is the aggregate usage of all cores,
    • Usage per Core: this is a breakdown of usage per core,
    • Usage Breakdown: this is the aggregated usage on all cores but distributed between what is used by the kernel and by the user’s processes.
  • The Memory section is divided into two parts:
    • Total Usage: the total amount of memory used by all processes of the host or container; it is equal to Hot Memory + Cold Memory;
      • the Hot Memory corresponds to the pages that have been recently touched by the kernel;
      • the Cold Memory corresponds to the pages that have not been touched for a certain period of time and that could be reclaimed if necessary.
    • Usage Breakdown: it gives a visual representation of Total Usage and Hot Memory.
  • The Network section displays:
    • Throughput: shows incoming and outgoing traffic over the last minute;
    • Errors: these are network errors, so this graph should be flat.
  • The Filesystem section shows a breakdown of filesystem usage.
  • The Subcontainers section shows the top CPU usage and the top memory usage.

The container statistics page

At the top of the page is a link to the running containers to view their statistics.

cAdvisor

The Docker Containers page

At the top of the page there is a link Docker Containers to display statistics about the Docker host.

cAdvisor

This page contains the following sections:

  • The Subcontainers section displays a list of clickable containers. By clicking on the name, cAdvisor displays details on:
    • the isolation (Isolation):
      • CPU: these are the CPU allocations of the container; if there are no resource limits, information about the host CPU is displayed,
      • Memory: these are the memory allocations of the container,
    • usage (Usage):
      • Overview: these are the gauges that allow you to see if you are approaching the resource limits,
      • Processes: these are the processes of the container,
      • CPU: these are the CPU usage graphs for the container,
      • Memory: this is the container’s memory usage.
  • The Driver Status section displays:
    • the basic statistics of the main Docker process,
    • information about the host kernel,
    • the name of the host,
    • the operating system used,
    • the total number of containers and images (the number of images counts each file system as an individual image).
  • The Images section displays the list of Docker images available on the host: repository, tag, size, creation date, and image ID.

You now know how to obtain a large amount of very useful statistics on Docker containers, allowing you to diagnose problems and optimize container performance.

Automatiser_la_mise_à_jour_des_conteneurs_Docker__english_

Automate Docker Container Update with WatchTower

Updating Docker containers can require significant effort. There are of course solutions to automate these updates. Here we look at the solution proposed by the Docker WatchTower image (https://github.com/v2tec/watchtower).

WatchTower is a Docker container that monitors running containers and updates them when a new version of their image is published.

The principle is:

  • you run the WatchTower container,
  • you create a Docker image,
  • you push it to Docker Hub,
  • you update this Docker image,
  • you push it to Docker Hub,
  • WatchTower stops the container of the image you have created, updates the image and then restarts the container.

Launch of Watchtower

Create the WatchTower container to monitor running containers:

docker run -d --name watchtower --restart always -v /var/run/docker.sock:/var/run/docker.sock v2tec/watchtower -i 30
b4417755c0c4c2d71ce50e48bf2cae0a30b9c133e40791536ac1ecd9d84a5ad8

The -i option sets how often WatchTower checks for updates (here, every 30 seconds).
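
By default, WatchTower watches every running container. To restrict it to specific containers, their names can be passed as extra arguments; a sketch, to be used instead of the command above, assuming you only want to watch a container named auto-update (created later in this article):

docker run -d --name watchtower --restart always -v /var/run/docker.sock:/var/run/docker.sock v2tec/watchtower -i 30 auto-update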

Show the logs of this container at launch:

docker logs watchtower
time="2020-11-12T13:33:23Z" level=info msg="First run: 2020-11-12 13:33:53 +0000 UTC" 

Logging in to Docker Hub

Log in to Docker Hub with your credentials ($HUB_USERNAME and $HUB_PASSWORD):

docker login -u $HUB_USERNAME -p $HUB_PASSWORD
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
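
As the warning suggests, passing the password on the command line is insecure; the same login can be done with --password-stdin:

echo "$HUB_PASSWORD" | docker login -u "$HUB_USERNAME" --password-stdin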

Creating and launching a first Docker image

Create a first auto-update image based on the devopstestlab/nginx-helloworld image, which serves a web page via an nginx web server:

cat <<EOF> Dockerfile
FROM devopstestlab/nginx-helloworld
ADD index.html /usr/share/nginx/html/
EOF

Change the default web page so that it displays Helloworld v1 !:

cat <<EOF> index.html
<html>
<body>
<p>Helloworld v1 !</p>
</body>
</html>
EOF

Create the Docker image:

docker build -t devopstestlab/auto-update .
Sending build context to Docker daemon  4.169MB
Step 1/2 : FROM devopstestlab/nginx-helloworld
latest: Pulling from devopstestlab/nginx-helloworld
Digest: sha256:3385ecc3c20a714d257c59c8cbffc7d6e681a6b38c34db6dd43beb48197f3f80
Status: Downloaded newer image for devopstestlab/nginx-helloworld:latest
 ---> 9468dea24c09
Step 2/2 : ADD index.html /usr/share/nginx/html/
 ---> Using cache
 ---> eee8df36d30e
Successfully built eee8df36d30e
Successfully tagged devopstestlab/auto-update:latest

Push it to Docker Hub:

docker push devopstestlab/auto-update
The push refers to repository [docker.io/devopstestlab/auto-update]

e1b56fad: Preparing 
076e3049: Preparing 
e5cf1923: Preparing 
cf4d16de: Preparing 
latest: digest: sha256:6ca819c5e3eca39bbf037995052b103e53253b9efd3dc0dffe4e311d9e15fa79 size: 1360

Launch a Docker container from this image:

docker run -d --rm --name auto-update -p 80:80 devopstestlab/auto-update
0d538753b84d31a4ae8efbfc8d2476e268b9d7db49e7b79eae17e7486a57cfb4

Check that the Docker container serves the web page with the message Helloworld v1 !:

curl localhost:80
<html>
<body>
<p>Helloworld v1 !</p>
</body>
</html>

Updating the Docker image

Modify the Docker image web page to display the message Helloworld v2 !:

cat <<EOF> index.html
<html>
<body>
<p>Helloworld v2 !</p>
</body>
</html>
EOF

Create the new image:

docker build -t devopstestlab/auto-update .
Sending build context to Docker daemon  4.169MB
Step 1/2 : FROM devopstestlab/nginx-helloworld
 ---> 9468dea24c09
Step 2/2 : ADD index.html /usr/share/nginx/html/
 ---> 840abf8ebb48
Successfully built 840abf8ebb48
Successfully tagged devopstestlab/auto-update:latest

Push the new version of the image to Docker Hub:

docker push devopstestlab/auto-update
The push refers to repository [docker.io/devopstestlab/auto-update]

593359df: Preparing 
076e3049: Preparing 
e5cf1923: Preparing 
cf4d16de: Preparing 
latest: digest: sha256:1514ae398a965c46301c45dd268d1ec2091facabbbe5caf90cd730f6320a4891 size: 1360

Display the contents of the web page of the auto-update container:

curl localhost:80
<html>
<body>
<p>Helloworld v1 !</p>
</body>
</html>

The first version is still displayed (Helloworld v1 !).

Wait a few moments and then try again. At some point, the container is stopped:

curl localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused

Then it is updated and restarted:

curl localhost:80
<html>
<body>
<p>Helloworld v2 !</p>
</body>
</html>

The new version is now running (Helloworld v2 !).
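
To confirm that the running container is now based on the new image, you can compare its image ID with that of the freshly built image:

docker inspect --format '{{.Image}}' auto-update
docker images devopstestlab/auto-update --digests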

Now we can find these events in the logs of the `watchtower’ container:

docker logs watchtower
time="2020-11-12T13:33:23Z" level=info msg="First run: 2020-11-12 13:33:53 +0000 UTC" 
time="2020-11-12T13:34:24Z" level=info msg="Found new devopstestlab/auto-update:latest image (sha256:840abf8ebb48bc5f589548f5d54277a933ad37da0031ecf69f310ba46e9ca83b)" 
time="2020-11-12T13:34:25Z" level=info msg="Stopping /auto-update (0d538753b84d31a4ae8efbfc8d2476e268b9d7db49e7b79eae17e7486a57cfb4) with SIGTERM" 
time="2020-11-12T13:34:26Z" level=info msg="Creating /auto-update" 

You now know how to automate the update of Docker containers from their images on Docker Hub.
