Merge pull request #34 from gingerwizard/master
Support for Node configuration, Roles and Multiple Nodes Per Machine
Commit 6bbf380712
42 changed files with 1311 additions and 118 deletions
2 .gitignore vendored
@@ -4,3 +4,5 @@
.vendor
.bundle
Converging
TODO
.idea/
18 .kitchen.yml
@@ -26,12 +26,16 @@ platforms:
- echo 'deb http://http.debian.net/debian/ wheezy-backports main' >> /etc/apt/sources.list
- apt-get update
- apt-get install -y -q ansible
- apt-get install -y -q net-tools
use_sudo: false
- name: debian-8
driver_config:
image: electrical/debian:8
privileged: true
provision_command: apt-get -y -q install ansible
provision_command:
- apt-get update
- apt-get install -y -q ansible
- apt-get install -y -q net-tools
use_sudo: false
- name: centos-6
driver_config:
@@ -46,8 +50,6 @@ platforms:
- sed -ri 's/^#?PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config
- sed -ri 's/^#?UsePAM .*/UsePAM no/' /etc/ssh/sshd_config
- yum -y install initscripts
# - BUSSER_ROOT="/tmp/verifier" GEM_HOME="/tmp/verifier/gems" GEM_PATH="/tmp/verifier/gems" GEM_CACHE="/tmp/verifier/gems/cache" gem install --no-rdoc --no-ri rake
# - chown kitchen:kitchen /tmp/verifier -R
- yum clean all
run_command: "/usr/sbin/init"
privileged: true
@@ -64,4 +66,14 @@ suites:
attributes:
provisioner:
playbook: test/integration/package.yml
- name: config
run_list:
attributes:
provisioner:
playbook: test/integration/config.yml
- name: multi
run_list:
attributes:
provisioner:
playbook: test/integration/multi.yml
163 README.md
@@ -1,11 +1,17 @@
# ansible-elasticsearch

Ansible playbook / roles / tasks for Elasticsearch. Currently it will work on Debian and RedHat based linux systems.
Ansible playbook / roles / tasks for Elasticsearch. Currently it will work on Debian and RedHat based linux systems. Tested platforms are:

* Ubuntu 1404
* Debian 7
* Debian 8
* Centos 6
* Centos 7

## Usage

Create your ansible playbook with your own tasks, and include the role elasticsearch.
You will have to have this repository accessible within the context of playbook, e.g.
Create your Ansible playbook with your own tasks, and include the role elasticsearch. You will have to have this repository accessible within the context of the playbook, e.g.

e.g.
@@ -17,51 +23,126 @@ mkdir -p roles
ln -s /my/repos/ansible-elasticsearch ./roles/elasticsearch
```

Then create your playbook yaml adding the role elasticsearch and overriding any variables you wish. It can be as simple as this to take all the defaults:
Then create your playbook yaml adding the role elasticsearch. By default, the user is only required to specify a unique es_instance_name per role application.

The simplest configuration therefore consists of:

```
---
hosts: my_host
- name: Simple Example
hosts: localhost
roles:
- elasticsearch
tasks:
- .... your tasks ...
```

or more complex..

```
---
hosts: my_host
roles:
- elasticsearch
- { role: elasticsearch, es_instance_name: "node1" }
vars:
java_packages:
- "oracle-java7-installer"
es_major_version: 1.4
es_version: 1.4.4
```
All Elasticsearch configuration parameters are supported. This is achieved using a configuration map parameter 'es_config' which is serialized into the elasticsearch.yml file.
The use of a map ensures the Ansible playbook does not need to be updated to reflect new/deprecated/plugin configuration parameters.

In addition to the es_config map, several other parameters are supported for additional functions e.g. script installation. These can be found in the role's defaults/main.yml file.
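To make the serialization concrete, here is a minimal sketch (the values are hypothetical, not role defaults): an es_config map supplied to the role is rendered into the instance's elasticsearch.yml by the elasticsearch.yml.j2 template, essentially by passing the map through to_nice_yaml.

```
# playbook vars (hypothetical values, for illustration only)
es_config:
  cluster.name: "custom-cluster"
  http.port: 9201
  node.data: false

# resulting excerpt of the instance's elasticsearch.yml
cluster.name: custom-cluster
http.port: 9201
node.data: false
```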

The following illustrates applying configuration parameters to an Elasticsearch instance.

```
- name: Elasticsearch with custom configuration
  hosts: localhost
  roles:
    #expand to all available parameters
    - { role: elasticsearch, es_instance_name: "node1", es_data_dir: "/opt/elasticsearch/data", es_log_dir: "/opt/elasticsearch/logs", es_work_dir: "/opt/elasticsearch/temp", es_config: {node.name: "node1", cluster.name: "custom-cluster", discovery.zen.ping.unicast.hosts: "localhost:9301", http.port: 9201, transport.tcp.port: 9301, node.data: false, node.master: true, bootstrap.mlockall: true, discovery.zen.ping.multicast.enabled: false } }
  vars:
    es_scripts: false
    es_templates: false
    es_version_lock: false
    es_heap_size: 1g
```

The playbook utilises Elasticsearch version defaults. By default, therefore, multicast is enabled for 1.x. If disabled, the user is required to specify the following additional parameters:

1. es_config['http.port'] - the http port for the node
2. es_config['transport.tcp.port'] - the transport port for the node
3. es_config['discovery.zen.ping.unicast.hosts'] - the unicast discovery list, in the comma separated format "<host>:<port>,<host>:<port>" (typically the cluster's dedicated masters)
If multicast is left enabled, the ports will be auto defined and node discovery will be performed using multicast.

A more complex example:
```
---
- name: Elasticsearch with custom configuration
hosts: localhost
roles:
#expand to all available parameters
- { role: elasticsearch, es_instance_name: "node1", es_data_dir: "/opt/elasticsearch/data", es_log_dir: "/opt/elasticsearch/logs", es_work_dir: "/opt/elasticsearch/temp", es_config: {node.name: "node1", cluster.name: "custom-cluster", discovery.zen.ping.unicast.hosts: "localhost:9301", http.port: 9201, transport.tcp.port: 9301, node.data: false, node.master: true, bootstrap.mlockall: true, discovery.zen.ping.multicast.enabled: false } }
vars:
es_scripts: false
es_templates: false
es_version_lock: false
es_heap_size: 1g
es_scripts: false
es_templates: false
es_version_lock: false
es_start_service: false
es_plugins_reinstall: false
es_plugins:
- plugin: elasticsearch/elasticsearch-cloud-aws
version: 2.5.0
- plugin: elasticsearch/marvel
version: latest
- plugin: elasticsearch/license
version: latest
- plugin: elasticsearch/shield
version: latest
- plugin: elasticsearch/elasticsearch-support-diagnostics
version: latest
- plugin: lmenezes/elasticsearch-kopf
version: master
tasks:
- .... your tasks ...
- plugin: elasticsearch/elasticsearch-cloud-aws
version: 2.5.0
- plugin: elasticsearch/marvel
version: latest
- plugin: elasticsearch/license
version: latest
- plugin: elasticsearch/shield
version: latest
- plugin: elasticsearch/elasticsearch-support-diagnostics
version: latest
- plugin: lmenezes/elasticsearch-kopf
version: master
```
Make sure your hosts are defined in your ```hosts``` file with the appropriate ```ansible_ssh_host```, ```ansible_ssh_user``` and ```ansible_ssh_private_key_file``` values.
The application of a role results in the installation of a node on a host. Multiple roles equate to multiple nodes for a single host.

In any multi-node cluster configuration it is recommended the user list the master-eligible roles first, especially if these are used as the unicast hosts off which other nodes are 'booted'.

An example of a two-server deployment, with 1 node on one server and 2 nodes on the other. The first server holds the master and is thus declared first.
```
---

- hosts: master_nodes
  roles:
    # one master per host
    - { role: elasticsearch, es_instance_name: "node1", es_heap_size: "1g", es_config: { "discovery.zen.ping.multicast.enabled": false, discovery.zen.ping.unicast.hosts: "elastic02:9300", http.port: 9200, transport.tcp.port: 9300, node.data: false, node.master: true, bootstrap.mlockall: false, discovery.zen.ping.multicast.enabled: false } }
  vars:
    es_scripts: false
    es_templates: false
    es_version_lock: false
    es_cluster_name: test-cluster
    ansible_user: ansible
    es_plugins:
      - plugin: elasticsearch/license
        version: latest

- hosts: data_nodes
  roles:
    # two nodes per host
    - { role: elasticsearch, es_instance_name: "node1", es_data_dir: "/opt/elasticsearch", es_config: { "discovery.zen.ping.multicast.enabled": false, discovery.zen.ping.unicast.hosts: "elastic02:9300", http.port: 9200, transport.tcp.port: 9300, node.data: true, node.master: false, bootstrap.mlockall: false, discovery.zen.ping.multicast.enabled: false } }
    - { role: elasticsearch, es_instance_name: "node2", es_config: { "discovery.zen.ping.multicast.enabled": false, discovery.zen.ping.unicast.hosts: "elastic02:9300", http.port: 9201, transport.tcp.port: 9301, node.data: true, node.master: false, bootstrap.mlockall: false, discovery.zen.ping.multicast.enabled: false } }
  vars:
    es_scripts: false
    es_templates: false
    es_version_lock: false
    es_cluster_name: test-cluster
    ansible_user: ansible
    es_plugins:
      - plugin: elasticsearch/license
        version: latest
```
Parameters can additionally be assigned to hosts using the inventory file if desired.

Make sure your hosts are defined in your ```inventory``` file with the appropriate ```ansible_ssh_host```, ```ansible_ssh_user``` and ```ansible_ssh_private_key_file``` values.
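As an illustration only, a minimal inventory matching the two-server example above might look like the following; the hostnames, addresses, user and key path are assumptions, not values shipped with the role:

```
[master_nodes]
elastic02 ansible_ssh_host=192.168.1.11 ansible_ssh_user=ansible ansible_ssh_private_key_file=~/.ssh/id_rsa

[data_nodes]
elastic03 ansible_ssh_host=192.168.1.12 ansible_ssh_user=ansible ansible_ssh_private_key_file=~/.ssh/id_rsa
```

Host- or group-level variables (for example es_heap_size) can also be assigned in this file, as noted above.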
Then run it:

@@ -88,3 +169,11 @@ Following variables affect the versions installed:
* ```java_repos``` (an array of repositories to be added to allow java to be installed)
* ```java_packages``` (an array of packages to be installed to get Java installed)

## Notes

* The role assumes the user/group exists on the server. The elasticsearch packages create the default elasticsearch user. If this needs to be changed, ensure the user exists.
* The playbook relies on the inventory_hostname of each host to ensure its directories are unique (see the sketch after this list)
* Systemd scripts are by default installed in addition to init scripts - with the exception of Debian 8. This is pending improvement and currently relies on the user to determine the preferred mechanism.
* Changing an instance_name for a role application will result in the installation of a new component. The previous component will remain.
* KitchenCI has been used for testing. This is used to confirm images reach the correct state after a play is first applied.
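A sketch of the resulting layout (host and instance names here are hypothetical; the pattern follows the facts set in tasks/elasticsearch-config.yml and the default directories in defaults/main.yml): a host with inventory_hostname ```elastic02``` running an instance named ```node1``` ends up with paths such as:

```
/etc/elasticsearch/node1/elasticsearch.yml   # per-instance configuration directory
/var/lib/elasticsearch/elastic02-node1/      # data dir: es_data_dir/{{inventory_hostname}}-{{es_instance_name}}
/var/log/elasticsearch/elastic02-node1/      # log dir
/tmp/elasticsearch/elastic02-node1/          # work dir
/var/run/elasticsearch/elastic02-node1/      # pid dir
/etc/init.d/node1_elasticsearch              # per-instance init script / service name
```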
@@ -1,2 +1 @@
[defaults]
roles_path = ../
[defaults]
@@ -1,6 +1,6 @@
---
es_major_version: "1.7"
es_version: "1.7.1"
es_version: "1.7.3"
es_version_lock: false
es_use_repository: true
es_start_service: true

@@ -9,3 +9,9 @@ es_scripts: false
es_templates: false
es_user: elasticsearch
es_group: elasticsearch
es_config: {}
#Need to provide default directories
es_pid_dir: "/var/run/elasticsearch"
es_data_dir: "/var/lib/elasticsearch"
es_log_dir: "/var/log/elasticsearch"
es_work_dir: "/tmp/elasticsearch"
9 elasticsearch.iml Normal file
@@ -0,0 +1,9 @@
<?xml version="1.0" encoding="UTF-8"?>
<module type="RUBY_MODULE" version="4">
  <component name="NewModuleRootManager" inherit-compiler-output="true">
    <exclude-output />
    <content url="file://$MODULE_DIR$" />
    <orderEntry type="inheritedJdk" />
    <orderEntry type="sourceFolder" forTests="false" />
  </component>
</module>
1 files/scripts/calculate-score.groovy Normal file
@@ -0,0 +1 @@
log(_score * 2) + my_modifier
11 files/templates/basic.json Normal file
@@ -0,0 +1,11 @@
{
    "template" : "te*",
    "settings" : {
        "number_of_shards" : 1
    },
    "mappings" : {
        "type1" : {
            "_source" : { "enabled" : false }
        }
    }
}
4 handlers/main.yml Normal file
@@ -0,0 +1,4 @@

- name: restart elasticsearch
  service: name={{instance_init_script | basename}} state=restarted enabled=yes
  when: es_start_service and not elasticsearch_started.changed
@@ -1,8 +1,9 @@
---
dependencies: []

allow_duplicates: yes

galaxy_info:
  author: Robin Clarke
  author: Robin Clarke, Jakob Reiter, Dale McDiarmid
  description: Elasticsearch for Linux
  company: "Elastic.co"
  license: "license (Apache)"
19 tasks/checkParameters.yml Normal file
@@ -0,0 +1,19 @@
# Check for mandatory parameters

- fail: msg="es_instance_name must be specified"
  when: es_instance_name is not defined

- set_fact: multi_cast={{ (es_version | version_compare('2.0', '<') and es_config['discovery.zen.ping.multicast.enabled'] is not defined) or es_config['discovery.zen.ping.multicast.enabled']}}

- fail: msg="Parameter 'http.port' must be defined when multicast is disabled"
  when: not multi_cast and es_config['http.port'] is not defined

- fail: msg="Parameter 'transport.tcp.port' must be defined when multicast is disabled"
  when: not multi_cast and es_config['transport.tcp.port'] is not defined

- fail: msg="Parameter 'discovery.zen.ping.unicast.hosts' must be defined when multicast is disabled"
  when: not multi_cast and es_config['discovery.zen.ping.unicast.hosts'] is not defined

#If the user attempts to lock memory they must specify a heap size
- fail: msg="If locking memory with bootstrap.mlockall a heap size must be specified"
  when: es_config['bootstrap.mlockall'] is defined and es_config['bootstrap.mlockall'] == True and es_heap_size is not defined
|
|
@@ -19,24 +19,4 @@
|
|||
- name: Debian - Ensure elasticsearch is installed from downloaded package
|
||||
apt: deb=/tmp/elasticsearch-{{ es_version }}.deb
|
||||
when: not es_use_repository
|
||||
register: elasticsearch_install
|
||||
|
||||
- name: Debian - configure memory
|
||||
lineinfile: dest=/etc/default/elasticsearch regexp="^ES_HEAP_SIZE" insertafter="^#ES_HEAP_SIZE" line="ES_HEAP_SIZE={{ es_heap_size }}"
|
||||
when: es_heap_size is defined
|
||||
register: elasticsearch_configure
|
||||
- name: Debian - configure data store
|
||||
lineinfile: dest=/etc/default/elasticsearch regexp="^DATA_DIR" insertafter="^#DATA_DIR" line="DATA_DIR={{ es_data_dir }}"
|
||||
when: es_data_dir is defined
|
||||
register: elasticsearch_configure
|
||||
- name: Debian - configure elasticsearch user
|
||||
lineinfile: dest=/etc/default/elasticsearch regexp="^ES_USER" insertafter="^#ES_USER" line="ES_USER={{ es_user }}"
|
||||
when: es_user is defined
|
||||
register: elasticsearch_configure
|
||||
- name: Debian - configure elasticsearch group
|
||||
lineinfile: dest=/etc/default/elasticsearch regexp="^ES_GROUP" insertafter="^#ES_GROUP" line="ES_GROUP={{ es_group }}"
|
||||
when: es_group is defined
|
||||
register: elasticsearch_configure
|
||||
- name: Debian - create data dir
|
||||
file: state=directory path={{ es_data_dir }} owner={{ es_user }} group={{ es_group }}
|
||||
when: es_data_dir is defined
|
||||
register: elasticsearch_install
|
||||
|
|
@@ -19,24 +19,4 @@
|
|||
- name: RedHat - Install Elasticsearch from url
|
||||
yum: name={% if es_custom_package_url is defined %}{{ es_custom_package_url }}{% else %}{{ es_package_url }}-{{ es_version }}.noarch.rpm{% endif %} state=present
|
||||
when: not es_use_repository
|
||||
register: elasticsearch_install
|
||||
|
||||
- name: RedHat - configure memory
|
||||
lineinfile: dest=/etc/sysconfig/elasticsearch regexp="^ES_HEAP_SIZE" insertafter="^#ES_HEAP_SIZE" line="ES_HEAP_SIZE={{ es_heap_size }}"
|
||||
when: es_heap_size is defined
|
||||
register: elasticsearch_configure
|
||||
- name: RedHat - configure data store
|
||||
lineinfile: dest=/etc/sysconfig/elasticsearch regexp="^DATA_DIR" insertafter="^#DATA_DIR" line="DATA_DIR={{ es_data_dir }}"
|
||||
when: es_data_dir is defined
|
||||
register: elasticsearch_configure
|
||||
- name: RedHat - configure elasticsearch user
|
||||
lineinfile: dest=/etc/sysconfig/elasticsearch regexp="^ES_USER" insertafter="^#ES_USER" line="ES_USER={{ es_user }}"
|
||||
when: es_user is defined
|
||||
register: elasticsearch_configure
|
||||
- name: RedHat - configure elasticsearch group
|
||||
lineinfile: dest=/etc/sysconfig/elasticsearch regexp="^ES_GROUP" insertafter="^#ES_GROUP" line="ES_GROUP={{ es_group }}"
|
||||
when: es_group is defined
|
||||
register: elasticsearch_configure
|
||||
- name: RedHat - create data dir
|
||||
file: state=directory path={{ es_data_dir }} owner={{ es_user }} group={{ es_group }}
|
||||
when: es_data_dir is defined
|
||||
register: elasticsearch_install
|
||||
68 tasks/elasticsearch-config.yml Normal file
@@ -0,0 +1,68 @@
|
|||
---
|
||||
|
||||
# Configure Elasticsearch Node
|
||||
|
||||
#This relies on the elasticsearch package installing a systemd service script, whose presence determines whether one should be copied.
|
||||
- stat: path={{sysd_script}}
|
||||
register: systemd_service
|
||||
- set_fact: use_system_d={{systemd_service.stat.exists and (not ansible_distribution == 'Debian' or ansible_distribution_version | version_compare('8', '<'))}}
|
||||
|
||||
- set_fact: instance_sysd_script={{sysd_script | dirname }}/{{es_instance_name}}_{{sysd_script | basename}}
|
||||
when: use_system_d
|
||||
|
||||
#For directories we also use the {{inventory_hostname}}-{{ es_instance_name }} - this helps if we have a shared SAN.
|
||||
|
||||
- set_fact: pid_dir={{es_pid_dir}}/{{inventory_hostname}}-{{ es_instance_name }}
|
||||
|
||||
- set_fact: data_dir={{ es_data_dir }}/{{inventory_hostname}}-{{ es_instance_name }}
|
||||
|
||||
- set_fact: log_dir={{ es_log_dir }}/{{inventory_hostname}}-{{ es_instance_name }}
|
||||
|
||||
- set_fact: work_dir={{ es_work_dir }}/{{inventory_hostname}}-{{ es_instance_name }}
|
||||
|
||||
#Create required directories
|
||||
- name: Create PID Directory
|
||||
file: path={{ pid_dir }} state=directory owner={{ es_user }} group={{ es_group }}
|
||||
|
||||
- name: Create data dir
|
||||
file: state=directory path={{ data_dir }} owner={{ es_user }} group={{ es_group }}
|
||||
|
||||
- name: Create work dir
|
||||
file: state=directory path={{ work_dir }} owner={{ es_user }} group={{ es_group }}
|
||||
|
||||
- name: Create log dir
|
||||
file: state=directory path={{ log_dir }} owner={{ es_user }} group={{ es_group }}
|
||||
|
||||
- name: Create Config Directory
|
||||
file: path={{ instance_config_directory }} state=directory owner={{ es_user }} group={{ es_group }}
|
||||
|
||||
#Copy the config template
|
||||
- name: Copy Configuration File
|
||||
template: src=elasticsearch.yml.j2 dest={{instance_config_directory}}/elasticsearch.yml owner={{ es_user }} group={{ es_group }} mode=0644 force=yes
|
||||
notify: restart elasticsearch
|
||||
|
||||
#Copy the instance specific default file
|
||||
- name: Copy Default File for Instance
|
||||
template: src=elasticsearch.j2 dest={{instance_default_file}} mode=0644 force=yes
|
||||
notify: restart elasticsearch
|
||||
|
||||
#Copy the instance specific init file
|
||||
- name: Copy Debian Init File for Instance
|
||||
template: src=init/debian/elasticsearch.j2 dest={{instance_init_script}} mode=0755 force=yes
|
||||
when: ansible_os_family == 'Debian'
|
||||
|
||||
#Copy the instance specific init file
|
||||
- name: Copy Redhat Init File for Instance
|
||||
template: src=init/redhat/elasticsearch.j2 dest={{instance_init_script}} mode=0755 force=yes
|
||||
when: ansible_os_family == 'RedHat'
|
||||
|
||||
#Copy the systemd specific file if systemd is installed
|
||||
- name: Copy Systemd File for Instance
|
||||
template: src=systemd/elasticsearch.j2 dest={{instance_sysd_script}} mode=0644 force=yes
|
||||
when: use_system_d
|
||||
notify: restart elasticsearch
|
||||
|
||||
#Copy the logging.yml
|
||||
- name: Copy Logging.yml File for Instance
|
||||
template: src=logging.yml.j2 dest={{instance_config_directory}}/logging.yml owner={{ es_user }} group={{ es_group }} mode=0644 force=yes
|
||||
notify: restart elasticsearch
|
||||
|
|
@@ -12,4 +12,4 @@
|
|||
failed_when: "'Failed to install' in command_result.stderr"
|
||||
changed_when: command_result.rc == 0
|
||||
with_items: es_plugins
|
||||
when: ( ansible_os_family == 'RedHat' or ansible_os_family == 'Debian' )
|
||||
when: ( ansible_os_family == 'RedHat' or ansible_os_family == 'Debian' )
|
||||
|
|
@@ -1,3 +1,13 @@
|
|||
---
|
||||
|
||||
- set_fact: es_script_dir={{ es_conf_dir }}/{{es_instance_name}}
|
||||
|
||||
- set_fact: es_script_dir={{es_config['path.scripts']}}
|
||||
when: es_config['path.scripts'] is defined
|
||||
|
||||
- name: Create script dir
|
||||
file: state=directory path={{ es_script_dir }} owner={{ es_user }} group={{ es_group }}
|
||||
when: es_config['path.scripts'] is defined
|
||||
|
||||
- name: Copy scripts to elasticsearch
|
||||
copy: src=scripts dest=/etc/elasticsearch/
|
||||
copy: src=scripts dest={{ es_script_dir }} owner={{ es_user }} group={{ es_group }}
|
||||
|
|
@@ -1,14 +1,21 @@
|
|||
---
|
||||
#TODO: How to handle in multi node
|
||||
# 1. Template directory needs to be specifiable
|
||||
- name: Copy templates to elasticsearch
|
||||
copy: src=templates dest=/etc/elasticsearch/
|
||||
copy: src=templates dest=/etc/elasticsearch/ owner={{ es_user }} group={{ es_group }}
|
||||
|
||||
- set_fact: http_port=9200
|
||||
|
||||
- set_fact: http_port={{es_config['http.port']}}
|
||||
when: es_config['http.port'] is defined
|
||||
|
||||
- name: Wait for elasticsearch to startup
|
||||
wait_for: port=9200 delay=10
|
||||
wait_for: port={{http_port}} delay=10
|
||||
|
||||
- name: Get template files
|
||||
shell: find . -maxdepth 1 -type f | sed "s#\./##" | sed "s/.json//" chdir=/etc/elasticsearch/templates
|
||||
register: resultstemplate
|
||||
|
||||
- name: Install template(s)
|
||||
command: 'curl -sL -XPUT http://localhost:9200/_template/{{item}} -d @/etc/elasticsearch/templates/{{item}}.json'
|
||||
with_items: resultstemplate.stdout_lines
|
||||
command: "curl -sL -XPUT http://localhost:{{http_port}}/_template/{{item}} -d @/etc/elasticsearch/templates/{{item}}.json"
|
||||
with_items: resultstemplate.stdout_lines
|
||||
|
|
@@ -1,21 +1,32 @@
|
|||
---
|
||||
# Trigger Debian section
|
||||
- name: Include Debian specific Elasticsearch
|
||||
|
||||
- set_fact: instance_default_file={{default_file | dirname}}/{{es_instance_name}}_{{default_file | basename}}
|
||||
- set_fact: instance_init_script={{init_script | dirname }}/{{es_instance_name}}_{{init_script | basename}}
|
||||
- set_fact: instance_config_directory={{ es_conf_dir }}/{{es_instance_name}}
|
||||
- set_fact: m_lock_enabled={{ es_config['bootstrap.mlockall'] is defined and es_config['bootstrap.mlockall'] == True }}
|
||||
|
||||
- debug: msg="Node configuration {{ es_config }} "
|
||||
|
||||
# Install OS specific elasticsearch - this can be abbreviated in version 2.0.0
|
||||
- name: Include specific Elasticsearch
|
||||
include: elasticsearch-Debian.yml
|
||||
when: ansible_os_family == 'Debian'
|
||||
|
||||
# Trigger Redhat section
|
||||
- name: Include RedHat specific Elasticsearch
|
||||
- name: Include specific Elasticsearch
|
||||
include: elasticsearch-RedHat.yml
|
||||
when: ansible_os_family == 'RedHat'
|
||||
|
||||
#Configuration file for elasticsearch
|
||||
- name: Elasticsearch configuration
|
||||
include: elasticsearch-config.yml
|
||||
|
||||
|
||||
# Make sure the service is started, and restart if necessary
|
||||
- name: Start elasticsearch service
|
||||
service: name=elasticsearch state=started
|
||||
service: name={{instance_init_script | basename}} state=started enabled=yes
|
||||
when: es_start_service
|
||||
register: elasticsearch_started
|
||||
|
||||
- name: Restart elasticsearch service if new version installed
|
||||
service: name=elasticsearch state=restarted
|
||||
when: es_start_service and
|
||||
( elasticsearch_install.changed or elasticsearch_configure.changed )
|
||||
and not elasticsearch_started.changed
|
||||
service: name={{instance_init_script | basename}} state=restarted enabled=yes
|
||||
when: es_start_service and elasticsearch_install.changed and not elasticsearch_started.changed
|
||||
|
|
@@ -1,8 +1,8 @@
|
|||
---
|
||||
- name: RedHat - Ensure Java is installed
|
||||
yum: name={{ java_rhel }} state=latest
|
||||
yum: name={{ java }} state=latest
|
||||
when: ansible_os_family == 'RedHat'
|
||||
|
||||
- name: Debian - Ensure Java is installed
|
||||
apt: name={{ java_debian }} state=present update_cache=yes
|
||||
apt: name={{ java }} state=present update_cache=yes force=yes
|
||||
when: ansible_os_family == 'Debian'
|
||||
|
|
@@ -1,4 +1,8 @@
|
|||
---
|
||||
- name: check-parameters
|
||||
include: checkParameters.yml
|
||||
- name: os-specific vars
|
||||
include_vars: "{{ansible_os_family}}.yml"
|
||||
- include: java.yml
|
||||
- include: elasticsearch.yml
|
||||
- include: elasticsearch-plugins.yml
|
||||
|
|
|
|||
86 templates/elasticsearch.j2 Normal file
@@ -0,0 +1,86 @@
|
|||
################################
|
||||
# Elasticsearch
|
||||
################################
|
||||
|
||||
# Elasticsearch home directory
|
||||
ES_HOME={{es_home}}
|
||||
|
||||
# Elasticsearch configuration directory
|
||||
CONF_DIR={{instance_config_directory}}
|
||||
|
||||
# Elasticsearch configuration file
|
||||
CONF_FILE={{instance_config_directory}}/elasticsearch.yml
|
||||
|
||||
# Elasticsearch data directory
|
||||
DATA_DIR={{data_dir}}
|
||||
|
||||
# Elasticsearch logs directory
|
||||
LOG_DIR={{log_dir}}
|
||||
|
||||
# Elasticsearch work directory
|
||||
WORK_DIR={{work_dir}}
|
||||
|
||||
# Elasticsearch PID directory
|
||||
PID_DIR={{pid_dir}}
|
||||
|
||||
# Heap size defaults to 256m min, 1g max
|
||||
# Set ES_HEAP_SIZE to 50% of available RAM, but no more than 31g
|
||||
{% if es_heap_size is defined %}
|
||||
ES_HEAP_SIZE={{es_heap_size}}
|
||||
{% endif %}
|
||||
|
||||
# Heap new generation
|
||||
#ES_HEAP_NEWSIZE=
|
||||
|
||||
# Maximum direct memory
|
||||
#ES_DIRECT_SIZE=
|
||||
|
||||
# Additional Java OPTS
|
||||
#ES_JAVA_OPTS=
|
||||
|
||||
# Configure restart on package upgrade (true, every other setting will lead to not restarting)
|
||||
#ES_RESTART_ON_UPGRADE=true
|
||||
|
||||
# Path to the GC log file
|
||||
#ES_GC_LOG_FILE=/var/log/elasticsearch/gc.log
|
||||
|
||||
################################
|
||||
# Elasticsearch service
|
||||
################################
|
||||
|
||||
# SysV init.d
|
||||
#
|
||||
# When executing the init script, this user will be used to run the elasticsearch service.
|
||||
# The default value is 'elasticsearch' and is declared in the init.d file.
|
||||
# Note that this setting is only used by the init script. If changed, make sure that
|
||||
# the configured user can read and write into the data, work, plugins and log directories.
|
||||
# For systemd service, the user is usually configured in file /usr/lib/systemd/system/elasticsearch.service
|
||||
ES_USER={{es_user}}
|
||||
ES_GROUP={{es_group}}
|
||||
|
||||
################################
|
||||
# System properties
|
||||
################################
|
||||
|
||||
# Specifies the maximum file descriptor number that can be opened by this process
|
||||
# When using Systemd, this setting is ignored and the LimitNOFILE defined in
|
||||
# /usr/lib/systemd/system/elasticsearch.service takes precedence
|
||||
{% if es_max_open_files is defined %}
|
||||
#MAX_OPEN_FILES
|
||||
MAX_OPEN_FILES={{es_max_open_files}}
|
||||
{% endif %}
|
||||
|
||||
# The maximum number of bytes of memory that may be locked into RAM
|
||||
# Set to "unlimited" if you use the 'bootstrap.mlockall: true' option
|
||||
# in elasticsearch.yml (ES_HEAP_SIZE must also be set).
|
||||
# When using Systemd, the LimitMEMLOCK property must be set
|
||||
# in /usr/lib/systemd/system/elasticsearch.service
|
||||
{% if m_lock_enabled %}
|
||||
#MAX_LOCKED_MEMORY=
|
||||
MAX_LOCKED_MEMORY=unlimited
|
||||
{% endif %}
|
||||
|
||||
# Maximum number of VMA (Virtual Memory Areas) a process can own
|
||||
# When using Systemd, this setting is ignored and the 'vm.max_map_count'
|
||||
# property is set at boot time in /usr/lib/sysctl.d/elasticsearch.conf
|
||||
#MAX_MAP_COUNT=262144
|
||||
23 templates/elasticsearch.yml.j2 Normal file
@@ -0,0 +1,23 @@
|
|||
|
||||
{% if es_config %}
|
||||
{{ es_config | to_nice_yaml }}
|
||||
{% endif %}
|
||||
|
||||
{% if es_config['cluster.name'] is not defined %}
|
||||
cluster.name: elasticsearch
|
||||
{% endif %}
|
||||
|
||||
{% if es_config['node.name'] is not defined %}
|
||||
node.name: {{inventory_hostname}}-{{es_instance_name}}
|
||||
{% endif %}
|
||||
|
||||
#################################### Paths ####################################
|
||||
|
||||
# Path to directory containing configuration (this file and logging.yml):
|
||||
path.conf: {{ instance_config_directory }}
|
||||
|
||||
path.data: {{ data_dir }}
|
||||
|
||||
path.work: {{ work_dir }}
|
||||
|
||||
path.logs: {{ log_dir }}
|
||||
237 templates/init/debian/elasticsearch.j2 Executable file
@@ -0,0 +1,237 @@
|
|||
#!/bin/sh
|
||||
#
|
||||
# /etc/init.d/elasticsearch -- startup script for Elasticsearch
|
||||
#
|
||||
# Written by Miquel van Smoorenburg <miquels@cistron.nl>.
|
||||
# Modified for Debian GNU/Linux by Ian Murdock <imurdock@gnu.ai.mit.edu>.
|
||||
# Modified for Tomcat by Stefan Gybas <sgybas@debian.org>.
|
||||
# Modified for Tomcat6 by Thierry Carrez <thierry.carrez@ubuntu.com>.
|
||||
# Additional improvements by Jason Brittain <jason.brittain@mulesoft.com>.
|
||||
# Modified by Nicolas Huray for Elasticsearch <nicolas.huray@gmail.com>.
|
||||
#
|
||||
### BEGIN INIT INFO
|
||||
# Provides: elasticsearch
|
||||
# Required-Start: $network $remote_fs $named
|
||||
# Required-Stop: $network $remote_fs $named
|
||||
# Default-Start: 2 3 4 5
|
||||
# Default-Stop: 0 1 6
|
||||
# Short-Description: Starts elasticsearch
|
||||
# Description: Starts elasticsearch using start-stop-daemon
|
||||
### END INIT INFO
|
||||
|
||||
PATH=/bin:/usr/bin:/sbin:/usr/sbin
|
||||
NAME={{es_instance_name}}_{{default_file | basename}}
|
||||
{% if es_config['node.name'] is defined %}
|
||||
DESC="Elasticsearch Server - {{es_config['node.name']}}"
|
||||
{% else %}
|
||||
DESC="Elasticsearch Server - {{es_instance_name}}"
|
||||
{% endif %}
|
||||
DEFAULT=/etc/default/$NAME
|
||||
|
||||
if [ `id -u` -ne 0 ]; then
|
||||
echo "You need root privileges to run this script"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
|
||||
. /lib/lsb/init-functions
|
||||
|
||||
if [ -r /etc/default/rcS ]; then
|
||||
. /etc/default/rcS
|
||||
fi
|
||||
|
||||
# The following variables can be overwritten in $DEFAULT
|
||||
|
||||
# Run Elasticsearch as this user ID and group ID
|
||||
ES_USER=elasticsearch
|
||||
ES_GROUP=elasticsearch
|
||||
|
||||
# The first existing directory is used for JAVA_HOME (if JAVA_HOME is not defined in $DEFAULT)
|
||||
JDK_DIRS="/usr/lib/jvm/java-8-oracle/ /usr/lib/jvm/j2sdk1.8-oracle/ /usr/lib/jvm/jdk-7-oracle-x64 /usr/lib/jvm/java-7-oracle /usr/lib/jvm/j2sdk1.7-oracle/ /usr/lib/jvm/java-7-openjdk /usr/lib/jvm/java-7-openjdk-amd64/ /usr/lib/jvm/java-7-openjdk-armhf /usr/lib/jvm/java-7-openjdk-i386/ /usr/lib/jvm/default-java"
|
||||
|
||||
# Look for the right JVM to use
|
||||
for jdir in $JDK_DIRS; do
|
||||
if [ -r "$jdir/bin/java" -a -z "${JAVA_HOME}" ]; then
|
||||
JAVA_HOME="$jdir"
|
||||
fi
|
||||
done
|
||||
export JAVA_HOME
|
||||
|
||||
# Directory where the Elasticsearch binary distribution resides
|
||||
ES_HOME=/usr/share/$NAME
|
||||
|
||||
# Heap size defaults to 256m min, 1g max
|
||||
# Set ES_HEAP_SIZE to 50% of available RAM, but no more than 31g
|
||||
#ES_HEAP_SIZE=2g
|
||||
|
||||
# Heap new generation
|
||||
#ES_HEAP_NEWSIZE=
|
||||
|
||||
# max direct memory
|
||||
#ES_DIRECT_SIZE=
|
||||
|
||||
# Additional Java OPTS
|
||||
#ES_JAVA_OPTS=
|
||||
|
||||
# Maximum number of open files
|
||||
MAX_OPEN_FILES=65535
|
||||
|
||||
# Maximum amount of locked memory
|
||||
#MAX_LOCKED_MEMORY=
|
||||
|
||||
# Elasticsearch log directory
|
||||
LOG_DIR=/var/log/$NAME
|
||||
|
||||
# Elasticsearch data directory
|
||||
DATA_DIR=/var/lib/$NAME
|
||||
|
||||
# Elasticsearch work directory
|
||||
WORK_DIR=/tmp/$NAME
|
||||
|
||||
# Elasticsearch configuration directory
|
||||
CONF_DIR=/etc/$NAME
|
||||
|
||||
# Elasticsearch configuration file (elasticsearch.yml)
|
||||
CONF_FILE=$CONF_DIR/elasticsearch.yml
|
||||
|
||||
# Maximum number of VMA (Virtual Memory Areas) a process can own
|
||||
MAX_MAP_COUNT=262144
|
||||
|
||||
# Path to the GC log file
|
||||
#ES_GC_LOG_FILE=/var/log/elasticsearch/gc.log
|
||||
|
||||
# Elasticsearch PID file directory
|
||||
PID_DIR="/var/run/elasticsearch"
|
||||
|
||||
# End of variables that can be overwritten in $DEFAULT
|
||||
|
||||
# overwrite settings from default file
|
||||
if [ -f "$DEFAULT" ]; then
|
||||
. "$DEFAULT"
|
||||
fi
|
||||
|
||||
# Define other required variables
|
||||
PID_FILE="$PID_DIR/$NAME.pid"
|
||||
DAEMON=$ES_HOME/bin/elasticsearch
|
||||
DAEMON_OPTS="-d -p $PID_FILE --default.config=$CONF_FILE --default.path.home=$ES_HOME --default.path.logs=$LOG_DIR --default.path.data=$DATA_DIR --default.path.work=$WORK_DIR --default.path.conf=$CONF_DIR"
|
||||
|
||||
export ES_HEAP_SIZE
|
||||
export ES_HEAP_NEWSIZE
|
||||
export ES_DIRECT_SIZE
|
||||
export ES_JAVA_OPTS
|
||||
|
||||
# Check DAEMON exists
|
||||
test -x $DAEMON || exit 0
|
||||
|
||||
checkJava() {
|
||||
if [ -x "$JAVA_HOME/bin/java" ]; then
|
||||
JAVA="$JAVA_HOME/bin/java"
|
||||
else
|
||||
JAVA=`which java`
|
||||
fi
|
||||
|
||||
if [ ! -x "$JAVA" ]; then
|
||||
echo "Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
case "$1" in
|
||||
start)
|
||||
checkJava
|
||||
|
||||
if [ -n "$MAX_LOCKED_MEMORY" -a -z "$ES_HEAP_SIZE" ]; then
|
||||
log_failure_msg "MAX_LOCKED_MEMORY is set - ES_HEAP_SIZE must also be set"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
log_daemon_msg "Starting $DESC"
|
||||
|
||||
pid=`pidofproc -p $PID_FILE elasticsearch`
|
||||
if [ -n "$pid" ] ; then
|
||||
log_begin_msg "Already running."
|
||||
log_end_msg 0
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Prepare environment
|
||||
mkdir -p "$LOG_DIR" "$DATA_DIR" "$WORK_DIR" && chown "$ES_USER":"$ES_GROUP" "$LOG_DIR" "$DATA_DIR" "$WORK_DIR"
|
||||
|
||||
# Ensure that the PID_DIR exists (it is cleaned at OS startup time)
|
||||
if [ -n "$PID_DIR" ] && [ ! -e "$PID_DIR" ]; then
|
||||
mkdir -p "$PID_DIR" && chown "$ES_USER":"$ES_GROUP" "$PID_DIR"
|
||||
fi
|
||||
if [ -n "$PID_FILE" ] && [ ! -e "$PID_FILE" ]; then
|
||||
touch "$PID_FILE" && chown "$ES_USER":"$ES_GROUP" "$PID_FILE"
|
||||
fi
|
||||
|
||||
if [ -n "$MAX_OPEN_FILES" ]; then
|
||||
ulimit -n $MAX_OPEN_FILES
|
||||
fi
|
||||
|
||||
if [ -n "$MAX_LOCKED_MEMORY" ]; then
|
||||
ulimit -l $MAX_LOCKED_MEMORY
|
||||
fi
|
||||
|
||||
if [ -n "$MAX_MAP_COUNT" -a -f /proc/sys/vm/max_map_count ]; then
|
||||
sysctl -q -w vm.max_map_count=$MAX_MAP_COUNT
|
||||
fi
|
||||
|
||||
# Start Daemon
|
||||
start-stop-daemon --start -b --user "$ES_USER" -c "$ES_USER" --pidfile "$PID_FILE" --exec $DAEMON -- $DAEMON_OPTS
|
||||
return=$?
|
||||
if [ $return -eq 0 ]
|
||||
then
|
||||
i=0
|
||||
timeout=10
|
||||
# Wait for the process to be properly started before exiting
|
||||
until { cat "$PID_FILE" | xargs kill -0; } >/dev/null 2>&1
|
||||
do
|
||||
sleep 1
|
||||
i=$(($i + 1))
|
||||
if [ $i -gt $timeout ]; then
|
||||
log_end_msg 1
|
||||
exit 1
|
||||
fi
|
||||
done
|
||||
else
|
||||
log_end_msg $return
|
||||
fi
|
||||
;;
|
||||
stop)
|
||||
log_daemon_msg "Stopping $DESC"
|
||||
|
||||
if [ -f "$PID_FILE" ]; then
|
||||
start-stop-daemon --stop --pidfile "$PID_FILE" \
|
||||
--user "$ES_USER" \
|
||||
--retry=TERM/20/KILL/5 >/dev/null
|
||||
if [ $? -eq 1 ]; then
|
||||
log_progress_msg "$DESC is not running but pid file exists, cleaning up"
|
||||
elif [ $? -eq 3 ]; then
|
||||
PID="`cat $PID_FILE`"
|
||||
log_failure_msg "Failed to stop $DESC (pid $PID)"
|
||||
exit 1
|
||||
fi
|
||||
rm -f "$PID_FILE"
|
||||
else
|
||||
log_progress_msg "(not running)"
|
||||
fi
|
||||
log_end_msg 0
|
||||
;;
|
||||
status)
|
||||
status_of_proc -p $PID_FILE elasticsearch elasticsearch && exit 0 || exit $?
|
||||
;;
|
||||
restart|force-reload)
|
||||
if [ -f "$PID_FILE" ]; then
|
||||
$0 stop
|
||||
sleep 1
|
||||
fi
|
||||
$0 start
|
||||
;;
|
||||
*)
|
||||
log_success_msg "Usage: $0 {start|stop|restart|force-reload|status}"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
|
||||
exit 0
|
||||
185 templates/init/redhat/elasticsearch.j2 Executable file
@@ -0,0 +1,185 @@
|
|||
#!/bin/sh
|
||||
#
|
||||
# elasticsearch <summary>
|
||||
#
|
||||
# chkconfig: 2345 80 20
|
||||
# description: Starts and stops a single elasticsearch instance on this system
|
||||
#
|
||||
|
||||
### BEGIN INIT INFO
|
||||
# Provides: Elasticsearch
|
||||
# Required-Start: $network $named
|
||||
# Required-Stop: $network $named
|
||||
# Default-Start: 2 3 4 5
|
||||
# Default-Stop: 0 1 6
|
||||
# Short-Description: This service manages the elasticsearch daemon
|
||||
# Description: Elasticsearch is a very scalable, schema-free and high-performance search solution supporting multi-tenancy and near realtime search.
|
||||
### END INIT INFO
|
||||
|
||||
#
|
||||
# init.d / servicectl compatibility (openSUSE)
|
||||
#
|
||||
if [ -f /etc/rc.status ]; then
|
||||
. /etc/rc.status
|
||||
rc_reset
|
||||
fi
|
||||
|
||||
#
|
||||
# Source function library.
|
||||
#
|
||||
if [ -f /etc/rc.d/init.d/functions ]; then
|
||||
. /etc/rc.d/init.d/functions
|
||||
fi
|
||||
|
||||
# Sets the default values for elasticsearch variables used in this script
|
||||
ES_USER="elasticsearch"
|
||||
ES_GROUP="elasticsearch"
|
||||
ES_HOME="/usr/share/elasticsearch"
|
||||
MAX_OPEN_FILES=65535
|
||||
MAX_MAP_COUNT=262144
|
||||
LOG_DIR="/var/log/elasticsearch"
|
||||
DATA_DIR="/var/lib/elasticsearch"
|
||||
WORK_DIR="/tmp/elasticsearch"
|
||||
CONF_DIR="/etc/elasticsearch"
|
||||
CONF_FILE="/etc/elasticsearch/elasticsearch.yml"
|
||||
PID_DIR="/var/run/elasticsearch"
|
||||
|
||||
# Source the default env file
|
||||
ES_ENV_FILE="{{instance_default_file}}"
|
||||
if [ -f "$ES_ENV_FILE" ]; then
|
||||
. "$ES_ENV_FILE"
|
||||
fi
|
||||
|
||||
exec="$ES_HOME/bin/elasticsearch"
|
||||
prog="elasticsearch"
|
||||
pidfile="$PID_DIR/${prog}.pid"
|
||||
|
||||
export ES_HEAP_SIZE
|
||||
export ES_HEAP_NEWSIZE
|
||||
export ES_DIRECT_SIZE
|
||||
export ES_JAVA_OPTS
|
||||
export JAVA_HOME
|
||||
|
||||
lockfile=/var/lock/subsys/$prog
|
||||
|
||||
# backwards compatibility for old config sysconfig files, pre 0.90.1
|
||||
if [ -n $USER ] && [ -z $ES_USER ] ; then
|
||||
ES_USER=$USER
|
||||
fi
|
||||
|
||||
checkJava() {
|
||||
if [ -x "$JAVA_HOME/bin/java" ]; then
|
||||
JAVA="$JAVA_HOME/bin/java"
|
||||
else
|
||||
JAVA=`which java`
|
||||
fi
|
||||
|
||||
if [ ! -x "$JAVA" ]; then
|
||||
echo "Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
start() {
|
||||
checkJava
|
||||
[ -x $exec ] || exit 5
|
||||
[ -f $CONF_FILE ] || exit 6
|
||||
if [ -n "$MAX_LOCKED_MEMORY" -a -z "$ES_HEAP_SIZE" ]; then
|
||||
echo "MAX_LOCKED_MEMORY is set - ES_HEAP_SIZE must also be set"
|
||||
return 7
|
||||
fi
|
||||
if [ -n "$MAX_OPEN_FILES" ]; then
|
||||
ulimit -n $MAX_OPEN_FILES
|
||||
fi
|
||||
if [ -n "$MAX_LOCKED_MEMORY" ]; then
|
||||
ulimit -l $MAX_LOCKED_MEMORY
|
||||
fi
|
||||
if [ -n "$MAX_MAP_COUNT" -a -f /proc/sys/vm/max_map_count ]; then
|
||||
sysctl -q -w vm.max_map_count=$MAX_MAP_COUNT
|
||||
fi
|
||||
if [ -n "$WORK_DIR" ]; then
|
||||
mkdir -p "$WORK_DIR"
|
||||
chown "$ES_USER":"$ES_GROUP" "$WORK_DIR"
|
||||
fi
|
||||
|
||||
# Ensure that the PID_DIR exists (it is cleaned at OS startup time)
|
||||
if [ -n "$PID_DIR" ] && [ ! -e "$PID_DIR" ]; then
|
||||
mkdir -p "$PID_DIR" && chown "$ES_USER":"$ES_GROUP" "$PID_DIR"
|
||||
fi
|
||||
if [ -n "$pidfile" ] && [ ! -e "$pidfile" ]; then
|
||||
touch "$pidfile" && chown "$ES_USER":"$ES_GROUP" "$pidfile"
|
||||
fi
|
||||
|
||||
echo -n $"Starting $prog: "
|
||||
# if not running, start it up here, usually something like "daemon $exec"
|
||||
daemon --user $ES_USER --pidfile $pidfile $exec -p $pidfile -d -Des.default.path.home=$ES_HOME -Des.default.path.logs=$LOG_DIR -Des.default.path.data=$DATA_DIR -Des.default.path.work=$WORK_DIR -Des.default.path.conf=$CONF_DIR
|
||||
retval=$?
|
||||
echo
|
||||
[ $retval -eq 0 ] && touch $lockfile
|
||||
return $retval
|
||||
}
|
||||
|
||||
stop() {
|
||||
echo -n $"Stopping $prog: "
|
||||
# stop it here, often "killproc $prog"
|
||||
killproc -p $pidfile -d 20 $prog
|
||||
retval=$?
|
||||
echo
|
||||
[ $retval -eq 0 ] && rm -f $lockfile
|
||||
return $retval
|
||||
}
|
||||
|
||||
restart() {
|
||||
stop
|
||||
start
|
||||
}
|
||||
|
||||
reload() {
|
||||
restart
|
||||
}
|
||||
|
||||
force_reload() {
|
||||
restart
|
||||
}
|
||||
|
||||
rh_status() {
|
||||
# run checks to determine if the service is running or use generic status
|
||||
status -p $pidfile $prog
|
||||
}
|
||||
|
||||
rh_status_q() {
|
||||
rh_status >/dev/null 2>&1
|
||||
}
|
||||
|
||||
|
||||
case "$1" in
|
||||
start)
|
||||
rh_status_q && exit 0
|
||||
$1
|
||||
;;
|
||||
stop)
|
||||
rh_status_q || exit 0
|
||||
$1
|
||||
;;
|
||||
restart)
|
||||
$1
|
||||
;;
|
||||
reload)
|
||||
rh_status_q || exit 7
|
||||
$1
|
||||
;;
|
||||
force-reload)
|
||||
force_reload
|
||||
;;
|
||||
status)
|
||||
rh_status
|
||||
;;
|
||||
condrestart|try-restart)
|
||||
rh_status_q || exit 0
|
||||
restart
|
||||
;;
|
||||
*)
|
||||
echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
|
||||
exit 2
|
||||
esac
|
||||
exit $?
|
||||
68 templates/logging.yml.j2 Normal file
@@ -0,0 +1,68 @@
|
|||
# you can override this by setting a system property, for example -Des.logger.level=DEBUG
|
||||
es.logger.level: INFO
|
||||
rootLogger: ${es.logger.level}, console, file
|
||||
logger:
|
||||
# log action execution errors for easier debugging
|
||||
action: DEBUG
|
||||
# reduce the logging for aws, too much is logged under the default INFO
|
||||
com.amazonaws: WARN
|
||||
org.apache.http: INFO
|
||||
|
||||
# gateway
|
||||
#gateway: DEBUG
|
||||
#index.gateway: DEBUG
|
||||
|
||||
# peer shard recovery
|
||||
#indices.recovery: DEBUG
|
||||
|
||||
# discovery
|
||||
#discovery: TRACE
|
||||
|
||||
index.search.slowlog: TRACE, index_search_slow_log_file
|
||||
index.indexing.slowlog: TRACE, index_indexing_slow_log_file
|
||||
|
||||
additivity:
|
||||
index.search.slowlog: false
|
||||
index.indexing.slowlog: false
|
||||
|
||||
appender:
|
||||
console:
|
||||
type: console
|
||||
layout:
|
||||
type: consolePattern
|
||||
conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
|
||||
|
||||
file:
|
||||
type: dailyRollingFile
|
||||
file: ${path.logs}/${cluster.name}.log
|
||||
datePattern: "'.'yyyy-MM-dd"
|
||||
layout:
|
||||
type: pattern
|
||||
conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
|
||||
|
||||
# Use the following log4j-extras RollingFileAppender to enable gzip compression of log files.
|
||||
# For more information see https://logging.apache.org/log4j/extras/apidocs/org/apache/log4j/rolling/RollingFileAppender.html
|
||||
#file:
|
||||
#type: extrasRollingFile
|
||||
#file: ${path.logs}/${cluster.name}.log
|
||||
#rollingPolicy: timeBased
|
||||
#rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz
|
||||
#layout:
|
||||
#type: pattern
|
||||
#conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
|
||||
|
||||
index_search_slow_log_file:
|
||||
type: dailyRollingFile
|
||||
file: ${path.logs}/${cluster.name}_index_search_slowlog.log
|
||||
datePattern: "'.'yyyy-MM-dd"
|
||||
layout:
|
||||
type: pattern
|
||||
conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
|
||||
|
||||
index_indexing_slow_log_file:
|
||||
type: dailyRollingFile
|
||||
file: ${path.logs}/${cluster.name}_index_indexing_slowlog.log
|
||||
datePattern: "'.'yyyy-MM-dd"
|
||||
layout:
|
||||
type: pattern
|
||||
conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
|
||||
50 templates/systemd/elasticsearch.j2 Normal file
@@ -0,0 +1,50 @@
|
|||
[Unit]
|
||||
Description=Elasticsearch-{{es_instance_name}}
|
||||
Documentation=http://www.elastic.co
|
||||
Wants=network-online.target
|
||||
After=network-online.target
|
||||
|
||||
[Service]
|
||||
Environment=ES_HOME={{es_home}}
|
||||
Environment=CONF_DIR={{instance_config_directory}}
|
||||
Environment=CONF_FILE={{instance_config_directory}}/elasticsearch.yml
|
||||
Environment=DATA_DIR={{data_dir}}
|
||||
Environment=LOG_DIR={{log_dir}}
|
||||
Environment=PID_DIR={{pid_dir}}
|
||||
EnvironmentFile=-{{instance_default_file}}
|
||||
|
||||
User={{es_user}}
|
||||
Group={{es_group}}
|
||||
|
||||
ExecStart={{es_home}}/bin/elasticsearch \
|
||||
-Des.pidfile=$PID_DIR/elasticsearch.pid \
|
||||
-Des.default.path.home=$ES_HOME \
|
||||
-Des.default.path.logs=$LOG_DIR \
|
||||
-Des.default.path.data=$DATA_DIR \
|
||||
-Des.default.config=$CONF_FILE \
|
||||
-Des.default.path.conf=$CONF_DIR
|
||||
|
||||
# Connects standard output to /dev/null
|
||||
StandardOutput=null
|
||||
|
||||
# Connects standard error to journal
|
||||
StandardError=journal
|
||||
|
||||
# When a JVM receives a SIGTERM signal it exits with code 143
|
||||
SuccessExitStatus=143
|
||||
|
||||
# Specifies the maximum file descriptor number that can be opened by this process
|
||||
LimitNOFILE=65535
|
||||
|
||||
# Specifies the maximum number of bytes of memory that may be locked into RAM
|
||||
# Set to "infinity" if you use the 'bootstrap.mlockall: true' option
|
||||
# in elasticsearch.yml and 'MAX_LOCKED_MEMORY=unlimited' in {{instance_default_file}}
|
||||
{% if m_lock_enabled %}
|
||||
LimitMEMLOCK=infinity
|
||||
{% endif %}
|
||||
|
||||
# Shutdown delay in seconds, before process is tried to be killed with KILL (if configured)
|
||||
TimeoutStopSec=20
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
12 test/integration/config.yml Normal file
@@ -0,0 +1,12 @@
|
|||
---
|
||||
#Test explicit setting of parameters and variables
|
||||
- name: Elasticsearch Config tests
|
||||
hosts: localhost
|
||||
roles:
|
||||
#expand to all available parameters
|
||||
- { role: elasticsearch, es_instance_name: "node1", es_data_dir: "/opt/elasticsearch/data", es_log_dir: "/opt/elasticsearch/logs", es_work_dir: "/opt/elasticsearch/temp", es_config: {node.name: "node1", cluster.name: "custom-cluster", discovery.zen.ping.unicast.hosts: "localhost:9301", http.port: 9201, transport.tcp.port: 9301, node.data: false, node.master: true, bootstrap.mlockall: true, discovery.zen.ping.multicast.enabled: false } }
|
||||
vars:
|
||||
es_scripts: false
|
||||
es_templates: false
|
||||
es_version_lock: false
|
||||
es_heap_size: 1g
|
||||
2 test/integration/config/config.yml Normal file
@@ -0,0 +1,2 @@
|
|||
---
|
||||
- host: test-kitchen
|
||||
77 test/integration/config/serverspec/default_spec.rb Normal file
@@ -0,0 +1,77 @@
|
|||
require 'spec_helper'
|
||||
|
||||
context "basic tests" do
|
||||
|
||||
describe user('elasticsearch') do
|
||||
it { should exist }
|
||||
end
|
||||
|
||||
describe service('node1_elasticsearch') do
|
||||
it { should be_running }
|
||||
end
|
||||
|
||||
describe package('elasticsearch') do
|
||||
it { should be_installed }
|
||||
end
|
||||
|
||||
describe file('/etc/elasticsearch/node1/elasticsearch.yml') do
|
||||
it { should be_file }
|
||||
end
|
||||
|
||||
#test configuration parameters have been set - test all appropriately set in config file
|
||||
describe file('/etc/elasticsearch/node1/elasticsearch.yml') do
|
||||
it { should contain 'http.port: 9201' }
|
||||
it { should contain 'transport.tcp.port: 9301' }
|
||||
it { should contain 'node.data: false' }
|
||||
it { should contain 'node.master: true' }
|
||||
it { should contain 'discovery.zen.ping.multicast.enabled: false' }
|
||||
it { should contain 'cluster.name: custom-cluster' }
|
||||
it { should contain 'node.name: node1' }
|
||||
it { should contain 'bootstrap.mlockall: true' }
|
||||
it { should contain 'discovery.zen.ping.unicast.hosts: localhost:9301' }
|
||||
it { should contain 'path.conf: /etc/elasticsearch/node1' }
|
||||
it { should contain 'path.data: /opt/elasticsearch/data/localhost-node1' }
|
||||
it { should contain 'path.work: /opt/elasticsearch/temp/localhost-node1' }
|
||||
it { should contain 'path.logs: /opt/elasticsearch/logs/localhost-node1' }
|
||||
end
|
||||
|
||||
#test directories exist
|
||||
describe file('/etc/elasticsearch/node1') do
|
||||
it { should be_directory }
|
||||
it { should be_owned_by 'elasticsearch' }
|
||||
end
|
||||
|
||||
describe file('/opt/elasticsearch/data/localhost-node1') do
|
||||
it { should be_directory }
|
||||
it { should be_owned_by 'elasticsearch' }
|
||||
end
|
||||
|
||||
describe file('/opt/elasticsearch/logs/localhost-node1') do
|
||||
it { should be_directory }
|
||||
it { should be_owned_by 'elasticsearch' }
|
||||
end
|
||||
|
||||
describe file('/opt/elasticsearch/temp/localhost-node1') do
|
||||
it { should be_directory }
|
||||
it { should be_owned_by 'elasticsearch' }
|
||||
end
|
||||
|
||||
describe file('/etc/init.d/node1_elasticsearch') do
|
||||
it { should be_file }
|
||||
end
|
||||
|
||||
#test we started on the correct port
|
||||
describe command('curl -s "localhost:9201" | grep status') do
|
||||
#TODO: This is returning an empty string
|
||||
#its(:stdout) { should match /\"status\" : 200/ }
|
||||
its(:exit_status) { should eq 0 }
|
||||
end
|
||||
|
||||
#test to make sure mlock was applied
|
||||
describe command('curl -s "localhost:9201/_nodes/process?pretty" | grep mlockall') do
|
||||
its(:stdout) { should match /true/ }
|
||||
its(:exit_status) { should eq 0 }
|
||||
end
|
||||
|
||||
end
|
||||
|
||||
2 test/integration/config/serverspec/spec_helper.rb Normal file
@@ -0,0 +1,2 @@
|
|||
require 'serverspec'
|
||||
set :backend, :exec
|
||||
10 test/integration/multi.yml Normal file
@@ -0,0 +1,10 @@
|
|||
---
|
||||
#Test ability to deploy multiple instances to a machine
|
||||
- name: Elasticsearch Multi tests
|
||||
hosts: localhost
|
||||
roles:
|
||||
- { role: elasticsearch, es_instance_name: "master", es_data_dir: "/opt/elasticsearch", es_heap_size: "1g", es_config: { "discovery.zen.ping.multicast.enabled": false, discovery.zen.ping.unicast.hosts: "localhost:9300", http.port: 9200, transport.tcp.port: 9300, node.data: false, node.master: true, bootstrap.mlockall: true, discovery.zen.ping.multicast.enabled: false } }
|
||||
- { role: elasticsearch, es_instance_name: "node1", es_config: { "discovery.zen.ping.multicast.enabled": false, discovery.zen.ping.unicast.hosts: "localhost:9300", http.port: 9201, transport.tcp.port: 9301, node.data: true, node.master: false, discovery.zen.ping.multicast.enabled: false } }
|
||||
vars:
|
||||
es_scripts: true
|
||||
es_templates: true
|
||||
2 test/integration/multi/multi.yml Normal file
@@ -0,0 +1,2 @@
|
|||
---
|
||||
- host: test-kitchen
|
||||
160 test/integration/multi/serverspec/default_spec.rb Normal file
@@ -0,0 +1,160 @@
|
|||
require 'spec_helper'
|
||||
|
||||
context "basic tests" do
|
||||
|
||||
describe user('elasticsearch') do
|
||||
it { should exist }
|
||||
end
|
||||
|
||||
describe service('node1_elasticsearch') do
|
||||
it { should be_running }
|
||||
end
|
||||
|
||||
describe service('master_elasticsearch') do
|
||||
it { should be_running }
|
||||
end
|
||||
|
||||
describe package('elasticsearch') do
|
||||
it { should be_installed }
|
||||
end
|
||||
|
||||
#test configuration parameters have been set - test all appropriately set in config file
|
||||
describe file('/etc/elasticsearch/node1/elasticsearch.yml') do
|
||||
it { should be_file }
|
||||
it { should contain 'http.port: 9201' }
|
||||
it { should contain 'transport.tcp.port: 9301' }
|
||||
it { should contain 'node.data: true' }
|
||||
it { should contain 'node.master: false' }
|
||||
it { should contain 'discovery.zen.ping.multicast.enabled: false' }
|
||||
it { should contain 'node.name: localhost-node1' }
|
||||
it { should_not contain 'bootstrap.mlockall: true' }
|
||||
it { should contain 'path.conf: /etc/elasticsearch/node1' }
|
||||
it { should contain 'path.data: /var/lib/elasticsearch/localhost-node1' }
|
||||
it { should contain 'path.work: /tmp/elasticsearch/localhost-node1' }
|
||||
it { should contain 'path.logs: /var/log/elasticsearch/localhost-node1' }
|
||||
end
|
||||
|
||||
|
||||
#test configuration parameters have been set for master - test all appropriately set in config file
|
||||
describe file('/etc/elasticsearch/master/elasticsearch.yml') do
|
||||
it { should be_file }
|
||||
it { should contain 'http.port: 9200' }
|
||||
it { should contain 'transport.tcp.port: 9300' }
|
||||
it { should contain 'node.data: false' }
|
||||
it { should contain 'node.master: true' }
|
||||
it { should contain 'discovery.zen.ping.multicast.enabled: false' }
|
||||
it { should contain 'node.name: localhost-master' }
|
||||
it { should contain 'bootstrap.mlockall: true' }
|
||||
it { should contain 'path.conf: /etc/elasticsearch/master' }
|
||||
it { should contain 'path.data: /opt/elasticsearch/localhost-master' }
|
||||
it { should contain 'path.work: /tmp/elasticsearch/localhost-master' }
|
||||
it { should contain 'path.logs: /var/log/elasticsearch/localhost-master' }
|
||||
end
|
||||
|
||||
describe 'Master listening' do
|
||||
it 'listening in port 9200' do
|
||||
expect(port 9200).to be_listening
|
||||
end
|
||||
end
|
||||
|
||||
describe 'Node listening' do
|
||||
it 'node should be listening in port 9201' do
|
||||
expect(port 9201).to be_listening
|
||||
end
|
||||
end
|
||||
|
||||
#test the master started on the correct port
|
||||
describe 'master started' do
|
||||
it 'master node should be running', :retry => 3, :retry_wait => 10 do
|
||||
command = command('curl "localhost:9200" | grep name')
|
||||
#expect(command.stdout).should match '/*master_localhost*/'
|
||||
expect(command.exit_status).to eq(0)
|
||||
end
|
||||
end
|
||||
|
||||
#test node 1 started on the correct port
|
||||
describe 'node1 started' do
|
||||
it 'node should be running', :retry => 3, :retry_wait => 10 do
|
||||
command = command('curl "localhost:9201" | grep name')
|
||||
#expect(command.stdout).should match '/*node1_localhost*/'
|
||||
expect(command.exit_status).to eq(0)
|
||||
end
|
||||
end
|
||||
|
||||
describe file('/etc/elasticsearch/templates') do
|
||||
it { should be_directory }
|
||||
it { should be_owned_by 'elasticsearch' }
|
||||
end
|
||||
|
||||
describe file('/etc/elasticsearch/templates/basic.json') do
|
||||
it { should be_file }
|
||||
it { should be_owned_by 'elasticsearch' }
|
||||
end
|
||||
|
||||
describe 'Template Installed' do
|
||||
it 'should be reported as being installed', :retry => 3, :retry_wait => 10 do
|
||||
command = command('curl localhost:9200/_template/basic')
|
||||
expect(command.stdout).to match(/basic/)
|
||||
expect(command.exit_status).to eq(0)
|
||||
end
|
||||
end
|
||||
|
||||
describe 'Template Installed' do
|
||||
it 'should be reported as being installed', :retry => 3, :retry_wait => 10 do
|
||||
command = command('curl localhost:9201/_template/basic')
|
||||
expect(command.stdout).to match(/basic/)
|
||||
expect(command.exit_status).to eq(0)
|
||||
end
|
||||
end
|
||||
|
||||
#Confirm scripts are on both nodes
|
||||
describe file('/etc/elasticsearch/node1/scripts') do
|
||||
it { should be_directory }
|
||||
it { should be_owned_by 'elasticsearch' }
|
||||
end
|
||||
|
||||
describe file('/etc/elasticsearch/node1/scripts/calculate-score.groovy') do
|
||||
it { should be_file }
|
||||
it { should be_owned_by 'elasticsearch' }
|
||||
end
|
||||
|
||||
describe file('/etc/elasticsearch/master/scripts') do
|
||||
it { should be_directory }
|
||||
it { should be_owned_by 'elasticsearch' }
|
||||
end
|
||||
|
||||
describe file('/etc/elasticsearch/master/scripts/calculate-score.groovy') do
|
||||
it { should be_file }
|
||||
it { should be_owned_by 'elasticsearch' }
|
||||
end
|
||||
|
||||
#Confirm that the data directory has only been set for the first node
|
||||
describe file('/opt/elasticsearch/localhost-master') do
|
||||
it { should be_directory }
|
||||
it { should be_owned_by 'elasticsearch' }
|
||||
end
|
||||
|
||||
describe file('/opt/elasticsearch/localhost-node1') do
|
||||
it { should_not exist }
|
||||
end
|
||||
|
||||
describe file('/var/lib/elasticsearch/localhost-node1') do
|
||||
it { should be_directory }
|
||||
it { should be_owned_by 'elasticsearch' }
|
||||
end
|
||||
|
||||
#test to make sure mlock was applied
|
||||
describe command('curl -s "localhost:9200/_nodes/localhost-master/process?pretty=true" | grep mlockall') do
|
||||
its(:stdout) { should match /true/ }
|
||||
its(:exit_status) { should eq 0 }
|
||||
end
|
||||
|
||||
#test to make sure mlock was not applied
|
||||
describe command('curl -s "localhost:9201/_nodes/localhost-node1/process?pretty=true" | grep mlockall') do
|
||||
its(:stdout) { should match /false/ }
|
||||
its(:exit_status) { should eq 0 }
|
||||
end
|
||||
|
||||
|
||||
end
|
||||
|
||||
2
test/integration/multi/serverspec/spec_helper.rb
Normal file
@ -0,0 +1,2 @@
require 'serverspec'
set :backend, :exec

@ -2,5 +2,7 @@
- name: Elasticsearch Package tests
  hosts: localhost
  roles:
    - elasticsearch
    - { role: elasticsearch, es_config: { "discovery.zen.ping.multicast.enabled": true }, es_instance_name: "node1" }
  vars:
    es_scripts: true
    es_templates: true

@ -6,7 +6,7 @@ context "basic tests" do
    it { should exist }
  end

  describe service('elasticsearch') do
  describe service('node1_elasticsearch') do
    it { should be_running }
  end

@ -14,9 +14,43 @@ context "basic tests" do
    it { should be_installed }
  end

  describe file('/etc/elasticsearch/elasticsearch.yml') do
  describe file('/etc/elasticsearch/node1/elasticsearch.yml') do
    it { should be_file }
  end

  describe file('/etc/elasticsearch/node1/scripts') do
    it { should be_directory }
    it { should be_owned_by 'elasticsearch' }
  end

  describe file('/etc/elasticsearch/node1/scripts/calculate-score.groovy') do
    it { should be_file }
    it { should be_owned_by 'elasticsearch' }
  end

  describe 'Node listening' do
    it 'listening in port 9200' do
      expect(port 9200).to be_listening
    end
  end

  describe file('/etc/elasticsearch/templates') do
    it { should be_directory }
    it { should be_owned_by 'elasticsearch' }
  end

  describe file('/etc/elasticsearch/templates/basic.json') do
    it { should be_file }
    it { should be_owned_by 'elasticsearch' }
  end

  describe 'Template Installed' do
    it 'should be reported as being installed', :retry => 3, :retry_wait => 10 do
      command = command('curl -s "localhost:9200/_template/basic"')
      expect(command.stdout).to match(/basic/)
      expect(command.exit_status).to eq(0)
    end
  end

end

@ -2,9 +2,9 @@
- name: wrapper playbook for kitchen testing "elasticsearch"
  hosts: localhost
  roles:
    - elasticsearch
    - { role: elasticsearch, es_instance_name: "node1" }
  vars:
    es_use_repository: "true"
    es_plugins:
      - plugin: lmenezes/elasticsearch-kopf
        version: master
        version: master

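Taken together with the es_config parameter shown in the package playbook above, a single role application can carry the instance name, node-level settings, and plugin list in one place. A hedged combined sketch (values are illustrative and not taken from this PR):

```
# Illustrative only: combines es_instance_name, es_config and es_plugins as
# seen separately in the hunks above; the ports and plugin choice are assumptions.
- hosts: localhost
  roles:
    - role: elasticsearch
      es_instance_name: "node1"
      es_config: { "http.port": 9200, "discovery.zen.ping.multicast.enabled": false }
      es_plugins:
        - plugin: lmenezes/elasticsearch-kopf
          version: master
```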
@ -6,7 +6,7 @@ context "basic tests" do
    it { should exist }
  end

  describe service('elasticsearch') do
  describe service('node1_elasticsearch') do
    it { should be_running }
  end

@ -14,18 +14,37 @@ context "basic tests" do
    it { should be_installed }
  end

  describe file('/etc/elasticsearch/elasticsearch.yml') do
  describe file('/etc/elasticsearch/node1/elasticsearch.yml') do
    it { should be_file }
    it { should be_owned_by 'elasticsearch' }
  end

  describe file('/etc/elasticsearch/node1/logging.yml') do
    it { should be_file }
    it { should be_owned_by 'elasticsearch' }
  end

  describe file('/etc/elasticsearch/node1/elasticsearch.yml') do
    it { should contain 'node.name: localhost-node1' }
    it { should contain 'cluster.name: elasticsearch' }
    it { should contain 'path.conf: /etc/elasticsearch/node1' }
    it { should contain 'path.data: /var/lib/elasticsearch/localhost-node1' }
    it { should contain 'path.work: /tmp/elasticsearch/localhost-node1' }
    it { should contain 'path.logs: /var/log/elasticsearch/localhost-node1' }
  end

  describe 'Node listening' do
    it 'listening in port 9200' do
      expect(port 9200).to be_listening
    end
  end

  describe 'plugin' do

    it 'should be reported as existing', :retry => 3, :retry_wait => 10 do
      command = command('curl localhost:9200/_nodes/?plugin | grep kopf')
      command = command('curl -s localhost:9200/_nodes/?plugin | grep kopf')
      expect(command.stdout).to match(/kopf/)
      expect(command.exit_status).to eq(0)
    end

  end

end

5
vars/Debian.yml
Normal file
@ -0,0 +1,5 @@
---
java: "openjdk-7-jre-headless"
default_file: "/etc/default/elasticsearch"
init_script: "/etc/init.d/elasticsearch"
es_home: "/usr/share/elasticsearch"

@ -1 +0,0 @@
---

5
vars/RedHat.yml
Normal file
@ -0,0 +1,5 @@
---
java: "java-1.8.0-openjdk.x86_64"
default_file: "/etc/sysconfig/elasticsearch"
init_script: "/etc/init.d/elasticsearch"
es_home: "/usr/share/elasticsearch"

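These two files follow the usual `vars/{{ ansible_os_family }}.yml` layout, so the role presumably selects between Debian.yml and RedHat.yml at runtime with a task along these lines (a sketch only; the actual loading task is not part of this diff):

```
# Assumed pattern: load per-OS-family vars by interpolating ansible_os_family
# into the file name ("Debian" or "RedHat").
- name: Include OS-family specific variables
  include_vars: "{{ ansible_os_family }}.yml"
```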
@ -1,4 +1,4 @@
---
java_debian: "openjdk-7-jre-headless"
java_rhel: "java-1.8.0-openjdk.x86_64"
es_package_url: "https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch"
es_package_url: "https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch"
es_conf_dir: "/etc/elasticsearch"
sysd_script: "/usr/lib/systemd/system/elasticsearch.service"