Install Elasticsearch
Deploy Single ES
Create Network
Because the es and kibana containers need to communicate with each other (a kibana container will be deployed later as well), we first create a dedicated Docker network.
- Check the current networks

```sh
docker network ls
```

- Create a new `es-net` network

```sh
docker network create es-net
```
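To confirm the network was created (and, later, to see which containers have joined it), you can inspect it:

```sh
docker network inspect es-net
```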
Load Docker Image

Load elasticsearch

- elasticsearch v7.12.1

```sh
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.12.1
```

- elasticsearch v8.10.4 (the image used in the v8 example below)

```sh
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.10.4
```

Load kibana

- kibana v7.12.1

```sh
docker pull docker.elastic.co/kibana/kibana:7.12.1
```

- kibana v8.10.4

```sh
docker pull docker.elastic.co/kibana/kibana:8.10.4
```
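If you want to double-check that the images were pulled (assuming they all came from docker.elastic.co as above), list them:

```sh
docker images | grep docker.elastic.co
```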
Run Elasticsearch
Run the following docker command to deploy a single-node es.
v7.12.1

```sh
docker run -d \
  --name es \
  -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
  -e "discovery.type=single-node" \
  -v ~/desktop/elastic-data/es-data:/usr/share/elasticsearch/data \
  -v ~/desktop/elastic-data/es-plugins:/usr/share/elasticsearch/plugins \
  --privileged \
  --network es-net \
  -p 9200:9200 \
  -p 9300:9300 \
  docker.elastic.co/elasticsearch/elasticsearch:7.12.1
```

- To check whether the deployment succeeded, open http://localhost:9200/ after about a minute; you should see a response like the one below:

```json
{
  "name": "2d99af34b373",
  "cluster_name": "docker-cluster",
  "cluster_uuid": "gqiOmsFkQ4-L352DSm8cqQ",
  "version": {
    "number": "7.12.1",
    "build_flavor": "default",
    "build_type": "docker",
    "build_hash": "3186837139b9c6b6d23c3200870651f10d3343b7",
    "build_date": "2021-04-20T20:56:39.040728659Z",
    "build_snapshot": false,
    "lucene_version": "8.8.0",
    "minimum_wire_compatibility_version": "6.8.0",
    "minimum_index_compatibility_version": "6.0.0-beta1"
  },
  "tagline": "You Know, for Search"
}
```
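The same check works from the command line with curl (assuming the default port mapping above):

```sh
curl http://localhost:9200/
```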
v8.10.4
```sh
docker run -d \
  --name es \
  -m 1GB \
  -e "discovery.type=single-node" \
  -v ~/desktop/elastic-data/es-data:/usr/share/elasticsearch/data \
  -v ~/desktop/elastic-data/es-plugins:/usr/share/elasticsearch/plugins \
  --privileged \
  --network es-net \
  -p 9200:9200 \
  -p 9300:9300 \
  docker.elastic.co/elasticsearch/elasticsearch:8.10.4
```

Commands explanation:
- `-d`: run the container in the background
- `--name es`: set the container name to es
- `-e`: set an environment variable
- `-e "ES_JAVA_OPTS=-Xms512m -Xmx512m"`: configure the JVM heap size, which effectively controls how much memory es uses; the default is 1GB
- `-e "discovery.type=single-node"`: run es in single-node mode
- `-e "cluster.name=es-docker-cluster"`: set the cluster name
- `-p 9200:9200`: es RESTful HTTP port
- `-p 9300:9300`: es inter-node transport port
- `--privileged`: grant access to logical volumes
- `--network es-net`: join the es-net network
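As an extra sanity check you can query the cluster health endpoint (a sketch; assumes the 7.12.1 setup above, where security is disabled by default). A fresh single node should report green or yellow:

```sh
curl "http://localhost:9200/_cluster/health?pretty"
```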
Deploy kibana
kibana provides a visual GUI for elasticsearch.
The kibana version must be the same as the elasticsearch version.
Deploy
Run the following docker command to deploy kibana.
```sh
docker run -d \
  --name kibana \
  -e ELASTICSEARCH_HOSTS=http://es:9200 \
  --network=es-net \
  -p 5601:5601 \
  docker.elastic.co/kibana/kibana:7.12.1
```

Commands explanation:
- `--network=es-net`: join the es-net network, the same network as elasticsearch
- `-e ELASTICSEARCH_HOSTS=http://es:9200`: configure the elasticsearch address; because kibana and elasticsearch are on the same network, kibana can reach elasticsearch directly via the container name and port
Check Kibana status log
```sh
docker logs -f kibana
```

Visit kibana in the browser:

http://0.0.0.0:5601
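You can also check kibana from the command line via its status API (assuming the default port mapping above):

```sh
curl http://localhost:5601/api/status
```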
Install IK Analyzer
elasticsearch needs to split documents into words when building the inverted index, and it also tokenizes the user's input at search time. However, the default tokenization rules do not handle Chinese text well.
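To see the problem, run the built-in standard analyzer against a Chinese sentence (a sketch; assumes the 7.12.1 single-node setup above). It splits the text into individual characters rather than meaningful words:

```sh
curl -X POST "http://localhost:9200/_analyze?pretty" \
  -H 'Content-Type: application/json' \
  -d '{"analyzer": "standard", "text": "我是一名程序员"}'
```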
Install IK plugin online
```sh
# get a shell inside the container
docker exec -it es /bin/bash

# download and install the plugin online
./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.12.1/elasticsearch-analysis-ik-7.12.1.zip

# leave the container
exit

# restart the container
docker restart es
```

Install IK plugin locally
- Check the es-plugins folder. With the bind mount used above it is `~/desktop/elastic-data/es-plugins`; if you created a named volume instead, locate it with:

```sh
docker volume inspect es-plugins
```

- Download `elasticsearch-analysis-ik-7.12.1.zip` from the GitHub releases of elasticsearch-analysis-ik
- Unzip it into the es-plugins folder (`~/desktop/elastic-data/es-plugins`)
- Restart the container and watch the logs

```sh
docker restart es
docker logs -f es
```
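Once es has restarted, you can verify that the plugin loaded by tokenizing some Chinese text with one of the analyzers the IK plugin registers, `ik_smart` or `ik_max_word` (a sketch; assumes the 7.12.1 setup above):

```sh
curl -X POST "http://localhost:9200/_analyze?pretty" \
  -H 'Content-Type: application/json' \
  -d '{"analyzer": "ik_smart", "text": "我是一名程序员"}'
```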
IK Analyzer - extend dictionary
To extend the vocabulary of the IK analyzer, modify IKAnalyzer.cfg.xml in the config directory of the IK analyzer's plugin folder, and add your words to the files below.
- `ext.dic`: extended dictionary
- `stopword.dic`: stop words
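For reference, a minimal sketch of the relevant entries in IKAnalyzer.cfg.xml (key names follow the sample config shipped with the IK plugin; each dictionary file lists one word per line, and es must be restarted after changes):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- local extension dictionary -->
    <entry key="ext_dict">ext.dic</entry>
    <!-- local stop word dictionary -->
    <entry key="ext_stopwords">stopword.dic</entry>
</properties>
```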