###### tags: `Design`, `Elastic`
The tripleo-ci team needs to be able to query RDO ES/logstash in order to reconfigure elastic-recheck to use RDO servers.
### What is failing now
https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/ is the new opensearch.
Next tasks are to:
- bring up Elastic Recheck
- integrate it with this Elasticsearch/OpenSearch instance
- merge the rdo branch to master

We are not touching the OpenStack Health repo because its complex backend tooling is all gone now.
### The discussion with QE/TC:
```
15:18:40 <gmann> #topic New ELK service dashboard: e-r service
15:19:10 <gmann> frenzy_friday: dasm : I think most part if clear now on this
15:19:37 <dpawlik> There was an issue that comes few times last week that Opensearch cluster does not have enough space. It was little bit odd, but it seems that few days there was pushed a lot of messages there
15:19:47 <dpawlik> I'm monitoring that situation
15:19:53 <gmann> dpawlik: ok, thanks
15:20:22 <fungi> the volume of logs generated fluctuates wildly depending on what's going on the openstack development
15:20:39 <clarkb> we found 7 days to be stable with 6TB of storage for 5TB effective with one replica on the old system
15:20:44 <frenzy_friday> gmann, we will prepare the code so that it can be merged to master. It will still be tripleo focused but we might clean up some stuff
15:21:09 <fungi> sometimes some projects/jobs end up with failure modes where massive log streams get generated. other times it's just that somebody is repeatedly rebasing a 50-change stack for a project which runs lots of jobs
15:21:15 <gmann> frenzy_friday: and keeping master one also I mean openstack based queries etc?
15:21:26 <dasm> gmann++n
15:21:33 <gmann> ok
15:22:05 <fungi> there's also no reason the e-r being run for the openstack community can't evaluate both tripleo and devstack job logs
15:22:29 <frenzy_friday> gmann, we have decoupled the queries from ER repo: https://opendev.org/openstack/tripleo-ci-health-queries The syntax of the queries have also changed
15:22:46 <dasm> do we have any ETA for bringing it up? or is it: when it's done, it's done?
15:22:47 <dpawlik> clarkb, fungi: ack. If situation will be too much to handle, I will cut off some part of logs that are pushed temporary.
15:23:25 <gmann> frenzy_friday: yeah, that is why we can keep them in separate folder or so in master branch supporitng both syntex
15:23:51 <dpawlik> sometimes logs can have over 200MB....
15:23:54 <gmann> dasm: I think no ETA planned yet but as we are doing we should just get it done in this flow :)
15:24:04 <dasm> gmann: ack
15:24:08 <gmann> frenzy_friday: dasm and e-r we can discuss in separate call also if need or any query on merging. dpawlik will be here and on #openstack-infra for discussion.
15:24:31 * dasm is on #openstack-infra too -- just in case
15:24:53 <clarkb> dpawlik: the old system also filtered out all debug logs
15:24:55 <gmann> cool thanks frenzy_friday dasm for joining and helping on this. really appreciate that
15:24:57 <dpawlik> feel free to catch me there ;)
15:25:03 <clarkb> dpawlik: for that reason
15:25:04 <gmann> +1
15:25:12 <frenzy_friday> gmann, ack, that will be good. Once we have the code cleaned up a bit we can discuss if that can be merged to master. dasm what do you think?
15:25:29 <dpawlik> clarkb: ack
15:25:31 * dasm is interested in bringing that up again. if infra allows for that
15:25:55 <dpawlik> what would be the subdomain name for it?
15:25:55 <dasm> frenzy_friday: yes, let's start with making it used by both: rdo and master.
```
old
---
~~ rdo elasticsearch has a username/passwd, so the check jobs which connect to elasticsearch and verify the queries need access to the creds (they previously ran against upstream ES without creds).
There are 2 ways to do it:
1. Hardcode the creds in the code, since the password is already displayed on the kibana UI (but then the password will be in gerrit)
2. Zuul secrets - create a base job in the config repo with secrets and use these base jobs to run the check jobs (a lot of work); example: https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/zuul.d/tripleo-rdo-base.yaml#L230 ~~
---
# Connecting Elastic Recheck to RDO
### The URLs used by ER to connect to upstream logstash and elasticsearch:
ES_URL = http://logstash.openstack.org:80/elasticsearch
LS_URL = http://logstash.openstack.org
### How they are used in code:
ES_URL is passed directly to pyelasticsearch.ElasticSearch()
<LS_URL>/#/dashboard/file/logstash.json?<logstash_query> - This is used to form the link to logstash under the graph in ER (I think so)
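As a rough sketch (not the actual elastic-recheck code; the helper and query below are illustrative), the two URLs play these roles:
```
# Illustrative sketch of how ES_URL / LS_URL are consumed; this is not the
# actual elastic-recheck code, just an approximation of the two roles.
import pyelasticsearch

ES_URL = "http://logstash.openstack.org:80/elasticsearch"
LS_URL = "http://logstash.openstack.org"

# ES_URL is handed straight to the client that runs the queries.
es = pyelasticsearch.ElasticSearch(ES_URL)

def logstash_link(query):
    # LS_URL is only used to build the "view in logstash" link under the graph.
    return "%s/#/dashboard/file/logstash.json?query=%s" % (LS_URL, query)

results = es.search('build_status:"FAILURE"', index="logstash-*")
print(results["hits"]["total"], logstash_link('build_status:"FAILURE"'))
```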
### What we found:
We logged in with kibana/<long_password>. Then:
https://review.rdoproject.org/analytics/app/discover/? - This is a view of the dashboard and the queries work here just as in upstream. But if we try a GET (with/without basic auth) on this it returns HTML; this is not the API endpoint.
https://review.rdoproject.org/analytics/app/dev_tools#/console - We can query here (on the web page), but it looks like the kibana user doesn't have permissions - it returns 403.
https://elk.review.rdoproject.org:9200/_search - This is what we get when we select "Copy as cURL" from the console above. But if we try to hit this URL (with/without basic auth) it times out.
### Next steps
1. Hide launchpad link when there is no bug
2. Right now the title of the bug shows the bug id (from queries.yml), but the id doesn't make it clear what the bug is. Maybe we can show the pattern/msg from the yml file when the cursor hovers over the title? (any better solution is also welcome)
3. Get the msg field into ER queries (update the ER converter script)
4. Sova logfile / gerrit message: add a link back to the ER dashboard
5. Get an RDO projects URL (health.rdoproject.org) for the project (boycott sorin) - lower priority
6. Deploy on OpenShift - lower priority
7. Trello + ER bot (https://github.com/weshayutin/tripleo-critical-bugs)
### Questions:
Q1. What are the equivalents of these in RDO?
A1. "https://elk.review.rdoproject.org:9200/_search" is the elasticsearch URL for RDO
"https://review.rdoproject.org/elasticsearch/logstash-*/_search" is the kibana URL for RDO
Q2. Can we access the elk URL from our localhost (not only from a designated server)?
A2. Need to check.
context:
```
<frenzy_friday> Hey dpawlik, I can reach elk.review.rdoproject.org:9200 from the health server now. Sorry I missed a point last time - is it also possible to open this up for all (with authentication)?
<dpawlik> frenzy_friday: you mean not only for health server but all ip addresses?
<frenzy_friday> yes
<dpawlik> frenzy_friday: hmm, we would get a lot of bot requests there
<dpawlik> earlier was situation that some delete the index then index pattern... so I guess it will attack the server later with bruteforce
<dpawlik> frenzy_friday: I can discuss it with team
```
Q3. Using the kibana user to curl the URLs above returns 403 (http://paste.openstack.org/show/807021/). Could we have an unrestricted user/account?
context:
```
<anbanerj|rover> Hey dpawlik , With the kibana user from the kibana dashboard we can query stuff like "message:"async task did not" AND tags:"console"" But to do the same through API to elasticsearch do we need a different user or different permission? This is how we are trying - http://paste.openstack.org/show/807021/ are we missing something here?
<dpawlik> anbanerj|rover: looks ok... I can try to add this indices into the elk, but I need to check it first
<anbanerj|rover> dpawlik, thanks
<anbanerj|rover> dpawlik, but how are we able to query without any permissions through the dashboard? Does it use some other user in the backend?
<dpawlik> anbanerj|rover: Thats a good question, but I did not analyze the java code in Elasticsearch in Opendistro so maybe community will know
```
A3. We were using the API wrong. We needed to pass the index pattern "logstash-*" in the URL.
Note for later:
```
dpawlik> anbanerj|rover: and I also would like to suggest you, that if we merge the elasticsearch host into one, you will need a special header: -H "securitytenant: mytenant"
<dpawlik> anbanerj|rover: right now there is only global tenant, in the future it will be changed, so adding the header into the request would avoid some troubles in the future
```
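Putting A3 and the note above together, a minimal sketch (credentials are placeholders; only the `global` tenant exists today) of a request that passes the `logstash-*` index in the URL and already sends the `securitytenant` header:
```
# Minimal sketch combining A3 (index pattern in the URL) with the
# securitytenant header suggested above. Credentials are placeholders.
import requests

resp = requests.get(
    "https://review.rdoproject.org/elasticsearch/logstash-*/_search",
    auth=("kibana", "<passwd>"),
    headers={"securitytenant": "global"},
    json={"query": {"query_string": {
        "query": 'message:"async task did not" AND tags:"console"'}}},
)
resp.raise_for_status()
print(resp.json()["hits"]["total"])
```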
Q4. Where can we get a cert to connect to the ES URL?
context:
```
<anbanerj|rover> hey dpawlik, sorry bugging you once again, curl -XGET "https://kibana:<passwd>@elk.review.rdoproject.org:9200/logstash-*/_search" witout --insecure flag is complaining about missing ssl certificate. Do you know how I can get the proper certificate to access it without insecure flag?
<dpawlik> anbanerj|rover: try doing query to the frontend instead of directly to backend: eg. curl -XGET "https://kibana:test@review.rdoproject.org/elasticsearch/logstash-*/_search" -H 'Content-Type: application/json' -d'{ "query": { "match_all": {} }}'
<dpawlik> anbanerj|rover: but if you will do a lot of requests, direct query to the elasticsearch would be better
<anbanerj|rover> dpawlik, thanks. curl to the frontend passed without insecure. Is there a way to hit the elasticsearch directly without the insecure flag as well?
<dpawlik> anbanerj|rover: so far not ;/
<anbanerj|rover> dpawlik, oh ok, thanks :)
<dpawlik> anbanerj|rover: we can think to provide such feature in the future, when we have one host
```
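In Python terms, only the direct backend call currently needs the `--insecure` equivalent (a sketch; credentials are placeholders):
```
# Sketch: the frontend query shown above verifies TLS normally; hitting the
# ES backend directly currently needs verify=False (curl's --insecure)
# because there is no trusted certificate for it yet. Credentials are placeholders.
import requests

resp = requests.get(
    "https://elk.review.rdoproject.org:9200/logstash-*/_search",
    auth=("kibana", "<passwd>"),
    json={"query": {"match_all": {}}},
    verify=False,
)
print(resp.status_code)
```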
Q5. The endpoint /_status (https://www.elastic.co/guide/en/elasticsearch/reference/1.3/indices-status.html) has moved to /_stats (https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-status.html); accessing it requires extra user privileges.
A5. https://projects.engineering.redhat.com/browse/RHOSZUUL-577
https://softwarefactory-project.io/r/c/software-factory/sf-config/+/22291
Q6. Which python client version works with both RDO and upstream?
The endpoint /_status (https://www.elastic.co/guide/en/elasticsearch/reference/1.3/indices-status.html) has moved to /_stats.
The code uses the pyelasticsearch client, which doesn't seem to have an API for _stats (why are we not using the elasticsearch client?)
https://github.com/pyelasticsearch/pyelasticsearch/issues/202
https://github.com/pyelasticsearch/pyelasticsearch/issues/192
A6. We will move to a separate branch for rdo https://review.opendev.org/c/openstack/project-config/+/803473
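As a stopgap until the client question is settled, `_stats` can be fetched with a plain HTTP call (a sketch; the frontend URL and credentials are placeholders):
```
# Sketch: pyelasticsearch has no dedicated helper for _stats, so fetch the
# index stats with a plain HTTP call. URL and credentials are placeholders.
import requests

resp = requests.get(
    "https://review.rdoproject.org/elasticsearch/logstash-*/_stats",
    auth=("kibana", "<passwd>"))
resp.raise_for_status()
print(resp.json()["_all"]["primaries"]["docs"]["count"])
```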
Q7. Can't debug pytests with pdb - 'module' object has no attribute 'Cmd'
A7. https://bugs.launchpad.net/openstack-gate/+bug/1522553 - do this
Q8. Fix er-bot (tbd)
Q9. One job from upstream (I suggest containers-multinode) should have its logs imported into RDO's elastic DB. It sounds like gearman triggering is involved. TO-DO: work with rdo-infra to drive a change that gets pulled into RDO's elastic.
Cards opened in RDO:
- https://issues.redhat.com/browse/RHOSZUUL-595
- https://issues.redhat.com/browse/RHOSZUUL-532
______
## Setting up a copy connected with RDO ES
### To test from the health server:
```
# Directly against the ES backend:
[centos@tripleo-health-temp ~]$ curl -XGET "https://kibana:<passwd>@elk.review.rdoproject.org:9200/logstash-*/_search" --insecure -H 'Content-Type: application/json' -d'{ "query": { "match_all": {} }}'
# Through the frontend:
curl -XGET "https://kibana:<passwd>@review.rdoproject.org/elasticsearch/logstash-*/_search" -H 'Content-Type: application/json' -d'{ "query": { "match_all": {} }}'
```
We have a generic query which should always return something; this is needed to build the initial HTML pages. Originally it was:
- ALL_FAILS_QUERY = (((filename:"job-output.txt" AND message:"POST-RUN END" AND message:"playbooks/base/post.yaml") OR (filename:"console.html" AND (message:"[Zuul] Job complete" OR message:"[SCP] Copying console log" OR message:"Grabbing consoleLog"))) AND (build_status:"FAILURE" AND build_queue:"gate" AND voting:"1"))
To get a hit from RDO ES before it is connected to upstream:
- ALL_FAILS_QUERY = (message:"Playbook run failed" AND tags:"console" AND filename:"job-output.txt" AND project:"*tripleo*")
- message:"Playbook run failed" AND tags:"console" AND filename:"job-output.txt" AND project:"*tripleo*" AND build_status:"FAILURE" AND voting:"1" AND build_queue:"check"
## Sova
### How sova patterns work
sova-patterns.json has 2 main keys/sections: `regex` and `patterns`.
`regex` entries have the fields:
- `regex`, the actual regular expression to search for in the files
- `name`, a unique, readable identifier for the regex
- `multiline`, which is true for multiline regexes
`patterns` has keys/sections which correspond to types of files (like console, errors, syslog, ironic-conductor, etc.). Each of these sections has items with:
- `id`
- `pattern`, which refers to a regex `name` in the `regex` section. If there is no regex with that name, then this pattern string itself is the regex.
- `logstash` (again refers to the regex section, but it doesn't appear to be used anywhere)
- `msg` - this is what the user sees in the output file after sova finds the corresponding regex in the files
- `tag` - denotes the type of failure (infra/code etc.)
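A small sketch of how a pattern item is resolved against a log line (illustrative only, not the actual sova code):
```
# Illustrative sketch of pattern resolution (not the actual sova code):
# if `pattern` names an entry in the `regex` section, that regex is used,
# otherwise the `pattern` string itself is the regex.
import re

regexes = {
    "iron_space_re": r"Disk volume where .* is located doesn't have enough disk space",
}

def resolve(item):
    return regexes.get(item["pattern"], item["pattern"])

item = {"id": 301, "pattern": "iron_space_re",
        "msg": "No space on disk for Ironic.", "tag": "infra"}
line = "Disk volume where /var is located doesn't have enough disk space. Required 10G."
if re.search(resolve(item), line):
    print(item["msg"])  # this msg is what ends up in the output file
```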
### To move an existing sova regex from sova-patterns.json to queries.yml
- Choose an item from `patterns` in output/sova-patterns.json. This is the file that sova currently uses.
Example 1:
```
"patterns": {
"console": [
{
"id": 1,
"logstash": "",
"msg": "Overcloud stack installation: SUCCESS.",
"pattern": "Stack overcloud CREATE_COMPLETE",
"tag": "info"
}, ... ] ... }
```
^ Here "Stack overcloud CREATE_COMPLETE" is not present in `regexes`. So this is how it will look after conversion to new format.
```
- id: "Overcloud stack installation: SUCCESS"
  pattern: "Stack overcloud CREATE_COMPLETE"
  tags: console
```
Example 2:
```
"patterns": { ...
"ironic-conductor": [ ...
{
"id": 301,
"logstash": "is located doesn't have enough disk space. Required",
"msg": "No space on disk for Ironic.",
"pattern": "iron_space_re",
"tag": "infra"
} ] }
```
^ Here "iron_space_re" is present in `regexes`
```
{
"regex": "Disk volume where .* is located doesn't have enough disk space",
"name": "iron_space_re"
},
```
So this is how it will look after conversion to the new format:
```
- id: iron_space_re
  pattern: "is located doesn't have enough disk space. Required"
  regex: "Disk volume where .* is located doesn't have enough disk space"
  tags: ironic-conductor
```
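A rough sketch of that conversion step (illustrative only; not the actual ER converter script mentioned in the next-steps list above):
```
# Illustrative sketch of converting a sova-patterns.json item into the
# queries.yml entries shown above. Not the real converter script.
import yaml

def convert(item, regexes, file_type):
    regex = regexes.get(item["pattern"])
    if regex:
        # Example 2: `pattern` names an entry in the `regex` section.
        return {"id": item["pattern"], "pattern": item["logstash"],
                "regex": regex, "tags": file_type}
    # Example 1: the pattern string itself is the thing to search for.
    return {"id": item["msg"].rstrip("."), "pattern": item["pattern"],
            "tags": file_type}

regexes = {"iron_space_re":
           "Disk volume where .* is located doesn't have enough disk space"}
item = {"id": 301, "logstash": "is located doesn't have enough disk space. Required",
        "msg": "No space on disk for Ironic.", "pattern": "iron_space_re", "tag": "infra"}
print(yaml.safe_dump([convert(item, regexes, "ironic-conductor")], sort_keys=False))
```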
## To run tripleo ci health dashboard (with RDO Elasticsearch)
- Copy the ssh keys of os-tripleo-ci from infra doc to ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub (with proper permissions)
- cp ~/.ssh/id_rsa ~/.ssh/id_rsa_insecure
- vi ~/sourcefile (and add the following)
- export GERRIT_USER=os-tripleo-ci
- git clone --branch rdo https://opendev.org/opendev/elastic-recheck.git
- git clone https://opendev.org/openstack/tripleo-ci-health-queries
- cd elastic-recheck
- cp ~/.ssh/id_rsa data/id_rsa (this should be done by make build but sometimes it doesn't work)
- install docker
- install docker-compose
- source ~/sourcefile
- pip install -r requirements.txt (if it fails, run pip install --upgrade pip first)
- make build (this can end with "DEBUG:paramiko.transport:Dropping user packet because connection is dead." - that is normal)
- make up (if you see [SSL: CERTIFICATE_VERIFY_FAILED], check that your system has the correct date/time)
- Make sure port 80 is open in the VM
- http://<IP>:80 --> dashboard
## Notes
* Presentation: https://docs.google.com/presentation/d/1CavlDKCEXc2LdXTYDNwDot4X9wbehXkAcdUvHyeKt8Q/edit#slide=id.g3d946fe0a1_0_0
* https://opendev.org/opendev/elastic-recheck