# CAT-2849
The issue in this ticket is that the bulk scraping pods (digikey and findchips) are logging at the DEBUG level. I looked at this issue while working on CAT-3000 and wasn't able to fix it, but I did learn a few things.
First, when we look at the logs inside the bulk scraping pods, we can see that our JSON logger is being used and is working properly: each log line is a JSON string. However, the log level is not working as expected, and we see many logs at the `DEBUG` and `INFO` levels. Adding `logger.setLevel(logging.WARNING)` or `LOG_LEVEL="WARNING"` in `scraper/settings_base.py` does not fix this. It works for the realtime scraping pods, but something overrides it for the bulk scraping pods.
One difference between the bulk scraping pods and the realtime pods is the command used to run them. For the realtime pods, it is:
```
poetry run ddtrace-run python scraperrt/cmdline.py -i 0.0.0.0 -S scraperrt.settings -p "80"
```
Nothing in this command sets a log level.
For the bulk scraping pods, it is (findchips, for example):
```
poetry run ddtrace-run scrapy crawl findchips -L WARNING
```
Here an explicit option sets the log level to WARNING. So despite setting the level in the settings file and passing it on the command line, the bulk scraping pods continue to output logs at the DEBUG level.
The difference between the two seems to be the entry point: the bulk scrapers enter the program through Scrapy's own `scrapy crawl` command, while the realtime scrapers enter through our own Python script. I have a feeling this is the important part, but I haven't figured out why.
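One lead worth writing down (an assumption based on Scrapy's documented behavior, not verified against our code): when `scrapy crawl` starts, Scrapy configures logging itself via `scrapy.utils.log.configure_logging`, which installs its own root handler and resets the root logger's level. If our JSON handler is attached to the root logger without a level of its own, DEBUG records would pass straight through it regardless of `LOG_LEVEL` or `-L WARNING`, which would match what we see. A minimal sketch of a guard we could try in `scraper/settings_base.py`, after our JSON handler is attached:
```
import logging

# Sketch, not verified: filter at the handler rather than at the logger,
# so that Scrapy resetting the root logger's level on startup cannot
# re-enable DEBUG/INFO output through our JSON handler.
for handler in logging.getLogger().handlers:
    handler.setLevel(logging.WARNING)
```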
One thing I tried was setting the log levels of the offending loggers manually in `scraper/settings_base.py`:
```
import logging

# Force the noisiest third-party loggers down to WARNING.
logging.getLogger("pika").setLevel(logging.WARNING)
logging.getLogger("ddtrace.tracer").setLevel(logging.WARNING)
logging.getLogger("scrapy").setLevel(logging.WARNING)
```
This only works for pika. The other two loggers, and others besides, continue to log at the DEBUG level.
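Since something is clearly reconfiguring logging after `settings_base.py` runs, it might help to see exactly which loggers and handlers exist at runtime and at what levels. A small diagnostic sketch (plain stdlib, nothing specific to our code) that could be dropped in temporarily and called from a spider:
```
import logging

def dump_logging_tree():
    """Print every known logger with its level, effective level, and handlers."""
    root = logging.getLogger()
    print("root", logging.getLevelName(root.level), root.handlers)
    for name, logger in sorted(logging.Logger.manager.loggerDict.items()):
        if not isinstance(logger, logging.Logger):
            continue  # skip placeholder nodes for intermediate dotted names
        print(
            name,
            "level=" + logging.getLevelName(logger.level),
            "effective=" + logging.getLevelName(logger.getEffectiveLevel()),
            "handlers=" + repr(logger.handlers),
            "propagate=" + repr(logger.propagate),
        )
```
Calling this once at startup and once after the crawl begins should show which levels and handlers get changed out from under us.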
In summary, what I learned is that both the realtime and bulk scrapers use `scraper/settings_base.py` for initialization. We know this because the JSON logger is set up in that file, and it works for both. However, something is overriding the log level set both on the command line and in the settings file: in the realtime scrapers the level is correctly WARNING, while in the bulk scrapers it remains DEBUG (or is perhaps undefined).
## How was I testing?
It is not possible to test the Datadog logging locally. What I did was make changes, commit the code, and use k9s to point the staging findchips bulk scraper at the image built by GitLab CI. Then I would watch the logs and scrape a product from the product page in catalog. If this were working properly, no `DEBUG` or `INFO` logs would show up; at the moment, we see many logs from different loggers at these lower levels.
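As an aside, the eyeball check could be scripted. A sketch that reads the pod's JSON log lines from stdin and fails if anything below WARNING shows up (the `level` field name is an assumption about our JSON logger's schema):
```
import json
import sys

# Levels we expect to be filtered out; seeing any of these is a failure.
LOW_LEVELS = {"DEBUG", "INFO"}

bad = 0
for line in sys.stdin:
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        continue  # not a JSON log line
    if not isinstance(record, dict):
        continue
    if record.get("level", "").upper() in LOW_LEVELS:
        bad += 1
        print(line, end="")

sys.exit(1 if bad else 0)
```
Something like `kubectl logs <pod> | python check_levels.py` would then turn the manual check into a pass/fail (`check_levels.py` being a hypothetical name for the script).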