# DevOps Discussions
## Documentation feedback
- mostly OK
- steps to extend the stack with a new app would be nice
- steps to upgrade the DB version are critical and need to be added
- some things (like SSL) work in a weird way:
- the configuration is nowhere to be found in the code
- the Beanstalk UI suggests something other than what actually happens
- Configuration done outside Terraform files or the config files in the app repo (one-time, by-hand changes) needs to be either ported to Terraform/config files or very clearly documented, so that we can replicate it.
## VPC/PowerBI
- I think the setup we have is a good starting point
- I see no major point in the AWS PowerBI infrastructure integration solution - too costly for the perceived benefits.
- The way to go would be to eliminate the deficiencies of our setup:
- the Gateway runs on the same DB credentials as the app itself - we need to change that
- the Gateway (once enabled, which is rarely the case) has an RDP server enabled - a security issue. We don't really need it except for maintenance, and the PowerBI Desktop experience on it is horrible anyway
- the prod DB and the Gateway are open to outside traffic. Can we perhaps put them behind a VPN in some way?
## Review Apps
- they're not a must, but nice to have
- if they turn out to be too cumbersome to implement, we can just have 1-2 environments besides staging, say feature1 and feature2, to which we'd force-push certain code, much like we did with staging on Heroku. We would then need a tool to quickly reset the database.
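A reset tool could be a thin wrapper that drops everything and restores a known-good snapshot. A minimal sketch, assuming PostgreSQL and a connection URL - the class name, command strings, and snapshot path are illustrative, not what we actually run:

```ruby
require "shellwords"

# Hypothetical "reset the feature environment DB" helper.
# Builds the commands as strings so they can be reviewed (or dry-run)
# before anything destructive happens.
class DbReset
  def initialize(db_url, snapshot_path)
    @db_url = db_url
    @snapshot_path = snapshot_path
  end

  # Drop all objects in the public schema, then restore a snapshot.
  def commands
    [
      "psql #{Shellwords.escape(@db_url)} -c 'DROP SCHEMA public CASCADE; CREATE SCHEMA public;'",
      "pg_restore --no-owner -d #{Shellwords.escape(@db_url)} #{Shellwords.escape(@snapshot_path)}"
    ]
  end

  def run!
    commands.each { |cmd| system(cmd) || raise("failed: #{cmd}") }
  end
end
```

Wrapped in a rake task, that would give us the "quick reset" button for feature1/feature2.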
## Blue/Green
- sounds fine, although we need to collect more knowledge about how switching traffic back and forth between the two environments would look, and when exactly migrations would kick in.
# Microservice Arch
## Lambdas
- I'm all for it in terms of theoretical direction
- not that I see a clear use case right now, as we don't really have any heavily resource-consuming jobs.
## Auth Service
The app is not ready for implementing a new auth (and by the time it is, we probably won't need DevOps to help us out)
- right now 100% of our views use Rails rendering and Devise, and therefore cookie-based auth
- even with the new booking SPA that's going to be ~85% or so
- bottom line: we're dependent on cookies for now, and that's not going to change anytime soon. We're stuck with it unless the frontend is completely rewritten as an SPA. Not that this is an immediate threat to any of our plans.
- There's no easy way to change a cookie-dependent frontend into a bearer-token-based one. You'd have to somehow force Rails-rendered views to pass a bearer token along with requests, and I see no robust way of doing that.
- AWS Cognito|Firebase|Auth0 integration won't really help us at this point in any way, since we're reliant on Devise cookies (these are all token-based solutions). By integrating one of them we'd create a hybrid of token-based and cookie-based auth. We can do that, but things would get more complicated instead of simpler.
- To make us ready for microservices, we'd probably have to do the following, none of it overwhelmingly complicated I think:
- write a simple auth resolver for microservices that would accept data from either the cookie or a bearer token (I think Devise has that covered out of the box)
- consider moving the session from the cookie to either Redis or the DB
- gradually remove/refactor the pieces of code that rely on session state (we'd have to review the old code to find them), to make way for a more stateless approach
...and DevOps won't help us with any of these.
- if at some point (with the new SPA-based frontend) we'd like to move fully to token-based auth, I think we can handle Firebase|Auth0|Cognito integration on our own, or write custom code for it.
- I reviewed Cognito and Firebase, and Cognito felt very awkward compared to Firebase. That said, I believe we could integrate with either of them, or with Auth0, which is a similar tool.
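The auth resolver idea above could look roughly like this - a framework-agnostic sketch where `AuthResolver`, the session lookup, and the token verifier are hypothetical stand-ins for whatever Devise/Warden and the token provider actually give us:

```ruby
# Sketch of a dual auth resolver: one entry point that accepts either
# a session cookie or a Bearer token, so microservices can serve both
# the Rails-rendered views and a future token-based SPA.
class AuthResolver
  def initialize(session_store:, token_verifier:)
    @session_store = session_store    # e.g. a Redis/DB session lookup
    @token_verifier = token_verifier  # e.g. JWT / Cognito / Auth0 verification
  end

  # headers/cookies are plain hashes here; in Rack they'd come from the env.
  # Returns the resolved user, or nil if neither credential checks out.
  def resolve_user(headers, cookies)
    if (auth = headers["Authorization"]) && auth.start_with?("Bearer ")
      @token_verifier.call(auth.delete_prefix("Bearer "))
    elsif (sid = cookies["_session_id"])
      @session_store.call(sid)
    end
  end
end
```

Note the cookie branch only works once the session lives in Redis/DB (the second bullet above) - a cookie-stored session isn't resolvable by a separate service.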
## Message queue
There is a fundamental difference, AFAIK, between queues like RabbitMQ and SQS:
RabbitMQ uses the AMQP protocol (and some others, I think); SQS uses plain HTTP.
RabbitMQ tends to be more performant with tons of relatively short messages. SQS is better with a smaller number of bigger messages ('smaller' is highly relative here :-) both can take hundreds of thousands of them).
SQS was created to handle communication within a cloud environment, possibly spread across a wide area and timezones.
RabbitMQ was created to handle communication within a relatively condensed application stack (one machine? maybe a few of them, etc.)
My suspicion is that SQS would be overkill for now. Neither should be difficult to implement. RabbitMQ would, however, be a better fit for stack-level intercommunication between the services, because AFAIR you can make it communicate in a request-response model if you'd like.
At this point, however, we neither need nor can make use of either.
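For the record, the request-response model mentioned above works in RabbitMQ via a reply-to queue plus a correlation id (e.g. with the bunny gem). The shape of the pattern, sketched with plain in-memory Ruby queues standing in for real AMQP queues:

```ruby
require "securerandom"

# RabbitMQ-style RPC shape: the client publishes a request carrying a
# reply-to queue and a correlation id; the server replies to that queue,
# echoing the id so the client can match the response to its request.
requests = Queue.new
replies  = Queue.new

# "Server": consume one request, publish a response tagged with the same id.
server = Thread.new do
  msg = requests.pop
  replies.push(correlation_id: msg[:correlation_id], body: msg[:body].upcase)
end

# "Client": publish a request, then block on the reply queue.
correlation_id = SecureRandom.uuid
requests.push(correlation_id: correlation_id, body: "ping", reply_to: replies)
reply = replies.pop
server.join

reply[:body] if reply[:correlation_id] == correlation_id  # => "PING"
```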