# cloud.gov issues and questions
- access to cloud.gov feature requests / visibility into the roadmap of future requests
- Slack channels for community of practice; future sync-ups with other dev, security, and IT ops teams
- roadblocks getting some features implemented (platform immaturity)
- nginx/proxy configuration
- We are advised to configure an nginx proxy to sit in front of each application we deploy
- https://cloud.gov/docs/technology/responsibilities/
  - Supposedly, we can just deploy our supported buildpack and Cloud.gov handles the rest
  - Generally a platform would handle this network responsibility with an ingress controller/reverse proxy, but we have to configure our own
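The advised setup can be sketched with a minimal `nginx.conf` for the Cloud Foundry nginx buildpack; the internal route and port below are placeholders, not our actual values:

```nginx
# Minimal nginx.conf sketch for the Cloud Foundry nginx buildpack.
# {{port}} is templated by the buildpack at staging time.
worker_processes 1;
daemon off;

events { worker_connections 1024; }

http {
  server {
    listen {{port}};
    location / {
      # backend-app.apps.internal:8080 is a hypothetical internal route
      proxy_pass http://backend-app.apps.internal:8080;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}
```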
- geoip filtering
- we need to implement custom geoip filtering capability due to attack/spam traffic
  - Cloud.gov has told us that because all tenants share a single network, any configuration rules made for us would apply to all tenants' code
    - This is poor separation of infrastructure, and another responsibility a platform should handle
  - Using an nginx proxy to provide this workaround means maintaining static config files, which incurs risk to a microservice's future ability to scale vertically/horizontally when needed
  - Rather than relying on segmented AWS VPCs/tenants, or a service mesh that can dynamically manage proxies in the background with apps residing in a cluster, we are prone to services being unavailable to our end users due to possible downtime
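A sketch of the geoip workaround in the nginx proxy, assuming the `ngx_http_geoip2` module and a MaxMind database are available in the buildpack (the stock buildpack may not bundle them); the country code and route below are placeholders:

```nginx
# Country-based filtering sketch; module and database availability are assumptions.
geoip2 /etc/maxmind/GeoLite2-Country.mmdb {
    $geoip2_country_code country iso_code;
}

# Map geolocated country codes to a block flag (code below is an example only).
map $geoip2_country_code $blocked_country {
    default 0;
    XX      1;  # placeholder country code to deny
}

server {
    listen {{port}};
    location / {
        if ($blocked_country) { return 403; }
        proxy_pass http://backend-app.apps.internal:8080;  # hypothetical route
    }
}
```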
- no support for AWS KMS or DynamoDB (needed for terraform capabilities)
  - In looking for a secret-management solution, we found a tool called SOPS that lets us encrypt files at rest/in transit with a key and store those encrypted files within our repo
  - Using SOPS we can simply encrypt/decrypt files within our pipeline using the SOPS binary and a key, rather than taking on the overhead cost/burden of a managed secrets service like HashiCorp Vault
  - In choosing a key, we opted for a provisioned KMS key; because of our distributed team, it didn't make sense to use a generated GPG public/private key pair that would have to be passed around
  - Our hope was to have a key generated within the same AWS account we share with our Cloud Foundry provider, but we found that could not be done by the platform or ITOPS teams, these requests "not being on their radar for being added anytime soon"
  - The Cloud Foundry marketplace does have support for an AWS Service Broker. This broker appears to provide dev teams with AWS resources that aren't on the Cloud.gov marketplace (KMS, EC2, DynamoDB tables, etc.)
    - This broker would have to be implemented by each dev team, since the default marketplace doesn't provide the service; each team with the bandwidth to implement the broker on their own would have to shoulder the security risks along with the current ATOs in place
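As a sketch of the SOPS approach, a `.sops.yaml` at the repo root can pin the KMS key so pipeline encrypt/decrypt calls need no extra flags; the ARN and path pattern below are placeholders, not our real values:

```yaml
# .sops.yaml — placeholder KMS ARN and path pattern, not our real values
creation_rules:
  - path_regex: deploy/secrets/.*\.yaml$
    kms: arn:aws:kms:us-gov-west-1:123456789012:key/00000000-0000-0000-0000-000000000000

# Pipeline usage (assuming the sops binary and AWS credentials are present):
#   sops --encrypt secrets.yaml > secrets.enc.yaml   # commit the encrypted file
#   sops --decrypt secrets.enc.yaml > secrets.yaml   # decrypt during deploy
```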
- elastic/kibana proxy configuration
  - Cloud.gov doesn't allow Elastic instances to expose the Kibana dashboard to the public internet
- From the documentation: https://cloud.gov/knowledge-base/2021-05-20-connecting-to-brokered-service-instances/
- There is an example proxy service, but the noted security concern: "It's inherently insecure and should be used only for testing with non-production, non-sensitive, non-essential data"
  - We have to adapt this to use either basic authentication (less secure) or Django's authentication (a heavier lift), and expose the service ourselves with another application
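Adapting the example proxy with basic authentication could look roughly like this nginx fragment; the upstream hostname, port, and htpasswd path are assumptions:

```nginx
# Basic-auth front for the brokered Kibana (sketch; names are placeholders).
server {
    listen {{port}};
    location / {
        auth_basic           "Kibana";
        auth_basic_user_file /home/vcap/app/.htpasswd;  # shipped with the proxy app
        proxy_pass           https://kibana-placeholder.internal:5601;
        proxy_set_header     Authorization "";  # don't forward client credentials upstream
    }
}
```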
- Elastic/Kibana security/authentication considerations
- Cloud.gov provides AWS ES/OpenSearch as a service. We are currently leveraging this service in all of our deployment spaces.
  - The issue with this is that AWS ES does not support X-Pack (Elastic/Kibana's native security features) because it was forked from Elastic 7.10.2, whose open distribution did not include X-Pack. This means the X-Pack configuration we use with our local Elastic 7.17.6/Kibana 7.17.10 deployments ([implemented here](https://github.com/raft-tech/TANF-app/pull/2775)) will not be applicable to our deployed environments. To get around this, AWS suggests introducing a [proxy EC2 node](https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/) that implements the same type of features X-Pack natively provides, by way of IAM policies and Signature Version 4 request signing. However, Cloud.gov does not allow access to the underlying AWS resources it wraps, making this workaround impossible.
  - Another option to work around AWS ES and Cloud.gov would be to deploy and manage our own ES cluster on Cloud.gov. This introduces large roadblocks of its own. Deploying/managing our own cluster would take at least one dedicated Elastic SME to ensure uptime, availability, updates, security, etc. It would also mean purchasing self-managed Elastic Stack licenses from Elastic. To acquire the minimum feature set we need for robust security and authentication integration with TDP, we would need to procure Platinum-tier licenses. Elastic requires a minimum of three licenses, at a cost of $6,600 per year per license. The cost of these licenses, plus the cost of at least one person to manage the cluster(s), makes this an infeasible option.
  - With these things considered, the best security/authentication we can provide is to block all external incoming traffic to our Elastic and Kibana servers, and to leverage the view-based auth [implemented here](https://github.com/raft-tech/TANF-app/pull/2775), which prevents non-admin users from navigating to Kibana via the frontend. We will not be able to use any X-Pack features (RBAC, realms, P2P encryption, etc.) from that PR in our deployed environments.
- Difficult to get logging for cloud.gov managed services
  - Cloud.gov does not expose service logs to the customer. The only way to retrieve logs from Cloud.gov managed services is to email support requesting logs for a time period. This limits the gains from deploying a monitoring service (e.g., Prometheus), as no logs from services would be captured, and it is not sufficient to monitor or identify issues in a production environment.
- Cloud.gov support 4/26/2024
- > Currently customer service instance logs (RDS, Elasticsearch, etc) are not exposed to customers or the customer deployment space, as such any monitoring service would not have access to your service instance logs.
# pros/cons