# MDP (Microservices Development Platform)

## What is MDP?

MDP is a local platform that helps us run AWS-based microservices locally and as part of component tests. MDP automatically provisions a set of integrated AWS mocks (chiefly provided by [localstack](https://github.com/localstack/localstack)) based on your service's serverless.yml file.

## mdp-network

`mdp-network` spins up a docker network of containers for all the different AWS resources that we need to provision in order to run and test a service end to end. The network also provisions images that are not AWS resources, such as Postgres and mountebank, which can be used for stubbing and proxying requests. See [the docker-compose.yml file in this service](mdp-network/docker-compose.yml) for a full list of what is provisioned.

| Resource             | Auto-provisioned |
| -------------------- |:----------------:|
| DynamoDB             | ✔ |
| DynamoDB streams     | ✔ |
| KMS                  | ✔ |
| RDS Postgres         | ✔ |
| Kinesis              | ✔ |
| S3                   | ✔ |
| Secrets Manager      |   |
| Mountebank (not AWS) |   |

The table above shows all of the resources _currently_ supported by MDP.

## Prerequisites

### Requirements

- Docker installation
- docker-compose
- Node 8 / Node 10

### CLI commands

These CLI commands are to be invoked from the microservice's checked-out repository and aid your development workflow. Please add them to your `package.json` as scripts.
```
"mdp-network:up": "node mdp-network/generateComposeFile.js && docker-compose -f mdp-network/docker-compose.yml up",
"mdp-network:down": "docker-compose -f ./node_modules/mdp/mdp-network/docker-compose.yml down",
"debug": "source .env.debug && SLS_DEBUG=* node --inspect ./node_modules/.bin/serverless offline start",
"debug:infra:up": "source .env.debug && npx babel-node ./debugging/setup-local-environment up",
"debug:infra:down": "source .env.debug && npx babel-node ./debugging/setup-local-environment down"
```

- `mdp-network:up` generates the MDP docker network as a docker-compose.yml definition file and spins it up, while `mdp-network:down` brings it down.
- `debug:infra:up` runs our setup function, which creates all the resources/stubs we need to run our service end to end.
- `debug:infra:down` tears down any resources that were created in the setup, restoring our service environment to a clean state.
- `debug` runs serverless offline. This can be hooked into your IDE's launch configuration to allow you to set breakpoints and debug through your service. It is **important** that you run `serverless offline start` instead of `serverless offline` to ensure the serverless-offline plugins get loaded.

## Walkthrough

This walkthrough shows the set-up needed to run a service locally. The example uses the `secure-adapter`. The flow we will walk through processes topups on the back of a `payment.created` event - this is a very common flow across our services, so it serves as a good example.

The flow is as follows:

> Kinesis `payment.created` event >> Lambda >> Secure API >> DynamoDB >> DynamoDB Stream >> Kinesis `topups.created` event

### env.debug

```
export AWS_STAGE=local
export AWS_REGION=eu-west-1
export BUILD_VERSION='test'
export BUILD_ID='test'
export BUILD_COMMIT='test'
export EVENT_VERSION_SUFFIX='local'

## The full ARNs of the resources created are exported for completeness.
## However, the serverless `kinesis-offline` and `offline-dynamodb-streams`
## plugins both only use the table/stream name to retrieve the resource
export secureAdapterTopupsTableStreamArn=arn:aws:dynamodb:ddblocal:000000000000:table/localSecureAdapterTopups/stream
export paymentsStreamArn=arn:aws:kinesis:us-east-1:000000000000:stream/payments-local
export secureCredentialsSecretId=ecotricity/secure-credentials-secret-local
```

### debugging/secure-mock.js

```js
const secureMock = {
  name: 'secure',
  port: 40000,
  protocol: 'http',
  stubs: [
    {
      predicates: [
        {
          equals: {
            method: 'POST',
            path: '/WseRestService.svc/json/GetVendCodeByPaymentCard'
          }
        },
        {
          matches: {
            body: {
              PaymentCardId: '9826015002501620247',
              AmountPaid: 1000,
              AutoTransfer: true,
              HesId: 10000001
            }
          }
        }
      ],
      responses: [
        {
          is: {
            statusCode: 200,
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({
              WseId: 1675714,
              UtrnCode: '40283892318193339641',
              AmountPaid: 1000,
              MeterCreditAmount: 1000,
              DebtDeducted: 0,
              VatOnEnergy: 47,
              VatOnDebt: 0,
              OutstandingDebt: 0
            })
          }
        }
      ]
    }
  ]
}

export default secureMock
```

### debugging/setup-local-environment.js

```js
// import the resources and helper functions we need from the `mdp` library
import { serverlessYaml, mountebank, secretsmanager, kinesis, report } from 'mdp'
import secureMock from './secure-mock'

// import env vars
const secureCredentialsSecretId = process.env.secureCredentialsSecretId
const stage = process.env.AWS_STAGE

// The up() function sets up all the resources needed to run the service end to end
const up = async () => {
  // posts the stub to the mountebank instance
  // See the mountebank docs (http://www.mbtest.org/docs/api/stubs) for more info.
  // You can comment this out if you want to run against Secure's dev API
  await mountebank.postImposters([ secureMock ])

  // triggers MDP to read our service's serverless.yml file
  // and to spin up any resources specified in the YAML file
  await serverlessYaml.up(`${__dirname}/../`)

  // creates the Secrets Manager and Kinesis resources manually
  // using the AWS sdk (which is pointing at the right ports)
  const secureCredentialsSecret = await secretsmanager.createSecret({
    Name: secureCredentialsSecretId,
    SecretString: JSON.stringify({
      secureHost: 'http://localhost',
      securePort: secureMock.port, // port the mountebank stub is running on
      securePath: '/WseRestService.svc/json/GetVendCodeByPaymentCard',
      securePass: 'password',
      secureUser: 'user'
    })
  })

  const paymentStream = await kinesis.createStream({ StreamName: `payments-${stage}` })

  // pass any custom created resources to the reporter
  // this will log out whether the resources were created successfully
  report('CUSTOM PROVISIONED', [ secureCredentialsSecret, paymentStream ])
}

// The down() function is the inverse of the up() function,
// tearing down everything that was created by the setup function.
const down = async () => {
  await mountebank.deleteImposters([ secureMock.port ])
  await serverlessYaml.down(`${__dirname}/../`)

  const secureCredentialsSecret = await secretsmanager.deleteSecret({
    SecretId: secureCredentialsSecretId
  })
  const paymentStream = await kinesis.deleteStream({ StreamName: `payments-${stage}` })

  report('CUSTOM DESTROYED', [ secureCredentialsSecret, paymentStream ])
}

if (process.argv[2] === 'down') {
  down()
} else {
  up()
}
```

## Steps

### 1. Install MDP dependencies

```
npm install --save-dev git+ssh://git@bitbucket.org/ecotricity/mdp.git#semver:{{version}} git+ssh://git@bitbucket.org/ecotricity/serverless-offline-mdp.git#semver:{{version}}
```

The above command installs `mdp` and `serverless-offline-mdp`.
`serverless-offline-mdp` is a serverless plugin which points your service's AWS config at the ports exposed by the MDP docker network instead of at the actual AWS instances in the cloud. It does this without the need for any manual intervention.

### 2. Install and set up dynamodb/kinesis offline

```
npm install --save-dev serverless-offline-dynamodb-streams serverless-offline-kinesis
```

First run the above command to install the `serverless-offline-dynamodb-streams` and `serverless-offline-kinesis` libraries. These plugins emulate Kinesis and DynamoDB streams, listening for events and invoking your lambda.

**serverless.yml**

```yaml
plugins:
  - serverless-webpack
  # Ensure the plugin entries are after `serverless-webpack` if present
  - serverless-offline-mdp
  - serverless-offline-dynamodb-streams
  - serverless-offline-kinesis
  # Also ensure their entries are before `serverless-offline`
  - serverless-offline

custom:
  serverless-offline-kinesis:
    endpoint: http://localhost:4568
    accessKeyId: none
    secretAccessKey: none
  serverless-offline-dynamodb-streams:
    endpoint: http://localhost:4569
    accessKeyId: none
    secretAccessKey: none
  # Can override the endpoints if necessary...
  # serverless-offline-mdp:
  #   endpoints:
  #     dynamo: http://localhost:4522
```

As per the above, you will need to set the configuration of each plugin to point to where the resource is provisioned. The snippet above points the kinesis resource at port `4568` and dynamo at port `4569`, which are the ports exposed by the MDP docker network. Also note that the `serverless-offline-mdp` configuration is commented out in the snippet and not needed - by default the plugin points all of the service's resources at the ports exposed by the MDP network. It is there to illustrate that the endpoints _can_ be overridden if there is ever a use case to point somewhere else.

### 3. Use Varmonger for resources that differ so they can be resolved as env vars

Resources that need to be overridden should use Varmonger. This is because Varmonger allows you to resolve a variable from an env variable, which makes it easy to override.

**serverless.yml**

```yaml
varmonger:
  paymentsStreamArn:
    Fn::ImportValue: payments-stream-arn-${var:streamArnStage}
  secureCredentialsSecretId:
    Fn::ImportValue: ecotricity-secure-credentials-secret
```

**env.debug**

```
export paymentsStreamArn=arn:aws:kinesis:us-east-1:000000000000:stream/payments-local
export secureCredentialsSecretId=ecotricity/secure-credentials-secret-local
```

### 4. (Optional) Apply modifications to the docker network

There will be times when we need to modify the docker network: changing the default ports or env vars per service, limiting the running services to a specific set, or adding additional services (e.g. a new database). These modifications can be applied through patch files that are fed to the Docker Compose Generator script, which is run before bringing the network up.

For example, if we only want to run a specific set of MDP services in our network, we create a patch yml file and run

```bash
node mdp-network/generateComposeFile.js {path_to_optional_patch_yml_file}
```

where `{path_to_optional_patch_yml_file}` is the path to our patch file. The following patch file limits the running services to `mountebank`, `localstack` and `dynamodb`:

```yaml
mountebank: default
localstack: default
dynamodb: default
```

We can also override the ports and env vars per service. The following example modifies those for `localstack`:

```yaml
mountebank:
dynamodb:
localstack:
  ports:
    - "4568:4568"
  environment:
    - "SERVICES=secretsmanager"
```

For cases where we only want to make minor changes to the default network config, we can use the reverse format, where the patch file contains only the changes we want to make.
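The default-format patch semantics can be sketched in plain JavaScript. Note this is an assumed model of what the Docker Compose Generator does, for illustration only - the real `generateComposeFile.js` may behave differently:

```javascript
// Assumed model of the default-format patch: only services named in the patch
// are kept; the value 'default' keeps the built-in definition, while any other
// mapping replaces it. The real generateComposeFile.js may differ.
const applyDefaultFormatPatch = (defaults, patch) =>
  Object.keys(patch).reduce((services, name) => {
    services[name] = patch[name] === 'default' ? defaults[name] : patch[name]
    return services
  }, {})

// A trimmed stand-in for the generator's built-in service definitions
const defaults = {
  mountebank: { ports: ['2525:2525'] },
  localstack: { ports: ['4567:4567'], environment: ['SERVICES=kinesis'] },
  dynamodb: { ports: ['4569:4569'] },
  postgres: { ports: ['5432:5432'] }
}

// Equivalent of a YAML patch that keeps mountebank and dynamodb as-is and
// overrides localstack - postgres is dropped entirely
const patched = applyDefaultFormatPatch(defaults, {
  mountebank: 'default',
  dynamodb: 'default',
  localstack: { ports: ['4568:4568'], environment: ['SERVICES=secretsmanager'] }
})

console.log(Object.keys(patched)) // → [ 'mountebank', 'dynamodb', 'localstack' ]
```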
For example, the patch below will keep all the default configuration, remove the `dynamodb` service, and modify the ports and env vars for `localstack`:

```yaml
reverse: true
dynamodb: delete
localstack:
  ports:
    - "4568:4568"
  environment:
    - "SERVICES=secretsmanager"
```

In both formats we can add our own services on top of the existing network structure. The following patch file will keep all the current default services and add an additional mongodb with the config below:

```yaml
reverse: true
mongo:
  image: mongo
  ports:
    - "27017:27017"
```

Additionally, any service definition that holds null values for ports or env vars will completely remove the default ones, in both formats of the patch.

### 5. Run it end to end!

- Generate the MDP docker-compose.yml network definition file by running `node mdp-network/generateComposeFile.js {path_to_optional_patch_yml_file}` (the default config will be generated if the patch file argument is omitted).
- Spin up the MDP docker network by running `npm run mdp-network:up`.
- Next run `npm run debug:infra:up` to spin up all the resources needed for your specific service.
- Finally, invoke your end-to-end flow by putting something on the local kinesis stream. See below for a helper function which puts an event on the stream. Worth noting that how you kick off the flow depends on what you are testing - if this was an endpoint you could invoke localhost:3000 to trigger it.
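A `payment.created` payload for the helper below might look like the following. The field names and values here are illustrative assumptions for the walkthrough, not the real event schema:

```javascript
// Illustrative payment.created event payload - field names and values are
// assumptions for the walkthrough, not the service's real event schema
const paymentCreatedEvent = {
  source: 'payments-service',
  eventType: 'payment.created',
  eventTypeVersion: '1.0',
  data: {
    id: 'payment-0001',
    paymentCardId: '9826015002501620247', // matches the mountebank stub's predicate
    amountPaid: 1000
  }
}

console.log(paymentCreatedEvent.eventType) // → payment.created
```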
```js
import { AWS } from 'mdp'
import { StreamEvent } from 'stream-event'

const kinesis = new AWS.Kinesis()

const invoke = async ({ source, data, eventType, eventTypeVersion }) => {
  const streamEvent = new StreamEvent(source)
    .setType(eventType, eventTypeVersion)
    .setData(data)

  await kinesis.putRecord({
    StreamName: process.env.paymentsStreamArn.split(/\//)[1],
    PartitionKey: data.id,
    Data: JSON.stringify(streamEvent)
  }).promise()
}

const paymentCreatedEvent = {} // Event json
await invoke(paymentCreatedEvent)
```

## Other useful tips

- Use a DynamoDB GUI so you can view and interact with the local dynamo db visually.
  - `npm install dynamodb-admin -g`
  - `alias openLocalDynamo="DYNAMO_ENDPOINT=http://localhost:4569 AWS_ACCESS_KEY_ID=none AWS_SECRET_ACCESS_KEY=none dynamodb-admin -o || open http://localhost:8001"`
  - Run the `openLocalDynamo` alias (or whatever you name it) to fire up the GUI in your browser, which will allow you to check your local table for contents.
- Make use of Mountebank's UI to see if your 'imposters' are being successfully invoked.
  - Navigate to `http://localhost:2525/imposters`, which will list all your imposters and the port each is running on. It will also show the number of successful requests your imposters have had, which is useful.
- Have a teardown in your component tests that deletes any resources created during the test.
- If you wish to set breakpoints and debug from within your IDE, start serverless offline from inside your IDE.

## FAQs

### Why aren't some resources like secrets-manager automatically provisioned?

Resources like Secrets Manager and Kinesis are not provisioned in our service's serverless.yml; instead they're imported from a separate service. This is why they are not spun up automatically. And in the case of secrets, they would need to be manually created anyway to populate the secret value.

### What if I want to override a port that a particular resource uses?
MDP returns a default `config` object with the endpoints set to the default ports that the docker images are exposed on. You can override this by calling the `update` function on the config object returned to you by MDP. See below for an example of how to override the port Secrets Manager runs on.

```js
config.update({
  endpoints: {
    secretsmanager: `http://localhost:{{overriddenPortHere}}`
  }
})
```

### Does MDP support triggering lambdas as a result of an S3 event?

S3 events are currently not supported. You can get around this by invoking your lambda directly with an event payload identical to the following:

```
{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "awsRegion": "eu-west-1",
      "eventTime": "2019-11-05T14:16:36.610Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "AWS:AROAJRPHG4F2TIRRUBTNK:isf@bluetel.co.uk"
      },
      "requestParameters": {
        "sourceIPAddress": "80.229.98.27"
      },
      "responseElements": {
        "x-amz-request-id": "DB74D4F94332B404",
        "x-amz-id-2": "lWER/PM9mriJ+nOECsSTwkM+ek7+/Gmgfw5t8HFHbHPLXM3W7a7Pw3mOlxqgjn/BiunxFxm6I1E="
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "b49e445a-1abf-4333-b392-e413188534d9",
        "bucket": {
          "name": "eco-meters-service-prepayment-ids-ibrahimtest",
          "ownerIdentity": {
            "principalId": "AP57F5C1AQSTS"
          },
          "arn": "arn:aws:s3:::eco-meters-service-prepayment-ids-ibrahimtest"
        },
        "object": {
          "key": "uploads/mpxn10k.csv",
          "size": 22926,
          "eTag": "02c3bb41128cb680a6f508296974eba8",
          "sequencer": "005DC184448AB38C21"
        }
      }
    }
  ]
}
```

### What do I do if MDP does not support a resource that is being used in the service I'm working on?

Please raise a JIRA for support to be added for that resource.
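Returning to the S3 question above - a minimal sketch of invoking a handler directly with a trimmed version of that payload. The handler logic here is a hypothetical stand-in for your service's real lambda:

```javascript
// A trimmed version of the S3 ObjectCreated:Put payload shown above
const s3PutEvent = {
  Records: [
    {
      eventSource: 'aws:s3',
      eventName: 'ObjectCreated:Put',
      s3: {
        bucket: { name: 'eco-meters-service-prepayment-ids-ibrahimtest' },
        object: { key: 'uploads/mpxn10k.csv', size: 22926 }
      }
    }
  ]
}

// Hypothetical handler standing in for the service's real S3-triggered lambda
const handler = async (event) =>
  event.Records
    .filter((record) => record.eventName === 'ObjectCreated:Put')
    .map((record) => `${record.s3.bucket.name}/${record.s3.object.key}`)

// Invoke the handler directly instead of relying on an S3 trigger
handler(s3PutEvent).then((keys) => console.log(keys))
```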