# Portfolio

## # EchossVIP

A service that combines physical in-shop E-stamps with LINE's extension features.

- Development tool: Laravel
- DB: AWS RDS (MySQL)
- Cache: AWS ElastiCache
- Server: AWS EC2

More concretely, a user goes shopping at Starbucks and registers for this service via LINE Login. After that, the user can receive different kinds of messages through LINE, for example:

- Spend over a certain amount, and Starbucks sends a coupon via LINE.
- Starbucks can send a promotion link.

There are many different features; I will cover some of them rather than all.

### # Login

As mentioned above, this project supports three kinds of users logging in:

- Service managers
- Brand managers
- Users under a certain brand

![](https://i.imgur.com/Hccp0ns.png)
![](https://i.imgur.com/4U1fOS8.png)
![](https://i.imgur.com/688ga80.png)
![](https://i.imgur.com/oLL0o3R.png)

### # Main features

This project has many features and was developed by multiple developers. Below I give more detail on some of the features that I built and find most interesting.

![](https://i.imgur.com/oGWM5Rk.png)

### # Member card component

Member card levels depend on one another. For example, the upper spending bound of level 1 must be lower than the lowest bound of level 2, and similar restrictions hold between every pair of adjacent levels. Member card levels can also be inserted, updated, and deleted. For example, when level 2 is deleted, the upper bound of level 1 must be adjusted to connect to the lowest bound of level 3, and the previous level 3 becomes level 2, so no gap is left between levels. To achieve this, we use a linked list data structure.
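As a minimal in-memory sketch of that deletion case (hypothetical field names; in the real project the rows and their pointers live in MySQL via Laravel):

```python
# Each level row stores prev/next pointers plus a spending range [lower, upper].
levels = {
    1: {"prev_id": None, "next_id": 2,    "lower": 0,    "upper": 999},
    2: {"prev_id": 1,    "next_id": 3,    "lower": 1000, "upper": 4999},
    3: {"prev_id": 2,    "next_id": None, "lower": 5000, "upper": None},
}

def delete_level(levels, level_id):
    """Unlink a level and re-join its neighbors so no gap remains."""
    node = levels.pop(level_id)
    prev_id, next_id = node["prev_id"], node["next_id"]
    if prev_id is not None:
        levels[prev_id]["next_id"] = next_id
        if next_id is not None:
            # Close the amount gap: the lower level's upper bound now meets
            # the next remaining level's lower bound.
            levels[prev_id]["upper"] = levels[next_id]["lower"] - 1
    if next_id is not None:
        levels[next_id]["prev_id"] = prev_id

delete_level(levels, 2)
```

Because the levels form a linked list, deleting one level only touches its two neighbors, so the former level 3 naturally becomes the level that follows level 1.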
We store `next` and `previous` pointers in the DB and implement the CRUD operations as linked list operations.

![](https://i.imgur.com/gsEJhpk.png)

### # Export component

Here we need to export all user data to CSV, which is a time-consuming operation because some brands have hundreds or thousands of users and the required data spans multiple tables. We use a queue to handle the operation asynchronously: after the brand manager submits the request, the service pushes a job onto a Redis queue, and a limited number of workers pick the jobs up and process them in order. The data is read in chunks, exported to CSV, and the CSV file is uploaded to S3. While an export is running, the export button is locked to prevent duplicate export requests; once the export task is done, the feature is unlocked. Successfully exported files show up in the Download section for the brand manager to download.

![](https://i.imgur.com/95b30WI.jpg)
![](https://i.imgur.com/S2ZAbEC.png)

### # Continuous Deployment

We implement CD following the Envoyer concept. Simply put, we pull the newest commit, run the necessary commands, adjust permissions, and only symlink the directory once everything is done, getting as close to zero downtime as possible. The `storage` directory is shared across all versions. The `current` directory always symlinks to the newest version, and several versions are kept in the `releases` directory.

![](https://i.imgur.com/2JzrOes.png)

<br>

## # Echo Square

An application that combines distance and interaction.

Use case: a user specifies a radius centered on themselves and sends a message, and anyone within that radius receives it. Sending a message consumes strength, and strength recovers every 5 minutes.

- Development tool: Laravel
- DB: AWS RDS
- Server: AWS EC2

![](https://i.imgur.com/VqtoMCN.jpg)
![](https://i.imgur.com/usT9fd3.jpg)

### # MySQL geographic function

During implementation, we found that MySQL's `ST_Distance` function has a limitation: it is not able to use an index.
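One common index-friendly approach (a sketch only; not necessarily the exact formula the project used) is to prefilter rows with a latitude/longitude bounding box, which a composite index can serve, and then apply the haversine formula only to the few candidates that remain:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def bounding_box(lat, lon, radius_m):
    """Index-friendly prefilter: a lat/lon rectangle containing the circle.
    A WHERE clause on these bounds can use a composite (lat, lon) index
    before the exact distance check runs on the remaining rows."""
    dlat = math.degrees(radius_m / EARTH_RADIUS_M)
    dlon = math.degrees(radius_m / (EARTH_RADIUS_M * math.cos(math.radians(lat))))
    return lat - dlat, lat + dlat, lon - dlon, lon + dlon
```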
After some research, we wrote our own formula to get the result, and it is more efficient than `ST_Distance` in this case.

### # Testing

This project was developed with TDD.

### # Deployment

We implemented CI/CD for this project with GitLab Runner. We created a Docker image for the Laravel environment and used that image for testing.

<br>

## # 日日考核

The core concept of this project is to score employees for different companies based on their education, job tenure, and subjective scores from their managers.

- Development tool: Express
- DB: GCP Cloud SQL
- Server: GCP Compute Engine

![](https://i.imgur.com/C0BZsMr.png)
![](https://i.imgur.com/acs1XOM.png)
![](https://i.imgur.com/NYqomtl.png)

### # Crawler

We crawl companies' publicly registered official data from a government website and use a third-party service to handle the captcha.

![](https://i.imgur.com/cpwAKUj.png)

### # Employee Number

Here is a special requirement: employee numbers must be sequential and cannot skip numbers unless an employee is deleted. Currently we use pessimistic locking to avoid the race condition, but optimistic locking would also work, at a lower cost: although a race condition could still occur, we can make it much less likely by checking at the end of the transaction whether the employee number has been modified. If it hasn't, we commit; if it has, we roll back. This reduces the chance of a race condition without holding a lock.

![](https://i.imgur.com/HmhucND.png)

### # Multiple level structure

The special part here is that every department can have child departments and the hierarchy can extend without limit. I use a string column `chain` to reduce the time complexity of looking up ancestors and descendants. For example, say the id of department A is `1`, the id of department B is `2`, and the id of department C is `3`; A is the parent of B, and B is the parent of C.
In this case, A's `chain` column is null because it has no parent, B's is `-1-`, and C's is `-1-2-`. Via this column, we can get all the ancestor or descendant departments of any specific department. A tree structure is another option, but because it may cause more unnecessary DB operations on update and insert, I don't think it is suitable for this case.

![](https://i.imgur.com/8vx11uo.png)

<br>

## # 好玩專案

- Development tool: Laravel
- DB: AWS RDS
- Server: AWS EC2
- Cache: AWS ElastiCache
- Queue: AWS SQS
- Many other AWS services

The core concept of this project is that customers can buy activities from this service and then interact with the users in those activities.

Use case: Apple purchases an activity for its year-end party. During the activity period, Apple employees can log in and attend the activity. The host and the employees can chat in real time in the chat room and interact through various games, such as drawing, lottery, and Q&A.

![](https://i.imgur.com/wBGs4EM.png)
![](https://i.imgur.com/VLivfQg.png)
![](https://i.imgur.com/Z2WZ1Vy.jpg)
![](https://i.imgur.com/uZ8dhBo.png)
![](https://i.imgur.com/vnvohRW.png)
![](https://i.imgur.com/6lrSyTr.png)
![](https://i.imgur.com/2KJzjeD.png)
![](https://i.imgur.com/hugaBck.png)
![](https://i.imgur.com/Yk9kQqJ.png)
![](https://i.imgur.com/Isdof3B.png)
![](https://i.imgur.com/PGKdtJw.png)
![](https://i.imgur.com/bx1EV9W.png)

### # Batch import

This requirement is to batch import employees from an Excel file and their avatars from a Zip file. The import is quite time-consuming because a single Excel file may contain hundreds of thousands of rows and a Zip may contain hundreds of thousands of images, so we implement it with a queue: the manager uploads the file first, and then the backend server queues the task and processes it asynchronously.
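The upload-then-queue flow can be sketched like this (Python pseudocode with hypothetical names; the real project uses Laravel's queue workers):

```python
# Sketch of the upload-then-queue flow: a lock flag prevents duplicate import
# requests while a job is in flight, and a worker processes rows in chunks.
import queue

job_queue = queue.Queue()
import_locked = False          # in production this flag would live in Redis/DB

def submit_import(file_path):
    """Called by the upload endpoint: lock, enqueue, and return immediately."""
    global import_locked
    if import_locked:
        return "import already in progress"
    import_locked = True
    job_queue.put(file_path)
    return "queued"

def worker(read_rows, chunk_size=1000):
    """Background worker: pull a job, process rows chunk by chunk, then unlock."""
    global import_locked
    file_path = job_queue.get()
    rows = read_rows(file_path)            # e.g. parsed Excel rows
    for i in range(0, len(rows), chunk_size):
        chunk = rows[i:i + chunk_size]
        # ... insert this chunk into the DB ...
    import_locked = False                  # unlock the import feature when done
```

The endpoint returns as soon as the job is queued, which is what lets the user leave the page while the worker grinds through the file.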
This massively improves the user experience, because users only need to upload the file; after that they can leave the page and do something else while the server queues the tasks and processes them step by step. It trades time for server resources, which is perfect for expensive operations that are not urgent. In addition, during the import process the import feature is locked to prevent duplicate import requests, which would cause unnecessary cost.

![](https://i.imgur.com/FmKJbxD.png)
![](https://i.imgur.com/0nXoym9.png)
![](https://i.imgur.com/UrnbpIg.png)

### # Grabbing envelopes

This is one of the games this project provides. For companies with many employees, there will be thousands of requests at the same time. To avoid hitting the server's resource limit, we let the frontend send requests directly to AWS SQS, and the backend server asynchronously picks those requests up and processes them. Once the number of picked-up requests reaches the envelope quota, we broadcast the result to the users who got envelopes and to those who didn't. To prevent cheating, we purge the queue before and after the game.

### # Architecture

Even though this absorbs the short bursts of traffic, requests continuously sent by thousands of people still cannot be handled by a single server, so we use the following services to deal with the load:

- AWS Load Balancer with an Auto Scaling policy grows and shrinks the number of servers.
- The frontend is hosted on S3 behind a CDN, which is invalidated only when the frontend project is updated.
- Immutable or rarely changed tables are cached in Redis to reduce DB operations as much as possible.
- AWS Systems Manager Parameter Store manages the env, so both existing and newly created servers use the same env.
- AWS CloudWatch manages the logs, because every server can process requests and may shut down at any time due to the auto scaling policy.

![](https://i.imgur.com/mfMoq4f.png)
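The Redis caching mentioned above follows the cache-aside pattern; here is a minimal sketch (hypothetical names, with an in-process dict standing in for Redis/ElastiCache):

```python
# Cache-aside sketch for rarely changed tables: reads hit the cache first and
# fall back to the DB on a miss; a TTL keeps the data from going permanently stale.
import time

class CacheAside:
    def __init__(self, load_from_db, ttl_seconds=3600):
        self._load = load_from_db        # fallback loader, e.g. a SELECT
        self._ttl = ttl_seconds
        self._store = {}                 # stands in for Redis: key -> (value, expires_at)

    def get(self, key):
        hit = self._store.get(key)
        if hit is not None and hit[1] > time.time():
            return hit[0]                                  # cache hit
        value = self._load(key)                            # cache miss: query the DB
        self._store[key] = (value, time.time() + self._ttl)
        return value

db_calls = []
def load_from_db(key):
    db_calls.append(key)                 # record each DB round trip
    return f"row-for-{key}"

cache = CacheAside(load_from_db)
cache.get("brands")   # miss -> one DB call
cache.get("brands")   # hit  -> served from cache, no DB call
```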