# Presentation Script

## 1. Introduction

In this paper, we study how the design of microservice architectures affects their performance and scalability. A microservice architecture breaks an application into many small, independent services. Each service runs on its own and communicates with the others through lightweight APIs. This design makes systems easier to develop, update, and maintain, but it also introduces new challenges, such as managing distributed data and keeping the system consistent.

The goal of this work is to understand the relationship between architecture and performance, using quantitative metrics and scalability measures. Our main contribution is a framework that evaluates how architectural choices, such as how services are split, how they communicate, and how they scale, impact overall system performance and scalability.

## 2. Related Work

This section reviews related work on anti-patterns.

First, performance anti-patterns have been studied at three levels of data. Analysis of architectural models has demonstrated performance improvements of up to 50% after anti-patterns were addressed. Source code analysis, particularly for Java, balances accuracy against implementation cost. Performance test data has also been used to detect bottlenecks.

Second, architecture anti-pattern research recognizes two main categories: internal anti-patterns, such as API versioning, and communication anti-patterns, such as cyclic dependencies, missing API gateways, and shared libraries. These are often analyzed using co-change history to estimate maintenance cost. However, there is still no unified model that directly links architectural problems to performance loss.

Finally, correlation studies have begun to explore how certain architectural patterns, for example CQRS, Gateway Aggregation, and Pipes & Filters, affect system performance. While this earlier work confirms that different designs perform differently, it also emphasizes the need for a combined assessment framework that links architectural structure to performance outcomes.

## 3. Scalability Assessment Framework

In this section, we describe the Performance and Scalability Assessment Framework (PSAF) illustrated in Figure 1. The PSAF is built around three blocks: the Test Design Framework, the Measurement Framework, and the Assessment Framework.

- The framework starts with real data from the production system: we take workload traces and automatically analyze the historical data.
- This raw data feeds into the Operational Profile Definition (OPD), which derives the different load levels used for testing.
- The High-Level Model (HLM) creates an analytical or virtual model of the system. It can also compute the load-level probabilities that feed back into the OPD.
- We then move to Automated Load Generation. This component samples the operational profile to create realistic test cases and workloads for the actual testing.

The Measurement Framework collects key metrics such as response time and throughput, which show how well the system performs under different loads. The Assessment Framework then analyzes these results to check whether the system meets its performance goals; if not, it helps identify where and why performance dropped.
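To make the flow from operational profile to assessment more concrete, here is a minimal Python sketch of the pipeline. It is not the PSAF implementation: the load levels and their probabilities, the target URL, and the 300 ms 95th-percentile goal are invented for illustration, and the single-threaded loop stands in for a real load generator.

```python
import random
import statistics
import time
from urllib.request import urlopen

# Hypothetical operational profile: load levels (requests per second) and
# their probabilities, as they might be derived from workload traces.
OPERATIONAL_PROFILE = {
    "low": {"rps": 5, "probability": 0.6},
    "medium": {"rps": 20, "probability": 0.3},
    "peak": {"rps": 50, "probability": 0.1},
}

SLO_P95_MS = 300  # illustrative response-time goal (95th percentile)


def sample_load_level(profile):
    """Sample one load level according to its probability (OPD output)."""
    levels = list(profile)
    weights = [profile[level]["probability"] for level in levels]
    return random.choices(levels, weights=weights, k=1)[0]


def run_load(url, rps, duration_s):
    """Automated load generation: issue requests and record response times in ms."""
    response_times = []
    for _ in range(rps * duration_s):
        start = time.perf_counter()
        try:
            urlopen(url, timeout=5).read()
        except OSError:
            pass  # a real harness would count failures separately
        response_times.append((time.perf_counter() - start) * 1000)
        time.sleep(1.0 / rps)  # naive pacing; a real generator would be concurrent
    return response_times


def assess(response_times, throughput_rps):
    """Assessment step: compare measured metrics against the performance goal."""
    p95 = statistics.quantiles(response_times, n=20)[18]  # ~95th percentile
    return {
        "p95_ms": round(p95, 1),
        "throughput_rps": throughput_rps,
        "meets_goal": p95 <= SLO_P95_MS,
    }


if __name__ == "__main__":
    duration_s = 10
    level = sample_load_level(OPERATIONAL_PROFILE)
    rps = OPERATIONAL_PROFILE[level]["rps"]
    times = run_load("http://localhost:8080/orders", rps, duration_s)
    print(level, assess(times, throughput_rps=len(times) / duration_s))
```

The point is the order of the steps: sample a load level from the operational profile, generate the workload, measure response time and throughput, and check the result against the performance goal.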
## 4. Performance Anti-Patterns

Let's now discuss performance anti-patterns, which are recurring design flaws that cause performance degradation in microservices.

- Application Hiccups are transient spikes in response time, often caused by periodic tasks such as garbage collection.
- Continuous Violated Requirements occur when service-level goals are consistently missed, typically due to inefficient code or third-party software.
- Traffic Jam happens when a bottleneck creates a backlog of jobs, leading to persistent variability in response times.
- The Stifle happens when many similar database queries are executed repeatedly, slowing down the system.
- Expensive Database Call refers to a single long-running query that delays responses.
- Empty Semi Trucks is similar to The Stifle, but arises from many small, repetitive requests that waste interface resources.
- Finally, The Blob, also known as the God Class, occurs when one service or class does most of the work or stores most of the data, creating a scalability bottleneck.

By identifying these patterns in our system, we can trace performance deviations back to architectural or design flaws, which helps us address the root causes.

## 5. DV8 Architecture Anti-Pattern Detection

Now let's look at DV8, the tool we use for detecting architecture anti-patterns. DV8 visualizes dependencies using a Design Structure Matrix (DSM). In this square matrix, the rows and columns represent the same set of elements, such as files, packages, or microservice components. Each marked cell indicates a dependency; for example, a cell might show that the seat service calls the order service twice and shares one entity with it.

With co-change data, DV8 detects anti-patterns such as Unstable Interface/API (UIF), Modularity Violation Group (MVG), and Crossing (CRS). When co-change information is unavailable, DV8 can still detect Unhealthy Inheritance Hierarchy (UIH), Clique (CLQ), and Package Cycles (PKC). DV8 also provides fan-in and fan-out metrics, which make it easy to spot god classes or overly complex components, as illustrated in the sketch below.
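To show what the fan-in/fan-out idea looks like on a DSM, here is a small Python sketch. The component names, dependency matrix, and threshold are invented for demonstration, and this is not DV8's API; it only mirrors the row and column counting that underlies such metrics.

```python
# Hypothetical microservice components; not taken from DV8 or any real system.
components = ["gateway", "order", "seat", "payment", "notification"]

# dsm[i][j] = 1 means components[i] depends on components[j]
# (for example, the seat service calls the order service).
dsm = [
    [0, 1, 1, 1, 0],  # gateway -> order, seat, payment
    [0, 0, 1, 1, 1],  # order -> seat, payment, notification
    [0, 1, 0, 0, 0],  # seat -> order
    [0, 1, 0, 0, 1],  # payment -> order, notification
    [0, 0, 0, 0, 0],  # notification -> nothing
]

FAN_THRESHOLD = 3  # arbitrary cut-off for flagging an overly central component


def fan_metrics(matrix):
    """Return per-component fan-out (row sums) and fan-in (column sums)."""
    n = len(matrix)
    fan_out = [sum(matrix[i]) for i in range(n)]
    fan_in = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    return fan_out, fan_in


fan_out, fan_in = fan_metrics(dsm)
for name, fo, fi in zip(components, fan_out, fan_in):
    is_blob = fo >= FAN_THRESHOLD and fi >= FAN_THRESHOLD
    marker = "  <- possible god class / Blob" if is_blob else ""
    print(f"{name}: fan-out={fo}, fan-in={fi}{marker}")
```

In this made-up example, the order service ends up with both high fan-in and high fan-out, which is exactly the kind of overly central component that the Blob anti-pattern describes.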