# CodeRush - Performance Enhancements
Performance always matters for CodeRush. Productivity tools, by their very definition, cannot slow you down.
With the recent CodeRush migration to Microsoft's Roslyn engine, we eliminated the cost of maintaining an array of language infrastructures. Visual Studio now parses, performs semantic analysis, stores the solution structure, and searches for symbols, which allows CodeRush to use less memory and run faster.
However, relying upon the Roslyn engine alone does not solve every performance challenge. CodeRush is a complex product with many points of Visual Studio integration where event listening and custom editor rendering are necessary. And in areas where Visual Studio provides insufficient functionality, CodeRush fills the gap with its own core engine. All these factors can introduce performance challenges, which, if handled improperly, could result in slowdowns and perhaps a "yellow bar" performance warning appearing inside Visual Studio.
Over the years, performance has been a top priority for our team. For many of those years our work was reactive: a customer experiences a performance issue, reports it, and then we hunt it down and kill it. This approach often yields positive results, provided we can reproduce the issue. However, sometimes we don't have enough information to reproduce the slowdown, or customers are unable to provide the essential details we need, such as the results of a PerfView analysis.
To improve the quality of the performance data, in recent years we introduced a performance logging system, which allows CodeRush to continually monitor its own performance in critical code sections and record any issues it finds to the CodeRush log file.
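To illustrate the general idea (this is a minimal, hypothetical sketch, not CodeRush's actual logging API), a disposable timing scope can record any critical code section that runs longer than a threshold:

```csharp
using System;
using System.Diagnostics;

// Hypothetical sketch only -- not the actual CodeRush implementation.
// Times a critical code section and records it when it runs longer
// than a configured threshold.
sealed class PerfScope : IDisposable
{
    readonly string _name;
    readonly TimeSpan _threshold;
    readonly Stopwatch _watch = Stopwatch.StartNew();

    public PerfScope(string name, TimeSpan threshold)
    {
        _name = name;
        _threshold = threshold;
    }

    public void Dispose()
    {
        _watch.Stop();
        if (_watch.Elapsed > _threshold)
            Trace.WriteLine($"[Perf] {_name} took {_watch.ElapsedMilliseconds} ms");
    }
}

// Usage -- wrap a monitored section in a using block (handler name is illustrative):
// using (new PerfScope("TextBufferChanged handler", TimeSpan.FromMilliseconds(50)))
// {
//     HandleTextBufferChanged();
// }
```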
We monitor many performance events in both CodeRush and Visual Studio, including:
1. CodeRush Startup
a. Loading the entire product.
b. Starting each separate CodeRush service.
c. Individual initialization for selected services.
2. Editor & IDE Events
a. The time spent on each individual handler.
b. The time spent processing the entire event (all event handlers).
3. Visual Studio Interaction
a. Checking the availability of menu items, toolbar buttons, commands, etc.
b. Measuring the execution time of user interface items.
c. Editor rendering time.
4. Dispatcher.Invoke
Any code executed through Dispatcher.Invoke is subject to performance monitoring. Because this code runs on the main UI thread, it needs to be both absolutely necessary and optimized for speed, or it can impede the IDE's responsiveness. And so CodeRush monitors code blocks executed on Visual Studio's main thread (a minimal sketch of this kind of instrumentation appears after this list).
5. Refactorings & Code Generators
a. Availability checks (determining which refactorings/generators are available).
b. Execution times (how long it takes to apply a refactoring/generator).
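As an illustration of point 4 above (again a hypothetical sketch, reusing the PerfScope helper from the earlier example, with an assumed 20 ms threshold), work marshaled to the UI thread through Dispatcher.Invoke could be timed like this:

```csharp
using System;
using System.Windows.Threading; // WPF Dispatcher

// Hypothetical sketch: wrap the delegate passed to Dispatcher.Invoke
// so that time spent on the main UI thread is measured and logged.
static class MonitoredDispatcher
{
    public static void Invoke(Dispatcher dispatcher, string name, Action action)
    {
        dispatcher.Invoke(() =>
        {
            // PerfScope is the illustrative helper shown earlier.
            using (new PerfScope(name, TimeSpan.FromMilliseconds(20)))
                action();
        });
    }
}

// Usage (names are illustrative):
// MonitoredDispatcher.Invoke(Application.Current.Dispatcher,
//     "Refresh member icons", RefreshMemberIcons);
```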
Now, with performance logging in place, if a customer experiences a slowdown, the CodeRush logs are much more likely to contain information that shows what's happening and why. And since we introduced performance logging, our support team has been able to diagnose and solve reported issues more quickly.
If you are a member of the DevExpress Customer Experience Program (https://www.devexpress.com/AboutUs/privacy-policy.xml#cep), details of any slowdowns will be automatically sent to our support team for analysis. This allows us to get a broader picture of what our customers are experiencing -- what causes the delays, and under what conditions. We can also monitor progress and regression, and in general, make our work more systematic.
In preparation for each sprint, we collect information about delays - the cause, the total number of events, and the total sum of all the delays.
In order to more objectively compare changes across minor releases, we do not directly compare the **sum of all the delays**, but instead use a ratio of **total delays** to the number of performance events. This gives us an average delay per event as our primary metric, and it's essentially how we normalize all this performance data across releases.
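For example (with purely illustrative numbers): if one release logged 1,000 occurrences of an event with a combined delay of 250 seconds, and the next release logged 1,200 occurrences totaling 240 seconds, the averages are 250 ms and 200 ms per event. Comparing these normalized values is more meaningful than comparing the raw totals (250 s vs. 240 s), which rise and fall with the number of events collected.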
For each release, we compare the metrics collected for the current and the previous release. Our strategies depend on the results of this comparison (a sketch of this logic follows the list):
1. **Minor changes** to the metric, either small improvements or small degradations, generally mean nothing has changed. If we had done work intended to improve performance, it was likely unsuccessful.
2. **Significant decreases** in the metric mean customers are experiencing smaller delays per event. We analyze our changes for this version and make sure we have a reason to explain the improvement. If we performed work to improve performance, this drop in the metric confirms we are moving in the right direction. We can continue to improve performance further, or move on to another challenge if satisfied with the results.
3. **Significant increases** in the metric indicate customers are experiencing longer delays per performance event. We look closely at the changes across sprints to understand what might have caused the regression, and then we prioritize addressing it in the next sprint.
4. **A performance event has totally disappeared**. While on the surface this may appear to be a good thing, a significant event like this always warrants deeper investigation. It could mean our efforts led to a complete fix, or that, due to changes in the code, one event has become another. So our response is always to analyze the changes so we can understand why.
5. **A new performance event appeared** that has not been identified previously. This almost always happens because we've extended the detail of our logging into a new area. For example, if it is unclear what is causing a delay at the start of a service, we might add more logging for the individual stages of that service's startup. This can sometimes cause new performance events to be added as children of the old event.
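Here is a sketch of that comparison logic (the type names and the 25% threshold are assumptions for illustration, not CodeRush internals):

```csharp
// Hypothetical sketch of the per-release comparison described above.
enum MetricTrend { Unchanged, Improved, Regressed, EventDisappeared, NewEvent }

static class ReleaseComparison
{
    // Average delay per event = total delay / number of performance events.
    public static double AverageDelayMs(double totalDelayMs, int eventCount) =>
        eventCount == 0 ? 0 : totalDelayMs / eventCount;

    // Classifies the change between two releases; null means the event
    // was not logged at all in that release.
    public static MetricTrend Classify(double? previousAvgMs, double? currentAvgMs,
                                       double significantChange = 0.25 /* 25% */)
    {
        if (previousAvgMs is null) return MetricTrend.NewEvent;
        if (currentAvgMs is null) return MetricTrend.EventDisappeared;
        if (previousAvgMs.Value == 0)
            return currentAvgMs.Value > 0 ? MetricTrend.Regressed : MetricTrend.Unchanged;

        double relativeChange = (currentAvgMs.Value - previousAvgMs.Value) / previousAvgMs.Value;
        if (relativeChange <= -significantChange) return MetricTrend.Improved;
        if (relativeChange >= significantChange) return MetricTrend.Regressed;
        return MetricTrend.Unchanged;
    }
}
```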
The key point here is that when the performance data changes across releases, we investigate the cause and verify whether the changes were accidental or intentional.
Here are some actual results of the logging and our analysis, comparing CodeRush releases 20.2.**3** against 20.2.**11**.
In CodeRush v20.2.3, we had **58 active performance events**.
In v20.2.11:
* For 29 of these events, we managed to achieve performance improvements of more than 50 percent.
* For 22 of these events, we completely fixed the performance issues and never saw them logged again.
* For 5 of these events, we achieved performance improvements of more than 80 percent.
Here's a short summary of what we've accomplished:
1. Improved startup performance.
We completely rewrote the code engine responsible for interaction with Visual Studio in order to eliminate any slowdowns when Visual Studio loads. We also optimized a number of CodeRush internal services to speed up their work.
This graph shows our results over time improving startup performance (smaller means faster):

2. Improved Test Runner performance.
A better test discovery engine lets the CodeRush Test Runner find tests in your solution faster. We also optimized internal services and reduced our project dependency graph build time, which improves test run speed.

3. Increased rendering speed and optimized Spell Checker, Rich Comments (including Image Embedding), Unused Code Highlighting, Structural Highlighting, and Member Icons.

4. Improved typing performance in the Visual Studio editor by optimizing String Format Assistant, Naming Assistant, Smart Semicolon, Smart Dot, IntelliRush, and Code Template Expansion. We also made command availability checking faster.

Our team continues to work hard on performance, and we look forward to achieving even greater progress in this direction in coming CodeRush releases.