# Coordinating library upgrades
The **control framework** is planning a major release, moving from version N to N'. There are breaking changes. Many repos that depend on each other also depend on the framework. What order of operations minimizes conflicting dependencies and the duplicate code they drag into bundles?
## Example
Office apps live in the `office-online-ui` repo.
The apps depend on the shared `commenting` package in the `fabric-internal` repo, along with various LPC packages in the `midgard` repo.
We want to upgrade `@fluentui/react` from 7 to 8. All three repos depend on Fluent UI packages.
If the `office-online-ui` repo is upgraded first, it still pulls in the commenting and LPC packages from the other two repos, which depend on v7, and that causes duplicate dependencies.
If we instead upgrade the `fabric-internal` or `midgard` repos first, we avoid the immediate problem, but updates from those two shared-code repos stop flowing until the app is on the latest bits.
To make matters worse, we don't have an official v8 yet; we have a pre-release.
## The core problem
The core problem we're worried about is duplicated code in bundles, caused by package renames, conflicting dependency versions, and mismatched upgrade timelines.
Duplicates can be mitigated in two ways (see the sketch after this list):
* Top down - dedupe at the npm dependency layer.
* Bottom up - dedupe at the webpack compilation layer.
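For the top-down path, the usual tools are `npm dedupe` at install time, or pinning versions with yarn `resolutions` / npm `overrides`. For the bottom-up path, a minimal `webpack.config.ts` sketch is below; it assumes webpack 5, a single hoisted copy of `@fluentui/react` at the repo root, and that every consumer is compatible with that copy, so the package name and path are illustrative rather than prescriptive.

```ts
// Minimal sketch (assumptions: webpack 5, a hoisted root copy of the package,
// all consumers compatible with that one copy).
import * as path from 'path';
import type { Configuration } from 'webpack';

const config: Configuration = {
  entry: './src/index.tsx',
  resolve: {
    alias: {
      // Force every request for '@fluentui/react' to resolve to the root copy,
      // even when nested node_modules folders carry their own versions.
      '@fluentui/react': path.resolve(__dirname, 'node_modules/@fluentui/react'),
    },
  },
};

export default config;
```

Note that this only papers over duplication at bundle time; the nested copies are still installed, and forcing everything onto one copy is only safe when the versions are actually compatible.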
## Possible approaches
If the app updates to N' first, it pulls in the experience, which still depends on N, so you get duplicate modules in your bundle.
If the experience updates to N' first, the app will not immediately get dupes, as long as it doesn't take the updated experience. So, while dupes are avoidable, experience updates stop flowing until the app bumps. As soon as the app takes an update, we have dupes again.
So let's take a step back: How bad are dupes and what other things can we do to reduce the problem?
## Minimize duplicates in your bundle
Duplicates occur when you bundle the graph of an application, and branches of that graph resolve to different versions of the same npm package.
It is likely that two versions of the same package share a lot of the same code. npm's semver gives you a way to declare whether a release contains patch, minor, or major changes, but a single version number does not describe the minutiae of exactly what changed.
Webpack takes a conservative approach and lets npm own deduplication. If npm doesn't dedupe, webpack ends up duplicating dependencies. This is primarily because there is a risk that a duplicate module has a different dependency subtree, which would ultimately change the behavior/outcome of the code path.
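To see where npm has already failed to dedupe (and therefore where webpack will duplicate), a small script can walk `node_modules` and report packages installed at more than one version. This is a rough sketch, not a hardened tool; `npm ls <package>` gives similar information for a single package.

```ts
// Sketch: report packages installed at more than one version under node_modules.
// Assumes a standard nested node_modules layout on disk.
import * as fs from 'fs';
import * as path from 'path';

const versions = new Map<string, Set<string>>(); // package name -> versions found

function walk(dir: string): void {
  if (!fs.existsSync(dir)) return;
  for (const entry of fs.readdirSync(dir)) {
    if (entry.startsWith('.')) continue;
    const pkgDir = path.join(dir, entry);
    // Scoped packages (@scope/name) are one directory deeper.
    if (entry.startsWith('@')) {
      walk(pkgDir);
      continue;
    }
    const manifestPath = path.join(pkgDir, 'package.json');
    if (fs.existsSync(manifestPath)) {
      const { name, version } = JSON.parse(fs.readFileSync(manifestPath, 'utf8'));
      if (name && version) {
        if (!versions.has(name)) versions.set(name, new Set());
        versions.get(name)!.add(version);
      }
    }
    // Nested copies live in this package's own node_modules.
    walk(path.join(pkgDir, 'node_modules'));
  }
}

walk(path.resolve('node_modules'));

for (const [name, found] of versions) {
  if (found.size > 1) {
    console.log(`${name}: ${[...found].join(', ')}`); // multiple versions -> likely bundle dupes
  }
}
```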
## Hypothesis
Two packages in the dependency graph which contain identical modules should be deduped at the webpack level.
If `lodash` bumps its major version by deleting one export, but has no other changes, then it should be possible to include both the old and new lodash versions with no bundle penalty, because all of the redundant code should be aliasable to a single copy.
### Detecting duplicate candidates
Modules should be compared based on the code they export, not the names of their import paths. A module is "duplicated" when the code is the same.
Modules detected as duplicates of previous modules should be aliased to the original. This can be done in a webpack plugin during the compilation phase, before optimizations are applied. However, once alias candidates are detected, webpack may need to be restarted with the alias map.
Imports must be identical as well. If a dupe module imports a different version of a submodule, it becomes essentially different code.
Scenario: A and A' are duplicates in different pages. None of the modules they each depend on are different. They should be deduped. This is the majority case.
Scenario: A and A' are duplicates, but one of them depends on an updated version of a library. Because they have different subtree dependencies, they cannot be deduped.
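Below is a sketch of what the detection step could look like as a webpack plugin (webpack 5 assumed; the plugin and output file names are made up). It records alias candidates purely by comparing module source text, so it deliberately leaves out the import-subtree check described in the scenarios above, and it does not rewrite the graph itself; the emitted map would be reviewed and applied on a subsequent build, as noted earlier.

```ts
// Sketch of a duplicate-detection plugin for webpack 5. It only reports
// candidates; applying them is a separate, later step.
import { createHash } from 'crypto';
import { writeFileSync } from 'fs';
import type { Compiler } from 'webpack';
import { NormalModule } from 'webpack';

const PLUGIN = 'DuplicateModuleReportPlugin';

class DuplicateModuleReportPlugin {
  apply(compiler: Compiler): void {
    compiler.hooks.compilation.tap(PLUGIN, (compilation) => {
      compilation.hooks.finishModules.tap(PLUGIN, (modules) => {
        const seen = new Map<string, string>();             // content hash -> first file seen
        const aliasCandidates: Record<string, string> = {}; // duplicate file -> original file

        for (const module of modules) {
          if (!(module instanceof NormalModule) || !module.resource) continue;
          const source = module.originalSource();
          if (!source) continue;

          // Hash the module's code, not its path, so renamed or re-versioned
          // copies of identical code collide on the same key.
          const hash = createHash('sha256').update(source.source()).digest('hex');
          const original = seen.get(hash);
          if (original && original !== module.resource) {
            aliasCandidates[module.resource] = original;
          } else if (!original) {
            seen.set(hash, module.resource);
          }
        }

        // Persist the candidates so a follow-up build can apply them
        // (after verifying that their imports match as well).
        writeFileSync('duplicate-alias-candidates.json', JSON.stringify(aliasCandidates, null, 2));
      });
    });
  }
}

export default DuplicateModuleReportPlugin;
```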
## Summary
The recommended approach to updating frameworks, if you have control over the upgrade process:
* Bump the lowest-level packages to the new framework version first.
* Upgrade the mid-level HVC packages second.
* Finally, upgrade the application package(s).
Once the lowest level has bumped, the next-level dependent has three options:
* They do nothing and never upgrade anything, blocking everything downstream. (BAD)
* They upgrade the framework and republish to unblock everything downstream. (IDEAL)
* If they have downstream partners which are not ready to upgrade but still need fixes, they may need to branch and provide servicing fixes for both the old and the new major versions.
Once the mid level has bumped, the application can bump all of the things.
This assumes an ideal world with perfect coordination; in practice, things are unlikely to happen in this order.
The fallback mitigations when things get bumped out of order are:
* Do nothing and let the bundle size bloat during the transition.
* Lean on deduping optimizations, like the detection plugin sketched above, to kill most of the bloat during scenarios like this (and measure it, as shown below).
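To quantify how much bloat the transition is actually costing, and whether the deduping optimizations are paying off, a bundle analysis step helps: duplicate copies of a package show up side by side in the size report. A minimal hook-up using `webpack-bundle-analyzer` (assuming webpack 5 and that the analyzer package is installed) looks like this:

```ts
// Minimal sketch: add bundle analysis to a webpack config so that duplicate
// copies of a package are visible in the size report during the transition.
import type { Configuration } from 'webpack';
import { BundleAnalyzerPlugin } from 'webpack-bundle-analyzer';

const config: Configuration = {
  entry: './src/index.tsx',
  plugins: [
    // Write a static HTML report rather than opening a local server.
    new BundleAnalyzerPlugin({ analyzerMode: 'static', openAnalyzer: false }),
  ],
};

export default config;
```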