# v1.3.2

## Changed

**Breaking.** The config values `substreams-stores-save-interval` and `substreams-output-cache-save-interval` have been merged into a single value, to avoid potential bugs that would arise when the two values differ. The new configuration value is called `substreams-cache-save-interval`.

**To migrate, remove usage of `substreams-stores-save-interval: <number>` and `substreams-output-cache-save-interval: <number>` if defined in your config file, and replace them with `substreams-cache-save-interval: <number>`. If you previously had two different values, pick the larger of the two as the new value. We are currently setting it to 1000 for Ethereum Mainnet.**
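For illustration, here is a minimal before/after sketch of that migration, assuming your flags live under the usual `start.flags` section of the config file (the old values shown are placeholders):

```yaml
# Before (v1.3.1): two separate intervals that can differ and cause bugs.
start:
  flags:
    substreams-stores-save-interval: 1000        # placeholder value
    substreams-output-cache-save-interval: 100   # placeholder value
---
# After (v1.3.2+): a single merged interval. If your old values differed,
# pick the larger of the two; 1000 is the value used for Ethereum Mainnet.
start:
  flags:
    substreams-cache-save-interval: 1000
```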
# v1.3.3

## Added

* Flag `common-auto-max-procs` to optimize Go thread management using github.com/uber-go/automaxprocs.
* Flag `common-auto-mem-limit-percent` to specify `GOMEMLIMIT` based on a percentage of the available memory.

# v1.3.4

## Highlights

### Fixed 'upgrade-merged-blocks' from v2 to v3

Blocks that were migrated from v2 to v3 using 'upgrade-merged-blocks' should now be considered invalid: the upgrade mechanism did not correctly fix the "caller" on DELEGATECALLs when these calls were nested under another DELEGATECALL.

**You should run 'upgrade-merged-blocks' again if you previously used 'v2' blocks that were upgraded to 'v3'.**

### Backoff mechanism for bursts

This mechanism uses a leaky bucket, allowing an initial burst of X connections, then a new connection every Y seconds or whenever an existing connection closes. Use `--firehose-rate-limit-bucket-size=50` and `--firehose-rate-limit-bucket-fill-rate=1s` to allow 50 connections instantly, and another connection every second. Note that when the server is above the limit, it waits 500ms before returning `codes.Unavailable` to the client, forcing a minimal back-off.
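As a sketch, enabling that backoff could look like the following (the two rate-limit flags are from this release; the `fireeth start firehose` invocation itself is an assumption about your setup):

```bash
# Sketch only: allows an initial burst of 50 connections, then refills one
# connection slot per second (or whenever an existing connection closes).
# Clients above the limit receive codes.Unavailable after a 500ms wait.
fireeth start firehose \
  --firehose-rate-limit-bucket-size=50 \
  --firehose-rate-limit-bucket-fill-rate=1s
```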
# v1.3.6

## Highlights

This release implements the new `CANCEL_BLOCK` instruction from Firehose protocol 2.2 (fh2.2), to reject blocks that failed post-validation.

This release fixes Polygon "StateSync" transactions by grouping the calls inside an artificial transaction.

**If you have previous blocks from a Polygon chain (bor), you will need to reprocess all your blocks from the node, because some StateSync transactions may be missing from some blocks.**

## Operators

This release supports the new Firehose node exchange format 2.2, which introduced a new exchanged message, `CANCEL_BLOCK`. This has an implication on the Firehose-instrumented Geth binary you can use with the release:

* If you use a Firehose-instrumented Geth binary tagged fh2.2 (like geth-v1.11.4-fh2.2-1), you must use firehose-ethereum version >= 1.3.6.
* If you use a Firehose-instrumented Geth binary tagged fh2.1 (like geth-v1.11.3-fh2.1), you can use firehose-ethereum version >= 1.0.0.

New releases of the Firehose-instrumented Geth binary for all chains will soon be tagged fh2.2, so upgrading to firehose-ethereum >= 1.3.6 will be required.

# v1.3.8

## Changed

* Now using Golang 1.20 for building releases.
* **Changed the default value of flag `substreams-sub-request-block-range-size` from 1000 to 10000.**

# v1.4.1

## Warning

If you don't use dedicated tier2 nodes, make sure that you don't expose `sf.substreams.internal.v2.Substreams` to the public (from your load-balancer or using a firewall).

## Breaking changes

* **Flag `substreams-partial-mode-enabled` renamed to `substreams-tier2`.**
* Flag `substreams-client-endpoint` now defaults to an empty string, which means the server is its own client endpoint (as it was before the change to protocol V2).

# v1.4.2

## Highlights

This release brings an update of Substreams to v1.1.3, which includes the following:

* Fixes an important bug that could have generated corrupted store state files. This is important for developers and operators.
* Fixes race conditions that would return a failure when multiple identical requests are backprocessing.
* Fixes and speed/scaling improvements around the engine.

**Note for Operators:** this upgrade procedure applies if your Substreams deployment topology includes both tier1 and tier2 processes. If you have defined the config value `substreams-tier2: true` anywhere, then this applies to you; otherwise, you can ignore the upgrade procedure.

This release includes a small change in the internal RPC layer between tier1 processes and tier2 processes. This change requires an ordered upgrade of the processes to avoid errors. The components should be deployed in this order:

1. Deploy and roll out tier1 processes first.
2. Deploy and roll out tier2 processes second.

If you upgrade in the wrong order, or if somehow tier2 processes start using the new protocol without tier1 being aware, users will end up with backend errors saying that some partial files are not found. Those will be resolved only when tier1 processes have been upgraded successfully.

# v1.4.3

## Highlights

This release brings an update of Substreams to v1.1.4, which includes the following:

* Changes the module hash computation implementation to allow reusing caches across substreams that 'import' other substreams as a dependency.
* Faster shutdown of requests that fail deterministically.
* Fixed a memory leak in RPC calls.

**Note for Operators:** this upgrade procedure applies to you if your Substreams deployment topology includes both tier1 and tier2 processes. If you have defined the config value `substreams-tier2: true` anywhere, then this applies to you; otherwise, you can ignore the upgrade procedure.

The components should be deployed simultaneously to tier1 and tier2, or users will end up with backend errors saying that some partial files are not found. These errors will be resolved when both tiers are upgraded.

# v1.4.4

## Operators

When upgrading a Substreams server to this version, you should delete all existing module caches to benefit from deterministic output.

# v1.4.6

## Changed

* Substreams (@v1.1.6) is now out of the firehose app and must be started using the `substreams-tier1` and `substreams-tier2` apps (see the startup sketch below).
* **Most substreams-related flags have been changed:**
  * common: `--substreams-rpc-cache-chunk-size`, `--substreams-rpc-cache-store-url`, `--substreams-rpc-endpoints`, `--substreams-state-bundle-size`, `--substreams-state-store-url`
  * tier1: `--substreams-tier1-debug-request-stats`, `--substreams-tier1-discovery-service-url`, `--substreams-tier1-grpc-listen-addr`, `--substreams-tier1-max-subrequests`, `--substreams-tier1-subrequests-endpoint`, `--substreams-tier1-subrequests-insecure`, `--substreams-tier1-subrequests-plaintext`, `--substreams-tier1-subrequests-size`
  * tier2: `--substreams-tier2-discovery-service-url`, `--substreams-tier2-grpc-listen-addr`
* Some auth plugins have been removed; the available plugins for `--common-auth-plugins` are now `trust://` and `grpc://`. See https://github.com/streamingfast/dauth for details.
* Metering features have been added; the available plugins for `--common-metering-plugin` are `null://`, `logger://`, and `grpc://`. See https://github.com/streamingfast/dmetering for details.

## Added

* Support for Firehose protocol 2.3 (for parallel processing of transactions, added to Polygon 'bor' v0.4.0).
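To make the new two-app shape concrete, here is a rough startup sketch (the app names and flag names are listed in this release; every address, URL, and value below is a placeholder assumption, not a recommended configuration):

```bash
# Hypothetical two-process layout; adjust addresses and URLs to your setup.

# tier1 serves client traffic and dispatches subrequests to tier2.
fireeth start substreams-tier1 \
  --substreams-state-store-url=file:///data/substreams-states \
  --substreams-tier1-grpc-listen-addr=:9000 \
  --substreams-tier1-subrequests-endpoint=tier2.internal:9001 \
  --substreams-tier1-max-subrequests=4

# tier2 executes the subrequests; keep it internal (see the v1.4.1 warning
# about not exposing sf.substreams.internal.v2.Substreams publicly).
fireeth start substreams-tier2 \
  --substreams-state-store-url=file:///data/substreams-states \
  --substreams-tier2-grpc-listen-addr=:9001
```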
# v1.4.8

## Changed

* **Changed `--substreams-tier1-debug-request-stats` to `--substreams-tier1-request-stats`, which enables request stats logging on Substreams tier1.**
* **Changed `--substreams-tier2-debug-request-stats` to `--substreams-tier2-request-stats`, which enables request stats logging on Substreams tier2.**

## Changes logs

We are planning to migrate from v1.3.1 to v1.4.8 (Polygon); according to the changelogs, here are the actions:

1. **from v1.3.2:** Remove usage of `substreams-stores-save-interval: <number>` and `substreams-output-cache-save-interval: <number>`, replacing them with `substreams-cache-save-interval: <number>`.
2. **from v1.3.4:** You should run 'upgrade-merged-blocks' again if you previously used 'v2' blocks that were upgraded to 'v3'.
3. **from v1.3.6:** If you have previous blocks from a Polygon chain (bor), you will need to reprocess all your blocks from the node, because some StateSync transactions may be missing from some blocks.
4. **from v1.4.1:** Flag `substreams-partial-mode-enabled` renamed to `substreams-tier2`.
5. **from v1.4.2:** Deploy and roll out tier1 processes first, then tier2 processes second.
6. **from v1.4.3:** The components should be deployed simultaneously to tier1 and tier2, or users will end up with backend errors saying that some partial files are not found. These errors will be resolved when both tiers are upgraded.
7. **from v1.4.4:** When upgrading a Substreams server to this version, you should delete all existing module caches to benefit from deterministic output.
8. **from v1.4.6 and v1.4.8:** Some flags have been changed.

# Question

Hi! We are planning to migrate from v1.3.1 to v1.4.8 (Polygon). Looking at the changelog, we see a few breaking changes:

1. **from v1.3.4:** You should run 'upgrade-merged-blocks' again if you previously used 'v2' blocks that were upgraded to 'v3'.
2. **from v1.3.6:** If you have previous blocks from a Polygon chain (bor), you will need to reprocess all your blocks from the node, because some StateSync transactions may be missing from some blocks.

And regarding release order:

1. **from v1.4.2:** Deploy and roll out tier1 processes first, then tier2 processes second.
2. **from v1.4.3:** The components should be deployed simultaneously to tier1 and tier2, or users will end up with backend errors saying that some partial files are not found. These errors will be resolved when both tiers are upgraded.

Questions:

1. Does the change in v1.3.6 mean we will have to fully resync?
2. With the changes in v1.4.2 and v1.4.3, is it better to upgrade nodes simultaneously, or to start with tier1 and then move to tier2? Should we shut them down completely, or is a rolling update okay?