# Event Tracking System V1 Stabilisation
## Observations:
1. In one case of a failed buildWorkflow, the events were published to DynamoDB and S3 about 3 minutes before the workflow actually finished.
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/5ac2da5f-93ab-4e8b-801b-a4486f8ba947
This workflow has its finished date at 9:46, but its events were published at 9:43. We believe it failed in the BE and was recovered by the retry mechanism, which led to premature publishing of the events.
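A guard like the following could prevent premature publishing on retry. This is a minimal sketch, assuming a hypothetical `WorkflowRecord` shape and status values; the actual tracking-service types may differ.

```typescript
// Sketch: publish events only once the workflow has reached a terminal
// state AND carries a finished date, so a retry that briefly reports a
// failure cannot trigger publishing early. Shapes are illustrative.
type WorkflowStatus = "pending" | "running" | "retrying" | "finished" | "failed" | "cancelled";

interface WorkflowRecord {
  id: string;
  status: WorkflowStatus;
  finishedAt?: string; // ISO timestamp, set only when the workflow finalises
}

const TERMINAL_STATES: WorkflowStatus[] = ["finished", "failed", "cancelled"];

function shouldPublishEvents(wf: WorkflowRecord): boolean {
  return TERMINAL_STATES.includes(wf.status) && wf.finishedAt !== undefined;
}
```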


2. File size in the Connect UI is not properly converted. To check whether this is expected (UI only) and why.

## Issues to be fixed/addressed:
67. For a BuildWF with a multi-segment CPL tag as input (IMF to flat), the **PACKAGING** event lists the media items aggregated by their values, but the first one always lists the MXF files and the XML (CPL) together. This is incorrect!
A related side effect is that the audio files are aggregated with the XML (or the video with the XML), which incorrectly increases the duration.
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/61ab04f2-eaa2-4e07-8b29-192074f25e58
In this case it shows 9 minutes in the first event (2 audio MXF files that sum up to 6 minutes, plus 1 XML accounting for another 3 minutes).
Screen capture of the event with explanation: https://i.imgur.com/Ag1EL4F.png
Note: For a single-segment CPL input (with a tag), the XML doesn't appear at all in the packaging event, only the MXF files.
[CD-11533](https://ateliere.atlassian.net/browse/CD-11533)
- [ ] BE - in progress
- [ ] QA - reported https://ateliere.atlassian.net/browse/CD-11533
## Fixed issues:
1. [**Fixed**] Cancelled events are not received in DynamoDB or S3
[Workflow example](https://connect.three.ownzones.dev/teddy/workflows/0d3fb32a-5e42-40de-a1bb-7444e2fbba21)
- [x] QA
- [x] DEV
2. [**Fixed**] Can't trigger INTERLACE for cancelled events; they always seem to be interpreted as INGEST.
- [x] QA
- [x] DEV
3. [[CD-11239](https://ateliere.atlassian.net/browse/CD-11239)] Cancelled events are not generated in S3
- [x] QA
- [x] DEV
4. [**Won't Fix, for now**] DemuxWorkflow does not generate events; the events exceed the DynamoDB item size limit.
[Workflow example](https://connect.two.ownzones.dev/str/workflows/3702bece-7fbc-42d3-8d6c-aa29a62826de)
```
2022-08-17T13:08:01.786Z - info: processEvent: skipping workflow '3702bece-7fbc-42d3-8d6c-aa29a62826de' with status 'pending'. app=tracking-service, env=production, version=0.0.1
```
```
info: [DynamoDbService] DynamoDB put error: Item size has exceeded the maximum allowed size app=tracking-service, env=production, version=0.0.1
```
ORG#7209e289-8973-484e-a86c-3a0fd267a007
SYSTEM_EVENT#3702bece-7fbc-42d3-8d6c-aa29a62826de
SYSTEM_EVENT#83dffd66-2e49-4934-abff-c1f4c6bfed2c
- [ ] BE
- [ ] QA
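The "Item size has exceeded the maximum allowed size" error above corresponds to DynamoDB's 400 KB per-item limit. A pre-flight size estimate like the following sketch could let the service skip or offload oversized events instead of erroring; the helper names are illustrative, not the actual service code.

```typescript
// Sketch: DynamoDB rejects items larger than 400 KB. The UTF-8 byte length
// of the JSON serialisation is a reasonable proxy for the item size
// (DynamoDB counts attribute names plus values).
const DYNAMO_MAX_ITEM_BYTES = 400 * 1024; // 400 KB hard limit

function estimateItemBytes(item: unknown): number {
  return Buffer.byteLength(JSON.stringify(item), "utf8");
}

function fitsInDynamo(item: unknown): boolean {
  return estimateItemBytes(item) <= DYNAMO_MAX_ITEM_BYTES;
}
```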
5. [**Fixed**] Reporting events for non-video file ingests (captions, XMLs, etc.) look incomplete
[Workflow example](https://connect.two.ownzones.dev/str/workflows/067487a9-fed9-4e1a-9791-743f8966805b)
```
{
"createdAt":"2022-08-17T13:29:04.623Z",
"endedAt":"2022-08-17T13:29:13.475Z",
"pk":"REPORTING_EVENT#067487a9-fed9-4e1a-9791-743f8966805b",
"sk":"ORG#7209e289-8973-484e-a86c-3a0fd267a007",
"organizationId":"7209e289-8973-484e-a86c-3a0fd267a007",
"organizationName":"str",
"organizationSlug":"str",
"publishedAt":"2022-08-17T13:29:54.632Z",
"startedAt":"2022-08-17T13:29:05.817Z",
"systemEventId":"SYSTEM_EVENT#067487a9-fed9-4e1a-9791-743f8966805b",
"triggeredAt":"2022-08-17T13:29:04.623Z",
"triggeredByUserId":"0640f165-9a02-4e6a-9192-c53d8cc75a51",
"skipped":true
}
```
https://connect.two.ownzones.dev/metadata-service-second/workflows/d20ffa44-af81-4c5e-9600-e38057984a51
```
{
"createdAt":"2022-08-17T11:48:09.042Z",
"endedAt":"2022-08-17T11:48:16.752Z",
"pk":"REPORTING_EVENT#d20ffa44-af81-4c5e-9600-e38057984a51",
"sk":"ORG#4f6c1c0e-863d-4fe0-9eb0-e71216dc8293",
"organizationId":"4f6c1c0e-863d-4fe0-9eb0-e71216dc8293",
"organizationName":"Metadata service second",
"organizationSlug":"metadata-service-second",
"publishedAt":"2022-08-17T11:48:52.370Z",
"startedAt":"2022-08-17T11:48:09.937Z",
"systemEventId":"SYSTEM_EVENT#d20ffa44-af81-4c5e-9600-e38057984a51",
"triggeredAt":"2022-08-17T11:48:09.042Z",
"triggeredByUserId":"c6b0cb49-9475-45db-9c23-e71283019a57",
"skipped":true
}
```
- [x] BE
- [x] QA
6. [**Not an issue**] No **PACKAGING** type of event appears (either as a json in S3 or as an entry in DynamoDB) when a BuildWF contains a File Copy type of deliverable and the input/output is a non-video file.
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/39fc0a62-4e69-4e95-8e95-3657110fdc7d
Job example: https://connect.two.ownzones.dev/metadata-service-second/jobs/7e1a3162-7588-4d14-9fab-d17a926174a3
[Configuration example](https://i.imgur.com/Tij2V1f.png)
- [ ] ~~BE~~
- [x] QA - not an issue
7. In the case of a BuildWorkflow that contains a Thumbnail deliverable, the generated S3 json and DynamoDB entry list 2 BuildWorkflow IDs, but only one of them is correct. The first listed one doesn't actually exist.
Job example: https://connect.two.ownzones.dev/metadata-service-second/jobs/f04c2ff0-8da7-420e-b6e4-4449c06dda8a
The first BuildWorkflow present doesn't actually exist:
https://connect.two.ownzones.dev/metadata-service-second/workflows/ae40c9ef-b45a-4adc-917c-f14c855add17
[First BuildWorkflow Screen](https://i.imgur.com/CEU6Beb.png)
However, the second listed BuildWorkflow exists:
https://connect.two.ownzones.dev/metadata-service-second/workflows/335d38a8-de7f-44af-ada7-b15b35654cfb
[Second BuildWorkflow Screen](https://i.imgur.com/WOxL9jw.png)
In the DynamoDB search, the incorrect BuildWorkflow ID is present with the correct one appearing only in the systemEventId section:
[DynamoDB Workflow Screen](https://i.imgur.com/bY56hXL.png)
- [x] BE - fixed - we were trying to use a random uuid as the report id (because a workflow can generate multiple events, which cannot share the same id), and we replaced the true workflow id a little too early.
- [x] QA
8. No tracking event is registered for the **NTP** **Transcode** type of tasks from a BuildWorkflow. This applies for both S3 jsons and for DynamoDB.
Running such a job only generates a tracking event for the ingestion of the output file; nothing before that.
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/d3d0bb34-9671-41d4-87ef-255b65e58353
In DynamoDB I've tried searching with the following filters:
sk: ORG#4f6c1c0e-863d-4fe0-9eb0-e71216dc8293
systemEventId (adding the Connect WF ID in it): SYSTEM_EVENT#d3d0bb34-9671-41d4-87ef-255b65e58353
Job example: https://connect.two.ownzones.dev/metadata-service-second/jobs/7474b97a-26bb-4ef8-83a4-5821f9b6f5ed
- [x] BE - sources field is missing from the [system event](https://us-east-1.console.aws.amazon.com/dynamodbv2/home?region=us-east-1#edit-item?table=tf-tracking-service-main-table-two&itemMode=2&pk=SYSTEM_EVENT%23d3d0bb34-9671-41d4-87ef-255b65e58353&sk=ORG%234f6c1c0e-863d-4fe0-9eb0-e71216dc8293&route=ROUTE_ITEM_EXPLORER) - FIXED
- [x] QA
9. [**Fixed**] No tracking event (S3 or DynamoDB) for Demux - Draw Text (Completed, Failed, Cancelled)
Workflow example: https://connect.two.ownzones.dev/ttt/workflows/0fd198d7-492c-41ba-bd8e-c1561ae71dc5
- [x] BE - it looks like it is ignored because it has an image as input. Should validate with product; not sure why it's considered a Transcode event if it doesn't work with media containers.
- [x] LE ileana: IT WAS COMPLETELY REMOVED
- [x] QA
10. [**Fixed**] No tracking event (S3 or DynamoDB) for Demux - Extract caption (Failed or Canceled)
Workflow example: https://connect.two.ownzones.dev/ttt/workflows/a45bd3c0-c059-47f4-bda6-3d067c474711
- [x] BE
- [x] QA
11. No tracking event (S3 or DynamoDB) is generated for the **BuildWorkflow** with a **Composition** type of transcoding when the input is a CPL indicated with a tag (essentially an IMF to Flat type of transcoding).
Note: It works if the same deliverable is configured to use a Composition template. The issue is present only when an existing CPL tag is used as a source.
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/a6c0ebe9-e31a-4e6d-8fed-fc6a208766ee
Job example: https://connect.two.ownzones.dev/metadata-service-second/jobs/279b4262-3f6b-4b8a-a18d-8eac749b77c6
[Screen1](https://i.imgur.com/Rc77ger.png)
[Screen2](https://i.imgur.com/GrMb35H.png)
- [x] BE
- [x] QA - will recheck after #12 is also fixed
12. No tracking event (S3 or DynamoDB) is generated when the input is an Image Sequence.
Workflow example: https://connect.two.ownzones.dev/ttt/workflows/06d7e722-1ac9-4746-a111-f73bd5337100
- [x] BE - image_sequence should also be considered as media_container input? are there other similar cases? cpl? everything with tracks?
- [x] QA - Fixed
13. (Clarification) There is a difference between the S3 json and the DynamoDB entry for the duration of the input media item.
The S3 json shows the exact value present in the DB, while the DynamoDB entry is missing the last 3 characters.
DB:
https://i.imgur.com/a3lzRVR.png
Comparison between S3 json and DynamoDB entry:
https://i.imgur.com/wzuRphn.png
Is this intended or a side effect?
- [x] BE - this has to do with the way Dynamo publishes the changes. We don't do anything specific for S3. We can convert duration to a string to avoid this
- [x] QA
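The fix suggested in the BE note (converting the duration to a string) could look like the sketch below; the event shape is an assumption for illustration.

```typescript
// Sketch: DynamoDB applies its own number precision rules, and the stream
// that propagates changes can serialise numbers differently, losing
// trailing digits. Persisting the duration as a string keeps the S3 and
// DynamoDB copies identical. The quantifier shape is illustrative.
interface MediaQuantifiers {
  duration: number | string;
  frameRate?: number;
}

function normalizeDuration(q: MediaQuantifiers): MediaQuantifiers {
  return {
    ...q,
    // Strings pass through Dynamo verbatim, preserving every digit.
    duration: typeof q.duration === "number" ? q.duration.toString() : q.duration,
  };
}
```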
14. (CNR) ~~Adding an **MXF** as a source file for a deliverable from a package template will no longer generate the **PACKAGING** S3 json file.
The result appears fine in DynamoDB though.~~
Please note that the **INGEST** type of events appear; only the **PACKAGING** one is missing.
~~The purpose of the MXF was to check the **essenceType** value for the BuildWorkflow.~~
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/621a5a23-9cec-43d4-a36d-9fdec704691d
Job example: https://connect.two.ownzones.dev/metadata-service-second/jobs/4a33327a-3332-43ee-84d5-d91167e24172
https://i.imgur.com/Eq5YZCN.png
- [ ] ~~BE~~
- [x] QA - CNR
15. No Awaiting files ingest task appears after a Transcode task (if no Metadata is set for it).
This is related to the following specification:
https://ateliere.atlassian.net/browse/CD-10851
https://i.imgur.com/ClZb8wq.png
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/3935db16-37cc-4ba9-bc58-7bfccdaef20e
- [x] BE
- [x] QA
16. The newly added **Awaiting media ingest** task, following a transcode task, doesn't indicate the file in question.
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/16172828-f03b-463e-8a44-5a50deca3454
https://i.imgur.com/IlBNfZ6.png
- [x] BE
- [x] QA
17. The newly added **Awaiting media ingest** task works fine only in jobs. Running it on standalone files with an NTP type of transcode will ~~result in an error: Error: File 's3://tf-s3-ownzones-two/test/metadata-second/deliverables' not found in the database.~~
Update: Now the file is missing completely from the task.
Note: This is actually a **CustomWorkflow**
~~WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/fe706208-894a-4139-9b40-fdfb81069b05~~
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/48950660-d47e-b7-b600-49cee8dac28b

- [x] BE - LE: added fix on Custom Workflow
- [x] QA - Update: Now the file is missing completely from the task.
18. (Partial fix) Only the first file is recorded inside the events for a BuildWF. Both the ID and the details suggest that only file 1 is being recorded.
~~WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/7ae37f79-f2a1-42a5-a59c-bf2dcb602bcb~~


- [x] BE
- [x] QA - fixed for the PACKAGING event. The TRANSCODE event still lists only the 1st file. WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/147d1e72-4b0d-46e2-9748-e3ae16177a24
19. Various optional details, such as **codec**, appear to be missing from the input and output file details in both the S3 json and the DynamoDB entry.
Keep in mind that the fields also differ between input and output:

- [x] BE
- [x] QA
20. It appears that no S3 event json is generated for the Concatenate type of task inside a BuildWF. One event per Transcode task appears in DynamoDB, but nothing in S3.
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/967e66bb-cd95-4caa-b199-75899eeb979b
Job example: https://connect.two.ownzones.dev/metadata-service-second/jobs/fa14c723-e32d-4d66-b4b9-52aeb67f43b7
- [x] BE
- [x] QA
21. The **PROVIDER** section of the recorded events doesn't reflect the actual service provider and always returns ZYPLINE (valid for all TRANSCODE events).

- [x] BE
- [x] QA
22. (Aggregation issue) The DynamoDB **PACKAGING** events for the **Concatenate**, **Composition** and **Transcode** types of deliverables list as sources the source files themselves, the intermediate files produced while transcoding them, and also the concatenated or transcoded output itself.
This produces false information, as the duration is aggregated across all of them (3 sources, 3 intermediates and 1 concatenated file): it appears as 1230 seconds total, while the sources/intermediates only account for 410 seconds total (3 files, 180s + 180s + 30s), in the concat case.
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/27161761-d37b-4b91-822b-41d159585c17
Note: The intermediate files are also listed for the classical type of transcoding, such as a **Composition** type of deliverable with a **Composition template** as an input.
https://i.imgur.com/6ODLbYz.png
For NTP, the output file (deliverable) also appears as an input, but only in the packaging event. The transcode event lists only one file, so the duration shown in the packaging event is double that of the transcode event.
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/f0c501f9-49b3-422d-af8c-05c5cc086432
https://i.imgur.com/kq2c7wo.png
- [x] BE
- [x] QA
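The double counting above could be avoided by aggregating only over true source files. A minimal sketch, assuming a hypothetical `role` field on each tracked file (the real events identify files differently):

```typescript
// Sketch: sum the PACKAGING duration only over source files, excluding the
// intermediates produced during transcoding and the workflow's own outputs.
// The TrackedFile shape and "role" field are assumptions for illustration.
interface TrackedFile {
  fileId: string;
  duration: number; // seconds
  role: "source" | "intermediate" | "output";
}

function aggregateSourceDuration(files: TrackedFile[]): number {
  return files
    .filter((f) => f.role === "source")
    .reduce((total, f) => total + f.duration, 0);
}
```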
23. In the DynamoDB events, no outputs appear for the NTP transcodes made inside a BuildWorkflow.
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/b7b274e3-f5a0-4800-981a-cc5c975c8159
The output file doesn't appear anywhere: https://connect.two.ownzones.dev/metadata-service-second/files/89b0aa16-0bef-4f1d-9df5-8d674ef9f53f
https://i.imgur.com/0Csep5A.png
https://i.imgur.com/JdDb313.png
- [x] BE
- [x] QA
24. transcodingType not available in quantifiers for TRANSCODE events
[extended ERD](https://i.imgur.com/0x8uYwv.png)
[reporting events examples](https://docs.google.com/spreadsheets/d/1rJSssJPiLHCyU-C4FYmY_nCZwQ0J09cQRxf0ymwXQRU/edit#gid=458399358)
- [x] BE
- [x] QA
25. The unit of measure (UOM) is specified as minutes, but the values populated under it are in seconds.
For example: 3 input files for a Composition (Concat) type of deliverable sum up to 410 seconds (2x180 and 1x30) and appear exactly like that in the reporting event.

Shouldn't the value appear in minutes, i.e. 6.8 instead of 410, thus matching the documentation?

- [x] BE
- [x] QA
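Converting the aggregated seconds before writing the quantifier would make the value match the documented unit. A sketch, rounding to one decimal to mirror the 6.8-minute example above (the helper name is illustrative):

```typescript
// Sketch: convert an aggregated duration from seconds to minutes,
// rounded to one decimal place, so the value matches the minutes UOM.
function secondsToMinutes(seconds: number): number {
  return Math.round((seconds / 60) * 10) / 10;
}
```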
26. For the **TRANSCODE** type of event, **size** and **type** additional properties appear for the **outputMedia**.
It appears that they shouldn't be present.

WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/fce7c7ec-3588-4ac4-beec-29c76434d222

- [x] BE
- [x] QA - still present
27. Aggregate input audio files for DemuxWorkflow?
WF example: https://connect.two.ownzones.dev/ttt/workflows/a6eaa385-fc73-4d68-a1f0-da4007e47295 - see output no 3
[DynamoDB JSON](https://i.imgur.com/xkp4Fkv.png)
- [x] BE - deployed, to be tested
- [x] QA - Now, all source fileIds will be shown, in case there are multiple files added to audio muxing
28. The **frameRate** and **essenceType** fields need to be removed from all task events with the exception of the **Ingest** and **Transcode** ones.
- [x] BE
- [x] QA
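The field removal requested above can be expressed as a small filter; a sketch with an assumed event shape, not the actual service types:

```typescript
// Sketch: drop frameRate and essenceType from every task event except
// Ingest and Transcode. The TaskEvent shape is an assumption for
// illustration.
interface TaskEvent {
  type: string; // e.g. "INGEST", "TRANSCODE", "PACKAGING", "DELIVERY"
  quantifiers: Record<string, unknown>;
}

const KEEPS_MEDIA_FIELDS = new Set(["INGEST", "TRANSCODE"]);

function stripMediaFields(event: TaskEvent): TaskEvent {
  if (KEEPS_MEDIA_FIELDS.has(event.type)) return event;
  const { frameRate, essenceType, ...rest } = event.quantifiers;
  return { ...event, quantifiers: rest };
}
```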
29. **DELIVERY** quantifiers should not contain **inputMedia**, but **inputFile**, as shown below
[current quantifiers](https://i.imgur.com/fEwll16.png)
[expected quantifiers](https://i.imgur.com/bdE1f9B.png)
- [x] BE
- [x] QA (CustomWorkflow + DeliveryWorkflow - Copy - Deliver ok)
30. **OutputMedia** appears on Ingest Transcode events that do not return any visible outputs (framescan, mvcache, interlacescan, cropdetection) (also on Demux - ExtractCaption)
Example:
```
"quantifiers": {
  "action": "INTERLACESCAN",
  "inputMedia": {
    "codec": null,
    "duration": "70.036633",
    "essenceType": null,
    "frameRate": 29.97,
    "quality": "hd"
  },
  "outputMedia": {},
  "provider": "ZYPLINE",
  "type": "TRANSCODE"
},
"source": {
  "s": [
    "8899137f-3e2d-4ca2-9b24-552fb8fdcf69"
```
- [x] BE
- [x] QA - Ingest Transcode ok
31. **Codec: Null** returned in all event types (Ingest, Transcode, DemuxWorkflow, Concatenate, Packaging):
file: https://connect.two.ownzones.dev/cost-tracking/files/8899137f-3e2d-4ca2-9b24-552fb8fdcf69
SystemEvent: https://us-east-1.console.aws.amazon.com/dynamodbv2/home?region=us-east-1#edit-item?table=tf-tracking-service-main-table-two&itemMode=2&pk=SYSTEM_EVENT%2371eaa2bd-f80a-411e-a283-d432838be6e9&sk=ORG%2329ac6605-7469-4fe3-99ab-7e9822b04149&route=ROUTE_ITEM_EXPLORER
"fileId": "8899137f-3e2d-4ca2-9b24-552fb8fdcf69",
"fileProperties": {
"codec": null,
"duration": "70.036633",
"essenceType": null,
"frameRate": 29.97,
"quality": "hd",
"size": 0.7446694243699312,
"type": "media_container"
- [x] BE - how it was implemented (should check if needed for other type of files):
- if file.type is MediaContainer or ImageSequence
- get videoStream if exists, if not get audioStream
- return stream.codecName
- [x] QA - Fixed for WAV (AAC example), MP4 H264, MP4 H265(HEVC), ProRes MOV, MPEG2TS MPEG2, concat
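The BE steps above (video stream first, audio stream as fallback) can be sketched as follows. The `Stream`/`IngestedFile` shapes are assumptions for illustration, not the actual ingest types.

```typescript
// Sketch of the codec lookup the BE note describes: for media containers
// and image sequences, prefer the video stream's codec, fall back to the
// audio stream's, otherwise report null.
interface Stream {
  kind: "video" | "audio";
  codecName: string;
}

interface IngestedFile {
  type: string; // e.g. "media_container", "image_sequence", "caption"
  streams: Stream[];
}

function resolveCodec(file: IngestedFile): string | null {
  if (file.type !== "media_container" && file.type !== "image_sequence") return null;
  const stream =
    file.streams.find((s) => s.kind === "video") ??
    file.streams.find((s) => s.kind === "audio");
  return stream ? stream.codecName : null;
}
```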
32. Small issues for Demux Workflow:
* The Demux provider is still Zypline instead of Mediawarp or NodeTT
* Also, some information in the output media is `null` but should mirror the input media (frameRate, quality) - **Fixed?**
* ~~And `size` should be removed from Output Media~~ - **Fixed**
* ~~Duration has different unit of measure between input and output~~ - **Fixed**
* ~~**input: "duration": "3.1677510833333336"** - same as UOM~~
* ~~**output: "duration": "190.064875"** - same as before converting seconds to minute~~
[DemuxWorkflow Image](https://i.imgur.com/ujIhQjR.png)
- [x] BE
- [x] QA
33. (nice to have) The order of the input and output file details matches neither the design nor each other. The **quality** parameter is placed last in **outputMedia**, while in **inputMedia** it is second.

Design: https://i.imgur.com/UnOhqWc.png
- [x] BE
- [x] QA
34. For the **Concatenate** action, the **inputMedia** lists the duration in minutes while the **outputMedia** lists the duration in seconds.
Note: This **DOES NOT** occur for all **Transcode** actions.
Build WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/b90fb25b-000d-4893-b22c-e4ba7e94031f

- [x] BE
- [x] QA
35. (**NTP**) The output files are aggregated by the quality set in the profile, but this actually produces multiple identical **Transcode** events in which the duration of all outputs is summed, instead of one event per quality set.
In this example, the NTP profile has 3 inputs (2 HD and 1 SD) and 5 outputs (2 SD, 2 HD and 1 UHD).
The reporting json from S3 has 3 identical Transcode events (the 3 quality groups are correctly identified), but all 3 use the same 3 files as inputs, while each event's output duration is the sum of all 5 outputs.
Build WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/5756b537-d554-48c0-92a4-827fb75da1d5


- [x] BE
- [x] QA
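The expected behaviour (one event per quality group, each summing only its own outputs) can be sketched like this; the `OutputFile` shape is an assumption for illustration:

```typescript
// Sketch: group NTP outputs by quality and total the duration per group,
// so each TRANSCODE event reports only its own group's duration instead
// of the sum of all outputs.
interface OutputFile {
  fileId: string;
  quality: string; // e.g. "sd" | "hd" | "uhd"
  duration: number; // seconds
}

function durationPerQuality(outputs: OutputFile[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const out of outputs) {
    totals.set(out.quality, (totals.get(out.quality) ?? 0) + out.duration);
  }
  return totals;
}
```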
36. (Recheck) (Cached) Starting a **Transcode** job with cached elements generates the **PACKAGING** events only in DynamoDB; no json is copied to S3.
Build WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/d1fc109d-7524-44b0-8543-08a6e1145de1
In this example, the 3 input files appear as 3 individual PACKAGING events in DynamoDB but nothing appears in S3.
- [ ] BE
- [x] QA - CNR
~~37. [LOW priority] Order for CustomWorkflow - Demux - Transcode quantifiers(with input and output) is not the same in S3 and DynamoDB~~
[S3 Image](https://i.imgur.com/5dIdYUI.png)
[DynamoDB Image](https://i.imgur.com/sxPNaUp.png)
- [x] BE - not needed, confirmed with @abalan
- [x] QA
38. S3 Ingest events are not generated for CompositionPlaylist files:
[workflow](https://connect.two.ownzones.dev/cost-tracking/workflows/1c21c9aa-1bde-471d-840a-3aada6b0ddad)
[file example](https://connect.two.ownzones.dev/cost-tracking/files/abd2eda5-1279-4ff3-94ac-7f54d9dd4be9)
- [x] BE
- [x] QA
39. Having a **CPL** as an input in a **BuildWF** will show the **inputMedia** as having a zero duration with everything else null (except **essenceType**).
This affects both the **PACKAGING** and **TRANSCODE** events.

Build WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/58218e9c-5bae-4a46-8a18-dbe0f7769dbf
Job that generated the above workflow: https://connect.two.ownzones.dev/metadata-service-second/jobs/a80b4045-f477-495b-a322-192bbe072510
- [x] BE
- [x] QA
41. CPL CustomWorkflow - Transcode reporting events are not generated
- [x] BE
- [x] QA
42. (Clarification) The S3 json containing reported events doesn't appear to be scoped to a particular system event (workflow).
Currently, it appears that the S3 json contains all the tracked events that occurred in a short time span, regardless of whether they come from the same system event (workflow). This means that working with a high number of files might spread them across multiple json files in S3.
From the documentation page, it appears that they should be split per system event: https://github.com/OwnZones/tracking-service/blob/technical-documentation/docs/architecture/events.md#reporting-events

The S3 json in which that screen capture was made: https://s3.console.aws.amazon.com/s3/object/tf-tracking-service-s3-reporting-events-ownzones-two?region=us-east-1&prefix=ORG%234f6c1c0e-863d-4fe0-9eb0-e71216dc8293/2022/9/1/85bac1e8-e811-46e7-b2c2-0a1c30ca2db9.jsonl.gz
- [x] BE - please provide an example with the report events of a single system event added in two different s3 files. Otherwise, it's normal to have report events generated by multiple system events in the same files
- [x] QA - CNR at this time. Will update if it occurs again
43. Configuring an **NTP** to generate the outputs in a different bucket will not register any **TRANSCODE** event in either S3 or DynamoDB.
For these types of jobs, only the **PACKAGING** events are registered.
**NOTE**: Due to CD-9086, this scenario cannot be tested by setting the same bucket with a different organisation/folder. Only a completely different bucket is testable now (the workflow finishes).
**WF example #1**: https://connect.two.ownzones.dev/metadata-service-second/workflows/653107ed-ba43-4049-9c3d-5b3243a560a8
**WF example #2**: https://connect.two.ownzones.dev/metadata-service-second/workflows/e21aa520-e2d0-4932-b769-c58f6751d044
**The job that generated the above workflows is**: https://connect.two.ownzones.dev/metadata-service-second/jobs/a6d09ce5-5f73-41aa-ba2a-9da551f0a6e6
- [x] BE
- [x] QA
44. BuildWF events are missing from S3 and DynamoDB when the source file is deleted while the **Transcode** task is running.
Steps:
a) Start the job with a **Transcode** task (**Composition** deliverable in this case);
b) While the Transcode task is running, delete the source file from S3;
c) Look for the S3 json for that workflow or search DynamoDB.
Note: The event appeared for the deleted item that was used in a **Copy** deliverable.
WF: https://connect.two.ownzones.dev/metadata-service-second/workflows/89b66980-1d86-4cec-8546-b512df786ebf
Job in which the above wf was generated in: https://connect.two.ownzones.dev/metadata-service-second/jobs/49a446d6-39d5-4fca-96f2-e556886a2111
- [x] BE Ileana: done, but this issue was not caused by the deletion of the file; it was caused by something that was omitted from the augmentation process
- [x] QA
45. MediaWarp workflow for an image sequence will not contain info for the input file in DB related to:
- duration
- codec
- framerate
- quality(resolution)
[Workflow example](https://connect.two.ownzones.dev/metadata-service-second/workflows/929327cd-a94e-4957-8f67-e22b5008426d)
- [x] BE
- [x] QA - Done
46. Failed Transcode NTP buildWorkflows with archived items (with auto-dearchive disabled) do not appear in S3 or DynamoDB.
However, the workflows with Composition deliverable type will appear.
[CD-11514](https://ateliere.atlassian.net/browse/CD-11514)
WF example #1: https://connect.two.ownzones.dev/metadata-service-second/workflows/73e4d7d7-f514-413d-8fbc-e17292d68a3c
WF example #2: https://connect.two.ownzones.dev/metadata-service-second/workflows/fe9e2937-a343-4e98-b30c-e09ec56a654a
For comparison, this is a WF for a Composition: https://connect.two.ownzones.dev/metadata-service-second/workflows/6daf729e-6dad-46e9-aefa-f4cba8dc7eb0
- [x] BE - deployed
- [x] QA - reported https://ateliere.atlassian.net/browse/CD-11514
47. Cached tasks appear as regular ones, so no visual difference exists between these events.
BuildWF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/ec4544bf-7c35-4626-a4ed-97399e31cfab
In this example, the Transcode task with input 886a0cfe-ed29-48b1-b17c-288f7442623d used the cached output but in the generated events, it appears as if it ran from start to finish.
Event screen capture: https://i.imgur.com/rTY9xQM.png
Workflow screen capture: https://i.imgur.com/Z1v8pNP.png
In this BuildWorkflow the 3 **TRANSCODE** and 1 **CONCATENATE** events are actually cached but appear as regular events: https://connect.two.ownzones.dev/metadata-service-second/workflows/a41e0ba9-c157-448d-bf1b-7250427fe7fd
- [x] BE
- [x] QA
~~48. Having a CPL input with multiple segments (multiple MXF video and audio files) will display the outputMedia from the **TRANSCODE** event with a zero duration and everything else as null.~~
WF example #1: https://connect.two.ownzones.dev/metadata-service-second/workflows/c34c5982-1401-4b9a-8711-4a20c18d746d
WF example #2: https://connect.two.ownzones.dev/metadata-service-second/workflows/dcd00d39-2d42-452d-a64b-083f49f358b8

Job that generated the above workflows: https://connect.two.ownzones.dev/metadata-service-second/jobs/0ae7613d-65ab-4a85-af12-92ce00655e2d
For comparison, having only one segment in the input CPL doesn't generate any issues: https://connect.two.ownzones.dev/metadata-service-second/workflows/729b7056-9a3b-4565-a831-a2cde6cd22c0
- [x] BE - typo in the composition template -> missing '.' from '.mov' extension and the system doesn't recognize this type of file (without extension) and it looks like it doesn't save properties on it on ingest
- [x] PM - added a ticket in JIRA (Connect Backlog) CD-11475
- [ ] QA
49. **CustomWorkflow - Delivery**: quantifiers - copyType is not shown if the delivery failed.
[FTP Fail](https://i.imgur.com/I3VwPrA.png)
[S3 Fail](https://i.imgur.com/5mR2KMP.png)
* Please note that the copyType quantifier is missing only for failed deliveries; for [Cancelled](https://i.imgur.com/h0qWeck.png) ones copyType appears
- [x] BE
- [x] QA
48. **CustomWorkflow - Delivery**: quantifiers - type appears as COPY instead of DELIVERY
[S3](https://i.imgur.com/BOHNlDS.png)
[Documentation](https://i.imgur.com/6rkEF3f.png)
- [x] BE
- [x] QA
40. [**CD-11453**] Copy type is not fully specified in CustomWorkflow - Copy - Deliver events (S3_SAME_REGION, S3_DIFFERENT_REGION, ASPERA_ON_CLOUD etc)
[S3event](https://i.imgur.com/RuGgShL.png)
[WF Example](https://connect.two.ownzones.dev/metadata-service-second/workflows/ac681c48-219b-48b3-8c0d-d949e8a60447) - [Platform Used](https://connect.two.ownzones.dev/metadata-service-second/platforms/d0532d5b-9711-4d37-9891-db76cdf392cb)
- [x] BE
- [x] QA
50. Report event is not generated for a media warp workflow with a failed task
[(CD-11479)](https://ateliere.atlassian.net/browse/CD-11479)
- [x] BE
- [x] QA
51. **CustomWorkflow-Transcode-ENCODEIAB:** the Reporting Event does not include the codec
[DynamoDB](https://i.imgur.com/pvUWR4E.png)
[S3](https://i.imgur.com/3LdevwA.png)
- [x] BE - deployed, ready to be tested
- [x] QA
52. In the **PACKAGING** events, the **codec**, **frameRate** and **essenceType** are still present. They should remain only for the **Transcode** and **Ingest** events.
Event screen capture: https://i.imgur.com/1dxheXq.png
Documentation screen capture: https://i.imgur.com/JTjcwBP.png
- [x] BE - deployed
- [x] QA
53. Change transcodeType to transcodingType in quantifiers as per specs
- [x] BE - deployed
- [x] QA
54. Add triggeredByUsername to report event
[CD-11539](https://ateliere.atlassian.net/browse/CD-11539)
- [x] BE - ready for qa, leftover issue 71.
- [x] QA
55. Duration is 0 and FrameRate is "null" for Image Sequence ingest.
[CD-11536 ](https://ateliere.atlassian.net/browse/CD-11536)
e.g. [Workflow](https://connect.two.ownzones.dev/metadata-service-second/workflows/6e9ea7b0-557b-4895-af36-c3c571bf37b4)
[Screenshot](https://i.imgur.com/5K5warg.png)
- [x] BE - deployed
- [x] QA - All good. Duration and framerate are both shown correctly now
56. In the events, the attribute appears as **frameRate**, while in the documentation it is referenced as **framerate**.
What is the desired format?
Note: In the metadata strings for the file attributes, we have it as frameRate.
- [x] BE - deployed
- [x] QA
57. **CustomWorkflow-Transcode**: Canceling customWF transcode for a CPL will not generate a ReportingEvent - https://connect.two.ownzones.dev/cost-tracking/workflows/8f09c073-d377-4416-960e-a16b0dd51251
*Same for completed Transcode for CPL https://connect.two.ownzones.dev/cost-tracking/workflows/5a4b7fe3-3cbe-49dd-9e19-6803e14afc35
^ Ileana: for this one, it's because of the cache, the TranscodeTask was skipped
- [x] BE Ileana - this is a timing issue... because the WaitForIngestTask before the Transcode wasn't able to run, we hadn't stored any source for the transcode yet, and since no source was found, the task was skipped.
We should discuss how these cases should be treated.
- [x] QA
58. The **Action** and **transcodingType** strings in events are not formatted as per the specs. Example: "action": "REVERSETRANSCODE",
[specs](https://i.imgur.com/QpdCqwl.png)
- [x] BE - deployed - to be tested again
- [x] QA - still an issue. So far, found that for CustomWorkflow-Transcode-Demux -["transcodingType": "FlatToImfVideo"](https://us-east-1.console.aws.amazon.com/dynamodbv2/home?region=us-east-1#edit-item?table=tf-tracking-service-main-table-two&itemMode=2&pk=REPORTING_EVENT%238ee097ca-d7b3-4484-a29a-0ff28a23812b&sk=ORG%2329ac6605-7469-4fe3-99ab-7e9822b04149&route=ROUTE_ITEM_EXPLORER)
59. **transcodingType: null** is missing for Ingest-Transcode tasks. As per the specs, **transcodingType: null** should always appear.
[specs](https://i.imgur.com/FOjDgNQ.png)
- [x] BE - deployed, to be tested
- [x] QA
60. An S3 Delivery with a different account in the same region is marked as "copyType": "S3_DIFFERENT_REGION"
[CD-11554](https://ateliere.atlassian.net/browse/CD-11554)
[Workflow](https://connect.two.ownzones.dev/cost-tracking/workflows/96cd03fa-339d-4f27-b7b6-3fe38a3ad88a)
[Job](https://connect.two.ownzones.dev/cost-tracking/jobs/62cd44c5-9266-407b-8182-4811ef3d2f86)
[Platform](https://connect.two.ownzones.dev/cost-tracking/platforms/b5fc586e-ebe8-40c4-adf4-97d28e5708b2)
- [x] BE - done, Code Review
- [x] QA - Done. Now Unknown Region has been added for Failed delivery scenario
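The copyType classification should depend on the source and destination regions, not on the account; the "Unknown Region" case noted by QA covers deliveries where a region can't be resolved. A sketch with illustrative names, not the actual service code:

```typescript
// Sketch: classify an S3 delivery by comparing bucket regions only.
// A cross-account delivery within the same region is still SAME_REGION;
// if either region can't be resolved (e.g. a failed delivery), report
// UNKNOWN_REGION.
type S3CopyType = "S3_SAME_REGION" | "S3_DIFFERENT_REGION" | "UNKNOWN_REGION";

function classifyS3Copy(sourceRegion?: string, destRegion?: string): S3CopyType {
  if (!sourceRegion || !destRegion) return "UNKNOWN_REGION";
  return sourceRegion === destRegion ? "S3_SAME_REGION" : "S3_DIFFERENT_REGION";
}
```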
61. The quantifiers.frameRate value is 23.98 for files with a 24000/1001 or 2997/125 frameRate (23.976) (https://ateliere.atlassian.net/browse/CD-11513)
file: https://connect.two.ownzones.dev/testautomation/files/c7a4da56-7520-46e7-98d8-c4c7c02506bb
- [x] BE - deployed
- [x] QA
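The discrepancy in issue 61 is a display-rounding one: 24000/1001 ≈ 23.976024 and 2997/125 = 23.976 exactly, so rounding to two decimals yields 23.98 while three decimals gives the conventional 23.976. A minimal sketch (function name is an assumption, not the actual code):

```python
def display_frame_rate(num, den, digits=3):
    """Round a rational frame rate for display. Rounding to two digits
    collapses 23.976 fps into 23.98, the mismatch reported in issue 61."""
    return round(num / den, digits)
```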
62. Two ensure access tasks are present in an Apple Avails + Catalog delivery
[2_ensure_access](https://i.imgur.com/pZhBWJI.png)
[Workflow](https://connect.two.ownzones.dev/metadata-service-second/workflows/d669d05c-049e-427e-99a0-91dc521066f4)
- [x] BE - deployed
- [x] QA - Only 1 ensure access is present now
63. (Related to the fix for #52) In the **TRANSCODE** event for the **inputMedia**, the **codec**, **framerate** and **essenceType** have been removed.
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/a9c76ab0-59e0-457e-b82e-d2889ac81e6c
Screen capture **TRANSCODE** action: https://i.imgur.com/Ofxvilr.png
Screen capture **THUMBNAIL** action: https://i.imgur.com/TrQeSOv.png
Note: This affects the **TRANSCODE** and **THUMBNAIL** actions. For the **TRANSCODE** event with **CONCATENATE** action it appears fine.
Comparison screen capture: https://i.imgur.com/3cROxnz.png
- [x] BE - deployed. Later edit: **essenceType** added.
- [x] QA - partial fix. The **essenceType** is still missing. New WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/07f2a6ad-9c6f-44f4-8270-f76b9cdf236a
64. CustomWorkflow - Transcode - Reverse_Transcode - transcodingType is missing.
[workflow](https://connect.two.ownzones.dev/cost-tracking/workflows/ef900ed1-bc0a-4952-bc30-dfe7117f0ced)
[Dynamo ReportingEvent](https://us-east-1.console.aws.amazon.com/dynamodbv2/home?region=us-east-1#edit-item?table=tf-tracking-service-main-table-two&itemMode=2&pk=REPORTING_EVENT%23129bdd64-efab-4674-8884-efc801b06ce7&sk=ORG%2329ac6605-7469-4fe3-99ab-7e9822b04149&route=ROUTE_ITEM_EXPLORER)
- [x] BE - deployed, to be tested
- [x] QA
65. (Related to the #39 fix) A CPL input (TAG) generates the **TRANSCODE** event with the **ImfToFlat** action showing a zero duration, both in the event section and in the workflow section.
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/ac792a30-02e0-44f2-a593-37e5aa883dc9
Screen capture: https://i.imgur.com/AqkOi9S.png
Note: The **PACKAGING** event is not affected by this.
Job example: https://connect.two.ownzones.dev/metadata-service-second/jobs/a80b4045-f477-495b-a322-192bbe072510
- [x] BE
- [x] QA
65. IAB files are tracked in S3 and DynamoDB (ingest/mediaview transcode tasks)
[Ingest reporting event example](https://us-east-1.console.aws.amazon.com/dynamodbv2/home?region=us-east-1#edit-item?table=tf-tracking-service-main-table-two&itemMode=2&pk=REPORTING_EVENT%234f994256-11af-4507-b4fe-6a587d6d3338&sk=ORG%2329ac6605-7469-4fe3-99ab-7e9822b04149&route=ROUTE_ITEM_EXPLORER)
[MediaView Transcode reporting event example](https://us-east-1.console.aws.amazon.com/dynamodbv2/home?region=us-east-1#edit-item?table=tf-tracking-service-main-table-two&itemMode=2&pk=REPORTING_EVENT%232bf6e5d5-9085-4b74-8c60-adddeb627a81&sk=ORG%2329ac6605-7469-4fe3-99ab-7e9822b04149&route=ROUTE_ITEM_EXPLORER)
https://ateliere.atlassian.net/browse/CD-11531
- [x] BE - deployed
- [x] QA
66. (Clarification) In this buildWorkflow there is only one Transcode task, but two **TRANSCODE** events appeared in the generated events with the same fileID but different lengths, the same workflow ID but different task IDs.
Neither of these transcode task IDs matches the one recorded in CloudWatch.
In the events, the following two appear: 229b9464-9dc2-4b9f-88b9-5106a76c66fd and 5c6c47fa-3e6e-41b3-a6fd-96fbad7341c1
In CloudWatch (zypline-API log) the following appears: ee074c32-6884-4575-801e-d7ceab9ec41f
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/2c3858a5-6ef0-4adc-a188-34340cb60761
From this job: https://connect.two.ownzones.dev/metadata-service-second/jobs/a80b4045-f477-495b-a322-192bbe072510
Screen capture with task #1: https://i.imgur.com/1ioKekN.png
Screen capture with task #2: https://i.imgur.com/pScsflk.png
Which one is the correct transcode ID?
Screen capture with the ID that appears for the transcode task in CloudWatch: 
Note: there are 3 outputs from that NTP, so they are aggregated.
- [x] BE - Ileana: in this case we have a transcode with 1 input and 3 outputs in a build workflow
We're creating 1 TitlePackaging event and 2 Transcode events.
We have 2 Transcode events because we have 2 types of files in outputs: 2 with HD quality, 1 with SD quality
- [x] QA - Confirming the fix and the fact that the ID is the one that can also be found in the zypline-api logs from CloudWatch
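Ileana's explanation above can be sketched as a simple grouping: one transcode task's outputs are aggregated per quality, and one **TRANSCODE** event is emitted per group. This is an illustrative sketch, not the BE code; field names are assumptions.

```python
from collections import defaultdict

def aggregate_transcode_events(outputs):
    # Group a task's outputs by quality; one TRANSCODE event per group,
    # so 2 HD outputs + 1 SD output produce 2 events, as observed above.
    groups = defaultdict(list)
    for out in outputs:
        groups[out["quality"]].append(out)
    return [{"action": "TRANSCODE", "quality": q, "outputs": outs}
            for q, outs in groups.items()]
```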
68. For BuildWF with a CPL tag input for the NTP packages (IMF to flat), no **PACKAGING** events appear for the MXF files.
Only one event appears, for the CPL (XML).
[CD-11534](https://ateliere.atlassian.net/browse/CD-11534)
NTP workflow example: https://connect.two.ownzones.dev/metadata-service-second/workflows/b1997bdd-8ec5-45e2-a601-2ff0cac34403
NTP job: https://connect.two.ownzones.dev/metadata-service-second/jobs/a80b4045-f477-495b-a322-192bbe072510
- [x] BE - as discussed, this is expected
- [x] QA - closing based on the same discussion
69. CustomWF-Delivery: Size is not properly formatted if the file size is less than 1.5 kB (for example, an ASSETMAP.xml, which is usually about 1.03 kB)
[CD-11527](https://ateliere.atlassian.net/browse/CD-11527)
[S3 Json](https://s3.console.aws.amazon.com/s3/object/tf-tracking-service-s3-reporting-events-ownzones-two?region=us-east-1&prefix=ORG%2329ac6605-7469-4fe3-99ab-7e9822b04149/2022/9/20/bd222e9a-e0db-4520-93ce-fe07036a9b86.jsonl.gz)
[DynamoDB Reporting Event - Cannot view json since reporting event is broken (returns the size error)](https://us-east-1.console.aws.amazon.com/dynamodbv2/home?region=us-east-1#edit-item?table=tf-tracking-service-main-table-two&itemMode=2&pk=REPORTING_EVENT%230f0f5382-bb73-4195-8edf-c55fd90b9b61&sk=ORG%2329ac6605-7469-4fe3-99ab-7e9822b04149&route=ROUTE_ITEM_EXPLORER)
[File example](https://connect.two.ownzones.dev/cost-tracking/files/80a7d58e-3a3c-44db-9311-8143b773212d)
[Workflow example](https://connect.two.ownzones.dev/cost-tracking/workflows/29e8cd6c-c579-442d-a297-d2d4d53ef2d9)

- [x] BE - not an issue
- [ ] QA
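For reference, a formatter of the kind involved in issue 69 might look like the sketch below (an illustration under stated assumptions, not Connect's code; decimal kB = 1000 B is assumed). The point is that values around 1 kB, such as a 1.03 kB ASSETMAP.xml, should format cleanly rather than trigger a size error.

```python
def format_size(size_bytes):
    # Human-readable decimal units; bytes shown as an integer, larger
    # units with two decimals so a 1030-byte file renders as "1.03 kB".
    units = ["B", "kB", "MB", "GB", "TB"]
    size = float(size_bytes)
    for unit in units:
        if size < 1000 or unit == units[-1]:
            return f"{int(size)} B" if unit == "B" else f"{size:.2f} {unit}"
        size /= 1000
```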
70. Missing **codec**, **framerate** and **essenceType** values from the **inputMedia** quantifier.
WF example #1: https://connect.two.ownzones.dev/metadata-service-second/workflows/162002d1-5461-4d1c-8528-ef1d4e4ed7b8
WF example #2: https://connect.two.ownzones.dev/metadata-service-second/workflows/90b7c136-e93e-4cdd-a1b2-54da88d0a224
- [x] BE
- [x] QA - reported https://ateliere.atlassian.net/browse/CD-11537
71. Cancelled workflows (delivery, ingest, custom, etc.) do not have the "triggeredByUsername" field present:
[CD-11539](https://ateliere.atlassian.net/browse/CD-11539)
Ingest WF:
- [DB](https://us-east-1.console.aws.amazon.com/dynamodbv2/home?region=us-east-1#edit-item?table=tf-tracking-service-main-table-two&itemMode=2&pk=REPORTING_EVENT%239414fe0f-663d-47f3-9495-4ac14e369a0b&sk=ORG%2329ac6605-7469-4fe3-99ab-7e9822b04149&route=ROUTE_ITEM_EXPLORER)
- [Connect](https://connect.two.ownzones.dev/cost-tracking/workflows/9414fe0f-663d-47f3-9495-4ac14e369a0b)
Delivery WF:
- [DB](https://us-east-1.console.aws.amazon.com/dynamodbv2/home?region=us-east-1#edit-item?table=tf-tracking-service-main-table-two&itemMode=2&pk=REPORTING_EVENT%235ee339fc-e62e-4af4-b018-abab8c0f1a90&sk=ORG%2329ac6605-7469-4fe3-99ab-7e9822b04149&route=ROUTE_ITEM_EXPLORER)
- [Connect](https://connect.two.ownzones.dev/cost-tracking/workflows/8fc22804-e170-4afe-9aeb-1c4bc6d996e7)
- [x] BE - deployed on two
- [x] QA
72. (not an issue) Is it possible to have any kind of tasks inside the **PACKAGING** event? Currently, **null** appears under its **tasks** section.
Screen capture: https://i.imgur.com/6K1U8Dt.png
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/621903c1-4581-416f-876c-abcde85a9040
- [ ] BE
- [x] QA - not an issue
73. It appears that cancelled buildWorkflows do not generate any events in S3 or DynamoDB for some jobs.
WF example #1: https://connect.two.ownzones.dev/metadata-service-second/workflows/bc3855c2-8e9a-47ff-91f4-29ecd1b8170f
WF example #2: https://connect.two.ownzones.dev/metadata-service-second/workflows/4fe824b5-fef3-4f17-94b7-c8286fbe2514
- [x] Ileana: Both workflow examples above were stopped before the "WaitForIngest" task was completed, so the sources for transcode were not added to the WF sources; the case is similar to issue 46. Also, the Copy (DELIVERY) events were ignored because it delivers to the same bucket, and we're not tracking that.
WF example #3: https://connect.two.ownzones.dev/metadata-service-second/workflows/19b67661-4b63-44f3-adaf-11cc6c1033b3
- [x] Ileana: Not sure why, but for this example I found 8 reporting events that looked the same (4 packaging and 4 transcode); not sure how or why. A job to debug with would be helpful.
Teddy: This is the job: https://connect.two.ownzones.dev/metadata-service-second/jobs/cba196e8-c6f7-4d2d-a7b6-8f22da780df5
For comparison, this NTP workflow appears: https://connect.two.ownzones.dev/metadata-service-second/workflows/1e2a2fb9-0d62-40af-9f50-87790d5adcb7
- [x] - Ileana: the only difference between the NTP WF that works ^ and the first 2 WF examples is the timing: the good one was cancelled after the WaitForIngest task before Transcode was **completed**.
- [x] BE - not an issue
- [x] QA - reported https://ateliere.atlassian.net/browse/CD-11565
74. The **buildWorkflows** generated when creating an **IMF** manual package from a title have no **organizationName**, **organizationSlug**, or **titleName** parameters in their events.
**WF example**: https://connect.two.ownzones.dev/metadata-service-second/workflows/74e4c0b1-f54f-4dd5-89fd-fc9377dbb09b
**Comparison screenshot**: https://i.imgur.com/FTJFDZC.png
Note: The same issue occurs when creating a Flat package but this can be confirmed only in DynamoDB due to issue #76.
- [x] BE
- [x] QA - reported https://ateliere.atlassian.net/browse/CD-11572
75. Selecting **Convert to flat** in an **IMF** package will trigger the transcoding **buildWorkflow**, but no events are generated for it in either S3 or DynamoDB.
**Note**: The same **IMF_TO_FLAT** type of action works fine in jobs.
**WF example #1**: https://connect.two.ownzones.dev/metadata-service-second/workflows/a07e04da-60a6-4ef2-a50b-d5221dbf556d
**WF example #2**: https://connect.two.ownzones.dev/metadata-service-second/workflows/8e5a6a6f-9edd-4a01-96e8-91cf0eadef12
**IMF package example**: https://connect.two.ownzones.dev/metadata-service-second/packages/2994355e-0768-4379-adda-b696099a5661
- [x] BE - deployed
- [x] QA - reported https://ateliere.atlassian.net/browse/CD-11574
76. It appears that whenever a workflow finishes too quickly, the generated S3 archive containing the events gets corrupted.
WF example #1: https://connect.two.ownzones.dev/metadata-service-second/workflows/4e899bf3-c12c-4f60-b14a-cc9324cbb471
S3 event archive for #1: https://s3.console.aws.amazon.com/s3/object/tf-tracking-service-s3-reporting-events-ownzones-two?region=us-east-1&prefix=ORG%234f6c1c0e-863d-4fe0-9eb0-e71216dc8293/2022/9/27/da095ae9-0cc8-4732-b3c1-0e3b39344c4f.jsonl.gz
WF example #2: https://connect.two.ownzones.dev/metadata-service-second/workflows/78132e86-710f-4be0-9492-0f51a7707ce9
S3 event archive for #2: https://s3.console.aws.amazon.com/s3/object/tf-tracking-service-s3-reporting-events-ownzones-two?region=us-east-1&prefix=ORG%234f6c1c0e-863d-4fe0-9eb0-e71216dc8293/2022/9/27/01c12b5e-e281-4d6d-9799-0335d385c990.jsonl.gz
- [x] BE - to be deployed
- [x] QA - reported https://ateliere.atlassian.net/browse/CD-11575
77. Following the deployment of CD-11589, the **uom** and **uomUnits** no longer appear above the **sources** but under them, at the end of the S3 JSON.
In DynamoDB they appear in the same location as before.
While not an issue per se, is this side effect acceptable?
They are easy to miss now in the S3 JSON events.
S3 json screencapture: https://i.imgur.com/dzR7zRr.png
DynamoDB screencapture: https://i.imgur.com/tWuSSai.png
WF example: https://connect.two.ownzones.dev/metadata-service-second/workflows/0ca24f75-8db9-471d-9594-3517c034ecb2
- [x] BE - not an issue
- [x] QA