💥 Payload limit options and validation #2175

jmaeagle99 wants to merge 1 commit into temporalio:master
Conversation
```go
// Options for when payload sizes exceed limits.
//
// Exposed as: [go.temporal.io/sdk/client.PayloadLimitOptions]
PayloadLimitOptions struct {
```
This is supposed to be similar to https://github.com/temporalio/sdk-python/pull/1288/changes#diff-dca556db4f4a9a3950b33cf90d75886e8c3216b0955fa950e71cf60aadee3946R1211, but the payload visitor in Go doesn't allow access to Memo fields as a collection. The visitor (https://github.com/temporalio/api-go/blob/master/proxy/interceptor.go#L652) pulls the fields out of the Memo and passes that `map[string]*common.Payload` back to the visitor, which is handled at https://github.com/temporalio/api-go/blob/master/proxy/interceptor.go#L266. The visitor therefore sees each payload individually rather than as a collection, but the collection is what is needed to validate the memo size.
Thinking of some options to allow this alternative behavior for memos:

- Add a new bool field to `VisitPayloadsOptions` that means "I want all payloads of a memo at once", perhaps called `VisitMemoPayloadsAsSequence bool`. This would allow current uses of the `VisitPayloadsOptions` struct to work as-is if they use named fields, but would break those using positional fields (less likely).
- Replicate the visitor behavior in the current repo, but just for memo fields. Not a great option from a maintenance perspective.
- Don't check memo fields against their limits at all. This would be a behavioral difference across SDKs.
Looking for other suggestions or alternatives.
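To make the individual-vs-collection distinction concrete, here is a self-contained sketch. The `Payload` type and `visitMemo` helper are simplified stand-ins for `*common.Payload` and the api-go visitor, not the real API; they only illustrate why the visitor needs the whole memo in one call to validate its combined size.

```go
package main

import "fmt"

// Payload is a simplified stand-in for *common.Payload.
type Payload struct{ Data []byte }

// visitMemo illustrates the two modes: per-payload visits (current
// api-go behavior) vs. a single visit over the whole memo (the proposed
// VisitMemoPayloadsAsSequence behavior), which lets the visitor check
// the combined size of the memo.
func visitMemo(memo map[string]*Payload, asSequence bool, visit func([]*Payload)) {
	if asSequence {
		all := make([]*Payload, 0, len(memo))
		for _, p := range memo {
			all = append(all, p)
		}
		visit(all) // one call: combined size can be validated
		return
	}
	for _, p := range memo {
		visit([]*Payload{p}) // current behavior: one call per payload
	}
}

func main() {
	memo := map[string]*Payload{
		"a": {Data: make([]byte, 100)},
		"b": {Data: make([]byte, 200)},
	}
	total, calls := 0, 0
	visitMemo(memo, true, func(ps []*Payload) {
		calls++
		for _, p := range ps {
			total += len(p.Data)
		}
	})
	fmt.Println(total, calls) // whole memo size observed in a single visit
}
```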
What was changed
💥 Behavioral change details
Within workers, if a payload exceeds the server limits, the worker will eagerly fail the current task instead of uploading the oversized payload. This allows the task to be retried rather than the server failing the workflow outright.
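The decision described above can be sketched as follows. The limit constant and the `checkPayloadSize` helper are illustrative only; the real limits are enforced by the Temporal server, not hard-coded in the SDK.

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative value only; actual limits come from the Temporal server.
const payloadSizeLimit = 2 * 1024 * 1024

// checkPayloadSize mirrors the described behavior: rather than uploading
// an oversized payload and letting the server fail the whole workflow,
// return an error so the current task fails eagerly and can be retried.
func checkPayloadSize(size int) error {
	if size > payloadSizeLimit {
		return errors.New("payload exceeds size limit; failing task eagerly so it can be retried")
	}
	return nil
}

func main() {
	fmt.Println(checkPayloadSize(1024) == nil)        // small payload is uploaded as normal
	fmt.Println(checkPayloadSize(3*1024*1024) != nil) // oversized payload fails the task eagerly
}
```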
Customers who use gRPC proxies that alter payloads before they are passed to the server (for example, encryption in the proxy, or offloading to external storage within the proxy) should disable this new behavior on the worker using the new `DisablePayloadErrorLimit` option.

Examples
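A minimal sketch of opting out, assuming the `DisablePayloadErrorLimit` flag named in this PR lives on `worker.Options` (its exact placement is an assumption from the PR description; the client and worker setup follow the standard sdk-go pattern):

```go
package main

import (
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func main() {
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("unable to create Temporal client:", err)
	}
	defer c.Close()

	// Opt out of eager task failure for oversized payloads, e.g. when a
	// gRPC proxy shrinks payloads (encryption, external-storage offload)
	// before they reach the server.
	// NOTE: field placement on worker.Options is an assumption.
	w := worker.New(c, "example-task-queue", worker.Options{
		DisablePayloadErrorLimit: true,
	})
	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalln("worker exited:", err)
	}
}
```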
Log output when an activity attempts to return a result that exceeds the warning limit:
The above example will upload the payload to the server as normal.
Log output when a workflow attempts to return a result or provide an activity some input that exceeds the warning limit:
The above example will upload the payload to the server as normal.
Log output when a client attempts to provide input to a workflow that exceeds the warning limit:
The above example will upload the payload to the server as normal.
Why?
Users need to know when payload sizes are approaching or have exceeded size limits. This will help prevent workflow outages and inform users to adjust their workflows to make use of alternate storage methods or to break down their payloads more granularly.
Checklist
Closes #2165: SDK should fail workflow task if payload size is known to be too large
Closes #2167: Warn if the SDK tried to send a payload above a specific size
How was this tested: Unit tests
Any docs updates needed? Yes