From 84fd78ba51b3362b48ea983c263dd368b88f4287 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Martin=20=C5=A0tekl?=
diff --git a/clients/client-ecs/src/commands/CreateClusterCommand.ts b/clients/client-ecs/src/commands/CreateClusterCommand.ts
index e7b572f1a47b8..02f023ac27872 100644
--- a/clients/client-ecs/src/commands/CreateClusterCommand.ts
+++ b/clients/client-ecs/src/commands/CreateClusterCommand.ts
@@ -178,6 +178,16 @@ export interface CreateClusterCommandOutput extends CreateClusterResponse, __Met
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/CreateServiceCommand.ts b/clients/client-ecs/src/commands/CreateServiceCommand.ts
index 34b727d2c0ddc..c5d204c34cc2e 100644
--- a/clients/client-ecs/src/commands/CreateServiceCommand.ts
+++ b/clients/client-ecs/src/commands/CreateServiceCommand.ts
@@ -108,8 +108,8 @@ export interface CreateServiceCommandOutput extends CreateServiceResponse, __Met
 * When creating a service that uses the EXTERNAL deployment controller, you
 * can specify only parameters that aren't controlled at the task set level. The only
 * required parameter is the service name. You control your services using the
 * CreateTaskSet operation. For more information, see Amazon ECS deployment types
 * in the Amazon Elastic Container Service Developer Guide.
- * When the service scheduler launches new tasks, it determines task placement. For information
- * about task placement and task placement strategies, see Amazon ECS
+ * When the service scheduler launches new tasks, it determines task placement. For
+ * information about task placement and task placement strategies, see Amazon ECS
 * task placement in the Amazon Elastic Container Service Developer Guide
 * Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon
 * Elastic Inference (EI), and will help current customers migrate their workloads to
 * options that offer better price and performance. After April 15, 2023, new customers
 * will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker,
 * Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once
 * during the past 30-day period are considered current customers and will be able to
 * continue using the service.
 * On March 21, 2024, a change was made to resolve the task definition revision before
 * authorization. When a task definition revision is not specified, authorization will
 * occur using the latest revision of a task definition.
- * For information about the maximum number of task sets and otther quotas, see Amazon ECS
+ * For information about the maximum number of task sets and other quotas, see Amazon ECS
 * service quotas in the Amazon Elastic Container Service Developer Guide.
 *
 * @throws {@link ClientException} (client fault)
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
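The task placement discussion above is easier to see in SDK terms. A minimal sketch, assuming a cluster named my-cluster and a registered web-app task definition (both placeholder names), that spreads service tasks across Availability Zones:

```ts
import { ECSClient, CreateServiceCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});

// Create a service whose scheduler spreads tasks across Availability Zones.
// Cluster, service, and task definition names are illustrative placeholders.
await client.send(
  new CreateServiceCommand({
    cluster: "my-cluster",
    serviceName: "web",
    taskDefinition: "web-app:1",
    desiredCount: 2,
    placementStrategy: [{ type: "spread", field: "attribute:ecs.availability-zone" }],
  }),
);
```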
diff --git a/clients/client-ecs/src/commands/DeleteAttributesCommand.ts b/clients/client-ecs/src/commands/DeleteAttributesCommand.ts
index cc37319dccb7e..a7154e1f5b075 100644
--- a/clients/client-ecs/src/commands/DeleteAttributesCommand.ts
+++ b/clients/client-ecs/src/commands/DeleteAttributesCommand.ts
@@ -76,8 +76,8 @@ export interface DeleteAttributesCommandOutput extends DeleteAttributesResponse,
*
* @throws {@link TargetNotFoundException} (client fault)
* The specified target wasn't found. You can view your available container instances
- * with ListContainerInstances. Amazon ECS container instances are
- * cluster-specific and Region-specific.
+ * with ListContainerInstances. Amazon ECS container instances are cluster-specific and
+ * Region-specific.
 *
 * @throws {@link ECSServiceException}
 * Base exception class for all service exceptions from ECS service.
 *
 * @throws {@link ClientException} (client fault)
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/DeleteClusterCommand.ts b/clients/client-ecs/src/commands/DeleteClusterCommand.ts
index 78f9d06f0de3a..b05fbe5733dce 100644
--- a/clients/client-ecs/src/commands/DeleteClusterCommand.ts
+++ b/clients/client-ecs/src/commands/DeleteClusterCommand.ts
@@ -131,6 +131,16 @@ export interface DeleteClusterCommandOutput extends DeleteClusterResponse, __Met
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterContainsContainerInstancesException} (client fault)
 * You can't delete a cluster that has registered container instances. First, deregister
diff --git a/clients/client-ecs/src/commands/DeleteServiceCommand.ts b/clients/client-ecs/src/commands/DeleteServiceCommand.ts
index eb5754ed872c0..c6c5091293324 100644
--- a/clients/client-ecs/src/commands/DeleteServiceCommand.ts
+++ b/clients/client-ecs/src/commands/DeleteServiceCommand.ts
@@ -348,6 +348,16 @@ export interface DeleteServiceCommandOutput extends DeleteServiceResponse, __Met
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/DeleteTaskSetCommand.ts b/clients/client-ecs/src/commands/DeleteTaskSetCommand.ts
index 7888c3b56f250..1c1ce33942141 100644
--- a/clients/client-ecs/src/commands/DeleteTaskSetCommand.ts
+++ b/clients/client-ecs/src/commands/DeleteTaskSetCommand.ts
@@ -129,6 +129,16 @@ export interface DeleteTaskSetCommandOutput extends DeleteTaskSetResponse, __Met
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/DescribeCapacityProvidersCommand.ts b/clients/client-ecs/src/commands/DescribeCapacityProvidersCommand.ts
index 982c4a245fef8..4ab33faa407cd 100644
--- a/clients/client-ecs/src/commands/DescribeCapacityProvidersCommand.ts
+++ b/clients/client-ecs/src/commands/DescribeCapacityProvidersCommand.ts
@@ -97,6 +97,16 @@ export interface DescribeCapacityProvidersCommandOutput extends DescribeCapacity
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/DescribeClustersCommand.ts b/clients/client-ecs/src/commands/DescribeClustersCommand.ts
index 7f08e74087edd..170cae2899834 100644
--- a/clients/client-ecs/src/commands/DescribeClustersCommand.ts
+++ b/clients/client-ecs/src/commands/DescribeClustersCommand.ts
@@ -140,6 +140,16 @@ export interface DescribeClustersCommandOutput extends DescribeClustersResponse,
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/DescribeContainerInstancesCommand.ts b/clients/client-ecs/src/commands/DescribeContainerInstancesCommand.ts
index 35029fffe8ed6..8223837d8660b 100644
--- a/clients/client-ecs/src/commands/DescribeContainerInstancesCommand.ts
+++ b/clients/client-ecs/src/commands/DescribeContainerInstancesCommand.ts
@@ -151,6 +151,16 @@ export interface DescribeContainerInstancesCommandOutput extends DescribeContain
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/DescribeTaskSetsCommand.ts b/clients/client-ecs/src/commands/DescribeTaskSetsCommand.ts
index 2087ba458bd36..97885c31ede64 100644
--- a/clients/client-ecs/src/commands/DescribeTaskSetsCommand.ts
+++ b/clients/client-ecs/src/commands/DescribeTaskSetsCommand.ts
@@ -144,6 +144,16 @@ export interface DescribeTaskSetsCommandOutput extends DescribeTaskSetsResponse,
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
 *
 * @throws {@link ServerException} (server fault)
 * These errors are usually caused by a server issue.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/ListClustersCommand.ts b/clients/client-ecs/src/commands/ListClustersCommand.ts
index d10e4d15c706b..4e09360d53d33 100644
--- a/clients/client-ecs/src/commands/ListClustersCommand.ts
+++ b/clients/client-ecs/src/commands/ListClustersCommand.ts
@@ -60,6 +60,16 @@ export interface ListClustersCommandOutput extends ListClustersResponse, __Metad
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/ListContainerInstancesCommand.ts b/clients/client-ecs/src/commands/ListContainerInstancesCommand.ts
index fdd113d2a2fc2..837a7433d1eaa 100644
--- a/clients/client-ecs/src/commands/ListContainerInstancesCommand.ts
+++ b/clients/client-ecs/src/commands/ListContainerInstancesCommand.ts
@@ -65,6 +65,16 @@ export interface ListContainerInstancesCommandOutput extends ListContainerInstan
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/ListServicesCommand.ts b/clients/client-ecs/src/commands/ListServicesCommand.ts
index 1b06362de3f27..994d478299918 100644
--- a/clients/client-ecs/src/commands/ListServicesCommand.ts
+++ b/clients/client-ecs/src/commands/ListServicesCommand.ts
@@ -64,6 +64,16 @@ export interface ListServicesCommandOutput extends ListServicesResponse, __Metad
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/ListTaskDefinitionsCommand.ts b/clients/client-ecs/src/commands/ListTaskDefinitionsCommand.ts
index 0b56a4f0174a5..b4ebb02b5357f 100644
--- a/clients/client-ecs/src/commands/ListTaskDefinitionsCommand.ts
+++ b/clients/client-ecs/src/commands/ListTaskDefinitionsCommand.ts
@@ -65,6 +65,16 @@ export interface ListTaskDefinitionsCommandOutput extends ListTaskDefinitionsRes
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/ListTasksCommand.ts b/clients/client-ecs/src/commands/ListTasksCommand.ts
index 63cd9b3731ad8..164c249343014 100644
--- a/clients/client-ecs/src/commands/ListTasksCommand.ts
+++ b/clients/client-ecs/src/commands/ListTasksCommand.ts
@@ -70,6 +70,16 @@ export interface ListTasksCommandOutput extends ListTasksResponse, __MetadataBea
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/PutAccountSettingDefaultCommand.ts b/clients/client-ecs/src/commands/PutAccountSettingDefaultCommand.ts
index 250918126c6ef..081605d863b77 100644
--- a/clients/client-ecs/src/commands/PutAccountSettingDefaultCommand.ts
+++ b/clients/client-ecs/src/commands/PutAccountSettingDefaultCommand.ts
@@ -63,6 +63,16 @@ export interface PutAccountSettingDefaultCommandOutput extends PutAccountSetting
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/PutAttributesCommand.ts b/clients/client-ecs/src/commands/PutAttributesCommand.ts
index 185532518e613..97e7086e9d857 100644
--- a/clients/client-ecs/src/commands/PutAttributesCommand.ts
+++ b/clients/client-ecs/src/commands/PutAttributesCommand.ts
@@ -84,8 +84,8 @@ export interface PutAttributesCommandOutput extends PutAttributesResponse, __Met
*
* @throws {@link TargetNotFoundException} (client fault)
* The specified target wasn't found. You can view your available container instances
- * with ListContainerInstances. Amazon ECS container instances are
- * cluster-specific and Region-specific.
+ * with ListContainerInstances. Amazon ECS container instances are cluster-specific and
+ * Region-specific.
 *
 * @throws {@link ECSServiceException}
 * Base exception class for all service exceptions from ECS service.
 *
 * @throws {@link ClientException} (client fault)
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
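As a usage sketch for the attributes API documented above (the cluster name and target ID are placeholders):

```ts
import { ECSClient, PutAttributesCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});

// Apply a custom attribute to a container instance. TargetNotFoundException
// is thrown when the target ID doesn't resolve in the given cluster.
await client.send(
  new PutAttributesCommand({
    cluster: "my-cluster",
    attributes: [
      {
        name: "stack",
        value: "production",
        targetType: "container-instance",
        targetId: "0123456789abcdef0123456789abcdef", // placeholder instance ID
      },
    ],
  }),
);
```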
diff --git a/clients/client-ecs/src/commands/RegisterTaskDefinitionCommand.ts b/clients/client-ecs/src/commands/RegisterTaskDefinitionCommand.ts
index ec0950a4ca30d..74ad0dd4d959c 100644
--- a/clients/client-ecs/src/commands/RegisterTaskDefinitionCommand.ts
+++ b/clients/client-ecs/src/commands/RegisterTaskDefinitionCommand.ts
@@ -39,9 +39,7 @@ export interface RegisterTaskDefinitionCommandOutput extends RegisterTaskDefinit
 * policy that's associated with the role. For more information, see IAM
 * Roles for Tasks in the Amazon Elastic Container Service Developer Guide.
 * You can specify a Docker networking mode for the containers in your task definition
- * with the networkMode parameter. The available network modes correspond to
- * those described in Network
- * settings in the Docker run reference. If you specify the awsvpc
+ * with the networkMode parameter. If you specify the awsvpc
 * network mode, the task is allocated an elastic network interface, and you must specify a
 * NetworkConfiguration when you create a service or run a task with
 * the task definition. For more information, see Task Networking
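To make the awsvpc requirement concrete, here is a minimal sketch of running a task with a NetworkConfiguration; the subnet and security group IDs mirror the placeholder values used elsewhere in these docs:

```ts
import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});

// A task definition using the awsvpc network mode must be given a
// NetworkConfiguration when it is run. All IDs below are placeholders.
await client.send(
  new RunTaskCommand({
    cluster: "my-cluster",
    taskDefinition: "web-app:1",
    launchType: "FARGATE",
    networkConfiguration: {
      awsvpcConfiguration: {
        subnets: ["subnet-12344321"],
        securityGroups: ["sg-12344321"],
        assignPublicIp: "ENABLED",
      },
    },
  }),
);
```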
@@ -81,6 +79,13 @@ export interface RegisterTaskDefinitionCommandOutput extends RegisterTaskDefinit
* },
* ],
* essential: true || false,
+ * restartPolicy: { // ContainerRestartPolicy
+ * enabled: true || false, // required
+ * ignoredExitCodes: [ // IntegerList
+ * Number("int"),
+ * ],
+ * restartAttemptPeriod: Number("int"),
+ * },
* entryPoint: [
* "STRING_VALUE",
* ],
@@ -333,6 +338,13 @@ export interface RegisterTaskDefinitionCommandOutput extends RegisterTaskDefinit
* // },
* // ],
* // essential: true || false,
+ * // restartPolicy: { // ContainerRestartPolicy
+ * // enabled: true || false, // required
+ * // ignoredExitCodes: [ // IntegerList
+ * // Number("int"),
+ * // ],
+ * // restartAttemptPeriod: Number("int"),
+ * // },
* // entryPoint: [
* // "STRING_VALUE",
* // ],
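The new restartPolicy shape shown in the hunks above can be exercised like this; a minimal sketch, with the family name, image, memory, and exit codes as illustrative placeholders:

```ts
import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});

// Register a task definition whose container restarts in place on failure,
// using the restartPolicy fields added in this diff.
const { taskDefinition } = await client.send(
  new RegisterTaskDefinitionCommand({
    family: "web-app", // placeholder family name
    containerDefinitions: [
      {
        name: "web",
        image: "public.ecr.aws/nginx/nginx:latest",
        memory: 512,
        essential: true,
        restartPolicy: {
          enabled: true, // required when a restart policy is set
          ignoredExitCodes: [0], // don't restart on clean exits
          restartAttemptPeriod: 60, // seconds of uptime required between restarts
        },
      },
    ],
  }),
);
console.log(taskDefinition?.taskDefinitionArn);
```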
@@ -590,6 +602,16 @@ export interface RegisterTaskDefinitionCommandOutput extends RegisterTaskDefinit
 *
 * @throws {@link ClientException} (client fault)
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
 * request.
 *
 * @throws {@link ServerException} (server fault)
 * These errors are usually caused by a server issue.
diff --git a/clients/client-ecs/src/commands/RunTaskCommand.ts b/clients/client-ecs/src/commands/RunTaskCommand.ts
index 7b13358bbd45c..e3d8270ccae8f 100644
--- a/clients/client-ecs/src/commands/RunTaskCommand.ts
+++ b/clients/client-ecs/src/commands/RunTaskCommand.ts
@@ -386,6 +386,16 @@ export interface RunTaskCommandOutput extends RunTaskResponse, __MetadataBearer
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
 * request.
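A hedged sketch of reacting to the client faults documented in this hunk; the exception classes are exported by @aws-sdk/client-ecs, and the cluster and task definition names are placeholders:

```ts
import {
  ClientException,
  ClusterNotFoundException,
  ECSClient,
  RunTaskCommand,
} from "@aws-sdk/client-ecs";

const client = new ECSClient({});

try {
  await client.send(
    new RunTaskCommand({ cluster: "my-cluster", taskDefinition: "web-app:1", count: 1 }),
  );
} catch (err) {
  if (err instanceof ClusterNotFoundException) {
    // Wrong cluster name, or the cluster lives in a different Region.
  } else if (err instanceof ClientException) {
    // Covers the newly documented cause: the PROVISIONING task quota was
    // reached while capacity providers with managed scaling were in use.
  } else {
    throw err;
  }
}
```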
diff --git a/clients/client-ecs/src/commands/StopTaskCommand.ts b/clients/client-ecs/src/commands/StopTaskCommand.ts
--- a/clients/client-ecs/src/commands/StopTaskCommand.ts
+++ b/clients/client-ecs/src/commands/StopTaskCommand.ts
 * SIGKILL value is sent and the containers are forcibly stopped. If the
 * container handles the SIGTERM value gracefully and exits within 30 seconds
 * from receiving it, no SIGKILL value is sent.
- * For Windows containers, POSIX signals do not work and runtime stops the container by sending
- * a CTRL_SHUTDOWN_EVENT. For more information, see Unable to react to graceful shutdown
+ * For Windows containers, POSIX signals do not work and runtime stops the container by
+ * sending a CTRL_SHUTDOWN_EVENT. For more information, see Unable to react to graceful shutdown
 * of (Windows) container #25982 on GitHub.
 * The default 30-second timeout can be configured on the Amazon ECS container agent with
@@ -232,6 +232,16 @@ export interface StopTaskCommandOutput extends StopTaskResponse, __MetadataBeare
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
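The SIGTERM-then-SIGKILL sequence above is triggered by a plain StopTask call; a sketch with placeholder cluster and task values:

```ts
import { ECSClient, StopTaskCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});

// ECS sends SIGTERM (or CTRL_SHUTDOWN_EVENT on Windows), waits out the stop
// timeout, then sends SIGKILL. The task ID below is a placeholder.
await client.send(
  new StopTaskCommand({
    cluster: "my-cluster",
    task: "0123456789abcdef0123456789abcdef",
    reason: "manual stop before redeploy",
  }),
);
```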
diff --git a/clients/client-ecs/src/commands/SubmitContainerStateChangeCommand.ts b/clients/client-ecs/src/commands/SubmitContainerStateChangeCommand.ts
index 84f76366bdd88..965c08044c56b 100644
--- a/clients/client-ecs/src/commands/SubmitContainerStateChangeCommand.ts
+++ b/clients/client-ecs/src/commands/SubmitContainerStateChangeCommand.ts
@@ -78,6 +78,16 @@ export interface SubmitContainerStateChangeCommandOutput extends SubmitContainer
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ServerException} (server fault)
 * These errors are usually caused by a server issue.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/TagResourceCommand.ts b/clients/client-ecs/src/commands/TagResourceCommand.ts
index 5aaf38df237e5..c478cd833d67d 100644
--- a/clients/client-ecs/src/commands/TagResourceCommand.ts
+++ b/clients/client-ecs/src/commands/TagResourceCommand.ts
@@ -63,6 +63,16 @@ export interface TagResourceCommandOutput extends TagResourceResponse, __Metadat
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
 *
 * @throws {@link InvalidParameterException} (client fault)
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/UpdateClusterCommand.ts b/clients/client-ecs/src/commands/UpdateClusterCommand.ts
index 4e107e51c41f0..3f164d0118b2d 100644
--- a/clients/client-ecs/src/commands/UpdateClusterCommand.ts
+++ b/clients/client-ecs/src/commands/UpdateClusterCommand.ts
@@ -152,6 +152,16 @@ export interface UpdateClusterCommandOutput extends UpdateClusterResponse, __Met
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
 *
 * @throws {@link ServerException} (server fault)
 * These errors are usually caused by a server issue.
diff --git a/clients/client-ecs/src/commands/UpdateContainerInstancesStateCommand.ts b/clients/client-ecs/src/commands/UpdateContainerInstancesStateCommand.ts
--- a/clients/client-ecs/src/commands/UpdateContainerInstancesStateCommand.ts
+++ b/clients/client-ecs/src/commands/UpdateContainerInstancesStateCommand.ts
 * Any PENDING or RUNNING tasks that do not belong to a service
 * aren't affected. You must wait for them to finish or stop them manually.
- * A container instance has completed draining when it has no more RUNNING
- * tasks. You can verify this using ListTasks.
+ * A container instance has completed draining when it has no more RUNNING tasks.
+ * You can verify this using ListTasks.
 * When a container instance has been drained, you can set a container instance to
 * ACTIVE status and once it has reached that status the Amazon ECS scheduler
 * can begin scheduling tasks on the instance again.
 *
 * @throws {@link ClientException} (client fault)
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
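Draining, and the ListTasks verification the text mentions, look roughly like this in SDK terms (the instance ID is a placeholder):

```ts
import {
  ECSClient,
  ListTasksCommand,
  UpdateContainerInstancesStateCommand,
} from "@aws-sdk/client-ecs";

const client = new ECSClient({});

// Put a container instance into DRAINING so the scheduler replaces its
// service tasks elsewhere.
await client.send(
  new UpdateContainerInstancesStateCommand({
    cluster: "my-cluster",
    containerInstances: ["0123456789abcdef0123456789abcdef"], // placeholder ID
    status: "DRAINING",
  }),
);

// Draining is complete once the instance reports no RUNNING tasks.
const { taskArns } = await client.send(
  new ListTasksCommand({
    cluster: "my-cluster",
    containerInstance: "0123456789abcdef0123456789abcdef",
    desiredStatus: "RUNNING",
  }),
);
console.log(`Remaining tasks: ${taskArns?.length ?? 0}`);
```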
diff --git a/clients/client-ecs/src/commands/UpdateServiceCommand.ts b/clients/client-ecs/src/commands/UpdateServiceCommand.ts
index 28eaba8bdd3f3..9416699a2241e 100644
--- a/clients/client-ecs/src/commands/UpdateServiceCommand.ts
+++ b/clients/client-ecs/src/commands/UpdateServiceCommand.ts
@@ -602,6 +602,16 @@ export interface UpdateServiceCommandOutput extends UpdateServiceResponse, __Met
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
diff --git a/clients/client-ecs/src/commands/UpdateServicePrimaryTaskSetCommand.ts b/clients/client-ecs/src/commands/UpdateServicePrimaryTaskSetCommand.ts
index 08e2de8a37fcd..6794cba10a782 100644
--- a/clients/client-ecs/src/commands/UpdateServicePrimaryTaskSetCommand.ts
+++ b/clients/client-ecs/src/commands/UpdateServicePrimaryTaskSetCommand.ts
@@ -133,6 +133,16 @@ export interface UpdateServicePrimaryTaskSetCommandOutput
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
diff --git a/clients/client-ecs/src/commands/UpdateTaskProtectionCommand.ts b/clients/client-ecs/src/commands/UpdateTaskProtectionCommand.ts
index 2236b11f9df8d..8c8af456b1ce3 100644
--- a/clients/client-ecs/src/commands/UpdateTaskProtectionCommand.ts
+++ b/clients/client-ecs/src/commands/UpdateTaskProtectionCommand.ts
@@ -103,6 +103,16 @@ export interface UpdateTaskProtectionCommandOutput extends UpdateTaskProtectionR
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
diff --git a/clients/client-ecs/src/commands/UpdateTaskSetCommand.ts b/clients/client-ecs/src/commands/UpdateTaskSetCommand.ts
index a312a183d6ff6..5dac762f1c129 100644
--- a/clients/client-ecs/src/commands/UpdateTaskSetCommand.ts
+++ b/clients/client-ecs/src/commands/UpdateTaskSetCommand.ts
@@ -6,7 +6,8 @@ import { MetadataBearer as __MetadataBearer } from "@smithy/types";
 import { ECSClientResolvedConfig, ServiceInputTypes, ServiceOutputTypes } from "../ECSClient";
 import { commonParams } from "../endpoint/EndpointParameters";
-import { UpdateTaskSetRequest, UpdateTaskSetResponse } from "../models/models_0";
+import { UpdateTaskSetRequest } from "../models/models_0";
+import { UpdateTaskSetResponse } from "../models/models_1";
 import { de_UpdateTaskSetCommand, se_UpdateTaskSetCommand } from "../protocols/Aws_json1_1";
 /**
@@ -133,6 +134,16 @@ export interface UpdateTaskSetCommandOutput extends UpdateTaskSetResponse, __Met
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ *   - The RunTask could not be processed because you use managed
+ *     scaling and there is a capacity error because the quota of tasks in the
+ *     PROVISIONING per cluster has been reached. For information
+ *     about the service quotas, see Amazon ECS
+ *     service quotas.
 *
 * @throws {@link ClusterNotFoundException} (client fault)
 * The specified cluster wasn't found. You can view your available clusters with
 * ListClusters. Amazon ECS clusters are Region specific.
diff --git a/clients/client-ecs/src/models/index.ts b/clients/client-ecs/src/models/index.ts
index 9eaceb12865f8..1657800f73ce5 100644
--- a/clients/client-ecs/src/models/index.ts
+++ b/clients/client-ecs/src/models/index.ts
@@ -1,2 +1,3 @@
 // smithy-typescript generated code
 export * from "./models_0";
+export * from "./models_1";
diff --git a/clients/client-ecs/src/models/models_0.ts b/clients/client-ecs/src/models/models_0.ts
index ecd0c438903a5..d59c3f8d774b9 100644
--- a/clients/client-ecs/src/models/models_0.ts
+++ b/clients/client-ecs/src/models/models_0.ts
@@ -45,6 +45,16 @@ export type AgentUpdateStatus = (typeof AgentUpdateStatus)[keyof typeof AgentUpd
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ *The following list includes additional causes for the error:
+ *The RunTask could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * PROVISIONING state per cluster has been reached. For information
+ * about the service quotas, see Amazon ECS
+ * service quotas.
FARGATE_SPOT capacity providers. The Fargate capacity providers are
* available to all accounts and only need to be associated with a cluster to be used in a
* capacity provider strategy.
- * With FARGATE_SPOT, you can run interruption
- * tolerant tasks at a rate that's discounted compared to the FARGATE price.
- * FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the
- * capacity back, your tasks are interrupted with a two-minute warning.
- * FARGATE_SPOT only supports Linux tasks with the X86_64 architecture on
- * platform version 1.3.0 or later.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate
+ * that's discounted compared to the FARGATE price. FARGATE_SPOT
+ * runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are
+ * interrupted with a two-minute warning. FARGATE_SPOT only supports Linux
+ * tasks with the X86_64 architecture on platform version 1.3.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
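A sketch of the strategy described above, splitting work between FARGATE and the discounted FARGATE_SPOT pool; the cluster and task definition names are assumptions:

    import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";

    const client = new ECSClient({ region: "us-east-1" });

    // One task is guaranteed on FARGATE (base); the rest split 1:3 by weight.
    await client.send(
      new RunTaskCommand({
        cluster: "my-cluster",
        taskDefinition: "my-task:1",
        count: 4,
        capacityProviderStrategy: [
          { capacityProvider: "FARGATE", base: 1, weight: 1 },
          { capacityProvider: "FARGATE_SPOT", weight: 3 },
        ],
      })
    );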
* @public */ @@ -1477,15 +1486,13 @@ export interface DeploymentConfiguration { * the task towards the minimum healthy percent total. * * - *The default value for a replica service for
- * minimumHealthyPercent is 100%. The default
- * minimumHealthyPercent value for a service using
- * the DAEMON service schedule is 0% for the CLI,
- * the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The default value for a replica service for minimumHealthyPercent is
+ * 100%. The default minimumHealthyPercent value for a service using the
+ * DAEMON service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the
+ * APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the
- * desiredCount multiplied by the
- * minimumHealthyPercent/100, rounded up to the
- * nearest integer value.
desiredCount multiplied by the minimumHealthyPercent/100,
+ * rounded up to the nearest integer value.
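Spelled out as a sketch (the helper function is ours, not part of the SDK):

    // minimum healthy tasks = ceil(desiredCount * minimumHealthyPercent / 100)
    function minHealthyTasks(desiredCount: number, minimumHealthyPercent: number): number {
      return Math.ceil((desiredCount * minimumHealthyPercent) / 100);
    }

    console.log(minHealthyTasks(4, 50));  // 2: up to two of four tasks may stop during a deployment
    console.log(minHealthyTasks(3, 100)); // 3: no task may stop before its replacement is healthy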
* If a service is using either the blue/green (CODE_DEPLOY) or
* EXTERNAL deployment types and is running tasks that use the
* EC2 launch type, the minimum healthy
@@ -1651,8 +1658,7 @@ export type AssignPublicIp = (typeof AssignPublicIp)[keyof typeof AssignPublicIp
/**
*
An object representing the networking details for a task or service. For example
- * awsvpcConfiguration=\{subnets=["subnet-12344321"],securityGroups=["sg-12344321"]\}
- *
awsVpcConfiguration=\{subnets=["subnet-12344321"],securityGroups=["sg-12344321"]\}.
* @public
*/
export interface AwsVpcConfiguration {
@@ -1883,16 +1889,12 @@ export interface Secret {
/**
* The log configuration for the container. This parameter maps to LogConfig
- * in the Create a container section of the Docker Remote API and the
- * --log-driver option to
- * docker
- * run
- * .
--log-driver option to docker
+ * run.
* By default, containers use the same logging driver that the Docker daemon uses. * However, the container might use a different logging driver than the Docker daemon by - * specifying a log driver configuration in the container definition. For more information - * about the options for different supported log drivers, see Configure logging - * drivers in the Docker documentation.
+ * specifying a log driver configuration in the container definition. *Understand the following when specifying a log configuration for your * containers.
*splunk, and awsfirelens.
* For tasks hosted on Amazon EC2 instances, the supported log drivers are
* awslogs, fluentd, gelf,
- * json-file, journald,
- * logentries,syslog, splunk, and
- * awsfirelens.
json-file, journald, syslog,
+ * splunk, and awsfirelens.
*
* This parameter requires version 1.18 of the Docker Remote API or greater on
@@ -1936,12 +1937,12 @@ export interface LogConfiguration {
* splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are
* awslogs, fluentd, gelf,
- * json-file, journald,
- * logentries,syslog, splunk, and
- * awsfirelens.
For more information about using the awslogs log driver, see Using
- * the awslogs log driver in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Custom log routing in the Amazon Elastic Container Service Developer Guide.
json-file, journald, syslog,
+ * splunk, and awsfirelens.
+ * For more information about using the awslogs log driver, see Send
+ * Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send
+ * Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
If you have a custom driver that isn't listed, you can fork the Amazon ECS container
* agent project that's available
@@ -2183,16 +2184,12 @@ export interface ServiceConnectConfiguration {
/**
* The log configuration for the container. This parameter maps to LogConfig
- * in the Create a container section of the Docker Remote API and the
- * --log-driver option to
- * docker
- * run
- * .
--log-driver option to docker
+ * run.
By default, containers use the same logging driver that the Docker daemon uses. * However, the container might use a different logging driver than the Docker daemon by - * specifying a log driver configuration in the container definition. For more information - * about the options for different supported log drivers, see Configure logging - * drivers in the Docker documentation.
+ * specifying a log driver configuration in the container definition. *Understand the following when specifying a log configuration for your * containers.
*splunk, and awsfirelens.
* For tasks hosted on Amazon EC2 instances, the supported log drivers are
* awslogs, fluentd, gelf,
- * json-file, journald,
- * logentries,syslog, splunk, and
- * awsfirelens.
json-file, journald, syslog,
+ * splunk, and awsfirelens.
* This parameter requires version 1.18 of the Docker Remote API or greater on @@ -2646,8 +2642,8 @@ export interface CreateServiceRequest { * infrastructure.
*Fargate Spot infrastructure is available for use but a capacity provider - * strategy must be used. For more information, see Fargate capacity providers in the - * Amazon ECS Developer Guide.
+ * strategy must be used. For more information, see Fargate capacity providers in the Amazon ECS + * Developer Guide. *The EC2 launch type runs your tasks on Amazon EC2 instances registered to your
* cluster.
The platform version that your tasks in the service are running on. A platform version
* is specified only for tasks using the Fargate launch type. If one isn't
* specified, the LATEST platform version is used. For more information, see
- * Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
If you do not use Elastic Load Balancing, we recommend that you use the startPeriod in
* the task definition health check parameters. For more information, see Health
* check.
If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can - * specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). + *
If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you + * can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). * During that time, the Amazon ECS service scheduler ignores health check status. This grace * period can prevent the service scheduler from marking tasks as unhealthy and stopping * them before they have time to come up.
@@ -2848,7 +2845,9 @@ export interface CreateServiceRequest { *Specifies whether to propagate the tags from the task definition to the task. If no * value is specified, the tags aren't propagated. Tags can only be propagated to the task * during task creation. To add tags to a task after task creation, use the TagResource API action.
- *You must set this to a value other than NONE when you use Cost Explorer. For more information, see Amazon ECS usage reports in the Amazon Elastic Container Service Developer Guide.
You must set this to a value other than NONE when you use Cost Explorer.
+ * For more information, see Amazon ECS usage reports
+ * in the Amazon Elastic Container Service Developer Guide.
The default is NONE.
Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment.
+ *Specify a Key Management Service key ID to encrypt the ephemeral storage for + * deployment. * @public */ kmsKeyId?: string; @@ -4207,8 +4207,8 @@ export interface DeleteAttributesResponse { /** *The specified target wasn't found. You can view your available container instances - * with ListContainerInstances. Amazon ECS container instances are - * cluster-specific and Region-specific.
* @public */ kmsKeyId?: string; @@ -4207,8 +4207,8 @@ export interface DeleteAttributesResponse { /** *The specified target wasn't found. You can view your available container instances - * with ListContainerInstances. Amazon ECS container instances are - * cluster-specific and Region-specific.
+ * with ListContainerInstances. Amazon ECS container instances are cluster-specific and + * Region-specific. * @public */ export class TargetNotFoundException extends __BaseException { @@ -4538,8 +4538,10 @@ export type EnvironmentFileType = (typeof EnvironmentFileType)[keyof typeof Envi * parameter in a container definition, they take precedence over the variables contained * within an environment file. If multiple environment files are specified that contain the * same variable, they're processed from the top down. We recommend that you use unique - * variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide. - *Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
+ * variable names. For more information, see Use a file to pass + * environment variables to a container in the Amazon Elastic Container Service Developer Guide. + *Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations + * apply.
*You must use the following platforms for the Fargate launch type:
*The file type to use. Environment files are objects in Amazon S3. The only supported value is
- * s3.
The file type to use. Environment files are objects in Amazon S3. The only supported value
+ * is s3.
An object representing a container health check. Health check parameters that are
* specified in a container definition override any Docker health checks that exist in the
* container image (such as those specified in a parent image or from the image's
- * Dockerfile). This configuration maps to the HEALTHCHECK parameter of docker run.
HEALTHCHECK parameter of docker run.
* The Amazon ECS container agent only monitors and reports on the health checks specified * in the task definition. Amazon ECS does not monitor Docker health checks that are @@ -4761,17 +4763,18 @@ export interface FirelensConfiguration { *
The following are notes about container health check support:
*If the Amazon ECS container agent becomes disconnected from the Amazon ECS service, this won't
- * cause a container to transition to an UNHEALTHY status. This is by design,
- * to ensure that containers remain running during agent restarts or temporary
- * unavailability. The health check status is the "last heard from" response from the Amazon ECS
- * agent, so if the container was considered HEALTHY prior to the disconnect,
- * that status will remain until the agent reconnects and another health check occurs.
- * There are no assumptions made about the status of the container health checks.
If the Amazon ECS container agent becomes disconnected from the Amazon ECS service, this
+ * won't cause a container to transition to an UNHEALTHY status. This
+ * is by design, to ensure that containers remain running during agent restarts or
+ * temporary unavailability. The health check status is the "last heard from"
+ * response from the Amazon ECS agent, so if the container was considered
+ * HEALTHY prior to the disconnect, that status will remain until
+ * the agent reconnects and another health check occurs. There are no assumptions
+ * made about the status of the container health checks.
- * Container health checks require version 1.17.0 or greater of the Amazon ECS
- * container agent. For more information, see Updating the
Container health checks require version
+ * 1.17.0 or greater of the
+ * Amazon ECS container agent. For more information, see Updating the
* Amazon ECS container agent.
CMD-SHELL, curl -f http://localhost/ || exit 1
*
* An exit code of 0 indicates success, and non-zero exit code indicates failure. For
- * more information, see HealthCheck in the Create a container
- * section of the Docker Remote API.
HealthCheck in the docker create-container command.
* @public
*/
command: string[] | undefined;
@@ -4846,19 +4848,16 @@ export interface HealthCheck {
}
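Putting the CMD-SHELL example above into the shape this interface expects — a sketch; the timing values are illustrative, not mandated ones:

    import type { HealthCheck } from "@aws-sdk/client-ecs";

    const healthCheck: HealthCheck = {
      command: ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
      interval: 30,    // seconds between checks
      timeout: 5,      // seconds before a single check counts as failed
      retries: 3,      // consecutive failures before the container is UNHEALTHY
      startPeriod: 10, // grace period after container start
    };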
/**
- * The Linux capabilities to add or remove from the default Docker configuration for a container defined in the task definition. For more information about the default capabilities - * and the non-default available capabilities, see Runtime privilege and Linux capabilities in the Docker run - * reference. For more detailed information about these Linux capabilities, + *
The Linux capabilities to add or remove from the default Docker configuration for a container defined in the task definition. For more detailed information about these Linux capabilities, * see the capabilities(7) Linux manual page.
* @public */ export interface KernelCapabilities { /** *The Linux capabilities for the container that have been added to the default
- * configuration provided by Docker. This parameter maps to CapAdd in the
- * Create a container section of the Docker Remote API and the
- * --cap-add option to docker
- * run.
CapAdd in the docker create-container command and the
+ * --cap-add option to docker
+ * run.
* Tasks launched on Fargate only support adding the SYS_PTRACE kernel
* capability.
The Linux capabilities for the container that have been removed from the default
- * configuration provided by Docker. This parameter maps to CapDrop in the
- * Create a container section of the Docker Remote API and the
- * --cap-drop option to docker
- * run.
CapDrop in the docker create-container command and the
+ * --cap-drop option to docker
+ * run.
* Valid values: Any host devices to expose to the container. This parameter maps to
- * "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" |
* "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" |
* "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" |
@@ -4988,8 +4986,7 @@ export interface LinuxParameters {
/**
* Devices in the Create a container section of the
- * Docker Remote API and the --device option to docker run.
Devices in the docker create-container command and the --device option to docker run.
If you're using tasks that use the Fargate launch type, the
* devices parameter isn't supported.
The value for the size (in MiB) of the /dev/shm volume. This parameter
- * maps to the --shm-size option to docker
- * run.
--shm-size option to docker
+ * run.
* If you are using tasks that use the Fargate launch type, the
* sharedMemorySize parameter is not supported.
The container path, mount options, and size (in MiB) of the tmpfs mount. This
- * parameter maps to the --tmpfs option to docker run.
--tmpfs option to docker run.
* If you're using tasks that use the Fargate launch type, the
* tmpfs parameter isn't supported.
0 and 100. If the swappiness parameter is not
* specified, a default value of 60 is used. If a value is not specified for
* maxSwap then this parameter is ignored. This parameter maps to the
- * --memory-swappiness option to docker run.
+ * --memory-swappiness option to docker run.
* If you're using tasks that use the Fargate launch type, the
* swappiness parameter isn't supported.
hostPort can be left blank or it must be the same value as the
* containerPort.
* Most fields of this parameter (containerPort, hostPort,
- * protocol) maps to PortBindings in the
- * Create a container section of the Docker Remote API and the
- * --publish option to
- * docker
- * run
- * . If the network mode of a task definition is set to
+ * protocol) maps to PortBindings in the docker create-container command and the
+ * --publish option to docker
+ * run. If the network mode of a task definition is set to
* host, host ports must either be undefined or match the container port
* in the port mapping.
The value for the specified resource type.
- *When the type is GPU, the value is the number of physical GPUs the
- * Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for
- * all containers in a task can't exceed the number of available GPUs on the container
- * instance that the task is launched on.
When the type is InferenceAccelerator, the value matches
- * the deviceName for an InferenceAccelerator specified in a task definition.
When the type is GPU, the value is the number of physical
+ * GPUs the Amazon ECS container agent reserves for the container. The number
+ * of GPUs that's reserved for all containers in a task can't exceed the number of
+ * available GPUs on the container instance that the task is launched on.
When the type is InferenceAccelerator, the value matches the
+ * deviceName for an InferenceAccelerator specified in a task definition.
You can enable a restart policy for each container defined in your + * task definition, to overcome transient failures faster and maintain task availability. When you + * enable a restart policy for a container, Amazon ECS can restart the container if it exits, without needing to replace + * the task. For more information, see Restart individual containers + * in Amazon ECS tasks with container restart policies in the Amazon Elastic Container Service Developer Guide.
+ * @public + */ +export interface ContainerRestartPolicy { + /** + *Specifies whether a restart policy is enabled for the + * container.
+ * @public + */ + enabled: boolean | undefined; + + /** + *A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit + * codes. By default, Amazon ECS does not ignore + * any exit codes.
+ * @public + */ + ignoredExitCodes?: number[]; + + /** + *A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be
+ * restarted only once every restartAttemptPeriod seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum
+ * restartAttemptPeriod of 60 seconds and a maximum restartAttemptPeriod of 1800 seconds.
+ * By default, a container must run for 300 seconds before it can be restarted.
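Taken together, the new fields look like this — a sketch using only the fields introduced in this hunk:

    import type { ContainerRestartPolicy } from "@aws-sdk/client-ecs";

    const restartPolicy: ContainerRestartPolicy = {
      enabled: true,
      ignoredExitCodes: [0],    // a clean exit is final; don't restart
      restartAttemptPeriod: 60, // at most one restart per 60 seconds of runtime
    };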
A list of namespaced kernel parameters to set in the container. This parameter maps to
- * Sysctls in the Create a container section of the
- * Docker Remote API and the --sysctl option to docker run. For example, you can configure
+ * Sysctls in the docker create-container command and the --sysctl option to docker run. For example, you can configure
* net.ipv4.tcp_keepalive_time setting to maintain longer lived
* connections.
We don't recommend that you specify network-related systemControls
@@ -5478,7 +5505,7 @@ export type UlimitName = (typeof UlimitName)[keyof typeof UlimitName];
* the nofile resource limit parameter which Fargate
* overrides. The nofile resource limit sets a restriction on
* the number of open files that a container can use. The default
- * nofile soft limit is 1024 and the default hard limit
+ * nofile soft limit is 65535 and the default hard limit
* is 65535.
You can specify the ulimit settings for a container in a task
* definition.
The name of a container. If you're linking multiple containers together in a task
* definition, the name of one container can be entered in the
* links of another container to connect the containers.
- * Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to name in the
- * Create a container section of the Docker Remote API and the
- * --name option to docker
- * run.
name in the docker create-container command and the
+ * --name option to docker
+ * run.
* @public
*/
name?: string;
@@ -5550,10 +5576,9 @@ export interface ContainerDefinition {
* repository-url/image:tag
* or
* repository-url/image@digest
- * . Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to Image in the
- * Create a container section of the Docker Remote API and the
- * IMAGE parameter of docker
- * run.
+ * . Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to Image in the docker create-container command and the
+ * IMAGE parameter of docker
+ * run.
* When a new task starts, the Amazon ECS container agent pulls the latest version of @@ -5594,8 +5619,7 @@ export interface ContainerDefinition { /** *
The number of cpu units reserved for the container. This parameter maps
- * to CpuShares in the Create a container section of the
- * Docker Remote API and the --cpu-shares option to docker run.
CpuShares in the docker create-container command and the --cpu-shares option to docker run.
* This field is optional for tasks using the Fargate launch type, and the
* only requirement is that the total amount of CPU reserved for all containers within a
* task be lower than the task-level cpu value.
On Linux container instances, the Docker daemon on the container instance uses the CPU - * value to calculate the relative CPU share ratios for running containers. For more - * information, see CPU share - * constraint in the Docker documentation. The minimum valid CPU share value - * that the Linux kernel allows is 2. However, the CPU parameter isn't required, and you - * can use CPU values below 2 in your container definitions. For CPU values below 2 - * (including null), the behavior varies based on your Amazon ECS container agent + * value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value + * that the Linux kernel allows is 2, and the + * maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you + * can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 + * (including null) or above 262144, the behavior varies based on your Amazon ECS container agent * version:
*+ * Agent versions greater than or equal to + * 1.84.0: CPU values greater than 256 vCPU are passed to Docker as + * 256, which is equivalent to 262144 CPU shares.
+ *On Windows container instances, the CPU limit is enforced as an absolute limit, or a
* quota. Windows containers only have access to the specified amount of CPU that's
@@ -5648,8 +5677,7 @@ export interface ContainerDefinition {
* to exceed the memory specified here, the container is killed. The total amount of memory
* reserved for all containers within a task must be lower than the task
* memory value, if one is specified. This parameter maps to
- * Memory in the Create a container section of the
- * Docker Remote API and the --memory option to docker run.
Memory in the docker create-container command and the --memory option to docker run.
* If using the Fargate launch type, this parameter is optional.
*If using the EC2 launch type, you must specify either a task-level
* memory value or a container-level memory value. If you specify both a container-level
@@ -5672,8 +5700,7 @@ export interface ContainerDefinition {
* However, your container can consume more memory when it needs to, up to either the hard
* limit specified with the memory parameter (if applicable), or all of the
* available memory on the container instance, whichever comes first. This parameter maps
- * to MemoryReservation in the Create a container section of
- * the Docker Remote API and the --memory-reservation option to docker run.
MemoryReservation in the docker create-container command and the --memory-reservation option to docker run.
* If a task-level memory value is not specified, you must specify a non-zero integer for
* one or both of memory or memoryReservation in a container
* definition. If you specify both, memory must be greater than
@@ -5700,12 +5727,9 @@ export interface ContainerDefinition {
* without the need for port mappings. This parameter is only supported if the network mode
* of a task definition is bridge. The name:internalName
* construct is analogous to name:alias in Docker links.
- * Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. For more information about linking Docker containers, go to
- * Legacy container links
- * in the Docker documentation. This parameter maps to Links in the
- * Create a container section of the Docker Remote API and the
- * --link option to docker
- * run.
Links in the docker create-container command and the
+ * --link option to docker
+ * run.
* This parameter is not supported for Windows containers.
*localhost. There's no loopback for port mappings on Windows, so you
* can't access a container's mapped port from the host itself.
* This parameter maps to PortBindings in the
- * Create a container section of the Docker Remote API and the
- * --publish option to docker
- * run. If the network mode of a task definition is set to none,
+ * docker create-container command and the
+ * --publish option to docker
+ * run. If the network mode of a task definition is set to none,
* then you can't specify port mappings. If the network mode of a task definition is set to
* host, then host ports must either be undefined or they must match the
* container port in the port mapping.
The restart policy for a container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the + * task. For more information, see Restart individual containers in Amazon ECS tasks with container restart policies in the Amazon Elastic Container Service Developer Guide.
+ * @public + */ + restartPolicy?: ContainerRestartPolicy; + /** *Early versions of the Amazon ECS container agent don't properly handle
@@ -5770,17 +5801,16 @@ export interface ContainerDefinition {
* arguments as command array items instead.
The entry point that's passed to the container. This parameter maps to
- * Entrypoint in the Create a container section of the
- * Docker Remote API and the --entrypoint option to docker run. For more information, see https://docs.docker.com/engine/reference/builder/#entrypoint.
Entrypoint in the docker create-container command and the --entrypoint option to docker run.
* @public
*/
entryPoint?: string[];
/**
* The command that's passed to the container. This parameter maps to Cmd in
- * the Create a container section of the Docker Remote API and the
- * COMMAND parameter to docker
- * run. For more information, see https://docs.docker.com/engine/reference/builder/#cmd. If there are multiple arguments, each
+ * the docker create-container command and the
+ * COMMAND parameter to docker
+ * run. If there are multiple arguments, each
* argument is a separated string in the array.
The environment variables to pass to a container. This parameter maps to
- * Env in the Create a container section of the
- * Docker Remote API and the --env option to docker run.
Env in the docker create-container command and the --env option to docker run.
* We don't recommend that you use plaintext environment variables for sensitive * information, such as credential data.
@@ -5800,13 +5829,11 @@ export interface ContainerDefinition { /** *A list of files containing the environment variables to pass to a container. This
- * parameter maps to the --env-file option to docker run.
--env-file option to docker run.
* You can specify up to ten environment files. The file must have a .env
* file extension. Each line in an environment file contains an environment variable in
* VARIABLE=VALUE format. Lines beginning with # are treated
- * as comments and are ignored. For more information about the environment variable file
- * syntax, see Declare default
- * environment variables in file.
If there are environment variables specified using the environment
* parameter in a container definition, they take precedence over the variables contained
* within an environment file. If multiple environment files are specified that contain the
@@ -5819,8 +5846,7 @@ export interface ContainerDefinition {
/**
*
The mount points for data volumes in your container.
- *This parameter maps to Volumes in the Create a container
- * section of the Docker Remote API and the --volume option to docker run.
This parameter maps to Volumes in the docker create-container command and the --volume option to docker run.
Windows containers can mount whole directories on the same drive as
* $env:ProgramData. Windows containers can't mount directories on a
* different drive, and mount point can't be across drives.
Data volumes to mount from another container. This parameter maps to
- * VolumesFrom in the Create a container section of the
- * Docker Remote API and the --volumes-from option to docker run.
VolumesFrom in the docker create-container command and the --volumes-from option to docker run.
* @public
*/
volumesFrom?: VolumeFrom[];
@@ -5914,7 +5939,7 @@ export interface ContainerDefinition {
* later, then they contain the required versions of the container agent and
* ecs-init. For more information, see Amazon ECS-optimized Linux AMI
* in the Amazon Elastic Container Service Developer Guide.
- * The valid values are 2-120 seconds.
+ *The valid values for Fargate are 2-120 seconds.
* @public */ startTimeout?: number; @@ -5954,9 +5979,9 @@ export interface ContainerDefinition { /** *The hostname to use for your container. This parameter maps to Hostname
- * in the Create a container section of the Docker Remote API and the
- * --hostname option to docker
- * run.
--hostname option to docker
+ * run.
* The hostname parameter is not supported if you're using the
* awsvpc network mode.
The user to use inside the container. This parameter maps to User in the
- * Create a container section of the Docker Remote API and the
- * --user option to docker
- * run.
The user to use inside the container. This parameter maps to User in the docker create-container command and the
+ * --user option to docker
+ * run.
When running tasks using the host network mode, don't run containers
* using the root user (UID 0). We recommend using a non-root user for better
@@ -6018,16 +6042,14 @@ export interface ContainerDefinition {
/**
*
The working directory to run commands inside the container in. This parameter maps to
- * WorkingDir in the Create a container section of the
- * Docker Remote API and the --workdir option to docker run.
WorkingDir in the docker create-container command and the --workdir option to docker run.
* @public
*/
workingDirectory?: string;
/**
* When this parameter is true, networking is off within the container. This parameter
- * maps to NetworkDisabled in the Create a container section
- * of the Docker Remote API.
NetworkDisabled in the docker create-container command.
* This parameter is not supported for Windows containers.
*When this parameter is true, the container is given elevated privileges on the host
* container instance (similar to the root user). This parameter maps to
- * Privileged in the Create a container section of the
- * Docker Remote API and the --privileged option to docker run.
Privileged in the docker create-container command and the --privileged option to docker run.
* This parameter is not supported for Windows containers or tasks run on Fargate.
*When this parameter is true, the container is given read-only access to its root file
- * system. This parameter maps to ReadonlyRootfs in the
- * Create a container section of the Docker Remote API and the
- * --read-only option to docker
- * run.
ReadonlyRootfs in the docker create-container command and the
+ * --read-only option to docker
+ * run.
* This parameter is not supported for Windows containers.
*A list of DNS servers that are presented to the container. This parameter maps to
- * Dns in the Create a container section of the
- * Docker Remote API and the --dns option to docker run.
Dns in the docker create-container command and the --dns option to docker run.
* This parameter is not supported for Windows containers.
*A list of DNS search domains that are presented to the container. This parameter maps
- * to DnsSearch in the Create a container section of the
- * Docker Remote API and the --dns-search option to docker run.
DnsSearch in the docker create-container command and the --dns-search option to docker run.
* This parameter is not supported for Windows containers.
*A list of hostnames and IP address mappings to append to the /etc/hosts
- * file on the container. This parameter maps to ExtraHosts in the
- * Create a container section of the Docker Remote API and the
- * --add-host option to docker
- * run.
ExtraHosts in the docker create-container command and the
+ * --add-host option to docker
+ * run.
* This parameter isn't supported for Windows containers or tasks that use the
* awsvpc network mode.
A list of strings to provide custom configuration for multiple security systems. For - * more information about valid values, see Docker - * Run Security Configuration. This field isn't valid for containers in tasks + *
A list of strings to provide custom configuration for multiple security systems. This field isn't valid for containers in tasks * using the Fargate launch type.
*For Linux tasks on EC2, this parameter can be used to reference custom * labels for SELinux and AppArmor multi-level security systems.
@@ -6108,10 +6123,9 @@ export interface ContainerDefinition { * For more information, see Using gMSAs for Windows * Containers and Using gMSAs for Linux * Containers in the Amazon Elastic Container Service Developer Guide. - *This parameter maps to SecurityOpt in the
- * Create a container section of the Docker Remote API and the
- * --security-opt option to docker
- * run.
This parameter maps to SecurityOpt in the docker create-container command and the
+ * --security-opt option to docker
+ * run.
The Amazon ECS container agent running on a container instance must register with the
* ECS_SELINUX_CAPABLE=true or ECS_APPARMOR_CAPABLE=true
@@ -6119,8 +6133,6 @@ export interface ContainerDefinition {
* security options. For more information, see Amazon ECS Container
* Agent Configuration in the Amazon Elastic Container Service Developer Guide.
For more information about valid values, see Docker - * Run Security Configuration.
*Valid values: "no-new-privileges" | "apparmor:PROFILE" | "label:value" | * "credentialspec:CredentialSpecFilePath"
* @public @@ -6130,24 +6142,21 @@ export interface ContainerDefinition { /** *When this parameter is true, you can deploy containerized applications
* that require stdin or a tty to be allocated. This parameter
- * maps to OpenStdin in the Create a container section of the
- * Docker Remote API and the --interactive option to docker run.
OpenStdin in the docker create-container command and the --interactive option to docker run.
* @public
*/
interactive?: boolean;
/**
* When this parameter is true, a TTY is allocated. This parameter maps to
- * Tty in the Create a container section of the
- * Docker Remote API and the --tty option to docker run.
Tty in the docker create-container command and the --tty option to docker run.
* @public
*/
pseudoTerminal?: boolean;
/**
* A key/value map of labels to add to the container. This parameter maps to
- * Labels in the Create a container section of the
- * Docker Remote API and the --label option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '\{\{.Server.APIVersion\}\}'
+ * Labels in the docker create-container command and the --label option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '\{\{.Server.APIVersion\}\}'
*
A list of ulimits to set in the container. If a ulimit value
* is specified in a task definition, it overrides the default values set by Docker. This
- * parameter maps to Ulimits in the Create a container section
- * of the Docker Remote API and the --ulimit option to docker run. Valid naming values are displayed
+ * parameter maps to Ulimits in the docker create-container command and the --ulimit option to docker run. Valid naming values are displayed
* in the Ulimit data type.
Amazon ECS tasks hosted on Fargate use the default
* resource limit values set by the operating system with the exception of
* the nofile resource limit parameter which Fargate
* overrides. The nofile resource limit sets a restriction on
* the number of open files that a container can use. The default
- * nofile soft limit is 1024 and the default hard limit
+ * nofile soft limit is 65535 and the default hard limit
* is 65535.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '\{\{.Server.APIVersion\}\}'
*
The log configuration specification for the container.
- *This parameter maps to LogConfig in the
- * Create a container section of the Docker Remote API and the
- * --log-driver option to docker
- * run. By default, containers use the same logging driver that the Docker
+ *
This parameter maps to LogConfig in the docker create-container command and the
+ * --log-driver option to docker
+ * run. By default, containers use the same logging driver that the Docker
* daemon uses. However the container can use a different logging driver than the Docker
* daemon by specifying a log driver with this parameter in the container definition. To
* use a different logging driver for a container, the log system must be configured
* properly on the container instance (or on a different log server for remote logging
- * options). For more information about the options for different supported log drivers,
- * see Configure
- * logging drivers in the Docker documentation.
Amazon ECS currently supports a subset of the logging drivers available to the Docker * daemon (shown in the LogConfiguration data type). Additional log @@ -6209,18 +6214,16 @@ export interface ContainerDefinition { /** *
The container health check command and associated configuration parameters for the
- * container. This parameter maps to HealthCheck in the
- * Create a container section of the Docker Remote API and the
- * HEALTHCHECK parameter of docker
- * run.
HealthCheck in the docker create-container command and the
+ * HEALTHCHECK parameter of docker
+ * run.
* @public
*/
healthCheck?: HealthCheck;
/**
* A list of namespaced kernel parameters to set in the container. This parameter maps to
- * Sysctls in the Create a container section of the
- * Docker Remote API and the --sysctl option to docker run. For example, you can configure
+ * Sysctls in the docker create-container command and the --sysctl option to docker run. For example, you can configure
* net.ipv4.tcp_keepalive_time setting to maintain longer lived
* connections.
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported
- * value is 20 GiB and the maximum supported value is
+ *
The total amount, in GiB, of ephemeral storage to set for the task. The minimum
+ * supported value is 20 GiB and the maximum supported value is
* 200 GiB.
docker plugin ls to retrieve the driver name from
* your container instance. If the driver was installed using another method, use Docker
- * plugin discovery to retrieve the driver name. For more information, see Docker
- * plugin discovery. This parameter maps to Driver in the
- * Create a volume section of the Docker Remote API and the
- * xxdriver option to docker
- * volume create.
+ * plugin discovery to retrieve the driver name. This parameter maps to Driver in the docker create-volume command and the
+ * xxdriver option to docker
+ * volume create.
* @public
*/
driver?: string;
/**
* A map of Docker driver-specific options passed through. This parameter maps to
- * DriverOpts in the Create a volume section of the
- * Docker Remote API and the xxopt option to docker
- * volume create.
DriverOpts in the docker create-volume command and the xxopt option to docker
+ * volume create.
* @public
*/
driverOpts?: RecordCustom metadata to add to your Docker volume. This parameter maps to
- * Labels in the Create a volume section of the
- * Docker Remote API and the xxlabel option to docker
- * volume create.
Labels in the docker create-volume command and the xxlabel option to docker
+ * volume create.
* @public
*/
labels?: RecordThe short name or full Amazon Resource Name (ARN) of the Identity and Access Management role that grants containers in the - * task permission to call Amazon Web Services APIs on your behalf. For more information, see Amazon ECS - * Task Role in the Amazon Elastic Container Service Developer Guide.
- *IAM roles for tasks on Windows require that the -EnableTaskIAMRole
- * option is set when you launch the Amazon ECS-optimized Windows AMI. Your containers must also run some
- * configuration code to use the feature. For more information, see Windows IAM roles
- * for tasks in the Amazon Elastic Container Service Developer Guide.
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent - * permission to make Amazon Web Services API calls on your behalf. The task execution IAM role is required - * depending on the requirements of your task. For more information, see Amazon ECS task - * execution IAM role in the Amazon Elastic Container Service Developer Guide.
+ * permission to make Amazon Web Services API calls on your behalf. For informationabout the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide. * @public */ executionRoleArn?: string; @@ -6990,13 +6984,11 @@ export interface TaskDefinition { * to use a non-root user. *If the network mode is awsvpc, the task is allocated an elastic network
- * interface, and you must specify a NetworkConfiguration value when you create
+ * interface, and you must specify a NetworkConfiguration value when you create
* a service or run a task with the task definition. For more information, see Task Networking in the
* Amazon Elastic Container Service Developer Guide.
If the network mode is host, you cannot run multiple instantiations of the
* same task on a single container instance when port mappings are used.
For more information, see Network - * settings in the Docker run reference.
* @public */ networkMode?: NetworkMode; @@ -7081,6 +7073,9 @@ export interface TaskDefinition { * this field is optional. Any value can be used. If you use the Fargate launch type, this * field is required. You must use one of the following values. The value that you choose * determines your range of valid values for thememory parameter.
+ * If you use the EC2 launch type, this field is optional. Supported values
+ * are between 128 CPU units (0.125 vCPUs) and 10240
+ * CPU units (10 vCPUs).
The CPU units cannot be less than 1 vCPU when you use Windows containers on * Fargate.
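For example, a Fargate registration pairing a task-level cpu with a memory value in its valid range — a sketch with hypothetical names:

    import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

    const client = new ECSClient({ region: "us-east-1" });

    await client.send(
      new RegisterTaskDefinitionCommand({
        family: "my-task",
        requiresCompatibilities: ["FARGATE"],
        networkMode: "awsvpc",
        cpu: "1024",    // 1 vCPU at the task level
        memory: "2048", // must fall within the range the chosen cpu value allows
        containerDefinitions: [
          { name: "app", image: "public.ecr.aws/docker/library/nginx:latest", essential: true },
        ],
      })
    );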
*If task is specified, all containers within the specified
* task share the same process namespace.
If no value is specified, the - * default is a private namespace for each container. For more information, - * see PID settings in the Docker run - * reference.
+ * default is a private namespace for each container. *If the host PID mode is used, there's a heightened risk
- * of undesired process namespace exposure. For more information, see
- * Docker security.
This parameter is not supported for Windows containers.
*none is specified, then IPC resources
* within the containers of a task are private and not shared with other containers in a
* task or on the container instance. If no value is specified, then the IPC resource
- * namespace sharing depends on the Docker daemon setting on the container instance. For
- * more information, see IPC
- * settings in the Docker run reference.
+ * namespace sharing depends on the Docker daemon setting on the container instance.
* If the host IPC mode is used, be aware that there is a heightened risk of
- * undesired IPC namespace expose. For more information, see Docker
- * security.
If you are setting namespaced kernel parameters using systemControls for
* the containers in the task, the following will apply to your IPC resource namespace. For
* more information, see System
@@ -8477,14 +8466,15 @@ export interface Container {
export interface TaskEphemeralStorage {
/**
* The total amount, in GiB, of the ephemeral storage to set for the task. The minimum
- * supported value is 20 GiB and the maximum supported value is 200
- * GiB.
+ * supported value is 20 GiB and the maximum supported value is
+ * 200 GiB.
Specify a Key Management Service key ID to encrypt the ephemeral storage for the task.
+ *Specify a Key Management Service key ID to encrypt the ephemeral storage for the + * task. * @public */ kmsKeyId?: string; @@ -10799,9 +10789,7 @@ export interface RegisterTaskDefinitionRequest { /** *The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent - * permission to make Amazon Web Services API calls on your behalf. The task execution IAM role is required - * depending on the requirements of your task. For more information, see Amazon ECS task - * execution IAM role in the Amazon Elastic Container Service Developer Guide.
* @public */ kmsKeyId?: string; @@ -10799,9 +10789,7 @@ export interface RegisterTaskDefinitionRequest { /** *The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent - * permission to make Amazon Web Services API calls on your behalf. The task execution IAM role is required - * depending on the requirements of your task. For more information, see Amazon ECS task - * execution IAM role in the Amazon Elastic Container Service Developer Guide.
+ * permission to make Amazon Web Services API calls on your behalf. For informationabout the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide. * @public */ executionRoleArn?: string; @@ -10828,13 +10816,11 @@ export interface RegisterTaskDefinitionRequest { * to use a non-root user. *If the network mode is awsvpc, the task is allocated an elastic network
- * interface, and you must specify a NetworkConfiguration value when you create
+ * interface, and you must specify a NetworkConfiguration value when you create
* a service or run a task with the task definition. For more information, see Task Networking in the
* Amazon Elastic Container Service Developer Guide.
If the network mode is host, you cannot run multiple instantiations of the
* same task on a single container instance when port mappings are used.
For more information, see Network - * settings in the Docker run reference.
* @public */ networkMode?: NetworkMode; @@ -11018,12 +11004,9 @@ export interface RegisterTaskDefinitionRequest { *If task is specified, all containers within the specified
* task share the same process namespace.
If no value is specified, the - * default is a private namespace for each container. For more information, - * see PID settings in the Docker run - * reference.
+ * default is a private namespace for each container. *If the host PID mode is used, there's a heightened risk
- * of undesired process namespace exposure. For more information, see
- * Docker security.
This parameter is not supported for Windows containers.
*none is specified, then IPC resources
* within the containers of a task are private and not shared with other containers in a
* task or on the container instance. If no value is specified, then the IPC resource
- * namespace sharing depends on the Docker daemon setting on the container instance. For
- * more information, see IPC
- * settings in the Docker run reference.
+ * namespace sharing depends on the Docker daemon setting on the container instance.
* If the host IPC mode is used, be aware that there is a heightened risk of
- * undesired IPC namespace expose. For more information, see Docker
- * security.
If you are setting namespaced kernel parameters using An optional tag specified when a task is started. For example, if you automatically
* trigger a task to run a batch process job, you could apply a unique identifier for that
* job to your task with the systemControls for
* the containers in the task, the following will apply to your IPC resource namespace. For
* more information, see System
@@ -11569,9 +11549,9 @@ export interface RunTaskRequest {
* startedBy parameter. You can then identify which
- * tasks belong to that job by filtering the results of a ListTasks call
- * with the startedBy value. Up to 128 letters (uppercase and lowercase),
- * numbers, hyphens (-), and underscores (_) are allowed.startedBy value. Up to 128 letters (uppercase and lowercase), numbers,
+ * hyphens (-), forward slash (/), and underscores (_) are allowed.
If a task is started by an Amazon ECS service, then the startedBy parameter
* contains the deployment ID of the service that starts it.
To specify a specific revision, include the revision number in the ARN. For example,
* to specify revision 2, use
* arn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:2.
To specify all revisions, use the wildcard (*) in the ARN. For example, to specify all - * revisions, use + *
To specify all revisions, use the wildcard (*) in the ARN. For example, to specify
+ * all revisions, use
* arn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:*.
For more information, see Policy Resources for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
* @public @@ -11659,7 +11639,6 @@ export interface RunTaskResponse { /** *A full description of the tasks that were run. The tasks that were successfully placed * on your cluster are described here.
- * * @public */ tasks?: Task[]; @@ -11753,9 +11732,9 @@ export interface StartTaskRequest { *An optional tag specified when a task is started. For example, if you automatically
* trigger a task to run a batch process job, you could apply a unique identifier for that
* job to your task with the startedBy parameter. You can then identify which
- * tasks belong to that job by filtering the results of a ListTasks call
- * with the startedBy value. Up to 36 letters (uppercase and lowercase),
- * numbers, hyphens (-), and underscores (_) are allowed.
startedBy value. Up to 36 letters (uppercase and lowercase), numbers,
+ * hyphens (-), forward slash (/), and underscores (_) are allowed.
* If a task is started by an Amazon ECS service, the startedBy parameter
* contains the deployment ID of the service that starts it.
Details about the task set.
- * @public - */ - taskSet?: TaskSet; -} - /** * @internal */ diff --git a/clients/client-ecs/src/models/models_1.ts b/clients/client-ecs/src/models/models_1.ts new file mode 100644 index 0000000000000..fd0da70788da8 --- /dev/null +++ b/clients/client-ecs/src/models/models_1.ts @@ -0,0 +1,13 @@ +// smithy-typescript generated code +import { TaskSet } from "./models_0"; + +/** + * @public + */ +export interface UpdateTaskSetResponse { + /** + *Details about the task set.
+ * @public + */ + taskSet?: TaskSet; +} diff --git a/clients/client-ecs/src/protocols/Aws_json1_1.ts b/clients/client-ecs/src/protocols/Aws_json1_1.ts index 1fee3b94304f6..f4104f54aff4d 100644 --- a/clients/client-ecs/src/protocols/Aws_json1_1.ts +++ b/clients/client-ecs/src/protocols/Aws_json1_1.ts @@ -198,6 +198,7 @@ import { ContainerInstanceField, ContainerInstanceHealthStatus, ContainerOverride, + ContainerRestartPolicy, ContainerStateChange, CreateCapacityProviderRequest, CreateClusterRequest, @@ -370,11 +371,11 @@ import { UpdateTaskProtectionRequest, UpdateTaskProtectionResponse, UpdateTaskSetRequest, - UpdateTaskSetResponse, VersionInfo, Volume, VolumeFrom, } from "../models/models_0"; +import { UpdateTaskSetResponse } from "../models/models_1"; /** * serializeAws_json1_1CreateCapacityProviderCommand @@ -2772,6 +2773,8 @@ const de_UpdateInProgressExceptionRes = async ( // se_ContainerOverrides omitted. +// se_ContainerRestartPolicy omitted. + // se_ContainerStateChange omitted. // se_ContainerStateChanges omitted. @@ -2903,6 +2906,8 @@ const se_CreateTaskSetRequest = (input: CreateTaskSetRequest, context: __SerdeCo // se_InferenceAccelerators omitted. +// se_IntegerList omitted. + // se_KernelCapabilities omitted. // se_KeyValuePair omitted. @@ -3353,6 +3358,8 @@ const de_ContainerInstances = (output: any, context: __SerdeContext): ContainerI // de_ContainerOverrides omitted. +// de_ContainerRestartPolicy omitted. + /** * deserializeAws_json1_1Containers */ @@ -3652,6 +3659,8 @@ const de_InstanceHealthCheckResultList = (output: any, context: __SerdeContext): return retVal; }; +// de_IntegerList omitted. + // de_InvalidParameterException omitted. // de_KernelCapabilities omitted. diff --git a/codegen/sdk-codegen/aws-models/ecs.json b/codegen/sdk-codegen/aws-models/ecs.json index 6f4fad9674ec2..bfcc1fe5e4983 100644 --- a/codegen/sdk-codegen/aws-models/ecs.json +++ b/codegen/sdk-codegen/aws-models/ecs.json @@ -1518,7 +1518,7 @@ } }, "traits": { - "smithy.api#documentation": "An object representing the networking details for a task or service. For example\n\t\t\t\tawsvpcConfiguration={subnets=[\"subnet-12344321\"],securityGroups=[\"sg-12344321\"]}\n
An object representing the networking details for a task or service. For example\n\t\t\t\tawsVpcConfiguration={subnets=[\"subnet-12344321\"],securityGroups=[\"sg-12344321\"]}.
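In the TypeScript client the same structure is passed as networkConfiguration; note the property itself is spelled awsvpcConfiguration (lowercase "vpc") even though the docs render it awsVpcConfiguration. A sketch with placeholder IDs:

    import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";

    const client = new ECSClient({ region: "us-east-1" });

    await client.send(
      new RunTaskCommand({
        cluster: "my-cluster",       // hypothetical
        taskDefinition: "my-task:1", // hypothetical
        launchType: "FARGATE",
        networkConfiguration: {
          awsvpcConfiguration: {
            subnets: ["subnet-12344321"],
            securityGroups: ["sg-12344321"],
            assignPublicIp: "ENABLED",
          },
        },
      })
    );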
The details of a capacity provider strategy. A capacity provider strategy can be set\n\t\t\twhen using the RunTask or CreateCluster APIs or as\n\t\t\tthe default capacity provider strategy for a cluster with the CreateCluster API.
\nOnly capacity providers that are already associated with a cluster and have an\n\t\t\t\tACTIVE or UPDATING status can be used in a capacity\n\t\t\tprovider strategy. The PutClusterCapacityProviders API is used to\n\t\t\tassociate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity\n\t\t\tprovider must already be created. New Auto Scaling group capacity providers can be\n\t\t\tcreated with the CreateCapacityProvider API operation.
\nTo use a Fargate capacity provider, specify either the FARGATE or\n\t\t\t\tFARGATE_SPOT capacity providers. The Fargate capacity providers are\n\t\t\tavailable to all accounts and only need to be associated with a cluster to be used in a\n\t\t\tcapacity provider strategy.
With FARGATE_SPOT, you can run interruption\n\t\t\ttolerant tasks at a rate that's discounted compared to the FARGATE price.\n\t\t\t\tFARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the\n\t\t\tcapacity back, your tasks are interrupted with a two-minute warning.\n\t\t\t\tFARGATE_SPOT only supports Linux tasks with the X86_64 architecture on\n\t\t\tplatform version 1.3.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
" + "smithy.api#documentation": "The details of a capacity provider strategy. A capacity provider strategy can be set\n\t\t\twhen using the RunTask or CreateCluster APIs or as\n\t\t\tthe default capacity provider strategy for a cluster with the CreateCluster API.
\nOnly capacity providers that are already associated with a cluster and have an\n\t\t\t\tACTIVE or UPDATING status can be used in a capacity\n\t\t\tprovider strategy. The PutClusterCapacityProviders API is used to\n\t\t\tassociate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity\n\t\t\tprovider must already be created. New Auto Scaling group capacity providers can be\n\t\t\tcreated with the CreateCapacityProvider API operation.
\nTo use a Fargate capacity provider, specify either the FARGATE or\n\t\t\t\tFARGATE_SPOT capacity providers. The Fargate capacity providers are\n\t\t\tavailable to all accounts and only need to be associated with a cluster to be used in a\n\t\t\tcapacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate\n\t\t\tthat's discounted compared to the FARGATE price. FARGATE_SPOT\n\t\t\truns tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are\n\t\t\tinterrupted with a two-minute warning. FARGATE_SPOT only supports Linux\n\t\t\ttasks with the X86_64 architecture on platform version 1.3.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
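With the generated TypeScript client, the strategy described above can be sketched roughly as follows (a minimal sketch; the cluster name, weights, and base are illustrative placeholders, not part of this change):

import { ECSClient, CreateClusterCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
await client.send(
  new CreateClusterCommand({
    clusterName: "example-cluster",
    // FARGATE and FARGATE_SPOT only need to be associated with the cluster
    // before they can appear in a capacity provider strategy.
    capacityProviders: ["FARGATE", "FARGATE_SPOT"],
    defaultCapacityProviderStrategy: [
      // Keep a guaranteed baseline of one task on regular Fargate capacity...
      { capacityProvider: "FARGATE", base: 1, weight: 1 },
      // ...and bias the remaining tasks toward discounted Spot capacity.
      { capacityProvider: "FARGATE_SPOT", weight: 4 },
    ],
  })
);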
" } }, "com.amazonaws.ecs#CapacityProviderStrategyItemBase": { @@ -1762,7 +1762,7 @@ } }, "traits": { - "smithy.api#documentation": "These errors are usually caused by a client action. This client action might be using\n\t\t\tan action or resource on behalf of a user that doesn't have permissions to use the\n\t\t\taction or resource. Or, it might be specifying an identifier that isn't valid.
", + "smithy.api#documentation": "These errors are usually caused by a client action. This client action might be using\n\t\t\tan action or resource on behalf of a user that doesn't have permissions to use the\n\t\t\taction or resource. Or, it might be specifying an identifier that isn't valid.
\nThe following list includes additional causes for the error:
\nThe RunTask could not be processed because you use managed\n\t\t\t\t\tscaling and there is a capacity error because the quota of tasks in the\n\t\t\t\t\t\tPROVISIONING state per cluster has been reached. For information\n\t\t\t\t\tabout the service quotas, see Amazon ECS\n\t\t\t\t\t\tservice quotas.
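In the TypeScript client this diff regenerates, the error surfaces as a modeled exception class, so callers can branch on it (a minimal sketch; the cluster and task definition names are placeholders):

import { ECSClient, RunTaskCommand, ClientException } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
try {
  await client.send(
    new RunTaskCommand({ cluster: "example-cluster", taskDefinition: "example-task:1" })
  );
} catch (err) {
  if (err instanceof ClientException) {
    // For example, the PROVISIONING task quota described above was reached.
    console.error("Client-side request problem:", err.message);
  } else {
    throw err;
  }
}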
The name of a container. If you're linking multiple containers together in a task\n\t\t\tdefinition, the name of one container can be entered in the\n\t\t\t\tlinks of another container to connect the containers.\n\t\t\tUp to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to name in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--name option to docker\n\t\t\trun.
The name of a container. If you're linking multiple containers together in a task\n\t\t\tdefinition, the name of one container can be entered in the\n\t\t\t\tlinks of another container to connect the containers.\n\t\t\tUp to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to name in the docker create-container command and the\n\t\t\t\t--name option to docker\n\t\t\trun.
The image used to start a container. This string is passed directly to the Docker\n\t\t\tdaemon. By default, images in the Docker Hub registry are available. Other repositories\n\t\t\tare specified with either \n repository-url/image:tag\n or \n repository-url/image@digest\n . Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to Image in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\tIMAGE parameter of docker\n\t\t\t\trun.
When a new task starts, the Amazon ECS container agent pulls the latest version of\n\t\t\t\t\tthe specified image and tag for the container to use. However, subsequent\n\t\t\t\t\tupdates to a repository image aren't propagated to already running tasks.
\nImages in Amazon ECR repositories can be specified by either using the full\n\t\t\t\t\t\tregistry/repository:tag or\n\t\t\t\t\t\tregistry/repository@digest. For example,\n\t\t\t\t\t\t012345678910.dkr.ecr.\n\t\t\t\t\tor\n\t\t\t\t\t\t012345678910.dkr.ecr..\n\t\t\t\t
Images in official repositories on Docker Hub use a single name (for example,\n\t\t\t\t\t\tubuntu or mongo).
Images in other repositories on Docker Hub are qualified with an organization\n\t\t\t\t\tname (for example, amazon/amazon-ecs-agent).
Images in other online repositories are qualified further by a domain name\n\t\t\t\t\t(for example, quay.io/assemblyline/ubuntu).
The image used to start a container. This string is passed directly to the Docker\n\t\t\tdaemon. By default, images in the Docker Hub registry are available. Other repositories\n\t\t\tare specified with either \n repository-url/image:tag\n or \n repository-url/image@digest\n . Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to Image in the docker create-container command and the\n\t\t\t\tIMAGE parameter of docker\n\t\t\t\trun.
When a new task starts, the Amazon ECS container agent pulls the latest version of\n\t\t\t\t\tthe specified image and tag for the container to use. However, subsequent\n\t\t\t\t\tupdates to a repository image aren't propagated to already running tasks.
\nImages in Amazon ECR repositories can be specified by either using the full\n\t\t\t\t\t\tregistry/repository:tag or\n\t\t\t\t\t\tregistry/repository@digest. For example,\n\t\t\t\t\t\t012345678910.dkr.ecr.\n\t\t\t\t\tor\n\t\t\t\t\t\t012345678910.dkr.ecr..\n\t\t\t\t
Images in official repositories on Docker Hub use a single name (for example,\n\t\t\t\t\t\tubuntu or mongo).
Images in other repositories on Docker Hub are qualified with an organization\n\t\t\t\t\tname (for example, amazon/amazon-ecs-agent).
Images in other online repositories are qualified further by a domain name\n\t\t\t\t\t(for example, quay.io/assemblyline/ubuntu).
The number of cpu units reserved for the container. This parameter maps\n\t\t\tto CpuShares in the Create a container section of the\n\t\t\tDocker Remote API and the --cpu-shares option to docker run.
This field is optional for tasks using the Fargate launch type, and the\n\t\t\tonly requirement is that the total amount of CPU reserved for all containers within a\n\t\t\ttask be lower than the task-level cpu value.
You can determine the number of CPU units that are available per EC2 instance type\n\t\t\t\tby multiplying the vCPUs listed for that instance type on the Amazon EC2 Instances detail page\n\t\t\t\tby 1,024.
\nLinux containers share unallocated CPU units with other containers on the container\n\t\t\tinstance with the same ratio as their allocated amount. For example, if you run a\n\t\t\tsingle-container task on a single-core instance type with 512 CPU units specified for\n\t\t\tthat container, and that's the only task running on the container instance, that\n\t\t\tcontainer could use the full 1,024 CPU unit share at any given time. However, if you\n\t\t\tlaunched another copy of the same task on that container instance, each task is\n\t\t\tguaranteed a minimum of 512 CPU units when needed. Moreover, each container could float\n\t\t\tto higher CPU usage if the other container was not using it. If both tasks were 100%\n\t\t\tactive all of the time, they would be limited to 512 CPU units.
\nOn Linux container instances, the Docker daemon on the container instance uses the CPU\n\t\t\tvalue to calculate the relative CPU share ratios for running containers. For more\n\t\t\tinformation, see CPU share\n\t\t\t\tconstraint in the Docker documentation. The minimum valid CPU share value\n\t\t\tthat the Linux kernel allows is 2. However, the CPU parameter isn't required, and you\n\t\t\tcan use CPU values below 2 in your container definitions. For CPU values below 2\n\t\t\t(including null), the behavior varies based on your Amazon ECS container agent\n\t\t\tversion:
\n\n Agent versions less than or equal to 1.1.0:\n\t\t\t\t\tNull and zero CPU values are passed to Docker as 0, which Docker then converts\n\t\t\t\t\tto 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux\n\t\t\t\t\tkernel converts to two CPU shares.
\n\n Agent versions greater than or equal to 1.2.0:\n\t\t\t\t\tNull, zero, and CPU values of 1 are passed to Docker as 2.
\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a\n\t\t\tquota. Windows containers only have access to the specified amount of CPU that's\n\t\t\tdescribed in the task definition. A null or zero CPU value is passed to Docker as\n\t\t\t\t0, which Windows interprets as 1% of one CPU.
The number of cpu units reserved for the container. This parameter maps\n\t\t\tto CpuShares in the docker create-container command and the --cpu-shares option to docker run.
This field is optional for tasks using the Fargate launch type, and the\n\t\t\tonly requirement is that the total amount of CPU reserved for all containers within a\n\t\t\ttask be lower than the task-level cpu value.
You can determine the number of CPU units that are available per EC2 instance type\n\t\t\t\tby multiplying the vCPUs listed for that instance type on the Amazon EC2 Instances detail page\n\t\t\t\tby 1,024.
\nLinux containers share unallocated CPU units with other containers on the container\n\t\t\tinstance with the same ratio as their allocated amount. For example, if you run a\n\t\t\tsingle-container task on a single-core instance type with 512 CPU units specified for\n\t\t\tthat container, and that's the only task running on the container instance, that\n\t\t\tcontainer could use the full 1,024 CPU unit share at any given time. However, if you\n\t\t\tlaunched another copy of the same task on that container instance, each task is\n\t\t\tguaranteed a minimum of 512 CPU units when needed. Moreover, each container could float\n\t\t\tto higher CPU usage if the other container was not using it. If both tasks were 100%\n\t\t\tactive all of the time, they would be limited to 512 CPU units.
\nOn Linux container instances, the Docker daemon on the container instance uses the CPU\n\t\t\tvalue to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value\n\t\t\tthat the Linux kernel allows is 2, and the\n\t\t\tmaximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you\n\t\t\tcan use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2\n\t\t\t(including null) or above 262144, the behavior varies based on your Amazon ECS container agent\n\t\t\tversion:
\n\n Agent versions less than or equal to 1.1.0:\n\t\t\t\t\tNull and zero CPU values are passed to Docker as 0, which Docker then converts\n\t\t\t\t\tto 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux\n\t\t\t\t\tkernel converts to two CPU shares.
\n\n Agent versions greater than or equal to 1.2.0:\n\t\t\t\t\tNull, zero, and CPU values of 1 are passed to Docker as 2.
\n\n Agent versions greater than or equal to\n\t\t\t\t\t\t1.84.0: CPU values greater than 256 vCPU are passed to Docker as\n\t\t\t\t\t256, which is equivalent to 262144 CPU shares.
\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a\n\t\t\tquota. Windows containers only have access to the specified amount of CPU that's\n\t\t\tdescribed in the task definition. A null or zero CPU value is passed to Docker as\n\t\t\t\t0, which Windows interprets as 1% of one CPU.
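A minimal TypeScript sketch of the CPU-share arithmetic above, assuming a one-vCPU Fargate task whose containers reserve less than the task-level value (family, images, and numbers are placeholders):

import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
await client.send(
  new RegisterTaskDefinitionCommand({
    family: "example-cpu-shares",
    requiresCompatibilities: ["FARGATE"],
    networkMode: "awsvpc",
    cpu: "1024",   // task level: one vCPU (1,024 CPU units)
    memory: "2048",
    containerDefinitions: [
      // Container-level cpu values become relative CpuShares weights;
      // together they stay below the task-level 1,024 units.
      { name: "web", image: "public.ecr.aws/nginx/nginx:latest", cpu: 512, essential: true },
      { name: "sidecar", image: "public.ecr.aws/docker/library/busybox:latest", cpu: 256, essential: false },
    ],
  })
);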
The amount (in MiB) of memory to present to the container. If your container attempts\n\t\t\tto exceed the memory specified here, the container is killed. The total amount of memory\n\t\t\treserved for all containers within a task must be lower than the task\n\t\t\t\tmemory value, if one is specified. This parameter maps to\n\t\t\t\tMemory in the Create a container section of the\n\t\t\tDocker Remote API and the --memory option to docker run.
If using the Fargate launch type, this parameter is optional.
\nIf using the EC2 launch type, you must specify either a task-level\n\t\t\tmemory value or a container-level memory value. If you specify both a container-level\n\t\t\t\tmemory and memoryReservation value, memory\n\t\t\tmust be greater than memoryReservation. If you specify\n\t\t\t\tmemoryReservation, then that value is subtracted from the available\n\t\t\tmemory resources for the container instance where the container is placed. Otherwise,\n\t\t\tthe value of memory is used.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 6 MiB of memory for your containers.
\nThe Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 4 MiB of memory for your containers.
" + "smithy.api#documentation": "The amount (in MiB) of memory to present to the container. If your container attempts\n\t\t\tto exceed the memory specified here, the container is killed. The total amount of memory\n\t\t\treserved for all containers within a task must be lower than the task\n\t\t\t\tmemory value, if one is specified. This parameter maps to\n\t\t\tMemory in thethe docker create-container command and the --memory option to docker run.
If using the Fargate launch type, this parameter is optional.
\nIf using the EC2 launch type, you must specify either a task-level\n\t\t\tmemory value or a container-level memory value. If you specify both a container-level\n\t\t\t\tmemory and memoryReservation value, memory\n\t\t\tmust be greater than memoryReservation. If you specify\n\t\t\t\tmemoryReservation, then that value is subtracted from the available\n\t\t\tmemory resources for the container instance where the container is placed. Otherwise,\n\t\t\tthe value of memory is used.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 6 MiB of memory for your containers.
\nThe Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 4 MiB of memory for your containers.
" } }, "memoryReservation": { "target": "com.amazonaws.ecs#BoxedInteger", "traits": { - "smithy.api#documentation": "The soft limit (in MiB) of memory to reserve for the container. When system memory is\n\t\t\tunder heavy contention, Docker attempts to keep the container memory to this soft limit.\n\t\t\tHowever, your container can consume more memory when it needs to, up to either the hard\n\t\t\tlimit specified with the memory parameter (if applicable), or all of the\n\t\t\tavailable memory on the container instance, whichever comes first. This parameter maps\n\t\t\tto MemoryReservation in the Create a container section of\n\t\t\tthe Docker Remote API and the --memory-reservation option to docker run.
If a task-level memory value is not specified, you must specify a non-zero integer for\n\t\t\tone or both of memory or memoryReservation in a container\n\t\t\tdefinition. If you specify both, memory must be greater than\n\t\t\t\tmemoryReservation. If you specify memoryReservation, then\n\t\t\tthat value is subtracted from the available memory resources for the container instance\n\t\t\twhere the container is placed. Otherwise, the value of memory is\n\t\t\tused.
For example, if your container normally uses 128 MiB of memory, but occasionally\n\t\t\tbursts to 256 MiB of memory for short periods of time, you can set a\n\t\t\t\tmemoryReservation of 128 MiB, and a memory hard limit of\n\t\t\t300 MiB. This configuration would allow the container to only reserve 128 MiB of memory\n\t\t\tfrom the remaining resources on the container instance, but also allow the container to\n\t\t\tconsume more memory resources when needed.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 6 MiB of memory for your containers.
\nThe Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 4 MiB of memory for your containers.
" + "smithy.api#documentation": "The soft limit (in MiB) of memory to reserve for the container. When system memory is\n\t\t\tunder heavy contention, Docker attempts to keep the container memory to this soft limit.\n\t\t\tHowever, your container can consume more memory when it needs to, up to either the hard\n\t\t\tlimit specified with the memory parameter (if applicable), or all of the\n\t\t\tavailable memory on the container instance, whichever comes first. This parameter maps\n\t\t\tto MemoryReservation in the the docker create-container command and the --memory-reservation option to docker run.
If a task-level memory value is not specified, you must specify a non-zero integer for\n\t\t\tone or both of memory or memoryReservation in a container\n\t\t\tdefinition. If you specify both, memory must be greater than\n\t\t\t\tmemoryReservation. If you specify memoryReservation, then\n\t\t\tthat value is subtracted from the available memory resources for the container instance\n\t\t\twhere the container is placed. Otherwise, the value of memory is\n\t\t\tused.
For example, if your container normally uses 128 MiB of memory, but occasionally\n\t\t\tbursts to 256 MiB of memory for short periods of time, you can set a\n\t\t\t\tmemoryReservation of 128 MiB, and a memory hard limit of\n\t\t\t300 MiB. This configuration would allow the container to only reserve 128 MiB of memory\n\t\t\tfrom the remaining resources on the container instance, but also allow the container to\n\t\t\tconsume more memory resources when needed.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 6 MiB of memory for your containers.
\nThe Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 4 MiB of memory for your containers.
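The 128 MiB reservation with a 300 MiB hard limit described above translates into a container definition like this (a minimal sketch; family, name, and image are placeholders):

import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
await client.send(
  new RegisterTaskDefinitionCommand({
    family: "example-memory-limits",
    containerDefinitions: [
      {
        name: "app",
        image: "public.ecr.aws/docker/library/node:20",
        essential: true,
        memoryReservation: 128, // soft limit: subtracted from instance capacity
        memory: 300,            // hard limit: the container is killed above this
      },
    ],
  })
);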
" } }, "links": { "target": "com.amazonaws.ecs#StringList", "traits": { - "smithy.api#documentation": "The links parameter allows containers to communicate with each other\n\t\t\twithout the need for port mappings. This parameter is only supported if the network mode\n\t\t\tof a task definition is bridge. The name:internalName\n\t\t\tconstruct is analogous to name:alias in Docker links.\n\t\t\tUp to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. For more information about linking Docker containers, go to\n\t\t\t\tLegacy container links\n\t\t\tin the Docker documentation. This parameter maps to Links in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--link option to docker\n\t\t\trun.
This parameter is not supported for Windows containers.
\nContainers that are collocated on a single container instance may be able to\n\t\t\t\tcommunicate with each other without requiring links or host port mappings. Network\n\t\t\t\tisolation is achieved on the container instance using security groups and VPC\n\t\t\t\tsettings.
\nThe links parameter allows containers to communicate with each other\n\t\t\twithout the need for port mappings. This parameter is only supported if the network mode\n\t\t\tof a task definition is bridge. The name:internalName\n\t\t\tconstruct is analogous to name:alias in Docker links.\n\t\t\tUp to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to Links in the docker create-container command and the\n\t\t\t\t--link option to docker\n\t\t\trun.
This parameter is not supported for Windows containers.
\nContainers that are collocated on a single container instance may be able to\n\t\t\t\tcommunicate with each other without requiring links or host port mappings. Network\n\t\t\t\tisolation is achieved on the container instance using security groups and VPC\n\t\t\t\tsettings.
\nThe list of port mappings for the container. Port mappings allow containers to access\n\t\t\tports on the host container instance to send or receive traffic.
\nFor task definitions that use the awsvpc network mode, only specify the\n\t\t\t\tcontainerPort. The hostPort can be left blank or it must\n\t\t\tbe the same value as the containerPort.
Port mappings on Windows use the NetNAT gateway address rather than\n\t\t\t\tlocalhost. There's no loopback for port mappings on Windows, so you\n\t\t\tcan't access a container's mapped port from the host itself.
This parameter maps to PortBindings in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--publish option to docker\n\t\t\t\trun. If the network mode of a task definition is set to none,\n\t\t\tthen you can't specify port mappings. If the network mode of a task definition is set to\n\t\t\t\thost, then host ports must either be undefined or they must match the\n\t\t\tcontainer port in the port mapping.
After a task reaches the RUNNING status, manual and automatic host\n\t\t\t\tand container port assignments are visible in the Network\n\t\t\t\t\tBindings section of a container description for a selected task in\n\t\t\t\tthe Amazon ECS console. The assignments are also visible in the\n\t\t\t\t\tnetworkBindings section DescribeTasks\n\t\t\t\tresponses.
The list of port mappings for the container. Port mappings allow containers to access\n\t\t\tports on the host container instance to send or receive traffic.
\nFor task definitions that use the awsvpc network mode, only specify the\n\t\t\t\tcontainerPort. The hostPort can be left blank or it must\n\t\t\tbe the same value as the containerPort.
Port mappings on Windows use the NetNAT gateway address rather than\n\t\t\t\tlocalhost. There's no loopback for port mappings on Windows, so you\n\t\t\tcan't access a container's mapped port from the host itself.
This parameter maps to PortBindings in the\n\t\t\tdocker create-container command and the\n\t\t\t\t--publish option to docker\n\t\t\t\trun. If the network mode of a task definition is set to none,\n\t\t\tthen you can't specify port mappings. If the network mode of a task definition is set to\n\t\t\t\thost, then host ports must either be undefined or they must match the\n\t\t\tcontainer port in the port mapping.
After a task reaches the RUNNING status, manual and automatic host\n\t\t\t\tand container port assignments are visible in the Network\n\t\t\t\t\tBindings section of a container description for a selected task in\n\t\t\t\tthe Amazon ECS console. The assignments are also visible in the\n\t\t\t\t\tnetworkBindings section DescribeTasks\n\t\t\t\tresponses.
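A minimal TypeScript sketch of the awsvpc rule above, where only containerPort is given and the host port is implied (family and image are placeholders):

import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
await client.send(
  new RegisterTaskDefinitionCommand({
    family: "example-port-mappings",
    requiresCompatibilities: ["FARGATE"],
    networkMode: "awsvpc",
    cpu: "256",
    memory: "512",
    containerDefinitions: [
      {
        name: "web",
        image: "public.ecr.aws/nginx/nginx:latest",
        essential: true,
        // With awsvpc, hostPort must be omitted or equal containerPort.
        portMappings: [{ containerPort: 80, protocol: "tcp" }],
      },
    ],
  })
);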
If the essential parameter of a container is marked as true,\n\t\t\tand that container fails or stops for any reason, all other containers that are part of\n\t\t\tthe task are stopped. If the essential parameter of a container is marked\n\t\t\tas false, its failure doesn't affect the rest of the containers in a task.\n\t\t\tIf this parameter is omitted, a container is assumed to be essential.
All tasks must have at least one essential container. If you have an application\n\t\t\tthat's composed of multiple containers, group containers that are used for a common\n\t\t\tpurpose into components, and separate the different components into multiple task\n\t\t\tdefinitions. For more information, see Application\n\t\t\t\tArchitecture in the Amazon Elastic Container Service Developer Guide.
" } }, + "restartPolicy": { + "target": "com.amazonaws.ecs#ContainerRestartPolicy", + "traits": { + "smithy.api#documentation": "The restart policy for a container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the\n\t\t\ttask. For more information, see Restart individual containers in Amazon ECS tasks with container restart policies in the Amazon Elastic Container Service Developer Guide.
" + } + }, "entryPoint": { "target": "com.amazonaws.ecs#StringList", "traits": { - "smithy.api#documentation": "Early versions of the Amazon ECS container agent don't properly handle\n\t\t\t\t\tentryPoint parameters. If you have problems using\n\t\t\t\t\tentryPoint, update your container agent or enter your commands and\n\t\t\t\targuments as command array items instead.
The entry point that's passed to the container. This parameter maps to\n\t\t\t\tEntrypoint in the Create a container section of the\n\t\t\tDocker Remote API and the --entrypoint option to docker run. For more information, see https://docs.docker.com/engine/reference/builder/#entrypoint.
Early versions of the Amazon ECS container agent don't properly handle\n\t\t\t\t\tentryPoint parameters. If you have problems using\n\t\t\t\t\tentryPoint, update your container agent or enter your commands and\n\t\t\t\targuments as command array items instead.
The entry point that's passed to the container. This parameter maps to\n\t\t\tEntrypoint in the docker create-container command and the --entrypoint option to docker run.
The command that's passed to the container. This parameter maps to Cmd in\n\t\t\tthe Create a container section of the Docker Remote API and the\n\t\t\t\tCOMMAND parameter to docker\n\t\t\t\trun. For more information, see https://docs.docker.com/engine/reference/builder/#cmd. If there are multiple arguments, each\n\t\t\targument is a separated string in the array.
The command that's passed to the container. This parameter maps to Cmd in\n\t\t\tthe docker create-container command and the\n\t\t\t\tCOMMAND parameter to docker\n\t\t\t\trun. If there are multiple arguments, each\n\t\t\targument is a separated string in the array.
The environment variables to pass to a container. This parameter maps to\n\t\t\t\tEnv in the Create a container section of the\n\t\t\tDocker Remote API and the --env option to docker run.
We don't recommend that you use plaintext environment variables for sensitive\n\t\t\t\tinformation, such as credential data.
\nThe environment variables to pass to a container. This parameter maps to\n\t\t\tEnv in the docker create-container command and the --env option to docker run.
We don't recommend that you use plaintext environment variables for sensitive\n\t\t\t\tinformation, such as credential data.
\nA list of files containing the environment variables to pass to a container. This\n\t\t\tparameter maps to the --env-file option to docker run.
You can specify up to ten environment files. The file must have a .env\n\t\t\tfile extension. Each line in an environment file contains an environment variable in\n\t\t\t\tVARIABLE=VALUE format. Lines beginning with # are treated\n\t\t\tas comments and are ignored. For more information about the environment variable file\n\t\t\tsyntax, see Declare default\n\t\t\t\tenvironment variables in file.
If there are environment variables specified using the environment\n\t\t\tparameter in a container definition, they take precedence over the variables contained\n\t\t\twithin an environment file. If multiple environment files are specified that contain the\n\t\t\tsame variable, they're processed from the top down. We recommend that you use unique\n\t\t\tvariable names. For more information, see Specifying Environment\n\t\t\t\tVariables in the Amazon Elastic Container Service Developer Guide.
A list of files containing the environment variables to pass to a container. This\n\t\t\tparameter maps to the --env-file option to docker run.
You can specify up to ten environment files. The file must have a .env\n\t\t\tfile extension. Each line in an environment file contains an environment variable in\n\t\t\t\tVARIABLE=VALUE format. Lines beginning with # are treated\n\t\t\tas comments and are ignored.
If there are environment variables specified using the environment\n\t\t\tparameter in a container definition, they take precedence over the variables contained\n\t\t\twithin an environment file. If multiple environment files are specified that contain the\n\t\t\tsame variable, they're processed from the top down. We recommend that you use unique\n\t\t\tvariable names. For more information, see Specifying Environment\n\t\t\t\tVariables in the Amazon Elastic Container Service Developer Guide.
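A minimal TypeScript sketch combining inline variables with an S3-hosted .env file, following the precedence rules above (the bucket ARN and values are placeholders):

import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
await client.send(
  new RegisterTaskDefinitionCommand({
    family: "example-environment",
    containerDefinitions: [
      {
        name: "app",
        image: "public.ecr.aws/docker/library/node:20",
        essential: true,
        memory: 512,
        // Inline variables take precedence over variables from files.
        environment: [{ name: "STAGE", value: "test" }],
        // The object must have a .env extension; "s3" is the supported type.
        environmentFiles: [{ value: "arn:aws:s3:::example-bucket/app.env", type: "s3" }],
      },
    ],
  })
);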
The mount points for data volumes in your container.
\nThis parameter maps to Volumes in the Create a container\n\t\t\tsection of the Docker Remote API and the --volume option to docker run.
Windows containers can mount whole directories on the same drive as\n\t\t\t\t$env:ProgramData. Windows containers can't mount directories on a\n\t\t\tdifferent drive, and mount point can't be across drives.
The mount points for data volumes in your container.
\nThis parameter maps to Volumes in the docker create-container command and the --volume option to docker run.
Windows containers can mount whole directories on the same drive as\n\t\t\t\t$env:ProgramData. Windows containers can't mount directories on a\n\t\t\tdifferent drive, and mount point can't be across drives.
Data volumes to mount from another container. This parameter maps to\n\t\t\t\tVolumesFrom in the Create a container section of the\n\t\t\tDocker Remote API and the --volumes-from option to docker run.
Data volumes to mount from another container. This parameter maps to\n\t\t\tVolumesFrom in the docker create-container command and the --volumes-from option to docker run.
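A minimal TypeScript sketch of a task-level volume consumed through mountPoints, with a second container inheriting it through volumesFrom (all names are placeholders):

import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
await client.send(
  new RegisterTaskDefinitionCommand({
    family: "example-volumes",
    volumes: [{ name: "shared-data", host: {} }],
    containerDefinitions: [
      {
        name: "writer",
        image: "public.ecr.aws/docker/library/busybox:latest",
        essential: true,
        memory: 256,
        mountPoints: [{ sourceVolume: "shared-data", containerPath: "/data", readOnly: false }],
      },
      {
        name: "reader",
        image: "public.ecr.aws/docker/library/busybox:latest",
        essential: false,
        memory: 256,
        // Inherit every mount point declared by the "writer" container.
        volumesFrom: [{ sourceContainer: "writer", readOnly: true }],
      },
    ],
  })
);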
Time duration (in seconds) to wait before giving up on resolving dependencies for a\n\t\t\tcontainer. For example, you specify two containers in a task definition with containerA\n\t\t\thaving a dependency on containerB reaching a COMPLETE,\n\t\t\tSUCCESS, or HEALTHY status. If a startTimeout\n\t\t\tvalue is specified for containerB and it doesn't reach the desired status within that\n\t\t\ttime then containerA gives up and not start. This results in the task transitioning to a\n\t\t\t\tSTOPPED state.
When the ECS_CONTAINER_START_TIMEOUT container agent configuration\n\t\t\t\tvariable is used, it's enforced independently from this start timeout value.
For tasks using the Fargate launch type, the task or service requires\n\t\t\tthe following platforms:
\nLinux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
For tasks using the EC2 launch type, your container instances require at\n\t\t\tleast version 1.26.0 of the container agent to use a container start\n\t\t\ttimeout value. However, we recommend using the latest container agent version. For\n\t\t\tinformation about checking your agent version and updating to the latest version, see\n\t\t\t\tUpdating the Amazon ECS\n\t\t\t\tContainer Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI,\n\t\t\tyour instance needs at least version 1.26.0-1 of the ecs-init\n\t\t\tpackage. If your container instances are launched from version 20190301 or\n\t\t\tlater, then they contain the required versions of the container agent and\n\t\t\t\tecs-init. For more information, see Amazon ECS-optimized Linux AMI\n\t\t\tin the Amazon Elastic Container Service Developer Guide.
The valid values are 2-120 seconds.
" + "smithy.api#documentation": "Time duration (in seconds) to wait before giving up on resolving dependencies for a\n\t\t\tcontainer. For example, you specify two containers in a task definition with containerA\n\t\t\thaving a dependency on containerB reaching a COMPLETE,\n\t\t\tSUCCESS, or HEALTHY status. If a startTimeout\n\t\t\tvalue is specified for containerB and it doesn't reach the desired status within that\n\t\t\ttime then containerA gives up and not start. This results in the task transitioning to a\n\t\t\t\tSTOPPED state.
When the ECS_CONTAINER_START_TIMEOUT container agent configuration\n\t\t\t\tvariable is used, it's enforced independently from this start timeout value.
For tasks using the Fargate launch type, the task or service requires\n\t\t\tthe following platforms:
\nLinux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
For tasks using the EC2 launch type, your container instances require at\n\t\t\tleast version 1.26.0 of the container agent to use a container start\n\t\t\ttimeout value. However, we recommend using the latest container agent version. For\n\t\t\tinformation about checking your agent version and updating to the latest version, see\n\t\t\t\tUpdating the Amazon ECS\n\t\t\t\tContainer Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI,\n\t\t\tyour instance needs at least version 1.26.0-1 of the ecs-init\n\t\t\tpackage. If your container instances are launched from version 20190301 or\n\t\t\tlater, then they contain the required versions of the container agent and\n\t\t\t\tecs-init. For more information, see Amazon ECS-optimized Linux AMI\n\t\t\tin the Amazon Elastic Container Service Developer Guide.
The valid values for Fargate are 2-120 seconds.
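The containerA/containerB scenario above maps onto dependsOn plus startTimeout roughly as follows (a minimal sketch; names, images, and the probe command are placeholders):

import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
await client.send(
  new RegisterTaskDefinitionCommand({
    family: "example-start-timeout",
    containerDefinitions: [
      {
        name: "db",
        image: "public.ecr.aws/docker/library/postgres:16",
        essential: true,
        memory: 512,
        healthCheck: { command: ["CMD-SHELL", "pg_isready || exit 1"], interval: 10, timeout: 5, retries: 3 },
        // If "db" isn't HEALTHY within 120 seconds, "app" gives up and the
        // task moves to STOPPED.
        startTimeout: 120,
      },
      {
        name: "app",
        image: "public.ecr.aws/docker/library/node:20",
        essential: true,
        memory: 512,
        dependsOn: [{ containerName: "db", condition: "HEALTHY" }],
      },
    ],
  })
);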
" } }, "stopTimeout": { @@ -2400,103 +2406,103 @@ "hostname": { "target": "com.amazonaws.ecs#String", "traits": { - "smithy.api#documentation": "The hostname to use for your container. This parameter maps to Hostname\n\t\t\tin the Create a container section of the Docker Remote API and the\n\t\t\t\t--hostname option to docker\n\t\t\t\trun.
The hostname parameter is not supported if you're using the\n\t\t\t\t\tawsvpc network mode.
The hostname to use for your container. This parameter maps to Hostname\n\t\t\tin the docker create-container command and the\n\t\t\t\t--hostname option to docker\n\t\t\t\trun.
The hostname parameter is not supported if you're using the\n\t\t\t\t\tawsvpc network mode.
The user to use inside the container. This parameter maps to User in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--user option to docker\n\t\t\trun.
When running tasks using the host network mode, don't run containers\n\t\t\t\tusing the root user (UID 0). We recommend using a non-root user for better\n\t\t\t\tsecurity.
You can specify the user using the following formats. If specifying a UID\n\t\t\tor GID, you must specify it as a positive integer.
\n user\n
\n user:group\n
\n uid\n
\n uid:gid\n
\n user:gid\n
\n uid:group\n
This parameter is not supported for Windows containers.
\nThe user to use inside the container. This parameter maps to User in the docker create-container command and the\n\t\t\t\t--user option to docker\n\t\t\trun.
When running tasks using the host network mode, don't run containers\n\t\t\t\tusing the root user (UID 0). We recommend using a non-root user for better\n\t\t\t\tsecurity.
You can specify the user using the following formats. If specifying a UID\n\t\t\tor GID, you must specify it as a positive integer.
\n user\n
\n user:group\n
\n uid\n
\n uid:gid\n
\n user:gid\n
\n uid:group\n
This parameter is not supported for Windows containers.
\nThe working directory to run commands inside the container in. This parameter maps to\n\t\t\t\tWorkingDir in the Create a container section of the\n\t\t\tDocker Remote API and the --workdir option to docker run.
The working directory to run commands inside the container in. This parameter maps to\n\t\t\tWorkingDir in the docker create-container command and the --workdir option to docker run.
When this parameter is true, networking is off within the container. This parameter\n\t\t\tmaps to NetworkDisabled in the Create a container section\n\t\t\tof the Docker Remote API.
This parameter is not supported for Windows containers.
\nWhen this parameter is true, networking is off within the container. This parameter\n\t\t\tmaps to NetworkDisabled in the docker create-container command.
This parameter is not supported for Windows containers.
\nWhen this parameter is true, the container is given elevated privileges on the host\n\t\t\tcontainer instance (similar to the root user). This parameter maps to\n\t\t\t\tPrivileged in the Create a container section of the\n\t\t\tDocker Remote API and the --privileged option to docker run.
This parameter is not supported for Windows containers or tasks run on Fargate.
\nWhen this parameter is true, the container is given elevated privileges on the host\n\t\t\tcontainer instance (similar to the root user). This parameter maps to\n\t\t\tPrivileged in the docker create-container command and the --privileged option to docker run.
This parameter is not supported for Windows containers or tasks run on Fargate.
\nWhen this parameter is true, the container is given read-only access to its root file\n\t\t\tsystem. This parameter maps to ReadonlyRootfs in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--read-only option to docker\n\t\t\t\trun.
This parameter is not supported for Windows containers.
\nWhen this parameter is true, the container is given read-only access to its root file\n\t\t\tsystem. This parameter maps to ReadonlyRootfs in the docker create-container command and the\n\t\t\t\t--read-only option to docker\n\t\t\t\trun.
This parameter is not supported for Windows containers.
\nA list of DNS servers that are presented to the container. This parameter maps to\n\t\t\t\tDns in the Create a container section of the\n\t\t\tDocker Remote API and the --dns option to docker run.
This parameter is not supported for Windows containers.
\nA list of DNS servers that are presented to the container. This parameter maps to\n\t\t\tDns in the docker create-container command and the --dns option to docker run.
This parameter is not supported for Windows containers.
\nA list of DNS search domains that are presented to the container. This parameter maps\n\t\t\tto DnsSearch in the Create a container section of the\n\t\t\tDocker Remote API and the --dns-search option to docker run.
This parameter is not supported for Windows containers.
\nA list of DNS search domains that are presented to the container. This parameter maps\n\t\t\tto DnsSearch in the docker create-container command and the --dns-search option to docker run.
This parameter is not supported for Windows containers.
\nA list of hostnames and IP address mappings to append to the /etc/hosts\n\t\t\tfile on the container. This parameter maps to ExtraHosts in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--add-host option to docker\n\t\t\t\trun.
This parameter isn't supported for Windows containers or tasks that use the\n\t\t\t\t\tawsvpc network mode.
A list of hostnames and IP address mappings to append to the /etc/hosts\n\t\t\tfile on the container. This parameter maps to ExtraHosts in the docker create-container command and the\n\t\t\t\t--add-host option to docker\n\t\t\t\trun.
This parameter isn't supported for Windows containers or tasks that use the\n\t\t\t\t\tawsvpc network mode.
A list of strings to provide custom configuration for multiple security systems. For\n\t\t\tmore information about valid values, see Docker\n\t\t\t\tRun Security Configuration. This field isn't valid for containers in tasks\n\t\t\tusing the Fargate launch type.
\nFor Linux tasks on EC2, this parameter can be used to reference custom\n\t\t\tlabels for SELinux and AppArmor multi-level security systems.
\nFor any tasks on EC2, this parameter can be used to reference a\n\t\t\tcredential spec file that configures a container for Active Directory authentication.\n\t\t\tFor more information, see Using gMSAs for Windows\n\t\t\t\tContainers and Using gMSAs for Linux\n\t\t\t\tContainers in the Amazon Elastic Container Service Developer Guide.
\nThis parameter maps to SecurityOpt in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--security-opt option to docker\n\t\t\t\trun.
The Amazon ECS container agent running on a container instance must register with the\n\t\t\t\t\tECS_SELINUX_CAPABLE=true or ECS_APPARMOR_CAPABLE=true\n\t\t\t\tenvironment variables before containers placed on that instance can use these\n\t\t\t\tsecurity options. For more information, see Amazon ECS Container\n\t\t\t\t\tAgent Configuration in the Amazon Elastic Container Service Developer Guide.
For more information about valid values, see Docker\n\t\t\t\tRun Security Configuration.
\nValid values: \"no-new-privileges\" | \"apparmor:PROFILE\" | \"label:value\" |\n\t\t\t\"credentialspec:CredentialSpecFilePath\"
" + "smithy.api#documentation": "A list of strings to provide custom configuration for multiple security systems. This field isn't valid for containers in tasks\n\t\t\tusing the Fargate launch type.
\nFor Linux tasks on EC2, this parameter can be used to reference custom\n\t\t\tlabels for SELinux and AppArmor multi-level security systems.
\nFor any tasks on EC2, this parameter can be used to reference a\n\t\t\tcredential spec file that configures a container for Active Directory authentication.\n\t\t\tFor more information, see Using gMSAs for Windows\n\t\t\t\tContainers and Using gMSAs for Linux\n\t\t\t\tContainers in the Amazon Elastic Container Service Developer Guide.
\nThis parameter maps to SecurityOpt in the docker create-container command and the\n\t\t\t\t--security-opt option to docker\n\t\t\t\trun.
The Amazon ECS container agent running on a container instance must register with the\n\t\t\t\t\tECS_SELINUX_CAPABLE=true or ECS_APPARMOR_CAPABLE=true\n\t\t\t\tenvironment variables before containers placed on that instance can use these\n\t\t\t\tsecurity options. For more information, see Amazon ECS Container\n\t\t\t\t\tAgent Configuration in the Amazon Elastic Container Service Developer Guide.
Valid values: \"no-new-privileges\" | \"apparmor:PROFILE\" | \"label:value\" |\n\t\t\t\"credentialspec:CredentialSpecFilePath\"
" } }, "interactive": { "target": "com.amazonaws.ecs#BoxedBoolean", "traits": { - "smithy.api#documentation": "When this parameter is true, you can deploy containerized applications\n\t\t\tthat require stdin or a tty to be allocated. This parameter\n\t\t\tmaps to OpenStdin in the Create a container section of the\n\t\t\tDocker Remote API and the --interactive option to docker run.
When this parameter is true, you can deploy containerized applications\n\t\t\tthat require stdin or a tty to be allocated. This parameter\n\t\t\tmaps to OpenStdin in the docker create-container command and the --interactive option to docker run.
When this parameter is true, a TTY is allocated. This parameter maps to\n\t\t\t\tTty in the Create a container section of the\n\t\t\tDocker Remote API and the --tty option to docker run.
When this parameter is true, a TTY is allocated. This parameter maps to\n\t\t\tTty in the docker create-container command and the --tty option to docker run.
A key/value map of labels to add to the container. This parameter maps to\n\t\t\t\tLabels in the Create a container section of the\n\t\t\tDocker Remote API and the --label option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'\n
A key/value map of labels to add to the container. This parameter maps to\n\t\t\tLabels in the docker create-container command and the --label option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'\n
A list of ulimits to set in the container. If a ulimit value\n\t\t\tis specified in a task definition, it overrides the default values set by Docker. This\n\t\t\tparameter maps to Ulimits in the Create a container section\n\t\t\tof the Docker Remote API and the --ulimit option to docker run. Valid naming values are displayed\n\t\t\tin the Ulimit data type.
Amazon ECS tasks hosted on Fargate use the default\n\t\t\t\t\t\t\tresource limit values set by the operating system with the exception of\n\t\t\t\t\t\t\tthe nofile resource limit parameter which Fargate\n\t\t\t\t\t\t\toverrides. The nofile resource limit sets a restriction on\n\t\t\t\t\t\t\tthe number of open files that a container can use. The default\n\t\t\t\t\t\t\t\tnofile soft limit is 1024 and the default hard limit\n\t\t\t\t\t\t\tis 65535.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'\n
This parameter is not supported for Windows containers.
\nA list of ulimits to set in the container. If a ulimit value\n\t\t\tis specified in a task definition, it overrides the default values set by Docker. This\n\t\t\tparameter maps to Ulimits in the docker create-container command and the --ulimit option to docker run. Valid naming values are displayed\n\t\t\tin the Ulimit data type.
Amazon ECS tasks hosted on Fargate use the default\n\t\t\t\t\t\t\tresource limit values set by the operating system with the exception of\n\t\t\t\t\t\t\tthe nofile resource limit parameter which Fargate\n\t\t\t\t\t\t\toverrides. The nofile resource limit sets a restriction on\n\t\t\t\t\t\t\tthe number of open files that a container can use. The default\n\t\t\t\t\t\t\t\tnofile soft limit is 65535 and the default hard limit\n\t\t\t\t\t\t\tis 65535.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'\n
This parameter is not supported for Windows containers.
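A minimal TypeScript sketch that pins the nofile limit discussed above (family, name, and image are placeholders; the values match the quoted Fargate defaults):

import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
await client.send(
  new RegisterTaskDefinitionCommand({
    family: "example-ulimits",
    containerDefinitions: [
      {
        name: "app",
        image: "public.ecr.aws/docker/library/node:20",
        essential: true,
        memory: 512,
        // Caps the number of open files the container may hold.
        ulimits: [{ name: "nofile", softLimit: 65535, hardLimit: 65535 }],
      },
    ],
  })
);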
\nThe log configuration specification for the container.
\nThis parameter maps to LogConfig in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--log-driver option to docker\n\t\t\t\trun. By default, containers use the same logging driver that the Docker\n\t\t\tdaemon uses. However the container can use a different logging driver than the Docker\n\t\t\tdaemon by specifying a log driver with this parameter in the container definition. To\n\t\t\tuse a different logging driver for a container, the log system must be configured\n\t\t\tproperly on the container instance (or on a different log server for remote logging\n\t\t\toptions). For more information about the options for different supported log drivers,\n\t\t\tsee Configure\n\t\t\t\tlogging drivers in the Docker documentation.
Amazon ECS currently supports a subset of the logging drivers available to the Docker\n\t\t\t\tdaemon (shown in the LogConfiguration data type). Additional log\n\t\t\t\tdrivers may be available in future releases of the Amazon ECS container agent.
\nThis parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'\n
The Amazon ECS container agent running on a container instance must register the\n\t\t\t\tlogging drivers available on that instance with the\n\t\t\t\t\tECS_AVAILABLE_LOGGING_DRIVERS environment variable before\n\t\t\t\tcontainers placed on that instance can use these log configuration options. For more\n\t\t\t\tinformation, see Amazon ECS Container\n\t\t\t\t\tAgent Configuration in the Amazon Elastic Container Service Developer Guide.
The log configuration specification for the container.
\nThis parameter maps to LogConfig in the docker create-container command and the\n\t\t\t\t--log-driver option to docker\n\t\t\t\trun. By default, containers use the same logging driver that the Docker\n\t\t\tdaemon uses. However the container can use a different logging driver than the Docker\n\t\t\tdaemon by specifying a log driver with this parameter in the container definition. To\n\t\t\tuse a different logging driver for a container, the log system must be configured\n\t\t\tproperly on the container instance (or on a different log server for remote logging\n\t\t\toptions).
Amazon ECS currently supports a subset of the logging drivers available to the Docker\n\t\t\t\tdaemon (shown in the LogConfiguration data type). Additional log\n\t\t\t\tdrivers may be available in future releases of the Amazon ECS container agent.
\nThis parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'\n
The Amazon ECS container agent running on a container instance must register the\n\t\t\t\tlogging drivers available on that instance with the\n\t\t\t\t\tECS_AVAILABLE_LOGGING_DRIVERS environment variable before\n\t\t\t\tcontainers placed on that instance can use these log configuration options. For more\n\t\t\t\tinformation, see Amazon ECS Container\n\t\t\t\t\tAgent Configuration in the Amazon Elastic Container Service Developer Guide.
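A minimal TypeScript sketch of the log configuration above using the awslogs driver (log group, region, and prefix are placeholders):

import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
await client.send(
  new RegisterTaskDefinitionCommand({
    family: "example-logging",
    containerDefinitions: [
      {
        name: "app",
        image: "public.ecr.aws/docker/library/node:20",
        essential: true,
        memory: 512,
        logConfiguration: {
          logDriver: "awslogs", // one of the drivers the agent registers
          options: {
            "awslogs-group": "/ecs/example",
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "app",
          },
        },
      },
    ],
  })
);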
The container health check command and associated configuration parameters for the\n\t\t\tcontainer. This parameter maps to HealthCheck in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\tHEALTHCHECK parameter of docker\n\t\t\t\trun.
The container health check command and associated configuration parameters for the\n\t\t\tcontainer. This parameter maps to HealthCheck in the docker create-container command and the\n\t\t\t\tHEALTHCHECK parameter of docker\n\t\t\t\trun.
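The health check parameters referenced above might look like this in a container definition (a minimal sketch; the probe command and thresholds are placeholders):

import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
await client.send(
  new RegisterTaskDefinitionCommand({
    family: "example-health-check",
    containerDefinitions: [
      {
        name: "web",
        image: "public.ecr.aws/nginx/nginx:latest",
        essential: true,
        memory: 512,
        healthCheck: {
          command: ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
          interval: 30,    // seconds between probes
          timeout: 5,      // seconds before a probe counts as failed
          retries: 3,      // consecutive failures before UNHEALTHY
          startPeriod: 10, // grace period after container start
        },
      },
    ],
  })
);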
A list of namespaced kernel parameters to set in the container. This parameter maps to\n\t\t\t\tSysctls in the Create a container section of the\n\t\t\tDocker Remote API and the --sysctl option to docker run. For example, you can configure\n\t\t\t\tnet.ipv4.tcp_keepalive_time setting to maintain longer lived\n\t\t\tconnections.
A list of namespaced kernel parameters to set in the container. This parameter maps to\n\t\t\tSysctls in the docker create-container command and the --sysctl option to docker run. For example, you can configure the\n\t\t\t\tnet.ipv4.tcp_keepalive_time setting to maintain longer lived\n\t\t\tconnections.
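The tcp_keepalive_time example above, expressed as a container definition (a minimal sketch; the value is illustrative):

import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
await client.send(
  new RegisterTaskDefinitionCommand({
    family: "example-sysctls",
    containerDefinitions: [
      {
        name: "app",
        image: "public.ecr.aws/docker/library/node:20",
        essential: true,
        memory: 512,
        // Keep idle TCP connections alive longer than the kernel default.
        systemControls: [{ namespace: "net.ipv4.tcp_keepalive_time", value: "600" }],
      },
    ],
  })
);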
Specifies whether a restart policy is enabled for the\n\t\t\tcontainer.
", + "smithy.api#required": {} + } + }, + "ignoredExitCodes": { + "target": "com.amazonaws.ecs#IntegerList", + "traits": { + "smithy.api#documentation": "A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit\n\t\t\tcodes. By default, Amazon ECS does not ignore\n\t\t\tany exit codes.
" + } + }, + "restartAttemptPeriod": { + "target": "com.amazonaws.ecs#BoxedInteger", + "traits": { + "smithy.api#documentation": "A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be\n\t\t\trestarted only once every restartAttemptPeriod seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum\n\t\t\trestartAttemptPeriod of 60 seconds and a maximum restartAttemptPeriod of 1800 seconds.\n\t\t\tBy default, a container must run for 300 seconds before it can be restarted.
You can enable a restart policy for each container defined in your\n\t\t\ttask definition, to overcome transient failures faster and maintain task availability. When you\n\t\t\tenable a restart policy for a container, Amazon ECS can restart the container if it exits, without needing to replace\n\t\t\tthe task. For more information, see Restart individual containers\n\t\t\t\tin Amazon ECS tasks with container restart policies in the Amazon Elastic Container Service Developer Guide.
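Putting the three new fields together, a container-level restart policy registered through the regenerated client looks roughly like this (a minimal sketch; family, name, image, and values are placeholders):

import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
await client.send(
  new RegisterTaskDefinitionCommand({
    family: "example-restart-policy",
    containerDefinitions: [
      {
        name: "worker",
        image: "public.ecr.aws/docker/library/node:20",
        essential: true,
        memory: 512,
        restartPolicy: {
          enabled: true,             // required whenever a policy is set
          ignoredExitCodes: [0],     // don't restart on a clean exit; up to 50 codes
          restartAttemptPeriod: 180, // 60-1800 seconds; defaults to 300
        },
      },
    ],
  })
);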
" + } + }, "com.amazonaws.ecs#ContainerStateChange": { "type": "structure", "members": { @@ -3103,7 +3136,7 @@ } ], "traits": { - "smithy.api#documentation": "Runs and maintains your desired number of tasks from a specified task definition. If\n\t\t\tthe number of tasks running in a service drops below the desiredCount,\n\t\t\tAmazon ECS runs another copy of the task in the specified cluster. To update an existing\n\t\t\tservice, see the UpdateService action.
On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.
\nIn addition to maintaining the desired count of tasks in your service, you can\n\t\t\toptionally run your service behind one or more load balancers. The load balancers\n\t\t\tdistribute traffic across the tasks that are associated with the service. For more\n\t\t\tinformation, see Service load balancing in the Amazon Elastic Container Service Developer Guide.
\nYou can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. volumeConfigurations is only supported for REPLICA\n\t\t\tservice and not DAEMON service. For more infomation, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.
Tasks for services that don't use a load balancer are considered healthy if they're in\n\t\t\tthe RUNNING state. Tasks for services that use a load balancer are\n\t\t\tconsidered healthy if they're in the RUNNING state and are reported as\n\t\t\thealthy by the load balancer.
There are two service scheduler strategies available:
\n\n REPLICA - The replica scheduling strategy places and\n\t\t\t\t\tmaintains your desired number of tasks across your cluster. By default, the\n\t\t\t\t\tservice scheduler spreads tasks across Availability Zones. You can use task\n\t\t\t\t\tplacement strategies and constraints to customize task placement decisions. For\n\t\t\t\t\tmore information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide.
\n DAEMON - The daemon scheduling strategy deploys exactly one\n\t\t\t\t\ttask on each active container instance that meets all of the task placement\n\t\t\t\t\tconstraints that you specify in your cluster. The service scheduler also\n\t\t\t\t\tevaluates the task placement constraints for running tasks. It also stops tasks\n\t\t\t\t\tthat don't meet the placement constraints. When using this strategy, you don't\n\t\t\t\t\tneed to specify a desired number of tasks, a task placement strategy, or use\n\t\t\t\t\tService Auto Scaling policies. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide.
You can optionally specify a deployment configuration for your service. The deployment\n\t\t\tis initiated by changing properties. For example, the deployment might be initiated by\n\t\t\tthe task definition or by your desired count of a service. This is done with an UpdateService operation. The default value for a replica service for\n\t\t\t\tminimumHealthyPercent is 100%. The default value for a daemon service\n\t\t\tfor minimumHealthyPercent is 0%.
If a service uses the ECS deployment controller, the minimum healthy\n\t\t\tpercent represents a lower limit on the number of tasks in a service that must remain in\n\t\t\tthe RUNNING state during a deployment. Specifically, it represents it as a\n\t\t\tpercentage of your desired number of tasks (rounded up to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can deploy without using additional cluster capacity. For example, if you\n\t\t\tset your service to have desired number of four tasks and a minimum healthy percent of\n\t\t\t50%, the scheduler might stop two existing tasks to free up cluster capacity before\n\t\t\tstarting two new tasks. If they're in the RUNNING state, tasks for services\n\t\t\tthat don't use a load balancer are considered healthy . If they're in the\n\t\t\t\tRUNNING state and reported as healthy by the load balancer, tasks for\n\t\t\tservices that do use a load balancer are considered healthy . The\n\t\t\tdefault value for minimum healthy percent is 100%.
If a service uses the ECS deployment controller, the maximum percent parameter represents an upper limit on the\n\t\t\tnumber of tasks in a service that are allowed in the RUNNING or\n\t\t\t\tPENDING state during a deployment. Specifically, it represents it as a\n\t\t\tpercentage of the desired number of tasks (rounded down to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can define the deployment batch size. For example, if your service has a\n\t\t\tdesired number of four tasks and a maximum percent value of 200%, the scheduler may\n\t\t\tstart four new tasks before stopping the four older tasks (provided that the cluster\n\t\t\tresources required to do this are available). The default value for maximum percent is\n\t\t\t200%.
If a service uses either the CODE_DEPLOY or EXTERNAL\n\t\t\tdeployment controller types and tasks that use the EC2 launch type, the\n\t\t\t\tminimum healthy percent and maximum percent values are used only to define the lower and upper limit\n\t\t\ton the number of the tasks in the service that remain in the RUNNING state.\n\t\t\tThis is while the container instances are in the DRAINING state. If the\n\t\t\ttasks in the service use the Fargate launch type, the minimum healthy\n\t\t\tpercent and maximum percent values aren't used. This is the case even if they're\n\t\t\tcurrently visible when describing your service.
When creating a service that uses the EXTERNAL deployment controller, you\n\t\t\tcan specify only parameters that aren't controlled at the task set level. The only\n\t\t\trequired parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide.
When the service scheduler launches new tasks, it determines task placement. For information\n\t\t\tabout task placement and task placement strategies, see Amazon ECS\n\t\t\t\ttask placement in the Amazon Elastic Container Service Developer Guide\n
\nStarting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.
", + "smithy.api#documentation": "Runs and maintains your desired number of tasks from a specified task definition. If\n\t\t\tthe number of tasks running in a service drops below the desiredCount,\n\t\t\tAmazon ECS runs another copy of the task in the specified cluster. To update an existing\n\t\t\tservice, see the UpdateService action.
On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.
\nIn addition to maintaining the desired count of tasks in your service, you can\n\t\t\toptionally run your service behind one or more load balancers. The load balancers\n\t\t\tdistribute traffic across the tasks that are associated with the service. For more\n\t\t\tinformation, see Service load balancing in the Amazon Elastic Container Service Developer Guide.
\nYou can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. volumeConfigurations is only supported for REPLICA\n\t\t\tservices, not DAEMON services. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.
Tasks for services that don't use a load balancer are considered healthy if they're in\n\t\t\tthe RUNNING state. Tasks for services that use a load balancer are\n\t\t\tconsidered healthy if they're in the RUNNING state and are reported as\n\t\t\thealthy by the load balancer.
There are two service scheduler strategies available:
\n\n REPLICA - The replica scheduling strategy places and\n\t\t\t\t\tmaintains your desired number of tasks across your cluster. By default, the\n\t\t\t\t\tservice scheduler spreads tasks across Availability Zones. You can use task\n\t\t\t\t\tplacement strategies and constraints to customize task placement decisions. For\n\t\t\t\t\tmore information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide.
\n DAEMON - The daemon scheduling strategy deploys exactly one\n\t\t\t\t\ttask on each active container instance that meets all of the task placement\n\t\t\t\t\tconstraints that you specify in your cluster. The service scheduler also\n\t\t\t\t\tevaluates the task placement constraints for running tasks. It also stops tasks\n\t\t\t\t\tthat don't meet the placement constraints. When using this strategy, you don't\n\t\t\t\t\tneed to specify a desired number of tasks, a task placement strategy, or use\n\t\t\t\t\tService Auto Scaling policies. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide.
You can optionally specify a deployment configuration for your service. The deployment\n\t\t\tis initiated by changing properties. For example, the deployment might be initiated by\n\t\t\tchanging the task definition or the desired count of a service. This is done with an UpdateService operation. The default value for a replica service for\n\t\t\t\tminimumHealthyPercent is 100%. The default value for a daemon service\n\t\t\tfor minimumHealthyPercent is 0%.
If a service uses the ECS deployment controller, the minimum healthy\n\t\t\tpercent represents a lower limit on the number of tasks in a service that must remain in\n\t\t\tthe RUNNING state during a deployment. Specifically, it is expressed as a\n\t\t\tpercentage of your desired number of tasks (rounded up to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can deploy without using additional cluster capacity. For example, if you\n\t\t\tset your service to have a desired number of four tasks and a minimum healthy percent of\n\t\t\t50%, the scheduler might stop two existing tasks to free up cluster capacity before\n\t\t\tstarting two new tasks. If they're in the RUNNING state, tasks for services\n\t\t\tthat don't use a load balancer are considered healthy. If they're in the\n\t\t\t\tRUNNING state and reported as healthy by the load balancer, tasks for\n\t\t\tservices that do use a load balancer are considered healthy. The\n\t\t\tdefault value for minimum healthy percent is 100%.
If a service uses the ECS deployment controller, the maximum percent parameter represents an upper limit on the\n\t\t\tnumber of tasks in a service that are allowed in the RUNNING or\n\t\t\t\tPENDING state during a deployment. Specifically, it represents it as a\n\t\t\tpercentage of the desired number of tasks (rounded down to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can define the deployment batch size. For example, if your service has a\n\t\t\tdesired number of four tasks and a maximum percent value of 200%, the scheduler may\n\t\t\tstart four new tasks before stopping the four older tasks (provided that the cluster\n\t\t\tresources required to do this are available). The default value for maximum percent is\n\t\t\t200%.
If a service uses either the CODE_DEPLOY or EXTERNAL\n\t\t\tdeployment controller types and tasks that use the EC2 launch type, the\n\t\t\t\tminimum healthy percent and maximum percent values are used only to define the lower and upper limit\n\t\t\ton the number of the tasks in the service that remain in the RUNNING state.\n\t\t\tThis is while the container instances are in the DRAINING state. If the\n\t\t\ttasks in the service use the Fargate launch type, the minimum healthy\n\t\t\tpercent and maximum percent values aren't used. This is the case even if they're\n\t\t\tcurrently visible when describing your service.
When creating a service that uses the EXTERNAL deployment controller, you\n\t\t\tcan specify only parameters that aren't controlled at the task set level. The only\n\t\t\trequired parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide.
When the service scheduler launches new tasks, it determines task placement. For\n\t\t\tinformation about task placement and task placement strategies, see Amazon ECS\n\t\t\t\ttask placement in the Amazon Elastic Container Service Developer Guide.\n
\nStarting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.
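The interaction between desiredCount, minimumHealthyPercent, and maximumPercent described above is easiest to see in a short client call. A minimal sketch using this package's CreateServiceCommand; the region, cluster, service, and task definition names are placeholder assumptions:

```ts
import { ECSClient, CreateServiceCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({ region: "us-east-1" }); // region is an assumption

// With desiredCount: 4, minimumHealthyPercent: 50 lets the scheduler stop two
// existing tasks before starting replacements, while maximumPercent: 200 allows
// up to eight RUNNING or PENDING tasks during the rolling deployment.
const response = await client.send(
  new CreateServiceCommand({
    cluster: "my-cluster",           // placeholder cluster name
    serviceName: "my-service",       // placeholder service name
    taskDefinition: "my-task-def:1", // placeholder family:revision
    desiredCount: 4,
    schedulingStrategy: "REPLICA",
    deploymentConfiguration: {
      minimumHealthyPercent: 50,
      maximumPercent: 200,
    },
  })
);
console.log(response.service?.serviceArn);
```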
", "smithy.api#examples": [ { "title": "To create a new service", @@ -3262,7 +3295,7 @@ "launchType": { "target": "com.amazonaws.ecs#LaunchType", "traits": { - "smithy.api#documentation": "The infrastructure that you run your service on. For more information, see Amazon ECS\n\t\t\t\tlaunch types in the Amazon Elastic Container Service Developer Guide.
\nThe FARGATE launch type runs your tasks on Fargate On-Demand\n\t\t\tinfrastructure.
Fargate Spot infrastructure is available for use but a capacity provider\n\t\t\t\tstrategy must be used. For more information, see Fargate capacity providers in the\n\t\t\t\t\tAmazon ECS Developer Guide.
\nThe EC2 launch type runs your tasks on Amazon EC2 instances registered to your\n\t\t\tcluster.
The EXTERNAL launch type runs your tasks on your on-premises server or\n\t\t\tvirtual machine (VM) capacity registered to your cluster.
A service can use either a launch type or a capacity provider strategy. If a\n\t\t\t\tlaunchType is specified, the capacityProviderStrategy\n\t\t\tparameter must be omitted.
The infrastructure that you run your service on. For more information, see Amazon ECS\n\t\t\t\tlaunch types in the Amazon Elastic Container Service Developer Guide.
\nThe FARGATE launch type runs your tasks on Fargate On-Demand\n\t\t\tinfrastructure.
Fargate Spot infrastructure is available for use but a capacity provider\n\t\t\t\tstrategy must be used. For more information, see Fargate capacity providers in the Amazon ECS\n\t\t\t\t\tDeveloper Guide.
\nThe EC2 launch type runs your tasks on Amazon EC2 instances registered to your\n\t\t\tcluster.
The EXTERNAL launch type runs your tasks on your on-premises server or\n\t\t\tvirtual machine (VM) capacity registered to your cluster.
A service can use either a launch type or a capacity provider strategy. If a\n\t\t\t\tlaunchType is specified, the capacityProviderStrategy\n\t\t\tparameter must be omitted.
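Since launchType and capacityProviderStrategy are mutually exclusive, a request carries one or the other. A hedged sketch of the two request shapes; the cluster, service, and task definition names are placeholders:

```ts
import { CreateServiceCommand } from "@aws-sdk/client-ecs";

// Option A: name a launch type directly.
const onDemand = new CreateServiceCommand({
  cluster: "my-cluster",
  serviceName: "svc-on-demand",
  taskDefinition: "my-task-def",
  desiredCount: 2,
  launchType: "FARGATE", // capacityProviderStrategy must be omitted
});

// Option B: use a capacity provider strategy instead, which is the only way
// to reach Fargate Spot capacity.
const onSpot = new CreateServiceCommand({
  cluster: "my-cluster",
  serviceName: "svc-on-spot",
  taskDefinition: "my-task-def",
  desiredCount: 2,
  capacityProviderStrategy: [
    { capacityProvider: "FARGATE_SPOT", weight: 1 }, // launchType must be omitted
  ],
});
```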
The platform version that your tasks in the service are running on. A platform version\n\t\t\tis specified only for tasks using the Fargate launch type. If one isn't\n\t\t\tspecified, the LATEST platform version is used. For more information, see\n\t\t\t\tFargate platform versions in the Amazon Elastic Container Service Developer Guide.
The platform version that your tasks in the service are running on. A platform version\n\t\t\tis specified only for tasks using the Fargate launch type. If one isn't\n\t\t\tspecified, the LATEST platform version is used. For more information, see\n\t\t\t\tFargate platform\n\t\t\t\tversions in the Amazon Elastic Container Service Developer Guide.
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy\n\t\t\tElastic Load Balancing target health checks after a task has first started. This is only used when your\n\t\t\tservice is configured to use a load balancer. If your service has a load balancer\n\t\t\tdefined and you don't specify a health check grace period value, the default value of\n\t\t\t\t0 is used.
If you do not use an Elastic Load Balancing, we recommend that you use the startPeriod in\n\t\t\tthe task definition health check parameters. For more information, see Health\n\t\t\t\tcheck.
If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can\n\t\t\tspecify a health check grace period of up to 2,147,483,647 seconds (about 69 years).\n\t\t\tDuring that time, the Amazon ECS service scheduler ignores health check status. This grace\n\t\t\tperiod can prevent the service scheduler from marking tasks as unhealthy and stopping\n\t\t\tthem before they have time to come up.
" + "smithy.api#documentation": "The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy\n\t\t\tElastic Load Balancing target health checks after a task has first started. This is only used when your\n\t\t\tservice is configured to use a load balancer. If your service has a load balancer\n\t\t\tdefined and you don't specify a health check grace period value, the default value of\n\t\t\t\t0 is used.
If you do not use Elastic Load Balancing, we recommend that you use the startPeriod in\n\t\t\tthe task definition health check parameters. For more information, see Health\n\t\t\t\tcheck.
If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you\n\t\t\tcan specify a health check grace period of up to 2,147,483,647 seconds (about 69 years).\n\t\t\tDuring that time, the Amazon ECS service scheduler ignores health check status. This grace\n\t\t\tperiod can prevent the service scheduler from marking tasks as unhealthy and stopping\n\t\t\tthem before they have time to come up.
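As a concrete illustration of the grace period, here is a minimal sketch; the ARN, names, and 300-second value are assumptions, not recommendations:

```ts
import { ECSClient, CreateServiceCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});

// Ignore Elastic Load Balancing target health checks for the first 300 seconds
// after a task starts; only meaningful because a load balancer is attached.
await client.send(
  new CreateServiceCommand({
    cluster: "my-cluster",             // placeholder
    serviceName: "slow-start-service", // placeholder
    taskDefinition: "my-task-def",
    desiredCount: 2,
    healthCheckGracePeriodSeconds: 300,
    loadBalancers: [
      {
        targetGroupArn: "arn:aws:elasticloadbalancing:...:targetgroup/example", // placeholder ARN
        containerName: "web", // must match a container in the task definition
        containerPort: 80,
      },
    ],
  })
);
```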
" } }, "schedulingStrategy": { @@ -3341,7 +3374,7 @@ "propagateTags": { "target": "com.amazonaws.ecs#PropagateTags", "traits": { - "smithy.api#documentation": "Specifies whether to propagate the tags from the task definition to the task. If no\n\t\t\tvalue is specified, the tags aren't propagated. Tags can only be propagated to the task\n\t\t\tduring task creation. To add tags to a task after task creation, use the TagResource API action.
\nYou must set this to a value other than NONE when you use Cost Explorer. For more information, see Amazon ECS usage reports in the Amazon Elastic Container Service Developer Guide.
The default is NONE.
Specifies whether to propagate the tags from the task definition to the task. If no\n\t\t\tvalue is specified, the tags aren't propagated. Tags can only be propagated to the task\n\t\t\tduring task creation. To add tags to a task after task creation, use the TagResource API action.
\nYou must set this to a value other than NONE when you use Cost Explorer.\n\t\t\tFor more information, see Amazon ECS usage reports\n\t\t\tin the Amazon Elastic Container Service Developer Guide.
The default is NONE.
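A short sketch of the propagation setting; names are placeholders:

```ts
import { ECSClient, CreateServiceCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});

// Copy tags from the task definition onto each task at creation so usage can
// be broken out in Cost Explorer; the default NONE would leave tasks untagged.
await client.send(
  new CreateServiceCommand({
    cluster: "my-cluster",         // placeholder
    serviceName: "billed-service", // placeholder
    taskDefinition: "my-task-def",
    desiredCount: 1,
    propagateTags: "TASK_DEFINITION",
  })
);
```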
Create a task set in the specified cluster and service. This is used when a service\n\t\t\tuses the EXTERNAL deployment controller type. For more information, see\n\t\t\t\tAmazon ECS deployment\n\t\t\t\ttypes in the Amazon Elastic Container Service Developer Guide.
On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.
\nFor information about the maximum number of task sets and otther quotas, see Amazon ECS\n\t\t\tservice quotas in the Amazon Elastic Container Service Developer Guide.
" + "smithy.api#documentation": "Create a task set in the specified cluster and service. This is used when a service\n\t\t\tuses the EXTERNAL deployment controller type. For more information, see\n\t\t\t\tAmazon ECS deployment\n\t\t\t\ttypes in the Amazon Elastic Container Service Developer Guide.
On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.
\nFor information about the maximum number of task sets and other quotas, see Amazon ECS\n\t\t\tservice quotas in the Amazon Elastic Container Service Developer Guide.
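A minimal CreateTaskSetCommand sketch for a service that uses the EXTERNAL deployment controller; the cluster, service, revision, and external ID are placeholder assumptions:

```ts
import { ECSClient, CreateTaskSetCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});

// scale expresses what share of the service's desiredCount this task set
// should run, so 50 PERCENT of a desired count of four places two tasks.
await client.send(
  new CreateTaskSetCommand({
    cluster: "my-cluster",            // placeholder
    service: "my-external-service",   // placeholder, EXTERNAL deployment controller
    taskDefinition: "my-task-def:2",  // placeholder family:revision
    externalId: "deployment-group-1", // placeholder identifier
    scale: { unit: "PERCENT", value: 50 },
  })
);
```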
" } }, "com.amazonaws.ecs#CreateTaskSetRequest": { @@ -4254,7 +4287,7 @@ "minimumHealthyPercent": { "target": "com.amazonaws.ecs#BoxedInteger", "traits": { - "smithy.api#documentation": "If a service is using the rolling update (ECS) deployment type, the\n\t\t\t\tminimumHealthyPercent represents a lower limit on the number of your\n\t\t\tservice's tasks that must remain in the RUNNING state during a deployment,\n\t\t\tas a percentage of the desiredCount (rounded up to the nearest integer).\n\t\t\tThis parameter enables you to deploy without using additional cluster capacity. For\n\t\t\texample, if your service has a desiredCount of four tasks and a\n\t\t\t\tminimumHealthyPercent of 50%, the service scheduler may stop two\n\t\t\texisting tasks to free up cluster capacity before starting two new tasks.
For services that do not use a load balancer, the following\n\t\t\tshould be noted:
\nA service is considered healthy if all essential containers within the tasks\n\t\t\t\t\tin the service pass their health checks.
\nIf a task has no essential containers with a health check defined, the service\n\t\t\t\t\tscheduler will wait for 40 seconds after a task reaches a RUNNING\n\t\t\t\t\tstate before the task is counted towards the minimum healthy percent\n\t\t\t\t\ttotal.
If a task has one or more essential containers with a health check defined,\n\t\t\t\t\tthe service scheduler will wait for the task to reach a healthy status before\n\t\t\t\t\tcounting it towards the minimum healthy percent total. A task is considered\n\t\t\t\t\thealthy when all essential containers within the task have passed their health\n\t\t\t\t\tchecks. The amount of time the service scheduler can wait for is determined by\n\t\t\t\t\tthe container health check settings.
\nFor services that do use a load balancer, the following should be\n\t\t\tnoted:
\nIf a task has no essential containers with a health check defined, the service\n\t\t\t\t\tscheduler will wait for the load balancer target group health check to return a\n\t\t\t\t\thealthy status before counting the task towards the minimum healthy percent\n\t\t\t\t\ttotal.
\nIf a task has an essential container with a health check defined, the service\n\t\t\t\t\tscheduler will wait for both the task to reach a healthy status and the load\n\t\t\t\t\tbalancer target group health check to return a healthy status before counting\n\t\t\t\t\tthe task towards the minimum healthy percent total.
\nThe default value for a replica service for\n\t\t\tminimumHealthyPercent is 100%. The default\n\t\t\tminimumHealthyPercent value for a service using\n\t\t\tthe DAEMON service schedule is 0% for the CLI,\n\t\t\tthe Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the\n\t\t\tdesiredCount multiplied by the\n\t\t\tminimumHealthyPercent/100, rounded up to the\n\t\t\tnearest integer value.
If a service is using either the blue/green (CODE_DEPLOY) or\n\t\t\t\tEXTERNAL deployment types and is running tasks that use the\n\t\t\tEC2 launch type, the minimum healthy\n\t\t\t\tpercent value is set to the default value and is used to define the lower\n\t\t\tlimit on the number of the tasks in the service that remain in the RUNNING\n\t\t\tstate while the container instances are in the DRAINING state. If a service\n\t\t\tis using either the blue/green (CODE_DEPLOY) or EXTERNAL\n\t\t\tdeployment types and is running tasks that use the Fargate launch type,\n\t\t\tthe minimum healthy percent value is not used, although it is returned when describing\n\t\t\tyour service.
If a service is using the rolling update (ECS) deployment type, the\n\t\t\t\tminimumHealthyPercent represents a lower limit on the number of your\n\t\t\tservice's tasks that must remain in the RUNNING state during a deployment,\n\t\t\tas a percentage of the desiredCount (rounded up to the nearest integer).\n\t\t\tThis parameter enables you to deploy without using additional cluster capacity. For\n\t\t\texample, if your service has a desiredCount of four tasks and a\n\t\t\t\tminimumHealthyPercent of 50%, the service scheduler may stop two\n\t\t\texisting tasks to free up cluster capacity before starting two new tasks.
For services that do not use a load balancer, the following\n\t\t\tshould be noted:
\nA service is considered healthy if all essential containers within the tasks\n\t\t\t\t\tin the service pass their health checks.
\nIf a task has no essential containers with a health check defined, the service\n\t\t\t\t\tscheduler will wait for 40 seconds after a task reaches a RUNNING\n\t\t\t\t\tstate before the task is counted towards the minimum healthy percent\n\t\t\t\t\ttotal.
If a task has one or more essential containers with a health check defined,\n\t\t\t\t\tthe service scheduler will wait for the task to reach a healthy status before\n\t\t\t\t\tcounting it towards the minimum healthy percent total. A task is considered\n\t\t\t\t\thealthy when all essential containers within the task have passed their health\n\t\t\t\t\tchecks. The amount of time the service scheduler can wait for is determined by\n\t\t\t\t\tthe container health check settings.
\nFor services that do use a load balancer, the following should be\n\t\t\tnoted:
\nIf a task has no essential containers with a health check defined, the service\n\t\t\t\t\tscheduler will wait for the load balancer target group health check to return a\n\t\t\t\t\thealthy status before counting the task towards the minimum healthy percent\n\t\t\t\t\ttotal.
\nIf a task has an essential container with a health check defined, the service\n\t\t\t\t\tscheduler will wait for both the task to reach a healthy status and the load\n\t\t\t\t\tbalancer target group health check to return a healthy status before counting\n\t\t\t\t\tthe task towards the minimum healthy percent total.
\nThe default value for a replica service for minimumHealthyPercent is\n\t\t\t100%. The default minimumHealthyPercent value for a service using the\n\t\t\t\tDAEMON service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the\n\t\t\tAPIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the\n\t\t\t\tdesiredCount multiplied by the minimumHealthyPercent/100,\n\t\t\trounded up to the nearest integer value.
If a service is using either the blue/green (CODE_DEPLOY) or\n\t\t\t\tEXTERNAL deployment types and is running tasks that use the\n\t\t\tEC2 launch type, the minimum healthy\n\t\t\t\tpercent value is set to the default value and is used to define the lower\n\t\t\tlimit on the number of the tasks in the service that remain in the RUNNING\n\t\t\tstate while the container instances are in the DRAINING state. If a service\n\t\t\tis using either the blue/green (CODE_DEPLOY) or EXTERNAL\n\t\t\tdeployment types and is running tasks that use the Fargate launch type,\n\t\t\tthe minimum healthy percent value is not used, although it is returned when describing\n\t\t\tyour service.
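The rounding rules above reduce to two lines of arithmetic. An illustrative helper, not part of this client:

```ts
// Lower bound on RUNNING tasks during a rolling deployment: desiredCount times
// minimumHealthyPercent/100, rounded UP to the nearest integer.
function minimumHealthyTasks(desiredCount: number, minimumHealthyPercent: number): number {
  return Math.ceil((desiredCount * minimumHealthyPercent) / 100);
}

// Upper bound on RUNNING or PENDING tasks: desiredCount times
// maximumPercent/100, rounded DOWN to the nearest integer.
function maximumAllowedTasks(desiredCount: number, maximumPercent: number): number {
  return Math.floor((desiredCount * maximumPercent) / 100);
}

console.log(minimumHealthyTasks(4, 50));  // 2 tasks must stay RUNNING
console.log(maximumAllowedTasks(4, 200)); // up to 8 tasks during the deployment
```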
Specify an Key Management Service key ID to encrypt the ephemeral storage for deployment.
" + "smithy.api#documentation": "Specify an Key Management Service key ID to encrypt the ephemeral storage for\n\t\t\tdeployment.
" } } }, @@ -5537,19 +5570,19 @@ "driver": { "target": "com.amazonaws.ecs#String", "traits": { - "smithy.api#documentation": "The Docker volume driver to use. The driver value must match the driver name provided\n\t\t\tby Docker because it is used for task placement. If the driver was installed using the\n\t\t\tDocker plugin CLI, use docker plugin ls to retrieve the driver name from\n\t\t\tyour container instance. If the driver was installed using another method, use Docker\n\t\t\tplugin discovery to retrieve the driver name. For more information, see Docker\n\t\t\t\tplugin discovery. This parameter maps to Driver in the\n\t\t\tCreate a volume section of the Docker Remote API and the\n\t\t\t\txxdriver option to docker\n\t\t\t\tvolume create.
The Docker volume driver to use. The driver value must match the driver name provided\n\t\t\tby Docker because it is used for task placement. If the driver was installed using the\n\t\t\tDocker plugin CLI, use docker plugin ls to retrieve the driver name from\n\t\t\tyour container instance. If the driver was installed using another method, use Docker\n\t\t\tplugin discovery to retrieve the driver name. This parameter maps to Driver in the docker create-container command and the\n\t\t\t\t--driver option to docker\n\t\t\t\tvolume create.
A map of Docker driver-specific options passed through. This parameter maps to\n\t\t\t\tDriverOpts in the Create a volume section of the\n\t\t\tDocker Remote API and the xxopt option to docker\n\t\t\t\tvolume create.
A map of Docker driver-specific options passed through. This parameter maps to\n\t\t\t\tDriverOpts in the docker create-volume command and the --opt option to docker\n\t\t\t\tvolume create.
Custom metadata to add to your Docker volume. This parameter maps to\n\t\t\t\tLabels in the Create a volume section of the\n\t\t\tDocker Remote API and the xxlabel option to docker\n\t\t\t\tvolume create.
Custom metadata to add to your Docker volume. This parameter maps to\n\t\t\t\tLabels in the docker create-container command and the --label option to docker\n\t\t\t\tvolume create.
The file type to use. Environment files are objects in Amazon S3. The only supported value is\n\t\t\t\ts3.
The file type to use. Environment files are objects in Amazon S3. The only supported value\n\t\t\tis s3.
A list of files containing the environment variables to pass to a container. You can\n\t\t\tspecify up to ten environment files. The file must have a .env file\n\t\t\textension. Each line in an environment file should contain an environment variable in\n\t\t\t\tVARIABLE=VALUE format. Lines beginning with # are treated\n\t\t\tas comments and are ignored.
If there are environment variables specified using the environment\n\t\t\tparameter in a container definition, they take precedence over the variables contained\n\t\t\twithin an environment file. If multiple environment files are specified that contain the\n\t\t\tsame variable, they're processed from the top down. We recommend that you use unique\n\t\t\tvariable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
\nYou must use the following platforms for the Fargate launch type:
\nLinux platform version 1.4.0 or later.
Windows platform version 1.0.0 or later.
Consider the following when using the Fargate launch type:
\nThe file is handled like a native Docker env-file.
\nThere is no support for shell escape handling.
\nThe container entry point interperts the VARIABLE values.
A list of files containing the environment variables to pass to a container. You can\n\t\t\tspecify up to ten environment files. The file must have a .env file\n\t\t\textension. Each line in an environment file should contain an environment variable in\n\t\t\t\tVARIABLE=VALUE format. Lines beginning with # are treated\n\t\t\tas comments and are ignored.
If there are environment variables specified using the environment\n\t\t\tparameter in a container definition, they take precedence over the variables contained\n\t\t\twithin an environment file. If multiple environment files are specified that contain the\n\t\t\tsame variable, they're processed from the top down. We recommend that you use unique\n\t\t\tvariable names. For more information, see Use a file to pass\n\t\t\t\tenvironment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations\n\t\t\tapply.
\nYou must use the following platforms for the Fargate launch type:
\nLinux platform version 1.4.0 or later.
Windows platform version 1.0.0 or later.
Consider the following when using the Fargate launch type:
\nThe file is handled like a native Docker env-file.
\nThere is no support for shell escape handling.
\nThe container entry point interprets the VARIABLE values.
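A hedged sketch of a container definition that pulls variables from an environment file; the family, image, bucket, and key are placeholder assumptions:

```ts
import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});

// Variables from app.env are applied first; the inline `environment` entry
// wins when the same variable appears in both places.
await client.send(
  new RegisterTaskDefinitionCommand({
    family: "env-file-demo", // placeholder family
    containerDefinitions: [
      {
        name: "app",
        image: "public.ecr.aws/docker/library/busybox:latest", // placeholder image
        essential: true,
        memory: 128, // container-level memory for EC2-hosted tasks
        environmentFiles: [
          { value: "arn:aws:s3:::my-config-bucket/app.env", type: "s3" }, // placeholder ARN
        ],
        environment: [{ name: "STAGE", value: "prod" }],
      },
    ],
  })
);
```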
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported\n\t\t\tvalue is 20 GiB and the maximum supported value is\n\t\t\t\t200 GiB.
The total amount, in GiB, of ephemeral storage to set for the task. The minimum\n\t\t\tsupported value is 20 GiB and the maximum supported value is\n\t\t\t\t200 GiB.
A string array representing the command that the container runs to determine if it is\n\t\t\thealthy. The string array must start with CMD to run the command arguments\n\t\t\tdirectly, or CMD-SHELL to run the command with the container's default\n\t\t\tshell.
When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list\n\t\t\tof commands in double quotes and brackets.
\n\n [ \"CMD-SHELL\", \"curl -f http://localhost/ || exit 1\" ]\n
You don't include the double quotes and brackets when you use the Amazon Web Services Management Console.
\n\n CMD-SHELL, curl -f http://localhost/ || exit 1\n
An exit code of 0 indicates success, and non-zero exit code indicates failure. For\n\t\t\tmore information, see HealthCheck in the Create a container\n\t\t\tsection of the Docker Remote API.
A string array representing the command that the container runs to determine if it is\n\t\t\thealthy. The string array must start with CMD to run the command arguments\n\t\t\tdirectly, or CMD-SHELL to run the command with the container's default\n\t\t\tshell.
When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list\n\t\t\tof commands in double quotes and brackets.
\n\n [ \"CMD-SHELL\", \"curl -f http://localhost/ || exit 1\" ]\n
You don't include the double quotes and brackets when you use the Amazon Web Services Management Console.
\n\n CMD-SHELL, curl -f http://localhost/ || exit 1\n
An exit code of 0 indicates success, and a non-zero exit code indicates failure. For\n\t\t\tmore information, see HealthCheck in the docker create-container command.
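The same command array, expressed as this client's HealthCheck shape; the timing values are illustrative assumptions:

```ts
import type { HealthCheck } from "@aws-sdk/client-ecs";

// CMD-SHELL runs the string through the container's default shell; exit code 0
// means healthy, anything else counts as a failed probe.
const healthCheck: HealthCheck = {
  command: ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
  interval: 30,    // seconds between probes
  timeout: 5,      // seconds before a probe is considered failed
  retries: 3,      // consecutive failures before the container is UNHEALTHY
  startPeriod: 10, // grace period after container start
};
```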
An object representing a container health check. Health check parameters that are\n\t\t\tspecified in a container definition override any Docker health checks that exist in the\n\t\t\tcontainer image (such as those specified in a parent image or from the image's\n\t\t\tDockerfile). This configuration maps to the HEALTHCHECK parameter of docker run.
The Amazon ECS container agent only monitors and reports on the health checks specified\n\t\t\t\tin the task definition. Amazon ECS does not monitor Docker health checks that are\n\t\t\t\tembedded in a container image and not specified in the container definition. Health\n\t\t\t\tcheck parameters that are specified in a container definition override any Docker\n\t\t\t\thealth checks that exist in the container image.
\nYou can view the health status of both individual containers and a task with the\n\t\t\tDescribeTasks API operation or when viewing the task details in the console.
\nThe health check is designed to make sure that your containers survive agent restarts,\n\t\t\tupgrades, or temporary unavailability.
\nAmazon ECS performs health checks on containers with the default that launched the\n\t\t\tcontainer instance or the task.
\nThe following describes the possible healthStatus values for a\n\t\t\tcontainer:
\n HEALTHY-The container health check has passed\n\t\t\t\t\tsuccessfully.
\n UNHEALTHY-The container health check has failed.
\n UNKNOWN-The container health check is being evaluated,\n\t\t\t\t\tthere's no container health check defined, or Amazon ECS doesn't have the health\n\t\t\t\t\tstatus of the container.
The following describes the possible healthStatus values based on the\n\t\t\tcontainer health checker status of essential containers in the task with the following\n\t\t\tpriority order (high to low):
\n UNHEALTHY-One or more essential containers have failed\n\t\t\t\t\ttheir health check.
\n UNKNOWN-Any essential container running within the task is\n\t\t\t\t\tin an UNKNOWN state and no other essential containers have an\n\t\t\t\t\t\tUNHEALTHY state.
\n HEALTHY-All essential containers within the task have\n\t\t\t\t\tpassed their health checks.
Consider the following task health example with 2 containers.
\nIf Container1 is UNHEALTHY and Container2 is\n\t\t\t\t\tUNKNOWN, the task health is UNHEALTHY.
If Container1 is UNHEALTHY and Container2 is\n\t\t\t\t\tHEALTHY, the task health is UNHEALTHY.
If Container1 is HEALTHY and Container2 is UNKNOWN,\n\t\t\t\t\tthe task health is UNKNOWN.
If Container1 is HEALTHY and Container2 is HEALTHY,\n\t\t\t\t\tthe task health is HEALTHY.
Consider the following task health example with 3 containers.
\nIf Container1 is UNHEALTHY and Container2 is\n\t\t\t\t\tUNKNOWN, and Container3 is UNKNOWN, the task health is\n\t\t\t\t\t\tUNHEALTHY.
If Container1 is UNHEALTHY and Container2 is\n\t\t\t\t\tUNKNOWN, and Container3 is HEALTHY, the task health is\n\t\t\t\t\t\tUNHEALTHY.
If Container1 is UNHEALTHY and Container2 is\n\t\t\t\t\tHEALTHY, and Container3 is HEALTHY, the task health is\n\t\t\t\t\t\tUNHEALTHY.
If Container1 is HEALTHY and Container2 is UNKNOWN,\n\t\t\t\t\tand Container3 is HEALTHY, the task health is\n\t\t\t\t\tUNKNOWN.
If Container1 is HEALTHY and Container2 is UNKNOWN,\n\t\t\t\t\tand Container3 is UNKNOWN, the task health is\n\t\t\t\t\tUNKNOWN.
If Container1 is HEALTHY and Container2 is HEALTHY,\n\t\t\t\t\tand Container3 is HEALTHY, the task health is\n\t\t\t\t\tHEALTHY.
If a task is run manually, and not as part of a service, the task will continue its\n\t\t\tlifecycle regardless of its health status. For tasks that are part of a service, if the\n\t\t\ttask reports as unhealthy then the task will be stopped and the service scheduler will\n\t\t\treplace it.
\nThe following are notes about container health check support:
\nIf the Amazon ECS container agent becomes disconnected from the Amazon ECS service, this won't\n\t\t\t\t\tcause a container to transition to an UNHEALTHY status. This is by design,\n\t\t\t\t\tto ensure that containers remain running during agent restarts or temporary\n\t\t\t\t\tunavailability. The health check status is the \"last heard from\" response from the Amazon ECS\n\t\t\t\t\tagent, so if the container was considered HEALTHY prior to the disconnect,\n\t\t\t\t\tthat status will remain until the agent reconnects and another health check occurs.\n\t\t\t\t\tThere are no assumptions made about the status of the container health checks.
Container health checks require version 1.17.0 or greater of the Amazon ECS\n\t\t\t\t\tcontainer agent. For more information, see Updating the\n\t\t\t\t\t\tAmazon ECS container agent.
Container health checks are supported for Fargate tasks if\n\t\t\t\t\tyou're using platform version 1.1.0 or greater. For more\n\t\t\t\t\tinformation, see Fargate\n\t\t\t\t\t\tplatform versions.
Container health checks aren't supported for tasks that are part of a service\n\t\t\t\t\tthat's configured to use a Classic Load Balancer.
\nAn object representing a container health check. Health check parameters that are\n\t\t\tspecified in a container definition override any Docker health checks that exist in the\n\t\t\tcontainer image (such as those specified in a parent image or from the image's\n\t\t\tDockerfile). This configuration maps to the HEALTHCHECK parameter of docker run.
The Amazon ECS container agent only monitors and reports on the health checks specified\n\t\t\t\tin the task definition. Amazon ECS does not monitor Docker health checks that are\n\t\t\t\tembedded in a container image and not specified in the container definition. Health\n\t\t\t\tcheck parameters that are specified in a container definition override any Docker\n\t\t\t\thealth checks that exist in the container image.
\nYou can view the health status of both individual containers and a task with the\n\t\t\tDescribeTasks API operation or when viewing the task details in the console.
\nThe health check is designed to make sure that your containers survive agent restarts,\n\t\t\tupgrades, or temporary unavailability.
\nAmazon ECS performs health checks on containers with the default that launched the\n\t\t\tcontainer instance or the task.
\nThe following describes the possible healthStatus values for a\n\t\t\tcontainer:
\n HEALTHY-The container health check has passed\n\t\t\t\t\tsuccessfully.
\n UNHEALTHY-The container health check has failed.
\n UNKNOWN-The container health check is being evaluated,\n\t\t\t\t\tthere's no container health check defined, or Amazon ECS doesn't have the health\n\t\t\t\t\tstatus of the container.
The following describes the possible healthStatus values based on the\n\t\t\tcontainer health checker status of essential containers in the task with the following\n\t\t\tpriority order (high to low):
\n UNHEALTHY-One or more essential containers have failed\n\t\t\t\t\ttheir health check.
\n UNKNOWN-Any essential container running within the task is\n\t\t\t\t\tin an UNKNOWN state and no other essential containers have an\n\t\t\t\t\t\tUNHEALTHY state.
\n HEALTHY-All essential containers within the task have\n\t\t\t\t\tpassed their health checks.
Consider the following task health example with 2 containers.
\nIf Container1 is UNHEALTHY and Container2 is\n\t\t\t\t\tUNKNOWN, the task health is UNHEALTHY.
If Container1 is UNHEALTHY and Container2 is\n\t\t\t\t\tHEALTHY, the task health is UNHEALTHY.
If Container1 is HEALTHY and Container2 is UNKNOWN,\n\t\t\t\t\tthe task health is UNKNOWN.
If Container1 is HEALTHY and Container2 is HEALTHY,\n\t\t\t\t\tthe task health is HEALTHY.
Consider the following task health example with 3 containers.
\nIf Container1 is UNHEALTHY and Container2 is\n\t\t\t\t\tUNKNOWN, and Container3 is UNKNOWN, the task health is\n\t\t\t\t\t\tUNHEALTHY.
If Container1 is UNHEALTHY and Container2 is\n\t\t\t\t\tUNKNOWN, and Container3 is HEALTHY, the task health is\n\t\t\t\t\t\tUNHEALTHY.
If Container1 is UNHEALTHY and Container2 is\n\t\t\t\t\tHEALTHY, and Container3 is HEALTHY, the task health is\n\t\t\t\t\t\tUNHEALTHY.
If Container1 is HEALTHY and Container2 is UNKNOWN,\n\t\t\t\t\tand Container3 is HEALTHY, the task health is\n\t\t\t\t\tUNKNOWN.
If Container1 is HEALTHY and Container2 is UNKNOWN,\n\t\t\t\t\tand Container3 is UNKNOWN, the task health is\n\t\t\t\t\tUNKNOWN.
If Container1 is HEALTHY and Container2 is HEALTHY,\n\t\t\t\t\tand Container3 is HEALTHY, the task health is\n\t\t\t\t\tHEALTHY.
If a task is run manually, and not as part of a service, the task will continue its\n\t\t\tlifecycle regardless of its health status. For tasks that are part of a service, if the\n\t\t\ttask reports as unhealthy then the task will be stopped and the service scheduler will\n\t\t\treplace it.
\nThe following are notes about container health check support:
\nIf the Amazon ECS container agent becomes disconnected from the Amazon ECS service, this\n\t\t\t\t\twon't cause a container to transition to an UNHEALTHY status. This\n\t\t\t\t\tis by design, to ensure that containers remain running during agent restarts or\n\t\t\t\t\ttemporary unavailability. The health check status is the \"last heard from\"\n\t\t\t\t\tresponse from the Amazon ECS agent, so if the container was considered\n\t\t\t\t\t\tHEALTHY prior to the disconnect, that status will remain until\n\t\t\t\t\tthe agent reconnects and another health check occurs. There are no assumptions\n\t\t\t\t\tmade about the status of the container health checks.
Container health checks require version 1.17.0 or greater of the\n\t\t\t\t\tAmazon ECS container agent. For more information, see Updating the\n\t\t\t\t\t\tAmazon ECS container agent.
Container health checks are supported for Fargate tasks if\n\t\t\t\t\tyou're using platform version 1.1.0 or greater. For more\n\t\t\t\t\tinformation, see Fargate\n\t\t\t\t\t\tplatform versions.
Container health checks aren't supported for tasks that are part of a service\n\t\t\t\t\tthat's configured to use a Classic Load Balancer.
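Reading the aggregated status back is a one-call sketch; the cluster name and task ID are placeholders:

```ts
import { ECSClient, DescribeTasksCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});

// Task-level healthStatus rolls up the essential containers' checks using the
// priority order described above (UNHEALTHY beats UNKNOWN beats HEALTHY).
const { tasks } = await client.send(
  new DescribeTasksCommand({
    cluster: "my-cluster",                       // placeholder
    tasks: ["0123456789abcdef0123456789abcdef"], // placeholder task ID
  })
);

for (const task of tasks ?? []) {
  console.log(task.taskArn, task.healthStatus); // HEALTHY | UNHEALTHY | UNKNOWN
  for (const container of task.containers ?? []) {
    console.log(" ", container.name, container.healthStatus);
  }
}
```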
\nThe Linux capabilities for the container that have been added to the default\n\t\t\tconfiguration provided by Docker. This parameter maps to CapAdd in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--cap-add option to docker\n\t\t\t\trun.
Tasks launched on Fargate only support adding the SYS_PTRACE kernel\n\t\t\t\tcapability.
Valid values: \"ALL\" | \"AUDIT_CONTROL\" | \"AUDIT_WRITE\" | \"BLOCK_SUSPEND\" |\n\t\t\t\t\"CHOWN\" | \"DAC_OVERRIDE\" | \"DAC_READ_SEARCH\" | \"FOWNER\" | \"FSETID\" | \"IPC_LOCK\" |\n\t\t\t\t\"IPC_OWNER\" | \"KILL\" | \"LEASE\" | \"LINUX_IMMUTABLE\" | \"MAC_ADMIN\" | \"MAC_OVERRIDE\" |\n\t\t\t\t\"MKNOD\" | \"NET_ADMIN\" | \"NET_BIND_SERVICE\" | \"NET_BROADCAST\" | \"NET_RAW\" | \"SETFCAP\"\n\t\t\t\t| \"SETGID\" | \"SETPCAP\" | \"SETUID\" | \"SYS_ADMIN\" | \"SYS_BOOT\" | \"SYS_CHROOT\" |\n\t\t\t\t\"SYS_MODULE\" | \"SYS_NICE\" | \"SYS_PACCT\" | \"SYS_PTRACE\" | \"SYS_RAWIO\" |\n\t\t\t\t\"SYS_RESOURCE\" | \"SYS_TIME\" | \"SYS_TTY_CONFIG\" | \"SYSLOG\" |\n\t\t\t\"WAKE_ALARM\"\n
The Linux capabilities for the container that have been added to the default\n\t\t\tconfiguration provided by Docker. This parameter maps to CapAdd in the docker create-container command and the\n\t\t\t\t--cap-add option to docker\n\t\t\t\trun.
Tasks launched on Fargate only support adding the SYS_PTRACE kernel\n\t\t\t\tcapability.
Valid values: \"ALL\" | \"AUDIT_CONTROL\" | \"AUDIT_WRITE\" | \"BLOCK_SUSPEND\" |\n\t\t\t\t\"CHOWN\" | \"DAC_OVERRIDE\" | \"DAC_READ_SEARCH\" | \"FOWNER\" | \"FSETID\" | \"IPC_LOCK\" |\n\t\t\t\t\"IPC_OWNER\" | \"KILL\" | \"LEASE\" | \"LINUX_IMMUTABLE\" | \"MAC_ADMIN\" | \"MAC_OVERRIDE\" |\n\t\t\t\t\"MKNOD\" | \"NET_ADMIN\" | \"NET_BIND_SERVICE\" | \"NET_BROADCAST\" | \"NET_RAW\" | \"SETFCAP\"\n\t\t\t\t| \"SETGID\" | \"SETPCAP\" | \"SETUID\" | \"SYS_ADMIN\" | \"SYS_BOOT\" | \"SYS_CHROOT\" |\n\t\t\t\t\"SYS_MODULE\" | \"SYS_NICE\" | \"SYS_PACCT\" | \"SYS_PTRACE\" | \"SYS_RAWIO\" |\n\t\t\t\t\"SYS_RESOURCE\" | \"SYS_TIME\" | \"SYS_TTY_CONFIG\" | \"SYSLOG\" |\n\t\t\t\"WAKE_ALARM\"\n
The Linux capabilities for the container that have been removed from the default\n\t\t\tconfiguration provided by Docker. This parameter maps to CapDrop in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--cap-drop option to docker\n\t\t\t\trun.
Valid values: \"ALL\" | \"AUDIT_CONTROL\" | \"AUDIT_WRITE\" | \"BLOCK_SUSPEND\" |\n\t\t\t\t\"CHOWN\" | \"DAC_OVERRIDE\" | \"DAC_READ_SEARCH\" | \"FOWNER\" | \"FSETID\" | \"IPC_LOCK\" |\n\t\t\t\t\"IPC_OWNER\" | \"KILL\" | \"LEASE\" | \"LINUX_IMMUTABLE\" | \"MAC_ADMIN\" | \"MAC_OVERRIDE\" |\n\t\t\t\t\"MKNOD\" | \"NET_ADMIN\" | \"NET_BIND_SERVICE\" | \"NET_BROADCAST\" | \"NET_RAW\" | \"SETFCAP\"\n\t\t\t\t| \"SETGID\" | \"SETPCAP\" | \"SETUID\" | \"SYS_ADMIN\" | \"SYS_BOOT\" | \"SYS_CHROOT\" |\n\t\t\t\t\"SYS_MODULE\" | \"SYS_NICE\" | \"SYS_PACCT\" | \"SYS_PTRACE\" | \"SYS_RAWIO\" |\n\t\t\t\t\"SYS_RESOURCE\" | \"SYS_TIME\" | \"SYS_TTY_CONFIG\" | \"SYSLOG\" |\n\t\t\t\"WAKE_ALARM\"\n
The Linux capabilities for the container that have been removed from the default\n\t\t\tconfiguration provided by Docker. This parameter maps to CapDrop in the docker create-container command and the\n\t\t\t\t--cap-drop option to docker\n\t\t\t\trun.
Valid values: \"ALL\" | \"AUDIT_CONTROL\" | \"AUDIT_WRITE\" | \"BLOCK_SUSPEND\" |\n\t\t\t\t\"CHOWN\" | \"DAC_OVERRIDE\" | \"DAC_READ_SEARCH\" | \"FOWNER\" | \"FSETID\" | \"IPC_LOCK\" |\n\t\t\t\t\"IPC_OWNER\" | \"KILL\" | \"LEASE\" | \"LINUX_IMMUTABLE\" | \"MAC_ADMIN\" | \"MAC_OVERRIDE\" |\n\t\t\t\t\"MKNOD\" | \"NET_ADMIN\" | \"NET_BIND_SERVICE\" | \"NET_BROADCAST\" | \"NET_RAW\" | \"SETFCAP\"\n\t\t\t\t| \"SETGID\" | \"SETPCAP\" | \"SETUID\" | \"SYS_ADMIN\" | \"SYS_BOOT\" | \"SYS_CHROOT\" |\n\t\t\t\t\"SYS_MODULE\" | \"SYS_NICE\" | \"SYS_PACCT\" | \"SYS_PTRACE\" | \"SYS_RAWIO\" |\n\t\t\t\t\"SYS_RESOURCE\" | \"SYS_TIME\" | \"SYS_TTY_CONFIG\" | \"SYSLOG\" |\n\t\t\t\"WAKE_ALARM\"\n
The Linux capabilities to add or remove from the default Docker configuration for a container defined in the task definition. For more information about the default capabilities\n\t\t\tand the non-default available capabilities, see Runtime privilege and Linux capabilities in the Docker run\n\t\t\t\treference. For more detailed information about these Linux capabilities,\n\t\t\tsee the capabilities(7) Linux manual page.
" + "smithy.api#documentation": "The Linux capabilities to add or remove from the default Docker configuration for a container defined in the task definition. For more detailed information about these Linux capabilities,\n\t\t\tsee the capabilities(7) Linux manual page.
" } }, "com.amazonaws.ecs#KeyValuePair": { @@ -6595,7 +6634,7 @@ "devices": { "target": "com.amazonaws.ecs#DevicesList", "traits": { - "smithy.api#documentation": "Any host devices to expose to the container. This parameter maps to\n\t\t\t\tDevices in the Create a container section of the\n\t\t\tDocker Remote API and the --device option to docker run.
If you're using tasks that use the Fargate launch type, the\n\t\t\t\t\tdevices parameter isn't supported.
Any host devices to expose to the container. This parameter maps to\n\t\t\tDevices in the docker create-container command and the --device option to docker run.
If you're using tasks that use the Fargate launch type, the\n\t\t\t\t\tdevices parameter isn't supported.
The value for the size (in MiB) of the /dev/shm volume. This parameter\n\t\t\tmaps to the --shm-size option to docker\n\t\t\t\trun.
If you are using tasks that use the Fargate launch type, the\n\t\t\t\t\tsharedMemorySize parameter is not supported.
The value for the size (in MiB) of the /dev/shm volume. This parameter\n\t\t\tmaps to the --shm-size option to docker\n\t\t\t\trun.
If you are using tasks that use the Fargate launch type, the\n\t\t\t\t\tsharedMemorySize parameter is not supported.
The container path, mount options, and size (in MiB) of the tmpfs mount. This\n\t\t\tparameter maps to the --tmpfs option to docker run.
If you're using tasks that use the Fargate launch type, the\n\t\t\t\t\ttmpfs parameter isn't supported.
The container path, mount options, and size (in MiB) of the tmpfs mount. This\n\t\t\tparameter maps to the --tmpfs option to docker run.
If you're using tasks that use the Fargate launch type, the\n\t\t\t\t\ttmpfs parameter isn't supported.
This allows you to tune a container's memory swappiness behavior. A\n\t\t\t\tswappiness value of 0 will cause swapping to not happen\n\t\t\tunless absolutely necessary. A swappiness value of 100 will\n\t\t\tcause pages to be swapped very aggressively. Accepted values are whole numbers between\n\t\t\t\t0 and 100. If the swappiness parameter is not\n\t\t\tspecified, a default value of 60 is used. If a value is not specified for\n\t\t\t\tmaxSwap then this parameter is ignored. This parameter maps to the\n\t\t\t\t--memory-swappiness option to docker run.
If you're using tasks that use the Fargate launch type, the\n\t\t\t\t\tswappiness parameter isn't supported.
If you're using tasks on Amazon Linux 2023 the swappiness parameter isn't\n\t\t\t\tsupported.
This allows you to tune a container's memory swappiness behavior. A\n\t\t\t\tswappiness value of 0 will cause swapping to not happen\n\t\t\tunless absolutely necessary. A swappiness value of 100 will\n\t\t\tcause pages to be swapped very aggressively. Accepted values are whole numbers between\n\t\t\t\t0 and 100. If the swappiness parameter is not\n\t\t\tspecified, a default value of 60 is used. If a value is not specified for\n\t\t\t\tmaxSwap then this parameter is ignored. This parameter maps to the\n\t\t\t\t--memory-swappiness option to docker run.
If you're using tasks that use the Fargate launch type, the\n\t\t\t\t\tswappiness parameter isn't supported.
If you're using tasks on Amazon Linux 2023, the swappiness parameter isn't\n\t\t\t\tsupported.
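Pulling the preceding Linux parameters together, a hedged sketch for an EC2-hosted container; none of these fields apply on Fargate, and the values are illustrative:

```ts
import type { LinuxParameters } from "@aws-sdk/client-ecs";

const linuxParameters: LinuxParameters = {
  sharedMemorySize: 256, // MiB for /dev/shm (--shm-size)
  tmpfs: [
    { containerPath: "/scratch", size: 64, mountOptions: ["rw", "noexec"] }, // --tmpfs
  ],
  maxSwap: 512,   // MiB of swap the container may use
  swappiness: 10, // 0 avoids swapping, 100 swaps aggressively (--memory-swappiness)
};
```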
The log driver to use for the container.
\nFor tasks on Fargate, the supported log drivers are awslogs,\n\t\t\t\tsplunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are\n\t\t\t\tawslogs, fluentd, gelf,\n\t\t\t\tjson-file, journald,\n\t\t\t\tlogentries,syslog, splunk, and\n\t\t\t\tawsfirelens.
For more information about using the awslogs log driver, see Using\n\t\t\t\tthe awslogs log driver in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Custom log routing in the Amazon Elastic Container Service Developer Guide.
If you have a custom driver that isn't listed, you can fork the Amazon ECS container\n\t\t\t\tagent project that's available\n\t\t\t\t\ton GitHub and customize it to work with that driver. We encourage you to\n\t\t\t\tsubmit pull requests for changes that you would like to have included. However, we\n\t\t\t\tdon't currently provide support for running modified copies of this software.
\nThe log driver to use for the container.
\nFor tasks on Fargate, the supported log drivers are awslogs,\n\t\t\t\tsplunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are\n\t\t\t\tawslogs, fluentd, gelf,\n\t\t\t\tjson-file, journald, syslog,\n\t\t\t\tsplunk, and awsfirelens.
For more information about using the awslogs log driver, see Send\n\t\t\t\tAmazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send\n\t\t\t\tAmazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
If you have a custom driver that isn't listed, you can fork the Amazon ECS container\n\t\t\t\tagent project that's available\n\t\t\t\t\ton GitHub and customize it to work with that driver. We encourage you to\n\t\t\t\tsubmit pull requests for changes that you would like to have included. However, we\n\t\t\t\tdon't currently provide support for running modified copies of this software.
\nThe log configuration for the container. This parameter maps to LogConfig\n\t\t\tin the Create a container section of the Docker Remote API and the\n\t\t\t\t--log-driver option to \n docker\n\t\t\t\t\trun\n .
By default, containers use the same logging driver that the Docker daemon uses.\n\t\t\tHowever, the container might use a different logging driver than the Docker daemon by\n\t\t\tspecifying a log driver configuration in the container definition. For more information\n\t\t\tabout the options for different supported log drivers, see Configure logging\n\t\t\t\tdrivers in the Docker documentation.
\nUnderstand the following when specifying a log configuration for your\n\t\t\tcontainers.
\nAmazon ECS currently supports a subset of the logging drivers available to the\n\t\t\t\t\tDocker daemon. Additional log drivers may be available in future releases of the\n\t\t\t\t\tAmazon ECS container agent.
\nFor tasks on Fargate, the supported log drivers are awslogs,\n\t\t\t\t\t\tsplunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are\n\t\t\t\t\t\tawslogs, fluentd, gelf,\n\t\t\t\t\t\tjson-file, journald,\n\t\t\t\t\t\tlogentries,syslog, splunk, and\n\t\t\t\t\t\tawsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on\n\t\t\t\t\tyour container instance.
\nFor tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must\n\t\t\t\t\tregister the available logging drivers with the\n\t\t\t\t\t\tECS_AVAILABLE_LOGGING_DRIVERS environment variable before\n\t\t\t\t\tcontainers placed on that instance can use these log configuration options. For\n\t\t\t\t\tmore information, see Amazon ECS container agent configuration in the\n\t\t\t\t\tAmazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the\n\t\t\t\t\tunderlying infrastructure your tasks are hosted on, any additional software\n\t\t\t\t\tneeded must be installed outside of the task. For example, the Fluentd output\n\t\t\t\t\taggregators or a remote host running Logstash to send Gelf logs to.
\nThe log configuration for the container. This parameter maps to LogConfig\n\t\t\tin the docker create-container command and the\n\t\t\t\t--log-driver option to docker\n\t\t\t\t\trun.
By default, containers use the same logging driver that the Docker daemon uses.\n\t\t\tHowever, the container might use a different logging driver than the Docker daemon by\n\t\t\tspecifying a log driver configuration in the container definition.
\nUnderstand the following when specifying a log configuration for your\n\t\t\tcontainers.
\nAmazon ECS currently supports a subset of the logging drivers available to the\n\t\t\t\t\tDocker daemon. Additional log drivers may be available in future releases of the\n\t\t\t\t\tAmazon ECS container agent.
\nFor tasks on Fargate, the supported log drivers are awslogs,\n\t\t\t\t\t\tsplunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are\n\t\t\t\t\t\tawslogs, fluentd, gelf,\n\t\t\t\t\t\tjson-file, journald, syslog,\n\t\t\t\t\t\tsplunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on\n\t\t\t\t\tyour container instance.
\nFor tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must\n\t\t\t\t\tregister the available logging drivers with the\n\t\t\t\t\t\tECS_AVAILABLE_LOGGING_DRIVERS environment variable before\n\t\t\t\t\tcontainers placed on that instance can use these log configuration options. For\n\t\t\t\t\tmore information, see Amazon ECS container agent configuration in the\n\t\t\t\t\tAmazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the\n\t\t\t\t\tunderlying infrastructure your tasks are hosted on, any additional software\n\t\t\t\t\tneeded must be installed outside of the task. For example, the Fluentd output\n\t\t\t\t\taggregators or a remote host running Logstash to send Gelf logs to.
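A typical awslogs configuration as a sketch; the log group and region are placeholder assumptions, and the group must already exist unless awslogs-create-group is enabled:

```ts
import type { LogConfiguration } from "@aws-sdk/client-ecs";

// awslogs is supported on both Fargate and EC2-hosted tasks.
const logConfiguration: LogConfiguration = {
  logDriver: "awslogs",
  options: {
    "awslogs-group": "/ecs/my-service", // placeholder log group
    "awslogs-region": "us-east-1",      // placeholder region
    "awslogs-stream-prefix": "web",
  },
};
```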
\nPort mappings allow containers to access ports on the host container instance to send\n\t\t\tor receive traffic. Port mappings are specified as part of the container\n\t\t\tdefinition.
\nIf you use containers in a task with the awsvpc or host\n\t\t\tnetwork mode, specify the exposed ports using containerPort. The\n\t\t\t\thostPort can be left blank or it must be the same value as the\n\t\t\t\tcontainerPort.
Most fields of this parameter (containerPort, hostPort,\n\t\t\t\tprotocol) maps to PortBindings in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--publish option to \n docker\n\t\t\t\t\trun\n . If the network mode of a task definition is set to\n\t\t\t\thost, host ports must either be undefined or match the container port\n\t\t\tin the port mapping.
You can't expose the same container port for multiple protocols. If you attempt\n\t\t\t\tthis, an error is returned.
\nAfter a task reaches the RUNNING status, manual and automatic host and\n\t\t\tcontainer port assignments are visible in the networkBindings section of\n\t\t\t\tDescribeTasks API responses.
Port mappings allow containers to access ports on the host container instance to send\n\t\t\tor receive traffic. Port mappings are specified as part of the container\n\t\t\tdefinition.
\nIf you use containers in a task with the awsvpc or host\n\t\t\tnetwork mode, specify the exposed ports using containerPort. The\n\t\t\t\thostPort can be left blank or it must be the same value as the\n\t\t\t\tcontainerPort.
Most fields of this parameter (containerPort, hostPort,\n\t\t\tprotocol) map to PortBindings in the docker create-container command and the\n\t\t\t\t--publish option to docker\n\t\t\t\t\trun. If the network mode of a task definition is set to\n\t\t\t\thost, host ports must either be undefined or match the container port\n\t\t\tin the port mapping.
You can't expose the same container port for multiple protocols. If you attempt\n\t\t\t\tthis, an error is returned.
\nAfter a task reaches the RUNNING status, manual and automatic host and\n\t\t\tcontainer port assignments are visible in the networkBindings section of\n\t\t\t\tDescribeTasks API responses.
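The two common mapping shapes, sketched with this client's PortMapping type:

```ts
import type { PortMapping } from "@aws-sdk/client-ecs";

// In awsvpc or host network mode, hostPort is either left unset or must equal
// containerPort; in bridge mode, omitting hostPort requests a dynamically
// assigned host port.
const portMappings: PortMapping[] = [
  { containerPort: 80, protocol: "tcp" },                   // awsvpc/host style
  { containerPort: 8080, hostPort: 8080, protocol: "tcp" }, // explicit 1:1 mapping
];
```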
Registers a new task definition from the supplied family and\n\t\t\t\tcontainerDefinitions. Optionally, you can add data volumes to your\n\t\t\tcontainers with the volumes parameter. For more information about task\n\t\t\tdefinition parameters and defaults, see Amazon ECS Task\n\t\t\t\tDefinitions in the Amazon Elastic Container Service Developer Guide.
You can specify a role for your task with the taskRoleArn parameter. When\n\t\t\tyou specify a role for a task, its containers can then use the latest versions of the\n\t\t\tCLI or SDKs to make API requests to the Amazon Web Services services that are specified in the\n\t\t\tpolicy that's associated with the role. For more information, see IAM\n\t\t\t\tRoles for Tasks in the Amazon Elastic Container Service Developer Guide.
You can specify a Docker networking mode for the containers in your task definition\n\t\t\twith the networkMode parameter. The available network modes correspond to\n\t\t\tthose described in Network\n\t\t\t\tsettings in the Docker run reference. If you specify the awsvpc\n\t\t\tnetwork mode, the task is allocated an elastic network interface, and you must specify a\n\t\t\t\tNetworkConfiguration when you create a service or run a task with\n\t\t\tthe task definition. For more information, see Task Networking\n\t\t\tin the Amazon Elastic Container Service Developer Guide.
Registers a new task definition from the supplied family and\n\t\t\t\tcontainerDefinitions. Optionally, you can add data volumes to your\n\t\t\tcontainers with the volumes parameter. For more information about task\n\t\t\tdefinition parameters and defaults, see Amazon ECS Task\n\t\t\t\tDefinitions in the Amazon Elastic Container Service Developer Guide.
You can specify a role for your task with the taskRoleArn parameter. When\n\t\t\tyou specify a role for a task, its containers can then use the latest versions of the\n\t\t\tCLI or SDKs to make API requests to the Amazon Web Services services that are specified in the\n\t\t\tpolicy that's associated with the role. For more information, see IAM\n\t\t\t\tRoles for Tasks in the Amazon Elastic Container Service Developer Guide.
You can specify a Docker networking mode for the containers in your task definition\n\t\t\twith the networkMode parameter. If you specify the awsvpc\n\t\t\tnetwork mode, the task is allocated an elastic network interface, and you must specify a\n\t\t\t\tNetworkConfiguration when you create a service or run a task with\n\t\t\tthe task definition. For more information, see Task Networking\n\t\t\tin the Amazon Elastic Container Service Developer Guide.
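As a rough sketch of the parameters described above (taskRoleArn, networkMode, and volumes); the role ARN, image, and names are placeholders.

```javascript
import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
const response = await client.send(
  new RegisterTaskDefinitionCommand({
    family: "batch-worker", // placeholder
    taskRoleArn: "arn:aws:iam::111122223333:role/ecsTaskRole", // placeholder
    networkMode: "awsvpc", // requires a NetworkConfiguration at run time
    volumes: [{ name: "scratch", host: {} }], // optional data volume
    containerDefinitions: [
      {
        name: "worker",
        image: "111122223333.dkr.ecr.us-east-1.amazonaws.com/worker:latest", // placeholder
        memory: 512,
        essential: true,
        mountPoints: [{ sourceVolume: "scratch", containerPath: "/scratch" }],
      },
    ],
  })
);
```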
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent\n permission to make Amazon Web Services API calls on your behalf. The task execution IAM role is required\n depending on the requirements of your task. For more information, see Amazon ECS task\n execution IAM role in the Amazon Elastic Container Service Developer Guide.
" + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent\n permission to make Amazon Web Services API calls on your behalf. For informationabout the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
" } }, "networkMode": { "target": "com.amazonaws.ecs#NetworkMode", "traits": { - "smithy.api#documentation": "The Docker networking mode to use for the containers in the task. The valid values are\n none, bridge, awsvpc, and host.\n If no network mode is specified, the default is bridge.
For Amazon ECS tasks on Fargate, the awsvpc network mode is required. \n For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, <default> or awsvpc can be used. If the network\n mode is set to none, you cannot specify port mappings in your container\n definitions, and the task's containers do not have external connectivity. The\n host and awsvpc network modes offer the highest networking\n performance for containers because they use the EC2 network stack instead of the\n virtualized network stack provided by the bridge mode.
With the host and awsvpc network modes, exposed container\n ports are mapped directly to the corresponding host port (for the host\n network mode) or the attached elastic network interface port (for the\n awsvpc network mode), so you cannot take advantage of dynamic host port\n mappings.
When using the host network mode, you should not run\n containers using the root user (UID 0). It is considered best practice\n to use a non-root user.
If the network mode is awsvpc, the task is allocated an elastic network\n interface, and you must specify a NetworkConfiguration value when you create\n a service or run a task with the task definition. For more information, see Task Networking in the\n Amazon Elastic Container Service Developer Guide.
If the network mode is host, you cannot run multiple instantiations of the\n same task on a single container instance when port mappings are used.
For more information, see Network\n settings in the Docker run reference.
" + "smithy.api#documentation": "The Docker networking mode to use for the containers in the task. The valid values are\n none, bridge, awsvpc, and host.\n If no network mode is specified, the default is bridge.
For Amazon ECS tasks on Fargate, the awsvpc network mode is required. \n For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, <default> or awsvpc can be used. If the network\n mode is set to none, you cannot specify port mappings in your container\n definitions, and the task's containers do not have external connectivity. The\n host and awsvpc network modes offer the highest networking\n performance for containers because they use the EC2 network stack instead of the\n virtualized network stack provided by the bridge mode.
With the host and awsvpc network modes, exposed container\n ports are mapped directly to the corresponding host port (for the host\n network mode) or the attached elastic network interface port (for the\n awsvpc network mode), so you cannot take advantage of dynamic host port\n mappings.
When using the host network mode, you should not run\n containers using the root user (UID 0). It is considered best practice\n to use a non-root user.
If the network mode is awsvpc, the task is allocated an elastic network\n interface, and you must specify a NetworkConfiguration value when you create\n a service or run a task with the task definition. For more information, see Task Networking in the\n Amazon Elastic Container Service Developer Guide.
If the network mode is host, you cannot run multiple instantiations of the\n same task on a single container instance when port mappings are used.
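Because awsvpc tasks need a NetworkConfiguration, a run-task call looks roughly like the sketch below; the cluster, task definition, subnet, and security group IDs are placeholders.

```javascript
import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
const response = await client.send(
  new RunTaskCommand({
    cluster: "default",
    taskDefinition: "web:1", // placeholder family:revision
    count: 1,
    // Required because the task definition uses the awsvpc network mode.
    networkConfiguration: {
      awsvpcConfiguration: {
        subnets: ["subnet-0123456789abcdef0"],
        securityGroups: ["sg-0123456789abcdef0"],
        assignPublicIp: "DISABLED",
      },
    },
  })
);
```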
The process namespace to use for the containers in the task. The valid\n values are host or task. On Fargate for\n Linux containers, the only valid value is task. For\n example, monitoring sidecars might need pidMode to access\n information about other containers running in the same task.
If host is specified, all containers within the tasks\n that specified the host PID mode on the same container\n instance share the same process namespace with the host Amazon EC2\n instance.
If task is specified, all containers within the specified\n task share the same process namespace.
If no value is specified, the\n default is a private namespace for each container. For more information,\n see PID settings in the Docker run\n reference.
\nIf the host PID mode is used, there's a heightened risk\n of undesired process namespace exposure. For more information, see\n Docker security.
This parameter is not supported for Windows containers.
\nThis parameter is only supported for tasks that are hosted on\n Fargate if the tasks are using platform version 1.4.0 or later\n (Linux). This isn't supported for Windows containers on\n Fargate.
The process namespace to use for the containers in the task. The valid\n values are host or task. On Fargate for\n Linux containers, the only valid value is task. For\n example, monitoring sidecars might need pidMode to access\n information about other containers running in the same task.
If host is specified, all containers within the tasks\n that specified the host PID mode on the same container\n instance share the same process namespace with the host Amazon EC2\n instance.
If task is specified, all containers within the specified\n task share the same process namespace.
If no value is specified, the\n default is a private namespace for each container.
\nIf the host PID mode is used, there's a heightened risk\n of undesired process namespace exposure.
This parameter is not supported for Windows containers.
\nThis parameter is only supported for tasks that are hosted on\n Fargate if the tasks are using platform version 1.4.0 or later\n (Linux). This isn't supported for Windows containers on\n Fargate.
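A minimal sketch of the pidMode setting described above, using task so a sidecar can observe the application container's processes; the family and images are placeholders.

```javascript
import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
// pidMode: "task" lets the monitoring sidecar see the app container's processes.
const response = await client.send(
  new RegisterTaskDefinitionCommand({
    family: "monitored-app", // placeholder
    pidMode: "task",
    containerDefinitions: [
      { name: "app", image: "app:latest", memory: 512, essential: true }, // placeholders
      { name: "monitor", image: "monitor:latest", memory: 256, essential: false },
    ],
  })
);
```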
The IPC resource namespace to use for the containers in the task. The valid values are\n host, task, or none. If host is\n specified, then all containers within the tasks that specified the host IPC\n mode on the same container instance share the same IPC resources with the host Amazon EC2\n instance. If task is specified, all containers within the specified task\n share the same IPC resources. If none is specified, then IPC resources\n within the containers of a task are private and not shared with other containers in a\n task or on the container instance. If no value is specified, then the IPC resource\n namespace sharing depends on the Docker daemon setting on the container instance. For\n more information, see IPC\n settings in the Docker run reference.
If the host IPC mode is used, be aware that there is a heightened risk of\n undesired IPC namespace exposure. For more information, see Docker\n security.
If you are setting namespaced kernel parameters using systemControls for\n the containers in the task, the following will apply to your IPC resource namespace. For\n more information, see System\n Controls in the Amazon Elastic Container Service Developer Guide.
For tasks that use the host IPC mode, IPC namespace related\n systemControls are not supported.
For tasks that use the task IPC mode, IPC namespace related\n systemControls will apply to all containers within a\n task.
This parameter is not supported for Windows containers or tasks run on Fargate.
\nThe IPC resource namespace to use for the containers in the task. The valid values are\n host, task, or none. If host is\n specified, then all containers within the tasks that specified the host IPC\n mode on the same container instance share the same IPC resources with the host Amazon EC2\n instance. If task is specified, all containers within the specified task\n share the same IPC resources. If none is specified, then IPC resources\n within the containers of a task are private and not shared with other containers in a\n task or on the container instance. If no value is specified, then the IPC resource\n namespace sharing depends on the Docker daemon setting on the container instance.
If the host IPC mode is used, be aware that there is a heightened risk of\n undesired IPC namespace exposure.
If you are setting namespaced kernel parameters using systemControls for\n the containers in the task, the following will apply to your IPC resource namespace. For\n more information, see System\n Controls in the Amazon Elastic Container Service Developer Guide.
For tasks that use the host IPC mode, IPC namespace related\n systemControls are not supported.
For tasks that use the task IPC mode, IPC namespace related\n systemControls will apply to all containers within a\n task.
This parameter is not supported for Windows containers or tasks run on Fargate.
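For illustration, a sketch that opts into the most isolated setting, ipcMode: "none", so each container keeps private IPC resources; the names are placeholders.

```javascript
import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
// ipcMode: "none" keeps each container's IPC resources private, avoiding the
// namespace-exposure risk described above.
const response = await client.send(
  new RegisterTaskDefinitionCommand({
    family: "isolated-app", // placeholder
    ipcMode: "none",
    containerDefinitions: [
      { name: "app", image: "app:latest", memory: 512, essential: true }, // placeholders
    ],
  })
);
```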
\nThe value for the specified resource type.
\nWhen the type is GPU, the value is the number of physical GPUs the\n\t\t\tAmazon ECS container agent reserves for the container. The number of GPUs that's reserved for\n\t\t\tall containers in a task can't exceed the number of available GPUs on the container\n\t\t\tinstance that the task is launched on.
When the type is InferenceAccelerator, the value matches\n\t\t\tthe deviceName for an InferenceAccelerator specified in a task definition.
The value for the specified resource type.
\nWhen the type is GPU, the value is the number of physical\n\t\t\t\tGPUs the Amazon ECS container agent reserves for the container. The number\n\t\t\tof GPUs that's reserved for all containers in a task can't exceed the number of\n\t\t\tavailable GPUs on the container instance that the task is launched on.
When the type is InferenceAccelerator, the value matches the\n\t\t\t\tdeviceName for an InferenceAccelerator specified in a task definition.
An optional tag specified when a task is started. For example, if you automatically\n\t\t\ttrigger a task to run a batch process job, you could apply a unique identifier for that\n\t\t\tjob to your task with the startedBy parameter. You can then identify which\n\t\t\ttasks belong to that job by filtering the results of a ListTasks call\n\t\t\twith the startedBy value. Up to 128 letters (uppercase and lowercase),\n\t\t\tnumbers, hyphens (-), and underscores (_) are allowed.
If a task is started by an Amazon ECS service, then the startedBy parameter\n\t\t\tcontains the deployment ID of the service that starts it.
An optional tag specified when a task is started. For example, if you automatically\n\t\t\ttrigger a task to run a batch process job, you could apply a unique identifier for that\n\t\t\tjob to your task with the startedBy parameter. You can then identify which\n\t\t\ttasks belong to that job by filtering the results of a ListTasks call with\n\t\t\tthe startedBy value. Up to 128 letters (uppercase and lowercase), numbers,\n\t\t\thyphens (-), forward slash (/), and underscores (_) are allowed.
If a task is started by an Amazon ECS service, then the startedBy parameter\n\t\t\tcontains the deployment ID of the service that starts it.
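A small sketch of the startedBy round trip: tag tasks at launch, then filter by the same value; the cluster and task definition names are placeholders.

```javascript
import { ECSClient, RunTaskCommand, ListTasksCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
const jobId = "batch-job/2024-06-01"; // placeholder; forward slashes are allowed

// Tag every task launched for this job.
await client.send(
  new RunTaskCommand({ cluster: "default", taskDefinition: "batch-worker", startedBy: jobId })
);

// Later, find all tasks that belong to that job.
const { taskArns } = await client.send(
  new ListTasksCommand({ cluster: "default", startedBy: jobId })
);
```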
The family and revision (family:revision) or\n\t\t\tfull ARN of the task definition to run. If a revision isn't specified,\n\t\t\tthe latest ACTIVE revision is used.
The full ARN value must match the value that you specified as the\n\t\t\t\tResource of the principal's permissions policy.
When you specify a task definition, you must either specify a specific revision, or\n\t\t\tall revisions in the ARN.
\nTo specify a specific revision, include the revision number in the ARN. For example,\n\t\t\tto specify revision 2, use\n\t\t\t\tarn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:2.
To specify all revisions, use the wildcard (*) in the ARN. For example, to specify all\n\t\t\trevisions, use\n\t\t\t\tarn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:*.
For more information, see Policy Resources for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
", + "smithy.api#documentation": "The family and revision (family:revision) or\n\t\t\tfull ARN of the task definition to run. If a revision isn't specified,\n\t\t\tthe latest ACTIVE revision is used.
The full ARN value must match the value that you specified as the\n\t\t\t\tResource of the principal's permissions policy.
When you specify a task definition, you must either specify a specific revision, or\n\t\t\tall revisions in the ARN.
\nTo specify a specific revision, include the revision number in the ARN. For example,\n\t\t\tto specify revision 2, use\n\t\t\t\tarn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:2.
To specify all revisions, use the wildcard (*) in the ARN. For example, to specify\n\t\t\tall revisions, use\n\t\t\t\tarn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:*.
For more information, see Policy Resources for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
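For example, pinning a specific revision in a run-task call might look like the sketch below; the account ID and family name reuse the placeholder values from the text above.

```javascript
import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
// Pin revision 2 explicitly; this ARN must match the Resource element of the
// caller's permissions policy (account ID and family name are placeholders).
const response = await client.send(
  new RunTaskCommand({
    cluster: "default",
    taskDefinition: "arn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:2",
  })
);
```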
", "smithy.api#required": {} } }, @@ -9609,7 +9648,7 @@ "tasks": { "target": "com.amazonaws.ecs#Tasks", "traits": { - "smithy.api#documentation": "A full description of the tasks that were run. The tasks that were successfully placed\n\t\t\ton your cluster are described here.
\n " + "smithy.api#documentation": "A full description of the tasks that were run. The tasks that were successfully placed\n\t\t\ton your cluster are described here.
" } }, "failures": { @@ -10618,7 +10657,7 @@ "startedBy": { "target": "com.amazonaws.ecs#String", "traits": { - "smithy.api#documentation": "An optional tag specified when a task is started. For example, if you automatically\n\t\t\ttrigger a task to run a batch process job, you could apply a unique identifier for that\n\t\t\tjob to your task with the startedBy parameter. You can then identify which\n\t\t\ttasks belong to that job by filtering the results of a ListTasks call\n\t\t\twith the startedBy value. Up to 36 letters (uppercase and lowercase),\n\t\t\tnumbers, hyphens (-), and underscores (_) are allowed.
If a task is started by an Amazon ECS service, the startedBy parameter\n\t\t\tcontains the deployment ID of the service that starts it.
An optional tag specified when a task is started. For example, if you automatically\n\t\t\ttrigger a task to run a batch process job, you could apply a unique identifier for that\n\t\t\tjob to your task with the startedBy parameter. You can then identify which\n\t\t\ttasks belong to that job by filtering the results of a ListTasks call with\n\t\t\tthe startedBy value. Up to 36 letters (uppercase and lowercase), numbers,\n\t\t\thyphens (-), forward slash (/), and underscores (_) are allowed.
If a task is started by an Amazon ECS service, the startedBy parameter\n\t\t\tcontains the deployment ID of the service that starts it.
Stops a running task. Any tags associated with the task will be deleted.
\nWhen StopTask is called on a task, the equivalent of docker\n\t\t\t\tstop is issued to the containers running in the task. This results in a\n\t\t\t\tSIGTERM value and a default 30-second timeout, after which the\n\t\t\t\tSIGKILL value is sent and the containers are forcibly stopped. If the\n\t\t\tcontainer handles the SIGTERM value gracefully and exits within 30 seconds\n\t\t\tfrom receiving it, no SIGKILL value is sent.
For Windows containers, POSIX signals do not work and the runtime stops the container by sending\n\t\t\ta CTRL_SHUTDOWN_EVENT. For more information, see Unable to react to graceful shutdown\n\t\t\t\tof (Windows) container #25982 on GitHub.
The default 30-second timeout can be configured on the Amazon ECS container agent with\n\t\t\t\tthe ECS_CONTAINER_STOP_TIMEOUT variable. For more information, see\n\t\t\t\t\tAmazon ECS Container Agent Configuration in the\n\t\t\t\tAmazon Elastic Container Service Developer Guide.
Stops a running task. Any tags associated with the task will be deleted.
\nWhen StopTask is called on a task, the equivalent of docker\n\t\t\t\tstop is issued to the containers running in the task. This results in a\n\t\t\t\tSIGTERM value and a default 30-second timeout, after which the\n\t\t\t\tSIGKILL value is sent and the containers are forcibly stopped. If the\n\t\t\tcontainer handles the SIGTERM value gracefully and exits within 30 seconds\n\t\t\tfrom receiving it, no SIGKILL value is sent.
For Windows containers, POSIX signals do not work and the runtime stops the container by\n\t\t\tsending a CTRL_SHUTDOWN_EVENT. For more information, see Unable to react to graceful shutdown\n\t\t\t\tof (Windows) container #25982 on GitHub.
The default 30-second timeout can be configured on the Amazon ECS container agent with\n\t\t\t\tthe ECS_CONTAINER_STOP_TIMEOUT variable. For more information, see\n\t\t\t\t\tAmazon ECS Container Agent Configuration in the\n\t\t\t\tAmazon Elastic Container Service Developer Guide.
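A minimal stop-task sketch; the cluster name and task ARN are placeholders.

```javascript
import { ECSClient, StopTaskCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
// The containers receive SIGTERM, then SIGKILL after the stop timeout
// (30 seconds by default, per ECS_CONTAINER_STOP_TIMEOUT).
const response = await client.send(
  new StopTaskCommand({
    cluster: "default",
    task: "arn:aws:ecs:us-east-1:111122223333:task/default/1dc5c17a422b4c3f8a5d4b9a1e6c7f8a",
    reason: "Deploying a new revision",
  })
);
```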
A list of namespaced kernel parameters to set in the container. This parameter maps to\n\t\t\t\tSysctls in the Create a container section of the\n\t\t\tDocker Remote API and the --sysctl option to docker run. For example, you can configure\n\t\t\t\tnet.ipv4.tcp_keepalive_time setting to maintain longer lived\n\t\t\tconnections.
We don't recommend that you specify network-related systemControls\n\t\t\tparameters for multiple containers in a single task that also uses either the\n\t\t\t\tawsvpc or host network mode. Doing this has the following\n\t\t\tdisadvantages:
For tasks that use the awsvpc network mode including Fargate,\n\t\t\t\t\tif you set systemControls for any container, it applies to all\n\t\t\t\t\tcontainers in the task. If you set different systemControls for\n\t\t\t\t\tmultiple containers in a single task, the container that's started last\n\t\t\t\t\tdetermines which systemControls take effect.
For tasks that use the host network mode, the network namespace\n\t\t\t\t\t\tsystemControls aren't supported.
If you're setting an IPC resource namespace to use for the containers in the task, the\n\t\t\tfollowing conditions apply to your system controls. For more information, see IPC mode.
\nFor tasks that use the host IPC mode, IPC namespace\n\t\t\t\t\t\tsystemControls aren't supported.
For tasks that use the task IPC mode, IPC namespace\n\t\t\t\t\t\tsystemControls values apply to all containers within a\n\t\t\t\t\ttask.
This parameter is not supported for Windows containers.
\nThis parameter is only supported for tasks that are hosted on\n Fargate if the tasks are using platform version 1.4.0 or later\n (Linux). This isn't supported for Windows containers on\n Fargate.
A list of namespaced kernel parameters to set in the container. This parameter maps to\n\t\t\tSysctls in the docker create-container command and the --sysctl option to docker run. For example, you can configure the\n\t\t\t\tnet.ipv4.tcp_keepalive_time setting to maintain longer lived\n\t\t\tconnections.
We don't recommend that you specify network-related systemControls\n\t\t\tparameters for multiple containers in a single task that also uses either the\n\t\t\t\tawsvpc or host network mode. Doing this has the following\n\t\t\tdisadvantages:
For tasks that use the awsvpc network mode including Fargate,\n\t\t\t\t\tif you set systemControls for any container, it applies to all\n\t\t\t\t\tcontainers in the task. If you set different systemControls for\n\t\t\t\t\tmultiple containers in a single task, the container that's started last\n\t\t\t\t\tdetermines which systemControls take effect.
For tasks that use the host network mode, the network namespace\n\t\t\t\t\t\tsystemControls aren't supported.
If you're setting an IPC resource namespace to use for the containers in the task, the\n\t\t\tfollowing conditions apply to your system controls. For more information, see IPC mode.
\nFor tasks that use the host IPC mode, IPC namespace\n\t\t\t\t\t\tsystemControls aren't supported.
For tasks that use the task IPC mode, IPC namespace\n\t\t\t\t\t\tsystemControls values apply to all containers within a\n\t\t\t\t\ttask.
This parameter is not supported for Windows containers.
\nThis parameter is only supported for tasks that are hosted on\n Fargate if the tasks are using platform version 1.4.0 or later\n (Linux). This isn't supported for Windows containers on\n Fargate.
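A sketch of a single namespaced kernel parameter set through systemControls, equivalent to the docker run --sysctl flag mentioned above; the names and the value are placeholders.

```javascript
import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
const response = await client.send(
  new RegisterTaskDefinitionCommand({
    family: "keepalive-tuned", // placeholder
    containerDefinitions: [
      {
        name: "app",
        image: "app:latest", // placeholder
        memory: 512,
        essential: true,
        // Equivalent to `docker run --sysctl net.ipv4.tcp_keepalive_time=120`.
        systemControls: [{ namespace: "net.ipv4.tcp_keepalive_time", value: "120" }],
      },
    ],
  })
);
```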
The specified target wasn't found. You can view your available container instances\n\t\t\twith ListContainerInstances. Amazon ECS container instances are\n\t\t\tcluster-specific and Region-specific.
", + "smithy.api#documentation": "The specified target wasn't found. You can view your available container instances\n\t\t\twith ListContainerInstances. Amazon ECS container instances are cluster-specific and\n\t\t\tRegion-specific.
", "smithy.api#error": "client" } }, @@ -11473,19 +11512,19 @@ "taskRoleArn": { "target": "com.amazonaws.ecs#String", "traits": { - "smithy.api#documentation": "The short name or full Amazon Resource Name (ARN) of the Identity and Access Management role that grants containers in the\n\t\t\ttask permission to call Amazon Web Services APIs on your behalf. For more information, see Amazon ECS\n\t\t\t\tTask Role in the Amazon Elastic Container Service Developer Guide.
\nIAM roles for tasks on Windows require that the -EnableTaskIAMRole\n\t\t\toption is set when you launch the Amazon ECS-optimized Windows AMI. Your containers must also run some\n\t\t\tconfiguration code to use the feature. For more information, see Windows IAM roles\n\t\t\t\tfor tasks in the Amazon Elastic Container Service Developer Guide.
The short name or full Amazon Resource Name (ARN) of the Identity and Access Management role that grants containers in the\n\t\t\ttask permission to call Amazon Web Services APIs on your behalf. For information about the required\n\t\t\tIAM roles for Amazon ECS, see IAM\n\t\t\t\troles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
" } }, "executionRoleArn": { "target": "com.amazonaws.ecs#String", "traits": { - "smithy.api#documentation": "The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent\n permission to make Amazon Web Services API calls on your behalf. The task execution IAM role is required\n depending on the requirements of your task. For more information, see Amazon ECS task\n execution IAM role in the Amazon Elastic Container Service Developer Guide.
" + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent\n permission to make Amazon Web Services API calls on your behalf. For informationabout the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
" } }, "networkMode": { "target": "com.amazonaws.ecs#NetworkMode", "traits": { - "smithy.api#documentation": "The Docker networking mode to use for the containers in the task. The valid values are\n none, bridge, awsvpc, and host.\n If no network mode is specified, the default is bridge.
For Amazon ECS tasks on Fargate, the awsvpc network mode is required. \n For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, <default> or awsvpc can be used. If the network\n mode is set to none, you cannot specify port mappings in your container\n definitions, and the task's containers do not have external connectivity. The\n host and awsvpc network modes offer the highest networking\n performance for containers because they use the EC2 network stack instead of the\n virtualized network stack provided by the bridge mode.
With the host and awsvpc network modes, exposed container\n ports are mapped directly to the corresponding host port (for the host\n network mode) or the attached elastic network interface port (for the\n awsvpc network mode), so you cannot take advantage of dynamic host port\n mappings.
When using the host network mode, you should not run\n containers using the root user (UID 0). It is considered best practice\n to use a non-root user.
If the network mode is awsvpc, the task is allocated an elastic network\n interface, and you must specify a NetworkConfiguration value when you create\n a service or run a task with the task definition. For more information, see Task Networking in the\n Amazon Elastic Container Service Developer Guide.
If the network mode is host, you cannot run multiple instantiations of the\n same task on a single container instance when port mappings are used.
For more information, see Network\n settings in the Docker run reference.
" + "smithy.api#documentation": "The Docker networking mode to use for the containers in the task. The valid values are\n none, bridge, awsvpc, and host.\n If no network mode is specified, the default is bridge.
For Amazon ECS tasks on Fargate, the awsvpc network mode is required. \n For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, <default> or awsvpc can be used. If the network\n mode is set to none, you cannot specify port mappings in your container\n definitions, and the task's containers do not have external connectivity. The\n host and awsvpc network modes offer the highest networking\n performance for containers because they use the EC2 network stack instead of the\n virtualized network stack provided by the bridge mode.
With the host and awsvpc network modes, exposed container\n ports are mapped directly to the corresponding host port (for the host\n network mode) or the attached elastic network interface port (for the\n awsvpc network mode), so you cannot take advantage of dynamic host port\n mappings.
When using the host network mode, you should not run\n containers using the root user (UID 0). It is considered best practice\n to use a non-root user.
If the network mode is awsvpc, the task is allocated an elastic network\n interface, and you must specify a NetworkConfiguration value when you create\n a service or run a task with the task definition. For more information, see Task Networking in the\n Amazon Elastic Container Service Developer Guide.
If the network mode is host, you cannot run multiple instantiations of the\n same task on a single container instance when port mappings are used.
The number of cpu units used by the task. If you use the EC2 launch type,\n\t\t\tthis field is optional. Any value can be used. If you use the Fargate launch type, this\n\t\t\tfield is required. You must use one of the following values. The value that you choose\n\t\t\tdetermines your range of valid values for the memory parameter.
The CPU units cannot be less than 1 vCPU when you use Windows containers on\n\t\t\tFargate.
\n256 (.25 vCPU) - Available memory values: 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB)
512 (.5 vCPU) - Available memory values: 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB)
1024 (1 vCPU) - Available memory values: 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB)
2048 (2 vCPU) - Available memory values: Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB)
4096 (4 vCPU) - Available memory values: Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB)
8192 (8 vCPU) - Available memory values: Between 16 GB and 60 GB in 4 GB increments
This option requires Linux platform 1.4.0 or\n later.
16384 (16 vCPU) - Available memory values: Between 32 GB and 120 GB in 8 GB increments
This option requires Linux platform 1.4.0 or\n later.
The number of cpu units used by the task. If you use the EC2 launch type,\n\t\t\tthis field is optional. Any value can be used. If you use the Fargate launch type, this\n\t\t\tfield is required. You must use one of the following values. The value that you choose\n\t\t\tdetermines your range of valid values for the memory parameter.
If you use the EC2 launch type, this field is optional. Supported values\n\t\t\tare between 128 CPU units (0.125 vCPUs) and 10240\n\t\t\tCPU units (10 vCPUs).
The CPU units cannot be less than 1 vCPU when you use Windows containers on\n\t\t\tFargate.
\n256 (.25 vCPU) - Available memory values: 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB)
512 (.5 vCPU) - Available memory values: 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB)
1024 (1 vCPU) - Available memory values: 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB)
2048 (2 vCPU) - Available memory values: Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB)
4096 (4 vCPU) - Available memory values: Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB)
8192 (8 vCPU) - Available memory values: Between 16 GB and 60 GB in 4 GB increments
This option requires Linux platform 1.4.0 or\n later.
16384 (16 vCPU) - Available memory values: Between 32 GB and 120 GB in 8 GB increments
This option requires Linux platform 1.4.0 or\n later.
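Putting one valid CPU and memory pairing from the list above into a Fargate task definition might look like this sketch; the family and image are placeholders.

```javascript
import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
// 1024 CPU units (1 vCPU) pairs with task memory between 2048 and 8192 MiB.
const response = await client.send(
  new RegisterTaskDefinitionCommand({
    family: "fargate-app", // placeholder
    requiresCompatibilities: ["FARGATE"],
    networkMode: "awsvpc", // required on Fargate
    cpu: "1024",
    memory: "2048",
    containerDefinitions: [
      { name: "app", image: "app:latest", essential: true }, // placeholders
    ],
  })
);
```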
The process namespace to use for the containers in the task. The valid\n values are host or task. On Fargate for\n Linux containers, the only valid value is task. For\n example, monitoring sidecars might need pidMode to access\n information about other containers running in the same task.
If host is specified, all containers within the tasks\n that specified the host PID mode on the same container\n instance share the same process namespace with the host Amazon EC2\n instance.
If task is specified, all containers within the specified\n task share the same process namespace.
If no value is specified, the\n default is a private namespace for each container. For more information,\n see PID settings in the Docker run\n reference.
\nIf the host PID mode is used, there's a heightened risk\n of undesired process namespace exposure. For more information, see\n Docker security.
This parameter is not supported for Windows containers.
\nThis parameter is only supported for tasks that are hosted on\n Fargate if the tasks are using platform version 1.4.0 or later\n (Linux). This isn't supported for Windows containers on\n Fargate.
The process namespace to use for the containers in the task. The valid\n values are host or task. On Fargate for\n Linux containers, the only valid value is task. For\n example, monitoring sidecars might need pidMode to access\n information about other containers running in the same task.
If host is specified, all containers within the tasks\n that specified the host PID mode on the same container\n instance share the same process namespace with the host Amazon EC2\n instance.
If task is specified, all containers within the specified\n task share the same process namespace.
If no value is specified, the\n default is a private namespace for each container.
\nIf the host PID mode is used, there's a heightened risk\n of undesired process namespace exposure.
This parameter is not supported for Windows containers.
\nThis parameter is only supported for tasks that are hosted on\n Fargate if the tasks are using platform version 1.4.0 or later\n (Linux). This isn't supported for Windows containers on\n Fargate.
The IPC resource namespace to use for the containers in the task. The valid values are\n host, task, or none. If host is\n specified, then all containers within the tasks that specified the host IPC\n mode on the same container instance share the same IPC resources with the host Amazon EC2\n instance. If task is specified, all containers within the specified task\n share the same IPC resources. If none is specified, then IPC resources\n within the containers of a task are private and not shared with other containers in a\n task or on the container instance. If no value is specified, then the IPC resource\n namespace sharing depends on the Docker daemon setting on the container instance. For\n more information, see IPC\n settings in the Docker run reference.
If the host IPC mode is used, be aware that there is a heightened risk of\n undesired IPC namespace exposure. For more information, see Docker\n security.
If you are setting namespaced kernel parameters using systemControls for\n the containers in the task, the following will apply to your IPC resource namespace. For\n more information, see System\n Controls in the Amazon Elastic Container Service Developer Guide.
For tasks that use the host IPC mode, IPC namespace related\n systemControls are not supported.
For tasks that use the task IPC mode, IPC namespace related\n systemControls will apply to all containers within a\n task.
This parameter is not supported for Windows containers or tasks run on Fargate.
\nThe IPC resource namespace to use for the containers in the task. The valid values are\n host, task, or none. If host is\n specified, then all containers within the tasks that specified the host IPC\n mode on the same container instance share the same IPC resources with the host Amazon EC2\n instance. If task is specified, all containers within the specified task\n share the same IPC resources. If none is specified, then IPC resources\n within the containers of a task are private and not shared with other containers in a\n task or on the container instance. If no value is specified, then the IPC resource\n namespace sharing depends on the Docker daemon setting on the container instance.
If the host IPC mode is used, be aware that there is a heightened risk of\n undesired IPC namespace exposure.
If you are setting namespaced kernel parameters using systemControls for\n the containers in the task, the following will apply to your IPC resource namespace. For\n more information, see System\n Controls in the Amazon Elastic Container Service Developer Guide.
For tasks that use the host IPC mode, IPC namespace related\n systemControls are not supported.
For tasks that use the task IPC mode, IPC namespace related\n systemControls will apply to all containers within a\n task.
This parameter is not supported for Windows containers or tasks run on Fargate.
\nThe total amount, in GiB, of the ephemeral storage to set for the task. The minimum \t\t\n\t\t\tsupported value is 20 GiB and the maximum supported value is\u2028 200 \n\t\t\tGiB.
The total amount, in GiB, of the ephemeral storage to set for the task. The minimum\n\t\t\tsupported value is 20 GiB and the maximum supported value is\u2028\n\t\t\t\t200 GiB.
Specify a Key Management Service key ID to encrypt the ephemeral storage for the task.
" + "smithy.api#documentation": "Specify an Key Management Service key ID to encrypt the ephemeral storage for the\n\t\t\ttask.
" } } }, @@ -12285,7 +12324,7 @@ } }, "traits": { - "smithy.api#documentation": "The ulimit settings to pass to the container.
Amazon ECS tasks hosted on Fargate use the default\n\t\t\t\t\t\t\tresource limit values set by the operating system with the exception of\n\t\t\t\t\t\t\tthe nofile resource limit parameter which Fargate\n\t\t\t\t\t\t\toverrides. The nofile resource limit sets a restriction on\n\t\t\t\t\t\t\tthe number of open files that a container can use. The default\n\t\t\t\t\t\t\t\tnofile soft limit is 1024 and the default hard limit\n\t\t\t\t\t\t\tis 65535.
You can specify the ulimit settings for a container in a task\n\t\t\tdefinition.
The ulimit settings to pass to the container.
Amazon ECS tasks hosted on Fargate use the default\n\t\t\t\t\t\t\tresource limit values set by the operating system with the exception of\n\t\t\t\t\t\t\tthe nofile resource limit parameter which Fargate\n\t\t\t\t\t\t\toverrides. The nofile resource limit sets a restriction on\n\t\t\t\t\t\t\tthe number of open files that a container can use. The default\n\t\t\t\t\t\t\t\tnofile soft limit is 65535 and the default hard limit\n\t\t\t\t\t\t\tis 65535.
You can specify the ulimit settings for a container in a task\n\t\t\tdefinition.
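A sketch of a container-level ulimit override matching the Fargate nofile defaults quoted above; the family and image are placeholders.

```javascript
import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
const response = await client.send(
  new RegisterTaskDefinitionCommand({
    family: "many-sockets", // placeholder
    containerDefinitions: [
      {
        name: "app",
        image: "app:latest", // placeholder
        memory: 512,
        essential: true,
        // Match the Fargate nofile defaults explicitly.
        ulimits: [{ name: "nofile", softLimit: 65535, hardLimit: 65535 }],
      },
    ],
  })
);
```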
Modifies the status of an Amazon ECS container instance.
\nOnce a container instance has reached an ACTIVE state, you can change the\n\t\t\tstatus of a container instance to DRAINING to manually remove an instance\n\t\t\tfrom a cluster, for example to perform system updates, update the Docker daemon, or\n\t\t\tscale down the cluster size.
A container instance can't be changed to DRAINING until it has\n\t\t\t\treached an ACTIVE status. If the instance is in any other status, an\n\t\t\t\terror will be received.
When you set a container instance to DRAINING, Amazon ECS prevents new tasks\n\t\t\tfrom being scheduled for placement on the container instance and replacement service\n\t\t\ttasks are started on other container instances in the cluster if the resources are\n\t\t\tavailable. Service tasks on the container instance that are in the PENDING\n\t\t\tstate are stopped immediately.
Service tasks on the container instance that are in the RUNNING state are\n\t\t\tstopped and replaced according to the service's deployment configuration parameters,\n\t\t\t\tminimumHealthyPercent and maximumPercent. You can change\n\t\t\tthe deployment configuration of your service using UpdateService.
If minimumHealthyPercent is below 100%, the scheduler can ignore\n\t\t\t\t\t\tdesiredCount temporarily during task replacement. For example,\n\t\t\t\t\t\tdesiredCount is four tasks, a minimum of 50% allows the\n\t\t\t\t\tscheduler to stop two existing tasks before starting two new tasks. If the\n\t\t\t\t\tminimum is 100%, the service scheduler can't remove existing tasks until the\n\t\t\t\t\treplacement tasks are considered healthy. Tasks for services that do not use a\n\t\t\t\t\tload balancer are considered healthy if they're in the RUNNING\n\t\t\t\t\tstate. Tasks for services that use a load balancer are considered healthy if\n\t\t\t\t\tthey're in the RUNNING state and are reported as healthy by the\n\t\t\t\t\tload balancer.
The maximumPercent parameter represents an upper limit on the\n\t\t\t\t\tnumber of running tasks during task replacement. You can use this to define the\n\t\t\t\t\treplacement batch size. For example, if desiredCount is four tasks,\n\t\t\t\t\ta maximum of 200% starts four new tasks before stopping the four tasks to be\n\t\t\t\t\tdrained, provided that the cluster resources required to do this are available.\n\t\t\t\t\tIf the maximum is 100%, then replacement tasks can't start until the draining\n\t\t\t\t\ttasks have stopped.
Any PENDING or RUNNING tasks that do not belong to a service\n\t\t\taren't affected. You must wait for them to finish or stop them manually.
A container instance has completed draining when it has no more RUNNING\n\t\t\ttasks. You can verify this using ListTasks.
When a container instance has been drained, you can set a container instance to\n\t\t\t\tACTIVE status and once it has reached that status the Amazon ECS scheduler\n\t\t\tcan begin scheduling tasks on the instance again.
Modifies the status of an Amazon ECS container instance.
\nOnce a container instance has reached an ACTIVE state, you can change the\n\t\t\tstatus of a container instance to DRAINING to manually remove an instance\n\t\t\tfrom a cluster, for example to perform system updates, update the Docker daemon, or\n\t\t\tscale down the cluster size.
A container instance can't be changed to DRAINING until it has\n\t\t\t\treached an ACTIVE status. If the instance is in any other status, an\n\t\t\t\terror will be received.
When you set a container instance to DRAINING, Amazon ECS prevents new tasks\n\t\t\tfrom being scheduled for placement on the container instance and replacement service\n\t\t\ttasks are started on other container instances in the cluster if the resources are\n\t\t\tavailable. Service tasks on the container instance that are in the PENDING\n\t\t\tstate are stopped immediately.
Service tasks on the container instance that are in the RUNNING state are\n\t\t\tstopped and replaced according to the service's deployment configuration parameters,\n\t\t\t\tminimumHealthyPercent and maximumPercent. You can change\n\t\t\tthe deployment configuration of your service using UpdateService.
If minimumHealthyPercent is below 100%, the scheduler can ignore\n\t\t\t\t\t\tdesiredCount temporarily during task replacement. For example,\n\t\t\t\t\t\tdesiredCount is four tasks, a minimum of 50% allows the\n\t\t\t\t\tscheduler to stop two existing tasks before starting two new tasks. If the\n\t\t\t\t\tminimum is 100%, the service scheduler can't remove existing tasks until the\n\t\t\t\t\treplacement tasks are considered healthy. Tasks for services that do not use a\n\t\t\t\t\tload balancer are considered healthy if they're in the RUNNING\n\t\t\t\t\tstate. Tasks for services that use a load balancer are considered healthy if\n\t\t\t\t\tthey're in the RUNNING state and are reported as healthy by the\n\t\t\t\t\tload balancer.
The maximumPercent parameter represents an upper limit on the\n\t\t\t\t\tnumber of running tasks during task replacement. You can use this to define the\n\t\t\t\t\treplacement batch size. For example, if desiredCount is four tasks,\n\t\t\t\t\ta maximum of 200% starts four new tasks before stopping the four tasks to be\n\t\t\t\t\tdrained, provided that the cluster resources required to do this are available.\n\t\t\t\t\tIf the maximum is 100%, then replacement tasks can't start until the draining\n\t\t\t\t\ttasks have stopped.
Any PENDING or RUNNING tasks that do not belong to a service\n\t\t\taren't affected. You must wait for them to finish or stop them manually.
A container instance has completed draining when it has no more RUNNING\n\t\t\ttasks. You can verify this using ListTasks.
When a container instance has been drained, you can set a container instance to\n\t\t\t\tACTIVE status and once it has reached that status the Amazon ECS scheduler\n\t\t\tcan begin scheduling tasks on the instance again.
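The drain-then-reactivate flow described above reduces to a single call per state change, roughly as sketched here; the cluster and container instance ARN are placeholders.

```javascript
import { ECSClient, UpdateContainerInstancesStateCommand } from "@aws-sdk/client-ecs";

const client = new ECSClient({});
// Drain an instance before system maintenance; the instance ARN is a placeholder.
const response = await client.send(
  new UpdateContainerInstancesStateCommand({
    cluster: "default",
    containerInstances: [
      "arn:aws:ecs:us-east-1:111122223333:container-instance/default/0e1d2c3b4a5f",
    ],
    status: "DRAINING",
  })
);
// Send the same command with status: "ACTIVE" afterwards to resume task placement.
```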
You get all of this information from the OIDC IdP you want to use to access * Amazon Web Services.
*Amazon Web Services secures communication with some OIDC identity providers (IdPs) through our library - * of trusted root certificate authorities (CAs) instead of using a certificate thumbprint to - * verify your IdP server certificate. In these cases, your legacy thumbprint remains in your - * configuration, but is no longer used for validation. These OIDC IdPs include Auth0, GitHub, - * GitLab, Google, and those that use an Amazon S3 bucket to host a JSON Web Key Set (JWKS) - * endpoint.
+ *Amazon Web Services secures communication with OIDC identity providers (IdPs) using our library of + * trusted root certificate authorities (CAs) to verify the JSON Web Key Set (JWKS) + * endpoint's TLS certificate. If your OIDC IdP relies on a certificate that is not signed + * by one of these trusted CAs, only then do we secure communication using the thumbprints set + * in the IdP's configuration.
*The trust for the OIDC provider is derived from the IAM provider that this @@ -130,7 +129,8 @@ export interface CreateOpenIDConnectProviderCommandOutput * Amazon Web Services account limits. The error message describes the limit exceeded.
* * @throws {@link OpenIdIdpCommunicationErrorException} (client fault) - *The request failed because IAM cannot connect to the OpenID Connect identity provider URL.
+ *The request failed because IAM cannot connect to the OpenID Connect identity provider + * URL.
* * @throws {@link ServiceFailureException} (server fault) *The request processing has failed because of an unknown error, exception or diff --git a/clients/client-iam/src/commands/GetAccessKeyLastUsedCommand.ts b/clients/client-iam/src/commands/GetAccessKeyLastUsedCommand.ts index 2345381e7b719..f2cc205e3360f 100644 --- a/clients/client-iam/src/commands/GetAccessKeyLastUsedCommand.ts +++ b/clients/client-iam/src/commands/GetAccessKeyLastUsedCommand.ts @@ -45,7 +45,7 @@ export interface GetAccessKeyLastUsedCommandOutput extends GetAccessKeyLastUsedR * // { // GetAccessKeyLastUsedResponse * // UserName: "STRING_VALUE", * // AccessKeyLastUsed: { // AccessKeyLastUsed - * // LastUsedDate: new Date("TIMESTAMP"), // required + * // LastUsedDate: new Date("TIMESTAMP"), * // ServiceName: "STRING_VALUE", // required * // Region: "STRING_VALUE", // required * // }, diff --git a/clients/client-iam/src/commands/ListAccountAliasesCommand.ts b/clients/client-iam/src/commands/ListAccountAliasesCommand.ts index ce217ad0570dc..128bef0eb5b96 100644 --- a/clients/client-iam/src/commands/ListAccountAliasesCommand.ts +++ b/clients/client-iam/src/commands/ListAccountAliasesCommand.ts @@ -29,9 +29,9 @@ export interface ListAccountAliasesCommandOutput extends ListAccountAliasesRespo /** *
Lists the account alias associated with the Amazon Web Services account (Note: you can have only - * one). For information about using an Amazon Web Services account alias, see Creating, - * deleting, and listing an Amazon Web Services account alias in the Amazon Web Services Sign-In - * User Guide.
+ * one). For information about using an Amazon Web Services account alias, see Creating, + * deleting, and listing an Amazon Web Services account alias in the + * IAM User Guide. * @example * Use a bare-bones client and the command you need to make an API call. * ```javascript diff --git a/clients/client-iam/src/commands/UpdateOpenIDConnectProviderThumbprintCommand.ts b/clients/client-iam/src/commands/UpdateOpenIDConnectProviderThumbprintCommand.ts index b5daab0fd8e5e..46756beaa6dc4 100644 --- a/clients/client-iam/src/commands/UpdateOpenIDConnectProviderThumbprintCommand.ts +++ b/clients/client-iam/src/commands/UpdateOpenIDConnectProviderThumbprintCommand.ts @@ -42,12 +42,11 @@ export interface UpdateOpenIDConnectProviderThumbprintCommandOutput extends __Me * the OIDC provider as a principal fails until the certificate thumbprint is * updated. *Amazon Web Services secures communication with some OIDC identity providers (IdPs) through our library - * of trusted root certificate authorities (CAs) instead of using a certificate thumbprint to - * verify your IdP server certificate. In these cases, your legacy thumbprint remains in your - * configuration, but is no longer used for validation. These OIDC IdPs include Auth0, GitHub, - * GitLab, Google, and those that use an Amazon S3 bucket to host a JSON Web Key Set (JWKS) - * endpoint.
+ *Amazon Web Services secures communication with OIDC identity providers (IdPs) using our library of + * trusted root certificate authorities (CAs) to verify the JSON Web Key Set (JWKS) + * endpoint's TLS certificate. If your OIDC IdP relies on a certificate that is not signed + * by one of these trusted CAs, only then we secure communication using the thumbprints set + * in the IdP's configuration.
*Trust for the OIDC provider is derived from the provider certificate and is diff --git a/clients/client-iam/src/models/models_0.ts b/clients/client-iam/src/models/models_0.ts index d23544cc72b6e..66600fed67733 100644 --- a/clients/client-iam/src/models/models_0.ts +++ b/clients/client-iam/src/models/models_0.ts @@ -163,7 +163,7 @@ export interface AccessKeyLastUsed { *
The name of the Amazon Web Services service with which this access key was most recently used. The @@ -1275,7 +1275,8 @@ export interface CreateOpenIDConnectProviderResponse { } /** - *
The request failed because IAM cannot connect to the OpenID Connect identity provider URL.
+ *The request failed because IAM cannot connect to the OpenID Connect identity provider + * URL.
* @public */ export class OpenIdIdpCommunicationErrorException extends __BaseException { diff --git a/codegen/sdk-codegen/aws-models/iam.json b/codegen/sdk-codegen/aws-models/iam.json index e4a96f301e40d..a868a045d15fa 100644 --- a/codegen/sdk-codegen/aws-models/iam.json +++ b/codegen/sdk-codegen/aws-models/iam.json @@ -1995,8 +1995,7 @@ "LastUsedDate": { "target": "com.amazonaws.iam#dateType", "traits": { - "smithy.api#documentation": "The date and time, in ISO 8601 date-time\n format, when the access key was most recently used. This field is null in the\n following situations:
\nThe user does not have an access key.
\nAn access key exists but has not been used since IAM began tracking this\n information.
\nThere is no sign-in data associated with the user.
\nThe date and time, in ISO 8601 date-time\n format, when the access key was most recently used. This field is null in the\n following situations:
\nThe user does not have an access key.
\nAn access key exists but has not been used since IAM began tracking this\n information.
\nThere is no sign-in data associated with the user.
\nCreates an IAM entity to describe an identity provider (IdP) that supports OpenID Connect (OIDC).
\nThe OIDC provider that you create with this operation can be used as a principal in a\n role's trust policy. Such a policy establishes a trust relationship between Amazon Web Services and\n the OIDC provider.
\nIf you are using an OIDC identity provider from Google, Facebook, or Amazon Cognito, you don't\n need to create a separate IAM identity provider. These OIDC identity providers are\n already built-in to Amazon Web Services and are available for your use. Instead, you can move directly\n to creating new roles using your identity provider. To learn more, see Creating\n a role for web identity or OpenID connect federation in the IAM\n User Guide.
\nWhen you create the IAM OIDC provider, you specify the following:
\nThe URL of the OIDC identity provider (IdP) to trust
\nA list of client IDs (also known as audiences) that identify the application\n or applications allowed to authenticate using the OIDC provider
\nA list of tags that are attached to the specified IAM OIDC provider
\nA list of thumbprints of one or more server certificates that the IdP\n uses
\nYou get all of this information from the OIDC IdP you want to use to access\n Amazon Web Services.
\nAmazon Web Services secures communication with some OIDC identity providers (IdPs) through our library\n of trusted root certificate authorities (CAs) instead of using a certificate thumbprint to\n verify your IdP server certificate. In these cases, your legacy thumbprint remains in your\n configuration, but is no longer used for validation. These OIDC IdPs include Auth0, GitHub,\n GitLab, Google, and those that use an Amazon S3 bucket to host a JSON Web Key Set (JWKS)\n endpoint.
\nThe trust for the OIDC provider is derived from the IAM provider that this\n operation creates. Therefore, it is best to limit access to the CreateOpenIDConnectProvider operation to highly privileged\n users.
\nCreates an IAM entity to describe an identity provider (IdP) that supports OpenID Connect (OIDC).
\nThe OIDC provider that you create with this operation can be used as a principal in a\n role's trust policy. Such a policy establishes a trust relationship between Amazon Web Services and\n the OIDC provider.
\nIf you are using an OIDC identity provider from Google, Facebook, or Amazon Cognito, you don't\n need to create a separate IAM identity provider. These OIDC identity providers are\n already built-in to Amazon Web Services and are available for your use. Instead, you can move directly\n to creating new roles using your identity provider. To learn more, see Creating\n a role for web identity or OpenID connect federation in the IAM\n User Guide.
\nWhen you create the IAM OIDC provider, you specify the following:
\nThe URL of the OIDC identity provider (IdP) to trust
\nA list of client IDs (also known as audiences) that identify the application\n or applications allowed to authenticate using the OIDC provider
\nA list of tags that are attached to the specified IAM OIDC provider
\nA list of thumbprints of one or more server certificates that the IdP\n uses
\nYou get all of this information from the OIDC IdP you want to use to access\n Amazon Web Services.
\nAmazon Web Services secures communication with OIDC identity providers (IdPs) using our library of\n trusted root certificate authorities (CAs) to verify the JSON Web Key Set (JWKS)\n endpoint's TLS certificate. If your OIDC IdP relies on a certificate that is not signed\n by one of these trusted CAs, only then we secure communication using the thumbprints set\n in the IdP's configuration.
\nThe trust for the OIDC provider is derived from the IAM provider that this\n operation creates. Therefore, it is best to limit access to the CreateOpenIDConnectProvider operation to highly privileged\n users.
\nLists the account alias associated with the Amazon Web Services account (Note: you can have only\n one). For information about using an Amazon Web Services account alias, see Creating,\n deleting, and listing an Amazon Web Services account alias in the Amazon Web Services Sign-In\n User Guide.
", + "smithy.api#documentation": "Lists the account alias associated with the Amazon Web Services account (Note: you can have only\n one). For information about using an Amazon Web Services account alias, see Creating,\n deleting, and listing an Amazon Web Services account alias in the\n IAM User Guide.
", "smithy.api#examples": [ { "title": "To list account aliases", @@ -11310,7 +11309,7 @@ "code": "OpenIdIdpCommunicationError", "httpResponseCode": 400 }, - "smithy.api#documentation": "The request failed because IAM cannot connect to the OpenID Connect identity provider URL.
", + "smithy.api#documentation": "The request failed because IAM cannot connect to the OpenID Connect identity provider\n URL.
", "smithy.api#error": "client", "smithy.api#httpError": 400 } @@ -14924,7 +14923,7 @@ } ], "traits": { - "smithy.api#documentation": "Replaces the existing list of server certificate thumbprints associated with an OpenID\n Connect (OIDC) provider resource object with a new list of thumbprints.
\nThe list that you pass with this operation completely replaces the existing list of\n thumbprints. (The lists are not merged.)
\nTypically, you need to update a thumbprint only when the identity provider certificate\n changes, which occurs rarely. However, if the provider's certificate\n does change, any attempt to assume an IAM role that specifies\n the OIDC provider as a principal fails until the certificate thumbprint is\n updated.
\nAmazon Web Services secures communication with some OIDC identity providers (IdPs) through our library\n of trusted root certificate authorities (CAs) instead of using a certificate thumbprint to\n verify your IdP server certificate. In these cases, your legacy thumbprint remains in your\n configuration, but is no longer used for validation. These OIDC IdPs include Auth0, GitHub,\n GitLab, Google, and those that use an Amazon S3 bucket to host a JSON Web Key Set (JWKS)\n endpoint.
\nTrust for the OIDC provider is derived from the provider certificate and is\n validated by the thumbprint. Therefore, it is best to limit access to the\n UpdateOpenIDConnectProviderThumbprint operation to highly\n privileged users.
Replaces the existing list of server certificate thumbprints associated with an OpenID\n Connect (OIDC) provider resource object with a new list of thumbprints.
\nThe list that you pass with this operation completely replaces the existing list of\n thumbprints. (The lists are not merged.)
\nTypically, you need to update a thumbprint only when the identity provider certificate\n changes, which occurs rarely. However, if the provider's certificate\n does change, any attempt to assume an IAM role that specifies\n the OIDC provider as a principal fails until the certificate thumbprint is\n updated.
\nAmazon Web Services secures communication with OIDC identity providers (IdPs) using our library of\n trusted root certificate authorities (CAs) to verify the JSON Web Key Set (JWKS)\n endpoint's TLS certificate. If your OIDC IdP relies on a certificate that is not signed\n by one of these trusted CAs, only then we secure communication using the thumbprints set\n in the IdP's configuration.
\nTrust for the OIDC provider is derived from the provider certificate and is\n validated by the thumbprint. Therefore, it is best to limit access to the\n UpdateOpenIDConnectProviderThumbprint operation to highly\n privileged users.
Promotes the specified secondary DB cluster to be the primary DB cluster in the global cluster when a failover of the global cluster occurs.
+ *Use this operation to respond to an unplanned event, such as a regional disaster in the primary region. + * Failing over can result in a loss of write transaction data that wasn't replicated to the chosen secondary before the failover event occurred. + * However, the recovery process that promotes a DB instance on the chosen secondary DB cluster to be the primary writer DB instance guarantees that the data is in a transactionally consistent state.
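+ *
+ * A minimal usage sketch (not part of the generated reference; identifiers are hypothetical placeholders and the client is assumed to be configured as in the example below): set Switchover for a planned, lossless promotion, or AllowDataLoss for an unplanned failover that tolerates losing un-replicated writes. The two parameters can't be combined.
+ * ```javascript
+ * // Planned switchover (no data loss; identifiers are placeholders).
+ * await client.send(new FailoverGlobalClusterCommand({
+ *   GlobalClusterIdentifier: "my-global-cluster",
+ *   TargetDbClusterIdentifier: "arn:aws:rds:us-west-2:123456789012:cluster:my-secondary",
+ *   Switchover: true,
+ * }));
+ * // Unplanned failover: promote immediately, accepting possible loss of
+ * // writes that haven't replicated to the chosen secondary yet.
+ * await client.send(new FailoverGlobalClusterCommand({
+ *   GlobalClusterIdentifier: "my-global-cluster",
+ *   TargetDbClusterIdentifier: "arn:aws:rds:us-west-2:123456789012:cluster:my-secondary",
+ *   AllowDataLoss: true,
+ * }));
+ * ```
+ *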
+ * @example + * Use a bare-bones client and the command you need to make an API call. + * ```javascript + * import { DocDBClient, FailoverGlobalClusterCommand } from "@aws-sdk/client-docdb"; // ES Modules import + * // const { DocDBClient, FailoverGlobalClusterCommand } = require("@aws-sdk/client-docdb"); // CommonJS import + * const client = new DocDBClient(config); + * const input = { // FailoverGlobalClusterMessage + * GlobalClusterIdentifier: "STRING_VALUE", // required + * TargetDbClusterIdentifier: "STRING_VALUE", // required + * AllowDataLoss: true || false, + * Switchover: true || false, + * }; + * const command = new FailoverGlobalClusterCommand(input); + * const response = await client.send(command); + * // { // FailoverGlobalClusterResult + * // GlobalCluster: { // GlobalCluster + * // GlobalClusterIdentifier: "STRING_VALUE", + * // GlobalClusterResourceId: "STRING_VALUE", + * // GlobalClusterArn: "STRING_VALUE", + * // Status: "STRING_VALUE", + * // Engine: "STRING_VALUE", + * // EngineVersion: "STRING_VALUE", + * // DatabaseName: "STRING_VALUE", + * // StorageEncrypted: true || false, + * // DeletionProtection: true || false, + * // GlobalClusterMembers: [ // GlobalClusterMemberList + * // { // GlobalClusterMember + * // DBClusterArn: "STRING_VALUE", + * // Readers: [ // ReadersArnList + * // "STRING_VALUE", + * // ], + * // IsWriter: true || false, + * // }, + * // ], + * // }, + * // }; + * + * ``` + * + * @param FailoverGlobalClusterCommandInput - {@link FailoverGlobalClusterCommandInput} + * @returns {@link FailoverGlobalClusterCommandOutput} + * @see {@link FailoverGlobalClusterCommandInput} for command's `input` shape. + * @see {@link FailoverGlobalClusterCommandOutput} for command's `response` shape. + * @see {@link DocDBClientResolvedConfig | config} for DocDBClient's `config` shape. + * + * @throws {@link DBClusterNotFoundFault} (client fault) + *
+ * DBClusterIdentifier doesn't refer to an existing cluster.
The GlobalClusterIdentifier doesn't refer to an existing global cluster.
The cluster isn't in a valid state.
+ * + * @throws {@link InvalidGlobalClusterStateFault} (client fault) + *The requested operation can't be performed while the cluster is in this state.
+ * + * @throws {@link DocDBServiceException} + *Base exception class for all service exceptions from DocDB service.
+ * + * @public + */ +export class FailoverGlobalClusterCommand extends $Command + .classBuilder< + FailoverGlobalClusterCommandInput, + FailoverGlobalClusterCommandOutput, + DocDBClientResolvedConfig, + ServiceInputTypes, + ServiceOutputTypes + >() + .ep({ + ...commonParams, + }) + .m(function (this: any, Command: any, cs: any, config: DocDBClientResolvedConfig, o: any) { + return [ + getSerdePlugin(config, this.serialize, this.deserialize), + getEndpointPlugin(config, Command.getEndpointParameterInstructions()), + ]; + }) + .s("AmazonRDSv19", "FailoverGlobalCluster", {}) + .n("DocDBClient", "FailoverGlobalClusterCommand") + .f(void 0, void 0) + .ser(se_FailoverGlobalClusterCommand) + .de(de_FailoverGlobalClusterCommand) + .build() {} diff --git a/clients/client-docdb/src/commands/index.ts b/clients/client-docdb/src/commands/index.ts index 2ca263b6e8e5b..cc18a32e36b6a 100644 --- a/clients/client-docdb/src/commands/index.ts +++ b/clients/client-docdb/src/commands/index.ts @@ -35,6 +35,7 @@ export * from "./DescribeGlobalClustersCommand"; export * from "./DescribeOrderableDBInstanceOptionsCommand"; export * from "./DescribePendingMaintenanceActionsCommand"; export * from "./FailoverDBClusterCommand"; +export * from "./FailoverGlobalClusterCommand"; export * from "./ListTagsForResourceCommand"; export * from "./ModifyDBClusterCommand"; export * from "./ModifyDBClusterParameterGroupCommand"; diff --git a/clients/client-docdb/src/models/models_0.ts b/clients/client-docdb/src/models/models_0.ts index 4b55f9cb94d9d..963ffcc710bf0 100644 --- a/clients/client-docdb/src/models/models_0.ts +++ b/clients/client-docdb/src/models/models_0.ts @@ -5041,6 +5041,84 @@ export interface FailoverDBClusterResult { DBCluster?: DBCluster; } +/** + * @public + */ +export interface FailoverGlobalClusterMessage { + /** + *The identifier of the Amazon DocumentDB global cluster to apply this operation. + * The identifier is the unique key assigned by the user when the cluster is created. + * In other words, it's the name of the global cluster.
+ *Constraints:
+ *Must match the identifier of an existing global cluster.
+ *Minimum length of 1. Maximum length of 255.
+ *Pattern: [A-Za-z][0-9A-Za-z-:._]*
+ *
The identifier of the secondary Amazon DocumentDB cluster that you want to promote to the primary for the global cluster. + * Use the Amazon Resource Name (ARN) for the identifier so that Amazon DocumentDB can locate the cluster in its Amazon Web Services region.
+ *Constraints:
+ *Must match the identifier of an existing secondary cluster.
+ *Minimum length of 1. Maximum length of 255.
+ *Pattern: [A-Za-z][0-9A-Za-z-:._]*
+ *
Specifies whether to allow data loss for this global cluster operation. Allowing data loss triggers a global failover operation.
+ *If you don't specify AllowDataLoss, the global cluster operation defaults to a switchover.
Constraints:
+ *Can't be specified together with the Switchover parameter.
Specifies whether to switch over this global database cluster.
+ *Constraints:
+ *Can't be specified together with the AllowDataLoss parameter.
A data type representing an Amazon DocumentDB global cluster.
+ * @public + */ + GlobalCluster?: GlobalCluster; +} + /** *Represents the input to ListTagsForResource.
* @public diff --git a/clients/client-docdb/src/protocols/Aws_query.ts b/clients/client-docdb/src/protocols/Aws_query.ts index 0df969084a102..d7960abfa9e06 100644 --- a/clients/client-docdb/src/protocols/Aws_query.ts +++ b/clients/client-docdb/src/protocols/Aws_query.ts @@ -141,6 +141,10 @@ import { DescribePendingMaintenanceActionsCommandOutput, } from "../commands/DescribePendingMaintenanceActionsCommand"; import { FailoverDBClusterCommandInput, FailoverDBClusterCommandOutput } from "../commands/FailoverDBClusterCommand"; +import { + FailoverGlobalClusterCommandInput, + FailoverGlobalClusterCommandOutput, +} from "../commands/FailoverGlobalClusterCommand"; import { ListTagsForResourceCommandInput, ListTagsForResourceCommandOutput, @@ -310,6 +314,8 @@ import { EventSubscriptionsMessage, FailoverDBClusterMessage, FailoverDBClusterResult, + FailoverGlobalClusterMessage, + FailoverGlobalClusterResult, Filter, GlobalCluster, GlobalClusterAlreadyExistsFault, @@ -1007,6 +1013,23 @@ export const se_FailoverDBClusterCommand = async ( return buildHttpRpcRequest(context, headers, "/", undefined, body); }; +/** + * serializeAws_queryFailoverGlobalClusterCommand + */ +export const se_FailoverGlobalClusterCommand = async ( + input: FailoverGlobalClusterCommandInput, + context: __SerdeContext +): Promise<__HttpRequest> => { + const headers: __HeaderBag = SHARED_HEADERS; + let body: any; + body = buildFormUrlencodedString({ + ...se_FailoverGlobalClusterMessage(input, context), + [_A]: _FGC, + [_V]: _, + }); + return buildHttpRpcRequest(context, headers, "/", undefined, body); +}; + /** * serializeAws_queryListTagsForResourceCommand */ @@ -2027,6 +2050,26 @@ export const de_FailoverDBClusterCommand = async ( return response; }; +/** + * deserializeAws_queryFailoverGlobalClusterCommand + */ +export const de_FailoverGlobalClusterCommand = async ( + output: __HttpResponse, + context: __SerdeContext +): PromisePromotes the specified secondary DB cluster to be the primary DB cluster in the global cluster when failing over a global cluster occurs.
\nUse this operation to respond to an unplanned event, such as a regional disaster in the primary region. \n Failing over can result in a loss of write transaction data that wasn't replicated to the chosen secondary before the failover event occurred. \n However, the recovery process that promotes a DB instance on the chosen secondary DB cluster to be the primary writer DB instance guarantees that the data is in a transactionally consistent state.
" + } + }, + "com.amazonaws.docdb#FailoverGlobalClusterMessage": { + "type": "structure", + "members": { + "GlobalClusterIdentifier": { + "target": "com.amazonaws.docdb#GlobalClusterIdentifier", + "traits": { + "smithy.api#clientOptional": {}, + "smithy.api#documentation": "The identifier of the Amazon DocumentDB global cluster to apply this operation. \n The identifier is the unique key assigned by the user when the cluster is created. \n In other words, it's the name of the global cluster.
\nConstraints:
\nMust match the identifier of an existing global cluster.
\nMinimum length of 1. Maximum length of 255.
\nPattern: [A-Za-z][0-9A-Za-z-:._]*\n
The identifier of the secondary Amazon DocumentDB cluster that you want to promote to the primary for the global cluster. \n Use the Amazon Resource Name (ARN) for the identifier so that Amazon DocumentDB can locate the cluster in its Amazon Web Services region.
\nConstraints:
\nMust match the identifier of an existing secondary cluster.
\nMinimum length of 1. Maximum length of 255.
\nPattern: [A-Za-z][0-9A-Za-z-:._]*\n
Specifies whether to allow data loss for this global cluster operation. Allowing data loss triggers a global failover operation.
\nIf you don't specify AllowDataLoss, the global cluster operation defaults to a switchover.
Constraints:
\nCan't be specified together with the Switchover parameter.
Specifies whether to switch over this global database cluster.
\nConstraints:
\nCan't be specified together with the AllowDataLoss parameter.
The identifier of the secondary Amazon DocumentDB cluster to promote to the new primary for the global database cluster. \n Use the Amazon Resource Name (ARN) for the identifier so that Amazon DocumentDB can locate the cluster in its Amazon Web Services region.
\nConstraints:
\nMust match the identifier of an existing secondary cluster.
\nMinimum length of 1. Maximum length of 255.
\nPattern: [A-Za-z][0-9A-Za-z-:._]*\n
- * Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name
- * . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the
+ *
+ * Directory buckets -
+ * If multipart uploads in a directory bucket are in progress, you can't delete the bucket until all the in-progress multipart uploads are aborted or completed.
+ * To delete these in-progress multipart uploads, use the
+ * ListMultipartUploads operation to list the in-progress multipart
+ * uploads in the bucket and use the AbortMultipartUpload operation to
+ * abort all the in-progress multipart uploads.
+ *
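+ * A hedged sketch of the abort-then-delete flow described above, assuming the
+ * @aws-sdk/client-s3 commands named in this note and a placeholder bucket name
+ * (pagination of truncated listings is omitted for brevity):
+ * ```javascript
+ * import {
+ *   S3Client,
+ *   ListMultipartUploadsCommand,
+ *   AbortMultipartUploadCommand,
+ *   DeleteBucketCommand,
+ * } from "@aws-sdk/client-s3";
+ *
+ * const client = new S3Client({});
+ * const Bucket = "my-directory-bucket--usw2-az1--x-s3"; // placeholder
+ * // List the in-progress multipart uploads, abort each one, then delete.
+ * const { Uploads = [] } = await client.send(new ListMultipartUploadsCommand({ Bucket }));
+ * for (const { Key, UploadId } of Uploads) {
+ *   await client.send(new AbortMultipartUploadCommand({ Bucket, Key, UploadId }));
+ * }
+ * await client.send(new DeleteBucketCommand({ Bucket }));
+ * ```
+ *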
+ * Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name
+ * . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the
* Amazon S3 User Guide.
You can copy individual objects between general purpose buckets, between directory buckets, and * between general purpose buckets and directory buckets.
*
- * Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name
- * . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the
+ *
Amazon S3 supports copy operations using Multi-Region Access Points only as a destination when using the Multi-Region Access Point ARN.
+ *
+ * Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name
+ * . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the
* Amazon S3 User Guide.
VPC endpoints don't support cross-Region requests (including copies). If you're using VPC endpoints, your source and destination buckets should be in the same Amazon Web Services Region as your VPC endpoint.
+ *Both the * Region that you want to copy the object from and the Region that you want to copy the diff --git a/clients/client-s3/src/commands/HeadBucketCommand.ts b/clients/client-s3/src/commands/HeadBucketCommand.ts index ed54a832204a0..8ee90c4f6747f 100644 --- a/clients/client-s3/src/commands/HeadBucketCommand.ts +++ b/clients/client-s3/src/commands/HeadBucketCommand.ts @@ -31,22 +31,20 @@ export interface HeadBucketCommandOutput extends HeadBucketOutput, __MetadataBea /** *
You can use this operation to determine if a bucket exists and if you have permission to access it. The action returns a 200 OK if the bucket exists and you have permission
* to access it.
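 * An illustrative (non-normative) existence check built on this behavior; the
 * bucket name is a placeholder, and only the thrown error's HTTP status code
 * is inspected because no message body is returned:
 * ```javascript
 * import { S3Client, HeadBucketCommand } from "@aws-sdk/client-s3";
 *
 * const client = new S3Client({});
 * async function bucketExists(Bucket) {
 *   try {
 *     await client.send(new HeadBucketCommand({ Bucket }));
 *     return true; // 200 OK: exists and accessible
 *   } catch (err) {
 *     if (err.$metadata?.httpStatusCode === 404) return false; // not found
 *     throw err; // 400/403 and others: existence can't be determined
 *   }
 * }
 * ```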
If the bucket does not exist or you do not have permission to access it, the
+ * If the bucket does not exist or you do not have permission to access it, the
*
- * Directory buckets - You must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format All
+ * General purpose buckets - Requests to public buckets that grant the s3:ListBucket permission publicly do not need to be signed. All other
- * Directory bucket - You must use IAM credentials to authenticate and authorize your access to the Amazon Web Services CLI or SDKs handles authentication and authorization on your behalf.
* Directory buckets - The HTTP Host header syntax is You must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format The A A Request headers are limited to 8 KB in size. For more information, see Common
* Request Headers.
- * Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format
* Directory buckets - The HTTP Host header syntax is For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format The following actions are related to
* Directory buckets -
* If multipart uploads in a directory bucket are in progress, you can't delete the bucket until all the in-progress multipart uploads are aborted or completed.
+ * To delete these in-progress multipart uploads, use the HEAD request returns a generic 400 Bad Request, 403
* Forbidden or 404 Not Found code. A message body is not included, so
* you cannot determine the exception beyond these HTTP response codes.https://bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the
- * Amazon S3 User Guide.
*
* @example
diff --git a/clients/client-s3/src/commands/HeadObjectCommand.ts b/clients/client-s3/src/commands/HeadObjectCommand.ts
index ce4487a8286ba..37570096072fb 100644
--- a/clients/client-s3/src/commands/HeadObjectCommand.ts
+++ b/clients/client-s3/src/commands/HeadObjectCommand.ts
@@ -37,20 +37,16 @@ export interface HeadObjectCommandOutput extends HeadObjectOutput, __MetadataBea
/**
* HeadBucket requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz- prefix, including
+ * HeadBucket requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz- prefix, including
* x-amz-copy-source, must be signed. For more information, see REST Authentication.HeadBucket API operation, instead of using the
+ * Directory buckets - You must use IAM credentials to authenticate and authorize your access to the HeadBucket API operation, instead of using the
* temporary security credentials through the CreateSession API operation.
* Bucket_name.s3express-az_id.region.amazonaws.com.https://bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the
+ * Amazon S3 User Guide.HEAD operation retrieves metadata from an object without returning the
* object itself. This operation is useful if you're interested only in an object's metadata.HEAD request has the same options as a GET operation on an
+ * HEAD request has the same options as a GET operation on an
* object. The response is identical to the GET response except that there is no
* response body. Because of this, if the HEAD request generates an error, it
* returns a generic code, such as 400 Bad Request, 403 Forbidden, 404 Not
* Found, 405 Method Not Allowed, 412 Precondition Failed, or 304 Not Modified.
* It's not possible to retrieve the exact exception of these error codes.https://bucket_name.s3express-az_id.region.amazonaws.com/key-name
- * . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the
- * Amazon S3 User Guide.
*
*
* Bucket_name.s3express-az_id.region.amazonaws.com.https://bucket_name.s3express-az_id.region.amazonaws.com/key-name
+ * . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the
+ * Amazon S3 User Guide.HeadObject:ListMultipartUploads operation to list the in-progress multipart
+ * uploads in the bucket and use the AbortMultipartUpload operation to abort all the in-progress multipart uploads.
*
The ListMultipartUploads operation returns a maximum of 1,000 multipart uploads in the response. The limit of 1,000 multipart
diff --git a/clients/client-s3/src/commands/ListObjectsV2Command.ts b/clients/client-s3/src/commands/ListObjectsV2Command.ts
index a3147b84f0325..4867446cbe07b 100644
--- a/clients/client-s3/src/commands/ListObjectsV2Command.ts
+++ b/clients/client-s3/src/commands/ListObjectsV2Command.ts
@@ -37,10 +37,24 @@ export interface ListObjectsV2CommandOutput extends ListObjectsV2Output, __Metad
* For more information about listing objects, see Listing object keys
* programmatically in the Amazon S3 User Guide. To get a list of your buckets, see ListBuckets.
- * Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name
- * . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the
+ *
+ * General purpose bucket - For general purpose buckets, ListObjectsV2 doesn't return prefixes that are related only to in-progress multipart uploads.
+ * Directory buckets -
+ * For directory buckets, the ListObjectsV2 response includes the prefixes that are related only to in-progress multipart uploads.
+ *
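+ * For completeness, a hedged sketch of listing every key with the package's
+ * built-in paginator (the bucket name is a placeholder); continuation tokens
+ * are handled internally:
+ * ```javascript
+ * import { S3Client, paginateListObjectsV2 } from "@aws-sdk/client-s3";
+ *
+ * const client = new S3Client({});
+ * for await (const page of paginateListObjectsV2({ client }, { Bucket: "my-bucket" })) {
+ *   for (const obj of page.Contents ?? []) console.log(obj.Key);
+ * }
+ * ```
+ *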
+ * Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name
+ * . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the
* Amazon S3 User Guide.
This action requires Amazon Web Services Signature Version 4. For more information, see
+ * If you're specifying a customer managed KMS key, we recommend using a fully qualified
+ * KMS key ARN. If you use a KMS key alias instead, then KMS resolves the key within the
+ * requester’s account. This behavior can result in data that's encrypted with a KMS key
+ * that belongs to the requester, and not the bucket owner. Also, this action requires Amazon Web Services Signature Version 4. For more information, see
* Authenticating Requests (Amazon Web Services Signature Version 4).
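 * A hedged configuration sketch following the recommendation above (the bucket
 * name and key ARN are placeholders): pass the fully qualified KMS key ARN,
 * not an alias, so the key resolves in the bucket owner's account.
 * ```javascript
 * import { S3Client, PutBucketEncryptionCommand } from "@aws-sdk/client-s3";
 *
 * const client = new S3Client({});
 * await client.send(new PutBucketEncryptionCommand({
 *   Bucket: "my-bucket", // placeholder
 *   ServerSideEncryptionConfiguration: {
 *     Rules: [{
 *       ApplyServerSideEncryptionByDefault: {
 *         SSEAlgorithm: "aws:kms",
 *         KMSMasterKeyID: "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab", // placeholder
 *       },
 *     }],
 *   },
 * }));
 * ```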
To use this operation, you must have permission to perform the
diff --git a/clients/client-s3/src/commands/PutBucketPolicyCommand.ts b/clients/client-s3/src/commands/PutBucketPolicyCommand.ts
index b27ab54fcd1e5..2999df40bca8e 100644
--- a/clients/client-s3/src/commands/PutBucketPolicyCommand.ts
+++ b/clients/client-s3/src/commands/PutBucketPolicyCommand.ts
@@ -6,7 +6,7 @@ import { Command as $Command } from "@smithy/smithy-client";
import { MetadataBearer as __MetadataBearer } from "@smithy/types";
import { commonParams } from "../endpoint/EndpointParameters";
-import { PutBucketPolicyRequest } from "../models/models_0";
+import { PutBucketPolicyRequest } from "../models/models_1";
import { de_PutBucketPolicyCommand, se_PutBucketPolicyCommand } from "../protocols/Aws_restXml";
import { S3ClientResolvedConfig, ServiceInputTypes, ServiceOutputTypes } from "../S3Client";
diff --git a/clients/client-s3/src/commands/PutBucketVersioningCommand.ts b/clients/client-s3/src/commands/PutBucketVersioningCommand.ts
index ae2ee3230358b..07c539450a1c3 100644
--- a/clients/client-s3/src/commands/PutBucketVersioningCommand.ts
+++ b/clients/client-s3/src/commands/PutBucketVersioningCommand.ts
@@ -32,6 +32,15 @@ export interface PutBucketVersioningCommandOutput extends __MetadataBearer {}
* This operation is not supported by directory buckets. When you enable versioning on a bucket for the first time, it might take a short
+ * amount of time for the change to be fully propagated. We recommend that you wait for 15
+ * minutes after enabling versioning before issuing write operations
+ * (PUT or DELETE)
+ * on objects in the bucket.
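+ *
+ * A minimal sketch (placeholder bucket name) of enabling versioning, after
+ * which the propagation delay noted above applies before writing:
+ * ```javascript
+ * import { S3Client, PutBucketVersioningCommand } from "@aws-sdk/client-s3";
+ *
+ * const client = new S3Client({});
+ * await client.send(new PutBucketVersioningCommand({
+ *   Bucket: "my-bucket", // placeholder
+ *   VersioningConfiguration: { Status: "Enabled" },
+ * }));
+ * // Wait ~15 minutes before issuing PUT or DELETE requests on objects.
+ * ```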
Sets the versioning state of an existing bucket.
*You can set the versioning state with one of the following values:
*diff --git a/clients/client-s3/src/models/models_0.ts b/clients/client-s3/src/models/models_0.ts index 6f0d5a2a4d225..d4fc5558251be 100644 --- a/clients/client-s3/src/models/models_0.ts +++ b/clients/client-s3/src/models/models_0.ts @@ -6027,6 +6027,12 @@ export interface GetBucketCorsRequest { * with SSE-KMS to a bucket. By default, Amazon S3 uses this KMS key for SSE-KMS. For more * information, see PUT Bucket encryption in * the Amazon S3 API Reference.
+ *If you're specifying a customer managed KMS key, we recommend using a fully qualified + * KMS key ARN. If you use a KMS key alias instead, then KMS resolves the key within the + * requester’s account. This behavior can result in data that's encrypted with a KMS key + * that belongs to the requester, and not the bucket owner.
+ *Specifies the default server-side encryption configuration.
+ *If you're specifying a customer managed KMS key, we recommend using a fully qualified + * KMS key ARN. If you use a KMS key alias instead, then KMS resolves the key within the + * requester’s account. This behavior can result in data that's encrypted with a KMS key + * that belongs to the requester, and not the bucket owner.
+ *Specifies the partition date source for the partitioned prefix. PartitionDateSource can be EventTime or DeliveryTime.
+ *Specifies the partition date source for the partitioned prefix.
+ * PartitionDateSource can be EventTime or
+ * DeliveryTime.
For DeliveryTime, the time in the log file names corresponds to the
+ * delivery time for the log files.
For EventTime, the logs delivered are for a specific day only. The year,
+ * month, and day correspond to the day on which the event occurred, and the hour, minutes and
+ * seconds are set to 00 in the key.
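+ *
+ * A hedged sketch of selecting the date source when configuring server access
+ * logging (bucket names are placeholders; the surrounding shapes are assumed
+ * from this interface):
+ * ```javascript
+ * import { S3Client, PutBucketLoggingCommand } from "@aws-sdk/client-s3";
+ *
+ * const client = new S3Client({});
+ * await client.send(new PutBucketLoggingCommand({
+ *   Bucket: "my-source-bucket", // placeholder
+ *   BucketLoggingStatus: {
+ *     LoggingEnabled: {
+ *       TargetBucket: "my-log-bucket", // placeholder
+ *       TargetPrefix: "logs/",
+ *       TargetObjectKeyFormat: {
+ *         PartitionedPrefix: { PartitionDateSource: "EventTime" }, // or "DeliveryTime"
+ *       },
+ *     },
+ *   },
+ * }));
+ * ```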
Specifies encryption-related information for an Amazon S3 bucket that is a destination for * replicated objects.
+ *If you're specifying a customer managed KMS key, we recommend using a fully qualified + * KMS key ARN. If you use a KMS key alias instead, then KMS resolves the key within the + * requester’s account. This behavior can result in data that's encrypted with a KMS key + * that belongs to the requester, and not the bucket owner.
+ *The container element for specifying the default Object Lock retention settings for new + *
The container element for optionally specifying the default Object Lock retention settings for new * objects placed in the specified bucket.
*Specifies whether Amazon S3 should restrict public bucket policies for this bucket. Setting
- * this element to TRUE restricts access to this bucket to only Amazon Web Service principals and authorized users within this account if the bucket has
+ * this element to TRUE restricts access to this bucket to only Amazon Web Services service principals and authorized users within this account if the bucket has
* a public policy.
Enabling this setting doesn't affect previously stored bucket policies, except that * public and cross-account access within any public bucket policy, including non-public @@ -10518,9 +10543,6 @@ export interface HeadBucketOutput { /** *
The Region where the bucket is located.
- *This functionality is not supported for directory buckets.
- *Indicates whether the bucket name used in the request is an access point alias.
*This functionality is not supported for directory buckets.
+ *For directory buckets, the value of this field is false.
+ * ContinuationToken is included in the
+ * response when there are more buckets that can be listed with pagination. The next ListBuckets request to Amazon S3 can be continued with this ContinuationToken. ContinuationToken is obfuscated and is not a real bucket.
Maximum number of buckets to be returned in the response. When the number is more than the count of buckets that are owned by an Amazon Web Services account, all the buckets owned by the account are returned in the response.
+ * @public + */ + MaxBuckets?: number; + + /** + *
+ * ContinuationToken indicates to Amazon S3 that the list is being continued on
+ * this bucket with a token. ContinuationToken is obfuscated and is not a real
+ * key. You can use this ContinuationToken for pagination of the list results.
Length Constraints: Minimum length of 0. Maximum length of 1024.
+ *Required: No.
+ * @public + */ + ContinuationToken?: string; } /** @@ -11523,9 +11575,8 @@ export interface ListDirectoryBucketsOutput { export interface ListDirectoryBucketsRequest { /** *
- * ContinuationToken indicates to Amazon S3 that the list is being continued on
- * this bucket with a token. ContinuationToken is obfuscated and is not a real
- * key. You can use this ContinuationToken for pagination of the list results.
ContinuationToken indicates to Amazon S3 that the list is being continued on buckets in this account with a token. ContinuationToken is obfuscated and is not a real
+ * bucket name. You can use this ContinuationToken for the pagination of the list results.
* @public
*/
ContinuationToken?: string;
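/**
 * A hedged sketch of driving the new ListBuckets pagination parameters
 * (MaxBuckets and ContinuationToken) added in this change; the output fields
 * are assumed from the updated ListBucketsOutput.
 * ```javascript
 * import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";
 *
 * const client = new S3Client({});
 * let ContinuationToken;
 * do {
 *   // MaxBuckets caps each page; ContinuationToken resumes the listing.
 *   const page = await client.send(new ListBucketsCommand({ MaxBuckets: 100, ContinuationToken }));
 *   for (const b of page.Buckets ?? []) console.log(b.Name);
 *   ContinuationToken = page.ContinuationToken;
 * } while (ContinuationToken);
 * ```
 */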
@@ -11818,11 +11869,19 @@ export interface ListMultipartUploadsRequest {
Delimiter?: string;
/**
- * Requests Amazon S3 to encode the object keys in the response and specifies the encoding - * method to use. An object key can contain any Unicode character; however, the XML 1.0 parser - * cannot parse some characters, such as characters with an ASCII value from 0 to 10. For - * characters that are not supported in XML 1.0, you can add this parameter to request that - * Amazon S3 encode the keys in the response.
+ *Encoding type used by Amazon S3 to encode the object keys in the response. + * Responses are encoded only in UTF-8. An object key can contain any Unicode character. + * However, the XML 1.0 parser can't parse certain characters, such as characters with an + * ASCII value from 0 to 10. For characters that aren't supported in XML 1.0, you can add this + * parameter to request that Amazon S3 encode the keys in the response. For more information about + * characters to avoid in object key names, see Object key naming + * guidelines.
+ *When using the URL encoding type, non-ASCII characters that are used in an object's
+ * key name will be percent-encoded according to UTF-8 code values. For example, the object
+ * test_file(3).png will appear as
+ * test_file%283%29.png.
Encoding type used by Amazon S3 to encode object keys in the response. If using
- * url, non-ASCII characters used in an object's key name will be URL encoded.
- * For example, the object test_file(3).png will appear as
+ *
Encoding type used by Amazon S3 to encode the object keys in the response. + * Responses are encoded only in UTF-8. An object key can contain any Unicode character. + * However, the XML 1.0 parser can't parse certain characters, such as characters with an + * ASCII value from 0 to 10. For characters that aren't supported in XML 1.0, you can add this + * parameter to request that Amazon S3 encode the keys in the response. For more information about + * characters to avoid in object key names, see Object key naming + * guidelines.
+ *When using the URL encoding type, non-ASCII characters that are used in an object's
+ * key name will be percent-encoded according to UTF-8 code values. For example, the object
+ * test_file(3).png will appear as
* test_file%283%29.png.
Requests Amazon S3 to encode the object keys in the response and specifies the encoding - * method to use. An object key can contain any Unicode character; however, the XML 1.0 parser - * cannot parse some characters, such as characters with an ASCII value from 0 to 10. For - * characters that are not supported in XML 1.0, you can add this parameter to request that - * Amazon S3 encode the keys in the response.
+ *Encoding type used by Amazon S3 to encode the object keys in the response. + * Responses are encoded only in UTF-8. An object key can contain any Unicode character. + * However, the XML 1.0 parser can't parse certain characters, such as characters with an + * ASCII value from 0 to 10. For characters that aren't supported in XML 1.0, you can add this + * parameter to request that Amazon S3 encode the keys in the response. For more information about + * characters to avoid in object key names, see Object key naming + * guidelines.
+ *When using the URL encoding type, non-ASCII characters that are used in an object's
+ * key name will be percent-encoded according to UTF-8 code values. For example, the object
+ * test_file(3).png will appear as
+ * test_file%283%29.png.
Encoding type used by Amazon S3 to encode object keys in the response. If using
- * url, non-ASCII characters used in an object's key name will be URL encoded.
- * For example, the object test_file(3).png will appear as
+ *
Encoding type used by Amazon S3 to encode the object keys in the response. + * Responses are encoded only in UTF-8. An object key can contain any Unicode character. + * However, the XML 1.0 parser can't parse certain characters, such as characters with an + * ASCII value from 0 to 10. For characters that aren't supported in XML 1.0, you can add this + * parameter to request that Amazon S3 encode the keys in the response. For more information about + * characters to avoid in object key names, see Object key naming + * guidelines.
+ *When using the URL encoding type, non-ASCII characters that are used in an object's
+ * key name will be percent-encoded according to UTF-8 code values. For example, the object
+ * test_file(3).png will appear as
* test_file%283%29.png.
Requests Amazon S3 to encode the object keys in the response and specifies the encoding - * method to use. An object key can contain any Unicode character; however, the XML 1.0 parser - * cannot parse some characters, such as characters with an ASCII value from 0 to 10. For - * characters that are not supported in XML 1.0, you can add this parameter to request that - * Amazon S3 encode the keys in the response.
+ *Encoding type used by Amazon S3 to encode the object keys in the response. + * Responses are encoded only in UTF-8. An object key can contain any Unicode character. + * However, the XML 1.0 parser can't parse certain characters, such as characters with an + * ASCII value from 0 to 10. For characters that aren't supported in XML 1.0, you can add this + * parameter to request that Amazon S3 encode the keys in the response. For more information about + * characters to avoid in object key names, see Object key naming + * guidelines.
+ *When using the URL encoding type, non-ASCII characters that are used in an object's
+ * key name will be percent-encoded according to UTF-8 code values. For example, the object
+ * test_file(3).png will appear as
+ * test_file%283%29.png.
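+ *
+ * Decoding is the caller's job; a one-line hedged demo of the example above:
+ * ```javascript
+ * decodeURIComponent("test_file%283%29.png"); // => "test_file(3).png"
+ * ```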
The name of the bucket.
- *
- * Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region_code.amazonaws.com/bucket-name
- * . Virtual-hosted-style requests aren't supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must also follow the format
- * bucket_base_name--az_id--x-s3 (for example,
- * DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide
- *
Note: To supply the Multi-region Access Point (MRAP) to Bucket, you need to install the "@aws-sdk/signature-v4-crt" package to your project dependencies. - * For more information, please go to https://github.com/aws/aws-sdk-js-v3#known-issues
- * @public - */ - Bucket: string | undefined; - - /** - *The MD5 hash of the request body.
- *For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.
- *This functionality is not supported for directory buckets.
- *Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any
- * additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm
- * or
- * x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request.
For the x-amz-checksum-algorithm
- * header, replace
- * algorithm
- * with the supported algorithm from the following list:
CRC32
- *CRC32C
- *SHA1
- *SHA256
- *For more - * information, see Checking object integrity in - * the Amazon S3 User Guide.
- *If the individual checksum value you provide through x-amz-checksum-algorithm
- * doesn't match the checksum algorithm you set through x-amz-sdk-checksum-algorithm, Amazon S3 ignores any provided
- * ChecksumAlgorithm parameter and uses the checksum algorithm that matches the provided value in x-amz-checksum-algorithm
- * .
For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the default checksum algorithm that's used for performance.
Set this parameter to true to confirm that you want to remove your permissions to change - * this bucket policy in the future.
- *This functionality is not supported for directory buckets.
- *The bucket policy as a JSON document.
- *For directory buckets, the only IAM action supported in the bucket policy is s3express:CreateSession.
The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).
For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code
- * 501 Not Implemented.
The name of the bucket.
+ *
+ * Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region_code.amazonaws.com/bucket-name
+ * . Virtual-hosted-style requests aren't supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must also follow the format
+ * bucket_base_name--az_id--x-s3 (for example,
+ * DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide
+ *
Note: To supply the Multi-region Access Point (MRAP) to Bucket, you need to install the "@aws-sdk/signature-v4-crt" package to your project dependencies. + * For more information, please go to https://github.com/aws/aws-sdk-js-v3#known-issues
+ * @public + */ + Bucket: string | undefined; + + /** + *The MD5 hash of the request body.
+ *For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.
+ *This functionality is not supported for directory buckets.
+ *Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any
+ * additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm
+ * or
+ * x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request.
For the x-amz-checksum-algorithm
+ * header, replace
+ * algorithm
+ * with the supported algorithm from the following list:
CRC32
+ *CRC32C
+ *SHA1
+ *SHA256
+ *For more + * information, see Checking object integrity in + * the Amazon S3 User Guide.
+ *If the individual checksum value you provide through x-amz-checksum-algorithm
+ * doesn't match the checksum algorithm you set through x-amz-sdk-checksum-algorithm, Amazon S3 ignores any provided
+ * ChecksumAlgorithm parameter and uses the checksum algorithm that matches the provided value in x-amz-checksum-algorithm
+ * .
For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the default checksum algorithm that's used for performance.
Set this parameter to true to confirm that you want to remove your permissions to change + * this bucket policy in the future.
+ *This functionality is not supported for directory buckets.
+ *The bucket policy as a JSON document.
+ *For directory buckets, the only IAM action supported in the bucket policy is s3express:CreateSession.
The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).
For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code
+ * 501 Not Implemented.
The byte array of partial, one or more result records.
+ *The byte array of partial, one or more result records. S3 Select doesn't guarantee that
+ * a record will be self-contained in one record frame. To ensure continuous streaming of
+ * data, S3 Select might split the same record across multiple record frames instead of
+ * aggregating the results in memory. Some S3 clients (for example, the SDK for Java) handle this behavior by creating a ByteStream out of the response by
+ * default. Other clients might not handle this behavior by default. In those cases, you must
+ * aggregate the results on the client side and parse the response.
This operation aborts a multipart upload. After a multipart upload is aborted, no\n additional parts can be uploaded using that upload ID. The storage consumed by any\n previously uploaded parts will be freed. However, if any part uploads are currently in\n progress, those part uploads might or might not succeed. As a result, it might be necessary\n to abort a given multipart upload multiple times in order to completely free all storage\n consumed by all parts.
\nTo verify that all parts have been removed and prevent getting charged for the part\n storage, you should call the ListParts API operation and ensure that\n the parts list is empty.
\n\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
\n General purpose bucket permissions - For information about permissions required to use the multipart upload, see Multipart Upload\n and Permissions in the Amazon S3\n User Guide.
\n\n Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the \n CreateSession\n API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. \nAmazon Web Services CLI or SDKs create session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see \n CreateSession\n .
\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com.
The following operations are related to AbortMultipartUpload:
\n UploadPart\n
\n\n ListParts\n
\n\n ListMultipartUploads\n
\nThis operation aborts a multipart upload. After a multipart upload is aborted, no\n additional parts can be uploaded using that upload ID. The storage consumed by any\n previously uploaded parts will be freed. However, if any part uploads are currently in\n progress, those part uploads might or might not succeed. As a result, it might be necessary\n to abort a given multipart upload multiple times in order to completely free all storage\n consumed by all parts.
\nTo verify that all parts have been removed and prevent getting charged for the part\n storage, you should call the ListParts API operation and ensure that\n the parts list is empty.
\n\n Directory buckets - \n If multipart uploads in a directory bucket are in progress, you can't delete the bucket until all the in-progress multipart uploads are aborted or completed. \n To delete these in-progress multipart uploads, use the\n ListMultipartUploads operation to list the in-progress multipart\n uploads in the bucket and use the AbortMultipartUpload operation to\n abort all the in-progress multipart uploads.\n 
\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
\n General purpose bucket permissions - For information about permissions required to use the multipart upload, see Multipart Upload\n and Permissions in the Amazon S3\n User Guide.
\n\n Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the \n CreateSession\n API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. \nAmazon Web Services CLI or SDKs create session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see \n CreateSession\n .
\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com.
The following operations are related to AbortMultipartUpload:
\n UploadPart\n
\n\n ListParts\n
\n\n ListMultipartUploads\n
\nCreates a copy of an object that is already stored in Amazon S3.
\nYou can store individual objects of up to 5 TB in Amazon S3. You create a copy of your\n object up to 5 GB in size in a single atomic action using this API. However, to copy an\n object greater than 5 GB, you must use the multipart upload Upload Part - Copy\n (UploadPartCopy) API. For more information, see Copy Object Using the\n REST Multipart Upload API.
\nYou can copy individual objects between general purpose buckets, between directory buckets, and \n between general purpose buckets and directory buckets.
\n\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
Both the\n Region that you want to copy the object from and the Region that you want to copy the\n object to must be enabled for your account. For more information about how to enable a Region for your account, see Enable \n or disable a Region for standalone accounts in the\n Amazon Web Services Account Management Guide.
\nAmazon S3 transfer acceleration does not support cross-Region copies. If you request a\n cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad\n Request error. For more information, see Transfer\n Acceleration.
All CopyObject requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz- prefix, including\n x-amz-copy-source, must be signed. For more information, see REST Authentication.
\n Directory buckets - You must use the IAM credentials to authenticate and authorize your access to the CopyObject API operation, instead of using the \n temporary security credentials through the CreateSession API operation.
Amazon Web Services CLI or SDKs handles authentication and authorization on your behalf.
\nYou must have\n read access to the source object and write\n access to the destination bucket.
\n\n General purpose bucket permissions -\n You must have permissions in an IAM policy based on the source and destination\n bucket types in a CopyObject operation.
If the source object is in a general purpose bucket, you must have\n \n s3:GetObject\n \n permission to read the source object that is being copied.
If the destination bucket is a general purpose bucket, you must have\n \n s3:PutObject\n \n permission to write the object copy to the destination bucket.
\n Directory bucket permissions -\n You must have permissions in a bucket policy or an IAM identity-based policy based on the source and destination\n bucket types in a CopyObject operation.
If the source object that you want to copy is in a\n directory bucket, you must have the \n s3express:CreateSession\n permission in\n the Action element of a policy to read the object. By default, the session is in the ReadWrite mode. If you want to restrict the access, you can explicitly set the s3express:SessionMode condition key to ReadOnly on the copy source bucket.
If the copy destination is a directory bucket, you must have the \n s3express:CreateSession\n permission in the\n Action element of a policy to write the object\n to the destination. The s3express:SessionMode condition\n key can't be set to ReadOnly on the copy destination bucket.
For example policies, see Example bucket policies for S3 Express One Zone and Amazon Web Services Identity and Access Management (IAM) identity-based policies for S3 Express One Zone in the\n Amazon S3 User Guide.
\nWhen the request is an HTTP 1.1 request, the response is chunk encoded. When\n the request is not an HTTP 1.1 request, the response would not contain the\n Content-Length. You always need to read the entire response body\n to check if the copy succeeds.
If the copy is successful, you receive a response with information about the copied\n object.
\nA copy request might return an error when Amazon S3 receives the copy request or while Amazon S3\n is copying the files. A 200 OK response can contain either a success or an error.
If the error occurs before the copy action starts, you receive a\n standard Amazon S3 error.
\nIf the error occurs during the copy operation, the error response is\n embedded in the 200 OK response. For example, in a cross-region copy, you \n may encounter throttling and receive a 200 OK response. \n For more information, see Resolve \n the Error 200 response when copying objects to Amazon S3. \n The 200 OK status code means the copy was accepted, but \n it doesn't mean the copy is complete. Another example is \n when you disconnect from Amazon S3 before the copy is complete, Amazon S3 might cancel the copy and you may receive a 200 OK response. \n You must stay connected to Amazon S3 until the entire response is successfully received and processed.
If you call this API operation directly, make\n sure to design your application to parse the content of the response and handle it\n appropriately. If you use Amazon Web Services SDKs, SDKs handle this condition. The SDKs detect the\n embedded error and apply error handling per your configuration settings (including\n automatically retrying the request as appropriate). If the condition persists, the SDKs\n throw an exception (or, for the SDKs that don't use exceptions, they return an \n error).
\nThe copy request charge is based on the storage class and Region that you specify for\n the destination object. The request can also result in a data retrieval charge for the\n source if the source storage class bills for data retrieval. If the copy source is in a different region, the data transfer is billed to the copy source account. For pricing information, see\n Amazon S3 pricing.
\n\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com.
The following operations are related to CopyObject:
Creates a copy of an object that is already stored in Amazon S3.
\nYou can store individual objects of up to 5 TB in Amazon S3. You create a copy of your\n object up to 5 GB in size in a single atomic action using this API. However, to copy an\n object greater than 5 GB, you must use the multipart upload Upload Part - Copy\n (UploadPartCopy) API. For more information, see Copy Object Using the\n REST Multipart Upload API.
\nYou can copy individual objects between general purpose buckets, between directory buckets, and \n between general purpose buckets and directory buckets.
\nAmazon S3 supports copy operations using Multi-Region Access Points only as a destination when using the Multi-Region Access Point ARN.
\n\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
VPC endpoints don't support cross-Region requests (including copies). If you're using VPC endpoints, your source and destination buckets should be in the same Amazon Web Services Region as your VPC endpoint.
\nBoth the\n Region that you want to copy the object from and the Region that you want to copy the\n object to must be enabled for your account. For more information about how to enable a Region for your account, see Enable \n or disable a Region for standalone accounts in the\n Amazon Web Services Account Management Guide.
\nAmazon S3 transfer acceleration does not support cross-Region copies. If you request a\n cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad\n Request error. For more information, see Transfer\n Acceleration.
All CopyObject requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz- prefix, including\n x-amz-copy-source, must be signed. For more information, see REST Authentication.
\n Directory buckets - You must use the IAM credentials to authenticate and authorize your access to the CopyObject API operation, instead of using the \n temporary security credentials through the CreateSession API operation.
Amazon Web Services CLI or SDKs handles authentication and authorization on your behalf.
\nYou must have\n read access to the source object and write\n access to the destination bucket.
\n\n General purpose bucket permissions -\n You must have permissions in an IAM policy based on the source and destination\n bucket types in a CopyObject operation.
If the source object is in a general purpose bucket, you must have\n \n s3:GetObject\n \n permission to read the source object that is being copied.
If the destination bucket is a general purpose bucket, you must have\n \n s3:PutObject\n \n permission to write the object copy to the destination bucket.
\n Directory bucket permissions -\n You must have permissions in a bucket policy or an IAM identity-based policy based on the source and destination\n bucket types in a CopyObject operation.
If the source object that you want to copy is in a\n directory bucket, you must have the \n s3express:CreateSession\n permission in\n the Action element of a policy to read the object. By default, the session is in the ReadWrite mode. If you want to restrict the access, you can explicitly set the s3express:SessionMode condition key to ReadOnly on the copy source bucket.
If the copy destination is a directory bucket, you must have the \n s3express:CreateSession\n permission in the\n Action element of a policy to write the object\n to the destination. The s3express:SessionMode condition\n key can't be set to ReadOnly on the copy destination bucket.
For example policies, see Example bucket policies for S3 Express One Zone and Amazon Web Services Identity and Access Management (IAM) identity-based policies for S3 Express One Zone in the\n Amazon S3 User Guide.
\nWhen the request is an HTTP 1.1 request, the response is chunk encoded. When\n the request is not an HTTP 1.1 request, the response would not contain the\n Content-Length. You always need to read the entire response body\n to check if the copy succeeds.
If the copy is successful, you receive a response with information about the copied\n object.
\nA copy request might return an error when Amazon S3 receives the copy request or while Amazon S3\n is copying the files. A 200 OK response can contain either a success or an error.
If the error occurs before the copy action starts, you receive a\n standard Amazon S3 error.
\nIf the error occurs during the copy operation, the error response is\n embedded in the 200 OK response. For example, in a cross-region copy, you \n may encounter throttling and receive a 200 OK response. \n For more information, see Resolve \n the Error 200 response when copying objects to Amazon S3. \n The 200 OK status code means the copy was accepted, but \n it doesn't mean the copy is complete. Another example is \n when you disconnect from Amazon S3 before the copy is complete, Amazon S3 might cancel the copy and you may receive a 200 OK response. \n You must stay connected to Amazon S3 until the entire response is successfully received and processed.
If you call this API operation directly, make\n sure to design your application to parse the content of the response and handle it\n appropriately. If you use Amazon Web Services SDKs, SDKs handle this condition. The SDKs detect the\n embedded error and apply error handling per your configuration settings (including\n automatically retrying the request as appropriate). If the condition persists, the SDKs\n throw an exception (or, for the SDKs that don't use exceptions, they return an \n error).
\nThe copy request charge is based on the storage class and Region that you specify for\n the destination object. The request can also result in a data retrieval charge for the\n source if the source storage class bills for data retrieval. If the copy source is in a different region, the data transfer is billed to the copy source account. For pricing information, see\n Amazon S3 pricing.
\n\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com.
The following operations are related to CopyObject:
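A minimal TypeScript sketch of the guidance above, using the Amazon Web Services SDK for JavaScript v3 (all bucket and key names are hypothetical placeholders). The SDK reads and parses the response body for you, so an error embedded in a 200 OK response surfaces as a thrown exception after any configured retries:

```ts
import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

try {
  const response = await client.send(
    new CopyObjectCommand({
      Bucket: "amzn-s3-demo-destination-bucket", // placeholder destination bucket
      Key: "copied-object-key", // placeholder destination key
      CopySource: "amzn-s3-demo-source-bucket/source-object-key", // placeholder source
    })
  );
  // The SDK has read the entire response body, so a result here means success.
  console.log("Copy succeeded:", response.CopyObjectResult?.ETag);
} catch (err) {
  // Standard errors and errors embedded in a 200 OK response both land here.
  console.error("Copy failed:", err);
}
```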
The container element for specifying the default Object Lock retention settings for new\n objects placed in the specified bucket.
\nThe DefaultRetention settings require both a mode and a\n period.
The DefaultRetention period can be either Days or\n Years but you must select one. You cannot specify\n Days and Years at the same time.
The container element for optionally specifying the default Object Lock retention settings for new\n objects placed in the specified bucket.
\nThe DefaultRetention settings require both a mode and a\n period.
The DefaultRetention period can be either Days or\n Years but you must select one. You cannot specify\n Days and Years at the same time.
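For illustration, a hedged sketch of setting a default retention period with the SDK for JavaScript v3, assuming a bucket that was created with Object Lock enabled (the bucket name is a placeholder). Note that DefaultRetention takes Days or Years, never both:

```ts
import { S3Client, PutObjectLockConfigurationCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

await client.send(
  new PutObjectLockConfigurationCommand({
    Bucket: "amzn-s3-demo-bucket", // placeholder; Object Lock must already be enabled
    ObjectLockConfiguration: {
      ObjectLockEnabled: "Enabled",
      Rule: {
        // A mode plus exactly one period: Days or Years, not both.
        DefaultRetention: { Mode: "GOVERNANCE", Days: 30 },
      },
    },
  })
);
```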
Requests Amazon S3 to encode the object keys in the response and specifies the encoding\n method to use. An object key can contain any Unicode character; however, the XML 1.0 parser\n cannot parse some characters, such as characters with an ASCII value from 0 to 10. For\n characters that are not supported in XML 1.0, you can add this parameter to request that\n Amazon S3 encode the keys in the response.
" + "smithy.api#documentation": "Encoding type used by Amazon S3 to encode the object keys in the response.\n Responses are encoded only in UTF-8. An object key can contain any Unicode character.\n However, the XML 1.0 parser can't parse certain characters, such as characters with an\n ASCII value from 0 to 10. For characters that aren't supported in XML 1.0, you can add this\n parameter to request that Amazon S3 encode the keys in the response. For more information about\n characters to avoid in object key names, see Object key naming\n guidelines.
\nWhen using the URL encoding type, non-ASCII characters that are used in an object's\n key name will be percent-encoded according to UTF-8 code values. For example, the object\n test_file(3).png will appear as\n test_file%283%29.png.
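As a sketch of how a client might consume URL-encoded keys with the SDK for JavaScript v3 (the bucket name is a placeholder), percent-decoding recovers the original key on the client side:

```ts
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

const { Contents } = await client.send(
  new ListObjectsV2Command({
    Bucket: "amzn-s3-demo-bucket", // placeholder bucket
    EncodingType: "url",
  })
);
for (const object of Contents ?? []) {
  // "test_file%283%29.png" decodes back to "test_file(3).png".
  console.log(decodeURIComponent(object.Key ?? ""));
}
```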
Specifies encryption-related information for an Amazon S3 bucket that is a destination for\n replicated objects.
" + "smithy.api#documentation": "Specifies encryption-related information for an Amazon S3 bucket that is a destination for\n replicated objects.
\nIf you're specifying a customer managed KMS key, we recommend using a fully qualified\n KMS key ARN. If you use a KMS key alias instead, then KMS resolves the key within the\n requester’s account. This behavior can result in data that's encrypted with a KMS key\n that belongs to the requester, and not the bucket owner.
\nYou can use this operation to determine if a bucket exists and if you have permission to access it. The action returns a 200 OK if the bucket exists and you have permission\n to access it.
If the bucket does not exist or you do not have permission to access it, the\n HEAD request returns a generic 400 Bad Request, 403\n Forbidden, or 404 Not Found code. A message body is not included, so\n you cannot determine the exception beyond these HTTP response codes.
\n Directory buckets - You must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
All HeadBucket requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz- prefix, including\n x-amz-copy-source, must be signed. For more information, see REST Authentication.
\n Directory bucket - You must use IAM credentials to authenticate and authorize your access to the HeadBucket API operation, instead of using the \n temporary security credentials through the CreateSession API operation.
The Amazon Web Services CLI and SDKs handle authentication and authorization on your behalf.
\n\n General purpose bucket permissions - To use this operation, you must have permissions to perform the\n s3:ListBucket action. The bucket owner has this permission by default and\n can grant this permission to others. For more information about permissions, see Managing\n access permissions to your Amazon S3 resources in the Amazon S3 User Guide.
\n Directory bucket permissions -\n You must have the \n s3express:CreateSession\n permission in the\n Action element of a policy. By default, the session is in the ReadWrite mode. If you want to restrict access, you can explicitly set the s3express:SessionMode condition key to ReadOnly on the bucket.
For more information about example bucket policies, see Example bucket policies for S3 Express One Zone and Amazon Web Services Identity and Access Management (IAM) identity-based policies for S3 Express One Zone in the Amazon S3 User Guide.
\n\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com.
You can use this operation to determine if a bucket exists and if you have permission to access it. The action returns a 200 OK if the bucket exists and you have permission\n to access it.
If the bucket does not exist or you do not have permission to access it, the\n HEAD request returns a generic 400 Bad Request, 403\n Forbidden, or 404 Not Found code. A message body is not included, so\n you cannot determine the exception beyond these HTTP response codes.
\n General purpose buckets - Requests to public buckets that grant the s3:ListBucket permission publicly do not need to be signed. All other HeadBucket requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz- prefix, including\n x-amz-copy-source, must be signed. For more information, see REST Authentication.
\n Directory buckets - You must use IAM credentials to authenticate and authorize your access to the HeadBucket API operation, instead of using the \n temporary security credentials through the CreateSession API operation.
The Amazon Web Services CLI and SDKs handle authentication and authorization on your behalf.
\n\n General purpose bucket permissions - To use this operation, you must have permissions to perform the\n s3:ListBucket action. The bucket owner has this permission by default and\n can grant this permission to others. For more information about permissions, see Managing\n access permissions to your Amazon S3 resources in the Amazon S3 User Guide.
\n Directory bucket permissions -\n You must have the \n s3express:CreateSession\n permission in the\n Action element of a policy. By default, the session is in the ReadWrite mode. If you want to restrict access, you can explicitly set the s3express:SessionMode condition key to ReadOnly on the bucket.
For more information about example bucket policies, see Example bucket policies for S3 Express One Zone and Amazon Web Services Identity and Access Management (IAM) identity-based policies for S3 Express One Zone in the Amazon S3 User Guide.
\n\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com.
You must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com. Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
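Because HeadBucket returns no message body, a caller can only branch on the HTTP status code. A minimal sketch with the SDK for JavaScript v3 (the bucket name is a placeholder):

```ts
import { S3Client, HeadBucketCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

try {
  await client.send(new HeadBucketCommand({ Bucket: "amzn-s3-demo-bucket" })); // placeholder
  console.log("Bucket exists and you have permission to access it.");
} catch (err: any) {
  // No body is returned, so the status code is all there is to inspect.
  const status = err.$metadata?.httpStatusCode;
  if (status === 404) console.log("Bucket not found.");
  else if (status === 403) console.log("Access denied.");
  else throw err;
}
```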
The Region where the bucket is located.
\nThis functionality is not supported for directory buckets.
\nThe Region where the bucket is located.
", "smithy.api#httpHeader": "x-amz-bucket-region" } }, "AccessPointAlias": { "target": "com.amazonaws.s3#AccessPointAlias", "traits": { - "smithy.api#documentation": "Indicates whether the bucket name used in the request is an access point alias.
\nThis functionality is not supported for directory buckets.
\nIndicates whether the bucket name used in the request is an access point alias.
\nFor directory buckets, the value of this field is false.
The HEAD operation retrieves metadata from an object without returning the\n object itself. This operation is useful if you're interested only in an object's metadata.
A HEAD request has the same options as a GET operation on an\n object. The response is identical to the GET response except that there is no\n response body. Because of this, if the HEAD request generates an error, it\n returns a generic code, such as 400 Bad Request, 403 Forbidden, 404 Not\n Found, 405 Method Not Allowed, 412 Precondition Failed, or 304 Not Modified. \n It's not possible to retrieve the exact exception that caused one of these error codes.
Request headers are limited to 8 KB in size. For more information, see Common\n Request Headers.
\n\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
\n General purpose bucket permissions - To\n use HEAD, you must have the s3:GetObject permission. You need the relevant read object (or version) permission for this operation.\n For more information, see Actions, resources, and condition\n keys for Amazon S3 in the Amazon S3\n User Guide.
If the object you request doesn't exist, the error that\n Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
If you have the s3:ListBucket permission on the bucket, Amazon S3\n returns an HTTP status code 404 Not Found error.
If you don’t have the s3:ListBucket permission, Amazon S3 returns\n an HTTP status code 403 Forbidden error.
\n Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the \n CreateSession\n API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. \nThe Amazon Web Services CLI and SDKs create a session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see \n CreateSession\n .
Encryption request headers, like x-amz-server-side-encryption,\n should not be sent for HEAD requests if your object uses server-side\n encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side\n encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side encryption with Amazon S3\n managed encryption keys (SSE-S3). The x-amz-server-side-encryption header is used when you PUT an object to S3 and want to specify the encryption method. \n If you include this header in a HEAD request for an object that uses these types of keys, \n you’ll get an HTTP 400 Bad Request error. This is because the encryption method can't be changed when you retrieve the object.
If you encrypt an object by using server-side encryption with customer-provided\n encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the\n metadata from the object, you must use the following headers to provide the encryption key for the server to be able to retrieve the object's metadata. The headers are:
\n\n x-amz-server-side-encryption-customer-algorithm\n
\n x-amz-server-side-encryption-customer-key\n
\n x-amz-server-side-encryption-customer-key-MD5\n
For more information about SSE-C, see Server-Side Encryption\n (Using Customer-Provided Encryption Keys) in the Amazon S3\n User Guide.
\n\n Directory bucket permissions - For directory buckets, only server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) is supported.
If the current version of the object is a delete marker, Amazon S3 behaves as if the object was deleted and includes x-amz-delete-marker: true in the response.
If the specified version is a delete marker, the response returns a 405 Method Not Allowed error and the Last-Modified: timestamp response header.
\n Directory buckets - Delete marker is not supported by directory buckets.
\n\n Directory buckets - S3 Versioning isn't enabled or supported for directory buckets. For this API operation, only the null value of the version ID is supported by directory buckets. You can only specify null \n as the value of the versionId query parameter in the request.
\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com.
The following actions are related to HeadObject:
\n GetObject\n
\n\n GetObjectAttributes\n
\nThe HEAD operation retrieves metadata from an object without returning the\n object itself. This operation is useful if you're interested only in an object's metadata.
A HEAD request has the same options as a GET operation on an\n object. The response is identical to the GET response except that there is no\n response body. Because of this, if the HEAD request generates an error, it\n returns a generic code, such as 400 Bad Request, 403 Forbidden, 404 Not\n Found, 405 Method Not Allowed, 412 Precondition Failed, or 304 Not Modified. \n It's not possible to retrieve the exact exception that caused one of these error codes.
Request headers are limited to 8 KB in size. For more information, see Common\n Request Headers.
\n\n General purpose bucket permissions - To\n use HEAD, you must have the s3:GetObject permission. You need the relevant read object (or version) permission for this operation.\n For more information, see Actions, resources, and condition\n keys for Amazon S3 in the Amazon S3\n User Guide.
If the object you request doesn't exist, the error that\n Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
If you have the s3:ListBucket permission on the bucket, Amazon S3\n returns an HTTP status code 404 Not Found error.
If you don’t have the s3:ListBucket permission, Amazon S3 returns\n an HTTP status code 403 Forbidden error.
\n Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the \n CreateSession\n API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. \nThe Amazon Web Services CLI and SDKs create a session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see \n CreateSession\n .
Encryption request headers, like x-amz-server-side-encryption,\n should not be sent for HEAD requests if your object uses server-side\n encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side\n encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side encryption with Amazon S3\n managed encryption keys (SSE-S3). The x-amz-server-side-encryption header is used when you PUT an object to S3 and want to specify the encryption method. \n If you include this header in a HEAD request for an object that uses these types of keys, \n you’ll get an HTTP 400 Bad Request error. This is because the encryption method can't be changed when you retrieve the object.
If you encrypt an object by using server-side encryption with customer-provided\n encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the\n metadata from the object, you must use the following headers to provide the encryption key for the server to be able to retrieve the object's metadata. The headers are:
\n\n x-amz-server-side-encryption-customer-algorithm\n
\n x-amz-server-side-encryption-customer-key\n
\n x-amz-server-side-encryption-customer-key-MD5\n
For more information about SSE-C, see Server-Side Encryption\n (Using Customer-Provided Encryption Keys) in the Amazon S3\n User Guide.
\n\n Directory bucket permissions - For directory buckets, only server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) is supported.
If the current version of the object is a delete marker, Amazon S3 behaves as if the object was deleted and includes x-amz-delete-marker: true in the response.
If the specified version is a delete marker, the response returns a 405 Method Not Allowed error and the Last-Modified: timestamp response header.
\n Directory buckets - Delete marker is not supported by directory buckets.
\n\n Directory buckets - S3 Versioning isn't enabled or supported for directory buckets. For this API operation, only the null value of the version ID is supported by directory buckets. You can only specify null \n as the value of the versionId query parameter in the request.
\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com.
For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
The following actions are related to HeadObject:
\n GetObject\n
\n\n GetObjectAttributes\n
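A hedged sketch of a HeadObject call for an SSE-C object with the SDK for JavaScript v3; the SDK maps the three inputs below to the x-amz-server-side-encryption-customer-* headers. The bucket, key, and the randomly generated key bytes are placeholders; in practice you must supply the same 256-bit key that encrypted the object:

```ts
import { createHash, randomBytes } from "node:crypto";
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Placeholder: stands in for the customer-provided key you manage yourself.
const customerKey = randomBytes(32);

const response = await client.send(
  new HeadObjectCommand({
    Bucket: "amzn-s3-demo-bucket", // placeholder bucket
    Key: "sse-c-object-key", // placeholder key name
    SSECustomerAlgorithm: "AES256",
    SSECustomerKey: customerKey.toString("base64"),
    SSECustomerKeyMD5: createHash("md5").update(customerKey).digest("base64"),
  })
);
console.log(response.ContentLength, response.LastModified);
```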
\nThe owner of the buckets listed.
" } + }, + "ContinuationToken": { + "target": "com.amazonaws.s3#NextToken", + "traits": { + "smithy.api#documentation": "\n ContinuationToken is included in the\n response when there are more buckets that can be listed with pagination. The next ListBuckets request to Amazon S3 can be continued with this ContinuationToken. ContinuationToken is obfuscated and is not a real bucket.
The maximum number of buckets to be returned in the response. When the number is more than the count of buckets that are owned by an Amazon Web Services account, all the buckets owned by the account are returned in the response.
", + "smithy.api#httpQuery": "max-buckets" + } + }, + "ContinuationToken": { + "target": "com.amazonaws.s3#Token", + "traits": { + "smithy.api#documentation": "\n ContinuationToken indicates to Amazon S3 that the list is being continued on\n this bucket with a token. ContinuationToken is obfuscated and is not a real\n key. You can use this ContinuationToken for pagination of the list results.
Length Constraints: Minimum length of 0. Maximum length of 1024.
\nRequired: No.
", + "smithy.api#httpQuery": "continuation-token" + } + } + }, + "traits": { + "smithy.api#input": {} + } + }, "com.amazonaws.s3#ListDirectoryBuckets": { "type": "operation", "input": { @@ -26466,7 +26500,7 @@ "ContinuationToken": { "target": "com.amazonaws.s3#DirectoryBucketToken", "traits": { - "smithy.api#documentation": "\n ContinuationToken indicates to Amazon S3 that the list is being continued on\n this bucket with a token. ContinuationToken is obfuscated and is not a real\n key. You can use this ContinuationToken for pagination of the list results.
\n ContinuationToken indicates to Amazon S3 that the list is being continued on buckets in this account with a token. ContinuationToken is obfuscated and is not a real\n bucket name. You can use this ContinuationToken for the pagination of the list results.
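A sketch of paginating ListBuckets with the parameters this change introduces, assuming an SDK for JavaScript v3 release that includes them:

```ts
import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

let continuationToken: string | undefined;
do {
  const page = await client.send(
    new ListBucketsCommand({
      MaxBuckets: 100, // page size; omit to use the service default
      ContinuationToken: continuationToken,
    })
  );
  for (const bucket of page.Buckets ?? []) console.log(bucket.Name);
  // The token is obfuscated and only meaningful to the next request.
  continuationToken = page.ContinuationToken;
} while (continuationToken);
```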
This operation lists in-progress multipart uploads in a bucket. An in-progress multipart upload is a\n multipart upload that has been initiated by the CreateMultipartUpload request, but\n has not yet been completed or aborted.
\n Directory buckets - \n If multipart uploads in a directory bucket are in progress, you can't delete the bucket until all the in-progress multipart uploads are aborted or completed.\n
\nThe ListMultipartUploads operation returns a maximum of 1,000 multipart uploads in the response. The limit of 1,000 multipart\n uploads is also the default\n value. You can further limit the number of uploads in a response by specifying the\n max-uploads request parameter. If there are more than 1,000 multipart uploads that \n satisfy your ListMultipartUploads request, the response returns an IsTruncated element\n with the value of true, a NextKeyMarker element, and a NextUploadIdMarker element. \n To list the remaining multipart uploads, you need to make subsequent ListMultipartUploads requests. \n In these requests, include two query parameters: key-marker and upload-id-marker. \n Set the value of key-marker to the NextKeyMarker value from the previous response. \n Similarly, set the value of upload-id-marker to the NextUploadIdMarker value from the previous response.
\n Directory buckets - The upload-id-marker element and \n the NextUploadIdMarker element aren't supported by directory buckets. \n To list the additional multipart uploads, you only need to set the value of key-marker to the NextKeyMarker value from the previous response.
For more information about multipart uploads, see Uploading Objects Using Multipart\n Upload in the Amazon S3\n User Guide.
\n\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
\n General purpose bucket permissions - For information about permissions required to use the multipart upload API, see Multipart Upload\n and Permissions in the Amazon S3\n User Guide.
\n\n Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the \n CreateSession\n API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. \nThe Amazon Web Services CLI and SDKs create a session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see \n CreateSession\n .
\n General purpose bucket - In the ListMultipartUploads response, the multipart uploads are sorted based on two criteria:
Key-based sorting - Multipart uploads are initially sorted in ascending order based on their object keys.
\nTime-based sorting - For uploads that share the same object key, \n they are further sorted in ascending order based on the upload initiation time. Among uploads with the same key, the one that was initiated first will appear before the ones that were initiated later.
\n\n Directory bucket - In the ListMultipartUploads response, the multipart uploads aren't sorted lexicographically based on the object keys. \n \n
\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com.
The following operations are related to ListMultipartUploads:
\n UploadPart\n
\n\n ListParts\n
\n\n AbortMultipartUpload\n
\nThis operation lists in-progress multipart uploads in a bucket. An in-progress multipart upload is a\n multipart upload that has been initiated by the CreateMultipartUpload request, but\n has not yet been completed or aborted.
\n Directory buckets - \n If multipart uploads in a directory bucket are in progress, you can't delete the bucket until all the in-progress multipart uploads are aborted or completed. \n To delete these in-progress multipart uploads, use the ListMultipartUploads operation to list the in-progress multipart\n uploads in the bucket and use the AbortMultipartUpload operation to abort all the in-progress multipart uploads.\n 
The ListMultipartUploads operation returns a maximum of 1,000 multipart uploads in the response. The limit of 1,000 multipart\n uploads is also the default\n value. You can further limit the number of uploads in a response by specifying the\n max-uploads request parameter. If there are more than 1,000 multipart uploads that \n satisfy your ListMultipartUploads request, the response returns an IsTruncated element\n with the value of true, a NextKeyMarker element, and a NextUploadIdMarker element. \n To list the remaining multipart uploads, you need to make subsequent ListMultipartUploads requests. \n In these requests, include two query parameters: key-marker and upload-id-marker. \n Set the value of key-marker to the NextKeyMarker value from the previous response. \n Similarly, set the value of upload-id-marker to the NextUploadIdMarker value from the previous response.
\n Directory buckets - The upload-id-marker element and \n the NextUploadIdMarker element aren't supported by directory buckets. \n To list the additional multipart uploads, you only need to set the value of key-marker to the NextKeyMarker value from the previous response.
For more information about multipart uploads, see Uploading Objects Using Multipart\n Upload in the Amazon S3\n User Guide.
\n\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
\n General purpose bucket permissions - For information about permissions required to use the multipart upload API, see Multipart Upload\n and Permissions in the Amazon S3\n User Guide.
\n\n Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the \n CreateSession\n API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. \nThe Amazon Web Services CLI and SDKs create a session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see \n CreateSession\n .
\n General purpose bucket - In the ListMultipartUploads response, the multipart uploads are sorted based on two criteria:
Key-based sorting - Multipart uploads are initially sorted in ascending order based on their object keys.
\nTime-based sorting - For uploads that share the same object key, \n they are further sorted in ascending order based on the upload initiation time. Among uploads with the same key, the one that was initiated first will appear before the ones that were initiated later.
\n\n Directory bucket - In the ListMultipartUploads response, the multipart uploads aren't sorted lexicographically based on the object keys. \n \n
\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com.
The following operations are related to ListMultipartUploads:
\n UploadPart\n
\n\n ListParts\n
\n\n AbortMultipartUpload\n
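A sketch of listing and aborting in-progress multipart uploads with the SDK for JavaScript v3 (the bucket name is a placeholder), for example before deleting a directory bucket. For directory buckets, drop UploadIdMarker and set only KeyMarker, as noted above:

```ts
import {
  S3Client,
  ListMultipartUploadsCommand,
  AbortMultipartUploadCommand,
} from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });
const Bucket = "amzn-s3-demo-bucket"; // placeholder bucket

let keyMarker: string | undefined;
let uploadIdMarker: string | undefined;
do {
  const page = await client.send(
    new ListMultipartUploadsCommand({
      Bucket,
      KeyMarker: keyMarker,
      UploadIdMarker: uploadIdMarker,
    })
  );
  for (const upload of page.Uploads ?? []) {
    await client.send(
      new AbortMultipartUploadCommand({
        Bucket,
        Key: upload.Key,
        UploadId: upload.UploadId,
      })
    );
  }
  // Continue from the markers returned in the previous response.
  keyMarker = page.IsTruncated ? page.NextKeyMarker : undefined;
  uploadIdMarker = page.IsTruncated ? page.NextUploadIdMarker : undefined;
} while (keyMarker || uploadIdMarker);
```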
\nEncoding type used by Amazon S3 to encode object keys in the response. If using\n url, non-ASCII characters used in an object's key name will be URL encoded.\n For example, the object test_file(3).png will appear as\n test_file%283%29.png.
Encoding type used by Amazon S3 to encode the object keys in the response.\n Responses are encoded only in UTF-8. An object key can contain any Unicode character.\n However, the XML 1.0 parser can't parse certain characters, such as characters with an\n ASCII value from 0 to 10. For characters that aren't supported in XML 1.0, you can add this\n parameter to request that Amazon S3 encode the keys in the response. For more information about\n characters to avoid in object key names, see Object key naming\n guidelines.
\nWhen using the URL encoding type, non-ASCII characters that are used in an object's\n key name will be percent-encoded according to UTF-8 code values. For example, the object\n test_file(3).png will appear as\n test_file%283%29.png.
Returns some or all (up to 1,000) of the objects in a bucket with each request. You can\n use the request parameters as selection criteria to return a subset of the objects in a\n bucket. A 200 OK response can contain valid or invalid XML. Make sure to\n design your application to parse the contents of the response and handle it appropriately.\n \n For more information about listing objects, see Listing object keys\n programmatically in the Amazon S3 User Guide. To get a list of your buckets, see ListBuckets.
\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
\n General purpose bucket permissions - To use this operation, you must have READ access to the bucket. You must have permission to perform\n the s3:ListBucket action. The bucket owner has this permission by default and\n can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to Your Amazon S3 Resources in the\n Amazon S3 User Guide.
\n Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the \n CreateSession\n API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. \nThe Amazon Web Services CLI and SDKs create a session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see \n CreateSession\n .
\n General purpose bucket - For general purpose buckets, ListObjectsV2 returns objects in lexicographical order based on their key names.
\n Directory bucket - For directory buckets, ListObjectsV2 does not return objects in lexicographical order.
\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com.
This section describes the latest revision of this action. We recommend that you use\n this revised API operation for application development. For backward compatibility, Amazon S3\n continues to support the prior version of this API operation, ListObjects.
\nThe following operations are related to ListObjectsV2:
\n GetObject\n
\n\n PutObject\n
\n\n CreateBucket\n
\nReturns some or all (up to 1,000) of the objects in a bucket with each request. You can\n use the request parameters as selection criteria to return a subset of the objects in a\n bucket. A 200 OK response can contain valid or invalid XML. Make sure to\n design your application to parse the contents of the response and handle it appropriately.\n \n For more information about listing objects, see Listing object keys\n programmatically in the Amazon S3 User Guide. To get a list of your buckets, see ListBuckets.
\n General purpose bucket - For general purpose buckets, ListObjectsV2 doesn't return prefixes that are related only to in-progress multipart uploads.
\n Directory buckets - \n For directory buckets, ListObjectsV2 response includes the prefixes that are related only to in-progress multipart uploads.\n
\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
\n General purpose bucket permissions - To use this operation, you must have READ access to the bucket. You must have permission to perform\n the s3:ListBucket action. The bucket owner has this permission by default and\n can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to Your Amazon S3 Resources in the\n Amazon S3 User Guide.
\n Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the \n CreateSession\n API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. \nThe Amazon Web Services CLI and SDKs create a session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see \n CreateSession\n .
\n General purpose bucket - For general purpose buckets, ListObjectsV2 returns objects in lexicographical order based on their key names.
\n Directory bucket - For directory buckets, ListObjectsV2 does not return objects in lexicographical order.
\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com.
This section describes the latest revision of this action. We recommend that you use\n this revised API operation for application development. For backward compatibility, Amazon S3\n continues to support the prior version of this API operation, ListObjects.
\nThe following operations are related to ListObjectsV2:
\n GetObject\n
\n\n PutObject\n
\n\n CreateBucket\n
\nEncoding type used by Amazon S3 to encode object keys in the response. If using\n url, non-ASCII characters used in an object's key name will be URL encoded.\n For example, the object test_file(3).png will appear as\n test_file%283%29.png.
Encoding type used by Amazon S3 to encode the object keys in the response.\n Responses are encoded only in UTF-8. An object key can contain any Unicode character.\n However, the XML 1.0 parser can't parse certain characters, such as characters with an\n ASCII value from 0 to 10. For characters that aren't supported in XML 1.0, you can add this\n parameter to request that Amazon S3 encode the keys in the response. For more information about\n characters to avoid in object key names, see Object key naming\n guidelines.
\nWhen using the URL encoding type, non-ASCII characters that are used in an object's\n key name will be percent-encoded according to UTF-8 code values. For example, the object\n test_file(3).png will appear as\n test_file%283%29.png.
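The SDK for JavaScript v3 also ships a built-in paginator for this operation, which reissues requests with continuation-token until IsTruncated is false. A minimal sketch (the bucket name is a placeholder):

```ts
import { S3Client, paginateListObjectsV2 } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Each iteration yields one ListObjectsV2 response page.
for await (const page of paginateListObjectsV2(
  { client },
  { Bucket: "amzn-s3-demo-bucket" } // placeholder bucket
)) {
  for (const object of page.Contents ?? []) console.log(object.Key);
}
```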
Specifies the partition date source for the partitioned prefix. PartitionDateSource can be EventTime or DeliveryTime.
" + "smithy.api#documentation": "Specifies the partition date source for the partitioned prefix.\n PartitionDateSource can be EventTime or\n DeliveryTime.
For DeliveryTime, the time in the log file names corresponds to the\n delivery time for the log files.
For EventTime, the logs delivered are for a specific day only. The year,\n month, and day correspond to the day on which the event occurred, and the hour, minutes,\n and seconds are set to 00 in the key.
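A hedged sketch of configuring the partitioned prefix via PutBucketLogging with the SDK for JavaScript v3, assuming a release that includes TargetObjectKeyFormat (bucket names are placeholders):

```ts
import { S3Client, PutBucketLoggingCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

await client.send(
  new PutBucketLoggingCommand({
    Bucket: "amzn-s3-demo-source-bucket", // placeholder source bucket
    BucketLoggingStatus: {
      LoggingEnabled: {
        TargetBucket: "amzn-s3-demo-log-bucket", // placeholder log bucket
        TargetPrefix: "logs/",
        TargetObjectKeyFormat: {
          // Partition log keys by when the event occurred, not when logs deliver.
          PartitionedPrefix: { PartitionDateSource: "EventTime" },
        },
      },
    },
  })
);
```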
Specifies whether Amazon S3 should restrict public bucket policies for this bucket. Setting\n this element to TRUE restricts access to this bucket to only Amazon Web Service principals and authorized users within this account if the bucket has\n a public policy.
Enabling this setting doesn't affect previously stored bucket policies, except that\n public and cross-account access within any public bucket policy, including non-public\n delegation to specific accounts, is blocked.
", + "smithy.api#documentation": "Specifies whether Amazon S3 should restrict public bucket policies for this bucket. Setting\n this element to TRUE restricts access to this bucket to only Amazon Web Servicesservice principals and authorized users within this account if the bucket has\n a public policy.
Enabling this setting doesn't affect previously stored bucket policies, except that\n public and cross-account access within any public bucket policy, including non-public\n delegation to specific accounts, is blocked.
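A minimal sketch of setting this element through PutPublicAccessBlock with the SDK for JavaScript v3 (the bucket name is a placeholder):

```ts
import { S3Client, PutPublicAccessBlockCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

await client.send(
  new PutPublicAccessBlockCommand({
    Bucket: "amzn-s3-demo-bucket", // placeholder bucket
    PublicAccessBlockConfiguration: {
      // Restrict a bucket that has a public policy to service principals and
      // authorized users within the bucket owner's account.
      RestrictPublicBuckets: true,
    },
  })
);
```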
", "smithy.api#xmlName": "RestrictPublicBuckets" } } @@ -29450,7 +29493,7 @@ "requestAlgorithmMember": "ChecksumAlgorithm", "requestChecksumRequired": true }, - "smithy.api#documentation": "This operation is not supported by directory buckets.
\nThis action uses the encryption subresource to configure default encryption\n and Amazon S3 Bucket Keys for an existing bucket.
By default, all buckets have a default encryption configuration that uses server-side\n encryption with Amazon S3 managed keys (SSE-S3). You can optionally configure default encryption\n for a bucket by using server-side encryption with Key Management Service (KMS) keys (SSE-KMS) or\n dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS). If you specify default encryption by using\n SSE-KMS, you can also configure Amazon S3 Bucket\n Keys. If you use PutBucketEncryption to set your default bucket encryption to SSE-KMS, you should verify that your KMS key ID is correct. Amazon S3 does not validate the KMS key ID provided in PutBucketEncryption requests.
\nThis action requires Amazon Web Services Signature Version 4. For more information, see \n Authenticating Requests (Amazon Web Services Signature Version 4).
\nTo use this operation, you must have permission to perform the\n s3:PutEncryptionConfiguration action. The bucket owner has this permission\n by default. The bucket owner can grant this permission to others. For more information\n about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to Your Amazon S3 Resources in the\n Amazon S3 User Guide.
The following operations are related to PutBucketEncryption:
\n GetBucketEncryption\n
\nThis operation is not supported by directory buckets.
\nThis action uses the encryption subresource to configure default encryption\n and Amazon S3 Bucket Keys for an existing bucket.
By default, all buckets have a default encryption configuration that uses server-side\n encryption with Amazon S3 managed keys (SSE-S3). You can optionally configure default encryption\n for a bucket by using server-side encryption with Key Management Service (KMS) keys (SSE-KMS) or\n dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS). If you specify default encryption by using\n SSE-KMS, you can also configure Amazon S3 Bucket\n Keys. If you use PutBucketEncryption to set your default bucket encryption to SSE-KMS, you should verify that your KMS key ID is correct. Amazon S3 does not validate the KMS key ID provided in PutBucketEncryption requests.
\nIf you're specifying a customer managed KMS key, we recommend using a fully qualified\n KMS key ARN. If you use a KMS key alias instead, then KMS resolves the key within the\n requester’s account. This behavior can result in data that's encrypted with a KMS key\n that belongs to the requester, and not the bucket owner.
\nAlso, this action requires Amazon Web Services Signature Version 4. For more information, see \n Authenticating Requests (Amazon Web Services Signature Version 4).
\nTo use this operation, you must have permission to perform the\n s3:PutEncryptionConfiguration action. The bucket owner has this permission\n by default. The bucket owner can grant this permission to others. For more information\n about permissions, see Permissions Related to Bucket Subresource Operations and Managing\n Access Permissions to Your Amazon S3 Resources in the\n Amazon S3 User Guide.
The following operations are related to PutBucketEncryption:
\n GetBucketEncryption\n
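A sketch of configuring SSE-KMS default encryption with the SDK for JavaScript v3. The bucket name and key ARN below are placeholders; because Amazon S3 doesn't validate the KMS key ID, verify it before calling, and prefer a fully qualified ARN over an alias:

```ts
import { S3Client, PutBucketEncryptionCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

await client.send(
  new PutBucketEncryptionCommand({
    Bucket: "amzn-s3-demo-bucket", // placeholder bucket
    ServerSideEncryptionConfiguration: {
      Rules: [
        {
          ApplyServerSideEncryptionByDefault: {
            SSEAlgorithm: "aws:kms",
            // Placeholder ARN; a fully qualified ARN resolves in the bucket
            // owner's account, unlike an alias.
            KMSMasterKeyID:
              "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
          },
          BucketKeyEnabled: true, // optional: reduces requests to KMS
        },
      ],
    },
  })
);
```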
\nThis operation is not supported by directory buckets.
\nSets the versioning state of an existing bucket.
\nYou can set the versioning state with one of the following values:
\n\n Enabled—Enables versioning for the objects in the\n bucket. All objects added to the bucket receive a unique version ID.
\n\n Suspended—Disables versioning for the objects in the\n bucket. All objects added to the bucket receive the version ID null.
\nIf the versioning state has never been set on a bucket, it has no versioning state; a\n GetBucketVersioning request does not return a versioning state value.
\nIn order to enable MFA Delete, you must be the bucket owner. If you are the bucket owner\n and want to enable MFA Delete in the bucket versioning configuration, you must include the\n x-amz-mfa request header and the Status and the\n MfaDelete request elements in a request to set the versioning state of the\n bucket.
If you have an object expiration lifecycle configuration in your non-versioned bucket\n and you want to maintain the same permanent delete behavior when you enable versioning,\n you must add a noncurrent expiration policy. The noncurrent expiration lifecycle\n configuration will manage the deletes of the noncurrent object versions in the\n version-enabled bucket. (A version-enabled bucket maintains one current and zero or more\n noncurrent object versions.) For more information, see Lifecycle and Versioning.
\nThe following operations are related to PutBucketVersioning:
\n CreateBucket\n
\n\n DeleteBucket\n
\n\n GetBucketVersioning\n
\nThis operation is not supported by directory buckets.
\nWhen you enable versioning on a bucket for the first time, it might take a short\n amount of time for the change to be fully propagated. We recommend that you wait for 15\n minutes after enabling versioning before issuing write operations\n (PUT\n or\n DELETE)\n on objects in the bucket.
Sets the versioning state of an existing bucket.
\nYou can set the versioning state with one of the following values:
\n\n Enabled—Enables versioning for the objects in the\n bucket. All objects added to the bucket receive a unique version ID.
\n\n Suspended—Disables versioning for the objects in the\n bucket. All objects added to the bucket receive the version ID null.
\nIf the versioning state has never been set on a bucket, it has no versioning state; a\n GetBucketVersioning request does not return a versioning state value.
\nIn order to enable MFA Delete, you must be the bucket owner. If you are the bucket owner\n and want to enable MFA Delete in the bucket versioning configuration, you must include the\n x-amz-mfa request header and the Status and the\n MfaDelete request elements in a request to set the versioning state of the\n bucket.
If you have an object expiration lifecycle configuration in your non-versioned bucket\n and you want to maintain the same permanent delete behavior when you enable versioning,\n you must add a noncurrent expiration policy. The noncurrent expiration lifecycle\n configuration will manage the deletes of the noncurrent object versions in the\n version-enabled bucket. (A version-enabled bucket maintains one current and zero or more\n noncurrent object versions.) For more information, see Lifecycle and Versioning.
\nThe following operations are related to PutBucketVersioning:
\n CreateBucket\n
\n\n DeleteBucket\n
\n\n GetBucketVersioning\n
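A minimal sketch of enabling versioning with the SDK for JavaScript v3 (the bucket name is a placeholder); per the note above, allow time for the change to propagate before issuing writes:

```ts
import { S3Client, PutBucketVersioningCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

await client.send(
  new PutBucketVersioningCommand({
    Bucket: "amzn-s3-demo-bucket", // placeholder bucket
    VersioningConfiguration: { Status: "Enabled" },
  })
);
// Wait (the guidance above suggests about 15 minutes) before PUT or DELETE calls.
```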
\nThe byte array of partial, one or more result records.
", + "smithy.api#documentation": "The byte array of partial, one or more result records. S3 Select doesn't guarantee that\n a record will be self-contained in one record frame. To ensure continuous streaming of\n data, S3 Select might split the same record across multiple record frames instead of\n aggregating the results in memory. Some S3 clients (for example, the SDK for Java) handle this behavior by creating a ByteStream out of the response by\n default. Other clients might not handle this behavior by default. In those cases, you must\n aggregate the results on the client side and parse the response.
Describes the default server-side encryption to apply to new objects in the bucket. If a\n PUT Object request doesn't specify any server-side encryption, this default encryption will\n be applied. If you don't specify a customer managed key at configuration, Amazon S3 automatically creates\n an Amazon Web Services KMS key in your Amazon Web Services account the first time that you add an object encrypted\n with SSE-KMS to a bucket. By default, Amazon S3 uses this KMS key for SSE-KMS. For more\n information, see PUT Bucket encryption in\n the Amazon S3 API Reference.
" + "smithy.api#documentation": "Describes the default server-side encryption to apply to new objects in the bucket. If a\n PUT Object request doesn't specify any server-side encryption, this default encryption will\n be applied. If you don't specify a customer managed key at configuration, Amazon S3 automatically creates\n an Amazon Web Services KMS key in your Amazon Web Services account the first time that you add an object encrypted\n with SSE-KMS to a bucket. By default, Amazon S3 uses this KMS key for SSE-KMS. For more\n information, see PUT Bucket encryption in\n the Amazon S3 API Reference.
\nIf you're specifying a customer managed KMS key, we recommend using a fully qualified\n KMS key ARN. If you use a KMS key alias instead, then KMS resolves the key within the\n requester’s account. This behavior can result in data that's encrypted with a KMS key\n that belongs to the requester, and not the bucket owner.
\nSpecifies the default server-side encryption configuration.
" + "smithy.api#documentation": "Specifies the default server-side encryption configuration.
\nIf you're specifying a customer managed KMS key, we recommend using a fully qualified\n KMS key ARN. If you use a KMS key alias instead, then KMS resolves the key within the\n requester’s account. This behavior can result in data that's encrypted with a KMS key\n that belongs to the requester, and not the bucket owner.
\n