<?xml version="1.0" encoding="utf-8" ?><rss version="2.0" xmlns:tt="http://teletype.in/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>Stan Yudin</title><generator>teletype.in</generator><description><![CDATA[The last refuge of the insomniac is a sense of superiority to the sleeping world.]]></description><image><url>https://img1.teletype.in/files/4a/fe/4afec4bc-a621-4b59-a4be-cb76c9e7169e.png</url><title>Stan Yudin</title><link>https://blog.endlessinsomnia.com/</link></image><link>https://blog.endlessinsomnia.com/?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=stan1y</link><atom:link rel="self" type="application/rss+xml" href="https://teletype.in/rss/stan1y?offset=0"></atom:link><atom:link rel="next" type="application/rss+xml" href="https://teletype.in/rss/stan1y?offset=10"></atom:link><atom:link rel="search" type="application/opensearchdescription+xml" title="Teletype" href="https://teletype.in/opensearch.xml"></atom:link><pubDate>Wed, 15 Apr 2026 19:46:52 GMT</pubDate><lastBuildDate>Wed, 15 Apr 2026 19:46:52 GMT</lastBuildDate><item><guid isPermaLink="true">https://blog.endlessinsomnia.com/tradio-tera</guid><link>https://blog.endlessinsomnia.com/tradio-tera?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=stan1y</link><comments>https://blog.endlessinsomnia.com/tradio-tera?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=stan1y#comments</comments><dc:creator>stan1y</dc:creator><title>Internet Radio</title><pubDate>Mon, 15 Sep 2025 10:32:49 GMT</pubDate><description><![CDATA[Today I snapped and rewrote tera in Python. The thing is, search didn't work properly for me: well, it worked, but selecting a found station never did. So I wrote my own replacement, with support for lists of saved stations, so I don't have to search all over again.
Named it tradio.]]></description><content:encoded><![CDATA[
  <p id="3CHp">Today I snapped and rewrote <a href="https://tera.codewithshin.com/" target="_blank">tera</a> in Python. The thing is, search didn't work properly for me: well, it worked, but selecting a found station never did. So I wrote my own replacement, with support for lists of saved stations, so I don't have to search all over again. I named it <code>tradio</code>.</p>
  <p id="Tedr">The music itself is played by the excellent <a href="https://mpv.io/" target="_blank">mpv</a>, which has to be installed separately: <code>yay -S mpv</code>.</p>
  <p id="zYK7">It turned out, I think, quite well, and I won't be using tera anymore. For now it can only be installed from the <a href="https://gitflic.ru/project/stan1y/tradio" target="_blank">repository</a>, but an AUR package will follow.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://blog.endlessinsomnia.com/J-K7jejFSDv</guid><link>https://blog.endlessinsomnia.com/J-K7jejFSDv?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=stan1y</link><comments>https://blog.endlessinsomnia.com/J-K7jejFSDv?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=stan1y#comments</comments><dc:creator>stan1y</dc:creator><title>ECS AutoScaling with CloudFormation</title><pubDate>Mon, 18 Mar 2024 08:20:24 GMT</pubDate><media:content medium="image" url="https://img3.teletype.in/files/ac/59/ac591802-bc03-46c8-9361-5ef17018f144.png"></media:content><category>AWS</category><description><![CDATA[<img src="https://img1.teletype.in/files/45/16/45169c68-eacf-4747-a8b0-5e492617fd1a.png"></img>Autoscaling ECS services with combination of target tracking and stepped scaling policies.]]></description><content:encoded><![CDATA[
  <blockquote id="NOXI">Autoscaling ECS services with a combination of target-tracking and stepped scaling policies.</blockquote>
  <p id="aT95">The topic of ECS autoscaling is a vast area of heated discussions and broken dreams. It is quite hard to come up with efficient scaling policies for your ECS services, and the more distributed your architecture, the more issues with cascading load and increasing latency you are going to face. But fear not: the promised salvation in the form of service autoscaling is here to save the day and distribute your computing load evenly across your microservices. So let&#x27;s examine what we have to work with to achieve that.</p>
  <h3 id="PPWx">Scaling services</h3>
  <p id="Bsfz">Autoscaling of ECS services is implemented as an automated action executed upon an event: scale in or scale out. The source of such an event can be an alarm attached to a policy of either the <em>StepScaling</em> or the <em>TargetTrackingScaling</em> type. Target tracking is used much like its DynamoDB counterpart, with the <em>ECSServiceAverageCPUUtilization</em> and <em>ECSServiceAverageMemoryUtilization</em> metrics available for tracking. Notice that ECS can track only <strong>average</strong> metrics of the service, which means you need to make sure the load balancer distributes load evenly across tasks. A significant gap between maximum and average consumption can let a task run out of memory or CPU and be terminated, leading to 502 errors.</p>
  <figure id="f3bA" class="m_column">
    <img src="https://img1.teletype.in/files/45/16/45169c68-eacf-4747-a8b0-5e492617fd1a.png" width="2146" />
    <figcaption>Maximum utilization is way in the cloud</figcaption>
  </figure>
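To make the average-versus-maximum concern concrete, here is a toy calculation (plain Python, illustrative numbers only): target tracking reacts to the fleet average, while a single hot task may already be at its limit.

```python
# Illustrative only: per-task CPU utilization (%) across an ECS service.
# Target tracking reacts to the average, not to the hottest task.
tasks = [35, 40, 38, 95]  # one task is close to its CPU limit

average = sum(tasks) / len(tasks)
peak = max(tasks)

print(f"average={average:.1f}% peak={peak}%")
# With a 50% target, tracking sees 52% and barely scales out,
# while the 95% task may already be throttled or OOM-killed.
```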
  <h3 id="VZtg">New and Old</h3>
  <p id="Kp3X">ECS scaling policies can be combined to produce even greater efficiency in load distribution. Use <em>StepScaling</em> policies to handle scale-out events on ALB or SQS metrics, estimating the load at the input source. ALB Target Group metrics such as <em>AWS/ApplicationELB/RequestCountPerTarget</em> are a good baseline to start policies from. The size of an SQS queue is another example of a deterministic metric for estimating the incoming load on a service.</p>
  <p id="QEdI">Combining <em>StepScaling</em> with <em>TargetTrackingScaling</em> on <em>ECSServiceAverageCPUUtilization</em> or <em>ECSServiceAverageMemoryUtilization</em> allows greater flexibility in how your service reacts to load. If you can determine whether the service in question is mostly CPU or memory bound, then selecting a threshold for one of these average metrics should be easy enough by observing the service under generated test load.</p>
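As a back-of-the-envelope check for the request-based baseline, the sketch below (illustrative, not part of any AWS API) estimates how many targets keep RequestCountPerTarget under a chosen limit:

```python
# Sketch (not the AWS implementation): estimate the desired task count
# from the load that RequestCountPerTarget measures on the ALB.
import math

def desired_tasks(total_requests_per_min: int, per_target_limit: int) -> int:
    """How many targets keep RequestCountPerTarget under the limit."""
    return max(1, math.ceil(total_requests_per_min / per_target_limit))

print(desired_tasks(4500, 1000))  # -> 5
print(desired_tasks(800, 1000))   # -> 1
```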
  <h3 id="Ie0u">CloudFormation support for ECS scaling</h3>
  <p id="iMbi">To define an ECS service with scaling policies in CloudFormation you need a cluster, an instance role for the EC2 hosts, and other essentials that are omitted from this example.</p>
  <p id="yrxu">First we need a service role to perform scaling actions on our behalf.</p>
  <pre id="7B3h" data-lang="yaml">ScalingRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: ScalingRole
    AssumeRolePolicyDocument:
      Version: &quot;2012-10-17&quot;
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - application-autoscaling.amazonaws.com
          Action:
            - sts:AssumeRole
            
ScalingRolePolicy:
  Type: AWS::IAM::Policy
  Properties:
    Roles:
      - !Ref ScalingRole
    PolicyName: ScalingRolePolicy
    PolicyDocument:
      Version: &#x27;2012-10-17&#x27;
      Statement:
        - Effect: Allow
          Resource: &#x27;*&#x27;
          Action:
            - application-autoscaling:*
            - ecs:RunTask
            - ecs:UpdateService
            - ecs:DescribeServices
            - cloudwatch:PutMetricAlarm
            - cloudwatch:DescribeAlarms
            - cloudwatch:GetMetricStatistics
            - cloudwatch:SetAlarmState
            - cloudwatch:DeleteAlarms
</pre>
  <p id="dBA1">Now we&#x27;re going to have a look at a service definition, its target group for the ALB, scaling targets and policies, and a CloudWatch alarm. For this example we define <em>ExampleCPUAutoScalingPolicy</em>, which grows capacity so that the current <em>ECSServiceAverageCPUUtilization</em> stays around 50%, and <em>ExampleRequestsAutoScalingPolicy</em>, which fires when we have more than 1000 requests per target in a minute.</p>
  <pre id="MyE5" data-lang="yaml">ExampleTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    Port: 80
    Protocol: HTTP
    VpcId: !Ref VpcId
    HealthCheckIntervalSeconds: 30
    HealthCheckPath: /status
    HealthCheckTimeoutSeconds: 15
    HealthyThresholdCount: 2
    UnhealthyThresholdCount: 6
    Matcher:
      HttpCode: 200
    TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 30
        
ExampleService:
  Type: AWS::ECS::Service
  Properties:
    TaskDefinition: !Ref ExampleTask # omitted
    PlacementStrategies:
      - Field: attribute:ecs.availability-zone
        Type: spread
    DesiredCount: 1
    Cluster: example-cluster # omitted
    LoadBalancers:
      - TargetGroupArn: !Ref ExampleTargetGroup
        ContainerPort: 8080
        ContainerName: example-service

ExampleAutoScalingTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  Properties:
    MaxCapacity: !Ref MaxServicesCount # parameters
    MinCapacity: !Ref MinServicesCount
    ResourceId:
      Fn::Sub:
        - service/example-cluster/${ServiceName}
        - ServiceName: !GetAtt ExampleService.Name
    RoleARN: !GetAtt ScalingRole.Arn
    ScalableDimension: ecs:service:DesiredCount
    ServiceNamespace: ecs
    
ExampleCPUAutoScalingPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: ExampleCPUAutoScalingPolicy
    PolicyType: TargetTrackingScaling
    ScalingTargetId: !Ref ExampleAutoScalingTarget
    TargetTrackingScalingPolicyConfiguration:
      DisableScaleIn: True
      TargetValue: 50
      ScaleInCooldown: 60
      ScaleOutCooldown: 60
      PredefinedMetricSpecification:
        PredefinedMetricType: ECSServiceAverageCPUUtilization
        
ExampleRequestsAutoScalingPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: ExampleRequestsAutoScalingPolicy
    PolicyType: StepScaling
    ScalingTargetId: !Ref ExampleAutoScalingTarget
    StepScalingPolicyConfiguration:
      AdjustmentType: ChangeInCapacity
      Cooldown: 60
      MetricAggregationType: Average
      StepAdjustments:
        - MetricIntervalLowerBound: 0
          ScalingAdjustment: 1
        - MetricIntervalUpperBound: 0
          ScalingAdjustment: -1

ExampleRequestsAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    MetricName: RequestCountPerTarget
    Namespace: AWS/ApplicationELB
    Statistic: Sum
    Period: 60
    EvaluationPeriods: 1
    Threshold: 1000
    ComparisonOperator: GreaterThanOrEqualToThreshold
    AlarmActions:
      - !Ref ExampleRequestsAutoScalingPolicy
    OKActions:
      - !Ref ExampleRequestsAutoScalingPolicy
    Dimensions:
      - Name: TargetGroup
        Value: !GetAtt ExampleTargetGroup.TargetGroupFullName</pre>
  <p id="XV8W">Notice that the properties of the <em>ExampleCPUAutoScalingPolicy</em> resource contain <em>DisableScaleIn: true</em> for a specific reason. In order to guarantee that requests-per-target scaling events take priority over target tracking, the scale-in logic of the tracking policy is disabled completely.</p>
  <h3 id="X94H">Stability is the key</h3>
  <p id="1Ynw">Ok, so now we have the service scaling up and down based on the number of requests per target on the ALB. However, you would notice that the threshold in <em>StepAdjustments</em> for scaling out starts right where scaling in ends. It means that your service&#x27;s desired count would oscillate around some value, going up and down as new tasks are spun up. To allow for a window of stability, you need a range with <em>ScalingAdjustment: 0</em> between the boundaries where you increase &amp; decrease the desired count. That way it is possible to make the alarm alert on the <strong>scale in</strong> boundary, and let <em>StepAdjustments</em> interpret the range. Let&#x27;s see an example, where we want to scale out on more than <em>RequestsScaleOutThreshold</em> requests per target, and scale in on less than <em>RequestsScaleInThreshold</em>:</p>
  <pre id="RTfS" data-lang="yaml">ExampleRequestsAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    MetricName: RequestCountPerTarget
    Namespace: AWS/ApplicationELB
    Statistic: Sum
    Period: 60
    EvaluationPeriods: 1
    Threshold: 500 # scale in boundary to trigger the alarm
    ComparisonOperator: GreaterThanOrEqualToThreshold
    AlarmActions:
      - !Ref ExampleRequestsAutoScalingPolicy
    Dimensions:
      - Name: TargetGroup
        Value: !GetAtt ExampleTargetGroup.TargetGroupFullName

ExampleRequestsAutoScalingPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: ExampleRequestsAutoScalingPolicy
    PolicyType: StepScaling
    ScalingTargetId: !Ref ExampleAutoScalingTarget
    StepScalingPolicyConfiguration:
      AdjustmentType: ChangeInCapacity
      Cooldown: 60
      MetricAggregationType: Average
      StepAdjustments:
        - MetricIntervalLowerBound: !Ref RequestsScaleOutThreshold
          ScalingAdjustment: 1
        - MetricIntervalLowerBound: !Ref RequestsScaleInThreshold
          MetricIntervalUpperBound: !Ref RequestsScaleOutThreshold
          ScalingAdjustment: 0
        - MetricIntervalUpperBound: !Ref RequestsScaleInThreshold
          ScalingAdjustment: -1</pre>
  <p id="NiTo">Here we have a range between <em>MetricIntervalLowerBound=RequestsScaleInThreshold</em> and <em>MetricIntervalUpperBound=RequestsScaleOutThreshold</em> where <em>ScalingAdjustment=0</em> and no changes are made to the desired count. This ensures the desired count does not oscillate.</p>
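The three-step band can be modeled in a few lines of Python. One caveat worth remembering: in Application Auto Scaling the interval bounds are offsets added to the alarm threshold, so the absolute boundaries used below (500 and 1000, hypothetical values) are for illustration only.

```python
# Illustrative model of the three StepAdjustments. Note that in
# Application Auto Scaling the interval bounds are offsets relative to
# the alarm threshold; here the steps are written as absolute
# boundaries for clarity. Hypothetical values: scale in below 500
# requests per target, scale out above 1000.
SCALE_IN, SCALE_OUT = 500, 1000

def adjustment(requests_per_target: float) -> int:
    """Return the change in desired count for a given metric value."""
    if requests_per_target >= SCALE_OUT:
        return 1    # above the scale-out boundary: add a task
    if requests_per_target < SCALE_IN:
        return -1   # below the scale-in boundary: remove a task
    return 0        # stability band: leave the desired count alone

print([adjustment(v) for v in (300, 700, 1500)])  # -> [-1, 0, 1]
```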
  <h3 id="BZhV">Further reading</h3>
  <p id="mIeY">Another approach would be to define two alarms, one to scale out and one to scale in, each with a specific range and a specific policy attached. This approach is in fact used quite a lot, but the problem is that CloudWatch alarms are not free; in fact they are pretty <a href="https://aws.amazon.com/cloudwatch/pricing/" target="_blank">expensive</a>.</p>
  <p id="Z2In">Additional details can be found in the <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-applicationautoscaling-scalabletarget.html" target="_blank">AWS::ApplicationAutoScaling::ScalableTarget</a> and <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-cw-alarm.html" target="_blank">AWS::CloudWatch::Alarm</a> documentation.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://blog.endlessinsomnia.com/l_ubT6UIj2y</guid><link>https://blog.endlessinsomnia.com/l_ubT6UIj2y?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=stan1y</link><comments>https://blog.endlessinsomnia.com/l_ubT6UIj2y?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=stan1y#comments</comments><dc:creator>stan1y</dc:creator><title>AWS SSM Parameter Store secrets management for Docker containers</title><pubDate>Mon, 18 Mar 2024 07:11:00 GMT</pubDate><category>AWS</category><description><![CDATA[<img src="https://img1.teletype.in/files/c3/6d/c36d302a-0591-4f08-9dca-060af33871b1.jpeg"></img>Secure way to provide environment secrets to docker containers from AWS Parameter Store.]]></description><content:encoded><![CDATA[
  <blockquote id="DHLY">Secure way to provide environment secrets to docker containers from AWS Parameter Store.</blockquote>
  <h2 id="8f0E">Storing and using secret information securely with AWS SSM</h2>
  <p id="Kry4">The SSM Parameter Store is a great way to securely store hierarchical configuration data. It is widely used by AWS to provide parameters to its own services. Parameter names are guaranteed to be unique and can have arbitrary structure, allowing users to store nested configuration data. In this post I will examine how to use the Parameter Store efficiently to provide settings and secrets to Docker containers executed on ECS.</p>
  <p id="hb3n">The code, examples and additional information for this post are available on <a href="https://github.com/stan1y/ssm-bootstrap" target="_blank">GitHub</a>, and images instrumented with SSM bootstrap are available from the <a href="https://hub.docker.com/r/stan1y/ssm-bootstrap/" target="_blank">Docker Hub</a>.</p>
  <h3 id="HgM4">AWS SSM Parameters Store Values</h3>
  <p id="VwXH">SSM allows both single and list values for parameters. Optionally, a value can be encrypted with a KMS key. Encryption lets you keep environmental secrets in SSM: service-to-service authentication, Docker registry auth strings, database credentials, X.509 keys and everything else that fits into 4096 characters. Values that do not fit into 4096 characters can be stored in an S3 bucket, with a pre-signed access URL written into the SSM parameter instead.</p>
  <h3 id="NVE5">Using the Parameter Store</h3>
  <p id="pc1M">The main approach to using the SSM Parameter Store is to give your ECS task a role that allows access to the SSM API, and to call it on service start. The service itself is then responsible for communicating with SSM in a safe and scalable way. This presents a natural problem with multiple services, probably written in different languages, each needing its own implementation of the code that reads SSM&#x27;s nested structures. It is better to decouple configuration and secrets retrieval from the application code with a formal contract. Environment variables are the most common way to pass configuration values to application code in Docker containers, and it is convenient to use them for SSM-stored configuration as well.</p>
  <h2 id="eRl8">Docker images and entrypoints</h2>
  <p id="Wabn">An application service packaged as a Docker image can benefit from an intermediate base image that contains the bootstrapping code and decouples secrets management from the application itself. The intermediate image can define an <a href="https://docs.docker.com/engine/reference/builder/#entrypoint" target="_blank">entrypoint</a> used to perform the bootstrapping steps, communicate with AWS services and prepare the container environment before executing the service code. The image itself can be based on a number of possible runtime environments: NodeJS, Ruby or Python. The entrypoint receives the <em>CMD</em> value of the image using it as a base, or the executable argument of <em>docker run</em>, and can invoke it directly after the setup. Below is an example of an intermediate Docker image adding SSM bootstrapping capability to an <a href="https://alpinelinux.org/" target="_blank">alpine</a> based image, such as NodeJS&#x27;s <em>node:alpine</em>.</p>
  <pre id="OPvP" data-lang="dockerfile">ARG BASE
FROM $BASE

# Install python runtime for the bootstrap script
RUN apk update &amp;&amp; \
  apk add python py-pip py-yaml &amp;&amp; \
  pip install awscli &amp;&amp; \
  pip install boto3

# Copy bootstrap scripts to image
COPY src/ssm-bootstrap.py /usr/bin/ssm-bootstrap
COPY src/kickstart.sh /usr/bin/kickstart

RUN chmod +x /usr/bin/ssm-bootstrap /usr/bin/kickstart
# exec form, so the CMD arguments are passed through to kickstart
ENTRYPOINT ["/usr/bin/kickstart"]</pre>
  <p id="oyGL">Here the entrypoint is defined as the <em>/usr/bin/kickstart</em> script. This script executes <em>/usr/bin/ssm-bootstrap</em> to communicate with SSM and save the environment file along with any other files. Let&#x27;s examine the contents of <em>/usr/bin/kickstart</em>:</p>
  <pre id="gtxe" data-lang="bash">#!/bin/sh
# query SSM parameters store for secrets and save 
# files and environment variables

ssm-bootstrap --environ /tmp/app_environ --root /app/
[ -f /tmp/app_environ ] &amp;&amp; . /tmp/app_environ
exec &quot;$@&quot;</pre>
  <p id="1qVi">The kickstart script uses <em>/usr/bin/ssm-bootstrap</em> to create the <em>/tmp/app_environ</em> file, loads it as environment and passes execution to its arguments, so that the executed process inherits an environment populated with data from <em>/tmp/app_environ</em>. Additionally, <em>/usr/bin/ssm-bootstrap</em> writes a number of files under the root path given by the <em>--root</em> argument. Such files can be encryption keys, salts and certificates shared across your environments. Using SSM bootstrapping to provide files to Docker containers avoids volume mounts and specialization of your ECS hosts; otherwise you would have to bake such files into your AMI, or somehow persist them on the host&#x27;s filesystem from external storage, so that they could be volume-mounted into running containers.</p>
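The environment file format itself can be as simple as sourceable export lines. Below is a sketch of what producing /tmp/app_environ could look like; the export format and the name derivation are assumptions for illustration, not necessarily what ssm-bootstrap emits.

```python
# Assumed format: each SSM parameter becomes an `export NAME=value`
# line that the kickstart script can source with `. /tmp/app_environ`.
import shlex

def render_environ(params: dict) -> str:
    lines = []
    for name, value in sorted(params.items()):
        # Hypothetical convention: the last path segment, upper-cased,
        # becomes the environment variable name.
        env_name = name.rsplit("/", 1)[-1].upper()
        lines.append(f"export {env_name}={shlex.quote(value)}")
    return "\n".join(lines) + "\n"

out = render_environ({
    "/example-cluster/environment/somevar": "hello",
    "/example-cluster/example-service/environment/anothervar": "s3cret!",
})
print(out)
```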
  <h2 id="KByc">SSM Parameters Namespaces</h2>
  <p id="zWWS">The actual communication with SSM to retrieve environment variables is done by the <em>/usr/bin/ssm-bootstrap</em> tool. It is where the logic to build parameter paths is implemented, shared across all services based on the intermediate Docker image. The interface between infrastructure code and application code, defined as environment variables and (smallish) files, is implemented once and reused everywhere else. The structure of the parameter names can be arbitrary, but in general it is useful to have at least two nested levels:</p>
  <ul id="JDg6">
    <li id="tQDr">ECS Cluster - parameters shared by every service on a specific ECS cluster. The cluster scope corresponds to an instance of a portable environment, usually CloudFormation stack(s).</li>
    <li id="NZpJ">ECS Service/Container name - parameters for a specific service on a specific ECS cluster.</li>
  </ul>
  <h3 id="9FcA">Integration with the ECS</h3>
  <p id="IVGN">SSM bootstrap uses the ECS metadata file to determine the cluster and container name it is executed for. The metadata file must be enabled in <em>ecs.config</em> on the cluster instances; when ECS runs a Docker container, it makes the file available inside the container, containing the cluster name and container name used by ECS.</p>
  <p id="9GvU">Below is an example of an ECS Task Definition in CloudFormation; let&#x27;s examine what configuration paths are available to such a service:</p>
  <pre id="uEBU" data-lang="yaml">ExampleTask:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ContainerDefinitions:
      - Name: example-service
        Image: ...
        Essential: true
        PortMappings:
          - ContainerPort: 8080
        Environment:
          - Name: AWS_DEFAULT_REGION
            Value: !Ref AWS::Region
    ...

ExampleCluster:
  Type: AWS::ECS::Cluster
  Properties:
    ClusterName: example-cluster

ExampleService:
  Type: AWS::ECS::Service
  Properties:
    TaskDefinition: !Ref ExampleTask
    Cluster: !Ref ExampleCluster
    ...</pre>
  <p id="cTYn">Notice that parameter names are built using the <em>Cluster</em> and <em>Container Name</em>. This information about the running container is read from the <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-metadata.html" target="_blank">ECS container metadata file</a>, so this needs to be enabled in your ECS agent configuration. The source code of the <em>/usr/bin/ssm-bootstrap</em> utility can be found on <a href="https://github.com/stan1y/ssm-bootstrap/blob/master/src/ssm-bootstrap.py" target="_blank">GitHub</a>.</p>
  <h3 id="xU3O">Name structure for nested configurations</h3>
  <p id="fOdU">With the configuration above, the service would have the following SSM paths injected into its environment:</p>
  <ul id="B6VX">
    <li id="Wpme">/example-cluster/environment/somevar</li>
    <li id="O0LX">/example-cluster/example-service/environment/anothervar</li>
  </ul>
  <p id="EA1t">Also the following paths would be saved as files:</p>
  <ul id="c7yQ">
    <li id="kYlL">/example-cluster/files/somefile</li>
    <li id="9odZ">/example-cluster/example-service/files/anotherfile</li>
  </ul>
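The two-level lookup illustrated by these paths can be sketched as simple path construction; this is an illustration of the naming scheme, not ssm-bootstrap's exact code:

```python
# Sketch of the naming scheme: cluster-wide paths first, then
# service-specific ones that can override them.
def lookup_paths(cluster: str, service: str, kind: str) -> list:
    return [
        f"/{cluster}/{kind}",            # shared across the cluster
        f"/{cluster}/{service}/{kind}",  # specific to one service
    ]

print(lookup_paths("example-cluster", "example-service", "environment"))
print(lookup_paths("example-cluster", "example-service", "files"))
```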
  <h3 id="LSTn">Task Role and Policy for SSM access</h3>
  <p id="NkBT">In order to access the SSM API, the task still needs a role allowing that. Below is an example of such a role in CloudFormation syntax.</p>
  <pre id="G1Hn" data-lang="yaml">ExampleTaskRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: ExampleTaskRole
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - ecs-tasks.amazonaws.com
          Action:
            - sts:AssumeRole
      
      
ExamplePolicy:
  Type: AWS::IAM::Policy
  Properties:
    Roles:
      - !Ref ExampleTaskRole
    PolicyName: ExamplePolicy
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Action:
            - kms:Decrypt
          Resource: !Sub &quot;arn:aws:kms:${AWS::Region}:${AWS::AccountId}:alias/my-key&quot;
          Effect: Allow
        - Action:
            - ssm:GetParameter
            - ssm:GetParameters
            - ssm:DescribeParameters
          Resource: &quot;arn:aws:ssm:*&quot;
          Effect: Allow</pre>
  <h2 id="i4Z2">Using published images</h2>
  <p id="bBkn">The images with SSM bootstrap middleware layer are published to the <a href="https://hub.docker.com/r/stan1y/ssm-bootstrap/" target="_blank">Docker Hub</a> and available for general use. Simply specify the base image for your runtime in the <em>FROM</em> statement and your service will automatically receive configuration from the SSM Parameters Store. Below is an example of a generic <em>Dockerfile</em> for NodeJS based service:</p>
  <pre id="b15x" data-lang="dockerfile">FROM stan1y/ssm-bootstrap:node-alpine-latest

WORKDIR /app
COPY src/ /app/src/
COPY .npmrc package.json /app/

RUN npm install &amp;&amp; rm -f .npmrc
CMD npm start</pre>

]]></content:encoded></item></channel></rss>