In this post, you will learn how Amazon API Gateway’s new enhanced TLS security policies help you meet standards such as PCI DSS, Open Banking, and FIPS while strengthening how your APIs handle TLS negotiation. This capability strengthens your security posture without adding operational complexity, and provides you with a single, consistent way to standardize TLS configuration across your API Gateway infrastructure.
AWS announces the general availability of Flexible Cost Allocation on AWS Transit Gateway, enhancing how you can distribute Transit Gateway costs across your organization. Previously, Transit Gateway used only a sender-pay model, where the source attachment account owner was responsible for all data usage costs. The new Flexible Cost Allocation (FCA) feature provides more versatile cost allocation options through a central metering policy. Using an FCA metering policy, you can choose to allocate all of your Transit Gateway data processing and data transfer usage to the source attachment account, the destination attachment account, or the central Transit Gateway account. FCA metering policies can be configured at attachment-level or individual flow-level granularity. FCA also supports middle-box deployment models, enabling you to allocate data processing usage on middle-box appliances such as AWS Network Firewall to the original source or destination attachment owners. This flexibility allows you to implement multiple cost allocation models on a single Transit Gateway, accommodating various chargeback scenarios within your AWS network infrastructure. Flexible Cost Allocation is available in all commercial AWS Regions where Transit Gateway is available. You can enable these features using the AWS Management Console, AWS Command Line Interface (CLI), and the AWS Software Development Kit (SDK). There is no additional charge for using FCA on Transit Gateway. For more information, see the Transit Gateway documentation pages.
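The core decision an FCA metering policy expresses is simply which account pays for a given attachment's usage. The sketch below models that choice as a plain request payload; the field names and payer values (`MeteredAccount`, `SOURCE`, `DESTINATION`, `TGW_OWNER`) are hypothetical placeholders, not the real Transit Gateway API schema, so consult the documentation for the actual shape.

```python
# Hypothetical sketch of one FCA metering-policy entry. Field names and
# payer values are illustrative placeholders, NOT the real Transit
# Gateway API; see the TGW documentation for the actual schema.
def build_metering_rule(attachment_id: str, payer: str) -> dict:
    """Allocate an attachment's data processing/transfer usage to one payer.

    payer: "SOURCE" (source attachment account), "DESTINATION"
    (destination attachment account), or "TGW_OWNER" (the central
    Transit Gateway account).
    """
    allowed = {"SOURCE", "DESTINATION", "TGW_OWNER"}
    if payer not in allowed:
        raise ValueError(f"payer must be one of {sorted(allowed)}")
    return {
        "TransitGatewayAttachmentId": attachment_id,
        "MeteredAccount": payer,
    }

# Example: charge a middle-box inspection attachment's usage back to the
# original source account rather than the central network account.
rule = build_metering_rule("tgw-attach-0123456789abcdef0", "SOURCE")
```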
Amazon Athena now gives you control over Data Processing Unit (DPU) usage for queries running on Capacity Reservations. You can now configure DPU settings at the workgroup or query level to balance cost efficiency, concurrency, and query-level performance needs. Capacity Reservations provides dedicated serverless processing capacity for your Athena queries. Capacity is measured in DPUs, and queries consume DPUs based on their complexity. You can now set explicit DPU values for each query, ensuring small queries use only what they need while guaranteeing that critical queries get sufficient resources for fast execution. The Athena console and API now return per-query DPU usage, helping you understand consumption and determine your capacity needs. These updates help you control per-query capacity usage, manage query concurrency, reduce costs by eliminating over-provisioning, and deliver consistent performance for business-critical workloads. Cost and performance controls are available today in AWS Regions where Capacity Reservations is supported. To learn more, see Control capacity usage in the Athena user guide.
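The tradeoff between per-query DPU caps and concurrency is simple arithmetic: a reservation's total DPUs divided by the per-query cap bounds how many capped queries can run at once. The snippet below illustrates only that planning calculation; it is not the Athena API.

```python
def max_concurrent_queries(reserved_dpus: int, per_query_dpus: int) -> int:
    """Upper bound on concurrently running queries when each query is
    capped at per_query_dpus within a Capacity Reservation."""
    if reserved_dpus <= 0 or per_query_dpus <= 0:
        raise ValueError("DPU counts must be positive")
    return reserved_dpus // per_query_dpus

# A 96-DPU reservation with queries capped at 24 DPUs each can run
# at most 4 such queries at the same time.
print(max_concurrent_queries(96, 24))
```

Lowering the per-query cap raises the concurrency ceiling at the cost of per-query speed; raising it does the reverse, which is exactly the balance the workgroup- and query-level settings let you tune.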
AWS Control Tower offers the easiest way to manage and govern your environment with AWS managed controls. Starting today, customers have direct access to these AWS managed controls without requiring a full Control Tower deployment. This new experience offers over 750 managed controls that customers can deploy within minutes while maintaining their existing account structure. AWS Control Tower v4.0 introduces direct access to Control Catalog, allowing customers to review available managed controls and deploy them into their existing AWS Organization. With this release, customers have more flexibility and autonomy over their organizational structure, as Control Tower no longer enforces a mandatory structure. Customers also gain operational improvements, such as cleaner resource and permissions management and better cost attribution, thanks to the separation of S3 buckets and SNS notifications for the AWS Config and AWS CloudTrail integrations. This controls-focused experience is now available in all AWS Regions where AWS Control Tower is supported. For more information about this new capability, see the AWS Control Tower User Guide or contact your AWS account team. For a full list of Regions where AWS Control Tower is available, see the AWS Region Table.
Amazon EC2 Image Builder now allows you to distribute existing Amazon Machine Images (AMIs), retry distributions, and define custom distribution workflows. Distribution workflows are a new workflow type that complements existing build and test workflows, enabling you to define sequential distribution steps such as AMI copy operations, wait-for-action checkpoints, and AMI attribute modifications. With enhanced distribution capabilities, you can now distribute an existing image to multiple Regions and accounts without running a full Image Builder pipeline. Simply specify your AMI and distribution configuration, and Image Builder handles the copying and sharing process. Additionally, with distribution workflows, you can now customize the distribution process by defining custom steps. For example, you can distribute AMIs to a test Region first, add a wait-for-action step to pause for validation, and then continue distribution to production Regions after approval. This provides the same step-level visibility and control you have with build and test workflows. These capabilities are available to all customers at no additional cost, in all AWS Regions including the AWS China (Beijing) Region, operated by Sinnet, the AWS China (Ningxia) Region, operated by NWCD, and the AWS GovCloud (US) Regions. You can get started from the EC2 Image Builder console, CLI, API, CloudFormation, or CDK, and learn more in the EC2 Image Builder documentation.
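The staged test-then-production flow described above can be pictured as an ordered list of steps. The step and action names below are hypothetical illustrations of that sequencing only; the real distribution-workflow schema is defined in the EC2 Image Builder documentation.

```python
# Illustrative model of a staged distribution workflow. Step and action
# names here are hypothetical; see the EC2 Image Builder documentation
# for the real workflow document schema.
workflow = [
    {"name": "CopyToTestRegion", "action": "CopyAmi",
     "regions": ["us-west-2"]},
    # Pause here until a human (or automation) approves the test AMI.
    {"name": "AwaitValidation", "action": "WaitForAction"},
    {"name": "CopyToProdRegions", "action": "CopyAmi",
     "regions": ["us-east-1", "eu-west-1"]},
]

def ordered_actions(steps: list) -> list:
    """Return the sequence of actions the workflow would execute."""
    return [step["action"] for step in steps]
```

The key property is ordering: the production copy step cannot run until the wait-for-action checkpoint before it is resolved.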
Amazon API Gateway REST APIs now support direct private integration with Application Load Balancer (ALB), enabling inter-VPC connectivity to internal ALBs. This enhancement extends API Gateway's existing VPC connectivity, providing you with more flexible and efficient architecture choices for your REST API implementations. This direct ALB integration delivers multiple advantages: reduced latency by eliminating the additional network hop previously required through Network Load Balancer, lower infrastructure costs through simplified architecture, and enhanced Layer 7 capabilities including HTTP/HTTPS health checks, advanced request-based routing, and native container service integration. You can still use API Gateway's integration with Network Load Balancers for Layer 4 connectivity. Amazon API Gateway private integration with ALB is available in all AWS GovCloud (US) Regions and the following AWS commercial Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Malaysia), Asia Pacific (Melbourne), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Canada West (Calgary), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Spain), Europe (Stockholm), Europe (Zurich), Israel (Tel Aviv), Middle East (Bahrain), Middle East (UAE), and South America (São Paulo). For more information, visit the Amazon API Gateway documentation and blog post.
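For REST APIs, private integrations go through a VPC link whose target ARN can now reference an internal ALB. The sketch below only assembles the `CreateVpcLink` request parameters and checks that the ARN is an Application Load Balancer ARN (ALB ARNs contain `loadbalancer/app/`, NLB ARNs `loadbalancer/net/`); actually creating the link requires calling the API Gateway service with credentials, which this example deliberately omits.

```python
def vpc_link_params(name: str, alb_arn: str) -> dict:
    """Build CreateVpcLink parameters targeting an internal ALB.

    Only constructs the request payload; pass the result to the
    API Gateway CreateVpcLink operation (e.g. via an AWS SDK) to
    actually create the link.
    """
    if ":loadbalancer/app/" not in alb_arn:
        raise ValueError("expected an Application Load Balancer ARN")
    return {"name": name, "targetArns": [alb_arn]}

# Hypothetical account ID and load balancer name, for illustration only.
params = vpc_link_params(
    "orders-alb-link",
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/app/internal-orders/0123456789abcdef",
)
```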
Amazon Lex now supports wait & continue functionality in 10 new languages, enabling more natural conversational experiences in Chinese, Japanese, Korean, Cantonese, Spanish, French, Italian, Portuguese, Catalan, and German. This feature allows deterministic voice and chat bots to pause while customers gather additional information, then seamlessly resume when ready. For example, when asked for payment details, customers can say "hold on a second" to retrieve their credit card, and the bot will wait before continuing. This feature is available in all AWS Regions where Amazon Lex operates. To learn more, visit the Amazon Lex documentation or explore the Amazon Connect website to learn how Amazon Connect and Amazon Lex deliver seamless end-customer self-service experiences.
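In the Lex V2 API, this behavior is configured on a slot's value elicitation through a `waitAndContinueSpecification`, which pairs a waiting response with a continue response. The helper below sketches that structure as a plain payload; verify the exact field names against the Lex V2 API reference before relying on them.

```python
def wait_and_continue_spec(waiting_text: str, continue_text: str) -> dict:
    """Sketch of a Lex V2 waitAndContinueSpecification payload for a slot.

    Field names follow the Lex V2 model (waitingResponse/continueResponse
    with plain-text message groups); confirm against the API reference.
    """
    def response(text: str) -> dict:
        return {
            "messageGroups": [
                {"message": {"plainTextMessage": {"value": text}}}
            ],
            "allowInterrupt": True,
        }

    return {
        "active": True,
        # Played when the customer says something like "hold on a second".
        "waitingResponse": response(waiting_text),
        # Played when the customer signals they are ready to resume.
        "continueResponse": response(continue_text),
    }

spec = wait_and_continue_spec(
    "No problem, take your time.",
    "Great, let's continue.",
)
```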
AWS announces the general availability of Cloud WAN Routing Policy, providing customers with fine-grained controls to optimize route management, control traffic patterns, and customize network behavior across their global wide-area networks. AWS Cloud WAN allows you to build, monitor, and manage a unified global network that interconnects your resources in the AWS cloud and your on-premises environments. Using the new Routing Policy feature, customers can apply advanced routing techniques such as route filtering and summarization for better control over the routes exchanged between AWS Cloud WAN and external networks. This feature enables customers to build controlled routing environments that minimize route-reachability blast radius, prevent sub-optimal or asymmetric connectivity patterns, and avoid overrunning route tables with unnecessary propagated routes in global networks. In addition, customers can set advanced Border Gateway Protocol (BGP) attributes to customize network traffic behavior to their individual needs and build highly resilient hybrid-cloud network architectures. The feature also provides advanced visibility into the routing databases, allowing rapid troubleshooting of network issues in complex multi-path environments. The new Routing Policy feature is available in all AWS Regions where AWS Cloud WAN is available. You can enable these features using the AWS Management Console, AWS Command Line Interface (CLI), and the AWS Software Development Kit (SDK). There is no additional charge for enabling Routing Policy on AWS Cloud WAN. For more information, see the AWS Cloud WAN documentation pages and blog.
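Route summarization, one of the techniques mentioned above, replaces many specific prefixes with a covering aggregate so fewer routes need to be advertised and stored. Python's standard `ipaddress` module can demonstrate the general networking concept (this is not the Cloud WAN API):

```python
import ipaddress

def summarize(prefixes):
    """Collapse adjacent/overlapping prefixes into the fewest covering routes."""
    nets = (ipaddress.ip_network(p) for p in prefixes)
    return [str(n) for n in ipaddress.collapse_addresses(nets)]

# Two adjacent /24s can be advertised as a single /23 summary route,
# halving the number of entries a peer's route table must hold.
print(summarize(["10.0.0.0/24", "10.0.1.0/24"]))
```

Summarizing at the boundary between Cloud WAN and external networks is what keeps unnecessary specifics from propagating globally and overrunning route tables.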
Application Load Balancer (ALB) now offers Target Optimizer, a new feature that allows you to enforce a maximum number of concurrent requests on a target. With Target Optimizer, you can fine-tune your application stack so that targets receive only the number of requests they can process, achieving a higher request success rate, higher target utilization, and lower latency. This is particularly useful for compute-intensive workloads. For example, if you have applications that perform complex data processing or inference, you can configure each target to receive as few as one request at a time, ensuring the number of concurrent requests is in line with the target's processing capabilities. You can enable this feature by creating a new target group with a target control port. Once enabled, the feature works with the help of an AWS-provided agent that you run on your targets to track request concurrency. For deployments that include multiple target groups per ALB, you have the flexibility to configure this capability for each target group individually. You can enable Target Optimizer through the AWS Management Console, AWS CLI, AWS SDKs, and AWS APIs. ALB Target Optimizer is available in all AWS Commercial Regions, AWS GovCloud (US) Regions, and AWS China Regions. Traffic to target groups that enable Target Optimizer generates higher LCU usage than regular target groups. For more information, see the pricing page, launch blog, and ALB User Guide.
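Conceptually, the per-target cap behaves like a semaphore sized to the target's capacity: a new request is admitted only if a slot is free. The toy model below illustrates that admission logic; on ALB the actual enforcement is handled by the AWS-provided agent on the target together with the load balancer, not by code you write yourself.

```python
import threading

class ConcurrencyCap:
    """Toy model of a per-target concurrent-request cap."""

    def __init__(self, max_concurrent: int):
        # BoundedSemaphore guards against releasing more slots than exist.
        self._slots = threading.BoundedSemaphore(max_concurrent)

    def try_admit(self) -> bool:
        """Admit a request only if the target has a free slot."""
        return self._slots.acquire(blocking=False)

    def finish(self) -> None:
        """Release a slot when a request completes."""
        self._slots.release()

# A target configured for one request at a time (the minimum mentioned above):
cap = ConcurrencyCap(1)
first = cap.try_admit()    # admitted
second = cap.try_admit()   # rejected: target is already at capacity
cap.finish()               # first request completes
third = cap.try_admit()    # admitted again
```

With the real feature, a rejected request is not dropped at the target; the load balancer sends it to a target that has capacity, which is what yields the higher success rate and utilization.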
Amazon CloudFront now supports three new capabilities for CloudFront Functions: edge location and Regional Edge Cache (REC) metadata, raw query string retrieval, and advanced origin overrides. Developers can now build more sophisticated edge computing logic with greater visibility into CloudFront's infrastructure and precise, granular control over origin connections. CloudFront Functions allows you to run lightweight JavaScript code at CloudFront edge locations to customize content delivery and implement security policies with sub-millisecond execution times. Edge location metadata includes the three-letter airport code of the serving edge location and the expected REC. This enables geo-specific content routing and helps meet compliance requirements, such as directing European users to GDPR-compliant origins based on client location. The raw query string capability provides access to the complete, unprocessed query string as received from the viewer, preserving special characters and encoding that may be altered during standard parsing. Advanced origin overrides solve critical challenges for complex application infrastructures by allowing you to customize SSL/TLS handshake parameters, including Server Name Indication (SNI). For example, multi-tenant setups may override SNI where CloudFront connects through CNAME chains that resolve to servers with different certificate domains. These new CloudFront Functions capabilities are available at no additional charge in all CloudFront edge locations. To learn more about CloudFront Functions, see the Amazon CloudFront Developer Guide.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the Asia Pacific (Sydney), Canada (Central), and US West (N. California) Regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver 20% higher performance than R7i instances, with even higher gains for specific workloads: up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to R7i. R8i-flex, our first memory-optimized Flex instances, are the easiest way to get price-performance benefits for a majority of memory-intensive workloads. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. R8i instances are a great choice for all memory-intensive workloads, especially those that need the largest instance sizes or continuous high CPU usage. R8i instances offer 13 sizes, including 2 bare metal sizes and the new 96xlarge size for the largest applications. R8i instances are SAP-certified and deliver 142,100 aSAPS, the highest among all comparable machines in on-premises and cloud environments, delivering exceptional performance for mission-critical SAP workloads. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information about the new R8i and R8i-flex instances, visit the AWS News blog.
Customers in India can now sign up for AWS using UPI (Unified Payments Interface) AutoPay as their default payment method, with automatic recurring payments set up from the start. UPI is a popular and convenient payment method in India that facilitates instant bank-to-bank transfers between two parties through internet-connected mobile phones. Customers can make payments through a UPI mobile app simply by using a Virtual Payment Address or UPI ID linked to their bank account. Customers now have the flexibility to sign up for AWS using UPI, where previously only card payments were accepted. This addition of UPI, India's most widely used payment method, makes it easier for customers to start their AWS journey using their preferred payment method. Customers can use UPI AutoPay to make automated recurring payments, which avoids the need to visit the console to make manual payments and reduces the risk of missed payments and any non-payment-related actions. Customers can set up automatic payments of up to INR 15,000 using their UPI ID linked to their bank account. To enable this, customers can log in to the AWS console and add UPI AutoPay from the payment page. Customers will be required to provide their UPI ID, verify it, and confirm their billing address. Once completed, customers will receive a request in the UPI mobile app (such as Amazon Pay) associated with their UPI ID to verify and authorize automated payments. After verification, future bills up to INR 15,000 will be automatically charged starting from the next billing cycle. To learn more, see Managing Payment Methods in India.
In this post, we explore how to optimize processing array data embedded within complex JSON structures using AWS Step Functions Distributed Map. You’ll learn how to use ItemsPointer to reduce the complexity of your state machine definitions, create more flexible workflow designs, and streamline your data processing pipelines—all without writing additional transformation code or AWS Lambda functions.
You can use AWS Step Functions to orchestrate solutions to complex business problems. However, as workflows grow and evolve, you can find yourself grappling with monolithic state machines that become increasingly difficult to maintain and update. In this post, we show you strategies for decomposing large Step Functions workflows into modular, maintainable components.
In this post, we show you how to implement comprehensive monitoring for Amazon Elastic Kubernetes Service (Amazon EKS) workloads using AWS managed services. This solution demonstrates building an EKS platform that combines flexible compute options with enterprise-grade observability using AWS native services and OpenTelemetry.