AWS AI News Hub

Your central source for the latest AWS artificial intelligence and machine learning service announcements, features, and updates


Today, AWS announced the availability of DeepSeek OCR, MiniMax M2.1, and Qwen3-VL-8B-Instruct in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. These three models bring specialized capabilities spanning document intelligence, multilingual coding, advanced multimodal reasoning, and vision-language understanding, enabling customers to build sophisticated AI applications across diverse use cases on AWS infrastructure. Each model addresses a different enterprise AI challenge: DeepSeek OCR explores visual-text compression for document processing. It can extract structured information from forms, invoices, diagrams, and complex documents with dense text layouts. MiniMax M2.1 is optimized for coding, tool use, instruction following, and long-horizon planning. It automates multilingual software development and executes complex, multi-step office workflows, empowering developers to build autonomous applications. Qwen3-VL-8B-Instruct delivers superior text understanding and generation, deeper visual perception and reasoning, extended context length, enhanced spatial and video dynamics comprehension, and stronger agent interaction capabilities. With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases. To get started with these models, navigate to the SageMaker JumpStart model catalog in the SageMaker console or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.
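The SageMaker Python SDK route mentioned above can be sketched as follows. This is a minimal sketch: the model identifier and instance type are hypothetical placeholders, so confirm the real values in the JumpStart model catalog before deploying.

```python
# Sketch of deploying a JumpStart model with the SageMaker Python SDK.
# MODEL_ID and INSTANCE_TYPE are hypothetical placeholders; look up the
# exact identifiers in the SageMaker JumpStart model catalog first.
MODEL_ID = "qwen3-vl-8b-instruct"  # hypothetical model identifier
INSTANCE_TYPE = "ml.g5.12xlarge"   # hypothetical instance type

def deploy_jumpstart_model(model_id: str = MODEL_ID,
                           instance_type: str = INSTANCE_TYPE):
    """Deploy a JumpStart model and return a Predictor (needs AWS credentials)."""
    from sagemaker.jumpstart.model import JumpStartModel  # pip install sagemaker
    model = JumpStartModel(model_id=model_id)
    return model.deploy(instance_type=instance_type)

# Usage (runs in your AWS account):
#   predictor = deploy_jumpstart_model()
#   predictor.predict({"inputs": "Extract the line items from this invoice."})
```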

#sagemaker#jumpstart#lex#ga#now-available

Amazon Lightsail now offers memory-optimized instance bundles with up to 512 GB memory. The new instance bundles are available in 7 sizes, with Linux and Windows operating system (OS) and application blueprints, for both IPv6-only and dual-stack networking types. You can create instances using the new bundles with pre-configured OS and application blueprints including WordPress, cPanel & WHM, Plesk, Drupal, Magento, MEAN, LAMP, Node.js, Ruby on Rails, Amazon Linux, Ubuntu, CentOS, Debian, AlmaLinux, and Windows. The new memory-optimized instance bundles enable you to run memory-intensive workloads that require high RAM-to-vCPU ratios in Lightsail. These high-memory instance bundles are ideal for workloads such as in-memory databases, real-time big data analytics, in-memory caching systems, high-performance computing (HPC) applications, and large-scale enterprise applications that process extensive datasets in memory. These new bundles are now available in all AWS Regions where Amazon Lightsail is available. For more information, see the Amazon Lightsail pricing page.

#now-available

AWS Security Token Service (STS) now supports validation of select identity-provider-specific claims from Google, GitHub, CircleCI and Oracle Cloud Infrastructure in IAM role trust policies and resource control policies for OpenID Connect (OIDC) federation into AWS via the AssumeRoleWithWebIdentity API. With this new capability, you can reference these custom claims as condition keys in IAM role trust policies and resource control policies, expanding your ability to implement fine-grained access control for federated identities and helping you establish your data perimeters. This enhancement builds upon IAM's existing OIDC federation capabilities, which allow you to grant temporary AWS credentials to users authenticated through external OIDC-compatible identity providers.
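As an illustration, the following sketch builds a role trust policy that pins GitHub OIDC federation to a single repository owner. The provider-specific claim key ("repository_owner") and the account ID are assumptions; check the IAM documentation for the exact claim keys each provider supports.

```python
# Sketch: an IAM role trust policy using a provider-specific OIDC claim
# as a condition key. The claim key name is an assumed example, not a
# confirmed key; verify against the IAM documentation.
import json

def github_oidc_trust_policy(account_id: str, owner: str) -> dict:
    provider = "token.actions.githubusercontent.com"
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/{provider}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    f"{provider}:aud": "sts.amazonaws.com",
                    # Provider-specific claim (assumed key name):
                    f"{provider}:repository_owner": owner,
                },
            },
        }],
    }

policy = github_oidc_trust_policy("111122223333", "example-org")
print(json.dumps(policy, indent=2))
```

You would attach this document as the role's trust policy; the federated workflow can then assume the role only when the token's claim matches.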

#iam#enhancement#support#new-capability

Amazon CloudFront announces support for mutual TLS authentication (mTLS) for origins, a security protocol that enables customers to verify that requests to their origin servers come only from their authorized CloudFront distributions using TLS certificates. This certificate-based authentication provides cryptographic verification of CloudFront's identity, eliminating the need for customers to manage custom security controls. Previously, verifying that requests came from CloudFront distributions required customers to build and maintain custom authentication solutions like shared secret headers or IP allow-lists, particularly for public or externally hosted origins. These approaches required ongoing operational overhead to rotate secrets, update allow-lists, and maintain custom code. Now with origin mTLS support, customers can implement a standardized, certificate-based authentication approach that eliminates this operational burden. This enables organizations to enforce strict authentication for their proprietary content, ensuring that only verified CloudFront distributions can establish connections to backend infrastructure ranging from AWS origins and on-premises servers to third-party cloud providers and external CDNs. Customers can leverage client certificates issued by AWS Private Certificate Authority or third-party private Certificate Authorities, which they import through AWS Certificate Manager. Customers can configure origin mTLS using the AWS Management Console, CLI, SDK, CDK, or CloudFormation. Origin mTLS is supported for all origins that support mutual TLS on AWS such as Application Load Balancer and API Gateway, as well as on-premises and custom origins. There is no additional charge for origin mTLS. Origin mTLS is also available in the Business and Premium flat-rate pricing plans. For detailed implementation guidance and best practices, visit the CloudFront origin mutual TLS documentation.

#cloudformation#cloudfront#api gateway#organizations#ga#update

AWS Network Firewall now supports flexible cost allocation through AWS Transit Gateway native attachments in AWS GovCloud (US) Regions, enabling you to automatically distribute data processing costs across different AWS accounts. Customers can create metering policies to apply data processing charges based on their organization's chargeback requirements instead of consolidating all expenses in the firewall owner account. This capability helps security and network teams better manage centralized firewall costs by distributing charges to application teams based on actual usage. Organizations can now maintain centralized security controls while automatically allocating inspection costs to the appropriate business units or application owners, eliminating the need for custom cost management solutions. Flexible cost allocation is available in AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. You can enable these features using the AWS Management Console, AWS Command Line Interface (CLI) and the AWS Software Development Kit (SDK). There are no additional charges for using this attachment or flexible cost allocation beyond standard pricing of AWS Network Firewall and AWS Transit Gateway. To get started, visit the Flexible Cost Allocation on AWS Transit Gateway service documentation.

#lex#organizations#ga#support

Over the past week, we passed Laba festival, a traditional marker in the Chinese calendar that signals the final stretch leading up to the Lunar New Year. For many in China, it’s a moment associated with reflection and preparation, wrapping up what the year has carried, and turning attention toward what lies ahead. Looking forward, […]

#bedrock#sagemaker

In this post, we illustrate how Clarus Care, a healthcare contact center solutions provider, worked with the AWS Generative AI Innovation Center (GenAIIC) team to develop a generative AI-powered contact center prototype. This solution enables conversational interaction and multi-intent resolution through an automated voicebot and chat interface. It also incorporates a scalable service model to support growth, human transfer capabilities--when requested or for urgent cases--and an analytics pipeline for performance insights.

#bedrock#nova#support

Amazon Connect now offers APIs to configure and run tests that simulate contact center experiences, making it easy to validate workflows, self-service voice interactions, and their outcomes. With these APIs, you can programmatically configure test parameters, including the caller's phone number or customer profile, the reason for the call (such as "I need to check my order status"), the expected responses (such as "Your request has been processed"), and business conditions like after-hours scenarios or full call queues. With this launch, you can also integrate testing directly into CI/CD pipelines, run multiple tests simultaneously to validate workflows at scale, and enable automated regression testing as part of your deployment cycles. These capabilities allow you to rapidly validate changes to your workflows and confidently deploy new customer experiences to production. To learn more about these features, see the Amazon Connect API Reference and Amazon Connect Administrator Guide. These features are available in Asia Pacific (Mumbai), Africa (Cape Town), Europe (Frankfurt), US East (N. Virginia), Asia Pacific (Seoul), Europe (London), Asia Pacific (Tokyo), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), and Canada (Central) regions. To learn more about Amazon Connect, AWS’s AI-native customer experience solution, please visit the Amazon Connect website.

#launch#ga

AWS HealthImaging now supports storing and retrieving lossy compressed medical images in the JPEG XL transfer syntax (1.2.840.10008.1.2.4.112). It is now simpler than ever to integrate HealthImaging with applications that require JPEG XL encoded DICOM data, such as digital pathology whole slide imaging systems. With this launch, HealthImaging stores your JPEG XL Lossy image data without transcoding, which maintains the fidelity of your data and reduces your storage costs. Further, you can retrieve stored image frames in the JPEG XL format without the latency of transcoding at retrieval time.

#launch#support

Evaluating the performance of large language models (LLMs) goes beyond statistical metrics like perplexity or bilingual evaluation understudy (BLEU) scores. For most real-world generative AI scenarios, it’s crucial to understand whether a model is producing better outputs than a baseline or an earlier iteration. This is especially important for applications such as summarization, content generation, […]

#nova#sagemaker#lex

Amazon Connect now delivers improved estimated wait time metrics for queues and enqueued contacts. This allows contact centers to set accurate customer expectations, provide convenient options such as callbacks when hold times are extended, and balance workloads effectively across multiple queues. By leveraging the improved estimated wait time metrics, contact centers can make more strategic routing choices across queues while gaining enhanced visibility for better resource planning. For example, a customer calling about billing during peak hours with a 15-minute wait is seamlessly transferred to a cross-trained team with 2-minute availability, getting help faster without repeating their issue. The metric works seamlessly with routing criteria and agent proficiency configurations.

#organizations#launch#ga

Today, Amazon SageMaker announced a new capability allowing you to establish connectivity between your Amazon Virtual Private Cloud (VPC) and Amazon SageMaker Unified Studio without customer data traffic going through the public internet. Customers needing to go beyond the standard data transfer protocol (HTTPS/TLS) can choose to configure their VPC so data transfer stays within the AWS network. Through AWS PrivateLink, network administrators can now onboard the AWS service endpoints used by Amazon SageMaker Unified Studio to their VPC. Once the endpoints are onboarded, IAM policies used by Amazon SageMaker will enforce that customer data stays within the AWS network. Amazon SageMaker private access using AWS PrivateLink is available in all AWS Regions where Amazon SageMaker Unified Studio is supported, including: Asia Pacific (Tokyo), Europe (Ireland), US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), South America (São Paulo), Asia Pacific (Seoul), Europe (London), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Asia Pacific (Mumbai), Europe (Paris), and Europe (Stockholm). To learn more, visit the Amazon SageMaker page, then get started with the network isolation documentation.
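Onboarding an endpoint boils down to creating interface VPC endpoints for each required service. A minimal sketch with boto3 follows; the VPC, subnet, and service name here are illustrative, and Amazon SageMaker Unified Studio requires a specific list of endpoints documented in its network isolation guide.

```python
# Sketch: creating one interface VPC endpoint for PrivateLink access.
# The VPC ID, subnet ID, and service name are placeholders; consult the
# SageMaker Unified Studio network isolation documentation for the full
# set of required endpoint service names.
endpoint_kwargs = dict(
    VpcId="vpc-0123456789abcdef0",                        # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.sagemaker.api",  # one example endpoint
    VpcEndpointType="Interface",
    SubnetIds=["subnet-0123456789abcdef0"],               # placeholder subnet
    PrivateDnsEnabled=True,  # resolve the service's default DNS name privately
)

# To create the endpoint (requires AWS credentials):
#   import boto3
#   boto3.client("ec2").create_vpc_endpoint(**endpoint_kwargs)
```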

#sagemaker#unified studio#iam#ga#support#new-capability

Amazon Elastic Container Service (Amazon ECS) now publishes container health status as a new metric in CloudWatch Container Insights with enhanced observability. Customers can now track the operational health of their containers through a dedicated CloudWatch metric and create alarms to respond proactively to unhealthy containers. When customers configure a container health check in the container definition of an ECS task definition, Container Insights now publishes the UnHealthyContainerHealthStatus metric in the ECS/ContainerInsights namespace. The metric reports 0 for HEALTHY and 1 for UNHEALTHY. Container health state information is also available in embedded metric format (EMF) logs, providing additional context while health checks are being evaluated during the UNKNOWN state. The metric is available across cluster, service, task, and container-level dimensions, enabling customers to monitor health at their preferred level of granularity. Customers can create CloudWatch alarms on the metric to receive notifications when containers become unhealthy, allowing teams to take immediate action and maintain application reliability. To get started, enable Container Insights with enhanced observability on your ECS cluster and configure a container health check in your task definition to start collecting the metric in CloudWatch. The container health metric is available in all AWS Regions where Amazon ECS Container Insights is supported. For more information, see the Amazon ECS container health checks documentation and the CloudWatch Container Insights documentation.
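An alarm on the new metric might look like the sketch below. The metric name, namespace, and 0/1 values come from the announcement; the cluster and service names are placeholders.

```python
# Sketch: a CloudWatch alarm that fires when any container in a service
# reports UNHEALTHY (metric value 1) for three consecutive minutes.
# ClusterName/ServiceName values are placeholders.
alarm_kwargs = dict(
    AlarmName="orders-service-unhealthy-containers",
    Namespace="ECS/ContainerInsights",
    MetricName="UnHealthyContainerHealthStatus",
    Dimensions=[
        {"Name": "ClusterName", "Value": "prod-cluster"},    # placeholder
        {"Name": "ServiceName", "Value": "orders-service"},  # placeholder
    ],
    Statistic="Maximum",        # any unhealthy container in the period trips it
    Period=60,
    EvaluationPeriods=3,
    Threshold=1,                # 1 == UNHEALTHY per the announcement
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
)

# To create the alarm (requires AWS credentials):
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(**alarm_kwargs)
```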

#ecs#cloudwatch#support

AWS Lambda launches enhanced observability for Kafka event source mappings (ESM) that provides Amazon CloudWatch Logs and metrics to monitor event polling setup, scaling, and processing state of Kafka events. This capability allows customers to quickly diagnose setup issues and take timely corrective actions to operate resilient data streaming workloads. This capability is available for both Amazon Managed Streaming for Apache Kafka (Amazon MSK) and self-managed Apache Kafka (SMK) event source mappings. Customers use Kafka event source mappings (ESM) with their Lambda functions to build mission-critical applications. However, the lack of visibility into event polling setup, scaling, and processing state for events slows down troubleshooting for issues resulting from faulty permissions, misconfiguration, or function errors, which increases mean time to resolution and adds operational overhead. With this launch, customers can enable CloudWatch Logs and metrics to monitor their Kafka polling setup, scaling, and event processing state. Customers can select from multiple log level options that provide logs ranging from warnings and errors to detailed information about event processing progress. Similarly, customers can enable one or more metrics groups—EventCount, ErrorCount, and KafkaMetrics—to monitor various aspects of event processing. Customers can view all their metrics and logs via a dedicated monitoring page on AWS Console for ESM. This capability allows customers to utilize their observability tooling to quickly diagnose setup issues and track performance metrics to meet their stringent business requirements. This feature is available in all AWS Commercial Regions where AWS Lambda's Provisioned mode for Kafka ESM is available. You can enable ESM logs and metrics for your Kafka ESM using AWS Lambda's Create and Update ESM APIs, AWS Console, AWS CLI, AWS SDK, AWS CloudFormation, and AWS SAM. 
To learn more about these capabilities, visit the Lambda Kafka ESM developer documentation. These logs and metrics are charged at standard CloudWatch pricing.
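Enabling the metrics groups on an existing mapping can be sketched with the Lambda UpdateEventSourceMapping API. The UUID below is a placeholder, and the accepted group names should be confirmed against the current API reference; the three names come from the announcement.

```python
# Sketch: enabling ESM metrics groups on a Kafka event source mapping.
# The UUID is a placeholder; the metrics group names are taken from the
# announcement and should be verified against the Lambda API reference.
esm_update = dict(
    UUID="00000000-0000-0000-0000-000000000000",  # placeholder ESM UUID
    MetricsConfig={
        "Metrics": ["EventCount", "ErrorCount", "KafkaMetrics"],
    },
)

# To apply (requires AWS credentials):
#   import boto3
#   boto3.client("lambda").update_event_source_mapping(**esm_update)
```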

#lambda#cloudformation#kafka#msk#cloudwatch#launch

In this post, we explore how you can use Amazon S3-based templates to simplify ModelOps workflows, walk through the key benefits compared to using Service Catalog approaches, and demonstrate how to create a custom ModelOps solution that integrates with GitHub and GitHub Actions—giving your team one-click provisioning of a fully functional ML environment.

#sagemaker#s3

In this post, we walk through how global cross-Region inference routes requests and where your data resides, then show you how to configure the required AWS Identity and Access Management (IAM) permissions and invoke Claude 4.5 models using the global inference profile Amazon Resource Name (ARN). We also cover how to request quota increases for your workload. By the end, you'll have a working implementation of global cross-Region inference in af-south-1.

#bedrock#iam

Today, AWS announces the launch of Partner Revenue Measurement, a new capability that gives AWS Partners visibility into how their solutions impact AWS service consumption across partner-managed and customer-managed accounts. Partner Revenue Measurement allows Partners to better understand their AWS revenue impact and product consumption patterns. Partners can now tag AWS resources using the product code from their AWS Marketplace listing with tag key: aws-apn-id and tag value: pc:<AWS Marketplace product-code> to quantify and measure the AWS revenue impact of that solution. Partner Revenue Measurement is generally available in all commercial regions. To learn more about implementing Partner Revenue Measurement, review the onboarding guide.
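Applying the tag is ordinary resource tagging. The sketch below tags an EC2 instance; the instance ID and product code are placeholders, and you would substitute the product code from your own AWS Marketplace listing.

```python
# Sketch: applying the Partner Revenue Measurement tag to a resource.
# The tag key is from the announcement; the product code and instance ID
# are placeholders for illustration only.
partner_tag = {
    "Key": "aws-apn-id",
    "Value": "pc:prod-examplecode123",  # placeholder for pc:<AWS Marketplace product-code>
}

# To apply (requires AWS credentials):
#   import boto3
#   boto3.client("ec2").create_tags(
#       Resources=["i-0123456789abcdef0"],  # placeholder resource
#       Tags=[partner_tag],
#   )
```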

#launch#generally-available#new-capability

Starting today, Amazon GameLift Streams provides streaming capabilities in six new locations - eu-west-2 (London), eu-north-1 (Stockholm), sa-east-1 (São Paulo), ap-south-1 (Mumbai), ap-northeast-2 (Seoul), and ap-southeast-2 (Sydney) for all customers. New streaming locations enable customers to provide low-latency streaming experiences to their players in Europe, South America, India, and Asia regions. Additionally, these locations increase overall GPU availability, enabling customers to scale their streaming services more effectively. The service supports all stream classes in these new regions. To get started, customers need to edit Location and capacity configurations to add new locations to their new or existing stream groups via console or CLI. For more details, see the Amazon GameLift Streams developer guide: AWS Regions and remote locations.

#ga#support#new-region

Starting today, Amazon EC2 R8a instances are available in the Europe (Spain) and Europe (Frankfurt) Regions. These instances feature 5th Gen AMD EPYC processors (formerly code named Turin) with a maximum frequency of 4.5 GHz, and deliver up to 30% higher performance and up to 19% better price-performance compared to R7a instances. R8a instances deliver 45% more memory bandwidth compared to R7a instances, making these instances ideal for latency-sensitive workloads. Compared to Amazon EC2 R7a instances, R8a instances provide up to 60% faster performance for GroovyJVM, allowing higher request throughput and better response times for business-critical applications. Built on the AWS Nitro System using sixth-generation Nitro Cards, R8a instances are ideal for high-performance, memory-intensive workloads, such as SQL and NoSQL databases, distributed web-scale in-memory caches, in-memory databases, real-time big data analytics, and Electronic Design Automation (EDA) applications. R8a instances offer 12 sizes including 2 bare metal sizes. Amazon EC2 R8a instances are SAP-certified and provide 38% more SAPS compared to R7a instances. To get started, sign in to the AWS Management Console. For more information about the new instances, visit the Amazon EC2 R8a instance page.

#ec2#rds#now-available

Amazon RDS for Oracle now supports cross-Region replicas with additional storage volumes. With additional storage volumes, customers can add up to three storage volumes, each with up to 64 TiB, in addition to the primary storage volume for their database instance. As a result, customers get flexibility to add or remove storage with evolving workload demands, without incurring application downtime, and set up their database instance with up to 256 TiB storage. Now, with support for cross-Region replicas, customers that set up database instances with cross-Region replicas for business-critical applications also get the benefit of using additional storage volumes for storage flexibility. When you create a cross-Region replica for a database instance that is set up with additional storage volumes, Amazon RDS for Oracle automatically configures the same storage layout on the replica. Subsequently, you can apply changes to additional storage volumes on the primary instance and the replica using the AWS Management Console, AWS CLI, or AWS SDK. In disaster recovery situations, you can promote a cross-Region replica to serve as the new standalone database, or execute a switchover to reverse roles between the primary database and the replica to meet low recovery point objective (RPO) and recovery time objective (RTO) for business critical applications. You will need an Oracle Database Enterprise Edition (EE) license to use replicas in mounted mode, and an additional Oracle Active Data Guard license to use replicas in read-only mode. We recommend consulting your legal team or licensing expert to verify Oracle license requirements for your specific use case. Amazon RDS for Oracle cross-Region replicas with additional storage volumes is available in all AWS Regions including the AWS GovCloud (US) Regions. To learn more, see Amazon RDS for Oracle User Guide.
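Creating a cross-Region replica of an instance that uses additional storage volumes can be sketched as follows; RDS copies the storage layout to the replica automatically. The identifiers, Regions, and instance class are placeholders.

```python
# Sketch: creating a cross-Region read replica of an RDS for Oracle
# instance with boto3. All identifiers and Regions are placeholders;
# Oracle EE (and Active Data Guard for read-only mode) licensing applies.
replica_kwargs = dict(
    DBInstanceIdentifier="orcl-replica-usw2",
    SourceDBInstanceIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:db:orcl-primary"  # placeholder ARN
    ),
    DBInstanceClass="db.r6i.4xlarge",  # placeholder instance class
    SourceRegion="us-east-1",          # lets boto3 presign the cross-Region request
)

# To create the replica (requires AWS credentials):
#   import boto3
#   rds = boto3.client("rds", region_name="us-west-2")
#   rds.create_db_instance_read_replica(**replica_kwargs)
```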

#lex#rds#ga#support

Stay current with the latest serverless innovations that can transform your applications. In this 31st quarterly recap, discover the most impactful AWS serverless launches, features, and resources from Q4 2025 that you might have missed.

#nova#launch

As organizations accelerate cloud adoption, meeting digital sovereignty requirements has become essential to build trust with customers and regulators worldwide. The challenge isn’t whether to adopt the cloud—it’s how to do so while meeting sovereignty requirements, using a multidisciplinary approach. Even though requirements vary by geography, organizations commonly address them through technical and operational controls […]

#organizations#ga

The Alation and SageMaker Unified Studio integration helps organizations bridge the gap between fast analytics and ML development and the governance requirements most enterprises face. By cataloging metadata from SageMaker Unified Studio in Alation, you gain a governed, discoverable view of how assets are created and used. In this post, we demonstrate who benefits from this integration, how it works, the specific metadata it synchronizes, and provide a complete deployment guide for your environment.

#sagemaker#unified studio#organizations#ga#integration

The agent-based approach we present is applicable to any type of enterprise content, from product documentation and knowledge bases to marketing materials and technical specifications. To demonstrate these concepts in action, we walk through a practical example of reviewing blog content for technical accuracy. These patterns and techniques can be directly adapted to various content review needs by adjusting the agent configurations, tools, and verification sources.

To support cloud applications that increasingly depend on rich contextual data, AWS is raising the maximum payload size from 256 KB to 1 MB for asynchronous AWS Lambda function invocations, Amazon SQS, and Amazon EventBridge. Developers can use this enhancement to build and maintain context-rich event-driven systems and reduce the need for complex workarounds such as data chunking or external large object storage.

#lex#lambda#eventbridge#sqs#enhancement#support

Amazon EventBridge increases event payload size from 256 KB to 1 MB, enabling developers to ingest richer, complex payloads for their event-driven workloads without the need to split, compress, or externalize data. Amazon EventBridge is a serverless event router that enables you to create scalable event-driven applications by routing events between your applications, third-party SaaS applications, and AWS services. These applications often need to process rich contextual data, including large-language model prompts, telemetry signals, and complex JSON structures for machine learning outputs. The new 1 MB payload support in EventBridge Event Buses enables developers to streamline their architectures by including comprehensive data in a single event, reducing the need for complex data chunking or external storage solutions. This feature is available in all commercial AWS Regions where Amazon EventBridge is offered, except Asia Pacific (New Zealand), Asia Pacific (Thailand), Asia Pacific (Malaysia), Asia Pacific (Taipei), and Mexico (Central). For a full list, see the AWS Regional Services List. To learn more, visit the EventBridge documentation.
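A producer can guard against the limit locally before calling PutEvents. The sketch below builds an entry and performs a rough size check; the bus name, source, and detail-type are placeholders, and the size estimate is an approximation rather than EventBridge's exact accounting.

```python
# Sketch: building a PutEvents entry and guarding against the new 1 MB
# payload limit. Source/DetailType/bus name are placeholders, and the
# size calculation is a rough client-side approximation.
import json

LIMIT_BYTES = 1 * 1024 * 1024  # 1 MB EventBridge payload limit

def build_entry(detail: dict) -> dict:
    entry = {
        "Source": "app.orders",         # placeholder event source
        "DetailType": "OrderEnriched",  # placeholder detail-type
        "Detail": json.dumps(detail),
        "EventBusName": "default",
    }
    size = sum(len(str(v).encode("utf-8")) for v in entry.values())
    if size > LIMIT_BYTES:
        raise ValueError(f"entry is {size} bytes, over the 1 MB limit")
    return entry

# A ~200 KB detail payload now fits in a single event:
entry = build_entry({"order_id": "123", "llm_context": "x" * 200_000})

# To send (requires AWS credentials):
#   import boto3
#   boto3.client("events").put_events(Entries=[entry])
```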

#lex#eventbridge#support

Amazon Bedrock now supports server-side tools in the Responses API using OpenAI API-compatible service endpoints. Bedrock already supports client-side tool use with the Converse, Chat Completions, and Responses APIs. Now, with the launch of server-side tool use for the Responses API, Amazon Bedrock calls the tools directly without going through a client, enabling your AI applications to perform real-time, multi-step actions such as searching the web, executing code, and updating databases within the governance, compliance, and security boundaries of your AWS accounts. You can either submit your own custom Lambda function to run custom tools or use AWS-provided tools, such as notes and tasks. Server-side tool use with the Responses API is available starting today with OpenAI's GPT OSS 20B/120B models in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Mumbai), South America (São Paulo), Europe (Ireland), Europe (London), and Europe (Milan) AWS Regions. Support for other regions and models is coming soon. To get started, visit the service documentation.

#bedrock#lambda#launch#ga#support#coming-soon

Amazon Cognito introduces inbound federation Lambda triggers that enable you to transform and customize federated user attributes during the authentication process. You can now modify responses from external SAML and OIDC providers before they are stored in your user pool, providing complete programmatic control over the federation flow without requiring changes to your identity provider configuration. The inbound federation Lambda trigger addresses current limitations in federated authentication workflows, particularly issues caused by attribute size limits and the need for selective attribute storage from external identity providers. For example, large group attributes from external SAML or OIDC identity providers that exceed Cognito’s 2,048 character limit per attribute can block the authentication flow. This capability allows you to add, override, or suppress attribute values, such as modifying large group attributes, before creating new federated users or updating existing federated user profiles in Cognito. The new inbound federation Lambda trigger is available through hosted UI (classic) and managed login in all AWS Regions where Amazon Cognito is available. To get started, configure the trigger using the AWS Management Console, AWS Command Line Interface (CLI), AWS Software Development Kits (SDKs), Cloud Development Kit (CDK), or AWS CloudFormation by adding the new parameter to your User Pool LambdaConfig. To learn more, see the Amazon Cognito Developer Guide for implementation examples and best practices.
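A trigger handler for the oversized-groups case might look like the sketch below. The event and response shapes here are simplified assumptions, as is the filter rule; consult the Cognito Developer Guide for the actual trigger contract.

```python
# Sketch of an inbound federation Lambda trigger that trims an oversized
# group claim before Cognito stores it. The event/response shapes and the
# "app-" filter rule are illustrative assumptions, not the documented
# trigger contract.
MAX_ATTR_LEN = 2048  # Cognito's per-attribute limit cited in the announcement

def handler(event, context=None):
    attrs = event["request"]["userAttributes"]
    groups = attrs.get("custom:groups", "")
    if len(groups) > MAX_ATTR_LEN:
        # Keep only the groups this application cares about (illustrative rule),
        # then hard-truncate as a last resort so the attribute always fits.
        kept = [g for g in groups.split(",") if g.startswith("app-")]
        attrs["custom:groups"] = ",".join(kept)[:MAX_ATTR_LEN]
    event["response"] = {"userAttributes": attrs}  # assumed response shape
    return event

# Local smoke test with a deliberately oversized group list:
sample = {"request": {"userAttributes": {"custom:groups": ",".join(
    ["app-admins"] + [f"team-{i}" for i in range(500)])}}}
out = handler(sample)
```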

#lambda#cloudformation

Amazon Keyspaces (for Apache Cassandra) now supports table pre-warming, allowing you to proactively prepare both new and existing tables to meet future traffic demands. This capability is available for tables in both provisioned and on-demand capacity modes, including multi-Region replicated tables. Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra–compatible database service. Amazon Keyspaces is serverless, so you pay for only the resources that you use and you can build applications that serve thousands of requests per second with virtually unlimited throughput and storage. While Amazon Keyspaces automatically scales to accommodate growing workloads, certain scenarios like application launches, marketing campaigns, or seasonal events can create sudden traffic spikes that exceed normal scaling patterns. With pre-warming, you can now manually specify your expected peak throughput requirements during table creation or update operations, ensuring your tables are immediately ready to handle large traffic surges without scaling delays or increased error rates. The pre-warming process is non-disruptive and runs asynchronously, allowing you to continue making other table modifications while pre-warming is in progress. Pre-warming incurs a one-time charge based on the difference between your specified values and the baseline capacity. The feature is now available in all AWS Commercial and AWS GovCloud (US) Regions where Amazon Keyspaces is offered. To learn more, visit the pre-warming launch blog or Amazon Keyspaces documentation.

#launch#now-available#update#support

You can now change the server-side encryption type of encrypted objects in Amazon S3 without any data movement. You can use the UpdateObjectEncryption API to atomically change the encryption key of your objects regardless of the object size or storage class. With S3 Batch Operations, you can use UpdateObjectEncryption at scale to standardize the encryption type on entire buckets of objects while preserving object properties and S3 Lifecycle eligibility. Customers across many industries face increasingly stringent audit and compliance requirements on data security and privacy. A common requirement for these compliance frameworks is more rigorous encryption standards for data-at-rest, where organizations must encrypt data using a key management service. With UpdateObjectEncryption, customers can now change the encryption type of existing encrypted objects to move from Amazon S3 managed server-side encryption (SSE-S3) to use server-side encryption with AWS KMS keys (SSE-KMS). You can also change the customer-managed KMS key used to encrypt your data to comply with custom key rotation standards or enable the use of S3 Bucket Keys to reduce your KMS requests. The Amazon S3 UpdateObjectEncryption API is available in all AWS Regions. To get started, you can use the AWS Management Console or the latest AWS SDKs to update the server-side encryption type of your objects. To learn more, please visit the documentation.
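A single-object call might be sketched as below. The boto3 method and parameter names are assumptions based on the API name in the announcement, and the bucket, key, and KMS key ARN are placeholders; verify the exact shape against the current S3 API reference.

```python
# Sketch: switching one object from SSE-S3 to SSE-KMS in place.
# The boto3 method name (update_object_encryption) and parameter names
# are assumptions based on the announced API; bucket/key/KMS ARN are
# placeholders.
update_kwargs = dict(
    Bucket="example-bucket",            # placeholder bucket
    Key="reports/2025/q4.parquet",      # placeholder key
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/example-key-id",  # placeholder
)

# To apply (requires AWS credentials and a recent SDK):
#   import boto3
#   boto3.client("s3").update_object_encryption(**update_kwargs)  # assumed method name
```

For whole buckets, the same operation would be driven through S3 Batch Operations as the announcement describes.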

#s3#rds#organizations#ga#update

AWS announces the launch of deployment Standard Operating Procedures (SOPs) available in the AWS MCP Server. SOPs are structured, natural language instructions that guide AI agents through complex, multi-step tasks to ensure consistent, reliable, and efficient behavior. With these automated procedures, customers can deploy web applications to their AWS account using natural language prompts from any MCP-compatible IDE or CLI, including Kiro, Kiro CLI, Cursor, and Claude Code. Deployment works by generating AWS CDK infrastructure, deploying CloudFormation stacks, and creating CI/CD pipelines with recommended AWS security best practices. Previously, developers struggled to take their vibe-coded applications to production with DevOps best practices in place. Now, developers can move quickly from prototype to production in as little as one prompt. When you ask your AI assistant configured with AWS MCP Server to deploy your web application, your AI agent will follow the multi-step plan defined in Agent SOPs to analyze the project structure, generate CDK infrastructure, and deploy a preview environment hosted on Amazon S3 and Amazon CloudFront. Once you are ready, it can configure AWS CodePipeline for automated production deployments from source repositories, setting up CI/CD automatically for your application. The Agent SOPs support web applications built with popular frameworks including React, Vue.js, Angular, and Next.js. Deployment documentation is automatically created in the repository, enabling agents to handle future deployments, query logs for troubleshooting and resume work across sessions. The Agent SOPs are available in preview as part of the AWS MCP Server at no additional cost in the US East (N. Virginia) Region. You pay only for AWS resources you create and applicable data transfer costs. To get started, see the AWS MCP Server documentation.

#lex#s3#cloudformation#cloudfront#launch#preview

Amazon GameLift Servers now enables automatic scaling to and from zero instances, addressing a critical cost optimization challenge for game developers. Previously, developers had to maintain running instances even during periods of low or no activity in order for Fleet autoscaling to remain active, resulting in unnecessary infrastructure costs during off-peak hours. With automatic scaling to and from zero instances, game developers using Amazon GameLift Servers can optimize their multiplayer gaming infrastructure costs while maintaining responsive performance. By eliminating charges for unused instances during inactive periods while automatically scaling up when game sessions are requested, this new capability delivers significant cost savings. It is particularly valuable for games with distinct peak and off-peak periods, seasonal or event-based games, new game launches with uncertain traffic patterns, and regional games with time-zone-specific activity. Additionally, scaling decisions no longer need manual intervention, as Amazon GameLift Servers intelligently adapts to natural gaming activity patterns. Automatic scaling to and from zero instances is available in all AWS Regions where Amazon GameLift Servers is supported. To learn more about Amazon GameLift Servers automatic scaling capabilities and implementation details, visit the Amazon GameLift Servers documentation.
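The scale-to-zero behavior can be modeled with a small sketch. The function below is an illustration of the decision rule (scale up when sessions are requested, down to zero when idle), not GameLift's actual algorithm; `sessions_per_instance` is an assumed packing density.

```python
# Illustrative model of the scale-to-zero rule: desired capacity tracks active
# and requested game sessions and may drop to zero when the fleet is idle.
# This is a hypothetical sketch of the idea, not GameLift's actual algorithm.

def desired_instances(active_sessions, pending_requests, sessions_per_instance,
                      min_instances=0):
    """Return how many instances are needed, floored at min_instances."""
    needed = active_sessions + pending_requests
    if needed == 0:
        return min_instances  # with this launch, min_instances can be 0
    # ceiling division: enough instances to host every session
    return max(min_instances, -(-needed // sessions_per_instance))
```

For example, `desired_instances(0, 0, 10)` is 0 during off-peak hours, while the first session request scales the fleet back up to 1.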

#launch#ga#support#new-capability

Kubernetes version 1.35 introduced several new features and bug fixes, and AWS is excited to announce that you can now use Amazon Elastic Kubernetes Service (EKS) and Amazon EKS Distro to run Kubernetes version 1.35. Starting today, you can create new EKS clusters using version 1.35 and upgrade existing clusters to version 1.35 using the EKS console, the eksctl command line interface, or through an infrastructure-as-code tool. Kubernetes version 1.35 introduces several key improvements, including In-Place Pod Resource Updates allowing CPU and memory adjustments without restarting Pods, and PreferSameNode Traffic Distribution prioritizing local endpoints before routing to remote nodes for reduced latency. The release brings Node Topology Labels via Downward API enabling Pods to access region and zone information without API server queries, alongside Image Volumes delivering data artifacts like AI models using OCI container images. To learn more about the changes in Kubernetes version 1.35, see our documentation and the Kubernetes project release notes. EKS now supports Kubernetes version 1.35 in all the AWS Regions where EKS is available, including the AWS GovCloud (US) Regions. You can learn more about the Kubernetes versions available on EKS and instructions to update your cluster to version 1.35 by visiting EKS documentation. You can use EKS cluster insights to check if there are any issues that can impact your Kubernetes cluster upgrades. EKS Distro builds of Kubernetes version 1.35 are available through ECR Public Gallery and GitHub. Learn more about the EKS version lifecycle policies in the documentation.
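The In-Place Pod Resource Updates feature can be exercised by patching a Pod's `resize` subresource. Below is a minimal sketch of building such a patch body; the container name and resource values are placeholders.

```python
import json

# Sketch of an In-Place Pod Resource Update (Kubernetes 1.35): the patch
# targets the Pod's "resize" subresource so CPU/memory change without a
# restart, e.g. `kubectl patch pod web --subresource resize --patch "$BODY"`.

def resize_patch(container, cpu, memory):
    """Build a patch body adjusting one container's requests and limits."""
    return {
        "spec": {
            "containers": [{
                "name": container,
                "resources": {
                    "requests": {"cpu": cpu, "memory": memory},
                    "limits": {"cpu": cpu, "memory": memory},
                },
            }]
        }
    }

body = json.dumps(resize_patch("app", "750m", "1Gi"))
```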

#eks#ga#new-feature#update#improvement#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R7gd instances with up to 3.8 TB of local NVMe-based SSD block-level storage are available in the Europe (Paris) Region. R7gd instances are powered by AWS Graviton3 processors with DDR5 memory and are built on the AWS Nitro System. They are ideal for memory-intensive workloads such as open-source databases, in-memory caches, and real-time big data analytics, and are a great fit for applications that need access to high-speed, low-latency local storage, including those that need temporary storage of data for scratch space, temporary files, and caches. To learn more, see Amazon R7gd Instances. To get started, see the AWS Management Console.

#ec2#graviton#now-available

Tipico is the number one name in sports betting in Germany. Every day, we connect millions of fans to the thrill of sport, combining technology, passion, and trust to deliver fast, secure, and exciting betting, both online and in more than a thousand retail shops across Germany. We also bring this experience to Austria, where we proudly operate a strong sports betting business. In this post, we show how Tipico built a unified data transformation platform using Amazon Managed Workflows for Apache Airflow (Amazon MWAA) and AWS Batch.

You can now use Amazon MSK Replicator to replicate streaming data across Amazon Managed Streaming for Apache Kafka (Amazon MSK) clusters in the Asia Pacific (New Zealand) Region. MSK Replicator is a feature of Amazon MSK that enables you to reliably replicate data across Amazon MSK clusters in the same or different AWS Regions in a few clicks. With MSK Replicator, you can easily build regionally resilient streaming applications for increased availability and business continuity. MSK Replicator provides automatic asynchronous replication across MSK clusters, eliminating the need to write custom code, manage infrastructure, or set up cross-Region networking. MSK Replicator automatically scales the underlying resources so that you can replicate data on demand without having to monitor or scale capacity. It also replicates the necessary Kafka metadata, including topic configurations, access control lists (ACLs), and consumer group offsets. If an unexpected event occurs in a Region, you can fail over to the other AWS Region and seamlessly resume processing. You can get started with MSK Replicator from the Amazon MSK console or the AWS CLI. With this launch, MSK Replicator is now available in thirty-six AWS Regions. To learn more, visit the MSK Replicator documentation, product page, and pricing page.
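As a hedged sketch, the request for creating a replicator with boto3's `kafka` client might look like the following. The ARNs and topic patterns are placeholders, and the real `create_replicator` call also needs per-cluster VPC configuration; consult the MSK Replicator API reference for the authoritative shapes.

```python
# Hedged sketch of CreateReplicator request parameters for boto3's "kafka"
# client. ARNs and topic patterns are placeholders, and the real call also
# requires VPC configuration for each cluster (omitted here for brevity).

def replicator_params(name, source_arn, target_arn, role_arn, topics):
    """Assemble an abridged parameter set for kafka.create_replicator."""
    return {
        "ReplicatorName": name,
        "ServiceExecutionRoleArn": role_arn,
        "KafkaClusters": [
            {"AmazonMskCluster": {"MskClusterArn": source_arn}},
            {"AmazonMskCluster": {"MskClusterArn": target_arn}},
        ],
        "ReplicationInfoList": [{
            "SourceKafkaClusterArn": source_arn,
            "TargetKafkaClusterArn": target_arn,
            "TargetCompressionType": "NONE",
            "TopicReplication": {"TopicsToReplicate": topics},
            # replicate offsets too, so consumers can resume after failover
            "ConsumerGroupReplication": {"ConsumerGroupsToReplicate": [".*"]},
        }],
    }

params = replicator_params(
    "clickstream-replicator",
    "arn:aws:kafka:us-east-1:111122223333:cluster/src/EXAMPLE-1",
    "arn:aws:kafka:us-west-2:111122223333:cluster/dst/EXAMPLE-2",
    "arn:aws:iam::111122223333:role/msk-replicator-role",
    ["clickstream.*"])
# boto3.client("kafka").create_replicator(**params)
```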

#kafka#msk#launch#now-available

Amazon DynamoDB global tables with multi-Region strong consistency (MRSC) now support application resiliency testing with AWS Fault Injection Service (FIS), a fully managed service for running controlled fault injection experiments to improve application performance, observability, and resilience. With this launch, you can create real-world failure scenarios for MRSC global tables, such as regional failures, enabling you to observe how your applications respond to these disruptions and validate your resilience mechanisms. MRSC global tables replicate your DynamoDB tables automatically across your choice of AWS Regions to achieve fast, strongly consistent read and write performance, providing 99.999% availability, increased application resiliency, and improved business continuity. You can use the new FIS action to observe how your application responds to a pause in regional replication and tune monitoring and recovery processes to improve resiliency and application availability. MRSC global tables support for FIS is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Osaka), Asia Pacific (Seoul), Europe (Ireland), Europe (London), Europe (Frankfurt), and Europe (Paris). To get started, visit the DynamoDB FIS actions documentation.
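A sketch of the kind of FIS experiment template this enables is below. The structure follows the FIS template format, but the action ID string and the target/parameter field names are assumptions; verify them against the DynamoDB FIS actions documentation before creating a template.

```python
# Sketch of an AWS FIS experiment template pausing replication for an MRSC
# global table. The action ID and field names are assumptions based on the
# announcement, not verified values -- check the FIS and DynamoDB docs.

def pause_replication_template(table_arn, role_arn, duration="PT5M"):
    return {
        "description": "Pause MRSC global table replication",
        "roleArn": role_arn,
        "stopConditions": [{"source": "none"}],
        "targets": {
            "Tables": {
                "resourceType": "aws:dynamodb:global-table",
                "resourceArns": [table_arn],
                "selectionMode": "ALL",
            }
        },
        "actions": {
            "pauseReplication": {
                "actionId": "aws:dynamodb:global-table-pause-replication",
                "parameters": {"duration": duration},  # ISO 8601 duration
                "targets": {"Tables": "Tables"},
            }
        },
    }

template = pause_replication_template(
    "arn:aws:dynamodb:us-east-1:111122223333:table/orders",
    "arn:aws:iam::111122223333:role/fis-experiment-role")
# boto3.client("fis").create_experiment_template(**template)
```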

#dynamodb#launch#support

AWS Deadline Cloud now supports editing job names and descriptions after submission. This new feature makes it easier to organize and identify jobs by updating names or adding useful tracking details in the description field. Job names and descriptions are critical metadata for organizing work across users and projects, often tracking details such as shot and sequence numbers across systems. Previously, they could only be set at job creation during submission; being able to edit them afterward lets you fix naming mistakes and add key tracking information to the job description for other users. You can edit job names and descriptions using the AWS SDK, the Deadline client, and Deadline Monitor. To learn more, see the AWS Deadline Cloud documentation.

#ga#new-feature#support

AWS Network Firewall now provides visibility into generative AI (GenAI) application traffic and supports traffic filtering based on web categories. This new capability simplifies governance by enabling you to identify and control access to GenAI services, social media platforms, streaming sites, and other web categories directly within your firewall rules using pre-defined URL categories. This approach of inspecting traffic based on URL categories helps security and compliance teams enforce consistent policies across their AWS environments while providing visibility into usage of emerging technologies like GenAI. You can now easily block access to inappropriate or high-risk domains, restrict GenAI tool usage to approved services, and meet regulatory requirements—all while reducing operational overhead. When combined with AWS Network Firewall's TLS inspection feature, you can inspect the full URL path using category-based rules for even more granular control. This feature is available in all AWS commercial regions where AWS Network Firewall is supported. To learn more about URL category filtering in AWS Network Firewall, visit AWS Network Firewall product page and service documentation. You can get started by updating your stateful rule groups in the AWS Management Console, AWS CLI, or AWS SDKs.

#support#new-capability

Using Amazon SageMaker Unified Studio serverless notebooks, AI-assisted development, and unified governance, you can speed up your data and AI workflows across data team functions while maintaining security and compliance. In this post, we walk you through how these new capabilities in SageMaker Unified Studio can help you consolidate your fragmented data tools, reduce time to insight, and collaborate across your data teams.

#sagemaker#unified studio

In this post, you learn how to build Log Lake, a customizable cross-company data lake for compliance-related use cases that combines AWS CloudTrail and Amazon CloudWatch logs. You'll discover how to set up separate tables for writing and reading, implement event-driven partition management using AWS Lambda, and transform raw JSON files into read-optimized Apache ORC format using AWS Glue jobs. Additionally, you'll see how to extend Log Lake by adding Amazon Bedrock model invocation logs to enable human review of agent actions with elevated permissions, and how to use an AI agent to query your log data without writing SQL.

#bedrock#lambda#glue#cloudwatch

When processing data at scale, many organizations use Apache Spark on Amazon EMR to run shared clusters that handle workloads across tenants, business units, or classification levels. In such multi-tenant environments, different datasets often require distinct AWS Key Management Service (AWS KMS) keys to enforce strict access controls and meet compliance requirements. At the same […]

#s3#emr#organizations#ga

AWS announces advanced printer redirection for Amazon WorkSpaces Personal, enabling Windows users to access the full feature set of their printers from their virtual desktop environments. With this feature, customers can now use printer-specific capabilities such as double-sided printing, paper tray selection, finishing options (stapling, hole-punching), and color management directly from their Windows WorkSpaces. Advanced printer redirection addresses the need for specialized printing features that require printer-specific drivers rather than generic drivers. This capability is ideal for organizations with users who need advanced printing features for professional documents, labels, or specialized output. The feature includes configurable driver validation modes (exact match, partial match, or name-only matching) to balance compatibility with feature support, allowing administrators to optimize for their specific environment. When matching drivers are not found, WorkSpaces automatically falls back to basic printing mode, ensuring users can always print. This feature is available in all AWS Regions where Amazon WorkSpaces Personal is offered. Advanced printer redirection is supported on Windows WorkSpaces with Windows clients only, and requires WorkSpaces Agent version 2.2.0.2116 or later and Windows client version 5.31 or later. Matching printer drivers must be installed on both the WorkSpace and the client device. For more information about advanced printer redirection in Amazon WorkSpaces, see Configure Printer Support for DCV in the Amazon WorkSpaces Administration Guide, or visit the Amazon WorkSpaces page to learn more about virtual desktop solutions from AWS.

#organizations#ga#support

AWS Marketplace now offers a self-service listing experience for sellers listing Amazon Machine Image (AMI) products with FPGA (Field Programmable Gate Array) images. This new capability eliminates the previous dependency on manual Product Load Forms and accelerates time-to-market for AWS partners that offer specialized hardware accelerators using FPGA technology on supported Amazon F2 instance types. With this launch, sellers can now create and manage AMIs with Amazon FPGA images using a new UI experience or programmatically through the AWS Marketplace Catalog API. During listing creation, sellers are guided through a step-by-step workflow to fill in required information about their listings including up to 15 Amazon FPGA images. The self-service experience includes comprehensive inline validation and error messages to help sellers identify and resolve configuration issues before submission, streamlining the publishing process and improving speed to market. To learn more, see the AWS Marketplace Seller Guide and the AWS Marketplace Catalog API guide. To get started, visit the server product page in the AWS Marketplace Management Portal.

#launch#ga#support#new-capability

AWS memory-optimized R6id database instances are now generally available for Amazon RDS for PostgreSQL, MySQL, and MariaDB in the Israel (Tel Aviv) Region. R6gd instances are now supported for Amazon RDS for PostgreSQL, MySQL, and MariaDB in the Asia Pacific (Osaka), Europe (Spain), and Europe (Zurich) Regions. AWS Graviton2-based R6gd instances provide up to 40% performance improvement over R5-based instances of equivalent sizes on Amazon Aurora and Amazon RDS databases, depending on database engine, version, and workload, and also deliver local NVMe-based block-level storage for low-latency local storage. Memory-optimized R6id instances offer up to 58% more storage per vCPU and 15% better price performance compared with R5d instances. You can easily launch R6gd or R6id database instances through the Amazon RDS Management Console or by using the AWS Command Line Interface (CLI). For detailed information about specific engine versions that support these database instance types, please refer to the Aurora and RDS documentation. For complete information on pricing and regional availability, please refer to the Amazon RDS pricing page.

#rds#graviton#launch#generally-available#improvement#support

Amazon Connect now enables you to apply tag-based access control to cases, giving administrators more control over who can view and manage case data. With this capability, you can associate tags with case templates and configure security profiles to control which users can access cases that include specific tags. For example, you can tag fraud-related cases and restrict access so that only users assigned to a fraud security profile can view or edit those cases, helping you enforce internal controls and data access policies. Amazon Connect Cases is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Africa (Cape Town). To learn more and get started, visit the Amazon Connect Cases webpage and documentation.

#ga#support

Amazon Lightsail now offers new Node.js, LAMP, and Ruby on Rails blueprints. These new blueprints have Instance Metadata Service Version 2 (IMDSv2) enforced by default. With just a few clicks, you can create a Lightsail virtual private server (VPS) of your preferred size with Node.js, LAMP, or Ruby on Rails preinstalled. With Lightsail, you can easily get started on the cloud by choosing a blueprint and an instance bundle to build your web application. Lightsail instance bundles include instances preinstalled with your preferred operating system, storage, and monthly data transfer allowance, giving you everything you need to get up and running quickly. These new blueprints are now available in all AWS Regions where Lightsail is available. For more information on blueprints supported on Lightsail, see Lightsail documentation. For more information on pricing, or to get started with your free trial, click here.
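Creating an instance from one of these blueprints can be sketched with boto3's `lightsail.create_instances` call. The blueprint and bundle ID strings below are assumptions for illustration; enumerate the real values with `get_blueprints` and `get_bundles`.

```python
# Sketch of launching a Lightsail instance from one of the new blueprints.
# "nodejs" and "micro_3_0" are assumed IDs -- list the actual blueprints and
# bundles with lightsail.get_blueprints() / lightsail.get_bundles().

def create_instance_params(name, zone, blueprint_id="nodejs", bundle_id="micro_3_0"):
    return {
        "instanceNames": [name],
        "availabilityZone": zone,
        "blueprintId": blueprint_id,  # e.g. new Node.js blueprint (IMDSv2 enforced)
        "bundleId": bundle_id,        # size: vCPU/RAM, storage, transfer allowance
    }

params = create_instance_params("web-1", "us-east-1a")
# boto3.client("lightsail").create_instances(**params)
```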

#now-available#update#support

Amazon Bedrock now supports a 1-hour time-to-live (TTL) option for prompt caching for select Anthropic Claude models. With this update, you can extend the persistence of cached prompt prefixes from the default 5 minutes to 1 hour, improving cost efficiency and performance for long-running agentic workflows and multi-turn conversations. Previously, cached content remained active for a fixed 5-minute window and refreshed when reused. With the new 1-hour TTL option, you can maintain context for users who interact less frequently, or for complex agents that require more time between steps—such as tool use, retrieval, and orchestration. The 1-hour TTL is also useful for longer sessions and batch processing where you want cached content to persist across extended periods. 1-hour TTL prompt caching is generally available for Anthropic’s Claude Sonnet 4.5, Claude Haiku 4.5, and Claude Opus 4.5 in all commercial AWS Regions and AWS GovCloud (US) Regions where these models are available. The 1-hour cache is billed at a different rate than the standard 5-minute cache. To learn more, refer to the Amazon Bedrock documentation and Amazon Bedrock Pricing page.
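A hedged sketch of opting into the longer TTL is below. The `cache_control` shape with `"ttl": "1h"` follows Anthropic's prompt-caching request format; verify the exact field names and supported models against the Amazon Bedrock documentation.

```python
import json

# Hedged sketch of an InvokeModel request body using the 1-hour cache TTL.
# The cache_control field shape is an assumption based on Anthropic's
# prompt-caching format; confirm against the Bedrock docs before relying on it.

def cached_system(system_text):
    """Mark a long, stable system prompt as cacheable for one hour."""
    return [{
        "type": "text",
        "text": system_text,
        "cache_control": {"type": "ephemeral", "ttl": "1h"},  # default is 5 minutes
    }]

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "system": cached_system("You are a claims-processing agent. <long tool specs>"),
    "messages": [{"role": "user", "content": "Summarize the open claims."}],
})
# bedrock_runtime.invoke_model(modelId="<claude-sonnet-4-5 model ID>", body=body)
```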

#bedrock#lex#generally-available#update#support

AWS IAM Identity Center now supports Internet Protocol version 6 (IPv6) through new dual-stack endpoints. Customers can now connect to AWS IAM Identity Center using IPv6, IPv4, or dual-stack clients. The existing AWS IAM Identity Center endpoints supporting IPv4 remain available for backward compatibility. IAM Identity Center allows customers to enable workforce access to AWS managed applications and AWS accounts. When your client, such as a browser or an application, makes a request to a dual-stack endpoint, the endpoint resolves to an IPv4 or IPv6 address, depending on the protocol used by your network and client. This launch helps you meet IPv6 compliance requirements and minimizes the need for complex NAT infrastructure. IPv6 support is available in all AWS Regions where IAM Identity Center is available, except the AWS GovCloud (US) Regions and the Taipei Region. To learn more, visit the IAM Identity Center User Guide.

#lex#iam#iam identity center#launch#support

Amazon EMR Serverless is a deployment option for Amazon EMR that you can use to run open source big data analytics frameworks such as Apache Spark and Apache Hive without having to configure, manage, or scale clusters and servers. Based on insights from hundreds of customer engagements, in this post, we share the top 10 best practices for optimizing your EMR Serverless workloads for performance, cost, and scalability. Whether you're getting started with EMR Serverless or looking to fine-tune existing production workloads, these recommendations will help you build efficient, cost-effective data processing pipelines.

#emr#ga

AWS Glue DQDL labels add organizational context to data quality management by attaching business metadata directly to validation rules. In this post, we highlight the new DQDL labels feature, which enhances how you organize, prioritize, and operationalize your data quality efforts at scale. We show how labels such as business criticality, compliance requirements, team ownership, or data domain can be attached to data quality rules to streamline triage and analysis. You’ll learn how to quickly surface targeted insights (for example, “all high-priority customer data failures owned by marketing” or “GDPR-related issues from our Salesforce ingestion pipeline”) and how DQDL labels can help teams improve accountability and accelerate remediation workflows.

#glue#ga

In this post, we explore key benefits, technical capabilities, and considerations for getting started with Spark 4.0.1 on Amazon EMR Serverless. With the emr-spark-8.0-preview release label, you can evaluate new SQL capabilities, Python API improvements, and streaming enhancements in your existing EMR Serverless environment.

#emr#preview#now-available#improvement#enhancement

In this post, we discuss how to use AppSync Events as the foundation of a capable, serverless, AI gateway architecture. We explore how it integrates with AWS services for comprehensive coverage of the capabilities offered in AI gateway architectures. Finally, we get you started on your journey with sample code you can launch in your account and begin building.

#launch#ga

Amazon Managed Grafana is now available in both AWS GovCloud (US-West) and AWS GovCloud (US-East) Regions, enabling government customers and regulated industries to securely visualize and analyze their operational data while meeting stringent compliance requirements. Amazon Managed Grafana is a fully managed service based on open-source Grafana that makes it easier for you to visualize and analyze your operational data at scale. All Amazon Managed Grafana features are supported in AWS GovCloud (US) Regions except for Enterprise plugins. To get started with Amazon Managed Grafana, visit the AWS Console and Amazon Managed Grafana user guide. To learn more about Amazon Managed Grafana, visit the product page and pricing page.

#grafana#now-available#support

AWS is announcing flexible billing for Amazon WorkSpaces Core managed instances, adding monthly flat-rate pricing alongside existing hourly billing. Customers can now choose the optimal pricing model based on their end users' usage patterns: monthly billing is ideal for predictable full-time desktops, and hourly billing is ideal for variable usage patterns. Both options are pay-as-you-go with no long-term commitments. Amazon WorkSpaces Core managed instances simplify virtual desktop infrastructure (VDI) migrations with highly customizable instance configurations. WorkSpaces Core managed instances provision resources in your AWS account, handling infrastructure lifecycle management for both persistent and non-persistent workloads. Monthly pricing delivers savings versus hourly billing at always-on utilization, optimized for real-world VDI use cases. With flexible billing, customers benefit from predictable costs for persistent desktop workloads and the flexibility to mix hourly and monthly billing within the same deployment. VDI partners utilizing WorkSpaces Core managed instances, including Citrix, Workspot, Dizzion, and Leostream, can now integrate with new WorkSpaces API billing features to enable the monthly billing option when instances are created. Hourly billing remains the default billing option for managed instances. In addition, starting today, hourly utility rates for WorkSpaces Core managed instances will be combined and billed by Amazon WorkSpaces to simplify pricing. Previously, hourly rates were split between Amazon EC2 and Amazon WorkSpaces on customer bills. There is no change to the effective rates for on-demand hourly usage of WorkSpaces Core managed instances with this announcement. To learn more about Amazon WorkSpaces Core managed instances flexible billing, visit the WorkSpaces for VDI partners pricing page. For more information, see the WorkSpaces for VDI partners product page.
For technical documentation, see the Amazon WorkSpaces Core Documentation.

#lex#ec2#announcement

Amazon Route 53 Domains now supports registration and management of ten new top-level domains (TLDs): .ai, .nz, .shop, .bot, .moi, .spot, .free, .deal, .now, and .hot. This expansion enhances Route 53's capabilities as a domain registration and DNS management service, offering customers more options to establish their online presence. With these additions, businesses and individuals can now leverage domain names tailored to specific industries, regions, or purposes directly through Amazon Web Services (AWS). The new TLDs cater to various use cases. To name a few, the .ai domain, originally for Anguilla, has become popular among artificial intelligence companies. E-commerce sites can utilize .shop for their online storefronts. The .bot domain suits chatbot and AI-related services. The .now domain works well for time-sensitive services and instant delivery platforms. Users can register these domains through the Route 53 console, AWS CLI, or SDKs, enjoying integrated DNS management and automatic renewal features. This seamless integration allows for efficient domain administration alongside existing Route 53 hosted zones and DNS records. To learn more about Amazon Route 53 Domains and start registering new domains, visit the Amazon Route 53 page. Domain registration pricing varies by TLD. Visit the pricing page for detailed pricing information.

#rds#integration#support#expansion

EC2 Auto Scaling is introducing a new policy condition key, autoscaling:ForceDelete. This condition key is used with the DeleteAutoScalingGroup action to control whether the ForceDelete parameter can be used during deletion, which determines if an Auto Scaling group (ASG) can be deleted while it still contains running instances. You can use this condition key in IAM policies to restrict deletion permissions, providing a safety measure against accidental deletion of ASGs that still have running instances. Furthermore, EC2 Auto Scaling now offers deletion protection at the group level. The new deletion-protection configuration can be set either when you create your ASGs or when you update them. This new feature lets you set enhanced controls based on your workload's criticality, with multiple protection levels available to safeguard against accidental deletions and help maintain application availability. Combining the autoscaling:ForceDelete condition key with deletion protection at the group level provides a layered defense against unwanted ASG termination by allowing you to both restrict IAM permissions for force-delete operations and set enhanced protection controls directly on critical ASGs. These features are now available in all AWS Regions and the AWS GovCloud (US) Regions. To get started, visit the EC2 Auto Scaling console or refer to our technical documentation for deletion protection and policy condition keys for Amazon EC2 Auto Scaling.
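An example policy using the new condition key might look like the following, built here as a Python dict for clarity. It denies force-deleting any ASG that still has running instances while leaving ordinary (empty-group) deletes to whatever Allow statements exist elsewhere in the principal's policies.

```python
import json

# Example IAM policy using the new autoscaling:ForceDelete condition key:
# the Deny matches only when ForceDelete=true is passed to
# DeleteAutoScalingGroup, blocking deletion of groups with running instances.

force_delete_guard = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyForceDeleteOfASGs",
        "Effect": "Deny",
        "Action": "autoscaling:DeleteAutoScalingGroup",
        "Resource": "*",
        "Condition": {"Bool": {"autoscaling:ForceDelete": "true"}},
    }],
}

policy_json = json.dumps(force_delete_guard, indent=2)
```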

#ec2#iam#ga#now-available#new-feature#update

Amazon Bedrock AgentCore services are now supported by infrastructure as code (IaC) frameworks including the AWS Cloud Development Kit (AWS CDK), Terraform, and AWS CloudFormation templates. This integration brings the power of IaC directly to AgentCore so developers can provision, configure, and manage their AI agent infrastructure. In this post, we use CloudFormation templates to build an end-to-end application for a weather activity planner.

#bedrock#agentcore#cloudformation#integration#support

Today, we're announcing that Amazon Elastic VMware Service (Amazon EVS) now supports the ability to deploy multiple VMware NSX Tier-0 Gateways within VMware Software-Defined Data Centers (SDDCs), enabling enhanced network segmentation and more flexible routing configurations. Multiple NSX Tier-0 Gateways allow for better performance and scale by distributing network traffic across multiple NSX Edge Clusters. This latest enhancement enables improved network segmentation, allowing you to isolate different workload environments and maintain distinct security policies for each gateway. You can also use multiple gateways to create separate test environments for validating network configurations and performing gateway upgrades with minimal impact to production workloads. This architecture flexibility helps you align your network topology with specific business requirements while maintaining operational efficiency in running your VMware workloads on AWS with Amazon EVS. To learn more about this newest enhancement, read this re:Post article that walks you through the process of deploying multiple NSX Edge Clusters within your EVS environment. To get started with Amazon EVS, visit the product detail page and user guide.

#lex#ga#enhancement#support

Amazon Web Services announces the general availability of Amazon EC2 M4 Max Mac instances, powered by the latest Mac Studio hardware. Amazon EC2 M4 Max Mac instances are the next generation of EC2 Mac instances, enabling Apple developers to migrate their most demanding build and test workloads onto AWS. These instances are ideal for building and testing applications for Apple platforms such as iOS, macOS, iPadOS, tvOS, watchOS, visionOS, and Safari. Amazon EC2 M4 Max Mac instances offer up to 25% better application build performance compared to Amazon EC2 M1 Ultra Mac instances. M4 Max Mac instances are powered by the AWS Nitro System, providing up to 10 Gbps of network bandwidth and 8 Gbps of Amazon Elastic Block Store (Amazon EBS) storage bandwidth. These instances are built on Apple M4 Max Mac Studio computers featuring a 16-core CPU, 40-core GPU, 16-core Neural Engine, and 128 GB of unified memory. Amazon EC2 M4 Max Mac instances are available in US East (N. Virginia) and US West (Oregon). To learn more about Amazon EC2 M4 Max Mac instances, visit the Amazon EC2 Mac page.

#ec2

Amazon RDS for Oracle now supports database replicas for instances set up in the Oracle multi-tenant configuration. The Oracle multi-tenant configuration allows customers to host multiple, isolated pluggable databases in a single container database, enabling cost reduction through consolidation and easier management. With support for replicas in the multi-tenant configuration, customers can now distribute read workloads to a replica to scale workloads, or set up cross-Region replicas. In disaster recovery situations, customers can promote replicas to serve as a new standalone database, or execute a switchover to reverse roles between the primary database and the replica for a quick recovery. To set up replicas in the Oracle multi-tenant configuration, customers can create a replica in either mounted or read-only mode using the AWS Management Console, AWS CLI, or AWS SDK. Once a replica is set up, Amazon RDS for Oracle manages asynchronous physical replication between the primary and replica database instances using Oracle Data Guard. Using mounted mode replicas requires an Oracle Database Enterprise Edition (EE) license, and using read-only mode replicas requires additional Oracle Active Data Guard licenses. We recommend that customers consult their Oracle licensing expert to determine Oracle licensing requirements. Refer to the RDS for Oracle User Guide for more information, and Amazon RDS for Oracle pricing for available instance configurations, pricing, and region availability.
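Creating a mounted replica can be sketched with boto3's `rds.create_db_instance_read_replica` and its `ReplicaMode` parameter. The instance identifiers below are placeholders, and licensing caveats from the announcement apply.

```python
# Sketch of creating a mounted replica for an RDS for Oracle multi-tenant (CDB)
# instance. Identifiers are placeholders; mounted mode needs an Oracle EE
# license, and "open-read-only" additionally needs Active Data Guard licenses.

def mounted_replica_params(source_id, replica_id, kms_key_id=None):
    params = {
        "DBInstanceIdentifier": replica_id,
        "SourceDBInstanceIdentifier": source_id,
        "ReplicaMode": "mounted",   # or "open-read-only" for read scaling
    }
    if kms_key_id:                  # needed for encrypted cross-Region replicas
        params["KmsKeyId"] = kms_key_id
    return params

params = mounted_replica_params("orcl-cdb-prod", "orcl-cdb-dr")
# boto3.client("rds", region_name="us-west-2").create_db_instance_read_replica(**params)
```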

#rds#ga#support

Amazon Connect Step-by-Step Guides now enables managers to build more dynamic and responsive guided experiences. Managers can create conditional user interfaces that adapt based on user interactions, making workflows more efficient. For example, managers can configure dropdown menus to show or hide fields, change default values, or adjust required fields based on the input in prior fields, creating tailored experiences for different scenarios. In addition, Step-by-Step Guides can now automatically refresh data from Connect resources such as flow modules at specified intervals, ensuring agents always work with the most current information. Amazon Connect Step-by-Step Guides is available in the following AWS regions: US East (N. Virginia), US West (Oregon), Canada (Central), Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (London), and the AWS GovCloud (US-West) Region. To learn more and get started, visit the Amazon Connect webpage and documentation.

#ga#update

As organizations scale their observability and analytics capabilities across multiple AWS Regions and environments, maintaining consistent dashboards becomes increasingly complex. Teams often spend hours manually recreating dashboards, creating workspaces, linking data sources, and validating configurations across deployments—a repetitive and error-prone process that slows down operational visibility. The next generation OpenSearch UI in Amazon OpenSearch Service […]

lexopensearchopensearch servicerdsorganizations
#lex#opensearch#opensearch service#rds#organizations#ga

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8i and C8i-flex instances are available in the Asia Pacific (Sydney), and Europe (Frankfurt) regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. These C8i and C8i-flex instances offer up to 15% better price-performance, and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver up to 20% higher performance than C7i and C7i-flex instances, with even higher gains for specific workloads. The C8i and C8i-flex are up to 60% faster for NGINX web applications, up to 40% faster for AI deep learning recommendation models, and 35% faster for Memcached stores compared to C7i and C7i-flex. C8i-flex are the easiest way to get price performance benefits for a majority of compute intensive workloads like web and application servers, databases, caches, Apache Kafka, Elasticsearch, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. C8i instances are a great choice for all memory-intensive workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. C8i instances offer 13 sizes including 2 bare metal sizes and the new 96xlarge size for the largest applications. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information about the new C8i and C8i-flex instances visit the AWS News blog.
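As a minimal sketch, a C8i instance can be launched in one of the newly supported Regions with the AWS CLI (the AMI and subnet IDs are placeholders):

```shell
# Launch a C8i instance in Asia Pacific (Sydney)
aws ec2 run-instances \
  --region ap-southeast-2 \
  --instance-type c8i.large \
  --image-id ami-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --count 1
```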

lexec2kafka
#lex#ec2#kafka#ga#now-available

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8i instances are available in the Europe (London) region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. These C8i instances offer up to 15% better price-performance, and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver up to 20% higher performance than C7i instances, with even higher gains for specific workloads. The C8i are up to 60% faster for NGINX web applications, up to 40% faster for AI deep learning recommendation models, and 35% faster for Memcached stores compared to C7i. C8i instances are a great choice for all memory-intensive workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. C8i instances offer 13 sizes including 2 bare metal sizes and the new 96xlarge size for the largest applications. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information about the new C8i instances visit the AWS News blog.

ec2
#ec2#ga#now-available

AWS Config announces the launch of 13 additional managed Config rules for use cases such as security, durability, and operations. You can now search, discover, enable, and manage these additional rules directly from AWS Config and govern more use cases in your AWS environment. With this launch, you can enable these controls across your account or across your organization. For example, you can assess your security posture across Amazon Cognito user pools, Amazon EBS snapshots, AWS CloudFormation stacks, and more. Additionally, you can leverage Conformance Packs to group these new controls and deploy them across an account or an organization, streamlining your multi-account governance. For the full list of recently released rules, visit the AWS Config developer guide. For a description of each rule and the AWS Regions in which it is available, refer to the Config managed rules documentation. To start using Config rules, refer to our documentation. New rules launched: AURORA_GLOBAL_DATABASE_ENCRYPTION_AT_REST, CLOUDFORMATION_STACK_SERVICE_ROLE_CHECK, CLOUDFORMATION_TERMINATION_PROTECTION_CHECK, CLOUDFRONT_DISTRIBUTION_KEY_GROUP_ENABLED, COGNITO_USER_POOL_DELETE_PROTECTION_ENABLED, COGNITO_USER_POOL_MFA_ENABLED, COGNITO_USERPOOL_CUST_AUTH_THREAT_FULL_CHECK, EBS_SNAPSHOT_BLOCK_PUBLIC_ACCESS, ECS_CAPACITY_PROVIDER_TERMINATION_CHECK, ECS_TASK_DEFINITION_EFS_ENCRYPTION_ENABLED, ECS_TASK_DEFINITION_LINUX_USER_NON_ROOT, ECS_TASK_DEFINITION_WINDOWS_USER_NON_ADMIN, SES_SENDING_TLS_REQUIRED
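As a sketch, one of the new managed rules can be enabled in an account with the AWS CLI by referencing its source identifier:

```shell
# Enable the new Cognito MFA managed rule account-wide
aws configservice put-config-rule \
  --config-rule '{
    "ConfigRuleName": "cognito-user-pool-mfa-enabled",
    "Source": {
      "Owner": "AWS",
      "SourceIdentifier": "COGNITO_USER_POOL_MFA_ENABLED"
    }
  }'
```

For organization-wide deployment, the same source identifier can be used with `put-organization-config-rule` or packaged into a Conformance Pack.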

ecscloudformationcloudfront
#ecs#cloudformation#cloudfront#launch#ga

Amazon Neptune Analytics is now available in the US West (N. California), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Hong Kong), Europe (Stockholm), Europe (Paris), and South America (São Paulo) Regions. You can now create and manage Neptune Analytics graphs in these new Regions and run advanced graph analytics. Amazon Neptune is a serverless graph database for connected data that improves the accuracy of AI applications and lowers operational burden and costs. Neptune instantly scales graph workloads, removing the need to manage capacity. By modeling data as a graph, Neptune captures context that improves the accuracy and explainability of generative AI applications. To make AI application development easier, Neptune offers fully managed GraphRAG with Amazon Bedrock Knowledge Bases, and integrations with the Strands AI Agents SDK and popular agentic memory tools. It can also analyze tens of billions of relationships across structured and unstructured data within seconds, delivering strategic insights. Neptune is the only database and analytics engine that gives you the power of connected data with the enterprise capabilities and value of AWS. To get started, you can create a new Neptune Analytics graph using the AWS Management Console or AWS CLI. For more information on pricing and Region availability, refer to the Neptune pricing page and AWS Region Table.
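As a minimal sketch, a Neptune Analytics graph can be created in one of the new Regions with the AWS CLI (the graph name and capacity value are illustrative):

```shell
# Create a Neptune Analytics graph in Asia Pacific (Osaka)
aws neptune-graph create-graph \
  --region ap-northeast-3 \
  --graph-name my-analytics-graph \
  --provisioned-memory 16
```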

bedrock
#bedrock#now-available#integration#new-region

Amazon MSK Express brokers are a key component to dynamically scaling clusters to meet demand. Express based clusters deliver 3 times higher throughput, 20 times faster scaling capabilities, and 90% faster broker recovery compared to Amazon MSK Provisioned clusters. In addition, Express brokers support intelligent rebalancing for 180 times faster operation performance, so partitions are automatically and consistently well distributed across brokers. Intelligent rebalancing automatically tracks cluster health and triggers partition redistribution when resource imbalances are detected, maintaining performance across brokers. This post demonstrates how to use the intelligent rebalancing feature and build a custom solution that scales Express based clusters horizontally (adding and removing brokers) dynamically based on Amazon CloudWatch metrics and predefined schedules. The solution provides capacity management while maintaining cluster performance and minimizing overhead.
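A horizontal-scaling step like the one the post automates can be sketched with the AWS CLI; after brokers are added, intelligent rebalancing redistributes partitions automatically (the cluster ARN and version below are placeholders):

```shell
# Add brokers to an Express-based MSK cluster
aws kafka update-broker-count \
  --cluster-arn arn:aws:kafka:us-east-1:111122223333:cluster/my-cluster/abc123 \
  --current-version K3AEGXETSR30VB \
  --target-number-of-broker-nodes 6
```

In the solution described, a call like this would be triggered by Amazon CloudWatch alarms or a schedule rather than run by hand.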

mskcloudwatch
#msk#cloudwatch#support

AWS Resource Control Policies (RCPs) now provide support for Amazon Cognito and Amazon CloudWatch Logs. Resource control policies (RCPs) are a type of organization policy that you can use to manage permissions in your organization. RCPs offer central control over the maximum available permissions for resources in your organization. With this expansion, you can now use RCPs to manage permissions for Amazon Cognito and Amazon CloudWatch Logs resources. For example, you can create policies that prevent identities outside your organization from accessing these resources, helping you build a data perimeter and enforce baseline security standards across your AWS environment. RCPs are available in all AWS commercial Regions and AWS GovCloud (US) Regions. To learn more about RCPs and view the full list of supported AWS services, visit the Resource control policies (RCPs) documentation in the AWS Organizations User Guide.
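As a sketch of the data-perimeter use case, an RCP denying CloudWatch Logs access to principals outside the organization could be created as follows (the policy content and organization ID are illustrative, not a recommended baseline):

```shell
# Create an RCP that blocks principals outside the organization from
# acting on CloudWatch Logs resources
aws organizations create-policy \
  --name DenyExternalLogsAccess \
  --type RESOURCE_CONTROL_POLICY \
  --description "Data perimeter for CloudWatch Logs" \
  --content '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Deny",
      "Principal": "*",
      "Action": "logs:*",
      "Resource": "*",
      "Condition": {
        "StringNotEqualsIfExists": {"aws:PrincipalOrgID": "o-exampleorgid"}
      }
    }]
  }'
```

The policy must then be attached to a root, OU, or account with `aws organizations attach-policy`.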

rdscloudwatchorganizations
#rds#cloudwatch#organizations#ga#support#expansion

Amazon Bedrock AgentCore Browser now supports custom Chrome browser extensions, enabling automation for complex workflows that standard browser automation cannot handle alone. This enhancement builds upon AgentCore’s existing secure browser features, allowing users to upload Chrome-compatible extensions to S3 and automatically install them during browser sessions. The feature serves enterprise developers, automation engineers, and organizations across industries requiring specialized browser functionality within a secure environment. This new feature enables powerful use cases including custom authentication flows, automated testing, and improved web navigation with performance optimization through ad blocking. Organizations gain the ability to integrate third-party tools that operate as browser extensions, eliminating manual processes while maintaining security within the AgentCore Browser environment. This feature is available in all nine AWS Regions where Amazon Bedrock AgentCore Browser is available: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland). To learn more about implementing custom browser extensions in Amazon Bedrock AgentCore, visit the Browser documentation.

bedrockagentcorelexs3organizations
#bedrock#agentcore#lex#s3#organizations#ga

Today, AWS announces the general availability of the Amazon Elastic Block Store (Amazon EBS) optimized Amazon Elastic Compute Cloud (Amazon EC2) C8gb, M8gb, and R8gb instances in 48xlarge sizes. We are also offering C8gb and R8gb in metal-48xl sizes. These instances are powered by AWS Graviton4 processors to deliver up to 30% better compute performance than AWS Graviton3 processors. At up to 300 Gbps of EBS bandwidth, these instances offer the highest EBS performance among non-accelerated compute EC2 instances. Take advantage of the higher block storage performance offered by these new EBS optimized EC2 instances to scale the performance and throughput of a wide variety of workloads. For increased scalability, these instances offer sizes up to 48xlarge, including two metal sizes (C8gb and R8gb only), three memory-to-vCPU ratios, up to 300 Gbps of EBS bandwidth, and up to 400 Gbps of networking bandwidth. Offering up to 1,440K IOPS, these instances have the highest Amazon EBS IOPS performance in Amazon EC2. These new instances support Elastic Fabric Adapter (EFA) networking, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters. The new instance sizes are available in the US East (N. Virginia) and US West (Oregon) Regions. Metal sizes are only available in the US East (N. Virginia) Region. To learn more, see Amazon C8gb, M8gb, and R8gb Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page. To get started, see AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.

ec2graviton
#ec2#graviton#now-available#support

Amazon MQ now supports the ability for RabbitMQ 4 brokers to connect to JMS applications through the RabbitMQ JMS Topic Exchange plugin and JMS client. The JMS topic exchange plugin is enabled by default on all RabbitMQ 4 brokers, allowing you to use the JMS client to run your JMS 1.1, JMS 2.0, and JMS 3.1 applications on RabbitMQ. You can also use the RabbitMQ JMS client to send JMS messages to an AMQP exchange and consume messages from an AMQP queue to interoperate or migrate JMS workloads to AMQP workloads. To start using your JMS applications on RabbitMQ, simply select RabbitMQ 4.2 when creating a new broker using the M7g instance type through the AWS Management console, AWS CLI, or AWS SDKs, and then use the RabbitMQ JMS client to connect your applications. To learn more about the plugin, see the Amazon MQ release notes and the Amazon MQ developer guide. This plugin is available in all regions where Amazon MQ RabbitMQ 4 instances are available today.
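As a sketch, a RabbitMQ 4.2 broker on the M7g instance type can be created with the AWS CLI (the broker name, credentials, and exact engine-version string are placeholders):

```shell
# Create a single-instance RabbitMQ 4.2 broker; the JMS topic exchange
# plugin is enabled by default on RabbitMQ 4 brokers
aws mq create-broker \
  --broker-name my-rabbit-broker \
  --engine-type RABBITMQ \
  --engine-version 4.2 \
  --host-instance-type mq.m7g.large \
  --deployment-mode SINGLE_INSTANCE \
  --auto-minor-version-upgrade \
  --no-publicly-accessible \
  --users Username=admin,Password=example-password-1234
```

Once the broker is running, point the RabbitMQ JMS client at its AMQP endpoint to connect JMS 1.1, 2.0, or 3.1 applications.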

q developer
#q developer#support

AWS Security Agent now supports GitHub Enterprise Cloud, enabling customers to connect their GitHub Enterprise Organization and leverage AI-powered security capabilities across their private repositories. With this expansion, development teams can integrate security analysis directly into their GitHub workflows. Customers can now connect their GitHub Enterprise Organization to AWS Security Agent by installing the AWS Security Agent GitHub app with the required permissions. Once connected, the agent provides three key capabilities for private repositories. Automated Code Reviews: AWS Security Agent performs comprehensive security reviews on new pull requests, identifying vulnerabilities and verifying compliance with internal security requirements before code is merged. Penetration Testing Integration: Leverage your GitHub Enterprise code repositories during penetration testing activities, allowing the agent to analyze your codebase for potential security weaknesses and attack vectors. Automated Code Remediation: When security issues are identified during penetration testing, customers can choose to have AWS Security Agent automatically submit pull requests with recommended fixes, accelerating remediation workflows. This capability is available in the US East (N. Virginia) Region where AWS Security Agent operates. To get started, connect your GitHub Enterprise Organization to AWS Security Agent through the AWS Security Agent console. To learn more about AWS Security Agent, visit the product page.

#ga#integration#support#expansion

Amazon SageMaker HyperPod now provides enhanced troubleshooting capabilities for lifecycle scripts, making it easier to identify and resolve issues during cluster node provisioning. SageMaker HyperPod helps you provision resilient clusters for running AI/ML workloads and developing state-of-the-art models such as large language models (LLMs), diffusion models, and foundation models (FMs). When lifecycle scripts encounter issues during cluster creation or node operations, you now receive detailed error messages that include the specific CloudWatch log group and log stream names where you can find execution logs for lifecycle scripts. You can view these error messages by running the DescribeCluster API or by viewing the cluster details page in the SageMaker console. The console also provides a "View lifecycle script logs" button that navigates directly to the relevant CloudWatch log stream, making it easier to locate logs. Additionally, CloudWatch logs for lifecycle scripts now include specific markers to help you track lifecycle script execution progress, including indicators for when the lifecycle script log begins, when scripts are being downloaded, when downloads complete, and when scripts succeed or fail. These markers help you quickly identify where issues occurred during the provisioning process. These enhancements reduce the time required to diagnose and fix lifecycle script failures, helping you get your HyperPod clusters up and running faster. This feature is available in all AWS Regions where Amazon SageMaker HyperPod is supported. To learn more, see SageMaker HyperPod cluster management in the Amazon SageMaker Developer Guide.
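As a sketch, the enriched failure details can be retrieved from the DescribeCluster API with the AWS CLI (the cluster name is a placeholder):

```shell
# Surface the failure message, which now names the CloudWatch log group
# and log stream containing the lifecycle script execution logs
aws sagemaker describe-cluster \
  --cluster-name my-hyperpod-cluster \
  --query 'FailureMessage'
```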

sagemakerhyperpodcloudwatch
#sagemaker#hyperpod#cloudwatch#ga#enhancement#support

AWS Clean Rooms announces support for join and partition hints for SQL queries, enabling optimization of join strategies and data partitioning for improved query performance and reduced costs. This launch enables you to apply SQL hints to your queries using comment-style syntax in pre-approved analysis templates as well as ad hoc SQL queries. You can now optimize large table joins using a broadcast join hint and you can improve data distribution with partition hints for better parallel processing. For example, a measurement company analyzing how many households viewed a live sports event uses a broadcast join hint on their lookup table to improve query performance and reduce costs. With AWS Clean Rooms, customers can create a secure data clean room in minutes and collaborate with any company on AWS or Snowflake to generate unique insights about advertising campaigns, investment decisions, and research and development. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.
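As a sketch of the broadcast join example, a hinted ad hoc query might be submitted as follows (the comment-style hint, table names, membership identifier, and bucket are all illustrative; consult the Clean Rooms documentation for the exact hint syntax your collaboration supports):

```shell
# Run an ad hoc query with a broadcast join hint on the small lookup table
aws cleanrooms start-protected-query \
  --type SQL \
  --membership-identifier a1b2c3d4-5678-90ab-cdef-example11111 \
  --sql-parameters 'queryString=SELECT /*+ BROADCAST(lookup) */ COUNT(DISTINCT v.household_id) FROM viewership v JOIN lookup ON v.event_id = lookup.event_id' \
  --result-configuration '{"outputConfiguration":{"s3":{"resultFormat":"CSV","bucket":"my-results-bucket","keyPrefix":"queries/"}}}'
```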

#launch#support

In this post, we present a scalable solution that addresses the challenge of migrating your large binary objects (LOBs) from Oracle to AWS by using a streaming architecture that separates LOB storage from structured data. This approach avoids size constraints, reduces Oracle licensing costs, and preserves data integrity throughout extended migration periods.

s3kafka
#s3#kafka

Amazon EMR Serverless now supports encrypting local disks with AWS Key Management Service (KMS) customer managed keys (CMKs). You can now meet strict regulatory and compliance requirements with additional encryption options beyond default AWS-owned keys, giving you greater control over your encryption strategy. Amazon EMR Serverless is a deployment option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. Local disks on EMR Serverless workers are encrypted by default using AWS-owned keys. With this launch, customers who have strict regulatory and compliance needs can encrypt local disks with AWS KMS customer managed keys (CMKs) in the same account or from another account. This integration is supported on new or existing EMR Serverless applications and on all supported EMR release versions. You can specify the AWS KMS customer managed key at the application level where it applies to all workloads submitted on the application or you can specify the AWS KMS customer managed key for a specific job run or interactive session. This feature is available in all supported EMR Releases and in all AWS Regions where Amazon EMR Serverless is available including AWS GovCloud (US) and China regions. To learn more, see Local Disk Encryption with AWS KMS CMK in the Amazon EMR Serverless User Guide.

emr
#emr#launch#integration#support

Today, Amazon Bedrock introduces the expansion of the Reserved service tier designed for workloads requiring predictable performance and guaranteed tokens-per-minute capacity. The Reserved tier provides the ability to reserve prioritized compute capacity, keeping service levels predictable for your mission critical applications. It also includes the flexibility to allocate different input and output tokens-per-minute capacities to match the exact requirements of your workload and control cost. This is particularly valuable because many workloads have asymmetric token usage patterns. For instance, summarization tasks consume many input tokens but generate fewer output tokens, while content generation applications require less input and more output capacity. When your application needs more tokens-per-minute capacity than what you reserved, the service automatically overflows to the pay-as-you-go Standard tier, ensuring uninterrupted operations. The Reserved tier is available today for Anthropic Claude Sonnet 4.5 in AWS GovCloud (US-West). Customers can reserve capacity for a 1-month or 3-month duration. Customers pay a fixed price per 1K tokens-per-minute and are billed monthly. The Amazon Bedrock Reserved tier is available for customers in AWS GovCloud (US-West) via the GOV-CRIS cross-region profile. With the expansion of the Reserved service tier, Amazon Bedrock continues to provide more choice to customers, helping them develop, scale, and deploy applications and agents that improve productivity and customer experiences while balancing performance and cost requirements. For more information about the AWS Regions where the Amazon Bedrock Reserved tier is available, refer to the Documentation. To get access to the Reserved tier, please contact your AWS account team.

bedrocklex
#bedrock#lex#expansion

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8gn instances, powered by the latest-generation AWS Graviton4 processors, are available in the Asia Pacific (Mumbai), Africa (Cape Town), Europe (Ireland), Europe (London), and Canada West (Calgary) Regions. The new instances provide up to 30% better compute performance than Graviton3-based Amazon EC2 C7gn instances. Amazon EC2 C8gn instances feature the latest 6th generation AWS Nitro Cards, and offer up to 600 Gbps network bandwidth, the highest network bandwidth among network optimized EC2 instances. Take advantage of the enhanced networking capabilities of C8gn to scale performance and throughput, while optimizing the cost of running network-intensive workloads such as network virtual appliances, data analytics, and CPU-based artificial intelligence and machine learning (AI/ML) inference. For increased scalability, C8gn instances offer instance sizes up to 48xlarge, up to 384 GiB of memory, and up to 60 Gbps of bandwidth to Amazon Elastic Block Store (EBS). C8gn instances support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters. C8gn instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon, N. California), Europe (Frankfurt, Stockholm, Ireland, London), Asia Pacific (Singapore, Malaysia, Sydney, Thailand, Mumbai), Middle East (UAE), Africa (Cape Town), and Canada West (Calgary). To learn more, see Amazon C8gn Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page. To get started, see AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.

ec2rdsgraviton
#ec2#rds#graviton#ga#now-available#support

AWS now includes the AWS Identity and Access Management (IAM) and AWS Organizations policy’s Amazon Resource Name (ARN) in access denied error messages in same account and same organization scenarios. This allows you to quickly identify the exact policy responsible for the denied access and take action to troubleshoot the issue. Before this launch, customers had to identify the root cause of access denied errors based only on the policy type in the error message. This launch expedites troubleshooting when you have multiple policies of the same type, as you can directly see which policy to address for explicit deny cases. The error message now includes the policy ARN for Service Control Policies (SCP), Resource Control Policies (RCP), identity-based policies, session policies, and permission boundaries. This additional context will gradually become available across AWS services in all AWS regions. To learn more, refer to IAM documentation.

iamorganizations
#iam#organizations#launch#ga

Today, AWS announced enhanced scheduling orchestration to track AWS tagging events, self-service troubleshooting via informational resource tags, an optional EC2 insufficient-capacity retry flow using alternate instance types, and automatic creation of a dedicated EventBridge bus for scheduling events for Instance Scheduler (IS) on AWS. IS's orchestration and fan-out mechanisms have been re-architected to enable customers to track AWS tagging events, allowing the product to more intelligently sequence and distribute scheduling operations, improving scaling performance and addressing cost-scaling concerns. The product now enables distributed cloud engineer personas to perform self-service troubleshooting in their spoke accounts through informational tags applied to their resources, without relying on a central cloud administrator. In addition, an optional Insufficient Capacity Error Retry flow has been added to automatically retry failed start actions using alternate instance types when EC2 encounters insufficient capacity errors, ensuring workloads start reliably even in constrained Availability Zones or Regions. Lastly, Instance Scheduler on AWS now automatically creates a dedicated EventBus for scheduling-related events, streamlining integrations and automation workflows. This update improves Instance Scheduler's scalability, reduces operational overhead, and increases workload reliability across complex customer environments. You can accelerate issue resolution and boost operational efficiency by empowering distributed cloud engineers to troubleshoot independently. You can enhance overall workload resilience by improving handling of EC2 capacity shortages, and simplify integrations by streamlining event routing through the new EventBus to support more extensible automation workflows. To learn more about Instance Scheduler, visit the Product Page or contact your AWS account team.

lexec2eventbridge
#lex#ec2#eventbridge#update#integration#support

Second-generation AWS Outposts racks can now be shipped and installed at your data center and on-premises locations in Argentina, Bangladesh, Colombia, Dominican Republic, Ecuador, India, Kazakhstan, Mexico, Morocco, Nigeria, Oman, Panama, Qatar, Senegal, Serbia, South Africa, Republic of Korea, Taiwan, Thailand, and Uruguay. Outposts racks extend AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or colocation space for a truly consistent hybrid experience. Outposts racks are ideal for workloads requiring low-latency access to on-premises systems, local data processing, data residency compliance, and migration of applications with local system dependencies. Second-generation Outposts racks support the latest generation of x86-powered Amazon Elastic Compute Cloud (Amazon EC2) instances, starting with C7i, M7i, and R7i instances. These instances provide up to 40% better performance compared to C5, M5, and R5 instances on first-generation Outposts racks. Second-generation Outposts racks also offer simplified network scaling and configuration, and support a new category of accelerated networking Amazon EC2 instances optimized for ultra-low latency and high throughput needs. With the availability of second-generation Outposts racks in the above countries, you can run AWS services locally in your on-premises facilities to maintain data residency within your country, while connecting to a supported AWS Region for management and operations. To learn more about second-generation Outposts racks, read this blog post and user guide. For the most updated list of countries and territories and the AWS Regions where second-generation Outposts racks are supported, check out the Outposts rack FAQs page.

ec2outposts
#ec2#outposts#ga#update#support

You can now use the AWS Transfer Family Terraform module to deploy Transfer Family web apps, which enable end users to transfer files to and from Amazon S3 over a web interface. This release streamlines centralized provisioning of web apps with federated authentication and user access controls, enabling consistent, repeatable deployments through infrastructure as code. With Transfer Family web apps, you can provide your workforce with a fully managed, branded web portal to browse, upload, and download data in S3. In a single deployment, this module allows you to programmatically provision your web apps that authenticate users through AWS IAM Identity Center using your existing identity provider and Amazon S3 Access Grants for fine-grained user permissions. An included end-to-end example shows you how to assign and optionally create IAM Identity Center users and groups, configure S3 Access Grants, set up the web app, and enable security auditing through Amazon CloudTrail. You can get started by downloading the new module from the Terraform Registry. To learn more about Transfer Family web apps, visit the user guide. To see all the regions where Transfer Family web apps is available, visit the AWS Region table.

s3iamiam identity center
#s3#iam#iam identity center#support

Amazon Elastic Container Registry (ECR) now enables you to share common image layers across repositories within a registry through a capability called blob mounting. This feature is especially valuable if you manage multiple microservices or applications built from common base images. With blob mounting, you can achieve faster image pushes by reusing existing layers instead of re-uploading identical content, and reduce storage costs by storing common layers once and referencing them across repositories. Getting started is simple: enable the registry-level setting through the ECR console or AWS CLI. Once enabled, ECR automatically handles layer sharing when you push images. Blob mounting is available in all AWS commercial and AWS GovCloud (US) Regions. To learn more about blob mounting, please visit our documentation.

#support

Amazon SageMaker Unified Studio now supports cross-Region subscriptions for comprehensive and flexible data access and governance. With cross-Region support, you can subscribe to AWS Glue tables and views, as well as Amazon Redshift tables and views published in a different AWS Region than your project. This capability helps break down data silos and enable better collaboration across your organization by allowing teams to access curated data assets from any AWS Region without manual replication. To get started with cross-Region subscriptions, log into SageMaker Unified Studio, or use the Amazon DataZone API, SDK, or AWS CLI. Cross-Region subscriptions support is available in all AWS Regions where SageMaker Unified Studio is supported. To learn more, see the SageMaker Unified Studio user guide.

sagemakerunified studiolexredshiftglue
#sagemaker#unified studio#lex#redshift#glue#ga

Bazaarvoice is an Austin-based company powering a world-leading reviews and ratings platform. Our system processes billions of consumer interactions through ratings, reviews, images, and videos, helping brands and retailers build shopper confidence and drive sales by using authentic user-generated content (UGC) across the customer journey. In this post, we show you the steps we took to migrate our workloads from self-hosted Kafka to Amazon Managed Streaming for Apache Kafka (Amazon MSK). We walk you through our migration process and highlight the improvements we achieved after this transition.

kafkamsk
#kafka#msk#improvement

In this post, we'll guide you through building multimodal RAG applications. You'll learn how multimodal knowledge bases work, how to choose the right processing strategy based on your content type, and how to configure and implement multimodal retrieval using both the console and code examples.

bedrock
#bedrock

AWS IoT Device Management now offers the managed integrations feature in the Middle East (UAE) region. Organizations operating in this region can now better serve their local customers, helping them build unified Internet of Things (IoT) solutions that can easily onboard and manage diverse IoT devices through a single interface, regardless of connection type - direct, hub-based, or third-party cloud-based. Managed integrations provides developers with a unified interface and device SDKs supporting ZigBee, Z-Wave, Matter and Wi-Fi protocols. The feature includes partner built cloud-to-cloud connectors and 80+ device data model templates based on AWS's implementation of the Matter data model standard. These capabilities allow developers to rapidly integrate devices into end user applications, such as home security, energy management, and elderly care monitoring. The managed integrations feature is available in Canada (Central), Europe (Ireland) and Middle East (UAE) regions. To learn more, refer to the developer guide and get started on the AWS IoT console.

organizations
#organizations#ga#integration#support

AWS Glue is now available in the Asia Pacific (New Zealand) Region, enabling customers to build and run their ETL workloads closer to their data sources in this region. AWS Glue is a serverless data integration service that makes it simple to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides both visual and code-based interfaces to make data integration simpler so you can analyze your data and put it to use in minutes instead of months. To learn more, visit the AWS Glue product page and our documentation. For AWS Glue Region availability, please see the AWS Region table.

glue
#glue#now-available#integration

Amazon Quick Suite SPICE engine now supports higher scale, faster ingestion, and broader data types to power advanced analytics and AI-driven workloads. With this launch, customers can load up to 2TB of data per dataset, doubling the previous 1TB limit, when using the new data preparation experience. Despite the increased dataset size, SPICE continues to deliver strong performance, with ingestion further optimized to enable even faster data loading and refresh to reduce time to insight. We’ve also expanded SPICE’s data type support by increasing string length limits from 2K to 64K Unicode characters and extending the supported timestamp range from year 1400 back to year 0001. As Quick Suite customers bring richer, more complex, and increasingly AI-driven workloads into SPICE, these enhancements enable broader data coverage, faster data onboarding, and more powerful analytics, without compromising performance. To learn more, visit our documentation. The new SPICE dataset size limit is now available in Amazon Quick Sight Enterprise Edition across all supported Amazon Quick Sight regions.

amazon qlex
#amazon q#lex#launch#now-available#enhancement#support

Amazon RDS for Oracle now supports bare metal instances with Bring Your Own License (BYOL) for Oracle Standard Edition 2. You can use M7i, R7i, X2iedn, X2idn, X2iezn, M6i, M6id, M6in, R6i, R6id, and R6in bare metal instances at a 25% lower price compared to equivalent virtualized instances. With bare metal instances, you may be able to reduce your commercial database license and support costs, since they provide full visibility into the number of CPU cores and sockets of the underlying server. Most bare metal instances have 2 sockets, while db.m7i.metal-24xl and db.r7i.metal-24xl instances each have a single socket. Consult your legal or licensing partner to determine if you can use bare metal instances with Oracle Standard Edition 2 and if you can reduce license and support costs. Bare metal instances are available with Bring Your Own License (BYOL) for Oracle Enterprise Edition and Standard Edition 2. Refer to Amazon RDS for Oracle Pricing for available instance configurations, pricing, and region availability.

rds
#rds#ga#support

Today, Amazon announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) G7e instances, accelerated by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. G7e instances offer up to 2.3x inference performance compared to G6e. Customers can use G7e instances to deploy large language models (LLMs), agentic AI models, multimodal generative AI models, and physical AI models. G7e instances offer the highest performance for spatial computing workloads as well as workloads that require both graphics and AI processing capabilities. G7e instances feature up to 8 NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, with 96 GB of memory per GPU, and 5th Generation Intel Xeon processors. They support up to 192 virtual CPUs (vCPUs) and up to 1600 Gbps of Elastic Fabric Adapter networking bandwidth. G7e instances support NVIDIA GPUDirect Peer to Peer (P2P), which boosts performance for multi-GPU workloads. Multi-GPU G7e instances also support NVIDIA GPUDirect Remote Direct Memory Access (RDMA) with EFAv4 in EC2 UltraClusters, reducing latency for small-scale multi-node workloads. G7e instances are available in the following AWS Regions: US East (N. Virginia) and US East (Ohio). You can purchase G7e instances as On-Demand Instances, Spot Instances, or as part of Savings Plans. To get started, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit G7e instances.

ec2
#ec2#generally-available#support

On January 20, 2026, Amazon announced quarterly security and critical updates for Amazon Corretto Long-Term Supported (LTS) versions of OpenJDK. Corretto 25.0.2, 21.0.10, 17.0.18, 11.0.30, and 8u482 are now available for download. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK. Visit the Corretto home page to download Corretto 25, Corretto 21, Corretto 17, Corretto 11, or Corretto 8. You can also get the updates on your Linux system by configuring a Corretto Apt, Yum, or Apk repo. Feedback is welcome!

#now-available#update#support

AWS Graviton4-based R8g database instances are now generally available for Amazon Aurora (MySQL and PostgreSQL compatibility) and Amazon RDS for PostgreSQL, MySQL, and MariaDB in additional Asia Pacific regions (Hong Kong, Osaka, and Jakarta). R8g instances are now supported for Amazon Aurora with MySQL compatibility and Amazon RDS for PostgreSQL, MySQL, and MariaDB in Asia Pacific (Seoul and Singapore), and Canada (Central) regions, expanding on the previous launch of R8g support for Amazon Aurora with PostgreSQL compatibility in these three regions. Additionally, Amazon Aurora with MySQL and PostgreSQL compatibility now also supports R7i database instances in Asia Pacific (Hyderabad) and R7g database instances in Africa (Cape Town). AWS Graviton4-based instances provide up to 40% performance improvement and up to 29% price/performance improvement for on-demand pricing over Graviton3-based instances of equivalent sizes on Amazon Aurora and Amazon RDS databases, depending on database engine, version, and workload. Built on the AWS Nitro System, the new R8g database instances introduce 24xlarge and 48xlarge sizes, delivering up to 192 vCPUs, an 8:1 ratio of memory to vCPU with the latest DDR5 memory, up to 50Gbps enhanced networking bandwidth, and up to 40Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). You can easily launch R8g, R7g, or R7i database instances through the Amazon RDS Management Console or by using the AWS Command Line Interface (CLI). For detailed information about specific engine versions that support these database instance types, please refer to the Aurora and RDS documentation. For complete information on pricing and regional availability, please refer to the Amazon RDS pricing page.

rdsgraviton
#rds#graviton#launch#generally-available#ga#improvement

Amazon Relational Database Service (Amazon RDS) now supports faster Blue/Green Deployments switchover, reducing your primary database, or writer node, upgrade downtime to typically five seconds or lower for single-Region configurations. Blue/Green Deployment creates a staging environment (Green) to test changes while keeping production (Blue) safe, enabling seamless switchover that requires no application endpoint changes. Your database writer instance downtime during switchover varies based on your connection method. For single-Region configurations, applications connecting directly to the database endpoint experience typically five seconds or lower downtime, while those using the AWS Advanced JDBC Driver typically see two seconds or lower due to eliminated DNS propagation delays. You can use Amazon RDS Blue/Green Deployments for deploying changes to production, such as major version database engine upgrades, maintenance updates, and scaling instances. Support for faster RDS Blue/Green Deployments switchover for single-Region configurations is available for Amazon Aurora and Amazon RDS database engines including PostgreSQL, MySQL, and MariaDB in all AWS regions. In a few clicks, update your databases using RDS Blue/Green Deployments via the Amazon RDS Console or Amazon RDS CLI. Learn more about RDS Blue/Green Deployments and the supported engine versions here.

rds
#rds#ga#update#support

Building on our recent launch of customizable tables and pivot tables, Amazon Quick Sight now enables readers to add or remove fields, change aggregations, and modify formatting directly in dashboards—all without requiring updates from dashboard authors. These enhanced capabilities empower readers with even greater flexibility to tailor their data views for specific analytical needs. For example, sales managers can add revenue breakdowns by product category to identify growth opportunities, while finance teams can change aggregations from sum to average to better understand spending patterns across departments. These new customization features are now available in Amazon Quick Sight Enterprise Edition across all supported Amazon Quick Sight regions. To get started with these new customization features, see our blog post.

amazon qlexrds
#amazon q#lex#rds#launch#ga#now-available

Amazon CloudWatch Database Insights expands the availability of its on-demand analysis experience to four additional Regions - Asia Pacific (New Zealand), Asia Pacific (Taipei), Asia Pacific (Thailand), and Mexico (Central). CloudWatch Database Insights is a monitoring and diagnostics solution that helps database administrators and developers optimize database performance by providing comprehensive visibility into database metrics, query analysis, and resource utilization patterns. This feature leverages machine learning models to help identify performance bottlenecks during the selected time period, and gives advice on what to do next. Previously, database administrators had to manually analyze performance data, correlate metrics, and investigate root cause. This process is time-consuming and requires deep database expertise. With this launch, you can now analyze database performance monitoring data for any time period with automated intelligence. The feature automatically compares your selected time period against normal baseline performance, identifies anomalies, and provides specific remediation advice. Through intuitive visualizations and clear explanations, you can quickly identify performance issues and receive step-by-step guidance for resolution. This automated analysis and recommendation system reduces mean-time-to-diagnosis from hours to minutes. You can get started with this feature by enabling the Advanced mode of CloudWatch Database Insights on your Amazon Aurora and Amazon RDS databases using the RDS service console, AWS APIs, the AWS SDK, or AWS CloudFormation. Please refer to Aurora documentation or RDS documentation to get started.

rdscloudformationcloudwatch
#rds#cloudformation#cloudwatch#launch#ga#now-available

Amazon Elastic VMware Service (Amazon EVS) now supports the ability to specify supported combinations of VMware Cloud Foundation (VCF) and ESX software versions when setting up your EVS environments and hosts. Amazon EVS lets you run VCF natively within your Amazon Virtual Private Cloud (VPC), powered by AWS Nitro EC2 bare-metal instances. Amazon EVS automates deployment of a complete VCF environment in hours using either an intuitive step-by-step configuration workflow on the AWS console or the AWS Command Line Interface (CLI). This newest enhancement addresses the critical need for version flexibility to help you migrate workloads to AWS faster, reduce operational complexity and risk, and meet data center exit deadlines with Amazon EVS. With software versioning support, you can now specify a VCF version when creating new environments using the CreateEnvironment API and select an ESX version when adding new hosts to existing environments using the CreateEnvironmentHost API. You can also query supported version combinations with the new GetVersions API. As part of this capability, we're also adding support for new environment deployments with VCF 5.2.2. To get started, visit the Amazon EVS product detail page and user guide.

lexec2
#lex#ec2#enhancement#support

Amazon EC2 High Memory U7i instances are available in new regions. U7i-6tb.112xlarge instances are now available in AWS Asia Pacific (Thailand, Sydney, Singapore), Canada (Central), and AWS GovCloud (US-East), u7i-8tb.112xlarge instances are now available in AWS South America (Sao Paulo), and u7in-16tb.224xlarge instances are now available in AWS GovCloud (US-East). U7i instances are part of the AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7i-6tb instances offer 6TiB of DDR5 memory, U7i-8tb instances offer 8TiB of DDR5 memory, and U7in-16tb instances offer 16TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment. U7i-6tb and U7i-8tb instances offer 448 vCPUs, support up to 100Gbps Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7in-16tb instances offer 896 vCPUs, support up to 100Gbps Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 200Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.

ec2
#ec2#ga#now-available#support#new-region

AWS now supports multiple local gateway (LGW) routing domains on AWS Outposts racks to simplify network segmentation. Network segmentation is the practice of splitting a computer network into isolated subnetworks, or network segments. This reduces the attack surface so that if a host on one network segment is compromised, the hosts on the other network segments are not affected. Many customers in regulated industries such as manufacturing, health care and life sciences, banking, and others implement network segmentation as part of their on-premises network security standards to reduce the impact of a breach and help address compliance requirements.

rdsoutposts
#rds#outposts#ga#support

Today, Amazon Bedrock announces the expansion of the Reserved service tier designed for workloads requiring predictable performance and guaranteed tokens-per-minute capacity. The Reserved tier provides the ability to reserve prioritized compute capacity, keeping service levels predictable for your mission-critical applications. It also includes the flexibility to allocate different input and output tokens-per-minute capacities to match the exact requirements of your workload and control cost. This is particularly valuable because many workloads have asymmetric token usage patterns. For instance, summarization tasks consume many input tokens but generate fewer output tokens, while content generation applications require less input and more output capacity. When your application needs more tokens-per-minute capacity than what you reserved, the service automatically overflows to the pay-as-you-go Standard tier, ensuring uninterrupted operations. The Reserved tier is available today for Anthropic Claude Opus 4.5 and Claude Haiku 4.5. Customers can reserve capacity for a 1-month or 3-month duration. Customers pay a fixed price per 1K tokens-per-minute and are billed monthly. With the expansion of the Reserved service tier, Amazon Bedrock continues to provide more choice to customers, helping them develop, scale, and deploy applications and agents that improve productivity and customer experiences while balancing performance and cost requirements. For more information about the AWS Regions where the Amazon Bedrock Reserved tier is available, refer to the documentation. To get access to the Reserved tier, please contact your AWS account team.
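The overflow behavior is easy to picture as a per-minute split between reserved capacity and the Standard tier. This is a toy sketch with illustrative numbers, not Bedrock's actual accounting:

```python
def split_usage(demand_tpm: int, reserved_tpm: int) -> dict:
    """Split one minute's token demand between reserved capacity and
    pay-as-you-go overflow: traffic beyond the reserved tokens-per-minute
    automatically overflows to the Standard tier."""
    reserved_used = min(demand_tpm, reserved_tpm)
    overflow = max(0, demand_tpm - reserved_tpm)
    return {"reserved": reserved_used, "standard_overflow": overflow}

# Asymmetric reservation for a summarization workload: more input than
# output capacity is reserved (numbers are made up for illustration).
input_split = split_usage(demand_tpm=120_000, reserved_tpm=100_000)
output_split = split_usage(demand_tpm=8_000, reserved_tpm=20_000)
print(input_split)   # input demand partly overflows to the Standard tier
print(output_split)  # output demand is fully served by reserved capacity
```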

bedrocklex
#bedrock#lex#expansion

Amazon Managed Workflows for Apache Airflow (MWAA) is now available in AWS Region Asia Pacific (Thailand). Amazon MWAA is a managed service for Apache Airflow that lets you use the same familiar Apache Airflow platform as you do today to orchestrate your workflows and enjoy improved scalability, availability, and security without the operational burden of having to manage the underlying infrastructure. Learn more about using Amazon MWAA on the product page. Please visit the AWS region table for more information on AWS regions and services. To learn more about Amazon MWAA visit the Amazon MWAA documentation. Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.

#now-available

AWS Outposts racks now support multiple local gateway (LGW) routing domains, enabling you to create up to 10 isolated routing domains per Outpost, each with independent route tables and BGP sessions to your on-premises network. This feature provides traffic separation between routing domains and enables both customer-owned IP (CoIP) and direct VPC routing (DVR) modes on the same Outpost. With multiple LGW routing domains, you can segment on-premises network connectivity for different departments or business units sharing an Outpost. Each routing domain maintains its own LGW VIF Group, LGW Route Table, and VPC associations, preventing traffic from crossing between domains. You can configure multiple LGW routing domains through the AWS Management Console or AWS CLI. Multiple LGW routing domains is available on second-generation Outposts racks at no additional charge. See the FAQs for Outposts racks for the latest list of supported AWS Regions. To learn more about implementation details and best practices, check out this blog post or visit our technical documentation.

outposts
#outposts#ga#support

In this post, we show you how fine-tuning enabled a 33% reduction in dangerous medication errors (Amazon Pharmacy), an 80% reduction in engineering human effort (Amazon Global Engineering Services), and content quality assessments improving from 77% to 96% accuracy (Amazon A+). This post details the techniques behind these outcomes: from foundational methods like Supervised Fine-Tuning (SFT, or instruction tuning) and Proximal Policy Optimization (PPO), to Direct Preference Optimization (DPO) for human alignment, to cutting-edge reasoning optimizations such as Group Relative Policy Optimization (GRPO), Direct Advantage Policy Optimization (DAPO), and Group Sequence Policy Optimization (GSPO) purpose-built for agentic systems.

Palo Alto Networks’ Device Security team wanted to detect early warning signs of potential production issues to give SMEs more time to react to these emerging problems. They partnered with the AWS Generative AI Innovation Center (GenAIIC) to develop an automated log classification pipeline powered by Amazon Bedrock. In this post, we discuss how Amazon Bedrock, through Anthropic’s Claude Haiku model, and Amazon Titan Text Embeddings work together to automatically classify and analyze log data. We explore how this automated pipeline detects critical issues, examine the solution architecture, and share implementation insights that have delivered measurable operational improvements.
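A minimal sketch of the two request shapes such a pipeline sends to Bedrock, assuming the Anthropic Messages API for Claude Haiku and the Titan Text Embeddings body format. The model IDs and sample log line are illustrative, and the actual `invoke_model` calls are commented out so the snippet runs offline:

```python
import json

# Illustrative log line; a real pipeline would stream these from devices.
log_line = "2025-01-20 ERROR device-fw: checksum mismatch on update partition"

# 1) Embed the log line (Titan Text Embeddings request body).
embed_request = {"inputText": log_line}

# 2) Ask Claude Haiku to classify it (Anthropic Messages request body).
classify_request = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 100,
    "messages": [{
        "role": "user",
        "content": f"Classify this device log line as CRITICAL, WARNING, or INFO:\n{log_line}",
    }],
}
print(json.dumps(classify_request, indent=2))

# With AWS credentials configured (model IDs are examples):
# import boto3
# rt = boto3.client("bedrock-runtime")
# emb = rt.invoke_model(modelId="amazon.titan-embed-text-v2:0",
#                       body=json.dumps(embed_request))
# ans = rt.invoke_model(modelId="anthropic.claude-3-haiku-20240307-v1:0",
#                       body=json.dumps(classify_request))
```

The post covers the full architecture around these calls (batching, thresholds, alerting); this only shows the per-log-line request shapes.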

bedrocknova
#bedrock#nova#improvement

Second-generation AWS Outposts racks are now supported in the South America (Sao Paulo) and Europe (Stockholm) Regions. Outposts racks extend AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or colocation space for a truly consistent hybrid experience. Organizations from startups to enterprises and the public sector in and outside of Europe and South America can now order their Outposts racks connected to these new supported regions, optimizing for their latency and data residency needs. Outposts allows customers to run workloads that need low latency access to on-premises systems locally while connecting back to their home Region for application management. Customers can also use Outposts and AWS services to manage and process data that needs to remain on-premises to meet data residency requirements. This regional expansion provides additional flexibility in the AWS Regions that customers’ Outposts can connect to. To learn more about second-generation Outposts racks, read this blog post and user guide. For the most updated list of countries and territories and the AWS Regions where second-generation Outposts racks are supported, check out the Outposts rack FAQs page.

lexorganizationsoutposts
#lex#organizations#outposts#ga#update#support

Amazon S3 Storage Lens provides organization-wide visibility into your storage usage and activity. S3 Storage Lens is now available in AWS GovCloud (US) Regions, bringing metrics to help you optimize storage costs, identify data protection opportunities, and improve application performance. S3 Storage Lens provides a single view of object storage usage and activity across thousands of accounts in an organization, with drill-downs to generate insights at multiple aggregation levels. You can optimize storage costs by identifying prefixes with incomplete multipart uploads or buckets accumulating non-current object versions. You can identify buckets that don’t follow your data protection best practices, such as using S3 Cross-Region Replication to replicate data across AWS Regions or S3 Versioning to keep multiple versions of an object. With the newly added performance metrics, you can identify application performance constraints—for example, using request and object size distribution metrics to detect inefficient access patterns or tracking cross-Region data transfer to reduce latency and costs. Amazon S3 Storage Lens is available in all AWS Regions. S3 Storage Lens is pre-configured to receive free metrics by default for all customers and 14 days of historical data. For more detailed visibility with up to 15 months data retention, you can upgrade to S3 Storage Lens advanced metrics. To learn more about S3 Storage Lens metrics, including free and advanced metrics, refer to the documentation. For S3 Storage Lens Advanced pricing details, visit the Amazon S3 pricing page.

s3
#s3#ga#now-available

Amazon S3 on Outposts is now available on second-generation AWS Outposts racks for your data residency, low latency, and local data processing use cases on-premises. S3 on Outposts on second-generation Outposts racks offers three storage tiers: 196 TB, 490 TB, and 786 TB. Choose the storage tier that matches your workload, whether for production workloads, backups, or archival workloads. With S3 on Outposts, you can store, secure, retrieve, and control access to your data using familiar S3 APIs and features. AWS Outposts is a fully managed service that extends AWS infrastructure, services, and tools to virtually any data center, co-location space, or on-premises facility for a consistent hybrid experience. S3 on Outposts on second-generation Outposts racks is available in all AWS Regions and countries/territories where these racks are available. To learn more, visit the S3 on Outposts page or read our documentation.

s3outposts
#s3#outposts#now-available

Today, AWS Deadline Cloud announces integration with Foundry Nuke CopyCat, allowing you to run machine learning training jobs for visual effects in the cloud. AWS Deadline Cloud is a fully managed service that simplifies render management for computer-generated 2D/3D graphics and visual effects for films, TV shows, commercials, games, and industrial design. CopyCat learns visual effects, such as color corrections, beauty finishes, or deblurring, from a set of adjusted frames and automatically applies these adjustments across entire sequences. Instead of cleaning up frames one by one, you can fix a handful of representative frames and CopyCat will automatically apply these changes to the rest of your footage, saving you significant amounts of valuable time. With this integration, you can now train models by submitting CopyCat training jobs directly to your Deadline Cloud render farm. This gives you the ability to scale and run multiple training workloads in parallel while freeing up your artist workstations for creative work. You can monitor and track training jobs alongside other render jobs through the Deadline Cloud interface for one simple view across an entire project. AWS Deadline Cloud with Nuke CopyCat integration is available in all AWS Regions where AWS Deadline Cloud is supported. To get started, visit our CopyCat integration user guide.

#ga#integration#support

AWS Clean Rooms announces support for parameters in PySpark analysis templates, offering increased flexibility for organizations and their partners to scale their privacy-enhanced data collaboration use cases. With this launch, you can create a single PySpark analysis template that allows different values to be provided by the Clean Rooms collaborator running a job at submission time without modifying the template code. With parameters in PySpark analysis templates, the code author creates a PySpark template with parameter support, and if approved to run, the job runner submits parameter values directly to the PySpark job. For example, a measurement company running attribution analysis for advertising campaigns can input time windows and geographic regions dynamically to surface insights that drive campaign optimizations and media planning, accelerating time-to-insights. With AWS Clean Rooms, customers can create a secure data clean room in minutes and collaborate with any company on AWS or Snowflake to generate unique insights about advertising campaigns, investment decisions, and research and development. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.

lexorganizations
#lex#organizations#launch#ga#support

Amazon Aurora PostgreSQL, Amazon Aurora DSQL, and Amazon DynamoDB serverless databases are now available on v0 by Vercel, an AI-powered tool that transforms your ideas into production-ready, full-stack web applications in minutes. With this launch, you can build your ideas as well as create and connect to AWS databases from v0 using natural language prompts. To get started, simply describe what you want to build in v0. The tool takes care of developing the frontend user interface and backend logic, storing your application data in the AWS database that best meets your application needs. v0 provides an end-to-end setup experience where you can choose and configure database resources under a new AWS account or link to an existing account, all without leaving the v0 interface. New AWS accounts from Vercel include access to all three databases and $100 USD in credits that can be used with any of the database options for up to six months. You can also manage your plan, add payment information, and view usage details anytime by visiting the AWS settings portal from the Vercel dashboard. To learn more, visit v0 or the AWS landing page on the Vercel Marketplace. The serverless options for Amazon Aurora PostgreSQL, Amazon Aurora DSQL, and Amazon DynamoDB do not require infrastructure management and reduce costs by scaling down to zero automatically when not in use. You can create a database in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Mumbai). AWS Databases deliver security, reliability, and price performance without the operational overhead, whether you're prototyping your next big idea or running production AI and data-driven applications. For more information, visit the AWS Databases webpage.

dynamodb
#dynamodb#launch#now-available

Amazon Web Services (AWS) is announcing the general availability of Amazon EC2 X8i instances, next-generation memory optimized instances powered by custom Intel Xeon 6 processors available only on AWS. X8i instances are SAP-certified and deliver the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. They deliver up to 43% higher performance, 1.5x more memory capacity (up to 6TB), and 3.4x more memory bandwidth compared to previous generation X2i instances. X8i instances are designed for memory-intensive workloads like SAP HANA, large databases, data analytics, and Electronic Design Automation (EDA). Compared to X2i instances, X8i instances offer up to 50% higher SAPS performance, up to 47% faster PostgreSQL performance, 88% faster Memcached performance, and 46% faster AI inference performance. X8i instances come in 14 sizes, from large to 96xlarge, including two bare metal options. X8i instances are available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Frankfurt). To get started, visit the AWS Management Console. X8i instances can be purchased via Savings Plans, On-Demand instances, and Spot instances. For more information visit X8i instances page.

ec2
#ec2

Amazon Connect now provides agent scheduling metrics in the data lake, making it easier for you to generate reports and insights from this data. For example, after publishing schedules for next month, you can access interval-level (15 minutes or 30 minutes) metrics such as forecasted headcount, scheduled headcount, and projected service level in the Connect analytics data lake. You can view aggregated metrics for an entire business unit (forecast group) or broken down by specific demand segments (demand groups). You can then visualize this data in Amazon Quick Sight or another BI tool of your choice for further analysis, such as identifying periods of over- or under-staffing. This eliminates the need for manual reviews of agent schedules, improving productivity for schedulers and supervisors. This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.

amazon qforecast
#amazon q#forecast#ga

AWS Lambda now supports cross-account access for Amazon DynamoDB Streams event-source mappings (ESMs), enabling you to trigger Lambda functions in one account from DynamoDB Streams in another account. Customers build event-driven applications using Lambda's fully-managed DynamoDB Streams ESMs, which poll change events from DynamoDB tables and trigger your Lambda functions. Organizations implementing multi-account architectures—whether to centralize event processing or share events with partner teams—previously needed to build complex data replication solutions to share data across accounts, which added operational overhead. With this launch, you can now provide cross-account access to your DynamoDB Streams to trigger Lambda functions in another account. By setting a resource-based policy on your DynamoDB stream, you can enable a Lambda function in one account to access a DynamoDB stream in another account. This capability allows you to simplify your streaming applications across accounts without the overhead of replication solutions in each account. This feature is generally available in all AWS Commercial and AWS GovCloud (US) Regions. You can enable cross-account Lambda triggers by creating resource-based policies for your DynamoDB Streams using the AWS Management Console, AWS CLI, AWS SDKs, AWS CloudFormation, or AWS APIs. To learn more, read Lambda ESM documentation.
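The resource-based policy this announcement describes can be sketched as follows. The account IDs, table name, and action list are illustrative assumptions (check the Lambda ESM documentation for the exact permissions your trigger needs), and the `put_resource_policy` call is commented out so the snippet runs offline:

```python
import json

# Hypothetical stream ARN (producer account) and consumer account ID.
STREAM_ARN = ("arn:aws:dynamodb:us-east-1:111111111111:"
              "table/Orders/stream/2025-01-01T00:00:00.000")
CONSUMER_ACCOUNT = "222222222222"

# Resource-based policy granting the consumer account the stream-read
# actions a Lambda event-source mapping needs to poll this stream.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{CONSUMER_ACCOUNT}:root"},
        "Action": [
            "dynamodb:DescribeStream",
            "dynamodb:GetRecords",
            "dynamodb:GetShardIterator",
        ],
        "Resource": STREAM_ARN,
    }],
}
print(json.dumps(policy, indent=2))

# Attaching it from the producer account (requires AWS credentials):
# import boto3
# boto3.client("dynamodb").put_resource_policy(
#     ResourceArn=STREAM_ARN, Policy=json.dumps(policy)
# )
```

With the policy in place, the Lambda function in the consumer account can reference the cross-account stream ARN in its event-source mapping.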

lexlambdadynamodbcloudformationorganizations
#lex#lambda#dynamodb#cloudformation#organizations#launch

Amazon Redshift Serverless introduces queue-based query resource management. You can create dedicated query queues with customized monitoring rules for different workloads. This feature provides granular control over resource usage. Queues let you set metrics-based predicates and automated responses. For example, you can configure rules to automatically abort queries that exceed time limits or consume too many resources. Previously, Query Monitoring Rules (QMR) were applied only at the Redshift Serverless workgroup level, affecting all queries run in this workgroup uniformly. The new queue-based approach lets you create queues with distinct monitoring rules. You can assign these queues to specific user roles and query groups. Each queue operates independently, with rules affecting only the queries within that queue. The available monitoring metrics can be found in Query monitoring metrics for Amazon Redshift Serverless. This feature is available in all AWS regions that support Amazon Redshift Serverless. You can manage QMR with queues through the AWS Console and Redshift APIs. For implementation details, see the documentation in the Amazon Redshift management guide.

#redshift#support

Amazon Elastic Block Store (EBS) now supports up to four Elastic Volumes modifications per volume within a rolling 24-hour window. Elastic Volumes modifications allow you to increase the size, change the type, and adjust the performance of your EBS volumes. With this update, you can start a new modification immediately after the previous one completes, as long as you have initiated fewer than four modifications in the past 24 hours. This enhancement improves your operational agility to immediately scale storage capacity or adjust performance in response to sudden data growth or unanticipated workload spikes. With Elastic Volumes modifications, you can modify your volumes without detaching them or restarting your instances, allowing your application to continue running with minimal performance impact. This feature is available in all commercial AWS Regions, the AWS GovCloud (US) Regions, and the China Regions. This capability is automatically enabled without requiring changes to your existing workflows. To learn more, see Modify an Amazon EBS volume using Elastic Volumes operations in the Amazon EBS User Guide.
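The rolling 24-hour window can be illustrated with a small client-side check; this helper and its inputs are purely illustrative (EBS enforces the actual quota server-side):

```python
from datetime import datetime, timedelta

def can_start_modification(past_mods, now, limit=4, window_hours=24):
    """Return True if fewer than `limit` Elastic Volumes modifications were
    initiated within the rolling window ending at `now`.
    Illustrative client-side check only; EBS enforces the real quota."""
    cutoff = now - timedelta(hours=window_hours)
    recent = [started for started in past_mods if started > cutoff]
    return len(recent) < limit

# Two of these three modifications fall inside the last 24 hours,
# so a fourth... er, a third recent one is still allowed.
now = datetime(2025, 6, 1, 12, 0)
mods = [now - timedelta(hours=h) for h in (1, 5, 30)]
print(can_start_modification(mods, now))  # True
```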

#update#enhancement#support

AWS Data Exports now enables customers to distinguish between Amazon Bedrock operation types in their cost reports for enhanced cost analysis and optimization. This granular operation information is available in Cost and Usage Reports (CUR), CUR 2.0, and Data Exports for FOCUS, and is particularly valuable for FinOps teams, cost optimization professionals, and organizations using Amazon Bedrock that require detailed billing analysis. Customers will now see specific operation types such as "InvokeModelInference" and "InvokeModelStreamingInference" in their cost reports, replacing generic "Usage" labels. These granular operation types appear in the "line_item_operation" column in Legacy CUR and CUR 2.0 reports, the "x_Operation" column in FOCUS reports, and AWS Cost Explorer API Operation dimension values. This visibility now extends to all foundation models on Amazon Bedrock, enabling precise tracking of usage patterns and cost optimization opportunities across all model providers. To learn more about AWS Billing and Cost Management, visit the AWS Billing and Cost Management documentation, and for AWS Data Exports, visit the AWS Data Exports documentation. To get started with Amazon Bedrock, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.
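Once the granular operation types land in exports, a FinOps roll-up by operation becomes straightforward. This sketch aggregates made-up sample rows using the "line_item_operation" column named above:

```python
from collections import defaultdict

# Made-up CUR 2.0 sample rows; only the two columns relevant here are shown.
rows = [
    {"line_item_operation": "InvokeModelInference", "line_item_unblended_cost": 12.50},
    {"line_item_operation": "InvokeModelStreamingInference", "line_item_unblended_cost": 3.25},
    {"line_item_operation": "InvokeModelInference", "line_item_unblended_cost": 7.50},
]

# Sum unblended cost per Bedrock operation type.
cost_by_op = defaultdict(float)
for row in rows:
    cost_by_op[row["line_item_operation"]] += row["line_item_unblended_cost"]

print(dict(cost_by_op))
```

In practice you would run the equivalent aggregation in Athena or your BI tool over the delivered export, grouping on the same column.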

#bedrock#organizations#ga

As a European citizen, I understand first-hand the importance of digital sovereignty, especially for our public sector organisations and highly regulated industries. Today, I’m delighted to share that the AWS European Sovereign Cloud is now generally available to all customers. We first announced our plans to […]

#generally-available#ga

Amazon VPC Route Server is now available in 16 new regions in addition to the 14 existing ones. VPC Route Server simplifies dynamic routing between virtual appliances in your Amazon VPC. It allows you to advertise routing information through Border Gateway Protocol (BGP) from virtual appliances and dynamically update the VPC route tables associated with subnets and internet gateways. With this launch, Amazon VPC Route Server is available in 30 AWS Regions: US East (Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), US West (N. California), Canada West (Calgary), Asia Pacific (Malaysia), Europe (Milan), Europe (Paris), Asia Pacific (Sydney), Europe (London), Canada (Central), Mexico (Central), South America (Sao Paulo), Asia Pacific (Seoul), Europe (Zurich), Europe (Stockholm), Middle East (UAE), Israel (Tel Aviv), Asia Pacific (Taipei), Asia Pacific (New Zealand), Asia Pacific (Melbourne), Middle East (Bahrain), Asia Pacific (Jakarta), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Osaka) and Asia Pacific (Thailand). To learn more about Amazon VPC Route Server, visit this page.

#launch#ga#now-available#update#new-region

Amazon Virtual Private Cloud (VPC) IP Address Manager (IPAM) now supports policies for Amazon Relational Database Service (RDS) instances and Application Load Balancers (ALB). This feature enables IP administrators to centrally configure and enforce IP allocation strategies for these resources, improving operational posture and simplifying network and security management. Using IPAM policies, IP administrators can centrally define public IP allocation rules for AWS resources, such as RDS instances, Application Load Balancers and Network Address Translation (NAT) Gateways when used in regional availability mode, and Elastic IP addresses. The IP allocation policy configured centrally cannot be superseded by individual application teams, ensuring compliance at all times. Before this feature, IP administrators had to educate database administrators and application developers about IP allocation requirements for RDS instances and Application Load Balancers, and rely on them to always comply with best practices. Now, you can add IP-based filters for RDS and ALB traffic in your networking and security constructs like access control lists, route tables, security groups, and firewalls, with confidence that public IPv4 address assignments to these resources always come from specific IPAM pools. The feature is available in all AWS commercial Regions and the AWS GovCloud (US) Regions, in both the Free Tier and Advanced Tier of VPC IPAM. When used with the Advanced Tier of VPC IPAM, customers can set policies across AWS accounts and AWS Regions. To get started, please see the IPAM policies documentation page. To learn more about IPAM, view the IPAM documentation. For details on pricing, refer to the IPAM tab on the Amazon VPC Pricing Page.

#rds#ga#support

Amazon Elastic Kubernetes Service (Amazon EKS) on AWS Outposts brings the power of managed Kubernetes to your on-premises infrastructure. Use Amazon EKS on Outposts rack to create hybrid cloud deployments that maintain consistent AWS experiences across environments. As organizations increasingly adopt edge computing and hybrid architectures, storage optimization and performance tuning become critical for successful workload deployment.

#eks#organizations#outposts#ga

Amazon Web Services (AWS) Lambda now supports .NET 10 as both a managed runtime and base container image. .NET is a popular language for building serverless applications. Developers can now use the new features and enhancements in .NET when creating serverless applications on Lambda. This includes support for file-based apps to streamline your projects by implementing functions using just a single file.

#lambda#now-available#new-feature#enhancement#support

In healthcare, generative AI is transforming how medical professionals analyze data, summarize clinical notes, and generate insights to improve patient outcomes. From automating medical documentation to assisting in diagnostic reasoning, large language models (LLMs) have the potential to augment clinical workflows and accelerate research. However, these innovations also introduce significant privacy, security, and intellectual property challenges.

#nova

In this post, we walk through building a generative AI–powered troubleshooting assistant for Kubernetes. The goal is to give engineers a faster, self-service way to diagnose and resolve cluster issues, cut down Mean Time to Recovery (MTTR), and reduce the cycles experts spend finding the root cause of issues in complex distributed systems.

#lex

This post is about the AWS SDK for JavaScript v3 announcing end of support for Node.js versions based on the Node.js release schedule; it is not about AWS Lambda. For the latter, refer to the Lambda runtime deprecation policy. In the second week of January 2026, the AWS SDK for JavaScript v3 (JS SDK) will start […]

#lambda#support

Have you ever wondered what it is really like to be a woman in tech at one of the world's leading cloud companies? Or maybe you are curious about how diverse perspectives drive innovation beyond the buzzwords? Today, we are providing an insider's perspective on the role of a solutions architect (SA) at Amazon Web Services (AWS). However, this is not a typical corporate success story. We are three women who have navigated challenges, celebrated wins, and found our unique paths in the world of cloud architecture, and we want to share our real stories with you.

#nova#rds#ga

Organizations often have large volumes of documents containing valuable information that remains locked away and unsearchable. This solution addresses the need for a scalable, automated text extraction and knowledge base pipeline that transforms static document collections into intelligent, searchable repositories for generative AI applications.

#bedrock#step functions#organizations#ga

In this post, we demonstrate how to utilize AWS Network Firewall to secure an Amazon EVS environment, using a centralized inspection architecture across an EVS cluster, VPCs, on-premises data centers and the internet. We walk through the implementation steps to deploy this architecture using AWS Network Firewall and AWS Transit Gateway.

#ga

You can now develop AWS Lambda functions using Node.js 24, either as a managed runtime or using the container base image. Node.js 24 is in active LTS status and ready for production use. It is expected to be supported with security patches and bugfixes until April 2028. The Lambda runtime for Node.js 24 includes a new implementation of the […]

#lambda#now-available#support

Organizations running critical workloads on Amazon Elastic Compute Cloud (Amazon EC2) reserve compute capacity using On-Demand Capacity Reservations (ODCR) to have availability when needed. However, reserved capacity can intermittently sit idle during off-peak periods, between deployments, or when workloads scale down. This unused capacity represents a missed opportunity for cost optimization and resource efficiency across the organization.

#ec2#organizations#ga

Amazon Web Services (AWS) provides many mechanisms to optimize the price performance of workloads running on Amazon Elastic Compute Cloud (Amazon EC2), and the selection of the optimal infrastructure to run on can be one of the most impactful levers. When we started building the AWS Graviton processor, our goal was to optimize AWS Graviton […]

#ec2#graviton

In this post, you will learn how the new Amazon API Gateway’s enhanced TLS security policies help you meet standards such as PCI DSS, Open Banking, and FIPS, while strengthening how your APIs handle TLS negotiation. This new capability increases your security posture without adding operational complexity, and provides you with a single, consistent way to standardize TLS configuration across your API Gateway infrastructure.

#lex#rds#api gateway#ga#new-capability

Event-driven applications often need to process data in real-time. When you use AWS Lambda to process records from Apache Kafka topics, you frequently encounter two typical requirements: you need to process very high volumes of records in close to real-time, and you want your consumers to have the ability to scale rapidly to handle traffic spikes. Achieving both necessitates understanding how Lambda consumes Kafka streams, where the potential bottlenecks are, and how to optimize configurations for high throughput and best performance.
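As a back-of-envelope model of where throughput ceilings come from, consumption scales roughly with how many pollers are reading, how large their batches are, and how fast batches complete. All three knobs below are illustrative inputs, not official Lambda limits:

```python
def max_records_per_second(pollers, batch_size, batches_per_poller_per_sec):
    """Rough ceiling on Kafka records consumed per second, assuming each
    poller delivers full batches at a steady rate. Illustrative model only:
    real throughput also depends on function duration, partition count, and
    how Lambda scales pollers."""
    return pollers * batch_size * batches_per_poller_per_sec

# Example: 5 pollers x 500-record batches x 2 batches/sec each.
print(max_records_per_second(pollers=5, batch_size=500,
                             batches_per_poller_per_sec=2))  # 5000
```

The model makes the trade-off concrete: bigger batches raise the ceiling but lengthen per-invocation latency, which is exactly the tension the post explores.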

#lambda#rds#kafka

Modern generative AI applications often need to stream large language model (LLM) outputs to users in real-time. Instead of waiting for a complete response, streaming delivers partial results as they become available, which significantly improves the user experience for chat interfaces and long-running AI tasks. This post compares three serverless approaches to handle Amazon Bedrock LLM streaming on Amazon Web Services (AWS), which helps you choose the best fit for your application.

#bedrock

Today, AWS is announcing tenant isolation for AWS Lambda, enabling you to process function invocations in separate execution environments for each end-user or tenant invoking your Lambda function. This capability simplifies building secure multi-tenant SaaS applications by managing tenant-level compute environment isolation and request routing, allowing you to focus on core business logic rather than implementing tenant-aware compute environment isolation.

#lambda

In this post, we'll explore a reference architecture that helps enterprises govern their Amazon Bedrock implementations using Amazon API Gateway. This pattern enables key capabilities like authorization controls, usage quotas, and real-time response streaming. We'll examine the architecture, provide deployment steps, and discuss potential enhancements to help you implement AI governance at scale.

#bedrock#api gateway#ga#enhancement

Today, AWS announced support for response streaming in Amazon API Gateway to significantly improve the responsiveness of your REST APIs by progressively streaming response payloads back to the client. With this new capability, you can use streamed responses to enhance user experience when building LLM-driven applications (such as AI agents and chatbots), improve time-to-first-byte (TTFB) performance for web and mobile applications, stream large files, and perform long-running operations while reporting incremental progress using protocols such as server-sent events (SSE).
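On the client side, an SSE stream like the one this feature can deliver is just lines of text. This minimal parser (a sketch, ignoring the `event:`, `id:`, and `retry:` fields a full SSE client handles) shows the shape of what arrives:

```python
def parse_sse(stream_text):
    """Collect `data:` payloads from server-sent-events text, flushing one
    event at each blank line. Minimal sketch of the SSE wire format."""
    events, data = [], []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            events.append("\n".join(data))
            data = []
    if data:  # flush a trailing event with no final blank line
        events.append("\n".join(data))
    return events

# Two events as they might arrive incrementally from a streaming REST API.
print(parse_sse("data: Hello\n\ndata: world\n\n"))  # ['Hello', 'world']
```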

#api gateway#ga#support#new-capability

Amazon Elastic Compute Cloud (Amazon EC2) instances with locally attached NVMe storage can provide the performance needed for workloads demanding ultra-low latency and high I/O throughput. High-performance workloads, from high-frequency trading applications and in-memory databases to real-time analytics engines and AI/ML inference, need comprehensive performance tracking. Operating system tools like iostat and sar provide valuable system-level insights, and Amazon CloudWatch offers important disk IOPS and throughput measurements, but high-performance workloads can benefit from even more detailed visibility into instance store performance.
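The finer-grained visibility alluded to here can come from sampling kernel counters directly, which is what iostat itself does. This sketch derives IOPS from two /proc/diskstats samples; the device counters shown are synthetic numbers, and the field positions follow the kernel's documented iostats layout:

```python
def read_counters(diskstats_line):
    """Pull completed reads/writes and sector counts from one
    /proc/diskstats line (fields 4, 6, 8, 10 in the kernel's 1-based
    layout)."""
    f = diskstats_line.split()
    return {"reads": int(f[3]), "read_sectors": int(f[5]),
            "writes": int(f[7]), "write_sectors": int(f[9])}

def iops(before, after, interval_s):
    """Average combined read+write IOPS between two samples."""
    ops = (after["reads"] - before["reads"]) + (after["writes"] - before["writes"])
    return ops / interval_s

# Two synthetic samples for an NVMe instance-store device, one second apart.
s0 = read_counters("259 0 nvme1n1 1000 0 80000 50 2000 0 160000 90 0 0 0")
s1 = read_counters("259 0 nvme1n1 1600 0 128000 60 2900 0 232000 110 0 0 0")
print(iops(s0, s1, 1.0))  # (600 reads + 900 writes) / 1s = 1500.0
```

Sampling at sub-second intervals and per-device, rather than at CloudWatch's coarser granularity, is the kind of detail this approach buys.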

#ec2#cloudwatch

At re:Invent 2025, we introduced one new lens and two significant updates to the AWS Well-Architected Lenses specifically focused on AI workloads: the new Responsible AI Lens, plus updates to the Machine Learning (ML) Lens and the Generative AI Lens. Together, these lenses provide comprehensive guidance for organizations at different stages of their AI journey, whether you're just starting to experiment with machine learning or already deploying complex AI applications at scale.

#lex#organizations#launch#ga#update

We are delighted to announce an update to the AWS Well-Architected Generative AI Lens. This update features several new sections of the Well-Architected Generative AI Lens, including new best practices, advanced scenario guidance, and improved preambles on responsible AI, data architecture, and agentic workflows.

#update

This post was co-written with Frederic Haase and Julian Blau with BASF Digital Farming GmbH. At xarvio – BASF Digital Farming, our mission is to empower farmers around the world with cutting-edge digital agronomic decision-making tools. Central to this mission is our crop optimization platform, xarvio FIELD MANAGER, which delivers actionable insights through a range […]

#eks

Version 2.0 of the AWS Deploy Tool for .NET is now available. This new major version introduces several foundational upgrades to improve the deployment experience for .NET applications on AWS. The tool comes with new minimum runtime requirements. We have upgraded it to require .NET 8 because the predecessor, .NET 6, is now out of […]

#now-available

The global real-time payments market is experiencing significant growth. According to Fortune Business Insights, the market was valued at USD 24.91 billion in 2024 and is projected to grow to USD 284.49 billion by 2032, with a CAGR of 35.4%. Similarly, Grand View Research reports that the global mobile payment market, valued at USD 88.50 […]

Generative AI agents in production environments demand resilience strategies that go beyond traditional software patterns. AI agents make autonomous decisions, consume substantial computational resources, and interact with external systems in unpredictable ways. These characteristics create failure modes that conventional resilience approaches might not address. This post presents a framework for AI agent resilience risk analysis […]

The AWS SDK for Java 1.x (v1) entered maintenance mode on July 31, 2024, and will reach end-of-support on December 31, 2025. We recommend that you migrate to the AWS SDK for Java 2.x (v2) to access new features, enhanced performance, and continued support from AWS. To help you migrate efficiently, we’ve created a migration […]

#new-feature#support

In this post, we explore how Metagenomi built a scalable database and search solution for over 1 billion protein vectors using LanceDB and Amazon S3. The solution enables rapid enzyme discovery by transforming proteins into vector embeddings and implementing a serverless architecture that combines AWS Lambda, AWS Step Functions, and Amazon S3 for efficient nearest neighbor searches.
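The nearest-neighbor idea at the heart of that pipeline can be shown with a toy brute-force search. The enzyme names and three-dimensional vectors below are made up, and at billion-vector scale an approximate index (as LanceDB provides over S3-backed data) replaces this linear scan:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(query, index):
    """Brute-force nearest neighbor by cosine similarity: a toy stand-in
    for an ANN search over embedded protein vectors."""
    return max(index, key=lambda item: cosine(query, item[1]))[0]

# Tiny made-up index of (name, embedding) pairs.
index = [
    ("enzymeA", [1.0, 0.0, 0.2]),
    ("enzymeB", [0.1, 0.9, 0.0]),
    ("enzymeC", [0.9, 0.1, 0.3]),
]
print(nearest([1.0, 0.05, 0.25], index))
```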

#lambda#s3#step functions

In this post, we explore an efficient approach to managing encryption keys in a multi-tenant SaaS environment through centralization, addressing challenges like key proliferation, rising costs, and operational complexity across multiple AWS accounts and services. We demonstrate how implementing a centralized key management strategy using a single AWS KMS key per tenant can maintain security and compliance while reducing operational overhead as organizations scale.

#lex#organizations#ga

Today, we are excited to announce the general availability of the AWS .NET Distributed Cache Provider for Amazon DynamoDB. This is a seamless, serverless caching solution that enables .NET developers to efficiently manage their caching needs across distributed systems. Consistent caching is a difficult problem in distributed architectures, where maintaining data integrity and performance across […]

#dynamodb#generally-available

This blog was co-authored by Afroz Mohammed and Jonathan Nunn, Software Developers on the AWS PowerShell team. We’re excited to announce the general availability of the AWS Tools for PowerShell version 5, a major update that brings new features and improvements in security, along with a few breaking changes. New Features You can now cancel […]

#generally-available#new-feature#update#improvement

Software development is far more than just writing code. In reality, a developer spends a large amount of time maintaining existing applications and fixing bugs. For example, migrating a Go application from the older AWS SDK for Go v1 to the newer v2 can be a significant undertaking, but it’s a crucial step to future-proof […]

#amazon q#q developer

We’re excited to announce that the AWS Deploy Tool for .NET now supports deploying .NET applications to select ARM-based compute platforms on AWS! Whether you’re deploying from Visual Studio or using the .NET CLI, you can now target cost-effective ARM infrastructure like AWS Graviton with the same streamlined experience you’re used to. Why deploy to […]

#graviton#support

Version 4.0 of the AWS SDK for .NET has been released for general availability (GA). V4 has been in development for a little over a year in our SDK’s public GitHub repository with 13 previews being released. This new version contains performance improvements, consistency with other AWS SDKs, and bug and usability fixes that required […]

#preview#ga#improvement

Today, AWS launches the developer preview of the AWS IoT Device SDK for Swift. The IoT Device SDK for Swift empowers Swift developers to create IoT applications for Linux and Apple macOS, iOS, and tvOS platforms using the MQTT 5 protocol. The SDK supports Swift 5.10+ and is designed to help developers easily integrate with […]

#launch#preview#support

We are excited to announce the Developer Preview of the Amazon S3 Transfer Manager for Rust, a high-level utility that speeds up and simplifies uploads and downloads with Amazon Simple Storage Service (Amazon S3). Using this new library, developers can efficiently transfer data between Amazon S3 and various sources, including files, in-memory buffers, memory streams, […]

#s3#preview

In a recent post we gave some background on .NET Aspire and introduced our AWS integrations with .NET Aspire that integrate AWS into the .NET dev inner loop for building applications. The integrations included how to provision application resources with AWS CloudFormation or AWS Cloud Development Kit (AWS CDK) and using Amazon DynamoDB local for […]

#lambda#dynamodb#cloudformation#ga#integration

.NET Aspire is a new way of building cloud-ready applications. In particular, it provides an orchestration for local environments in which to run, connect, and debug the components of distributed applications. Those components can be .NET projects, databases, containers, or executables. .NET Aspire is designed to have integrations with common components used in distributed applications. […]

#integration

AWS announces important configuration updates coming July 31st, 2025, affecting AWS SDKs and CLIs default settings. Two key changes include switching the AWS Security Token Service (STS) endpoint to regional and updating the default retry strategy to standard. These updates aim to improve service availability and reliability by implementing regional endpoints to reduce cross-regional dependencies and introducing token-bucket throttling for standardized retry behavior. Organizations should test their applications before the release date and can opt-in early or temporarily opt-out of these changes. These updates align with AWS best practices for optimal service performance and security.
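Ahead of the switchover, the opt-ins can be expressed through environment variables that the AWS SDKs and CLI read. This sketch just prints the export lines; the variable names follow the shared AWS SDK configuration settings, and the max-attempts value is an illustrative choice, not part of the announcement:

```python
# Opt-in settings mirroring the announced defaults: regional STS endpoint
# and the "standard" (token-bucket) retry mode.
opt_in = {
    "AWS_STS_REGIONAL_ENDPOINTS": "regional",
    "AWS_RETRY_MODE": "standard",
    "AWS_MAX_ATTEMPTS": "3",  # illustrative; pick a value for your workload
}

for key, value in opt_in.items():
    print(f"export {key}={value}")
```

The same settings can live in the shared config file (`~/.aws/config`) as `sts_regional_endpoints`, `retry_mode`, and `max_attempts`, which is the more durable place to set them fleet-wide.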

#organizations#ga#update