AWS AI News Hub

Your central source for the latest AWS artificial intelligence and machine learning service announcements, features, and updates

Filter by Category

204 total updates: 96 What's New, 20 ML Blog Posts, 20 News Articles. Showing 204 of 204 updates.

AWS has introduced a new catalog federation feature that enables direct access to Snowflake Horizon Catalog data through AWS Glue Data Catalog. This integration allows organizations to discover and query data in Iceberg format while maintaining security through AWS Lake Formation. This post provides a step-by-step guide to establishing this integration, including configuring Snowflake Horizon Catalog, setting up authentication, creating necessary IAM roles, and implementing AWS Lake Formation permissions. Learn how to enable cross-platform analytics while maintaining robust security and governance across your data environment.
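Once federation is configured, the federated Snowflake data can be queried like any other Glue Data Catalog table, for example through Athena. The sketch below builds the parameters for Athena's StartQueryExecution call; the catalog, database, table, and bucket names are hypothetical placeholders, not values from the post.

```python
# Sketch: querying a federated Iceberg table through Athena once the Glue
# Data Catalog federation to Snowflake Horizon Catalog is in place.
# All names below are hypothetical.

def athena_query_params(catalog, database, sql, output_s3):
    """Build the parameter dict for Athena's StartQueryExecution call."""
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Catalog": catalog, "Database": database},
        "ResultConfiguration": {"OutputLocation": output_s3},
    }

params = athena_query_params(
    catalog="snowflake_horizon_federated",   # federated catalog (hypothetical)
    database="sales",
    sql="SELECT region, SUM(amount) FROM orders GROUP BY region",
    output_s3="s3://my-athena-results/",
)

# With credentials and Lake Formation permissions in place, the call would be:
# import boto3
# athena = boto3.client("athena")
# response = athena.start_query_execution(**params)
```

Lake Formation permissions still apply to the federated tables, so the querying principal needs grants on the catalog just as it would for native Glue tables.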

#iam#glue#organizations#ga#integration

Amazon VPC Route Server is now available in 16 new Regions in addition to the 14 existing ones. VPC Route Server simplifies dynamic routing between virtual appliances in your Amazon VPC. It allows you to advertise routing information through Border Gateway Protocol (BGP) from virtual appliances and dynamically update the VPC route tables associated with subnets and internet gateways. With this launch, Amazon VPC Route Server is available in 30 AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), US West (N. California), Canada West (Calgary), Asia Pacific (Malaysia), Europe (Milan), Europe (Paris), Asia Pacific (Sydney), Europe (London), Canada (Central), Mexico (Central), South America (Sao Paulo), Asia Pacific (Seoul), Europe (Zurich), Europe (Stockholm), Middle East (UAE), Israel (Tel Aviv), Asia Pacific (Taipei), Asia Pacific (New Zealand), Asia Pacific (Melbourne), Middle East (Bahrain), Asia Pacific (Jakarta), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Osaka), and Asia Pacific (Thailand). To learn more about Amazon VPC Route Server, visit this page.

#launch#ga#now-available#update#new-region

Amazon Virtual Private Cloud (VPC) IP Address Manager (IPAM) now supports policies for Amazon Relational Database Service (RDS) instances and Application Load Balancers (ALB). This feature enables IP administrators to centrally configure and enforce IP allocation strategies for these resources, improving operational posture and simplifying network and security management. Using IPAM policies, IP administrators can centrally define public IP allocation rules for AWS resources, such as RDS instances, Application Load Balancers, and Network Address Translation (NAT) Gateways when used in regional availability mode, and Elastic IP addresses. The IP allocation policy configured centrally cannot be superseded by individual application teams, ensuring compliance at all times. Before this feature, IP administrators had to educate database administrators and application developers about IP allocation requirements for RDS instances and Application Load Balancers, and rely on them to always comply with best practices. Now, you can add IP-based filters for RDS and ALB traffic in your networking and security constructs, such as access control lists, route tables, security groups, and firewalls, with confidence that public IPv4 address assignments to these resources always come from specific IPAM pools. The feature is available in all AWS commercial Regions and the AWS GovCloud (US) Regions, in both the Free Tier and Advanced Tier of VPC IPAM. When used with the Advanced Tier of VPC IPAM, customers can set policies across AWS accounts and AWS Regions. To get started, see the IPAM policies documentation page. To learn more about IPAM, view the IPAM documentation. For details on pricing, refer to the IPAM tab on the Amazon VPC Pricing Page.

rds
#rds#ga#support

API keys for Amazon Bedrock are now available in AWS GovCloud (US) regions, expanding a feature that simplifies authentication and accelerates generative AI development. Originally launched in commercial AWS regions in July 2025, API keys for Amazon Bedrock enable developers to quickly generate access credentials directly within the Amazon Bedrock console or AWS SDK without needing to manually configure IAM principals and policies. With the introduction of API keys for Amazon Bedrock, developers can generate short-term and long-term API keys directly from the Amazon Bedrock console or API to authenticate API calls to Amazon Bedrock models. Short-term API keys are valid for the duration of your console session, or up to 12 hours, whichever is shorter. Long-term API keys give you the flexibility to define key validity duration and manage the keys from the AWS IAM console. Bedrock API key authentication is now available in the AWS GovCloud (US) and commercial AWS Regions where Amazon Bedrock is available. To learn more about API keys in Amazon Bedrock, visit the API Keys documentation in the Amazon Bedrock user guide, or check out our blog for code snippets and implementation examples.
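Per the launch blog, long-term API keys can be supplied to the SDKs through an environment variable and also work as bearer tokens for direct REST calls. The variable name and header usage below follow that launch material, but treat both as assumptions to verify against the current documentation.

```python
import os

# Sketch: using a long-term Amazon Bedrock API key as a bearer token.
# The AWS_BEARER_TOKEN_BEDROCK variable name is taken from the launch blog
# and should be verified against the current Bedrock documentation.

def bedrock_auth_headers(api_key=None):
    """Build HTTP headers for a direct Bedrock runtime REST call."""
    key = api_key or os.environ.get("AWS_BEARER_TOKEN_BEDROCK", "")
    if not key:
        raise ValueError("No Bedrock API key provided")
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }

headers = bedrock_auth_headers("example-long-term-key")

# The AWS SDKs can also pick the key up automatically when
# AWS_BEARER_TOKEN_BEDROCK is set in the environment, so
# boto3.client("bedrock-runtime") then authenticates without
# any manually configured IAM principals.
```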

#bedrock#lex#iam#launch#now-available

AWS Transform custom now supports AWS PrivateLink and is available in a new AWS Region, Europe (Frankfurt), in addition to the US East (N. Virginia) Region. AWS Transform custom helps organizations reduce technical debt by automating repetitive transformation tasks such as language version upgrades, API migrations, and framework updates. The agent is designed for enterprise development teams and consulting partners who need to execute consistent, repeatable code transformations across large codebases. With AWS Transform custom, teams can create custom transformation definitions using natural language, documentation, and code samples, or use AWS-managed transformations for common scenarios including Java, Python, and Node.js version upgrades. Through continual learning, the service improves transformation quality with every execution over time. With AWS PrivateLink support, customers can now access AWS Transform custom from their Amazon VPC without routing traffic over the public internet, helping meet security and compliance requirements. To learn more about AWS Transform custom, visit the product page and user guide.

organizations
#organizations#ga#update#support

Amazon Connect now makes it easier to manage contact center operating hours for recurring events like holidays, maintenance windows, and promotional periods, with a visual calendar that provides at-a-glance visibility by day, month, or year. You can set up recurring overrides that automatically take effect weekly, monthly, or every other Friday, and use them to provide customers with personalized experiences, all without having to manually revisit configurations. For example, every January 1st you can automatically greet customers with "Happy New Year!" and route them to a special holiday message before checking if agents are available, then on January 2nd your contact center automatically returns to normal operations. These additional hours of operation override capabilities are available in all AWS regions where Amazon Connect is available and offer public API and AWS CloudFormation support. To learn more, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, please visit the Amazon Connect website.
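A single-day override like the January 1st example can be expressed through the existing hours of operation override API. The payload below follows the shape of CreateHoursOfOperationOverride with hypothetical identifiers; the recurrence settings this announcement introduces may use additional parameters not shown here.

```python
# Sketch: a New Year's Day closure override for an Amazon Connect hours of
# operation, in the shape of the CreateHoursOfOperationOverride API.
# Identifiers are hypothetical; the recurring-override options described in
# the announcement may require further fields.

def new_years_override(instance_id, hours_id, year):
    return {
        "InstanceId": instance_id,
        "HoursOfOperationId": hours_id,
        "Name": "New Year's Day closure",
        "EffectiveFrom": f"{year}-01-01",
        "EffectiveTill": f"{year}-01-01",
        "Config": [],  # no open intervals on that day, i.e. closed
    }

override = new_years_override("inst-1234", "hours-5678", 2026)
# boto3.client("connect").create_hours_of_operation_override(**override)
```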

#personalize#cloudformation#support

AWS announces significant improvements to the Transactions view in the AWS Billing and Cost Management Console’s Payments page, delivering faster performance, simplified payment reconciliation, and improved data accuracy. The enhanced view provides customers with a unified interface to manage all their financial transactions, improving visibility and reducing the time spent on payment tracking. The enhanced view delivers improved performance, with pages loading in milliseconds instead of minutes. Even customers with tens of thousands of transactions can now access their complete transaction history without timeouts. The enhanced view introduces new capabilities including comprehensive balance tracking, clear transaction status indicators, and advanced filtering options. Key improvements include consolidated visibility into invoice statuses, clear +/- indicators for amounts owed versus funds available, and the ability to view records in a single view without performance impact. For organizations using Billing Transfer, the new "Usage Consolidation Account" column makes it easier to track transactions across multiple accounts. This feature was made available to all customers on January 12, 2026, in all AWS commercial regions. To learn more about managing your transactions, visit the AWS Billing and Cost Management documentation. For additional information about managing your AWS payments and billing, see the AWS Billing Console User Guide.

#rds#organizations#ga#now-available#improvement

In this post, we explore the security considerations and best practices for implementing Amazon Bedrock cross-Region inference profiles. Whether you're building a generative AI application or need to meet specific regional compliance requirements, this guide will help you understand the secure architecture of Amazon Bedrock CRIS and how to properly configure your implementation.

bedrock
#bedrock

Amazon Neptune Database now supports Graviton3-based R7g and Graviton4-based R8g instances for Amazon Neptune engine versions 1.4.5 and above, in Asia Pacific (Hong Kong), Asia Pacific (Osaka), Asia Pacific (Singapore), Canada (Central), and US West (N. California). R7g and R8g instances are priced 16% lower than R6g instances. Graviton3-based R7g instances are the first AWS database instances to feature the latest DDR5 memory, enabling high-speed access to data in memory. R7g database instances offer up to 30 Gbps enhanced networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). Graviton4-based R8g instances offer larger instance sizes, up to 48xlarge, and feature an 8:1 ratio of memory to vCPU and the latest DDR5 memory. AWS Graviton4 processors are up to 40% faster for databases than AWS Graviton3 processors. You can launch R7g and R8g instances for Neptune using the AWS Management Console or the AWS CLI. Upgrading a Neptune cluster to R7g or R8g instances requires a simple instance type modification for Neptune engine versions 1.4.5 or higher. For more information on pricing and regional availability, refer to the Amazon Neptune pricing page.
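The instance type modification mentioned above is a standard ModifyDBInstance call. The sketch below builds its parameters with a hypothetical instance identifier; modify_db_instance itself is the established Neptune API.

```python
# Sketch: upgrading a Neptune instance to Graviton4 (R8g) with a simple
# instance-type modification, as the announcement describes.
# The instance identifier is hypothetical.

def r8g_upgrade_params(instance_id, size="2xlarge"):
    return {
        "DBInstanceIdentifier": instance_id,
        "DBInstanceClass": f"db.r8g.{size}",
        "ApplyImmediately": False,  # apply in the next maintenance window
    }

params = r8g_upgrade_params("my-neptune-instance")
# boto3.client("neptune").modify_db_instance(**params)
```

Requires engine version 1.4.5 or higher on the cluster before switching the instance class.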

graviton
#graviton#launch#ga#support

Amazon Elastic Kubernetes Service (Amazon EKS) on AWS Outposts brings the power of managed Kubernetes to your on-premises infrastructure. Use Amazon EKS on Outposts rack to create hybrid cloud deployments that maintain consistent AWS experiences across environments. As organizations increasingly adopt edge computing and hybrid architectures, storage optimization and performance tuning become critical for successful workload deployment.

#eks#organizations#outposts#ga

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) X8aedz instances are available in the Asia Pacific (Mumbai) and Asia Pacific (Seoul) regions. These instances are powered by 5th Gen AMD EPYC processors (formerly code named Turin). These instances offer the highest maximum CPU frequency in the cloud, 5 GHz. X8aedz instances are built using the latest sixth generation AWS Nitro Cards and are ideal for electronic design automation (EDA) workloads such as physical layout and physical verification jobs, and relational databases that benefit from high single-threaded processor performance and a large memory footprint. The combination of 5 GHz processors and local NVMe storage enables faster processing of memory-intensive backend EDA workloads such as floor planning, logic placement, clock tree synthesis (CTS), routing, and power/signal integrity analysis. X8aedz instances feature a 32:1 ratio of memory to vCPU and are available in 8 sizes ranging from 2 to 96 vCPUs with 64 to 3,072 GiB of memory, including two bare metal variants, and up to 8 TB of local NVMe SSD storage. Customers can purchase X8aedz instances via Savings Plans, On-Demand instances, and Spot instances. To get started, sign in to the AWS Management Console. For more information visit the Amazon EC2 X8aedz instance page.

#ec2#rds#now-available

Amazon Connect Cases now supports AWS CloudFormation, enabling you to model, provision, and manage case resources as infrastructure as code. With this launch, administrators can create CloudFormation templates to programmatically deploy and update their Cases configuration—such as templates, fields, and layouts—across Amazon Connect instances, reducing manual setup time and minimizing configuration errors. Amazon Connect Cases is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Africa (Cape Town). To learn more and get started, visit the Amazon Connect Cases webpage and documentation.

cloudformation
#cloudformation#launch#ga#update#support

Amazon MSK Connect is now available in three additional AWS Regions: Asia Pacific (New Zealand), AWS GovCloud (US-East), and AWS GovCloud (US-West). MSK Connect enables you to run fully managed Kafka Connect clusters with Amazon Managed Streaming for Apache Kafka (Amazon MSK). With a few clicks, MSK Connect allows you to easily deploy, monitor, and scale connectors that move data in and out of Apache Kafka and Amazon MSK clusters from external systems such as databases, file systems, and search indices. MSK Connect eliminates the need to provision and maintain cluster infrastructure. Connectors scale automatically in response to increases in usage and you pay only for the resources you use. With full compatibility with Kafka Connect, it is easy to migrate workloads without code changes. MSK Connect supports both Amazon MSK-managed and self-managed Apache Kafka clusters. You can get started with MSK Connect from the Amazon MSK console or the AWS CLI. With this launch, MSK Connect is now available in thirty-eight AWS Regions. To get started, visit the MSK Connect product page, pricing page, and the Amazon MSK Developer Guide.
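Deploying a connector boils down to one CreateConnector request. The skeleton below sketches the core of such a request for an S3 sink; the names, bucket, and capacity values are placeholders, and several required account-specific fields are noted in comments rather than invented.

```python
# Sketch: the core of an MSK Connect CreateConnector request for an S3 sink
# connector. All names and values are placeholders; see the kafkaconnect API
# reference for the full set of required fields.

def s3_sink_connector(name, bucket, topics):
    return {
        "connectorName": name,
        "kafkaConnectVersion": "2.7.1",
        "connectorConfiguration": {
            "connector.class": "io.confluent.connect.s3.S3SinkConnector",
            "topics": topics,
            "s3.bucket.name": bucket,
            "tasks.max": "2",
        },
        # Connectors scale automatically between the worker bounds below.
        "capacity": {
            "autoScaling": {
                "minWorkerCount": 1,
                "maxWorkerCount": 4,
                "mcuCount": 1,
                "scaleInPolicy": {"cpuUtilizationPercentage": 20},
                "scaleOutPolicy": {"cpuUtilizationPercentage": 80},
            }
        },
    }

req = s3_sink_connector("orders-to-s3", "my-data-lake", "orders")
# Remaining required fields (kafkaCluster, plugins, serviceExecutionRoleArn,
# client authentication, and encryption in transit) are account-specific.
# boto3.client("kafkaconnect").create_connector(**req, ...)
```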

#kafka#msk#launch#now-available#support

Amazon Lex now offers a neural automatic speech recognition (ASR) model for English that delivers improved recognition accuracy for your voice bots. Trained on data from multiple English locales, the model excels at recognizing conversational speech patterns across diverse speaking styles, including non-native English speakers and regional accents. This reduces the need for end-customers to repeat themselves and improves self-service success rates. To enable this feature, select "Neural" as the speech recognition option in your bot's locale settings. This feature is available in all AWS commercial regions where Amazon Connect and Lex operate. To learn more, visit the Amazon Lex documentation or explore the Amazon Connect website to learn how Amazon Connect and Amazon Lex deliver seamless end-customer self-service experiences.

lex
#lex#launch

Over time, several distinct lakehouse approaches have emerged. In this post, we show you how to evaluate and choose the right lakehouse pattern for your needs. A lakehouse architecture isn’t about choosing between a data lake and a data warehouse. Instead, it’s an approach to interoperability where both frameworks coexist and serve different purposes within a unified data architecture. By understanding fundamental storage patterns, implementing effective catalog strategies, and using native storage capabilities, you can build scalable, high-performance data architectures that support both your current analytics needs and future innovation.

#nova#sagemaker#ga#support

AWS has launched the catalog federation capability, enabling direct access to Apache Iceberg tables managed in Databricks Unity Catalog through the AWS Glue Data Catalog. With this integration, you can discover and query Unity Catalog data in Iceberg format using an Iceberg REST API endpoint, while maintaining granular access controls through AWS Lake Formation. In this post, we demonstrate how to set up catalog federation between the Glue Data Catalog and Databricks Unity Catalog, enabling data querying using AWS analytics services.

glue
#glue#launch#integration

Amazon SageMaker HyperPod console now validates service quotas for your AWS account before initiating cluster creation, enabling you to confirm sufficient quota availability before provisioning begins. SageMaker HyperPod helps you provision resilient clusters for running AI/ML workloads and developing state-of-the-art models such as large language models (LLMs), diffusion models, and foundation models (FMs). When creating large-scale AI/ML clusters, you need to ensure your account has sufficient quotas for instances, storage, and networking resources, but quota validation previously required manual checks across multiple AWS services, often resulting in failed cluster creation attempts and wasted time if you miss requesting quota limit increases. The new quota validation capability in the SageMaker HyperPod console automatically checks your account-level quotas against your cluster configuration, including instance type limits, EBS volume sizes, and VPC-related quotas when creating new resources. The validation displays a clear table showing expected utilization, applied quota values, and compliance status for each quota. When quotas may be exceeded, you receive a warning alert with direct links to the Service Quotas console to request increases. This feature is available in all AWS Regions where Amazon SageMaker HyperPod is supported. For a complete list of service quota validation checks performed, refer to the Amazon SageMaker HyperPod User Guide.
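The comparison the console performs per quota is simple: expected utilization versus the applied quota value, with a warning when the former exceeds the latter. A minimal sketch of that check, with the applied value fetched through the Service Quotas API (the quota code shown is a placeholder, not a real SageMaker code):

```python
# Sketch: the per-quota check the HyperPod console now automates.
# Compare expected utilization against the applied quota value.

def quota_status(required, applied):
    """Return the compliance status for one quota row."""
    return "OK" if required <= applied else "INCREASE NEEDED"

# With credentials configured, the applied value would come from:
# sq = boto3.client("service-quotas")
# applied = sq.get_service_quota(
#     ServiceCode="sagemaker", QuotaCode="L-XXXXXXXX"  # placeholder code
# )["Quota"]["Value"]

print(quota_status(required=16, applied=8))   # 16 instances vs a quota of 8
```

When the status is "INCREASE NEEDED", the console links straight to the Service Quotas console to request the increase before cluster creation starts.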

#sagemaker#hyperpod#ga#support

Amazon Lex now provides three VAD sensitivity levels that can be configured for each bot locale: Default, High, and Maximum. The Default setting is suitable for most environments with typical background noise levels. High is designed for environments with consistent but moderate noise levels, such as busy offices or retail spaces. Maximum provides the highest tolerance for very noisy environments such as manufacturing floors, construction sites, or outdoor locations with significant ambient noise. You can configure VAD sensitivity when creating or updating a bot locale in the Amazon Connect's Conversational AI designer. This feature is available in all AWS commercial regions where Amazon Connect and Lex operate. To learn more, visit the Amazon Lex documentation or explore the Amazon Connect website to learn how Amazon Connect and Amazon Lex deliver seamless end-customer self-service experiences.

lex
#lex#launch

This post is co-written with Sunaina Kavi, AI/ML Product Manager at Omada Health. Omada Health, a longtime innovator in virtual healthcare delivery, launched a new nutrition experience in 2025, featuring OmadaSpark, an AI agent trained with robust clinical input that delivers real-time motivational interviewing and nutrition education. It was built on AWS. OmadaSpark was designed […]

#nova#sagemaker#launch

Amazon Connect now offers customers the ability to view the status of agent screen recordings in near real time in CloudWatch using Amazon EventBridge. With screen recording, supervisors can identify areas for agent coaching (e.g., non-compliance with business processes) by not only listening to customer calls or reviewing chat transcripts, but also watching agents’ actions while handling a contact (i.e., a voice call, chat, or task). Using Amazon EventBridge, customers can see the status of each agent screen recording, including success/failure, failure codes with descriptions, installed client version, agent web browser version, agent operating system, and screen recording start and end times, from CloudWatch. Customers can start using Amazon Connect screen recording status tracking by subscribing to the Screen Recording Status Changed event type on the Amazon EventBridge event bus. Screen recording status tracking is available in all the AWS Regions where Amazon Connect is already available. To learn more about screen recording, please visit the documentation and webpage. For information about screen recording pricing, visit the Amazon Connect pricing page.
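Subscribing comes down to an EventBridge rule whose pattern matches the new event type. The detail-type string below matches the announcement; the "aws.connect" source value is the usual Connect event source and is assumed here.

```python
import json

# Sketch: an EventBridge rule matching the new screen recording status
# events. The detail-type matches the announcement; the "aws.connect"
# source is assumed.

pattern = {
    "source": ["aws.connect"],
    "detail-type": ["Screen Recording Status Changed"],
}

rule_params = {
    "Name": "connect-screen-recording-status",
    "EventPattern": json.dumps(pattern),
    "State": "ENABLED",
}
# boto3.client("events").put_rule(**rule_params), then put_targets to route
# matched events to CloudWatch Logs or another target.
```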

#eventbridge#cloudwatch

Amazon Redshift Serverless, which allows you to run and scale analytics without having to provision and manage data warehouse clusters, is now generally available in the AWS Asia Pacific (New Zealand) region. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications. With a few clicks in the AWS Management Console, you can get started with querying data using the Query Editor V2 or your tool of choice with Amazon Redshift Serverless. There is no need to choose node types, node count, workload management, scaling, and other manual configurations. You can create databases, schemas, and tables, and load your own data from Amazon S3, access data using Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot. With Amazon Redshift Serverless, you can directly query data in open formats, such as Apache Parquet and Apache Iceberg, in Amazon S3 data lakes. Amazon Redshift Serverless provides unified billing for queries on any of these data sources, helping you efficiently monitor and manage costs. To get started, see the Amazon Redshift Serverless feature page, user documentation, and API Reference.
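Programmatically, getting started means creating a namespace (the database objects) and a workgroup (the compute). A minimal sketch with placeholder names; baseCapacity is in Redshift Processing Units (RPUs).

```python
# Sketch: a minimal Redshift Serverless setup via the API.
# Names are placeholders; run the client in the target Region.

namespace = {"namespaceName": "analytics"}

workgroup = {
    "workgroupName": "analytics-wg",
    "namespaceName": "analytics",
    "baseCapacity": 32,          # starting RPU capacity
    "publiclyAccessible": False,
}
# rs = boto3.client("redshift-serverless")
# rs.create_namespace(**namespace)
# rs.create_workgroup(**workgroup)
```

After that, queries run through Query Editor V2 or any JDBC/ODBC tool pointed at the workgroup endpoint, with per-second billing only while queries run.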

#s3#redshift#generally-available#now-available

Amazon Inspector scanning for Lambda functions and Elastic Container Registry (ECR) images now supports Java Gradle inventory and vulnerability scanning. This release also adds coverage for MySQL, MariaDB, PHP, Jenkins-core, 7zip (on Windows), Elasticsearch, and Curl/LibCurl. This update enhances Amazon Inspector's ability to detect vulnerabilities and misconfigurations across a broader range of applications and environments. Amazon Inspector is an automated vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure, helping organizations improve their security posture and meet compliance requirements. The new Java Gradle support allows Inspector to scan Java dependencies based on gradle.lockfile content, providing comprehensive vulnerability assessments for Java applications. When you use Inspector to scan Lambda functions and ECR images, you will now see findings for MySQL, MariaDB, PHP, Jenkins-core, 7zip (on Windows), Elasticsearch, and Curl/LibCurl installations. These enhancements enable more accurate detection of vulnerabilities in packages installed outside of package managers, improving overall security coverage for AWS customers using these technologies. To learn more about Amazon Inspector and how it can help secure your AWS workloads, visit the Amazon Inspector page. For a full list of Amazon Inspector supported operating systems and programming languages, see the user guide. You can start using these new features today in all AWS Regions where Amazon Inspector is available.

#lambda#organizations#ga#new-feature#update#enhancement

In this post, we explore how Amazon Nova Multimodal Embeddings addresses the challenges of crossmodal search through a practical ecommerce use case. We examine the technical limitations of traditional approaches and demonstrate how Amazon Nova Multimodal Embeddings enables retrieval across text, images, and other modalities. You learn how to implement a crossmodal search system by generating embeddings, handling queries, and measuring performance. We provide working code examples and share how to add these capabilities to your applications.
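Once embeddings exist for both the text query and the catalog images, crossmodal retrieval reduces to nearest-neighbor search in the shared vector space. The toy 3-dimensional vectors below stand in for real Nova embeddings (which are much higher-dimensional) to illustrate the ranking step.

```python
import math

# Sketch: ranking catalog images against a text query by cosine similarity,
# the core retrieval step once Nova Multimodal Embeddings has produced
# vectors for both modalities. Vectors here are toy stand-ins.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

query = [0.9, 0.1, 0.0]                      # embedding of "red sneakers"
catalog = {
    "red-sneaker.jpg": [0.8, 0.2, 0.1],
    "blue-jacket.jpg": [0.1, 0.9, 0.3],
}

best = max(catalog, key=lambda k: cosine(query, catalog[k]))
print(best)  # red-sneaker.jpg
```

In production the catalog vectors would live in a vector index rather than a dict, but the similarity metric and the argmax over it are the same.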

nova
#nova

Amazon Lightsail now offers two larger database bundles with up to 8 vCPUs, 32GB memory, and 960GB SSD storage. The new database bundles are available in both standard and high-availability plans. You can create MySQL and PostgreSQL databases using the new Lightsail managed database bundles. The new larger database bundles enable you to scale your database workloads and run more data-intensive applications in Lightsail. These higher-performance database bundles are ideal for production workloads that require increased storage capacity and processing power to handle growing datasets and concurrent connections. Using these new bundles, you can run e-commerce platforms, content management systems, business intelligence applications, SaaS products, and more. These new bundles are now available in all AWS Regions where Amazon Lightsail is available. For more information on pricing, or to get started with your free trial, click here.

#now-available

Amazon EMR Serverless now supports job run-level cost allocation that provides better visibility into charges for individual job runs by allowing you to configure granular billing attribution at the individual job run level. You can get granular cost visibility by filtering and tracking costs in AWS Cost Explorer and Cost and Usage Reports by specific job run IDs and cost allocation tags associated with job runs. Amazon EMR Serverless is a deployment option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. Previously, you could assign cost allocation tags to EMR Serverless applications, with cost attribution limited to the application level. With job run-level cost allocation, now you can assign cost allocation tags to each job run, enabling fine-grained billing attribution at the individual job run level. Cost allocation tags at the job run level also allow you to track costs by domains within a single application. For example, a single application could support jobs for finance and marketing domains, allowing you to track costs separately for each domain. Tracking costs for individual job runs makes it easier to conduct benchmarks that assess the costs of each job run and to focus cost optimization efforts more precisely, allowing deeper insights into resource utilization and spending patterns across different jobs and domains. This feature is available in all AWS Regions where Amazon EMR Serverless is available, including the AWS GovCloud (US) and China regions. To learn more, see Enabling Job Level Cost Allocation in the Amazon EMR Serverless User Guide.
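The finance/marketing example above maps to a tag on each StartJobRun request. The sketch below assumes a `tags` argument on StartJobRun per this launch; ARNs and the script location are placeholders.

```python
# Sketch: attaching cost allocation tags to an individual EMR Serverless
# job run. The `tags` argument reflects this launch; identifiers are
# placeholders.

job_run = {
    "applicationId": "00example123",
    "executionRoleArn": "arn:aws:iam::111122223333:role/emr-serverless-job",
    "jobDriver": {
        "sparkSubmit": {"entryPoint": "s3://my-bucket/jobs/etl.py"}
    },
    "tags": {"Domain": "finance", "CostCenter": "cc-1042"},
}
# boto3.client("emr-serverless").start_job_run(**job_run)
# In the Billing console, activate "Domain" as a cost allocation tag, then
# filter Cost Explorer by Domain=finance to isolate those job runs' spend.
```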

emr
#emr#support

Quantized models can be seamlessly deployed on Amazon SageMaker AI using a few lines of code. In this post, we explore why quantization matters—how it enables lower-cost inference, supports deployment on resource-constrained hardware, and reduces both the financial and environmental impact of modern LLMs, while preserving most of their original performance. We also take a deep dive into the principles behind PTQ and demonstrate how to quantize the model of your choice and deploy it on Amazon SageMaker.
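The core idea of PTQ can be shown on a toy weight vector: map float32 weights to int8 with a scale factor, then dequantize. Real PTQ toolchains add calibration data, per-channel scales, and zero points, but the storage saving and bounded rounding error are the same in principle.

```python
# Sketch: symmetric int8 post-training quantization on a toy weight vector.
# Real libraries add calibration, per-channel scales, and zero points.

def quantize_int8(weights):
    """Map floats to int8 values plus a shared scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.52, -1.27, 0.03, 0.98]
q, scale = quantize_int8(w)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(w, restored))
# int8 storage is 4x smaller than float32, at the cost of a rounding error
# bounded by half the quantization step (scale / 2).
```

This is why quantized LLMs retain most of their accuracy: the per-weight error is tiny relative to the weight magnitudes, while memory and bandwidth drop fourfold or more.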

sagemaker
#sagemaker#support

This post, developed through a strategic scientific partnership between AWS and the Instituto de Ciência e Tecnologia Itaú (ICTi), an R&D hub maintained by Itaú Unibanco, the largest private bank in Latin America, explores the technical aspects of sentiment analysis for both text and audio. We present experiments comparing multiple machine learning (ML) models and services, discuss the trade-offs and pitfalls of each approach, and highlight how AWS services can be orchestrated to build robust, end-to-end solutions. We also offer insights into potential future directions, including more advanced prompt engineering for large language models (LLMs) and expanding the scope of audio-based analysis to capture emotional cues that text data alone might miss.

This post provides a detailed architectural overview of how TrueLook built its AI-powered safety monitoring system using SageMaker AI, highlighting key technical decisions, pipeline design patterns, and MLOps best practices. You will gain valuable insights into designing scalable computer vision solutions on AWS, particularly around model training workflows, automated pipeline creation, and production deployment strategies for real-time inference.

sagemaker
#sagemaker#ga

Amazon Relational Database Service (Amazon RDS) for SQL Server now supports setting up cross-region read replicas in 16 additional AWS Regions. Cross-region read replicas enable customers to provide a replica database for read-only applications closer to users in a different region, and scale out read-only workloads. Since a read replica can be "promoted" to a standalone production database, cross-region read replicas can also be used for disaster recovery in case of regional failures. Customers can create up to fifteen read replicas in the same or a different region as the primary database instance. This launch adds support for cross-region read replicas in RDS for SQL Server in the following AWS Regions: Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Malaysia), Asia Pacific (Melbourne), Asia Pacific (Taipei), Asia Pacific (Thailand), Canada West (Calgary), Europe (Milan), Europe (Spain), Europe (Zurich), Israel (Tel Aviv), Mexico (Central), Middle East (Bahrain), and Middle East (UAE). To get started, visit the Amazon RDS SQL Server User Guide.
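For a cross-region replica, the source instance is referenced by its full ARN and the API client is created in the destination Region. A sketch with placeholder identifiers; create_db_instance_read_replica and promote_read_replica are the standard RDS APIs.

```python
# Sketch: creating a cross-region read replica of an RDS for SQL Server
# instance. The source ARN and identifiers are placeholders.

replica = {
    "DBInstanceIdentifier": "sales-db-replica",
    # Cross-region replicas reference the source by full ARN:
    "SourceDBInstanceIdentifier":
        "arn:aws:rds:us-east-1:111122223333:db:sales-db",
}
# rds = boto3.client("rds", region_name="me-central-1")  # destination Region
# rds.create_db_instance_read_replica(**replica)
# For disaster recovery, the replica can later be promoted to a standalone
# instance with rds.promote_read_replica(...).
```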

rds
#rds#launch#ga#support

Amazon SageMaker announced a new feature that you can use to add custom tags to resources created through an Amazon SageMaker Unified Studio project. This helps you enforce tagging standards that conform to your organization’s service control policies (SCPs) and helps enable cost tracking reporting practices on resources created across the organization. In this post, we look at use cases for custom tags and how to use the AWS Command Line Interface (AWS CLI) to add tags to project resources.
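The CLI invocation the post describes is the standard SageMaker add-tags command. The helper below assembles it with a hypothetical resource ARN and tag values.

```python
# Sketch: building the AWS CLI command to tag a resource created by a
# SageMaker Unified Studio project. ARN and tag values are hypothetical;
# `aws sagemaker add-tags` is the standard tagging command.

def add_tags_command(resource_arn, tags):
    cmd = ["aws", "sagemaker", "add-tags",
           "--resource-arn", resource_arn, "--tags"]
    cmd += [f"Key={k},Value={v}" for k, v in tags.items()]
    return cmd

cmd = add_tags_command(
    "arn:aws:sagemaker:us-east-1:111122223333:domain/d-example",
    {"CostCenter": "analytics", "Project": "unified-studio-demo"},
)
print(" ".join(cmd))
# Run via subprocess.run(cmd), or paste the printed command into a shell.
```

SCP-enforced tagging standards can then be satisfied by applying the same tag keys across all project resources.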

#sagemaker#unified studio#rds#ga#new-feature

Amazon Web Services (AWS) Lambda now supports .NET 10 as both a managed runtime and base container image. .NET is a popular language for building serverless applications. Developers can now use the new features and enhancements in .NET when creating serverless applications on Lambda. This includes support for file-based apps to streamline your projects by implementing functions using just a single file.

lambda
#lambda#now-available#new-feature#enhancement#support

Amazon MQ now supports the ability for RabbitMQ brokers to perform authentication (determining who can log in) using X.509 client certificates with mutual TLS (mTLS). The RabbitMQ auth_mechanism_ssl plugin can be configured on brokers running RabbitMQ version 4.2 and above on Amazon MQ by making changes to the associated configuration file. To start using certificate-based authentication on Amazon MQ, simply select RabbitMQ 4.2 when creating a new broker using the M7g instance type through the AWS Management Console, AWS CLI, or AWS SDKs, and then edit the associated configuration file with the required values. To learn more about the plugin, see the Amazon MQ release notes and the Amazon MQ developer guide. This plugin is available in all regions where Amazon MQ RabbitMQ 4 instances are available today.
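For reference, these are the settings the auth_mechanism_ssl plugin uses in upstream RabbitMQ's rabbitmq.conf; verify the exact keys Amazon MQ accepts in its configuration file against the developer guide before applying them.

```ini
# Sketch: upstream RabbitMQ settings for certificate-based (EXTERNAL)
# authentication; confirm against the Amazon MQ developer guide.

# Offer EXTERNAL (client certificate) first, keep PLAIN as a fallback
auth_mechanisms.1 = EXTERNAL
auth_mechanisms.2 = PLAIN

# Derive the RabbitMQ username from the client certificate's common name
ssl_cert_login_from = common_name
```

The username taken from the certificate must still exist as a RabbitMQ user on the broker for the login to succeed.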

q developer
#q developer#support

This two-part series explores Flo Health's journey with generative AI for medical content verification. Part 1 examines our proof of concept (PoC), including the initial solution, capabilities, and early results. Part 2 focuses on scaling challenges and real-world implementation. Each article stands alone while collectively showing how AI transforms medical content management at scale.

bedrock
#bedrock

Amazon DocumentDB (with MongoDB compatibility) is now available in the Asia Pacific (Jakarta) Region, adding to the list of available Regions where you can use Amazon DocumentDB. Amazon DocumentDB is a fully managed, native JSON database that makes it simple and cost-effective to operate critical document workloads at virtually any scale without managing infrastructure. Amazon DocumentDB is designed to give you the scalability and durability you need when operating mission-critical MongoDB workloads. Storage scales automatically up to 128TiB without any impact to your application. In addition, Amazon DocumentDB natively integrates with AWS Database Migration Service (DMS), Amazon CloudWatch, AWS CloudTrail, AWS Lambda, AWS Backup, and more. Amazon DocumentDB supports millions of requests per second and can be scaled out to 15 low-latency read replicas in minutes with no application downtime. To learn more about Amazon DocumentDB, please visit the Amazon DocumentDB product page and pricing page. You can create an Amazon DocumentDB cluster from the AWS Management Console, AWS Command Line Interface (CLI), or SDK.

lambdacloudwatch
#lambda#cloudwatch#now-available#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the Asia Pacific (Mumbai), Asia Pacific (Hyderabad), and Europe (Paris) Regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver 20% higher performance than R7i instances, with even higher gains for specific workloads: up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to R7i. R8i-flex, our first memory-optimized Flex instances, are the easiest way to get price-performance benefits for a majority of memory-intensive workloads. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. R8i instances are a great choice for all memory-intensive workloads, especially those that need the largest instance sizes or continuous high CPU usage. R8i instances offer 13 sizes, including 2 bare metal sizes and the new 96xlarge size for the largest applications. R8i instances are SAP-certified and deliver 142,100 aSAPS, providing exceptional performance for mission-critical SAP workloads. To get started, sign in to the AWS Management Console. For more information about the R8i and R8i-flex instances, visit the AWS News blog.

lexec2
#lex#ec2#ga#now-available

Amazon Quick is expanding its third-party integrations by adding AI agents and growing its built-in actions library. Quick is Amazon's new AI-powered workspace and agentic teammate that helps organizations get answers from their business data and move quickly from insights to action. As organizations navigate newly adopted AI agents and work with existing enterprise tools for CRM, support, collaboration, and more, users face fragmented experiences. Users are forced to switch between different interfaces, repeat context, and manually stitch together outputs. Quick enables users to work with third-party agents and enterprise tools from a single interface, eliminating the wasted time and cognitive load of constantly switching between applications. With Quick, business users can now invoke specialized agents from Box, Canva, and PagerDuty to accomplish chat and automation tasks. For example, you can pull incident insights from PagerDuty, generate a presentation in Canva, and query documents stored in Box - all directly from Quick. Additionally, Quick has expanded its built-in actions to include integrations with GitHub, Notion, Canva, Box, Linear, Hugging Face, Monday.com, HubSpot, Intercom, and more. This enables Quick users to accomplish tasks like creating GitHub issues, summarizing meeting notes in Notion, managing their CRM, and more. Beyond our new built-in integrations, customers can continue to leverage custom Model Context Protocol (MCP) and OpenAPI connectors to connect Quick to thousands of additional applications. These features are now available in all AWS Regions where Amazon Quick is available. To learn more, visit the Amazon Quick Supported Integrations Guide and Integration Specific Guide.

amazon qorganizations
#amazon q#organizations#ga#now-available#integration#support

Starting today, Amazon EC2 M8i instances are available in the Europe (Frankfurt) and Asia Pacific (Malaysia) Regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. M8i instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver up to 20% better performance than M7i instances, with even higher gains for specific workloads. The M8i instances are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to M7i instances. M8i instances are a great choice for all general-purpose workloads, especially those that need the largest instance sizes or continuous high CPU usage. The SAP-certified M8i instances offer 13 sizes, including 2 bare metal sizes and the new 96xlarge size for the largest applications. To get started, sign in to the AWS Management Console. For more information about the new instances, visit the M8i instance page or the AWS News blog.

ec2
#ec2#ga#now-available

This post shows an automated PII detection and redaction solution using Amazon Bedrock Data Automation and Amazon Bedrock Guardrails through a use case of processing text and image content in high volumes of incoming emails and attachments. The solution features a complete email processing workflow with a React-based user interface for authorized personnel to more securely manage and review redacted email communications and attachments. We walk through the step-by-step solution implementation procedures used to deploy this solution. Finally, we discuss the solution benefits, including operational efficiency, scalability, security and compliance, and adaptability.

bedrock
#bedrock

AWS Lambda now supports creating serverless applications using .NET 10. Developers can use .NET 10 as both a managed runtime and a container base image, and AWS will automatically apply updates to the managed runtime and base image as they become available. .NET 10 is the latest long-term support release of .NET and is expected to be supported for security and bug fixes until November 2028. This release provides Lambda developers with access to the latest .NET features, including file-based apps. It also includes support for Lambda Managed Instances, enabling you to run Lambda functions on Amazon EC2 instances while maintaining serverless operational simplicity, providing cost efficiency and specialized compute options. Powertools for AWS Lambda (.NET), a developer toolkit to implement serverless best practices and increase developer velocity, also supports .NET 10. You can use the full range of AWS deployment tools, including the Lambda console, AWS CLI, AWS Serverless Application Model (AWS SAM), AWS CDK, and AWS CloudFormation to deploy and manage serverless applications written in .NET 10. The .NET 10 runtime is available in all Regions, including the AWS GovCloud (US) Regions and China Regions. For more information, including guidance on upgrading existing Lambda functions, see our blog post. For more information about AWS Lambda, visit our product page.

lambdaec2cloudformation
#lambda#ec2#cloudformation#update#support

re:Invent 2025 showcased the bold Amazon Web Services (AWS) vision for the future of analytics, one where data warehouses, data lakes, and AI development converge into a seamless, open, intelligent platform, with Apache Iceberg compatibility at its core. Across over 18 major announcements spanning three weeks, AWS demonstrated how organizations can break down data silos, […]

eksorganizations
#eks#organizations#ga#announcement

Today, AWS announces a simplified onboarding experience for AWS Client VPN, introducing a new Quickstart setup method that streamlines the process of creating and configuring Client VPN endpoints. AWS Client VPN allows you to securely connect remote users to AWS resources and on-premises networks. The new Quickstart setup reduces the number of steps required to set up a Client VPN endpoint. You can now easily set up Client VPN endpoints with pre-defined default configurations, requiring only three key inputs: IPv4 CIDR, server certificate ARN, and subnet selection. For example, development teams in large organizations who use Client VPN for remote access to their VPC resources for quick testing can now create endpoints quickly with the new simplified setup process. The Quickstart method is available alongside the existing Standard setup option, giving you the flexibility to choose the approach that best fits your deployment needs. Additionally, when you create a VPC, the Client VPN Quickstart workflow is automatically suggested as a follow-up step. Once endpoint creation is complete, you can immediately download the client configuration file to connect using your VPN client. You can later modify and enhance your endpoint configuration using the Client VPN standard console or API as your usage patterns evolve. This enhancement is available at no additional cost in all AWS Regions where AWS Client VPN is generally available. To learn more, visit the AWS Client VPN product page and read the AWS Client VPN documentation.
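For readers who script their setups, the three Quickstart inputs map naturally onto parameters of the underlying EC2 CreateClientVpnEndpoint API. A hedged sketch, not the Quickstart itself (all ARNs and IDs are placeholders, and the authentication block is an assumption about what the Quickstart fills in by default):

```python
# The three Quickstart inputs, expressed as CreateClientVpnEndpoint parameters.
quickstart_params = {
    "ClientCidrBlock": "10.100.0.0/16",  # 1. IPv4 CIDR
    "ServerCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example",  # 2. server certificate ARN
    # Assumption: mutual (certificate) authentication as the default auth method.
    "AuthenticationOptions": [{
        "Type": "certificate-authentication",
        "MutualAuthentication": {
            "ClientRootCertificateChainArn": "arn:aws:acm:us-east-1:111122223333:certificate/example-root"
        },
    }],
    "ConnectionLogOptions": {"Enabled": False},
}
# With boto3 this would be passed as:
#   boto3.client("ec2").create_client_vpn_endpoint(**quickstart_params)
# 3. Subnet selection is a follow-up AssociateClientVpnTargetNetwork call.
print(sorted(quickstart_params))
```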

lexorganizations
#lex#organizations#generally-available#ga#enhancement

You can now create Apache Airflow version 2.11 environments on Amazon Managed Workflows for Apache Airflow (MWAA). Amazon MWAA is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud. Apache Airflow 2.11 introduces several notable enhancements, such as new trigger-based scheduling for delta intervals, consistent reporting of metrics in milliseconds, and other changes that help you prepare for upgrading to Apache Airflow 3. In addition, MWAA now provides support for Python 3.12 that you can leverage in your workflows. You can launch a new Apache Airflow 2.11 environment on Amazon MWAA with just a few clicks in the AWS Management Console in all currently supported Amazon MWAA Regions. To learn more about Apache Airflow 2.11, visit the Amazon MWAA documentation and the Apache Airflow 2.11 change log in the Apache Airflow documentation. Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.

#launch#enhancement#support

Starting today, Amazon EC2 I7ie instances are available in the AWS Asia Pacific (Mumbai), Canada West (Calgary), and Europe (Paris) Regions. Designed for large, storage-I/O-intensive workloads, I7ie instances are powered by 5th Gen Intel Xeon processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances. I7ie instances offer up to 120TB local NVMe storage density for storage optimized instances, and up to twice as many vCPUs and memory compared to prior-generation instances. Powered by 3rd-generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances. I7ie instances are high-density storage optimized instances, ideal for workloads requiring fast local storage with high random read/write performance and consistently low latency when accessing large data sets. These instances are available in 9 different virtual sizes and deliver up to 100Gbps of network bandwidth and 60Gbps of bandwidth for Amazon Elastic Block Store (EBS). To learn more, visit the I7ie instances page.

ec2
#ec2#ga#now-available

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8i and C8i-flex instances are available in the Asia Pacific (Mumbai), Asia Pacific (Seoul), and Asia Pacific (Tokyo) Regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The C8i and C8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver up to 20% higher performance than C7i and C7i-flex instances, with even higher gains for specific workloads. The C8i and C8i-flex are up to 60% faster for NGINX web applications, up to 40% faster for AI deep learning recommendation models, and 35% faster for Memcached stores compared to C7i and C7i-flex. C8i-flex instances are the easiest way to get price-performance benefits for a majority of compute-intensive workloads like web and application servers, databases, caches, Apache Kafka, Elasticsearch, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. C8i instances are a great choice for all compute-intensive workloads, especially those that need the largest instance sizes or continuous high CPU usage. C8i instances offer 13 sizes, including 2 bare metal sizes and the new 96xlarge size for the largest applications. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information about the new C8i and C8i-flex instances, visit the AWS News blog.

lexec2kafka
#lex#ec2#kafka#ga#now-available

In this post, you'll learn how Amazon EMR Serverless eliminates the need to configure local disk storage for Apache Spark workloads through a new serverless storage capability. We explain how this feature automatically handles shuffle operations, reduces data processing costs by up to 20%, prevents job failures from disk capacity constraints, and enables elastic scaling by decoupling storage from compute.

emr
#emr

Organizations often struggle with building scalable and maintainable data lakes—especially when handling complex data transformations, enforcing data quality, and monitoring compliance with established governance. Traditional approaches typically involve custom scripts and disparate tools, which can increase operational overhead and complicate access control. A scalable, integrated approach is needed to simplify these processes, improve data reliability, […]

lexorganizations
#lex#organizations#ga

Today, AWS announces collection visibility in AWS Marketplace Seller Reporting, which adds up-to-date payment collection status to the Billed Revenue Dashboard and Billing Event Data Feed. This enhancement enables sellers to distinguish between invoiced, collected, and disbursed amounts, eliminating the visibility gap between invoice creation and disbursement. With this feature, sellers can make informed business decisions and reduce unnecessary follow-ups with customers about payment status. Collection visibility particularly benefits sellers using monthly disbursement who previously waited up to 30 days to understand payment collection status. All AWS Marketplace sellers can now improve payment forecasting accuracy and detect collection issues earlier. This enhanced visibility streamlines seller operations and improves customer relationships by providing clarity on payment status. Collection visibility is available in all AWS Regions where AWS Seller Reporting is available. The feature launches on January 6th, 2026, for all AWS sellers. To access collection visibility, log in to the AWS Marketplace Management Portal and navigate to Insights → Finance Operations.

forecast
#forecast#launch#ga#enhancement

Amazon Elastic Container Service (Amazon ECS) now supports tmpfs mounts for Linux tasks running on AWS Fargate and Amazon ECS Managed Instances, extending beyond the EC2 launch type. With tmpfs, you can now create memory‑backed file systems for your containerized workloads without writing this data to task storage. tmpfs mounts provide a temporary file system that is backed by memory and exposed inside the container at a path you choose. This is ideal for performance‑sensitive workloads that need fast access to scratch files, caches, or temporary working sets, and for security‑sensitive data such as short‑lived secrets or credentials, because the data does not persist after the task stops. tmpfs also lets you keep the container root file system read‑only using the readonlyRootFilesystem setting while still allowing applications to write to specific in‑memory directories. To get started, update your task definition so that the container definitions include a linuxParameters block with one or more tmpfs entries. For each tmpfs mount, specify the containerPath, size, and optional mountOptions. You can register or update task definitions using the Amazon ECS console, AWS CLI, AWS CloudFormation, or AWS CDK. This feature is available in all AWS Regions where Amazon ECS, AWS Fargate, and Amazon ECS Managed Instances are supported. To learn more, see the LinuxParameters and Tmpfs sections in the Amazon ECS API Reference and the Amazon ECS Developer Guide.
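A container-definition fragment with such a mount might look like the following sketch (the image name, path, and size are illustrative; size is in MiB):

```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "image": "public.ecr.aws/docker/library/nginx:latest",
      "essential": true,
      "readonlyRootFilesystem": true,
      "linuxParameters": {
        "tmpfs": [
          {
            "containerPath": "/scratch",
            "size": 256,
            "mountOptions": ["noexec", "nosuid"]
          }
        ]
      }
    }
  ]
}
```

Here the root filesystem stays read-only while the application can still write scratch data to the in-memory `/scratch` directory, which disappears when the task stops.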

ec2ecsfargatecloudformation
#ec2#ecs#fargate#cloudformation#launch#ga

Amazon MQ now supports the ability for RabbitMQ brokers to perform authentication (determining who can log in) and authorization (determining what permissions they have) by making requests to an HTTP server. This plugin can be configured on brokers running RabbitMQ 4.2 and above on Amazon MQ by making changes to the associated configuration file. To start using HTTP-based authentication and authorization on Amazon MQ, select RabbitMQ 4.2 when creating a new broker using the M7g instance type through the AWS Management Console, AWS CLI, or AWS SDKs, and then edit the associated configuration file. To learn more about the plugin, see the Amazon MQ release notes and the Amazon MQ developer guide. This plugin is available in all Regions where Amazon MQ RabbitMQ 4 instances are available today.
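In standard RabbitMQ configuration-file syntax, wiring the broker to an HTTP authentication/authorization service typically involves settings like these (a sketch: the endpoint URLs are placeholders for your own service, and the exact keys Amazon MQ accepts should be confirmed in the developer guide):

```ini
# Route authentication and authorization decisions to an external HTTP service.
auth_backends.1 = http

# Placeholder endpoints for a hypothetical internal auth service.
auth_http.http_method   = post
auth_http.user_path     = http://auth.internal.example:8000/auth/user
auth_http.vhost_path    = http://auth.internal.example:8000/auth/vhost
auth_http.resource_path = http://auth.internal.example:8000/auth/resource
auth_http.topic_path    = http://auth.internal.example:8000/auth/topic
```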

q developer
#q developer#support

AWS Config now supports 21 additional AWS resource types across key services including Amazon EC2, Amazon SageMaker, and Amazon S3 Tables. This expansion provides greater coverage over your AWS environment, enabling you to more effectively discover, assess, audit, and remediate an even broader range of resources. With this launch, if you have enabled recording for all resource types, AWS Config will automatically track these new additions. The newly supported resource types are also available in Config rules and Config aggregators. You can now use AWS Config to monitor the following newly supported resource types in all AWS Regions where the supported resources are available:

AWS::AppStream::AppBlockBuilder
AWS::B2BI::Capability
AWS::CleanRoomsML::TrainingDataset
AWS::CloudFront::KeyValueStore
AWS::Connect::SecurityProfile
AWS::Deadline::Monitor
AWS::EC2::SubnetCidrBlock
AWS::ECR::ReplicationConfiguration
AWS::GameLift::Build
AWS::GuardDuty::MalwareProtectionPlan
AWS::ImageBuilder::LifecyclePolicy
AWS::IoT::ThingGroup
AWS::IoTSiteWise::Asset
AWS::Location::APIKey
AWS::MediaPackageV2::OriginEndpoint
AWS::PCAConnectorAD::Connector
AWS::Route53::DNSSEC
AWS::S3Tables::TableBucketPolicy
AWS::SageMaker::UserProfile
AWS::SecretsManager::ResourcePolicy
AWS::SSMContacts::Contact

sagemakers3ec2cloudfront
#sagemaker#s3#ec2#cloudfront#launch#ga

Starting today, the Amazon Elastic Compute Cloud (Amazon EC2) G5 instances powered by NVIDIA A10G Tensor Core GPUs are now available in the Asia Pacific (Hong Kong) region. G5 instances can be used for a wide range of graphics intensive and machine learning use cases. Customers can use G5 instances for graphics-intensive applications such as remote workstations, video rendering, and cloud gaming to produce high fidelity graphics in real time. Machine learning customers can use G5 instances for high performance and cost-efficient training and inference for natural language processing, computer vision, and recommender engine use cases. G5 instances feature up to 8 NVIDIA A10G Tensor Core GPUs and 2nd generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.6 TB of local NVMe SSD storage. With eight G5 instance sizes that offer access to single or multiple GPUs, customers have the flexibility to pick the right instance size for their applications. Customers can easily optimize G5 instances for their workloads with NVIDIA drivers specific to compute, gaming or workstation workloads. Customers can purchase G5 instances as On-Demand Instances or Reserved Instances.

lexec2
#lex#ec2#ga#now-available#support

Today, AWS Resource Explorer has expanded the availability of resource search and discovery to the Asia Pacific (New Zealand) Region. With AWS Resource Explorer you can search for and discover your AWS resources across AWS Regions and accounts in your organization, either using the AWS Resource Explorer console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the unified search bar from wherever you are in the AWS Management Console. For more information about the Regions where AWS Resource Explorer is available, see the AWS Region table. To turn on AWS Resource Explorer, visit the AWS Resource Explorer console. Read about getting started in our AWS Resource Explorer documentation, or explore the AWS Resource Explorer product page.

#ga#now-available

Customers in AWS Asia Pacific (New Zealand) Region can now use AWS Transfer Family for file transfers over Secure File Transfer Protocol (SFTP), File Transfer Protocol (FTP), FTP over SSL (FTPS) and Applicability Statement 2 (AS2). AWS Transfer Family provides fully managed file transfers for Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS) over SFTP, FTP, FTPS and AS2 protocols. In addition to file transfers, Transfer Family enables common file processing and event-driven automation for managed file transfer (MFT) workflows, helping customers to modernize and migrate their business-to-business file transfers to AWS. To learn more about AWS Transfer Family, visit our product page and user guide. See the AWS Region Table for complete regional availability information.

s3
#s3#now-available

Today, AWS announces new Spot interruption metrics for Amazon EC2 Capacity Manager that allow you to better understand Spot capacity across your organization. EC2 Capacity Manager helps you monitor, analyze, and manage your EC2 capacity across On-Demand, Spot, and Capacity Reservations from a single location. With this new capability, you can now track how many Spot instances are running, monitor interruption counts, and calculate interruption rates across regions, availability zones, and accounts. This enables you to make data-driven decisions about your Spot instance strategy. EC2 Capacity Manager now includes three new metrics: ‘Spot Total Count’, ‘Spot Total Interruptions’, and ‘Spot Interruption Rate’. ‘Spot Total Count’ shows the total number of distinct Spot instances or vCPUs that ran during a selected period, ‘Spot Total Interruptions’ tracks how many were interrupted, and ‘Spot Interruption Rate’ calculates the percentage of running instances that experienced interruptions. This data helps you identify patterns, compare across different regions and availability zones, and optimize your Spot instance strategy by diversifying instance types, expanding across availability zones, or using Spot placement score to identify optimal capacity pools with higher availability. EC2 Capacity Manager with Spot interruption metrics is available in all commercial AWS Regions, enabled by default at no additional cost. To get started, visit EC2 Capacity Manager in the AWS console.
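The three metrics relate by simple arithmetic. A sketch of the relationship as the announcement describes it (the counts below are invented for illustration):

```python
def spot_interruption_rate(total_count, total_interruptions):
    """Percentage of distinct Spot instances (or vCPUs) that ran during the
    selected period and were interrupted — mirroring how 'Spot Interruption
    Rate' is derived from 'Spot Total Count' and 'Spot Total Interruptions'."""
    if total_count == 0:
        return 0.0
    return 100.0 * total_interruptions / total_count

# Example: 2,000 distinct Spot instances ran during the period, 24 were interrupted.
print(spot_interruption_rate(2000, 24))  # → 1.2
```

Comparing this rate across regions and availability zones is what lets you pick the capacity pools with the fewest interruptions.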

ec2
#ec2#ga#new-capability

Today, AWS Clean Rooms announces the launch of detailed monitoring for SQL queries in a collaboration. This new capability publishes detailed metrics to CloudWatch for operational monitoring of collaborations, including query performance and resource utilization. You can choose to publish detailed monitoring metrics for SQL queries run in a Clean Rooms collaboration to CloudWatch, helping you improve the observability for your workloads at scale. The collaboration creator can enable detailed monitoring for a collaboration, and the analysis runner or configured payor can enable detailed monitoring when configuring their collaboration membership. For example, advertisers can monitor their campaign lift analysis queries in CloudWatch to identify performance issues and optimize costs. With AWS Clean Rooms, customers can create a secure data clean room in minutes and collaborate with any company on AWS or Snowflake to generate unique insights about advertising campaigns, investment decisions, and research and development. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.

cloudwatch
#cloudwatch#launch#support#new-capability

Amazon Connect now enables you to store and work with complex data structures in your flows, making it easy to build dynamic automated experiences that use rich information returned from your internal business systems. You can save complete data records, including nested JSON objects and lists, and reference specific elements within them, such as a particular order from a list of orders returned in JSON format. Additionally, you can automatically loop through lists of items in your customer service flows, moving through each entry in sequence while tracking the current position in the loop. This allows you to easily access item-level details and present relevant information to end-customers. For example, a travel agency can retrieve all of a customer’s itineraries in a single request and guide the caller through each booking to review or update their reservations. A bank can similarly walk customers through recent transactions one by one using data retrieved securely from its systems. These capabilities reduce the need for repeated calls to your business systems, simplify workflow design, and make it easier to deliver advanced automated experiences that adapt as your business requirements evolve. To learn more about these features, see the Amazon Connect Administrator Guide. These features are available in all AWS regions where Amazon Connect is available. To learn more about Amazon Connect, AWS’s AI-native customer experience solution, please visit the Amazon Connect website.
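As an illustration of the looping pattern in ordinary code (the data shape here is hypothetical; Connect flow blocks express the same idea declaratively), the sketch below steps through a nested list returned in JSON format while tracking the current position:

```python
# A business-system response with a nested list, as a flow might receive it.
response = {
    "customerId": "C-1001",
    "itineraries": [
        {"bookingId": "B-1", "destination": "Lima"},
        {"bookingId": "B-2", "destination": "Osaka"},
    ],
}

# Move through each entry in sequence while tracking the current position,
# the way a flow loops through a list of items.
prompts = []
for position, item in enumerate(response["itineraries"], start=1):
    prompts.append(f"Booking {position}: {item['bookingId']} to {item['destination']}")

print(prompts)  # → ['Booking 1: B-1 to Lima', 'Booking 2: B-2 to Osaka']
```

Retrieving the whole list once and iterating locally is what lets a flow avoid repeated calls back to the business system.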

lexrds
#lex#rds#update

Starting today, AWS WAF is available in the AWS Asia Pacific (New Zealand) Region. AWS WAF is a web application firewall that helps you protect your web application resources against common web exploits and bots that can affect availability, compromise security, or consume excessive resources. With AWS WAF, you can control access to your content. Based on conditions that you specify, such as the IP addresses that requests originate from or the values of query strings, your protected resource responds to requests either with the requested content, with an HTTP 403 status code (Forbidden), or with a custom response. To see the full list of regions where AWS WAF is currently available, visit the AWS Region Table. For more information about the service, visit the AWS WAF page. For more information about pricing, visit the AWS WAF Pricing page.

waf
#waf#ga#now-available

Today, AWS launches simplified import of CloudTrail Lake data in Amazon CloudWatch, a data management and analytics service that allows you to unify operational, security, and compliance data across your AWS environment and third-party sources. With this launch, you can import your historical CloudTrail Lake data into CloudWatch in a few steps, making it easy to consolidate operational, security, and compliance data in one place. In CloudWatch, you simply specify the CloudTrail Lake event data store (EDS) and the date range to initiate the import of your CloudTrail data. Simplified import of CloudTrail Lake data is supported via the AWS console, CLI, and SDK. While simplified import of CloudTrail Lake data is available at no additional cost, you incur CloudWatch fees based on custom logs pricing. To learn more about simplified import of CloudTrail Lake data and supported AWS Regions, visit the Amazon CloudWatch documentation.

cloudwatch
#cloudwatch#launch#support

This post shows you how to migrate your self-managed MLflow tracking server to an MLflow App – a serverless tracking server on SageMaker AI that automatically scales resources based on demand while removing server patching and storage management tasks at no cost. Learn how to use the MLflow Export Import tool to transfer your experiments, runs, models, and other MLflow resources, including instructions to validate your migration's success.

sagemaker
#sagemaker

Amazon OpenSearch Service now supports AWS KMS customer managed keys (CMKs) and an increased size limit for OpenSearch UI metadata. Amazon OpenSearch UI is a managed service for dashboarding and operational analytics that provides a unified view across multiple data sources, including OpenSearch domains and collections, Amazon S3, Amazon CloudWatch, and AWS Security Lake. You can now create new OpenSearch UI applications with metadata encrypted with your own CMKs, helping organizations meet regulatory and compliance requirements. This launch also increases the metadata size limit for saved objects in OpenSearch UI, enabling you to create and store complex queries, extensive visualizations, and large-scale dashboards. CMK support and the increased metadata size are available in all Regions where OpenSearch UI is available. Learn more in the Amazon OpenSearch UI Developer Guide.

lexs3opensearchopensearch servicerds+2 more
#lex#s3#opensearch#opensearch service#rds#cloudwatch

Amazon Connect now automates agent performance evaluations in Portuguese, French, Italian, German, and Spanish using generative AI. Managers define custom evaluation criteria in natural language and receive AI-generated evaluations with justifications in their preferred language. Performance evaluations also support cross-language evaluation and can complete assessments in English even when the conversation is in another language. This enables multilingual contact centers to use a standardized evaluation framework across languages. This feature is supported in 8 AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (London), Canada (Central), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore). For information about Amazon Connect pricing, please visit our pricing page. To learn more, please visit our documentation and our webpage.

#ga#support

In this post, we explore how to programmatically create an IDP solution that uses Strands SDK, Amazon Bedrock AgentCore, Amazon Bedrock Knowledge Bases, and Bedrock Data Automation (BDA). This solution is provided through a Jupyter notebook that enables users to upload multi-modal business documents and extract insights using BDA as a parser to retrieve relevant chunks and augment a prompt to a foundation model (FM).

bedrockagentcore
#bedrock#agentcore

Enterprise organizations increasingly rely on web-based applications for critical business processes, yet many workflows remain manually intensive, creating operational inefficiencies and compliance risks. Despite significant technology investments, knowledge workers routinely navigate between eight and twelve different web applications during standard workflows, constantly switching contexts and manually transferring information between systems. Data entry and validation tasks […]

organizations
#organizations#ga

Oracle Database@AWS is now generally available in three additional AWS Regions - US-East-2 (Ohio), EU-Central-1 (Frankfurt), and AP-Northeast-1 (Tokyo). Oracle Database@AWS enables customers to access database services on Oracle Cloud Infrastructure (OCI) managed Oracle Exadata systems within AWS data centers. With this launch, customers in the EU and Japan with in-region data residency requirements can easily migrate on-premises Oracle Exadata applications to AWS. With this expansion, AWS customers can run OCI Exadata Database Service, OCI Autonomous Database on Dedicated Infrastructure, and OCI Autonomous Recovery Service in five Regions - US-East-1 (N. Virginia), US-West-2 (Oregon), US-East-2 (Ohio), EU-Central-1 (Frankfurt), and AP-Northeast-1 (Tokyo). To use these services, request a private offer from Oracle through the AWS Marketplace, and use the AWS Management Console to set up database resources. To learn more, visit Oracle Database@AWS overview and documentation.

#launch#generally-available#now-available#expansion

Amazon Bedrock now supports the NVIDIA Nemotron 3 Nano 30B A3B model, NVIDIA's latest breakthrough in efficient language modeling that delivers high reasoning performance, native tool calling support, and extended context processing with a 256K-token context window. This model employs an efficient hybrid Mixture-of-Experts (MoE) architecture to ensure higher throughput than its predecessors for agentic and coding workloads, while maintaining the reasoning depth of a larger model. With explicit reasoning controls and higher accuracy enabled by advanced reinforcement learning techniques and multi-environment post-training at scale, this model is ideal for enterprises, startups, and individual developers building multi-agent workflows, developer productivity tools, and process automation, and for scientific and mathematical reasoning analysis, among others. NVIDIA Nemotron 3 Nano on Amazon Bedrock is powered by Project Mantle, a new distributed inference engine for large-scale machine learning model serving on Amazon Bedrock. Project Mantle simplifies and expedites onboarding of new models onto Amazon Bedrock, provides highly performant and reliable serverless inference with sophisticated quality of service controls, unlocks higher default customer quotas with automated capacity management and unified pools, and provides out-of-the-box compatibility with OpenAI API specifications. NVIDIA Nemotron 3 Nano is available today on Amazon Bedrock in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Mumbai), South America (Sao Paulo), Europe (London), and Europe (Milan) AWS Regions, and supports both unified and OpenAI API-compatible service endpoints on Amazon Bedrock. To learn more and get started, visit the Amazon Bedrock console or the service documentation here. To get started with Amazon Bedrock OpenAI API-compatible service endpoints, visit documentation here.
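As a minimal sketch, invoking the model through the Bedrock Converse API with boto3 could look like the following. The model ID string is a placeholder assumption, not a confirmed identifier; look up the actual ID in the Bedrock console for your Region.

```python
MODEL_ID = "nvidia.nemotron-3-nano-30b-a3b"  # hypothetical model ID


def build_converse_request(prompt: str, max_tokens: int = 512) -> dict:
    """Assemble keyword arguments for the bedrock-runtime Converse call."""
    return {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }


def invoke(prompt: str) -> str:
    """Send the request to Bedrock and return the model's text reply."""
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    resp = client.converse(**build_converse_request(prompt))
    return resp["output"]["message"]["content"][0]["text"]
```

Because the model also supports OpenAI API-compatible endpoints, the same workload could alternatively be pointed at Bedrock through an OpenAI-style client, per the documentation linked above.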

bedrock
#bedrock#now-available#support#new-model

Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now available in the Asia Pacific (New Zealand) Region. Customers can create Amazon MSK Provisioned clusters in this region starting today. Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easier for you to build and run applications that use Apache Kafka as a data store. Amazon MSK is fully compatible with Apache Kafka, which enables you to more quickly migrate your existing Apache Kafka workloads to Amazon MSK with confidence or build new ones from scratch. With Amazon MSK, you spend more time building innovative streaming applications and less time managing Kafka clusters. Amazon MSK offers two types of provisioned Apache Kafka brokers: Standard brokers and Express brokers. Standard brokers offer the most flexibility to configure your cluster’s performance. You can configure availability, durability, throughput, and latency. You also control the storage configurations on your cluster and are responsible for managing storage provisioning and utilization. Express brokers are a new broker type for Amazon MSK Provisioned designed to deliver up to 3x more throughput per broker, scale up to 20x faster, support up to 5x more partitions per broker, and reduce recovery time by 90% compared to Standard brokers. Express brokers come pre-configured with Kafka best practices by default, support all Kafka APIs, and provide the same low-latency performance that Amazon MSK customers expect, so they can continue using existing client applications without any changes. You can now create an MSK Provisioned cluster with Standard or Express brokers in the Asia Pacific (New Zealand) Region through the Amazon MSK console or the AWS CLI. To get started, see the Amazon MSK Developer Guide.
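A hedged sketch of creating a provisioned cluster programmatically with boto3's `create_cluster_v2`: the Express instance type string and the region code for Asia Pacific (New Zealand) are assumptions; verify both against the MSK documentation for your account.

```python
def build_msk_provisioned_request(cluster_name, subnet_ids, security_group_ids,
                                  instance_type="express.m7g.large"):
    """Assemble kwargs for kafka.create_cluster_v2 (Provisioned cluster).

    The instance type shown is an assumed Express broker size; check the
    supported broker types in your Region before using it.
    """
    return {
        "ClusterName": cluster_name,
        "Provisioned": {
            "KafkaVersion": "3.6.0",
            "NumberOfBrokerNodes": 3,
            "BrokerNodeGroupInfo": {
                "InstanceType": instance_type,
                "ClientSubnets": subnet_ids,
                "SecurityGroups": security_group_ids,
            },
        },
    }


def create_cluster(**kwargs):
    """Create the cluster (region code for New Zealand is an assumption)."""
    import boto3

    client = boto3.client("kafka", region_name="ap-southeast-6")
    return client.create_cluster_v2(**kwargs)
```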

novalexkafkamsk
#nova#lex#kafka#msk#now-available#support

Amazon WorkSpaces Secure Browser now supports Web Authentication (WebAuthn) redirection, allowing users to authenticate to websites using their local FIDO2 security keys, biometric authenticators, and platform authenticators while browsing in their WorkSpaces Secure Browser session. This feature is compatible with Chromium-based browsers on users’ local devices, such as Google Chrome 136 (or later) or Microsoft Edge 137 (or later). It is not supported on non-Chromium-based browsers such as Safari or Firefox. WebAuthn redirection helps users enjoy seamless and secure authentication on websites within their WorkSpaces Secure Browser sessions. This feature supports FIDO2 security keys, passkeys, and platform authenticators like Windows Hello or Touch ID. To enable the feature, administrators must activate WebAuthn redirection in Secure Browser’s portal settings and configure the local browsers using the WebAuthenticationRemoteDesktopAllowedOrigins policy. This configuration allows WebAuthn tokens to be securely transmitted from a user’s local device to websites within a Secure Browser session, ensuring that users can authenticate securely without compromising the security benefits of the remote browsing environment. This feature is available at no additional cost in all regions where Amazon WorkSpaces Secure Browser is available, including US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt, London, Ireland), and Asia Pacific (Tokyo, Mumbai, Sydney, Singapore). To get started and enable WebAuthn redirection, visit the Amazon WorkSpaces Secure Browser console. For more information, see the WebAuthn redirection section in the Amazon WorkSpaces Secure Browser’s documentation.

#ga#support

Amazon RDS for MySQL now supports community MySQL Innovation Release 9.5 in the Amazon RDS Database Preview Environment, allowing you to evaluate the latest Innovation Release on Amazon RDS for MySQL. You can deploy MySQL 9.5 in the Amazon RDS Database Preview Environment, which provides the benefits of a fully managed database, making it simpler to set up, operate, and monitor databases. MySQL 9.5 is the latest Innovation Release from the MySQL community. MySQL Innovation Releases include bug fixes, security patches, as well as new features. MySQL Innovation Releases are supported by the community until the next Innovation Release, whereas MySQL Long Term Support (LTS) Releases, such as MySQL 8.0 and MySQL 8.4, are supported by the community for up to eight years. Please refer to the MySQL 9.5 release notes and Amazon RDS MySQL release notes for more details. Amazon RDS Database Preview Environment supports both Single-AZ and Multi-AZ deployments on the latest generation of instance classes. Amazon RDS Database Preview Environment database instances are retained for a maximum of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots created in the Preview Environment can only be used to create or restore database instances within the Preview Environment. Amazon RDS Database Preview Environment database instances are priced the same as production RDS instances created in the US East (Ohio) Region. For further information, see Working with the Database Preview Environment. To get started with the Preview Environment from the RDS console, navigate here.
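As a hedged sketch, creating a MySQL 9.5 instance in the Preview Environment with boto3 could look like this. The Preview Environment endpoint URL and the exact `EngineVersion` string are assumptions; confirm both in the Preview Environment documentation.

```python
def build_preview_instance_request(identifier: str, password: str) -> dict:
    """Assemble kwargs for rds.create_db_instance targeting MySQL 9.5.

    The EngineVersion string shown here is an assumption; the Preview
    Environment may expose a more specific version label.
    """
    return {
        "DBInstanceIdentifier": identifier,
        "Engine": "mysql",
        "EngineVersion": "9.5",
        "DBInstanceClass": "db.m7g.large",
        "AllocatedStorage": 20,
        "MasterUsername": "admin",
        "MasterUserPassword": password,
    }


def create_preview_instance(**kwargs):
    """Create the instance against the Preview Environment endpoint.

    The Preview Environment runs in us-east-2 behind its own endpoint;
    the URL below is an assumption drawn from the preview docs.
    """
    import boto3

    client = boto3.client(
        "rds",
        region_name="us-east-2",
        endpoint_url="https://rds-preview.us-east-2.amazonaws.com",
    )
    return client.create_db_instance(**kwargs)
```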

novards
#nova#rds#preview#ga#new-feature#support

Today, AWS Secrets Manager announces enhanced secret sorting capabilities in the Secrets Manager console and for ListSecrets API. You can now sort secrets by name, last changed date, last accessed date, and creation date—expanding beyond the previous creation date-only option. Secrets Manager is a fully managed service that helps you manage, retrieve, and rotate database credentials, application credentials, API keys, and other secrets throughout their lifecycles. This enhancement improves secret discovery by providing flexible sorting options across multiple dimensions through both Secrets Manager console and APIs. The new sorting capabilities are available in Secrets Manager console and ListSecrets API in all AWS commercial and AWS GovCloud (US) Regions. For a list of regions where Secrets Manager is available, see the AWS Region table.
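A minimal sketch of sorted listing with boto3: `SortOrder` is an existing `ListSecrets` parameter, but the new sort-key field announced here is deliberately not named in the code, since its exact parameter name should be taken from the ListSecrets API reference.

```python
def build_list_secrets_request(sort_order: str = "desc", max_results: int = 25) -> dict:
    """Assemble kwargs for secretsmanager.list_secrets.

    Add the new sort-key parameter (name per the ListSecrets reference)
    to sort by last changed date, last accessed date, or name.
    """
    assert sort_order in ("asc", "desc")
    return {"SortOrder": sort_order, "MaxResults": max_results}


def list_secrets(**kwargs):
    """Run the ListSecrets call with the assembled arguments."""
    import boto3

    return boto3.client("secretsmanager").list_secrets(**kwargs)
```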

lexsecrets manager
#lex#secrets manager#enhancement

MiniMax-M2 is now available on Amazon SageMaker JumpStart, providing customers with immediate access to deploy this efficient open-source model in minutes. With SageMaker JumpStart, you can quickly discover, evaluate, and deploy MiniMax-M2 using either SageMaker Studio's intuitive interface or the SageMaker Python SDK for programmatic deployment. MiniMax-M2 redefines efficiency for agents. It's a compact, fast, and cost-effective MoE model (230 billion total parameters with 10 billion active parameters) built for elite performance in coding and agentic tasks, all while maintaining powerful general intelligence. To learn more about deploying foundation models with SageMaker JumpStart, deployment options with the SDK, and best practices for implementation, refer to our documentation. MiniMax-M2 is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Sydney), Asia Pacific (Jakarta), Canada (Central), Europe (Frankfurt), Europe (Stockholm), Europe (Ireland), Europe (London), Europe (Paris), South America (São Paulo).
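A hedged sketch of programmatic deployment with the SageMaker Python SDK: the `model_id` string, instance type, and request-body schema are all assumptions, so look them up in SageMaker Studio or the JumpStart model card before deploying.

```python
def build_chat_payload(prompt: str, max_new_tokens: int = 256) -> dict:
    """Request body for the deployed endpoint.

    This schema is an assumption; open-weight LLM serving containers
    commonly accept an inputs/parameters shape like this one.
    """
    return {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}


def deploy_minimax_m2(instance_type="ml.p5.48xlarge"):
    """Deploy MiniMax-M2 from JumpStart (model_id is hypothetical)."""
    from sagemaker.jumpstart.model import JumpStartModel

    model = JumpStartModel(model_id="huggingface-llm-minimax-m2")
    return model.deploy(instance_type=instance_type)
```

Once deployed, `predictor.predict(build_chat_payload("..."))` would send an inference request, subject to the same schema caveat.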

sagemakerjumpstart
#sagemaker#jumpstart#ga#now-available

AWS Transform now supports automatic network conversion from hybrid data centers, eliminating manual network mapping for environments running both VMware and non-VMware workloads. The service now analyzes VLANs and IP ranges across all exported source networks and maps these to AWS constructs like Virtual Private Clouds (VPCs), subnets, and security groups. AWS Transform for VMware is an agentic AI-powered service that automates the discovery, planning, and migration of VMware workloads, accelerating infrastructure modernization with increased confidence. The service extends support to hybrid data centers by analyzing exported data from application mapping tools such as modelizeIT to automatically generate Infrastructure as Code and provision AWS networking resources. This feature is available in all AWS Transform target Regions. To learn more, visit the AWS Transform product page, read the user guide, or get started in the AWS Transform web experience.

transform for vmware
#transform for vmware#support

Amazon WorkSpaces Secure Browser now supports branding customization, enabling you to create a consistent, branded experience that helps you align with your organization's visual identity. This feature allows you to customize the sign-in and session loading screens that appear to your end users by modifying visual elements and text content to maintain brand consistency across all user touch points. You can personalize the sign-in and session loading experience by uploading your organization's favicon, logo, and wallpaper, selecting color themes, and customizing the welcome message, the browser tab title, and other text fields in all 11 languages supported by the service. You can also modify the "Contact Us" link to redirect to your organization's support page and add a Terms of Service page that users must acknowledge before starting a session. All customization settings are designed to meet WCAG AA accessibility and contrast requirements. This feature is available at no additional cost in 10 AWS Regions, including US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt, London, Ireland), and Asia Pacific (Tokyo, Mumbai, Sydney, Singapore). WorkSpaces Secure Browser offers pay-as-you-go pricing. To get started, visit the Amazon WorkSpaces Secure Browser console to configure your branding settings. For more information, see the branding customization section in the Amazon WorkSpaces Secure Browser’s documentation.

personalize
#personalize#ga#support

Starting today, AWS End User Messaging customers can use AWS generative AI to review their phone number registrations, so you can submit to mobile carriers correctly the first time. With the registration reviewer (preview), AWS provides feedback on your registration form, checking the message sample, opt-in description, use case, and help and stop messages, helping you submit an accurate and complete registration. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications. Developers can integrate messaging to support use cases such as one-time passcodes (OTP) at sign-up, account updates, appointment reminders, delivery notifications, promotions, and more. Support for the generative AI registration reviewer is available in all AWS Regions where End User Messaging is available; see the AWS Region table. To learn more, see AWS End User Messaging.

#launch#preview#update#support

Amazon Redshift now allows you to run CREATE MATERIALIZED VIEW (MV) and REFRESH MATERIALIZED VIEW commands from multiple Amazon Redshift data warehouses. This update also allows you to create an MV on shared MVs. Finally, this release supports concurrency scaling of the CREATE MATERIALIZED VIEW data definition language (DDL) command. With this update, you can scale the CREATE MATERIALIZED VIEW DDL command whenever your main Amazon Redshift data warehouse cluster or workgroup runs out of resources, simply by enabling concurrency scaling in your Amazon Redshift account. You can start using these new capabilities immediately in all AWS Regions where Amazon Redshift is available to scale your workload and build resilient analytics applications with predictable Service Level Agreements. To get started, refer to the Concurrency Scaling, Materialized Views, and Data Sharing sections of the Amazon Redshift documentation.
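As a minimal sketch, the MV DDL can be issued from any warehouse through the Redshift Data API; the workgroup and database names below are placeholders, and `AUTO REFRESH YES` is an optional clause of the standard Redshift syntax.

```python
def build_mv_ddl(mv_name: str, query: str) -> str:
    """Build a CREATE MATERIALIZED VIEW statement (AUTO REFRESH optional)."""
    return f"CREATE MATERIALIZED VIEW {mv_name} AUTO REFRESH YES AS {query};"


def run_ddl(sql: str, workgroup: str, database: str = "dev"):
    """Submit the DDL via the Redshift Data API.

    With concurrency scaling enabled on the account, this CREATE MV DDL
    can be offloaded when the main cluster or workgroup is saturated.
    """
    import boto3

    client = boto3.client("redshift-data")
    return client.execute_statement(
        WorkgroupName=workgroup, Database=database, Sql=sql
    )
```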

redshift
#redshift#update#support

AWS Deadline Cloud now lets you submit rendering jobs directly from the Deadline Cloud Monitor desktop application. This new feature makes it easier to submit renders for applications that don’t have built-in Deadline Cloud plugins or submission scripts, expanding compatibility with content creation tools and streamlining your rendering workflows. Previously, you needed the command line interface (CLI) to submit job bundles. With this update, you can submit job bundles directly from the Deadline Cloud Monitor desktop interface, managing jobs from start to finish in one place. It is particularly useful for legacy applications, specialized rendering tools, or custom workflows that lack built-in Deadline Cloud integration. To access direct job submission, download the latest Deadline Cloud Monitor desktop application (version 1.1.7) in the AWS Console. To learn more about AWS Deadline Cloud job submission capabilities, see the AWS Deadline Cloud documentation.

#ga#new-feature#update#integration#support

In this post, we show you how to unify governance and metadata across Amazon SageMaker Unified Studio and Atlan through a comprehensive bidirectional integration. You’ll learn how to deploy the necessary AWS infrastructure, configure secure connections, and set up automated synchronization to maintain consistent metadata across both platforms.

sagemakerunified studio
#sagemaker#unified studio#integration

Today, Oracle Database@AWS announced the ability to share AWS Marketplace entitlements across accounts within an AWS Organization. With this feature, customers can now accept an Oracle Database@AWS AWS Marketplace offer in one AWS account, and share that entitlement with additional accounts in their AWS Organization. This allows customers to consume Oracle Database@AWS services from multiple AWS accounts using a single AWS Marketplace entitlement purchased for their organization. Many Oracle Database@AWS customers use separate AWS accounts for their development and production environments, and for different business units within their organization. Customers want a single buyer agreement to use Oracle Database@AWS within their organization, and use the purchased AWS Marketplace entitlement across multiple business units, and across their development and production environments. With AWS Marketplace Managed Entitlements, customers can now share their Oracle Database@AWS entitlement with other accounts in their AWS Organization using the AWS License Manager console or APIs. These accounts can accept and activate their shared AWS Marketplace entitlement from AWS License Manager, and then start consuming Oracle Database@AWS services using the shared entitlement. This feature is available in all AWS Regions where Oracle Database@AWS is offered. For information about managing entitlements on Oracle Database@AWS, see documentation. To learn more about Oracle Database@AWS, visit the Oracle Database@AWS product page.

#ga#support

Amazon Kinesis Video Streams (Amazon KVS) now supports Internet Protocol version 6 (IPv6) addressing for WebRTC. This release introduces dual-stack endpoint support, enabling developers to use both IPv4 and IPv6 addresses to stream video from millions of devices. The dual-stack support is designed to ensure that existing IPv4 implementations continue to work reliably while gaining IPv6 connectivity benefits. Moreover, the update simplifies the transition to IPv6 addresses while eliminating the need for address translation equipment. This feature is available in all commercial AWS Regions where Amazon KVS is offered, except Asia Pacific (Singapore) and China (Beijing, operated by Sinnet). For implementation details, refer to the Amazon KVS Developer Guide.
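A hedged sketch of opting into the dual-stack endpoint from boto3: `use_dualstack_endpoint` is standard botocore configuration, and AWS dual-stack endpoints generally follow the `*.api.aws` hostname pattern, though whether every KVS WebRTC API is served dual-stack in a given Region should be verified against the announcement.

```python
def dualstack_hostname(region: str) -> str:
    """Expected dual-stack hostname, following the AWS *.api.aws pattern."""
    return f"kinesisvideo.{region}.api.aws"


def make_kvs_client(region: str = "us-west-2"):
    """Create a KVS client that resolves the dual-stack (IPv4+IPv6) endpoint."""
    import boto3
    from botocore.config import Config

    return boto3.client(
        "kinesisvideo",
        region_name=region,
        config=Config(use_dualstack_endpoint=True),
    )
```

Existing IPv4-only clients need no change; the dual-stack endpoint is additive.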

kinesis
#kinesis#ga#update#support

Amazon Elastic Container Service (Amazon ECS) Service Connect now supports Envoy access logs, providing deeper observability into request-level traffic patterns and service interactions. This new capability captures detailed per-request telemetry for end-to-end tracing, debugging, and compliance monitoring. Amazon ECS Service Connect makes it simple to build secure, resilient service-to-service communication across clusters, VPCs, and AWS accounts. It integrates service discovery and service mesh capabilities by automatically injecting AWS-managed Envoy proxies as sidecars that handle traffic routing, load balancing, and inter-service connectivity. Envoy access logs capture detailed traffic metadata, enabling request-level visibility into service communication patterns. This enables you to perform network diagnostics, troubleshoot issues efficiently, and maintain audit trails for compliance requirements. You can now configure access logs within ECS Service Connect by updating the ServiceConnectConfiguration to enable access logging. Query strings are redacted by default to protect sensitive data. Envoy access logs will output to the standard output (STDOUT) stream alongside application logs and flow through the existing ECS log pipeline without requiring additional infrastructure. This configuration supports all existing application protocols (HTTP, HTTP/2, gRPC, and TCP). This feature is available with both Fargate and EC2 launch modes in AWS GovCloud (US-West) and AWS GovCloud (US-East) regions via the AWS Management Console, API, SDK, CLI, and CloudFormation. To learn more, visit the Amazon ECS Developer Guide.
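As a sketch, updating the service's `serviceConnectConfiguration` with boto3 might look like this. The `accessLogConfiguration` key below is a hypothetical field name, so take the real parameter name from the ECS API reference for this feature.

```python
def build_service_connect_config(namespace: str) -> dict:
    """serviceConnectConfiguration payload for ecs.update_service.

    'accessLogConfiguration' is an assumed field name for the new
    access-logging setting; verify it against the ECS API reference.
    """
    return {
        "enabled": True,
        "namespace": namespace,
        "accessLogConfiguration": {"enabled": True},  # hypothetical field
    }


def enable_access_logs(cluster: str, service: str, namespace: str):
    """Apply the configuration to an existing ECS service."""
    import boto3

    return boto3.client("ecs").update_service(
        cluster=cluster,
        service=service,
        serviceConnectConfiguration=build_service_connect_config(namespace),
    )
```

The resulting Envoy access logs land on STDOUT of the Service Connect sidecar and follow whatever log driver the task already uses.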

ec2ecsfargatecloudformation
#ec2#ecs#fargate#cloudformation#launch#ga

Today, Amazon GameLift Streams launched two new capabilities to optimize performance and cost: Gen6 stream classes and enhanced autoscaling with warm buffer. The new Gen6 stream classes provide a wider range of price performance options, while autoscaling enables customers to dynamically manage capacity scaling. The seven new Gen6 stream classes available today are based on EC2 G6 instances powered by NVIDIA L4 Tensor Core GPUs, which provide up to 2x higher performance over Gen4 stream classes. The pro and ultra stream classes deliver improved performance for graphics-intensive AAA games, while the medium and small stream classes offer cost-efficient options for casual games. The gen6n_small stream class is available at $0.16/hour in us-east-2. The enhanced autoscaling capabilities provide automatic capacity management that scales provisioned capacity dynamically with demand, helping customers optimize utilization and stream start time for new players. Developers can use the new capacity controls (minimum, maximum, and target-idle capacity) to precisely manage their scaling needs. New Gen6 stream classes are available in five AWS Regions: US West (Oregon), US East (Ohio), US East (N. Virginia), Europe (Frankfurt), and Asia Pacific (Tokyo). Improved autoscaling is available in all AWS Regions where Amazon GameLift Streams is offered. To learn more and get started, visit: AWS Docs: Gen6n based stream classes; Enhanced Autoscaling - Capacity configuration options; API Reference Guide: Create Stream group with gen6n based stream class; Capacity configuration option for stream groups

ec2
#ec2#launch#ga

Today, Amazon GameLift Streams launched new powerful observability capabilities with session performance stats that provide real-time data for individual stream sessions, offering insights into application performance issues. Game developers requested deeper visibility into session performance and resource utilization. The new stats we're delivering today help developers understand how their applications perform on specific stream classes and GPUs, giving them the data to make optimal GPU selections, optimize performance, and troubleshoot individual user experiences. Developers can now access detailed data on CPU, memory, GPU, and VRAM usage for active sessions through the GameLift Streams Web SDK, or view them in the AWS console's built-in overlay on the "Test stream" page. These performance stats can also be exported to a file for post-session analysis. GameLift Streams also launched improved session status reasons and error messaging. Game developers now have deeper insights and actionable information on the reasons for stream session termination, enabling them to root-cause unexpected terminations and improve their troubleshooting experience. The stream session performance stats and improved API error messaging are available in all AWS Regions where Amazon GameLift Streams is offered at no additional cost. To get started with these new features, create new stream groups. To integrate session performance stats into your clients, download the updated Web SDK 1.1.0. To learn more, visit: AWS Docs: Real-time performance stats; Export stream session files; API Reference: Stream Session > Status Reason

#launch#ga#new-feature#update

Today we announce Research and Engineering Studio (RES) on AWS version 2025.12, which introduces tag propagation for CloudFormation resources, enhanced Windows domain configuration options, default session scheduling, and security improvements. Research and Engineering Studio on AWS is an open source solution that provides a web-based portal for administrators to create and manage secure cloud-based research and engineering environments. RES enables scientists and engineers to access powerful Windows and Linux virtual desktops with pre-installed applications and shared resources, without requiring cloud expertise. Tags applied to the CloudFormation stack now propagate to all resources created during RES deployment, making it easier to track costs and manage resources across your organization. Administrators can disable automatic Windows domain joining for hosts, providing flexibility to implement custom domain-join logic when needed. You can now set a default schedule for all new desktop sessions, helping teams standardize session management practices. This version includes several security improvements to help RES deployments meet the NIST 800-223 standard and fixes a bug where some sessions were logged out after 2 minutes when using a custom DNS domain. This release is available in all AWS Regions where RES is available. To learn more about RES 2025.12, including detailed release notes and deployment instructions, visit the Research and Engineering Studio documentation or check out the RES GitHub repository.

lexcloudformation
#lex#cloudformation#ga#now-available#improvement

The AWS Storage Gateway service now supports the Nutanix AHV hypervisor as a deployment option for S3 File, Tape and Volume gateways. If you use Nutanix AHV hypervisor-based on-premises infrastructure, you can now deploy Storage Gateway in your environment to access virtually unlimited cloud storage.
 Nutanix AHV (Acropolis Hypervisor) is a KVM-based virtualization platform that is integrated into the Nutanix hyper-converged infrastructure (HCI) solution. With this launch, you have the option to deploy Storage Gateway on a Nutanix AHV hypervisor.
 Storage Gateway is a hybrid cloud storage service that provides on-premises applications access to virtually unlimited cloud storage using NFS, SMB, iSCSI, and iSCSI-VTL interfaces. You can use the service to back up and archive data to AWS, shift on-premises storage to cloud-backed file shares, and provide on-premises applications low-latency access to data in AWS. You can deploy Storage Gateway as a virtual appliance (VMware ESXi, Microsoft Hyper-V, Linux KVM, and now Nutanix) on premises or as an Amazon EC2 instance in AWS. This capability is available in all AWS Regions. Visit the Storage Gateway User Guide to learn more, or log into the Storage Gateway management console to get started.

s3ec2
#s3#ec2#launch#ga#support

Today, AWS announces Neuron SDK 2.27.0, introducing support for Trainium3 UltraServer with expanded open source components. Neuron also introduces the Neuron Explorer tools suite, Enhanced NKI with open source NKI Compiler built on MLIR (private beta), the NKI Library of optimized kernels, native PyTorch support through TorchNeuron (private beta), and Neuron DRA for Kubernetes-native resource management (private beta). These updates enable standard frameworks to run unchanged on Trainium, removing barriers for researchers to experiment and innovate. For developers requiring deeper control, the enhanced Neuron Kernel Interface (NKI) Beta 2 provides direct access to hardware-level optimizations, enabling customers to scale AI workloads with improved performance. If you're interested in early access to new NKI features and improvements, you can join the Neuron private beta program. The new SDK version is available in all AWS Regions supporting Inferentia and Trainium instances, offering enhanced performance and monitoring capabilities for machine learning workloads. For more details, see: What’s New in Neuron; AWS Neuron 2.27.0 Release Notes; AWS Trainium

novatrainiumtrainium3inferentianeuron
#nova#trainium#trainium3#inferentia#neuron#beta

AWS Wickr now provides a suite of admin APIs that empower administrators to programmatically manage secure communication networks at scale. These APIs enable you to automate critical administrative workflows including user lifecycle management, network configuration, and security group administration. With user lifecycle management APIs, you can automatically create users and assign security groups when employees join, or deactivate accounts when they leave. Network configuration APIs allow you to quickly create or delete networks on demand as your organization scales or restructures, and push standardized retention and federation policies across departments. Security group administration APIs enable automatic user placement based on directory attributes such as job function or clearance level. By connecting Wickr administration directly into your identity management systems, policy management frameworks, and automation pipelines, you can now manage secure communications infrastructure across thousands of users alongside your other cloud service integrations. AWS Wickr is a security-first messaging and collaboration service designed to help keep your communications secure, private, and compliant. AWS Wickr protects messaging, voice and video calling, file sharing, screen sharing, and location sharing with end-to-end encryption. Customers have full administrative control over data and users, including single sign-on (SSO) integration. Administrators can enforce policies that set password complexity and retention rules, configure ephemeral messaging options, or remotely delete credentials. You can log conversations to a private data store so you can retain messages and files sent to and from the organization to meet compliance requirements. The AWS Wickr Admin APIs are available today in all AWS regions where AWS Wickr is currently supported, including AWS GovCloud (US-West). 
You can leverage these APIs through AWS SDKs, the AWS Command Line Interface (AWS CLI), or direct REST API calls. To learn more, see: AWS Wickr API Reference; AWS Wickr Product Page; AWS Wickr Administrator Guide

lex
#lex#launch#ga#integration#support

Amazon Application Recovery Controller (ARC) Region switch allows you to orchestrate the specific steps to switch your multi-Region applications to operate out of another AWS Region and achieve a bounded recovery time in the event of a Regional impairment to your applications. Region switch saves hours of engineering effort and eliminates the operational overhead previously required to complete failover steps, create custom dashboards, and manually gather evidence of a successful recovery for applications across your organization and hosted in multiple AWS accounts. Today, we are announcing three new Region switch capabilities:
 AWS GovCloud (US) support: ARC Region switch is now generally available in the AWS GovCloud (US-East and US-West) Regions.
 Plan execution reports: Region switch now automatically generates a comprehensive report from each plan execution and saves it to an Amazon S3 bucket of your choice. Each report includes a detailed timeline of events for the recovery operation, resources in scope for the Region switch, alarm states for optional application status alarms, and recovery time objective (RTO) calculations. This eliminates the manual effort previously required to compile evidence and documentation for compliance officers and auditors.
 DocumentDB global cluster execution blocks: Adding to the catalog of 9 execution blocks, Region switch now supports Amazon DocumentDB global cluster execution blocks for automated multi-Region database recovery. This feature allows you to orchestrate DocumentDB global cluster failover and switchover operations within your Region switch plans.
 To get started, build a Region switch plan using the ARC console, API, or CLI. See the AWS Regional Services List for availability information. Visit our home page or read the documentation.

s3rds
#s3#rds#generally-available#ga#support#new-region

AWS Private Certificate Authority (AWS Private CA) now supports Online Certificate Status Protocol (OCSP) in the China and AWS GovCloud (US) Regions. AWS Private CA is a fully managed certificate authority service that makes it easy to create and manage private certificates for your organization without the operational overhead of running your own CA infrastructure. OCSP enables real-time certificate validation, allowing applications to check the revocation status of individual certificates on demand rather than downloading Certificate Revocation List (CRL) files. With OCSP support, customers in these Regions can implement more efficient certificate validation with minimal bandwidth, typically requiring a few hundred bytes per query, versus downloading large CRLs that can be hundreds of kilobytes or larger. This enables real-time revocation checks for use cases such as validating internal microservices communications, implementing zero trust security architectures, and authenticating IoT devices. AWS Private CA fully manages the OCSP responder infrastructure, providing high availability without requiring you to deploy or maintain OCSP servers. OCSP is now also available in the following AWS Regions: China (Beijing), China (Ningxia), AWS GovCloud (US-East), and AWS GovCloud (US-West). To enable OCSP for your certificate authorities, use the AWS Private CA console, AWS CLI, or API. To learn more about OCSP, see Certificate Revocation in the AWS Private CA User Guide. For pricing information, visit the AWS Private CA pricing page.
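As a rough sketch, OCSP is enabled by updating a CA's revocation configuration. The parameter shape below follows the AWS Private CA UpdateCertificateAuthority API; the CA ARN is a placeholder, and the actual boto3 call is shown commented out.

```python
# Sketch: enable OCSP on an existing private CA by updating its
# revocation configuration (UpdateCertificateAuthority API shape).
revocation_config = {
    "OcspConfiguration": {
        "Enabled": True,
        # Optional: serve OCSP responses from a custom domain:
        # "OcspCustomCname": "ocsp.example.com",
    }
}

params = {
    # Placeholder ARN -- substitute your CA's ARN.
    "CertificateAuthorityArn": (
        "arn:aws:acm-pca:cn-north-1:111122223333:"
        "certificate-authority/EXAMPLE-UUID"
    ),
    "RevocationConfiguration": revocation_config,
}

# With boto3 (not run here):
# import boto3
# boto3.client("acm-pca").update_certificate_authority(**params)
```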

#ga#now-available#support

Today, AWS announces SOCI (Seekable Open Container Initiative) indexing support for Amazon SageMaker Studio, reducing container startup times by 30-50% when using custom images. Amazon SageMaker Studio is a fully integrated, browser-based environment for end-to-end machine learning development. SageMaker Studio provides pre-built container images for popular ML frameworks like TensorFlow, PyTorch, and Scikit-learn that enable quick environment setup. However, when data scientists need to tailor environments for specific use cases with additional libraries, dependencies, or configurations, they can build and register custom container images with pre-configured components to ensure consistency across projects. As ML workloads become increasingly complex, these custom container images have grown in size, leading to startup times of several minutes that create bottlenecks in iterative ML development, where quick experimentation and rapid prototyping are essential. SOCI indexing addresses this challenge by enabling lazy loading of container images, downloading only the components necessary to start applications, with additional files loaded on demand as needed. Instead of waiting several minutes for complete custom image downloads, users can begin productive work in seconds while the environment completes initialization in the background. To use SOCI indexing, create a SOCI index for your custom container image using tools like Finch CLI, nerdctl, or Docker with SOCI CLI, push the indexed image to Amazon Elastic Container Registry (ECR), and reference the image index URI when creating SageMaker Image resources. SOCI indexing is available in all AWS Regions where Amazon SageMaker Studio is available. To learn more about implementing SOCI indexing for your SageMaker Studio custom images, see Bring your own SageMaker image in the Amazon SageMaker Developer Guide.
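The last step, registering the indexed image with SageMaker, can be sketched as follows. This assumes the SOCI-indexed image has already been pushed to ECR; the URI and resource names are placeholders, and the shape follows the SageMaker CreateImageVersion API.

```python
# Sketch: register a SOCI-indexed custom image as a new version of an
# existing SageMaker Image resource. Names and URIs are placeholders.
ecr_image_uri = (
    "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-studio-images:tf-soci"
)

create_image_version_params = {
    "ImageName": "my-custom-studio-image",   # existing SageMaker Image resource
    "BaseImage": ecr_image_uri,              # URI of the SOCI-indexed image in ECR
}

# With boto3 (not run here):
# import boto3
# boto3.client("sagemaker").create_image_version(**create_image_version_params)
```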

sagemakerlex
#sagemaker#lex#support

Amazon Relational Database Service (RDS) now offers enhanced observability for your snapshot exports to Amazon S3, providing detailed insights into export progress, failures, and performance for each task. These notifications enable you to monitor your exports with greater granularity and make your export operations more predictable. With snapshot export to S3, you can export data from your RDS database snapshots to Apache Parquet format in your Amazon S3 bucket. This launch introduces four new event types, including current export progress and table-level notifications for long-running tables, providing more granular visibility into your snapshot export performance and recommendations for troubleshooting export operation issues. Additionally, you can view export progress, such as the number of tables exported and pending, along with exported data sizes, enabling you to better plan your operations and workflows. You can subscribe to these events through Amazon Simple Notification Service (SNS) to receive notifications and view the export events through the AWS Management Console, AWS CLI, or SDK. This feature is available for RDS PostgreSQL, RDS MySQL, and RDS MariaDB engines in all Commercial Regions where RDS is generally available. To learn more about the new event types, see Event categories in RDS.
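For context, the export task whose progress these events report on is started with the RDS StartExportTask API. A minimal sketch of the request, with placeholder ARNs and bucket names:

```python
# Sketch: start a snapshot export to S3 (StartExportTask API shape).
# All identifiers below are placeholders.
start_export_task_params = {
    "ExportTaskIdentifier": "orders-snapshot-2025-01",
    "SourceArn": "arn:aws:rds:us-east-1:111122223333:snapshot:orders-snap",
    "S3BucketName": "example-export-bucket",
    "IamRoleArn": "arn:aws:iam::111122223333:role/rds-s3-export-role",
    "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
    "ExportOnly": ["orders.public.line_items"],  # optional: export a subset
}

# With boto3 (not run here):
# import boto3
# boto3.client("rds").start_export_task(**start_export_task_params)
```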

s3rdssns
#s3#rds#sns#launch#generally-available

Amazon Bedrock Data Automation (BDA) now supports blueprint instruction optimization, enabling you to improve the accuracy of your custom field extraction using just a few example document assets with ground truth labels. BDA automates the generation of insights from unstructured multimodal content such as documents, images, audio, and videos for your GenAI-powered applications. Blueprint instruction optimization automatically refines the natural language instructions in your blueprints, helping you achieve production-ready accuracy in minutes without model training or fine-tuning. With blueprint instruction optimization, you can now bring up to 10 representative document assets from your production workload and provide the correct, expected values for each field. Blueprint instruction optimization analyzes the differences between your expected results and the Data Automation inference results, and then refines the natural language instructions to improve extraction accuracy across your examples. For your intelligent document processing applications, you can now improve the accuracy of extracting insights such as invoice line items, contract terms, tax form fields, or medical billing codes. After optimization completes, you receive detailed evaluation metrics including exact match rates and F1 scores measured against your ground truth, giving you confidence that your blueprint is ready for production deployment. Data Automation blueprint instruction optimization for documents is available in all AWS Regions where Amazon Bedrock Data Automation is supported. To learn more, see the Bedrock Data Automation User Guide and the Amazon Bedrock Pricing page. To get started with blueprint instruction optimization, navigate to your blueprint in the Amazon Bedrock console, go to Data Automation, select your custom outputs for documents, and select Start Optimization.

bedrock
#bedrock#launch#ga#support

Amazon Elastic Container Registry (ECR) now supports automatic repository creation on image push. This new capability simplifies container workflows by having ECR automatically create repositories if they don't exist when an image is pushed, without customers having to pre-create repositories before pushing container images. Now when customers push images, ECR will automatically create repositories according to defined repository creation template settings. Create on push is available in all AWS commercial and AWS GovCloud (US) Regions. To learn more about repository creation templates, please visit our documentation. You can learn more about storing, managing and deploying container images and artifacts with Amazon ECR, including how to get started, from our product page and user guide.

#support#new-capability

Amazon Timestream for InfluxDB now offers a restart API for both InfluxDB versions 2 and 3. This new capability enables customers to trigger system restarts on their database instances directly through the AWS Management Console, API, or CLI, to streamline operational management of their time-series database environments. With the restart API, customers can perform resilience testing to validate their application's behavior during database restarts and address health-related issues without requiring support intervention. This feature enhances operational flexibility for DevOps teams managing mission-critical workloads, allowing them to implement more comprehensive testing strategies and respond faster to performance concerns by providing direct control over database instance lifecycle operations. Amazon Timestream for InfluxDB restart capability is available in all Regions where Timestream for InfluxDB is offered. To get started with Amazon Timestream for InfluxDB 3, visit the Amazon Timestream for InfluxDB console. For more information, see the Amazon Timestream for InfluxDB documentation and pricing page.

lex
#lex#support#new-capability

AWS announces Cost Allocation tags support for account tags across AWS Cost Management products, enabling customers with multiple member accounts to utilize their existing AWS Organizations account tags directly in cost management tools. Account tags are applied at the account level in AWS Organizations and automatically apply to all metered usage within tagged accounts, eliminating the need to manually configure and maintain separate account groupings in AWS Cost Explorer, Cost and Usage Reports, AWS Budgets, and Cost Categories. With account tag support, customers can analyze costs by account tag directly in Cost Explorer and Cost and Usage Reports (CUR 2.0 and FOCUS). Customers can set up AWS Budgets and AWS Cost Anomaly Detection alerts on groups of accounts without configuring lists of account IDs. Customers can also build complex cost categories on top of account tags for further categorization. Account tags enable cost allocation for untaggable resources including refunds, credits, and certain service charges that cannot be tagged at the resource level. When new accounts join the organization or existing accounts are removed, customers simply add or update relevant tags, and the changes automatically apply across all cost management products. To get started, customers apply tags to accounts in the AWS Organizations console, then activate those account tags from the Cost Allocation Tags page in the Billing and Cost Management console. This feature is generally available in all AWS Regions, excluding GovCloud (US) Regions and China (Beijing) and China (Ningxia) Regions. To learn more, see organizing and tracking costs using AWS cost allocation tags.
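The first step described above, tagging a member account, uses the AWS Organizations TagResource API. A sketch with placeholder account ID and tag values:

```python
# Sketch: apply account tags in AWS Organizations (TagResource API shape).
# The account ID and tag values are placeholders.
tag_resource_params = {
    "ResourceId": "111122223333",  # member account ID
    "Tags": [
        {"Key": "team", "Value": "data-platform"},
        {"Key": "cost-center", "Value": "cc-1234"},
    ],
}

# With boto3 (not run here):
# import boto3
# boto3.client("organizations").tag_resource(**tag_resource_params)
# The tags are then activated for cost allocation from the Cost Allocation
# Tags page in the Billing and Cost Management console.
```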

lexorganizations
#lex#organizations#generally-available#ga#update#support

AWS IoT now supports event-based logging, a new capability that helps developers reduce Amazon CloudWatch costs while improving log management efficiency. This feature enables targeted logging for individual events with customizable log levels and Amazon CloudWatch log group destinations. With event-based logging, you can set different log levels for different types of IoT events based on their operational importance. For example, you can configure INFO-level logging for certificateProvider events while maintaining ERROR-level logging for less critical activities like connectivity events. This granularity allows you to maintain comprehensive visibility into your IoT operations without the overhead of logging every activity at the same verbosity level, improving log searchability and analysis efficiency while helping to reduce costs. Event-based logging is now available for configuration through the AWS IoT console, CLI, and API in all AWS Regions where AWS IoT is supported. To learn more about configuring event-based logging, visit the AWS IoT Developer Guide.

cloudwatch
#cloudwatch#now-available#support#new-capability

Amazon WorkSpaces Applications now offers images powered by Microsoft Windows Server 2025, enabling customers to launch streaming instances with the latest features and enhancements from Microsoft’s newest server operating system. This update ensures your application streaming environment benefits from improved security, performance, and modern capabilities. With Windows Server 2025 support, you can deliver the Microsoft Windows 11 desktop experience to your end users, giving you greater flexibility in choosing the right operating system for your specific application and desktop streaming needs. Whether you're running business-critical applications or providing remote access to specialized software, you now have expanded options to align your infrastructure decisions with your unique workload requirements and organizational standards. You can select from AWS-provided public images or create custom images tailored to your requirements using Image Builder. Support for Microsoft Windows Server 2025 is now generally available in all AWS Regions where Amazon WorkSpaces Applications is offered. To get started with Microsoft Windows Server 2025 images, visit the Amazon WorkSpaces Applications documentation. For pricing details, see the Amazon WorkSpaces Applications Pricing page.

lexrds
#lex#rds#launch#generally-available#ga#update

Amazon Redshift ODBC 2.x driver now supports Apple macOS, expanding platform compatibility for developers and analysts. This enhancement allows Apple macOS users to connect to Amazon Redshift clusters using the latest Amazon Redshift ODBC 2.x driver version. You can use an ODBC connection to connect to your Amazon Redshift cluster from many third-party SQL client tools and applications. The Amazon Redshift ODBC 2.x native driver support enables you to access Amazon Redshift features such as data sharing write capabilities and AWS IAM Identity Center integration - features that are only available through Amazon Redshift drivers. This native Apple macOS support enables seamless integration with Extract, Transform, Load (ETL) and Business Intelligence (BI) tools, allowing you to use Apple macOS while accessing the full suite of Amazon Redshift capabilities. We recommend that you upgrade to the latest Amazon Redshift ODBC 2.x driver version to access new features. For installation instructions and system requirements, please see the Amazon Redshift ODBC 2.x driver documentation.
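Once the driver is installed, client tools typically take a keyword=value ODBC connection string. A sketch of assembling one in code; the driver name and option keys below are illustrative, so check the installed driver's documentation (and your odbcinst.ini registration on macOS) for the exact values.

```python
# Sketch: build an ODBC connection string for a Redshift cluster.
# Driver name, endpoint, and credentials are placeholders/assumptions.
odbc_options = {
    "Driver": "Amazon Redshift ODBC Driver",  # name as registered on your system
    "Server": "examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
    "Port": "5439",
    "Database": "dev",
    "UID": "awsuser",
    "PWD": "example-password",
}

connection_string = ";".join(f"{k}={v}" for k, v in odbc_options.items())
```

A library such as pyodbc would then accept this string; prefer IAM-based credentials over literal passwords in real deployments.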

redshiftiamiam identity center
#redshift#iam#iam identity center#new-feature#enhancement#integration

AWS Glue now supports zero-ETL for self-managed database sources in seven additional regions. Using Glue zero-ETL, you can setup an integration to replicate data from Oracle, SQL Server, MySQL or PostgreSQL databases which are located on-premises or on AWS EC2 to Redshift with a simple experience that eliminates configuration complexity. AWS zero-ETL for self-managed database sources will automatically create an integration for an on-going replication of data from your on-premises or EC2 databases through a simple, no-code interface. You can now replicate data from Oracle, SQL Server, MySQL and PostgreSQL databases into Redshift. This feature further reduces users' operational burden and saves weeks of engineering effort needed to design, build, and test data pipelines to ingest data from self-managed databases to Redshift. AWS Glue zero-ETL for self-managed database sources are available in the following additional AWS Regions: Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (London), South America (São Paulo), and US (Virginia) regions. To get started, sign into the AWS Management Console. For more information visit the AWS Glue page or review the AWS Glue zero-ETL documentation.

lexec2redshifteksglue
#lex#ec2#redshift#eks#glue#ga

Amazon EC2 now supports an Availability Zone ID (AZ ID) parameter, enabling you to create and manage resources such as instances, volumes, and subnets using consistent zone identifiers. AZ IDs are consistent and static identifiers that represent the same physical location across all AWS accounts, helping you optimize resource placement. Prior to this launch, you had to use an AZ name while creating a resource, but these names could map to different physical locations in different accounts, making it difficult to ensure resources were always co-located, especially when operating with multiple accounts. Now, you can specify the AZ ID parameter directly in your EC2 APIs to guarantee consistent placement of resources. AZ IDs always refer to the same physical location across all accounts, which means you no longer need to manually map AZ names across your accounts or deal with the complexity of tracking and aligning zones. This capability is now available for resources including instances, launch templates, hosts, reserved instances, fleet, spot instances, volumes, capacity reservations, network insights, VPC endpoints and subnets, network interfaces, fast snapshot restore, and instance connect. This feature is available in all AWS Regions, including the China and AWS GovCloud (US) Regions. To learn more about Availability Zone IDs, visit the documentation.
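A sketch of what this looks like for RunInstances: placing an instance by AZ ID rather than AZ name. The placement key follows the RunInstances request shape described by this announcement; the AMI ID and AZ ID are placeholders.

```python
# Sketch: request an instance in a specific physical zone by AZ ID.
# "use1-az1" refers to the same physical zone in every AWS account,
# unlike AZ names such as "us-east-1a", which vary per account.
run_instances_params = {
    "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
    "InstanceType": "m5.large",
    "MinCount": 1,
    "MaxCount": 1,
    "Placement": {
        "AvailabilityZoneId": "use1-az1",
    },
}

# With boto3 (not run here):
# import boto3
# boto3.client("ec2").run_instances(**run_instances_params)
```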

lexec2
#lex#ec2#launch#now-available#support

Amazon WorkSpaces Applications now offers support for Ubuntu Pro 24.04 LTS on Elastic fleets, enabling Independent Software Vendors (ISVs) and central IT organizations to stream Ubuntu desktop applications to users while leveraging the flexibility, scalability, and cost-effectiveness of the AWS Cloud. Amazon WorkSpaces Applications is a fully managed, secure desktop and application streaming service that provides users with instant access to their desktops and applications from anywhere. Within Amazon WorkSpaces Applications, Elastic fleet is a serverless fleet type that lets you stream desktop applications to your end users from an AWS-managed pool of streaming instances without needing to predict usage, create and manage scaling policies, or create an image. The Elastic fleet type is designed for customers that want to stream applications to users without managing any capacity or creating WorkSpaces Applications images. To get started, sign in to the WorkSpaces Applications management console and select the AWS Region of your choice. For the full list of Regions where WorkSpaces Applications is available, see the AWS Region Table. Amazon WorkSpaces Applications offers pay-as-you-go pricing. For more information, see Amazon WorkSpaces Applications Pricing.

lexorganizations
#lex#organizations#ga#support

Starting today, the compute-optimized Amazon EC2 C8a instances are available in the Europe (Spain) Region. C8a instances are powered by 5th Gen AMD EPYC processors (formerly code named Turin) with a maximum frequency of 4.5 GHz, delivering up to 30% higher performance and up to 19% better price-performance compared to C7a instances. C8a instances deliver 33% more memory bandwidth compared to C7a instances, making these instances ideal for latency-sensitive workloads. Compared to Amazon EC2 C7a instances, they are up to 57% faster for GroovyJVM, allowing better response times for Java-based applications. C8a instances offer 12 sizes, including 2 bare metal sizes. This range of instance sizes allows customers to precisely match their workload requirements. C8a instances are built on the AWS Nitro System and are ideal for high performance, compute-intensive workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information, visit the Amazon EC2 C8a instance page.

ec2
#ec2#ga#now-available

AWS IoT Core now lets you batch multiple IoT messages into a single HTTP rule action before routing the messages to downstream HTTP endpoints. This enhancement helps you reduce cost and throughput overhead when ingesting telemetry from your Internet of Things (IoT) workloads. AWS IoT Core is a fully managed service that securely connects millions of IoT devices to the AWS cloud. Using rules for AWS IoT, you can filter, process, and decode device data, and route that data to AWS services or third-party endpoints via 20+ AWS IoT rule actions, such as the HTTP rule action, which routes data to HTTP endpoints. With the new feature, you can now batch messages together before routing that data set to downstream HTTP endpoints. To efficiently process IoT messages using the new batching capability, connect your IoT devices to AWS IoT Core and define an HTTP rule action with your desired batch parameters. AWS IoT Core will then process incoming messages according to these specifications and route the messages to your designated HTTP endpoints. For example, you can now combine IoT messages published from multiple smart home devices in a single batch and route it to an HTTP endpoint in your smart home platform. This new feature is available in all AWS Regions where AWS IoT Core is available, including the AWS GovCloud (US) and Amazon China Regions. To learn more, visit our developer guide, pricing page, and API documentation.

#new-feature#enhancement

Amazon WorkSpaces now supports IPv6 for WorkSpaces domains and external endpoints, enabling users to connect through an IPv4/IPv6 dual-stack configuration from compatible clients (excluding SAML authentication). This helps customers meet IPv6 compliance requirements and eliminates the need for costly networking equipment to handle address translation between IPv4 and IPv6. Dual-stack support for WorkSpaces addresses the Internet's growing demand for IP addresses by offering a vastly larger address space than IPv4. This eliminates the need to manage overlapping address ranges within your Virtual Private Cloud (VPC). Customers can deploy WorkSpaces through dual-stack that supports both IPv4 and IPv6 protocols while maintaining backward compatibility with existing IPv4 systems. Customers can also connect to their WorkSpaces through PrivateLink VPC endpoints over IPv6, enabling them to access the service privately without routing traffic over the public internet. Connecting to Amazon WorkSpaces over IPv4/IPv6 dual-stack configuration is supported in all AWS Regions where Amazon WorkSpaces is available, including the AWS GovCloud (US East & US West) Regions. There is no additional cost for this feature. To enable IPv6, you must use the latest WorkSpaces client application for Windows, macOS, Linux, PCoIP zero clients, or web access. To learn more about IPv6 support on Amazon WorkSpaces, refer to the Amazon WorkSpaces administration guide.

#support

Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports dual-stack connectivity (IPv4 and IPv6) for new connectors on Amazon MSK Connect. This capability enables customers to create connectors on MSK Connect using both IPv4 and IPv6 protocols, in addition to the existing IPv4-only option. It helps customers modernize applications for IPv6 environments while maintaining IPv4 compatibility, making it easier to meet compliance requirements and prepare for future network architectures. Amazon MSK Connect is a fully managed service that allows you to deploy and operate Apache Kafka Connect connectors in a fully managed environment. Previously, connectors on MSK Connect only supported IPv4 addressing for all connectivity options. With this new capability, customers can now enable dual-stack connectivity (IPv4 and IPv6) on new connectors using the Amazon MSK Console, AWS CLI, SDK, or CloudFormation by setting the Network Type parameter during connector creation. All connectors on MSK Connect will by default use IPv4-only connectivity unless explicitly opted-in for dual-stack while creating new connectors. Existing connectors will continue using IPv4 connectivity. To change that you will need to delete and recreate the connector. Dual-stack connectivity for new connectors on MSK Connect is now available in all AWS Regions where Amazon MSK Connect is available, at no additional cost. To learn more about Amazon MSK dual-stack support, refer to the Amazon MSK developer guide.
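A sketch of opting a new connector into dual-stack networking at creation time. The announcement names a "Network Type" parameter; the exact key and enum spelling below ("networkType": "DUAL_STACK") are assumptions, so check the MSK Connect CreateConnector API reference for the precise values.

```python
# Sketch: create an MSK Connect connector with dual-stack networking.
# Key name and enum value are assumptions based on the announcement;
# connectors default to IPv4-only when the parameter is omitted.
create_connector_params = {
    "connectorName": "orders-sink",
    "networkType": "DUAL_STACK",  # assumed spelling; default is IPv4-only
    # Remaining required CreateConnector fields (capacity, plugins,
    # kafkaCluster, serviceExecutionRoleArn, ...) omitted for brevity.
}

# With boto3 (not run here):
# import boto3
# boto3.client("kafkaconnect").create_connector(**create_connector_params)
```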

cloudformationkafkamsk
#cloudformation#kafka#msk#now-available#support#new-capability

Amazon ECS Managed Instances now supports Amazon EC2 Spot Instances, extending the range of capabilities available with AWS-managed infrastructure. With this launch, you can leverage spare EC2 capacity at up to 90% discount compared to On-Demand prices for fault-tolerant workloads, while AWS handles infrastructure management. ECS Managed Instances is a fully managed compute option designed to eliminate infrastructure management overhead, dynamically scale EC2 instances to match your workload requirements, and continuously optimize task placement to reduce infrastructure costs. You simply define your task requirements, such as the number of vCPUs, memory size, and CPU architecture, and Amazon ECS automatically provisions, configures, and operates the optimal EC2 instances within your AWS account using AWS-controlled access. You can also specify desired instance types in the Managed Instances capacity provider configuration, including GPU-accelerated, network-optimized, and burstable performance types, to run your workloads on the instance families you prefer. With today's launch, you can additionally configure a new parameter, capacityOptionType, as spot or on-demand in your capacity provider configuration. Support for EC2 Spot Instances is available in all AWS Regions where Amazon ECS Managed Instances is available. You will be charged for the management of compute provisioned, in addition to your Amazon EC2 Spot costs. To learn more about ECS Managed Instances, visit the feature page, documentation, and AWS News launch blog.
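As a sketch, a Managed Instances capacity provider configuration requesting Spot capacity via the capacityOptionType parameter named in this announcement. The surrounding nesting and field names are illustrative assumptions; consult the ECS capacity provider API reference for the exact managed-instances shape.

```python
# Sketch: capacity provider configuration requesting Spot capacity.
# capacityOptionType values ("spot" / "on-demand") come from the
# announcement; the nesting around it is illustrative.
capacity_provider_config = {
    "name": "batch-spot-mi",
    "managedInstancesProvider": {          # illustrative nesting
        "capacityOptionType": "spot",      # or "on-demand" (the default)
        "instanceRequirements": {          # illustrative task-shape hints
            "vCpuCount": {"min": 2, "max": 8},
            "memoryMiB": {"min": 4096},
        },
    },
}
```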

ec2ecs
#ec2#ecs#launch#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the Asia Pacific (Seoul), South America (Sao Paulo), and Asia Pacific (Tokyo) regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer up to 15% better price-performance, and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver 20% higher performance than R7i instances, with even higher gains for specific workloads. They are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to R7i. R8i-flex, our first memory-optimized Flex instances, are the easiest way to get price performance benefits for a majority of memory-intensive workloads. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. R8i instances are a great choice for all memory-intensive workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. R8i instances offer 13 sizes including 2 bare metal sizes and the new 96xlarge size for the largest applications. R8i instances are SAP-certified and deliver 142,100 aSAPS, the highest among all comparable machines in on-premises and cloud environments, delivering exceptional performance for mission-critical SAP workloads. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information about the new R8i and R8i-flex instances visit the AWS News blog.

lexec2
#lex#ec2#ga#now-available

AWS Lambda durable functions enable developers to build reliable multi-step applications and AI workflows within the Lambda developer experience. Starting today, durable functions are available in 14 additional AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (Milan), Europe (Stockholm), Europe (Spain), Asia Pacific (Sydney), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Malaysia), and Asia Pacific (Thailand). Lambda durable functions extend the Lambda programming model with new primitives in your event handler, such as "steps" and "waits", allowing you to checkpoint progress, automatically recover from failures, and pause execution without incurring compute charges for on-demand functions. With this Region expansion, you can orchestrate complex processes such as order workflows, user onboarding, and AI-assisted tasks closer to your users and data, helping you to meet low-latency and data residency requirements while standardizing on a single serverless programming model. You can activate durable functions for new Python (versions 3.13 and 3.14) or Node.js (versions 22 and 24) based Lambda functions using the AWS Lambda API, AWS Management Console, or AWS SDK. You can also use infrastructure as code tools such as AWS CloudFormation, AWS Serverless Application Model (AWS SAM), and the AWS Cloud Development Kit (AWS CDK). For more information on durable functions, visit the AWS Lambda Developer Guide. To learn about pricing, visit AWS Lambda pricing. For the latest Region availability, visit the AWS Capabilities by Region page.

lexlambda
#lex#lambda#ga#now-available#expansion

AWS Direct Connect now supports resilience testing with AWS Fault Injection Service (FIS), a fully managed service for running controlled fault injection experiments to improve application performance, observability, and resilience. With this new capability, you can test, in a controlled environment, how your applications respond when Border Gateway Protocol (BGP) sessions over your Virtual Interfaces are disrupted, and validate your resilience mechanisms. For example, you can validate that traffic routes to redundant Virtual Interfaces when a primary Virtual Interface's BGP session is disrupted and that your applications continue to function as expected. This capability is particularly valuable for proactively testing Direct Connect architectures where failover is critical to maintaining network connectivity. This new action is available in all AWS Commercial Regions where AWS FIS is offered. To learn more, visit the AWS FIS product page and the Direct Connect FIS actions user guide.

#ga#support#new-capability

AWS Clean Rooms now supports change requests to modify existing collaboration settings, offering customers greater flexibility in managing collaborations and developing new use cases with their partners. With this new capability, you can submit a change request for a collaboration, including adding new members, updating member abilities, and modifying collaboration auto-approval settings. To maintain security, all collaboration members must approve change requests before updates take effect, ensuring that existing privacy controls remain protected. For transparency, all change requests are logged in the change history for member review. For example, when a publisher creates a Clean Rooms collaboration with an advertiser, the publisher can add the advertiser’s marketing agency as a new member that can receive the analysis results directly in their account, enabling faster time-to-insights and streamlined campaign optimizations with the publisher. This approach reduces onboarding time while maintaining the existing privacy controls for you and your partners. With AWS Clean Rooms, customers can create a secure data clean room in minutes and collaborate with any company on AWS or Snowflake to generate unique insights about advertising campaigns, investment decisions, and research and development. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.

lex
#lex#update#support#new-capability

Amazon ECS now enables you to define weekly event windows for scheduling task retirements on AWS Fargate. This capability provides precise control over when infrastructure updates and task replacements occur, helping prevent disruption to mission-critical workloads during peak business hours. AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. As part of the AWS shared responsibility model, Fargate maintains the underlying infrastructure with periodic platform updates. Fargate automatically retires your tasks for these updates and notifies you about upcoming task retirements via email and the AWS Health Dashboard. By default, tasks are retired 7 days after notification, but you can configure the fargateTaskRetirementWaitPeriod account setting to extend the retirement period to 14 days or initiate immediate retirement (0 days). Previously, you could build automation using the task retirement notification and wait period to perform service updates or task replacements on your own cadence. With today's launch, you can now use the Amazon EC2 event windows interface to define weekly event windows for precise control over the timing of Fargate task retirements. For example, you can schedule task retirements for a mission-critical service that requires high uptime during weekdays by configuring retirements to occur only on weekends. To get started, configure the AWS account setting fargateEventWindows to enabled as a one-time setup. Once enabled, configure Amazon EC2 event window(s) by specifying time ranges, and associate the event window(s) with your ECS tasks by selecting Amazon ECS-managed tags as the association target. Use the aws:ecs:clusterArn tag to target tasks in an ECS cluster, the aws:ecs:serviceArn tag for ECS services, or the aws:ecs:fargateTask tag with a value of true to apply the window to all Fargate tasks. This feature is now available in all commercial AWS Regions. To learn more, visit our documentation.
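The steps above map onto the EC2 instance event window APIs. The following sketch builds the two request payloads you would pass to CreateInstanceEventWindow and AssociateInstanceEventWindow (boto3-style shapes; nothing here actually calls AWS, and the window name is illustrative) for a weekends-only retirement window:

```python
# Sketch of the request payloads for scheduling Fargate task retirements
# via EC2 instance event windows. The parameter shapes follow the EC2
# CreateInstanceEventWindow / AssociateInstanceEventWindow APIs; in a real
# setup you would pass them to boto3's "ec2" client.

def weekend_event_window(name):
    """Build a CreateInstanceEventWindow request covering Sat 00:00-Sun 23:00 UTC."""
    return {
        "Name": name,
        "TimeRanges": [
            {"StartWeekDay": "saturday", "StartHour": 0,
             "EndWeekDay": "sunday", "EndHour": 23},
        ],
    }

def fargate_association_target():
    """Target all Fargate tasks via the ECS-managed tag named in the announcement."""
    return {"InstanceTags": [{"Key": "aws:ecs:fargateTask", "Value": "true"}]}

window = weekend_event_window("fargate-weekend-retirements")
target = fargate_association_target()
```

With boto3, after enabling the fargateEventWindows account setting, you would call `ec2.create_instance_event_window(**window)` and then `ec2.associate_instance_event_window(...)` with the returned window ID and the target above.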

ec2ecsfargate
#ec2#ecs#fargate#launch#ga#now-available

Amazon Managed Streaming for Apache Kafka (MSK) now supports Apache Kafka version 3.9 for Express Brokers. This release introduces support for KRaft (Kafka Raft), Apache Kafka's new consensus protocol that eliminates the dependency on Apache ZooKeeper for metadata management. KRaft shifts metadata management in Kafka clusters from external Apache ZooKeeper nodes to a group of controllers within Kafka. This change allows metadata to be stored and replicated as topics within Kafka brokers, resulting in faster propagation of metadata. New Express Broker clusters created using Kafka v3.9 will automatically use KRaft as the metadata management mode, giving you the benefits of this modern architecture from the start. The ability to upgrade existing clusters to v3.9 will be available in a future release. Amazon MSK Express Brokers with Kafka v3.9 are available in all AWS regions where MSK Express is supported. To get started, create a new Express Broker cluster and select Kafka version 3.9 in the AWS Management Console or via the AWS CLI or AWS SDKs.
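As a sketch of the SDK path, the following builds a CreateClusterV2-style request for an Express Broker cluster on Kafka 3.9. The version label and express instance type below are assumptions for illustration; check `aws kafka list-kafka-versions` and the MSK console for the exact values available in your Region.

```python
# Sketch of a CreateClusterV2 request for an MSK Express Broker cluster
# on Kafka 3.9. The shape follows the boto3 "kafka" client; the version
# string and instance type are placeholders, not confirmed values.

def express_cluster_request(name, subnet_ids):
    return {
        "ClusterName": name,
        "Provisioned": {
            "KafkaVersion": "3.9.x",                  # assumed version label
            "NumberOfBrokerNodes": 3,
            "BrokerNodeGroupInfo": {
                "InstanceType": "express.m7g.large",  # assumed express broker type
                "ClientSubnets": subnet_ids,
            },
        },
    }

req = express_cluster_request("demo-express", ["subnet-aaa", "subnet-bbb"])
```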

kafkamsk
#kafka#msk#ga#support

Amazon Neptune Database is now available in the Europe (Zurich) Region on engine versions 1.4.5.0 and later. You can now create Neptune clusters using R5, R5d, R6g, R6i, X2iedn, T4g, and T3 instance types in the AWS Europe (Zurich) Region. Amazon Neptune Database is a fast, reliable, and fully managed graph database as a service that makes it easy to build and run applications that work with highly connected datasets. You can build applications using Apache TinkerPop Gremlin or openCypher on the Property Graph model, or using the SPARQL query language on the W3C Resource Description Framework (RDF). Neptune also offers enterprise features such as high availability, automated backups, and network isolation to help customers quickly deploy applications to production. To get started, you can create a new Neptune cluster using the AWS Management Console, AWS CLI, or a quickstart AWS CloudFormation template. For more information on pricing and region availability, refer to the Neptune pricing page and AWS Region Table.
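For a sense of what querying looks like, the sketch below builds an openCypher HTTP request against a Neptune cluster endpoint (Neptune serves openCypher over HTTPS on port 8182 at /openCypher). The endpoint is a placeholder, and a real call also needs network access to the cluster's VPC plus SigV4 signing when IAM authentication is enabled:

```python
from urllib.parse import urlencode

# Placeholder cluster endpoint -- substitute your own Neptune cluster DNS name.
NEPTUNE_ENDPOINT = "my-cluster.cluster-abc123.eu-central-2.neptune.amazonaws.com"

def opencypher_request(query):
    """Build the URL and form-encoded body for Neptune's openCypher endpoint."""
    url = f"https://{NEPTUNE_ENDPOINT}:8182/openCypher"
    body = urlencode({"query": query}).encode()
    return url, body

url, body = opencypher_request("MATCH (n) RETURN count(n)")
```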

cloudformation
#cloudformation#now-available

Today, Amazon Simple Email Service (SES) announces email validation, a new capability that helps customers reduce bounce rates and protect sender reputation by validating email addresses before sending. Customers can validate individual addresses via API calls or enable automatic validation across all outbound emails. Email validation helps customers maintain list hygiene, reduce bounces, and improve delivery by identifying invalid addresses that could damage sender reputation. The API provides detailed validation insights such as syntax checks and DNS records. With Auto Validation enabled, SES automatically reviews every outbound email address without requiring any code changes. Auto Validation can be configured at the account level or at the configuration set level using simple toggles in the AWS console, enabling seamless integration with existing workflows. Email validation is available in all AWS Regions where Amazon SES is available. To learn more, see the documentation on Email Validation in the Amazon SES Developer Guide. To start using Email Validation, visit the Amazon SES console.

rds
#rds#integration#new-capability

With Kinesis Data Firehose, customers can use a fully managed, reliable, and scalable data streaming solution to Splunk. In this post, we tell you a bit more about the Kinesis Data Firehose and Splunk integration. We also show you how to ingest large amounts of data into Splunk using Kinesis Data Firehose.

kinesis
#kinesis#integration

Today, AWS Databases including Amazon Aurora PostgreSQL, Amazon Aurora DSQL, and Amazon DynamoDB are generally available on the Vercel Marketplace, enabling you to create and connect to an AWS database directly from Vercel in seconds. To get started, you can create a new AWS Account from Vercel that includes access to the three databases and $100 USD in credits. These credits can be used with any of these database options for up to six months. Once your account is set up, you can have a production-ready Aurora database or DynamoDB table powering your Vercel projects within seconds. You can also manage your plan, add payment information, and view usage details anytime by visiting the AWS settings portal from the Vercel dashboard. To learn more, visit the AWS landing page on the Vercel Marketplace. The integration includes serverless options for Amazon Aurora PostgreSQL, Amazon Aurora DSQL, and Amazon DynamoDB to simplify your application needs and reduce costs by scaling to zero when not in use. You can create a database in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Mumbai), with more Regions coming soon. AWS Databases deliver security, reliability, and price performance without the operational overhead, whether you're prototyping your next big idea or running production AI and data-driven applications. For more information, visit the AWS Databases webpage.

dynamodb
#dynamodb#generally-available#now-available#integration#coming-soon

Today, AWS announces the general availability of the new Amazon Elastic Compute Cloud (Amazon EC2) M8gn and M8gb instances. These instances are powered by AWS Graviton4 processors to deliver up to 30% better compute performance than AWS Graviton3 processors. M8gn instances feature the latest 6th generation AWS Nitro Cards and offer up to 600 Gbps network bandwidth, the highest among network-optimized EC2 instances. M8gb instances offer up to 150 Gbps of EBS bandwidth, providing higher EBS performance than same-sized Graviton4-based instances. M8gn instances are ideal for network-intensive workloads such as high-performance file systems, distributed web scale in-memory caches, caching fleets, real-time big data analytics, and Telco applications such as 5G User Plane Function (UPF). M8gb instances are ideal for workloads requiring high block storage performance, such as high-performance databases and NoSQL databases. M8gn instances offer instance sizes up to 48xlarge, up to 768 GiB of memory, up to 600 Gbps of networking bandwidth, and up to 60 Gbps of bandwidth to Amazon Elastic Block Store (EBS). They also support EFA networking on the 16xlarge, 24xlarge, and 48xlarge sizes. M8gb instances offer sizes up to 24xlarge, up to 768 GiB of memory, up to 150 Gbps of EBS bandwidth, and up to 200 Gbps of networking bandwidth. They support Elastic Fabric Adapter (EFA) networking on the 16xlarge and 24xlarge sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters. The new instances are available in the following AWS Regions: US East (N. Virginia) and US West (Oregon). To learn more, see Amazon EC2 M8gn and M8gb Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page.

ec2rdsgraviton
#ec2#rds#graviton#generally-available#support

Today, Amazon WorkSpaces Applications announced a new set of Amazon CloudWatch metrics for monitoring the health and performance of fleets, sessions, instances, and users. Administrators and support operations personnel can conveniently enable monitoring across fleets from the Amazon CloudWatch console. These metrics simplify troubleshooting and dynamically update to reflect the latest state of important performance indicators. Users can make informed sizing decisions for end users' streaming instances by setting performance thresholds on available metrics to meet performance and budgeting criteria, and can view performance metrics to troubleshoot end-user streaming session issues. To enable this feature for your fleet instances, you must use a WorkSpaces Applications image with an agent released on or after December 06, 2025, or one that has been updated using Managed WorkSpaces Applications image updates released on or after December 05, 2025. These CloudWatch metrics are available in all AWS commercial and AWS GovCloud (US) Regions where Amazon WorkSpaces Applications is currently available. To get started or learn more, visit the Amazon WorkSpaces Applications Metrics and Dimensions documentation.
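Setting a threshold on one of these metrics comes down to a CloudWatch PutMetricAlarm call. The sketch below builds such a request; the namespace and metric name are hypothetical placeholders, so take the real names from the Amazon WorkSpaces Applications Metrics and Dimensions documentation.

```python
# Sketch of a CloudWatch PutMetricAlarm request for a fleet-level threshold.
# The keys follow boto3's cloudwatch.put_metric_alarm; "Namespace",
# "MetricName", and the "Fleet" dimension name here are hypothetical.

def fleet_alarm(fleet_name):
    return {
        "AlarmName": f"{fleet_name}-high-session-latency",
        "Namespace": "AWS/WorkSpacesApplications",   # hypothetical namespace
        "MetricName": "SessionLatency",              # hypothetical metric
        "Dimensions": [{"Name": "Fleet", "Value": fleet_name}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 3,
        "Threshold": 200.0,
        "ComparisonOperator": "GreaterThanThreshold",
    }

alarm = fleet_alarm("design-team")
```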

cloudwatch
#cloudwatch#update#support

Amazon Aurora PostgreSQL-Compatible Edition now supports PostgreSQL version 18.1 in the Amazon RDS Database Preview Environment, allowing you to evaluate PostgreSQL 18.1 on Amazon Aurora PostgreSQL. PostgreSQL 18.1 was released by the PostgreSQL community on September 9, 2025. PostgreSQL 18.1 includes "skip scan" support for multicolumn B-tree indexes and improves WHERE clause handling for OR and IN conditions. It introduces parallel GIN index builds and updates join operations. Observability improvements show buffer usage counts and index lookups during query execution, along with a per-connection I/O utilization metric. To learn more about PostgreSQL 18.1, read here. Database instances in the RDS Database Preview Environment allow testing of a new database engine without the hassle of having to self-install, provision, and manage a preview version of the Aurora PostgreSQL database software. Clusters are retained for a maximum period of 60 days and are automatically deleted after this retention period. Amazon RDS Database Preview Environment database instances are priced the same as production Aurora instances created in the US East (Ohio) Region. Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.

rds
#rds#preview#update#improvement#integration#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8g and M8g instances are available in AWS GovCloud (US-West) and R8g and M8g instances are available in AWS GovCloud (US-East) Regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. They are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads. AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon EC2 C7g, M7g, and R7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. C8g and R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 C8g Instances, Amazon EC2 M8g Instances, and Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS GovCloud (US) Console.

ec2graviton
#ec2#graviton#now-available

In healthcare, generative AI is transforming how medical professionals analyze data, summarize clinical notes, and generate insights to improve patient outcomes. From automating medical documentation to assisting in diagnostic reasoning, large language models (LLMs) have the potential to augment clinical workflows and accelerate research. However, these innovations also introduce significant privacy, security, and intellectual property challenges.

nova
#nova

In this post, we walk through building a generative AI–powered troubleshooting assistant for Kubernetes. The goal is to give engineers a faster, self-service way to diagnose and resolve cluster issues, cut down Mean Time to Recovery (MTTR), and reduce the cycles experts spend finding the root cause of issues in complex distributed systems.

lex
#lex

This post is about the AWS SDK for JavaScript v3 announcing end of support for Node.js versions based on the Node.js release schedule; it is not about AWS Lambda. For the latter, refer to the Lambda runtime deprecation policy. In the second week of January 2026, the AWS SDK for JavaScript v3 (JS SDK) will start […]

lambda
#lambda#support

Have you ever wondered what it is really like to be a woman in tech at one of the world's leading cloud companies? Or maybe you are curious about how diverse perspectives drive innovation beyond the buzzwords? Today, we are providing an insider's perspective on the role of a solutions architect (SA) at Amazon Web Services (AWS). However, this is not a typical corporate success story. We are three women who have navigated challenges, celebrated wins, and found our unique paths in the world of cloud architecture, and we want to share our real stories with you.

novards
#nova#rds#ga

Organizations often have large volumes of documents containing valuable information that remains locked away and unsearchable. This solution addresses the need for a scalable, automated text extraction and knowledge base pipeline that transforms static document collections into intelligent, searchable repositories for generative AI applications.

bedrockstep functionsorganizations
#bedrock#step functions#organizations#ga

In this post, we demonstrate how to utilize AWS Network Firewall to secure an Amazon EVS environment, using a centralized inspection architecture across an EVS cluster, VPCs, on-premises data centers and the internet. We walk through the implementation steps to deploy this architecture using AWS Network Firewall and AWS Transit Gateway.

#ga

You can now develop AWS Lambda functions using Node.js 24, either as a managed runtime or using the container base image. Node.js 24 is in active LTS status and ready for production use. It is expected to be supported with security patches and bugfixes until April 2028. The Lambda runtime for Node.js 24 includes a new implementation of the […]

lambda
#lambda#now-available#support

Organizations running critical workloads on Amazon Elastic Compute Cloud (Amazon EC2) reserve compute capacity using On-Demand Capacity Reservations (ODCR) to have availability when needed. However, reserved capacity can intermittently sit idle during off-peak periods, between deployments, or when workloads scale down. This unused capacity represents a missed opportunity for cost optimization and resource efficiency across the organization.

ec2organizations
#ec2#organizations#ga

Amazon Web Services (AWS) provides many mechanisms to optimize the price performance of workloads running on Amazon Elastic Compute Cloud (Amazon EC2), and the selection of the optimal infrastructure to run on can be one of the most impactful levers. When we started building the AWS Graviton processor, our goal was to optimize AWS Graviton […]

ec2graviton
#ec2#graviton

In this post, you will learn how the new Amazon API Gateway’s enhanced TLS security policies help you meet standards such as PCI DSS, Open Banking, and FIPS, while strengthening how your APIs handle TLS negotiation. This new capability increases your security posture without adding operational complexity, and provides you with a single, consistent way to standardize TLS configuration across your API Gateway infrastructure.

lexrdsapi gateway
#lex#rds#api gateway#ga#new-capability

Event-driven applications often need to process data in real-time. When you use AWS Lambda to process records from Apache Kafka topics, you frequently encounter two typical requirements: you need to process very high volumes of records in close to real-time, and you want your consumers to have the ability to scale rapidly to handle traffic spikes. Achieving both necessitates understanding how Lambda consumes Kafka streams, where the potential bottlenecks are, and how to optimize configurations for high throughput and best performance.

lambdardskafka
#lambda#rds#kafka

Modern generative AI applications often need to stream large language model (LLM) outputs to users in real-time. Instead of waiting for a complete response, streaming delivers partial results as they become available, which significantly improves the user experience for chat interfaces and long-running AI tasks. This post compares three serverless approaches to handle Amazon Bedrock LLM streaming on Amazon Web Services (AWS), which helps you choose the best fit for your application.

bedrock
#bedrock

Today, AWS is announcing tenant isolation for AWS Lambda, enabling you to process function invocations in separate execution environments for each end-user or tenant invoking your Lambda function. This capability simplifies building secure multi-tenant SaaS applications by managing tenant-level compute environment isolation and request routing, allowing you to focus on core business logic rather than implementing tenant-aware compute environment isolation.

lambda
#lambda

In this post, we'll explore a reference architecture that helps enterprises govern their Amazon Bedrock implementations using Amazon API Gateway. This pattern enables key capabilities like authorization controls, usage quotas, and real-time response streaming. We'll examine the architecture, provide deployment steps, and discuss potential enhancements to help you implement AI governance at scale.

bedrockapi gateway
#bedrock#api gateway#ga#enhancement

Today, AWS announced support for response streaming in Amazon API Gateway to significantly improve the responsiveness of your REST APIs by progressively streaming response payloads back to the client. With this new capability, you can use streamed responses to enhance user experience when building LLM-driven applications (such as AI agents and chatbots), improve time-to-first-byte (TTFB) performance for web and mobile applications, stream large files, and perform long-running operations while reporting incremental progress using protocols such as server-sent events (SSE).
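Server-sent events, one of the protocols mentioned, frame a stream as `data:` lines with each event terminated by a blank line. A minimal, framework-agnostic illustration of that framing (not API Gateway-specific code):

```python
# Minimal sketch of server-sent events (SSE) framing: each event is one
# or more "data:" lines followed by a blank line.

def sse_frame(chunk):
    """Frame one payload chunk as an SSE event."""
    lines = "".join(f"data: {line}\n" for line in chunk.splitlines())
    return lines + "\n"

def sse_parse(stream):
    """Yield event payloads back out of raw SSE text."""
    for event in stream.split("\n\n"):
        data = [l[6:] for l in event.split("\n") if l.startswith("data: ")]
        if data:
            yield "\n".join(data)

framed = sse_frame("token-1") + sse_frame("token-2")
events = list(sse_parse(framed))
```

A streaming backend emits frames like `framed` incrementally, and a browser's EventSource (or any SSE client) performs the parsing side.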

api gateway
#api gateway#ga#support#new-capability

Amazon Elastic Compute Cloud (Amazon EC2) instances with locally attached NVMe storage can provide the performance needed for workloads demanding ultra-low latency and high I/O throughput. High-performance workloads, from high-frequency trading applications and in-memory databases to real-time analytics engines and AI/ML inference, need comprehensive performance tracking. Operating system tools like iostat and sar provide valuable system-level insights, and Amazon CloudWatch offers important disk IOPS and throughput measurements, but high-performance workloads can benefit from even more detailed visibility into instance store performance.

ec2cloudwatch
#ec2#cloudwatch

At re:Invent 2025, we introduced one new lens and two significant updates to the AWS Well-Architected Lenses specifically focused on AI workloads: the Responsible AI Lens, the Machine Learning (ML) Lens, and the Generative AI Lens. Together, these lenses provide comprehensive guidance for organizations at different stages of their AI journey, whether you're just starting to experiment with machine learning or already deploying complex AI applications at scale.

lexorganizations
#lex#organizations#launch#ga#update

We are delighted to announce an update to the AWS Well-Architected Generative AI Lens. It adds several new sections, including new best practices, advanced scenario guidance, and improved preambles on responsible AI, data architecture, and agentic workflows.

#update

AWS Lambda now supports Python 3.14 as both a managed runtime and container base image. Python is a popular language for building serverless applications. Developers can now take advantage of new features and enhancements when creating serverless applications on Lambda.
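Existing handlers carry over unchanged; a minimal sketch of the standard handler contract that would run on the python3.14 managed runtime (the event fields below are purely illustrative):

```python
import json

# Standard Lambda handler shape: (event, context) -> response dict.
# Configure "handler" as the function entry point when creating the function.

def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a stand-in event and no context object.
resp = handler({"name": "python3.14"}, None)
```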

lambda
#lambda#now-available#new-feature#enhancement#support

Today, AWS Lambda is promoting Rust support from Experimental to Generally Available. This means you can now use Rust to build business-critical serverless applications, backed by AWS Support and the Lambda availability SLA.

lambda
#lambda#experimental#generally-available#support

This post was co-written with Frederic Haase and Julian Blau with BASF Digital Farming GmbH. At xarvio – BASF Digital Farming, our mission is to empower farmers around the world with cutting-edge digital agronomic decision-making tools. Central to this mission is our crop optimization platform, xarvio FIELD MANAGER, which delivers actionable insights through a range […]

eks
#eks

Version 2.0 of the AWS Deploy Tool for .NET is now available. This new major version introduces several foundational upgrades to improve the deployment experience for .NET applications on AWS. The tool comes with new minimum runtime requirements. We have upgraded it to require .NET 8 because the predecessor, .NET 6, is now out of […]

#now-available

The global real-time payments market is experiencing significant growth. According to Fortune Business Insights, the market was valued at USD 24.91 billion in 2024 and is projected to grow to USD 284.49 billion by 2032, with a CAGR of 35.4%. Similarly, Grand View Research reports that the global mobile payment market, valued at USD 88.50 […]

Generative AI agents in production environments demand resilience strategies that go beyond traditional software patterns. AI agents make autonomous decisions, consume substantial computational resources, and interact with external systems in unpredictable ways. These characteristics create failure modes that conventional resilience approaches might not address. This post presents a framework for AI agent resilience risk analysis […]

The AWS SDK for Java 1.x (v1) entered maintenance mode on July 31, 2024, and will reach end-of-support on December 31, 2025. We recommend that you migrate to the AWS SDK for Java 2.x (v2) to access new features, enhanced performance, and continued support from AWS. To help you migrate efficiently, we’ve created a migration […]

#new-feature#support

In this post, we explore how Metagenomi built a scalable database and search solution for over 1 billion protein vectors using LanceDB and Amazon S3. The solution enables rapid enzyme discovery by transforming proteins into vector embeddings and implementing a serverless architecture that combines AWS Lambda, AWS Step Functions, and Amazon S3 for efficient nearest neighbor searches.
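Conceptually, the search step reduces to nearest-neighbor lookup over embedding vectors. A toy brute-force cosine-similarity version of that idea (the real system uses LanceDB's approximate indexing over S3 rather than a linear scan, and the vectors below are made up):

```python
import math

# Toy nearest-neighbor search over embeddings by cosine similarity.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query, vectors):
    """Return the id of the stored vector most similar to the query."""
    return max(vectors, key=lambda vid: cosine(query, vectors[vid]))

db = {"enzymeA": [1.0, 0.0], "enzymeB": [0.0, 1.0], "enzymeC": [0.7, 0.7]}
best = nearest([0.9, 0.1], db)
```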

lambdas3step functions
#lambda#s3#step functions

In this post, we explore an efficient approach to managing encryption keys in a multi-tenant SaaS environment through centralization, addressing challenges like key proliferation, rising costs, and operational complexity across multiple AWS accounts and services. We demonstrate how implementing a centralized key management strategy using a single AWS KMS key per tenant can maintain security and compliance while reducing operational overhead as organizations scale.
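One small building block of such a strategy is a deterministic per-tenant key alias, so every service and account resolves the same KMS key for a given tenant. A sketch (the "tenant/" prefix is our own convention; AWS only mandates the leading "alias/" namespace):

```python
# Derive a deterministic KMS alias per tenant for a key-per-tenant strategy.

def tenant_key_alias(tenant_id):
    """Normalize a tenant identifier into a stable KMS alias name."""
    safe = tenant_id.lower().replace(" ", "-")
    return f"alias/tenant/{safe}"

alias = tenant_key_alias("Acme Corp")
```

With boto3 you would create the key once per tenant (`kms.create_key`) and bind the alias with `kms.create_alias(AliasName=alias, TargetKeyId=key_id)`; every consumer then encrypts against the alias rather than a raw key ID.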

lexorganizations
#lex#organizations#ga

This two-part series shows how Karrot developed a new feature platform, which consists of three main components: feature serving, a stream ingestion pipeline, and a batch ingestion pipeline. This post covers the process of collecting features in real-time and batch ingestion into an online store, and the technical approaches for stable operation.

#new-feature

In this post, we demonstrate how to deploy the DeepSeek-R1-Distill-Qwen-32B model using AWS DLCs for vLLM on Amazon EKS, showcasing how these purpose-built containers simplify deployment of this powerful open source inference engine. This solution can help you solve the complex infrastructure challenges of deploying LLMs while maintaining performance and cost-efficiency.

lexeks
#lex#eks

Today, we are excited to announce the general availability of the AWS .NET Distributed Cache Provider for Amazon DynamoDB. This is a seamless, serverless caching solution that enables .NET developers to efficiently manage their caching needs across distributed systems. Consistent caching is a difficult problem in distributed architectures, where maintaining data integrity and performance across […]

dynamodb
#dynamodb#generally-available

This blog was co-authored by Afroz Mohammed and Jonathan Nunn, Software Developers on the AWS PowerShell team. We’re excited to announce the general availability of the AWS Tools for PowerShell version 5, a major update that brings new features and improvements in security, along with a few breaking changes. New Features You can now cancel […]

#generally-available#new-feature#update#improvement

Software development is far more than just writing code. In reality, a developer spends a large amount of time maintaining existing applications and fixing bugs. For example, migrating a Go application from the older AWS SDK for Go v1 to the newer v2 can be a significant undertaking, but it’s a crucial step to future-proof […]

amazon qq developer
#amazon q#q developer

We’re excited to announce that the AWS Deploy Tool for .NET now supports deploying .NET applications to select ARM-based compute platforms on AWS! Whether you’re deploying from Visual Studio or using the .NET CLI, you can now target cost-effective ARM infrastructure like AWS Graviton with the same streamlined experience you’re used to. Why deploy to […]

graviton
#graviton#support

Version 4.0 of the AWS SDK for .NET has been released for general availability (GA). V4 has been in development for a little over a year in our SDK’s public GitHub repository with 13 previews being released. This new version contains performance improvements, consistency with other AWS SDKs, and bug and usability fixes that required […]

#preview#ga#improvement

Today, AWS launches the developer preview of the AWS IoT Device SDK for Swift. The IoT Device SDK for Swift empowers Swift developers to create IoT applications for Linux and Apple macOS, iOS, and tvOS platforms using the MQTT 5 protocol. The SDK supports Swift 5.10+ and is designed to help developers easily integrate with […]

#launch#preview#support

We are excited to announce the Developer Preview of the Amazon S3 Transfer Manager for Rust, a high-level utility that speeds up and simplifies uploads and downloads with Amazon Simple Storage Service (Amazon S3). Using this new library, developers can efficiently transfer data between Amazon S3 and various sources, including files, in-memory buffers, memory streams, […]

s3
#s3#preview

In a recent post we gave some background on .NET Aspire and introduced our AWS integrations with .NET Aspire that integrate AWS into the .NET dev inner loop for building applications. The integrations included how to provision application resources with AWS CloudFormation or AWS Cloud Development Kit (AWS CDK) and using Amazon DynamoDB local for […]

lambdadynamodbcloudformation
#lambda#dynamodb#cloudformation#ga#integration

.NET Aspire is a new way of building cloud-ready applications. In particular, it provides an orchestration for local environments in which to run, connect, and debug the components of distributed applications. Those components can be .NET projects, databases, containers, or executables. .NET Aspire is designed to have integrations with common components used in distributed applications. […]

#integration

AWS announces important configuration updates coming July 31st, 2025, affecting AWS SDKs and CLIs default settings. Two key changes include switching the AWS Security Token Service (STS) endpoint to regional and updating the default retry strategy to standard. These updates aim to improve service availability and reliability by implementing regional endpoints to reduce cross-regional dependencies and introducing token-bucket throttling for standardized retry behavior. Organizations should test their applications before the release date and can opt-in early or temporarily opt-out of these changes. These updates align with AWS best practices for optimal service performance and security.

organizations
#organizations#ga#update