Amazon Redshift now supports DELETE, UPDATE, and MERGE operations for Apache Iceberg tables stored in Amazon S3 and Amazon S3 table buckets. With these operations, you can modify data at the row level, implement upsert patterns, and manage the data lifecycle while maintaining transactional consistency using familiar SQL syntax. You can run complex transformations in Amazon Redshift and write results to Apache Iceberg tables that other analytics engines like Amazon EMR or Amazon Athena can immediately query. In this post, you work with datasets to demonstrate these capabilities in a data synchronization scenario.
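The upsert pattern mentioned above can be sketched as a MERGE statement built in Python. The table and column names (icebergdb.orders, staging.orders, id, amount) are hypothetical, used only to illustrate the shape of the statement:

```python
def build_iceberg_merge_sql(target: str, source: str, key: str, cols: list) -> str:
    """Compose a Redshift MERGE that upserts rows from a staging table
    into an Apache Iceberg table: matched rows are updated, new rows
    are inserted."""
    set_clause = ", ".join(f"{c} = s.{c}" for c in cols)
    col_list = ", ".join([key] + cols)
    val_list = ", ".join(f"s.{c}" for c in [key] + cols)
    return (
        f"MERGE INTO {target} t USING {source} s ON t.{key} = s.{key} "
        f"WHEN MATCHED THEN UPDATE SET {set_clause} "
        f"WHEN NOT MATCHED THEN INSERT ({col_list}) VALUES ({val_list})"
    )

# Hypothetical tables: an Iceberg target and a Redshift staging source.
sql = build_iceberg_merge_sql("icebergdb.orders", "staging.orders", "id", ["amount"])
```

You would submit the resulting statement through any Redshift SQL interface (for example, the query editor or the Redshift Data API), after which engines such as Athena or EMR can immediately query the updated Iceberg table.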
AWS announces general availability (GA) of AWS Interconnect - multicloud, providing simple, resilient, high-speed private connections to other cloud service providers (CSPs). With GA comes Google Cloud as the first launch partner, with Microsoft Azure and Oracle Cloud Infrastructure (OCI) coming later in 2026. Customers have been adopting multicloud strategies while migrating more applications to the cloud. They do so for many reasons, including interoperability requirements, the freedom to choose the technology that best suits their needs, and the ability to build and deploy applications on any environment with greater ease and speed. Previously, when interconnecting workloads across multiple cloud providers, customers had to take a ‘do-it-yourself’ multicloud approach, which meant managing complex, multi-layered global networks at scale. AWS Interconnect - multicloud is the first purpose-built product of its kind and a new way for clouds to connect and communicate with each other. Simplifying connectivity into AWS, Interconnect - multicloud enables customers to quickly establish private, secure, high-speed network connections with dedicated bandwidth and built-in resiliency between their Amazon VPCs and other cloud environments. Interconnect - multicloud makes it easy to connect AWS resources or VPCs to other CSPs. Customers can also scale connectivity to multiple VPCs or Regions quickly by associating Interconnect with other networking services such as AWS Transit Gateway and AWS Cloud WAN, instead of waiting weeks or months. Interconnect - multicloud introduces a new, single-fee pricing structure based on the customer’s selected bandwidth and the geographical scope of the connectivity to other CSPs. Customers can also use one free, local 500 Mbps interconnect per Region starting in May. To learn more, see the Interconnect - multicloud Pricing documentation page. Interconnect - multicloud is available in five AWS Regions.
You can enable this capability using the AWS Management Console, AWS Command Line Interface (CLI), or API, and CSPs can also adopt it via a published open API package on GitHub. For more information, see the AWS Interconnect - multicloud documentation and pricing pages.
Starting today, Amazon EC2 M8i and M8i-flex instances are available in the AWS GovCloud (US-West) Region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The M8i and M8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver up to 20% better performance than M7i and M7i-flex instances, with even higher gains for specific workloads: up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models. M8i-flex instances are the easiest way to get price-performance benefits for a majority of general-purpose workloads like web and application servers, microservices, small and medium data stores, virtual desktops, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. M8i instances are a great choice for all general-purpose workloads, especially those that need the largest instance sizes or continuous high CPU usage. The SAP-certified M8i instances offer 13 sizes, including 2 bare metal sizes and the new 96xlarge size for the largest applications. To get started, sign in to the AWS Management Console. For more information about the new instances, visit the M8i and M8i-flex instance page or the AWS News blog.
Amazon CloudWatch Logs Insights saved queries now support parameters, allowing you to pass values to reusable query templates with placeholders. This eliminates the need to maintain multiple copies of nearly identical queries that differ only in specific values such as log levels, service names, or time intervals. You can define up to 20 parameters in a query, with each parameter supporting optional default values. For example, you can create a single template to query logs by severity level (such as ERROR or WARN) and pass different service names each time you run it. To execute a query with parameters, invoke it using the query name prefixed with $ and pass your parameter values, such as $ErrorsByService(logLevel="ERROR", serviceName="OrderEntry"). You can also use multiple saved queries with parameters together for complex log analysis, significantly reducing query maintenance overhead while improving reusability. Saved queries with parameters are available in all commercial AWS regions. You can create and use saved queries with parameters using the Amazon CloudWatch console, AWS Command Line Interface (AWS CLI), AWS Cloud Development Kit (AWS CDK), and AWS SDKs. To learn more, see the Amazon CloudWatch Logs documentation.
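The invocation syntax described above can be sketched as a small helper that formats the $-prefixed call; the query name and parameters are those from the example in the announcement:

```python
def invoke_saved_query(name: str, **params: str) -> str:
    """Format a CloudWatch Logs Insights saved-query invocation:
    the query name prefixed with $, followed by named parameter values."""
    args = ", ".join(f'{k}="{v}"' for k, v in params.items())
    return f"${name}({args})"

# Reuse one template for different services by passing different values.
call = invoke_saved_query("ErrorsByService", logLevel="ERROR", serviceName="OrderEntry")
```

This is only string formatting for illustration; you would paste or submit the resulting invocation wherever you normally run Logs Insights queries.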
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the AWS GovCloud (US-West) Region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver 20% higher performance than R7i instances, with even higher gains for specific workloads: up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models. R8i-flex, our first memory-optimized Flex instances, are the easiest way to get price-performance benefits for a majority of memory-intensive workloads. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. R8i instances are a great choice for all memory-intensive workloads, especially those that need the largest instance sizes or continuous high CPU usage. R8i instances offer 13 sizes, including 2 bare metal sizes and the new 96xlarge size for the largest applications. R8i instances are SAP-certified and deliver 142,100 aSAPS, providing exceptional performance for mission-critical SAP workloads. To get started, sign in to the AWS Management Console. For more information about the R8i and R8i-flex instances, visit the AWS News blog.
AWS launches AWS Interconnect - last mile, a fully managed connectivity offering that allows customers to connect their branch offices, data centers, and remote locations to AWS with just a few clicks, eliminating the friction and complexity of network setup. As a milestone collaboration between AWS and Lumen, AWS Interconnect - last mile combines AWS cloud innovation with Lumen’s extensive network footprint to redefine how businesses connect to the cloud. Through the AWS Console, customers can now instantly establish private, high-speed connections to AWS by simply choosing their preferred AWS Region, bandwidth speed, Direct Connect Gateway ID, and partner subscriber ID. Once initiated, AWS generates an activation key to complete provisioning with Lumen. The launch simplifies the connectivity experience by pre-provisioning capacity and automating complex network configuration, including BGP peering, VLAN configuration, and ASN assignment. Customers can dynamically scale bandwidth from 1 Gbps to 100 Gbps through the AWS Console and benefit from zero-downtime maintenance. The service is designed for high availability and backed by an SLA. MACsec encryption is enabled by default for enhanced security between AWS and partner devices. AWS Interconnect - last mile is available in the US through our launch partner Lumen. Partners can also easily adopt it via a published open API package on GitHub. For more information, see the AWS Interconnect - last mile documentation and pricing pages.
Building memory-intensive applications with AWS Lambda just got easier. AWS Lambda Managed Instances gives you up to 32 GB of memory—3x more than standard AWS Lambda—while maintaining the serverless experience you know. Modern applications increasingly require substantial memory resources to process large datasets, perform complex analytics, and deliver real-time insights for use cases such as […]
AWS Billing and Cost Management Dashboards now support scheduled email delivery for your reports. You can automate report distribution on flexible recurring schedules, eliminating manual compilation work and ensuring financial insights reach decision-makers without requiring console access. Scheduled email reports enable you to configure daily, weekly, or monthly delivery schedules for your dashboards. Recipients receive emails containing secure links to password-protected PDF reports optimized for offline viewing. You manage recipients through AWS User Notifications, and once configured, reports are generated and distributed automatically on your chosen schedule. You can also access these capabilities programmatically through the AWS SDKs and CLI tools. This feature is available at no additional cost in all commercial AWS Regions, excluding AWS China Regions. To get started, open the AWS Billing and Cost Management console, navigate to Dashboards, select a dashboard, and choose 'Manage email reports' from the Actions menu. For more information, see the Dashboards user guide and announcement blog post.
Amazon Elastic Kubernetes Service (Amazon EKS) managed node groups now support Auto Scaling warm pools, enabling you to maintain pre-initialized EC2 instances ready for rapid scale-out. This reduces node provisioning latency for applications with burst traffic patterns, time-sensitive workloads, or long instance boot times due to complex initialization scripts and software dependencies. With warm pools enabled, your EKS managed node group maintains a pool of instances that have already completed OS initialization, user data execution, and software configuration. When demand increases and the Auto Scaling group scales out, instances transition from the warm pool to active service without repeating the full cold-start sequence. You can configure instances in the warm pool as Stopped (lower cost, longer transition) or Running (higher cost, faster transition). You can also enable reuse on scale-in, which returns instances to the warm pool during scale-down instead of terminating them. Warm pools work with Cluster Autoscaler without requiring any additional configuration. You can enable warm pools through the EKS API, AWS CLI, AWS Management Console, or AWS CloudFormation by adding a warmPoolConfig to your CreateNodegroup or UpdateNodegroupConfig requests. Existing managed node groups that do not enable warm pools are unaffected. This feature is available in all AWS Regions where Amazon EKS is available, except for the China (Beijing) Region, operated by Sinnet and the China (Ningxia) Region, operated by NWCD. To get started, see the Amazon EKS managed node groups documentation.
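The warmPoolConfig described above can be sketched as a request payload. The exact field names inside warmPoolConfig (poolState, reuseOnScaleIn) are illustrative assumptions, as are the cluster and node group names; consult the EKS API reference for the authoritative request shape:

```python
def warm_pool_update_request(cluster: str, nodegroup: str,
                             running: bool = False, reuse: bool = True) -> dict:
    """Build an UpdateNodegroupConfig-style payload that enables a warm
    pool on an existing EKS managed node group."""
    return {
        "clusterName": cluster,
        "nodegroupName": nodegroup,
        "warmPoolConfig": {
            # "Stopped" = lower cost, slower transition; "Running" = the reverse
            "poolState": "Running" if running else "Stopped",
            # Return scaled-in instances to the warm pool instead of terminating them
            "reuseOnScaleIn": reuse,
        },
    }

# The payload would then be passed to the EKS API, e.g.:
# boto3.client("eks").update_nodegroup_config(**warm_pool_update_request("prod", "burst-ng"))
```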
We're excited to announce the launch of a new Greengrass component SDK for AWS IoT Greengrass applications. This new SDK addresses the challenge of deploying sophisticated applications on edge devices with limited resources, enabling industries such as automotive, industrial IoT, robotics, and smart buildings to run more complex AI and ML workloads at the edge. Moreover, the new SDK maintains full compatibility with both AWS IoT Greengrass nucleus and nucleus lite capabilities. The new Greengrass component SDK significantly reduces memory footprint, to less than 0.5 MB from 30 MB, enabling deployment on resource-constrained devices. It provides native C, C++, and Rust bindings optimized for performance- and cost-critical embedded applications. This SDK opens new possibilities for edge computing applications where memory constraints have previously been a limiting factor. The new Greengrass component SDK is available in all AWS Regions where AWS IoT Greengrass is available.
The AWS Secrets Manager console now allows you to specify a customer managed AWS Key Management Service (KMS) key by ARN when creating secrets. You can now provide a KMS key Amazon Resource Name (ARN) directly in the console, in addition to selecting from the pre-populated list of KMS keys in your current account. Previously, when creating a secret through the AWS Secrets Manager console, you could only select customer managed KMS keys from a dropdown list that displayed keys within the same AWS account. With this enhancement, you can now enter a KMS key ARN to use a key from a different account, aligning the console experience with the existing API capabilities. This simplifies cross-account encryption workflows and provides greater flexibility in managing your encryption keys across multiple accounts. This feature is available in all AWS Regions where AWS Secrets Manager is available. To learn more about using customer managed KMS keys with AWS Secrets Manager, visit the AWS Secrets Manager documentation.
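The equivalent API call has long supported cross-account keys. A minimal sketch of building the create_secret arguments with a full key ARN; the secret name, account ID, and key ID below are hypothetical:

```python
import json

def cross_account_secret_params(name: str, kms_key_arn: str, secret: dict) -> dict:
    """Build arguments for Secrets Manager create_secret, encrypting
    with a customer managed KMS key referenced by its full ARN."""
    return {
        "Name": name,
        "KmsKeyId": kms_key_arn,  # a full ARN can reference a key in a different account
        "SecretString": json.dumps(secret),
    }

params = cross_account_secret_params(
    "prod/db-credentials",  # hypothetical secret name
    "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    {"username": "admin", "password": "example"},
)
# The caller's role needs kms:Encrypt/Decrypt permission on the foreign key:
# boto3.client("secretsmanager").create_secret(**params)
```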
Amazon ElastiCache Serverless now supports IPv6 and dual stack connectivity, expanding beyond the IPv4 connectivity that was previously available. This gives you greater flexibility in how your applications connect to your Serverless caches. When creating an ElastiCache Serverless cache, you can now choose from three network type options — IPv4, IPv6, or dual stack. With dual stack connectivity, your cache accepts connections over both IPv4 and IPv6 simultaneously, making it ideal for migrating to IPv6 gradually while maintaining backward compatibility with applications connecting over IPv4. IPv6 connectivity enables you to use IPv6-only subnets with your Serverless caches, eliminating the need for IPv4 addresses and helping you meet compliance requirements for IPv6 adoption. IPv6 and dual stack connectivity for ElastiCache Serverless is available in all AWS Regions, including the AWS GovCloud (US) Regions and the China Regions, at no additional charge. To learn more, visit the Amazon ElastiCache product page and Choosing a network type for serverless caches in the Amazon ElastiCache documentation.
Amazon Managed Service for Apache Flink now supports Apache Flink version 2.2. This is a major upgrade that brings runtime improvements such as Java 17 support, RocksDB 8.10.0 for better I/O performance, and serialization enhancements. Additionally, the DataSet and Scala APIs are now deprecated. You can create a new application on Apache Flink 2.2, or use in-place version upgrades for a simpler and faster upgrade of compatible applications to the Flink 2.2 runtime. Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time across various use cases, including real-time analytics, anomaly detection, and complex event processing. Amazon Managed Service for Apache Flink simplifies the setup, operation, and scaling of Apache Flink applications, allowing developers and data engineers to focus on building and running their streaming applications without managing the underlying infrastructure. Apache Flink 2.2 is available across AWS Regions where Amazon Managed Service for Apache Flink is offered. You can learn more about Apache Flink 2.2 in Amazon Managed Service for Apache Flink in our documentation.
Customers use AWS Lambda to build serverless applications for a wide variety of use cases, from simple API backends to complex data processing pipelines. Lambda's flexibility makes it an excellent choice for many workloads, and with support for up to 10,240 MB of memory, you can now tackle compute-intensive tasks that were previously challenging in a serverless environment. When you configure a Lambda function's memory size, you allocate RAM and Lambda automatically provides proportional CPU power. At 10,240 MB, your Lambda function has access to up to 6 vCPUs.
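The memory-to-CPU proportionality can be checked with quick arithmetic. AWS documents roughly one full vCPU at 1,769 MB of configured memory; the sketch below uses that published ratio:

```python
def approx_vcpus(memory_mb: int) -> float:
    """Estimate the vCPU share a Lambda function receives: CPU scales
    linearly with configured memory, with ~1 full vCPU at 1,769 MB."""
    MB_PER_FULL_VCPU = 1769
    return memory_mb / MB_PER_FULL_VCPU

# At the 10,240 MB maximum this works out to roughly 5.8, which is why
# a function at that setting has access to up to 6 vCPUs.
```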