AWS AI News Hub

Your central source for the latest AWS artificial intelligence and machine learning service announcements, features, and updates


Organizations are increasingly integrating generative AI capabilities into their applications to enhance customer experiences, streamline operations, and drive innovation. As generative AI workloads continue to grow in scale and importance, organizations face new challenges in maintaining consistent performance, reliability, and availability of their AI-powered applications. Customers are looking to scale their AI inference workloads across […]

#bedrock#nova#organizations#ga

AWS Glue adds write operations support for the SAP OData, Adobe Marketo Engage, Salesforce Marketing Cloud, and HubSpot connectors. This allows you to not only extract data from those applications, but also write data to them directly from your AWS Glue ETL jobs. With the new write functionality you can create and update records in SAP systems; sync leads into Adobe Marketo Engage; update subscriber and campaign data in Salesforce Marketing Cloud; manage contacts, companies, and deals in HubSpot; and more. This feature simplifies building end-to-end ETL pipelines that both extract data from and write processed results back to target applications, eliminating the need for custom scripts or intermediate systems. Write operations support for the SAP OData, Adobe Marketo Engage, Salesforce Marketing Cloud, and HubSpot connectors is available in all Regions where AWS Glue is available. To learn more and see the list of supported entities, visit the AWS Glue documentation.

#rds#glue#ga#update#support

You can now perform batch AI inference within Amazon OpenSearch Ingestion pipelines to efficiently enrich and ingest large datasets for Amazon OpenSearch Service domains. Previously, customers used OpenSearch’s AI connectors to Amazon Bedrock, Amazon SageMaker, and third-party services for real-time inference. Inference generates enrichments such as vector embeddings, predictions, translations, and recommendations to power AI use cases. Real-time inference is ideal for low-latency requirements such as streaming enrichments, while batch inference is ideal for enriching large datasets offline, delivering higher performance and cost efficiency. You can now use the same AI connectors with Amazon OpenSearch Ingestion pipelines as an asynchronous batch inference job to enrich large datasets, such as generating and ingesting up to billions of vector embeddings. This feature is available in all AWS Regions that support Amazon OpenSearch Ingestion, for domains running OpenSearch 2.17 or later. Learn more in the documentation.
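A batch-enrichment pipeline definition might look roughly like the sketch below. The source/processor/sink structure follows standard OpenSearch Ingestion YAML, but the processor name, its options, and the bucket/domain values here are placeholders rather than confirmed syntax; consult the OpenSearch Ingestion documentation for the exact schema.

```yaml
version: "2"
batch-embedding-pipeline:
  source:
    s3:
      # Raw documents to enrich, scanned from S3 (bucket name is a placeholder)
      scan:
        buckets:
          - bucket:
              name: my-raw-docs-bucket
      codec:
        ndjson:
  processor:
    # Hypothetical batch-inference processor: invokes an AI connector
    # (e.g., to Amazon Bedrock) as an asynchronous batch job to generate
    # vector embeddings. Processor name and options are assumptions.
    - ml_inference:
        model_id: "<connector-model-id>"
        action_type: batch_predict
  sink:
    - opensearch:
        hosts: ["https://my-domain.us-east-1.es.amazonaws.com"]
        index: docs-with-embeddings
```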

#bedrock#sagemaker#opensearch#opensearch service#opensearch ingestion#support

Today, AWS announces Internet Protocol version 6 (IPv6) addressing support for Amazon Kinesis Video Streams (KVS). With this enhancement, KVS now offers dual-stack endpoints that let customers use both IPv4 and IPv6 addresses to stream video from millions of devices. Existing IPv4 implementations continue to work seamlessly while gaining the benefits of IPv6 connectivity. As customers increasingly encounter IPv4 address exhaustion in their private networks, this enhancement delivers much-needed flexibility. Organizations can now seamlessly stream video using IPv4, IPv6, or dual-stack clients. This advancement simplifies IPv6-based system transitions, helps meet compliance requirements, and eliminates the dependency on costly address translation equipment. IPv6 support is available in all commercial AWS Regions where Amazon KVS is available, except the Asia Pacific (Singapore) Region and the AWS GovCloud (US) Regions. To learn more about Amazon KVS, refer to the developer guide.

#lex#kinesis#organizations#ga#enhancement#support

Amazon Connect now provides agents with generative AI-powered email conversation overviews, suggested actions, and responses. This enables agents to handle emails more efficiently and customers to receive faster, more consistent support. For example, when a customer emails about a refund request, Amazon Connect automatically provides key details about the customer's purchase history, recommends a step-by-step refund resolution guide, and generates an email response to help resolve the contact quickly. To enable this feature, add the Amazon Q in Connect block to your flows before an email contact is assigned to your agent. You can customize the outputs of your generative AI-powered email assistant by adding knowledge bases and defining prompts that guide the AI agent to generate responses matching your company's language, tone, and policies for consistent customer service. This new feature is available in all AWS Regions where Amazon Q in Connect is available. To learn more and get started, refer to the help documentation, pricing page, or visit the Amazon Connect website.

#amazon q#q in connect#new-feature#support

Today, AWS Clean Rooms announces support for cross-region data collaboration. This launch enables companies and their partners to collaborate using data sources stored in different Regions, without having to move, copy, or share their underlying data. Organizations can now collaborate on datasets stored in AWS and Snowflake Regions other than the one where their collaboration is hosted, eliminating the need to move or replicate data across Regions or build additional data pipelines. Collaboration creators can control where their analysis results are delivered by configuring a set of allowed Regions, helping each collaborator comply with applicable data residency requirements and sovereignty laws. For example, a media publisher with data stored in US East (N. Virginia) can collaborate with an advertising partner whose data resides in Europe (Frankfurt) without sharing underlying data with one another. With AWS Clean Rooms, customers can create a secure data clean room in minutes and collaborate with any company on AWS or Snowflake to generate unique insights about advertising campaigns, investment decisions, and research and development. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.

#organizations#launch#ga#support

AWS Directory Service now supports IPv6 connectivity for Managed Microsoft AD and AD Connector. The IPv6 capabilities allow customers to deploy directories with IPv4-only, IPv6-only, or dual-stack configurations, helping organizations meet government mandates and standardize on the next-generation Internet Protocol. The IPv6 support helps organizations meet regulatory requirements, including U.S. federal agencies' mandate to transition to IPv6 by 2025, while eliminating dual-protocol network complexity. This enables organizations to modernize their network infrastructure and comply with evolving security standards without maintaining separate IPv4 and IPv6 network stacks. Customers can upgrade existing IPv4-only directories to dual-stack by enabling IPv6 in their VPC subnets, then adding IPv6 support through the Directory Service Management Console. IPv6 capabilities are available in all AWS Directory Service Regions, with IPv6 accessible through the Console, CLI, and API. To learn more, see the AWS Directory Service documentation.

#lex#rds#directory service#organizations#ga#support

EC2 Image Builder now automatically disables pipelines after consecutive failures and allows customers to configure custom log groups for image pipelines. These capabilities address common operational needs, including improved control over pipeline execution, enhanced customization options for logging, and better visibility. Image Builder pipelines are used to automate the creation, testing, and distribution of custom images across your AWS infrastructure. With the new automatic disablement feature, you can configure pipelines to stop execution after a specified number of consecutive failures, preventing creation of unnecessary resources and reducing costs from repeatedly failed builds. You can also configure custom log groups for pipelines with specific log retention periods and encryption settings that align with your organizational policies. These enhancements collectively provide greater control and efficiency in managing your image building processes. These capabilities are available to all customers at no additional cost, in all AWS commercial Regions (including the AWS China (Beijing) Region, operated by Sinnet, and the AWS China (Ningxia) Region, operated by NWCD) and the AWS GovCloud (US) Regions. You can get started from the EC2 Image Builder Console, CLI, API, CloudFormation, or CDK, and learn more in the EC2 Image Builder documentation.
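The disablement itself is applied by Image Builder server-side; the stand-alone sketch below only illustrates the consecutive-failure semantics (the class and attribute names are ours, not the Image Builder API):

```python
class PipelineMonitor:
    """Illustrates 'disable after N consecutive failures' semantics.

    This mirrors the behavior described for EC2 Image Builder pipelines;
    the names here are illustrative, not an AWS API.
    """

    def __init__(self, max_consecutive_failures: int):
        self.max_consecutive_failures = max_consecutive_failures
        self.consecutive_failures = 0
        self.enabled = True

    def record_build(self, succeeded: bool) -> None:
        if not self.enabled:
            return
        if succeeded:
            # Any success resets the consecutive-failure counter.
            self.consecutive_failures = 0
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.max_consecutive_failures:
                # Stop further executions to avoid wasted builds and cost.
                self.enabled = False

monitor = PipelineMonitor(max_consecutive_failures=3)
for outcome in [True, False, False, False]:
    monitor.record_build(outcome)
print(monitor.enabled)  # the third straight failure disables the pipeline
```

Note that a single successful build resets the counter, so only an uninterrupted run of failures triggers disablement.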

#ec2#cloudformation#ga#enhancement

Today, AWS announces the general availability of a self-service invoice correction feature to update AWS invoices. This launch enables all AWS customers to correct key invoice attributes, including purchase order numbers, business legal name, and addresses, on their AWS invoices and receive corrected invoices instantly. AWS customers can now access the new self-service invoice correction feature directly from the AWS Billing and Cost Management console. The feature offers a guided self-service workflow to update invoice attributes in account settings and on select invoices, giving AWS customers direct control over invoice corrections while reducing wait times and improving efficiency in managing their AWS accounts. The self-service invoice correction feature is generally available in all AWS Regions, excluding the AWS GovCloud (US) Regions and the China (Beijing) and China (Ningxia) Regions. To get started, please visit the product details page.

#launch#generally-available#ga#update

In this post, we demonstrate how organizations can enhance their employee productivity by integrating Kore.ai’s AI for Work platform with Amazon Q Business. We show how to configure AI for Work as a data accessor for Amazon Q index for independent software vendors (ISVs), so employees can search enterprise knowledge and execute end-to-end agentic workflows involving search, reasoning, actions, and content generation.

#amazon q#q business#organizations#ga

Organizations manage content across multiple languages as they expand globally. Ecommerce platforms, customer support systems, and knowledge bases require efficient multilingual search capabilities to serve diverse user bases effectively. This unified search approach helps multinational organizations maintain centralized content repositories while making sure users, regardless of their preferred language, can effectively find and access relevant […]

#opensearch#opensearch service#organizations#ga#support

Users usually package their function code as container images when using machine learning (ML) models that are larger than 250 MB, which is the Lambda deployment package size limit for zip files. In this post, we demonstrate an approach that downloads ML models directly from Amazon S3 into your function’s memory so that you can continue packaging your function code using zip files.
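The approach reduces to streaming the S3 object into a memory buffer instead of onto disk. A minimal sketch, with the client injected so it can be exercised without AWS (the helper name is ours; in a real Lambda you would create the boto3 client at module scope so it is reused across invocations):

```python
import io

def load_model_bytes(s3_client, bucket: str, key: str) -> io.BytesIO:
    """Download an ML model object from S3 straight into memory.

    Works with any client exposing the S3 get_object interface
    (boto3's real client, or a stub for testing). Returning a
    BytesIO lets most ML frameworks load the model without touching
    the Lambda /tmp disk or the 250 MB zip deployment limit.
    """
    response = s3_client.get_object(Bucket=bucket, Key=key)
    return io.BytesIO(response["Body"].read())

# Hypothetical Lambda usage (boto3 is available in the Lambda runtime):
#   import boto3
#   s3 = boto3.client("s3")  # created once at module scope, reused per invocation
#   model_buf = load_model_bytes(s3, "models-bucket", "model.bin")
```

Keeping the model in memory does mean sizing the function's memory allocation to hold both the model bytes and the deserialized model.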

#lambda#s3

Today, we’re excited to announce the Amazon Bedrock AgentCore Model Context Protocol (MCP) Server. With built-in support for runtime, gateway integration, identity management, and agent memory, the AgentCore MCP Server is purpose-built to speed up creation of components compatible with Bedrock AgentCore. You can use the AgentCore MCP server for rapid prototyping, production AI solutions, […]

#bedrock#agentcore#ga#integration#support

AWS Directory Service now enables customers to upgrade Managed Microsoft AD from Standard to Enterprise Edition programmatically through the UpdateDirectorySetup API. The self-service edition upgrade eliminates the need for support tickets when scaling Managed Microsoft AD directories. The API-driven Standard to Enterprise Edition upgrade removes operational barriers that previously required coordinating maintenance windows with AWS support, enabling on-demand directory scaling with automated pre-upgrade snapshots and sequential domain controller upgrades. This streamlined process ensures data protection through automatic backup creation before upgrades begin, while the sequential upgrade approach maintains directory availability throughout the process. Organizations can now scale their directory infrastructure in response to growing user bases or expanding application requirements without the delays associated with traditional support-driven upgrade processes. The programmatic approach enables integration with existing automation frameworks and infrastructure-as-code deployments. Directory edition upgrades are available in all AWS Directory Service Regions through the AWS SDK, providing consistent upgrade capabilities across global deployments. To learn more, see the AWS Directory Service documentation and UpdateDirectorySetup API reference.
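With the AWS SDK for Python, the call would look roughly like the sketch below. UpdateDirectorySetup and the automatic pre-upgrade snapshot come from the announcement, but the exact UpdateType value for an edition upgrade is an assumption here; verify it against the UpdateDirectorySetup API reference.

```python
def build_edition_upgrade_request(directory_id: str) -> dict:
    """Assemble parameters for ds.update_directory_setup().

    The snapshot flag mirrors the automatic pre-upgrade backup the
    announcement describes. 'EDITION' as the UpdateType value is a
    guess for illustration -- check the API reference before use.
    """
    return {
        "DirectoryId": directory_id,
        "UpdateType": "EDITION",          # hypothetical enum value
        "CreateSnapshotBeforeUpdate": True,
    }

# Hypothetical invocation (requires AWS credentials and boto3):
#   import boto3
#   ds = boto3.client("ds")
#   ds.update_directory_setup(**build_edition_upgrade_request("d-1234567890"))
```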

#directory service#organizations#ga#update#integration#support

Amazon Connect now provides screen recording for agents using ChromeOS devices, making it easier for you to improve agent performance. With screen recording, you can identify areas for agent coaching (e.g., long contact handle duration or non-compliance with business processes) by not only listening to customer calls or reviewing chat transcripts, but also watching agents’ actions while handling a contact (i.e., a voice call, chat, or task). Screen recording on ChromeOS is available in all the AWS Regions where Amazon Connect is already available. To learn more about screen recording, please visit the documentation and webpage. For information about screen recording pricing, visit the Amazon Connect pricing page.

#support

Today, AWS announced the expansion of 10 Gbps and 100 Gbps dedicated connections with MACsec encryption capabilities at the existing AWS Direct Connect location in the ePLDT data center near Makati City, Philippines. You can now establish private, direct network access to all public AWS Regions (except those in China), AWS GovCloud (US) Regions, and AWS Local Zones from this location. The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet. For more information on the over 146 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages, or visit our getting started page to learn more about how to purchase and deploy Direct Connect.

#expansion

Amazon Connect now provides agent time-off balance data in the analytics data lake, making it easier for you to generate reports and insights from this data. With this launch, you can now access the latest and historical agent time-off balances across different time-off categories (paid time-off, sick leave, leave of absence, etc.) in the analytics data lake. In addition to balances, you can also view a chronological list of all transactions that impacted the balance. For example, if an agent starts with 80 hours of paid time-off on January 1, submits a 20-hour request on January 3, and later cancels it, you can see each transaction's impact on the final 80-hour balance. This launch makes time-off management easier by eliminating the need for managers to manually reconcile balances and time-off transactions, thus improving manager productivity and making it easier for them to respond to agent inquiries. This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.
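The reconciliation described above is just a chronological replay of transactions against a starting balance. A minimal stand-alone sketch of that arithmetic (field and event names are ours, not the analytics data lake schema):

```python
def replay_balance(starting_hours, transactions):
    """Replay time-off transactions chronologically and return the
    balance after each one, mirroring the example above: an 80-hour
    starting balance, a 20-hour request, then its cancellation."""
    balance = starting_hours
    history = []
    for txn in transactions:
        balance += txn["delta_hours"]  # requests negative, cancellations positive
        history.append((txn["event"], balance))
    return history

history = replay_balance(80.0, [
    {"event": "pto_request", "delta_hours": -20.0},
    {"event": "pto_cancel", "delta_hours": +20.0},
])
print(history)  # each transaction's impact; final balance back at 80.0
```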

#launch

Amazon Cognito now allows you to configure terms of use and privacy policy documents for Managed Login pages. This helps customers seamlessly present legal terms during user registration while simplifying implementation. With Managed Login, Cognito customers could previously use its no-code editor to customize the user journey from signup and login to password recovery and multi-factor authentication. Now, customers can additionally use Managed Login to easily set up terms of use and privacy policy documents, saving development teams from building custom solutions. With this capability, you can configure terms of use and privacy policy URLs for each app client in your Cognito user pool. When users register, they see text indicating that by signing up, they agree to your terms of use and privacy policy, and a link to your webpage with the agreement. You can configure different URLs for each supported language to match your Managed Login localization settings. For example, if you have configured the privacy policy and terms of use documents for French (fr), and fr is selected via the lang query parameter on the sign-up page URL, users will see the French URL you configured. This capability is available to Amazon Cognito customers using the Essentials or Plus tiers in AWS Regions where Cognito is available, including the AWS GovCloud (US) Regions. To learn more, refer to the developer guide and Pricing Detail Page for Cognito Essentials and Plus tier.
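As an illustration of the language selection, the lang query parameter can be set when redirecting users to the Managed Login sign-up page. The sketch below builds such a URL; the domain, client ID, and redirect URI are placeholders:

```python
from urllib.parse import urlencode

def signup_url(domain, client_id, redirect_uri, lang):
    """Build a Cognito Managed Login sign-up URL with an explicit
    language selection. Per the announcement, 'lang' determines which
    localized terms-of-use / privacy-policy URLs the page displays."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "lang": lang,
    }
    return f"https://{domain}/signup?{urlencode(params)}"

url = signup_url(
    "example.auth.eu-west-1.amazoncognito.com",  # placeholder user pool domain
    "1example23456789", "https://app.example.com/callback", "fr",
)
print(url)
```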

#ga#support

Today, we’re announcing the integration of Amazon Neptune Database with GraphStorm, a scalable, open-source graph machine learning (ML) library built for enterprise-scale applications. This brings together Neptune’s OLTP (Online transaction processing) graph capabilities with GraphStorm’s scalable inference engine, making it easier for customers to deploy graph ML in latency-sensitive, transactional environments. With this integration, developers can train GNN models using GraphStorm and deploy them as real-time inference endpoints that directly query Neptune for subgraph neighborhoods on demand. Predictions—such as node classifications or link predictions—can then be returned in sub-second timeframes, closing the loop between transactional graph updates and ML-driven decisions. This integration unlocks use cases such as fraud detection and prevention, where organizations can make real-time decisions based on complex relationships among accounts, devices, and transactions; dynamic recommendations, where systems can instantly adapt to user behavior using live graph context; and graph-based risk scoring, where risk assessments are continuously updated as the graph evolves. Customers can also combine real-time inference results with graph analytics queries for deeper operational insights, enabling ML feedback loops directly within graph applications. This feature is available in all regions where Amazon Neptune Database is available. To learn more and try the integration yourself, check out our announcement blog: Modernize fraud prevention: GraphStorm v0.5 for real-time inference for a full walk-through.

#lex#organizations#ga#update#integration#announcement

AWS Config advanced queries and aggregators are now available in the Asia Pacific (New Zealand) Region. You can use advanced queries to query the current configuration and compliance state of your AWS resources. Aggregators enable centralized visibility and analysis by aggregating configuration and compliance data from multiple accounts and Regions, or across an AWS Organization. Advanced queries provide a single query endpoint and a query language to get current resource configuration and compliance state without performing service-specific describe API calls. You can use configuration aggregators to run the same queries from a central account across multiple accounts and AWS Regions. Advanced queries can be used from the AWS Management Console and AWS CLI. To learn more about aggregators, please refer to our documentation. With this expansion, AWS Config advanced queries and aggregators are now available in all supported Regions.
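For example, an advanced query like the following (standard AWS Config query-language syntax, SQL-like SELECT over resource properties) lists running EC2 instances across every account and Region covered by an aggregator:

```sql
SELECT
  resourceId,
  resourceType,
  awsRegion,
  accountId
WHERE
  resourceType = 'AWS::EC2::Instance'
  AND configuration.state.name = 'running'
```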

#ga#now-available#support#expansion

AWS Secrets Manager now supports AWS PrivateLink with all Secrets Manager Federal Information Processing Standard (FIPS) endpoints that are available in commercial AWS Regions and the AWS GovCloud (US) Regions. With this launch, you can establish a private connection between your virtual private cloud (VPC) and Secrets Manager FIPS endpoints instead of connecting over the public internet, helping you meet your organization's business, compliance, and regulatory requirements to limit public internet connectivity. To learn more about AWS Secrets Manager support for AWS PrivateLink, visit the AWS Secrets Manager documentation. For more information about AWS PrivateLink and its benefits, visit the AWS PrivateLink product page.

#secrets manager#launch#ga#support

This new standardized interface allows developers to analyze, transform, and deploy production-ready AI agents directly in their preferred development environment. Get started with AgentCore faster and more easily with one-click installation that integrates agentic IDEs like Kiro and AI coding assistants (Claude Code, Cursor, and Amazon Q Developer CLI). Use natural language to iteratively develop your agent, including transforming agent logic to work with the AgentCore SDK and deploying your agent into development accounts. The open-source MCP server is available globally via GitHub. To get started, visit the AgentCore MCP Server GitHub repository for documentation and installation instructions. You can also learn more about this launch in our blog. For more information about Amazon Bedrock AgentCore and its services, visit the News Blog and explore in-depth implementation details in the AgentCore documentation. For pricing information, visit the Amazon Bedrock AgentCore Pricing page.

#bedrock#agentcore#amazon q#q developer#launch#now-available

Amazon Connect, the cloud-based contact center service from AWS, now supports Get Customer Input and Store Customer Input flow blocks for outbound voice whisper flows. The Get Customer Input flow block allows a prompt to be played to a customer on an outbound call after they answer the call but before they are connected with an agent, and the customer’s response can be collected through either DTMF input or via an Amazon Lex Bot. This capability will allow you to capture interactive and dynamic customer input on outbound calls before these are connected to an agent. For example, you can use the Get Customer Input flow block to obtain customer consent for call recording as part of outbound calls placed by agents, and use it to trigger Amazon Connect Contact Lens recording and analytics. The capability is available in all AWS commercial and AWS GovCloud (US-West) regions where Amazon Connect is offered. To learn more about Amazon Connect, please visit the Amazon Connect website or consult the Amazon Connect Administrator Guide.

#lex#support

Today, Amazon GameLift Servers launched new console capabilities that let you view and connect to individual fleet instances. The EC2 and Container Fleet Detail pages have a new Instances tab listing the instances associated with a fleet. Each instance has a details page that displays its metadata in a human-readable format (the same data is available via the Amazon GameLift Server APIs). From the list and detail views, you can use the Connect button to open a modal and launch AWS CloudShell, which starts an SSM session directly into that instance. These console improvements give you hands-on tools to debug, inspect, and resolve issues faster. Instead of relying on external tooling or guesswork, you can directly investigate host performance, pull recent game server logs, or diagnose issues such as network configuration and instance health, all from within the Amazon GameLift Servers console. This reduces turnaround time when troubleshooting and enhances visibility into what’s happening “under the hood” of a game server fleet. SSM in Console is available in all Amazon GameLift Servers supported Regions, except AWS China. For more information, visit the Amazon GameLift Servers documentation.

#ec2#launch#ga#improvement#support

AWS Clean Rooms now supports data access budgets for tables associated with a collaboration. This new privacy control allows you to limit the number of times your data can be analyzed when training or running inference on a custom ML model, or in a SQL query or PySpark job. With data access budgets, you can establish per-period budgets that refresh daily, weekly, or monthly, lifetime budgets for overall usage, or both types simultaneously. When a budget is spent, the system prevents additional analyses until the budget refreshes, but you can reset or edit a budget at any time as your needs change. AWS Clean Rooms helps companies and their partners easily analyze and collaborate on their collective datasets without revealing or copying one another’s underlying data. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.
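The two budget types compose like the counter sketched below. This is purely an illustration of the described semantics (the class and method names are ours, not the Clean Rooms API; in the service, period refreshes happen on the daily/weekly/monthly schedule rather than via an explicit call):

```python
from typing import Optional

class DataAccessBudget:
    """Models a per-period budget plus an optional lifetime budget.
    None means 'no limit of that type'."""

    def __init__(self, per_period: Optional[int], lifetime: Optional[int]):
        self.per_period = per_period
        self.lifetime = lifetime
        self.used_this_period = 0
        self.used_total = 0

    def try_analyze(self) -> bool:
        if self.per_period is not None and self.used_this_period >= self.per_period:
            return False  # period budget spent; wait for the refresh
        if self.lifetime is not None and self.used_total >= self.lifetime:
            return False  # lifetime budget spent
        self.used_this_period += 1
        self.used_total += 1
        return True

    def refresh_period(self) -> None:
        self.used_this_period = 0

budget = DataAccessBudget(per_period=2, lifetime=3)
print([budget.try_analyze() for _ in range(3)])  # [True, True, False]
budget.refresh_period()
print(budget.try_analyze())  # True (third of three lifetime uses)
print(budget.try_analyze())  # False (lifetime budget now spent)
```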

#support

AWS Parallel Computing Service (AWS PCS) now enables you to modify and update key Slurm workload manager settings without rebuilding your cluster. You can now adjust essential parameters including accounting configurations and workload management settings on existing clusters, where previously these details were fixed at creation time. This new flexibility helps you adapt your high performance computing (HPC) environment to changing requirements without disrupting operations. You can make modifications through the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDK. AWS PCS is a managed service that makes it easier for you to run and scale your high performance computing (HPC) workloads and build scientific and engineering models on AWS using Slurm. You can use AWS PCS to build complete, elastic environments that integrate compute, storage, networking, and visualization tools. AWS PCS simplifies cluster operations with managed updates and built-in observability features, helping to remove the burden of maintenance. You can work in a familiar environment, focusing on your research and innovation instead of worrying about infrastructure. Cluster configuration modifications are available in all AWS Regions where AWS PCS is offered. To learn more, see the Modifying a cluster section in the AWS PCS User Guide.

#nova#lex#update#support

Amazon Elastic Container Services (Amazon ECS), a fully managed container orchestration service, now provides one-click event capture and event history querying directly in the AWS Management Console. ECS lifecycle events, including service action events and task state changes, provide visibility into your ECS environment for monitoring and troubleshooting. The ECS console now enables you to set up event capture with a single click, and access an intuitive query interface without leaving the ECS console. The ECS console automatically creates and manages the underlying EventBridge rules and CloudWatch log groups, while providing pre-built query templates and filters for common troubleshooting scenarios. Event capture through the ECS console is a great way to retain data for stopped tasks and recent events beyond the default limits. To enable event capture, on the ECS console, navigate to the Cluster details page, locate the Configuration tab, and click the “Turn on event capture” button. Once enabled, navigate to the Event history tab on your cluster or service details page to query and analyze historical events. The console provides commonly used query parameters such as time range, task ID, and deployment ID, along with filters for stop codes and container exit codes. You can view detailed task lifecycle events such as task state changes and service action events without needing to navigate to another console, or use query languages. ECS console event capture is available in all AWS Commercial Regions and AWS GovCloud (US) Regions. To learn more, visit the ECS developer guide.
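For reference, the console-managed EventBridge rule corresponds to an event pattern like the following (standard EventBridge pattern syntax; "ECS Task State Change" and "ECS Service Action" are the documented ECS detail types):

```json
{
  "source": ["aws.ecs"],
  "detail-type": ["ECS Task State Change", "ECS Service Action"]
}
```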

#ecs#eventbridge#cloudwatch#ga#support

AWS Parallel Computing Service (AWS PCS) now offers expanded Slurm configuration capabilities, enabling you to set over 60 additional parameters for granular control over your high performance computing (HPC) cluster operations. This enhancement provides more flexibility in managing job scheduling, resource allocation, access control, and job lifecycle. The new Slurm custom settings give you fine-grained control over various resource management scenarios, including fair-share scheduling and quality of service levels. For example, you can now implement queue-specific priority policies, configure preemption settings, and set custom time and resource limits. Additionally, you can control access permissions at the account level and configure per-job execution behaviors. These and other capabilities help you to run a production HPC environment that efficiently serves multiple teams, projects, and workload types. AWS PCS is a managed service that makes it easier for you to run and scale your HPC workloads and build scientific and engineering models on AWS using Slurm. You can use AWS PCS to build complete, elastic environments that integrate compute, storage, networking, and visualization tools. AWS PCS simplifies cluster operations with managed updates and built-in observability features, helping to remove the burden of maintenance. You can work in a familiar environment, focusing on your research and innovation instead of worrying about infrastructure. Expanded Slurm custom settings are available in all AWS Regions where AWS PCS is available. To learn more, see the AWS PCS User Guide.
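Concretely, the kinds of parameters involved are standard Slurm settings such as the following (these are real slurm.conf directives; whether each one is exposed through PCS custom settings should be confirmed in the AWS PCS User Guide):

```ini
# Fair-share, multifactor priority scheduling
PriorityType=priority/multifactor
PriorityWeightFairshare=10000
PriorityWeightAge=1000

# Allow higher-priority partitions to preempt and requeue jobs
PreemptType=preempt/partition_prio
PreemptMode=REQUEUE

# Time limits are typically set per partition, e.g. in a partition
# definition: MaxTime=48:00:00 DefaultTime=01:00:00
```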

#nova#lex#update#enhancement

AWS Parallel Computing Service (PCS) now allows you to reboot compute nodes using Slurm commands without triggering instance replacement. With this feature, you can reboot nodes for operational reasons such as troubleshooting, resource cleanup, and recovery from degraded states before requiring full node replacement, enabling you to efficiently maintain cluster health at lower costs. This feature is available in all AWS Regions where PCS is available. You can use the 'scontrol reboot' command with options to schedule immediate or deferred reboots, while reboots through other methods will continue to trigger instance replacement. To learn more, refer to Rebooting compute nodes with Slurm in AWS PCS. PCS is a managed service that simplifies running and scaling high performance computing (HPC) workloads on AWS using Slurm. To learn more about PCS, refer to the service documentation.
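For example, using standard Slurm scontrol syntax (the node names below are placeholders):

```shell
# Reboot nodes as soon as their running jobs finish, then return them
# to service; other reboot methods still trigger instance replacement.
scontrol reboot ASAP nextstate=RESUME reason="resource cleanup" compute-[1-4]
```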

#support

Amazon Bedrock now offers Cohere Embed v4, the latest state-of-the-art multimodal embedding model from Cohere that produces high-quality embeddings for text, images, and complex business documents. This powerful addition to Amazon Bedrock enables enterprises to build AI applications with frontier search and retrieval capabilities. Traditional embedding models often struggle to understand complex multimodal business materials, such as business presentations and sales and financial reports, requiring extensive data pre-processing pipelines. Embed v4 addresses this challenge by natively processing documents with tables, graphs, diagrams, code snippets, and even handwritten notes. The model handles real-world imperfections such as spelling errors and formatting issues, eliminating the need for time-consuming data cleanup and helping you surface insights from previously difficult-to-search information. With support for over 100 languages, including Arabic, English, French, Japanese, and Korean, Embed v4 enables global organizations to seamlessly search for information, breaking language barriers. The model is also fine-tuned for industries such as finance, healthcare, and manufacturing, delivering superior performance on specialized documents including financial reports, medical records, and product specifications. Cohere Embed v4 is available for on-demand inference in US East (N. Virginia), Europe (Ireland), and Asia Pacific (Tokyo), and can be accessed from select public AWS Regions through cross-region inference. Review the Amazon Bedrock Model Support by Regions guide for complete regional availability. To get started, visit the Amazon Bedrock console to request model access. For more information, refer to the Cohere product page and documentation.
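A Bedrock invoke_model call takes a JSON request body; for Cohere Embed it looks roughly like the sketch below. The field names follow the published Embed v3 request shape and the model ID is a placeholder; confirm both against the Bedrock model parameters documentation for Embed v4 before use.

```python
import json

# Placeholder model ID -- look up the exact Embed v4 ID in the Bedrock console.
MODEL_ID = "cohere.embed-v4:0"

def build_embed_body(texts, input_type="search_document"):
    """Serialize an embedding request body. 'input_type' distinguishes
    documents being indexed from user queries (per Cohere's Embed API);
    the field names here follow the v3 shape and are assumptions for v4."""
    return json.dumps({"texts": texts, "input_type": input_type})

body = build_embed_body(["Annual report 2024", "Q3 revenue summary"])
print(body)

# Hypothetical invocation (requires boto3, credentials, and model access):
#   import boto3
#   rt = boto3.client("bedrock-runtime", region_name="us-east-1")
#   resp = rt.invoke_model(modelId=MODEL_ID, body=body)
#   embeddings = json.loads(resp["body"].read())["embeddings"]
```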

bedrocklexrdsorganizations
#bedrock#lex#rds#organizations#ga#now-available

The global real-time payments market is experiencing significant growth. According to Fortune Business Insights, the market was valued at USD 24.91 billion in 2024 and is projected to grow to USD 284.49 billion by 2032, with a CAGR of 35.4%. Similarly, Grand View Research reports that the global mobile payment market, valued at USD 88.50 […]

In this post, we share how Hapag-Lloyd developed and implemented a machine learning (ML)-powered assistant that predicts vessel arrival and departure times, transforming their schedule planning. By using Amazon SageMaker AI and implementing robust MLOps practices, Hapag-Lloyd has enhanced its schedule reliability, a key performance indicator in the industry and a quality promise to its customers.

sagemaker
#sagemaker

Amazon GameLift Streams now supports streaming over IPv6 for applications running on Windows-based stream groups, enabling dual-stack (IPv4 and IPv6) streaming capabilities. This enhancement gives our customers flexibility in how they connect to their streamed Windows applications while maintaining compatibility with existing IPv4 implementations. When streaming applications running on Windows-based stream groups through Amazon GameLift Streams, customers can now use either IPv4 or IPv6 protocols. This dual-stack support helps customers meet IPv6 compliance requirements and provides additional addressing options for the streaming clients. Please note that Linux runtime applications will continue to require IPv4 connectivity for streaming. Amazon GameLift Streams IPv6 support for applications running on Windows-based stream groups is available in all AWS Regions where Amazon GameLift Streams is offered. To learn more about networking options for your streaming applications, visit the Amazon GameLift Streams documentation.

lex
#lex#ga#enhancement#support

Amazon Keyspaces (for Apache Cassandra) now supports Internet Protocol version 6 (IPv6) through new dual-stack endpoints that enable both IPv6 and IPv4 connectivity. This enhancement provides customers with a vastly expanded address space while maintaining compatibility with existing IPv4-based applications. Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra–compatible database service. Amazon Keyspaces is serverless, so you pay for only the resources that you use and you can build applications that serve thousands of requests per second with virtually unlimited throughput and storage. The dual-stack endpoints functionality allows you to gradually transition your applications from IPv4 to IPv6 without disruption, enabling safer migration paths for your critical database services. IPv6 support is also available through PrivateLink interface Virtual Private Cloud (VPC) endpoints, allowing you to access Amazon Keyspaces privately without traversing the public internet. IPv6 support for Amazon Keyspaces is now available in all AWS Commercial and AWS GovCloud (US) Regions where Amazon Keyspaces is offered, at no additional cost. To learn more about IPv6 support on Keyspaces, visit the Amazon Keyspaces documentation page.

#now-available#enhancement#support

Amazon Managed Workflows for Apache Airflow (MWAA) now supports Apache Airflow version 3.0, the latest major release of the workflow orchestration platform. This release enhances your ability to author, schedule, and monitor complex workflows with greater efficiency and control. Amazon MWAA is a managed service for Apache Airflow that enables seamless workflow orchestration using the familiar Apache Airflow platform. The availability of Apache Airflow v3.0 on MWAA introduces substantial improvements to workflow orchestration, including a completely redesigned interface for enhanced usability and advanced event-driven scheduling capabilities. This new scheduling system triggers workflows based on external events directly, eliminating the need for separate asset update pipelines. The newly introduced Task SDK in Apache Airflow v3.0 on MWAA helps you simplify DAGs by reducing boilerplate code, making workflows more concise, readable, and consistent. Security and isolation are strengthened through the Task Execution API, which restricts direct access to the metadatabase and manages all runtime interactions. This release also features scheduler-managed backfill functionality, giving you better control over historical data processing. Additionally, MWAA now supports Python 3.12, while incorporating critical security improvements and bug fixes that enhance the overall reliability and security of your workflows in Amazon MWAA environments. You can launch a new Apache Airflow 3.0 environment on Amazon MWAA with just a few clicks in the AWS Management Console in all currently supported Amazon MWAA Regions. To learn more about Apache Airflow 3.0, visit the Amazon MWAA documentation and the Apache Airflow 3.0 change log in the Apache Airflow documentation. Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.

lex
#lex#launch#update#improvement#support

Laravel, one of the world’s most popular web frameworks, launched its first-party observability platform, Laravel Nightwatch, to provide developers with real-time insights into application performance. Built entirely on AWS managed services and ClickHouse Cloud, the service already processes over one billion events per day while maintaining sub-second query latency, giving developers instant visibility into the health of their applications.

msk
#msk#launch

AWS announced the general availability of Apache Airflow 3 on Amazon Managed Workflows for Apache Airflow (Amazon MWAA). This release transforms how organizations use Apache Airflow to orchestrate data pipelines and business processes in the cloud, bringing enhanced security, improved performance, and modern workflow orchestration capabilities to Amazon MWAA customers. This post explores the features of Airflow 3 on Amazon MWAA and outlines enhancements that improve your workflow orchestration capabilities.

organizations
#organizations#ga#new-feature#enhancement

Amazon Bedrock Data Automation (BDA) now supports enhanced transcription output for audio files by providing the option to distinguish between various speakers and separately process audio from each channel. Additionally, BDA extends blueprint creation, using a guided, natural language-based interface for extracting custom insights, to the audio modality. BDA is a feature of Amazon Bedrock that automates generation of insights from unstructured multimodal content such as documents, images, audio, and videos for your GenAI-powered applications. With this launch, developers can now enable speaker diarization and channel identification in standard output. Speaker diarization detects each unique speaker and tracks speaker changes in a multi-party audio conversation. Channel identification enables separate processing of audio from each channel. For example, speakers such as a customer and sales agent can be separated into unique channels, making it easier to analyze the transcript. Speaker diarization and channel identification make transcripts easier to read and extract custom insights from a variety of multi-party voice conversations such as customer calls, education sessions, public safety calls, clinical discussions, and meetings. This enables customers to identify ways to improve employee productivity, add subtitles to webinars, enhance customer experience, or increase regulatory compliance. For example, telehealth customers can summarize the recommendations of a doctor by assigning the doctors and patients to pre-identified channels. Amazon Bedrock Data Automation is available in 8 AWS Regions: US West (Oregon), US East (N. Virginia), AWS GovCloud (US-West), Europe (Frankfurt), Europe (London), Europe (Ireland), Asia Pacific (Mumbai), and Asia Pacific (Sydney). To learn more, visit the Bedrock Data Automation page, Amazon Bedrock Pricing page, or view documentation.
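To illustrate the shape of such a configuration, here is a minimal sketch of enabling both options in a BDA project's standard output settings; the JSON structure and field names below are illustrative assumptions, not the exact BDA schema, so consult the BDA documentation for the real field names.

```python
import json

# Illustrative standard-output configuration for the audio modality.
# All field names here are assumptions for the sake of the sketch.
audio_standard_output = {
    "audio": {
        "extraction": {
            "transcript": {
                "speakerDiarization": {"state": "ENABLED"},     # assumed field
                "channelIdentification": {"state": "ENABLED"},  # assumed field
            }
        }
    }
}

# Such a structure would be passed when creating or updating a BDA project,
# e.g. as a standardOutputConfiguration argument.
print(json.dumps(audio_standard_output, indent=2))
```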

bedrock
#bedrock#launch#support

Amazon CloudWatch now helps you monitor large-scale distributed applications by automatically discovering and organizing services into groups based on configurations and their relationships. SREs and DevOps teams can identify critical dependencies and blast radius impacts to remediate issues faster. You get an always-on, out-of-the-box catalog and map that visualizes services and dependencies across AWS accounts and Regions, organizing them into logical groups that align with how customers think about their systems—without manual configurations. You can also apply dynamic grouping based on how you organize applications—by teams, business units, criticality tiers, or other attributes. With this new application performance monitoring (APM) capability, customers can quickly visualize which applications and dependencies to focus on while troubleshooting their distributed applications. For example, SRE and DevOps teams can now accelerate root cause analysis and reduce mean-time-to-resolution (MTTR) through high-level operational signals such as SLOs, health indicators, changes, and top observations. The application map integrates with a contextual troubleshooting drawer that surfaces relevant metrics and actionable insights to accelerate triage. When deeper investigation is needed, teams can pivot to an application-specific dashboard tailored for troubleshooting. The map, drawer, and dashboard dynamically update as new services are discovered or as customers adjust how their environments are grouped—ensuring the view is always accurate and aligned with how teams operate. This new capability is now available in all AWS commercial Regions where Application Signals has launched, at no additional cost. To learn more, please visit CloudWatch Application Signals documentation.

cloudwatch
#cloudwatch#launch#generally-available#ga#now-available#update

Amazon Detective now supports Amazon Virtual Private Cloud (VPC) endpoints via AWS PrivateLink, enabling you to securely initiate API calls to Detective from within your VPC without traversing the public internet. AWS PrivateLink support for Detective is available in all AWS Regions where Detective is available (see the AWS Region table). To try the new feature, you can create a VPC endpoint for Detective through the VPC console, API, or SDK. This creates an elastic network interface in your specified subnets. The interface has a private IP address that serves as an entry point for traffic destined for Detective. You can read more about Detective's integration with PrivateLink here. Amazon Detective automatically collects log data from your AWS resources and uses machine learning, statistical analysis, and graph theory to build interactive visualizations that enable you to conduct faster and more efficient security investigations. Detective analyzes trillions of events from multiple data sources like Amazon Virtual Private Cloud (Amazon VPC) Flow Logs, AWS CloudTrail logs, Amazon Elastic Kubernetes Service (Amazon EKS) audit logs, and findings from multiple AWS security services to create a unified, interactive view of security events. Detective also automatically groups related findings from Amazon GuardDuty, AWS Security Hub, and Amazon Inspector to show you combined threats and vulnerabilities to help security analysts identify and prioritize potential high-severity security risks. To get started, see the Amazon Detective User Guide.

eks
#eks#ga#new-feature#integration#support

Today, AWS announces the v1.0.0 release of the AWS API model context protocol (MCP) server, enabling foundation models (FMs) to interact with any AWS API through natural language by creating and executing syntactically correct CLI commands. The v1.0.0 release of the AWS API MCP Server contains many enhancements that make the server easier to configure, use, and integrate with MCP clients and agentic frameworks. This release reduces startup time and removes several dependencies by converting the suggest_aws_command tool to a remote service rather than relying on local installation. Security enhancements include improved file system access controls and better input validation. Customers using the Amazon CloudWatch agent can now collect logs from the API MCP Server for improved observability. To support more hosting and configuration options, the AWS API MCP Server now offers streamable HTTP transport in addition to the existing stdio transport. To make human-in-the-loop workflows requiring iterative inputs more reliable, the AWS API MCP Server now includes elicitation in supported MCP clients. To provide additional safeguards, the API MCP Server can be configured to deny certain types of actions or require human oversight and consent for mutating actions. This release also includes a new experimental tool called get_execution_plan to provide prescriptive workflows for common AWS tasks. The tool can be enabled by setting the EXPERIMENTAL_AGENT_SCRIPTS flag to true. Customers can configure the AWS API MCP Server for use with their MCP-compatible clients from several popular MCP registries. The AWS API MCP Server is also available packaged as a container in the Amazon ECR Public Gallery. The AWS API MCP Server is open-source and available now. Visit the AWS Labs GitHub repository to view the source, download, and start experimenting with natural language interaction with AWS APIs today.
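A typical MCP client registers the server through a JSON configuration entry; the sketch below shows one plausible shape, where the launcher command, package name, and environment variables are assumptions (only the EXPERIMENTAL_AGENT_SCRIPTS flag comes from the announcement), so check the AWS Labs repository for supported installation options.

```python
import json

# Hypothetical MCP client configuration entry for the AWS API MCP Server.
# Launcher and package name are assumptions; see the AWS Labs repo.
mcp_config = {
    "mcpServers": {
        "aws-api": {
            "command": "uvx",                              # assumed launcher
            "args": ["awslabs.aws-api-mcp-server@latest"], # assumed package
            "env": {
                "AWS_REGION": "us-east-1",
                # Opt in to the experimental execution-plan tool
                # described in the announcement:
                "EXPERIMENTAL_AGENT_SCRIPTS": "true",
            },
        }
    }
}
print(json.dumps(mcp_config, indent=2))
```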

rdscloudwatch
#rds#cloudwatch#experimental#ga#enhancement#support

Today, AWS announces the general availability (GA) of the AWS Knowledge Model Context Protocol (MCP) Server. The AWS Knowledge server gives AI agents and MCP clients access to authoritative knowledge, including documentation, blog posts, What's New announcements, and Well-Architected best practices, in an LLM-compatible format. With this release, the server also includes knowledge about the regional availability of AWS APIs and CloudFormation resources. The AWS Knowledge MCP Server enables MCP clients and agentic frameworks supporting MCP to anchor their responses in trusted AWS context, guidance, and best practices. Customers can now benefit from more accurate reasoning, increased consistency of execution, and reduced manual context management, so they can focus on business problems rather than MCP configuration. The server is publicly accessible at no cost and does not require an AWS account. Usage is subject to rate limits. Give your developers and agents access to the most up-to-date AWS information today by configuring your MCP clients to use the AWS Knowledge MCP Server endpoint, and follow the Getting Started guide for setup instructions. The AWS Knowledge MCP Server is available globally.

cloudformation
#cloudformation#generally-available#ga#support#announcement

Amazon SageMaker Unified Studio announces corporate identity support for interactive Apache Spark sessions through AWS IAM Identity Center's trusted identity propagation. This new capability enables seamless single sign-on and end-to-end data access traceability for data analytics workflows. Data engineers and scientists can now access data resources in Apache Spark sessions in their JupyterLab environment using their organizational identities, while administrators can implement fine-grained access controls and maintain comprehensive audit trails. For data administrators, this feature simplifies security management using AWS Lake Formation, Amazon S3 Access Grants, and Amazon Redshift Data APIs, enabling centralized access controls across Amazon EMR on EC2, EMR on EKS, EMR Serverless, and AWS Glue. Organizations can define granular permissions based on identity provider credentials for Spark sessions and SageMaker Studio notebook flows, including training and processing jobs. This integration is complemented by comprehensive AWS CloudTrail logging of all user activities, from interactive JupyterLab sessions to user background sessions, streamlining compliance monitoring and audit requirements. Identity support for Spark sessions in SageMaker Unified Studio is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), South America (São Paulo), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Paris), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Seoul), and Asia Pacific (Tokyo). To learn more, visit the SageMaker Unified Studio documentation.

sagemakerunified studios3ec2emr+4 more
#sagemaker#unified studio#s3#ec2#emr#redshift

Starting today, AWS Cloud WAN is available in the AWS GovCloud (US-West) and AWS GovCloud (US-East) regions. With AWS Cloud WAN, you can use a central dashboard and network policies to create a global network that spans multiple locations and networks, removing the need to configure and manage different networks using different technologies. You can use network policies to specify the Amazon Virtual Private Clouds, AWS Transit Gateways, and on-premises locations you want to connect to using an AWS Site-to-Site VPN, AWS Direct Connect, or third-party software-defined WAN (SD-WAN) products. The AWS Cloud WAN central dashboard generates a comprehensive view of the network to help you monitor network health, security, and performance. In addition, AWS Cloud WAN automatically creates a global network across AWS Regions by using Border Gateway Protocol (BGP) so that you can easily exchange routes worldwide. To learn more, please visit the AWS Cloud WAN product detail page.

#ga#now-available

AWS DataSync now supports virtual private cloud (VPC) endpoint policies, allowing you to control access to DataSync API operations through DataSync VPC service endpoints and Federal Information Processing Standard (FIPS) 140-3 enabled VPC service endpoints. This new feature helps organizations strengthen their security posture and meet compliance requirements when accessing DataSync API operations through VPC endpoints. VPC endpoint policies allow you to restrict specific DataSync API actions accessed through your VPC endpoints. For example, you can control which AWS principals can access DataSync operations such as CreateTask, StartTaskExecution, or ListAgents. These policies work in conjunction with identity-based policies and resource-based policies to secure access in your AWS environment. This feature is available in all AWS Regions where AWS DataSync is available. For more information about FIPS 140-3 at AWS, visit FIPS 140-3 Compliance. To learn more about VPC endpoint policies for AWS DataSync, see the AWS DataSync User Guide.
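An endpoint policy of the kind described above is a standard IAM policy document attached to the VPC endpoint; the following sketch allows only the DataSync actions named in the announcement for a single role (the account ID and role name are placeholders, not values from the announcement).

```python
import json

# Sketch of a VPC endpoint policy restricting which DataSync actions can be
# invoked through the endpoint. The principal ARN below is a placeholder.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/DataOpsRole"  # placeholder
            },
            "Action": [
                "datasync:CreateTask",
                "datasync:StartTaskExecution",
                "datasync:ListAgents",
            ],
            "Resource": "*",
        }
    ],
}
# Attach via the VPC console or modify-vpc-endpoint with --policy-document.
print(json.dumps(endpoint_policy))
```

Because endpoint policies combine with identity-based and resource-based policies, a request must be allowed by every applicable policy to succeed.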

organizations
#organizations#ga#new-feature#support

In this post, we demonstrate how to implement real-time fraud prevention using GraphStorm v0.5's new capabilities for deploying graph neural network (GNN) models through Amazon SageMaker. We show how to transition from model training to production-ready inference endpoints with minimal operational overhead, enabling sub-second fraud detection on transaction graphs with billions of nodes and edges.

sagemaker
#sagemaker

Amazon ECS Managed Instances is a new compute option that eliminates infrastructure management overhead while giving you access to the broad suite of EC2 capabilities, including the flexibility to select instance types, access reserved capacity, and use advanced security and observability configurations.

lexec2ecs
#lex#ec2#ecs

Amazon SageMaker managed MLflow is now available in both AWS GovCloud (US-West) and AWS GovCloud (US-East) Regions. Amazon SageMaker managed MLflow streamlines AI experimentation and accelerates your GenAI journey from idea to production. MLflow is a popular open-source tool that helps customers manage the ML lifecycle, from experiment tracking to end-to-end observability, reducing time-to-market for generative AI development. To learn more, visit the Amazon SageMaker developer guide.

sagemaker
#sagemaker#now-available

Amazon CloudWatch and OpenSearch Service integrated analytics experience is now available in 5 additional commercial Regions: Asia Pacific (Osaka), Asia Pacific (Seoul), Europe (Milan), Europe (Spain), and US West (N. California). With this integration, CloudWatch Logs customers have two more query languages for log analytics, in addition to CloudWatch Logs Insights QL. Customers can use SQL to analyze data, correlate logs using JOINs and sub-queries, and apply SQL functions (JSON, mathematical, datetime, and string functions) for intuitive log analytics. They can also use OpenSearch Piped Processing Language (PPL) to filter, aggregate, and analyze their data. With a few clicks, CloudWatch Logs customers can create OpenSearch dashboards for VPC, WAF, and CloudTrail logs to monitor, analyze, and troubleshoot using visualizations derived from the logs. OpenSearch customers no longer have to copy logs from CloudWatch for analysis, or create ETL pipelines. Now, they can use OpenSearch Discover to analyze CloudWatch logs in-place, and build indexes and dashboards on CloudWatch Logs. With this launch the integrated experience is now generally available in Asia Pacific (Osaka), Asia Pacific (Seoul), Europe (Milan), Europe (Spain), and US West (N. California), along with Regions where OpenSearch Service direct query is available. Please read pricing and free tier details on Amazon CloudWatch Pricing and OpenSearch Service Pricing. To get started, please refer to the Amazon CloudWatch Logs vended dashboard and Amazon OpenSearch Service Developer Guide.

opensearchopensearch servicerdscloudwatchwaf
#opensearch#opensearch service#rds#cloudwatch#waf#launch

Today, AWS announced the expansion of 10 Gbps and 100 Gbps dedicated connections with MACsec encryption capabilities at the existing AWS Direct Connect location in the Equinix BG1 data center near Bogota, Colombia. You can now establish private, direct network access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones from this location. The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet.  For more information on the over 146 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.

#expansion

Generative AI agents in production environments demand resilience strategies that go beyond traditional software patterns. AI agents make autonomous decisions, consume substantial computational resources, and interact with external systems in unpredictable ways. These characteristics create failure modes that conventional resilience approaches might not address. This post presents a framework for AI agent resilience risk analysis […]

Amazon Simple Notification Service (Amazon SNS) now allows customers to make API requests over Internet Protocol version 6 (IPv6) in the AWS GovCloud (US) Regions. The new endpoints have also been validated under the Federal Information Processing Standard (FIPS) 140-3 program. Amazon SNS is a fully managed messaging service that enables publish/subscribe messaging between distributed systems, microservices, and event-driven serverless applications. With this update, customers have the option of using either IPv6 or IPv4 when sending requests over dual-stack public or VPC endpoints.  SNS now supports IPv6 in all Regions where the service is available, including AWS Commercial, AWS GovCloud (US), and China Regions. For more information on using IPv6 with Amazon SNS, please refer to our developer guide.

sns
#sns#update#support

Amazon Simple Notification Service (Amazon SNS) now supports additional endpoints that have been validated under the Federal Information Processing Standard (FIPS) 140-3 program in AWS Regions in the United States and Canada. FIPS compliant endpoints help companies contracting with the US federal government meet the FIPS security requirement to encrypt sensitive data in supported regions. With this expansion, you can use Amazon SNS for workloads that require a FIPS 140-3 validated cryptographic module when sending requests over dual-stack public or VPC endpoints. Amazon SNS FIPS compliant endpoints are now available in the following regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Canada West (Calgary) and AWS GovCloud (US). To learn more about FIPS 140-3 at AWS, visit FIPS 140-3 Compliance.

sns
#sns#ga#now-available#support#expansion

AWS Transform now offers Terraform as an additional option to generate network infrastructure code automatically from VMware environments. The service converts your source network definitions into reusable Terraform modules, complementing current AWS CloudFormation and AWS Cloud Development Kit (CDK) support. AWS Transform for VMware is an agentic AI service that automates the discovery, planning, and migration of VMware workloads, accelerating infrastructure modernization with increased speed and confidence. These migrations require recreating network configurations while maintaining operational consistency. The service now generates Terraform modules alongside CDK and AWS CloudFormation templates. This addition enables organizations to maintain existing deployment pipelines while using preferred tools for modular, customizable network configurations. The Terraform module generation capability is available in all AWS Regions where the service is offered. To learn more, visit the AWS Transform for VMware product page, read the user guide, or get started in the AWS Transform web experience.

cloudformationorganizationstransform for vmware
#cloudformation#organizations#transform for vmware#ga#support

AWS Transfer Family now supports four new service-specific condition keys for Identity and Access Management (IAM). With this feature, administrators can create more granular IAM policies and service control policies (SCPs) to restrict configurations for Transfer Family resources, enhancing security controls and compliance management.  IAM condition keys allow you to author policies that enforce access control based on API request context. With these new condition keys, you can now author policies based on Transfer Family context to control which protocols, endpoint types, and storage domains can be configured through policy conditions. For example, you can use transfer:RequestServerEndpointType to prevent the creation of public servers, or transfer:RequestServerProtocols to ensure only SFTP servers can be created, enabling you to define additional permission guardrails for Transfer Family actions.  The new IAM condition keys are available in all AWS Regions where AWS Transfer Family is available. To learn more, visit the IAM Service Authorization Reference and Transfer Family User Guide. To learn more about how to manage permissions within your organization through SCPs, visit the AWS Organizations User Guide.
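As a sketch of how the two condition keys named above might be used, the policy below denies creation of public-endpoint servers and of servers with any protocol other than SFTP; the statement structure is illustrative, and the `ForAnyValue` qualifier on the multivalued protocols key is an assumption worth verifying in the IAM Service Authorization Reference.

```python
import json

# Illustrative guardrail policy using the new Transfer Family condition keys.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPublicServers",
            "Effect": "Deny",
            "Action": "transfer:CreateServer",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "transfer:RequestServerEndpointType": "PUBLIC"
                }
            },
        },
        {
            "Sid": "DenyNonSftpProtocols",
            "Effect": "Deny",
            "Action": "transfer:CreateServer",
            "Resource": "*",
            "Condition": {
                # Protocols is multivalued; the set-operator qualifier
                # below is an assumption -- verify before relying on it.
                "ForAnyValue:StringNotEquals": {
                    "transfer:RequestServerProtocols": "SFTP"
                }
            },
        },
    ],
}
print(json.dumps(policy))
```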

iamorganizations
#iam#organizations#ga#support

AWS Firewall Manager is now available in the AWS Asia Pacific (Taipei) Region. AWS Firewall Manager helps cloud security administrators and site reliability engineers protect applications while reducing the operational overhead of manually configuring and managing rules. With AWS Firewall Manager, customers hosting their applications and workloads in Taipei can deploy defense-in-depth policies that address the full range of supported AWS security services. Customers wishing to secure assets using AWS WAF can create and maintain security policies with AWS Firewall Manager. To learn more about how AWS Firewall Manager works, see the AWS Firewall Manager documentation, and see the AWS Region Table for the list of Regions where AWS Firewall Manager is currently available. To learn more about AWS Firewall Manager, its features, and its pricing, visit the AWS Firewall Manager website.

waf
#waf#launch#now-available

Today, we're announcing that Amazon Elastic VMware Service (Amazon EVS) is now available in all availability zones in the Asia Pacific (Singapore) and Europe (London) Regions. This expansion provides more options to leverage AWS scale and flexibility for running your VMware workloads in the cloud. Amazon EVS lets you run VMware Cloud Foundation (VCF) directly within your Amazon Virtual Private Cloud (VPC) on EC2 bare-metal instances, powered by AWS Nitro. Using either our step-by-step configuration workflow or the AWS Command Line Interface (CLI) with automated deployment capabilities, you can set up a complete VCF environment in just a few hours. This rapid deployment enables faster workload migration to AWS, helping you eliminate aging infrastructure, reduce operational risks, and meet critical timelines for exiting your data center. The added availability in the Asia Pacific (Singapore) and Europe (London) Regions gives your VMware workloads lower latency through closer proximity to your end users, compliance with data residency or sovereignty requirements, and additional high availability and resiliency options for your enhanced redundancy strategy. To get started, visit the Amazon EVS product detail page and user guide.

lexec2
#lex#ec2#ga#now-available#expansion

Starting today, customers can use boot and data volumes backed by Dell PowerStore and HPE Alletra Storage MP B10000 storage arrays with Amazon Elastic Compute Cloud (Amazon EC2) instances on AWS Outposts, including authenticated and encrypted volumes. This enhancement extends our existing support for boot and data volumes to include Dell and HPE storage arrays, alongside our current support for NetApp® on-premises enterprise storage arrays and Pure Storage® FlashArray™. Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises or edge location for a truly consistent hybrid experience. With Outposts, customers can maximize the value of their on-premises storage investments by leveraging their existing enterprise storage arrays for both boot and data volumes, complementing managed Amazon EBS and Local Instance Store options. This provides significant operational benefits, including streamlined operating system (OS) management via centralized boot volumes and advanced data management features through high-performance data volumes. By integrating their own storage, organizations can also satisfy data residency requirements and benefit from a consistent cloud operational model for their hybrid environments. To simplify the process, AWS offers automation scripts through AWS Samples to help customers easily set up and use external block volumes with EC2 instances on Outposts. Customers can use the AWS Management Console or CLI to utilize third-party block volumes with EC2 instances on Outposts. Third-party storage integration for Outposts with all compatible storage vendors is available on Outposts 2U servers and Outposts racks at no additional charge in all AWS Regions where Outposts is supported. See the FAQs for Outposts servers and Outposts racks for the latest list of supported Regions. 
To learn more about implementation details and best practices, check out this blog post or visit our technical documentation for Outposts servers, second-generation Outposts racks, and first-generation Outposts racks.

ec2organizationsoutposts
#ec2#organizations#outposts#ga#enhancement#integration

AWS Storage Gateway now supports Virtual Private Cloud (VPC) endpoint policies for your VPC endpoints. With this feature, administrators can attach endpoint policies to VPC endpoints, allowing granular access control over Storage Gateway direct APIs for improved data protection and security posture. AWS Storage Gateway is a hybrid cloud storage service that provides on-premises applications access to virtually unlimited storage in the cloud. You can use AWS Storage Gateway for backing up and archiving data to AWS, providing on-premises file shares backed by cloud storage, and providing on-premises applications low latency access to data in the cloud. AWS Storage Gateway support for VPC endpoint policies is available in all AWS Regions where Storage Gateway is available. To learn more, visit our documentation.

#ga#support

Amazon Connect now enables you to customize service level calculations to your specific needs. Supervisors and managers can define time thresholds for when a contact is considered to meet service level standards and select which contact outcomes to include in the calculation. For example, managers can choose to count callback contacts, exclude contacts transferred out while waiting in queue, and exclude short abandons using a configurable time threshold. Customization of service level calculation is available from the metric configuration section on the analytics dashboards. With this feature, supervisors and managers can now create a service level metric calculation that better aligns with their business operations. With a customized view of service level performance, operations managers can assess how effectively they have met their service standards. This new feature is available in all AWS regions where Amazon Connect is offered. To learn more about customizing your service level calculation, visit the Admin Guide. To learn more about Amazon Connect, the easy-to-use cloud contact center, visit the Amazon Connect website.
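The calculation described above can be sketched in a few lines. This is an illustrative model, not Amazon Connect's internal implementation: the `Contact` fields, the default thresholds, and the exclusion rules are assumptions chosen to mirror the examples in the announcement (short abandons and queue transfers excluded).

```python
from dataclasses import dataclass

@dataclass
class Contact:
    queue_seconds: float       # time the contact waited in queue
    abandoned: bool = False    # caller hung up before being answered
    transferred_out: bool = False

def service_level(contacts, threshold_s=20, short_abandon_s=5,
                  exclude_transfers=True):
    """Illustrative service level: percentage of counted contacts answered
    within threshold_s seconds. Short abandons (hang-ups before
    short_abandon_s) and queue transfers are excluded, mirroring the
    configurable exclusions described in the announcement."""
    counted, met = 0, 0
    for c in contacts:
        if c.abandoned and c.queue_seconds < short_abandon_s:
            continue  # short abandon: excluded from the calculation
        if exclude_transfers and c.transferred_out:
            continue  # transferred out while in queue: excluded
        counted += 1
        if not c.abandoned and c.queue_seconds <= threshold_s:
            met += 1
    return 100.0 * met / counted if counted else None
```

For example, with one contact answered in 10 s, one in 30 s, a 3 s abandon, and a transfer, only the first two count and the service level is 50%.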

rds
#rds#new-feature

AWS ParallelCluster 3.14 is now generally available. This release includes P6e-GB200 and P6-B200 instance types, prioritized allocation strategies for optimized instance placement, and NICE DCV support for Amazon Linux 2023. Other features in this release include chef-client log visibility in the instance's system log (viewable from the instance console) and Amazon Linux 2023 with kernel 6.12. To get started using P6e-GB200 instances with ParallelCluster, follow the tutorial in the ParallelCluster User Guide - Using Amazon EC2 P6e-GB200 UltraServers in AWS ParallelCluster. For more details on the release, review the AWS ParallelCluster 3.14.0 release notes. ParallelCluster is a fully-supported and maintained open-source cluster management tool that enables R&D customers and their IT administrators to operate high-performance computing (HPC) clusters on AWS. ParallelCluster is designed to automatically and securely provision cloud resources into elastically-scaling HPC clusters capable of running scientific and engineering workloads at scale on AWS. ParallelCluster is available at no additional charge in the AWS Regions listed here, and you pay only for the AWS resources needed to run your applications. To learn more about launching HPC clusters on AWS, visit the ParallelCluster User Guide. To start using ParallelCluster, see the installation instructions for ParallelCluster UI and CLI.

ec2
#ec2#launch#generally-available#support

Amazon FSx for NetApp ONTAP second-generation file systems are now available in four additional AWS Regions: Europe (Spain, Zurich), Asia Pacific (Seoul), and Canada (Central). Amazon FSx makes it easier and more cost effective to launch, run, and scale feature-rich high-performance file systems in the cloud. Second-generation FSx for ONTAP file systems give you more performance scalability and flexibility over first-generation file systems by allowing you to create or expand file systems with up to 12 highly-available (HA) pairs of file servers, providing your workloads with up to 72 GBps of throughput and 1 PiB of provisioned SSD storage. With this regional expansion, second-generation FSx for ONTAP file systems are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Canada (Central), Europe (Frankfurt, Ireland, Spain, Stockholm, Zurich), and Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo). You can create second-generation Multi-AZ file systems with a single HA pair, and Single-AZ file systems with up to 12 HA pairs. To learn more, visit the FSx for ONTAP user guide.
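The stated ceiling of 72 GBps at 12 HA pairs works out to 6 GBps per pair. The helper below sketches that arithmetic; the linear scaling across pairs is an assumption for illustration, and actual per-pair throughput depends on the configuration you provision.

```python
MAX_HA_PAIRS = 12
MAX_THROUGHPUT_GBPS = 72  # at 12 HA pairs, per the announcement
PER_PAIR_GBPS = MAX_THROUGHPUT_GBPS / MAX_HA_PAIRS  # 6 GBps per pair

def max_throughput_gbps(ha_pairs):
    """Rough aggregate throughput ceiling, assuming linear scaling
    across HA pairs (an illustrative assumption, not a guarantee)."""
    if not 1 <= ha_pairs <= MAX_HA_PAIRS:
        raise ValueError("second-generation file systems support 1-12 HA pairs")
    return PER_PAIR_GBPS * ha_pairs
```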

lex
#lex#launch#ga#now-available#expansion

Amazon FSx now offers customers the option to use Internet Protocol version 6 (IPv6) for access to Amazon FSx for NetApp ONTAP file systems. More and more customers are adopting IPv6 to mitigate IPv4 address exhaustion in their private networks or to satisfy government mandates such as the US Office of Management and Budget (OMB) M-21-07 memorandum. With this launch, customers can now access their file systems using IPv4, IPv6, or dual-stack clients without the need for complex infrastructure to handle IPv6 to IPv4 address translation. IPv6 support for new FSx for NetApp ONTAP file systems is now available in all AWS Commercial and AWS GovCloud (US) regions where Amazon FSx is available, with IPv6 support for existing FSx for NetApp ONTAP file systems coming in an upcoming weekly maintenance window. To learn more, visit the Amazon FSx user guide.

lex
#lex#launch#ga#now-available#support

Amazon FSx now offers customers the option to use Internet Protocol version 6 (IPv6) for access to Amazon FSx for Windows File Server file systems. More and more customers are adopting IPv6 to mitigate IPv4 address exhaustion in their private networks or to satisfy government mandates such as the US Office of Management and Budget (OMB) M-21-07 memorandum. With this launch, customers can now access their file systems using IPv4, IPv6, or dual-stack clients without the need for complex infrastructure to handle IPv6 to IPv4 address translation. IPv6 support for new FSx for Windows File Server file systems is now available in all AWS Commercial and AWS GovCloud (US) regions where Amazon FSx is available, with IPv6 support for existing FSx for Windows File Server file systems coming in an upcoming weekly maintenance window. To learn more, visit the Amazon FSx user guide.

lex
#lex#launch#ga#now-available#support

Today, AWS announces the launch of Amazon Elastic Container Service (Amazon ECS) Managed Instances, a new fully managed compute option designed to eliminate infrastructure management overhead while giving you access to the full capabilities of Amazon EC2. By offloading infrastructure operations to AWS, ECS Managed Instances helps you quickly launch and scale your workloads, while enhancing performance and reducing your total cost of ownership. With ECS Managed Instances, you get the application performance you want and the simplicity you need. Simply define your task requirements such as the number of vCPUs, memory size, and CPU architecture, and Amazon ECS automatically provisions, configures, and operates the most optimal EC2 instances within your AWS account using AWS-controlled access. You can also specify desired instance types in Managed Instances Capacity Provider configuration, including GPU-accelerated, network-optimized, and burstable performance, to run your workloads on the instance families you prefer. ECS Managed Instances dynamically scales EC2 instances to match your workload requirements and continuously optimizes task placement to reduce infrastructure costs. It also enhances your security posture through regular security patching initiated every 14 days. You can use EC2 event windows to schedule patching to occur within weekly maintenance windows, minimizing the risk of interruptions during critical hours. ECS Managed Instances is now available in six AWS regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), Africa (Cape Town), Asia Pacific (Singapore), and Asia Pacific (Tokyo). To get started with ECS Managed Instances, use the AWS Console, Amazon ECS MCP Server, or your favorite infrastructure-as-code tooling to enable it in a new or existing Amazon ECS cluster. You will be charged for the management of compute provisioned, in addition to your regular Amazon EC2 costs.
To learn more about ECS Managed Instances, visit the feature page, documentation, and AWS News launch blog.
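The "define your task requirements" step above maps to standard ECS task definition fields. The fragment below is a sketch using documented `RegisterTaskDefinition` parameters (`cpu`, `memory`, `runtimePlatform`); the family name and image are placeholders, and the capacity provider wiring that selects Managed Instances is configured separately on the cluster.

```python
import json

# Illustrative task definition fragment: you declare what the task needs
# (vCPUs, memory, CPU architecture) and ECS Managed Instances picks and
# operates a suitable EC2 instance. Values below are examples only.
task_definition = {
    "family": "web-api",  # hypothetical task family name
    "networkMode": "awsvpc",
    "cpu": "1024",        # 1 vCPU
    "memory": "2048",     # 2 GiB
    "runtimePlatform": {
        "cpuArchitecture": "ARM64",
        "operatingSystemFamily": "LINUX",
    },
    "containerDefinitions": [{
        "name": "app",
        "image": "public.ecr.aws/docker/library/nginx:latest",
        "essential": True,
    }],
}
print(json.dumps(task_definition, indent=2))
```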

ec2ecs
#ec2#ecs#launch#ga#now-available

You can now deploy AWS IAM Identity Center in 36 AWS Regions, including Asia Pacific (Thailand) and Mexico (Central). IAM Identity Center is the recommended service for managing workforce access to AWS applications. It enables you to connect your existing source of workforce identities to AWS once and offer your users a single sign-on experience across AWS. It powers the personalized experiences offered by AWS applications, such as Amazon Q, and the ability to define and audit user-aware access to data in AWS services, such as Amazon Redshift. It can also help you manage access to multiple AWS accounts from a central place. IAM Identity Center is available at no additional cost in these AWS Regions. To learn more about IAM Identity Center, visit the product detail page. To get started, see the IAM Identity Center User Guide.

amazon qpersonalizeredshiftiamiam identity center
#amazon q#personalize#redshift#iam#iam identity center

AWS Transfer Family now supports Virtual Private Cloud (VPC) endpoint policies for your VPC endpoints. With this feature, administrators can attach an endpoint policy to an interface VPC endpoint, allowing granular access control over Transfer Family APIs for improved data protection and security posture. Additionally, Transfer Family now supports Federal Information Processing Standards (FIPS) 140-3 enabled VPC endpoints. Previously, customers had full access to Transfer Family APIs through an interface VPC endpoint, powered by AWS PrivateLink. With this launch, you can now manage which Transfer Family API actions (CreateServer, StartServer, DeleteServer, etc.) can be performed, which principals can perform them, and which resources they can act upon. These policies work with existing IAM user and role policies and organizational service control policies. VPC endpoint policy support is available in all AWS Regions where the service is available. To learn more, visit the Transfer Family User Guide.
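An endpoint policy like the sketch below could restrict an interface endpoint to read-only Transfer Family actions. The account ID is a placeholder and the action list is an illustrative subset; endpoint policies use the standard IAM policy grammar.

```python
import json

# Example VPC endpoint policy (a sketch): allow only two read actions on
# Transfer Family through this endpoint, for principals in one account.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # placeholder account
        "Action": ["transfer:ListServers", "transfer:DescribeServer"],
        "Resource": "*",
    }],
}
print(json.dumps(endpoint_policy, indent=2))
```

Calls to denied actions, such as `transfer:DeleteServer`, made through this endpoint would then fail regardless of the caller's IAM permissions, since endpoint policies intersect with identity policies rather than replace them.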

rdsiam
#rds#iam#launch#ga#support

AWS B2B Data Interchange introduces new transformation status reporting in the AWS Console, enabling you to monitor and troubleshoot your Electronic Data Interchange (EDI) file processing in a single, simple user interface. AWS B2B Data Interchange automates validation, transformation, and generation of EDI files such as ANSI X12 documents to and from JSON and XML data formats. With this launch, you can now track and review the status of the most recently performed EDI transformations directly in the AWS Console. For each partnership, AWS B2B Data Interchange now automatically presents information about the transformation status, timelines, and validation results for up to the 10,000 most recently processed input-output pairs. This information enables you to easily track the status of your EDI exchanges with trading partners and troubleshoot issues, all in a single interface without needing to manually review log entries. Support for transformation status reporting is available in all AWS Regions where the AWS B2B Data Interchange service is available. To get started with monitoring your EDI transformations, visit the AWS B2B Data Interchange user guide or take our self-paced workshop.

#launch#support

Customers can now create Amazon FSx for Lustre file systems in the AWS US West (Phoenix) Local Zone. Amazon FSx makes it easier and more cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities. Amazon FSx for Lustre provides fully managed shared storage built on the world’s most popular high-performance file system, designed for fast processing of workloads such as machine learning, high performance computing (HPC), video processing, financial modeling, and electronic design automation (EDA). To learn more about Amazon FSx for Lustre, visit our product page, and see the AWS Region Table for complete regional availability information.

#launch#now-available#support

Starting today, Amazon EC2 Auto Scaling (ASG) supports Federal Information Processing Standard (FIPS) 140-3 validated VPC endpoints. With this launch, you can use AWS PrivateLink with ASG for regulated workloads that require secure connections using FIPS 140-3 validated cryptographic modules. FIPS-compliant endpoints help organizations contracting with the U.S. federal government meet FIPS security requirements for encrypting sensitive data in supported regions. To create a VPC endpoint that connects to an ASG endpoint, see Setting up a VPC endpoint for Amazon EC2 Auto Scaling. This capability is available in the following regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), and Canada West (Calgary). For more information about FIPS 140-3 at AWS, visit FIPS 140-3 Compliance. To learn more about Amazon EC2 Auto Scaling, visit the ASG product page.

ec2organizations
#ec2#organizations#launch#ga#support

Amazon Elastic Container Service (Amazon ECS) now supports running tasks in IPv6-only subnets. With this launch, Amazon ECS tasks and services can run using only IPv6 addresses, without requiring IPv4. This enables customers to deploy containerized applications in IPv6-only environments, scale without being limited by IPv4 address availability, and meet IPv6 compliance requirements through native IPv6 support in Amazon ECS. Previously, Amazon ECS tasks always required an IPv4 address, even when launched in dual-stack subnets. This requirement could create scaling and management challenges for customers operating large fleets of containerized applications, where IPv4 address space became a bottleneck. With IPv6-only support, Amazon ECS tasks launched in IPv6-only subnets use only IPv6 addresses. This removes IPv4 as a dependency and helps organizations that must meet IPv6 adoption or regulatory mandates. The feature works across all Amazon ECS launch types and can be used with awsvpc, bridge, and host networking modes. To get started, create IPv6-only subnets in your VPC and launch Amazon ECS services or tasks in those subnets. Amazon ECS automatically detects the configuration and provisions the appropriate networking. To learn more about IPv6-only task networking and supported AWS Regions, see the Amazon ECS task networking documentation for AWS Fargate launch type and EC2 launch type. You can also read our blog post for a detailed walkthrough and migration strategies.
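Launching a task into an IPv6-only subnet uses the same `awsvpc` network configuration shape as before; only the subnet changes. The fragment below uses the documented `RunTask`/`CreateService` fields, with placeholder subnet and security group IDs.

```python
# Sketch of the network configuration passed to RunTask or CreateService
# when targeting an IPv6-only subnet. IDs below are placeholders; the
# security group must allow the IPv6 traffic your task needs.
network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],      # an IPv6-only subnet
        "securityGroups": ["sg-0123456789abcdef0"],
        "assignPublicIp": "DISABLED",                 # no IPv4 to assign
    }
}
```

Per the announcement, ECS detects the subnet configuration automatically, so no additional flag is needed to opt in to IPv6-only networking.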

ec2ecsfargateorganizations
#ec2#ecs#fargate#organizations#launch#ga

Customers can now use Claude Sonnet 4.5 in Amazon Bedrock, a fully managed service that offers a choice of high-performing foundation models from leading AI companies. Claude Sonnet 4.5 is Anthropic's most intelligent model, excelling at complex agents, coding, and long-horizon tasks while maintaining optimal speed and cost-efficiency for high-volume use cases. Claude Sonnet 4.5 currently leads the SWE-bench Verified benchmarks with enhanced instruction following, better code improvement identification, stronger refactoring judgment, and more effective production-ready code generation. This model excels at powering long-running agents that tackle complex, multi-step tasks requiring peak accuracy—like autonomously managing multi-channel marketing campaigns or orchestrating cross-functional enterprise workflows. In cybersecurity, it can help teams shift from reactive detection to proactive defense by autonomously patching vulnerabilities. For financial services, it can handle everything from analysis to advanced predictive modeling. Through the Amazon Bedrock API, Claude can now automatically edit context to clear stale information from past tool calls, allowing you to maximize the model’s context. A new memory tool lets Claude store and consult information outside the context window to boost accuracy and performance. Claude Sonnet 4.5 is now available in Amazon Bedrock via global cross region inference in multiple locations. To view the full list of available regions, refer to the documentation. To get started with Claude Sonnet 4.5 in Amazon Bedrock, read the News Blog, visit the Amazon Bedrock console, Anthropic's Claude in Amazon Bedrock product page, and the Amazon Bedrock pricing page.
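Invoking the model goes through the standard Bedrock Converse API. The sketch below builds the request shape only; the model ID is a placeholder you would replace with the Claude Sonnet 4.5 ID shown in the Bedrock console, and the actual call (shown in a comment) requires AWS credentials.

```python
MODEL_ID = "<claude-sonnet-4-5-model-id>"  # placeholder: copy from the Bedrock console

# Request shape for the Bedrock Converse API; prompt text is illustrative.
converse_request = {
    "modelId": MODEL_ID,
    "messages": [
        {"role": "user",
         "content": [{"text": "Review this refactoring plan and flag risky steps."}]},
    ],
    "inferenceConfig": {"maxTokens": 1024, "temperature": 0.2},
}
# With credentials configured, you would send it with:
#   boto3.client("bedrock-runtime").converse(**converse_request)
```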

bedrocklex
#bedrock#lex#now-available#improvement

Beginning today, customers can use Amazon Bedrock in the Middle East (UAE) region to easily build and scale generative AI applications using a variety of foundation models (FMs) as well as powerful tools to build generative AI applications. Amazon Bedrock is a comprehensive and secure service for building generative AI applications and agents. Amazon Bedrock connects you to leading foundation models (FMs) and services to deploy and operate agents, enabling you to quickly move from experimentation to real-world deployment. To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.

bedrock
#bedrock#now-available

Today, AWS Network Firewall introduces Reject and Alert action support for stateful domain list rule groups, providing customers with more granular control over their network traffic. This enhancement allows customers to create Reject and Alert actions in stateful domain list rule groups using the AWS Network Firewall console, offering more flexible and precise traffic management options within their AWS environments. With this new feature, customers can now create more sophisticated and tailored network security policies. The Reject action enables customers to block specific domain-based traffic, while the Alert action allows for monitoring and logging of traffic without interrupting the flow. This granular control helps organizations improve their security posture by fine-tuning their firewall rules to better align with their specific security requirements and compliance needs. The new Reject and Alert action support for stateful domain list rule groups is available in all AWS Regions where AWS Network Firewall is offered. You can configure these rule actions from the Amazon VPC Console or the Network Firewall API. To learn more about this new feature and other AWS Network Firewall capabilities, visit the AWS Network Firewall product page and the service documentation.

lexorganizations
#lex#organizations#ga#new-feature#enhancement#support

AWS Backup is now available in the AWS Asia Pacific (New Zealand) Region. AWS Backup is a fully-managed, policy-driven service that allows you to centrally automate data protection across multiple AWS services spanning compute, storage, and databases. Using AWS Backup, you can centrally create and manage backups of your application data, protect your data from inadvertent or malicious actions with immutable recovery points and vaults, and restore your data in the event of a data loss incident. You can get started with AWS Backup using the AWS Backup console, SDKs, or CLI by creating a data protection policy and then assigning AWS resources to it using tags or Resource IDs. For more information on the features available in the Asia Pacific (New Zealand) Region, visit the AWS Backup product page and documentation. To learn about the Regional availability of AWS Backup, see the AWS Regional Services List.

#now-available

Beginning today, customers can use Amazon Bedrock in the Asia Pacific (Thailand), Asia Pacific (Malaysia), and Asia Pacific (Taipei) regions to easily build and scale generative AI applications using a variety of foundation models (FMs) as well as powerful tools to build generative AI applications. Amazon Bedrock is a comprehensive and secure service for building generative AI applications and agents. Amazon Bedrock connects you to leading foundation models (FMs) and services to deploy and operate agents, enabling you to quickly move from experimentation to real-world deployment. To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.

bedrock
#bedrock#now-available

Beginning today, customers can use Amazon Bedrock in the Israel (Tel Aviv) region to easily build and scale generative AI applications using a variety of foundation models (FMs) as well as powerful tools to build generative AI applications. Amazon Bedrock is a comprehensive and secure service for building generative AI applications and agents. Amazon Bedrock connects you to leading foundation models (FMs) and services to deploy and operate agents, enabling you to quickly move from experimentation to real-world deployment. To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.

bedrock
#bedrock#now-available

Amazon Neptune Analytics is now available in the Asia Pacific (Mumbai) Region. You can now create and manage Neptune Analytics graphs in the Asia Pacific (Mumbai) Region and run advanced graph analytics. Neptune Analytics is a memory-optimized graph database engine for analytics. With Neptune Analytics, you can get insights and find trends by processing large amounts of graph data in seconds. To analyze graph data quickly and easily, Neptune Analytics stores large graph datasets in memory. It supports a library of optimized graph analytic algorithms, low-latency graph queries, and vector search capabilities within graph traversals. Neptune Analytics is an ideal choice for investigatory, exploratory, or data-science workloads that require fast iteration for data, analytical and algorithmic processing, or vector search on graph data. It complements Amazon Neptune Database, a popular managed graph database. To perform intensive analysis, you can load the data from a Neptune Database graph or snapshot into Neptune Analytics. You can also load graph data that's stored in Amazon S3. To get started, you can create a new Neptune Analytics graph using the AWS Management Console or AWS CLI. For more information on pricing and region availability, refer to the Neptune pricing page.

s3
#s3#ga#now-available#support

The AWS SDK for Java 1.x (v1) entered maintenance mode on July 31, 2024, and will reach end-of-support on December 31, 2025. We recommend that you migrate to the AWS SDK for Java 2.x (v2) to access new features, enhanced performance, and continued support from AWS. To help you migrate efficiently, we’ve created a migration […]

#new-feature#support

In this solution, we demonstrate how the user (a parent) can interact with a Strands or LangGraph agent in conversational style and get information about the immunization history and schedule of their child, inquire about the available slots, and book appointments. With some changes, AI agents can be made event-driven so that they can automatically send reminders, book appointments, and so on.

bedrockagentcore
#bedrock#agentcore

Amazon AppStream 2.0 is enhancing the end-user experience by introducing support for local file redirection on multi-session fleets. While this feature is already available on single-session fleets, this launch extends it to multi-session fleets, helping administrators to leverage the cost benefits of the multi-session model while providing an enhanced end-user experience. Local file redirection on AppStream helps deliver benefits by enabling seamless access to local files directly from streaming applications, enhancing user productivity and experience. This feature reduces the need for manual file uploads and downloads, providing a natural, desktop-like experience with intuitive drag-and-drop functionality. Users can more efficiently manage their workflows while helping to maintain security through controlled access to local resources and secure file handling between environments. This feature is available at no additional cost in all the AWS Regions where Amazon AppStream 2.0 is available. AppStream 2.0 offers pay-as-you go pricing. To get started with AppStream 2.0, see Getting Started with Amazon AppStream 2.0. To enable this feature for your users, you must use an AppStream 2.0 image that uses the latest AppStream 2.0 agent or has been updated using Managed AppStream 2.0 image updates released on or after September 5, 2025.

#launch#now-available#update#support

Amazon MSK Connect is now available in five additional AWS Regions: Asia Pacific (Thailand), Asia Pacific (Taipei), Mexico (Central), Canada West (Calgary), and Europe (Spain). MSK Connect enables you to run fully managed Kafka Connect clusters with Amazon Managed Streaming for Apache Kafka (Amazon MSK). With a few clicks, MSK Connect allows you to easily deploy, monitor, and scale connectors that move data in and out of Apache Kafka and Amazon MSK clusters from external systems such as databases, file systems, and search indices. MSK Connect eliminates the need to provision and maintain cluster infrastructure. Connectors scale automatically in response to increases in usage and you pay only for the resources you use. With full compatibility with Kafka Connect, it is easy to migrate workloads without code changes. MSK Connect supports both Amazon MSK-managed and self-managed Apache Kafka clusters. You can get started with MSK Connect from the Amazon MSK console or the AWS CLI. Visit the AWS Regions page for all the regions where Amazon MSK is available. To get started, visit the MSK Connect product page, pricing page, and the Amazon MSK Developer Guide.

kafkamsk
#kafka#msk#ga#now-available#support

AWS Clean Rooms now supports incremental processing of rule-based ID mapping workflows with AWS Entity Resolution. This helps you perform real-time data synchronization across collaborators’ datasets with the privacy-enhancing controls of AWS Clean Rooms. With this launch, you can populate ID mapping tables in a Clean Rooms collaboration with only the new, modified, or deleted records since the last analysis. Data collaborators can enable incremental processing for rule-based ID mapping workflows in AWS Entity Resolution, and then update an existing ID mapping table in a collaboration. For example, a measurement provider can maintain up-to-date offline purchase data in a collaboration with an advertiser and a publisher, enabling always-on measurement of campaign outcomes, reduced costs, and maintained privacy controls for all collaboration members. AWS Entity Resolution is natively integrated within AWS Clean Rooms to help you and your partners more easily prepare and match related customer records. Using rule-based or data service provider-based matching can help you improve data matching for enhanced advertising campaign planning, targeting, and measurement. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about AWS Clean Rooms, visit AWS Clean Rooms.

rds
#rds#launch#update#support

Amazon Elastic Block Store (Amazon EBS) now supports higher volume-level limits for its General Purpose (gp3) volumes. With this update, gp3 volumes can scale up to 64 TiB in size (4X the previous 16 TiB limit), up to 80,000 IOPS (5X the previous 16,000 IOPS limit), and up to 2,000 MiB/s throughput (2X the previous 1,000 MiB/s limit). These expanded limits help reduce operational complexity for storage-intensive workloads by enabling gp3 volumes with larger capacity and higher performance. You can consolidate multiple striped volumes into a single gp3 volume, streamline architectures, and lower management overhead. The increased limits particularly benefit customers running containerized workloads with limited support for striping multiple volumes, applications that rely on single-volume architectures, and growing workloads approaching current gp3 limits. The pricing model remains unchanged: you pay for storage plus any additional IOPS and throughput provisioned beyond the baseline performance. The new gp3 limits are available in all AWS Commercial Regions and AWS GovCloud (US) Regions where gp3 volumes are available. To get started and learn more, please visit the Amazon EBS user guide.
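The unchanged pricing model above can be made concrete with a small helper. The new limits are from the announcement; the 3,000 IOPS and 125 MiB/s baselines are the standard gp3 included performance, and billing applies only to what you provision beyond them.

```python
GP3_MAX_SIZE_GIB = 64 * 1024       # 64 TiB (up from 16 TiB)
GP3_MAX_IOPS = 80_000              # up from 16,000
GP3_MAX_THROUGHPUT_MIBPS = 2_000   # up from 1,000
GP3_BASELINE_IOPS = 3_000          # included with every gp3 volume
GP3_BASELINE_THROUGHPUT = 125      # MiB/s included with every gp3 volume

def billable_extras(iops, throughput_mibps):
    """Units billed beyond the gp3 baseline: you pay for storage plus any
    provisioned IOPS/throughput above the included baseline."""
    if iops > GP3_MAX_IOPS or throughput_mibps > GP3_MAX_THROUGHPUT_MIBPS:
        raise ValueError("exceeds the new gp3 per-volume limits")
    return {
        "extra_iops": max(0, iops - GP3_BASELINE_IOPS),
        "extra_throughput_mibps": max(0, throughput_mibps - GP3_BASELINE_THROUGHPUT),
    }
```

A volume provisioned at the new maximums (80,000 IOPS, 2,000 MiB/s) is billed for 77,000 extra IOPS and 1,875 extra MiB/s on top of storage.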

lex
#lex#update#support

AWS Compute Optimizer now supports 99 additional Amazon Elastic Compute Cloud (Amazon EC2) instance types. These enhancements help you identify additional savings opportunities across your EC2 instances without specialized knowledge or manual analysis. Compute Optimizer has expanded support to include the latest generation Compute Optimized (C8gn, C8gd), General Purpose (M8i, M8i-flex, M8gd), Memory Optimized (R8i, R8i-flex, R8gd), and Storage Optimized (I8ge) instance types. This expansion enables Compute Optimizer to help you take advantage of the price-to-performance improvements offered by the newest instance types. This new feature is available in all AWS Regions where Compute Optimizer is available except the AWS GovCloud (US) and the China Regions. For more information about Compute Optimizer, visit our product page and documentation. You can start using Compute Optimizer through the AWS Management Console, AWS CLI, or AWS SDK.

lexec2
#lex#ec2#new-feature#improvement#enhancement#support

Starting today, AWS WAF’s Targeted Bot Control, Fraud, and DDoS Prevention Rule Groups are available in the AWS Asia Pacific (Taipei), Asia Pacific (Bangkok), and Mexico (Central) regions. These features help customers stay protected against sophisticated bots, application layer DDoS and account takeover attacks. AWS WAF is a web application firewall that helps you protect your web application resources against common web exploits and bots that can affect availability, compromise security, or consume excessive resources. To see the full list of regions where AWS WAF is currently available, visit the AWS Region Table. For more information about the service, visit the AWS WAF page. For more information about pricing, visit the AWS WAF Pricing page.

waf
#waf#ga

Amazon Relational Database Service (RDS) for Db2 now offers Reserved Instances with up to 47% cost savings compared to On-Demand prices. The option to use Reserved Instances is available for all supported instance types. Amazon RDS for Db2 Reserved Instances provide size flexibility for both Bring Your Own License (BYOL) and Db2 license purchased through AWS Marketplace. With Reserved Instances size flexibility, the discounted rate for Reserved Instances automatically applies to usage of any size in the same instance family. For example, if you purchase a db.r7i.2xlarge Reserved Instance in US East (N. Virginia), the discounted rate of this Reserved Instance can automatically apply to 2 db.r7i.xlarge instances. For information on RDS Reserved Instances, refer to Reserved DB instances for Amazon RDS. You can purchase Reserved Instances through the AWS Management Console, AWS CLI, or AWS SDK. For detailed pricing information and purchase options, refer to Amazon RDS for Db2 Pricing.
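The size-flexibility example above follows from AWS's standard instance-size normalization factors (large = 4 units, xlarge = 8, 2xlarge = 16, and so on, within one family). A quick sketch:

```python
# Standard AWS normalization factors within an instance family
# (units relative to a "small" = 1).
NORMALIZATION = {"large": 4, "xlarge": 8, "2xlarge": 16,
                 "4xlarge": 32, "8xlarge": 64, "16xlarge": 128}

def instances_covered(ri_size, running_size):
    """How many running instances of running_size one Reserved Instance
    of ri_size fully covers within the same instance family."""
    return NORMALIZATION[ri_size] // NORMALIZATION[running_size]
```

This reproduces the announcement's example: a db.r7i.2xlarge Reserved Instance covers two db.r7i.xlarge instances, since 16 units / 8 units = 2.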

lexrds
#lex#rds#support

Today, we are excited to announce support for DoWhile loops in Amazon Bedrock Flows. With this powerful new capability, you can create iterative, condition-based workflows directly within your Amazon Bedrock flows, using Prompt nodes, AWS Lambda functions, Amazon Bedrock Agents, Amazon Bedrock Flows inline code, Amazon Bedrock Knowledge Bases, Amazon Simple Storage Service (Amazon S3), […]

bedrocklambdas3
#bedrock#lambda#s3#support#new-capability

In this post, we explore how we built a multi-agent conversational AI system using Amazon Bedrock that delivers knowledge-grounded property investment advice. We explore the agent architecture, model selection strategy, and comprehensive continuous evaluation system that facilitates quality conversations while facilitating rapid iteration and improvement.

bedrock
#bedrock#improvement

In the benefits administration industry, claims processing is a vital operational pillar that makes sure employees and beneficiaries receive timely benefits, such as health, dental, or disability payments, while controlling costs and adhering to regulations like HIPAA and ERISA. In this post, we examine the typical benefit claims processing workflow and identify where generative AI-powered automation can deliver the greatest impact.

bedrock
#bedrock

Amazon Managed Streaming for Apache Kafka (Amazon MSK) has added support for Express brokers in eight additional AWS Regions: AWS GovCloud (US-West), AWS GovCloud (US-East), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Asia Pacific (Osaka), Europe (Zurich), Israel (Tel Aviv), and Asia Pacific (Hong Kong). Express brokers are a new broker type for Amazon MSK Provisioned designed to deliver up to 3x more throughput per broker, scale up to 20x faster, and reduce recovery time by 90% as compared to standard Apache Kafka brokers. Express brokers come pre-configured with Kafka best practices by default, support all Kafka APIs, and provide the same low-latency performance that Amazon MSK customers expect, so they can continue using existing client applications without any changes. You can now create an MSK cluster with Express brokers in these AWS Regions from the Amazon MSK console. To get started, visit the MSK product page, pricing page, and the Amazon MSK Developer Guide.

kafkamsk
#kafka#msk#support

Amazon Bedrock AgentCore Runtime, Browser, and Code Interpreter services now support Amazon Virtual Private Cloud (VPC) connectivity, AWS PrivateLink, AWS CloudFormation, and resource tagging, enabling developers to deploy AI agents with enhanced enterprise security and infrastructure automation capabilities. AgentCore Runtime enables you to deploy and scale dynamic AI agents securely using any framework, protocol, or model. AgentCore Browser enables web-based interactions such as form filling, data extraction, and QA testing, while AgentCore Code Interpreter provides secure execution of agent-generated code. With VPC support, you can now securely connect AgentCore Runtime, Browser, and Code Interpreter services to private resources such as databases, internal APIs, and services within your VPC without internet exposure. AWS PrivateLink provides private connectivity between your VPC and Amazon Bedrock AgentCore services, while CloudFormation support enables automated resource provisioning through infrastructure as code. Resource tagging allows you to implement comprehensive cost allocation, access control, and resource organization across your AgentCore deployments. Amazon Bedrock AgentCore is currently in preview and available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Frankfurt). To learn more, see Configuring VPC for AgentCore and Use Interface VPC endpoints (AWS PrivateLink) with AgentCore. For CloudFormation resources, visit the AgentCore CloudFormation Reference, and to get started with tagging, see the Tagging AgentCore resources.

bedrockagentcorecloudformation
#bedrock#agentcore#cloudformation#preview#ga#support

AWS is announcing the availability of high performance storage optimized Amazon EC2 I7i instances in the AWS Europe (Milan) and US West (N. California) Regions. Powered by 5th Gen Intel Xeon processors with an all-core turbo frequency of 3.2 GHz, these new instances deliver up to 23% better compute performance and more than 10% better price performance over previous generation I4i instances. Powered by 3rd generation AWS Nitro SSDs, I7i instances offer up to 45 TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances. I7i instances offer the best compute and storage performance for x86-based storage optimized instances in Amazon EC2, making them ideal for I/O-intensive and latency-sensitive workloads that demand very high random IOPS performance with real-time latency when accessing small to medium sized datasets. Additionally, the torn write prevention feature supports block sizes of up to 16 KB, enabling customers to eliminate database performance bottlenecks. I7i instances are available in eleven sizes - nine virtual sizes up to 48xlarge and two bare metal sizes - delivering up to 100 Gbps of network bandwidth and 60 Gbps of Amazon Elastic Block Store (EBS) bandwidth. To learn more, visit the I7i instances page.

ec2
#ec2#now-available#support

Amazon Redshift Concurrency Scaling is now available in the AWS Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Osaka), Asia Pacific (Thailand), Europe (Milan), Middle East (Bahrain), Mexico (Central), and AWS GovCloud (US-West) Regions. With the Amazon Redshift Concurrency Scaling feature, you can support thousands of concurrent users and concurrent queries with consistently fast query performance. Concurrency Scaling elastically scales query processing power, adding resources to your Redshift cluster transparently in seconds so queries are processed with minimal wait time. Amazon Redshift customers with an active Redshift cluster earn up to one hour of free Concurrency Scaling credits, which is sufficient for the concurrency needs of most customers. Concurrency Scaling also enables you to specify usage controls, providing predictable month-to-month costs even during periods of fluctuating analytical demand. To enable Amazon Redshift Concurrency Scaling, set the Concurrency Scaling Mode to Auto in the AWS Management Console. You can allocate Concurrency Scaling usage to specific user groups and workloads, control the number of Concurrency Scaling clusters that can be used, and monitor performance and usage metrics in Amazon CloudWatch. To learn more about Concurrency Scaling, including regional availability, see our documentation and pricing page.

redshiftcloudwatch
#redshift#cloudwatch#now-available#support

AWS Network Firewall, a managed service that makes it easy to deploy essential network protections for your Amazon VPCs, now provides enhanced default rules to handle TLS client hellos and HTTP requests that are split across multiple packets. This update introduces new application-layer drop established and alert established default stateful actions, enabling customers to maintain security controls while supporting modern TLS implementations and large HTTP requests. These enhancements help customers implement robust security policies without writing complex custom rules. Security teams can now effectively inspect and filter traffic where key information is segmented across multiple packets, while maintaining visibility through detailed logging options, making it easier to secure applications that use modern protocols and encryption standards. This capability is available in all AWS Regions where AWS Network Firewall is supported. To learn more, refer to the AWS Network Firewall service documentation.

lexrds
#lex#rds#update#enhancement#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8gn instances, powered by the latest-generation AWS Graviton4 processors, are available in the Europe (Frankfurt), Europe (Stockholm), and Asia Pacific (Singapore) AWS Regions. The new instances provide up to 30% better compute performance than Graviton3-based Amazon EC2 C7gn instances. Amazon EC2 C8gn instances feature the latest 6th generation AWS Nitro Cards and offer up to 600 Gbps network bandwidth, the highest network bandwidth among network optimized EC2 instances. Take advantage of the enhanced networking capabilities of C8gn to scale performance and throughput while optimizing the cost of running network-intensive workloads such as network virtual appliances, data analytics, and CPU-based artificial intelligence and machine learning (AI/ML) inference. For increased scalability, C8gn instances offer instance sizes up to 48xlarge, up to 384 GiB of memory, and up to 60 Gbps of bandwidth to Amazon Elastic Block Store (EBS). C8gn instances support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters. C8gn instances are now available in the following AWS Regions: US East (N. Virginia), US West (Oregon, N. California), Europe (Frankfurt, Stockholm), and Asia Pacific (Singapore). To learn more, see Amazon C8gn Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.

ec2rdsgraviton
#ec2#rds#graviton#ga#now-available#support

Today we’re announcing Research and Engineering Studio (RES) on AWS 2025.09, which brings support for fractional GPUs, simplified AMI management, and enhanced deployment flexibility. This release also expands regional availability to include four additional AWS commercial Regions. Research and Engineering Studio on AWS is an open source solution that provides a web-based portal for administrators to create and manage secure cloud-based research and engineering environments. RES enables scientists and engineers to access powerful Windows and Linux virtual desktops with pre-installed applications and shared resources, without requiring cloud expertise. Version 2025.09 adds support for Amazon EC2 g6f instances, enabling GPU fractionalization for more efficient resource utilization in graphics-intensive workloads. The release also introduces Systems Manager Parameter Alias support for AMI IDs, simplifying the management of project-specific images, and enables integration with existing Amazon Cognito user pools for streamlined authentication setup during deployment. Administrators can now also customize CIDR ranges in the AWS CloudFormation external resources template for better network planning and integration with existing resources. This release expands regional availability to include Asia Pacific (Osaka), Asia Pacific (Jakarta), Middle East (UAE), and South America (São Paulo). To learn more about RES 2025.09, including detailed release notes and deployment instructions, visit the Research and Engineering Studio documentation or check out the RES GitHub repository.

lexec2cloudformation
#lex#ec2#cloudformation#now-available#integration#support

Today, AWS announces the general availability of new capabilities within AWS Billing and Cost Management that enable customers to manage their AWS spend across multiple organizations through a single AWS account. Customers can now share custom billing views containing cost management data with other AWS accounts outside their organization, and can combine multiple custom billing views to create new consolidated views. These features enable FinOps teams to create consolidated views of cost management data spanning multiple organizations, which can be accessed through AWS Cost Explorer to monitor, analyze, and forecast spending patterns, or used with AWS Budgets to monitor AWS costs. This helps customers operating multiple subsidiaries or business units as separate organizations on AWS manage their AWS spend through a single AWS account. Support for custom billing views containing cost management data for multiple organizations is available in all AWS Regions, excluding the AWS GovCloud Regions and the AWS China Regions. To get started, visit Billing View within the Cost Management Preferences page in the AWS Billing and Cost Management console and create a new custom billing view, or see the Billing View user guide.

forecastorganizations
#forecast#organizations#ga#support

Today, Amazon CloudWatch announces support for a new tag-based telemetry experience to help customers monitor their metrics and set up their alarms using AWS resource tags. This new capability simplifies monitoring cloud infrastructure at scale by automatically adapting alarms and metrics analysis as resources change. DevOps engineers and cloud administrators can now create dynamic monitoring views that align with their organizational structure using their existing AWS resource tags. Tag-based query filtering eliminates the manual overhead of updating alarms and dashboards after deployments, freeing teams to focus on innovation rather than maintenance. This provides faster, targeted insights that match how teams organize their systems. Teams can query AWS default metrics using their existing resource tags, making it easier to troubleshoot issues and maintain operational visibility while focusing on core business initiatives. CloudWatch tag-based filtering is available in the following Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Mumbai), Asia Pacific (Osaka), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), and South America (São Paulo). To get started, simply enable tag enriched telemetry with one click in the Amazon CloudWatch Settings, or through the AWS Command Line Interface (AWS CLI) and AWS SDKs, to use your existing AWS resource tags to monitor your infrastructure. Learn more on the Amazon CloudWatch documentation page.

novardscloudwatch
#nova#rds#cloudwatch#ga#support#new-capability

AWS X-Ray, a service that helps developers analyze and debug distributed applications by providing request tracing capabilities, now offers adaptive sampling to solve a common challenge for DevOps teams, Site Reliability Engineers (SREs), and application developers. These customers often face a difficult trade-off: setting sampling rates too low risks missing critical traces during incidents, while setting them too high unnecessarily increases observability costs during normal operations. Today, with adaptive sampling, you can automatically adjust sampling rates within user-defined limits to ensure you capture the most important traces precisely when you need them. This helps development teams reduce mean time to resolution (MTTR) during incidents by providing comprehensive trace data for root cause analysis, while maintaining cost-efficient sampling rates during normal operations. Adaptive sampling supports two approaches, Sampling Boost and Anomaly Span Capture, which can be applied independently or combined. Customers can use Sampling Boost to temporarily increase sampling rates when anomalies are detected to capture complete traces, and Anomaly Span Capture to ensure anomaly-related spans are always captured, even when the full trace isn't sampled. Adaptive sampling is currently available in all commercial Regions where AWS X-Ray is offered. For more information, see the X-Ray documentation and the CloudWatch pricing page for X-Ray pricing details.

cloudwatch
#cloudwatch#support
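The trade-off described above is governed by a sampling rule's `FixedRate` (the steady-state fraction of matched requests to trace) and `ReservoirSize` (requests per second sampled before the rate applies). A minimal sketch of such a rule, built as the payload for X-Ray's `CreateSamplingRule` API; the rule name and values are illustrative, and any adaptive-sampling boost limits would be additional fields not shown here:

```python
# Build an X-Ray SamplingRule payload (the dict passed to
# xray_client.create_sampling_rule(SamplingRule=rule)).
def make_sampling_rule(name: str, fixed_rate: float, reservoir_size: int) -> dict:
    return {
        "RuleName": name,
        "Priority": 100,                  # lower numbers are evaluated first
        "FixedRate": fixed_rate,          # fraction of matched requests to sample
        "ReservoirSize": reservoir_size,  # req/sec sampled before FixedRate applies
        "ServiceName": "*",               # match every service...
        "ServiceType": "*",
        "Host": "*",
        "HTTPMethod": "*",
        "URLPath": "*",
        "ResourceARN": "*",
        "Version": 1,
    }

# 5% steady-state sampling plus 1 guaranteed trace per second.
rule = make_sampling_rule("checkout-service", fixed_rate=0.05, reservoir_size=1)
```

Adaptive sampling's Sampling Boost then raises the effective rate above this baseline (within your configured limits) when anomalies are detected.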

Allowed AMIs, the Amazon EC2 account-wide setting that enables you to limit the discovery and use of Amazon Machine Images (AMIs) within your AWS accounts, adds support for four new parameters: marketplace codes, deprecation time, creation date, and AMI names. Previously, you could specify accounts or owner aliases that you trust in your Allowed AMIs setting. Starting today, you can use the four new parameters to define additional criteria that further reduce the risk of inadvertently launching instances with non-compliant or unauthorized AMIs. Marketplace codes can be provided to limit the use of Marketplace AMIs, the deprecation time and creation date parameters can be used to limit the use of outdated AMIs, and the AMI name parameter can be used to restrict usage to AMIs with specific naming patterns. You can also leverage Declarative Policies to configure these parameters and perform AMI governance across your organization. These additional parameters are now supported in all AWS Regions, including the AWS China (Beijing) Region, operated by Sinnet, the AWS China (Ningxia) Region, operated by NWCD, and AWS GovCloud (US). To learn more, please visit the documentation.

ec2
#ec2#launch#ga#support
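A sketch of how the four new criteria might be combined into one Allowed AMIs policy. The field names below (`MarketplaceProductCodes`, `ImageNames`, `CreationDateAfter`) are illustrative assumptions, not the documented setting schema, and the product code and name pattern are placeholders; consult the EC2 Allowed AMIs documentation for the exact shape:

```python
# Illustrative Allowed AMIs criteria: trusted provider + marketplace code
# + name pattern + maximum AMI age. Field names are assumptions.
from datetime import datetime, timedelta, timezone

def build_image_criteria(max_age_days: int = 365) -> dict:
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return {
        "ImageProviders": ["amazon"],            # trusted owner alias (existing parameter)
        "MarketplaceProductCodes": ["abc123"],   # hypothetical marketplace code
        "ImageNames": ["al2023-ami-*"],          # restrict to a naming pattern
        # Exclude AMIs created more than max_age_days ago:
        "CreationDateAfter": cutoff.strftime("%Y-%m-%dT%H:%M:%SZ"),
    }

criteria = build_image_criteria(365)
```

Each criterion narrows the set further, so an AMI must satisfy all of them to be discoverable and launchable.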

You can now preview your Amazon S3 Tables directly in the S3 console without having to write a SQL query. You can view the schema and sample rows of your tables stored in S3 Tables to better understand and gather key information about your data quickly, without any setup. You can preview tables in the S3 console in all AWS Regions where S3 Tables are available. You only pay for S3 requests to read a portion of your table. See Amazon S3 pricing and the S3 User Guide for pricing details and to learn more.

s3
#s3#preview#ga

Amazon RDS for PostgreSQL 18.0 is now available in the Amazon RDS Database Preview Environment, allowing you to evaluate the latest PostgreSQL features while leveraging the benefits of a fully managed database service. This preview environment provides a sandbox where you can test applications and explore new PostgreSQL 18.0 capabilities before they become generally available. PostgreSQL 18.0 includes "skip scan" support for multicolumn B-tree indexes and improves WHERE clause handling for OR and IN conditions. It introduces parallel Generalized Inverted Index (GIN) builds and updates join operations. It now supports Universally Unique Identifiers Version 7 (UUIDv7), which combines timestamp-based ordering with traditional UUID uniqueness to boost performance in high-throughput distributed systems. Observability improvements show buffer usage counts and index lookups during query execution, along with per-connection I/O utilization metrics. Please refer to the RDS PostgreSQL release documentation for more details. Amazon RDS Database Preview Environment database instances are retained for a maximum period of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots that are created in the preview environment can only be used to create or restore database instances within the preview environment. You can use the PostgreSQL dump and load functionality to import or export your databases from the preview environment. Amazon RDS Database Preview Environment database instances are priced as per the pricing in the US East (Ohio) Region.

rds
#rds#preview#generally-available#now-available#update#improvement
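To see why UUIDv7 values order by creation time, here is a minimal generator following the RFC 9562 bit layout (48-bit millisecond timestamp, 4-bit version, 12 + 62 random bits around the variant field). This is a from-scratch sketch to illustrate the format, not PostgreSQL 18's implementation:

```python
# Minimal UUIDv7 per RFC 9562: leading 48-bit unix-ms timestamp makes
# values sort by creation time; random bits preserve uniqueness.
import os
import time
import uuid

def uuidv7() -> uuid.UUID:
    ts_ms = time.time_ns() // 1_000_000                         # 48-bit ms timestamp
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF      # 12 random bits
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)  # 62 random bits
    value = (ts_ms & ((1 << 48) - 1)) << 80   # timestamp in the top 48 bits
    value |= 0x7 << 76                        # version nibble = 7
    value |= rand_a << 64
    value |= 0b10 << 62                       # RFC 4122 variant bits
    value |= rand_b
    return uuid.UUID(int=value)

first = uuidv7()
time.sleep(0.002)          # later timestamp, so `second` sorts after `first`
second = uuidv7()
```

Because the timestamp leads, successive values land near each other in a B-tree index, which is the source of the high-throughput insert performance mentioned above.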

With the Amazon EMR 7.10 runtime, Amazon EMR has introduced EMR S3A, an improved implementation of the open source S3A file system connector. In this post, we showcase the enhanced read and write performance advantages of using Amazon EMR 7.10.0 runtime for Apache Spark with EMR S3A as compared to EMRFS and the open source S3A file system connector.

s3emr
#s3#emr

AWS Lambda now offers Code Signing in GovCloud Regions (AWS GovCloud (US-West) and AWS GovCloud (US-East)), which allows administrators to ensure that only trusted and verified code is deployed to Lambda functions. This feature uses AWS Signer, a managed code signing service. When code is deployed, Lambda checks the signatures to confirm the code hasn't been altered and is signed by trusted developers. Administrators can create Signing Profiles in AWS Signer and use AWS Identity and Access Management (IAM) to manage user access. Within Lambda, they can specify allowed signing profiles for each function and configure whether to warn or reject deployments if signature checks fail. There is no extra charge for using this feature. For more details, you can refer to the AWS Region table, the AWS blog, the Lambda developer guide, or the Signer developer guide.

lambdaiam
#lambda#iam#support
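The enforce-or-warn choice described above is made per code signing configuration. A sketch of the request payload for Lambda's `create_code_signing_config` API; the signing profile ARN is a placeholder, and while the parameter names match the boto3 API shape as we understand it, verify them against your SDK version:

```python
# Payload for lambda_client.create_code_signing_config(**cfg).
# 'Enforce' rejects deployments that fail signature checks; 'Warn' only logs.
def code_signing_config(profile_version_arn: str, enforce: bool = True) -> dict:
    return {
        "Description": "Only code signed by trusted developers may be deployed",
        "AllowedPublishers": {
            "SigningProfileVersionArns": [profile_version_arn]
        },
        "CodeSigningPolicies": {
            "UntrustedArtifactOnDeployment": "Enforce" if enforce else "Warn"
        },
    }

cfg = code_signing_config(
    "arn:aws:signer:us-gov-west-1:123456789012:"
    "/signing-profiles/myProfile/abcd1234"  # placeholder profile version ARN
)
```

The returned configuration ARN is then attached to a function (for example via `put_function_code_signing_config`) so Lambda checks signatures on every deployment.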

In this post, we explore commonly used Amazon CloudWatch metrics and alarms for OpenSearch Serverless, walking through the process of selecting relevant metrics, setting appropriate thresholds, and configuring alerts. This guide will provide you with a comprehensive monitoring strategy that complements the serverless nature of your OpenSearch deployment while maintaining full operational visibility.

opensearchcloudwatch
#opensearch#cloudwatch

Region switch in Amazon Application Recovery Controller (ARC) is now available in the Asia Pacific (New Zealand) Region. Region switch allows you to orchestrate the specific steps to operate your cross-AWS account application resources out of another AWS Region. It provides dashboards for real-time visibility into the recovery process and gathers data from across resources and accounts required for reporting to regulators and compliance teams. Region switch supports failover and failback for active/passive multi-Region approaches, and shift-away and return for active/active multi-Region approaches. When you create a Region switch plan, it is replicated to all the Regions your application operates in. This removes dependencies on the Region you are leaving for your recovery. To get started, build a Region switch plan using the ARC console, API, or CLI. To learn more, visit the ARC Region switch documentation and pricing page.

rds
#rds#ga#now-available#support

Today, we are announcing the availability of Route 53 Resolver Query Logging in Asia Pacific (New Zealand), enabling you to log DNS queries that originate in your Amazon Virtual Private Cloud (Amazon VPC). With query logging enabled, you can see which domain names have been queried, the AWS resources from which the queries originated - including source IP and instance ID - and the responses that were received.  Route 53 Resolver is the Amazon provided DNS server that is available by default in all Amazon VPCs. Route 53 Resolver responds to DNS queries from AWS resources within a VPC for public DNS records, Amazon VPC-specific DNS names, and Amazon Route 53 private hosted zones. With Route 53 Resolver Query Logging, customers can log DNS queries and responses for queries originating from within their VPCs, whether those queries are answered locally by Route 53 Resolver, or are resolved over the public internet, or are forwarded to on-premises DNS servers via Resolver Endpoints. You can share your query logging configurations across multiple accounts using AWS Resource Access Manager (RAM). You can also choose to send your query logs to Amazon S3, Amazon CloudWatch Logs, or Amazon Data Firehose.  There is no additional charge to use Route 53 Resolver Query Logging, although you may incur usage charges from Amazon S3, Amazon CloudWatch, or Amazon Data Firehose. To learn more about Route 53 Resolver Query Logging or to get started, visit the Route 53 Resolver product page or the Route 53 documentation.

s3rdscloudwatch
#s3#rds#cloudwatch#now-available
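Enabling query logging is a two-step flow: create a logging configuration pointing at a destination, then associate it with a VPC. A sketch of the parameters for the `route53resolver` client's `create_resolver_query_log_config` call; the log group ARN is a placeholder, and the parameter names follow the API shape as we understand it, so verify against your SDK:

```python
# Parameters for route53resolver.create_resolver_query_log_config(**params).
# DestinationArn may point at an S3 bucket, CloudWatch log group, or
# Data Firehose stream, per the announcement above.
import uuid

def query_log_config_params(name: str, destination_arn: str) -> dict:
    return {
        "Name": name,
        "DestinationArn": destination_arn,
        "CreatorRequestId": str(uuid.uuid4()),  # idempotency token
    }

params = query_log_config_params(
    "vpc-dns-logs",
    "arn:aws:logs:ap-southeast-6:123456789012:log-group:/route53/queries",  # placeholder
)
# The call returns a config Id, which you pass to
# associate_resolver_query_log_config(ResolverQueryLogConfigId=..., ResourceId=<vpc-id>).
```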

Amazon GameLift Servers now supports a new AWS Local Zone in Dallas, Texas (us-east-1-dfw-2). You can use this Local Zone to deploy GameLift Fleets with EC2 C6gn, C6i, C6in, M6g, M6i, M6in, M8g, and R6i instances. Local Zones place AWS services closer to major player population and IT centers where no AWS region exists. From the Amazon GameLift Servers Console, you can enable the Dallas Local Zone and add it to your fleets, just as you would with any other Region or Local Zone. With this launch, game studios can run latency-sensitive workloads such as real-time multiplayer gaming, responsive AR/VR experiences, and competitive tournaments closer to players in the Dallas metro area. Local Zones help deliver single-digit millisecond latency, giving players a smoother, more responsive experience by reducing network distance between your servers and players. For more information on AWS Local Zones, please see here. To see a complete list of supported regions and local zones for Amazon GameLift Servers, visit the Amazon GameLift Servers documentation. For pricing, please visit the Amazon GameLift Servers Instance Pricing page.

ec2
#ec2#launch#ga#support

Today, AWS eliminated the network bandwidth burst duration limitations for Amazon EC2 I7i and I8g instances on sizes larger than 4xlarge. This update doubles the network bandwidth available at all times for I7i and I8g instances on sizes larger than 4xlarge. Previously, these instance sizes had a baseline bandwidth and used a network I/O credit mechanism to burst beyond their baseline on a best-effort basis. Now these instance sizes can sustain their maximum performance indefinitely. With this improvement, customers running memory- and network-intensive workloads on larger instance sizes can consistently maintain their maximum network bandwidth without interruption, delivering more predictable performance for applications that require sustained high-throughput network connectivity. This change applies only to instance sizes larger than 4xlarge; smaller instances continue to operate with their existing baseline and burst bandwidth configurations. Amazon EC2 I7i and I8g instances are designed for I/O-intensive workloads that require rapid data access and real-time latency from storage. These instances excel at handling transactional, real-time, distributed databases, including MySQL, PostgreSQL, HBase, and NoSQL solutions like Aerospike, MongoDB, ClickHouse, and Apache Druid. They're also optimized for real-time analytics platforms such as Apache Spark, data lakehouses, and AI LLM pre-processing for training. These instances have up to 1.5 TiB of memory and 45 TB of local instance storage. They deliver up to 100 Gbps of network bandwidth and 60 Gbps of dedicated bandwidth for Amazon Elastic Block Store (EBS). To learn more, see Amazon EC2 I7i and I8g instances. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.

ec2
#ec2#update#improvement

AI agents are evolving beyond basic single-task helpers into more powerful systems that can plan, critique, and collaborate with other agents to solve complex problems. Deep Agents—a recently introduced framework built on LangGraph—bring these capabilities to life, enabling multi-agent workflows that mirror real-world team dynamics. The challenge, however, is not just building such agents but […]

bedrockagentcorelex
#bedrock#agentcore#lex

In this post, we show you how to integrate Amazon Bedrock Guardrails with third-party tokenization services to protect sensitive data while maintaining data reversibility. By combining these technologies, organizations can implement stronger privacy controls while preserving the functionality of their generative AI applications and related systems.

bedrockorganizations
#bedrock#organizations#ga
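The core pattern is straightforward: sensitive values are swapped for opaque tokens before text reaches the model, and restored afterwards. A toy, in-memory sketch of that reversibility; a real deployment would use the third-party tokenization service's vault and API (names below are hypothetical), not a local dict:

```python
# Toy reversible tokenizer: tokenize() replaces a sensitive value with an
# opaque token; detokenize() restores originals after model output returns.
import secrets

class Tokenizer:
    def __init__(self) -> None:
        self._vault: dict[str, str] = {}   # token -> original value

    def tokenize(self, value: str) -> str:
        token = f"TKN_{secrets.token_hex(8)}"
        self._vault[token] = value
        return token

    def detokenize(self, text: str) -> str:
        # Restore every known token found in the model's response.
        for token, original in self._vault.items():
            text = text.replace(token, original)
        return text

tk = Tokenizer()
safe = f"Customer {tk.tokenize('jane@example.com')} requested a refund."
restored = tk.detokenize(safe)   # original email is recovered
```

The guardrail and the model only ever see `safe`; reversibility comes from the vault, which is why the vault (not the model pipeline) must be the secured component.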

The Nova Act extension is a new IDE-integrated tool that enables developers to create browser automation agents using natural language through the Nova Act model. It offers features like Builder Mode, chat capabilities, and predefined templates, streamlining the development process so developers never have to leave their preferred development environment.

nova
#nova

In this post, we explore how Metagenomi built a scalable database and search solution for over 1 billion protein vectors using LanceDB and Amazon S3. The solution enables rapid enzyme discovery by transforming proteins into vector embeddings and implementing a serverless architecture that combines AWS Lambda, AWS Step Functions, and Amazon S3 for efficient nearest neighbor searches.

lambdas3step functions
#lambda#s3#step functions
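At its heart, the search step is nearest-neighbor lookup over embeddings. A stdlib-only brute-force sketch of the query semantics; at Metagenomi's scale (over 1 billion vectors) this scan is replaced by LanceDB's approximate indexes over Amazon S3, and the vectors below are made-up toy data:

```python
# Exact nearest-neighbor search by cosine similarity (brute force).
import math

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest(query: list, vectors: dict) -> str:
    """Return the id of the stored vector most similar to the query."""
    return max(vectors, key=lambda vid: cosine(query, vectors[vid]))

# Toy protein embeddings (real ones would have hundreds of dimensions).
proteins = {
    "enzA": [0.9, 0.1, 0.0],
    "enzB": [0.0, 1.0, 0.2],
    "enzC": [0.8, 0.2, 0.1],
}
best = nearest([1.0, 0.0, 0.0], proteins)
```

An approximate index trades a little recall for sub-linear query time, which is what makes the billion-vector case feasible on object storage.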

Orchestrating machine learning pipelines is complex, especially when data processing, training, and deployment span multiple services and tools. In this post, we walk through a hands-on, end-to-end example of developing, testing, and running a machine learning (ML) pipeline using workflow capabilities in Amazon SageMaker, accessed through the Amazon SageMaker Unified Studio experience. These workflows are powered by Amazon Managed Workflows for Apache Airflow.

sagemakerunified studiolex
#sagemaker#unified studio#lex

Three weeks ago, I published a post about the new AWS Region in New Zealand (ap-southeast-6). This led to an incredible opportunity to visit New Zealand, where I met passionate builders and presented at several events including Serverless and Platform Engineering meetup, AWS Tools and Programming meetup, AWS Cloud Clubs in Auckland, and AWS Community […]

amazon qq developereksstep functions
#amazon q#q developer#eks#step functions

This post explores how Amazon Bedrock AgentCore helps you transition your agentic applications from experimental proof of concept to production-ready systems. We follow the journey of a customer support agent that evolves from a simple local prototype to a comprehensive, enterprise-grade solution capable of handling multiple concurrent users while maintaining security and performance standards.

bedrockagentcorerds
#bedrock#agentcore#rds#experimental#support

Trellix, a global leader in cybersecurity solutions, emerged in 2022 from the merger of McAfee Enterprise and FireEye. To address exponential log growth across their multi-tenant, multi-Region infrastructure, Trellix used Amazon OpenSearch Service, Amazon OpenSearch Ingestion, and Amazon S3 to modernize their log infrastructure. In this post, we share how, by adopting these AWS solutions, Trellix enhanced their system’s performance, availability, and scalability while reducing operational overhead.

s3opensearchopensearch serviceopensearch ingestion
#s3#opensearch#opensearch service#opensearch ingestion

Amazon OpenSearch Ingestion is a powerful data ingestion pipeline that AWS customers use for many different purposes, such as observability, analytics, and zero-ETL search. Many customers today push logs, traces, and metrics from their applications to OpenSearch Ingestion to store and analyze this data. Today, we are happy to announce that OpenSearch Ingestion pipelines now […]

opensearchopensearch serviceopensearch ingestion
#opensearch#opensearch service#opensearch ingestion

In this post, we outline a comprehensive guide for setting up single sign-on from Tableau desktop to Amazon Redshift using integration with IAM Identity Center and PingFederate as the identity provider (IdP) with an LDAP based data store, AWS Directory Service for Microsoft Active Directory.

redshiftiamiam identity centerdirectory service
#redshift#iam#iam identity center#directory service#integration

Amazon Bedrock has expanded its model offerings with the addition of Qwen 3 foundation models, enabling users to access and deploy them in a fully managed, serverless environment. These models feature both mixture-of-experts (MoE) and dense architectures to support diverse use cases, including advanced code generation, multi-tool business automation, and cost-optimized AI reasoning.

bedrock
#bedrock#now-available#support

AWS launches DeepSeek-V3.1 as a fully managed model in Amazon Bedrock. DeepSeek-V3.1 is a hybrid open-weight model that switches between a thinking mode for detailed step-by-step analysis and a non-thinking mode for faster responses.

bedrock
#bedrock#launch#now-available

This post was written with Alex Gnibus of Stability AI. Stability AI Image Services are now available in Amazon Bedrock, offering ready-to-use media editing capabilities delivered through the Amazon Bedrock API. These image editing tools expand on the capabilities of Stability AI’s Stable Diffusion 3.5 models (SD3.5) and Stable Image Core and Ultra models, which […]

bedrocklex
#bedrock#lex#now-available

Amazon Bedrock now offers Stability AI Image Services: 9 tools that improve how businesses create and modify images. The technology extends Stable Diffusion and Stable Image models to give you precise control over image creation and editing. Clear prompts are critical—they provide art direction to the AI system. Strong prompts control specific elements like tone, […]

bedrock
#bedrock

In this post, we show how to integrate AWS DLCs with MLflow to create a solution that balances infrastructure control with robust ML governance. We walk through a functional setup that your team can use to meet your specialized requirements while significantly reducing the time and resources needed for ML lifecycle management.

sagemaker
#sagemaker

Amazon SageMaker Unified Studio is a single data and AI development environment that brings together data preparation, analytics, machine learning (ML), and generative AI development in one place. By unifying these workflows, it saves teams from managing multiple tools and makes it straightforward for data scientists, analysts, and developers to build, train, and deploy ML […]

sagemakerunified studio
#sagemaker#unified studio

As modern data architectures expand, Apache Iceberg has become a widely popular open table format, providing ACID transactions, time travel, and schema evolution. In table format v2, Iceberg introduced merge-on-read, improving delete and update handling through positional delete files. These files improve write performance but can slow down reads when not compacted, since Iceberg must […]

emr
#emr#update

Today, we’re excited to announce new capabilities that further simplify the local testing experience for Lambda functions and serverless applications through integration with LocalStack, an AWS Partner, in the AWS Toolkit for Visual Studio Code. In this post, we will show you how you can enhance your local testing experience for serverless applications with LocalStack using AWS Toolkit.

lambdalocalstack
#lambda#localstack#integration

When you’re spinning up your Amazon OpenSearch Service domain, you need to figure out the storage, instance types, and instance count; decide the sharding strategies and whether to use a cluster manager; and enable zone awareness. Generally, we consider storage as a guideline for determining instance count, but not other parameters. In this post, we […]

opensearchopensearch service
#opensearch#opensearch service

AWS recently announced that Amazon SageMaker now offers Amazon Simple Storage Service (Amazon S3) based shared storage as the default project file storage option for new Amazon SageMaker Unified Studio projects. This feature addresses the deprecation of AWS CodeCommit while providing teams with a straightforward and consistent way to collaborate on project files across the […]

sagemakerunified studios3
#sagemaker#unified studio#s3

In this post, we walk through the different considerations for using Amazon MSK Replicator over Apache Kafka’s MirrorMaker 2, and help you choose the right replication solution for your use case. We also discuss how to make applications using Amazon Managed Streaming for Apache Kafka (Amazon MSK) resilient to disasters using a multi-Region Kafka architecture using MSK Replicator.

kafkamsk
#kafka#msk

This blog post discusses how to create a seamless integration between Amazon SageMaker Lakehouse and Snowflake for modern data analytics. It specifically demonstrates how organizations can enable Snowflake to access tables in AWS Glue Data Catalog (stored in S3 buckets) through SageMaker Lakehouse Iceberg REST Catalog, with security managed by AWS Lake Formation. The post provides a detailed technical walkthrough of implementing this integration, including creating IAM roles and policies, configuring Lake Formation access controls, setting up catalog integration in Snowflake, and managing data access permissions. While four different patterns exist for accessing Iceberg tables from Snowflake, the blog focuses on the first pattern using catalog integration with SigV4 authentication and Lake Formation credential vending.

sagemakers3iamglueorganizations
#sagemaker#s3#iam#glue#organizations#ga

In this post, we discuss how to build a fully automated, scheduled Spark processing pipeline using Amazon EMR on EC2, orchestrated with Step Functions and triggered by EventBridge. We walk through how to deploy this solution using AWS CloudFormation, process data from a public COVID-19 dataset in Amazon Simple Storage Service (Amazon S3), and store the aggregated results in Amazon S3.

s3ec2emrcloudformationeventbridgestep functions
#s3#ec2#emr#cloudformation#eventbridge#step functions

Last week, Strands Agents, the AWS open source SDK for agentic AI, hit 1 million downloads and earned 3,000+ GitHub stars less than 4 months after launching as a preview in May 2025. With Strands Agents, you can build production-ready, multi-agent AI systems in a few lines of code. We’ve continuously improved features including support […]

#launch#preview#support

AWS has launched new EC2 M4 and M4 Pro Mac instances based on Apple M4 Mac mini, offering improved performance over previous generations and featuring up to 48GB memory and 2TB storage for iOS/macOS development workloads.

ec2
#ec2#launch

AWS is announcing integrated LocalStack support in the AWS Toolkit for Visual Studio Code that makes it easier than ever for developers to test and debug serverless applications locally. This enhancement builds upon our recent improvements to the Lambda development experience, including the console to IDE integration and remote debugging capabilities we launched in July 2025, continuing our commitment to simplify serverless development on AWS.

lambdalocalstack
#lambda#localstack#launch#improvement#enhancement#integration

Today, we announce the availability of a Security Technical Implementation Guide (STIG) for Amazon Linux 2023 (AL2023), developed through collaboration between Amazon Web Services (AWS) and the Defense Information Systems Agency (DISA). The STIG guidelines are important for U.S. Department of Defense (DOD) and Federal customers needing strict security compliance derived from the National Institute […]

#now-available

Delightful developer experience is an important part of building serverless applications efficiently, whether you’re creating an automation script or developing a complex enterprise application. While AWS Lambda has transformed modern application development in the cloud with its serverless computing model, developers spend significant time working in their local environments. They rely on familiar IDEs, debugging […]

lexlambda
#lex#lambda

Summer has drawn to a close here in Utrecht, where I live in the Netherlands. In two weeks, I’ll be attending AWS Community Day 2025, hosted at the Kinepolis Jaarbeurs Utrecht on September 24. The single-day event will bring together over 500 cloud practitioners from across the Netherlands, featuring 25 breakout sessions across five technical […]

eks
#eks

This two-part series explores the different architectural patterns, best practices, code implementations, and design considerations essential for successfully integrating generative AI solutions into both new and existing applications. In this post, we focus on patterns applicable for architecting real-time generative AI applications.

My LinkedIn feed was absolutely packed this week with pictures from the AWS Heroes Summit event in Seattle. It was heartwarming to see so many familiar faces and new Heroes coming together. For those not familiar with the AWS Heroes program, it’s a global community recognition initiative that honors individuals who make outstanding contributions to […]

amazon qq developerec2
#amazon q#q developer#ec2#update

When expanding your Graviton deployment across multiple AWS Regions, careful planning helps you navigate considerations around regional instance type availability and capacity optimization. This post shows how to implement advanced configuration strategies for Graviton-enabled EC2 Auto Scaling groups across multiple Regions, helping you maximize instance availability, reduce costs, and maintain consistent application performance even in AWS Regions with limited Graviton instance type availability.

ec2graviton
#ec2#graviton#ga
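
The strategy the post describes maps onto an Auto Scaling group MixedInstancesPolicy that lists several Graviton instance types in priority order, so the group can fall back when the preferred type is scarce in a Region. A minimal sketch; the launch template name and instance types are illustrative, not the post's exact configuration:

```python
# Hedged sketch of a MixedInstancesPolicy for a Graviton-first Auto Scaling
# group. Listing multiple types/generations lets EC2 Auto Scaling fall back
# in Regions where one type has limited availability.
mixed_instances_policy = {
    "LaunchTemplate": {
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "app-lt",  # hypothetical template name
            "Version": "$Latest",
        },
        # Order matters with the "prioritized" strategy: earlier entries win.
        "Overrides": [
            {"InstanceType": "m7g.xlarge"},
            {"InstanceType": "m6g.xlarge"},
            {"InstanceType": "c7g.xlarge"},
        ],
    },
    "InstancesDistribution": {
        "OnDemandAllocationStrategy": "prioritized",
    },
}
# Passed to create_auto_scaling_group / update_auto_scaling_group as
# MixedInstancesPolicy=mixed_instances_policy.
```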

Amazon Prime Day 2025 achieved record-breaking sales with enhanced AI shopping features, while AWS infrastructure handled unprecedented volumes of data—including 1.7 trillion Lambda invocations per day, DynamoDB peaking at 151 million requests per second, and a 77% increase in Fargate container tasks—showcasing the massive scalability required to power the four-day shopping event.

lambdadynamodbfargate
#lambda#dynamodb#fargate#ga

As I was preparing for this week’s roundup, I couldn’t help but reflect on how database technology has evolved over the past decade. It’s fascinating to see how architectural decisions made years ago continue to shape the way we build modern applications. This week brings a special milestone that perfectly captures this evolution in cloud […]

bedrockec2
#bedrock#ec2

In this post, we explore an efficient approach to managing encryption keys in a multi-tenant SaaS environment through centralization, addressing challenges like key proliferation, rising costs, and operational complexity across multiple AWS accounts and services. We demonstrate how implementing a centralized key management strategy using a single AWS KMS key per tenant can maintain security and compliance while reducing operational overhead as organizations scale.

lexorganizations
#lex#organizations#ga
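
The one-key-per-tenant pattern can be sketched in a few lines: each tenant maps to a dedicated KMS key, and an encryption context binds every ciphertext to its tenant. The alias naming scheme and tenant IDs below are hypothetical, not the post's actual convention:

```python
# Sketch of per-tenant key lookup and an kms:Encrypt request, assuming a
# one-KMS-key-per-tenant design. Alias scheme and tenant IDs are hypothetical.

def tenant_key_alias(tenant_id: str) -> str:
    """Map a tenant to its dedicated KMS key alias."""
    return f"alias/tenant-{tenant_id}"

def build_encrypt_request(tenant_id: str, plaintext: bytes) -> dict:
    """Build kms:Encrypt parameters; the EncryptionContext ties the
    ciphertext to the tenant, so cross-tenant decryption attempts fail."""
    return {
        "KeyId": tenant_key_alias(tenant_id),
        "Plaintext": plaintext,
        "EncryptionContext": {"tenant": tenant_id},
    }

req = build_encrypt_request("acme", b"customer-record")
# With boto3: boto3.client("kms").encrypt(**req)
print(req["KeyId"])  # alias/tenant-acme
```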

AWS Lambda cold start latency can impact performance for latency-sensitive applications, with function initialization being the primary contributor to startup delays. Lambda SnapStart addresses this challenge by reducing cold start times from several seconds to sub-second performance for Java, Python, and .NET runtimes with minimal code changes. This post explains SnapStart's underlying mechanisms and provides performance optimization recommendations for applications using this feature.

lambda
#lambda
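
The "minimal code changes" claim holds because SnapStart is a configuration flag rather than a code feature. A sketch of the UpdateFunctionConfiguration parameters (the function name is hypothetical):

```python
# Enabling SnapStart is a one-line configuration change; the snapshot is
# taken when a function version is published, so it applies to published
# versions and aliases, not $LATEST.
snapstart_params = {
    "FunctionName": "my-java-api",  # hypothetical function name
    "SnapStart": {"ApplyOn": "PublishedVersions"},
}
# With boto3:
#   boto3.client("lambda").update_function_configuration(**snapstart_params)
print(snapstart_params["SnapStart"]["ApplyOn"])  # PublishedVersions
```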

Let me start this week’s update with something I’m especially excited about – the upcoming BeSA (Become a Solutions Architect) cohort. BeSA is a free mentoring program that I host along with a few other AWS employees on a volunteer basis to help people excel in their cloud careers. Last week, the instructors’ lineup was […]

sagemakerhyperpod
#sagemaker#hyperpod#update

Imagine an AI assistant that doesn’t just respond to prompts – it reasons through goals, acts, and integrates with real-time systems. This is the promise of agentic AI. According to Gartner, by 2028 over 33% of enterprise applications will embed agentic capabilities – up from less than 1% today. While early generative AI efforts focused […]

#ga

This two-part series shows how Karrot developed a new feature platform, which consists of three main components: feature serving, a stream ingestion pipeline, and a batch ingestion pipeline. This post covers the process of collecting features in real-time and batch ingestion into an online store, and the technical approaches for stable operation.

#new-feature

In this post, we demonstrate how to deploy the DeepSeek-R1-Distill-Qwen-32B model using AWS DLCs for vLLMs on Amazon EKS, showcasing how these purpose-built containers simplify deployment of this powerful open source inference engine. This solution can help you solve the complex infrastructure challenges of deploying LLMs while maintaining performance and cost-efficiency.

lexeks
#lex#eks

Cold starts are an important consideration when building applications on serverless platforms. In AWS Lambda, they refer to the initialization steps that occur when a function is invoked after a period of inactivity or during rapid scale-up. While typically brief and infrequent, cold starts can introduce additional latency, making it essential to understand them, especially […]

lambda
#lambda

With AWS Outposts racks, you can extend AWS infrastructure, services, APIs, and tools to on-premises locations. Providing performant, stable, and resilient network connections to both the parent AWS Region and the local network is essential to maintaining uninterrupted service. The release of two new Amazon CloudWatch metrics, VifConnectionStatus and VifBgpSessionState, gives you greater visibility into the operational status of the Outpost network connections. In this post, we discuss how to use these metrics to quickly identify network disruptions, using additional data points that can help reduce time to resolution.

cloudwatchoutposts
#cloudwatch#outposts
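
A natural way to reduce time to resolution with these metrics is an alarm that fires when the connection status drops. A hedged sketch of PutMetricAlarm parameters — the namespace, dimension names, and Outpost ID shown are assumptions; check the Outposts CloudWatch documentation for the exact values:

```python
# Hedged sketch: alarm on the new VifConnectionStatus metric. Namespace and
# dimension names are assumptions, and the Outpost ID is hypothetical.
alarm = {
    "AlarmName": "outpost-vif-connection-down",
    "Namespace": "AWS/Outposts",
    "MetricName": "VifConnectionStatus",
    "Dimensions": [{"Name": "OutpostId", "Value": "op-1234567890abcdef0"}],
    "Statistic": "Minimum",
    "Period": 60,
    "EvaluationPeriods": 3,              # three bad minutes before alarming
    "Threshold": 1,
    "ComparisonOperator": "LessThanThreshold",  # below 1 = connection down
}
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```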

As cloud spending continues to surge, organizations must focus on strategic cloud optimization to maximize business value. This blog post explores key insights from MIT Technology Review's publication on cloud optimization, highlighting the importance of viewing optimization as a continuous process that encompasses all six AWS Well-Architected pillars.

organizations
#organizations#ga

Modern applications increasingly rely on Serverless technologies such as Amazon Web Services (AWS) Lambda to provide scalability, cost efficiency, and agility. The Serverless Applications Lens for the AWS Well-Architected Framework focuses on how to design, deploy, and architect your Serverless applications to overcome some of these challenges. Powertools for AWS Lambda is a developer toolkit that […]

lambda
#lambda

In this post, you’ll learn how Zapier has built their serverless architecture focusing on three key aspects: using Lambda functions to build isolated Zaps, operating over a hundred thousand Lambda functions through Zapier's control plane infrastructure, and enhancing security posture while reducing maintenance efforts by introducing automated function upgrades and cleanup workflows into their platform architecture.

lambda
#lambda

Quorum queues are now available on Amazon MQ for RabbitMQ from version 3.13. Quorum queues are a replicated First-In, First-Out (FIFO) queue type that uses the Raft consensus algorithm to maintain data consistency. Quorum queues on RabbitMQ version 3.13 lack one key feature compared to classic queues: message prioritization. However, RabbitMQ version 4.0 introduced support […]

#now-available#support
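
For application code, the queue type is chosen at declaration time via the `x-queue-type` argument. A minimal sketch of the declare arguments (queue name hypothetical; with the pika client, this dict would be unpacked into `channel.queue_declare(**declare_args)`):

```python
# Sketch: declaring a quorum queue from a RabbitMQ client. Quorum queues
# are always durable and replicated via Raft, per the announcement above.
declare_args = {
    "queue": "orders",                          # hypothetical queue name
    "durable": True,                            # quorum queues must be durable
    "arguments": {"x-queue-type": "quorum"},    # default type is "classic"
}
print(declare_args["arguments"]["x-queue-type"])  # quorum
```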

Today, AWS introduced Amazon Simple Queue Service (Amazon SQS) fair queues, a new feature that mitigates noisy neighbor impact in multi-tenant systems. With fair queues, your applications become more resilient and easier to operate, reducing operational overhead while improving quality of service for your customers. In distributed architectures, message queues have become the backbone of […]

sqs
#sqs#ga#new-feature
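
Assuming the per-tenant message group ID pattern described in the launch materials, tagging each message with the tenant it belongs to is what lets SQS balance delivery so one noisy tenant cannot starve the rest. A hedged sketch (queue URL and tenant ID are hypothetical):

```python
# Hedged sketch: building a send_message request where the message group ID
# identifies the tenant, so fair queues can apply per-tenant fairness.
def build_send(queue_url: str, tenant_id: str, body: str) -> dict:
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": tenant_id,  # fairness is keyed off this value
    }

msg = build_send(
    "https://sqs.us-east-1.amazonaws.com/123456789012/jobs",  # hypothetical
    "tenant-a",
    "{}",
)
# boto3.client("sqs").send_message(**msg)
```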

In this post, we show you how to implement comprehensive monitoring for Amazon Elastic Kubernetes Service (Amazon EKS) workloads using AWS managed services. This solution demonstrates building an EKS platform that combines flexible compute options with enterprise-grade observability using AWS native services and OpenTelemetry.

lexeks
#lex#eks

In this post, you'll learn how Scale to Win configured their network topology and AWS WAF to protect against DDoS events that reached peaks of over 2 million requests per second during the 2024 US presidential election campaign season. The post details how they implemented comprehensive DDoS protection by segmenting human and machine traffic, using tiered rate limits with CAPTCHA, and preventing CAPTCHA token reuse through AWS WAF Bot Control.

waf
#waf#ga
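
One tier of the rate-limit-plus-CAPTCHA pattern can be expressed as a WAF rate-based rule whose action is `Captcha` rather than `Block`. The limit and names below are illustrative, not Scale to Win's actual values:

```python
# Hedged sketch of a tiered rate limit: once an IP exceeds the limit within
# WAF's 5-minute evaluation window, requests get a CAPTCHA challenge instead
# of an outright block. Limit and rule name are illustrative.
rule = {
    "Name": "captcha-tier-1",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,               # per-IP requests in the window
            "AggregateKeyType": "IP",
        }
    },
    "Action": {"Captcha": {}},           # challenge, don't block
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "captchaTier1",
    },
}
# This dict is one entry in a WebACL's Rules list.
```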

AWS Transform for VMware is a service that tackles cloud migration challenges by significantly reducing manual effort and accelerating the migration of critical VMware workloads to AWS Cloud. In this post, we highlight its comprehensive capabilities, including streamlined discovery and assessment, intelligent network conversion, enhanced security and compliance, and orchestrated migration execution.

transform for vmware
#transform for vmware

Today, we are excited to announce the general availability of the AWS .NET Distributed Cache Provider for Amazon DynamoDB. This is a seamless, serverless caching solution that enables .NET developers to efficiently manage their caching needs across distributed systems. Consistent caching is a difficult problem in distributed architectures, where maintaining data integrity and performance across […]

dynamodb
#dynamodb#generally-available

This blog was co-authored by Afroz Mohammed and Jonathan Nunn, Software Developers on the AWS PowerShell team. We’re excited to announce the general availability of the AWS Tools for PowerShell version 5, a major update that brings new features and improvements in security, along with a few breaking changes. New Features You can now cancel […]

#generally-available#new-feature#update#improvement

In this post, we explore the Amazon Bedrock baseline architecture and how you can secure and control network access to your various Amazon Bedrock capabilities within AWS network services and tools. We discuss key design considerations, such as using Amazon VPC Lattice auth policies, Amazon Virtual Private Cloud (Amazon VPC) endpoints, and AWS Identity and Access Management (IAM) to restrict and monitor access to your Amazon Bedrock capabilities.

bedrockiam
#bedrock#iam
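
One way to express the VPC-restriction idea with IAM is a policy that denies model invocation unless the request arrives through a specific VPC endpoint. A hedged sketch — the endpoint ID is hypothetical, and this is one pattern among those the post covers, not its complete baseline:

```python
# Hedged sketch of an identity-based policy denying Bedrock invocation
# unless it comes through a specific VPC endpoint (aws:SourceVpce).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBedrockOutsideVpce",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "*",
            "Condition": {
                # Hypothetical endpoint ID; requests not arriving via this
                # endpoint are denied regardless of other Allow statements.
                "StringNotEquals": {"aws:SourceVpce": "vpce-0abc123def4567890"}
            },
        }
    ],
}
```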

Software development is far more than just writing code. In reality, a developer spends a large amount of time maintaining existing applications and fixing bugs. For example, migrating a Go application from the older AWS SDK for Go v1 to the newer v2 can be a significant undertaking, but it’s a crucial step to future-proof […]

amazon qq developer
#amazon q#q developer

Organizations managing large audio and video archives face significant challenges in extracting value from their media content. Consider a radio network with thousands of broadcast hours across multiple stations and the challenges they face to efficiently verify ad placements, identify interview segments, and analyze programming patterns. In this post, we demonstrate how you can automatically transform unstructured media files into searchable, analyzable content.

organizations
#organizations#ga

In this post, we share how Pegasystems (Pega) built Launchpad, its new SaaS development platform, to solve a core challenge in multi-tenant environments: enabling secure customer customization. By running tenant code in isolated environments with AWS Lambda, Launchpad offers its customers a secure, scalable foundation, eliminating the need for bespoke code customizations.

lambda
#lambda#launch#ga

We’re excited to announce that the AWS Deploy Tool for .NET now supports deploying .NET applications to select ARM-based compute platforms on AWS! Whether you’re deploying from Visual Studio or using the .NET CLI, you can now target cost-effective ARM infrastructure like AWS Graviton with the same streamlined experience you’re used to. Why deploy to […]

graviton
#graviton#support

Version 4.0 of the AWS SDK for .NET has been released for general availability (GA). V4 has been in development for a little over a year in our SDK’s public GitHub repository with 13 previews being released. This new version contains performance improvements, consistency with other AWS SDKs, and bug and usability fixes that required […]

#preview#ga#improvement

Today, AWS launches the developer preview of the AWS IoT Device SDK for Swift. The IoT Device SDK for Swift empowers Swift developers to create IoT applications for Linux and Apple macOS, iOS, and tvOS platforms using the MQTT 5 protocol. The SDK supports Swift 5.10+ and is designed to help developers easily integrate with […]

#launch#preview#support

We are excited to announce the Developer Preview of the Amazon S3 Transfer Manager for Rust, a high-level utility that speeds up and simplifies uploads and downloads with Amazon Simple Storage Service (Amazon S3). Using this new library, developers can efficiently transfer data between Amazon S3 and various sources, including files, in-memory buffers, memory streams, […]

s3
#s3#preview

In Part 1 of our blog posts for .NET Aspire and AWS Lambda, we showed you how .NET Aspire can be used for running and debugging .NET Lambda functions. In this part, Part 2, we’ll show you how to take advantage of the .NET Aspire programming model for best practices and for connecting dependent resources […]

lambda
#lambda

In a recent post we gave some background on .NET Aspire and introduced our AWS integrations with .NET Aspire that integrate AWS into the .NET dev inner loop for building applications. The integrations included how to provision application resources with AWS CloudFormation or AWS Cloud Development Kit (AWS CDK) and using Amazon DynamoDB local for […]

lambdadynamodbcloudformation
#lambda#dynamodb#cloudformation#ga#integration

.NET Aspire is a new way of building cloud-ready applications. In particular, it provides an orchestration for local environments in which to run, connect, and debug the components of distributed applications. Those components can be .NET projects, databases, containers, or executables. .NET Aspire is designed to have integrations with common components used in distributed applications. […]

#integration

AWS announces important configuration updates coming July 31st, 2025, affecting AWS SDKs and CLIs default settings. Two key changes include switching the AWS Security Token Service (STS) endpoint to regional and updating the default retry strategy to standard. These updates aim to improve service availability and reliability by implementing regional endpoints to reduce cross-regional dependencies and introducing token-bucket throttling for standardized retry behavior. Organizations should test their applications before the release date and can opt-in early or temporarily opt-out of these changes. These updates align with AWS best practices for optimal service performance and security.

organizations
#organizations#ga#update
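
Both new defaults can be opted into early (or pinned to the old behavior) through the standard SDK configuration environment variables, which all AWS SDKs and the CLI honor. A minimal sketch:

```python
# Sketch: opting in early to the announced defaults via standard SDK
# environment variables (the same keys also work in ~/.aws/config).
import os

# Regional STS endpoint instead of the legacy global endpoint
# (set to "legacy" to temporarily keep the old behavior):
os.environ["AWS_STS_REGIONAL_ENDPOINTS"] = "regional"
# Standard retry mode with token-bucket throttling:
os.environ["AWS_RETRY_MODE"] = "standard"
os.environ["AWS_MAX_ATTEMPTS"] = "3"

print(os.environ["AWS_RETRY_MODE"])  # standard
```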

This blog was co-authored by Afroz Mohammed and Jonathan Nunn, Software Developers on the AWS PowerShell team. In August 2024, the AWS Tools for PowerShell team announced the upcoming release of the AWS Tools for PowerShell V5. The first preview release of V5 is now available. Preview 1 of the AWS Tools for PowerShell V5 […]

#preview#now-available