Generative AI

Generative AI applications, AI agents, RAG systems, and prompt engineering with Amazon Bedrock, Amazon Q, and AgentCore

44 updates

Today, we're announcing sheet tooltips in Amazon Quick Sight. Dashboard authors can now design custom tooltip layouts using free-form layout sheets. These layouts combine charts, key performance indicator (KPI) metrics, text, and other visuals into a single tooltip that renders dynamically when readers hover over data points.

amazon q
#amazon q

This post is cowritten by Renata Salvador Grande, Gabriel Bueno, and Paulo Laurentys at Rede Mater Dei de Saúde. The growing adoption of multi-agent AI systems is redefining critical operations in healthcare. In large hospital networks, where thousands of decisions directly impact cash flow, service delivery times, and the risk of claim denials, the ability […]

bedrock, agentcore
#bedrock#agentcore#ga

This post explores how Amazon SageMaker HyperPod provides a comprehensive solution for inference workloads. We walk you through the platform’s key capabilities for dynamic scaling, simplified deployment, and intelligent resource management. By the end of this post, you’ll understand how to use the HyperPod automated infrastructure, cost optimization features, and performance enhancements to reduce your total cost of ownership by up to 40% while accelerating your generative AI deployments from concept to production.

sagemaker, hyperpod
#sagemaker#hyperpod#enhancement

In this post, we walk through how Guidesly built Jack AI on AWS using AWS Lambda, AWS Step Functions, Amazon Simple Storage Service (Amazon S3), Amazon Relational Database Service (Amazon RDS), Amazon SageMaker AI, and Amazon Bedrock to ingest trip media, enrich it with context, apply computer vision and generative AI, and publish marketing-ready content across multiple channels—securely, reliably, and at scale.

bedrock, sagemaker, lambda, s3, rds, step functions
#bedrock#sagemaker#lambda#s3#rds#step functions

Today, AWS announces increased Amazon Elastic Block Store (Amazon EBS) performance for Amazon EC2 C8gn, M8gn, and R8gn instances in 48xlarge and metal-48xl sizes. EC2 C8gn, M8gn, and R8gn instances are network-optimized instances powered by AWS Graviton4 processors and the latest 6th-generation AWS Nitro Cards. With the latest enhancements to the AWS Nitro System, we have doubled the maximum EBS performance on these instances in 48xlarge and metal-48xl sizes, from 60 Gbps of EBS bandwidth and 240,000 IOPS to 120 Gbps of EBS bandwidth and 480,000 IOPS. Customers running network-intensive workloads that also require additional block storage performance, such as data analytics and high-performance file systems, can benefit from the improved EBS performance. All new C8gn, M8gn, and R8gn instances in 48xlarge and metal-48xl sizes launched starting today will benefit from this performance increase at no additional cost. For already-running instances, customers can stop and start them to enable the performance increase. The higher EBS performance is available in all AWS Regions where these instance types are generally available today. To learn more, see Amazon C8gn, M8gn, and R8gn Instances and EBS-optimized instance types.

ec2, rds, graviton
#ec2#rds#graviton#launch#generally-available#enhancement

With the new Spring AI AgentCore SDK, you can build production-ready AI agents and run them on the highly scalable AgentCore Runtime. The Spring AI AgentCore SDK is an open source library that brings Amazon Bedrock AgentCore capabilities into Spring AI. In this post, we build an AI agent starting with a chat endpoint, then adding streaming responses, conversation memory, and tools for web browsing and code execution.

bedrock, agentcore
#bedrock#agentcore#generally-available

Amazon OpenSearch Serverless introduces support for Derived Source, a new feature that can reduce the amount of storage required for your OpenSearch Serverless collections. With Derived Source, you can skip storing source fields and derive them dynamically when required: OpenSearch Serverless reconstructs the _source field on the fly using the values already stored in the index, eliminating the need to maintain a separate copy of the original document. This can significantly reduce storage consumption, particularly for time-series and log analytics collections where documents contain many indexed fields. You can enable Derived Source at the index level when creating or updating index mappings. Derived Source support is available today in all AWS Regions where Amazon OpenSearch Serverless is supported. For more information, see the Amazon OpenSearch Serverless documentation.

opensearch, opensearch service
#opensearch#opensearch service#new-feature#support

Amazon Redshift further optimizes the processing of top-k queries (queries with ORDER BY and LIMIT clauses) by intelligently skipping irrelevant data blocks, dramatically reducing the amount of data processed and returning results faster. This optimization reorders and adjusts the data blocks to be read based on the ORDER BY column's min/max values, maintaining only the K most qualifying rows in memory. When the ORDER BY column is sorted or partially sorted, Amazon Redshift now processes only the minimal data blocks needed rather than scanning entire tables, eliminating unnecessary I/O and compute overhead. This enhancement particularly benefits top-k queries where the data is stored in descending order (ORDER BY ... DESC LIMIT K) on large tables where qualifying rows are appended at the end of storage. Common examples include:
- Finding the k most recent orders from millions or billions of transactions.
- Retrieving the top-k best-performing or k worst-performing products (top-k in descending order) from a sales catalog containing hundreds of thousands of stock keeping units (SKUs) and millions or billions of associated sales transactions.
- Finding the k most recent or k oldest (top-k in descending order) prompts inferred by a foundation large language model (LLM) out of billions of prompts.
With this new optimization, top-k query performance improves dramatically. The optimization is now available in Amazon Redshift at no additional cost starting with patch release P199, in all AWS Regions where Amazon Redshift is available, and applies automatically to eligible queries without requiring any query rewrites or configuration changes.
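The block-skipping idea can be sketched in a few lines of Python. This is an illustrative model, not Redshift's internal implementation: each block carries min/max metadata for the ORDER BY column (as zone maps do), and a block is skipped entirely once its maximum can no longer beat the current k-th best value.

```python
import heapq

# Toy blocks with per-block min/max metadata for the ORDER BY column.
blocks = [
    {"min": 0,   "max": 99,  "values": list(range(0, 100))},
    {"min": 100, "max": 199, "values": list(range(100, 200))},
    {"min": 200, "max": 299, "values": list(range(200, 300))},
]

def top_k_desc(blocks, k):
    """Return the k largest values, skipping blocks whose max cannot
    beat the current k-th best value."""
    heap = []  # min-heap holding the current top-k candidates
    # Visit blocks with the highest max first so later blocks can be pruned.
    for block in sorted(blocks, key=lambda b: b["max"], reverse=True):
        if len(heap) == k and block["max"] <= heap[0]:
            continue  # no value in this block can enter the top-k
        for v in block["values"]:
            if len(heap) < k:
                heapq.heappush(heap, v)
            elif v > heap[0]:
                heapq.heapreplace(heap, v)
    return sorted(heap, reverse=True)

print(top_k_desc(blocks, 3))  # -> [299, 298, 297]
```

In this sketch only the block covering 200-299 is scanned for k=3; the other two are pruned by their max metadata alone, which is the I/O saving the announcement describes.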

redshift
#redshift#now-available#enhancement

AWS Elastic Disaster Recovery (AWS DRS) now supports IPv6 for both data replication and control plane connections. Customers operating in IPv6-only or dual-stack network environments can now configure AWS DRS to replicate using IPv6, eliminating the need for IPv4 addresses in their disaster recovery setup. AWS DRS minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery. Previously, AWS DRS required IPv4 connectivity for all replication and service communication. Now, customers can set the internet protocol to IPv6 in their replication configuration to use dual-stack endpoints for agent-to-service communication and data replication. This helps customers meet network modernization requirements and enables disaster recovery in environments where IPv4 addresses are unavailable or restricted. Existing replication configurations are not affected and continue to use IPv4 by default. This capability is available in all AWS Regions where AWS DRS is available and where Amazon EC2 supports IPv6. See the AWS Regional Services List for the latest availability information. To learn more about AWS DRS, visit our product page or documentation. To get started, sign in to the AWS Elastic Disaster Recovery Console.

ec2
#ec2#support

Amazon CloudWatch pipelines now supports conditional processing and a new drop events processor, giving you more control over how your log data is transformed. CloudWatch pipelines is a fully managed service that ingests, transforms, and routes log data to CloudWatch without requiring you to manage infrastructure. Until now, processors applied to all log entries uniformly. With conditional processing, you can define rules that determine when a processor runs and which individual log entries it acts on, so you only transform the data that matters. Conditional processing is available across 21 processors including Add Entries, Delete Entries, Copy Values, Grok, Rename Key, and more. For each processor, you can set a "run when" condition to skip the entire processor if the condition is not met, or an entry-level condition to control whether each individual action within the processor is applied. The new Drop Events processor lets you filter out unwanted log entries from third-party pipeline connectors based on conditions you define, helping reduce noise and lower costs. Conditional processing and the Drop Events processor are available at no additional cost in all AWS Regions where CloudWatch pipelines is generally available. Standard CloudWatch Logs ingestion and storage rates still apply. To get started, visit the CloudWatch pipelines page in the Amazon CloudWatch console. To learn more, see the CloudWatch pipelines documentation.
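Conceptually, the two condition levels compose as in this hedged Python sketch. It is an illustrative model, not the CloudWatch pipelines configuration syntax: a "run when" gate can skip a processor entirely, an entry-level condition controls which individual entries are transformed, and a drop-events step filters entries out.

```python
def apply_processor(entries, transform, run_when=None, entry_when=None):
    """Apply `transform` to each entry, honoring both condition levels."""
    if run_when is not None and not run_when(entries):
        return entries  # "run when" failed: skip the whole processor
    return [
        transform(e) if (entry_when is None or entry_when(e)) else e
        for e in entries
    ]

def drop_events(entries, condition):
    """Drop-events analogue: keep only entries that fail the condition."""
    return [e for e in entries if not condition(e)]

logs = [
    {"level": "DEBUG", "msg": "cache miss"},
    {"level": "ERROR", "msg": "timeout"},
]

# Rename-key analogue (`msg` -> `message`), applied only to ERROR entries.
renamed = apply_processor(
    logs,
    transform=lambda e: {"level": e["level"], "message": e["msg"]},
    entry_when=lambda e: e["level"] == "ERROR",
)
filtered = drop_events(renamed, condition=lambda e: e["level"] == "DEBUG")
print(filtered)  # -> [{'level': 'ERROR', 'message': 'timeout'}]
```

The DEBUG entry passes through the rename untouched (its entry-level condition fails) and is then removed by the drop step, mirroring how only "the data that matters" gets transformed.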

cloudwatch
#cloudwatch#generally-available#support

Amazon FSx for NetApp ONTAP second-generation file systems are now available in 4 additional AWS Regions: Europe (London), Asia Pacific (Hyderabad), South America (Sao Paulo), and AWS GovCloud (US-West). Amazon FSx makes it easier and more cost-effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. Second-generation FSx for ONTAP file systems give you greater performance scalability and flexibility than first-generation file systems by allowing you to create or expand file systems with up to 12 highly available (HA) pairs of file servers, providing your workloads with up to 72 GBps of throughput and 1 PiB of provisioned SSD storage. With this regional expansion, second-generation FSx for ONTAP file systems are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Canada (Central), Europe (Frankfurt, Ireland, London, Spain, Stockholm, Zurich), South America (Sao Paulo), Asia Pacific (Hyderabad, Mumbai, Seoul, Singapore, Sydney, Tokyo), and AWS GovCloud (US-West). You can create second-generation Multi-AZ file systems with a single HA pair, and Single-AZ file systems with up to 12 HA pairs. To learn more, visit the FSx for ONTAP user guide.

lex
#lex#launch#ga#now-available#expansion

Amazon OpenSearch Service now provides a unified observability experience that brings together metrics, logs, traces, and AI agent tracing in a single interface. This release introduces native integration with Amazon Managed Service for Prometheus and comprehensive agent tracing capabilities, addressing the dual challenges of prohibitive costs from premium observability platforms and operational complexity from fragmented tooling. Site Reliability Engineers, DevOps Engineers, and Platform Engineering teams can now consolidate their observability stack without costly data duplication or constant context switching between multiple tools. You can now query Prometheus metrics directly using native PromQL syntax alongside logs and traces in OpenSearch UI's observability workspace—without duplicating data. Combined with new application monitoring workflows powered by RED metrics (Rate, Errors, Duration) and AI agent tracing using OpenTelemetry GenAI semantic conventions, operations teams can correlate slow traces to application logs, overlay Prometheus metrics on service dashboards, and trace LLM agent execution—all without switching tools. This live query architecture delivers significant cost reduction compared to premium platforms while maintaining operational excellence. The new unified observability experience is available on OpenSearch UI in 20 AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Asia Pacific (Hong Kong, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm), Canada (Central), and South America (São Paulo). To learn more, visit the OpenSearch Service observability documentation and direct query documentation.

lex, opensearch, opensearch service, rds
#lex#opensearch#opensearch service#rds#ga#integration

Today, AWS Marketplace announces the Discovery API, giving you programmatic access to product and pricing information across the AWS Marketplace catalog — including SaaS, AI agents and tools, AMI, containers, and machine learning models. With the Discovery API, buyers can embed catalog data into internal portals, enrich procurement tools with current pricing and offer terms, and streamline vendor evaluation workflows. Sellers and channel partners can surface product listings, public pricing, and private offer details directly within their own websites and storefronts — helping customers browse, compare, and move to purchase without leaving the partner experience. The API provides access to product descriptions, categories, pricing across public and private offers, and offer terms, so you can build experiences tailored to how your organization discovers and procures software through AWS Marketplace. The AWS Marketplace Discovery API is available in US East (N. Virginia), US West (Oregon), and Europe (Ireland). You can get started by configuring IAM permissions for your AWS account and calling the API through the AWS SDK. For more information, see the AWS Marketplace Discovery API Reference.

iam
#iam#ga

Amazon OpenSearch Serverless now supports Zstandard codecs for index storage, giving customers greater control over the trade-off between storage costs and query performance. With this launch, customers can configure Zstandard compression to achieve up to 32% reduction in index size compared to the default LZ4 codec, helping lower managed storage costs for data-intensive workloads. Customers running large-scale log analytics, observability pipelines, and time-series workloads on Amazon OpenSearch Serverless can benefit most from Zstandard compression, where high data volumes make storage efficiency a significant cost driver. The Zstandard compression algorithm is available in two modes in Amazon OpenSearch Serverless: zstd and zstd_no_dict. Customers can tune the compression level to balance their specific needs: lower levels (e.g., level 1) deliver meaningful storage savings with minimal impact on indexing throughput and query latency, while higher levels (e.g., level 6) maximize compression ratios at the cost of slower indexing speeds. Zstandard codec support is available today in all AWS Regions where Amazon OpenSearch Serverless is supported. To get started, you can specify these codecs in your index settings at creation time. For more information, see the Amazon OpenSearch Serverless documentation.
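As a sketch, the codec is chosen per index at creation time. The setting names below (`index.codec`, `index.codec.compression_level`) follow the open-source OpenSearch codec settings and are an assumption here; confirm the exact names and accepted values against the Amazon OpenSearch Serverless documentation before relying on them.

```python
import json

# Hedged sketch of an index body selecting the Zstandard codec.
index_body = {
    "settings": {
        "index.codec": "zstd_no_dict",      # or "zstd" for dictionary mode
        "index.codec.compression_level": 1,  # 1 = faster, 6 = smaller
    },
    "mappings": {
        "properties": {
            "timestamp": {"type": "date"},
            "message": {"type": "text"},
        }
    },
}

# With an OpenSearch client this body would be passed at index creation,
# e.g. client.indices.create(index="app-logs", body=index_body).
print(json.dumps(index_body, indent=2))
```

Level 1 is the conservative starting point named in the announcement; raising the level trades indexing speed for a smaller index.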

opensearch, ecs
#opensearch#ecs#launch#support

In this post, you will learn how to build stateful MCP servers that request user input during execution, invoke LLM sampling for dynamic content generation, and stream progress updates for long-running tasks. You will see code examples for each capability and deploy a working stateful MCP server to Amazon Bedrock AgentCore Runtime.

bedrock, agentcore
#bedrock#agentcore#update

In this post, we demonstrate how you can build a scalable, multi-tenant configuration service using the tagged storage pattern, an architectural approach that uses key prefixes (like tenant_config_ or param_config_) to automatically route configuration requests to the most appropriate AWS storage service. This pattern maintains strict tenant isolation and supports real-time, zero-downtime configuration updates through event-driven architecture, alleviating the cache staleness problem.
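The prefix-based routing at the heart of the tagged storage pattern can be sketched in a few lines. The prefixes come from the post; the backend names and the default are illustrative assumptions, not the post's actual service choices.

```python
# Map key prefixes to backing stores (backend names are illustrative).
ROUTES = {
    "tenant_config_": "dynamodb",         # per-tenant settings
    "param_config_":  "parameter_store",  # shared parameters
}

def route(key, default="s3"):
    """Return the storage backend responsible for a configuration key."""
    for prefix, backend in ROUTES.items():
        if key.startswith(prefix):
            return backend
    return default  # fall back for untagged keys

print(route("tenant_config_acme_theme"))  # -> dynamodb
print(route("param_config_max_retries"))  # -> parameter_store
print(route("feature_flags_beta"))        # -> s3
```

Because the routing decision is pure string matching on the key, new storage tiers can be added by extending the table without touching callers, which is what keeps tenant isolation and zero-downtime updates tractable.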

#update#support

In this post, we explore where RFT is most effective, using the GSM8K mathematical reasoning dataset as a concrete example. We then walk through best practices for dataset preparation and reward function design, show how to monitor training progress using Amazon Bedrock metrics, and conclude with practical hyperparameter tuning guidelines informed by experiments across multiple models and use cases.

bedrock
#bedrock

Amazon WorkSpaces Advisor is a new AI-powered tool that helps administrators quickly troubleshoot and resolve issues with Amazon WorkSpaces Personal. Using generative AI capabilities, it analyzes WorkSpace configurations, identifies problems, and provides actionable recommendations to restore service and optimize performance. WorkSpaces Advisor streamlines administrative workflows by reducing the time needed to investigate and fix common issues. Administrators can leverage AI-driven insights to proactively maintain their virtual desktop infrastructure, improve end-user experience, and minimize downtime across their WorkSpaces. Amazon WorkSpaces Advisor is now available in all AWS commercial regions where Amazon WorkSpaces is offered. Visit the Amazon WorkSpaces console to access WorkSpaces Advisor and begin troubleshooting your environment. Learn more in the feature blog and user guide.

#ga#now-available

Amazon OpenSearch Service now supports I8ge instances, the latest generation of storage-optimized instances, offering the best performance for storage-intensive workloads. Powered by AWS Graviton4 processors, I8ge instances deliver up to 60% better compute performance compared to previous-generation Graviton2-based storage-optimized Im4gn instances. I8ge instances use the latest third-generation AWS Nitro SSDs, local NVMe storage that delivers up to 55% better real-time storage performance per TB, up to 60% lower storage I/O latency, and up to 75% lower storage I/O latency variability compared to previous-generation Im4gn instances. Built on the AWS Nitro System, these instances offload CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing the performance and security of your workloads. I8ge instances are available in sizes up to 18xlarge with up to 45 TB of instance storage. At 112.5 Gbps, these instances have the highest networking bandwidth among storage-optimized instances available in Amazon OpenSearch Service. I8ge instances support all OpenSearch versions and Elasticsearch (open source) versions 7.9 and 7.10. Amazon OpenSearch Service supports I8ge instances in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Malaysia), Asia Pacific (Mumbai), Asia Pacific (Singapore), and Asia Pacific (Sydney). For Region-specific availability and pricing, visit our pricing page. To learn more about Amazon OpenSearch Service and its capabilities, visit our product page.

opensearch, opensearch service, graviton
#opensearch#opensearch service#graviton#ga#support

With Amazon Bedrock Projects, you can attribute inference costs to specific workloads and analyze them in AWS Cost Explorer and AWS Data Exports. In this post, you will learn how to set up Projects end-to-end, from designing a tagging strategy to analyzing costs.

bedrock
#bedrock

AWS Cost Explorer now brings Amazon Q Developer's generative AI capabilities directly into your cost analysis workflows. You can now use natural language queries to ask Amazon Q questions about your AWS cost and usage data. In addition to answers to your questions, you now also receive automatically updated visualizations in Cost Explorer. This enables faster cost analysis, reduces time to insights, and makes cost visibility accessible to every team member. With this launch, you can start your cost analysis with the new suggested prompts in Cost Explorer. These prompts include commonly asked cost questions like "Show me my top spending services for this month." Amazon Q provides detailed insights while Cost Explorer simultaneously updates with the corresponding visualization, filters, and groupings. You can also ask custom questions in your own words using the new 'Ask Question' button, exploring your spending patterns conversationally. Cost Explorer automatically updates charts and tables when analysis is based on your cost and usage data. When Amazon Q compiles insights from additional datasets such as pricing or anomaly detection, visualizations are displayed in Amazon Q's new artifacts panel. You can continue the conversation with follow-up questions while maintaining full context, allowing you to go from a quick cost check to a deep investigation without switching tools or breaking your workflow. Natural language cost analysis for AWS Cost Explorer is available today in all commercial AWS Regions at no additional charge. To learn more, visit AWS Cost Explorer. To get started, see the user guide.

amazon q, q developer, rds
#amazon q#q developer#rds#launch#ga#update

Smithy Java client code generation is now generally available. You can use it to build type-safe, protocol-agnostic Java clients directly from Smithy models. With Smithy Java, serialization, protocol handling, and request/response lifecycles are all generated automatically from your model. This removes the need to write or maintain any of this code by hand. In this […]

#generally-available

Customers can now create Amazon FSx for OpenZFS file systems in the AWS Asia Pacific (Melbourne) Region, providing fully managed shared file storage built on the OpenZFS file system. Amazon FSx makes it easier and more cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities. Amazon FSx for OpenZFS provides fully managed, cost-effective, shared file storage powered by the popular OpenZFS file system, and is designed to deliver sub-millisecond latencies and multi-GB/s throughput along with rich ZFS-powered data management capabilities (like snapshots, data cloning, and compression). To learn more about Amazon FSx for OpenZFS, visit our product page, and see the AWS Region Table for complete regional availability information.

#launch#now-available#support

Amazon SageMaker Data Agent now supports interactive charting, SQL analytics on Snowflake data sources, and materialized view management in Amazon SageMaker Unified Studio notebooks. Data Agent now provides a complete analytics workflow that goes beyond code generation, enabling you to explore AWS and external data sources, visualize results, and optimize query performance, all with natural language prompts. You can ask "plot monthly revenue trends by region for 2025" and Data Agent generates an interactive chart directly in your notebook, where you can hover over data points and modify the chart without writing code. When your analysis spans AWS and Snowflake, you can query Snowflake tables through external connections and join them with your AWS Glue Data Catalog data in a single prompt. Additionally, you can ask "analyze my notebook and suggest which queries would benefit from materialized views" and the agent recommends optimizations based on your query patterns, creates the views, and sets refresh schedules. To get started, open a notebook in your SageMaker Unified Studio project and use the Data Agent chat panel. These features are available in all AWS Regions where Amazon SageMaker Unified Studio is supported. To learn more, see SageMaker Data Agent in the SageMaker Unified Studio User Guide.

sagemaker, unified studio, glue
#sagemaker#unified studio#glue#support

Now, Amazon OpenSearch Service brings three new agentic AI features to OpenSearch UI. In this post, we show how these capabilities work together to help engineers go from alert to root cause in minutes. We also walk through a sample scenario where the Investigation Agent automatically correlates data across multiple indices to surface a root cause hypothesis.

opensearch, opensearch service
#opensearch#opensearch service#ga

Amazon CloudWatch now supports automatic enablement of Amazon CloudFront Standard access logs, AWS Security Hub CSPM finding logs, and Amazon Bedrock AgentCore memory and gateway logs and traces to CloudWatch Logs. Customers can set up enablement rules that automatically configure telemetry for both existing and newly created resources, ensuring consistent monitoring coverage without manual setup. Enablement rules can be scoped to the organization, specific accounts, or specific resources based on resource tags to standardize telemetry collection. For example, a central security team can create a single rule to automatically send CloudFront access logs and Security Hub findings for all resources across their organization to CloudWatch Logs. CloudWatch's auto-enablement capability is available in all AWS commercial regions. Log ingestion will be billed according to CloudWatch Pricing. Amazon CloudFront access logs and AWS Security Hub CSPM findings support organization-wide enablement rules. Bedrock AgentCore memory and gateway telemetry support account-level enablement rules. To learn more about enablement rules in Amazon CloudWatch, visit the Amazon CloudWatch documentation.

bedrock, agentcore, cloudfront, cloudwatch
#bedrock#agentcore#cloudfront#cloudwatch#ga#support

Smithy Kotlin client code generation is now generally available. With Smithy Kotlin, you can keep client libraries in sync with evolving service APIs. By using client code generation, you can reduce repetitive work and instead, automatically create type-safe Kotlin clients from your service models. In this post, you will learn what Smithy Kotlin client generation is, how it works, and how you can use it.

#generally-available

Amazon WorkSpaces Applications now enables instances in multi-session fleets to stop accepting new user sessions while allowing existing sessions to continue uninterrupted. This capability, known as drain mode, ensures seamless operations during maintenance, scaling, or system updates. Multi-session fleets allow hosting multiple end user sessions on a single instance, helping to maximize usage of the underlying infrastructure (including compute, memory, and storage resources) and lower the overall cost. This new drain mode capability helps administrators manage multi-session environments more seamlessly by preventing disruption to active users. When performing system maintenance, applying security patches, or scaling down resources, administrators can configure instances to gradually empty without abruptly terminating user sessions. This ensures users can complete their work smoothly while new connections are directed to other available instances, maintaining system stability and improving the overall end-user experience. This new feature is available at no additional cost in all AWS Regions where Amazon WorkSpaces Applications is available. Amazon WorkSpaces Applications offers pay-as-you-go pricing. For more information, see Amazon WorkSpaces Applications Pricing. To get started with WorkSpaces Applications, see WorkSpaces applications: Getting started.

#new-feature#update

Amazon SageMaker Data Agent now supports cross-region inference profiles for Japan and Australia through Amazon Bedrock. With this update, inference requests from Data Agent in the Asia Pacific (Tokyo) and Asia Pacific (Sydney) regions are processed within their respective geographies, supporting data sovereignty requirements for customers in Japan and Australia. Data Agent provides an AI-powered conversational experience for data exploration, Python and SQL code generation, troubleshooting, and analytics directly within Amazon SageMaker Unified Studio Notebook and Query Editor. With geo-specific inference through JP-CRIS (Japan Cross-Region Inference) and AU-CRIS (Australia Cross-Region Inference), you can use Data Agent with confidence that your inference requests are routed exclusively within your geography over the AWS Global Network. Customers in regulated industries such as financial services, healthcare, and the public sector can meet data residency requirements while using the full set of Data Agent capabilities. To get started, open a project in SageMaker Unified Studio in a supported region and use Data Agent in notebooks or Query Editor. For more information, see SageMaker Data Agent in the Amazon SageMaker Unified Studio User Guide.

bedrock, sagemaker, unified studio
#bedrock#sagemaker#unified studio#update#support

This post describes a solution that uses fixed camera networks to monitor operational environments in near real-time, detecting potential safety hazards while capturing object floor projections and their relationships to floor markings. While we illustrate the approach through distribution center deployment examples, the underlying architecture applies broadly across industries. We explore the architectural decisions, strategies for scaling to hundreds of sites, reducing site onboarding time, synthetic data generation using generative AI tools like GLIGEN, and other critical technical hurdles we overcame.

rds
#rds

Amazon RDS for Oracle now supports cross-account snapshot sharing for database instances with additional storage volumes. Additional storage volumes allow customers to scale database storage up to 256 TiB by adding up to three storage volumes, each with up to 64 TiB, in addition to the primary storage volume. With this launch, customers can create, share, and copy a database snapshot across AWS accounts for database instances set up with additional storage volumes. Cross-account snapshots enable customers to set up isolated backup environments in separate accounts for compliance requirements and to perform diagnostics, such as investigating production issues by restoring database snapshots in a separate account for development and testing. Cross-account snapshots for database instances with additional storage volumes preserve the storage layout of the original database instance, including the configuration of additional storage volumes. When a snapshot is shared to a target AWS account, authorized users in the target account can restore it to another database instance, copy the snapshot within the same or a different AWS Region, or create independent backups under different AWS Identity and Access Management (IAM) access permissions for backup and disaster recovery. Cross-account snapshot sharing with additional storage volumes is available in all AWS Regions, including AWS GovCloud (US) Regions. Customers can start using this feature today through the AWS Management Console, AWS CLI, or AWS SDKs. To learn more, see Sharing a DB snapshot for Amazon RDS, Copying a DB snapshot for Amazon RDS, and Working with storage in RDS for Oracle in the Amazon RDS User Guide.
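As a hedged sketch, sharing a manual snapshot across accounts maps to the RDS ModifyDBSnapshotAttribute API with the "restore" attribute. The snapshot identifier and account ID below are placeholders, not values from the announcement; the API call is shown in a comment rather than executed.

```python
def share_snapshot_params(snapshot_id, target_account):
    """Build the ModifyDBSnapshotAttribute request that grants
    `target_account` permission to restore a manual snapshot."""
    return {
        "DBSnapshotIdentifier": snapshot_id,
        "AttributeName": "restore",
        "ValuesToAdd": [target_account],
    }

params = share_snapshot_params("prod-oracle-snap-2025-01-01", "123456789012")
print(params)

# With AWS credentials configured, this request would be sent as:
#   import boto3
#   boto3.client("rds").modify_db_snapshot_attribute(**params)
# The target account can then copy the shared snapshot (copy_db_snapshot)
# or restore it (restore_db_instance_from_db_snapshot), as described above.
```

Revoking access is the mirror image: pass the account ID in `ValuesToRemove` instead of `ValuesToAdd`.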

rds, iam
#rds#iam#launch#ga#support

Amazon CloudWatch now supports ingesting AWS Security Hub CSPM findings, enabling customers to centrally analyze and monitor security findings directly in CloudWatch Logs. Security Hub CSPM findings are supported in AWS Security Finding Format (ASFF) and Open Cybersecurity Schema Framework (OCSF) format using CloudWatch Pipelines, providing standardized security data ingestion. Customers can now use CloudWatch Logs Insights to query findings, create metric filters for monitoring, and leverage Amazon S3 Tables integration for advanced analytics, helping security teams identify and respond to threats faster across their AWS environment. With today's launch, customers can automatically enable Security Hub findings delivery to CloudWatch Logs using CloudWatch enablement rules that apply to the entire organization or specific accounts, to standardize security monitoring coverage. For example, a security team can create an enablement rule to automatically send Security Hub findings to CloudWatch Logs for all production accounts, ensuring consistent visibility into security posture. Security Hub findings delivery to CloudWatch Logs is available in all AWS commercial Regions. Security Hub findings are charged at tiered pricing when delivered to CloudWatch Logs. For pricing information, see the CloudWatch pricing page. To learn more about Security Hub findings in CloudWatch Logs and organization-level enablement, visit the Amazon CloudWatch documentation.

s3, cloudwatch
#s3#cloudwatch#launch#ga#integration#support

In this post, we demonstrate how to architect AWS systems that enable AI agents to iterate rapidly through design patterns for both system architecture and code base structure. We first examine the architectural problems that limit agentic development today. We then walk through system architecture patterns that support rapid experimentation, followed by codebase patterns that help AI agents understand, modify, and validate your applications with confidence.

#support

This post is part 3 of the three-part series ‘Enabling high availability of Amazon EC2 instances on AWS Outposts servers’. We provide you with code samples and considerations for implementing custom logic to automate Amazon Elastic Compute Cloud (EC2) relaunch on Outposts servers. This post focuses on guidance for using Outposts servers with third party storage for boot […]

ec2, outposts
#ec2#outposts#launch

The new multipart download support in AWS SDK for .NET Transfer Manager improves the performance of downloading large objects from Amazon Simple Storage Service (Amazon S3). Customers are looking for better performance and parallelization of their downloads, especially when working with large files or datasets. The AWS SDK for .NET Transfer Manager (version 4 only) […]

s3
#s3#support

To support cloud applications that increasingly depend on rich contextual data, AWS is raising the maximum payload size from 256 KB to 1 MB for asynchronous AWS Lambda function invocations, Amazon SQS, and Amazon EventBridge. Developers can use this enhancement to build and maintain context-rich event-driven systems and reduce the need for complex workarounds such as data chunking or external large object storage.

lex, lambda, eventbridge, sqs
#lex#lambda#eventbridge#sqs#enhancement#support

In healthcare, generative AI is transforming how medical professionals analyze data, summarize clinical notes, and generate insights to improve patient outcomes. From automating medical documentation to assisting in diagnostic reasoning, large language models (LLMs) have the potential to augment clinical workflows and accelerate research. However, these innovations also introduce significant privacy, security, and intellectual property challenges.

nova
#nova

In this post, we walk through building a generative AI–powered troubleshooting assistant for Kubernetes. The goal is to give engineers a faster, self-service way to diagnose and resolve cluster issues, cut down Mean Time to Recovery (MTTR), and reduce the cycles experts spend finding the root cause of issues in complex distributed systems.

lex
#lex