Natural Language Processing

Natural language processing, text analysis, translation, chatbots, and conversational AI capabilities

18 updates

Amazon Application Recovery Controller (ARC) Region switch helps customers orchestrate the failover of their multi-Region applications to achieve a bounded recovery time in the event of a Regional impairment. It automates multi-Region disaster recovery, reducing engineering effort and eliminating operational overhead when recovering applications across multiple AWS accounts and Regions. Region switch now includes three new capabilities: post-recovery workflows, native RDS execution blocks, and AWS provider for Terraform support.

Post-recovery workflows. Disaster recovery doesn't end when customers fail over to a standby Region. After orchestrating a failover or failback, customers must prepare the other Region for the next recovery event. Today, this requires manual coordination of scaling, recreating read replicas, and validating configurations. Post-recovery workflows help customers automate these preparation steps. With this launch, post-recovery workflows support the custom action Lambda execution block, Amazon RDS create read replica execution block, ARC Region switch plan execution block, and the manual approval execution block. Customers can create read replicas, run custom logic via Lambda functions, add manual approval gates, and embed child plans for complex orchestration as part of post-recovery. Post-recovery workflows are available for active/passive deployments and can be triggered manually.

RDS execution blocks. Coordinating Amazon RDS database recovery during Regional failover requires manual steps to promote read replicas and recreate replication, introducing delays and errors. Region switch now natively supports two Amazon RDS execution blocks that automate RDS recovery orchestration. The RDS promote read replica execution block orchestrates promotion of a read replica to a standalone instance during failover. The RDS create read replica execution block orchestrates replica creation as part of post-recovery workflows.
AWS provider for Terraform support. Region switch is now supported by the AWS provider for Terraform, enabling customers to manage disaster recovery plans as Infrastructure-as-Code and integrate them into CI/CD pipelines alongside application deployments. To learn more about AWS provider support for Terraform, visit the Terraform provider documentation. To see post-recovery workflows in action, read the post-recovery workflow tutorial. To get started with Region switch, read our launch blog or documentation.
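As an illustration of the custom action Lambda execution block described above, here is a minimal sketch of a post-recovery Lambda function. The event fields, return shape, and preparation steps are hypothetical placeholders; the actual invocation contract should be taken from the ARC Region switch documentation.

```python
def handler(event, context):
    """Hypothetical custom-action Lambda for a post-recovery workflow.

    Region switch would invoke this with details of the plan execution;
    the "region" field and the returned status are illustrative only.
    """
    region = event.get("region", "unknown")
    # Place post-recovery preparation logic here, for example re-scaling
    # a fleet or validating configuration in the recovered-from Region.
    steps = [
        f"validated configuration in {region}",
        f"recreated standby capacity in {region}",
    ]
    return {"status": "SUCCEEDED", "details": steps}
```

A function like this would sit alongside the RDS create read replica and manual approval execution blocks within a post-recovery workflow.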

#lex#lambda#rds#launch#ga#support

AWS Marketplace now supports Concurrent Agreements for SaaS and Professional Services products, enabling buyers to make multiple purchases for the same product within a single AWS account. Previously, buyers could only maintain one active agreement per product per AWS account, requiring sellers to use workarounds to support expansion deals. Concurrent Agreements removes this constraint, allowing different business units to procure independently with their own negotiated terms and pricing.

Both buyers and sellers benefit from the flexibility Concurrent Agreements provides. Buyers can accept multiple offers for the same product without disrupting existing agreements, supporting multi-team procurement within centralized AWS accounts, mid-term expansions, and repeat purchases. Sellers can close multi-business unit deals that couldn't happen before, transact expansions immediately instead of waiting for renewal cycles, and eliminate the operational overhead of managing workarounds.

Concurrent Agreements is enabled by default for all Professional Services listings starting today, with no seller action required. For SaaS listings, sellers must update their AWS Marketplace integration to handle multiple active subscriptions, including updating subscription notifications to use EventBridge and updating entitlement and metering APIs. Starting June 1, 2026, support for Concurrent Agreements will be required for new SaaS products. Sellers who have completed the integration work can opt in to enable Concurrent Agreements for their SaaS products now.

This capability is available in all AWS Regions where AWS Marketplace is supported. Concurrent Agreements purchasing is available on SaaS products where sellers have completed the integration, and is enabled by default for all Professional Services listings. To learn more about enabling Concurrent Agreements as a seller of SaaS products, review the Concurrent Agreements integration lab.

#lex#eventbridge#update#integration#support#expansion

Today, AWS announces the general availability of dynamic dialing mode switching for Amazon Connect Outbound Campaigns, which allows contact center administrators to change between preview and non-preview dialing modes during active campaign execution. Previously, campaigns were locked into their initial dialing mode once started, requiring administrators to stop and restart campaigns to adjust strategies. This launch solves the problem of inflexible dialing strategies that couldn't adapt to real-time business needs and agent availability changes. Dynamic dialing mode switching enables contact centers to optimize agent productivity and campaign efficiency in real time without campaign interruptions. For example, you can automatically switch from progressive dialing to preview mode when handling high-priority contacts that require additional context, then revert when traffic returns to normal patterns. This flexibility is particularly valuable for campaigns with varying contact priorities or fluctuating agent availability throughout the day. Dynamic dialing mode switching is available at no additional cost in all AWS Regions where Amazon Connect Outbound Campaigns is supported: US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Africa (Cape Town). To learn more, see the Amazon Connect Administrator Guide or visit the Amazon Connect website.

#lex#launch#preview#ga#support

AWS recently released significant updates to the Large Model Inference (LMI) container, delivering comprehensive performance improvements, expanded model support, and streamlined deployment capabilities for customers hosting LLMs on AWS. These releases focus on reducing operational complexity while delivering measurable performance gains across popular model architectures.

#lex#ga#update#improvement#enhancement#support

Today, we're announcing the general availability of AWS Security Hub Extended, a new plan that extends unified security operations across your enterprise through a single-vendor experience. This plan helps address the complexity of managing multiple vendor relationships and lengthy procurement cycles by bringing together the best of AWS detection services and curated partner security solutions.

The Security Hub Extended plan delivers three critical advantages. First, it helps streamline procurement by consolidating solution usage into one bill—thereby reducing procurement complexity while preserving direct access to each provider's domain expertise. AWS Enterprise Support customers also benefit from unified Level 1 support from AWS. Second, it enables you to establish more comprehensive protection by bringing together the best of AWS detection services with curated partner solutions across endpoint, identity, email, network, data, browser, cloud, AI, and security operations. Third, it helps enhance operational efficiency by streamlining security findings in a standard format, providing centralized visibility across your security environment while reducing the burden of manual integration work.

You can access and review partner solutions across security categories through the Security Hub console, selecting only the solutions you need with flexible pay-as-you-go or flat-rate pricing—no upfront investments or long-term commitments required. With AWS as the seller of record, the Extended plan may be eligible for AWS Private Pricing opportunities. This gives you the flexibility to add or remove security categories as your business needs evolve, while enabling you to streamline vendor contract negotiations and consolidate billing.

For a list of AWS commercial Regions where Security Hub is available, see the AWS Region table. For more information about pricing, visit the AWS Security Hub pricing page. To get started, visit the AWS Security Hub console or product page.

#lex#launch#integration#support

Starting today, Amazon EC2 M8i and M8i-flex instances are now available in US West (N. California), Europe (Paris), Asia Pacific (Hyderabad), and South America (Sao Paulo) regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The M8i and M8i-flex instances offer up to 15% better price-performance, and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver up to 20% better performance than M7i and M7i-flex instances, with even higher gains for specific workloads. The M8i and M8i-flex instances are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to M7i and M7i-flex instances. M8i-flex instances are the easiest way to get price-performance benefits for a majority of general-purpose workloads like web and application servers, microservices, small and medium data stores, virtual desktops, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. M8i instances are a great choice for all general-purpose workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. The SAP-certified M8i instances offer 13 sizes including 2 bare metal sizes and the new 96xlarge size for the largest applications. To get started, sign in to the AWS Management Console. For more information about the new instances, visit the M8i and M8i-flex page or visit the AWS News blog.

#lex#ec2#ga#now-available

Starting today, Amazon EC2 M8i and M8i-flex instances are now available in Africa (Cape Town) region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The M8i and M8i-flex instances offer up to 15% better price-performance, and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver up to 20% better performance than M7i and M7i-flex instances, with even higher gains for specific workloads. The M8i and M8i-flex instances are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to M7i and M7i-flex instances. M8i-flex instances are the easiest way to get price-performance benefits for a majority of general-purpose workloads like web and application servers, microservices, small and medium data stores, virtual desktops, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. M8i instances are a great choice for all general-purpose workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. The SAP-certified M8i instances offer 13 sizes including 2 bare metal sizes and the new 96xlarge size for the largest applications. To get started, sign in to the AWS Management Console. For more information about the new instances, visit the M8i and M8i-flex page or visit the AWS News blog.

#lex#ec2#ga#now-available

Customers use AWS Lambda to build serverless applications for a wide variety of use cases, from simple API backends to complex data processing pipelines. Lambda's flexibility makes it an excellent choice for many workloads, and with support for up to 10,240 MB of memory, you can now tackle compute-intensive tasks that were previously challenging in a serverless environment. When you configure a Lambda function's memory size, you allocate RAM and Lambda automatically provides proportional CPU power. When you configure 10,240 MB, your Lambda function has access to up to 6 vCPUs.
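To make the memory-to-CPU relationship concrete, here is a minimal handler sketch that reports how many vCPUs the runtime can see. The function name and response fields are illustrative, not from the announcement.

```python
import json
import os

def lambda_handler(event, context):
    # Lambda allocates CPU power in proportion to the configured memory;
    # at the 10,240 MB maximum, a function has access to up to 6 vCPUs.
    # os.cpu_count() reports the cores visible to the runtime environment.
    return {
        "statusCode": 200,
        "body": json.dumps({"visible_vcpus": os.cpu_count()}),
    }
```

The memory size itself is set on the function configuration, for example with `aws lambda update-function-configuration --function-name my-fn --memory-size 10240` (the function name is a placeholder).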

#lex#lambda#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8i and C8i-flex instances are available in the Asia Pacific (Malaysia) and South America (Sao Paulo) regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. These C8i and C8i-flex instances offer up to 15% better price-performance, and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver up to 20% higher performance than C7i and C7i-flex instances, with even higher gains for specific workloads. The C8i and C8i-flex are up to 60% faster for NGINX web applications, up to 40% faster for AI deep learning recommendation models, and 35% faster for Memcached stores compared to C7i and C7i-flex. C8i-flex instances are the easiest way to get price-performance benefits for a majority of compute-intensive workloads like web and application servers, databases, caches, Apache Kafka, Elasticsearch, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. C8i instances are a great choice for all compute-intensive workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. C8i instances offer 13 sizes including 2 bare metal sizes and the new 96xlarge size for the largest applications. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information about the new C8i and C8i-flex instances visit the AWS News blog.

#lex#ec2#kafka#ga#now-available

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i-flex and C7i instances are available in the Africa (Cape Town) region. These instances are powered by custom 4th Generation Intel Xeon Scalable processors (code-named Sapphire Rapids), available only on AWS, and offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers. C7i-flex instances expand the EC2 Flex instances portfolio to provide the easiest way for you to get price-performance benefits for a majority of compute-intensive workloads, and deliver up to 19% better price-performance compared to C6i. C7i-flex instances offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. With C7i-flex instances, you can seamlessly run web and application servers, databases, caches, Apache Kafka, Elasticsearch, and more. C7i instances deliver up to 15% better price-performance versus C6i instances and are a great choice for all compute-intensive workloads, such as batch processing, distributed analytics, ad serving, and video encoding. C7i instances offer larger instance sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology that are used to facilitate efficient offload and acceleration of data operations and optimize performance for workloads. To learn more, visit Amazon EC2 C7i Instances. To get started, sign in to the AWS Management Console.

#lex#ec2#kafka#now-available#support

Amazon Redshift now offers 3-year Serverless Reservations for Amazon Redshift Serverless, a new discounted pricing option that provides up to 45% savings and improved cost predictability for your analytics workloads. With Serverless Reservations, you commit to a specific number of Redshift Processing Units (RPUs) for a 3-year term with a no-upfront payment option. Amazon Redshift Serverless allows you to run and scale analytics without having to provision and manage clusters with a pay-as-you-go pricing model. Serverless Reservations help you further optimize compute costs and improve cost predictability of existing and new workloads on Amazon Redshift Serverless. Managed at the AWS payer account level, Serverless Reservations can be shared between multiple AWS accounts, reducing your compute costs by up to 45% on all Amazon Redshift Serverless workloads in your AWS account. Serverless Reservations are billed hourly and metered per second, offering a consistent billing model (24 hours a day, seven days a week) while maintaining the flexibility offered by Amazon Redshift Serverless. Any usage exceeding the specified RPU level is charged at standard on-demand rates. You can purchase Serverless Reservations via the Amazon Redshift console or by invoking the Serverless Reservations API “create-reservation”. Serverless Reservations are available in all regions where Amazon Redshift Serverless is currently available. To learn more about Amazon Redshift Serverless pricing options, see the Redshift Serverless feature page, Redshift Pricing Page, or the Amazon Redshift Management Guide.
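As a rough sketch of how this billing model composes, the helper below estimates a period's cost under a Serverless Reservation: the committed RPUs are billed around the clock, and usage beyond the commitment falls back to on-demand rates. It simplifies by aggregating usage into RPU-hours, and the rates are caller-supplied inputs, not actual prices — see the Redshift pricing page for those.

```python
def serverless_cost(used_rpu_hours, reserved_rpus, hours_in_period,
                    reserved_rate, on_demand_rate):
    """Estimate cost for one billing period under a Serverless Reservation.

    The reservation is billed for the committed RPUs 24/7
    (reserved_rpus * hours_in_period * reserved_rate); any usage beyond
    the committed level is charged at the on-demand rate. This is a
    simplified aggregate model, not a billing calculator.
    """
    reserved_cost = reserved_rpus * hours_in_period * reserved_rate
    covered_rpu_hours = reserved_rpus * hours_in_period
    overage = max(0.0, used_rpu_hours - covered_rpu_hours)
    return reserved_cost + overage * on_demand_rate
```

With `reserved_rpus=0` the helper reduces to pure on-demand billing, which makes the reservation's break-even point easy to explore for a given workload shape.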

#lex#redshift

Today we are announcing the release of Aurora DSQL Connectors for Go (pgx), Python (asyncpg), and Node.js (WebSocket for Postgres.js) that simplify IAM authentication for customers using standard PostgreSQL drivers to connect to Aurora DSQL clusters. These connectors act as transparent authentication layers that automatically handle IAM token generation, eliminating the need to write token generation code or manually supply IAM tokens. Tokens are automatically generated for each connection, ensuring valid tokens are always used while maintaining full compatibility with existing PostgreSQL driver features. The Postgres.js connector additionally supports the WebSocket protocol, enabling customers to connect to DSQL clusters in environments where TCP connections are not available. These connectors streamline authentication and eliminate security risks associated with traditional user-generated passwords. All three connectors support custom IAM credential providers, giving customers flexibility in how they manage their AWS credentials. To get started, visit the Connectors for Aurora DSQL documentation page. For code examples, visit our GitHub page for pgx for Go, asyncpg for Python, and WebSocket for Postgres.js. Get started with Aurora DSQL for free with the AWS Free Tier. To learn more about Aurora DSQL, visit the webpage.
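To show what the connectors automate, here is a sketch of the manual flow they replace: assembling the cluster endpoint and minting a fresh IAM token for each connection. The cluster ID is a placeholder, and the token-generation call's exact signature should be confirmed against the AWS SDK documentation.

```python
def dsql_endpoint(cluster_id: str, region: str) -> str:
    """Build the public endpoint hostname for an Aurora DSQL cluster."""
    return f"{cluster_id}.dsql.{region}.on.aws"

def fresh_token(cluster_id: str, region: str) -> str:
    """Mint a short-lived IAM auth token for one connection.

    The new connectors perform the equivalent of this step transparently
    on every connection, so application code never handles tokens.
    """
    import boto3  # local import keeps the pure helper above testable offline
    client = boto3.client("dsql", region_name=region)
    return client.generate_db_connect_admin_auth_token(
        Hostname=dsql_endpoint(cluster_id, region), Region=region
    )
```

A password-based PostgreSQL driver would be handed `fresh_token(...)` as its password; with the connectors, this whole step disappears from application code.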

#lex#rds#iam#launch#support

Starting today, Amazon EC2 M8i-flex instances are now available in Asia Pacific (Malaysia, Seoul, Singapore, Tokyo), Europe (Frankfurt) and Canada (Central) regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The M8i-flex instances offer up to 15% better price-performance, and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver up to 20% better performance than M7i-flex instances, with even higher gains for specific workloads. The M8i-flex instances are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to M7i-flex instances. M8i-flex instances are the easiest way to get price-performance benefits for a majority of general-purpose workloads like web and application servers, microservices, small and medium data stores, virtual desktops, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. To get started, sign in to the AWS Management Console. For more information about the M8i-flex instances visit the AWS News blog.

#lex#ec2#ga#now-available

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the Europe (Ireland) region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer up to 15% better price-performance, and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver up to 20% higher performance than R7i instances, with even higher gains for specific workloads. They are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to R7i. R8i-flex, our first memory-optimized Flex instances, are the easiest way to get price-performance benefits for a majority of memory-intensive workloads. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. R8i instances are a great choice for all memory-intensive workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. R8i instances offer 13 sizes including 2 bare metal sizes and the new 96xlarge size for the largest applications. R8i instances are SAP-certified and deliver 142,100 aSAPS, providing exceptional performance for mission-critical SAP workloads. To get started, sign in to the AWS Management Console. For more information about the R8i and R8i-flex instances visit the AWS News blog.

#lex#ec2#ga#now-available

While general sizing guidelines for OpenSearch Service domains are covered in detail in OpenSearch Service documentation, in this post we specifically focus on T-shirt-sizing OpenSearch Service domains for e-commerce search workloads. T-shirt sizing simplifies complex capacity planning by categorizing workloads into sizes like XS, S, M, L, XL based on key workload parameters such as data volume and query concurrency.
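Mechanically, T-shirt sizing is a lookup against workload thresholds. The sketch below illustrates the idea with made-up threshold values — the actual cut-offs for e-commerce search workloads are the subject of the post and should be taken from it, not from this table.

```python
# Illustrative thresholds only, NOT sizing guidance:
# (max data volume in GB, max concurrent queries, size label)
SIZE_TABLE = [
    (50, 10, "XS"),
    (200, 25, "S"),
    (500, 50, "M"),
    (2000, 100, "L"),
]

def tshirt_size(data_gb: float, concurrency: int) -> str:
    """Pick the smallest size whose limits cover both workload parameters."""
    for max_gb, max_queries, label in SIZE_TABLE:
        if data_gb <= max_gb and concurrency <= max_queries:
            return label
    return "XL"  # anything beyond the largest row falls into XL
```

The two inputs mirror the key parameters the post names (data volume and query concurrency); a real sizing exercise would also weigh indexing rate, latency targets, and replica count.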

#lex#opensearch#opensearch service

Amazon Relational Database Service (RDS) and Amazon Aurora now offer greater flexibility for restore operations to view and modify backup retention period and preferred backup window prior to and upon restoring database snapshots. The backup retention period lets you specify how many days backups are retained, while the preferred backup window allows you to set your desired backup schedule. Previously, restored database instances and clusters inherited backup parameter values from snapshot metadata and could only be modified after restore was complete. This launch introduces two enhancements: you can now view the backup retention period and preferred backup window settings as part of automated backups and snapshots, providing visibility into backup configurations before initiating a restore operation. Additionally, you can now specify or modify the backup retention period and preferred backup window when restoring database instances and clusters, eliminating the need to modify the instance or cluster after restoration. These enhancements are available for all Amazon RDS database engines (MySQL, PostgreSQL, MariaDB, Oracle, SQL Server, and DB2) and Amazon Aurora (MySQL-Compatible and PostgreSQL-Compatible editions) in all AWS commercial regions and AWS GovCloud (US) regions where RDS and Aurora are supported and respective database engines are available. You can use these features through the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs at no additional cost. For more information, see Amazon RDS and Amazon Aurora User Guide.
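For illustration, the snippet below pairs a small format check for the backup window value with a commented sketch of passing the new parameters at restore time via boto3. The identifiers are placeholders, and the parameter names follow the launch description — confirm them in the RDS API reference. The check validates the hh24:mi-hh24:mi format only; RDS imposes further rules, such as a minimum 30-minute window.

```python
import re

def valid_backup_window(window: str) -> bool:
    """Check the hh24:mi-hh24:mi format RDS expects for a preferred
    backup window, e.g. "03:00-03:30" (format check only)."""
    pattern = r"([01]\d|2[0-3]):[0-5]\d-([01]\d|2[0-3]):[0-5]\d"
    return re.fullmatch(pattern, window) is not None

# Sketch of the restore call with the new restore-time parameters
# (not executed here; identifiers are placeholders):
#
#   rds = boto3.client("rds")
#   rds.restore_db_instance_from_db_snapshot(
#       DBInstanceIdentifier="restored-db",
#       DBSnapshotIdentifier="my-snapshot",
#       BackupRetentionPeriod=7,
#       PreferredBackupWindow="03:00-03:30",
#   )
```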

#lex#rds#launch#enhancement#support

Amazon Athena is a serverless interactive query service that makes it easy to analyze data using SQL. With Athena, there’s no infrastructure to manage; you simply submit queries and get results. Capacity Reservations is a feature of Athena that addresses the need to run critical workloads by providing dedicated serverless capacity for workloads you specify. In this post, we highlight three new capabilities that make Capacity Reservations more flexible and easier to manage: reduced minimums for fine-grained capacity adjustments, an autoscaling solution for dynamic workloads, and capacity cost and performance controls.
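To sketch the autoscaling idea, the helper below turns DPU utilization into a scale decision using illustrative thresholds, and the commented boto3 calls show where a reservation would be created and resized. The reservation name, DPU counts, and thresholds are all placeholders, not values from the post.

```python
def scale_decision(active_dpus: int, target_dpus: int,
                   high: float = 0.8, low: float = 0.3) -> str:
    """Toy autoscaling policy: compare current DPU utilization of a
    capacity reservation against thresholds (thresholds illustrative)."""
    utilization = active_dpus / target_dpus
    if utilization > high:
        return "scale-up"
    if utilization < low:
        return "scale-down"
    return "hold"

# The decision would drive the Athena capacity APIs, e.g.:
#   athena = boto3.client("athena")
#   athena.create_capacity_reservation(Name="critical-etl", TargetDpus=24)
#   athena.update_capacity_reservation(Name="critical-etl", TargetDpus=48)
```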

#lex#athena

In this post, you will learn how Amazon API Gateway’s new enhanced TLS security policies help you meet standards such as PCI DSS, Open Banking, and FIPS, while strengthening how your APIs handle TLS negotiation. This new capability increases your security posture without adding operational complexity, and provides you with a single, consistent way to standardize TLS configuration across your API Gateway infrastructure.

#lex#rds#api gateway#ga#new-capability