Natural Language Processing

Natural language processing, text analysis, translation, chatbots, and conversational AI capabilities

14 updates

Amazon Lex now offers a neural automatic speech recognition (ASR) model for English that delivers improved recognition accuracy for your voice bots. Trained on data from multiple English locales, the model excels at recognizing conversational speech patterns across diverse speaking styles, including non-native English speakers and regional accents. This reduces the need for end-customers to repeat themselves and improves self-service success rates. To enable this feature, select "Neural" as the speech recognition option in your bot's locale settings. This feature is available in all AWS commercial regions where Amazon Connect and Lex operate. To learn more, visit the Amazon Lex documentation or explore the Amazon Connect website to learn how Amazon Connect and Amazon Lex deliver seamless end-customer self-service experiences.
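
As a quick illustration, the sketch below (using the AWS SDK for Python) retrieves the current configuration of a bot locale with the Lex V2 DescribeBotLocale operation; the bot ID, version, and locale are placeholders, and the exact field that carries the new speech recognition option is whatever the Lex documentation specifies, so this only shows where to look rather than the definitive setting.

    import boto3

    # Lex V2 "models" API (build-time configuration).
    lex = boto3.client("lexv2-models")

    # Placeholders: substitute your own bot ID, version, and locale.
    resp = lex.describe_bot_locale(
        botId="BOTID12345",
        botVersion="DRAFT",
        localeId="en_US",
    )

    # Review the current locale configuration; per the announcement, the
    # neural ASR option ("Neural") is selected in this locale's speech
    # recognition settings.
    print(resp["localeId"], resp.get("botLocaleStatus"), resp.get("voiceSettings"))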

lex
#lex#launch

Amazon Lex now provides three voice activity detection (VAD) sensitivity levels that can be configured for each bot locale: Default, High, and Maximum. The Default setting is suitable for most environments with typical background noise levels. High is designed for environments with consistent but moderate noise levels, such as busy offices or retail spaces. Maximum provides the highest tolerance for very noisy environments such as manufacturing floors, construction sites, or outdoor locations with significant ambient noise. You can configure VAD sensitivity when creating or updating a bot locale in Amazon Connect's Conversational AI designer. This feature is available in all AWS commercial regions where Amazon Connect and Lex operate. To learn more, visit the Amazon Lex documentation or explore the Amazon Connect website to learn how Amazon Connect and Amazon Lex deliver seamless end-customer self-service experiences.

lex
#lex#launch

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the Asia Pacific (Mumbai), Asia Pacific (Hyderabad), and Europe (Paris) Regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer up to 15% better price performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver up to 20% higher performance than R7i instances, with even higher gains for specific workloads: up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to R7i. R8i-flex, our first memory-optimized Flex instances, are the easiest way to get price-performance benefits for a majority of memory-intensive workloads. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. R8i instances are a great choice for all memory-intensive workloads, especially workloads that need the largest instance sizes or continuous high CPU usage. R8i instances offer 13 sizes, including 2 bare-metal sizes and the new 96xlarge size for the largest applications. R8i instances are SAP-certified and deliver 142,100 aSAPS, providing exceptional performance for mission-critical SAP workloads. To get started, sign in to the AWS Management Console. For more information about the R8i and R8i-flex instances, visit the AWS News blog.
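
As a minimal sketch of trying the new instances, the snippet below launches a single R8i instance in one of the newly added Regions with the AWS SDK for Python; the AMI and subnet IDs are placeholders you would replace with your own, and r8i.large is used simply because large is the smallest size named above.

    import boto3

    # Launch in one of the newly supported Regions, e.g. Asia Pacific (Hyderabad).
    ec2 = boto3.client("ec2", region_name="ap-south-2")

    # Placeholders: supply your own AMI and subnet IDs.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="r8i.large",
        SubnetId="subnet-0123456789abcdef0",
        MinCount=1,
        MaxCount=1,
    )
    print(resp["Instances"][0]["InstanceId"])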

lex, ec2
#lex#ec2#ga#now-available

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8i and C8i-flex instances are available in the Asia Pacific (Mumbai), Asia Pacific (Seoul), and Asia Pacific (Tokyo) Regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The C8i and C8i-flex instances offer up to 15% better price performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver up to 20% higher performance than C7i and C7i-flex instances, with even higher gains for specific workloads. The C8i and C8i-flex are up to 60% faster for NGINX web applications, up to 40% faster for AI deep learning recommendation models, and 35% faster for Memcached stores compared to C7i and C7i-flex. C8i-flex instances are the easiest way to get price-performance benefits for a majority of compute-intensive workloads like web and application servers, databases, caches, Apache Kafka, Elasticsearch, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. C8i instances are a great choice for all compute-intensive workloads, especially workloads that need the largest instance sizes or continuous high CPU usage. C8i instances offer 13 sizes, including 2 bare-metal sizes and the new 96xlarge size for the largest applications. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information about the new C8i and C8i-flex instances, visit the AWS News blog.
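
If you want to confirm which C8i and C8i-flex sizes are offered in a given Region before migrating, the sketch below uses the long-standing DescribeInstanceTypeOfferings API via the AWS SDK for Python; the Region code for Asia Pacific (Tokyo) is shown as an example.

    import boto3

    # Asia Pacific (Tokyo), one of the Regions added in this launch.
    ec2 = boto3.client("ec2", region_name="ap-northeast-1")

    # List the C8i and C8i-flex sizes offered in this Region.
    paginator = ec2.get_paginator("describe_instance_type_offerings")
    for page in paginator.paginate(
        LocationType="region",
        Filters=[{"Name": "instance-type", "Values": ["c8i.*", "c8i-flex.*"]}],
    ):
        for offering in page["InstanceTypeOfferings"]:
            print(offering["InstanceType"])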

lex, ec2, kafka
#lex#ec2#kafka#ga#now-available

Amazon Connect now enables you to store and work with complex data structures in your flows, making it easy to build dynamic automated experiences that use rich information returned from your internal business systems. You can save complete data records, including nested JSON objects and lists, and reference specific elements within them, such as a particular order from a list of orders returned in JSON format. Additionally, you can automatically loop through lists of items in your customer service flows, moving through each entry in sequence while tracking the current position in the loop. This allows you to easily access item-level details and present relevant information to end-customers. For example, a travel agency can retrieve all of a customer’s itineraries in a single request and guide the caller through each booking to review or update their reservations. A bank can similarly walk customers through recent transactions one by one using data retrieved securely from its systems. These capabilities reduce the need for repeated calls to your business systems, simplify workflow design, and make it easier to deliver advanced automated experiences that adapt as your business requirements evolve. To learn more about these features, see the Amazon Connect Administrator Guide. These features are available in all AWS regions where Amazon Connect is available. To learn more about Amazon Connect, AWS’s AI-native customer experience solution, please visit the Amazon Connect website.
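
For illustration, the hypothetical AWS Lambda handler below (invoked from a flow) returns a nested record containing a list of orders; the field names are made up, but a structure like this is what the new flow capabilities let you store whole, reference element by element, and loop over.

    # Illustrative AWS Lambda handler invoked from an Amazon Connect flow.
    # It returns a nested JSON structure (a customer record with a list of
    # orders); the new flow capabilities let you save this full record in the
    # flow and loop through the "orders" list, accessing item-level fields.
    def lambda_handler(event, context):
        # In practice you would look these up in your business system using
        # details from event["Details"]["ContactData"]; the values below are
        # hard-coded placeholders.
        return {
            "customerId": "C-1001",
            "orders": [
                {"orderId": "O-1", "status": "shipped", "total": 42.50},
                {"orderId": "O-2", "status": "processing", "total": 13.99},
            ],
        }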

lex, rds
#lex#rds#update

Today, AWS Secrets Manager announces enhanced secret sorting capabilities in the Secrets Manager console and the ListSecrets API. You can now sort secrets by name, last changed date, last accessed date, and creation date, expanding beyond the previous creation-date-only option. Secrets Manager is a fully managed service that helps you manage, retrieve, and rotate database credentials, application credentials, API keys, and other secrets throughout their lifecycles. This enhancement improves secret discovery by providing flexible sorting options across multiple dimensions through both the Secrets Manager console and APIs. The new sorting capabilities are available in the Secrets Manager console and the ListSecrets API in all AWS commercial and AWS GovCloud (US) Regions. For a list of Regions where Secrets Manager is available, see the AWS Region table.
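
Since this digest does not spell out the new request parameter names, the sketch below sticks to calls that already exist in the AWS SDK for Python: it pages through ListSecrets and orders the results by last changed date client-side, which mirrors the ordering now available server-side through the console and API sort options.

    import datetime
    import boto3

    sm = boto3.client("secretsmanager")

    # Collect all secrets (ListSecrets is paginated).
    secrets = []
    paginator = sm.get_paginator("list_secrets")
    for page in paginator.paginate():
        secrets.extend(page["SecretList"])

    # Order by last changed date, newest first. The same ordering is now
    # available server-side via the ListSecrets sort options described in
    # the Secrets Manager documentation.
    epoch = datetime.datetime.fromtimestamp(0, datetime.timezone.utc)
    for s in sorted(secrets, key=lambda s: s.get("LastChangedDate", epoch), reverse=True):
        print(s["Name"], s.get("LastChangedDate"))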

lex, secrets manager
#lex#secrets manager#enhancement

Today we announce Research and Engineering Studio (RES) on AWS version 2025.12, which introduces tag propagation for CloudFormation resources, enhanced Windows domain configuration options, default session scheduling, and security improvements. Research and Engineering Studio on AWS is an open source solution that provides a web-based portal for administrators to create and manage secure cloud-based research and engineering environments. RES enables scientists and engineers to access powerful Windows and Linux virtual desktops with pre-installed applications and shared resources, without requiring cloud expertise. Tags applied to the CloudFormation stack now propagate to all resources created during RES deployment, making it easier to track costs and manage resources across your organization. Administrators can disable automatic Windows domain joining for hosts, providing flexibility to implement custom domain-join logic when needed. You can now set a default schedule for all new desktop sessions, helping teams standardize session management practices. This version includes several security improvements to help RES deployments meet the NIST 800-223 standard and fixes a bug where some sessions were logged out after 2 minutes when using a custom DNS domain. This release is available in all AWS Regions where RES is available. To learn more about RES 2025.12, including detailed release notes and deployment instructions, visit the Research and Engineering Studio documentation or check out the RES GitHub repository.

lex, cloudformation
#lex#cloudformation#ga#now-available#improvement

Amazon Timestream for InfluxDB now offers a restart API for both InfluxDB versions 2 and 3. This new capability enables customers to trigger system restarts on their database instances directly through the AWS Management Console, API, or CLI, streamlining operational management of their time-series database environments. With the restart API, customers can perform resilience testing to validate their application's behavior during database restarts and address health-related issues without requiring support intervention. This feature enhances operational flexibility for DevOps teams managing mission-critical workloads, allowing them to implement more comprehensive testing strategies and respond faster to performance concerns by providing direct control over database instance lifecycle operations. Amazon Timestream for InfluxDB restart capability is available in all Regions where Timestream for InfluxDB is offered. To get started with Amazon Timestream for InfluxDB 3, visit the Amazon Timestream for InfluxDB console. For more information, see the Amazon Timestream for InfluxDB documentation and pricing page.
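
The exact name of the restart operation is not given here, so the sketch below only enumerates instances and their status with existing Timestream for InfluxDB SDK calls and notes where the restart call documented in the API reference would slot in; treat it as a starting point rather than the definitive workflow.

    import boto3

    tsdb = boto3.client("timestream-influxdb")

    # Enumerate database instances and their current status.
    for item in tsdb.list_db_instances()["items"]:
        print(item["id"], item["name"], item["status"])

    # The new restart action (see the Timestream for InfluxDB API reference
    # for its exact name) is invoked against an instance identifier; after
    # calling it you can poll get_db_instance(identifier=...) until the
    # instance reports an available status again.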

lex
#lex#support#new-capability

Amazon WorkSpaces Applications now offers images powered by Microsoft Windows Server 2025, enabling customers to launch streaming instances with the latest features and enhancements from Microsoft’s newest server operating system. This update ensures your application streaming environment benefits from improved security, performance, and modern capabilities. With Windows Server 2025 support, you can deliver the Microsoft Windows 11 desktop experience to your end users, giving you greater flexibility in choosing the right operating system for your specific application and desktop streaming needs. Whether you're running business-critical applications or providing remote access to specialized software, you now have expanded options to align your infrastructure decisions with your unique workload requirements and organizational standards. You can select from AWS-provided public images or create custom images tailored to your requirements using Image Builder. Support for Microsoft Windows Server 2025 is now generally available in all AWS Regions where Amazon WorkSpaces Applications is offered. To get started with Microsoft Windows Server 2025 images, visit the Amazon WorkSpaces Applications documentation. For pricing details, see the Amazon WorkSpaces Applications Pricing page.
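
Assuming WorkSpaces Applications continues to expose the AppStream 2.0 API surface it grew out of, the sketch below lists the AWS-provided public images with the AWS SDK for Python and picks out those that reference Windows Server 2025; the filtering is a simple string match, not an official platform enum.

    import boto3

    # WorkSpaces Applications images are managed through the existing
    # AppStream 2.0 API surface (an assumption based on the service lineage).
    client = boto3.client("appstream")

    # Page through AWS-provided public base images.
    images, token = [], None
    while True:
        kwargs = {"Type": "PUBLIC", "MaxResults": 25}
        if token:
            kwargs["NextToken"] = token
        resp = client.describe_images(**kwargs)
        images.extend(resp["Images"])
        token = resp.get("NextToken")
        if not token:
            break

    # Pick out images that reference Windows Server 2025 by name or platform.
    for image in images:
        if "2025" in image.get("Platform", "") or "2025" in image["Name"]:
            print(image["Name"], image.get("Platform"))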

lex, rds
#lex#rds#launch#generally-available#ga#update

AWS Glue now supports zero-ETL for self-managed database sources in seven additional Regions. Using Glue zero-ETL, you can set up an integration to replicate data from Oracle, SQL Server, MySQL, or PostgreSQL databases located on premises or on Amazon EC2 to Amazon Redshift, with a simple experience that eliminates configuration complexity. AWS Glue zero-ETL for self-managed database sources automatically creates an integration for ongoing replication of data from your on-premises or EC2-hosted databases through a simple, no-code interface. This feature further reduces users' operational burden and saves weeks of engineering effort needed to design, build, and test data pipelines that ingest data from self-managed databases into Redshift. AWS Glue zero-ETL for self-managed database sources is available in the following additional AWS Regions: Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (London), South America (São Paulo), and US East (N. Virginia). To get started, sign in to the AWS Management Console. For more information, visit the AWS Glue page or review the AWS Glue zero-ETL documentation.
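
As a hedged sketch of the API path (the console offers the same no-code flow), the snippet below calls the Glue CreateIntegration operation with placeholder ARNs; for a self-managed source the source ARN is typically a Glue connection describing your database and the target is your Amazon Redshift namespace, but check the zero-ETL documentation for the exact ARN formats.

    import boto3

    glue = boto3.client("glue", region_name="ap-northeast-1")  # Asia Pacific (Tokyo)

    # Placeholder ARNs: the SourceArn here assumes a Glue connection pointing at
    # your on-premises or EC2-hosted database, and the TargetArn assumes a
    # Redshift Serverless namespace. Verify both formats in the zero-ETL docs.
    resp = glue.create_integration(
        IntegrationName="orders-to-redshift",
        SourceArn="arn:aws:glue:ap-northeast-1:111122223333:connection/my-postgres-connection",
        TargetArn="arn:aws:redshift-serverless:ap-northeast-1:111122223333:namespace/my-namespace-id",
        Description="Replicate the orders database to Redshift",
    )
    print(resp.get("Status"))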

lex, ec2, redshift, eks, glue
#lex#ec2#redshift#eks#glue#ga

Amazon EC2 now supports an Availability Zone ID (AZ ID) parameter, enabling you to create and manage resources such as instances, volumes, and subnets using consistent zone identifiers. AZ IDs are consistent, static identifiers that represent the same physical location across all AWS accounts, helping you optimize resource placement. Prior to this launch, you had to use an AZ name when creating a resource, but these names can map to different physical locations in different accounts, which made it difficult to ensure resources were always co-located, especially when operating across multiple accounts. Now, you can specify the AZ ID parameter directly in your EC2 API calls to guarantee consistent placement of resources. Because AZ IDs always refer to the same physical location across all accounts, you no longer need to manually map AZ names across your accounts or track and align zones yourself. This capability is now available for resources including instances, launch templates, hosts, reserved instances, fleets, Spot Instances, volumes, capacity reservations, network insights, VPC endpoints, subnets, network interfaces, fast snapshot restore, and instance connect. This feature is available in all AWS Regions, including the China and AWS GovCloud (US) Regions. To learn more about Availability Zone IDs, visit the documentation.
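
The sketch below shows the basic pattern with the AWS SDK for Python: look up the Region's AZ IDs with DescribeAvailabilityZones, then pass an AvailabilityZoneId when creating a subnet (the VPC ID and CIDR block are placeholders); this launch extends the same AZ ID parameter to the additional resource types listed above.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # AZ IDs (e.g. "use1-az1") identify the same physical zone in every account,
    # unlike AZ names ("us-east-1a"), which can map differently per account.
    zones = ec2.describe_availability_zones()["AvailabilityZones"]
    for z in zones:
        print(z["ZoneName"], "->", z["ZoneId"])

    # Create a subnet pinned to a specific AZ ID; placeholder VPC and CIDR.
    subnet = ec2.create_subnet(
        VpcId="vpc-0123456789abcdef0",
        CidrBlock="10.0.1.0/24",
        AvailabilityZoneId=zones[0]["ZoneId"],
    )
    print(subnet["Subnet"]["SubnetId"], subnet["Subnet"]["AvailabilityZoneId"])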

lex, ec2
#lex#ec2#launch#now-available#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the Asia Pacific (Seoul), South America (São Paulo), and Asia Pacific (Tokyo) Regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer up to 15% better price performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver up to 20% higher performance than R7i instances, with even higher gains for specific workloads: up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to R7i. R8i-flex, our first memory-optimized Flex instances, are the easiest way to get price-performance benefits for a majority of memory-intensive workloads. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. R8i instances are a great choice for all memory-intensive workloads, especially workloads that need the largest instance sizes or continuous high CPU usage. R8i instances offer 13 sizes, including 2 bare-metal sizes and the new 96xlarge size for the largest applications. R8i instances are SAP-certified and deliver 142,100 aSAPS, the highest among all comparable machines in on-premises and cloud environments, providing exceptional performance for mission-critical SAP workloads. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information about the new R8i and R8i-flex instances, visit the AWS News blog.

lex, ec2
#lex#ec2#ga#now-available

AWS Lambda durable functions enable developers to build reliable multi-step applications and AI workflows within the Lambda developer experience. Starting today, durable functions are available in 14 additional AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (Milan), Europe (Stockholm), Europe (Spain), Asia Pacific (Sydney), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Malaysia), and Asia Pacific (Thailand). Lambda durable functions extend the Lambda programming model with new primitives in your event handler, such as "steps" and "waits", allowing you to checkpoint progress, automatically recover from failures, and pause execution without incurring compute charges for on-demand functions. With this Region expansion, you can orchestrate complex processes such as order workflows, user onboarding, and AI-assisted tasks closer to your users and data, helping you meet low-latency and data residency requirements while standardizing on a single serverless programming model. You can activate durable functions for new Python (versions 3.13 and 3.14) or Node.js (versions 22 and 24) based Lambda functions using the AWS Lambda API, AWS Management Console, or AWS SDK. You can also use infrastructure as code tools such as AWS CloudFormation, AWS Serverless Application Model (AWS SAM), and the AWS Cloud Development Kit (AWS CDK). For more information on durable functions, visit the AWS Lambda Developer Guide. To learn about pricing, visit AWS Lambda pricing. For the latest Region availability, visit the AWS Capabilities by Region page.

lex, lambda
#lex#lambda#ga#now-available#expansion

In this post, you will learn how Amazon API Gateway's new enhanced TLS security policies help you meet standards such as PCI DSS, Open Banking, and FIPS, while strengthening how your APIs handle TLS negotiation. This capability improves your security posture without adding operational complexity and provides a single, consistent way to standardize TLS configuration across your API Gateway infrastructure.
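
As a rough sketch of where such a policy is applied, the snippet below updates the security policy on an existing REST API custom domain with the long-standing UpdateDomainName operation via the AWS SDK for Python; TLS_1_2 is an existing policy value standing in for the new enhanced policy names, which are not listed in this digest, and the domain name is a placeholder.

    import boto3

    apigw = boto3.client("apigateway")

    # Apply a TLS security policy to an existing custom domain name.
    # "TLS_1_2" is an existing policy value; substitute the name of the new
    # enhanced policy from the API Gateway documentation once you choose one.
    resp = apigw.update_domain_name(
        domainName="api.example.com",
        patchOperations=[
            {"op": "replace", "path": "/securityPolicy", "value": "TLS_1_2"},
        ],
    )
    print(resp["domainName"], resp.get("securityPolicy"))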

lex, rds, api gateway
#lex#rds#api gateway#ga#new-capability