AWS re:Invent 2020 Analytics, Database, Security, and Other Announcements

AWS Categories: End User Computing

New features in Amazon Connect

What is it?

Amazon Connect is an easy-to-use omnichannel cloud contact center that helps you provide superior customer service at a lower cost. Over 10 years ago, Amazon’s retail business needed a contact center that would give our customers personal, dynamic, and natural experiences. We couldn’t find one that met our needs, so we built it. We’ve since made it available to all businesses, and today thousands of companies, ranging from 10 agents to tens of thousands, use Amazon Connect to serve millions of customers daily.

Availability:

To learn about Amazon Connect’s availability, see the Amazon Connect Regions Table.

Use Cases:

  • Omnichannel customer service: Amazon Connect provides a seamless omnichannel experience through a single unified contact center for voice, chat, and task management. Amazon Connect offers high-quality audio capabilities, natural interactive voice response (IVR), and interactive chatbots that operate seamlessly with web and mobile chat contact flows. Tasks can also be created programmatically (see the sketch after this list).
  • Automated agent assist: Amazon Connect Wisdom leverages machine learning to help agents resolve customer issues faster, using powerful search to quickly find relevant content, like frequently asked questions (FAQs), step-by-step instructions, and wikis, across multiple knowledge repositories, such as Salesforce, ServiceNow, and Zendesk. Amazon Connect Wisdom also uses real-time analytics to detect customer issues and surface relevant content to agents in real time, resulting in faster issue resolution and improved customer satisfaction.
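
The new task management channel can be driven programmatically as well. Below is a minimal sketch using boto3; the instance ID, contact flow ID, and attribute names are hypothetical placeholders, not values from this post.

    import boto3

    connect = boto3.client("connect", region_name="us-east-1")

    # Create a task and route it through a contact flow, just like a call
    # or chat. All identifiers below are hypothetical.
    response = connect.start_task_contact(
        InstanceId="11111111-2222-3333-4444-555555555555",
        ContactFlowId="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
        Name="Follow up on order issue",
        Description="Customer reported a damaged item; agent should call back.",
        Attributes={"orderId": "ORD-1234"},  # example routing attribute
    )
    print("Created task contact:", response["ContactId"])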

Customer Benefits:

  • Make changes in minutes, not months: Amazon Connect is so simple to set up and use that you can increase your speed of innovation. With only a few clicks, you can set up an omnichannel contact center, and agents can begin talking and messaging with customers right away. Making changes is easy: an intuitive UI lets you create voice and chat contact flows and agent tasks without any coding, rather than relying on custom development that can take months and cost millions of dollars.
  • Save up to 80% compared to traditional contact center solutions: Amazon Connect costs less than legacy contact center systems. With Amazon Connect you pay only for what you use, plus any associated telephony and messaging charges. There are no minimum monthly fees, long-term commitments, or upfront license charges, and pricing is not based on peak capacity, agent seats, or maintenance.
  • Easily scale to meet unpredictable demand: Amazon Connect has the flexibility to scale your contact center up or down to any size, onboarding tens of thousands of agents in response to normal business cycles or unplanned events. Because Amazon Connect runs on the AWS cloud, you can support your customers from anywhere in the world on secure, reliable, and highly scalable infrastructure. All you need is a supported web browser and an internet connection to engage with customers from anywhere.

Resources: Website 1 | Website 2 | What’s New Post 1 | What’s New Post 2

AWS Categories: Analytics

AWS Glue DataBrew

What is it?

AWS Glue DataBrew is a new visual data preparation tool that makes it easy for data analysts and data scientists to clean and normalize data to prepare it for analytics and machine learning. You can choose from over 250 pre-built transformations to automate data preparation tasks, such as filtering anomalies, converting data to standard formats, and correcting invalid values, all without the need to write any code. After your data is ready, you can immediately use it for analytics and machine learning projects. You only pay for what you use - no upfront commitment.
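
This flow maps directly onto the DataBrew API: register a dataset, then create and run a job against it. The sketch below profiles an S3 dataset to evaluate its quality; the bucket, key, role ARN, and names are hypothetical placeholders.

    import boto3

    databrew = boto3.client("databrew", region_name="us-east-1")

    # Register a CSV file in S3 as a DataBrew dataset (names are hypothetical).
    databrew.create_dataset(
        Name="sales-raw",
        Input={"S3InputDefinition": {"Bucket": "my-data-lake", "Key": "sales/raw.csv"}},
    )

    # Create a profile job that evaluates data quality and writes results to S3.
    databrew.create_profile_job(
        Name="sales-raw-profile",
        DatasetName="sales-raw",
        RoleArn="arn:aws:iam::123456789012:role/DataBrewRole",  # hypothetical role
        OutputLocation={"Bucket": "my-databrew-output"},
    )

    run = databrew.start_job_run(Name="sales-raw-profile")
    print("Profile job run:", run["RunId"])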

Availability:

us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-west-1 (Ireland), eu-central-1 (Frankfurt), ap-southeast-2 (Sydney), and ap-northeast-1 (Tokyo).

Use Cases:

  • Self-service visual data preparation for analytics and machine learning: AWS Glue DataBrew enables you to explore and experiment with data directly from your data lake, data warehouses, and databases, including Amazon S3, Amazon Redshift, AWS Lake Formation, Amazon Aurora, and Amazon RDS. You can choose from over 250 prebuilt transformations in AWS Glue DataBrew to automate data preparation tasks, such as filtering anomalies, standardizing formats, and correcting invalid values. After the data is prepared, you can immediately use it for analytics and machine learning.

Customer Benefits:

  • Profile data to evaluate data quality: Evaluate the quality of your data by profiling it to understand data patterns and detect anomalies; connect directly to terabytes, and even petabytes, of data in your data lake, data warehouses, and databases.
  • Clean and normalize data without writing code: Choose from over 250 built-in transformations to visualize, clean, and normalize your data with an interactive, point-and-click visual interface.
  • Map data lineage: Visually map the lineage of your data to understand the various data sources and transformation steps that the data has been through.
  • Automate data preparation tasks: Automate data cleaning and normalization tasks by applying saved transformations directly to new data as it comes into your source system.

Resources: Website

AWS Glue Elastic Views 

What is it?

AWS Glue Elastic Views is a new capability of AWS Glue that makes it easy to build materialized views to combine and replicate data across multiple data stores without you having to write custom code.

New applications and features often require you to combine data that resides across multiple data stores, including relational and non-relational databases. Accessing, combining, replicating, and keeping this data up to date requires manual work and custom code that can take months of development time.

With AWS Glue Elastic Views, you can use familiar Structured Query Language (SQL) to quickly create a virtual table—called a view—from multiple different source data stores. Based on this view, AWS Glue Elastic Views copies data from each source data store and creates a replica—called a materialized view—in a target database. AWS Glue Elastic Views monitors for changes to data in your source data stores continuously, and provides updates to your target data stores automatically, ensuring data accessed through the materialized view is always up to date.

Availability:

AWS Glue Elastic Views is available in limited preview in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo). Customers can apply for the preview here.

Use Cases:

  • Combine data across multiple databases and data stores: AWS Glue Elastic Views combines data from more than one data store in near-real time. For example, you can combine data from an Amazon DynamoDB database with data from an Amazon Aurora database and copy it to Amazon Redshift.
  • Replicate data across multiple databases and data stores: AWS Glue Elastic Views replicates data across multiple databases and data stores. For example, you can create a copy of a DynamoDB table in Amazon Elasticsearch Service to enable full text search on the DynamoDB data.
  • Integrate operational and analytical systems: AWS Glue Elastic Views simplifies running analytical queries on your most recent operational data. For example, you can create database views over data in your operational databases and materialize those views in your data warehouse or data lake.

Customer Benefits:

  • Use familiar SQL to create a materialized view: AWS Glue Elastic Views enables you to create materialized views across many databases and data stores using familiar SQL. AWS Glue Elastic Views supports Amazon DynamoDB, Amazon Redshift, Amazon S3, and Amazon Elasticsearch Service, with support for more data stores to follow.
  • Copies data from each source data store to a target data store: AWS Glue Elastic Views handles all of the heavy lifting of copying and combining data from source to target data stores, without you having to write custom code or use unfamiliar ETL tools and programming languages. AWS Glue Elastic Views reduces the time it takes to combine and replicate data across data stores from months to minutes.
  • Automatically keeps the data in the target data store updated: AWS Glue Elastic Views monitors for changes to data in your source data stores continuously and provides updates to your target data stores automatically. This ensures that applications always access up-to-date data in the materialized views.

Resources: Website

Amazon QuickSight Q

What is it?

Amazon QuickSight Q uses machine learning-powered natural language query (NLQ) technology to enable business users to ask ad-hoc questions of their data in natural language and get answers in seconds. To ask a question, users simply type it into the Amazon QuickSight Q search bar. Amazon QuickSight Q uses machine learning (natural language processing, schema understanding, and semantic parsing for SQL code generation) to generate a data model that automatically understands the meaning of, and relationships between, business data. As a result, users receive highly accurate answers to their business questions in seconds, simply by using the business language they are used to.

Amazon QuickSight Q comes pre-trained on large volumes of real-world data from various domains and industries like sales, marketing, operations, retail, human resources, pharmaceuticals, insurance, energy, and more, so it is already optimized to understand complex business language. For example, sales users can ask, “How is my sales tracking against quota?”, or retail users can ask, “What are the top products sold week-over-week by region?” Furthermore, users can get more complete and accurate answers because the query is applied to all of the data, not just the datasets in a pre-determined model. And because Amazon QuickSight Q does this automatically, it eliminates the need for BI teams to spend time building and updating data models, saving weeks of effort.

Availability:

Amazon QuickSight Q will be in Gated Preview, where customers need to sign up to get access.

Use Cases:

  • Amazon QuickSight Q is optimized to understand complex business language and data models from multiple domains, including:
  • Sales (“How is my sales tracking against quota?”)
  • Marketing (“What is the conversion rate across my campaigns?”)
  • Retail (“What are the top products sold week over week by region?”)
  • HR and Advertising, among others

Customer Benefits:

  • Get answers in seconds: With Amazon QuickSight Q, business users can simply type a question in plain English and get an answer such as a number, chart, or table in seconds.
  • Use business language that you are used to: With Amazon QuickSight Q, you can ask questions using phrases and business language that you use every day as part of your functional or vertical domain. Amazon QuickSight Q is optimized to understand complex business language and data models from multiple domains.
  • Ask any question on all your data: Amazon QuickSight Q provides answers to questions on all of your data. Unlike conventional NLQ-based BI tools, Q is not limited to answering questions from a single dataset or dashboard.

Resources: Website | What’s new post

Amazon Redshift AQUA

What is it?

Today, in the analytics press release, we announced that the AQUA (Advanced Query Accelerator) for Amazon Redshift preview is now open to all customers, and that AQUA will be generally available in January 2021.

AQUA is a new distributed and hardware-accelerated cache that enables Redshift queries to run up to 10x faster than other cloud data warehouses. Existing data warehousing architectures with centralized storage require data be moved to compute clusters for processing. As data warehouses continue to grow over the next few years, the network bandwidth needed to move all this data becomes a bottleneck on query performance.

AQUA takes a new approach to cloud data warehousing. AQUA brings the compute to storage by doing a substantial share of data processing in-place on the innovative cache. In addition, it uses AWS-designed processors and a scale-out architecture to accelerate data processing beyond anything traditional CPUs can do today.

Availability:

Customers can sign up for the AQUA preview now and will be contacted within a week with instructions. In order to use AQUA, customers must be using RA3.4xl or RA3.16xl nodes in us-east-1 (N. Virginia), us-west-2 (Oregon), or us-east-2 (Ohio) regions.

Customer Benefits:

  • Brings compute closer to storage - AQUA accelerates Redshift queries by running data-intensive tasks such as filtering and aggregation closer to the storage layer. This avoids networking bandwidth limitations by eliminating unnecessary data movement between where data is stored and compute clusters.
  • Powered by AWS-Designed Processors - AQUA uses AWS-designed processors to accelerate queries. This includes AWS Nitro chips adapted to speed up data encryption and compression, and custom analytics processors, implemented in FPGAs, to accelerate operations such as filtering and aggregation.
  • Scale out Architecture - AQUA can process large amounts of data in parallel across multiple nodes, and automatically scales out to add more capacity as your storage needs grow over time.

Resources: Website

Amazon Redshift ML

What is it?

Redshift ML is a new capability of Amazon Redshift that makes it easy for data analysts and database developers to create, train, and deploy Amazon SageMaker models using SQL. With Amazon Redshift ML, customers can use SQL statements to create and train Amazon SageMaker models on their data in Amazon Redshift and then use those models for predictions such as churn detection and risk scoring directly in their queries and reports.

Availability:

The Redshift ML preview is available in: us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), ca-central-1 (Canada Central), eu-west-1 (Ireland), eu-central-1 (Frankfurt), ap-northeast-1 (Tokyo), ap-southeast-2 (Sydney), and ap-southeast-1 (Singapore).

Use Cases:

  • Predictive analytics with Amazon Redshift: With Redshift ML, you can embed predictions like churn prediction, fraud detection, and risk scoring directly in queries and reports. Use the SQL function to apply the ML model to your data in queries, reports, and dashboards. For example, you can run the “customer churn” SQL function on new customer data in your data warehouse on a regular basis to predict customers at risk of churn and feed this information to your sales and marketing teams so they can take preemptive action such as sending these customers an offer designed to retain them.

Customer Benefits:

  • No prior ML experience needed: Redshift ML makes it easy to benefit from the ML capabilities in Amazon SageMaker directly in Redshift so you don’t have to learn new platforms, tools, or languages. Redshift ML provides simple, optimized, and secure integration between Redshift and Amazon SageMaker and enables inference within the Redshift cluster, making it easy to use model predictions in queries and applications. There is no need to manage a separate inference model endpoint, and the training data is secured end-to-end with encryption.
  • Use ML on your Redshift data using standard SQL: With Redshift ML you can create, train, and apply ML models on your Redshift data using standard SQL. To get started, use the CREATE MODEL SQL command in Redshift and specify training data either as a table or a SELECT statement. Redshift ML then compiles and imports the trained model inside the Redshift data warehouse and prepares a SQL inference function that can be immediately used in SQL queries. Redshift ML automatically handles all the steps needed to train and deploy a model (see the sketch after this list).
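
As a concrete illustration of the CREATE MODEL flow, the sketch below submits the statement through the Amazon Redshift Data API. The cluster, database, table, columns, IAM role, and S3 bucket are hypothetical placeholders, and the option list is a minimal subset of what CREATE MODEL accepts.

    import boto3

    # Train a churn model from a SELECT statement; Redshift ML drives
    # SageMaker training behind the scenes and registers the SQL function.
    sql = """
    CREATE MODEL customer_churn
    FROM (SELECT age, monthly_spend, support_calls, churned
          FROM customer_activity)
    TARGET churned
    FUNCTION predict_customer_churn
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
    SETTINGS (S3_BUCKET 'my-redshift-ml-bucket');
    """

    client = boto3.client("redshift-data", region_name="us-east-1")
    resp = client.execute_statement(
        ClusterIdentifier="my-redshift-cluster",  # hypothetical cluster
        Database="dev",
        DbUser="awsuser",
        Sql=sql,
    )
    print("Statement id:", resp["Id"])

Once training completes, predict_customer_churn can be called like any SQL function in queries, reports, and dashboards.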

Resources: What’s New Blog | External Webpage | Detailed Blog | Leadership Authored Blog

Amazon Redshift Feature Updates

What is it?

We announced several features for Amazon Redshift, including:

  • Amazon Redshift data sharing (preview): A new way to securely share live data across Redshift clusters within an organization and externally. Data sharing improves the agility of organizations by giving them instant, granular, and high-performance access to data across Redshift clusters without the need to copy or move it. Data sharing provides live access to the data so that users can see the most up-to-date and consistent information as it is updated in the data warehouse (see the sketch after this list).
  • RA3.xlplus GA: RA3 with managed storage enables customers to scale and pay for compute and storage separately. This new, smaller node size joins the RA3.4xl and RA3.16xl nodes we launched last year.
  • Amazon Redshift Automated Performance Tuning GA: A new self-tuning capability, Automatic Table Optimization, optimizes the physical design of tables by automatically setting sort and distribution keys to improve query speed, without requiring any administrator intervention.
  • Partner console integration (preview): Enables customers to launch the Partner Integration Wizard from the Redshift cluster details page and select partners already integrated in the console to accelerate data onboarding. Our launch partners include Matillion, Sisense, Fivetran, Segment, and Etleap.
  • Cross-AZ cluster recovery: A new ability to move a cluster to another Availability Zone (AZ) without any loss of data or changes to your applications.
  • Federated Query updates (preview): With Redshift Federated Query, customers can combine operational data stored in popular databases such as Amazon RDS for PostgreSQL and Aurora PostgreSQL with the data in their Redshift data warehouse. Now, we also offer RDS MySQL and Aurora MySQL support in preview.
  • Native semi-structured data support with the SUPER data type and JSON support (preview): A new SUPER data type supports nested data formats such as JSON and enables customers to ingest, store, and query nested data natively in Amazon Redshift. JSON-formatted data can be stored in SUPER columns.
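
To make the data sharing flow concrete, the sketch below runs the producer-side DATASHARE statements through the Redshift Data API. The cluster, table, and consumer namespace GUID are hypothetical placeholders, and the SQL reflects the preview syntax as described in this post.

    import boto3

    client = boto3.client("redshift-data", region_name="us-east-2")

    # Producer side: create a share, add objects, and grant a consumer
    # cluster (identified by its namespace GUID) access to the live data.
    statements = [
        "CREATE DATASHARE salesshare;",
        "ALTER DATASHARE salesshare ADD SCHEMA public;",
        "ALTER DATASHARE salesshare ADD TABLE public.daily_sales;",
        "GRANT USAGE ON DATASHARE salesshare "
        "TO NAMESPACE '11111111-2222-3333-4444-555555555555';",
    ]
    for sql in statements:
        client.execute_statement(
            ClusterIdentifier="producer-cluster",  # hypothetical producer
            Database="dev",
            DbUser="awsuser",
            Sql=sql,
        )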

Availability:

  • Amazon Redshift data sharing: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Asia Pacific (Tokyo), Asia Pacific (Sydney), and Asia Pacific (Seoul).
  • RA3.xlplus nodes are generally available in Asia Pacific (Seoul, Sydney, Tokyo), Brazil (São Paulo), Canada (Central), EU (Ireland, Paris), US East (N. Virginia, Ohio), and US West (N. California, Oregon) regions.
  • Automatic Table Optimization is available on Amazon Redshift version 1.0.21291 in all regions where the Redshift Advisor is available. Refer to this link for Amazon Redshift Advisor availability.
  • Partner console is available to new and existing customers. Refer to the AWS Region Table for Amazon Redshift availability.
  • Cluster relocation capability is available in all commercial regions where the RA3 instance type is supported.
  • Federated Query updates available to all Amazon Redshift customers for preview. Refer to the AWS Region Table for Amazon Redshift availability.
  • The support for native semi-structured data processing in Amazon Redshift is available as a public preview in the SQL_PREVIEW track.

Resources: What’s new post [Data Sharing] | What’s new post [RA3] | What’s new [Automated Performance Tuning] | What’s new [Partner console] | What’s new [Cross-AZ cluster recovery] | What’s new [Federated Query updates] | What’s new [Native semi-structured data support]

Amazon EMR on Amazon EKS

What is it?

Amazon EMR on Amazon EKS provides a new deployment option for Amazon EMR that allows you to run Apache Spark on Amazon Elastic Kubernetes Service (Amazon EKS). If you already use Amazon EMR, you can now run Amazon EMR based applications with other types of applications on the same Amazon EKS cluster to improve resource utilization and simplify infrastructure management across multiple AWS Availability Zones. If you already run big data frameworks on Amazon EKS, you can now use Amazon EMR to automate provisioning and management and run Apache Spark up to 3x faster. With this deployment option, you can focus on running analytics workloads while Amazon EMR on Amazon EKS builds, configures, and manages containers.
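
Job submission goes through the new emr-containers API. Below is a minimal sketch of submitting a Spark job to an existing virtual cluster; the virtual cluster ID, execution role, and S3 paths are hypothetical placeholders.

    import boto3

    emr = boto3.client("emr-containers", region_name="us-east-1")

    # Submit a Spark job to a pre-registered EMR on EKS virtual cluster.
    resp = emr.start_job_run(
        name="daily-aggregation",
        virtualClusterId="abcdefghijklmnop1234567890",  # hypothetical
        executionRoleArn="arn:aws:iam::123456789012:role/EmrOnEksJobRole",
        releaseLabel="emr-6.2.0-latest",
        jobDriver={
            "sparkSubmitJobDriver": {
                "entryPoint": "s3://my-bucket/scripts/aggregate.py",
                "sparkSubmitParameters": "--conf spark.executor.instances=2",
            }
        },
    )
    print("Job run id:", resp["id"])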

Availability:

Amazon EMR on Amazon EKS is available in all commercial AWS Regions except for AWS China (Beijing), AWS China (Ningxia), Asia Pacific (Osaka-Local), and AWS GovCloud (US) regions.

Use Cases:

  • Consolidated Workloads: Amazon EMR on Amazon EKS can rapidly start and run jobs from multiple customer organizations on the same infrastructure. Cost sensitive development jobs can be executed on compute provided by AWS Fargate, while production jobs requiring higher performance can be backed by Amazon EC2 Reserved Instances. Additional or unused capacity can be used for other containerized workloads such as pre- or post-processing of the data.
  • Low-latency batch jobs: Amazon EMR on Amazon EKS can begin running jobs within seconds without having to wait for a dedicated cluster to be provisioned. Jobs can then be scheduled more frequently to provide higher-resolution analytics.
  • Distributed Analytics with Multi-AZ workloads: Amazon EMR on Amazon EKS simplifies operation of Spark workloads by running a job within a single AZ or, for higher availability, spreading it across multiple AZs.

Customer Benefits:

  • Simplify Running Spark on Kubernetes: Amazon EKS provides customers with a managed experience for running Kubernetes on AWS, enabling you to add compute capacity using EKS Managed Node Groups or AWS Fargate. EMR jobs can access their data on Amazon S3, while monitoring and logging can be integrated with Amazon CloudWatch. AWS Identity and Access Management (IAM) enables role-based access control for jobs and for access to dependent AWS services.
  • Consolidate workloads to run on Amazon EKS: Customers can run multiple Spark jobs simultaneously alongside other containerized workloads on the same Amazon EKS cluster. This results in reduced management overhead and increased resource utilization.
  • Run jobs without the need to provision clusters: A job’s dependencies and configuration parameters are stored within the job definition. This eliminates having to pre-create clusters that are tightly coupled to EMR versions, Spark parameters, or job dependencies. EMR on EKS deploys, on demand, the resources required to run the job based on the job definition, avoiding the need for pre-provisioned clusters for ad-hoc, interactive, or batch workloads.

Resources: Website | What’s New Post

AWS Lake Formation Features: Transactions, Row-Level Security, and Acceleration

What is it?

AWS Lake Formation transactions, row-level security, and acceleration are now available for preview. These capabilities are available via new, open, and public update and access APIs for data lakes. These APIs extend AWS Lake Formation’s governance capabilities with row-level security. In addition, with this preview, we introduce governed tables - a new Amazon S3 table type that supports atomic, consistent, isolated, and durable (ACID) transactions. AWS Lake Formation transactions simplify ETL script and workflow development, and allow multiple users to concurrently and reliably insert, delete, and modify rows across multiple governed tables. AWS Lake Formation automatically compacts and optimizes storage of governed tables in the background to improve query performance.
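
A rough sketch of the transaction flow is below. It assumes the shape the transaction APIs take in the AWS SDK (start, commit, or cancel a transaction by ID); the preview APIs may differ in detail, and the governed-table write step itself is elided.

    import boto3

    lf = boto3.client("lakeformation", region_name="us-east-1")

    # Open a transaction, perform governed-table writes under it (elided),
    # then commit; cancel on failure so readers never see partial results.
    txn = lf.start_transaction(TransactionType="READ_AND_WRITE")
    txn_id = txn["TransactionId"]
    try:
        # ... insert/delete/modify rows in governed tables here ...
        lf.commit_transaction(TransactionId=txn_id)
    except Exception:
        lf.cancel_transaction(TransactionId=txn_id)
        raise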

Availability:

This feature is in preview in the US East (N. Virginia) AWS Region.

Resources: Website

AWS Categories: Database

Amazon Aurora Serverless v2

What is it?

Amazon Aurora Serverless v2 (Preview) is the new version of Aurora Serverless, an on-demand, auto-scaling configuration of Amazon Aurora that automatically starts up, shuts down, and scales capacity up or down based on your application's needs. It scales instantly from hundreds to hundreds of thousands of transactions in a fraction of a second. As it scales, it adjusts capacity in fine-grained increments to provide just the right amount of database resources that the application needs. There is no database capacity for you to manage; you pay only for the capacity your application consumes, and you can save up to 90% of your database cost compared to the cost of provisioning capacity for peak load. Aurora Serverless v2 (Preview) is currently available in preview for Aurora with MySQL compatibility only.
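
The gated preview is configured through the console, but for illustration the sketch below shows how a serverless v2 cluster can be expressed in boto3, assuming the configuration shape the feature exposes at general availability (a ServerlessV2ScalingConfiguration on the cluster plus a db.serverless instance); all identifiers and credentials are placeholders.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Capacity scales automatically between the minimum and maximum Aurora
    # capacity units (ACUs) in fine-grained increments.
    rds.create_db_cluster(
        DBClusterIdentifier="my-serverless-v2-cluster",
        Engine="aurora-mysql",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",
        ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16.0},
    )
    rds.create_db_instance(
        DBInstanceIdentifier="my-serverless-v2-instance",
        DBClusterIdentifier="my-serverless-v2-cluster",
        DBInstanceClass="db.serverless",
        Engine="aurora-mysql",
    )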

Availability:

Amazon Aurora Serverless v2 is available in a gated preview for Amazon Aurora with MySQL compatibility in US East (N. Virginia) at this time.

Use Cases:

  • Enterprise database fleet management: Enterprises with hundreds or thousands of applications, each backed by one or more databases, must manage resources for their entire database fleet. As application requirements fluctuate, continuously monitoring and adjusting capacity for each and every database to ensure high performance and high availability while remaining under budget is a daunting task. With Aurora Serverless v2 (Preview), database capacity is automatically adjusted based on application demand, and you no longer need to manually manage thousands of databases in your database fleet.
  • Software-as-a-Service applications: Software-as-a-Service (SaaS) vendors typically operate hundreds or thousands of Aurora databases, each supporting a different customer, in a single cluster to improve utilization and cost efficiency. With Aurora Serverless v2 (Preview), SaaS vendors can provision Aurora database clusters for each individual customer without worrying about costs of provisioned capacity. It automatically shuts down databases when they are not in use to save costs and instantly adjusts database capacity to meet changing application requirements.
  • Scaled-out databases split across multiple servers: Customers with high write or read requirements often split databases across several instances to achieve higher throughput. However, customers often provision too many or too few instances, increasing cost or limiting scale. With Aurora Serverless v2 (Preview), customers can split databases across several Aurora instances and let the service adjust capacity instantly and automatically based on need.

Customer Benefits:

  • Highly Scalable: Scale instantly, from hundreds to hundreds of thousands of transactions, in a fraction of a second.
  • Highly Available: Power your business-critical workloads with the full breadth of Aurora features, including backtrack, cloning, Global Database, Multi-AZ, and read replicas.
  • Cost effective: Scale in fine-grained increments to provide just the right amount of database resources and pay only for capacity consumed.

Resources: Website

Babelfish for Aurora PostgreSQL

What is it?

Babelfish is a new translation layer for Amazon Aurora PostgreSQL that enables Aurora to understand commands from applications written for Microsoft SQL Server.

Migrating from legacy SQL Server databases can be time consuming and resource intensive. When migrating your databases, you can automate the migration of your database schema and data using the AWS Database Migration Service (DMS), but there is often more work to do to migrate the application itself, including rewriting application code that interacts with the database.

With Babelfish, Aurora PostgreSQL now understands T-SQL, Microsoft SQL Server's proprietary SQL dialect, and supports the same communications protocol, so your apps that were originally written for SQL Server can now work with Aurora with fewer code changes. As a result, the effort required to modify and move applications running on SQL Server 2014 or newer to Aurora is reduced, leading to faster, lower risk, and more cost-effective migrations.
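
Because Babelfish speaks SQL Server's wire protocol, an existing SQL Server client library can simply be pointed at the Aurora cluster endpoint. The sketch below uses pymssql as one example client; the endpoint, credentials, and table are hypothetical placeholders.

    import pymssql  # any TDS-speaking SQL Server client library

    # Point the SQL Server client at the Aurora PostgreSQL cluster endpoint;
    # Babelfish listens on the standard SQL Server (TDS) port.
    conn = pymssql.connect(
        server="my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
        port="1433",
        user="admin",
        password="change-me-please",
        database="mydb",
    )
    cursor = conn.cursor()
    cursor.execute("SELECT TOP 5 * FROM orders")  # unchanged T-SQL syntax
    for row in cursor.fetchall():
        print(row)
    conn.close()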

Availability:

Available in preview in us-east-1. At GA, it will be available in all commercial regions.

Customer Benefits:

  • Reduce migration time and risk: With Babelfish, Amazon Aurora PostgreSQL supports commonly used T-SQL language and semantics, which reduces the amount of code changes related to database calls in an application. As a result, the amount of application code you need to rewrite is minimized, reducing the risk of any new application errors.
  • Migrate at your own pace: With Babelfish, you can run SQL Server code side-by-side with new functionality built using native PostgreSQL APIs. Babelfish enables Aurora PostgreSQL to work with commonly used SQL Server query tools, commands, and drivers. As a result, you can continue developing with the tools you are familiar with.

Resources: Website | What’s new post

Amazon Neptune ML

What is it?

Amazon Neptune ML is a new capability of Amazon Neptune that uses Graph Neural Networks (GNNs), a machine learning technique purpose-built for graphs, to make easy, fast, and more accurate predictions using graph data. With Neptune ML, you can improve the accuracy of most predictions for graphs by over 50% when compared to making predictions using non-graph methods.

Using the Deep Graph Library (DGL), an open-source library that makes it easy to apply deep learning to graph data, Neptune ML automates the heavy lifting of selecting and training the best ML model for graph data, and lets users run machine learning on their graph directly using Neptune APIs and queries. As a result, you can now create, train, and apply ML on Amazon Neptune data in hours instead of weeks without the need to learn new tools and ML technologies.

Availability:

Amazon Neptune ML is available in all AWS Regions where Neptune is available. See details on the AWS Regions Table.

Use Cases:

  • Fraud detection: Companies lose millions (even billions) of dollars to fraud, and want to detect fraudulent users, accounts, devices, IP addresses, or credit cards to minimize the loss. You can use a graph-based representation to capture the interactions of the entities (user, device, or card) and detect aggregations, such as when a user initiates multiple mini transactions or uses multiple accounts that are potentially fraudulent.
  • Product recommendation: Traditional approaches use analytics services manually to make product recommendations. Neptune ML can identify new relationships directly on graph data and easily recommend the games a player would be interested in buying, other players to follow, or products to purchase.
  • Customer Acquisition: Neptune ML automatically recommends next steps or product discounts to certain customers based on where they are in the acquisition funnel.
  • Knowledge Graph: Knowledge graphs consolidate and integrate an organization’s information assets and make them more readily available to all members of the organization. Neptune ML can infer missing links across data sources, identify similar entities to enable better knowledge discovery for all.

Customer Benefits:

  • Make predictions on graph data without ML expertise: Neptune ML automatically creates, trains, and applies ML models on your graph data. It uses DGL to automatically choose and train the best ML model for your workload, enabling you to make ML-based predictions on graph data in hours instead of weeks.
  • Improve the accuracy of most predictions by over 50%: Neptune ML uses GNNs, a state-of-the-art ML technique applied to graph data that can reason over billions of relationships in graphs, to enable you to make more accurate predictions.

Resources: Website | What’s New Post | Leadership Authored Blog

AWS Categories: Storage

Amazon EBS gp3 Volume

What is it?

Amazon EBS gp3 volumes are the latest generation of general-purpose SSD-based EBS volumes that enable customers to provision performance independent of storage capacity, while providing up to 20% lower price per GB than existing gp2 volumes. With gp3 volumes, customers can scale IOPS (input/output operations per second) and throughput without needing to provision additional block storage capacity. This means customers only pay for the storage they need.
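
The decoupling of capacity and performance shows up directly in the EC2 API: IOPS and throughput are now independent parameters on the volume. A minimal sketch (the Availability Zone is a placeholder):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # A small gp3 volume provisioned above the 3,000 IOPS / 125 MB/s
    # baseline without adding capacity.
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        VolumeType="gp3",
        Size=100,         # GiB; capacity no longer dictates performance
        Iops=6000,        # baseline is 3,000; scalable up to 16,000
        Throughput=500,   # MB/s; baseline is 125, scalable up to 1,000
    )
    print("Created volume:", volume["VolumeId"])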

Customer Benefits:

  • Ease of use: gp3 volumes take all the guesswork out of provisioning capacity and performance for your applications. You get sustained, baseline performance of 3,000 IOPS at any volume size. This means that even if you don’t provision any IOPS, your applications will consistently get this baseline performance for the smallest of volumes. For use cases where your application needs more performance than the baseline, you simply provision the IOPS or throughput you need, without having to add more capacity.
  • Higher performance and throughput: gp3 volumes make it easy and cost effective for customers to meet the IOPS and throughput requirements for the majority of their applications, including virtual desktops, medium-sized single-instance databases such as Microsoft SQL Server and Oracle, latency-sensitive interactive applications based on frameworks like Kafka and Spark, and dev/test environments. The new gp3 volumes deliver a baseline performance of 3,000 IOPS and 125 MB/s at any volume size. Customers looking for higher performance can scale up to 16,000 IOPS and 1,000 MB/s for an additional fee.
  • Lower cost: gp3 offers SSD-performance at a 20% lower cost per GB than gp2 volumes. Furthermore, by decoupling storage performance from capacity, you can easily provision higher IOPS and throughput without the need to provision additional block storage capacity, thereby improving performance and reducing costs.

Resources: Website | What’s new post

Amazon EBS Provisioned IOPS Volume

What is it?

Provisioned IOPS volumes, backed by solid-state drives (SSDs), are the highest performance Elastic Block Store (EBS) storage volumes designed for your critical, IOPS-intensive and throughput-intensive workloads that require low latency.

Availability:

Now in Preview: io2 Block Express: Customers that need sub-millisecond latency, or need to go beyond the current single-volume peak performance and throughput, can sign up for a preview of io2 volumes running on the next-generation Amazon EBS storage server architecture (io2 Block Express).

io2 Block Express is designed to provide 4,000 MB/s throughput per volume, 256K IOPS per volume, up to 64 TiB of storage capacity, and 1,000 IOPS/GB, as well as 99.999% durability and sub-millisecond latency. With io2 Block Express, customers get SAN (Storage Area Network)-like performance in a high-durability block store in the cloud, with the ability to scale, provision, and pay for just the capacity they need.
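
For reference, a standard io2 volume is created the same way as other EBS volumes, with IOPS provisioned explicitly. Note that the Block Express preview described above is entered via signup rather than an API flag; this sketch shows a regular io2 volume, and the Availability Zone is a placeholder.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Provisioned IOPS are declared up front and billed independently.
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        VolumeType="io2",
        Size=500,       # GiB
        Iops=25000,     # provisioned IOPS for latency-sensitive workloads
    )
    print("Created volume:", volume["VolumeId"])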

Resources: Website | What’s new post

AWS Categories: Mobile

AWS Amplify featuring New Admin UI

What is it?

AWS Amplify is a set of tools and services that can be used together or on their own to help front-end web and mobile developers build scalable full-stack applications powered by AWS. With Amplify, you can configure app backends and connect your app in minutes, deploy static web apps in a few clicks, and easily manage app content outside the AWS console. Get to market faster with AWS Amplify.

NEW! The Amplify admin UI is an abstraction layer on top of the Amplify CLI that lets you configure backends on AWS with a graphical user interface. It also lets you manage app content, users, and user groups, and delegate that administration to people outside the group of developers working on the application. The admin UI does not require an AWS account until the point you need the CLI.

Availability:

All AWS markets.

Customer Benefits:

  • Easily manage app users and app content: The Amplify admin UI (NEW!) provides even non-developers with administrative access to manage app users and app content without an AWS account.

AWS Categories: Management and Governance

AWS Service Catalog AppRegistry

What is it?

AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage deployed IT services, your applications, resources, and metadata. This helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.

With AWS Service Catalog AppRegistry, organizations can understand the application context of their AWS resources. You can define and manage your applications and their metadata, to keep track of things like cost, performance, security, compliance and operational status at the application level.
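
The sketch below shows the basic AppRegistry calls for defining an application and associating a resource with it; the application name and CloudFormation stack name are hypothetical placeholders.

    import boto3

    appregistry = boto3.client("servicecatalog-appregistry", region_name="us-east-1")

    # Define the application, then attach an existing CloudFormation stack
    # so its resources carry application context.
    app = appregistry.create_application(
        name="payments-service",
        description="Payment processing stack and related resources",
    )
    appregistry.associate_resource(
        application=app["application"]["id"],
        resourceType="CFN_STACK",
        resource="payments-service-prod",  # hypothetical stack name
    )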

Availability:

For a full list of supported AWS Regions, see details on the AWS Regions Table.

Use Cases:

  • Define and manage applications and metadata:
  • Create application definitions that include resource collections and metadata from AWS services and ISV partners.
  • Integrate AppRegistry with your application development processes to maintain a single source of truth.
  • Get application context: know which application a resource belongs to, and vice versa.

Customer Benefits:

  • Ensure compliance with corporate standards: AWS Service Catalog provides a single location where organizations can centrally manage catalogs of IT services. With AWS Service Catalog you can control which IT services and versions are available, what is configured in each of the available services, and who gets access, by individual, group, department, or cost center.
  • Help employees quickly find and deploy approved IT services: With AWS Service Catalog, you define your own catalog of AWS services and AWS Marketplace software, and make them available for your organization. Then, end users can discover and deploy IT services using a self-service portal.
  • Centrally manage IT service lifecycle: AWS Service Catalog enables you to add new versions of IT services, and end users are notified so they can keep abreast of the latest updates. With AWS Service Catalog you can control the use of IT services by specifying constraints, such as limiting the AWS regions in which a product can be launched.

Resources: Website

AWS Categories: Security, Identity, and Compliance

AWS Audit Manager

What is it?

AWS Audit Manager helps you continuously audit your AWS usage to simplify how you assess risk and compliance with regulations and industry standards. Audit Manager automates evidence collection to make it easier to assess if your policies, procedures, and activities, also known as controls, are operating effectively. When it is time for an audit, AWS Audit Manager helps you manage stakeholder reviews of your controls and enables you to build audit-ready reports with much less manual effort.
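
A rough sketch of launching an assessment from a prebuilt framework is below. The request shape is an assumption based on the Audit Manager API, and the account ID, role ARN, and report bucket are hypothetical placeholders.

    import boto3

    auditmanager = boto3.client("auditmanager", region_name="us-east-1")

    # Pick a prebuilt (standard) framework, then launch an assessment that
    # collects evidence automatically for the declared scope.
    frameworks = auditmanager.list_assessment_frameworks(frameworkType="Standard")
    framework_id = frameworks["frameworkMetadataList"][0]["id"]

    auditmanager.create_assessment(
        name="continuous-compliance-audit",
        frameworkId=framework_id,
        assessmentReportsDestination={
            "destinationType": "S3",
            "destination": "s3://my-audit-reports",
        },
        scope={
            "awsAccounts": [{"id": "123456789012"}],
            "awsServices": [{"serviceName": "s3"}],
        },
        roles=[{
            "roleType": "PROCESS_OWNER",
            "roleArn": "arn:aws:iam::123456789012:role/AuditOwner",
        }],
    )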

Availability:

us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-1 (N. California), us-west-2 (Oregon), ap-southeast-2 (Sydney), ap-northeast-1 (Tokyo), ap-southeast-1 (Singapore), eu-west-1 (Ireland), eu-central-1 (Frankfurt), eu-west-2 (London)

Use Cases:

  • Transition from manual to automated evidence collection: AWS Audit Manager enables you to move from manually collecting, reviewing, and managing evidence to a solution that automates evidence collection and helps to manage evidence security and integrity.
  • Continuous auditing and compliance: With AWS Audit Manager, you have an increased level of transparency into usage activity and changes in the environment. You can continuously collect evidence, monitor your compliance posture, and proactively reduce risk by fine-tuning your controls.
  • Internal risk assessments: Easily perform assessments to help assess risks unique to your business. You can customize a prebuilt framework or build your own framework from scratch. Then, launch an assessment to automatically collect evidence, helping you validate whether your internal controls are working as intended.

Customer Benefits:

  • Easily map your AWS usage to controls: AWS Audit Manager provides prebuilt frameworks that include mappings of AWS resources to control requirements for well-known industry standards and regulations. A prebuilt framework includes a collection of controls with descriptions and testing information, which are grouped in accordance with the requirements of an industry standard or regulation, such as CIS AWS Foundations Benchmarks, GDPR, or PCI DSS. You can fully customize these prebuilt frameworks and controls to tailor them to your unique needs.
  • Save time with automated collection of evidence: AWS Audit Manager saves you time by automatically collecting and organizing evidence as defined by each control requirement. With Audit Manager, you can focus on reviewing the relevant evidence to ensure your controls are working as intended. For example, you can configure an Audit Manager assessment to automatically collect configuration snapshots from resources on a daily, weekly, or monthly basis, subject to underlying AWS service configurations.
  • Streamline collaboration across teams: AWS Audit Manager helps you streamline audit stakeholder collaboration. For example, the delegation feature enables you to assign controls in your assessment to a subject matter expert to review. You might delegate to a network security engineer to confirm the evidence properly demonstrates that you meet a specific security requirement. Audit Manager also allows team members to comment on evidence, upload manual evidence, and update the status of each control.

Resources: Website | What’s New Post

 
