The Future Of Cloud-Based Machine Learning: Highlights from AWS re:Invent 2021

As a pioneer in the field of cloud-based machine learning (ML), Swami Sivasubramanian is credited with starting a revolution in the IT industry. His innovation was to take the artificial intelligence (AI) and deep learning services that had been so effective for Amazon in its primary business of e-commerce and offer them as a cloud service to other organizations.

This week, he took to the stage once more at AWS re:Invent to discuss the cloud giant’s ever-growing portfolio of intelligent services. For me, as a firm believer in the power of artificial intelligence, and ML in particular, to transform business and society, his keynote is always one of the highlights of the event.

And at re:Invent 2021 – marking a return to in-person following last year’s all-virtual event – he didn’t disappoint. As usual, he was joined on stage and via video link by several other accomplished experts in the field of ML and business leadership. This year, they included Nicolai Kramer, BMW’s VP of connected vehicle platforms, Neeraja Rentachintala, principal product manager on Amazon Redshift, Aurora co-founder and CEO Chris Urmson, and Allie Miller, global head of machine learning for start-ups at AWS.

Sivasubramanian’s keynote centered on the enterprise data journey, from discovery to insight to action – of course, with a focus on how AWS’s as-a-service offerings provide a fit for every stage of the process. He also took the opportunity, as tradition demands, to announce several new additions to the AWS portfolio. These addressed many of the key tech trends that we see driving innovation today, from the democratization of data to the serverless cloud and AI chatbots.

The jewel in Sivasubramanian’s crown is often said to be Amazon SageMaker – AWS’s fully managed machine learning service, now used by tens of thousands of businesses of all shapes and sizes to wrangle data and extract insights. Earlier this week, AWS CEO Adam Selipsky announced SageMaker Canvas, a no-code, point-and-click interface that allows anyone to access machine learning, even without a data science background. German car giant BMW already employs AI throughout its entire value chain, and Marc Neumann, Product Owner, AI Platform at The BMW Group, said, “We believe Amazon SageMaker Canvas can add a boost to our AI/ML scaling across the BMW Group. With SageMaker Canvas, our business users can easily explore and build ML models to make accurate predictions without writing any code. SageMaker also allows our central data science team to collaborate and evaluate the models created by business users before publishing them to production.”

Sivasubramanian was the first to talk about another new development – SageMaker Training Compiler. This is a new engine built into the PyTorch and TensorFlow libraries within the platform, designed to dramatically speed up the training of deep learning models. According to Sivasubramanian, it can accelerate training by up to 50%, substantially lowering the cost – in terms of both time and money – of training complex computer vision or natural language processing applications.

Pharmaceutical giant Pfizer uses Amazon SageMaker to accelerate drug development and clinical manufacturing. Andrew McKillop, Vice President of Pharmaceutical Sciences, Worldwide Research, Development, and Medical at Pfizer, said, “Pfizer’s goal with AWS is to expedite the processes for drug discovery and development in ways that can ultimately enhance patient experiences and deliver new therapies to market. Working closely with AWS experts in machine learning and analytics, we aim to provide our scientists and researchers with the insights they need to help deliver medical breakthroughs that change patients’ lives.”

Two new SageMaker functions were also unveiled on stage this year – SageMaker Inference Recommender, a tool that helps organizations select the optimal configuration of compute instances for their cloud deployment, and SageMaker Serverless Inference, which allows organizations to do away with managing instances entirely and opt for a “serverless” model of cloud infrastructure, where the underlying compute is abstracted away and users simply pay for the compute consumed and the amount of data processed during inference.
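To make the serverless idea concrete, here is a minimal, hypothetical sketch of the request payload a developer might build for SageMaker’s `CreateEndpointConfig` API when deploying a model with Serverless Inference. The model and endpoint names, memory size, and concurrency values are illustrative only; the key point is that a `ServerlessConfig` block replaces the usual instance type and instance count.

```python
# Sketch: building a production-variant payload for a SageMaker Serverless
# Inference endpoint configuration. Names and capacity values are hypothetical;
# in a real deployment this dict would be passed to boto3's
# sagemaker.create_endpoint_config(...) call.

def serverless_variant(model_name: str,
                       memory_mb: int = 2048,
                       max_concurrency: int = 5) -> dict:
    """Return one ProductionVariants entry that uses ServerlessConfig
    instead of specifying an instance type and count."""
    return {
        "VariantName": "AllTraffic",
        "ModelName": model_name,
        "ServerlessConfig": {
            # Memory allocated to the endpoint, in MB; this also
            # determines how much compute the endpoint receives.
            "MemorySizeInMB": memory_mb,
            # Maximum number of concurrent invocations served
            # before requests are throttled.
            "MaxConcurrency": max_concurrency,
        },
    }

config_request = {
    "EndpointConfigName": "demo-serverless-config",  # hypothetical name
    "ProductionVariants": [serverless_variant("demo-model")],
}
print(config_request["ProductionVariants"][0]["ServerlessConfig"])
```

Because no instance type appears anywhere in the request, there is nothing for the user to size or keep warm: AWS provisions compute on demand and scales it to zero when there is no traffic.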

The Latin American online food delivery company iFood, which fulfills over 60 million orders each month, uses ML to recommend restaurants to its customers. Ivan Lima, Director of Machine Learning and Data Engineering at iFood, said, “With Amazon SageMaker Serverless Inference, we expect to be able to deploy even faster and scale models without having to worry about selecting instances or keeping the endpoint active when there is no traffic. With this, we also expect to see a cost reduction to run these services.”

“All of these new SageMaker features are going to make it a lot easier to scale machine learning … but we also know that for companies to adopt ML, more business users need access to machine learning technologies,” said Sivasubramanian, before inviting Miller onto the stage to give a demonstration of how Canvas can help with workloads involving forecasting demand.

Also new this year is Amazon DevOps Guru for RDS, a tool that applies machine learning to diagnosing and fixing problems with relational databases in the cloud. The service looks for unusual activity during database access that could indicate performance-limiting issues or errors in the underlying code, then pinpoints the likely cause and recommends how to remediate it.

Chatbots are becoming a hugely important part of the customer experience journey for many businesses, but they can often be a double-edged sword. While they can certainly free up human customer service staff from mundane, repetitive activities, they can also hugely negatively impact customer experience if they aren’t trained and deployed effectively. AWS is hoping that it can help its business customers to overcome these challenges with Amazon Lex Automated Chatbot Designer, which takes customer conversation transcripts and uses them to train chatbots and automated virtual assistants. Bots trained using the system can then be deployed either in virtual contact centers such as those running on Amazon Connect or via websites or chat applications such as Facebook Messenger.

And one more newly announced ML service is AWS Database Migration Service Fleet Advisor, designed to allow users to migrate large numbers of legacy databases and analytics applications to the Amazon cloud, using smart analysis of metadata, schemas, and usage metrics to automate the migration process.

Sivasubramanian wrapped up his 2021 keynote by announcing an expansion of AWS’s drive to train a new generation of data and AI experts for careers in the industry.

He said, “Making ML more accessible to all is central to creating a more diverse and inclusive tech workforce,” before announcing the launch of a new AWS AI and ML scholarship program that will award $10 million per year in scholarships to help underrepresented students pursue careers in machine learning. This will include 2,000 scholarships per year for a Udacity Nanodegree program in AI and 500 mentorship opportunities with tenured Amazon ML experts.

Overall, it was a very inspiring keynote that announced many innovations that will make it easier than ever for any business user to leverage cloud-based machine learning and AI.

About The Author

Bernard Marr is an internationally best-selling author, popular keynote speaker, futurist, and a strategic business & technology advisor to governments and companies. He helps organizations improve their business performance, use data more intelligently, and understand the implications of new technologies such as artificial intelligence, big data, blockchains, and the Internet of Things.
