
Madhusudan Shekar on how Amazon is democratizing AI

As head of digital innovation at Amazon Internet Services, the local arm of Amazon Web Services (AWS), Madhusudan Shekar is tasked with taking the best practices and mechanisms used within AWS to customers who would like to adopt similar strategies within their internal functions. Shekar helps a host of startups take their ideas forward and scale the implementation of technologies such as artificial intelligence (AI), machine learning (ML), Kubernetes, microservices and a host of other AWS cloud-based solutions.

This usually happens in the form of customers taking a blueprint of the practices and processes used within AWS and modifying them to suit their styles and requirements.

In an interview with TechCircle, Shekar spoke about how AWS is making AI/ML available for all through three levels of democratization, how the Seattle-headquartered technology giant is using open source and building solutions that don't require customers to be locked into proprietary software, and why enterprises need to take a cautious approach in the age of architectures being built on microservices.

Edited excerpts:

You speak extensively about the democratization of AI. How beneficial is that going to be for an ecosystem that is increasingly innovating on AI/ML technologies?

Democratization of AI is about making the tooling and capabilities of AI/ML available to developers and data scientists at various levels of competence so that anybody can use AI to increase the velocity of new customer acquisition, reduce costs and look for new business innovations.

Traditionally, AI/ML could only be used by specialists. Many of the algorithms in AI have been around for at least 30 years, but the number of people who could take advantage of them was small.

Democratization packages these capabilities in ways that make them accessible. It is now possible for developers with no knowledge of AI/ML to use the technology and make their applications smarter, while a data scientist who is still on the learning curve can take advantage of what AWS offers in AI to take their ideas further.

What is the strategy that Amazon uses to democratize AI?  

We looked at the whole spectrum of AI/ML capabilities that need to be offered in the market and split it across three stacks. On top are AI services, which are pre-packaged APIs that any developer with no previous experience in AI can call. The developer can pick a programming language of choice, such as Java, Golang, Rust or Python, and place a request to any of the APIs that Amazon publishes. For example, an API call can turn text into speech: the developer enters text and gets back a voice without having to build anything, which can make applications more human-like.
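As a minimal sketch of what such a call looks like, here is text-to-speech via Amazon Polly (AWS's managed speech-synthesis API) using the boto3 SDK; the region, text and file name are illustrative choices, not from the interview:

```python
import boto3

# Create a client for Amazon Polly, AWS's managed text-to-speech service.
polly = boto3.client("polly", region_name="us-east-1")

# Ask the API to turn plain text into speech; no ML expertise is required.
response = polly.synthesize_speech(
    Text="Hello! Your order has shipped.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# The synthesized audio comes back as a stream; save it to a file.
with open("greeting.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```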

Similarly, applications can be built with visual cognition: they can detect objects based on picture inputs. Facial detection and facial comparison are some of the use cases of visual cognition through AI. This adds smarts to an application that previously could not see. Software that could not see or speak before is now capable of those functions. Similarly, software that can listen, comprehend unstructured data or run forecasting engines can be built easily with plug-and-play APIs.
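A sketch of the visual-cognition case, using Amazon Rekognition (AWS's image-analysis API) for object detection and face comparison; the image file names are placeholders:

```python
import boto3

# Amazon Rekognition exposes visual cognition as a plug-and-play API.
rekognition = boto3.client("rekognition", region_name="us-east-1")

# Detect objects and scenes in a local image.
with open("photo.jpg", "rb") as image:
    labels = rekognition.detect_labels(Image={"Bytes": image.read()}, MaxLabels=5)
for label in labels["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))

# Compare two faces; only matches above the similarity threshold are returned.
with open("face_a.jpg", "rb") as a, open("face_b.jpg", "rb") as b:
    result = rekognition.compare_faces(
        SourceImage={"Bytes": a.read()},
        TargetImage={"Bytes": b.read()},
        SimilarityThreshold=90,
    )
for match in result["FaceMatches"]:
    print("Similarity:", match["Similarity"])
```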

A solution called Personalize also allows developers to build personalisation engines similar to those Amazon uses on its own websites to customise the experience for each customer. These solutions require no prerequisite knowledge of AI and can be incorporated easily into any application development process using ready-made APIs.
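For illustration, fetching recommendations from an Amazon Personalize campaign looks roughly like this; the campaign ARN and user ID below are placeholders and assume a campaign has already been trained and deployed:

```python
import boto3

# Amazon Personalize serves recommendations from a trained campaign.
personalize = boto3.client("personalize-runtime", region_name="us-east-1")

response = personalize.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/demo",  # placeholder
    userId="user-42",  # placeholder user
    numResults=5,
)

for item in response["itemList"]:
    print(item["itemId"])
```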

What are the AI offerings for developers and data scientists who want more customization, and for customers who want to build solutions from scratch?

Next is the layer for developers and data scientists with knowledge of ML. It enables them to bring their data and tweak an already existing algorithm out of the 15 that we provide through open source.

The ML platform is called SageMaker and allows any data scientist or developer to get started and tweak the existing algorithms to better suit their application. The platform allows for building the code, testing, training, deploying into production and scaling as required, all on SageMaker.
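A minimal sketch of this second layer using the SageMaker Python SDK: training one of the built-in algorithms (XGBoost here, as an illustrative pick) on your own data and deploying it. The IAM role and S3 bucket names are placeholders:

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder IAM role

# Look up the container image for a built-in algorithm (XGBoost here).
container = image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1")

# Configure the training job: instance type, output location, hyperparameters.
estimator = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/model-output/",  # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="reg:squarederror", num_round=100)

# Point the algorithm at training data in S3 and launch the job.
estimator.fit({"train": "s3://my-bucket/training-data/"})

# Deploy the trained model behind a real-time endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```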

The third level is for customers who want to push the boundaries of ML and build custom logic and custom scripts, taking advantage of the hardware on offer and proprietary frameworks. They use our deep learning AMIs (Amazon Machine Images) on GPUs and specialised machines that are designed for AI.

Organisations such as Pinterest are using the third layer of our deep learning AMIs, while companies like Intuit, which runs tax computation and validation, are on the second layer.

For the pre-defined APIs at the first layer, there are a number of startups and companies taking advantage of the readily available models to push out applications and updates faster.

Talking about proprietary software, many companies are worried about getting locked into the solutions offered by one provider. How can AWS help counter this?

Any solution deployed on AWS can, to a large extent, be moved to any other platform. There will always be some aspect of the code that is native to a platform. If we use Simple Storage Service (Amazon S3), reading and writing to S3 will always have something unique to itself. When you move to another platform, a few small changes might be required.
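To illustrate how thin that platform-specific layer can be, here is a basic S3 read/write with boto3; the bucket and key are placeholders. Many other object stores expose an S3-compatible endpoint, so pointing the client at a different `endpoint_url` is often the only change needed:

```python
import boto3

# Reading and writing to S3 is S3-specific, but the surface area is small:
# for an S3-compatible store, add endpoint_url="https://..." to this client.
s3 = boto3.client("s3")

BUCKET = "my-app-bucket"  # placeholder bucket name

# Write an object.
s3.put_object(Bucket=BUCKET, Key="reports/daily.json", Body=b'{"status": "ok"}')

# Read it back.
obj = s3.get_object(Bucket=BUCKET, Key="reports/daily.json")
print(obj["Body"].read().decode())
```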

There are a number of engineering practices available today that allow for easy transferability from one provider to the other. There are different types of lock-ins.

Financial lock-ins are contractually agreed upon. You could also have lock-ins that stem from an engineering decision to use a certain language, framework or library. And when you use packaged applications, you make a choice to use only those applications.

Can you mention a couple of examples where AWS-built software doesn't get locked in and can be moved easily to other platforms?

AWS is using a lot of open source technologies across the board. For example, Amazon's Aurora database engine is compatible with MySQL and PostgreSQL, which are both open source, so the code doesn't have to change if you're using one of these two. The database can be moved out of Aurora into any other environment running MySQL or PostgreSQL and it will continue to work without any disruptions.
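A sketch of what that compatibility means in practice: because Aurora speaks the MySQL wire protocol, a standard open source driver such as PyMySQL works unchanged, and moving to another MySQL host means changing only the connection details. The host, credentials and table below are placeholders:

```python
import pymysql

# Aurora is MySQL-compatible, so the standard MySQL driver works unchanged;
# migrating off Aurora means changing only the host (and credentials) below.
connection = pymysql.connect(
    host="my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com",  # placeholder
    user="app_user",
    password="app_password",  # placeholder credentials
    database="orders",
)

with connection.cursor() as cursor:
    cursor.execute("SELECT id, status FROM orders WHERE status = %s", ("shipped",))
    for row in cursor.fetchall():
        print(row)

connection.close()
```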

If you are using Kubernetes, you can use our managed service just like any other Kubernetes service. In fact, there is more Kubernetes running on AWS than anywhere else. Similarly, 85% of TensorFlow AI/ML framework workloads run on AWS today.

But we do not want to be locked into one framework. Our platform SageMaker also gives the option to choose any framework. If there is a framework that we don't ship and you would like to use it, containerise it and it can then be brought into SageMaker to build on further. We are still in the early days of ML and we don't want to be tied down to a few frameworks.
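As a sketch of the bring-your-own-framework pattern: build a Docker image containing your framework, push it to a container registry such as Amazon ECR, then point a SageMaker training job at that image. The image URI, role and bucket names below are placeholders:

```python
from sagemaker.estimator import Estimator

# A framework SageMaker doesn't ship can be containerised and brought in:
# the training job simply runs whatever image you point it at.
custom_estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-framework:latest",  # placeholder
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    instance_count=1,
    instance_type="ml.p3.2xlarge",  # GPU instance for deep learning workloads
    output_path="s3://my-bucket/custom-framework-output/",  # placeholder bucket
)
custom_estimator.fit({"train": "s3://my-bucket/training-data/"})
```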

What is your opinion on the growing trend of microservices-based architectures? 

Everybody is moving towards a microservices architecture today, and with that the number of moving components in the application increases.

Traditionally, in old client-server architectures, there were two layers -- the client and the server. Then we moved towards a distributed architecture consisting of a client, a browser, a web server, an application server and a database.

Applications were still stuck in monoliths, which could be scaled up to a point, but innovation velocity was still curbed. Irrespective of how many developers were working on them, the applications had to be built into a monolith and then shipped into production.

With microservices, developers can now independently produce outcomes within shorter timelines, but the number of components has increased drastically.

What precautions can be taken to ensure smooth functioning while dealing with microservices? 

Some of the largest globally deployed applications run thousands of microservice components simultaneously, each of which is constantly being improved. It is a mathematical certainty that when you have that many components, something isn't going to work correctly. In that case, the company has to build discipline in the organisation to ensure that the solution can run in 'partial failure mode'.

Constantly running in partial failure mode builds resilience in an organisation and helps it serve customers continuously even when a few components are not functioning.

Different disciplines need to be inculcated at different layers of the application, and failure needs to be injected during testing and production.
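A minimal sketch of these two ideas together -- injecting failure deliberately and degrading gracefully instead of failing outright; all function names and the failure rate are hypothetical:

```python
import random

def call_with_fallback(primary, fallback, failure_rate=0.0):
    """Call a service, injecting failures at the given rate, and degrade
    gracefully to a fallback instead of taking the whole request down."""
    try:
        # Inject failure deliberately, as one might during testing or production.
        if random.random() < failure_rate:
            raise ConnectionError("injected failure")
        return primary()
    except ConnectionError:
        # Partial failure mode: serve a degraded but still useful response.
        return fallback()

def fetch_recommendations():
    raise ConnectionError("recommendation service unreachable")  # simulated outage

def popular_items():
    return ["item-1", "item-2", "item-3"]  # cached bestsellers as a fallback

print(call_with_fallback(fetch_recommendations, popular_items, failure_rate=0.1))
```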

