In this article, I aim to demystify AWS Bedrock, shedding light on its core functionalities, the array of currently available Foundation Models (FMs), and their versatile applications across various use cases.
1. What is Amazon Bedrock?
Amazon Bedrock is a machine learning platform recently released by Amazon Web Services (AWS) that gives users an easy way to build and scale generative AI applications with foundation models, or “FMs” for short. The service accelerates the development of generative AI applications using foundation models without requiring you to manage physical infrastructure or shoulder the high operational overhead that usually comes with it. You can choose from different foundation models to cater to your particular use case, such as those from AI21 Labs, Anthropic, Stability AI, or AWS itself. You can privately customize FMs using your organization’s data and utilize familiar AWS tools and capabilities, which empower you to deploy scalable, reliable, and highly secure generative AI applications.
2. So what can Amazon Bedrock do?
Amazon Bedrock is a machine learning platform similar to Amazon SageMaker. Both services have multiple sub-components, but SageMaker is more complex and is primarily used by machine learning engineers to build and train custom machine learning models. Amazon Bedrock, by contrast, is meant to be a more user-friendly platform for building and scaling generative AI applications through the use of foundation models (FMs).
Amazon Bedrock can be used in various use cases such as:
Text generation: You can generate new pieces of original written content like short stories, social media posts, articles, web page copy, or even school essays.
Chatbots: Amazon Bedrock can build conversational interfaces like chatbots and virtual assistants to improve the user experience for your clients. The platform may also offer a direct integration with Amazon Lex (a chatbot service in AWS).
Search: The service allows users to easily search, find, and even synthesize information to quickly answer questions drawn from a large collection of diverse data.
Text summarization: You can use this service to summarize textual content – blog posts, essays, books, documents, and more – into a concise summary of the subject matter, removing the need to read the full text.
Image generation: Amazon Bedrock can create realistic and artistic images of the subjects, environments, and scenes you specify.
Personalization: This AI service can also personalize the way you deal with your customers – helping users find exactly what they’re looking for through relevant, contextual product recommendations that go well beyond mere keyword matching.
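To make the text-generation use case concrete, here is a minimal sketch of how a request to a Bedrock model might be assembled. The model ID and request-body fields shown are assumptions for illustration – each model provider defines its own request schema, so check the provider's documentation before relying on these names:

```python
import json

def build_text_request(prompt, temperature=0.5, max_tokens=200):
    """Build a JSON request body for a text-generation model.
    Field names below are illustrative; the exact schema varies by provider."""
    return json.dumps({
        "prompt": prompt,
        "temperature": temperature,
        "max_tokens_to_sample": max_tokens,  # parameter name varies by provider
    })

body = build_text_request("Write a short product description for a hiking boot.")

# With boto3 (requires AWS credentials and Bedrock access enabled on the
# account), the actual invocation would look roughly like:
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(modelId="anthropic.claude-v2", body=body)
```

Keeping the payload construction separate from the network call, as above, makes it easy to unit-test your prompts without touching AWS.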
3. What on earth is a Foundation Model (FM), anyway?
According to an insightful article from Stanford, Foundation Models are defined as "models trained on comprehensive data that can be flexibly adapted to a wide spectrum of subsequent tasks."
Another perspective comes from the Center for Research on Foundation Models (CRFM) at Stanford University: In recent times, a remarkably successful approach to constructing AI systems has emerged: Develop a single model through extensive data training and tailor it to suit numerous applications. This model earns the title of a foundation model.
To put it simply, a Foundation Model is akin to a bedrock – it can be shaped and molded to serve various functions, unlike the traditional approach of constructing a highly specialized Machine Learning model for a specific use case. This concept encourages collaboration and unity among diverse contributors, enabling a singular model to be repurposed and employed with enhanced efficiency. Rather than investing countless hours in training a bespoke model designed for a single task, the emphasis shifts to constructing a pliable and versatile model capable of executing a multitude of tasks.
4. How can I access and try the new Amazon Bedrock service?
As of this writing, the Amazon Bedrock service is not yet generally available in the AWS Management Console. However, you can express interest to get early access. The currently available FMs are:
Amazon - Titan Text (automate NLP tasks), Titan Embeddings (search)
AI21 - Jurassic-2 (generate multilingual text)
Anthropic - Claude 2 (automate Q&A, conversation)
Stability AI - Stable Diffusion (generate images)
Cohere - Command and Embed (text generation in 100+ languages)
Once you have selected a provider and model, it is important to understand the inference configuration. These configuration inputs drive the desired results when calling the API. Common inference parameter definitions:
Randomness and Diversity - Foundation models support the following parameters to control randomness and diversity in the response.
Temperature – Large language models use probability to construct the words in a sequence. For any given next word, there is a probability distribution of options for the next word in the sequence. When you set the temperature closer to zero, the model tends to select the higher-probability words. When you set the temperature further away from zero, the model may select a lower-probability word.
In technical terms, the temperature modulates the probability density function for the next tokens, implementing the temperature sampling technique. This parameter can deepen or flatten the density function curve. A lower value results in a steeper curve with more deterministic responses, and a higher value results in a flatter curve with more random responses.
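The temperature-sampling idea above can be sketched in a few lines of plain Python: dividing the logits by the temperature before the softmax sharpens the distribution when the temperature is low and flattens it when the temperature is high.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then apply softmax.
    Lower temperature -> steeper distribution -> more deterministic choices."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.2)  # near-deterministic
hot = softmax_with_temperature(logits, 2.0)   # much flatter
```

At a temperature of 0.2, nearly all the probability mass lands on the highest-logit token; at 2.0, the three options are much closer together.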
Top K – While temperature shapes the probability distribution of potential words, Top K defines the cutoff beyond which the model no longer selects words. For example, if K=50, the model selects from the 50 most probable words that could come next in a given sequence. This reduces the probability that an unusual word gets selected next. In technical terms, Top K is the number of highest-probability vocabulary tokens to keep for Top-K filtering. This limits the distribution of probable tokens, so the model chooses one of the highest-probability tokens.
Top P – Top P defines a cut off based on the sum of probabilities of the potential choices. If you set Top P below 1.0, the model considers the most probable options and ignores less probable ones. Top P is similar to Top K, but instead of capping the number of choices, it caps choices based on the sum of their probabilities. For the example prompt "I hear the hoof beats of ," you may want the model to provide "horses," "zebras" or "unicorns" as the next word. If you set the temperature to its maximum, without capping Top K or Top P, you increase the probability of getting unusual results such as "unicorns." If you set the temperature to 0, you increase the probability of "horses." If you set a high temperature and set Top K or Top P to the maximum, you increase the probability of "horses" or "zebras," and decrease the probability of "unicorns."
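Top P (often called nucleus sampling) can be sketched the same way: walk down the tokens in order of probability, keep adding them until their cumulative probability reaches the threshold, then renormalize over the kept set. The three-token distribution below mirrors the "horses / zebras / unicorns" example, with made-up probabilities for illustration.

```python
def top_p_filter(probs, p):
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches p; zero out the rest and renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = set(), 0.0
    for i in order:
        kept.add(i)
        cumulative += probs[i]
        if cumulative >= p:
            break
    total = sum(probs[i] for i in kept)
    return [probs[i] / total if i in kept else 0.0 for i in range(len(probs))]

# Illustrative probabilities for "horses", "zebras", "unicorns"
probs = [0.6, 0.3, 0.1]
filtered = top_p_filter(probs, 0.7)  # "unicorns" falls outside the 0.7 nucleus
```

Here "horses" alone carries 0.6 of the mass, so adding "zebras" (cumulative 0.9) satisfies the 0.7 threshold and "unicorns" is excluded from sampling entirely.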
Length - The following parameters control the length of the generated response.
Response length – Configures the minimum and maximum number of tokens to use in the generated response.
Length penalty – Length penalty optimizes the model to be more concise in its output by penalizing longer responses. Length penalty differs from response length in that response length is a hard cutoff on the minimum or maximum number of tokens in the response.
In technical terms, the length penalty penalizes the model exponentially for lengthy responses. 0.0 means no penalty. Set a value less than 0.0 for the model to generate longer sequences, or set a value greater than 0.0 for the model to produce shorter sequences.
Repetitions - The following parameters help control repetition in the generated response.
Repetition penalty (presence penalty) – Prevents repetitions of the same words (tokens) in responses. 1.0 means no penalty. Greater than 1.0 decreases repetition.
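A common way this kind of penalty is implemented (popularized by the CTRL model; Bedrock does not document its exact formula, so treat this as an assumption) is to divide the logit of every already-generated token by the penalty value before sampling, which matches the definition above: a penalty of 1.0 changes nothing, and values above 1.0 make repeats less likely.

```python
def apply_repetition_penalty(logits, generated_token_ids, penalty):
    """CTRL-style repetition penalty: divide the logit of each token that has
    already appeared by `penalty`. 1.0 means no penalty; > 1.0 discourages
    repeats. (Assumes positive logits for simplicity; real implementations
    multiply negative logits instead of dividing.)"""
    adjusted = list(logits)
    for token_id in set(generated_token_ids):
        adjusted[token_id] = adjusted[token_id] / penalty
    return adjusted

logits = [3.0, 2.0, 1.0]
adjusted = apply_repetition_penalty(logits, [0], 1.5)  # token 0 already used
```

After the adjustment, the previously generated token no longer dominates the distribution, so the model is nudged toward fresh vocabulary.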
I hope you now have a good understanding of the Amazon Bedrock service and how you can use it. Thank you for reading, and I hope you found this information valuable.