# Key concepts

# Model

In AI and machine learning, a model is a file that contains what the system has learned. You can provide an input to a model and get a prediction back. The same is true on hydra. However, there is a layer of abstraction that automatically handles all the complexities associated with the model file.

On hydra, every model is based on a model template type. A model has a collection of training data, a model file, and configurations. You can create, edit, and delete a model through the builder UI or the API.
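
To make those parts concrete, here is a minimal sketch of a model as a data structure. The class and field names are illustrative assumptions, not hydra's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative only: these field names are assumptions, not hydra's actual schema.
@dataclass
class Model:
    name: str                         # human-readable model name
    template: str                     # model template type, e.g. "sentimentAnalysis"
    training_data: list = field(default_factory=list)   # labeled input/output examples
    model_file: Optional[str] = None  # produced by training and managed by the platform
    configurations: dict = field(default_factory=dict)  # template-specific settings

request_priority = Model(name="request-priority", template="messageClassification")
```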

# Model Template

When you create a model on hydra, the platform uses a code and configuration template to generate that model. These are referred to as model templates. Model templates are linked to a pool of algorithms and an input data source. Using these pre-created templates is how you can skip all the coding work that would otherwise be required. Below are a few examples of model templates.

- messageClassification
- sentimentAnalysis
- commonEntityRecognition
- namedEntityRecognition
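
As a rough sketch, creating a model from one of these templates through the API could look like the request below. The base URL, endpoint path, and payload fields are assumptions for illustration; the API reference defines the actual contract.

```python
import requests

API_BASE = "https://api.example.com/hydra/v1"  # assumed base URL, not the real one
API_KEY = "YOUR_API_KEY"

# Hypothetical endpoint and payload shape.
response = requests.post(
    f"{API_BASE}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "name": "ticket-priority",
        "template": "messageClassification",  # one of the template types listed above
    },
    timeout=30,
)
print(response.json())
```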

# Training Data

When you are creating a user-trained model, you need to provide training data to the system. Training data includes an input and the expected output values. Hydra uses these data points to learn how to make those predictions automatically. For example, if you are building a model to predict which incoming customer requests are high priority, then you have to provide a set of training data that includes the request text and a label indicating whether it is high priority or low priority.

If you are using the API, you can use the training data endpoint to submit these training data points. If you are using one of hydra's native application integrations, then the system collects the training data automatically.
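
Here is a hedged sketch of submitting training data points for the priority example, assuming a REST-style training data endpoint. The endpoint path and the `input`/`label` field names are assumptions; the training data endpoint documentation defines the actual payload.

```python
import requests

API_BASE = "https://api.example.com/hydra/v1"  # assumed base URL
API_KEY = "YOUR_API_KEY"
MODEL_ID = "ticket-priority"                   # placeholder model identifier

# Each data point pairs an input with its expected output (the label).
training_points = [
    {"input": "Production checkout is failing for every customer.", "label": "high priority"},
    {"input": "Could you resend last month's invoice?", "label": "low priority"},
]

for point in training_points:
    requests.post(
        f"{API_BASE}/models/{MODEL_ID}/training-data",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=point,
        timeout=30,
    )
```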

# Training

Training is how a model learns to make predictions. During the training process, hydra uses a partial set of the provided training data to build models using a number of different algorithms and optimizations. Once those models are ready, hydra uses the remaining training data that was held back to test how each model performs. When the tests are complete, hydra picks the highest-scoring model and discards the rest. This best-performing model is then served through a unique API endpoint.
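
Hydra's internal training pipeline is not something you implement yourself, but the idea of holding back data, training several candidates, and keeping the top scorer can be sketched with scikit-learn as follows (the candidate algorithms here are arbitrary examples).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Toy data standing in for the training data you provided.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Hold back part of the data for testing, as described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Train several candidate algorithms, score each on the held-back data,
# and keep only the highest-scoring model.
candidates = [LogisticRegression(max_iter=1000), RandomForestClassifier(), GaussianNB()]
best_model = max(candidates, key=lambda m: m.fit(X_train, y_train).score(X_test, y_test))
print(type(best_model).__name__)
```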

# Activation

You have the option to toggle a model on and off. This is referred to as activation. When a model is active, it is live and available to make predictions.
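
If you are working through the API, toggling activation might look something like the call below. The endpoint path and payload are assumptions for illustration.

```python
import requests

API_BASE = "https://api.example.com/hydra/v1"  # assumed base URL
API_KEY = "YOUR_API_KEY"

def set_activation(model_id: str, active: bool) -> None:
    """Toggle a model on or off (hypothetical endpoint and payload)."""
    requests.patch(
        f"{API_BASE}/models/{model_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"active": active},
        timeout=30,
    )

set_activation("ticket-priority", active=True)  # the model is now live for predictions
```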

# Action

You will often want to do something with the predictions made by hydra. Actions help you with that. Actions are pre-defined code modules that you can run using hydra predictions as inputs. There are two types of actions: pre-built actions that you can pick and use, and custom actions where you plug in your own code module.

Sending an email through Sendgrid is an example of a pre-built action.
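
A custom code module action boils down to code that receives a prediction as input and does something with it. The sketch below assumes a simple prediction shape with a label and a confidence score; the actual action interface is defined by hydra.

```python
# A minimal sketch of a custom action. The prediction shape is an assumption.
def escalate_if_high_priority(prediction: dict) -> None:
    label = prediction.get("label")
    confidence = prediction.get("confidence", 0.0)
    if label == "high priority" and confidence >= 0.8:
        # A real action might call a ticketing system or an email service here,
        # similar to the pre-built "send an email through Sendgrid" action.
        print(f"Escalating request (confidence {confidence:.2f})")

escalate_if_high_priority({"label": "high priority", "confidence": 0.93})
```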

# Automation

In most business cases, you want to do something with the predictions — automations can help you with that. Automations chain a series of actions together to run a number of different tasks automatically after each prediction. See here for more details.
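
Conceptually, an automation is an ordered series of actions that each receive the prediction. The sketch below is illustrative only; in hydra you configure automations rather than writing this loop yourself.

```python
# Illustrative only: an automation as an ordered list of actions,
# each of which receives the prediction after it is made.
def tag_ticket(prediction: dict) -> None:
    print(f"Tagging ticket as {prediction['label']}")

def notify_team(prediction: dict) -> None:
    if prediction["label"] == "high priority":
        print("Notifying the on-call support team")

automation = [tag_ticket, notify_team]

def run_automation(prediction: dict) -> None:
    for action in automation:
        action(prediction)

run_automation({"label": "high priority", "confidence": 0.91})
```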

# HPU

A Hydra Processing Unit (HPU) is the unit of measurement used to calculate consumption. Models that use text and column data as the input cost one HPU per invocation. Models that use documents as the input cost three HPUs per page. And models that use audio or video files as the input cost ten HPUs per minute. Processing documents, audio, and video files costs more because it takes more compute and memory to run those models.
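
The consumption rates above can be expressed as a small helper. The rates come straight from the text; the function itself is just an illustration of the arithmetic.

```python
# HPU consumption rules from the text above, expressed as a small helper.
def hpu_cost(input_type: str, pages: int = 1, minutes: float = 1.0) -> float:
    if input_type in ("text", "column"):
        return 1                  # one HPU per invocation
    if input_type == "document":
        return 3 * pages          # three HPUs per page
    if input_type in ("audio", "video"):
        return 10 * minutes       # ten HPUs per minute
    raise ValueError(f"Unknown input type: {input_type}")

print(hpu_cost("document", pages=4))    # 12 HPUs
print(hpu_cost("audio", minutes=2.5))   # 25.0 HPUs
```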

However, the free tier comes with enough credits to build and experiment with hydra without incurring any costs.

# Prediction

Models produce predictions as their output. A prediction includes one or more labels and a confidence score. If you are using the API, predictions are returned as a JSON payload. If you are using one of our native integrations, predictions are often written back directly to the calling app.
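
Requesting a prediction over the API and reading the JSON payload might look like the sketch below. The endpoint path and the exact field names in the response (`labels`, `confidence`) are assumptions for illustration.

```python
import requests

API_BASE = "https://api.example.com/hydra/v1"  # assumed base URL
API_KEY = "YOUR_API_KEY"

response = requests.post(
    f"{API_BASE}/models/ticket-priority/predict",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input": "The checkout page keeps timing out for all of our users."},
    timeout=30,
)

# Illustrative payload shape: one or more labels plus a confidence score,
# e.g. {"labels": ["high priority"], "confidence": 0.94}
prediction = response.json()
print(prediction)
```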

# Feedback loop

In some cases, new scenarios get introduced over time. Sometimes these cause the model's performance to suffer. For example, let's say that you are predicting the priority level of customer requests. You have a model that is performing well and making accurate predictions. Then your company introduces a new service. Some of the customer requests about this new service are considered high priority, but the model fails to recognize them as high-priority items. The feedback loop proactively addresses this issue.

When enabled, the feedback loop monitors for any corrections made by users, updates the model's training data set, and automatically re-trains the model to capture the new scenarios and maintain the model's performance level.
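
The loop itself runs inside hydra once it is enabled, but the three steps can be sketched as follows. The `retrain` call is a hypothetical stand-in for hydra's automatic re-training.

```python
# Illustrative only: hydra performs these steps automatically when the
# feedback loop is enabled.
def feedback_loop(model, training_data: list, corrections: list) -> None:
    # 1. Monitor for corrections made by users (each one is an input
    #    paired with the corrected label).
    if not corrections:
        return
    # 2. Fold the corrections into the model's training data set.
    training_data.extend(corrections)
    # 3. Re-train the model so the new scenarios are captured.
    model.retrain(training_data)  # hypothetical re-training call
```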