# Hasura Migration Principles

In Hasura, the metadata can be exported as files that represent the state of the database; however, you might want more granular, step-by-step checkpoints on how that state evolves. This is the main purpose of migrations: to make and track all the changes to the database.

#### Hasura Migrations - The Metadata

Whenever you perform certain actions on the web console or via the API, Hasura records them in the metadata catalogue, which is a schema called `hdb_catalog` in your Postgres database.

For example, if you track a table, a new entry is created in the `hdb_catalog.hdb_table` table in Postgres. Similarly, there are more tables in this schema to track relationships, event triggers, functions and remote schemas.

All information in this schema can be exported as files. Export options are available on the console, CLI and via the API.

When these files are imported into an existing or new Hasura instance, they will clear out the `hdb_catalog` schema on that instance and populate it again with the imported data.

{% hint style="info" %}
One thing to note is that all the Postgres resources the metadata refers to must already exist when the import happens; otherwise Hasura will throw an error.
{% endhint %}

#### Hasura Migrations - Principles

In order to understand how migrations work, we have to understand a couple of principles:

1. Migrations are stored and applied as **steps** (or versions). A migration step (or version) contains changes to the Postgres schema.
2. The migration version can store the **up migration** (creating resources) and the **down migration** (deleting resources).

   > For example, migration version 1 can include the SQL statements required to create a table called `profile` as the up migration and the SQL statements to drop this table as the down migration.
3. The migration versions can be automatically generated by the Hasura console or can be written by hand. They are stored as SQL files in a directory called migrations.

   > For more details on the format of these files, refer to the [Migration file format documentation](https://hasura.io/docs/1.0/graphql/core/migrations/reference/migration-file-format.html#migration-file-format-v2)
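
As a sketch of what a hand-written migration version might look like, the following creates an up/down pair for a hypothetical `profile` table. The version number and the table definition are assumptions for illustration, not files from this project:

```shell
# Hypothetical hand-written migration version; the timestamp-style version
# number and the table definition are illustrative only.
mkdir -p migrations/1599845236313_create_profile

# Up migration: create the resource.
cat > migrations/1599845236313_create_profile/up.sql <<'SQL'
CREATE TABLE profile (id serial PRIMARY KEY, name text NOT NULL);
SQL

# Down migration: undo what the up migration created.
cat > migrations/1599845236313_create_profile/down.sql <<'SQL'
DROP TABLE profile;
SQL
```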

#### Hasura Migrations - Initialization (context)

Note that there is no need to run this step in this folder; it is only necessary for new projects. It is included here so that you can understand how these files came to be.

In a new project, the first step to get started with Hasura migrations is to capture the "initial state" of the database, including any existing metadata, database structure, etc.

First, we need to create a workspace based on our endpoint:

```
$ hasura init new-hasura-project --endpoint http://your-hasura-endpoint --admin-secret <youradminsecret>

$ cd new-hasura-project
```

Next, we need to capture the initial state of the database by creating an initial migration from the server, and then export the metadata:

```
$ hasura migrate create init --from-server --endpoint <...> --admin-secret <...>
$ hasura metadata export --endpoint <...> --admin-secret <...>
```

These commands generate a database project folder, which looks like this:

```
new-hasura-project/
├── config.yaml
├── metadata
│   ├── actions.graphql
│   ├── actions.yaml
│   ├── allow_list.yaml
│   ├── cron_triggers.yaml
│   ├── functions.yaml
│   ├── query_collections.yaml
│   ├── remote_schemas.yaml
│   ├── tables.yaml
│   └── version.yaml
├── migrations
│   └── 1599845236313_init
│       └── up.sql
└── seeds

4 directories, 12 files
```

Notice there is a `config.yaml` file; this is where the connection to the Hasura instance is stored. The `metadata` folder contains the Hasura metadata information about your database. The `migrations` folder contains subfolders, each representing a migration step (version). Each version contains any up/down actions.

This is an example of the contents of the `config.yaml` file:

```
version: 2
endpoint: http://localhost:8080
admin_secret: abcdef1234567890
metadata_directory: metadata
actions:
  kind: synchronous
  handler_webhook_baseurl: http://localhost:3000
```

#### Hasura Migrations - Application Principles

Since we already have a migrations project, all we need to worry about is creating migrations and applying these changes safely. To understand how these changes are applied, these are the basic principles:

1. When someone executes `migrate apply` using the Hasura CLI, the CLI will first read the migration files present in the current directory.
2. The CLI then contacts the configured Hasura server and gets the status of all migrations applied to the server by reading the `hdb_catalog.schema_migrations` table. Each row in this table denotes a migration version that is already applied on the server.
3. By comparing these two sets of versions, the CLI derives which versions are already applied and which are not. The CLI would then go ahead and apply the migrations on the server. This is done by executing the actions against the database through the **Hasura metadata APIs**.
4. The default action of the `migrate apply` command is to execute all the **up migrations**. In order to roll back changes, you would need to execute **down migrations** using the `--down` flag on the CLI.
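
The comparison in steps 2 and 3 is essentially a set difference between local versions and applied versions. It can be sketched with plain shell tools; the version numbers and file names below are hypothetical, purely to illustrate the computation the CLI performs internally:

```shell
# Local migration versions found in the migrations/ directory (hypothetical).
printf '1599845236313\n1599850000000\n1599860000000\n' | sort > local_versions.txt

# Versions already recorded in hdb_catalog.schema_migrations on the server
# (hypothetical; the real CLI reads these from the configured endpoint).
printf '1599845236313\n1599850000000\n' | sort > applied_versions.txt

# Versions present locally but not yet applied -- these are the versions
# whose up migrations `migrate apply` would execute.
comm -23 local_versions.txt applied_versions.txt
# → 1599860000000
```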
