
Administration Guide

Configuring Archive Shuttle

This section outlines the basic steps required to configure Archive Shuttle. This is applicable for most archive migration projects. The user interface and other components of Archive Shuttle are explained in subsequent sections. We recommend reading each section before performing the post-installation configuration.

Initial Archive Shuttle setup

After Archive Shuttle has been installed, the following five tasks need to be performed before you start migrating archives:

  • Configure module schedules.
  • Enable Active Directory domains.
  • Add Enterprise Vault environment(s), if Enterprise Vault is the source or target environment.
  • Add link databases.
  • Configure module mappings.

These tasks are explained below.

Configure module schedules

During Archive Shuttle installation, the core product, databases, and a number of modules will have been deployed in the environment in order to fulfill migration requirements. During the installation of the modules, each module is enabled. If modules are not enabled, they will not receive any work from the Archive Shuttle Core.

In order to check that all appropriate modules are enabled and operational, follow these steps:

  1. From the Archive Shuttle web interface, click Configuration > AS Modules in the navigation bar.
  2. Review the list of modules:
    • Ensure all required modules are present. For more information see the Installation Overview, as well as the Planning Guide, which will help you validate this part of the migration.
    • Ensure that the computer name, domain name, and version are as expected.
  3. Verify that all modules are reported appropriately and that none of the modules have a red background. If they do, there is a communication problem between the module and the Archive Shuttle Core.

All of the modules are configured to run continuously by default. If this doesn’t meet your requirements for the migration, modules can be individually scheduled to run at more suitable times using the Set Schedule button on the AS Modules page. This is described later in this guide.
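
As a purely conceptual illustration of what a schedule restricts (this is not Archive Shuttle code, and the 18:00–06:00 window is an assumed example), the sketch below checks whether the current time falls inside an allowed processing window before a hypothetical module would request work:

```python
from datetime import datetime, time

# Assumed schedule: the module may only process work between 18:00 and 06:00.
WINDOW_START = time(18, 0)
WINDOW_END = time(6, 0)

def in_window(now: datetime) -> bool:
    # The window spans midnight, so handle both orderings of start/end.
    t = now.time()
    if WINDOW_START <= WINDOW_END:
        return WINDOW_START <= t <= WINDOW_END
    return t >= WINDOW_START or t <= WINDOW_END

if in_window(datetime.now()):
    print("Inside the scheduled window: the module would request work from the Core.")
else:
    print("Outside the scheduled window: the module would stay idle.")
```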

Enable Active Directory domains

A few minutes after the Active Directory Collector Module is enabled, a list of the domains it has discovered becomes visible on the Configuration > Active Directory page.

By default, all domains are discovered, but won’t be scanned for user accounts. In order to enable one or more domains to be scanned, follow these steps:

  1. From the Archive Shuttle web interface, click Configuration > Active Directory in the navigation bar.
  2. Review the list of domains.
  3. Select the check box next to one or more domains.
  4. Click Enable in the navigation bar.
  5. Make sure the Scan Domain column shows a check mark.

Add Enterprise Vault environment(s)

Note: Skip this step if Enterprise Vault is not the source or target of a migration.

The next step to perform in the migration workflow is to add Enterprise Vault environment(s). This is performed from the Configuration > EV Environment page. This page also displays environments that are currently configured.

If the migration is between two Enterprise Vault directory databases, then this page is used to enter details for both directory databases. If the migration is between Enterprise Vault and an external system, such as Microsoft Exchange or Office 365, then only one Enterprise Vault Directory database is added. Likewise, if the migration is between two Enterprise Vault sites within the same Enterprise Vault directory, only one entry is listed on this page.

In order to add an Enterprise Vault environment, follow these steps:

  1. From the Archive Shuttle web interface, click Configuration > EV Environment.
  2. Click Add to add an Enterprise Vault directory.
  3. Enter the required information (for example, the fully qualified domain name and instance name, if it’s not the default instance).
  4. Click Add/Update to apply the changes to the Archive Shuttle database.

Once you specify the required EV Directory databases, the Enterprise Vault Collector Module gathers information about the environment, such as the vault stores. The results are displayed on the EV Environment page after a few minutes.

Tip: Click Refresh in the navigation bar to reload the list of vault stores.

Enable vault stores

Enable archive collection for each vault store that you’ll use in the migration by following these steps:

  1. From the Archive Shuttle web interface, click Configuration > EV Environment in the navigation bar.
  2. Select the checkbox next to each vault store.
  3. In the navigation bar, click Enable.
  4. Verify that the Archive Collection Enabled column now contains a check mark.

Add link databases

Once archive collection is enabled, the next step in the migration is to add link databases for each of the vault stores. A link database is needed for each link in the source environment.

The Configuration > Links page may already contain some links that were collected from the Active Directory Collector Module (the Exchange databases) and from the Enterprise Vault Collector Module (the Vault Stores, as a result of adding each Enterprise Vault environment). These links are created automatically. You must manually create other links, for example, links to Office 365 and PST. To create a link database:

  1. Open the Configuration > Links page.
  2. Select the tab of the type of link database you’re creating (for example, Office 365), and then click Create Link.
  3. Follow the prompts to create the link.

After a link database is created, check the EV Environment page again to review the progress of the archive and item gathering stages.

Note: The number of archives and size of data will not be populated on the EV Environment page until the module linkage is added, but the vault stores will display.

Configure module mappings

The next stage of the configuration involves mapping modules to the Links in the migration project. To do it, follow these steps:

  1. Go to Configuration > Links.
  2. Select the check box next to a vault store/Exchange database/source link.
  3. Click Map Modules in the navigation bar.
  4. From the pop-up window, select appropriate modules from the drop-down lists, and then click Save.

Edit module mappings

Module mappings can be modified by following these steps:

  1. Go to Configuration > Links.
  2. Select the check box next to a vault store/Exchange database.
  3. Click Map Modules.
  4. Make desired changes to the current mapping for this link, and then click Save.

Shortly after module mappings have been added, the appropriate modules perform their tasks and the Number of Containers column updates to show the number of containers in the link.

For the modules that are performing export and import functions, it’s also important to set the staging area path. The corresponding modules need to be paired up to facilitate the flow of archived data. For example, if data is flowing from Enterprise Vault to Exchange, the EV Export module path should match the Exchange Import module path.

Setting up the staging area path

To configure the staging area path:

  1. Go to the Configuration > Links page.
  2. Select the checkbox next to a vault store/Exchange database/source link.
  3. Click Path and follow the prompts.

Repeat this step for each export and each import module. For the second and subsequent paths you define, the Staging Area Path is pre-populated with the path that was previously used. This makes it possible to quickly configure many modules with the same storage location.

The current export/import path and module mappings can be viewed on the Links page. Make sure all links are correctly defined with the appropriate export/import path before proceeding with the migration of any archived data.
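
To illustrate the pairing requirement described above, here is a minimal sketch (not part of Archive Shuttle; the link names and UNC paths are hypothetical) that checks each export/import pair uses the same staging path and that the path is reachable:

```python
import os

# Hypothetical link pairings: (link name, export staging path, import staging path).
link_pairs = [
    ("EVStore1 -> Exchange",   r"\\fileserver\ASStaging\Store1", r"\\fileserver\ASStaging\Store1"),
    ("EVStore2 -> Office 365", r"\\fileserver\ASStaging\Store2", r"\\fileserver\ASStaging\Store2"),
]

for name, export_path, import_path in link_pairs:
    if export_path.lower() != import_path.lower():
        print(f"{name}: export and import staging paths differ")
    elif not os.path.isdir(export_path):
        print(f"{name}: staging path {export_path} is not reachable")
    else:
        print(f"{name}: staging paths match and are reachable")
```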

It’s also possible to configure the Default Staging Area Path that all modules and links will use unless the setting is overridden on an individual link.

Validating the setup

Once the stages above have been performed and all necessary setup and configuration steps are complete, the archive migration is ready to begin.

At this point, a collection of links have been defined and mapped to modules that perform tasks on those links. The source and target environments have been defined within Archive Shuttle and all of the Archive Shuttle Modules are enabled and set to an appropriate schedule. Finally, the Enterprise Vault retention categories have been mapped according to the migration needs.

Note: Before proceeding with any migration of archived data, confirm that each stage has been performed.

Depending on the migration, additional steps may be required. These are described later in this guide.

Migrating an archive to Enterprise Vault

This section explains how to migrate a single test archive within Enterprise Vault. For the purposes of this test, the migration takes place between two vault stores in the same Enterprise Vault environment.

Prerequisites

The following steps must be completed before the migration begins:

  • Two or more vault stores must exist in the Enterprise Vault environment.
  • All modules must be enabled.
  • Appropriate Active Directory domains must be enabled for scanning.
  • An Enterprise Vault environment has been added for migration.
  • A link database has been created for the source vault store/archives.
  • Module mappings are configured for the source and the target vault stores. The source vault store needs to be linked to an EV Export module, and the target needs to be linked to an EV import module. In addition, an EV Provisioning Module needs to be linked to the source vault store and to the target vault store.

Each of these steps is described earlier in this document.

Add retention category mappings

The final stage of the setup of the archive migration is to map retention categories between the source and target environments.

Note: This step is only required for Enterprise Vault to Enterprise Vault migrations.

Follow the steps below to achieve this task:

  1. Go to Configuration > EV Retention Mappings.
  2. Click Create Mapping.
  3. Select the values from the drop-down lists, and click Add to add the mapping to the retention category grid.

Retention mappings are managed from this screen.

There is an option to use “Add Intrasite Migration Mappings”, which maps each retention category to itself. This can be used in a situation where archives are to be migrated in the same environment. It simply moves the archives to the new location.

Map Containers – For the test archive

In order to migrate a test archive from one vault store to another, the source and target containers must be mapped. This can be done as follows:

  1. Go to Manage & Operations > Bulk Mapping.
  2. Type the beginning of the archive name in the Container Name filter, and then click the button in the Apply Filter column, located at the far right of the filter row.

Note: If the test archive is not displayed, go back to the Configuration > EV Environment page and click Sync all AD Users. Then, select the vault store where the source archive is located and issue a Run Now for Archive Gathering.

  3. Select the checkbox next to the test archive, and click Add Mappings.

Note: A wizard begins gathering information related to the mapping for the selected archive. (Multiple archives can also be selected.)

  4. When you’re prompted to select a target container type, select Enterprise Vault, and then click Next.
  5. When you’re prompted to select a target user, select Same User, and then click Next.
  6. When you’re prompted to select a container strategy, select Create new Containers, and then click Next.
  7. Select the link that corresponds to where the archive is to be migrated, and then click Next.

Note: If the target link drop-down list is empty, then it is likely that an EV Import module has not been correctly associated with the link. Review the Links page.

  8. On the Workflow Policy screen, select Standard Workflow – Within same EV Site (without Archive deletion), and then click Next. You don’t need to select the ownerless workflow policy in this situation; it’s optional.

Note: If a group of users was selected containing a mixture of ownerless and normal containers, this screen gives an administrator the option to specify which workflow to use for the ownerless containers.

  9. For the filter policy, select Default (No Filter), and then click Next.
  10. On the Container Mapping Settings screen, select Enabled for both Migration Status and Item Gathering Status, and then click Next.
  11. If desired, set the mapping’s priority, and then click Next.
  12. Review the summary screen. If everything looks as expected, click Confirm.

Review Stage 1 status

A few minutes after the mapping is created, Archive Shuttle tells the appropriate modules to start the actions defined in the mapping. Review the progress for this stage of the migration using these steps:

  1. Go to Manage & Operations > Stage 1 (Sync Data).
  2. Type the beginning of the archive name in the Name filter, and then click the button in the Apply Filter column, found at the far right of the filter row.
  3. Once the source archive displays, click the Refresh button to see the status of the export of the archive and the ingestion of data into the target archive. Continue to click Refresh until both export and import are complete.

Note: If the data progress bars reach 100% for export, but show no progress for import, the Retention Category mappings have likely not been configured.

If more data is added to the source archive, it’s synchronized to the target archive every 24 hours using the connection made with the mapping you created in the previous section. In addition, part of the Stage 2 workflow is to perform a final sync. Therefore, it’s not necessary to ensure that export and import have reached 100% before moving on to the next steps.
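
To picture the incremental behaviour described above, the following sketch (hypothetical data, not Archive Shuttle code) selects only the items archived since the previous synchronization for the next sync pass:

```python
from datetime import datetime

# Hypothetical archived items with the date they were added to the source archive.
items = [
    {"id": 1, "archived": datetime(2024, 5, 1, 10, 0)},
    {"id": 2, "archived": datetime(2024, 5, 2, 9, 30)},
    {"id": 3, "archived": datetime(2024, 5, 3, 14, 15)},
]

last_sync = datetime(2024, 5, 2, 0, 0)

# Only items added after the last sync need to be exported and imported again.
delta = [item for item in items if item["archived"] > last_sync]
print(f"{len(delta)} item(s) to synchronize:", [item["id"] for item in delta])
```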

Validate exported data

If there’s a large amount of data to export and import, and you want to check the progress, click the Refresh button located on the navigation bar on the Stage 1 (Sync Data) page.

In addition, you can use Windows Explorer to browse the staging area data on the disk and view the folder structure.

Also, the Enterprise Vault Admin Console shows that an archive exists in the source Vault Store and that a new archive has been created in the target vault store. It’s also possible to grant a service account access to the target archive and perform searches on it to make sure the data matches the source archive.

Enable Stage 2

Before you enable Stage 2, the switch-over, for the test archive, check the Stage 1 (Sync Data) page for issues like failed item-export or failed item-import.

To enable Stage 2 for the test archive, perform these steps:

  1. Open the Stage 1 (Sync Data) page.
  2. Select the checkbox next to the test archive, and click Enable Stage 2 in the navigation bar.
  3. When you refresh the Stage 1 status, a check mark should display in the Stage 2 Enabled column.

Stage 2 is the “switch-over” to the target environment. A final synchronization of archived items is performed from the source environment to the target environment, before several additional migration tasks are performed.

Review Stage 2 status

After a few minutes, the progress of the test archive migration displays on the Stage 2 page:

  1. Go to the Stage 2 (Switch User) page.
  2. Type the beginning of the archive name in the Container Name filter. Apply the filter by clicking the button in the Apply Filter column, located at the far right of the filter row.

Note: If the archive isn’t displayed, wait 1-2 minutes, and then click Refresh in the navigation bar.

  3. Once the source archive displays, click Refresh to see the progress of the final stages of the archive migration. By default, since the workflow policy that was selected was Standard policy (without archive deletion), these steps are performed by Archive Shuttle:
    1. Disable source mailbox from Enterprise Vault archiving.
    2. Rename the source archive.
    3. Collect any remaining items for migration.
    4. Import the remaining items into the target.
    5. Zap the mailbox, removing EV settings.
    6. Assign the new archive to the user.
    7. Enable the mailbox for archiving again.
    8. Rename the target archive.
    9. Update all the existing shortcuts to point to the new archive.

Verify data has been migrated

After all Stage 2 operations are complete, the Stage 2 (Switch User) page for the test archive shows a check mark in the Finished column.

In addition, if Outlook or Outlook Web Access is used to access the test mailbox, then all of the archived items are accessible via the new archive. Shortcuts to archived items will work and open correctly.

Migrating an archive to Exchange 2010/2013

Complete the steps below to migrate a single test archive within Enterprise Vault to an Exchange 2010/2013 personal archive.

Prerequisites

The steps below must be complete before the migration begins:

  • Enable all Modules.
  • Enable appropriate Active Directory domains for scanning.
  • Add an Enterprise Vault environment for the migration.
  • Create a link database for the source archives.
  • Configure module mappings for the source archives and the target Exchange environment.
  • Run a PowerShell command to allow application impersonation.

Each of these steps is described earlier in this article.

In addition, if you’ll use an Exchange personal archive, rather than an ordinary mailbox for the test migration, the personal archive needs to be pre-created. The personal archive doesn’t need to be created on the same mailbox database as the primary mailbox.

Map containers – For the test archive

To migrate a test archive from Enterprise Vault to Exchange 2010/2013, the source and target containers must be mapped. Follow these steps to do it:

  1. Go to Manage & Operations > Bulk Mapping.
  2. Type the beginning of the archive name in the Container Name filter. Click the button in the Apply Filter column, located at the far right of the filter row.

Note: If the test archive doesn’t display, go back to the Configuration > EV Environment page and click Sync all AD Users. Then, select the vault store where the source archive is located and click Run Now.

  3. Select the checkbox next to the test archive(s) and click Add Mappings. A wizard runs, gathering information related to the mapping(s) for the selected archive(s).
  4. When you’re prompted to choose a target container type, select Exchange and click Next.
  5. When you’re prompted to choose the type of migration, select Normal and click Next.
  6. When you’re prompted to choose the target mailbox type, select Secondary (Archive) mailbox. Then, in the second part of the dialog select Skip, and then click Next.
  7. When you’re prompted to choose the target user, select Same User, and click Next.
  8. On the Choose Workflow Policy screen, select the Exchange migration (without archive deletion) option, leave the second drop-down list blank, and then click Next.
  9. When you’re prompted to choose a filter policy, leave the Default (No Filter) option selected, and then click Next.
  10. On the Container Mapping Settings screen, select Enabled for both Migration Status and Item Gathering Status, and then click Next.
  11. Leave the priority blank and click Next.
  12. Review the summary screen, and then click Confirm.

Review Stage 1 status

A few minutes after the mapping is created, appropriate modules start the actions defined in the mapping. The progress of this stage of the migration can be reviewed by following these steps:

  1. Go to the Stage 1 (Sync Data) page.
  2. Type the beginning of the archive name in the Name filter. Click the Apply Filter button at the far right of the filter row.
  3. Once the source archive displays, click Refresh to see the progress of the export of the archive and the ingestion of data into the Exchange personal archive. Continue to click Refresh until both the export and import are complete.

Validate exported data

In addition to checking export/import status (using the steps in the Review Stage 1 status procedure above) you can use Windows Explorer to browse the export/import storage area and data on the disk.

It’s also possible to log in to Outlook or Outlook Web Access as the test user. The migrated data will be present in the Personal Archive, and can be freely opened and manipulated.

Note: If the test user previously used the Exchange personal archive feature, it might be difficult to locate the migrated data. Because of this, we recommend that the test user has an empty personal archive before migration.

Enable Stage 2

Before enabling Stage 2 (Switch User) for the test archive, check the Stage 1 Status page for issues like failed item-export or item-import.

Complete these steps to enable Stage 2 (Switch User) for the test archive:

  1. Go to the Stage 1 (Sync Data) page.
  2. Select the checkbox next to the test archive, and click Enable Stage 2 in the navigation bar.
  3. When you refresh the Stage 1 (Sync Data) page, there should be a green and white check mark in the Stage 2 Enabled column.

Stage 2 is the switchover to the target environment. A final synchronization of archived items is performed from the source environment to the target environment before several additional migration tasks are performed.

Review Stage 2 status

After a few minutes, view the progress of the test archive migration on the Stage 2 Status page by following these steps:

  1. Go to the Stage 2 (Switch User) page.
  2. Type the beginning of the archive name in the Container Name filter. Click the Apply button located at the far right of the row in the Apply Filter column.

Note: If the archive is not displayed, wait one to two minutes and click Refresh in the navigation bar.

  3. Once the source archive displays, click the Refresh button to show the progress of the final stages of the archive migration. By default, since the Exchange migration (without archive deletion) workflow policy was selected, the following steps are performed by Archive Shuttle:
    1. Disable source mailbox from Enterprise Vault archiving.
    2. Rename the source archive.
    3. Collect any remaining items for migration.
    4. Import the remaining items into the target.
    5. Delete any Enterprise Vault shortcuts in the mailbox.

Verify data has been migrated

Once all Stage 2 operations are complete, the Stage 2 (Switch User) page for the test archive will have a check mark in the Finished column.

In addition, if Outlook or Outlook Web Access is used to access the test mailbox, then all archived items will be in the personal archive and no Enterprise Vault shortcuts will remain in the source mailbox.

Migrating an archive to PST

This section outlines the steps necessary to migrate a single test archive from Enterprise Vault to a PST file.

Prerequisites

It’s essential that the following steps are complete before the migration begins:

  • All Modules are enabled.
  • Appropriate Active Directory Domains are enabled for scanning.
  • An Enterprise Vault Environment has been added for migration.
  • A Link Database has been created for the source archives.
  • Module Mappings are configured for the source archives.

Each of the above steps is described earlier in this article.

In addition, a PST Link needs to be created. This is explained later in this article.

Creating a new PST naming policy

Finalized PST files are named according to a chosen PST naming policy. Policies are managed in Archive Shuttle on the Configuration > Naming Policies page.

The Naming Policies page shows existing policies. It’s used to view, edit, and delete existing policies, and create new policies.

The following tokens can be used in a naming policy:

  • *username*: Username of the owning user (sAMAccountName)
  • *firstname*: First name of the owning user
  • *lastname*: Last name of the owning user
  • *fullname*: Full name of the owning user
  • *email*: E-mail address of the owning user
  • *upn*: User principal name of the owning user
  • *pstid*: ID of the PST file; continuous integer over all PST files
  • *pstnumber*: Number of PST files; continuous integer per user/mapping
  • *archivename*: Name of the archive
  • *archiveID*: The Enterprise Vault archive ID associated with the archive
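
To make the token behaviour concrete, the sketch below expands an example naming policy string for one user. The token names follow the table above, but the expansion logic and sample values are illustrative assumptions, not Archive Shuttle's implementation:

```python
# Illustrative expansion of PST naming policy tokens (assumed behaviour).
def expand_policy(policy: str, values: dict) -> str:
    result = policy
    for token, value in values.items():
        result = result.replace(f"*{token}*", str(value))
    return result

sample_values = {
    "username": "jsmith",
    "firstname": "John",
    "lastname": "Smith",
    "pstnumber": 2,                 # second PST produced for this user/mapping
    "archivename": "John Smith",
}

# A policy of "*username*_*pstnumber*.pst" would yield "jsmith_2.pst".
print(expand_policy("*username*_*pstnumber*.pst", sample_values))
```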

Map Containers – For the test archive

To migrate a test archive from Enterprise Vault to a PST file, the source and target containers must be mapped. To do this, follow these steps:

  1. Go to Manage & Operations > Bulk Mapping.
  2. Type the beginning of the archive name in the Container Name filter. Then, click the Apply button located at the far right of the filter row.

Note: If the test archive doesn’t display, go back to the Configuration > EV Environment page and click Sync all AD Users. Then select the vault store where the source archive is located and click Run Now.

  3. Select the checkbox next to the test archive, and click Add Mappings. A brief wizard runs, gathering information relating to the mapping for the selected archive (multiple archives can also be selected).
  4. On the Choose Target Link page, select the name of the PST Link from the drop-down list, and click Next.
  5. On the Workflow Policy page, select Native Import (PST) (without archive deletion), and then click Next. You don’t need to select the ownerless workflow policy in this situation; it’s optional.
  6. On the Filter Policy page, select Default (No Filter), and then click Next.
  7. On the PST Filename Policy page, select the name of a policy from the drop-down list, and click Next.
  8. On the Container Mapping Settings page, select Enabled for both Item Gathering Status and Migration Status, and click Next.
  9. If desired, set the mapping’s priority, and then click Next.
  10. Review the summary page, and then click Confirm.

Review Stage 1 status

A few minutes after the mapping is created, modules begin the actions defined in the mapping. Check this stage’s progress by following these steps:

  1. Go to Manage & Operations > Stage 1 (Sync Data).
  2. Type the beginning of the archive name in the Container Name filter. Click the button located at the far right of the filter row to apply the filter.
  3. Once the source archive displays, click Refresh to see the progress of the export of the archive and the ingestion of the data into a temporary PST file. Continue to click Refresh until both export and import are complete.

Validate exported data

If there is a large amount of data to export and import, then you can see the progress bars for the Stage 1 Status move by clicking Refresh from the navigation bar on the Stage 1 (Sync Data) page.

In addition, you can use Windows Explorer to browse the export/import storage area data on the disk.

In the case of PST migration, the folder on disk containing the exported data will contain a further sub-folder with a temporary PST file. You can copy it to a new location and attach it to Outlook to review its contents.

Enable Stage 2

Before enabling Stage 2 (the switch-over) for the test archive, check the Stage 1 (Sync Data) page for issues like failed item-export or item-import.

Then, to enable Stage 2 for the test archive, perform the following steps:

  1. Go to the Stage 1 (Sync Data) page.
  2. Select the checkbox next to the test archive, and then click Enable Stage 2 in the navigation bar.
  3. When the Stage 1 (Sync Data) page is refreshed, there should be a green and white check mark in the Stage 2 Enabled column.

Stage 2 is the switch-over to the target environment. A final synchronization is performed of archived items from the source environment to the temporary PST file, before several additional migration tasks are performed.

Review Stage 2 status

After a few minutes the progress of the test archive migration will be visible in the Stage 2 Status page, as follows:

  1. Go to the Stage 2 (Switch User) page.
  2. Type the beginning of the archive name in the Name filter. Click Apply at the right of the filter row.

Note: If the archive is not displayed, wait 1-2 minutes and then click Refresh.

  3. Once the source archive displays, click Refresh from time to time to see the progress of the final stages of the archive migration. By default, since the Native Import (PST) (without archive deletion) workflow policy was selected, the following steps will be performed by Archive Shuttle:
    1. Disable source mailbox from Enterprise Vault archiving.
    2. Rename the source archive.
    3. Collect any remaining items for migration.
    4. Import the remaining items into the temporary PST file.
    5. Finalize the data in the temporary PST file.
    6. Move the PST file to the PST Output Path, as defined on the PST Link, using the naming standard defined in the filename policy.
    7. Delete any Enterprise Vault shortcuts in the mailbox.
    8. Change any pending-archive items in the source mailbox back to normal items.
    9. Zap the permissions on the source archive.

Verify data has migrated

Once all of the Stage 2 operations have been completed, the Stage 2 (Switch User) page for the test archive will have a green and white check mark in the Finished column.

In addition, the PST file can be opened with Outlook from the PST Output Path.

Migrating an Archive to Office 365

This section outlines the steps to migrate a single test archive from Enterprise Vault to an Office 365 mailbox.

Prerequisites

Complete these steps before beginning the migration:

  • Enable all modules.
  • Enable appropriate Active Directory Domains for scanning.
  • Add an Enterprise Vault Environment for the migration.
  • Create a Link Database for the source archives.
  • Configure Module Mappings for the source archives.

Each of the above steps is described earlier in this article.

Note: It is assumed that the mailbox has been moved to Office 365 already and that it is the EV Archive that is to be moved.

Office 365 Global Admin

In addition, it is necessary to configure the Archive Shuttle to Office 365 connection to use credentials that have sufficient privileges. It is recommended to configure multiple accounts with Application Impersonation Rights.

Note: By default, Archive Shuttle will ingest into five containers simultaneously; therefore, it is recommended to have at least five service accounts.
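
The reasoning behind matching the number of service accounts to the ingest concurrency can be pictured with the short sketch below; the account names and container list are hypothetical, and round-robin assignment is only an illustration, not how Archive Shuttle allocates accounts:

```python
from itertools import cycle

# Hypothetical impersonation accounts and containers awaiting ingest.
service_accounts = [f"as-svc{i}@example.com" for i in range(1, 6)]
containers = [f"Archive-{n:03d}" for n in range(1, 11)]

# Round-robin assignment: with five accounts and five simultaneous ingests,
# no single account has to serve more than one container at a time.
for container, account in zip(containers, cycle(service_accounts)):
    print(f"{container} -> {account}")
```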

User Principal Name (UPN)

When mailbox information is gathered from Office 365, the mailboxes are matched against local Active Directory users by using the Primary SMTP Address.

This means that the local Active Directory users must have the same UPN as the Office 365 users, or have the same Primary SMTP Address.
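
The matching rule can be illustrated as follows. The sketch pairs an Office 365 mailbox with a local Active Directory user on the primary SMTP address, falling back to the UPN; the records are made up and the code is only a picture of the rule described above, not the product's logic:

```python
# Hypothetical records; in practice these come from Office 365 and Active Directory.
o365_mailboxes = [
    {"upn": "jsmith@contoso.com", "primary_smtp": "john.smith@contoso.com"},
]
ad_users = [
    {"upn": "jsmith@contoso.com", "primary_smtp": "john.smith@contoso.com"},
    {"upn": "akhan@contoso.com",  "primary_smtp": "amira.khan@contoso.com"},
]

def match(mailbox: dict, users: list) -> dict | None:
    # Prefer a match on primary SMTP address, then fall back to UPN.
    for user in users:
        if user["primary_smtp"].lower() == mailbox["primary_smtp"].lower():
            return user
    for user in users:
        if user["upn"].lower() == mailbox["upn"].lower():
            return user
    return None

for mbx in o365_mailboxes:
    user = match(mbx, ad_users)
    print(mbx["upn"], "->", user["upn"] if user else "no match")
```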

Configure the Office 365 Module

The Office 365 module must be supplied with credentials in order to connect to Office 365. To provide these credentials, follow the steps below:

  1. Log in to the server or workstation where the Office 365 Module is currently deployed using the account that is used to run the Archive Shuttle Office 365 Service.
  2. Click Start and locate the Credentials Editor, and then click it to launch the editor.
  3. In the small application that launches, click Add and enter the account UPN and password for the account described in the prerequisites section.
  4. Click OK on the UPN/Password dialog.
  5. Click Save in the Credentials Editor, and close the application.

These credentials will be used by the module when it connects to Office 365.

The final step in configuring the module is to start the Office 365 Module Service using the Windows Service Control Manager.

Note: Additional configuration may be necessary in order to obtain good ingest performance, and/or to work in environments where web/HTTP proxies are used. These are described in the Installation Overview.

In order to migrate an archive to Office 365, an Archive Shuttle Office 365 Link is required. Follow these steps to configure it:

  1. Go to the Links page.
  2. Click Office 365 in the Actions Bar.
  3. Click Create Link.
  4. Give the link a name, for example, O365.
  5. Click Create.

The new link needs to have Archive Shuttle Modules associated with it, and its Staging Area Path needs to match that of the EV Export module.

Sync mailbox data

Before you can map a container in the source environment to an Office 365 target, select the Office 365 link on the Links administration page, and click Sync Mailboxes in the Actions Bar.

Map Containers – For the test archive

In order to migrate a test archive from Enterprise Vault to Office 365, the source and target containers must be mapped. This can be done as follows:

  1. Go to the Bulk Mapping page.
  2. Type the beginning of the archive name in the Name filter. Click Apply, located at the right of the filter row.

Note: If the test archive is not displayed, go back to the EV Environment page and click Sync all AD Users, then select the Vault Store where the source archive is located and issue a Run Now for Archive Gathering.

  3. Select the checkbox next to the test archive, and click Add Mappings.

Note: A short wizard runs, gathering information related to the mapping for the selected archive. (Multiple archives can also be selected.)

  4. On the Target Container Type screen, select Office 365, and click Next.
  5. Choose whether the migration should be performed to an Office 365 Mailbox or Personal Archive, and click Next.
  6. Select the link to use for this operation, and click Next.
  7. Choose the Workflow Policy for this migration, and click Next.
  8. Choose the Filter Policy for this migration, and click Next.
  9. On the Container Mapping Settings screen, select Enabled for both Migration Status and Item Gathering Status, and click Next.
  10. Review the summary screen before clicking Confirm.

Review Stage 1 status

A few minutes after the mapping is created, Archive Shuttle instructs the appropriate modules to start the actions defined in the mapping. The progress of this stage of the migration can be reviewed as follows:

  1. Go to the Stage 1 (Sync Data) page.
  2. Type the beginning of the archive name in the Name filter. Click Apply at the right of the filter row.
  3. Once the source archive displays, click Refresh from time to time to show the progress of the export of the archive and the ingesting of the data into Office 365. Continue to click Refresh until both export and import reach 100%.

Validate exported data

If there is a large amount of data to export and import, then it will be possible to see the progress bars for the Stage 1 Status move by clicking Refresh from the navigation bar on the Stage 1 (Sync Data) page of the user interface.

In addition, you can use Windows Explorer to view the export/import storage area data on the disk.

Also, at this time it will be possible to log in to Outlook or Outlook Web Access as the test user. The migrated data will be present in the Office 365 mailbox, and can be freely opened and manipulated.

Enable Stage 2

Before enabling Stage 2 (the switch-over) for the test archive, check the Stage 1 (Sync Data) page for issues such as failed item-export or item-import.

To enable Stage 2 for the test archive, perform the following steps:

  1. Go to the Stage 1 (Sync Data) page.
  2. Select the checkbox next to the test archive, and click Enable Stage 2 in the navigation bar.
  3. When the Stage 1 Status page is refreshed, there should be a green and white check mark in the Stage 2 Enabled column.

Stage 2 is the switchover to the target environment. A final synchronization is performed of archived items from the source environment to the target environment, before several additional migration tasks are performed.

Review Stage 2 Status

After a few minutes, the progress of the test archive migration will be visible in the Stage 2 Status page, as follows:

  1. Navigate to the Stage 2 (Switch User) page.
  2. Type the beginning of the archive name in the Name filter. Click Apply at the right of the filter row.

Note: If the archive is not displayed, wait one to two minutes and click Refresh in the navigation bar.

  3. Once the source archive is displayed, you can click Refresh from time to time to show the progress of the final stages of the archive migration. By default, since the Office 365 (without archive deletion) workflow policy was selected, the following steps will be performed by Archive Shuttle:
    1. Rename the source archive.
    2. Collect any remaining items for migration.
    3. Import the remaining items into the target.
    4. Delete any Enterprise Vault shortcuts in the Office 365 mailbox.
    5. Change any archive-pending items back to normal items in the Office 365 mailbox.

Verify data has migrated

Once all of the Stage 2 operations have been completed, the Stage 2 (Switch User) page for the test archive will have a green and white check mark in the Finished column.

In addition Outlook or Outlook Web Access can be used to access the Office 365 mailbox to verify that all data from the Enterprise Vault archive is present and accessible.

Migrating from PST to a target environment

This section outlines the steps necessary to migrate a single test PST file to an Exchange 2010/2013 personal archive.

The steps are largely the same when migrating from source PST files to any supported target environment.

Prerequisites

It is essential that the following steps have been completed prior to beginning the migration:

  • All Modules should be enabled
  • Appropriate Active Directory Domains have been enabled for scanning
  • The Exchange target links should be configured with a specific staging area, and with modules associated to the link.

Each of the above steps is described in the previous sections of this document.

In order to use PST as a source for migration, a PST link needs to be created, and configured as follows:

  1. Go to the Links page in Archive Shuttle (Configuration -> Links).
  2. On the PST tab, click on ‘Create Link’.
  3. Give the link a meaningful name (e.g. PST Source).
  4. Specify a UNC path. In this case it can be any path, because it is not used when PST is the source.
  5. With the link still selected, click on Map Modules on the Link tab.
  6. Select a Native Export Module underneath the ‘Source’ option.
  7. With the link still selected, click on ‘Create Database’ to create a link database.

Scan PST Location(s)

The PST link also needs to be configured with one or more source UNC paths to scan for PST files so that they can be migrated. This can be achieved as follows:

  1. Go to the Links page in Archive Shuttle (Configuration -> Links).
  2. Select the PST link which was created in the previous step.
  3. On the PST tab, select the ‘PST Source’ option.
  4. Click on ‘New’ in the centre of the dialog, and enter a UNC path to scan for PSTs.
  5. If required, multiple paths can be entered.

It is not necessary to click ‘Scan’ after adding a new UNC path; the module scans new paths automatically.
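
As a rough picture of what the scan does, the sketch below walks each configured UNC path and lists the .pst files it finds. The paths are placeholders and the script only illustrates the idea; it is not the module's actual behaviour:

```python
import os

# Placeholder UNC paths; substitute the paths configured on the PST link.
scan_paths = [r"\\fileserver\LegacyPSTs", r"\\fileserver\HomeDrives"]

for root_path in scan_paths:
    for dirpath, _dirnames, filenames in os.walk(root_path):
        for name in filenames:
            if name.lower().endswith(".pst"):
                full_path = os.path.join(dirpath, name)
                size_mb = os.path.getsize(full_path) / (1024 * 1024)
                print(f"{full_path} ({size_mb:.1f} MB)")
```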


Map Containers – For the test archive

In order to migrate a test PST file to the target environment, the source and target containers must be mapped. This can be done as follows:

  1. Navigate to the “Bulk Mapping” page.
  2. Change the filter for “Type”, so it is set to PST. This will show all the discovered PST files.

The PSTs will show as ownerless, and will display the file path and file name. The number of items in the PST will show as zero at this stage.

  3. Locate the PST you wish to use to test the migration, and select it.
  4. Click on ‘Assign Archive User’ and pick an Active Directory user whose mailbox will be the target of the migration.
  5. With the container still selected, click ‘Add Mappings’ from the navigation bar.

A short wizard will start, which gathers information relating to the mapping for the selected archive. (Multiple archives can also be selected.)

  6. On the ‘Target Container Type’ screen, select ‘Exchange’, and click ‘Next’.
  7. On the ‘Choose target mailbox type’ screen, select ‘Secondary (Archive) mailbox’. In the second part of the dialog select ‘Map to primary mailbox’ if the secondary mailbox does not exist, and then click on ‘Next’.
  8. On the ‘Target User’ screen, select ‘Same User’ and click on ‘Next’.
  9. On the ‘Workflow Policy’ screen, select the ‘PST to Exchange/Office365 (without archive deletion)’ option, and click on ‘Next’.
  10. On the ‘Filter Policy’ screen, select the ‘Default (No Filter)’ option, and click ‘Next’.
  11. On the ‘Container Mapping Settings’ screen, select ‘Enabled’ for both Migration Status and Item Gathering Status, and click on ‘Next’.
  12. Review the summary screen before clicking ‘Confirm’.

If the PST is not assigned an owner, it is still possible to use the Manual Mapping screen in Archive Shuttle to perform a migration from PST to a target.

Review Stage 1 Status

A few minutes after the mapping has been created, Archive Shuttle will instruct the appropriate modules to start the actions defined in the mapping. The progress for this stage of the migration can be reviewed by doing the following:

  1. Navigate to the “Stage 1 (Sync Data)” page.
  2. Type the beginning of the archive name in the ‘Name’ filter. Click on the ‘Apply’ button at the right of the filter row.
  3. Once the source archive is displayed, click ‘Refresh’ from time to time to show the progress of the export of the archive and the ingesting of the data into the target Archive. Continue to click ‘Refresh’ until both export and import are 100%.

If further data is added to the source archive, it will be synchronized every 24 hours to the target archive via the connection made with the mapping performed in the previous section. In addition, part of the Stage 2 Workflow is to perform a ‘final sync’. Therefore, it’s not necessary to ensure that export and import have reached 100% before moving on to the next steps.

Validate exported data

If there is a large amount of data to export and import, you can view the progress bars for the ‘Stage 1 (Sync Data)’ move by clicking ‘Refresh’ from the navigation bar on the ‘Stage 1 (Sync Data)’ page of the user interface.

In addition, you can use Windows Explorer to browse the export/import storage area data on the disk.

Also at this time it will be possible to log in to Outlook or Outlook Web Access as the test user. The migrated data will be present in the Personal Archive, and can be freely opened and manipulated.

If the test user previously used the Personal Archive feature, it might be difficult to locate the migrated data. Because of this, we recommend that the test user has an empty personal archive before migration.

Enable Stage 2

Before enabling ‘Stage 2’ (the switch-over) for the test archive, check the ‘Stage 1 (Sync Data)’ page for issues such as failed item-export or item-import.

To enable ‘Stage 2’ for the test archive, perform the following steps:

  1. Navigate to the “Stage 1 (Sync Data)” page.
  2. Select the checkbox next to the test archive, and then click ‘Enable Stage 2’ in the navigation bar.
  3. When the ‘Stage 1 (Sync Data)’ page is refreshed, there should be a green and white check mark in the ‘Stage 2 Enabled’ column.

Stage 2 is the switchover to the target environment. A final synchronization of items is performed from the source PST file to the target environment, before several additional migration tasks are performed.

Review Stage 2 Status

After a few minutes, the progress of the test archive migration will be visible in the Stage 2 Status page, as follows:

  1. Navigate to the “Stage 2 (Switch User)” page.
  2. Type the beginning of the archive name in the ‘Name’ filter. Click on the ‘Apply’ button at the right of the filter row.

Note: If the archive is not displayed, wait one to two minutes and click ‘Refresh’ in the navigation bar.

  3. Once the source archive is displayed, you can click ‘Refresh’ from time to time to show the progress of the final stages of the archive migration. By default, since the workflow policy that was selected was ‘PST to Exchange/Office 365 (without archive deletion)’, the following steps will be performed by Archive Shuttle:
    1. Export any remaining items from the PST file.
    2. Import the remaining items into the target.

Verify data has migrated

Once all of the Stage 2 operations have been completed, the ‘Stage 2 (Switch User)’ page for the test archive will have a green and white check mark in the ‘Finished’ column.

In addition, if Outlook or Outlook Web Access is used to access the test mailbox then all of the archived items will be in the Personal Archive, and no Enterprise Vault shortcuts will remain in the source mailbox.

Migrating from EAS to a target environment

This section outlines the steps necessary to migrate a single test EAS archive to Enterprise Vault.

The steps are largely the same when migrating from EAS to any supported target environment.

Prerequisites

It is essential that the following steps have been completed prior to beginning the migration:

  • All Modules should be enabled
  • Appropriate Active Directory Domains have been enabled for scanning
  • The Enterprise Vault target links should be configured with a specific staging area, and with modules associated to the link.

Each of the above steps is described in the previous sections of this document.

Add an EAS Environment

In order to use EAS as a source for migration, an EAS Environment needs to be added, and configured as follows:

  1. Go to the EAS Environment page in Archive Shuttle (Configuration -> EAS Environment)
  2. Click on ‘Add’
  3. Enter the details required to connect to the EAS Environment.
  4. Go to the System Configuration page for EAS, and enter the URL to be used to connect to the EAS WebServer.

The newly added EAS Environment will be added as a Link to the Links page in Archive Shuttle, but modules may still need to be associated with it. This can be achieved as follows:

  1. Go to the Links page (Configuration -> Links)
  2. Select the EAS Link which was created previously.
  3. Click on ‘Map Modules’ and check and assign the modules on the link.

Map Containers – For the test archive

In order to migrate a test EAS archive to an Enterprise Vault environment, the source and target containers must be mapped. This can be done as follows:

  1. Navigate to the System Configuration page
  2. On the EV Import Module tab, enter the retention category ID to use for the items which will be ingested into Enterprise Vault.
  3. Navigate to the ‘Bulk Mapping’ Page.
  4. Change the filter for “Type”, so it is set to EAS. This will show all the discovered EAS archives.
  5. Locate the archive you wish to use to test the migration, and select it.
  6. With the container still selected, click ‘Add Mappings’ from the navigation bar.

A short wizard will start, which gathers information relating to the mapping for the selected archive. (Multiple archives can also be selected.)

  7. On the ‘Target Container Type’ screen, select ‘Enterprise Vault’, and click ‘Next’.
  8. On the ‘Choose target user type’ screen, select ‘Same User’, and then click on ‘Next’.
  9. On the ‘New or Existing Container’ screen, select ‘Create new Containers’ and click on ‘Next’.
  10. On the ‘Choose Target Link’ screen, choose the appropriate Enterprise Vault target link.
  11. On the ‘Workflow Policy’ screen, select the ‘EAS to Enterprise Vault (without archive deletion)’ option, and click on ‘Next’.
  12. On the ‘Filter Policy’ screen, select the ‘Default (No Filter)’ option, and click ‘Next’.
  13. On the ‘Container Mapping Settings’ screen, select ‘Enabled’ for both Migration Status and Item Gathering Status, and click on ‘Next’.
  14. Review the summary screen before clicking ‘Confirm’.

Review Stage 1 Status

A few minutes after the mapping has been created, Archive Shuttle will instruct the appropriate modules to start the actions defined in the mapping. The progress for this stage of the migration can be reviewed by doing the following:

  1. Navigate to the “Stage 1 (Sync Data)” page.
  2. Type the beginning of the archive name in the ‘Name’ filter. Click on the ‘Apply’ button at the right of the filter row.
  3. Once the source archive is displayed, click ‘Refresh’ from time to time to show the progress of the export of the archive and the ingesting of the data into the target Archive. Continue to click ‘Refresh’ until both export and import are 100%.

Validate exported data

If there is a large amount of data to export and import, you can view the progress bars for the ‘Stage 1 (Sync Data)’ move by clicking ‘Refresh’ from the navigation bar on the ‘Stage 1 (Sync Data)’ page of the user interface.

In addition, you can use Windows Explorer to browse the export/import storage area data on the disk.

Enable Stage 2

Before enabling ‘Stage 2’ (the switch-over) for the test archive, check the ‘Stage 1 (Sync Data)’ page for issues such as failed item-export or item-import.

To enable ‘Stage 2’ for the test archive, perform the following steps:

  1. Navigate to the “Stage 1 (Sync Data)” page.
  2. Select the checkbox next to the test archive, and then click ‘Enable Stage 2’ in the navigation bar.
  3. When the ‘Stage 1 (Sync Data)’ page is refreshed, there should be a green and white check mark in the ‘Stage 2 Enabled’ column.

Stage 2 is the switchover to the target environment. A final synchronization of archived items is performed from the source environment to the target environment, before several additional migration tasks are performed.

Review Stage 2 Status

After a few minutes, the progress of the test archive migration will be visible in the Stage 2 Status page, as follows:

  1. Navigate to the “Stage 2 (Switch User)” page.
  2. Type the beginning of the archive name in the ‘Name’ filter. Click on the ‘Apply’ button at the right of the filter row.

Note: If the archive is not displayed, wait one to two minutes and click ‘Refresh’ in the navigation bar.

  3. Once the source archive is displayed, you can click ‘Refresh’ from time to time to show the progress of the final stages of the archive migration. By default, since the workflow policy that was selected was ‘EAS to Enterprise Vault (without archive deletion)’, the following steps will be performed by Archive Shuttle:
    1. Disable archiving in EAS.
    2. Import the remaining items into the target.
    3. Assign the EV archive to the end user.
    4. Enable the mailbox for archiving in Enterprise Vault.
    5. Rename the target archive (to remove the ‘Archive Shuttle’ naming).
    6. Update all existing EAS shortcuts in the mailbox and convert them to Enterprise Vault shortcuts.

Verify data has migrated

Once all of the Stage 2 operations have been completed, the ‘Stage 2 (Switch User)’ page for the test archive will have a green and white check mark in the ‘Finished’ column.

In addition, if Outlook or Outlook Web Access is used to access the test mailbox, all of the archived items will be accessible in the new Enterprise Vault archive, and the EAS shortcuts in the mailbox will have been converted to Enterprise Vault shortcuts.

Migrating from DAM to a target environment

This section outlines the steps necessary to migrate a single test DAM archive to a PST file.

The steps are largely the same when migrating from DAM to any supported target environment.

Prerequisites

It is essential that the following steps have been completed prior to beginning the migration:

  • All Modules should be enabled
  • Appropriate Active Directory Domains have been enabled for scanning
  • The PST target links should be configured with a specific staging area, and with modules associated to the link.
  • The PST Rollover Threshold should be configured to an appropriate value.
  • The PST Filename policy should be defined.

Each of the above steps is described in the previous sections of this document.
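
For background on the PST Rollover Threshold mentioned in the prerequisites, the sketch below shows the general idea: items are written to the current PST file until a size threshold is reached, at which point a new file is started. The threshold and item sizes are assumed values for illustration, not Archive Shuttle's implementation:

```python
# Conceptual illustration of rolling over to a new PST file at a size threshold.
ROLLOVER_THRESHOLD_MB = 5120          # assumed threshold of 5 GB
item_sizes_mb = [120, 800, 2500, 1900, 300, 4200]  # hypothetical item sizes

pst_number = 1
current_size = 0
for size in item_sizes_mb:
    if current_size and current_size + size > ROLLOVER_THRESHOLD_MB:
        pst_number += 1               # start a new PST file
        current_size = 0
    current_size += size
    print(f"Item of {size} MB -> PST #{pst_number} (now {current_size} MB)")
```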

Add a DAM Environment

In order to use DAM as a source for migration, a DAM Environment needs to be added, and configured as follows:

  1. Go to the DAM Environment page in Archive Shuttle (Configuration -> Dell Archive Manager Environment)
  2. Click on ‘Add’
  3. Enter the details required to connect to the DAM Environment

The newly added DAM Environment will be added as a Link to the Links page in Archive Shuttle, but modules may still need to be associated with it. This can be achieved as follows:

  1. Go to the Links page (Configuration -> Links)
  2. Select the DAM Link which was created previously.
  3. Click on ‘Map Modules’ and check and assign the modules on the link.

Map Containers – For the test archive

In order to migrate a test DAM archive to a PST file, the source and target containers must be mapped. This can be done as follows:

  1. Navigate to the ‘Bulk Mapping’ Page.
  2. Change the filter for “Type”, so it is set to DAM. This will show all the discovered DAM archives.
  3. Locate the archive you wish to use to test the migration, and select it.
  4. With the container still selected, click ‘Add Mappings’ from the navigation bar.

A short wizard will start, which gathers information relating to the mapping for the selected archive. (Multiple archives can also be selected.)

  5. On the ‘Target Container Type’ screen, select ‘PST’, and click ‘Next’.
  6. On the ‘Choose a format’ screen, select ‘PST’, and then click on ‘Next’.
  7. On the ‘Choose Target Link’ screen, choose the appropriate PST link.
  8. On the ‘Workflow Policy’ screen, select the ‘Dell Archive Manager to PST’ option, and click on ‘Next’.
  9. On the ‘Filter Policy’ screen, select the ‘Default (No Filter)’ option, and click ‘Next’.
  10. On the ‘Container Mapping Settings’ screen, select ‘Enabled’ for both Migration Status and Item Gathering Status, and click on ‘Next’.
  11. Review the summary screen before clicking ‘Confirm’.

Review Stage 1 Status

A few minutes after the mapping has been created, Archive Shuttle will instruct the appropriate modules to start the actions defined in the mapping. The progress for this stage of the migration can be reviewed by doing the following:

  1. Navigate to the “Stage 1 (Sync Data)” page.
  2. Type the beginning of the archive name in the ‘Name’ filter. Click on the ‘Apply’ button at the right of the filter row.
  3. Once the source archive is displayed, click ‘Refresh’ from time to time to show the progress of the export of the archive and the ingesting of the data into the target Archive. Continue to click ‘Refresh’ until both export and import are 100%.

Validate exported data

If there is a large amount of data to export and import, you can view the progress bars for the ‘Stage 1 (Sync Data)’ move by clicking ‘Refresh’ from the navigation bar on the ‘Stage 1 (Sync Data)’ page of the user interface.

In addition, you can use Windows Explorer to browse the export/import storage area data on the disk.

Enable Stage 2

Before enabling ‘Stage 2’ (the switch-over) for the test archive, check the ‘Stage 1 (Sync Data)’ page for issues such as failed item-export or item-import.

To enable ‘Stage 2’ for the test archive, perform the following steps:

  1. Navigate to the “Stage 1 (Sync Data)” page.
  2. Select the checkbox next to the test archive, and then click ‘Enable Stage 2’ in the navigation bar.
  3. When the ‘Stage 1 (Sync Data)’ page is refreshed, there should be a green and white check mark in the ‘Stage 2 Enabled’ column.

Stage 2 is the switchover to the target environment. A final synchronization is performed of archived items from the source environment to the temporary PST file, before several additional migration tasks are performed.

Review Stage 2 Status

After a few minutes, the progress of the test archive migration will be visible in the Stage 2 Status page, as follows:

  1. Navigate to the “Stage 2 (Switch User)” page.
  2. Type the beginning of the archive name in the ‘Name’ filter. Click on the ‘Apply’ button at the right of the filter row.

Note: If the archive is not displayed, wait one to two minutes and click ‘Refresh’ in the navigation bar.

  3. Once the source archive is displayed, you can click ‘Refresh’ from time to time to show the progress of the final stages of the archive migration. By default, since the workflow policy that was selected was ‘Dell Archive Manager to PST’, the following steps will be performed by Archive Shuttle:
    1. Disable archiving in DAM.
    2. Import the remaining items into the target.
    3. Close the PST file.
    4. Rename the PST file, according to the file name policy.
    5. Delete the shortcuts from the mailbox.

Verify data has migrated

Once all of the Stage 2 operations have been completed, the ‘Stage 2 (Switch User)’ page for the test archive will show a green and white check mark in the ‘Finished’ column.

The PST file that was created can then be attached to Outlook and reviewed to ensure that the data was successfully migrated.

Migrating from SourceOne or EmailXtender to a target environment

This section of the document outlines the steps necessary to complete the migration of a single test SourceOne archive to PST. The same steps can be used to migrate from EmailXtender.

The steps are largely the same when migrating from SourceOne to any supported target environment.

 

Prerequisites

It is essential that the following steps have been completed prior to beginning the migration:

  • All Modules should be enabled
  • Appropriate Active Directory Domains have been enabled for scanning
  • The PST target links should be configured with a specific staging area, and with modules associated to the link.
  • The PST Rollover Threshold should be configured to an appropriate value.
  • The PST Filename policy should be defined.

Each of the above steps is described in the previous sections of this document.

Add a SourceOne Environment

In order to use SourceOne as a source for migration, a SourceOne Environment needs to be added, and configured as follows:

  1. Go to the SourceOne Environment page in Archive Shuttle (Configuration -> SourceOne Environment)
  2. Click on ‘Add’
  3. Enter the details required to connect to the SourceOne Environment

The newly added SourceOne Environment will be added as a Link to the Links page in Archive Shuttle, but modules may still need to be associated with it. This can be achieved as follows:

  1. Go to the Links page (Configuration -> Links)
  2. Select the SourceOne Link which was created previously.
  3. Click on ‘Map Modules’ and check and assign the modules on the link.

Map Containers – For the test archive

In order to migrate a test SourceOne archive to PST file, the source and target containers must be mapped. This can be done as follows:

  1. Navigate to the ‘Bulk Mapping’ Page.
  2. Change the filter for “Type”, so it is set to SourceOne. This will show all the discovered SourceOne archives.
  3. Locate the archive you wish to use to test the migration, and select it.
  4. With the container still selected, click ‘Add Mappings’ from the navigation bar.

A short wizard will start, which gathers information relating to the mapping for the selected archive. (Multiple archives can also be selected.)

 

  1. On the ‘Target Container Type’ screen, select ‘PST’, and click ‘Next’.
  2. On the ‘Choose a format’ screen, select ‘PST’, and then click on ‘Next’.
  3. On the ‘Choose Target Link’ screen, choose the appropriate PST link.
  4. On the ‘Workflow Policy’ screen, select the ‘SourceOne to PST’ option, and click on ‘Next’.
  5. On the ‘Filter Policy’ screen, select the ‘Default (No Filter)’ option, and click ‘Next’.
  6. On the ‘Container Mapping Settings’ screen, select ‘Enabled’ for both Migration Status and Item Gathering Status, and click on ‘Next’.
  7. Review the summary screen before clicking ‘Confirm’.

Review Stage 1 Status

A few minutes after the mapping has been created, Archive Shuttle will instruct the appropriate modules to start the actions defined in the mapping. The progress for this stage of the migration can be reviewed by doing the following:

  1. Navigate to the “Stage 1 (Sync Data)” page.
  2. Type the beginning of the archive name in the ‘Name’ filter. Click on the ‘Apply’ button at the right of the filter row.
  3. Once the source archive is displayed, click ‘Refresh’ from time to time to show the progress of the export of the archive and the ingesting of the data into the target Archive. Continue to click ‘Refresh’ until both export and import are 100%.

Validate exported data

If there is a large amount of data to export and import, you can view the progress bars for the ‘Stage 1 (Sync Data)’ move by clicking ‘Refresh’ from the navigation bar on the ‘Stage 1 (Sync Data)’ page of the user interface.

In addition, you can use Windows Explorer to browse the export/import storage area data on the disk.

Enable Stage 2

Before enabling ‘Stage 2’ (the switchover) for the test archive, the ‘Stage 1 (Sync Data)’ page should be checked for issues such as failed item exports or item imports.

To enable ‘Stage 2’ for the test archive, perform the following steps:

  1. Navigate to the “Stage 1 (Sync Data)” page.
  2. Select the checkbox next to the test archive, and then click ‘Enable Stage 2’ in the navigation bar.
  3. When the ‘Stage 1 (Sync Data)’ page is refreshed, there should be a green and white check mark in the ‘Stage 2 Enabled’ column.

Stage 2 is the switchover to the target environment. A final synchronization of archived items is performed from the source environment to the temporary PST file, before several additional migration tasks are performed.

Review Stage 2 Status

After a few minutes, the progress of the test archive migration will be visible in the Stage 2 Status page, as follows:

  1. Navigate to the “Stage 2 (Switch User)” page.
  2. Type the beginning of the archive name in the ‘Name’ filter. Click on the ‘Apply’ button at the right of the filter row.

If the archive is not displayed, wait one to two minutes and click ‘Refresh’ in the navigation bar.

 

  1. Once the source archive is displayed, you can click ‘Refresh’ from time to time to show the progress of the final stages of the archive migration. By default, since the workflow policy that was selected was ‘SourceOne to PST’, the following steps will be performed by Archive Shuttle:
    1. Import the remaining items into the target.
    2. Close the PST file.
    3. Rename the PST file, according to the file name policy.
    4. Delete the shortcuts from the mailbox.

Verify data has migrated

Once all of the Stage 2 operations have been completed, the ‘Stage 2 (Switch User)’ page for the test archive will show a green and white check mark in the ‘Finished’ column.

The PST file that was created can then be attached to Outlook and reviewed to ensure that the data was successfully migrated.

Migrating a Journal Archive

Migrating a journal archive can take a considerable length of time due to the size of the archive itself in the source environment. It is possible to migrate journal archives to any of the supported Archive Shuttle targets, as described in the following sections.

Migrating to Native Format or Exchange

When migrating a journal archive out of Enterprise Vault to a non-Enterprise Vault target environment, it is recommended to make use of a new Archive Shuttle feature relating to folder splitting. Many target environments have limits on the number of items in a single folder, or perform poorly when there are a large number of items in a single folder, as is the case with Native Format (PST).

For information relating to migrating journal archives to Office 365, using Virtual Journals, see the next section in the Administration Guide.

 

If the migration is to take place to Native Format or Exchange, navigate to the System Configuration page in the Archive Shuttle Admin Interface, and configure the following settings:

Item Description
Journal Archive Split to Folders – Base Folder Name This is the name of the top-level folder to ingest items into in the target environment.
Journal Archive Split to Folders – Max Items Per Folder This is the number of items that should be placed in each folder before a new folder is created.

The splitting of items is performed after the Enterprise Vault Collector process has gathered metadata about the items to migrate.

 

Migrating to Enterprise Vault

This section outlines the steps that are required in order to complete the migration of a single journal archive within Enterprise Vault to a new Vault Store.

The first section explains how to perform the migration using a single container mapping. It is necessary to perform these steps if the migration is from a version of Enterprise Vault prior to 10.0.4. In Enterprise Vault 10.0.4, changes were made to the Enterprise Vault API that allow a new method to be used to extract and ingest data. The second part of this topic explains how to perform a journal archive migration if Enterprise Vault 10.0.4 or later is the source.

Pre-Enterprise Vault 10.0.4 Source

Prerequisites

It is essential that the following steps have been completed prior to beginning the migration:

  • All Modules are enabled
  • Appropriate Active Directory Domains are enabled for scanning
  • An Enterprise Vault Environment has been added for migration
  • A Link Database has been created for the source archive
  • Module Mappings are configured for the source Vault Store and also the target Vault Store

Each of the above steps is described in the previous sections of this document.

The target for the journal archive migration must also:

  • Be a ‘user’ and be set up with a mailbox
  • Have a journal archive created for it, in an appropriate Vault Store
  • Be listed as a Journal Target in the Vault Administration Console

The Enterprise Vault journal task is used to ingest the data in to the target journal archive container.

 

Map Containers – For the journal archive

The first thing to do when migrating a journal archive to a new vault store is to do a container mapping. This is slightly different to the process followed earlier in the document since we must do a ‘single container mapping’. To set up the mapping, follow these steps:

  1. Click ‘Manual Mapping’.

You will now see two lists of containers.

  2. Locate the source journal archive in the left hand list, and select the radio button next to the archive.
  3. Locate the target journal archive in the right hand list, and select the radio button next to the archive.

All the normal filtering and searching is available on the Single Container Mapping screen, which can help identify the correct archives when there are many journal archives displayed on screen.

  4. Click ‘Add Mapping’.

Set Policies and Enable Migration

Once the container mapping has been selected, it is necessary to add two more components to the link before the migration can be enabled. To set up the migration, follow these steps:

  1. Click ‘Existing Mappings’.

All the normal filtering and searching is available on the Existing Mapping screen, which can help identify the mapping.

  2. Locate the mapping that was created in the previous step.
  3. Select the checkbox next to the mapping.
  4. From the toolbar, select ‘Set Workflow Policy’, and choose the Journal Workflow.
  5. From the toolbar, select ‘Set Filter Policy’, and choose a filter if one is required.
  6. Finally, from the toolbar select ‘Enable for Migration’, and ‘Enable Item Gathering’.

The container mapping will now look much like other mappings that may have been created. Ensure that the ‘Item Gathering’ and ‘Migration Enabled’ flags are displayed before moving on to monitoring the migration.

 

Monitoring the Migration

Monitoring the ‘Stage 1’ sync process for a journal archive is the same as with other types of archive migration. A few minutes after setting the policies and enabling migration (see previous section), the container will appear on the ‘Stage 1 (Sync Data)’ screen in the Archive Shuttle administration console.

Performing Stage 2 and Extra Steps

Migrating a journal archive is slightly different to migrating an ordinary Enterprise Vault archive. There are a few additional steps, as outlined below:

  1. Stop the Enterprise Vault Journal Task, and set the startup of the task to Disabled.

The journal mailbox that the task was targeting will now start to grow in size because Enterprise Vault is no longer processing items from it.

  2. In Archive Shuttle, go to the ‘Stage 1 (Sync Data)’ screen.
  3. Locate the container mapping for the journal archive, and select the check box next to it.
  4. From the toolbar, select ‘Enable Stage 2’.
  5. After a few minutes, observe the ‘Stage 2 (Switch User)’ screen and ensure that after a few more minutes the migration is marked as complete.

The Journal Archive has now been moved to a new Vault Store, and a new archive. The final step is to change the journaling process in Enterprise Vault. Usually this is a matter of changing the journal target to be the new archive, and restarting the journal task after setting the task startup back to Automatic.

Depending on the migration being performed, it may also be necessary to reconfigure the Exchange journaling configuration to journal to the new ‘mailbox’.

 

Enterprise Vault 10.0.4 or later as the Source

Prerequisites

It is essential that the following steps have been completed prior to beginning the migration:

  • All Modules are enabled
  • Appropriate Active Directory Domains are enabled for scanning
  • An Enterprise Vault Environment has been added for migration
  • A Link Database has been created for the source archive
  • Module Mappings are configured for the source Vault Store and also the target Vault Store

Each of the above steps is described in the previous sections of this document.

Map Containers – For the journal archive

The first thing to do when migrating a journal archive to a new vault store is to create a container mapping. To set up the mapping, follow these steps:

  1. Navigate to the “Bulk Mapping” page.
  2. Type the beginning of the archive name in the ‘Name’ filter. Click the ‘Apply’ button at the right of the filter row.
  3. Select the checkbox next to the archive, and click ‘Add Mappings’ from the navigation bar.

A short wizard will start, which gathers information relating to the mapping for the archive.

 

  4. The destination of the migration can then be chosen from the available list and then subsequent pages of the wizard will prompt for further information.

It is recommended that a workflow policy is created that suits the requirements, or that the ‘Journal Archive’ workflow is chosen.

 

Monitoring the Migration

Monitoring the ‘Stage 1’ sync process for a journal archive is the same as with other types of archive migration. A few minutes after setting the policies and enabling migration (see previous section), the container will appear on the ‘Stage 1 (Sync Data)’ screen in the Archive Shuttle administration console.

Performing Stage 2 and Extra Steps

Migrating a journal archive is slightly different to migrating an ordinary Enterprise Vault archive. There may be some additional steps as outlined below:

Before enabling ‘Stage 2’ (the switchover) for the test archive, the ‘Stage 1 (Sync Data)’ page should be checked for issues such as failed item exports or item imports.

To enable ‘Stage 2’ for the test archive, perform the following steps:

  1. Navigate to the “Stage 1 (Sync Data)” page.
  2. Select the checkbox next to the test archive, and click on ‘Enable Stage 2’ in the navigation bar.
  3. When the ‘Stage 1 (Sync Data)’ page is refreshed, there should be a green and white check mark in the ‘Stage 2 Enabled’ column.

Stage 2 is the switchover to the target environment. A final synchronization of archived items is performed from the source environment to the target environment, before several additional migration tasks are performed.

The Journal Archive has now been moved to a new Vault Store, and a new archive.

Depending on the migration being performed, it may also be necessary to reconfigure the Exchange journaling configuration to journal to the new ‘mailbox’.

 

Migrating a Journal Archive to Office 365

Journal archives in source environments are typically large. Their multi-terabyte size currently makes it quite hard to migrate the data to Office 365, as there are limits imposed by Microsoft on both the mailbox and personal archive sizes. To be able to migrate a large journal archive to Office 365, Archive Shuttle introduced the concept of a ‘Virtual Journal’.

 

The general process for doing this is:

 

1. Define the naming scheme for the target. It is suggested to prefix or postfix names, e.g. AL-<archivename> or <archivename>-Departed. This makes the data easier to find in the target after the migration has completed.

2. Define the allowed number of rolling licenses.

3. Decide on the type of ‘hold’ to place on the migrated data.

4. Configure the maximum number of items allowed per child container, and maximum size allowed.

5. Perform the mapping.

In the background what will happen is:

a. Provisioning

1. A user account will be created according to the naming scheme.

2. A Personal Archive will be created if it was required in the mapping.

3. The mailbox/personal archive will be placed on the selected type of hold.

4. A license will be assigned from the pool.

5. The data from Office 365 about the ‘user’ will be synchronized into Archive Shuttle

b. Migration

1. Data will be exported after the provisioning process is done.

2. Data will be imported soon after it is exported. When a particular child container is full, as determined by the system settings, a new one will be created.

c. Stage 2

1. The license which was assigned in order to be able to ingest data into the mailbox (or Personal Archive) will be removed (usually as the last step) and returned to the pool, so that another mapping can take that license and complete the provisioning step.

To use the Virtual Journal feature there are a number of steps that need to be performed, as outlined below:

Configuring the Office 365 Environment

Naming Scheme

This determines how mailboxes and accounts will be created, using a token system. The following are tokens which can be used:

  • Archive Name
  • Archive ID
  • Container Mapping ID
  • PST ID
  • PST Number

Note: PST ID and PST Number are applicable if PSTs are the source environment.

In addition to those, alphanumeric characters can be appended or prepended to the name of the mailbox in Office 365.

Usage Location

This is the location relating to license allocation, and it must be chosen from the drop-down list.

E-Mail Domain

This should be a valid email domain which will be used to create mailboxes, e.g. @something.onmicrosoft.com.

Hide from GAL

If enabled the mailbox/user will be hidden from the Global Address List.

Legal Hold

Legal hold can be configured if required. There are two types of legal hold (Litigation and In-Place), and the following options are available:

  • Litigation
  • In-Place
  • Both

If In-Place or Both is to be used, then a profile must have been created in Office 365 first, and the ‘In-Place Hold Identifier’ needs to be filled in with that profile name.

Set Rolling License Count

A specific number of licenses can be used to process the virtual journal mappings. That number should be configured by clicking on ‘Set Rolling License Count’. Licenses will be consumed up to that limit, and will be freed up when stage 2 completes on those mappings.

Note: There is a System Configuration option which can be enabled to migrate the data into Office 365 journal format, if required.

Office 365 Configuration Changes

Virtual Journal Item Count Limit

This determines the maximum number of items which will be in a particular mapping before rolling over to a new mapping.

Virtual Journal Item Size Limit

This determines the maximum size of a mapping before rolling over to a new mapping.

Perform the Journal Mapping

Once the above configuration changes have been made, the journal archive can be mapped in the normal way, except that you should choose the option in the wizard to process the mapping as a virtual journal.

Migrating a Shared Mailbox Archive

This section of the document will outline the steps that are necessary in order to complete the migration of an archive shared amongst a group of users. The migration will be from one Enterprise Vault environment to another.

In Enterprise Vault, this may simply be a regular mailbox archive, so that it maintains the structure from the mailbox where the data comes from. The mailbox itself will ultimately have one true owner in Active Directory, but a number of other users and/or groups may also have access to both the mailbox and the archive.

Capture Permissions on the Source

There are a number of ways in which the permissions can be shown for an archive, including the following:

  • Open the properties of the archive in the Vault Administration Console, and check the entries on the permissions tab
  • Use a script similar to the one provided on the Symantec Connect web site:

https://vox.veritas.com/t5/Articles/Script-to-know-AD-permission-assigned-on-Archives-Folders/ta-p/806130

Build an EVPM file

From the list of permissions on the source archive, an EVPM file should be built. There are details about how to use EVPM in the Enterprise Vault Utilities Guide. At a high level, the steps are as follows (an illustrative sketch follows the list):

  • The Directory section should contain information about the Enterprise Vault Directory computer and the Enterprise Vault site name.
  • A section should be added called ‘ArchivePermissions’.
  • The Archive Name property should be added to this section with the name of the archive.
  • The Grant Access property should be added to this section with a list of people to grant access to the archive. These should be listed one per line.
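
As a rough illustration of the structure described above, the following PowerShell sketch writes a minimal EVPM initialization file. The server, site, archive, and account names are placeholders, and the exact property names, permission values, and file encoding should be verified against the Enterprise Vault Utilities Guide for the version in use.

    # Illustrative sketch only: placeholder names; verify EVPM syntax against
    # the Enterprise Vault Utilities Guide before running EVPM with this file.
    $evpmLines = @(
        '[Directory]'
        'DirectoryComputerName = EVSERVER01'
        'SiteName = EVSite1'
        ''
        '[ArchivePermissions]'
        'ArchiveName = Shared Project Archive'
        'GrantAccess = read, write, delete, CONTOSO\UserA'
        'GrantAccess = read, write, delete, CONTOSO\UserB'
    )

    # EVPM initialization files are normally saved in Unicode
    Set-Content -Path 'C:\Temp\SharedArchivePermissions.ini' -Value $evpmLines -Encoding Unicode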

Perform the Migration

The migration of the archive/container can then be performed. Particular care should be taken in relation to migrating these types of archives/containers because the migration has an impact on a number of people.

Apply the EVPM file

Following the successful migration of the Shared Mailbox / Archive, the EVPM file that was created previously should be run to grant permissions on the target/new archive.

Migrating ‘Leavers’ to Office 365

Archive Shuttle makes it possible to manage a pool of licenses to provision and migrate data to Office 365 mailboxes or archives which are ownerless. It is even possible to treat an archive as though it is ownerless, and migrate it using this process, even if an owner is showing in the user interface.

The general process for doing this is:

1. Define the naming scheme for the target. It is suggested to prefix or postfix names, e.g. AL-<archivename> or <archivename>-Departed. This makes the data easier to find in the target after the migration has completed.

2. Define the allowed number of rolling licenses.

3. Decide on the type of ‘hold’ to place on the migrated data.

4. Perform the mappings.

In the background what will happen is:

a. Provisioning

1. A user account will be created according to the naming scheme.

2. A Personal Archive will be created if it was required in the mapping.

3. The mailbox/personal archive will be placed on the selected type of hold.

4. A license will be assigned from the pool.

5. The data from Office 365 about the ‘user’ will be synchronized into Archive Shuttle

b. Migration

1. Data will be exported after the provisioning process is done.

2. Data will be imported soon after it is exported.

c. Stage 2

1. The familiar parts of the workflow will still occur, such as renaming the source archive, doing a final delta, and so on.

2. The license which was assigned in order to be able to ingest data into the mailbox (or Personal Archive) will be removed (usually as the last step) and returned to the pool, so that another mapping can take that license and complete the provisioning step.

How to set this up is described in the following section.

Requirements

The normal Office 365 migration requirements apply (see the earlier section). In addition, it should be noted that the Azure management tools are required. These can be downloaded from: https://msdn.microsoft.com/en-us/library/azure/jj151815.aspx

If these components are not installed, the normal Office 365 migrations will still be successful, but processing of leavers will not be successful. They can be added at any time during the migration; it is not necessary to reinstall or modify the Office 365 module following their installation.

Configuring the Office 365 Environment

Naming Scheme

This determines how mailboxes and accounts will be created, using a token system. The following are tokens which can be used:

  • Archive Name
  • Archive ID
  • Container Mapping ID
  • PST ID
  • PST Number

Note: PST ID and PST Number are applicable if PSTs are the source environment.

In addition to those, alphanumeric characters can be appended or prepended to the name of the mailbox in Office 365.

Usage Location

This is the location relating to license allocation, and it must be chosen from the drop-down list.

E-Mail Domain

This should be a valid email domain which will be used to create mailboxes, e.g. @something.onmicrosoft.com.

Hide from GAL

If enabled the mailbox/user will be hidden from the Global Address List.

Legal Hold

Legal hold can be configured if required. There are two types of legal hold (Litigation and In-Place), and the following options are available:

  • Litigation
  • In-Place
  • Both

If In-Place or Both is to be used, then a profile must have been created in Office 365 first, and the ‘In-Place Hold Identifier’ needs to be filled in with that profile name.

Load Licenses

Licenses can be loaded by clicking on the ‘Load Licenses’ button on the Archive Shuttle ribbon. These will also be loaded after adding a new Office 365 link.

Set Rolling License Count

A specific number of licenses can be used to process leavers. That number should be configured by clicking on ‘Set Rolling License Count’. Licenses will be consumed up to that limit to migrate users, and will be freed up when stage 2 completes on those users. When mailboxes are created, personal archives are also created.

Perform the Migration

The migration of the archive/container can then be performed. The process of mapping the users is the same as with a normal Office 365 migration; however, there is an additional option in the Mapping Wizard where you can select to use Office 365 Leavers, and you should select an Office 365 Leavers Workflow.

A container can be processed as a leaver if one or more of the following is satisfied:

  • It is ownerless in the user interface
  • The Active Directory account is disabled
  • The Active Directory account is deleted and exists in the Active Directory Dumpster. Note: The account running the AD Collector module must have permissions to read the contents of the dumpster.

Note: Archives will be exported, but only the configured number of rolling licensed users will ingest data into the target.

Stage 2

The Stage 2 workflow for Office 365 leavers functions in a similar manner to the normal Office 365 workflows. There is an additional step in the workflow which handles the removal of licenses. Once the license has been removed, if there are additional users that require migration using the leavers process then these will begin migration.

Journal Explosion

Archive Shuttle makes it possible to migrate a journal archive to a target environment in a special way which fans out, or ‘explodes’, the recipient/custodian list. This process is termed journal explosion. If an original message was addressed to three different people, then the item is exported and ingested into up to four target containers.

This section of the guide provides information on this feature.

Here are some reasons you might want to use Journal Explosion:

  • It causes messages to be delivered/ingested into individual user mailboxes, in the same way the Exchange or Office 365 modules do in regular migrations, rather than into a separate mailbox.
  • If a user’s data becomes corrupt or lost, journal archives can be used to recreate the user’s data.
  • It can be used for leavers who lack archives.
  • It is easy to distinguish external email addresses.
  • It can help fulfil legal/compliance/regulatory requirements.

Requirements for Journal Explosion

The following source environments are supported:

  • Enterprise Vault journal archives
  • Enterprise Vault SMTP archives
  • PST files

The following target environments are supported:

  • Microsoft Exchange (Exchange 2010 and later)
  • Office 365
  • PST files

It is recommended to have a dedicated staging area for journal explosion. The recommended size is 200-250 GB per source journal archive.

Procedure for Journal Explosion

There are a number of steps that need to be followed in order to configure and perform journal explosion. They are covered below:

Configure Journal Explosion Link

On the Links page, there is a tab labeled ‘Journal Explosion’. Select it, and create a new Journal Explosion link.

The link can be given a name, and the target type must be selected from the available list.

Once the link has been created, an ingest module must be mapped to the link.

Check/Confirm general location for ingest items

On the System Configuration section of the user interface, go to the Journal Explosion Settings tab.

On this page you can set a general folder to be used when ingesting the items into the target container. This can be overridden on an individual user mapping. If one is not specified then the ‘Recoverable Items Folder’ is used in the target container.

Create Mapping

The source environment, associated staging areas, and module mappings then need to be configured. Once those are done a mapping can be created for the source container.

When creating the mapping, there is an option to use Journal Explosion, rather than normal migration.

Export starts

Journal explosion is done during export. As the item is exported, the P1 information is reviewed for SMTP addresses. These addresses then need to be mapped as described in the next section.

Journal Explosion User Management

As items are exported from the source container and analyzed for recipient information, a list is built up which needs to be reviewed, mapped, and enabled for migration. This is done on the Journal Explosion User Management page in the user interface.

On the Journal Explosion user management page, which will initially show no mapped users, select the journal mapping, and then choose ‘Add’. The pop-up page will show users/SMTP addresses discovered from the items which have been exported. Select those which you wish to process/map, and click on Save/Close. The chosen addresses will now be shown in the grid on the page. If the addresses exist in the current environment (as collected by the Active Directory Collector), a suggested mapping will be given. This can be overridden if required. External and non-existent SMTP addresses can also be mapped.

To perform a mapping, select one or more entries on the journal explosion user management page and click on ‘Manual Mapping’. On the pop-up screen you can define the mapping you wish to create.

The mappings are shown in the grid on the journal explosion user management page.

Before enabling the user-mapping for migration (i.e. ingestion) it is possible to configure a folder policy (as described in the next section).

Once the mappings are made and a folder policy configured (if required), one or more of the mappings can be enabled for migration.

Note: Mappings can be imported from CSV. The CSV file should consist of the target UPN or SAMAccountName followed by a comma followed by the source SMTP address.
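
For illustration only, with hypothetical accounts and domains, a mapping file in that format could contain lines such as:

    john.smith@target.onmicrosoft.com,journal.sender1@legacydomain.com
    CONTOSO\ajones,external.contact@partnerdomain.com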

It is important to go back to the journal explosion user management page and click on ‘Add’ from time to time. Newly discovered SMTP addresses will be added there as items are exported from the source container.

Folder Name Policy

Rather than ingesting the items into the folder which is defined in the System Configuration, a folder name policy can be defined and associated with a user or group of users. If a folder policy is not defined for a user, then the folder specified in the System Configuration is used.

Enable the users for import

Once the mappings are made and a folder policy configured (if required), one or more of the mappings can be enabled for migration.

The mappings which have been created to existing users/target containers will progress as normal. For example, if you have mapped a source SMTP address of abc@somedomain.com to an existing mailbox with an address of abc@somedomain.com, then the import process will begin for that mapping.

For ownerless mappings, the configuration defined on the Office 365 Environment page will be used. Mailboxes will be created and licensed as required, using a rolling license count. This operates in much the same way as the process for migrating ownerless archives to Office 365. The difference is that the Office 365 license will be added, and removed, as required, during Stage 1. The addition and removal of the license may happen multiple times.

For ownerless mappings migrating to Exchange you should create mailboxes, if required, or map the source SMTP address to an existing mailbox.

Tracking progress

A journal explosion mapping can be viewed on the Stage 1 page in the user interface. However, the import figures will show as N/A. In order to properly see the current ingest progress you should click on ‘Actions’ and enable the ‘Journal Explosion View’. The page will then refresh and show the normal export routing count, the exploded routing count and the true imported count figures.

It is also possible on this page to expand the mapping and see the progress on a per-user basis. This can also be exported to supported formats (e.g. XLS, PDF).

Additional Notes

There are some additional notes to be aware of when using Journal Explosion:

  • Items may be skipped on ingest. This happens when an item to be ingested into a user mapping is older than the creation date of the user. These can be reviewed in the log files and database. If assistance is needed please contact support.
  • User mappings can be imported from CSV file to aid with the handling of a large list of recipients.
  • When an item has been imported into all applicable user mappings it will be removed from the staging area. Later, if a new user mapping is created, requiring that same item, it will be re-exported. This will not affect the license usage.

Using HOTS

HOTS is a feature of Archive Shuttle which reduces bandwidth usage for migrations to Office 365 by combining a highly-optimised transfer system with storage of extracted data in Microsoft Azure.

Requirements

For migrations from legacy archives to Office 365 using HOTS the following needs to be considered:

  • Using HOTS is supported for all currently supported sources, when migrating to Office 365 mailboxes or Personal Archives.
  • More CPU usage might be needed on the source environment in order to create the HOTS-format data
  • An Azure Storage Account must be configured and used for storing the extracted data
  • All export and import modules must have been configured with the connection string to the Azure Storage Account
  • A bridgehead server running the ingest and shortcut processing module should be deployed in Azure to facilitate the ingestion of the data from the Azure Storage Account. (The specification of the bridgehead server is given in the Installation Guide)

Note: It is possible to limit the storage used in Azure by configuring a high watermark for the links.

Additional Configuration

The following needs to be performed in Archive Shuttle in order to utilize HOTS:

System Setting

In Archive Shuttle, HOTS must be enabled in System Configuration -> General.

Configuration on each module

Each Export Module must be configured with a connection string in order to be able to send data to the Azure Storage Account. This is configured in the Credentials Editor. The connection string is obtained from the Azure Portal. The corresponding Import Module needs to also be configured with the credentials.
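
For reference, an Azure Storage Account connection string copied from the Azure Portal generally takes the following form; the account name and key shown here are placeholders:

    DefaultEndpointsProtocol=https;AccountName=<storage-account-name>;AccountKey=<storage-account-key>;EndpointSuffix=core.windows.net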

Configuration on link

The source and target links need to use an appropriate Azure Storage Account connection. This is done from the Links page within Archive Shuttle.

High Watermark on link

By default, Microsoft Azure places a very high maximum allowance on Storage Accounts. Customers may wish to restrict the usage of the Storage Account because it may involve additional cost. To place a limit on the amount of storage to be used, the high watermark can be set on each of the source links.

Using the Archive Shuttle Web Interface

Accessing the Web Interface

The Archive Shuttle User Interface is fully web-based. This allows archive migrations to be performed from anywhere in an organization from an Internet browser. It also means that archive migrations can be managed externally by partners of the migration organization. We recommend that HTTPS be enabled in that situation.

The following table shows the browsers that can be used to access the Archive Shuttle interface:

Browser Supported?
Internet Explorer 8 No
Internet Explorer 9 Yes
Internet Explorer 10 Yes
Internet Explorer 11 Yes
Firefox No
Chrome No
Opera No
Chromium No
Safari No

 

Note: Internet Explorer 10 requires that the Compatibility View for local intranet sites setting be disabled. Additional information is available in the section: Disable Internet Explorer 10 Compatibility View Setting.

General Information

All pages share a common look and feel within the Archive Shuttle user interface.

On the left side is the main navigation area. It is used to jump from page to page, where every page controls a different aspect of Archive Shuttle.

Along the top are “Actions”, which are context sensitive to each page.

The following pages are available:

  • Progress & Performance

Provides an overview of the status of the migration.

  • System Health

Shows key information about the health of the migration environment.

  • User Information

Allows an administrator to see detailed information gathered about users.

  • Bulk Mapping

Provides tasks to create container mappings.

  • Manual Mapping

Provides tasks to create single container mappings.

  • Existing Mapping

Provides tasks to manipulate existing mappings

  • Offline Media

Allows configuration of offline media (e.g., disk shipping)

  • Stage 1 (Sync Data)

Provides detailed information on the migration status for each Container mapping. This is the “Item Sync” status for the container.

  • Stage 2 (Switch User)

Provides detailed information on the migration status for each Container mapping. This is the “Workflow” status after switching a container to the target environment.

  • Link Statistics

Provides detailed information on the migration status for each Link.

  • Performance

Provides a detailed hourly performance overview of the migration and allows selection of date/time ranges to see this data historically and graphically

  • Failed Items

Provides information on items that have failed to export or import

  • Index Fragmentation

Shows information about SQL Index Fragmentation

  • License

Provides information on the licenses available for migration

  • Support Bundles

Allows the creation and manipulation of log file bundles that are helpful to Quadrotech Support Engineers.

  • AS Modules

Allows an administrator to control the configuration of modules deployed in the environment

  • Active Directory

Provides a list of Active Directory domains and allows for them to be scanned for users.

  • User Groups

Provides an administrator the ability to manage Archive Shuttle groups. These are used as aids when selecting containers to migrate.

  • EV Environment

Provides an overview of the Enterprise Vault Environment. It is also used to add new environments and to run various Enterprise Vault-related collection tasks.

  • EV Retention Mappings

Provides insight into Enterprise Vault Retention Category mappings that are needed for the migration.

  • Links

Provides an overview of Links, allows creation of Archive Shuttle Item databases, and enables mapping of modules to Links.

  • Workflow Policies

Allows an administrator to review the commands within a workflow, modify workflows and add new workflows.

  • Migration Filters

Allows an administrator to construct filters that can filter the data during migration.

  • System Configuration

Allows an administrator to fine-tune the migration environment

Grids / Tables

General

All grids / tables throughout the Archive Shuttle Web Interface have common functionality, which will not be explained in every section individually, but is provided here for your reference.

Filtering

Many of the grids / tables can be sorted and filtered as required.

Sorting can be achieved by clicking on the appropriate column header. Ascending versus descending order is achieved by clicking on the column heading a second time.

Filtering is provided on many columns by the means of the textbox below a column header. Data entered into this text box will cause Archive Shuttle to “filter by” on that column. The small “antenna” symbol behind the textbox can be used to change the type of filter, with the following being common options that are available:

  • Begins with
  • Contains
  • Doesn’t contain
  • Ends with
  • Equals
  • Doesn’t equal

It is possible to combine multiple column filters to further enhance the displayed data in the grid / table.

In order to apply the filter, which may contain multiple columns/selections, click on the ‘Apply’ button at the right of the filter row.

A second filtering option is provided by the little “antenna” symbol behind the header name of a column. It provides a Microsoft® Excel®-like filtering interface which lists the following values for easy selection:

  • (All)

Does not apply a filter, displays All Items

  • (Blanks)

Only displays items that do not have a value

  • (Non blanks)

Only displays items that have a value, hiding blank values

  • Values

Aggregated list of values from that column to choose from

Paging

All grids provide standard paging functionality. It is possible to see the current page, and select to view another page from the pager at the bottom of the grid.

Progress & Performance

The Progress and Performance Dashboard provides an administrator with an overview of the current state of the archive migration project. The following information is displayed:

Item Description
Status User finished migration:

Percentage of mapped containers that have been through the complete workflow

 

Stage 1 (Synced):

Percentage of users that have been synched.

 

Stage 2 (Switched):

Percentage of enabled users switched to target

Basic statistics Data gathered for the migration, including:

Users

Total number of users

Mailboxes

Total number of mailboxes

Archives

Total number of archives

Archived Items

Total number of archived items

Archived Items Size

Total size of the archived items

Mappings

Total number of containers that have been mapped

Enabled for Sync (Stage 1)

Total number of containers that are in Stage 1

Enabled for Switch (Stage 2)

Total number of containers that are in Stage 2

Finished Migration

Total number of containers that have finished migration

Last 10 Workflow Completed Users Last ten users that have fully completed their migration workflow

Clicking in this area will move you to the Stage 2 screen within the Admin Interface.

Export Speed GB/h

Extraction Speed Items/h

Ingestion Speed GB/h

Ingestion Speed Items/h

Export and import speed per hour, calculated from the values for the last ten minutes
Overall Performance Collection, Export and Import performance broken down into a timetable.

  • Last ten minutes
  • Last hour
  • Current day
  • Current week
  • Current month
  • Current quarter
  • Last 6 months
  • Current year
  • All
Archive Statistics Shows for how many archives archive statistics have been received.

Archive statistics are:

  • Count of items in an archive
  • Size of items in an archive
  • Count of shortcuts in mailbox belonging to this archive
Collected Items How many items need to be collected in total (based on archive statistics), and how many have already been collected.
Current Activity
  • Running Export Item Count
  • Running Imports Item Count
  • Failed Export item Count
  • Failed Import Item Count
  • Running Export Item Size
  • Running Import Item Size
  • Failed Export Item Size
  • Failed Import Item Size
Exported Items

Exported Size

Imported Items

Imported Size

Exported and imported count/size as a percentage

The Dashboard can optionally be switched to show all of this information for a specific link, in order to more closely monitor a particular part of the migration project.

It is also possible to show the extraction and ingestion speed in items per second, rather than items per hour.

System Health

The dashboard provides an administrator with an overview of storage and other health parameters that may affect the migration being undertaken. If a problem is detected, a yellow triangle and a number will be visible in the user interface; the number represents the number of problem types.

The page is divided into the following sections:

Section Description
Modules Information relating to the status of modules. Those with issues will be shown with a warning symbol. Hover over those items and a tooltip will give more information.

In addition this page shows details relating to any credentials which have been configured to be used by the system to access resources.

Storage Paths Shows information about the storage paths used by the system.
Miscellaneous Any retention categories that are not mapped to a target environment are displayed here.
Events Status events will be shown on this page. A list of events that may be displayed is described in this article.
Sync Details relating to environment sync activities will be shown on this page.

Free space on a staging area location is color coded as shown in the following table:

Highlighting Reason
No highlighting Free space is above 100 GB
Yellow Free space is below 100 GB
Red Free space is below 50 GB

In addition, the System Health page gives an overview of any modules that are disabled or otherwise not running. This is shown at the top left of the screen in the Admin Interface.

If all available Office 365 licenses are already assigned and there is no available license left, the Events tab on the System Health page displays a warning message.

The Link information at the bottom of the page also highlights the ‘Used’ space on each link. Used space is defined as the amount of data that has been exported, but not yet imported. This is also color coded for ease of identifying a potential problem, as follows:

Highlighting Reason
No highlighting Used space is within normal parameters
Yellow Used space is between 75% and 90% of the maximum allowed
Red Used space is above 90% of the maximum allowed

If the Used Space reaches 100% on a link, exporting of data will be stopped until data has been imported, even if there is still free disk space available on disk.

Note: High and low watermarks can be adjusted from the Links page in the Archive Shuttle UI.

The link information that is included also gives an indication of the indexing performance on the target (if the target is Enterprise Vault). The Index Incomplete column shows the number of items that are waiting to be indexed on the target environment. This information is updated once per minute by the EV Import module.

Note: If the Index Incomplete column shows a large value, it might indicate an issue with indexing in the target environment, which should be investigated.

User Dashboard

The dashboard provides comprehensive information that has been discovered about a specifiable user. The following information is displayed:

Item Description
Basic User Information Shows information about the user such as the User SID, login name, OU where the Active Directory account resides.

In addition, the user address can be displayed. This might be helpful to confirm the chosen user identity.

Mailboxes Shows detailed information relating to the user mailbox, and personal archive as discovered by Archive Shuttle.
Archives Displays detailed information relating to the primary Enterprise Vault archive that has been discovered for the user.
Migration Status Information relating to the migration can also be viewed, if this user is currently mapped for migration. The information includes the number of items, priority, number of failed items and information relating to the size of the archive.

 

The information is also displayed graphically as a series of pie charts at the bottom of the screen.

Stage 2 Status This section shows the current state of any Stage 2 commands which have run on this user.

When migration is underway, this page will show additional information related to the migration. This includes statistics and overall progress graphs. It is also possible to see the Failed Items for a selected user/migration by clicking on the hyperlink on this page.

Module Dashboard

Valuable information about module-level responsiveness can be gained from the Module Dashboard in Archive Shuttle. Using the dashboard it is possible to see detailed information relating to the operations being performed by particular modules on particular servers involved in the migration. For example, if new mappings have just been created, and yet one or more Enterprise Vault Export modules are showing no items being exported, it may indicate an area that should be investigated further.

Day-to-day administration

The tasks that need to be performed from day to day during an Archive Shuttle based migration vary from project to project; however, there are some aspects that should always be performed:

Monitoring Performance

Performance of the migration can be viewed in various ways within the Archive Shuttle Web Interface and can be exported to a variety of formats. It is recommended to review the following:

Progress & Performance

Over a period of time, the number of containers (users) processed should start to increase. In fact, all the statistics should rise towards 100% during the migration. It is also possible to view the information displayed on this page both for the overall progress of the migration and an individual link level. To get a fine level of detail on the progress of a migration, it is also possible to show the graphs on the page as items/second rather than items/hour.

The data which is displayed and the order it is displayed in can be customized to meet the needs of the particular migration. These views can be stored, and can be switched between when required. Each chart or data grid is a widget that can be moved around the screen, or removed by clicking on the small ‘x’ at the top right of the widget. Once removed, widgets can be re-added if required.

Link Statistics

Ensure each link is progressing towards 100% over time

Performance

Once steady-state migration has been reached the number of items being exported and imported per hour should be fairly consistent.

System Health

Ensure that the Free Space amounts do not reach a warning or critical level.

Also ensure that modules are not disabled or otherwise not running.

Finally ensure that the link ‘used’ space is within normal parameters.

Monitoring Workflows

Monitoring workflows involves reviewing the Stage 1 Status screen for hangs and failures. By default, the number of export errors and the number of import errors are shown as data columns on the screen. Additional data columns can be added (by clicking on Columns in the Actions Bar and dragging additional columns on to the data grid).

The Stage 1 Status screen can also be filtered and sorted to show (for example) all containers with less than 100% imported. This can be done as follows:

  1. Navigate to the “Stage 1 (Sync Data)” screen.
  2. Click “Reset” on the Actions Bar in order to return the page to the default view
  3. In the text box under “Imported Percentage” enter 100.
  4. Click on the small filter icon to the right of the text box, and choose “is less than” from the pop-up list
  5. Click on the ‘Apply’ button at the right of the filter row.

The page will now show those containers that have been enabled for Stage 1 that have not reached the 100% import stage.

Monitoring Module-level Activity

Valuable information about module-level responsiveness can be gained from the Module Dashboard in Archive Shuttle. Using the dashboard it is possible to see detailed information relating to the operations being performed by particular modules on particular servers involved in the migration. For example, if new mappings have just been created, and yet one or more Enterprise Vault Export modules are showing no items being exported, it may indicate an area that should be investigated further.

Adjusting Priority of Migrations

When reviewing the data synchronization on Stage 1, it might be necessary from time to time to adjust the priority order in which mappings are executed. A good way to do this is to ensure that all new mappings are created with a priority of 10; other priorities can then be assigned on the Stage 1 screen in order to raise or lower the priority of particular mappings.

Note: The lower the priority number, the higher the priority.

Monitoring Migrations

Monitoring migrations involves reviewing the Stage 2 Status screen for failures. Each command, performed by the appropriate module, will report back results to the Archive Shuttle Core.

This screen can also be filtered to show (for example) all containers that have not yet finished the migration completely. This can be done as follows:

  1. Navigate to the “Stage 2 (Switch User)” screen.
  2. Click “Reset” on the Actions Bar in order to return the page to the default view
  3. In the drop down list under “Finished” select “No”.
  4. Click on the ‘Apply’ button at the right of the filter row.

The page will now show those containers that have been enabled for Stage 2, but have not yet completed the migration.

Pausing, Resuming, and Skipping Workflow Steps

Particular steps in a Workflow can be paused, resumed or even skipped. This can be done as follows:

  1. Navigate to the “Stage 2 Status” screen.
  2. Click “Reset” on the Actions Bar in order to return the page to the default view
  3. Select one or more containers.
  4. Click the appropriate action button, e.g., Pause, Suspend, Resume

Changing Workflows

At any time a particular container mapping can have the workflow changed to a new one. In order to do this, perform the following:

  1. Navigate to the “Stage 2 (Switch User)” screen.
  2. Click “Change Workflow Policy” on the Actions Bar
  3. Select a new workflow for this mapping.
  4. Click “Save” to commit the change.

Once a new workflow has been selected, all previous workflow steps will be removed for the mapping and the new workflow will begin with the first command in that workflow.

This can also be used to re-run a chosen workflow associated with a container mapping.

Setting users up for migration

Most archive migrations are performed by selecting groups or batches of user archives (containers) to process through Stage 1 and Stage 2. The mapping and selection is performed on the “Bulk Mapping” screen in the Archive Shuttle Web Interface. This screen can be filtered and sorted with the current list of data columns, and additional data columns can be added to help facilitate the selection of containers.

There are many ways that the selection can be defined. Below is an example of selecting users based on the source Vault Store:

  1. Navigate to the “Bulk Mapping” screen.
  2. Click “Reset” on the Actions Bar in order to return the page to the default view
  3. In the text box under “Link Name” enter the name of one of the source Vault Stores, and press enter or click into a different text box
  4. Click the ‘Apply’ button at the right of the filter row.

The page will now refresh to show all archives (containers) in that particular Vault Store (link).

This selection can be further refined, before selecting some or all of the containers and performing the “Add Mapping” function from the Actions Bar.

Configuring, Loading and Saving Filters

Both the “Stage 1 (Sync Data)” and the “Stage 2 (Switch User)” pages in the Admin Interface allow for additional, useful data columns to be added to the screen. These can be re-arranged and sorted as required.

Both of these pages also allow for these filters, and column configurations to be saved under a friendly name. Previously saved filters can be loaded, allowing an administrator to jump between slightly different views of the migration.

Note: The filters are saved per user and can be used on any machine that accesses the Admin Interface from the same Windows account.

Logging

Archive Shuttle performs logging for activities in two locations, both of which can be useful for troubleshooting purposes. Logging can be configured to record data at a number of different levels: Info, Debug, Trace, Warn, Error, and Fatal.

Module-Level Logging

Each module records log information at the INFO level, though that can be overridden if required. When the modules are installed, the location of these log files can be chosen. By default, the location is inside the program files folder in a subfolder called Logs.

The current location for the log files can be seen on the Modules page in the Archive Shuttle Admin Interface.

In addition to logging data locally, each module also transmits this log information to the Archive Shuttle Core, using a Web Service.

Each module can log extended information about item-level migration details; for example, on successful migration of an item an EV Import module may log the Archive Shuttle reference for an item, the source item ID and the target item ID.

Core-Level Logging

The Archive Shuttle Core logs information, as do the modules. The location where the log files will be written can be configured during the installation of the Archive Shuttle Core. In addition to the core operations, the Archive Shuttle Core will also log user interface actions to a file on the Archive Shuttle Core server.

The Archive Shuttle Core also receives logging from each module. These files have .Client in their filename. This means it is not normally necessary to get log files from the servers where modules are installed.

PowerShell Execution

It is possible to add customized steps into a Stage 2 workflow and to execute PowerShell scripts. These can perform customer- or environment-specific actions. There are essentially two steps which need to be performed:

  • Addition of scripts
  • Customization of the Workflow

These are covered in the next sections.

Addition of scripts

This section of the user interface allows an administrator to add PowerShell scripts to perform various tasks.

Scripts can be added, edited, and deleted. When adding a script, it should be given a friendly name and description. The script itself can be added directly, or it can be uploaded from a file that you already have.

Customization of Workflow

PowerShell scripts can be added to an existing workflow. It is also possible to add multiple scripts, if required. To add a script, edit the desired workflow, and from the right-hand side of the screen click on the PowerShellScriptExecutionRunScript command. This will add that command to the bottom of the workflow.

The command can then be moved to the appropriate position in the workflow.

The command in the workflow needs to be edited so that you can provide the name of the script that you want to execute.

Examples of PowerShell Execution

The following article contains examples of PowerShell scripts.
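
As a simple, hypothetical sketch (not a script shipped with Archive Shuttle), a workflow script might append an entry to a log file and notify an administrator by email. The file path, email addresses, and SMTP server below are placeholders only:

# Build a log entry recording when the workflow script ran
$logEntry = "{0} - Archive Shuttle Stage 2 workflow script executed" -f (Get-Date -Format "yyyy-MM-dd HH:mm:ss")

# Append the entry to a local log file (placeholder path)
Add-Content -Path "C:\Temp\ASWorkflowScript.log" -Value $logEntry

# Notify an administrator (placeholder addresses and SMTP server)
Send-MailMessage -From "archiveshuttle@example.com" -To "migration-admins@example.com" -Subject "Archive Shuttle workflow script executed" -Body $logEntry -SmtpServer "smtp.example.com"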

Modules

This section of the user interface lists all the Archive Shuttle modules and on which server they have been installed. The display also shows whether a module is enabled or not and the current logging level for the module and Core.

Modules have to be enabled in order for them to receive work to do.

The modules are monitored every 5 minutes to check whether they are still running or have failed. If a module has failed, an attempt will be made to restart it by issuing a command to the Admin Module on the affected machine. By default, Archive Shuttle will try 5 times to restart a module, with each retry by default one minute further apart than the previous attempt, up to a maximum of 10 minutes. The status of the module is shown on the System Health page.

Actions to be performed on the Modules page

  • Enable

Enable selected modules

  • Disable

Disable selected modules

  • Configure

Most modules allow configuration of parallelism and other performance/load-related settings. For example, you could configure one EV Export module with archive and item parallelism of 10 and 2 respectively, and another EV Export module with archive and item parallelism of 15 and 1 respectively.

It’s also possible to have these configuration changes scheduled to be effective at particular times of day and days of the week.

The data grid showing module information also includes columns which show which configuration is active, and the time left before the next change in configuration.

  • Start

Start the selected module immediately

  • Stop

Send a command to the selected module to stop processing

  • Restart

Send a command to the selected module to restart

  • Delete

Remove a module. It may be necessary to choose a replacement.

  • Enable performance statistics

Enables the collection of performance statistics for a module

  • Disable performance statistics

Disables the collection of performance statistics for a module

  • Set Schedule

Define a schedule for when the selected module should run

  • Update

Send a command to the selected module to perform an automatic update to the latest version.

Note: In order for the “Update” command to work correctly, there are additional steps that should be performed on the Archive Shuttle Core Server. These are covered later in this section.

  • Download

Allows you to download the module MSI package.

  • Set Log Level

Allows the logging level of a module (or set of modules) to be changed in real time. Increasing the logging level may help provide more detail to Quadrotech Support for troubleshooting.

  • Refresh

Refreshes the module grid

Note: Disabled modules still transmit backlog data to the Core Web Service. These modules do not get new work to perform.

Setting a Schedule for a Module

Each of the Archive Shuttle modules can have a schedule defined for when the module should execute tasks. To set a schedule, follow these steps:

  1. Navigate to the “Modules” page.
  2. Select a Module by clicking on the checkbox next to the name of the module.
  3. Click on “Set Schedule” in the navigation bar.

Note: When setting a schedule remember to click on “Save” in the “Set Schedule” window to commit the schedule to the database.

The following screenshot shows a module schedule where the module is configured to run 24×7:

The following screenshot shows a module schedule where the module is configured to run just on Saturday and Sunday:

Update Installed Modules

During a complex migration there may be many modules installed on different servers throughout an environment. Archive Shuttle has a method for simplifying the process of updating the modules when new versions become available. In order to update modules from the Archive Shuttle Web Interface, follow these steps:

  1. Download the new MSI file. The required file will have a name formatted in this manner: ArchiveShuttleModulesInstall-X.x.x.xxxxx
  2. Copy the MSI file to the following folder on the server that hosts the Archive Shuttle Web Service:

Webservice\bin\ModuleInstallers

  3. Navigate to the “Modules” page in the Archive Shuttle User Interface.
  4. Select one or more modules, and click “Update”.

The MSI file will then be transferred to the server that was selected, and then the MSI file will be executed in order to update the installation.
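
As an example of step 2 above (a minimal sketch; the download location and the Web Service installation path are placeholders that will differ between environments), the MSI file can be copied with PowerShell:

# Placeholder paths: adjust to the actual download location and Web Service folder
$installer = "C:\Downloads\ArchiveShuttleModulesInstall-X.x.x.xxxxx.msi"
$moduleInstallers = "D:\ArchiveShuttle\Webservice\bin\ModuleInstallers"

# Copy the new module installer into the folder scanned by the Web Service
Copy-Item -Path $installer -Destination $moduleInstallers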

Restart Enterprise Vault Services

From the ‘Services’ tab it is possible to restart some or all of the Enterprise Vault services on a particular Enterprise Vault server. You can also direct the command to a particular Archive Shuttle Admin Service.

Active Directory

This is an overview page of the discovered domains, and whether they are enabled for scanning. Active Directory information is collected through the assigned Archive Shuttle Active Directory Collector module.

Each domain in the forest where the Active Directory Collector Module is running will be shown, but by default user scanning for each domain will be disabled.

If there is an Active Directory Collector Module in additional forests, they too will be shown on this screen.

Note: “Sync All AD Users” will synchronize user information from all enabled domains.

Actions to be performed on the Active Directory page

  • Sync AD Users

This instructs the appropriate Active Directory Collector modules to synchronize user information from all enabled Active Directory domains.

  • Sync AD User

This allows you to specify a single Active Directory user that should get synchronized. This is useful if you have issues with a single user and want to troubleshoot just that user, or if you know changes were made for a single user and you need them to show up in Archive Shuttle as soon as possible.

  • Sync AD Domains

This instructs the Active Directory Collector module to gather a list of all domains in the environment/forest.

  • Enable

When one or more domains have been selected, clicking the ‘Enable’ button in the Actions Bar will enable synchronization for those domains.

  • Disable

When one or more domains have been selected, clicking the ‘Disable’ button in the Actions Bar will disable synchronization for those domains.

  • Refresh

Refreshes the current screen

User Groups

This is an overview page of the groups that have been defined in Archive Shuttle. Groups are used to apply a tag to users so that actions relating to their migration or progress monitoring can be performed easily in the Admin Interface.

As an example, a group might be defined as “Migration Test Users”. There are several places in the Admin Interface (listed below) where users can then be added to this group. Groups can also be managed from this page in the Admin Interface.

Users/Groups can also be imported from a CSV file, if necessary.

Actions to be performed on the User Groups page

  • Add: Create a new group.
  • Edit: The name of a group can be modified.
  • Delete: A User Group can be deleted.
  • Unassign Users: Remove users from a group.
  • Assign Users (CSV Import): Imports users from a CSV file to a specific group.

Note: CSV files selected for import should contain a list of User SIDs, sAMAccountNames, or Container Names to add to a particular group. A sketch for generating such a file follows this list.

  • Refresh: Refresh the list of users and groups.
  • Reset: Reset the grid to the default view.
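
As a hedged example (the Active Directory PowerShell module is assumed, and the group name and output path are placeholders), a CSV of User SIDs for import could be generated like this:

Import-Module ActiveDirectory

# Export the SIDs of the user members of a (hypothetical) AD group to a CSV file
Get-ADGroupMember -Identity "Migration Test Users" |
    Where-Object { $_.objectClass -eq 'user' } |
    ForEach-Object { $_.SID.Value } |
    Set-Content -Path "C:\Temp\MigrationTestUsers.csv"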

There are several places where containers/users can be added to groups:

  • User Dashboard
  • Bulk Mapping
  • Manual Mapping
  • Existing Mappings
  • Stage 1 (Sync Data)
  • Stage 2 (Switch User)

Once added to a group, all these same pages in the Admin Interface allow the data to be filtered and grouped by the group name.

Note: A container/user can only belong to one group.

Tag Management

Tags are an organizational mechanism that can be used on any source or target container to identify a special subset of the containers within a migration.

You might decide to use tags to…

  • Flag users under legal consideration
  • Group users into migration waves
  • Identify problematic containers
  • Identify VIPs that require special consideration
  • Identify containers that are out of scope

You can create, modify, and assign tags. Then, you can filter lists using those tags. You can create and apply tags on any of these pages:

  • User Information
  • Bulk Mapping
  • Existing Mapping
  • Stage 1
  • Stage 2

Note: Tags can be used with any type of container (including ownerless ones). Additionally, you can assign separate tags for a mapping’s source and target containers.

When you select objects and click the Tag Assignment button, you can create, remove, or assign tags using this window:

Tag Management page

The Configuration > Tag Management page shows a list of current tags and the number of containers assigned each tag. A tag can be expanded to review the containers that have the tag.

You can create, edit, or delete tags and tag assignments on this page.

An imported CSV file can be used to assign one or more tags to user group containers.

The supported fields for matching the containers are:

  • UserSid
  • SaMAccountName
  • Container Name

The format of the CSV file should be one of the following:

  • SID,tagname
  • samAccountName,tagname
  • ContainerName,tagname

Multiple tags can be assigned to a container, for example:

  • samAccountName,tag1,tag2,tag3
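
As a small, hypothetical sketch, a tag assignment CSV matching on sAMAccountName could be built like this (the account names, tag names, and output path are examples only):

# Each line assigns one or more tags to the container owned by that account
@(
    "jdoe,Wave1",
    "asmith,Wave1,VIP"
) | Set-Content -Path "C:\Temp\TagAssignments.csv"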

You can assign a tag to all members of a group using the Assign User Groups button. This allows you to select a group and assign a specific tag to all containers in that group. This can be useful when you’re transitioning from using groups to using tags.

EV Environment

This is an overview page of all the Enterprise Vault environments that are needed for the migration project. New Enterprise Vault Directories can be added, and Archive Shuttle collects information about this Enterprise Vault directory through the assigned Archive Shuttle Enterprise Vault Collector module.

For every Enterprise Vault Directory, the table lists the following information:

  • Vault Store Name
  • Number of Archives in this Vault Store
  • Total Item Count in this Vault Store
  • Total Item Size in this Vault Store
  • Total Shortcut Count in the archives’ associated mailbox
  • Total Legal Hold Count
  • Archive Gathering Enabled
  • Backup Mode enabled

Note: The data values will only be shown on screen when “Archive Gathering” has been enabled, and an EV Collector module has been linked. (See below)

Actions to be performed on the EV Environment page

  • Add EV Directory: Adds an Enterprise Vault Directory to Archive Shuttle. The following information is needed in order to perform this action:
  • Module to associate: Archive Shuttle Enterprise Vault Collector module which should be “responsible” for collecting information about this Enterprise Vault Directory
  • Display Name: Friendly name for this Enterprise Vault Directory. This is only a display name and can be used to identify this Enterprise Vault Directory
  • EV SQL Server: SQL Server name / instance where this Enterprise Vault Directory hosts its Directory database. This can be a hostname, a Fully Qualified Domain Name, an IP Address, or any of the previous with a SQL Named Instance appended.

Examples:

  • mysqlserver.mydomain.com
  • mysqlserver
  • 192.168.0.1
  • mysqlserver.mydomain.com\instance
  • mysqlserver\instance
  • 192.168.0.1\instance

  • EV SQL Database Name: Database name. Defaults to EnterpriseVaultDirectory. This entry is for display purposes only.
  • Sync all EV Environments: Queues an update request for all Enterprise Vault Directories by their respective Archive Shuttle Enterprise Vault Collector Modules.
  • Sync Active Directory: Queues an update request for all enabled Active Directory Domains by their respective Archive Shuttle Active Directory Collector modules.
  • Archive Gathering
  • Enable: Enable Archive Gathering for the selected Vault Store(s)
  • Disable: Disable Archive Gathering for the selected Vault Store(s)
  • Run Now: Queue an Archive Gathering request for the selected Vault Store(s)
  • Refresh: Refresh the information on the displayed EV Environment table

EV Retention Mappings

This page shows all Enterprise Vault Retention Categories that have been mapped, and allows an administrator to add or delete existing mappings.

A Retention Category mapping is necessary in cross-Enterprise Vault Site/Directory migration scenarios. This is required so that Archive Shuttle knows which Retention Category it should apply to the target item, based on the retention category of the source item.

A Retention Category mapping can also be used to change the Retention Category in intra-Enterprise Vault Site/Directory migration scenarios. This is useful if it is required to consolidate Retention Categories.

Note: Without Retention Category mappings in place, Archive Shuttle will export data from Enterprise Vault, but not import any items into Enterprise Vault.

Actions to be performed on the EV Retention Mappings page

  • Create Mapping: Create a new Retention Category Mapping. The following information is required, and can be selected from drop-down lists when creating the mapping:

– Source site

– Source Retention Category

– Target Site

– Target Retention Category

  • Create Multiple Mappings: This can be used to add multiple retention category mappings at one time.
  • Add Intrasite Migration Mappings: This can be used in intra-Enterprise Vault Site scenarios. It maps every Retention Category to itself, thus keeping the existing Retention Category during migration.
  • Delete Mapping: Deletes the selected Retention Category mapping(s)
  • Edit Mapping: Allows a mapping to be modified.
  • Refresh: Refresh the information on the displayed Retention Category table

Note: Unmapped retention categories can be seen on the System Health page

Links

The page shows all Links in the environment (e.g., Vault Stores, Exchange Mailbox databases) that were discovered through the Active Directory Collector Modules, and the Enterprise Vault Collector Modules. It also shows links that were created manually (e.g. Native Format Links, and Office 365 links).

For each link, you can see the following information:

  • Type: Enterprise Vault, Exchange, Office 365, or PST.
  • Name: Name of the link. This is usually the Vault Store name or the Exchange database name. For links that were created manually it will be the name that was chosen.
  • Computer Name: The name of the server hosting the link. This is usually the Enterprise Vault server or the Exchange server.
  • AS DB: Indicates whether an Archive Shuttle Item database has been created for this Link. A green and white check mark is displayed if the database exists.
  • Failed Item Threshold: The failed item threshold that has been configured for this link.
  • Number of Containers: Container Count for this Link
  • Associated Modules: This grid will show the module to link mappings that have been defined.
  • Staging Area Path: Path used for export or import for this link

Note: The Archive Shuttle Active Directory Collector, and Enterprise Vault Collector may have already scanned Active Directory and pre-populated some of the information on this page.

Actions to be performed on the Links page

  • Create PST Link: Provides the ability to create a new PST link. It is possible to choose a name for the link, and to specify the PST Output Path where final PST files will be placed
  • Create O365 Link: Provides the ability to create a new Office 365 link. It is possible to choose a name for the link.
  • Create Proofpoint Link: Provides the ability to create a new Proofpoint link. It is possible to choose a name for the link, and to specify the output folder.
  • Create Database: Create an Item database for the selected Link.

Note: The source Link needs to be of type “Enterprise Vault”. An Item database should be created for each Enterprise Vault source Vault Store.

You need to provide the following information:

  • Link SQL Server: The SQL Server name / instance where you want to create the Link database. This can be a hostname, a Fully Qualified Domain Name, an IP Address, or any of the previous with a SQL Named Instance appended.

Examples:

  • mysqlserver.mydomain.com
  • mysqlserver
  • 192.168.0.1
  • mysqlserver.mydomain.com\instance
  • mysqlserver\instance
  • 192.168.0.1\instance

  • SQL Database Name

Default: ArchiveShuttleItem_<linkID>. Read only.

  • Map Modules: Provides the ability to allocate modules to this link.

Note: Multiple links can be selected and modules allocated to them.

  • Use Local Modules: Attempts to automatically map modules to this link based on the modules that may already be deployed to the servers involved. For example, if the Archive Shuttle EV Collector Module, Export Module and Provisioning Module have already been deployed on a particular Enterprise Vault server, then when “Use Local Modules” is selected for a vault store that resides on this server, those three modules will be automatically mapped to the link.
  • Threshold: Define an error threshold for the migration. If the count of failed messages is below this threshold, the migration is still considered successful, and Stage 2 will continue at the appropriate time.
  • Set Export Path: This option should be configured when reviewing failed items, and moving them to a secondary location. This location is where those items will be moved to from the regular staging area.
  • Staging Area Path: Defines a path to which items should be exported, and items ingested. Multiple links can share the same export/import path, or they can be separate in order to distribute the exported data ready for ingestion. This path can be overridden on individual links if required.
  • PST Output Path: Allows the configuration of a PST Output Path for links where the type is PST. The output path can be a UNC path or an Azure Blob Storage Account.
  • Temporary Path: Allows the configuration of a temporary path to store PST files while they are processed when PST is the target. It is recommended that this is a fast disk (preferably SSD).
  • Refresh: Refresh the contents of the grid.
  • Columns: Allows the selection of additional data columns that can be added to the grid.
  • Reset: Reset the columns and grid layout to their defaults.
  • Sync Mailboxes: Synchronizes mailbox information from Office 365 to Archive Shuttle
  • Cleanup Staging Area: Issues a command to the Archive Shuttle modules to clean up the staging area of already imported files. The EV Export module and all Import modules will receive the command.
  • Enable/Disable/Run-Now Archive Gathering: Allows quick access to the functionality provided on the EV Environment page. These buttons allow for archive gathering to be enabled or disabled for a link, and to issue the command to perform archive gathering now.
  • Enable/Disable Offline Mode: When a link is enabled for offline mode, it will perform migration based on the Offline Media that has been configured for it. The staging area, if shown on this page, will not be used.
  • Set Rollover Threshold: Specify the size in MB at which PST files should be rolled over.
  • Watermark: The high and low watermarks can be configured per link. More information on the functionality of the high and low watermarks is available in the section relating to System Health.

Naming Policies

File Name Policy

The File Name Policies page in the Admin Interface allows an administrator to customize file name policies to be used in migrations where the target is PST. Tokens are used to construct the file name of the PST file when it is renamed/moved to the PST Output Path. The possible tokens are:

  • *username*: Username of the owning user (sAMAccountName)
  • *firstname*: First name of the owning user
  • *lastname*: Last name of the owning user
  • *fullname*: Full name of the owning user
  • *email*: E-mail address of the owning user
  • *upn*: User principal name of the owning user
  • *pstid*: ID of the PST file; continuous integer over all PST files
  • *pstnumber*: Number of the PST file; continuous integer per user
  • *archivename*: Name of the archive
  • *archiveID*: The Enterprise Vault Archive ID associated with the archive

The tokens can be used to construct filenames and paths.

When creating or editing a policy, a live example of a file name will be shown to help with the policy design.
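
For example, a hypothetical policy of *username*-*pstnumber* would produce file names such as jdoe-1.pst and jdoe-2.pst for the first and second PST files belonging to the user jdoe.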

Folder Name Policy

The Folder Name Policies page in the Admin Interface allows an administrator to customize folder name policies to be used in journal explosion migrations. Tokens are used to construct the folder name of the place in the mailbox where items are migrated to. The possible tokens are:

  • *Original SMTP address*: The SMTP address of the original recipient
  • *Mapped SMTP address*: The SMTP address of the mailbox where data is being migrated to

There is also a checkbox to indicate whether the purges folder should still be used as the root, with the constructed folder name then used as a subfolder beneath it.

When creating or editing a policy a live example of a name will be shown to help with the policy design.

Archive Name Policy

The Archive Name Policies page in the Admin Interface allows an administrator to customize archive name policies to be used when the RenameSourceArchive command runs in the Stage 2 part of a migration. Tokens are used to construct the name of the archive. The possible tokens are:

  • *firstname*: First name of the owning user
  • *lastname*: Last name of the owning user
  • *fullname*: Full name of the owning user
  • *upn*: User principal name of the owning user
  • *SMTP address*: The primary SMTP address associated with the owner
  • *SAM Account Name*: The SAM Account Name associated with the owner
  • *Container Mapping ID*: The Archive Shuttle container mapping ID
  • *Archive Name*: The name of the Enterprise Vault archive

When creating or editing a policy, a live example will be shown to help with the policy design.

The policy can be used when editing the RenameSourceArchive command as part of a Workflow Policy.

 

Workflow Policies

The Workflow Policies page in the Admin Interface allows an administrator to customize Workflows and create new ones. These can either be new, original workflows or copies of existing workflows with some amendments.

Note: It is recommended that the built-in workflows are left as-is, and copies made if required.

Creating a New Policy

A new Policy can be created by clicking on “New” in the Actions Bar and entering a name for the policy. It is recommended that existing policies are reviewed before creating new ones.

New Policies can contain any of the commands listed on the right-hand side of the Workflow Policies screen.

Note: It is recommended that new policies contain the command pair “CollectItemsForArchive” and “WaitForImportFinished”; these perform the last sync of the container.

When creating a new policy it is required to indicate the types of mapping that can use this particular workflow by selecting the check boxes at the top right of the Workflow Policies screen.

For information on the detail relating to each command, see below.

Note: Remember to save before leaving the screen in order to commit the changes to the Workflow Policy to the Archive Shuttle database.

Copying a Policy

When a policy is being viewed it is sometimes desirable to copy the policy and then make some small alterations to it. This is achieved by using the “Copy” button in the Actions Bar. Once a copy of the policy has been made, it can be given a new name, and saved to the Archive Shuttle database.

Editing a Policy

A policy can be edited by clicking on “Edit” in the Actions Bar, and selecting an existing policy from the list. If appropriate, the types of mapping that the policy can be used with can be adjusted. All changes are only committed to the Archive Shuttle database when the “Save” button is clicked.

Adding a New Command

A new command can be added to the bottom of the list of current commands in the Workflow by single clicking on it in the list on the right hand side of the page.

Moving a Command

A command can be moved from its current position in the list by clicking on the grey title area and dragging it to a new position in the list:

Editing the Details of a Command

The details behind a command, such as the number and frequency of retries, can be edited by clicking on the hyperlink relating to the command, as shown below:

An example of the information that is then displayed is shown below:

Filter Policies

The Filter Policies page in the Admin Interface allows an administrator to customize filters relating to the data migration and create new ones.

In order to filter by ‘Path’, the ‘Collect Extended Metadata’ option must be enabled in System Configuration.

Filters should be given a name to easily identify them, and can be based on:

  • ArchivedDate: The date the item was archived
  • Path: The path to the archived item
  • RetentionCategory: The retention category of the archived item
  • HasShortcut: The item has message properties implying it was archived
  • ItemDate: The date/time of the item itself
  • ItemSize: The size of the archived item

Some filters are not applicable for certain sources.  The following are only applicable to migrations from Enterprise Vault.

  • ArchivedDate
  • RetentionCategory
  • HasLegalHold

The ItemSize filter cannot be used with migrations from EV.Cloud.

 

Filter conditions are logically ANDed together. For example, it is possible to migrate data from a particular path AND below a particular size.

Example Filters

The following section gives some examples of filters that can be created. Once they have been created in the Admin Interface, they can then be used when performing container mapping operations.

Only migrate items which have shortcuts

Create New Filter

  • Give the filter a name, e.g., “With Shortcut”

Add a filter condition

  • Click “New” in the “Filter Condition” section.
  • Choose the “Policy” which was created previously.
  • Choose “HasShortcut” from the “Filter By” selection
  • Choose “Yes” in the “Value” selection
  • Click Add

Only migrate items which belong to a particular retention category

Create New Filter

  • Give the filter a name, e.g., “3 Year Retention Category”

Add a filter condition

  • Click on “New” in the “Filter Condition” section.
  • Choose the “Policy” which was created previously.
  • Choose “RetentionCategory” from the “Filter By” selection
  • Select the appropriate retention category from the drop-down list of categories from the source environment
  • Click Add

Only migrate data with ItemDate younger than 2011-12-31

Create New Filter

  • Give the filter a name, e.g., “Newer than 2011”

Add a filter condition

  • Click on “New” in the “Filter Condition” section.
  • Choose the “Policy” which was created previously.
  • Choose “ItemDate” from the “Filter By” selection
  • Select the Operator “YoungerThan” from the Operator drop down
  • Select the date 31st December 2011 from the date picker
  • Click Add

Manage Mappings

The Manage Mappings area of the Admin Interface contains multiple views of the containers involved in the archive migration.

The three views provided are:

  • Bulk Mapping
  • Manual Mapping
  • Existing Mappings

The different views are described in the following sections.

Mapping Templates

Using the Mapping Templates feature, you can create a container mapping template where you can set the total mailbox item count limit or item size limit. Then, when creating a mapping, you can assign a template and override settings for the O365 Module in SysConfig.

Note: Currently the Mapping Templates feature is available for Virtual Journal only.

Creating a Mapping Template

Create a mapping template before creating a mapping by following these steps:

  1. Select Configuration > Templates > Mapping Templates from the left menu bar.
  2. Click Add.
  3. Enter a name/description and desired limits using the Mapping Template Configuration window (shown below).
  4. Save the template.

Or, if you don’t create a mapping template in advance, you can create one using the Create new Template link available on the Add Mappings Wizard (shown below).

Assigning a Mapping Template

For new mappings, the Add Mappings Wizard prompts you to choose the mapping template you want to assign (see image above).

If you don’t assign a template (or an empty one is assigned) during mapping and item collection/migration is enabled, you’ll see this warning message (in red) within the wizard:

If you opt to not choose a mapping template, the default settings configured in SysConfig – O365 Module are applied.

If you haven’t enabled item collection/migration for the mapping, you can change or assign a mapping template on the Existing Mappings page using the Set Template button shown below.

Bulk Mappings

This page is used to map containers to either new or to existing containers based on certain criteria.

The following basic information about each container is visible in the grid view:

  • Name
  • Indicator field to show whether the container has an owner or not
  • Owner Full Name
  • Group
  • Type of container (e.g., Enterprise Vault)
  • Archive Type
  • Indicator field to show whether the container has a mailbox or not
  • Link Name
  • EV Archive Status
  • Item Count
  • Total Size
  • Indicator field to show whether the container is mapped or not

If a Container’s owner / user could not be determined in Active Directory, it is considered “ownerless”, and is marked as such in the Name column. Example: John Doe (Ownerless)

Actions to be performed on the Bulk Container Mapping page

  • Add Mappings

Add mappings for selected containers. A pop-up wizard allows the mapping to be defined. The information that is required depends on the target:

  • Enterprise Vault

The target user strategy must then be chosen as follows:

  • Same User
  • Different User

Choose this option if the migration is to take place to another domain. The target domain can be chosen from a drop-down list, and matching criteria must be specified (e.g., Legacy Exchange DN, SID History, User Name)

 

The container strategy must then be chosen as follows:

  • Create new containers

Choose this option if Archive Shuttle should create new containers in the target environment.

  • Using existing containers

Choose this option if Archive Shuttle should use existing containers in the target environment.

  • Create new if there is no existing

Choose this option if Archive Shuttle should primarily use existing containers in the target environment, and if they do not exist, they will be created.

The target link must then be selected.

  • Choose the Link where you want to migrate to:

Select an entry from the drop down list that will correspond to a link in the target environment.

  • Exchange

The mailbox type must then be chosen as follows:

  • Primary mailbox

Choose this option if Archive Shuttle should ingest the data into the user’s primary mailbox.

  • Secondary (Archive) Mailbox

Choose this option if Archive Shuttle should ingest the data into the user’s secondary (archive) mailbox. If this option is selected, the action to be taken if a secondary mailbox does not exist can also be specified: either skip the ingestion or ingest into the primary mailbox instead.

  • Office 365

The mailbox type must then be chosen as follows:

  • Primary mailbox

Choose this option if Archive Shuttle should ingest the data into the user’s primary mailbox.

  • Secondary (Archive) Mailbox

Choose this option if Archive Shuttle should ingest the data into the user’s secondary (archive) mailbox. If this option is selected, the action to be taken if a secondary mailbox does not exist can also be specified: either skip the ingestion or ingest into the primary mailbox instead.

The target link must then be selected.

  • Choose the Link where you want to migrate to:

Select an entry from the drop down list that will correspond to a link in the target environment.

  • PST

The format must then be chosen as follows:

  • PST

The output format will be PST

The target link must then be selected.

  • Choose the Link where you want to migrate to:

Select an entry from the drop down list that will correspond to a link in the target environment.

Later, when adding this type of mapping, it is required to choose a PST file name policy. This is selected from a drop-down list.

The remaining elements in the pop-up wizard are then the same regardless of the target for the migration:

  • Workflow Policy

The workflow policy must then be chosen; the list of available workflows is determined by the chosen target. For example, if the migration is to Enterprise Vault, only those workflows applicable to an Enterprise Vault migration will be shown.

Additional workflow policies can be defined. This is described in the section Workflow Policies.

  • Filter Policy

The filter policy must then be chosen from the drop-down list. There will be a default filter which performs no filtering.

Additional filter policies can be defined. This is described in the section Filter Policies.

  • Choose Container Mapping Settings

Properties for the new mapping can then be set

  • Migration Status: Enabled

Enable immediate migration for the newly added mappings

  • Migration Status: Disabled

Does not immediately start migration for the newly added mappings

  • Item Gathering Status: Enabled

Enables item gathering for the newly added mappings

  • Item Gathering Status: Disabled

Does not start item gathering for the newly added mappings

There will be a summary page displayed at the end of the wizard when creating a new container mapping. This contains all of the information gathered during the wizard and should be reviewed before committing the changes.

  • Run Item Gathering

Start Full or Delta item collection for selected containers. The source container will be examined and data relating to each item will be collected and added to the appropriate Archive Shuttle Link database.

This would normally be set when configuring the container mapping and is provided in the navigation bar as a means of forcing the update and for troubleshooting.

 

  • Run Shortcut Gathering

Start Full shortcut collection for selected containers.

  • Add to Group

One or more containers can be added to an existing or new group. This group membership can then be used for filtering and batching of users for migration.

  • Assign Archive User

A different owner can be associated with a particular container.

  • Refresh

Refresh the data in the grid view.

  • Columns

Add additional columns to the table. Drag and drop a column name to the header column of the grid view.

  • Reset

Resets the source and target container tables back to the default list of columns, and removes any currently defined filters.

  • Export to PDF, XLS, XLSX, CSV

Allow data to be exported to several formats

On the Bulk Mapping screen, it is also possible to view details of a particular container/user. To do this, select the small magnifying glass next to the user and a pop-up window will be launched showing the User Information page for that container.

When performing a bulk mapping it is possible to postpone and schedule the collection of items, and the migration. This is a step in the mapping wizard.

Manual Mappings

Allows an administrator to create mappings for single containers. The source as well as target container must already exist. Performing a mapping operation in this manner is useful for containers such as Enterprise Vault Journal Archives, or where containers need to be merged to a target container.

The following basic information about each container is visible in the grid view:

  • Name
  • Owner Full Name
  • Group
  • Type
  • Mailbox Type
  • Link Name
  • Content Count
  • Content Size
  • Link name
  • Mapping Counts for the source and target

Actions to be performed on the Manual Mapping page

  • Add Mapping

Adds a mapping for the selected containers. A source and target container must have been selected before this navigation bar item is available.

  • Add to Group

One or more containers can be added to an existing or new group. This group membership can then be used for filtering and batching of users for migration.

  • Refresh

Refreshes source and target container tables.

  • Reset

Resets the source and target container tables back to the default list of columns, and removes any currently defined filters.

Existing Mappings

Containers that have already been mapped are listed on the Existing Mappings page. This page gives the ability to Enable/Disable mappings for migration and perform other container mapping related actions.

Note: Additional columns are available if the ‘Advanced view’ is enabled (under the ‘Views’ tab) though this makes the user interface slower if there are a large number of mappings.

The table shows the following information for each existing container mapping:

  • Container Mapping ID
  • Source Container Type
  • Source Container User
  • Source Group
  • Source Container Link Name
  • Flag to indicate if the container has an owner
  • Flag to indicate if the container has a mailbox
  • Target Container Type
  • Target Container User
  • Target Group
  • Target Container Link Name
  • Workflow Policy
  • Filter Policy
  • Priority
  • Item Gathering Enabled

Indicates whether the container mapping is enabled for item gathering (container level).

  • Enabled for Migration

Indicates whether the container mapping is enabled for migration. Archive Shuttle does NOT start to export / import when a container mapping is disabled for migration.

  • Stage 2

Indicates whether the container mapping has been switched to the target. If it has, Stage 2 of the Workflow has been started.

  • Stage 2 Finished

Indicates whether Stage 2 has finished for this mapping

Actions to be performed on the Existing Container Mappings page

These actions can be performed on this page:

  • Set Priority

Adjust the priority for the selected container mapping(s)

  • Enable Stage 2

Switch the selected container(s) to the target

  • Enable for Migration
  • Disable for Migration
  • Enable Item Gathering
  • Disable Item Gathering
  • Delete Mapping
  • Run Item Gathering
  • Run Shortcut Gathering
  • Set Workflow Policy
  • Set Filter Policy
  • Set Failed Item Threshold
  • Add to Group

One or more containers can be added to an existing or new group. This group membership can then be used for filtering and batching of users for migration.

  • View Mappings Health

Switches to a status page which can be used to check mappings.  See the section in this document called ‘Mappings Health’ for more information.

  • Comment

Set a comment of up to 256 characters on one or multiple mappings.

  • Refresh
  • Columns
  • Reset
  • Set Rollover Threshold

Specify the size in MB that PST files should be rolled over.

  • Disable Offline Mode
  • Enable Offline Mode

Mappings Health

The mappings health page allows you to get information relating to the status of a mapping.  More details are given below.

It is possible to see Overall Stats by clicking on the button in the toolbar. This will show any issues detected with mappings.

The Run Health Check button will perform checks on all mappings currently configured in the system. After the health check runs the overall stats page will give statistics about issues, and individual mappings with issues will be presented in the data grid.

Reports

Scheduled Reports

Archive Shuttle includes a number of pre-built reports that can be scheduled to be delivered to different recipients. This is controlled on the Scheduled Reports screen in the Archive Shuttle User Interface.

The reports currently available are:

Performance: This report shows export and import performance in a bar graph, both as a summary and per source link, for:

  • The last day
  • The last week
  • The last 30 days
  • The last 3 months
  • The last 12 months
  • A custom time frame can also be chosen

Stage2Report: This report shows information relating to Stage 2, including:

  • Containers that have finished Stage 2 in the last 24 hours
  • Containers still in Stage 2
  • Containers which are in Stage 2 and have errors
  • Containers in Stage 2 which may be stuck/hung
  • Containers in Stage 2 which have been paused

FailedItemReport: This report shows items which have failed migration. The report is divided per link, and shows:

  • Items which have failed to export
  • Items which have failed to import

The reports are in PDF format.

 

Each report can be configured to be emailed to one or more recipients, is controlled via a separate schedule, and can optionally be triggered immediately using the ‘Send Now’ button.

In order to send out reports, it is necessary to go to the System Configuration page and enter information in the SMTP Settings section.

 

Stage 1 (Sync Data)

This page shows an overview of all containers enabled for migration, and their synchronization status. It also allows data to be exported to a variety of formats for reporting on the progress of the migration.

Note: Additional columns are available if the ‘Advanced view’ is enabled (under the ‘Views’ tab) though this makes the user interface slower if there are a large number of mappings.

The following basic information about each container is visible in the grid view:

  • Full Name
  • Group
  • Routed
  • Exported
  • Retryable Errors
  • Permanent Errors
  • Total Errors
  • Exported Percentage
  • Imported
  • Retryable Errors
  • Permanent Errors
  • Total Errors
  • Imported Percentage
  • Priority
  • Item Gathering Enabled
  • Migration Enabled
  • Stage 2 Enabled
  • Stage 2 Finished
  • Failed Item Threshold
  • Ignore Failed Item Threshold
  • Needs Provisioning

Actions to be performed on the Migration Status page

These actions can be performed on this page:

  • Set Threshold

Sets the failed item threshold for the selected mapping or mappings

  • Retry Failed Items

Retry any failed items

A hanging export or import means that it has been running for more than 1 hour.

 

Retrying permanently failed items will reset the error count for those items and retry them.

  • Re-export Items

Items can be re-exported. It is possible to re-export all items, just those that failed to export, or just those that failed to import.

  • Enable Stage 2

Switch the selected container(s) to the target

  • Add to Group

One or more containers can be added to an existing or new group. This group membership can then be used for filtering and batching of users for migration.

  • Set Priority

Allow the migration priority to be set

  • View Mappings Health

Switches to a status page which can be used to check mappings.  See the section in this document called ‘Mappings Health’ for more information.

  • Comment

Set a comment of up to 256 characters on one or multiple mappings.

  • Refresh

Refreshes the table data

  • Columns

Allows the selection of additional data columns, which might then be used to select groups of containers

  • Reset

Resets the grid to the default columns, and removes any filters

  • Load

Loads a saved filter/page-layout

  • Save

Saves the current filter/page-layout

  • Export to PDF, XLS, XLSX, CSV

Allow data to be exported to several formats

Actions to be performed on the Actions tab

These actions can be performed on this tab:

  • Enable Migration

Enables the selected mappings for migration

  • Disable Migration

Disables the selected mappings from migration

  • Enable Item Gathering

Enables Item Gathering for the selected mapping

  • Disable Item Gathering

Disables Item Gathering for the selected mapping

  • Run Item Gathering

Queues a command to collect item-level metadata for the selected mappings. You can choose between a Full Collection or a Delta Collection. A Full Collection will be triggered if a source does not support Delta Collection (e.g., PST).

  • View Details

Shows the number of items collected, and when the collection status was last updated. It’s useful to view this on archives which contain a large number of items in order to see the progress of the collection process.

  • Run Shortcut Collection

Queues a command to collect shortcut metadata for the selected mappings

 

Stage 2 (Switch User)

This page shows an overview of all containers that have been switched to target and shows the status in the Stage 2 workflow.

Note: Additional columns are available if the ‘Advanced view’ is enabled (under the ‘Views’ tab) though this makes the user interface slower if there are a large number of mappings.

The following basic information about each container is visible in the grid view:

  • Source User Name
  • Group
  • Source Container Type
  • Link (for the source)
  • Target User Name
  • Group
  • Target Container Type
  • Link (for the target)
  • Command

The current command which is being executed by Archive Shuttle

  • Status

The status of the command

  • Error

Information relating to errors

  • Date/Time
  • Next Command

The next command to be executed by Archive Shuttle

  • Finished

Actions to be performed on the Workflow Status page

These actions can be performed on this page:

  • Refresh

Refreshes the table data

  • Columns

Allows the selection of additional data columns, which might then be used to select groups of containers

  • Reset

Resets the grid to the default columns, and removes any filters

  • Load

Loads a saved filter/page-layout

  • Save

Saves the current filter/page-layout

  • Add to Group

One or more containers can be added to an existing or new group. This group membership can then be used for filtering and batching of users for migration.

  • Reset Workflow Status

Can be used to retry the current command

  • Skip

Skip the current command in the workflow

  • Pause

Pause the workflow

  • Resume

Continue from a paused workflow

  • Change Policy

Allows a new policy to be chosen and the workflow restarted with that policy

  • Export to PDF, XLS, XLSX, CSV

Allow data to be exported to several formats

This page shows an overview of activity taking place on the links involved in the migration project.

The following basic information about the link progress is logged in the upper grid:

  • Link Name
  • Archive Count
  • Total Mappings
  • Progress Users
  • Progress Stage 1 (Synced)
  • Progress Stage 2 (Switch)

The following basic information about the item-level progress is logged in the lower grid:

  • Link Name
  • Archive Items
  • Mapped Items
  • Collected Items
  • Collected Items Percentage
  • Routed Items
  • Exported Items
  • Exported Items Percentage
  • Imported Items
  • Imported Items Percentage

Performance

The Performance Dashboard provides information about the migrations being performed on an hour by hour basis. The screen allows an administrator to select information relating to:

  • The last day
  • The last week
  • The last month
  • A custom time frame

Information about the migration is then displayed in a tabular format as follows:

  • Hour: Date/Time
  • Exported Count: Number of items exported
  • Exported Size: Size of all data exported
  • Imported Count: Number of items imported
  • Imported Size: Size of all data imported

It is also possible to view the information on a per-link basis, and each set of data is also displayed as a graph at the right hand side of the page.

Failed Items

This page shows an overview of any per-item failures that Archive Shuttle has encountered during the migration. When the screen is first entered a particular link should be chosen from the drop down list. This will filter the Failed Items screen to show only items relating to that link.

The following basic information about each failure is logged in the grid view:

  • Container Mapping ID

The Archive Shuttle ID for the container mapping – reference only

  • Item Routing ID

The Archive Shuttle ID for the item which has failed – reference only

  • Is Exported Failed?

An indicator field to show whether it is an export failure

  • Is Import Failed?

An indicator field to show whether it is an import failure

  • Is Failed Permanently?

An indicator field to show whether the error is considered permanent (e.g., file not found when exporting an item)

  • Item Routing Error Count

The current error count for the item

  • Error Text

The error which was reported to Archive Shuttle from the underlying module, or from Exchange/Enterprise Vault

  • Last Date Time UTC

The last date/time where this was reported by Archive Shuttle

  • In FTP

An indicator to show whether the item was uploaded to the Quadrotech FTP Server.

  • Download Link

A hyperlink to the failed item.

The Failed Items page also allows an administrator to submit failed items for reprocessing. This is achieved by selecting one or more items and selecting the button on the Actions Bar. It is also possible to reprocess all failed items. When reprocessing is selected, this will have the effect of resubmitting the item to the appropriate module. For example, if the issue is that the item could not be exported, then that item will be resubmitted to the export module. Likewise, if the issue is that the item could not be ingested into Office 365, then the item will be resubmitted to the Office 365 module.

Reprocessing failed items will reset the Item Routing Error Count for those items.

 

This action of reprocessing failed items can take a few minutes before the appropriate modules receive the command, perform the action, and feedback the results.

An additional option on the Failed Items screen is to ‘Move Selected Items’. This is specifically for situations where items have failed to be imported to the target, and never can be. By selecting one or more of these items, they can be moved from the Staging Area to a different area specified on the target link (via an option on the Links page).

Permanently failed items might include items which are too large to be ingested into the target (e.g., in an EV to Exchange migration where message size limits are in place)

Index Fragmentation

Archive Shuttle has a reliance on a performant SQL Server in order to achieve high throughput of items during migration. If SQL is struggling to handle the load, performance will drop. The Index Fragmentation page shows some key metrics about the SQL Indices associated with each link database. These can get fragmented over time during the Enterprise Vault Collection and export/import processes. The information on the index fragmentation page is updated hourly.

The screen highlights tables/indices that are of particular concern, as follows:

  • No highlighting: Fragmentation is not significant, or the number of pages in the index is not over 1000.
  • Yellow: Fragmentation is between 10 and 30% and the page count in the index is more than 1000.
  • Red: Fragmentation is over 30% and the page count in the index is more than 1000. Accessing data associated with this table/index will not be performant.

The recommended actions to take are as follows:

  • No highlighting: No action required
  • Yellow: Perform an “Index Reorganization”
  • Red: Perform an “Index Rebuild”

More information specific to SQL Server and management of the Archive Shuttle databases can be found in the SQL Best Practices Guide.
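
As a hedged illustration only (the server, database, table, and index names below are placeholders, and the SqlServer PowerShell module is assumed to be installed), the recommended maintenance can be performed with standard T-SQL statements:

Import-Module SqlServer

# Reorganize a moderately fragmented index (yellow rows, 10-30% fragmentation)
Invoke-Sqlcmd -ServerInstance "mysqlserver\instance" -Database "ArchiveShuttleItem_1" -Query "ALTER INDEX IX_Example ON dbo.ExampleTable REORGANIZE;"

# Rebuild a heavily fragmented index (red rows, over 30% fragmentation)
Invoke-Sqlcmd -ServerInstance "mysqlserver\instance" -Database "ArchiveShuttleItem_1" -Query "ALTER INDEX IX_Example ON dbo.ExampleTable REBUILD;"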

 

License

The License page in the Admin Interface gives an overview of the usage to-date of Archive Shuttle against the purchased amount of licenses and license volume. It also shows key information about when the license will expire.

If required, a license can be extended and then added to the system. See the section titled Adding a new License to Archive Shuttle in this guide.

 

Data on this page is refreshed hourly and shows exported data quantities at that time. User licenses are used when a user enters Stage 2.

A new license can be uploaded, if required.

License expiry notifications may be seen from time to time as the migration progresses. The notifications will appear as follows:

  • Date: When there are 7 days left before the license expires
  • Volume: When there is 3% left of the volume license
  • Users: When there is 5% of the user license count remaining

Support Bundles

To assist with troubleshooting issues encountered during a migration, a page has been added to the Admin Interface called “Support Bundles”. Multiple bundles can be configured if required; each bundle consists of:

  • Archive Shuttle and Enterprise Vault version information
  • Archive Shuttle log files

The bundles can be generated, downloaded or sent directly to the Quadrotech FTP server for review by Support Engineers.

Offline Media

The Offline Media configuration helps support the process whereby the source and target staging areas need to be different.

This is currently supported for:

  • EV to EV, O365 or Exchange
  • EAS to EV, O365, Exchange
  • Dell Archive Manager to EV, O365, Exchange

 

Some examples of when this might occur are:

  • Source and target environments are in different non-trusted forests
  • Slow WAN links mean that the bulk of the data will be migrated via ‘disk shipping’

There are a number of steps to follow in order to successfully use the Offline Media feature in Archive Shuttle, and these are described below:

Create Mapping Set

On the Offline Media page, a new source and target mapping set can be created by clicking on ‘Create’ in the Actions bar. This is the first step; in the mapping, the source link is specified along with the target link, and a name can be given to the set.

Add Export and Import Locations

With the Mapping Set selected, multiple export and import paths can then be defined. Export paths can be configured to be in one of three states:

  • Open: Data can be written to this path
  • Closed: Data will not be written to this path
  • Ready: Data path is on standby

If an open export path becomes full, then the next ready container will be used.

The source and target links need to have ‘Offline Mode’ enabled. This is performed by selecting the links on the Links page in the Admin Interface and selecting ‘Enable’ in the ‘Offline Mode’ section of the Actions bar.

When this has been enabled, the Staging Area Path for those links will no longer be used. Instead, the paths used are those defined on the ‘Offline Media’ page.

Creating Container Mappings

The container mappings are configured in the same way as ordinary mappings.

Export Data

Data will then begin to be collected and exported from the source containers and placed on the staging areas defined in the Offline Media set.

Ship the data

Periodically during the export, or at the end of the export, the exported data then needs to be ‘shipped’ to the target import path. This can simply be a case of copying the data manually, or of shipping the disk and performing the necessary actions to make it available under the given name in the target environment.

If the data is copied, ensure that the very top level (below the share name itself) is copied to the target.

 

Enable Scanning of the Import Location

When the data is ready to be imported, the Import Path has to be enabled for scanning on the “Offline Media” page.

When scanning is enabled, the import module scans the location each hour for new data. Any data found is then made available to the import module for processing.

Import Data

Once the target Import Path has been scanned and data recorded in the Item Database as being available for import, the import module will process the data in the container mapping as normal.

Mapping Archiving (Cargo Bay)

When using the Mapping Archiving feature, item data associated with finished mappings is moved from active tables to a backup database (the Cargo Bay). This reduces the number of records in active tables, thus improving performance when you’re interacting with active item data.

Enable Mapping Archiving (Cargo Bay)

Prerequisites

Before enabling Cargo Bay, you need to install SQL Server Integration Services 14.0 on the core machine. Here’s how to do it:

  1. Download the SQL Server 2017 installer (Enterprise or Developer).
  2. Run the setup.exe file.
  3. Click Installation > New SQL Server stand-alone installation.
  4. In the ‘Feature Selection’ area, select ‘Integration Services’ and click ‘Next’.
  5. In the ‘Server Configuration’ section, select ‘Manual’ from the ‘Startup Type’ field.
  6. Finish the installation and then restart IIS.

Once the prerequisites have been installed, execute the following SQL statement on the Archive Shuttle Directory database. If the migration environment is using Archive Shuttle Cloud, contact the Customer Experience Team.

-- Enable mapping archiving (Cargo Bay) in the Directory database
UPDATE SettingDefinition
SET DefaultValueNumeric = 1
WHERE Name = 'EnableCargoBay'
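To confirm that the change has been applied, a query along the following lines can be run against the same Directory database (a sketch that assumes only the SettingDefinition table and columns used in the statement above); it should return a DefaultValueNumeric of 1:

-- Verify the Cargo Bay setting after the update
SELECT Name, DefaultValueNumeric
FROM SettingDefinition
WHERE Name = 'EnableCargoBay'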

Enabling for a small number of mappings

It is possible to enable archiving on just a selection of the mappings.  This can be done by selecting the mappings and clicking the button ‘Enable Mapping Archiving’. This is available on the following screens:

  • Existing mapping screen (on the ‘Main’ tab)
  • Stage 1 screen (on the ‘Actions’ tab)
  • Stage 2 screen (on the ‘Mapping Archiving’ tab)

Mapping Archiving statuses and notifications

The following statuses might be seen against a mapping:

Status Description
NotStarted Archiving has not started, or the mapping is not eligible for archiving
Running Archiving is in progress
Failed Archiving failed
Finished Archiving finished successfully
ArchivingForced Archiving of the mapping was forced
ArchivingDisabled Archiving of the mapping is disabled
ArchivingDisabledAutomatically Archiving of the mapping was automatically disabled due to failed item detection
Finished with Warnings Archiving finished successfully, but with warnings because empty tables were not deleted
Cancelled Archiving was cancelled

The following notifications might be seen when performing enablement or disablement of mapping archiving:

Action Notification
Enable Mapping(s) successfully enabled for archiving
Enable Mapping(s) cannot be enabled for archiving
Enable Mapping(s) already enabled for archiving
Enable Not all mappings were enabled for archiving. See logs for more details.
Disable Mapping(s) successfully disabled for archiving
Disable Mapping(s) cannot be disabled for archiving
Disable Mapping(s) already disabled for archiving
Disable Not all mappings were disabled for archiving. See logs for more details.

 

System Configuration

The System Configuration page in the Archive Shuttle Admin Interface contains many settings that can be used to customize the migration and environment. Some of these settings will affect the workflow of the migration while others will affect throughput and performance.

Changing these configuration options should be done after careful analysis of the current workflow, environment and throughput.

Changes to system settings take effect after a few minutes, when the appropriate module checks in for new work items.

Schedule Settings

On the System Configuration page different settings/configurations can be applied according to a schedule.

When you first enter the System Configuration page, the default schedule is shown. Configuring specific schedules consists of the following three steps:

1. Create and name a new schedule.

2. Make the required configuration changes for this schedule; these appear above or below the default schedule.

3. Select the times of day, or days of week, that this schedule applies to.

Date/Time is based on the date and time of the Core.

 

Example: More export parallelism on Saturday / Sunday

In this example we will configure higher Enterprise Vault export parallelism for Saturday and Sunday. The steps to follow are:

1. Navigate to the System Configuration page

2. Review the current configuration; this is the default configuration that applies when no other schedule is set.

3. Via the button on the toolbar, add a new schedule. Give the schedule a name (e.g. Weekend) and choose a colour from the available list.

4. Change the EV Export module parallelism settings to the new, higher values that you wish to use. When changes are made the entry will become bold, and a checkbox will appear underneath ‘Custom’.

5. Click ‘Save’ in the toolbar to commit those changes.

6. From the toolbar click on ‘Time Assignment’ in order to review the current scheduled configurations.

7. Click on the new schedule on the right hand side, and then highlight all of Saturday and all of Sunday.

8. Click on ‘Save’ on the schedule view in order to commit those changes.

General Settings

This section contains general settings relating to Archive Shuttle:

Item Description
General
Do not re-export on File Not Found Normally, if an import module finds that a required file is not present on the staging area, it will report this to the Archive Shuttle Core and the Core will instruct the export module to re-export the item. If this behavior is not required, select this checkbox.
Turn off the post-processing of secondary archives This will disable the post processing modules from performing actions against an Exchange (or O365) Personal Archive
Stage 2 Default Priority The normal priority which will be applied to users when they enter stage 2
Autoenable Stage 2 With this option enabled, when a container gets to 100% exported and 100% imported, it will automatically move into Stage 2.

Without this option set, containers will remain in Stage 1 until Stage 2 is enabled for them manually.

Delete Shortcut Directly after Import With this option selected, items successfully ingested into Exchange or Office 365 will have the shortcut removed from the target container straight away. Without this option selected, the shortcuts will only be deleted once the Stage 2 command is executed.

This option can greatly enhance the user experience when migrating data back to existing containers (e.g., primary mailbox)

Disable shortcuts collection Stops modules from collecting information about shortcuts in a mailbox
Archive Migration Strategy When a batch of users has been selected for migration and given a specific priority, a migration strategy can additionally be used to govern the order in which the migrations take place within that priority. The options here are:

  • Smallest archive first (based on archive size)
  • Largest archive first (based on archive size)
  • Oldest first (based on the archived date/time of items)
  • Youngest first (based on the archived date/time of items)

The default strategy equates to a random selection within the batch that was chosen.

Do not transmit non-essential Active Directory Data Stops the Active Directory Collector module from returning metadata about users which is not required for the product to function.
Use HOTS format Instructs the Core and Modules to expect and use HOTS format for items on the staging area.
Clear staging area files older than [hours] When the staging area cleanup process runs it will only remove files older than this number of hours.
Logging
Log communication between modules and Core Logs the sent command, and received results in a separate XML file on the Archive Shuttle Core. Note: Should only be used on advisement from Quadrotech Support
Log SQL Queries Logs the internal Archive Shuttle SQL queries and timing information into the Archive Shuttle Core Log file. Note: Should only be used on advisement from Quadrotech Support
Default Core Log Level Allows the logging level for the Archive Shuttle Core to be changed.
Send errors to Quadrotech This option sends application exceptions to a service which Quadrotech can use to help track the cause of unexpected application exceptions.
Delete ItemRoutingErrorLogs after successful Export/Import This option allows the system to remove any reference to issues/errors during export or import once an item has been successfully exported or imported.
Clear Module Performance Logs By default module performance logs will be kept for 30 days, but this can be changed in this setting.
Item Database
Default size [MiB] Shows the size at which new item databases will be created.
Default log file size [MiB] Shows the default size of new item database log files.
Item database update retry timeout [hours] Shows the number of hours that will elapse before upgrades of the item databases will be retried.
User Interface
Global Timezone By default the system operates in UTC
My Timezone Override Allows a user specific timezone to be specified
Item size scale Can be used to control whether item sizes in the user interface are displayed as bytes, MiB, GiB, TiB, or displayed in automatic scale
Options ‘All’ on grids will be disabled when this threshold is exceeded If a value is specified here it will stop Archive Shuttle giving the option to display ‘all’ values on a grid, when there are a large number of items to display.
Delete mapping with all related data Normally a container mapping cannot be deleted if data has been ingested into the target container, or Stage 2 has been enabled. If this option is selected then the mapping and related Archive Shuttle SQL data will be deleted.

The Administrator of the target environment will need to remove the target container.

The Administrator of the Archive Shuttle environment will need to remove the data from the Staging Area

Lock active mapping deletion Prevents deletion of mappings if they are actively migrating.
Folder
Folder name for folder-less items Specify a name to use for items which do not have a folder.
Treat all messages as folderless If enabled, this will ignore the folder which was obtained during the export of an item, and just ingest all items into the folder specified as the ‘folderless items’ folder
Split folder parallelism Maximum number of containers to process in parallel for folder splitting.
Split threshold for large folders (except journal / folderless) If specified, it indicates the maximum number of items in each folder before a new folder is created.
Split threshold folderless items If specified it indicates the maximum number of items in the folderless folder before a new folder is created.
Journal split base folder name Specifies the first part of the folder name which is used to store data migrated from a journal archive.
Journal split threshold If specified it indicates the maximum number of items in each folder before a new folder is created.
Calendar Folder Names A list of folder names which are translated to Calendar, so that the folder will get the correct icon when it is created.
Task Folder Names A list of folder names which are translated to Task, so that the folder will get the correct icon when it is created.
Contact Folder Names A list of folder names which are translated to Contact, so that the folder will get the correct icon when it is created.
Chain of Custody
Re-export on Chain of Custody error Instructs the import module to report a Chain of Custody error back to the Core, and for the Core to then queue work for the export module to re-export the item.
Enable Chain of Custody for Extraction Causes the export modules to generate a hash of items (to be stored in the Item Databases) when items are exported
Enable Chain of Custody for Ingestion Causes the import modules to validate the Chain of Custody information when items are ingested. If an item fails this check it will not be ingested.
Reliability
Allow import of messages with empty recipient e-mail If a message is found without recipient information it will normally fail the ingestion process. This can be overridden with this option
Allow import of corrupt messages Items which fail the MSG file specification may not be ingested into the target container. Some of these failures can be overridden with this option checked.
Enable item level watermarks Archive Shuttle will stamp items that get imported with a Watermark, specifying details about the Item like Source Environment, Source Id, Archive Shuttle Version, ItemRoutingId, LinkId
Fix Missing Timestamps If the message delivery time is missing on a message, the messaging library will generate it from other properties already present on the message
Missing timestamp date fallback The date to be used in case of a missing timestamp

SMTP Settings

This section contains settings relating to the SMTP configuration (for sending out reports):

Item Description
SMTP Server The FQDN of an SMTP server which can be used to send emails
SMTP Port The port to be used when connecting to the SMTP server
Use SSL A flag to indicate that SSL should be used to connect
Use Authentication A flag to indicate whether the SMTP server supports anonymous connections, or requires authentication
SMTP Username The username to be used for authentication
SMTP Password The password to be used for authentication
SMTP From Email Address The ‘from’ address to be used in the outbound mail
SMTP From Name The display name to be used in the outbound mail


FTP Settings

This section contains settings relating to the FTP configuration:

Item Description
FTP URL The URL to use to connect to the FTP server
FTP username The username to be used when connecting to the FTP server
FTP password The password to be used when connecting to the FTP server

Journal Explosion Settings

This section contains settings relating to Journal Explosion:

Item Description
Import root folder Specifies the folder where imported items will be placed.
Delete items from staging area after initial import If enabled, items are removed from the staging area after the initial import.
Delete items without journal explosion import routings Items without journal explosion import routings will be deleted from the staging area. It is recommended to enable this if all user mappings are already enabled for import.

EV Collector Module

This section contains settings relating to the EV Collector Module:

Item Description
EV Collector Enable Shortcut Collection A flag to indicate whether or not the EV Collector module will collect information relating to shortcuts.

It is only necessary to set this if the migration is going to use the ‘has shortcut’ filter

Collector Parallelism Defines how many archives will be collected in parallel
Collect Extended Metadata Reads legal hold and path of items using the Enterprise Vault API for filtering purposes.

It is recommended that this is only enabled when filtering by folder

Use the BillingOwner on Archives which would be otherwise ownerless This uses the owner set as “Billing Owner” in Enterprise Vault as the Owner of the archive instead of trying to use the entries in the Exchange Mailbox Entry table. This is useful where an Active Directory account relating to an archive has been deleted, for an employee who has left the company for example. This would normally show as Ownerless in Archive Shuttle, but with this switch enabled the Enterprise Vault Collector module will attempt to use the “Billing Owner”.
Use EWS for EV Shortcut Collection A flag to indicate whether the module should use MAPI or EWS to collect shortcut information.
Ignore LegacyExchangeDN when matching EV users With this option enabled the ownership detection for EV archives is modified so that the LegacyExchangeDN is not used.
Limit stored results Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place
Collect EV Shortcuts for hybrid mailboxes Enables the collection of shortcuts from mailboxes in a hybrid configuration where the mailbox is on-premise but the personal archive is in the cloud.

EV Export Module

This section contains settings relating to the EV Export Module:

Item Description
General
EV Export Archive Parallelism Defines how many archives will be exported in parallel. Total thread count = EV Export Archive Parallelism multiplied by EV Export Item Parallelism.
EV Export Item Parallelism Defines how many items should be exported in parallel per archive. Total thread count = EV Export Archive Parallelism multiplied by EV Export Item Parallelism (see the worked example after this table).
EV Export Storage If using Azure for the Staging Area storage (or you are migrating a journal archive from an older version of EV where Archive Shuttle is doing the envelope reconstruction), ensure this option is set to Memory; otherwise, File System or Memory with File System Fallback can be selected. In either of the described situations, if ‘File’ is chosen, an error will be reported in the export module log file and the export will not proceed.

This setting can be adjusted if there are problems exporting very large items.

Export Provider Priority Specifies the order in which EV export mechanisms will be tried.
Limit stored results Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place
Failures handling
Export Messages with invalid Message ID from Journal When enabled, items from a journal that require envelope reconstruction will still be processed (and a P1 message generated) even if the Original Message ID attribute cannot be found in the item that was retrieved from EV (meaning that an EV Index lookup cannot be performed).

Note: This setting may mean that BCC information is not added to the P1 envelope (since it cannot be obtained from EV)

Prevent exporting of items if envelope reconstruction fails This will prevent Archive Shuttle from exporting an item from EV if the envelope reconstruction fails. With this setting disabled, some items may be exported without an appropriate P1 (Envelope) message.
Fail items permanently on specified errors Indicates whether Archive Shuttle should mark certain items as permanently failed, even on the first failure.
Error message(s) to permanently fail items on A list of error messages which will cause items to be marked as permanently failed (if the previous setting is enabled)
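As a worked example of the parallelism settings above: with EV Export Archive Parallelism set to 5 and EV Export Item Parallelism set to 10, the module exports from up to 5 archives at once and uses up to 50 export threads in total (5 × 10). The same multiplication applies to the import and post-processing parallelism settings described later in this section.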

EV Import Module

This section contains settings relating to the EV Import Module:

Item Description
Offline Scan Parallelism Indicates the number of threads that should be used when scanning offline media
EV Default Retention Category ID When a non-Enterprise Vault source is used, and the target of the migration is an Enterprise Vault environment, indicate here the retention category ID to apply to the data which is ingested
EV Import Archive Parallelism Defines how many archives will be imported in parallel.

Total thread count = EV Import Archive Parallelism multiplied by EV Import Item Parallelism.

EV Import Item Parallelism Defines how many items should be imported in parallel per archive.

Total thread count = EV Import Parallelism multiplied by EV Import Item Parallelism.

Import Journal Archive Through Exchange Imports a journal archive through Exchange instead of through the Enterprise Vault API. Elements from the staging area will be added to Exchange (for an appropriate Enterprise Vault task to process) rather than directly into Enterprise Vault.
Journal Mailbox Threshold If using the ‘Import Journal Archive Through Exchange’ option, this setting determines the point at which ingestion is paused while the appropriate task processing the mailbox catches up.
Suspend imports while EV is archiving Disables import module while Enterprise Vault is in its archiving schedule.
Ingest Provider Priority Indicate the type of ingest provider to use.
Read file to memory Allow reading of files to system memory before ingestion
Read file to memory threshold (bytes) Items below this size will be read into memory (to speed up ingestion), whereas items above this size won’t be read into memory
Limit stored results Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place

EV Provisioning Module

This section contains settings relating to the EV Provisioning Module:

Item Description
Convert orphaned into shared archives When orphaned archives are migrated to another Enterprise Vault environment, the target archive normally becomes a normal Exchange mailbox archive. With this option selected, the target archive will be an Enterprise Vault Shared Archive instead.

The Shared Archive maintains the original folder structure; no permissions are added to the archive.

Limit stored results Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place

Shortcut Process Module

This section contains settings relating to the Shortcut Processing Module:

Item Description
Shortcut Process Parallelism Defines how many archives will be post processed in parallel.

Total thread count = EV Post Process Parallelism multiplied by EV Post Process Item Parallelism.

Shortcut Process Item Parallelism Defines how many items will be post processed in parallel per archive.

Total thread count = EV Post Process Parallelism multiplied by EV Post Process Item Parallelism.

Delete shortcuts not related to migrated items When shortcut deletion is in progress, foreign shortcuts will also be deleted.
Delete messages with EV properties but without proper shortcut message class If this option is enabled and items are found to have EV attributes (such as Archive ID) they will be deleted by the shortcut process module.
Use EWS for EV Processing Enables the post processing to use EWS rather than MAPI for processing
Config to Use By default the Exchange configuration will be used, but if the post processing should operate on Office 365 mailboxes select that from the drop down list.
Shortcut deletion maximum batch count Shortcuts will be grouped into batches of 100 items. This number indicates the number of those batches to be processed in parallel.
Limit stored results Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place
Collect shortcuts for both primary and secondary mailboxes If enabled, shortcut information will be collected from both the primary mailbox and the secondary mailbox (personal archive)

If the source environment is SourceOne, EAS or Dell Archive Manager, then the shortcut processing module should be configured to use EWS.

Exchange Import Module

This section contains settings relating to the Exchange Import Module:

Item Description
General
Use per server EWS Url for Exchange import If this is enabled then the import module will use the EWS Url configured in Active Directory on the Exchange Server object rather than a general Url for all ingest requests
Import Root Folder When ingesting data into Exchange mailboxes or personal archives it is sometimes required to ingest the archived data into a top level subfolder (and then the archive folders beneath that). Specify the name of that top level folder here.
Maximum Batch Size Bytes The maximum size that a batch of items can be, which is then sent in one go to Exchange
Maximum Batch Size Items Maximum number of items in a batch
Exchange Timeout Seconds Timeout in seconds until Archive Shuttle aborts the ingest (i.e. upload/processing)
Disable reminders for appointments in the past This will remove the MAPI properties relating to whether a reminder has been sent/fired or not as the item is ingested into the target. If this is not enabled reminders may appear for long overdue archived items.
Mark migrated items as read If this is enabled, all migrated items will be marked as read by the import module
Limit stored results Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place
Threading/Parallelism
Offline Scan Parallelism Number of threads that will be used for scanning offline media
Exchange Mailbox Parallelism Defines how many Exchange mailbox imports will be ingested to in parallel.
Exchange Batch Parallelism Defines how many batches will be ingested per mailbox in parallel
Exchange Item Parallelism Defines how many items will be ingested per mailbox in parallel
Connectivity
Exchange Version Specify the version of Exchange which is in use.
Disable Certificate check Disable the certificate validity when connecting to Exchange
Exchange Connection URL Specify an Autodiscover URL if the default one does not work
Use Service Credentials for Logon Authenticate to Exchange with the credentials which the Exchange Import Module Server is running as.

Native Import Module

This section contains settings relating to the Native Import Module:

Item Description
Stamps a header to imported messages for ProofPoint to identify message source When enabled, a message header is added to each item as it is added to a PST file. The header is called x-proofpoint-source-id and has the item id/item routing id as the value. For example:

x-proofpoint-source-id: 91016abe-51e3-bdd6-132f-fb6763ecc751/2865103

Native Import File Parallelism Defines how many PST files will be imported to in parallel.
Native Import Item Parallelism Defines how many items will be ingested in parallel per PST file.
Finalize finished PSTs in Stage 1 With this option enabled, finished/full PST files will be moved to the output area whilst the mapping is still in Stage 1. This will only happen for PSTs which are complete, i.e. those that have split at the predefined threshold. For migrations smaller than the threshold, which therefore have just a single PST, this PST will not be moved in Stage 1.
Limit stored results Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place

Office 365 Module

This section contains settings relating to the Office 365 Module:

Item Description
General
Number of fastest servers to use Determines how many servers from the list of fastest are used. They will be picked randomly by the module
Import Root Folder When ingesting data into Office 365 mailboxes or personal archives it is sometimes required to ingest the archived data into a top level subfolder (and then the archive folders beneath that). Specify the name of that top level folder here.
Ingest Provider Priority Determines which ingestion methods will be used to ingest data into Office 365 and in which order
Office 365 Batch Size Bytes The maximum size that a batch of items can be, which is then sent in one go to Office 365
Office 365 Batch Size Items Maximum number of items in a batch
Office 365 Timeout Seconds Timeout in seconds until Archive Shuttle aborts the ingest (i.e. upload/processing)
Disable reminders for appointments in the past This will remove the MAPI properties relating to whether a reminder has been sent/fired or not as the item is ingested into the target. If this is not enabled reminders may appear for long overdue archived items.
Mark migrated items as read If this is enabled, all migrated items will be marked as read by the import module
Convert journal messages to O365 journaling format If this option is enabled, information in the P1 envelope gets added to an attribute called GERP and added to the message as it is ingested into Office 365. This makes those items Office 365 journal-format messages.
Limit stored results Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place
Virtual Journal
Virtual Journal Item Count Limit The maximum number of items in a virtual journal mapping before a new mapping will be created.
Virtual Journal Size Limit The maximum size of a virtual journal mapping before a new mapping will be created.
Threading/Parallelism
Offline Scan Parallelism Number of threads that will be used for scanning offline media
Office 365 Mailbox Parallelism Defines how many mailboxes will be ingested to in parallel.
Office 365 Item Parallelism Defines how many items will be ingested per mailbox in parallel.
Office 365 Batch Parallelism Defines how many batches will be ingested per mailbox in parallel
Connectivity
Use faster server (round-trip) If enabled, from time to time the Office 365 module will get a list of servers responding to Office 365 ingest requests and use only those for ingestion.
Office 365 Exchange Version Specify the Office 365 Exchange version
Disable certificate check Disable the certificate validity check when connecting to Office 365
Use Multiple IP from DNS When Office 365 returns multiple IP address entries for its ingestion service, this setting will allow the ingest module to communicate with all of those IP addresses instead of just one. For this to work, the ‘Disable certificate check’ option must be enabled.
Exchange Connection URL Specify an Autodiscover URL if the default one does not work

PST Export Module

This section contains settings relating to the PST Export Module:

Item Description
General
File Parallelism The number of PST files to ingest from simultaneously
PST item collection file parallelism The number of PST files to scan simultaneously
Limit stored results When the module is working, limits the number of items it tracks locally before their results are sent to the Core.
Threshold database limit The number of items allowed to be stored locally, and not sent to the Core, before the module stops requesting more work.
Journal Explosion
Process messages without P1 header If enabled items will still be processed for journal explosion even if they are missing a P1 header
Process distribution lists from messages without P1 If enabled, distribution lists from messages that are missing a P1 header will still be processed for journal explosion

EAS Zantaz Module

This section contains settings relating to the EAS Zantaz Module:

Item Description
EAS Archive Parallelism The number of archives to process in parallel
EAS Item Parallelism The number of items to process in parallel per archive
Limit stored results Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place

Sherpa Module

This section contains settings relating to the Sherpa MailAttender Module:

Item Description
Sherpa Archive Parallelism The number of archives to process in parallel
Sherpa Item Parallelism The number of items to process per mailbox in parallel
Limit stored results Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place

SourceOne Module

This section contains settings relating to the SourceOne Module:

Item Description
SourceOne Container Parallelism The number of archives to process in parallel
SourceOne Collect Items Container Parallelism The number of items to process per archive in parallel
SourceOne Sync Archives Batch Size Defines how many archives will be synced in one batch.
Ignore Missing Transport Headers Ignores missing transport headers when exporting from a journal.
Email address used in case ‘sender is null’ error Specify an email address to be used if the module reports a ‘sender is null’ error
Limit Stored Results Select this option to limit stored results.
Threshold Database Limit Limit of records in local database, when module stops asking for work items.

Dell Archive Manager Module

This section contains settings relating to the Dell Archive Manager Module:

Item Description
Container Parallelism Defines how many archives will be exported in parallel.
Collect Item Parallelism Defines how many items should be collected in parallel per archive.
Limit Stored Results Select this option to limit stored results.
Threshold Database Limit Limit of records in local database, when module stops asking for work items.
Collect size of archives If enabled the module will also collect the overall size of archives

PowerShell Script Execution Module

This section contains settings relating to the PowerShell Script Execution Module:

Item Description
Container Parallelism Defines how many archives will be exported in parallel.
Collect Item Parallelism Defines how many items should be collected in parallel per archive.

Preview Features

Before becoming generally available, new Archive Shuttle features are introduced as Preview Features. Preview Features must be enabled using the hidden page:

  • FeaturePreview.aspx


To use a preview feature, select it from the list, and then click Enable.

Appendix A: Ports used by Archive Shuttle

The following table shows the network communication ports used by Archive Shuttle. These ports are provided for reference for the situation where a firewall exists between the source and target environments in the archive migration.

Source Destination Port(s) Description
Module Servers Archive Shuttle Server TCP 80 (HTTP), TCP 443 (HTTPS) HTTP/S communication from the Modules to the Archive Shuttle Servers
Export / Import Module Servers Storage Servers (CIFS Shares) TCP 445 (CIFS) Access to CIFS shares
Archive Shuttle Server SQL Server TCP 1433 (MSSQL) Access to Microsoft SQL Server
Archive Shuttle Server DNS Servers UDP 53 (DNS) Access to DNS for Windows
Archive Shuttle Server Domain Controllers TCP 88 (Kerberos), TCP 445 (CIFS) Access to Domain Controllers for Windows
Enterprise Vault Module Servers Enterprise Vault Servers Please see your Enterprise Vault documentation for the ports needed for the Enterprise Vault API to talk to Enterprise Vault
Exchange Module Servers Exchange Servers Please see your Microsoft Exchange documentation for the ports needed to talk to Microsoft Exchange (MAPI/EWS)
Office 365 Module Servers Office 365 TCP 443 (HTTPS) HTTPS communication from the Archive Shuttle Office 365 Module to Office 365

Appendix B: Adding a new License to Archive Shuttle

From time to time it might be necessary to update the license information for Archive Shuttle. This may be because additional modules have been purchased, or because a license expiry or volume warning has been displayed and additional licenses have then been purchased.

In order to update the license information for Archive Shuttle, the following steps need to be performed:

1. Copy the new license.lic file to the following folder:

Webservice\bin\

2. Execute an IISReset command
