
Archive Shuttle Administration Guide


Configuring Archive Shuttle

This section outlines the basic steps required to configure Archive Shuttle. This is applicable for most archive migration projects. The user interface and other components of Archive Shuttle are explained in subsequent sections. We recommend reading each section before performing the post-installation configuration.

Initial Archive Shuttle setup

After Archive Shuttle has been installed, the following five tasks need to be performed before you start migrating archives:

  • Configure module schedules.
  • Enable Active Directory domains.
  • Add Enterprise Vault environment(s), if Enterprise Vault is the source or target environment.
  • Add link databases.
  • Configure module mappings.

These tasks are explained below.

Configure module schedules

During Archive Shuttle installation, the core product, databases, and a number of modules are deployed in the environment to fulfill the migration requirements. Each module is enabled during its installation. If modules are not enabled, they will not receive any work from the Archive Shuttle Core.

In order to check that all appropriate modules are enabled and operational, follow these steps:

  1. From the Archive Shuttle web interface, click Configuration > AS Modules in the navigation bar.
  2. Review the list of modules:
      • Ensure all required modules are present. For more information see the Installation Overview, as well as the Planning Guide, which will help you validate this part of the migration.
      • Ensure that the computer name, domain name, and version are as expected.
  3. Verify that all modules are reported appropriately and that none of the modules have a red background. If they do, there is a communication problem between the module and the Archive Shuttle Core.

All of the modules are configured to run continuously by default. If this doesn’t meet your requirements for the migration, modules can be individually scheduled to more suitable times using the Set Schedule button on the AS Modules page. This is described later in this guide.
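Conceptually, the Set Schedule option restricts when a module may pull work from the Core. The sketch below illustrates that semantics with hourly run windows; the function name, the window representation, and the hourly granularity are assumptions for illustration, not the product's actual scheduler implementation:

```python
def module_active(hour, windows=None):
    """Return True when a module may pull work from the Core at the given hour.

    windows: optional list of (start_hour, end_hour) tuples. With no schedule
    configured, the module runs continuously (the default behavior).
    """
    if not windows:
        return True
    return any(start <= hour < end for start, end in windows)
```

For example, a module scheduled only for an 18:00–23:00 window would be idle at 03:00 but active at 20:00.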

Enable Active Directory domains

A few minutes after the Active Directory Collector Module is enabled, the list of domains in which the Active Directory Collector Module is running becomes visible on the Configuration > Active Directory page.

By default, all domains are discovered, but won’t be scanned for user accounts. In order to enable one or more domains to be scanned, follow these steps:

  1. From the Archive Shuttle web interface, click Configuration > Active Directory in the navigation bar.
  2. Review the list of domains.
  3. Select the check box next to one or more domains.
  4. Click Enable in the navigation bar.
  5. Make sure the Scan Domain column shows a check mark.

Add Enterprise Vault environment(s)

Note: Skip this step if Enterprise Vault is not the source or target of a migration.

The next step to perform in the migration workflow is to add Enterprise Vault environment(s). This is performed from the Configuration > EV Environment page. This page also displays environments that are currently configured.

If the migration is between two Enterprise Vault directory databases, then this page is used to enter details for both directory databases. If the migration is between Enterprise Vault and an external system, such as Microsoft Exchange or Office 365, then only one Enterprise Vault Directory database is added. Likewise, if the migration is between two Enterprise Vault sites within the same Enterprise Vault directory, only one entry is listed on this page.

In order to add an Enterprise Vault environment, follow these steps:

  1. From the Archive Shuttle web interface, click Configuration > EV Environment.
  2. Click Add to add an Enterprise Vault directory.
  3. Enter the required information (for example, the fully qualified domain name and instance name, if it’s not the default instance).
  4. Click Add/Update to apply the changes to the Archive Shuttle database.

Once you specify the required EV Directory databases, the Enterprise Vault Collector Module gathers information about the environment, which appears on the EV Environment page after a few minutes.

Tip: Click Refresh in the navigation bar to reload the list of vault stores.

Enable vault stores

Enable archive collection for each vault store that you’ll use in the migration by following these steps:

  1. From the Archive Shuttle web interface, click Configuration > EV Environment in the navigation bar.
  2. Select the checkbox next to each vault store.
  3. In the navigation bar, click Enable.
  4. Verify that the Archive Collection Enabled column now contains a check mark.

Add link databases

Once archive collection is enabled, the next step in the migration is to add link databases for each of the vault stores. A link database is needed for each link in the source environment.

The Configuration > Links page may already contain some links that were collected from the Active Directory Collector Module (the Exchange databases) and from the Enterprise Vault Collector Module (the Vault Stores, as a result of adding each Enterprise Vault environment). These links are created automatically. You must manually create other links, for example, links to Office 365 and PST. To create a link database:

  1. Open the Configuration > Links page.
  2. Select the tab of the type of link database you’re creating (for example, Office 365), and then click Create Link.
  3. Follow the prompts to create the link.

After a link database is created, check the EV Environment page again to review the progress of the archive and item gathering stages.

Note: The number of archives and size of data will not be populated on the EV Environment page until the module linkage is added, but the vault stores will display.

Configure module mappings

The next stage of the configuration involves mapping modules to the Links in the migration project. To do this, follow these steps:

  1. Go to Configuration > Links.
  2. Select the check box next to a vault store/Exchange database/source link.
  3. Click Map Modules in the navigation bar.
  4. From the pop-up window, select appropriate modules from the drop-down lists, and then click Save.

Edit module mappings

Module mappings can be modified by following these steps:

  1. Go to Configuration > Links.
  2. Select the check box next to a vault store/Exchange database.
  3. Click Map Modules.
  4. Make desired changes to the current mapping for this link, and then click Save.

Shortly after module mappings have been added, the appropriate modules perform their tasks and the Number of Containers column updates to show the number of containers in the link.

For the modules that perform export and import functions, it’s also important to set the staging area path. The corresponding modules need to be paired up to facilitate the flow of archived data. For example, the EV Export module path should correspond to the Exchange import path if the migration flows in that direction.

Setting up the staging area path

To configure the staging area path:

  1. Go to the Configuration > Links page.
  2. Select the checkbox next to a vault store/Exchange database/source link.
  3. Click Path and follow the prompts.

Repeat these steps for each export and each import module. For the second and subsequent paths you define, the Staging Area Path is pre-populated with the path that was previously used. This makes it possible to quickly configure many modules with the same storage location.

The current export/import path and module mappings can be viewed on the Links page. Make sure all links are correctly defined with the appropriate export/import path before proceeding with the migration of any archived data.

It’s also possible to configure the Default Staging Area Path that all modules and links will use unless the setting is overridden on an individual link.

Validating the setup

Once the stages above have been performed and all necessary setup and configuration steps are complete, the archive migration is ready to begin.

At this point, a collection of links have been defined and mapped to modules that perform tasks on those links. The source and target environments have been defined within Archive Shuttle and all of the Archive Shuttle Modules are enabled and set to an appropriate schedule. Finally, the Enterprise Vault retention categories have been mapped according to the migration needs.

Note: Before proceeding with any migration of archived data, confirm that each stage has been performed.

Depending on the migration, additional steps may be required. These are described later in this guide.

Migrating archives

Step-by-step instructions are available in the sections below:

Using HOTS

HOTS is an Archive Shuttle feature that reduces bandwidth usage for migrations to Office 365 by combining a highly optimized transfer system with storage of extracted data in Microsoft Azure. The following diagram presents an overview of HOTS:

Requirements

For migrations from legacy archives to Office 365 using HOTS, the following needs to be considered:

  • Using HOTS is supported for all currently supported sources when migrating to Office 365 mailboxes or Personal Archives.
  • More CPU usage might be needed on the source environment in order to create the HOTS-format data.
  • An Azure Storage Account must be configured and used for storing the extracted data.
  • All export and import modules must be configured with the connection string to the Azure Storage Account.
  • A bridgehead server running the ingest and shortcut processing module should be deployed in Azure to facilitate ingestion of the data from the Azure Storage Account. (The specification of the bridgehead server is given in the Installation Guide.)

Note: It is possible to limit the storage used in Azure by configuring a high watermark for the links.
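The watermark behavior can be pictured as a pause/resume rule with hysteresis: exports stop once staging usage reaches the high watermark and resume only after imports drain it below the low watermark. This is a hypothetical sketch of that logic, not Archive Shuttle's actual implementation; the function name and GB units are assumptions:

```python
def exports_allowed(used_gb, high_gb, low_gb, currently_paused):
    """Decide whether export modules may keep writing to the Azure staging store.

    Exports pause at the high watermark and, once paused, stay paused until
    imports have drained usage back below the low watermark (hysteresis).
    """
    if used_gb >= high_gb:
        return False  # at or above the high watermark: pause exports
    if currently_paused and used_gb >= low_gb:
        return False  # still draining toward the low watermark
    return True       # safe to export
```

The hysteresis prevents exports from rapidly flapping on and off around a single threshold.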

Additional Configuration

The following needs to be performed in Archive Shuttle in order to utilize HOTS:

System Setting

In Archive Shuttle, HOTS must be enabled under System Configuration > General.

Configuration on each module

Each Export Module must be configured with a connection string in order to send data to the Azure Storage Account. This is configured in the Credentials Editor; the connection string is obtained from the Azure Portal. The corresponding Import Module also needs to be configured with the credentials.
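An Azure Storage connection string is a semicolon-separated list of key/value pairs. The sketch below parses one into its parts; the account name and key are placeholders, not real credentials:

```python
def parse_connection_string(conn_str):
    """Split an Azure Storage connection string into its key/value parts."""
    parts = {}
    for segment in conn_str.split(";"):
        if segment:
            # partition on the first '=' only, since account keys may
            # themselves contain '=' padding characters
            key, _, value = segment.partition("=")
            parts[key] = value
    return parts

# Example with placeholder values -- the real string comes from the Azure Portal.
example = ("DefaultEndpointsProtocol=https;"
           "AccountName=mystorageaccount;"
           "AccountKey=<key-from-portal>;"
           "EndpointSuffix=core.windows.net")
```

Parsing it this way makes it easy to sanity-check that the account name and endpoint are the ones you intended before pasting the string into the Credentials Editor.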

Configuration on link

The source and target links need to use an appropriate Azure Storage Account connection. This is done from the Links page within Archive Shuttle.

High Watermark on link

By default, Microsoft Azure places a very high maximum allowance on Storage Accounts. Customers may wish to restrict usage of the Storage Account because it can incur additional Microsoft charges. To place a limit on the amount of storage used, set the high watermark on each of the source links.

Using the Archive Shuttle Web Interface

Accessing the Web Interface

The Archive Shuttle User Interface is fully web-based. This allows archive migrations to be performed from anywhere in an organization from an Internet browser. It also means that archive migrations can be managed externally by partners of the migration organization. We recommend that HTTPS be enabled in that situation.

The following table shows the browsers that can be used to access the Archive Shuttle interface:

Browser Supported?
Internet Explorer 8 No
Internet Explorer 9 Yes
Internet Explorer 10 Yes
Internet Explorer 11 Yes
Firefox No
Chrome No
Chromium No
Opera No
Safari No

Note: Internet Explorer 10 requires that the Compatibility View for local intranet sites setting be disabled. Additional information is available in the section: Disable Internet Explorer 10 Compatibility View Setting.

General Information

All pages share a common look and feel within the Archive Shuttle user interface.

On the left side is the main navigation area. It is used to jump from page to page, where every page controls a different aspect of Archive Shuttle.

Along the top are “Actions”, which are context sensitive to each page.

The following pages are available:

  • Progress & Performance: Provides an overview of the status of the migration.
  • System Health: Shows key information about the health of the migration environment.
  • User Information: Allows an administrator to see detailed information gathered about users.
  • Bulk Mapping: Provides tasks to create container mappings.
  • Manual Mapping: Provides tasks to create single container mappings.
  • Existing Mapping: Provides tasks to manipulate existing mappings.
  • Offline Media: Allows configuration of offline media (e.g., disk shipping).
  • Stage 1 (Sync Data): Provides detailed information on the migration status for each container mapping. This is the “Item Sync” status for the container.
  • Stage 2 (Switch User): Provides detailed information on the migration status for each container mapping. This is the “Workflow” status after switching a container to the target environment.
  • Link Statistics: Provides detailed information on the migration status for each Link.
  • Performance: Provides a detailed hourly performance overview of the migration and allows selection of date/time ranges to view this data historically and graphically.
  • Failed Items: Provides information on items that have failed to export or import.
  • Index Fragmentation: Shows information about SQL index fragmentation.
  • License: Provides information on the licenses available for migration.
  • Support Bundles: Allows the creation and manipulation of log file bundles that are helpful to Quadrotech Support Engineers.
  • AS Modules: Allows an administrator to control the configuration of modules deployed in the environment.
  • Active Directory: Provides a list of Active Directory domains and allows them to be scanned for users.
  • User Groups: Gives an administrator the ability to manage Archive Shuttle groups. These are used as aids when selecting containers to migrate.
  • EV Environment: Provides an overview of the Enterprise Vault environment. It’s also used to add new environments and run various Enterprise Vault-related collection tasks.
  • EV Retention Mappings: Provides insight into the Enterprise Vault Retention Category mappings that are needed for the migration.
  • Links: Provides an overview of Links, allows creation of Archive Shuttle Item databases, and enables mapping modules to Links.
  • Workflow Policies: Allows an administrator to review the commands within a workflow, modify workflows, and add new workflows.
  • Migration Filters: Allows an administrator to construct filters that can filter the data during migration.
  • System Configuration: Allows an administrator to fine-tune the migration environment.

Grids / Tables

General

All grids / tables throughout the Archive Shuttle Web Interface have common functionality, which will not be explained in every section individually, but is provided here for your reference.

Filtering

Many of the grids / tables can be sorted and filtered as required.

Sorting can be achieved by clicking on the appropriate column header. Ascending versus descending order is achieved by clicking on the column heading a second time.

Filtering is provided on many columns by means of the text box below a column header. Data entered into this text box causes Archive Shuttle to filter on that column. The small “antenna” symbol behind the text box can be used to change the type of filter; the following are common options:

    • Begins with
    • Contains
    • Doesn’t contain
    • Ends with
    • Equals
  • Doesn’t equal

It is possible to combine multiple column filters to further enhance the displayed data in the grid / table.

To apply the filter, which may span multiple columns/selections, click the ‘Apply’ button at the right of the filter row.
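The common filter types behave like simple string predicates, and combining column filters is just applying each predicate in turn. A minimal sketch (the function and field names are illustrative, not part of the product):

```python
# Map each filter type named above to a string predicate.
FILTERS = {
    "Begins with":     lambda cell, term: cell.startswith(term),
    "Contains":        lambda cell, term: term in cell,
    "Doesn't contain": lambda cell, term: term not in cell,
    "Ends with":       lambda cell, term: cell.endswith(term),
    "Equals":          lambda cell, term: cell == term,
    "Doesn't equal":   lambda cell, term: cell != term,
}

def apply_filters(rows, filters):
    """Keep only rows matching every (column, mode, term) filter --
    combined column filters narrow the grid progressively."""
    for column, mode, term in filters:
        rows = [r for r in rows if FILTERS[mode](r[column], term)]
    return rows
```

For example, filtering a Links grid on Name "Begins with EV" keeps only rows whose Name column starts with "EV".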

A second filtering option is provided by the small “antenna” symbol behind the header name of a column. It provides a Microsoft® Excel®-like filtering interface which lists the following values for easy selection:

  • (All): Does not apply a filter, displays All Items
  • (Blanks): Only displays items that do not have a value
  • (Non blanks): Only displays items that have a value, hiding blank values
  • Values: Aggregated list of values from that column to choose from

Paging

All grids provide standard paging functionality. It is possible to see the current page, and select to view another page from the pager at the bottom of the grid.

Progress & Performance

The Progress and Performance Dashboard provides an administrator with an overview of the current state of the archive migration project. The following information is displayed:

Item Description
Status Shows the overall migration status:
    • User finished migration: percentage of mapped containers that have been through the complete workflow.
    • Stage 1 (Synced): percentage of users that have been synced.
    • Stage 2 (Switched): percentage of enabled users switched to the target.
Basic statistics Data gathered for the migration, including:
    • Users: total number of users.
    • Mailboxes: total number of mailboxes.
    • Archives: total number of archives.
    • Archived Items: total number of archived items.
    • Archived Items Size: total size of the archived items.
    • Mappings: total number of containers that have been mapped.
    • Enabled for Sync (Stage 1): total number of containers that are in Stage 1.
    • Enabled for Switch (Stage 2): total number of containers that are in Stage 2.
    • Finished Migration: total number of containers that have finished migration.
Last 10 Workflow Completed Users The last ten users that have fully completed their migration workflow. Clicking in this area moves you to the Stage 2 screen within the Admin Interface.
Export Speed GB/h, Extraction Speed Items/h, Ingestion Speed GB/h, Ingestion Speed Items/h Export and import speed per hour, calculated from the values of the last ten minutes.
Overall Performance Collection, export, and import performance broken down into a timetable:
    • Last ten minutes
    • Last hour
    • Current day
    • Current week
    • Current month
    • Current quarter
    • Last 6 months
    • Current year
    • All
Archive Statistics Shows for how many archives statistics have been received. Archive statistics are:
    • Count of items in an archive
    • Size of items in an archive
    • Count of shortcuts in the mailbox belonging to the archive
Collected Items How many items need to be collected in total (based on archive statistics), and how many have already been collected.
Current Activity Shows the current migration activity:
    • Running Export Item Count
    • Running Import Item Count
    • Failed Export Item Count
    • Failed Import Item Count
    • Running Export Item Size
    • Running Import Item Size
    • Failed Export Item Size
    • Failed Import Item Size
Exported Items, Exported Size, Imported Items, Imported Size Exported and imported count/size as a percentage.

The Dashboard can optionally be switched to show all of this information for a specific link, in order to more closely monitor a particular part of the migration project.

It is also possible to show the extraction and ingestion speed in items per second, rather than items per hour.

Optionally, you can configure the page’s widgets to display compressed sizes.

System Health

The dashboard provides an administrator with an overview of storage and other health parameters that may affect the migration being undertaken. If a problem is detected, a yellow triangle and a number are shown in the user interface; the number represents the number of problem types.

The page is divided into the following sections:

Section Description
Modules Information relating to the status of modules. Those with issues will be shown with a warning symbol. Hover over those items and a tooltip will give more information.

In addition, this page shows details relating to any credentials which have been configured to be used by the system to access resources.

Storage Paths Shows information about the storage paths used by the system.
Miscellaneous Any retention categories that are not mapped to a target environment are displayed here.
Events Status events are shown on this page. The events that may be displayed are described in this article.
Sync Details relating to environment sync activities will be shown on this page.

Free space on a staging area location is color coded as shown in the following table:

Highlighting Reason
No highlighting Free space is above 100 GB
Yellow Free space is below 100 GB
Red Free space is below 50 GB
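The highlighting rule in the table above amounts to a simple threshold check. A sketch, assuming the boundaries sit exactly at 100 GB and 50 GB (the product may treat the exact boundary values differently):

```python
def free_space_highlight(free_gb):
    """Map staging-area free space to the System Health highlight colour."""
    if free_gb < 50:
        return "Red"     # critically low free space
    if free_gb < 100:
        return "Yellow"  # warning level
    return None          # no highlighting: free space is above 100 GB
```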

In addition, the System Health page gives an overview of any modules that are disabled or otherwise not running. This is shown at the top left of the screen in the Admin Interface.

If all available Office 365 licenses are already assigned and there are no licenses left, the Events tab on the System Health page displays a warning message.

The Link information at the bottom of the page also highlights the ‘Used’ space on each link, that is, the space consumed on disk. This is also color coded for ease of identifying a potential problem, as follows:

Highlighting Reason
No highlighting Used space is within normal parameters
Yellow Used space is between 75% and 90% of the maximum allowed
Red Used space is above 90% of the maximum allowed
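Similarly, the used-space highlighting can be sketched as a percentage check against the link's maximum allowance. The boundary handling at exactly 75% and 90% is an assumption:

```python
def used_space_highlight(used_gb, max_gb):
    """Map link used space (as a fraction of the allowed maximum) to a colour."""
    pct = 100.0 * used_gb / max_gb
    if pct > 90:
        return "Red"     # above 90% of the maximum allowed
    if pct >= 75:
        return "Yellow"  # between 75% and 90% of the maximum allowed
    return None          # within normal parameters
```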

If the Used Space reaches 100% on a link, exporting of data is stopped until data has been imported, even if free space is still available on disk. This is further illustrated as follows:

Note: High and low watermarks can be adjusted from the Links page in the Archive Shuttle UI.

The link information that is included also gives an indication of the indexing performance on the target (if the target is Enterprise Vault). The Index Incomplete column shows the number of items that are waiting to be indexed on the target environment. This information is updated once per minute by the EV Import module.

Note: If the Index Incomplete column shows a large value, it might indicate an issue with indexing in the target environment, which should be investigated.

User Dashboard

The dashboard provides comprehensive information that has been discovered about a selected user. The following information is displayed:

Item Description
Basic User Information Shows information about the user such as the User SID, login name, OU where the Active Directory account resides.

In addition, the user address can be displayed. This might be helpful to confirm the chosen user’s identity.

Mailboxes Shows detailed information relating to the user mailbox, and personal archive as discovered by Archive Shuttle.
Archives Displays detailed information relating to the primary Enterprise Vault archive that has been discovered for the user.
Migration Status Information relating to the migration can also be viewed, if this user is currently mapped for migration. The information includes the number of items, the priority, the number of failed items, and information relating to the size of the archive.

The information is also displayed graphically as a series of pie charts at the bottom of the screen.

Stage 2 Status This section shows the current state of any Stage 2 commands which have run on this user.

When migration is underway, this page will show additional information related to the migration. This includes statistics and overall progress graphs. It is also possible to see the Failed Items for a selected user/migration by clicking on the hyperlink on this page.

Module Dashboard

Valuable information about module-level responsiveness can be gained from the Module Dashboard in Archive Shuttle. Using the dashboard it is possible to see detailed information relating to the operations being performed by particular modules on particular servers involved in the migration. For example, if new mappings have just been created, and yet one or more Enterprise Vault Export modules are showing no items being exported, it may indicate an area that should be investigated further.

Day-to-day administration

The tasks that need to be performed from day to day during an Archive Shuttle-based migration vary from project to project; however, some aspects should always be performed:

Monitoring Performance

Performance of the migration can be viewed in various ways within the Archive Shuttle Web Interface and can be exported to a variety of formats. It is recommended to review the following:

Progress & Performance

Over a period of time, the number of containers (users) processed should start to increase. In fact, all of the statistics should rise towards 100% during the migration. The information displayed on this page can be viewed both for the overall progress of the migration and at an individual link level. To get a finer level of detail on the progress of a migration, the graphs on the page can also be shown as items/second rather than items/hour.

The data which is displayed and the order it is displayed in can be customized to meet the needs of the particular migration. These views can be stored, and can be switched between when required. Each chart or data grid is a widget that can be moved around the screen, or removed by clicking on the small ‘x’ at the top right of the widget. Once removed, widgets can be re-added if required.

Link Statistics

Ensure each link is progressing towards 100% over time.

Performance

Once steady-state migration has been reached the number of items being exported and imported per hour should be fairly consistent.

System Health

Ensure that the Free Space amounts do not reach a warning or critical level.

Also ensure that modules are not disabled or otherwise not running.

Finally ensure that the link ‘used’ space is within normal parameters.

Monitoring Workflows

Monitoring workflows involves reviewing the Stage 1 Status screen for hangs and failures. By default, the number of export errors and the number of import errors are shown as data columns on the screen. Additional data columns can be added (by clicking on Columns in the Actions Bar and dragging additional columns on to the data grid).

The Stage 1 Status screen can also be filtered and sorted to show (for example) all containers with less than 100% imported. This can be done as follows:

  1. Navigate to the “Stage 1 (Sync Data)” screen.
  2. Click “Reset” on the Actions Bar to return the page to the default view.
  3. In the text box under “Imported Percentage”, enter 100.
  4. Click the small filter icon to the right of the text box, and choose “is less than” from the pop-up list.
  5. Click the ‘Apply’ button at the right of the filter row.

The page will now show those containers that have been enabled for Stage 1 that have not reached the 100% import stage.
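The grid filter above is equivalent to a simple less-than comparison over the container list. A sketch with illustrative field names (not the product's actual schema):

```python
def unfinished_imports(containers):
    """Containers enabled for Stage 1 that have not reached 100% imported --
    the same result as the 'Imported Percentage is less than 100' grid filter."""
    return [c for c in containers if c["imported_pct"] < 100]

# Hypothetical sample data for illustration.
containers = [
    {"name": "UserA", "imported_pct": 100},
    {"name": "UserB", "imported_pct": 87},
]
```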

Monitoring Module-level Activity

Valuable information about module-level responsiveness can be gained from the Module Dashboard in Archive Shuttle. Using the dashboard it is possible to see detailed information relating to the operations being performed by particular modules on particular servers involved in the migration. For example, if new mappings have just been created, and yet one or more Enterprise Vault Export modules are showing no items being exported, it may indicate an area that should be investigated further.

Adjusting Priority of Migrations

When reviewing the data synchronization on Stage 1, it might be necessary from time to time to adjust the priority order in which mappings are executed. A good way to do this is to ensure that all new mappings are created with a priority of 10; other priorities can then be assigned on the Stage 1 screen to raise or lower the priority of particular mappings.

Note: The lower the priority number, the higher the priority.
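In other words, processing order sorts ascending on the priority number. A sketch with hypothetical mapping records:

```python
def processing_order(mappings):
    """Order mappings for processing: the lower the priority number,
    the higher the priority, so sort ascending."""
    return sorted(mappings, key=lambda m: m["priority"])
```

A mapping re-prioritised from the default 10 down to 1 therefore jumps ahead of the rest of the queue.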

Monitoring Migrations

Monitoring migrations involves reviewing the Stage 2 Status screen for failures. Each command, performed by the appropriate module, will report back results to the Archive Shuttle Core.

This screen can also be filtered to show (for example) all containers that have not yet finished the migration completely. This can be done as follows:

  1. Navigate to the “Stage 2 (Switch User)” screen.
  2. Click “Reset” on the Actions Bar to return the page to the default view.
  3. In the drop-down list under “Finished”, select “No”.
  4. Click the ‘Apply’ button at the right of the filter row.

The page will now show those containers that have been enabled for Stage 2, but have not yet completed the migration.

Pausing, Resuming, and Skipping Workflow Steps

Particular steps in a Workflow can be paused, resumed or even skipped. This can be done as follows:

  1. Navigate to the “Stage 2 Status” screen.
  2. Click “Reset” on the Actions Bar to return the page to the default view.
  3. Select one or more containers.
  4. Click the appropriate action button, e.g., Pause, Suspend, Resume.

Changing Workflows

At any time a particular container mapping can have the workflow changed to a new one. In order to do this, perform the following:

  1. Navigate to the “Stage 2 (Switch User)” screen.
  2. Click “Change Workflow Policy” on the Actions Bar.
  3. Select a new workflow for this mapping.
  4. Click “Save” to commit the change.

Once a new workflow has been selected, all previous workflow steps will be removed for the mapping and the new workflow will begin with the first command in that workflow.

This can also be used to re-run a chosen workflow associated with a container mapping.

Setting users up for migration

Most archive migrations are performed by selecting groups or batches of user archives (containers) to process through Stage 1 and Stage 2. The mapping and selection is performed on the “Bulk Mapping” screen in the Archive Shuttle Web Interface. This screen can be filtered and sorted with the current list of data columns, and additional data columns can be added to help facilitate the selection of containers.

There are many ways that the selection can be defined. Below is an example of selecting users based on the source Vault Store:

  1. Navigate to the “Map Containers” screen.
  2. Click “Reset” on the Actions Bar to return the page to the default view.
  3. In the text box under “Link Name”, enter the name of one of the source Vault Stores, and press Enter or click into a different text box.
  4. Click the ‘Apply’ button at the right of the filter row.

The page will now refresh to show all archives (containers) in that particular Vault Store (link).

This selection can be further refined, before selecting some or all of the containers and performing the “Add Mapping” function from the Actions Bar.

Configuring, Loading and Saving Filters

Both the “Stage 1 (Sync Data)” and the “Stage 2 (Switch User)” pages in the Admin Interface allow for additional, useful data columns to be added to the screen. These can be re-arranged and sorted as required.

Both of these pages also allow these filters and column configurations to be saved under a friendly name. Previously saved filters can be loaded, allowing an administrator to jump between slightly different views of the migration.

Note: The filters are saved per user and can be used on any machine that accesses the Admin Interface from the same Windows account.

Logging

Archive Shuttle performs logging for activities in two locations, both of which can be useful for troubleshooting purposes. Logging can be configured to record data at a number of different levels: Info, Debug, Trace, Warn, Error, and Fatal.
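Assuming the conventional severity ordering (Trace lowest, Fatal highest — an assumption, since the list above is unordered), level-based filtering works like this sketch:

```python
# Conventional severity ordering, lowest to highest (an assumption).
LEVELS = ["Trace", "Debug", "Info", "Warn", "Error", "Fatal"]

def should_log(record_level, configured_level):
    """A record is written when its severity is at or above the configured level."""
    return LEVELS.index(record_level) >= LEVELS.index(configured_level)
```

So with the default Info level, Warn/Error/Fatal records are captured while Debug and Trace are suppressed until the level is lowered for troubleshooting.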

Module-Level Logging

Each module records log information at the INFO level, though this can be overridden if required. When the modules are installed, the location of these log files can be chosen. By default, the location is a subfolder called Logs inside the program files folder.

The current location for the log files can be seen on the Modules page in the Archive Shuttle Admin Interface.

In addition to logging data locally, each module also transmits this log information to the Archive Shuttle Core, using a Web Service.

Each module can log extended information about item-level migration details; for example, on successful migration of an item an EV Import module may log the Archive Shuttle reference for an item, the source item ID and the target item ID.

Core-Level Logging

The Archive Shuttle Core logs information as well as the modules. The location where the log files will be written can be configured during the installation of the Archive Shuttle Core. In addition to the core operations, the Archive Shuttle Core also logs user interface actions to a file on the Archive Shuttle Core.

The Archive Shuttle Core also receives logging from each module. These files have .Client in their filename. This means it is not normally necessary to get log files from the servers where modules are installed.

PowerShell Execution

It is possible to add customized steps into a Stage 2 workflow and to execute PowerShell scripts. These can perform customer- or environment-specific actions. Essentially, two steps need to be performed:

    • Addition of scripts
  • Customization of the Workflow

These are covered in the next sections.

Addition of scripts

This section of the user interface allows an administrator to add PowerShell scripts to perform various tasks.

Scripts can be added, edited and deleted. When adding a script, it should be given a friendly name and description. The script itself can be added directly, or it can be uploaded from a file that you already have.

Customization of Workflow

PowerShell scripts can be added to an existing workflow. It is also possible to add multiple scripts, if required. To add a script, edit the desired workflow and, from the right-hand side of the screen, click on the PowerShellScriptExecutionRunScript command. This will add that command to the bottom of the workflow.

The command can then be moved to the appropriate position in the workflow.

The command in the workflow needs to be edited so that you can provide the name of the script that you want to execute.

Examples of PowerShell Execution

The following article contains examples of PowerShell scripts.
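As an illustration, a custom Stage 2 script might simply record which container is being processed. This is a minimal sketch only: the parameter name and log path below are assumptions, not a documented Archive Shuttle contract, so adapt them to your environment.

```powershell
# Hypothetical Stage 2 custom step: log the user being processed.
# $SamAccountName is an assumed, illustrative parameter name.
param(
    [string]$SamAccountName
)

# Build a timestamped audit line and append it to a log file
$logLine = "{0} Stage 2 custom step ran for {1}" -f (Get-Date -Format s), $SamAccountName
Add-Content -Path "C:\Temp\ASCustomSteps.log" -Value $logLine
```

Once added via the Addition of scripts page, a script like this can be referenced by name from the PowerShellScriptExecutionRunScript command in a workflow.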

Modules

This section of the user interface lists all the Archive Shuttle modules and on which server they have been installed. The display also shows whether a module is enabled or not and the current logging level for the module and Core.

Modules have to be enabled in order for them to receive work to do.

The modules are monitored every 5 minutes to check whether they are still running or have failed. If a module has failed, an attempt will be made to restart it by issuing a command to the Admin Module on the affected machine. By default, Archive Shuttle tries 5 times to restart a module, with each retry delayed one minute longer than the previous attempt, up to a maximum of 10 minutes. The status of the module is shown on the System Health page.

Actions to be performed on the Modules page

  • Enable: Enable selected modules
  • Disable: Disable selected modules
  • Configure: Most modules allow specific configuration relating to parallelism and other performance/load related elements to be configured. For example, you could configure one EV Export module to have archive and item parallelism of 10 and 2 respectively, and another EV Export module could have archive and item parallelism of 15 and 1 respectively.

It’s also possible to have these configuration changes scheduled to be effective at particular times of day and days of the week.

The data grid showing module information also includes columns which show which configuration is active, and the time left before the next change in configuration.

  • Start: Start the selected module immediately
  • Stop: Send a command to the selected module to stop processing
  • Restart: Send a command to the selected module to restart
  • Delete: Remove a module. It may be necessary to choose a replacement.
  • Enable performance statistics: Enables the collection of performance statistics for a module
  • Disable performance statistics: Disables the collection of performance statistics for a module
  • Set Schedule: Define a schedule for when the selected module should run
  • Update: Send a command to the selected module to perform an automatic update to the latest version.

Note: For the “Update” command to work correctly, there are additional steps that should be performed on the Archive Shuttle Core Server. These are covered later in this section.

  • Download: Allows you to download the module MSI package.
  • Set Log Level: Allows the logging level of a module (or set of modules) to be changed in real-time. Increasing the logging level may help with providing more detail to Quadrotech Support for troubleshooting.
  • Refresh: Refreshes the module grid

Note: Disabled modules still transmit backlog data to the Core Web Service. These modules do not get new work to perform.

Setting a Schedule for a Module

Each of the Archive Shuttle modules can have a schedule defined for when the module should execute tasks. To set a schedule, follow these steps:

    1. Navigate to the “Modules” page.
    2. Select a module by clicking on the checkbox next to the name of the module.
    3. Click on “Set Schedule” in the navigation bar.

Note: When setting a schedule remember to click on “Save” in the “Set Schedule” window to commit the schedule to the database.

The following screenshot shows a module schedule where the module is configured to run 24×7:

The following screenshot shows a module schedule where the module is configured to run just on Saturday and Sunday:

Update Installed Modules

During a complex migration there may be many modules installed on different servers throughout an environment. Archive Shuttle has a method for simplifying the process of updating the modules when new versions become available. In order to update modules from the Archive Shuttle Web Interface, follow these steps:

    1. Download the new MSI file. The file required will have a name formatted in this manner: ArchiveShuttleModulesInstall-X.x.x.xxxxx
    2. Copy the MSI file to the following folder on the server that hosts the Archive Shuttle Web Service:

Webservice\bin\ModuleInstallers

    3. Navigate to the “Modules” page in the Archive Shuttle User Interface.
    4. Select one or more modules, and click “Update”.

The MSI file will then be transferred to the server that was selected, and then the MSI file will be executed in order to update the installation.
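Step 2 above can be sketched in PowerShell. The source and destination paths here are illustrative assumptions (in particular the web service installation root), not documented defaults:

```powershell
# Hypothetical copy of the downloaded module installer into the
# ModuleInstallers folder under the (assumed) web service install path.
Copy-Item -Path "C:\Downloads\ArchiveShuttleModulesInstall-X.x.x.xxxxx.msi" `
          -Destination "C:\Program Files\QUADROtech\Webservice\bin\ModuleInstallers\"
```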

Restart Enterprise Vault Services

From the ‘Services’ tab it is possible to restart some or all of the Enterprise Vault services on a particular Enterprise Vault server. You can also direct the command to a particular Archive Shuttle Admin Service.

Active Directory

This is an overview page of the discovered domains, and whether they are enabled for scanning. Active Directory information is collected through the assigned Archive Shuttle Active Directory Collector module.

Each domain in the forest where the Active Directory Collector Module is running will be shown, but by default user scanning for each domain will be disabled.

If there is an Active Directory Collector Module in additional forests, they too will be shown on this screen.

Note: “Sync All AD Users” will synchronize user information from all enabled domains.

Actions to be performed on the Active Directory page

  • Sync AD Users

This instructs the appropriate Active Directory Collector modules to synchronize user information from all enabled Active Directory domains.

  • Sync AD User

This allows you to specify a single Active Directory user to synchronize. This is useful for troubleshooting an issue with one user, or when changes made to a single user need to appear in Archive Shuttle as soon as possible.

  • Sync AD Domains

This instructs the Active Directory Collector module to gather a list of all domains in the environment/forest.

  • Enable

When one or more domains have been selected, clicking the ‘Enable’ button in the Actions Bar will enable synchronization for those domains.

  • Disable

When one or more domains have been selected, clicking the ‘Disable’ button in the Actions Bar will disable synchronization for those domains.

  • Refresh

Refreshes the current screen

User Groups

This is an overview page of the groups that have been defined in Archive Shuttle. Groups are used to apply a tag to users so that actions relating to their migration, or progress monitoring can be performed easily in the Admin Interface.

As an example, a group might be defined as “Migration Test Users”. There are several places in the Admin Interface (listed below) where users can then be added to this group. Groups can also be managed from this page in the Admin Interface.

Users/Groups can also be imported from a CSV file, if necessary.

Actions to be performed on the User Groups page

  • Add: Create a new group.
  • Edit: The name of a group can be modified.
  • Delete: A User Group can be deleted.
  • Unassign Users: Remove users from a group.
  • Assign Users (CSV Import): Imports users from a CSV file to a specific group.

Note: CSV files selected for import should contain a list of User SIDs, sAMAccountNames, or Container Names to add to a particular group.

  • Refresh: Refresh the list of users and groups.
  • Reset: Reset the grid to the default view.
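As a sketch, a CSV file for the “Assign Users (CSV Import)” action might contain one sAMAccountName per line, as below. The account names are purely illustrative, and the file could equally contain User SIDs or Container Names:

```
jdoe
asmith
mbrown
```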

There are several places where containers/users can be added to groups:

    • User Dashboard
    • Bulk Mapping
    • Manual Mapping
    • Existing Mappings
    • Stage 1 (Sync Data)
  • Stage 2 (Switch User)

Once added to a group, all these same pages in the Admin Interface allow the data to be filtered and grouped by the group name.

Note: A container/user can only belong to one group.

Tag Management

Tags are an organizational mechanism that can be used on any source or target container to identify a special subset of the containers within a migration.

You might decide to use tags to…

  • Flag users under legal consideration
  • Group users into migration waves
  • Identify problematic containers
  • Identify VIPs that require special consideration
  • Identify containers that are out of scope

You can create, modify, and assign tags. Then, you can filter lists using those tags. You can create and apply tags on any of these pages:

  • User Information
  • Bulk Mapping
  • Existing Mapping
  • Stage 1
  • Stage 2

Note: Tags can be used with any type of container (including ownerless ones). Additionally, you can assign separate tags for a mapping’s source and target containers.

When you select objects and click the Tag Assignment button, you can create, remove, or assign tags using this window:

Tag Management page

The Configuration > Tag Management page shows a list of current tags and the number of containers assigned to each tag. A tag can be expanded to review the containers that have the tag.

You can create, edit, or delete tags and tag assignments on this page.

An imported CSV file can be used to assign one or more tags to containers.

The supported fields for matching the containers are:

  • UserSid
  • sAMAccountName
  • Container Name

The format of the CSV file should be one of the following:

  • SID,tagname
  • samAccountName,tagname
  • ContainerName,tagname

Multiple tags can be assigned to a container, for example:

  • samAccountName,tag1,tag2,tag3
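Putting the formats above together, a tag-assignment CSV might look like the following sketch. The account names and tag names are purely illustrative:

```
jdoe,Wave1
asmith,Wave1,VIP
mbrown,LegalHold
```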

You can assign a tag to all members of a group using the Assign User Groups button. This allows you to select a group and assign a specific tag to all containers in that group. This can be useful when you’re transitioning from using groups to using tags.

EV Environment

This is an overview page of all the Enterprise Vault environments that are needed for the migration project. New Enterprise Vault Directories can be added, and Archive Shuttle collects information about this Enterprise Vault directory through the assigned Archive Shuttle Enterprise Vault Collector module.

For every Enterprise Vault Directory, the table lists the following information:

    • Vault Store Name
    • Number of Archives in this Vault Store
    • Total Item Count in this Vault Store
    • Total Item Size in this Vault Store
    • Total Shortcut Count in the archives’ associated mailbox
    • Total Legal Hold Count
    • Archive Gathering Enabled
  • Backup Mode enabled

Note: The data values will only be shown on screen when “Archive Gathering” has been enabled, and an EV Collector module has been linked. (See below)

Actions to be performed on the EV Environment page

  • Add EV Directory: Adds an Enterprise Vault Directory to Archive Shuttle. The following information is needed in order to perform this action:
  • Module to associate: Archive Shuttle Enterprise Vault Collector module which should be “responsible” for collecting information about this Enterprise Vault Directory
  • Display Name: Friendly name for this Enterprise Vault Directory. This is only a display name and can be used to identify this Enterprise Vault Directory
  • EV SQL Server: SQL Server name / instance where this Enterprise Vault Directory hosts its Directory database. This can be a hostname, a Fully Qualified Domain Name, an IP Address or all of the previous with a SQL Named Instance provided.

Examples:

  • mysqlserver.mydomain.com
  • mysqlserver
  • 192.168.0.1
  • mysqlserver.mydomain.com\instance
  • mysqlserver\instance
  • 192.168.0.1\instance

  • EV SQL Database Name: Database name. Defaults to EnterpriseVaultDirectory. This entry is for display purposes only.
  • Sync all EV Environments: Queues an update request for all Enterprise Vault Directories by their respective Archive Shuttle Enterprise Vault Collector Modules.
  • Sync Active Directory: Queues an update request for all enabled Active Directory Domains by their respective Archive Shuttle Active Directory Collector modules.
  • Archive Gathering
  • Enable: Enable Archive Gathering for the selected Vault Store(s)
  • Disable: Disable Archive Gathering for the selected Vault Store(s)
  • Run Now: Queue an Archive Gathering request for the selected Vault Store(s)
  • Refresh: Refresh the information on the displayed EV Environment table

EV Retention Mappings

This page shows all Enterprise Vault Retention Categories that have been mapped, and allows an administrator to add or delete existing mappings.

A Retention Category mapping is necessary in cross-Enterprise Vault Site/Directory migration scenarios. This is required so that Archive Shuttle knows which Retention Category it should apply to the target item, based on the retention category of the source item.

A Retention Category mapping can also be used to change the Retention Category in intra-Enterprise Vault Site/Directory migration scenarios. This is useful if it is required to consolidate Retention Categories.

Note: Without Retention Category mappings in place, Archive Shuttle will export data from Enterprise Vault, but not import any items into Enterprise Vault.

Actions to be performed on the EV Retention Mappings page

  • Create Mapping: Create a new Retention Category Mapping. The following information is required, and can be selected from drop-down lists when creating the mapping:

– Source site

– Source Retention Category

– Target Site

– Target Retention Category

  • Create Multiple Mappings: This can be used to add multiple retention category mappings at one time.
  • Add Intrasite Migration Mappings: This can be used in intra-Enterprise Vault Site scenarios. It maps every Retention Category to itself, thus keeping the existing Retention Category during migration.
  • Delete Mapping: Deletes the selected Retention Category mapping(s)
  • Edit Mapping: Allows a mapping to be modified.
  • Refresh: Refresh the information on the displayed Retention Category table

Note: Unmapped retention categories can be seen on the System Health page

Links

The page shows all Links in the environment (e.g., Vault Stores, Exchange Mailbox databases) that were discovered through the Active Directory Collector Modules, and the Enterprise Vault Collector Modules. It also shows links that were created manually (e.g. Native Format Links, and Office 365 links).

For each link, you can see the following information:

  • Type: Enterprise Vault, Exchange, Office 365, or PST.
  • Name: Name of the link. This is usually the Vault Store name or the Exchange database name. For links that were created manually it will be the name that was chosen.
  • Computer Name: The name of the server hosting the link. This is usually the Enterprise Vault server or the Exchange server.
  • AS DB: If an Archive Shuttle Item database has been created for this Link. A green and white check mark will be displayed if the database exists.
  • Failed Item Threshold: The failed item threshold that has been configured for this link.
  • Number of Containers: Container Count for this Link
  • Associated Modules: This grid will show the module to link mappings that have been defined.
  • Staging Area Path: Path used for export or import for this link

Note: The Archive Shuttle Active Directory Collector, and Enterprise Vault Collector may have already scanned Active Directory and pre-populated some of the information on this page.

Actions to be performed on the Links page

  • Create PST Link: Provides the ability to create a new PST link. It is possible to choose a name for the link, and to specify the PST Output Path where final PST files will be placed
  • Create O365 Link: Provides the ability to create a new Office 365 link. It is possible to choose a name for the link.
  • Create Proofpoint Link: Provides the ability to create a new Proofpoint link. It is possible to choose a name for the link, and to specify the output folder.
  • Create Database: Create an Item database for the selected Link.

Note: The source Link needs to be of type “Enterprise Vault”. An Item database should be created for each Enterprise Vault source Vault Store.

You need to provide the following information:

  • Link SQL Server: The SQL Server name / instance where you want to create the Link database. This can be a hostname, a Fully Qualified Domain Name, an IP Address or all of the previous with a SQL Named Instance provided.

Examples:

  • mysqlserver.mydomain.com
  • mysqlserver
  • 192.168.0.1
  • mysqlserver.mydomain.com\instance
  • mysqlserver\instance
  • 192.168.0.1\instance

  • SQL Database Name: Defaults to ArchiveShuttleItem_<linkID>. Read only.

  • Map Modules: Provides the ability to allocate modules to this link.

Note: Multiple links can be selected and modules allocated to them.

  • Use Local Modules: Attempts to automatically map modules to this link based on the modules that may already be deployed to the servers involved. For example, if the Archive Shuttle EV Collector Module, Export Module and Provisioning Module have already been deployed on a particular Enterprise Vault server, then when “Use Local Modules” is selected for a vault store that resides on this server, those three modules will be automatically mapped to the link.
  • Threshold: Define an error threshold for the migration. If the count of failed messages is below this threshold, the migration is still considered successful, and Stage 2 will continue at the appropriate time.
  • Set Export Path: This option should be configured when reviewing failed items, and moving them to a secondary location. This location is where those items will be moved to from the regular staging area.
  • Staging Area Path: Defines a path to which items should be exported, and items ingested. Multiple links can share the same export/import path, or they can be separate in order to distribute the exported data ready for ingestion. This path can be overridden on individual links if required.
  • PST Output Path: Allows the configuration of a PST Output Path for links where the type is PST. The output Path can be an UNC Path or an Azure Blob Storage Account.
  • Temporary Path: Allows the configuration of a temporary path to store PST files while they are processed when PST is a target. It is recommended that this is a fast disk (preferably SSD).
  • Refresh: Refresh the contents of the grid.
  • Columns: Allows the selection of additional data columns that can be added to the grid.
  • Reset: Reset the columns and grid layout to their defaults.
  • Sync Mailboxes: Synchronizes mailbox information from Office 365 to Archive Shuttle
  • Cleanup Staging Area: Issues a command to the Archive Shuttle modules to clean up the staging area of already imported files. The EVExport, and all Import Modules will receive the command.
  • Enable/Disable/Run-Now Archive Gathering: Allows quick access to the functionality provided on the EV Environment page. These buttons allow for archive gathering to be enabled or disabled for a link, and to issue the command to perform archive gathering now.
  • Enable/Disable Offline Mode: When a link is enabled for offline mode, it will perform migration based on the Offline Media that has been configured for it. The staging area, if shown on this page, will not be used.
  • Set Rollover Threshold: Specify the size in MB at which PST files should be rolled over.
  • Watermark: The high and low watermarks can be configured per link. More information on the functionality of the high and low watermarks is available in the section relating to System Health.

Naming Policies

File Name Policy

The File Name Policies page in the Admin Interface allows an administrator to customize file name policies to be used in migrations where the target is PST. Tokens are used to construct the file name of the PST file when it is renamed/moved to the PST Output Path. The possible tokens are:

  • *username*: Username of the owning user (sAMAccountName)
  • *firstname*: First name of the owning user
  • *lastname*: Last name of the owning user
  • *fullname*: Full name of the owning user
  • *email*: E-mail address of the owning user
  • *upn*: User principal name of the owning user
  • *pstid*: ID of the PST file; continuous integer over all PST files
  • *pstnumber*: Number of the PST file; continuous integer per user
  • *archivename*: Name of the archive
  • *archiveID*: The Enterprise Vault Archive ID associated with the archive

The tokens can be used to construct filenames and paths.
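As an illustration, a file name policy built from these tokens might be the following (the resulting name assumes a hypothetical user, John Doe, whose first per-user PST file is being produced):

```
*lastname*_*firstname*_*pstnumber*.pst
```

With that assumption, this policy would produce a file name along the lines of Doe_John_1.pst in the PST Output Path.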

When creating or editing a policy a live example of a file name will be shown to help with the policy design.

Folder Name Policy

The Folder Name Policies page in the Admin Interface allows an administrator to customize folder name policies to be used in a Journal Explosion migration. Tokens are used to construct the folder name of the place in the mailbox where items are migrated to. The possible tokens are:

  • *Original SMTP address*: The SMTP address of the original recipient
  • *Mapped SMTP address*: The SMTP address of the mailbox where data is being migrated to

There is also a checkbox to indicate whether the purges folder should still be used as the root, with the constructed folder name used as a subfolder beneath it.

When creating or editing a policy a live example of a name will be shown to help with the policy design.

Archive Name Policy

The Archive Name Policies page in the Admin Interface allows an administrator to customize archive name policies to be used when the RenameSourceArchive command runs in the Stage 2 part of a migration. Tokens are used to construct the name of the archive. The possible tokens are:

  • *firstname*: First name of the owning user
  • *lastname*: Last name of the owning user
  • *fullname*: Full name of the owning user
  • *upn*: User principal name of the owning user
  • *SMTP address*: The primary SMTP address associated with the owner
  • *SAM Account Name*: The SAM Account Name associated with the owner
  • *Container Mapping ID*: The Archive Shuttle container mapping ID
  • *Archive Name*: The name of the Enterprise Vault archive

When creating or editing a policy a live example will be shown to help with the policy design.

The policy can be used when editing the RenameSourceArchive command as part of a Workflow Policy.

Workflow Policies

The Workflow Policies page in the Admin Interface allows an administrator to customize Workflows and create new ones. These can either be new, original workflows or copies of existing workflows with some amendments.

Note: It is recommended that the built-in workflows are left as-is, and copies made if required.

Creating a New Policy

A new Policy can be created by clicking on “New” in the Actions Bar and entering a name for the policy. It is recommended that existing policies are reviewed before creating new ones.

New Policies can contain any of the commands listed on the right-hand side of the Workflow Policies screen.

Note: It is recommended that new policies contain the command-pair “CollectItemsForArchive” and “WaitForImportFinished”; these perform the last sync of the container.

When creating a new policy it is required to indicate the types of mapping that can use this particular workflow by selecting the check boxes at the top right of the Workflow Policies screen.

For information on the detail relating to each command, see below.

Note: Remember to save before leaving the screen in order to commit the changes to the Workflow Policy to the Archive Shuttle database.

Copying a Policy

When a policy is being viewed it is sometimes desirable to copy the policy and then make some small alterations to it. This is achieved by using the “Copy” button in the Actions Bar. Once a copy of the policy has been made, it can be given a new name, and saved to the Archive Shuttle database.

Editing a Policy

A policy can be edited by clicking on “Edit” in the Actions Bar, and selecting an existing policy from the list. If appropriate, the types of mapping that the policy can be used with can be adjusted. All changes are only committed to the Archive Shuttle database when the “Save” button is clicked.

Adding a New Command

A new command can be added to the bottom of the list of current commands in the Workflow by single-clicking it in the list on the right-hand side of the page.

Moving a Command

A command can be moved from its current position in the list by clicking on the grey title area and dragging it to a new position in the list:

Editing the Details of a Command

The details behind a command, such as the number and frequency of retries, can be edited by clicking on the hyperlink relating to the command, as shown below:

An example of the information that is then displayed is shown below:

Filter Policies

The Filter Policies page in the Admin Interface allows an administrator to customize filters relating to the data migration and create new ones.

In order to filter by ‘Path’, the ‘Collect Extended Metadata’ option must be enabled in System Configuration.

Filters should be given a name to easily identify them, and can be based on:

  • ArchivedDate: The date the item was archived (source: EV)
  • Path: The path to the archived item (source: All)
  • RetentionCategory: The retention category of the archived item (source: EV)
  • HasShortcut: The item has message properties implying it was archived (source: All)
  • ItemDate: The date/time of the item itself (source: All)
  • ItemSize: The size of the archived item (source: All)
  • HasLegalHold: Whether the item is on legal hold (source: EV)

Filter conditions are logically ANDed together. For example, it is possible to migrate data from a particular path AND below a particular size.

Example Filters

The following section gives some examples of filters that can be created. Once they have been created in the Admin Interface, they can then be used when performing container mapping operations.

Only migrate items which have shortcuts

Create New Filter

  • Give the filter a name, e.g., “With Shortcut”

Add a filter condition

    • Click “New” in the “Filter Condition” section.
    • Choose the “Policy” which was created previously.
    • Choose “HasShortcut” from the “Filter By” selection
    • Choose “Yes” in the “Value” selection
  • Click Add

Only migrate items which belong to a particular retention category

Create New Filter

  • Give the filter a name, e.g., “3 Year Retention Category”

Add a filter condition

    • Click on “New” in the “Filter Condition” section.
    • Choose the “Policy” which was created previously.
    • Choose “RetentionCategory” from the “Filter By” selection
    • Select the appropriate retention category from the drop-down list of categories from the source environment
  • Click Add

Only migrate data with ItemDate younger than 2011-12-31

Create New Filter

  • Give the filter a name, e.g., “Newer than 2011”

Add a filter condition

    • Click on “New” in the “Filter Condition” section.
    • Choose the “Policy” which was created previously.
    • Choose “ItemDate” from the “Filter By” selection
    • Select the Operator “YoungerThan” from the Operator drop down
    • Select the date 31st December 2011 from the date picker
  • Click Add

Manage Mappings

The Manage Mappings area of the Admin Interface contains multiple views of the containers involved in the archive migration.

The three views provided are:

    • Bulk Mapping
    • Manual Mapping
  • Existing Mappings

The different views are described in the following sections.

Mapping Templates

Using the Mapping Templates feature, you can create a container mapping template where you can set the total mailbox item count limit or item size limit. Then, when creating a mapping, you can assign a template and override settings for the O365 Module in SysConfig.

Note: Currently the Mapping Templates feature is available for Virtual Journal only.

Creating a Mapping Template

Create a mapping template before creating a mapping by following these steps:

  1. Select Configuration > Templates > Mapping Templates from the left menu bar.
  2. Click Add.
  3. Enter a name/description and desired limits using the Mapping Template Configuration window (shown below).
  4. Save the template.

Or, if you don’t create a mapping template in advance, you can create one using the Create new Template link available on the Add Mappings Wizard (shown below).

Assigning a Mapping Template

For new mappings, the Add Mappings Wizard prompts you to choose the mapping template you want to assign (see image above).

If you don’t assign a template (or an empty one is assigned) during mapping and item collection/migration is enabled, you’ll see this warning message (in red) within the wizard:

If you opt to not choose a mapping template, the default settings configured in SysConfig – O365 Module are applied.

If you haven’t enabled item collection/migration for the mapping, you can change or assign a mapping template on Existing Mapping page using the Set Template button shown below.

Bulk Mappings

This page is used to map containers to either new or to existing containers based on certain criteria.

The following basic information about each container is visible in the grid view:

    • Name
    • Indicator field to show whether the container has an owner or not
    • Owner Full Name
    • Group
    • Type of container (e.g., Enterprise Vault)
    • Archive Type
    • Indicator field to show whether the container has a mailbox or not
    • Link Name
    • EV Archive Status
    • Item Count
    • Total Size
  • Indicator field to show whether the container is mapped or not

If a Container’s owner / user could not be determined in Active Directory, it is considered “ownerless”, and is marked as such in the Name column. Example: John Doe (Ownerless)

Actions to be performed on the Bulk Container Mapping page

  • Add Mappings

Add mappings for selected containers. A pop-up wizard allows the mapping to be defined. The information that is required depends on the target:

  • Enterprise Vault

The target user strategy must then be chosen as follows:

    • Same User
  • Different User

Choose this option if the migration is to take place to another domain. The target domain can be chosen from a drop-down list, and matching criteria must be specified (e.g., Legacy Exchange DN, SID History, User Name).

The container strategy must then be chosen as follows:

  • Create new containers

Choose this option if Archive Shuttle should create new containers in the target environment.

  • Using existing containers

Choose this option if Archive Shuttle should use existing containers in the target environment.

  • Create new if there is no existing

Choose this option if Archive Shuttle should primarily use existing containers in the target environment, creating new containers only where none exist.

The target link must then be selected.

  • Choose the Link where you want to migrate to:

Select an entry from the drop-down list that corresponds to a link in the target environment.

  • Exchange

The mailbox type must then be chosen as follows:

  • Primary mailbox: Choose this option if Archive Shuttle should ingest the data into the user’s primary mailbox.
  • Secondary (Archive) Mailbox: Choose this option if Archive Shuttle should ingest the data into the user’s secondary (archive) mailbox. If this option is selected, an action can be specified for when a secondary mailbox does not exist: either skip the ingestion or ingest into the primary mailbox instead.
  • Office 365

The mailbox type must then be chosen as follows:

  • Primary mailbox: Choose this option if Archive Shuttle should ingest the data into the user’s primary mailbox.
  • Secondary (Archive) Mailbox: Choose this option if Archive Shuttle should ingest the data into the user’s secondary (archive) mailbox. If this option is selected, an action can be specified for when a secondary mailbox does not exist: either skip the ingestion or ingest into the primary mailbox instead.

The target link must then be selected.

  • Choose the Link where you want to migrate to:

Select an entry from the drop-down list that corresponds to a link in the target environment.

  • PST

The format must then be chosen as follows:

  • PST: The output format will be PST

The target link must then be selected.

  • Choose the Link where you want to migrate to:

Select an entry from the drop-down list that corresponds to a link in the target environment.

Later in the wizard for this type of mapping, a PST file name policy must be chosen from a drop-down list.

The remaining elements in the pop-up wizard are then the same regardless of the target for the migration:

  • Workflow Policy: The workflow policy must then be chosen; the list of available workflows is determined by the chosen target. For example, if the migration is to Enterprise Vault, only those workflows applicable to an Enterprise Vault migration will be shown. Additional workflow policies can be defined, as described in the section Workflow Policies.
  • Filter Policy: The filter policy must then be chosen from the drop-down list. There will be a default filter which performs no filtering. Additional filter policies can be defined, as described in the section Filter Policies.
  • Choose Container Mapping Settings: Properties for the new mapping can then be set.
  • Migration Status: Enabled: Enables immediate migration for the newly added mappings.
  • Migration Status: Disabled: Does not immediately start migration for the newly added mappings.
  • Item Gathering Status: Enabled: Enables item gathering for the newly added mappings.
  • Item Gathering Status: Disabled: Does not start item gathering for the newly added mappings.

A summary page is displayed at the end of the wizard when creating a new container mapping. It contains all of the information gathered during the wizard and should be reviewed before committing the changes.
  • Run Item Gathering: Start Full or Delta item collection for selected containers. The source container will be examined and data relating to each item will be collected and added to the appropriate Archive Shuttle Link database.

Note: This would normally be set when configuring the container mapping and is provided in the navigation bar as a means of forcing the update and for troubleshooting.

  • Run Shortcut Gathering: Start Full shortcut collection for selected containers.
  • Add to Group: One or more containers can be added to an existing or new group. This group membership can then be used for filtering and batching of users for migration.
  • Assign Archive User: A different owner can be associated with a particular container.
  • Refresh: Refresh the data in the grid view.
  • Columns: Add additional columns to the table. Drag and drop a column name to the header column of the grid view.
  • Reset: Resets the source and target container tables back to the default list of columns, and removes any currently defined filters.
  • Export to PDF, XLS, XLSX, CSV: Allows data to be exported to several formats

On the Bulk Mapping screen, it is also possible to view details of a particular container/user. To do this, select the small magnifying glass next to the user and a pop-up window will be launched showing the User Information page for that container.

When performing a bulk mapping, it is possible to postpone and schedule the collection of items, and the migration. This is a step in the mapping wizard.

Manual Mappings

Allows an administrator to create mappings for single containers. The source and target containers must already exist. Performing a mapping operation in this manner is useful for containers such as Enterprise Vault Journal Archives, or where containers need to be merged into a target container.

The following basic information about each container is visible in the grid view:

    • Name
    • Owner Full Name
    • Group
    • Type
    • Mailbox Type
    • Link Name
    • Content Count
    • Content Size
  • Mapping Counts for the source and target

Actions to be performed on the Manual Mapping page

  • Add Mapping: Adds a mapping for the selected containers. A source and target container must have been selected before this navigation bar item is available.
  • Add to Group: One or more containers can be added to an existing or new group. This group membership can then be used for filtering and batching of users for migration.
  • Refresh: Refreshes source and target container tables.
  • Reset: Resets the source and target container tables back to the default list of columns, and removes any currently defined filters.

Existing Mappings

Containers that have already been mapped are listed on the Existing Mappings page. This page gives the ability to Enable/Disable mappings for migration and perform other container mapping related actions.

Note: Additional columns are available if the ‘Advanced view’ is enabled (under the ‘Views’ tab) though this makes the user interface slower if there are a large number of mappings.

The table shows the following information for each existing container mapping:

    • Container Mapping ID
    • Source Container Type
    • Source Container User
    • Source Group
    • Source Container Link Name
    • Flag to indicate if the container has an owner
    • Flag to indicate if the container has a mailbox
    • Target Container Type
    • Target Container User
    • Target Group
    • Target Container Link Name
    • Workflow Policy
    • Filter Policy
    • Priority
  • Item Gathering Enabled: Indicates whether the container mapping is enabled for item gathering (container level).
  • Enabled for Migration: Indicates whether the container mapping is enabled for migration. Archive Shuttle does NOT start to export / import when a container mapping is disabled for migration.
  • Stage 2: Indicates whether the container mapping has been switched to the target. If it has, Stage 2 of the Workflow has been started.
  • Stage 2 Finished: Indicates whether Stage 2 has finished for this mapping

Actions to be performed on the Existing Container Mappings page

These actions can be performed on this page:

  • Set Priority: Adjust the priority for the selected container mapping(s)
  • Enable Stage 2: Switch the selected container(s) to the target
  • Enable for Migration
  • Disable for Migration
  • Enable Item Gathering
  • Disable Item Gathering
  • Delete Mapping
  • Run Item Gathering
  • Run Shortcut Gathering
  • Set Workflow Policy
  • Set Filter Policy
  • Set Failed Item Threshold
  • Add to Group: One or more containers can be added to an existing or new group. This group membership can then be used for filtering and batching of users for migration.
  • View Mappings Health: Switches to a status page which can be used to check mappings. See the section in this document called ‘Mappings Health’ for more information.
  • Comment: Set a comment of up to 256 characters on one or multiple mappings.
  • Refresh
  • Columns
  • Reset
  • Set Rollover Threshold: Specify the size in MB at which PST files should be rolled over.
  • Disable Offline Mode
  • Enable Offline Mode

Mappings Health

The Mappings Health page provides information about the status of each mapping. More details are given below.

It is possible to see Overall Stats by clicking on the button in the toolbar. This will show any issues detected with mappings.

The Run Health Check button performs checks on all mappings currently configured in the system. After the health check runs, the overall stats page gives statistics about issues, and individual mappings with issues are presented in the data grid.

Reports

Scheduled Reports

Archive Shuttle includes a number of pre-built reports that can be scheduled to be delivered to different recipients. This is controlled on the Scheduled Reports screen in the Archive Shuttle User Interface.

The following reports are currently available:

Report Name Description
Performance This report shows export and import performance in a bar graph both as a summary and per source link for:

    • The last day
    • The last week
    • The last 30 days
    • The last 3 months
    • The last 12 months
  • A custom time frame can also be chosen
Stage2Report This report shows information relating to Stage 2, including:

    • Containers that have finished Stage 2 in the last 24 hours
    • Containers still in Stage 2
    • Containers which are in Stage 2 and have errors
    • Containers in Stage 2 which may be stuck/hung
  • Containers in Stage 2 which have been paused.
FailedItemReport This report shows items which have failed migration. The report is divided up per link, and shows:

    • Items which have failed to export
  • Items which have failed to import

Note: The reports are in PDF format

Each report can be configured to email one or more recipients and is controlled via separate schedules; optionally, a report can be triggered immediately using the ‘Send Now’ button.

Note: In order to send out reports, it is necessary to enter SMTP settings on the System Configuration page.

Stage 1 (Sync Data)

This page shows an overview of all containers enabled for migration, and their synchronization status. It also allows data to be exported to a variety of formats for reporting on the progress of the migration.

Note: Additional columns are available if the ‘Advanced view’ is enabled (under the ‘Views’ tab) though this makes the user interface slower if there are a large number of mappings.

The following basic information about each container is visible in the grid view:

    • Full Name
    • Group
    • Routed
    • Exported
    • Retryable Errors (export)
    • Permanent Errors (export)
    • Total Errors (export)
    • Exported Percentage
    • Imported
    • Retryable Errors (import)
    • Permanent Errors (import)
    • Total Errors (import)
    • Imported Percentage
    • Priority
    • Item Gathering Enabled
    • Migration Enabled
    • Stage 2 Enabled
    • Stage 2 Finished
    • Failed Item Threshold
    • Ignore Failed Item Threshold
  • Needs Provisioning

Actions to be performed on the Migration Status page

These actions can be performed on this page:

  • Set Threshold: Sets the failed item threshold for the selected mapping or mappings
  • Retry Failed Items: Retry any failed items

Note: A hanging export or import is one that has been running for more than 1 hour.

Retrying permanently failed items will reset the error count for those items and retry them.

  • Re-export Items: Items can be re-exported. It is possible to re-export all items, just those that failed to export, or just those that failed to import.
  • Enable Stage 2: Switch the selected container(s) to the target
  • Add to Group: One or more containers can be added to an existing or new group. This group membership can then be used for filtering and batching of users for migration.
  • Set Priority: Allow the migration priority to be set
  • View Mappings Health: Switches to a status page which can be used to check mappings.  See the section in this document called ‘Mappings Health’ for more information.
  • Comment: Set a comment of up to 256 characters on one or multiple mappings.
  • Refresh: Refreshes the tables data
  • Columns: Allows the selection of additional data columns, which might then be used to select groups of containers
  • Reset: Resets the grid to the default columns, and removes any filters
  • Load: Loads a saved filter/page-layout
  • Save: Saves the current filter/page-layout
  • Export to PDF, XLS, XLSX, CSV: Allow data to be exported to several formats

Actions to be performed on the Actions tab

These actions can be performed on this tab:

  • Enable Migration: Enables the selected mappings for migration
  • Disable Migration: Disables the selected mappings from migration
  • Enable Item Gathering: Enables Item Gathering for the selected mapping
  • Disable Item Gathering: Disables Item Gathering for the selected mapping
  • Run Item Gathering: Queues a command to collect item-level metadata for the selected mappings. You can choose between a Full Collection or a Delta Collection. A full collection will be triggered if a source does not support delta collection (e.g., PST).
  • View Details: Shows the number of items collected, and when the collection status was last updated. It’s useful to view this on archives which contain a large number of items in order to see the progress of the collection process.
  • Run Shortcut Collection: Queues a command to collect shortcut metadata for the selected mappings

Stage 2 (Switch User)

This page shows an overview of all containers that have been switched to target and shows the status in the Stage 2 workflow.

Note: Additional columns are available if the ‘Advanced view’ is enabled (under the ‘Views’ tab) though this makes the user interface slower if there are a large number of mappings.

The following basic information about each container is visible in the grid view:

    • Source User Name
    • Group
    • Source Container Type
    • Link (for the source)
    • Target User Name
    • Group
    • Target Container Type
    • Link (for the target)
  • Command: The current command being executed by Archive Shuttle

  • Status: The status of the command
  • Error: Information relating to errors
    • Date/Time
  • Next Command: The next command to be executed by Archive Shuttle
  • Finished

Actions to be performed on the Workflow Status page

These actions can be performed on this page:

  • Refresh: Refreshes the tables data
  • Columns: Allows the selection of additional data columns, which might then be used to select groups of containers
  • Reset: Resets the grid to the default columns, and removes any filters
  • Load: Loads a saved filter/page-layout
  • Save: Saves the current filter/page-layout
  • Add to Group: One or more containers can be added to an existing or new group. This group membership can then be used for filtering and batching of users for migration.
  • Reset Workflow Status: Can be used to retry the current command
  • Skip: Skip the current command in the workflow
  • Pause: Pause the workflow
  • Resume: Continue from a paused workflow
  • Change Policy: Allows a new policy to be chosen and the workflow restarted with that policy
  • Export to PDF, XLS, XLSX, CSV: Allow data to be exported to several formats

This page shows an overview of activity taking place on the links involved in the migration project.

The following basic information about the link progress is logged in the upper grid:

    • Link Name
    • Archive Count
    • Total Mappings
    • Progress Users
    • Progress Stage 1 (Synced)
  • Progress Stage 2 (Switch)

The following basic information about the item-level progress is logged in the lower grid:

    • Link Name
    • Archive Items
    • Mapped Items
    • Collected Items
    • Collected Items Percentage
    • Routed Items
    • Exported Items
    • Exported Items Percentage
    • Imported Items
  • Imported Items Percentage

Performance

The Performance Dashboard provides information about the migrations being performed on an hour by hour basis. The screen allows an administrator to select information relating to:

    • The last day
    • The last week
    • The last month
  • A custom time frame

Information about the migration is then displayed in a tabular format as follows:

Item Description
Hour Date/Time
Exported Count Number of items exported
Exported Size Size of all data exported
Imported Count Number of items imported
Imported Size Size of all data imported

It is also possible to view the information on a per-link basis, and each set of data is also displayed as a graph at the right hand side of the page.

Optionally, you can configure the page’s widgets to display compressed sizes.

Failed Items

This page shows an overview of any per-item failures that Archive Shuttle has encountered during the migration. When the screen is first entered, a particular link should be chosen from the drop-down list. This filters the Failed Items screen to show only items relating to that link.

The following basic information about each failure is logged in the grid view:

  • Container Mapping ID: The Archive Shuttle ID for the container mapping – reference only
  • Item Routing ID: The Archive Shuttle ID for the item which has failed – reference only
  • Is Exported Failed?: An indicator field to show whether it is an export failure
  • Is Import Failed?: An indicator field to show whether it is an import failure
  • Is Failed Permanently?: An indicator field to show whether the error is considered permanent (e.g., file not found on exporting an item)
  • Item Routing Error Count: The current error count for the item
  • Error Text: The error which was reported to Archive Shuttle from the underlying module, or from Exchange/Enterprise Vault
  • Last Date Time UTC: The last date/time where this was reported by Archive Shuttle
  • In FTP: An indicator to show whether the item was uploaded to the Quadrotech FTP Server.
  • Download Link: A hyperlink to the failed item.

The Failed Items page also allows an administrator to submit failed items for reprocessing. This is achieved by selecting one or more items and selecting the button on the Actions Bar. It is also possible to reprocess all failed items. When reprocessing is selected, this will have the effect of resubmitting the item to the appropriate module. For example, if the issue is that the item could not be exported, then that item will be resubmitted to the export module. Likewise, if the issue is that the item could not be ingested into Office 365, then the item will be resubmitted to the Office 365 module.

Note: Reprocessing failed items will reset the Item Routing Error Count for those items.

Note: As of Archive Shuttle 9.5, FTP settings (URL, username, password) are no longer hardcoded for the ExportFailedItemsToFTP command and use the settings from System Configuration.

This action of reprocessing failed items can take a few minutes before the appropriate modules receive the command, perform the action, and feed back the results.

An additional option on the Failed Items screen is to ‘Move Selected Items’. This is specifically for situations where items have failed to be imported into the target, and never can be. By selecting one or more of these items, they can be moved from the Staging Area to a different area specified on the target link (via an option on the Links page).

Note: Permanently failed items might include items which are too large to be ingested into the target (e.g., in an EV to Exchange migration where message size limits are in place)

Index Fragmentation

Archive Shuttle relies on a performant SQL Server in order to achieve high throughput of items during migration. If SQL Server is struggling to handle the load, performance will drop. The Index Fragmentation page shows some key metrics about the SQL indices associated with each link database. These can become fragmented over time during the Enterprise Vault collection and export/import processes. The information on the Index Fragmentation page is updated hourly.

The screen highlights tables/indices that are of particular concern as follows:

Row Highlighting Reason
No highlighting Fragmentation is not significant, or the number of pages in the index is not over 1000.
Yellow Fragmentation is between 10% and 30% and the page count in the index is more than 1000.
Red Fragmentation is over 30% and the page count in the index is more than 1000. Accessing data associated with this table/index will not be performant.

The recommended actions to take are as follows:

Row Highlighting Action
No highlighting No action required
Yellow Perform an “Index Reorganization”
Red Perform an “Index Rebuild”

More information specific to SQL Server and management of the Archive Shuttle databases can be found in the SQL Best Practices Guide.
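If you prefer to inspect fragmentation directly in SQL Server, the same thresholds can be checked with a query against the standard sys.dm_db_index_physical_stats dynamic management view. This is a generic SQL Server maintenance sketch, not an Archive Shuttle command; run it in the context of each link database.

```sql
-- List indices in the current database that exceed the thresholds above
-- (fragmentation over 10% and more than 1000 pages), with the
-- recommended action from the table above.
SELECT OBJECT_NAME(ips.object_id)        AS TableName,
       i.name                            AS IndexName,
       ips.avg_fragmentation_in_percent  AS FragmentationPct,
       ips.page_count                    AS PageCount,
       CASE
           WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
           ELSE 'REORGANIZE'
       END                               AS RecommendedAction
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND ips.page_count > 1000;

-- Corresponding maintenance statements (substitute real index/table names):
-- ALTER INDEX IndexName ON TableName REORGANIZE;  -- yellow rows
-- ALTER INDEX IndexName ON TableName REBUILD;     -- red rows
```

Index rebuilds take locks by default, so schedule them outside active migration windows; consult the SQL Best Practices Guide for product-specific recommendations.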

License

The License page in the Admin Interface gives an overview of the usage to-date of Archive Shuttle against the purchased amount of licenses and license volume. It also shows key information about when the license will expire.

If required, a license can be extended and then added to the system. See the section titled Adding a new License to Archive Shuttle in this guide.

Data on this page is refreshed hourly and shows exported data quantities at that time. User licenses are used when a user enters Stage 2.

A new license can be uploaded, if required.

License expiry notifications may be seen from time to time as the migration progresses. The notifications will appear as follows:

Notification Description
Date When there are 7 days left before the license expires
Volume When there is 3% left of the volume license
Users When there is 5% of the user license count remaining

Support Bundles

To assist with troubleshooting issues encountered during a migration, a page called “Support Bundles” has been added to the Admin Interface. Multiple bundles can be configured if required; each bundle consists of:

    • Archive Shuttle and Enterprise Vault version information
  • Archive Shuttle log files

The bundles can be generated, downloaded or sent directly to the Quadrotech FTP server for review by Support Engineers.

Offline Media

The Offline Media configuration helps support the process whereby the source and target staging areas need to be different.

This is currently supported for:

    • EV to EV, O365, or Exchange
    • EAS to EV, O365, or Exchange
  • Dell Archive Manager to EV, O365, or Exchange

Some examples of when this might occur are:

    • Source and target environments are in different non-trusted forests
  • Slow WAN links mean that the bulk of the data will be migrated via ‘disk shipping’

There are a number of steps to follow in order to successfully use the Offline Media feature in Archive Shuttle, and these are described below:

Create Mapping Set

On the Offline Media page, a new source and target mapping set can be created by clicking ‘Create’ in the Actions bar. This is the first step; in the mapping, the source link is specified along with the target link, and a name can be given to the set.

Add Export and Import Locations

With the Mapping Set selected, multiple export and import paths can then be defined. Export paths can be configured to be in one of three states:

Status Description
Open Data can be written to this path
Closed Data will not be written to this path
Ready Data path is on standby

If an open export path becomes full, then the next path in the Ready state will be used.

The source and target links need to have ‘Offline Mode’ enabled. This is performed by selecting the links on the Links page in the Admin Interface and selecting ‘Enable’ in the ‘Offline Mode’ section of the Actions bar.

When this has been enabled, the Staging Area Path for those links will no longer be used. Instead, the paths used are those defined on the ‘Offline Media’ page.

Creating Container Mappings

The container mappings are configured in the same way as ordinary mappings.

Export Data

Data will then begin to be collected and exported from the source containers and placed on the staging areas defined in the Offline Media set.

Ship the data

Periodically during the export, or at the end of the export, the exported data needs to be ‘shipped’ to the target import path. This can be simply a case of copying the data manually, or of shipping the disk and performing the necessary actions to make it available under the given name in the target environment.

If the data is copied, ensure that the very top level (below the share name itself) is copied to the target.

Enable Scanning of the Import Location

When the data is ready to be imported the Import Path has to be enabled for scanning on the “Offline Media” page.

When scanning is enabled, the location is scanned by the import module each hour for new data. Once the scanning has occurred, the data is made available for the import module to process.

Import Data

Once the target Import Path has been scanned and data recorded in the Item Database as being available for import, the import module will process the data in the container mapping as normal.

Mapping Archiving (Cargo Bay)

When using the Mapping Archiving feature, item data associated with finished mappings is moved from active tables to a backup database (the Cargo Bay). This reduces the number of records in active tables, thus improving performance when you’re interacting with active item data.

Enable Mapping Archiving (Cargo Bay)

Prerequisites

Before enabling Cargo Bay, you need to install SQL Server Integration Services 14.0 on the core machine. Here’s how to do it:

  1. Download the SQL Server 2017 installer (Enterprise or Developer).
  2. Run the setup.exe file.
  3. Click Installation > New SQL Server stand-alone installation (shown below).
  4. In the ‘Feature Selection’ area, select ‘Integration Services’ and click ‘Next’.
  5. In the ‘Server Configuration’ section, select ‘Manual’ from the ‘Startup Type’ field.
  6. Finish the installation and then restart IIS.

Once the prerequisites have been installed, execute the following SQL statement on the Archive Shuttle Directory database. If the migration environment is using Archive Shuttle Cloud, contact the Customer Experience Team.

UPDATE SettingDefinition
SET DefaultValueNumeric = 1
WHERE Name = 'EnableCargoBay'
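To confirm the change took effect, you can read the value back. This sketch assumes only the SettingDefinition table and column names already used in the statement above.

```sql
-- Verify the Cargo Bay setting on the Archive Shuttle Directory database.
-- DefaultValueNumeric should be 1 after the update above.
SELECT Name, DefaultValueNumeric
FROM SettingDefinition
WHERE Name = 'EnableCargoBay';
```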

Enabling for a small number of mappings

It is possible to enable archiving on just a selection of the mappings. This can be done by selecting the mappings and clicking the ‘Enable Mapping Archiving’ button. This is available on the following screens:

  • Existing mapping screen (on the ‘Main’ tab)
  • Stage 1 screen (on the ‘Actions’ tab)
  • Stage 2 screen (on the ‘Mapping Archiving’ tab)

Mapping Archiving statuses and notifications

The following statuses might be seen against a mapping:

Status Description
NotStarted Archiving not started or mapping not eligible for archiving
Running Archiving in progress
Failed Archiving failed
Finished Archiving finished successfully
ArchivingForced Archiving of mapping is forced
ArchivingDisabled Archiving of mapping is disabled
ArchivingDisabledAutomatically Archiving of mapping is automatically disabled due to failed item detection
Finished with Warnings Archiving finished successfully, but with warnings because empty tables were not deleted
Cancelled Archiving cancelled

The following notifications might be seen when performing enablement or disablement of mapping archiving:

Action Notification
Enable Mapping(s) successfully enabled for archiving
Enable Mapping(s) cannot be enabled for archiving
Enable Mapping(s) already enabled for archiving
Enable Not all mappings were enabled for archiving. See logs for more details.
Disable Mapping(s) successfully disabled for archiving
Disable Mapping(s) cannot be disabled for archiving
Disable Mapping(s) already disabled for archiving
Disable Not all mappings were disabled for archiving. See logs for more details.

System Configuration

The System Configuration page in the Archive Shuttle Admin Interface contains many settings that can be used to customize the migration and environment. Some of these settings will affect the workflow of the migration while others will affect throughput and performance.

Changing these configuration options should be done after careful analysis of the current workflow, environment and throughput.

Changes to system settings take effect after a few minutes, when the appropriate module checks in for new work items.

Schedule Settings

On the System Configuration page different settings/configurations can be applied according to a schedule.

When you first enter the System Configuration, the default schedule is shown. Configuring specific schedules consists of the following three steps:

1. Create and name a new schedule.

2. Make the required configuration changes for this schedule (shown above or below the default schedule).

3. Select the times of day, or days of week that this schedule applies.

Date/Time is based on the date and time of the Core.

Example: More export parallelism on Saturday / Sunday

In this example we will configure higher Enterprise Vault export parallelism for Saturday and Sunday. The steps to follow are:

1. Navigate to the System Configuration page

2. Review the current configuration. This is the default configuration, which applies when no other schedule is set.

3. Via the button on the toolbar, add a new schedule. Give the schedule a name (e.g., Weekend) and choose a colour from the available list.

4. Change the EV Export module parallelism settings to the new, higher values that you wish to use. When changes are made the entry will become bold, and a checkbox will appear underneath ‘Custom’.

5. Click on ‘Save’, in the toolbar, to commit those changes.

6. From the toolbar click on ‘Time Assignment’ in order to review the current scheduled configurations.

7. Click on the new schedule on the right hand side, and then highlight all of Saturday and all of Sunday.

8. Click on ‘Save’ on the schedule view in order to commit those changes.

General Settings

This section contains general settings relating to Archive Shuttle:

Item Description
General
Do not re-export on File Not Found Normally, if an import module finds that a required file is not present on the staging area, it will report this to the Archive Shuttle Core, and the Core will instruct the export module to re-export the item. If this behavior is not required, select this checkbox
Turn off the post-processing of secondary archives This prevents the post-processing modules from performing actions against an Exchange (or O365) Personal Archive.
Stage 2 Default Priority The normal priority which will be applied to users when they enter stage 2
Autoenable Stage 2 With this option enabled when a container gets to 100% exported and 100% imported, it will automatically move into Stage 2.

Without this option set, containers will remain in Stage 1 until Stage 2 is enabled for them manually.

Delete Shortcut Directly after Import With this option selected, items successfully ingested into Exchange or Office 365 will have the shortcut removed from the target container straight away. Without this option selected, the shortcuts will only be deleted once the Stage 2 command is executed.

This option can greatly enhance the user experience when migrating data back to existing containers (e.g., the primary mailbox).

Disable shortcuts collection Stops modules from collecting information about shortcuts in a mailbox
Archive Migration Strategy When a batch of users has been selected for migration and given a specific priority, within that, a migration strategy can be used to also govern the order that the migrations will take place. The options here are:

    • Smallest archive first (based on archive size)
    • Largest archive first (based on archive size)
    • Oldest first (based on the archived date/time of items)
  • Youngest first (based on the archived date/time of items)

The default strategy equates to a random selection within the batch that was chosen

Do not transmit non-essential Active Directory Data Stops the Active Directory Collector module from returning metadata about users which is not required for the product to function.
Use HOTS format Instructs the Core and Modules to expect and use HOTS format for items on the staging area.
Clear staging area files older than [hours] When the staging area cleanup process runs it will only remove files older than this number of hours.
Logging
Log communication between modules and Core Logs the sent commands and received results in a separate XML file on the Archive Shuttle Core. Note: Should only be used on advisement from Quadrotech Support.
Log SQL Queries Logs the internal Archive Shuttle SQL queries and timing information into the Archive Shuttle Core log file. Note: Should only be used on advisement from Quadrotech Support.
Default Core Log Level Allows the logging level for the Archive Shuttle Core to be changed.
Send errors to Quadrotech This option sends application exceptions to a service which Quadrotech can use to help track the cause of unexpected application exceptions.
Delete ItemRoutingErrorLogs after successful Export/Import This option allows the system to remove any reference to issues/errors during export or import once an individual has been successfully exported or imported.
Clear Module Performance Logs By default module performance logs will be kept for 30 days, but this can be changed in this setting.
Item Database
Default size [MiB] Shows the size at which new item databases will be created.
Default log file size [MiB] Shows the default size of new item database log files.
Item database update retry timeout [hours] Shows the number of hours that will elapse before upgrades of the item databases will be retried.
User Interface
Global Timezone By default the system operates in UTC
My Timezone Override Allows a user specific timezone to be specified
Item size scale Can be used to control whether item sizes in the user interface are displayed as bytes, MiB, GiB, TiB, or at an automatic scale.
Options ‘All’ on grids will be disabled when this threshold is exceeded If a value is specified here, it will stop Archive Shuttle from offering the option to display ‘All’ values on a grid when there are a large number of items to display.
Delete mapping with all related data Normally a container mapping cannot be deleted if data has been ingested into the target container, or Stage 2 has been enabled. If this option is selected then the mapping and related Archive Shuttle SQL data will be deleted.

The Administrator of the target environment will need to remove the target container.

The Administrator of the Archive Shuttle environment will need to remove the data from the Staging Area

Lock active mapping deletion Prevents deletion of mappings if they are actively migrating.
Folder
Folder name for folder-less items Specify a name to use for items which do not have a folder.
Treat all messages as folderless If enabled, this will ignore the folder which was obtained during the export of an item, and just ingest all items into the folder specified as the ‘folderless items’ folder.
Split folder parallelism Maximum number of containers to process in parallel for folder splitting.
Split threshold for large folders (except journal / folderless) If specified, it indicates the maximum number of items in each folder before a new folder is created.
Split threshold folderless items If specified it indicates the maximum number of items in the folderless folder before a new folder is created.
Journal split base folder name Specifies the first part of the folder name which is used to store data migrated from a journal archive.
Journal split threshold If specified it indicates the maximum number of items in each folder before a new folder is created.
Calendar Folder Names A list of folder names which are translated to Calendar, so that the folder will get the correct icon when it is created.
Task Folder Names A list of folder names which are translated to Task, so that the folder will get the correct icon when it is created.
Contact Folder Names A list of folder names which are translated to Contact, so that the folder will get the correct icon when it is created.
Chain of Custody
Re-export on Chain of Custody error Instructs the import module to report a Chain of Custody error back to the Core, and for the Core to then queue work for the export module to re-export the item.
Enable Chain of Custody for Extraction Causes the export modules to generate a hash of items (to be stored in the Item Databases) when items are exported
Enable Chain of Custody for Ingestion Causes the import modules to validate the Chain of Custody information when items are ingested. If an item fails this check it will not be ingested.
Reliability
Allow import of messages with empty recipient e-mail If a message is found without recipient information it will normally fail the ingestion process. This can be overridden with this option
Allow import of corrupt messages Items which fail the MSG file specification may not be ingested into the target container. Some of these failures can be overridden with this option checked.
Enable item level watermarks Archive Shuttle will stamp items that get imported with a Watermark, specifying details about the Item like Source Environment, Source Id, Archive Shuttle Version, ItemRoutingId, LinkId
Fix Missing Timestamps If the message delivery time is missing on a message, the messaging library will generate it from other properties already present on the message.
Missing timestamp date fallback The date to be used in case of a missing timestamp

SMTP Settings

This section contains settings relating to the SMTP configuration (for sending out reports):

Item Description
SMTP Server The FQDN of an SMTP server which can be used to send emails
SMTP Port The port to be used when connecting to the SMTP server
Use SSL A flag to indicate that SSL should be used to connect
Use Authentication A flag to indicate whether the SMTP server supports anonymous connections, or requires authentication
SMTP Username The username to be used for authentication
SMTP Password The password to be used for authentication
SMTP From Email Address The ‘from’ address to be used in the outbound mail
SMTP From Name The display name to be used in the outbound mail


FTP Settings

This section contains settings relating to the FTP configuration:

Item Description
FTP URL The URL to use to connect to the FTP server
FTP username The username to be used when connecting to the FTP server
FTP password The password to be used when connecting to the FTP server

Journal Explosion Settings

This section contains settings relating to Journal Explosion

Item Description
Import root folder Specifies the folder where imported items will be placed.
Delete items from staging area after initial import If enabled, items are removed from the staging area after the initial import.
Delete items without Journal Explosion import routings Items without Journal Explosion import routings will be deleted from the staging area. It is recommended to enable this if all user mappings are already enabled for import.

EV Collector Module

This section contains settings relating to the EV Collector Module:

Item Description
EV Collector Enable Shortcut Collection A flag to indicate whether or not the EV Collector module will collect information relating to shortcuts.

It is only necessary to set this if the migration is going to use the ‘has shortcut’ filter

Collector Parallelism Defines how many archives will be collected in parallel
Collect Extended Metadata Reads legal hold and path of items using the Enterprise Vault API for filtering purposes.

It is recommended that this is only enabled when filtering by folder

Use the BillingOwner on Archives which would be otherwise ownerless This uses the owner set as “Billing Owner” in Enterprise Vault as the owner of the archive, instead of trying to use the entries in the Exchange Mailbox Entry table. This is useful where the Active Directory account relating to an archive has been deleted, for example for an employee who has left the company. Such an archive would normally show as Ownerless in Archive Shuttle, but with this switch enabled the Enterprise Vault Collector module will attempt to use the “Billing Owner”.
Use EWS for EV Shortcut Collection A flag to indicate whether the module should use MAPI or EWS to collect shortcut information.
Ignore LegacyExchangeDN when matching EV users With this option enabled the ownership detection for EV archives is modified so that the LegacyExchangeDN is not used.
Limit stored results Stops the module trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place
Collect EV Shortcuts for hybrid mailboxes Enables the collection of shortcuts from mailboxes in a hybrid configuration where the mailbox is on-premise but the personal archive is in the cloud.

EV Export Module

This section contains settings relating to the EV Export Module:

Item Description
General
EV Export Archive Parallelism Defines how many archives will be exported in parallel. Total thread count = EV Export Archive Parallelism multiplied by EV Export Item Parallelism.
EV Export Item Parallelism Defines how many items should be exported in parallel per archive. Total thread count = EV Export Archive Parallelism multiplied by EV Export Item Parallelism.
EV Export Storage If using Azure for the Staging Area storage (or you are migrating a journal archive from an older version of EV, where Archive Shuttle performs the envelope reconstruction), ensure this option is set to Memory; otherwise File System or Memory with File System Fallback can be selected. In either of the described situations, if ‘File’ is chosen, an error will be reported in the export module log file and the export will not proceed.

This setting can be adjusted if there are problems exporting very large items.

Export Provider Priority Specifies the order in which EV export mechanisms will be tried.
Limit stored results Stops the module trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place
Failures handling
Export Messages with invalid Message ID from Journal When enabled, items from a journal that require envelope reconstruction will still be processed (and a P1 message generated) even if the Original Message ID attribute cannot be found in the item that was retrieved from EV (meaning that an EV Index lookup cannot be performed).

Note: This setting may mean that BCC information is not added to the P1 envelope (since it cannot be obtained from EV).

Prevent exporting of items if envelope reconstruction fails This will prevent Archive Shuttle from exporting an item from EV if the envelope reconstruction fails. With this setting disabled, some items may be exported without an appropriate P1 (Envelope) message.
Fail items permanently on specified errors Indicates whether Archive Shuttle should mark certain items as permanently failed, even on the first failure.
Error message(s) to permanently fail items on A list of error messages which will cause items to be marked as permanently failed (if the previous setting is enabled)
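The parallelism settings above multiply rather than add: the archive-level and item-level values together determine the total export thread count. A quick illustration (the values are examples only):

```python
def total_threads(archive_parallelism: int, item_parallelism: int) -> int:
    """Total thread count = archive parallelism multiplied by item parallelism."""
    return archive_parallelism * item_parallelism

# e.g. 4 archives exported in parallel, 10 items per archive in parallel
print(total_threads(4, 10))  # 40
```

The same multiplication applies to the import and post-processing modules described later, so raising both values at once can increase load on the source or target environment much faster than raising either alone.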

EV Import Module

This section contains settings relating to the EV Import Module:

Item Description
Offline Scan Parallelism Indicates the number of threads that should be used when scanning offline media
EV Default Retention Category ID When a non-Enterprise Vault source is used, and the target of the migration is an Enterprise Vault environment, indicate here the retention category ID to apply to the data which is ingested
EV Import Archive Parallelism Defines how many archives will be imported in parallel.

Total thread count = EV Import Archive Parallelism multiplied by EV Import Item Parallelism.

EV Import Item Parallelism Defines how many items should be imported in parallel per archive.

Total thread count = EV Import Archive Parallelism multiplied by EV Import Item Parallelism.

Import Journal Archive Through Exchange Imports a journal archive through Exchange instead of through the Enterprise Vault API. Elements from the staging area will be added to Exchange (for an appropriate Enterprise Vault task to process) rather than directly into Enterprise Vault.
Journal Mailbox Threshold If using the ‘Import Journal Archive Through Exchange’ option, this setting limits the point at which ingestion is paused while the appropriate task processing the mailbox catches up.
Suspend imports while EV is archiving Disables import module while Enterprise Vault is in its archiving schedule.
Ingest Provider Priority Indicate the type of ingest provider to use.
Read file to memory Allow reading of files to system memory before ingestion
Read file to memory threshold (bytes) Items below this size will be read into memory (to speed up ingestion), whereas items above this size won’t be read into memory
Limit stored results Stops the module trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place

EV Provisioning Module

This section contains settings relating to the EV Provisioning Module:

Item Description
Convert orphaned into shared archives When orphaned archives are migrated to another Enterprise Vault environment the target archive normally becomes a normal Exchange mailbox archive. With this option selected, the target archive will be an Enterprise Vault Shared Archive instead.

The Shared Archive maintains the original folder structure; no permissions are added to the archive.

Limit stored results Stops the module trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place

Shortcut Process Module

This section contains settings relating to the Shortcut Processing Module:

Item Description
Shortcut Process Parallelism Defines how many archives will be post processed in parallel.

Total thread count = Shortcut Process Parallelism multiplied by Shortcut Process Item Parallelism.

Shortcut Process Item Parallelism Defines how many items will be post processed in parallel per archive.

Total thread count = Shortcut Process Parallelism multiplied by Shortcut Process Item Parallelism.

Delete shortcuts not related to migrated items When shortcut deletion is in progress, foreign shortcuts will also be deleted.
Delete messages with EV properties but without proper shortcut message class If this option is enabled and items are found to have EV attributes (such as Archive ID) they will be deleted by the shortcut process module.
Use EWS for EV Processing Enables the post processing to use EWS rather than MAPI for processing
Config to Use By default the Exchange configuration will be used, but if the post processing should operate on Office 365 mailboxes select that from the drop down list.
Shortcut deletion maximum batch count Shortcuts will be grouped into batches of 100 items. This number indicates the number of those batches to be processed in parallel.
Limit stored results Stops the module trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place
Collect shortcuts for both primary and secondary mailboxes If enabled, shortcut information is collected from both the primary mailbox and the secondary mailbox (personal archive).

If the source environment is Source One, EAS or Dell Archive Manager, then the shortcut processing module should be configured to use EWS.

Exchange Import Module

This section contains settings relating to the Exchange Import Module:

Item Description
General
Use per server EWS Url for Exchange import If this is enabled then the import module will use the EWS Url configured in Active Directory on the Exchange Server object rather than a general Url for all ingest requests
Import Root Folder When ingesting data into Exchange mailboxes or personal archives it is sometimes required to ingest the archived data into a top-level subfolder (and then the archive folders beneath that). Specify the name of that top-level folder here.
Maximum Batch Size Bytes The maximum size that a batch of items can be, which is then sent in one go to Exchange
Maximum Batch Size Items Maximum number of items in a batch
Exchange Timeout Seconds Timeout in seconds until Archive Shuttle aborts the ingest (i.e. upload/processing)
Disable reminders for appointments in the past This will remove the MAPI properties relating to whether a reminder has been sent/fired or not as the item is ingested into the target. If this is not enabled reminders may appear for long overdue archived items.
Mark migrated items as read If this is enabled, all migrated items will be marked as read by the import module.
Limit stored results Stops the module trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place
Threading/Parallelism
Offline Scan Parallelism Number of threads that will be used for scanning offline media
Exchange Mailbox Parallelism Defines how many Exchange mailbox imports will be ingested to in parallel.
Exchange Batch Parallelism Defines how many batches will be ingested per mailbox in parallel
Exchange Item Parallelism Defines how many items will be ingested per mailbox in parallel
Connectivity
Exchange Version Specify the version of Exchange which is in use.
Disable Certificate check Disable the certificate validity when connecting to Exchange
Exchange Connection URL Specify an Autodiscover URL if the default one does not work
Use Service Credentials for Logon Authenticate to Exchange with the credentials which the Exchange Import Module Server is running as.
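The batching settings in the table above interact: a batch is closed once adding another item would exceed either Maximum Batch Size Bytes or Maximum Batch Size Items. That greedy grouping can be sketched as follows; the function and limit values are hypothetical illustrations, not the Exchange Import Module's actual code.

```python
def make_batches(item_sizes, max_bytes, max_items):
    """Group item sizes (in bytes) into batches, closing the current batch
    when adding the next item would exceed either the byte limit or the
    item-count limit."""
    batches, current, current_bytes = [], [], 0
    for size in item_sizes:
        if current and (current_bytes + size > max_bytes or len(current) >= max_items):
            batches.append(current)
            current, current_bytes = [], 0
        current.append(size)
        current_bytes += size
    if current:
        batches.append(current)
    return batches

# Five items against a 100-byte / 3-item limit produce two batches
print(make_batches([60, 30, 30, 10, 10], max_bytes=100, max_items=3))
# [[60, 30], [30, 10, 10]]
```

This is why lowering Maximum Batch Size Bytes for environments with large items, or lowering Maximum Batch Size Items for slow links, trades fewer round-trips for smaller uploads per request.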

Native Import Module

This section contains settings relating to the Native Import Module:

Item Description
Stamps a header on imported messages for ProofPoint to identify message source When enabled, a message header is added to each item as it is added to a PST file. The header is called x-proofpoint-source-id and has the item id/item routing id as the value. For example:

x-proofpoint-source-id: 91016abe-51e3-bdd6-132f-fb6763ecc751/2865103

Native Import File Parallelism Defines how many PST files will be imported to in parallel.
Native Import Item Parallelism Defines how many items will be ingested in parallel per PST file.
Finalize finished PSTs in Stage 1 With this option enabled, finished/full PST files will be moved to the output area whilst the mapping is still in Stage 1. This will only happen for PSTs which are complete, i.e. those that have split at the predefined threshold. For migrations smaller than the threshold, which therefore have just a single PST, this PST will not be moved in Stage 1.
Limit stored results Stops the module trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place
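The ProofPoint header described above is a plain message header, which can be illustrated with Python's standard email library. The IDs below are the example values from the table; the snippet itself is an illustration, not the Native Import Module's code.

```python
from email.message import EmailMessage

# Example values from the documentation: item id / item routing id
item_id = "91016abe-51e3-bdd6-132f-fb6763ecc751"
item_routing_id = 2865103

msg = EmailMessage()
msg["Subject"] = "Archived item"
# The header ProofPoint uses to identify the message source
msg["x-proofpoint-source-id"] = f"{item_id}/{item_routing_id}"

print(msg["x-proofpoint-source-id"])
# 91016abe-51e3-bdd6-132f-fb6763ecc751/2865103
```

Because the value embeds the item routing id, each ingested item carries a traceable reference back to its Archive Shuttle routing record.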

Office 365 Module

This section contains settings relating to the Office 365 Module:

Item Description
General
Number of fastest servers to use Determines how many servers from the list of fastest servers are used. They will be picked randomly by the module.
Import Root Folder When ingesting data into Office 365 mailboxes or personal archives it is sometimes required to ingest the archived data into a top-level subfolder (and then the archive folders beneath that). Specify the name of that top-level folder here.
Ingest Provider Priority Determines which ingestion methods will be used to ingest data into Office 365 and in which order
Office 365 Batch Size Bytes The maximum size that a batch of items can be, which is then sent in one go to Office 365
Office 365 Batch Size Items Maximum number of items in a batch
Office 365 Timeout Seconds Timeout in seconds until Archive Shuttle aborts the ingest (i.e. upload/processing)
Disable reminders for appointments in the past This will remove the MAPI properties relating to whether a reminder has been sent/fired or not as the item is ingested into the target. If this is not enabled reminders may appear for long overdue archived items.
Mark migrated items as read If this is enabled, all migrated items will be marked as read by the import module.
Convert journal messages to O365 journaling format If this option is enabled, information in the P1 envelope gets added to an attribute called GERP and added to the message as it is ingested into Office 365. This makes those items Office 365 journal-format messages.
Limit stored results Stops the module trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place
Virtual Journal
Virtual Journal Item Count Limit The maximum number of items in a virtual journal mapping before a new mapping will be created.
Virtual Journal Size Limit The maximum size of a virtual journal mapping before a new mapping will be created.
Threading/Parallelism
Offline Scan Parallelism Number of threads that will be used for scanning offline media
Office 365 Mailbox Parallelism Defines how many Office 365 mailboxes will be ingested to in parallel.
Office 365 Item Parallelism Defines how many items will be ingested per mailbox in parallel.
Office 365 Batch Parallelism Defines how many batches will be ingested per mailbox in parallel
Connectivity
Use faster server (round-trip) If enabled, from time to time the Office 365 module will get a list of servers responding to Office 365 ingest requests and use only those for ingestion.
Office 365 Exchange Version Specify the Office 365 Exchange version
Disable certificate check Disable the certificate validity check when connecting to Office 365
Use Multiple IP from DNS When Office 365 returns multiple IP address entries for its ingestion service, this setting will allow the ingest module to communicate with all of those IP addresses instead of just one. For this to work, the ‘Disable certificate check’ option must be enabled.
Exchange Connection URL Specify an Autodiscover URL if the default one does not work.
Use modern authentication (OAuth) Select this option to ingest to Exchange Online using modern Azure Active Directory (OAuth) authentication.
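The Virtual Journal limits behave like a rollover: when adding the next container would push the current mapping past the item-count or size limit, a new mapping is started. A hypothetical sketch of that logic (all names and limits are assumptions, not product internals):

```python
def split_virtual_journal(containers, max_items, max_bytes):
    """Assign (name, item_count, size_bytes) tuples to virtual journal
    mappings, starting a new mapping when either limit would be exceeded."""
    mappings, current, items, size = [], [], 0, 0
    for name, count, bytes_ in containers:
        if current and (items + count > max_items or size + bytes_ > max_bytes):
            mappings.append(current)
            current, items, size = [], 0, 0
        current.append(name)
        items += count
        size += bytes_
    if current:
        mappings.append(current)
    return mappings

# Three containers against a 100,000-item limit produce two mappings
containers = [("userA", 60_000, 10), ("userB", 50_000, 10), ("userC", 30_000, 10)]
print(split_virtual_journal(containers, max_items=100_000, max_bytes=1_000))
# [['userA'], ['userB', 'userC']]
```

Setting either limit very low therefore increases the number of mappings created, while setting it very high concentrates more data behind a single mapping.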

PST Export Module

This section contains settings relating to the PST Export Module:

Item Description
General
File Parallelism The number of PST files to export from simultaneously
PST item collection file parallelism The number of PST files to scan simultaneously
Limit stored results Limits the number of items the module tracks locally whose results have not yet been sent to the Core.
Threshold database limit The number of items allowed to be stored locally, and not sent to the Core, before the module stops requesting more work.
Journal Explosion
Process messages without P1 header If enabled, items will still be processed for Journal Explosion even if they are missing a P1 header
Process distribution lists from messages without P1 If enabled, distribution lists from items missing a P1 header will still be processed for Journal Explosion

Metalogix Export Module

This section contains settings relating to the Metalogix Export Module:

Item Description
Metalogix Archive Parallelism The number of archives to process in parallel
Metalogix Item Parallelism The number of items to process in parallel per archive
Threshold Database Limit The number of items in the backlog before limiting takes place
Limit Stored Results Stops the module trying to get more work if there is a backlog of transmissions to be sent to the Core
Allow import of corrupted messages If selected, import of corrupted messages is allowed

EAS Zantaz Module

This section contains settings relating to the EAS Zantaz Module:

Item Description
EAS Archive Parallelism The number of archives to process in parallel
EAS Item Parallelism The number of items to process in parallel per archive
Limit stored results Stops the module trying to get more work, if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place

Sherpa Module

This section contains settings relating to the Sherpa MailAttender Module:

Item Description
Sherpa Archive Parallelism The number of archives to process in parallel
Sherpa Item Parallelism The number of items to process per mailbox in parallel
Limit stored results Stops the module trying to get more work if there is a backlog of transmissions to be sent to the Core.
Threshold database limit The number of items in the backlog before limiting takes place

SourceOne Module

This section contains settings relating to the SourceOne Module:

Item Description
SourceOne Container Parallelism The number of archives to process in parallel
SourceOne Collect Items Container Parallelism The number of items to process per archive in parallel
SourceOne Sync Archives Batch Size Defines how many archives will be synced in one batch.
Ignore Missing Transport Headers Ignores missing transport headers when exporting from a journal.
Email address used in case ‘sender is null’ error Specify an email address to be used if the module reports a ‘sender is null’ error
Limit Stored Results Select this option to limit stored results.
Threshold Database Limit Limit of records in local database, when module stops asking for work items.

Dell Archive Manager Module

This section contains settings relating to the Dell Archive Manager Module:

Item Description
Container Parallelism Defines how many archives will be exported in parallel.
Collect Item Parallelism Defines how many items should be collected in parallel per archive.
Limit Stored Results Select this option to limit stored results.
Threshold Database Limit Limit of records in local database, when module stops asking for work items.
Collect size of archives If enabled the module will also collect the overall size of archives

PowerShell Script Execution Module

This section contains settings relating to the PowerShell Script Execution Module:

Item Description
Container Parallelism Defines how many archives will be exported in parallel.
Collect Item Parallelism Defines how many items should be collected in parallel per archive.

Preview Features

Before becoming generally available, new Archive Shuttle features are introduced as Preview Features. Preview Features must be enabled using the hidden page:

  • FeaturePreview.aspx

To use a preview feature, select it from the list, and then click Enable.

Appendix A: Ports used by Archive Shuttle

The following table shows the network communication ports used by Archive Shuttle. These ports are provided for reference for the situation where a firewall exists between the source and target environments in the archive migration.

Source → Destination: Port(s). Description

• Module Servers → Archive Shuttle Server: TCP 80 (HTTP), TCP 443 (HTTPS). HTTP/S communication from modules to the Archive Shuttle server.
• Export/Import Module Servers → Storage Servers (CIFS shares): TCP 445 (CIFS). Access to CIFS shares.
• Archive Shuttle Server → SQL Server: TCP 1433 (MSSQL). Access to Microsoft SQL Server.
• Archive Shuttle Server → DNS Servers: UDP 53 (DNS). Access to DNS for Windows.
• Archive Shuttle Server → Domain Controllers: TCP 88 (Kerberos), TCP 445 (CIFS). Access to Domain Controllers for Windows.
• Enterprise Vault Module Servers → Enterprise Vault Servers: see your Enterprise Vault documentation for the ports needed for the Enterprise Vault API to talk to Enterprise Vault.
• Exchange Module Servers → Exchange Servers: see your Microsoft Exchange documentation for the ports needed to talk to Microsoft Exchange (MAPI/EWS).
• Office 365 Module Servers → Office 365: TCP 443 (HTTPS). HTTPS communication from the Archive Shuttle Office 365 Module to Office 365.

Appendix B: Adding a new License to Archive Shuttle

From time to time it might be necessary to update the license information for Archive Shuttle. This may be because additional modules have been purchased, or because a license warning has been displayed and additional licenses have then been purchased.

In order to update the license information for Archive Shuttle, the following steps need to be performed:

1. Copy the new license.lic file to the following folder:

Webservice\bin\

2. Execute an IISReset command.
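The two steps above can be scripted. This is a hedged sketch: the installation root shown in the usage comment is an assumption (it depends on where Archive Shuttle was installed), and iisreset must be run with administrative rights.

```python
import shutil
import subprocess
from pathlib import Path

def license_destination(install_root: str) -> Path:
    """Return the Webservice\\bin folder the new license.lic is copied to."""
    return Path(install_root) / "Webservice" / "bin"

def install_license(license_file: str, install_root: str) -> None:
    # 1. Copy the new license.lic file to Webservice\bin\
    shutil.copy(license_file, license_destination(install_root))
    # 2. Execute an IISReset so the web service picks up the new license
    subprocess.run(["iisreset"], check=True)

# Example usage (the install root here is an assumption about your environment):
# install_license(r"C:\temp\license.lic",
#                 r"C:\Program Files\QUADROtech\Archive Shuttle")
```

The IISReset is what forces the Archive Shuttle web service to reload the license, so copying the file alone is not sufficient.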

Updated on November 2, 2019
