AWS Snowball User Guide
Using the Snowball Client
To authenticate the Snowball client's access to a Snowball
1. Obtain your manifest and unlock code.
a. Get the manifest from the AWS Snowball Management Console or the job management API.
Your manifest is encrypted so that only the unlock code can decrypt it. The Snowball client
compares the decrypted manifest against the information that was put in the Snowball when
it was being prepared. This comparison verifies that you have the right Snowball for the data
transfer job you’re about to begin.
b. Get the unlock code, a 29-character code that also appears when you download your manifest.
We recommend that you write it down and keep it in a separate location from the manifest that
you downloaded, to prevent unauthorized access to the Snowball while it’s at your facility.
2. Locate the IP address for the Snowball on the Snowball's E Ink display. When the Snowball is
connected to your network for the first time, it is automatically assigned an IP address through DHCP. If you want
to use a different IP address, you can change it from the E Ink display. For more information, see
Using an AWS Snowball Appliance (p. 45).
3. Execute the snowball start command to authenticate your access to the Snowball, providing the
Snowball's IP address, the path to your manifest file, and your unlock code, as follows:
snowball start -i [IP Address] -m [Path/to/manifest/file] -u [29 character unlock code]
Example
snowball start -i 192.0.2.0 -m /user/tmp/manifest -u 01234-abcde-01234-ABCDE-01234
Schemas for the Snowball Client
The Snowball client uses schemas to define what kind of data is transferred between your on-premises
data center and a Snowball. You declare the schemas whenever you issue a command.
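For example, a copy command declares both schemas at once. The following is a minimal sketch; the local
directory and bucket name are placeholders, not values from an actual job:
snowball cp -r /media/dataset s3://MyBucket/Dataset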
Sources for the Snowball Client Commands
Transferring file data from a local mounted file system requires that you specify the source path, in
the format that works for your OS type. For example, in the command
snowball ls C:\\User\Dan\CatPhotos s3://MyBucket/Photos/Cats, the source schema specifies that
the source data is standard file data.
For importing data directly from a Hadoop Distributed File System (HDFS) to a Snowball, you specify the
Namenode URI as the source schema, which has the hdfs://IP Address:port format. For example:
snowball cp -n hdfs://192.0.2.0:9000/ImportantPhotos/Cats s3://MyBucket/Photos/Cats
Destinations for the Snowball Client
In addition to source schemas, there are also destination schemas. Currently, the only supported
destination schema is s3://. For example, in the command snowball cp -r /Logs/April
s3://MyBucket/Logs, the content in /Logs/April is copied recursively to the MyBucket/Logs location on
the Snowball using the s3:// schema.
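A single file can also be copied to an s3:// destination without the -r option. The following is a sketch,
assuming the client accepts an individual file path as the source; the file name is a placeholder:
snowball cp /Logs/April/access.log s3://MyBucket/Logs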
Importing Data from HDFS
You can import data into Amazon S3 from your on-premises Hadoop Distributed File System (HDFS)
through a Snowball. You perform this import process by using the Snowball client.
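A sketch of such an import, reusing the NameNode URI source and s3:// destination pattern shown earlier
(the IP address, port, paths, and bucket name are placeholders):
snowball cp -n hdfs://192.0.2.0:9000/ImportantPhotos/Cats s3://MyBucket/Photos/Cats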