• The size of the file is greater than 5 TB – Objects in Amazon S3 must be 5 TB or less in size, so
files that are larger than 5 TB can't be transferred to the Snowball. If you encounter this problem,
separate the file into parts smaller than 5 TB (see the example after this list), compress the file so
that it's within the 5 TB limit, or otherwise reduce the size of the file, and try again.
• The file is a symbolic link, and only contains a reference to another file or directory – Symbolic links
(or junctions) can't be transferred into Amazon S3.
• There are permissions issues for access to the file – For example, a user might be trying to read a file
on the Snowball client when that user doesn't have read permissions for that file. Permissions issues
result in precheck failures.
• Object key length too large – If an object's key length is larger than 933 bytes, it fails the precheck.
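For example, if the oversized file is a large archive, you might break it into parts on a Linux or Mac
workstation before copying. The part size and file names in the following sketch are only placeholders:

# Split one large file into 1 TB parts (GNU coreutils split); names are placeholders.
split -b 1T large-archive.tar large-archive.tar.part-
# After the parts are imported to Amazon S3, they can be reassembled, for example with:
# cat large-archive.tar.part-* > large-archive.tar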
For a list of files that can't be transferred, check the terminal before data copying starts. You can also
find this list in the <temp directory>/snowball-<random-character-string>/failed-files
file, which is saved to your Snowball client folder on the workstation. For Windows, this temp directory
would be located in C:/Users/<username>/AppData/Local/Temp. For Linux and Mac, the temp
directory would be located in /tmp.
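On a Linux or Mac workstation, you could list the failed files with a command like the following. The
wildcard stands in for the random character string in the actual directory name:

# Show the files that failed the precheck (Linux/Mac); the wildcard is a placeholder
# for the random character string in the directory name.
cat /tmp/snowball-*/failed-files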
If you discover errors when you run the snowball validate command, identify the files that failed
the transfer, resolve the issues that the error messages report, and then transfer those files again. If
your validation command fails with the same error message, then you can use the -f option with the
snowball cp command to force the copy operation and overwrite the invalid files.
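As a sketch, that sequence might look like the following. The local path and bucket name are
placeholders, and the options you need depend on how you started the original transfer:

# Re-run validation; files that failed the transfer are reported here.
snowball validate
# Force the copy for a file that failed, overwriting the invalid copy on the Snowball.
# The source path and bucket name are placeholders.
snowball cp -f /data/photos/photo-0001.jpg s3://your-bucket-name/photos/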
HDFS Troubleshooting
When setting up a data transfer from your HDFS (version 2.x) cluster to a Snowball device, you may
encounter Kerberos authentication errors. This can happen if you're not using one of the verified
encryption types known to work with Snowball:
• des3-cbc-sha1-kd
• aes-128-cts-hmac-sha1-96
• aes-256-cts-hmac-sha1-96
• rc4-hmac (arcfour-hmac)
If you've encountered a Kerberos authentication issue, you can attempt to resolve it with one of the
following workarounds:
• Temporarily disable Kerberos – If you disable Kerberos on your HDFS cluster, you should also
disconnect any non-essential active connections to the cluster while transferring data. Once your
transfer is complete, reactivate your Kerberos authentication.
• Use a Snowball Edge with the file interface – The Snowball Edge provides an NFS mount point
through its file interface feature. You could mount the Snowball Edge and copy the files from your
HDFS cluster (a sketch follows this list). For more information on using the file interface, see Using
the File Interface for the AWS Snowball Edge in the AWS Snowball Edge Developer Guide.
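A minimal sketch of that workaround follows. The file interface IP address, export path, mount point,
and HDFS path are all placeholders; the actual NFS endpoint for your device is described in the AWS
Snowball Edge Developer Guide:

# Mount the Snowball Edge file interface as an NFS share (endpoint, export path,
# and mount point are placeholders).
sudo mkdir -p /mnt/snowball
sudo mount -t nfs 198.51.100.10:/your-bucket-name /mnt/snowball
# Copy data out of HDFS onto the mounted share; the HDFS path is a placeholder.
hadoop fs -copyToLocal /user/hadoop/data /mnt/snowball/data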
Troubleshooting Adapter Problems
If you're communicating with the Snowball through the Amazon S3 Adapter for Snowball using the AWS
CLI, you might encounter the following error: Unable to locate credentials. You can configure
credentials by running "aws configure". This error means that you need to configure the AWS
credentials that the CLI uses to run commands. For more information, see Configuring the AWS
Command Line Interface in the AWS Command Line Interface User Guide.
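As an illustration, configuring credentials and then pointing the CLI at the adapter might look like the
following. The access keys shown are AWS documentation example values, and the adapter address and
port are placeholders for your own setup:

# Configure the credentials the AWS CLI uses (example values only).
aws configure
#   AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
#   AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
#   Default region name [None]: us-west-2
#   Default output format [None]: json
# Then run S3 commands against the adapter's endpoint (placeholder address and port).
aws s3 ls --endpoint-url http://192.0.2.0:8080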