Object Storage

Model training and inference often require storing large amounts of unstructured data. Intel® Tiber™ Developer Cloud offers object storage, with a choice of CLI clients, to manage storage buckets.

Prerequisites

Important

This guide assumes you’ve already created an instance and that you can access it via SSH. Private bucket storage is accessible only from an Intel® Tiber™ Developer Cloud compute platform, whether virtual or bare metal machines, or Intel® Kubernetes Services.
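
For example, a typical SSH login looks like the following. The key path and address here are placeholders; your console shows the exact command for your instance.

    # Placeholder key path and IP address; copy the real command from your instance details page
    ssh -i ~/.ssh/my-instance-key ubuntu@192.0.2.10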

Create bucket

  1. In the left side menu, select Storage > Object Storage.

  2. Click Create bucket.

  3. Enter a bucket name in the Name field. Optional: Enter a Description.

  4. Click Enable versioning if desired.

    Note

    Versioning retains prior versions of your data for a limited time, providing recovery in case of application failure or unintentional deletion.
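
The console toggle above is the documented way to enable versioning. As a sketch, once the CLI client is configured later in this guide, you can check a bucket’s versioning state with a standard S3 API call, assuming the service’s S3-compatible endpoint supports it; the bucket name my-bucket is a placeholder.

    # Assumes S3-compatible support; prints "Status": "Enabled" when versioning is on
    aws s3api get-bucket-versioning --bucket my-bucket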

Create bucket user

Follow these steps to enable storage bucket access for a principal.

General rules

  • You must create a bucket principal to enable access.

  • You may create other principals for the same bucket.

  • Principals are mapped to buckets through a policy.

Note

“Principals” means users who can consume data in object storage buckets.

  1. Click Manage principals and permissions.

  2. Click Create principal.

    Options: You can apply permissions for all buckets or per bucket. To do so, select menu items under Allowed actions and Allow policies.

  3. Select the Credentials tab, below the new principal. These credentials are used for logging into a bucket.

Later in this workflow, we refer to the Credentials tab for login.

Install CLI Client

For first-time use, install the CLI client software in your instance to access bucket storage.

This guide explains how to install the AWS* S3 CLI, which is one of many options. For detailed commands, see the AWS CLI Command Reference.

Tip

You may use whichever CLI client you prefer.

Optional CLI clients

You can install boto3 by following Boto3 Documentation.

You can install the MinIO Client* software to connect to storage buckets. Visit the MinIO Documentation to learn more.
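
As a brief sketch, connecting with the MinIO Client typically involves registering an alias and listing buckets. The alias name, endpoint, and credentials below are placeholders.

    # Placeholder alias and credentials; substitute your endpoint, AccessKey, and SecretKey
    mc alias set tiber https://your_endpoint_url your_access_key your_secret_key
    mc ls tiber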

Install AWS S3 CLI

  1. From the console, open an instance where you want to access object storage. If you have not created an instance, complete steps in Get Started and return.

  2. Follow the onscreen instructions and SSH into your instance.

  3. Install the AWS* S3 CLI client. Going forward, we call this the CLI client.

    sudo apt-get update && sudo apt-get install awscli -y
    
  4. Verify the CLI client was properly installed.

    aws --version
    
  5. Confirm that your output is similar to the following.

    aws-cli/1.22.34 Python/3.10.6 Linux/5.15.0-1020-kvm botocore/1.23.34
    

    Note

    Your version may differ. These steps confirm proper installation only.

Installation complete.

Access Storage Bucket

During each login session, you must enter credentials to access a storage bucket.

Note

Unless you log out of the current principal account, your session is preserved. If you log in as a different principal, you must use new credentials.

Credentials Login

Follow these steps to generate a password.

  1. In the left side menu, select Storage > Object Storage.

  2. In the Object Storage tab, click Manage Principals and Permissions.

  3. View the principals table and click on the principal name.

  4. Click the Credentials tab.

  5. Click Generate password.

    1. Optional: You may skip to Option 2 - Create .env file to use an alternate credential method. Otherwise, continue.

    Note

    The AccessKey and SecretKey are similar to a Username and Password.

  6. Enter command to log in. You’ll be prompted for credentials.

    aws configure
    
  7. Copy and paste the values from the console into the CLI client prompts:

    • AWS Access Key ID [None] - AccessKey

    • AWS Secret Access Key [None] - SecretKey

  8. For Default region name, press <Enter> to accept the default.

  9. For Default output format, type “json”, and press <Enter>.
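
A completed session looks similar to the following. The prompts come from the CLI client; the values you paste are your own.

    aws configure
    AWS Access Key ID [None]: <paste your AccessKey>
    AWS Secret Access Key [None]: <paste your SecretKey>
    Default region name [None]:
    Default output format [None]: json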

Configure Environment Variables

Choose one option below. Configuring environment variables simplifies storage bucket queries.

By using an option below, you don’t need to add the flag --endpoint-url https://private.endpoint.url to each command. See also: Environment variables for the AWS CLI.
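
For example, using the placeholder endpoint above, the two commands below are equivalent; the second form works only after the environment variable is set.

    # Without the environment variable, pass the endpoint flag explicitly
    aws --endpoint-url https://private.endpoint.url s3 ls

    # With AWS_ENDPOINT_URL exported, the flag can be omitted
    aws s3 ls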

Option 1 - Export the URL Endpoint for this session

  1. In the console, navigate to Storage Bucket > Details.

  2. Find the Private Endpoint URL.

    1. Click the copy icon after Private Endpoint URL.

  3. Run the following command, replacing “your_endpoint_url” with the endpoint you copied in the previous step.

    export AWS_ENDPOINT_URL='your_endpoint_url'
    
  4. Example: List your buckets. Sample output appears after these steps.

    aws s3 ls
    
  5. Skip to S3 Bucket Commands.
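
The listing in step 4 returns each bucket the principal can access. Output resembles the following; the timestamp and bucket name are illustrative.

    2024-01-15 10:30:00 my-bucket-name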

Option 2 - Create .env file

Create an .env file with settings to simplify queries.

Caution

If you program the .env file to persist (e.g., via a bash script), note that with each new login you must update the AWS_SECRET_ACCESS_KEY.

  1. Navigate to the root of your instance.

  2. Create an .env file.

    touch .env
    
  3. Add the following lines to your .env file.

    sudo vi .env
    
    export AWS_ACCESS_KEY_ID='your_access_key_id'
    export AWS_SECRET_ACCESS_KEY='your_secret_access_key'
    export AWS_ENDPOINT_URL='your_endpoint_url'
    

    Caution

    Enclose all credentials within single quotes.

  4. Replace your_access_key_id, your_secret_access_key, and your_endpoint_url using the next steps.

  5. Find the Private Endpoint URL.

    1. Click the copy icon after Private Endpoint URL.

    2. Paste the value for AWS_ENDPOINT_URL in .env.

  6. From Storage Buckets, click Manage principals and permissions.

  7. Click Credentials.

    1. Click Generate password.

    2. Paste the AccessKey and SecretKey values into AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, respectively, in .env.

  8. Load the environment variables from the .env file.

    source .env
    
  9. Query storage bucket data with S3 Bucket Commands.

  10. Example: List the bucket.

    aws s3 ls
    

S3 Bucket Commands

Use the examples in the AWS CLI Command Reference to construct a command. Using these commands assumes you’ve already configured your CLI client with one of the options above.
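
For example, the following common operations work once an option above is configured. The bucket name my-bucket and the file names are placeholders.

    # Upload a local file to the bucket
    aws s3 cp train.csv s3://my-bucket/datasets/train.csv

    # List objects under a prefix
    aws s3 ls s3://my-bucket/datasets/

    # Download an object
    aws s3 cp s3://my-bucket/datasets/train.csv ./train.csv

    # Remove an object
    aws s3 rm s3://my-bucket/datasets/train.csv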

Delete Bucket

  1. In the console, navigate to Storage > Object Storage.

  2. In the Object Storage table, find your bucket.

Important

Recommended: First remove the principals associated with the bucket.

  1. In the Actions column, select Delete.

  2. In the dialog, select Delete again to confirm your request.

    1. Ensure that the bucket is empty before deletion.
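
If the bucket still contains objects, you can empty it from the CLI client first. The bucket name my-bucket is a placeholder.

    # Removes every object in the bucket; this cannot be undone
    aws s3 rm s3://my-bucket --recursive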

Update user policy

Follow these steps to change permissions for principals.

Edit or Delete Bucket User

  1. Visit Storage > Object Storage.

  2. Click Manage principals and permissions.

  3. In the Principals table, find the Actions column at right.

  4. To edit, complete the substeps below. To delete, skip to the next step.

    1. Select Edit to modify permissions.

    2. Select the permissions that apply.

    3. Select Save to apply your changes.

  5. To delete, select Delete.

  6. In the dialog, select Delete again to confirm your request.

Apply Lifecycle Rules

  1. Select Bucket Name > Lifecycle Rules.

  2. Click Create rule.

    Next, the Add Lifecycle Rule workflow appears.

    Note

    Choose only one: Delete Marker or Expiry Days. Selecting Delete Marker disables the Expiry Days and Non current expiry days fields. The Delete Marker is related to versioning.

  3. Enter a name, following the onscreen instructions.

  4. Enter a prefix.

    In our example, we use only the /cache directory.

    • If you don’t enter a prefix, the rule applies to all items in the bucket.

    • If you do enter a prefix, the rule applies to a specific directory.

  5. For Non current expiry days, you may leave the field blank (or enter “0”) if you don’t use versioning. See also: NonCurrentVersionExpiration Docs.

  6. To edit or delete a Lifecycle Rule, return to Bucket name > Lifecycle Rules. Then click Edit or Delete and follow the onscreen instructions.
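
The console workflow above is the documented method. As a sketch only, assuming the service’s S3-compatible API accepts standard lifecycle configuration (an assumption, not confirmed by this guide), an equivalent rule using the /cache prefix from the example might look like this; the bucket name my-bucket is a placeholder.

    # Hypothetical: expire objects under the cache/ prefix after 30 days
    aws s3api put-bucket-lifecycle-configuration \
      --bucket my-bucket \
      --lifecycle-configuration '{
        "Rules": [{
          "ID": "expire-cache",
          "Filter": {"Prefix": "cache/"},
          "Status": "Enabled",
          "Expiration": {"Days": 30}
        }]
      }'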

Network Security Group

To view this feature, navigate to Bucket Name > Details. Network security is enforced using Source IP Filtering, which restricts access to a bucket to specific IP addresses defined by a subnet mask.
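
For example, a filter of 198.51.100.0/24 (subnet mask 255.255.255.0) permits access only from addresses 198.51.100.0 through 198.51.100.255, while 198.51.100.7/32 limits access to that single address.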